Integrate AI AssistView with LiteLLM
20 Jan 2026 · 21 minutes to read
The AI AssistView component can also be integrated with LiteLLM, an open-source proxy that provides a unified, OpenAI-compatible API for multiple LLM providers such as OpenAI and Azure OpenAI.
In this setup:
- AI AssistView serves as the user interface for entering prompts.
- Prompts are sent to the LiteLLM proxy, which forwards them to the configured LLM provider.
- The LLM provider processes the prompt and returns a response through LiteLLM.
- This enables natural language understanding and context-aware responses without changing the AssistView integration logic, as LiteLLM uses the same OpenAI-style API.
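Because LiteLLM mirrors the standard OpenAI chat-completions API, the payload the component sends is an ordinary OpenAI-style request. As a sketch (the alias `openai/gpt-4o-mini` and the parameter values are the ones used in the example later in this article; adjust them for your setup):

```typescript
// Shape of the OpenAI-style request body that LiteLLM's
// /v1/chat/completions endpoint accepts. The value used as `model`
// must match a model_name alias in the proxy's config.yaml.
interface ChatRequest {
  model: string;
  messages: { role: 'user' | 'system' | 'assistant'; content: string }[];
  temperature?: number;
  max_tokens?: number;
  stream?: boolean;
}

// Builds the request body for a single user prompt.
function buildChatRequest(model: string, prompt: string): ChatRequest {
  return {
    model, // e.g. 'openai/gpt-4o-mini'
    messages: [{ role: 'user', content: prompt }],
    temperature: 0.7,
    max_tokens: 300,
    stream: false,
  };
}
```

Swapping providers then only changes the alias in `config.yaml`; this request shape stays the same.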
Prerequisites
Before starting, ensure you have the following:
- Node.js: version 16 or higher, with npm installed.
- OpenAI account: access to OpenAI services and a generated API key.
- Python: required to run the LiteLLM proxy.
- Syncfusion AI AssistView: install the @syncfusion/ej2-react-interactive-chat package.
- Marked library (optional): for parsing Markdown responses.

npm install marked --save

Configure AI AssistView with LiteLLM
To integrate LiteLLM with the Syncfusion AI AssistView component, update the src/App.js file in your React application. The component will send user prompts to the LiteLLM proxy, which forwards them to the configured LLM provider (e.g., OpenAI or Azure OpenAI) and returns natural language responses.
In the following example:
- The promptRequest event sends the user prompt to the LiteLLM proxy at /v1/chat/completions.
- The proxy uses the model alias defined in config.yaml (e.g., openai/gpt-4o-mini) and routes the request to the actual LLM provider.
- The response is streamed into the AI AssistView component as plain text; install and apply the marked library if you want Markdown responses rendered instead.
import { AIAssistViewComponent } from '@syncfusion/ej2-react-interactive-chat';
import * as React from 'react';
import * as ReactDOM from "react-dom";
// 'marked' removed—no Markdown parsing, plain text only
const liteLLMHost = 'http://localhost:4000'; // LiteLLM proxy host
const liteLLMApiKey = ''; // LiteLLM proxy auth token (master_key if configured, else empty string)
let stopStreaming = false;
function App() {
const assistInstance = React.useRef(null);
const suggestions = [
'How do I prioritize my tasks?',
'How can I improve my time management skills?'
];
const bannerTemplate = '<div class="banner-content"><div class="e-icons e-assistview-icon"></div><h3>How can I help you today?</h3></div>';
const toolbarItemClicked = (args) => {
if (args.item.iconCss === 'e-icons e-refresh') {
assistInstance.current.prompts = [];
assistInstance.current.promptSuggestions = suggestions;
stopStreaming = true; // Stop streaming on refresh
}
};
const assistViewToolbarSettings = {
items: [{ iconCss: 'e-icons e-refresh', align: 'Right' }],
itemClicked: toolbarItemClicked
};
const streamResponse = async (response) => {
let lastResponse = '';
const responseUpdateRate = 10;
let i = 0;
const responseLength = response.length;
while (i < responseLength && !stopStreaming) {
lastResponse += response[i];
i++;
if (i % responseUpdateRate === 0 || i === responseLength) {
// Plain text—no marked.parse
assistInstance.current.addPromptResponse(lastResponse, i === responseLength);
assistInstance.current.scrollToBottom();
}
await new Promise(resolve => setTimeout(resolve, 15)); // Small delay between characters
}
assistInstance.current.promptSuggestions = suggestions;
};
const onPromptRequest = (args) => {
const url = liteLLMHost.replace(/\/$/, '') + '/v1/chat/completions';
const headersObj = {
'Content-Type': 'application/json'
};
if (liteLLMApiKey) {
headersObj['Authorization'] = `Bearer ${liteLLMApiKey}`;
}
fetch(url, {
method: 'POST',
headers: headersObj,
body: JSON.stringify({
model: 'openai/gpt-4o-mini', // must match model_name in config.yaml
messages: [{ role: 'user', content: args.prompt }],
temperature: 0.7,
max_tokens: 300,
stream: false,
}),
})
.then((res) => {
if (!res.ok) throw new Error(`HTTP ${res.status}`);
return res.json();
})
.then((reply) => {
let responseText = 'No response received.';
if (reply.choices && reply.choices[0] && reply.choices[0].message && reply.choices[0].message.content) {
responseText = reply.choices[0].message.content.trim();
}
stopStreaming = false;
streamResponse(responseText);
})
.catch((error) => {
console.error(error);
assistInstance.current.addPromptResponse(
'⚠️ Something went wrong while connecting to the AI service. Please check your LiteLLM proxy, model name, or try again later.',
true
);
stopStreaming = true;
});
};
const handleStopResponse = () => {
stopStreaming = true;
};
return (
<AIAssistViewComponent
id="aiAssistView"
ref={assistInstance}
promptRequest={onPromptRequest}
promptSuggestions={suggestions}
bannerTemplate={bannerTemplate}
toolbarSettings={assistViewToolbarSettings}
stopRespondingClick={handleStopResponse}
/>
);
}
ReactDOM.render(<App />, document.getElementById('container'));

The equivalent TypeScript version:

import { AIAssistViewComponent, ToolbarItemClickArgs } from '@syncfusion/ej2-react-interactive-chat';
import * as React from 'react';
import * as ReactDOM from 'react-dom';
const liteLLMHost: string = 'http://localhost:4000';
const liteLLMApiKey: string = '';
let stopStreaming: boolean = false;
function App() {
const assistInstance = React.useRef<AIAssistViewComponent | null>(null);
const suggestions: string[] = [
'How do I prioritize my tasks?',
'How can I improve my time management skills?'
];
const bannerTemplate: string = '<div class="banner-content"><div class="e-icons e-assistview-icon"></div><h3>How can I help you today?</h3></div>';
const toolbarItemClicked = (args: ToolbarItemClickArgs): void => {
if (args.item.iconCss === 'e-icons e-refresh') {
if (assistInstance.current) {
assistInstance.current.prompts = [];
assistInstance.current.promptSuggestions = suggestions;
stopStreaming = true;
}
}
};
const assistViewToolbarSettings = {
items: [{ iconCss: 'e-icons e-refresh', align: 'Right' }],
itemClicked: toolbarItemClicked
};
const streamResponse = async (response: string): Promise<void> => {
let lastResponse: string = '';
const responseUpdateRate: number = 10;
let i: number = 0;
const responseLength: number = response.length;
while (i < responseLength && !stopStreaming) {
lastResponse += response[i];
i++;
if (i % responseUpdateRate === 0 || i === responseLength) {
if (assistInstance.current) {
assistInstance.current.addPromptResponse(lastResponse, i === responseLength);
assistInstance.current.scrollToBottom();
}
}
await new Promise(resolve => setTimeout(resolve, 15));
}
if (assistInstance.current) {
assistInstance.current.promptSuggestions = suggestions;
}
};
const onPromptRequest = (args: { prompt: string }): void => {
const url: string = liteLLMHost.replace(/\/$/, '') + '/v1/chat/completions';
const headersObj: { [key: string]: string } = {
'Content-Type': 'application/json'
};
if (liteLLMApiKey) {
headersObj['Authorization'] = `Bearer ${liteLLMApiKey}`;
}
fetch(url, {
method: 'POST',
headers: headersObj,
body: JSON.stringify({
model: 'openai/gpt-4o-mini',
messages: [{ role: 'user', content: args.prompt }],
temperature: 0.7,
max_tokens: 300,
stream: false,
}),
})
.then((res: Response) => {
if (!res.ok) throw new Error(`HTTP ${res.status}`);
return res.json();
})
.then((reply: any) => {
let responseText: string;
if (reply.choices && reply.choices[0] && reply.choices[0].message && reply.choices[0].message.content) {
responseText = reply.choices[0].message.content.trim();
} else {
responseText = 'No response received.';
}
stopStreaming = false;
streamResponse(responseText);
})
.catch((error: unknown) => {
console.error(error);
if (assistInstance.current) {
assistInstance.current.addPromptResponse(
'⚠️ Something went wrong while connecting to the AI service. Please check your LiteLLM proxy, model name, or try again later.',
true
);
}
stopStreaming = true;
});
};
const handleStopResponse = (): void => {
stopStreaming = true;
};
return (
<AIAssistViewComponent
id="aiAssistView"
ref={assistInstance}
promptRequest={onPromptRequest}
promptSuggestions={suggestions}
bannerTemplate={bannerTemplate}
toolbarSettings={assistViewToolbarSettings}
stopRespondingClick={handleStopResponse}
/>
);
}
ReactDOM.render(<App />, document.getElementById('container'));

# LiteLLM proxy configuration (YAML)
model_list:
  - model_name: openai/gpt-4o-mini   # Alias your frontend will use
    litellm_params:
      model: gpt-4o-mini             # OpenAI base model name
      api_key: # your OpenAI API key (LiteLLM also accepts os.environ/OPENAI_API_KEY)
router_settings:
  # Optional: master_key for proxy authentication
  # master_key: test_key
  cors:
    - "*"
  cors_allow_origins:
    - "*"

Run and Test
Start the proxy:
Navigate to your project folder and run the following command to start the proxy:
pip install "litellm[proxy]"
litellm --config ".\config.yaml" --port 4000 --host 0.0.0.0

Start the frontend:
In a separate terminal window, navigate to your project folder and start the development server:
npm start

Open your app to interact with the AI AssistView component integrated with LiteLLM.
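Before testing the UI, you can confirm that the proxy has registered your alias by querying its OpenAI-compatible /v1/models endpoint (e.g., fetch `${liteLLMHost}/v1/models` and inspect the JSON). A small helper, assuming the standard OpenAI `{ data: [{ id }] }` response shape that LiteLLM mirrors:

```typescript
// Extracts model ids from an OpenAI-style /v1/models response.
// Treat the { data: [{ id }] } shape as an assumption based on the
// OpenAI API format, not a guaranteed LiteLLM contract.
function listModelIds(reply: { data?: { id: string }[] }): string[] {
  return (reply.data ?? []).map((m) => m.id);
}

// True when the alias your frontend uses is registered on the proxy.
function hasModelAlias(
  reply: { data?: { id: string }[] },
  alias: string
): boolean {
  return listModelIds(reply).includes(alias);
}
```

If the alias from config.yaml (e.g., openai/gpt-4o-mini) is missing from the list, the frontend's requests will fail with a model-not-found error.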
Troubleshooting
- 401 Unauthorized: verify your API_KEY and model deployment name.
- Model not found: ensure the request model matches model_name in config.yaml.
- CORS issues: configure router_settings.cors_allow_origins properly.
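These troubleshooting cases can also be surfaced to users from the fetch error path. A hypothetical helper (the status-to-hint mapping and wording are illustrative, not part of the AI AssistView or LiteLLM APIs); you could call it from the `!res.ok` branch of the promptRequest handler:

```typescript
// Maps common HTTP statuses returned by the LiteLLM proxy to
// actionable hints. Hypothetical helper for friendlier error messages.
function hintForStatus(status: number): string {
  switch (status) {
    case 401:
      return 'Unauthorized: check the Authorization header (proxy master_key) and your provider API key.';
    case 404:
      return 'Model not found: the request model must match a model_name alias in config.yaml.';
    case 0:
      return 'Network/CORS error: verify the proxy is running and CORS allows your origin.';
    default:
      return `HTTP ${status}: see the LiteLLM proxy logs for details.`;
  }
}
```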