Frequently Asked Questions (FAQ)

General

Q: Is the ug-core SDK only for React?
A: No. The SDK is written in vanilla TypeScript and is completely framework-agnostic. It can be integrated into any JavaScript project, whether it's built with React, Vue, Svelte, Angular, or no framework at all. The ChatAvatar.tsx component is just one example of how to integrate it into a React application.
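For illustration, a framework-free integration might look like the sketch below. Only ConversationManager, ConversationConfig, and the hooks named in this FAQ come from the SDK itself; the import path, constructor call, and start() method are assumptions, so check the API reference for the actual entry points.

// Sketch only — the import path, constructor, and start() call are
// hypothetical; consult the ug-core API reference for the real entry points.
import { ConversationManager, ConversationConfig } from 'ug-core';

const config: ConversationConfig = {
  // ... capability flags (see the text-only example below)
  hooks: {
    // Hooks referenced elsewhere in this FAQ
    onError: (error) => console.error(error.type, error.message),
    onAvatarAnimationChanged: (event) => {
      // Drive your rendering layer of choice — see the Customization section
    },
  },
};

const manager = new ConversationManager(config); // hypothetical constructor
manager.start();                                 // hypothetical lifecycle method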
Q: Can I use this SDK for a text-only chat experience?
A: Yes. The SDK is fully configurable to handle different input and output capabilities. To create a text-only experience, you would configure the ConversationManager like this:
const convConfig: ConversationConfig = {
  // ... other config
  inputCapabilities: {
    audio: false,     // Disable audio input
    text: true,       // Enable text input
  },
  capabilities: {
    audio: false,     // Disable audio output
    subtitles: true,  // Use subtitles for text display
    viseme: false,    // Not being used at the moment
    avatar: false,
  },
};
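Note that with audio disabled on both input and output, the SDK should have no reason to request microphone access, so users shouldn't see a permission prompt at all (this follows from the capability flags above; verify the exact behavior against the SDK docs for your version).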

Customization

Q: How do I change the avatar?
A: The AvatarManager emits animation events (e.g., { name: 'body_idle', layer: 0, loop: true }), but it does not render the avatar itself. Your UI code is responsible for listening to these events and controlling your chosen animation system (like Spine, Rive, or Three.js). To change the avatar, you would swap out your animation component and assets and ensure it responds to the events from the onAvatarAnimationChanged hook.
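As a concrete illustration, a Spine-based handler could map the event fields straight onto Spine's AnimationState.setAnimation. The event shape comes from the SDK; the animationState instance and the AvatarAnimationEvent type name are assumptions here:

// Sketch: assumes an existing Spine AnimationState instance (animationState).
// The { name, layer, loop } event shape is from the SDK; the type name
// AvatarAnimationEvent is hypothetical.
hooks: {
  onAvatarAnimationChanged: (event: AvatarAnimationEvent) => {
    // Treat the SDK's layer as a Spine track and play the named animation
    animationState.setAnimation(event.layer, event.name, event.loop);
  },
}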
Q: Can I change the sensitivity of the Voice Activity Detection (VAD)?
A: Currently, the VAD (which uses the Silero VAD model) ships with pre-configured parameters chosen to balance responsiveness and accuracy. We plan to expose these options in a future release so they can be fine-tuned.

Technical

Q: How does the "barge-in" (interrupt) feature work without the assistant's own audio echoing back into the microphone?
A: This is handled by Acoustic Echo Cancellation (AEC). The SDK requests that the browser enable its built-in AEC when accessing the microphone. On most modern devices, especially mobile phones, this is a hardware-level feature that is very effective at preventing the assistant's audio from being picked up by the microphone. If hardware AEC isn't available, a software-based version is used.
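For reference, the request the SDK makes boils down to standard getUserMedia audio constraints; the sketch below shows the general shape (the exact constraint set ug-core passes may differ):

// Standard Web API: the browser applies hardware AEC where available
// and falls back to its software implementation otherwise.
const stream = await navigator.mediaDevices.getUserMedia({
  audio: {
    echoCancellation: true, // keep the assistant's output from re-entering the mic
    noiseSuppression: true, // commonly enabled alongside AEC
  },
});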
Q: How do I handle errors like the user denying microphone permissions?
A: The ConversationManager will emit an onError event through the configuration hooks. The error object provided will contain a type and a message. For a microphone permission error, the type would likely be mic_denied. You can use a switch statement or if/else block in your onError hook to handle different types of errors gracefully in your UI.
// In your component
hooks: {
  onError: (error: ConversationError) => {
    if (error.type === 'mic_denied') {
      // Show a message to the user explaining how to enable their mic
    } else if (error.type === 'network_timeout') {
      // Show a reconnecting or network error message
    } else {
      // Show a generic error message
      toast.error(error.message);
    }
  },
}
Q: Where can I find the server setup settings?
A: Here