The LLMAI plugin provides the core AI integration capabilities for Unreal Engine 5 projects. It handles all low-level communication with the OpenAI, LocalAI, and Grok Realtime APIs for text and audio processing, and provides Blueprint-friendly components for easy integration.
// Add component to any Actor
UPROPERTY(VisibleAnywhere, BlueprintReadOnly, Category = "AI")
class ULLMAIClientComponent* AIClient;
// In constructor
AIClient = CreateDefaultSubobject<ULLMAIClientComponent>(TEXT("AIClient"));
// Connect to OpenAI (cloud)
AIClient->ConnectToAI(
"OpenAI",
"gpt-realtime",
"You are a helpful assistant",
{"text", "audio"}
);
// Or connect to LocalAI (local/offline)
AIClient->ConnectToAI(
"LocalAI",
"your-local-model",
"You are a helpful assistant",
{"text", "audio"}
);
// Or connect to Grok (xAI)
AIClient->ConnectToAI(
"Grok",
"grok-1118",
"You are a helpful assistant",
{"text", "audio"},
0.8f,
"Rex" // Voice parameter - REQUIRED for Grok!
);
// Bind to events
AIClient->OnAITextResponseDelta.AddDynamic(this, &AMyActor::HandleAIResponse);
AIClient->OnAIFunctionCallRequested.AddDynamic(this, &AMyActor::HandleFunctionCall);
If you have downloaded the full project, example demos and further documentation can be found in the parent folders.
Windows (Win64)
Access through Edit > Project Settings > Plugins > LLMAI:
Note: These are global project settings. For per-component instance settings (visible in the Details panel), see Component Instance Properties below.
- Provider: OpenAI (cloud) or LocalAI (local/offline)
- OpenAI provider settings (Project Settings > LLMAI > OpenAI Provider): default model (gpt-realtime), default voice (alloy)
- LocalAI provider settings (Project Settings > LLMAI > LocalAI Provider)
- Grok provider settings (Project Settings > LLMAI > Grok Provider): default model (grok-1118)

Note: LocalAI is available as a separate distribution. See the LocalAI Quick Start Guide for download and setup.

Unlike OpenAI, Grok locks the voice after the first session update. You MUST specify the voice parameter when calling ConnectToAI or ConnectToGrok; otherwise it defaults to "Ara" and cannot be changed without reconnecting.
API Key Discovery: The plugin automatically checks (in order):
1. Environment variables: XAI_API_KEY, LLMAI_GROK_API_KEY, GROK_API_KEY
2. Command-line arguments: -GrokKey=YOUR_KEY or -XAIKey=YOUR_KEY
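For example, to supply a key at launch (a sketch using the flags above; the executable and project paths will vary with your install):

# Supply the xAI/Grok key on the command line
UnrealEditor.exe MyProject.uproject -GrokKey=YOUR_KEY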
When you add a ULLMAIClientComponent to an Actor, the following properties are configurable in the Details panel. These are per-instance settings that can vary between components.

| Property | Default | Range | Description |
|---|---|---|---|
| VoiceThreshold | 0.01 | 0.001 - 1.0 | Minimum audio level to detect speech. Lower = more sensitive. |
| MaxSilenceDuration | 1.5 | 0.5 - 5.0 sec | How long to wait after speech stops before committing audio. |
| bEnableClientsideVAD | true | — | Detect microphone input and interrupt server responses. Essential for interrupting local audio playback. |
| InterruptionThreshold | 0.01 | 0.001 - 0.1 | Audio level needed to trigger voice interruption. |
| bEnableAutoMicGating | false | — | Automatically mute the microphone while the AI is speaking to prevent feedback. |
| OutputAudioThreshold | 0.01 | 0.001 - 0.1 | Output audio level threshold for mic gating when interruption is disabled. |
| OutputAudioDecayRate | 0.95 | 0.8 - 0.999 | How quickly the output audio envelope decays. Higher = slower decay. |
| VoiceOutputHoldTimeSeconds | 2.0 | 0.0 - 3.0 sec | Hold time after AI speech ends before triggering OnVoiceOutputEnd. Covers natural speech pauses. |
| Property | Default | Range | Description |
|---|---|---|---|
| AudioGainMultiplier | 1.0 | 0.1 - 10.0 | Amplify or reduce microphone input volume before sending to the AI. |
| bAutoCreateAudioStream | true | — | Automatically create an audio stream component for microphone capture. |
| bAutoCreateAudioPlayback | true | — | Automatically create an audio component for AI voice playback. |
// In Blueprint: Set via Details panel or use nodes
// In C++: Modify properties before connecting
AIClient->VoiceThreshold = 0.02f; // Less sensitive speech detection
AIClient->MaxSilenceDuration = 2.0f; // Wait longer before committing
AIClient->bEnableAutoMicGating = true; // Prevent feedback
AIClient->AudioGainMultiplier = 1.5f; // Boost quiet microphones
If using a custom ULLMAIAudioStreamComponent, these properties can be configured:
| Property | Default | Description |
|---|---|---|
| AudioSourceMode | Microphone | Microphone: capture mic input to send to the AI. Loopback: capture AI voice output for MetaHuman lip-sync. |
| SampleRate | 24000 | Audio sample rate in Hz. 24000 is recommended for AI voice. |
| NumChannels | 1 | Number of audio channels (mono = 1, stereo = 2). |
| BitsPerSample | 16 | Audio bit depth. 16-bit PCM is standard for AI services. |
| StreamingChunkSize | 4096 | Size of audio chunks for streaming. Smaller = lower latency. |
| OutputFilePath | (empty) | Optional path to save captured audio to a WAV file for debugging. |
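For example, a loopback stream for MetaHuman lip-sync might be configured as below. This is a sketch: the enum type name ELLMAIAudioSourceMode is an assumption — check the plugin headers for the actual type.

// Sketch: a stream component configured for loopback capture (AI voice output).
// ELLMAIAudioSourceMode is an assumed enum name — verify against the plugin headers.
ULLMAIAudioStreamComponent* LipSyncStream =
    CreateDefaultSubobject<ULLMAIAudioStreamComponent>(TEXT("LipSyncStream"));
LipSyncStream->AudioSourceMode = ELLMAIAudioSourceMode::Loopback; // capture AI voice, not the mic
LipSyncStream->SampleRate = 24000;        // recommended rate for AI voice
LipSyncStream->NumChannels = 1;           // mono
LipSyncStream->BitsPerSample = 16;        // 16-bit PCM
LipSyncStream->StreamingChunkSize = 2048; // smaller chunks = lower latency
AIClient->SetAudioStreamComponent(LipSyncStream);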
Create function definition assets in the Content Browser:
A function profile is a set of AI Functions, making it easy to associate a group of functions with a particular purpose or connection.
UFUNCTION()
void HandleAIFunctionCall(const FLLMFunctionCall& FunctionCall)
{
if (FunctionCall.Name == "my_function")
{
// Extract parameters
FString Param = ULLMAIBlueprintLibrary::GetAIFunctionStringParameter(
FunctionCall.ArgumentsJson, "parameter_name", "default_value"
);
// Execute your logic
FString Result = ExecuteMyFunction(Param);
// Return result to AI
AIClient->SendAIFunctionCallResult(FunctionCall.CallId, Result);
}
}
The client component automatically creates the needed audio components and begins and ends streaming as necessary. You only need to set up the audio components manually if you have a specific scenario or settings to account for, such as connecting the audio to a separate actor or dynamically choosing particular inputs and outputs.
// Setup voice capture (automatic microphone detection)
AIClient->SetupVoiceCapture();
// Setup audio playback
UAudioComponent* AudioComp = CreateDefaultSubobject<UAudioComponent>(TEXT("AIAudio"));
AIClient->SetupAudioPlayback(AudioComp);
// Create audio stream component for custom capture
ULLMAIAudioStreamComponent* StreamComp = CreateDefaultSubobject<ULLMAIAudioStreamComponent>(TEXT("AudioStream"));
AIClient->SetAudioStreamComponent(StreamComp);
// Handle custom audio data
StreamComp->OnAudioDataReceived.AddDynamic(this, &AMyActor::HandleAudioData);
AIClient->OnConnected.AddDynamic(this, &AMyActor::OnAIConnected);
AIClient->OnDisconnected.AddDynamic(this, &AMyActor::OnAIDisconnected);
AIClient->OnError.AddDynamic(this, &AMyActor::OnAIError);
AIClient->OnAISessionReady.AddDynamic(this, &AMyActor::OnAIReady);
AIClient->OnAITextResponseDelta.AddDynamic(this, &AMyActor::OnAITextReceived);
AIClient->OnAIResponseComplete.AddDynamic(this, &AMyActor::OnAIResponseFinished);
AIClient->OnVoiceModeActivated.AddDynamic(this, &AMyActor::OnVoiceModeStarted);
AIClient->OnInputAudioTranscriptionDelta.AddDynamic(this, &AMyActor::OnTranscription);
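A handler bound with AddDynamic must be a UFUNCTION with a matching signature. As a sketch (the parameter types shown are assumptions — check the delegate declarations in the plugin headers):

// In the header — dynamic delegate handlers must be UFUNCTIONs.
// The FString parameter is an assumption; match the plugin's delegate signature.
UFUNCTION()
void OnAITextReceived(const FString& TextDelta);

// In the .cpp
void AMyActor::OnAITextReceived(const FString& TextDelta)
{
    UE_LOG(LogTemp, Log, TEXT("AI text delta: %s"), *TextDelta);
}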
The plugin is designed for thread-safe operation. Always check IsInGameThread() before performing UI updates in custom handlers.
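For example, a minimal sketch of bouncing a handler back to the game thread with UE's AsyncTask (UpdateSubtitleWidget is a hypothetical UI helper, and the FString parameter is the same assumption as above):

#include "Async/Async.h"

void AMyActor::HandleAIResponse(const FString& TextDelta)
{
    if (!IsInGameThread())
    {
        // Re-dispatch onto the game thread before touching UI or actor state.
        TWeakObjectPtr<AMyActor> WeakThis(this);
        AsyncTask(ENamedThreads::GameThread, [WeakThis, TextDelta]()
        {
            if (WeakThis.IsValid())
            {
                WeakThis->HandleAIResponse(TextDelta);
            }
        });
        return;
    }
    UpdateSubtitleWidget(TextDelta); // hypothetical UI helper — safe on the game thread
}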
The plugin includes a comprehensive debug & logging system for troubleshooting issues. See Debug-Logging.htm for complete details.
# In Unreal Console (` key):
llmai.debug.EnableAll # Turn on all debugging
llmai.debug.OpenAILevel 2 # Debug API communication
llmai.debug.AudioLevel 2 # Debug voice issues
llmai.debug.FunctionLevel 2 # Debug function calling
Troubleshooting quick reference:
- Voice problems: llmai.debug.AudioLevel 2 to diagnose
- API errors: llmai.debug.OpenAILevel 2 to see API errors (OpenAI), or verify LocalAI is running
- Connection issues: llmai.debug.LogConnections 1 to monitor

Monitor the relevant log categories in the Output Log.
Full Debug & Logging Guide: See Debug-Logging.htm for complete troubleshooting commands and techniques.
For plugin installation, compilation, and initial configuration, see Installation.htm.