Transports

Understand how UnrealPilot communicates between the AI and your Unreal Engine editor.

What are Transports?

Transports are the communication layer between the AI model and Unreal Engine. They handle sending prompts to the AI and receiving tool execution commands back.

For most users: You don't need to worry about transports. The default configuration works out of the box.

Built-in Transports

HTTP/HTTPS Transport

Standard web-based communication with AI providers. Used by OpenAI, Anthropic, and most cloud services.

Use case: Cloud AI providers (default)

Local Transport

Direct communication with locally running AI models (Ollama, LM Studio, etc.).

Use case: Local AI models without internet

Streaming Transport

Real-time streaming of AI responses for immediate feedback.

Use case: Better UX with progressive responses
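Conceptually, a streaming transport delivers the response in chunks through a callback as data arrives, rather than returning one completed blob. The sketch below illustrates that shape in plain C++; the names (`StreamResponse`, `ChunkCallback`) are illustrative assumptions, not UnrealPilot's actual API.

```cpp
#include <functional>
#include <sstream>
#include <string>

// Illustrative sketch only -- not UnrealPilot's real API. A streaming
// transport invokes a callback per chunk so the UI can show partial
// output immediately instead of waiting for the full response.
using ChunkCallback = std::function<void(const std::string&)>;

// Simulates a streamed response by emitting one whitespace-separated
// token at a time; a real transport would emit chunks as bytes arrive
// over the network.
void StreamResponse(const std::string& FullResponse, const ChunkCallback& OnChunk) {
    std::istringstream Tokens(FullResponse);
    std::string Token;
    while (Tokens >> Token) {
        OnChunk(Token);
    }
}
```

The consumer appends each chunk to the visible text as it lands, which is what makes responses feel immediate.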

Configuration

Transports are automatically selected based on your AI provider choice. Advanced users can configure transport settings in:

Edit → Project Settings → UnrealPilot → Advanced → Transport Settings

Common Settings

  • Timeout: Maximum wait time for AI responses (default: 60s)
  • Retry Count: Number of retries on failure (default: 3)
  • Streaming: Enable/disable progressive response streaming
  • Custom Headers: Add authentication or custom headers
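For reference, Unreal Engine settings of this kind normally persist to an INI file under the project's Config/ directory. The fragment below is a hypothetical example of what the settings above might look like on disk; the section name and keys are assumptions, not UnrealPilot's documented schema.

```ini
; Hypothetical example -- section and key names are illustrative only.
[/Script/UnrealPilot.TransportSettings]
TimeoutSeconds=60
RetryCount=3
bEnableStreaming=True
+CustomHeaders="X-Example-Header: value"
```

In practice you would edit these through the Project Settings panel shown above rather than by hand.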

Custom Transports

Advanced users can implement custom transports for specialized use cases:

Proxy/VPN Support

Route requests through corporate proxies or VPNs

Custom Authentication

Implement OAuth, JWT, or other auth mechanisms

Request Logging

Log all AI requests for debugging or compliance

Rate Limiting

Add custom rate limiting or request throttling

Custom transport development requires C++ knowledge.
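Since the transport API itself isn't shown on this page, here is a plain-C++ sketch of the general pattern for the Request Logging use case: a decorator transport that records every outgoing request before delegating to the real transport. All names (`ITransport`, `Send`, `LoggingTransport`) are illustrative assumptions, not UnrealPilot's actual interface.

```cpp
#include <string>
#include <vector>

// Illustrative only -- these names are assumptions, not UnrealPilot's
// actual API. A transport turns a serialized request into a response.
struct ITransport {
    virtual ~ITransport() = default;
    virtual std::string Send(const std::string& Request) = 0;
};

// Custom transport for the "Request Logging" use case: a decorator that
// records each request for debugging or compliance, then delegates to
// the wrapped transport.
class LoggingTransport : public ITransport {
public:
    LoggingTransport(ITransport& InInner, std::vector<std::string>& InLog)
        : Inner(InInner), Log(InLog) {}

    std::string Send(const std::string& Request) override {
        Log.push_back(Request);      // capture before sending
        return Inner.Send(Request);  // delegate to the real transport
    }

private:
    ITransport& Inner;
    std::vector<std::string>& Log;
};

// Stand-in for a real HTTP transport, used only to demonstrate the wrapper.
struct EchoTransport : ITransport {
    std::string Send(const std::string& Request) override {
        return "ok:" + Request;
    }
};
```

The same decorator shape works for the other use cases above (proxying, custom auth, rate limiting): wrap the inner transport and intervene before or after `Send`.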

→ View examples on GitHub

Troubleshooting Transports

Connection timeouts

  • Increase the timeout value in Transport Settings
  • Check your network connection and firewall settings

SSL/TLS errors

  • Ensure your system certificates are up to date
  • May occur with corporate proxies or antivirus software

Local model not responding

  • Verify the local server is running (e.g., Ollama)
  • Check that the endpoint URL is correct (usually http://localhost:11434)

Last updated: December 2025