Bifrost Integration Guide¶
Learn how to integrate AccuKnox Prompt Firewall with Bifrost using a custom Go plugin to monitor and filter prompts and responses via the AccuKnox API.
Step 1: Prerequisites¶
We expect you to have the bifrost-http binary in-place. You can download it from the Bifrost Releases.
This integration guide uses the bifrost-http binary to test the configuration. You can use this config with any Bifrost deployment method.
Step 2: Download AccuKnox Plugin¶
Download the latest AccuKnox plugin for Bifrost:
Using wget¶
wget https://github.com/accuknox/bifrost-accuknox-integration/releases/latest/download/accuknox-plugin.so
Using curl¶
curl -LO https://github.com/accuknox/bifrost-accuknox-integration/releases/latest/download/accuknox-plugin.so
Source Code
The plugin source code is available at github.com/accuknox/bifrost-accuknox-integration for reference.
Step 3: Configure Bifrost¶
Create or update your Bifrost configuration file with the following settings:
{
"log_level": "debug",
"server": {
"port": 8080,
"host": "0.0.0.0"
},
"plugins": [
{
"enabled": true,
"name": "accuknox-logger",
"path": "./accuknox-plugin.so",
"config": {
"enabled": true,
"api_key": "<YOUR_ACCUKNOX_JWT_TOKEN>",
"user_info": "<username@accuknox.com>"
}
}
],
"providers": {
"openai": {
"keys": [
{
"value": "<YOUR_OPENAI_API_KEY>",
"models": ["gpt-3.5-turbo"],
"weight": 1.0
}
]
}
}
}
Step 4: Configuration Details¶
Plugin Configuration¶
enabled: Set totrueto activate the pluginname: Plugin identifier (must match the name returned byGetName())path: Path to the compiled.sofileapi_key: Your AccuKnox Prompt Firewall JWT token (obtained from AccuKnox dashboard)user_info: Your email or username for tracking
Provider Configuration¶
- Add your LLM provider credentials (OpenAI, Anthropic, etc.)
- Each key can be restricted to specific models
- Use
weightfor load balancing across multiple keys
Step 5: Run Bifrost Server¶
Copy the Plugin¶
cp accuknox-plugin.so /path/to/bifrost/
cd /path/to/bifrost/
Start the Server¶
./bifrost-http -app-dir . -log-level debug -log-style pretty
Step 6: Test the Integration¶
Send a test request using curl:
curl -X POST http://localhost:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "openai/gpt-3.5-turbo",
"messages": [{"role": "user", "content": "Say hello"}],
"max_tokens": 50
}'