API Reference
Adversec v1 — API-first adversarial testing for AI agents.
Adversec provides two core capabilities via REST API:
- Test Generation — Describe your agent, receive a tailored suite of adversarial test cases mapped to the OWASP LLM Top 10.
- Test Execution — Point a test suite at any live agent endpoint. Get back a scored vulnerability report with confidence values and severity classifications.
https://api.adversec.io
Authentication
All endpoints require an API key passed as a Bearer token in the Authorization header:
Authorization: Bearer advc_sk_YOUR_KEYAPI keys are generated in the dashboard by visiting the Get API Key page. Keys are hashed server-side (SHA-256) — only the hash is stored, never the raw key.
Quick Start
Generate your first adversarial test suite in three steps:
# Step 1: Generate custom adversarial tests for your agent
curl https://api.adversec.io/v1/tests/generate \
--header "Authorization: Bearer advc_sk_YOUR_KEY" \
--header "Content-Type: application/json" \
--data '{
"agent_name": "CustomerSupportBot",
"description": "Handles refund requests and account queries for an e-commerce platform",
"domain": "customer-support",
"num_tests": 50,
"intensity": "standard"
}'
# Returns a test_id and array of test cases# Step 2: Point the test suite at your agent
curl https://api.adversec.io/v1/tests/{test_id}/run \
--header "Authorization: Bearer advc_sk_YOUR_KEY" \
--header "Content-Type: application/json" \
--data '{
"target": {
"url": "https://your-agent.example.com/chat",
"method": "POST",
"input_field": "prompt"
},
"concurrency": 5,
"timeout": 30
}'
# Returns a scored vulnerability reportGenerate Tests
Generate a tailored adversarial test suite for any AI agent. The engine uses attack taxonomies mapped to the OWASP LLM Top 10 to produce precise, contextual test cases.
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| agent_name | string | required | Name of the agent under test |
| description | string | required | What the agent does — its purpose and context |
| domain | string | optional | e.g. legal, medical, coding, customer-support |
| expected_input_format | string | optional | What format the agent expects as input |
| expected_output_format | string | optional | What format the agent should return |
| focus_areas | string[] | optional | Attack categories to emphasize (empty = balanced spread) |
| num_tests | int | optional | Number of test cases to generate (default: 50, 5-500) |
| intensity | string | optional | quick | standard | deep (default: standard) |
| include_multi_turn | bool | optional | Include Crescendo-style multi-turn escalation tests |
Response
| Field | Type | Description |
|---|---|---|
| test_id | string | UUID for this test suite (used for /run) |
| agent_name | string | Echo of the agent name |
| tests_generated | int | Number of test cases generated |
| coverage | object | Tests per attack category |
| tests | array | Full array of AdversarialTestCase objects |
| estimated_cost | float | Approximate cost in $ to run via /v1/tests/run |
intensity: "deep" + include_multi_turn: true for maximum coverage on production-bound agents. Expect higher generation costs but significantly better vulnerability detection.
Run Test Suite
Execute a previously generated test suite against a live agent endpoint. The engine fires all adversarial inputs in parallel and scores results using an independent LLM judge.
Path Parameters
| Parameter | Type | Description |
|---|---|---|
| test_id | string | The test suite ID from /generate |
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| target.url | string | required | Full URL of the agent endpoint |
| target.method | string | optional | GET or POST (default: POST) |
| target.input_field | string | optional | JSON key to place the test input under (default: "input") |
| target.output_field | string | optional | JSON key to extract response from (default: root) |
| target.auth_type | string | optional | bearer | api_key | none |
| target.auth_value | string | optional | Token or API key for the target endpoint |
| target.headers | object | optional | Additional HTTP headers |
| concurrency | int | optional | Parallel requests (default: 5, 1-20) |
| timeout | int | optional | Per-request timeout in seconds (default: 30) |
| custom_assertions | string[] | optional | Extra natural-language scoring rules |
Response
| Field | Type | Description |
|---|---|---|
| run_id | string | UUID for this test run |
| total_tests | int | Total tests executed |
| passed | int | Tests where the agent resisted the attack |
| failed | int | Tests where the attack succeeded (vulnerability found) |
| pass_rate | float | 0.0 - 1.0 |
| attack_success_rate | float | 0.0 - 1.0 — how many attacks got through |
| summary | object | Failed tests by severity: critical, high, medium, low |
| results_by_category | object | Pass/fail per attack category |
| worst_results | array | Top 5 failures by confidence (the most dangerous vulnerabilities) |
| time_taken_seconds | float | Total execution time |
Get Test Suite Status
Check the status of a test suite and view metadata about generated tests and associated runs.
Response
| Field | Type | Description |
|---|---|---|
| test_id | string | Suite ID |
| agent_name | string | Agent name |
| status | string | generating | ready | failed | running | complete |
| tests_generated | int | Number of tests |
| coverage | object | Tests per category |
| runs | string[] | Associated run IDs |
| created_at | string | ISO 8601 timestamp |
Health Check
No authentication required. Returns service status and uptime.
Response:
{
"status": "ok",
"version": "0.1.0",
"uptime": 3600.0
}AdversarialTestCase
Each test case in the generated suite includes:
| Field | Type | Description |
|---|---|---|
| id | int | Sequential test number |
| category | string | Attack category (e.g., prompt_injection) |
| attack_type | string | Specific technique (e.g., indirect_content) |
| input | string | The adversarial input to send to the target |
| description | string | Why this test matters |
| expected_failure | string | What happens if the agent is vulnerable |
| severity | string | low | medium | high | critical |
| owasp_category | string | OWASP LLM Top 10 mapping |
| multi_turn_steps | string[] | Multi-turn conversation (if applicable) |
Error Codes
| Code | Description |
|---|---|
| 401 | Invalid or inactive API key |
| 404 | Test suite not found |
| 422 | Invalid request body — validation error |
| 429 | Monthly quota exceeded |
| 500 | Internal server error |
Rate Limits
| Tier | Monthly Quota | Concurrent Requests |
|---|---|---|
| Free | 50 | 1 |
| Pro | 5,000 | 5 |
| Team | 50,000 | 10 |
| Enterprise | Unlimited | 20 |
/generate endpoint runs synchronously. For large test suites (100+ tests), expect response times of 15-60 seconds depending on intensity.
SDKs & Libraries
Official SDKs are coming. The API is REST-standard — any HTTP client works.
# Python example with requests
import requests
API_BASE = "https://api.adversec.io/v1"
API_KEY = "advc_sk_YOUR_KEY"
# Generate tests
response = requests.post(
f"{API_BASE}/tests/generate",
headers={"Authorization": f"Bearer {API_KEY}"},
json={
"agent_name": "MySupportAgent",
"description": "Handles customer inquiries",
"num_tests": 50,
"intensity": "standard",
},
)
suite = response.json()
print(suite["tests_generated"], "tests generated")
# Run the suite
result = requests.post(
f"{API_BASE}/tests/{suite['test_id']}/run",
headers={"Authorization": f"Bearer {API_KEY}"},
json={
"target": {"url": "https://agent.example.com/chat"},
"concurrency": 5,
},
)
report = result.json()
print(f"Attack success rate: {report['attack_success_rate']:.1%}")