Authorizations
Bearer token authentication. Format: Bearer YOUR_LLMLAYER_API_KEY
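A minimal sketch of supplying the token from Python. The endpoint URL below is a placeholder for illustration, not taken from this reference:

```python
import os
import requests

# Placeholder endpoint; replace with the actual LLMLayer search URL.
LLMLAYER_URL = "https://api.llmlayer.dev/api/v1/search"

headers = {
    # Format: Bearer YOUR_LLMLAYER_API_KEY
    "Authorization": f"Bearer {os.environ['LLMLAYER_API_KEY']}",
    "Content-Type": "application/json",
}
```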
Body
The search query or question to answer
LLM model to use (e.g., openai/gpt-4o-mini, anthropic/claude-sonnet-4, groq/llama-3.3-70b-versatile)
"openai/gpt-4o-mini"
Country code for localized search results
"us"
Your own API key for the model provider (optional)
Custom system prompt to override default behavior
Language for the response (auto-detected from the query)
Example: "auto"
Format of the response
Options: markdown, html, json
Type of web search to perform
Options: general, news
JSON schema as string for structured responses (required when answer_type=json)
Include inline citations [1] in the response
Return source documents used for answer generation
Return relevant images from search ($0.001 additional cost)
Filter search results by recency
Options: anytime, hour, day, week, month, year
Maximum tokens in the LLM response
Range: x >= 1
Controls randomness (0 = deterministic, 2 = very creative)
Range: 0 <= x <= 2
Domains to include or exclude (prefix with '-' to exclude)
Example: ["wikipedia.org", "-reddit.com"]
Number of search queries to generate ($0.004 per query)
Range: 1 <= x <= 5
Amount of search context to extract
Options: low, medium, high
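A sketch of a request body built from the parameters above. Field names marked "(assumed)" are inferred from the descriptions and may differ in the actual API; only answer_type, return_sources, return_images, and provider_key appear verbatim in this reference, and the endpoint URL is a placeholder:

```python
import os
import requests

# Placeholder endpoint; only the parameter semantics come from the reference above.
LLMLAYER_URL = "https://api.llmlayer.dev/api/v1/search"
headers = {"Authorization": f"Bearer {os.environ['LLMLAYER_API_KEY']}"}

payload = {
    "query": "Latest developments in solid-state batteries",  # (assumed) the search query or question
    "model": "openai/gpt-4o-mini",                            # (assumed) LLM model to use
    "location": "us",                                         # (assumed) country code for localization
    "answer_type": "markdown",                                # markdown | html | json
    "search_type": "news",                                    # (assumed) general | news
    "recency": "week",                                        # (assumed) anytime | hour | day | week | month | year
    "citations": True,                                        # (assumed) inline [1] citations
    "return_sources": True,                                   # return source documents
    "return_images": False,                                   # $0.001 additional cost when enabled
    "max_tokens": 1024,                                       # (assumed) must be >= 1
    "temperature": 0.7,                                       # (assumed) 0 to 2
    "domains": ["wikipedia.org", "-reddit.com"],              # (assumed) '-' prefix excludes a domain
    "max_queries": 2,                                         # (assumed) 1 to 5, $0.004 per query
    "search_context": "medium",                               # (assumed) low | medium | high
}

# For structured output, switch to JSON and supply a schema string (json_schema name assumed):
# payload["answer_type"] = "json"
# payload["json_schema"] = '{"type": "object", "properties": {"summary": {"type": "string"}}}'

response = requests.post(LLMLAYER_URL, headers=headers, json=payload)
response.raise_for_status()
```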
Response
Successful response
The AI-generated answer based on web search results
Source documents (when return_sources=true)
Relevant images (when return_images=true)
Processing time in seconds
"2.34"
Total input tokens processed
Total output tokens generated
Cost in USD for model usage (null if using provider_key)
Cost in USD for LLMLayer search infrastructure
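Continuing from the request sketch above, a parsing example. The response field names below are assumptions mapped onto the fields described in this section and may differ from the actual payload:

```python
data = response.json()  # `response` comes from the POST in the request sketch

# Field names are assumptions based on the descriptions above.
print(data["answer"])                    # AI-generated answer text
for source in data.get("sources", []):   # present when return_sources=true
    print(source)
for image in data.get("images", []):     # present when return_images=true
    print(image)
print(data["response_time"])             # processing time in seconds, e.g. "2.34"
print(data["input_tokens"], data["output_tokens"])
print(data["model_cost"])                # None when a provider_key is supplied
print(data["search_cost"])               # LLMLayer search infrastructure cost in USD
```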