POST /api/v1/search
Answer API - Web-enhanced AI responses
curl --request POST \
  --url https://api.llmlayer.dev/api/v1/search \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
  "query": "<string>",
  "model": "openai/gpt-4o-mini",
  "location": "us",
  "provider_key": "<string>",
  "system_prompt": "<string>",
  "response_language": "auto",
  "answer_type": "markdown",
  "search_type": "general",
  "json_schema": "<string>",
  "citations": false,
  "return_sources": false,
  "return_images": false,
  "date_filter": "anytime",
  "max_tokens": 1500,
  "temperature": 0.7,
  "domain_filter": [
    "wikipedia.org",
    "-reddit.com"
  ],
  "max_queries": 1,
  "search_context_size": "medium"
}'
{
  "llm_response": "<string>",
  "sources": [
    {
      "title": "<string>",
      "link": "<string>",
      "snippet": "<string>"
    }
  ],
  "images": [
    {
      "title": "<string>",
      "imageUrl": "<string>",
      "thumbnailUrl": "<string>",
      "source": "<string>",
      "link": "<string>"
    }
  ],
  "response_time": "2.34",
  "input_tokens": 123,
  "output_tokens": 123,
  "model_cost": 123,
  "llmlayer_cost": 123
}

Authorizations

Authorization
string
header
required

Bearer token authentication. Format: Bearer YOUR_LLMLAYER_API_KEY
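As a minimal sketch of the header format, the request below is constructed (but not sent) with Python's standard library; the endpoint URL and field names come from the curl example above, while the query text and key value are placeholders:

```python
import json
import urllib.request

API_KEY = "YOUR_LLMLAYER_API_KEY"  # placeholder; substitute your real key

# Build the authenticated request without sending it, to show the
# required Authorization and Content-Type headers.
req = urllib.request.Request(
    "https://api.llmlayer.dev/api/v1/search",
    data=json.dumps(
        {"query": "What is LLMLayer?", "model": "openai/gpt-4o-mini"}
    ).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
```

Calling `urllib.request.urlopen(req)` would then perform the POST.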

Body

application/json
query
string
required

The search query or question to answer

model
string
required

LLM model to use (e.g., openai/gpt-4o-mini, anthropic/claude-sonnet-4, groq/llama-3.3-70b-versatile)

Example:

"openai/gpt-4o-mini"

location
string
default:us

Country code for localized search results

Example:

"us"

provider_key
string | null

Your own API key for the model provider (optional)

system_prompt
string | null

Custom system prompt to override default behavior

response_language
string
default:auto

Language for the response (auto detects the language from the query)

Example:

"auto"

answer_type
enum<string>
default:markdown

Format of the response

Available options:
markdown,
html,
json
search_type
enum<string>
default:general

Type of web search to perform

Available options:
general,
news
json_schema
string | null

JSON schema as string for structured responses (required when answer_type=json)
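Because json_schema must be a string rather than a nested object, serialize the schema before adding it to the request body. A sketch, with an illustrative schema (the field names inside it are not defined by the API):

```python
import json

# Hypothetical schema for a structured answer; adjust to your use case.
schema = {
    "type": "object",
    "properties": {
        "summary": {"type": "string"},
        "key_points": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["summary"],
}

# json_schema is sent as a string, so serialize the dict first.
payload = {
    "query": "Summarize the latest AI news",
    "model": "openai/gpt-4o-mini",
    "answer_type": "json",
    "json_schema": json.dumps(schema),
}
```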

citations
boolean
default:false

Include inline citations [1] in the response

return_sources
boolean
default:false

Return source documents used for answer generation

return_images
boolean
default:false

Return relevant images from search ($0.001 additional cost)

date_filter
enum<string>
default:anytime

Filter search results by recency

Available options:
anytime,
hour,
day,
week,
month,
year
max_tokens
integer
default:1500

Maximum tokens in the LLM response

Required range: x >= 1
temperature
number
default:0.7

Controls randomness (0=deterministic, 2=very creative)

Required range: 0 <= x <= 2
domain_filter
string[] | null

Include/exclude domains (use '-' prefix to exclude)

Example:
["wikipedia.org", "-reddit.com"]
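The API takes a single list mixing included and excluded domains; the '-' prefix marks an exclusion. A small illustrative helper (not part of the API) that separates the two, just to make the convention concrete:

```python
def split_domain_filter(domains):
    """Separate included domains from '-'-prefixed excluded ones."""
    include = [d for d in domains if not d.startswith("-")]
    exclude = [d[1:] for d in domains if d.startswith("-")]
    return include, exclude

# The example filter above keeps wikipedia.org and drops reddit.com.
inc, exc = split_domain_filter(["wikipedia.org", "-reddit.com"])
```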
max_queries
integer
default:1

Number of search queries to generate ($0.004 per query)

Required range: 1 <= x <= 5
search_context_size
enum<string>
default:medium

Amount of search context to extract

Available options:
low,
medium,
high

Response

Successful response

llm_response
string

The AI-generated answer based on web search results

sources
object[]

Source documents (when return_sources=true)

images
object[]

Relevant images (when return_images=true)

response_time
string

Processing time in seconds

Example:

"2.34"

input_tokens
integer

Total input tokens processed

output_tokens
integer

Total output tokens generated

model_cost
number | null

Cost in USD for model usage (null if using provider_key)

llmlayer_cost
number

Cost in USD for LLMLayer search infrastructure
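Since model_cost is null when you supply your own provider_key, total spend per call should treat it as optional. A minimal sketch, using a hypothetical helper over the response fields documented above:

```python
def total_cost(response: dict) -> float:
    """Sum model and LLMLayer costs; model_cost is None (JSON null)
    when the request used your own provider_key."""
    return (response.get("model_cost") or 0.0) + response["llmlayer_cost"]

# With a provider_key, only the LLMLayer infrastructure cost applies.
example = {"model_cost": None, "llmlayer_cost": 0.004, "response_time": "2.34"}
spend = total_cost(example)

# response_time is returned as a string, so convert before doing math.
elapsed = float(example["response_time"])
```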