AI Visibility API
Export AI conversation data from ChatGPT, Claude, and Gemini. Track how LLMs respond to queries in your market, including citations and brand mentions.
Endpoint
GET https://app.serp360.ai/api/v1/visibility
Authentication
Pass your API key in the X-API-Key header:
curl -X GET "https://app.serp360.ai/api/v1/visibility?..." \
  -H "X-API-Key: sk_live_your_key_here"
How it works
The Visibility endpoint uses a two-step pingback flow:

- Queue – Call with a pingback_url. We return immediately with status queued.
- Wait – We POST to your pingback URL when the data is ready (after the next LLM processing run).
- Retrieve – Call again without pingback_url to fetch the data. Credits are charged here.

This flow ensures you're notified when fresh data is available rather than polling repeatedly.
Parameters
| Parameter | Required | Description |
|---|---|---|
| `config_name` | Yes | Your AI visibility configuration name (case insensitive) |
| `platform` | No | Filter by LLM: `chatgpt`, `claude`, or `gemini` |
| `phase` | No | Filter by buyer journey phase: `awareness`, `consideration`, or `decision` |
| `pingback_url` | Step 1 only | HTTPS URL where we'll POST when data is ready |
Step 1: Queue the request
curl -X GET "https://app.serp360.ai/api/v1/visibility?config_name=CRM%20Research&pingback_url=https://your-server.com/webhook" \
  -H "X-API-Key: sk_live_your_key_here"
With optional filters
curl -X GET "https://app.serp360.ai/api/v1/visibility?config_name=CRM%20Research&platform=chatgpt&phase=consideration&pingback_url=https://your-server.com/webhook" \
  -H "X-API-Key: sk_live_your_key_here"
Response
{
  "success": true,
  "status": "queued",
  "message": "We'll notify you when data is ready.",
  "request_id": "vpb_123",
  "credits_used": 0,
  "meta": {
    "request_id": "vpb_123",
    "duration_ms": 38.56
  }
}
Pingback notification
When your data is ready, we POST to your pingback_url:
{
  "event": "visibility_data_ready",
  "config_id": 1,
  "config_name": "CRM Research",
  "platform": "all",
  "phase": "all",
  "date": "2025-11-29",
  "request_id": "vpb_123",
  "message": "Your AI visibility data is ready. Call the API without pingback_url to retrieve."
}
Your endpoint should return a 2xx status. We'll retry failed deliveries up to 3 times.
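Because failed deliveries are retried, the same notification can arrive more than once. Below is a minimal, framework-agnostic sketch of idempotent pingback handling in Python; the function name and the in-memory set are illustrative (a production handler would persist seen request IDs):

```python
# Track request IDs we have already acted on, so retried
# deliveries of the same pingback don't trigger duplicate work.
processed_requests = set()

def handle_pingback(payload: dict) -> bool:
    """Return True if this pingback should trigger data retrieval."""
    if payload.get("event") != "visibility_data_ready":
        return False  # ignore unrelated events
    request_id = payload.get("request_id")
    if request_id in processed_requests:
        return False  # duplicate delivery from a retry
    processed_requests.add(request_id)
    return True
```

When `handle_pingback` returns True, kick off your Step 2 retrieval and still respond with a 2xx status either way, so we stop retrying.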
Step 2: Retrieve the data
Once you receive the pingback, call the same endpoint without pingback_url:

curl -X GET "https://app.serp360.ai/api/v1/visibility?config_name=CRM%20Research" \
  -H "X-API-Key: sk_live_your_key_here"
Response
{
  "success": true,
  "data": {
    "config_name": "CRM Research",
    "config_id": 1,
    "conversations": [
      {
        "query": "Which enterprise CRM platforms offer AI-powered insights?",
        "phase": "awareness",
        "responses": [
          {
            "platform": "chatgpt",
            "text": "For enterprise CRM with AI capabilities, the leading platforms include Salesforce with Einstein AI, Microsoft Dynamics 365 with Copilot, and HubSpot's AI-powered features. Salesforce Einstein provides predictive lead scoring and automated insights...",
            "date": "2025-11-28",
            "citations": [
              {
                "title": "Salesforce Einstein AI",
                "url": "https://salesforce.com/einstein",
                "snippet": "AI-powered CRM insights and predictions..."
              },
              {
                "title": "Microsoft Dynamics 365 Copilot",
                "url": "https://dynamics.microsoft.com/copilot",
                "snippet": "AI assistant for sales and customer service..."
              }
            ]
          },
          {
            "platform": "claude",
            "text": "Leading enterprise CRM platforms with AI capabilities include several major players. Salesforce offers Einstein AI for predictive analytics and automation...",
            "date": "2025-11-28",
            "citations": []
          },
          {
            "platform": "gemini",
            "text": "The top enterprise CRM solutions with AI features are Salesforce (Einstein), Microsoft Dynamics 365 (Copilot), and Oracle CX Cloud...",
            "date": "2025-11-29",
            "citations": []
          }
        ]
      },
      {
        "query": "Best CRM for mid-size B2B companies",
        "phase": "consideration",
        "responses": [
          {
            "platform": "chatgpt",
            "text": "For mid-size B2B companies, HubSpot CRM and Pipedrive are excellent choices...",
            "date": "2025-11-28",
            "citations": [
              {
                "title": "HubSpot CRM",
                "url": "https://hubspot.com/crm",
                "snippet": "Free CRM with powerful features..."
              }
            ]
          }
        ]
      }
    ],
    "total_conversations": 14
  },
  "credits_used": 7,
  "balance": 63993,
  "meta": {
    "request_id": "vpb_124",
    "duration_ms": 234.56
  }
}
Response fields
| Field | Type | Description |
|---|---|---|
| `config_name` | string | Your configuration name |
| `config_id` | integer | Internal configuration ID |
| `conversations` | array | Array of conversation objects |
| `conversations[].query` | string | The query/prompt sent to LLMs |
| `conversations[].phase` | string | Buyer journey phase (awareness, consideration, decision) |
| `conversations[].responses` | array | LLM responses for this query |
| `conversations[].responses[].platform` | string | LLM platform (chatgpt, claude, gemini) |
| `conversations[].responses[].text` | string | The LLM's response text |
| `conversations[].responses[].date` | string | Date of this response (YYYY-MM-DD) |
| `conversations[].responses[].citations` | array | Sources cited by the LLM |
| `conversations[].responses[].citations[].title` | string | Page title |
| `conversations[].responses[].citations[].url` | string | Full URL |
| `conversations[].responses[].citations[].snippet` | string | Snippet text |
| `total_conversations` | integer | Total number of conversations returned |
Data structure notes
- Latest response per platform: We return only the most recent response from each LLM for each conversation. Historical responses aren't included.
- Multiple platforms: A conversation may have responses from one, two, or all three platforms depending on your configuration.
- Citations vary: ChatGPT often provides citations; Claude and Gemini typically don't. Empty arrays are returned when no citations exist.
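Given these notes, code that consumes the response should tolerate empty citations arrays and a variable number of responses per conversation. A small helper sketch (the function name is ours, not part of the API):

```python
from collections import defaultdict

def citation_counts_by_platform(data: dict) -> dict:
    """Count citations per platform across all conversations.

    Platforms that responded without citations (common for Claude
    and Gemini) still appear in the result with a count of zero.
    """
    counts = defaultdict(int)
    for conv in data["data"]["conversations"]:
        for resp in conv["responses"]:
            counts[resp["platform"]] += len(resp["citations"])
    return dict(counts)
```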
Error responses
Configuration not found
{
  "success": false,
  "error": "Configuration not found.",
  "code": "CONFIG_NOT_FOUND",
  "meta": {
    "request_id": "vpb_125",
    "duration_ms": 12.34
  }
}

Invalid platform filter

{
  "success": false,
  "error": "Invalid platform. Use: chatgpt, claude, or gemini.",
  "code": "INVALID_PLATFORM",
  "meta": {
    "request_id": "vpb_126",
    "duration_ms": 8.12
  }
}

No data available

{
  "success": false,
  "error": "No visibility data available for this configuration.",
  "code": "NO_DATA",
  "meta": {
    "request_id": "vpb_127",
    "duration_ms": 15.67
  }
}
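Callers can branch on the code field rather than parsing error strings. A sketch of that pattern; the exception types are illustrative choices on our side, not mandated by the API:

```python
def check_response(body: dict) -> dict:
    """Return the response body, or raise a descriptive error.

    The error codes match the documented examples; the exception
    types used here are illustrative, not part of the API.
    """
    if body.get("success"):
        return body
    code = body.get("code")
    if code == "CONFIG_NOT_FOUND":
        raise LookupError(body["error"])   # check the config_name spelling
    if code == "INVALID_PLATFORM":
        raise ValueError(body["error"])    # use chatgpt, claude, or gemini
    if code == "NO_DATA":
        raise RuntimeError(body["error"])  # queue a request, wait for the pingback
    raise RuntimeError(body.get("error", "Unknown error"))
```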
Pricing
0.5 credits per conversation in the response.
If your configuration has 14 conversations, you'll be charged 7 credits. Filtering by phase reduces the conversation count and therefore the cost. Platform filtering only affects which responses are returned, not the cost.
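The arithmetic is simple enough to pre-compute before retrieving. A one-line helper (illustrative, not part of any SDK):

```python
def estimate_credits(conversation_count: int) -> float:
    """Estimate retrieval cost at 0.5 credits per conversation."""
    return conversation_count * 0.5
```

For example, a configuration with 14 conversations costs `estimate_credits(14)` = 7 credits, matching the response example above.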
Example: Python integration
import requests

API_KEY = "sk_live_your_key_here"
BASE_URL = "https://app.serp360.ai/api/v1/visibility"

def queue_visibility_request(config_name, pingback_url, platform=None, phase=None):
    """Queue a visibility data request."""
    params = {
        "config_name": config_name,
        "pingback_url": pingback_url
    }
    if platform:
        params["platform"] = platform
    if phase:
        params["phase"] = phase
    response = requests.get(
        BASE_URL,
        headers={"X-API-Key": API_KEY},
        params=params
    )
    return response.json()

def retrieve_visibility_data(config_name, platform=None, phase=None):
    """Retrieve visibility data after pingback received."""
    params = {"config_name": config_name}
    if platform:
        params["platform"] = platform
    if phase:
        params["phase"] = phase
    response = requests.get(
        BASE_URL,
        headers={"X-API-Key": API_KEY},
        params=params
    )
    return response.json()

# Queue the request
result = queue_visibility_request(
    config_name="CRM Research",
    pingback_url="https://your-server.com/webhook"
)
print(f"Queued: {result['request_id']}")

# After receiving pingback, retrieve data
data = retrieve_visibility_data(config_name="CRM Research")

# Process conversations
for conv in data['data']['conversations']:
    print(f"\nQuery: {conv['query']}")
    print(f"Phase: {conv['phase']}")
    for resp in conv['responses']:
        print(f"  {resp['platform']}: {len(resp['citations'])} citations")
Example: Analysing brand mentions
def analyse_brand_mentions(data, brand_name):
    """Count how often a brand is mentioned across LLM responses."""
    mentions = {"chatgpt": 0, "claude": 0, "gemini": 0}
    citation_count = {"chatgpt": 0, "claude": 0, "gemini": 0}
    brand_lower = brand_name.lower()
    for conv in data['data']['conversations']:
        for resp in conv['responses']:
            platform = resp['platform']
            # Count text mentions
            if brand_lower in resp['text'].lower():
                mentions[platform] += 1
            # Count citation mentions
            for citation in resp['citations']:
                if brand_lower in citation['url'].lower() or brand_lower in citation['title'].lower():
                    citation_count[platform] += 1
    return {
        "text_mentions": mentions,
        "citation_mentions": citation_count
    }

# Usage
data = retrieve_visibility_data(config_name="CRM Research")
results = analyse_brand_mentions(data, "Salesforce")
print(f"Text mentions: {results['text_mentions']}")
print(f"Citation mentions: {results['citation_mentions']}")
Example: Webhook handler (Node.js/Express)
const express = require('express');
const app = express();
app.use(express.json());

app.post('/webhook', (req, res) => {
  const { event, config_name, platform, phase, date } = req.body;
  if (event === 'visibility_data_ready') {
    console.log(`Visibility data ready for "${config_name}"`);
    console.log(`Filters: platform=${platform}, phase=${phase}`);
    // Trigger your data retrieval process
  }
  res.status(200).send('OK');
});

app.listen(3000);
Tips
- Filter by phase to reduce costs – Use the phase filter to retrieve only the buyer journey stage you need and pay for fewer conversations.
- Filter by platform for focused analysis – Use platform to get responses from a single LLM (doesn't affect cost, just response size).
- Track citations – Citations are valuable for understanding which sources LLMs recommend. Build dashboards to track your citation share over time.
- Compare platforms – Different LLMs give different answers. Track how each one talks about your brand vs competitors.
- Monitor phases – Awareness, consideration, and decision queries reveal different aspects of your AI visibility.
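The citation-share idea from the tips above can be sketched as a small aggregation over the response payload, grouping citations by domain (the function name is illustrative):

```python
from collections import Counter
from urllib.parse import urlparse

def citation_share(data: dict) -> dict:
    """Fraction of all citations attributed to each cited domain."""
    domains = Counter()
    for conv in data["data"]["conversations"]:
        for resp in conv["responses"]:
            for citation in resp["citations"]:
                domains[urlparse(citation["url"]).netloc] += 1
    total = sum(domains.values()) or 1  # avoid division by zero
    return {domain: count / total for domain, count in domains.items()}
```

Running this over successive daily retrievals gives the time series a citation-share dashboard needs.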