AI Visibility API
Export AI conversation data from ChatGPT, Claude, and Gemini. Track how LLMs respond to queries in your market, including citations and brand mentions.
Endpoint
GET https://app.serp360.ai/api/v1/visibility
Authentication
Pass your API key in the X-API-Key header:
curl -X GET "https://app.serp360.ai/api/v1/visibility?workspace_id=ws_xxx&..." \
  -H "X-API-Key: sk_live_your_key_here"
How it works
The Visibility endpoint uses a two-step flow with intelligent data checking:

- Register – Call with a `pingback_url`. We check whether data exists for this configuration:
  - Data exists → we POST to your pingback URL immediately and return `status: "ready"`
  - No data yet → we return `status: "queued"` and POST when the next LLM processing run completes
- Retrieve – Call again without `pingback_url` to fetch the data. Credits are charged here.

This flow ensures you're notified as soon as data is available: instantly if it already exists, or after processing completes.
Parameters
| Parameter | Required | Description |
|---|---|---|
| `workspace_id` | Yes | Workspace ID (e.g. `ws_d04ae5a1f833`) |
| `config_name` | Yes | Your AI visibility configuration name (case insensitive) |
| `platform` | No | Filter by LLM: `chatgpt`, `claude`, or `gemini` |
| `phase` | No | Filter by buyer journey phase: `awareness`, `consideration`, or `decision` |
| `pingback_url` | Step 1 only | HTTPS URL where we'll POST when data is ready |
Step 1: Register the request
curl -X GET "https://app.serp360.ai/api/v1/visibility?workspace_id=ws_d04ae5a1f833&config_name=CRM%20Research&pingback_url=https://your-server.com/webhook" \
  -H "X-API-Key: sk_live_your_key_here"
With optional filters
curl -X GET "https://app.serp360.ai/api/v1/visibility?workspace_id=ws_d04ae5a1f833&config_name=CRM%20Research&platform=chatgpt&phase=consideration&pingback_url=https://your-server.com/webhook" \
  -H "X-API-Key: sk_live_your_key_here"
Response (Data exists)
If visibility data already exists for this configuration, the pingback is sent immediately:
{
"success": true,
"status": "ready",
"message": "Data is ready. We've notified your pingback URL.",
"workspace_id": "ws_d04ae5a1f833",
"request_id": "b2c3d4e5",
"credits_used": 0,
"meta": {
"request_id": "b2c3d4e5",
"duration_ms": 38.56
}
}
Response (No data yet)
If no LLM runs exist for this configuration yet:
{
"success": true,
"status": "queued",
"message": "No AI visibility data exists for this configuration yet. We'll notify you when ready.",
"workspace_id": "ws_d04ae5a1f833",
"request_id": "b2c3d4e5",
"credits_used": 0,
"meta": {
"request_id": "b2c3d4e5",
"duration_ms": 38.56
}
}
Pingback notification
When your data is ready (immediately if it exists, or after LLM processing completes), we POST to your `pingback_url`:
{
"event": "visibility_data_ready",
"workspace_id": "ws_d04ae5a1f833",
"config_id": 1,
"config_name": "CRM Research",
"platform": "all",
"phase": "all",
"date": "2025-11-29",
"request_id": "vpb_123",
"message": "Your AI visibility data is ready. Call the API with workspace_id=ws_d04ae5a1f833 to retrieve.",
"timestamp": "2025-11-29T10:15:30+00:00"
}
Your endpoint should return a 2xx status. We'll retry failed deliveries up to 3 times.
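Because failed deliveries are retried, your handler should be idempotent. A minimal, framework-agnostic sketch of deduplicating retried pingbacks on `request_id` (the `handle_pingback` helper and in-memory `seen_ids` set are illustrative, not part of the API; use persistent storage in production):

```python
# Idempotent pingback handler sketch. Retried deliveries carry the same
# request_id, so we process each notification at most once.

def handle_pingback(payload: dict, seen_ids: set) -> tuple[str, int]:
    """Process one pingback POST body; returns (message, http_status)."""
    request_id = payload.get("request_id")
    if request_id in seen_ids:
        # Already handled this delivery; acknowledge with 2xx so retries stop.
        return ("duplicate ignored", 200)
    seen_ids.add(request_id)

    if payload.get("event") == "visibility_data_ready":
        # Trigger Step 2 here: call the endpoint again without pingback_url.
        pass
    return ("OK", 200)
```

Returning a 2xx for duplicates (rather than an error) matters: an error status would keep the retry loop going.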
Step 2: Retrieve the data
Once you receive the pingback, call the same endpoint without `pingback_url`:
curl -X GET "https://app.serp360.ai/api/v1/visibility?workspace_id=ws_d04ae5a1f833&config_name=CRM%20Research" \
  -H "X-API-Key: sk_live_your_key_here"
Response
{
"success": true,
"data": {
"workspace_id": "ws_d04ae5a1f833",
"config_name": "CRM Research",
"config_id": 1,
"conversations": [
{
"query": "Which enterprise CRM platforms offer AI-powered insights?",
"phase": "awareness",
"responses": [
{
"platform": "chatgpt",
"text": "For enterprise CRM with AI capabilities, the leading platforms include Salesforce with Einstein AI, Microsoft Dynamics 365 with Copilot, and HubSpot's AI-powered features. Salesforce Einstein provides predictive lead scoring and automated insights...",
"date": "2025-11-28",
"citations": [
{
"title": "Salesforce Einstein AI",
"url": "https://salesforce.com/einstein",
"snippet": "AI-powered CRM insights and predictions..."
},
{
"title": "Microsoft Dynamics 365 Copilot",
"url": "https://dynamics.microsoft.com/copilot",
"snippet": "AI assistant for sales and customer service..."
}
]
},
{
"platform": "claude",
"text": "Leading enterprise CRM platforms with AI capabilities include several major players. Salesforce offers Einstein AI for predictive analytics and automation...",
"date": "2025-11-28",
"citations": []
},
{
"platform": "gemini",
"text": "The top enterprise CRM solutions with AI features are Salesforce (Einstein), Microsoft Dynamics 365 (Copilot), and Oracle CX Cloud...",
"date": "2025-11-29",
"citations": []
}
]
},
{
"query": "Best CRM for mid-size B2B companies",
"phase": "consideration",
"responses": [
{
"platform": "chatgpt",
"text": "For mid-size B2B companies, HubSpot CRM and Pipedrive are excellent choices...",
"date": "2025-11-28",
"citations": [
{
"title": "HubSpot CRM",
"url": "https://hubspot.com/crm",
"snippet": "Free CRM with powerful features..."
}
]
}
]
}
],
"total_conversations": 14
},
"request_id": "b2c3d4e6",
"credits_used": 7,
"balance": 63993,
"meta": {
"request_id": "b2c3d4e6",
"duration_ms": 234.56
}
}
Response fields
| Field | Type | Description |
|---|---|---|
| `workspace_id` | string | Workspace ID (`ws_xxx` format) |
| `config_name` | string | Your configuration name |
| `config_id` | integer | Internal configuration ID |
| `conversations` | array | Array of conversation objects |
| `conversations[].query` | string | The query/prompt sent to LLMs |
| `conversations[].phase` | string | Buyer journey phase (awareness, consideration, decision) |
| `conversations[].responses` | array | LLM responses for this query |
| `conversations[].responses[].platform` | string | LLM platform (chatgpt, claude, gemini) |
| `conversations[].responses[].text` | string | The LLM's response text |
| `conversations[].responses[].date` | string | Date of this response (YYYY-MM-DD) |
| `conversations[].responses[].citations` | array | URLs cited by the LLM |
| `conversations[].responses[].citations[].title` | string | Page title |
| `conversations[].responses[].citations[].url` | string | Full URL |
| `conversations[].responses[].citations[].snippet` | string | Snippet text |
| `total_conversations` | integer | Total number of conversations returned |
Data structure notes
- Latest response per platform: We return only the most recent response from each LLM for each conversation. Historical responses aren't included.
- Multiple platforms: A conversation may have responses from one, two, or all three platforms depending on your configuration.
- Citations vary: ChatGPT often provides citations; Claude and Gemini typically don't. Empty arrays are returned when no citations exist.
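Given this structure, a common first step is flattening the nested payload into one row per citation for analysis or export. A sketch, assuming `payload` is the full Step 2 response body (the `citation_rows` helper name is illustrative):

```python
# Flatten a retrieved Visibility payload into one row per citation,
# following the nested shape documented in the response fields table.

def citation_rows(payload: dict) -> list[dict]:
    """Return a flat list of citation rows from a Step 2 response body."""
    rows = []
    for conv in payload["data"]["conversations"]:
        for resp in conv["responses"]:
            # citations is an empty list when the LLM cited nothing,
            # so this loop simply skips those responses.
            for cit in resp["citations"]:
                rows.append({
                    "query": conv["query"],
                    "phase": conv["phase"],
                    "platform": resp["platform"],
                    "title": cit["title"],
                    "url": cit["url"],
                })
    return rows
```

Each row is self-describing, so the output drops straight into a CSV writer or a dataframe.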
Error responses
Configuration not found
{
"success": false,
"error": {
"code": 404,
"message": "Configuration not found in this workspace"
},
"meta": {
"request_id": "b2c3d4e7"
}
}
Invalid platform filter
{
"success": false,
"error": {
"code": 400,
"message": "Invalid platform. Allowed: chatgpt, claude, gemini"
},
"meta": {
"request_id": "b2c3d4e8"
}
}
Invalid phase filter
{
"success": false,
"error": {
"code": 400,
"message": "Invalid phase. Allowed: awareness, consideration, decision"
},
"meta": {
"request_id": "b2c3d4e9"
}
}
No data ready
{
"success": false,
"error": {
"code": 400,
"message": "No data ready. First request with pingback_url, then retrieve after notification."
},
"meta": {
"request_id": "b2c3d4f0"
}
}
No conversations found
{
"success": false,
"error": {
"code": 404,
"message": "No conversations found for this configuration"
},
"meta": {
"request_id": "b2c3d4f1"
}
}
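All error responses share the same envelope: `success: false`, an `error` object with `code` and `message`, and a `meta.request_id` worth logging for support. A sketch of unwrapping that envelope (the `unwrap` helper and `VisibilityAPIError` class are illustrative, not part of any SDK):

```python
# Unwrap the documented response envelope: return the payload on
# success, raise a typed error carrying code, message, and request_id
# otherwise.

class VisibilityAPIError(Exception):
    def __init__(self, code: int, message: str, request_id: str):
        super().__init__(f"{code}: {message} (request {request_id})")
        self.code = code
        self.request_id = request_id

def unwrap(payload: dict) -> dict:
    """Return the payload if success is true; raise VisibilityAPIError if not."""
    if payload.get("success"):
        return payload
    err = payload.get("error", {})
    meta = payload.get("meta", {})
    raise VisibilityAPIError(
        err.get("code", 0),
        err.get("message", "unknown error"),
        meta.get("request_id", "unknown"),
    )
```

Branching on `e.code` then lets callers treat a 400 "No data ready" (register again or wait for the pingback) differently from a 404 (fix the configuration name).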
Pricing
0.5 credits per conversation in the response.
If your configuration has 14 conversations, you'll be charged 7 credits. Filtering by phase reduces the conversation count and therefore the cost. Platform filtering only affects which responses are returned, not the cost.
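The arithmetic is a straight multiplication, so costs are easy to estimate before retrieving (the `estimate_credits` helper is illustrative; how fractional totals round, if at all, is not documented here):

```python
CREDITS_PER_CONVERSATION = 0.5  # documented rate for this endpoint

def estimate_credits(conversation_count: int) -> float:
    """Estimate the credit cost of retrieving a configuration's data."""
    return conversation_count * CREDITS_PER_CONVERSATION
```

For the 14-conversation example configuration above, this gives 7 credits, matching the `credits_used` field in the sample response.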
Example: Python integration
import requests
API_KEY = "sk_live_your_key_here"
WORKSPACE_ID = "ws_d04ae5a1f833"
BASE_URL = "https://app.serp360.ai/api/v1/visibility"
def register_visibility_request(config_name, pingback_url, platform=None, phase=None):
"""Register a visibility data request and get notified when ready."""
params = {
"workspace_id": WORKSPACE_ID,
"config_name": config_name,
"pingback_url": pingback_url
}
if platform:
params["platform"] = platform
if phase:
params["phase"] = phase
response = requests.get(
BASE_URL,
headers={"X-API-Key": API_KEY},
params=params
)
return response.json()
def retrieve_visibility_data(config_name, platform=None, phase=None):
"""Retrieve visibility data after pingback received."""
params = {
"workspace_id": WORKSPACE_ID,
"config_name": config_name
}
if platform:
params["platform"] = platform
if phase:
params["phase"] = phase
response = requests.get(
BASE_URL,
headers={"X-API-Key": API_KEY},
params=params
)
return response.json()
# Register the request
result = register_visibility_request(
config_name="CRM Research",
pingback_url="https://your-server.com/webhook"
)
# Check if data was ready immediately
if result['status'] == 'ready':
print("Data exists - pingback sent immediately!")
else:
print(f"Queued: {result['request_id']} - will notify when ready")
# After receiving pingback, retrieve data
data = retrieve_visibility_data(config_name="CRM Research")
# Process conversations
for conv in data['data']['conversations']:
print(f"\nQuery: {conv['query']}")
print(f"Phase: {conv['phase']}")
for resp in conv['responses']:
print(f" {resp['platform']}: {len(resp['citations'])} citations")
Example: Analysing brand mentions
def analyse_brand_mentions(data, brand_name):
"""Count how often a brand is mentioned across LLM responses."""
mentions = {"chatgpt": 0, "claude": 0, "gemini": 0}
citation_count = {"chatgpt": 0, "claude": 0, "gemini": 0}
brand_lower = brand_name.lower()
for conv in data['data']['conversations']:
for resp in conv['responses']:
platform = resp['platform']
# Count text mentions
if brand_lower in resp['text'].lower():
mentions[platform] += 1
# Count citation mentions
for citation in resp['citations']:
if brand_lower in citation['url'].lower() or brand_lower in citation['title'].lower():
citation_count[platform] += 1
return {
"text_mentions": mentions,
"citation_mentions": citation_count
}
# Usage
data = retrieve_visibility_data(config_name="CRM Research")
results = analyse_brand_mentions(data, "Salesforce")
print(f"Text mentions: {results['text_mentions']}")
print(f"Citation mentions: {results['citation_mentions']}")
Example: Webhook handler (Node.js/Express)
const express = require('express');
const app = express();
app.use(express.json());
app.post('/webhook', (req, res) => {
const { event, workspace_id, config_name, platform, phase, date } = req.body;
if (event === 'visibility_data_ready') {
console.log(`Visibility data ready for "${config_name}"`);
console.log(`Workspace: ${workspace_id}`);
console.log(`Filters: platform=${platform}, phase=${phase}`);
// Trigger your data retrieval process
}
res.status(200).send('OK');
});
app.listen(3000);
Tips
- Handle both statuses – Check for `"ready"` (instant) or `"queued"` (wait for pingback) in the response.
- Filter by phase to reduce costs – Use the `phase` filter to retrieve only the buyer journey stage you need and pay for fewer conversations.
- Filter by platform for focused analysis – Use `platform` to get responses from a single LLM (doesn't affect cost, just response size).
- Track citations – Citations are valuable for understanding which sources LLMs recommend. Build dashboards to track your citation share over time.
- Compare platforms – Different LLMs give different answers. Track how each one talks about your brand vs competitors.
- Monitor phases – Awareness, consideration, and decision queries reveal different aspects of your AI visibility.
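As a starting point for the citation-share tracking mentioned above, here is a sketch that computes, per platform, the fraction of citations pointing at a given domain (the `citation_share` helper is illustrative, and the domain matching is deliberately naive):

```python
# Per-platform citation share for one domain: of all URLs a platform
# cited in this payload, what fraction point at the domain (or a
# subdomain of it)? `payload` is the full Step 2 response body.
from collections import Counter
from urllib.parse import urlparse

def citation_share(payload: dict, domain: str) -> dict:
    """Return {platform: share_of_citations_for_domain}."""
    total, ours = Counter(), Counter()
    domain = domain.lower()
    for conv in payload["data"]["conversations"]:
        for resp in conv["responses"]:
            for cit in resp["citations"]:
                host = urlparse(cit["url"]).netloc.lower()
                total[resp["platform"]] += 1
                if host == domain or host.endswith("." + domain):
                    ours[resp["platform"]] += 1
    # Platforms with zero citations are omitted rather than divided by zero.
    return {p: ours[p] / total[p] for p in total}
```

Run this against each retrieval and store the results by date to chart how your citation share moves over time.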