API Guide: Getting started

Learn how to authenticate and make your first API call to automate data extraction with Browse AI's approved robots.

Written by Melissa Shires
Updated today

The Browse AI REST API is a powerful tool that lets you programmatically control your data extraction robots and access your scraped data. Instead of manually running robots and downloading data from the dashboard, you can automate everything through simple HTTP requests.

The API is available on all Browse AI plans at no additional cost, making it accessible whether you're running a few robots or managing enterprise-scale data operations.

You can review our full API and Webhook documentation here.

What you can do with the API

  • List your existing robots (GET /robots)

  • Get robot details (GET /robots/{robotId})

  • Run tasks on approved robots (POST /robots/{robotId}/tasks)

  • Retrieve task results and scraped data (GET /robots/{robotId}/tasks/{taskId})

  • List all tasks and their data (GET /robots/{robotId}/tasks)

  • Set up bulk runs (POST /robots/{robotId}/bulk-runs)

  • Manage webhooks (POST /robots/{robotId}/webhooks)

  • Update robot cookies (PATCH /robots/{robotId}/cookies)
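
All of these endpoints share the same base URL and Bearer-token header, so they are easy to wrap in a small client. The sketch below is illustrative only (the class and method names are not an official SDK) and assumes the `requests` library:

```python
import requests

API_BASE = "https://api.browse.ai/v2"


class BrowseAIClient:
    """Minimal illustrative wrapper around the endpoints listed above."""

    def __init__(self, api_key: str):
        self.session = requests.Session()
        # Every endpoint uses the same Bearer-token Authorization header.
        self.session.headers["Authorization"] = f"Bearer {api_key}"

    def url(self, path: str) -> str:
        return f"{API_BASE}{path}"

    def list_robots(self) -> dict:
        return self.session.get(self.url("/robots")).json()

    def run_task(self, robot_id: str, input_parameters: dict) -> dict:
        return self.session.post(
            self.url(f"/robots/{robot_id}/tasks"),
            json={"inputParameters": input_parameters},
        ).json()

    def get_task(self, robot_id: str, task_id: str) -> dict:
        return self.session.get(
            self.url(f"/robots/{robot_id}/tasks/{task_id}")
        ).json()
```

Because the header is set once on the session, every method stays a one-liner around its endpoint.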

What you'll need

Before you begin, make sure you have:

  • A Browse AI account (API access available on all plans)

  • At least one approved robot

  • Basic understanding of REST APIs and HTTP requests

Don't have an approved robot yet? You'll need to create and approve a robot before using the API.

Step 1: Create an API key

Your API key authenticates every request to the Browse AI API. You can create multiple API keys, which is useful for keeping separate integrations or rotating credentials.

  1. In your Browse AI dashboard, go to the API tab in the main navigation.

  2. Click Create API key to generate a new key.

  3. Give your API key a descriptive name.

  4. Copy and securely store your API key.
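
One common way to store the key securely is an environment variable, so it never lands in scripts or source control. A minimal sketch (the variable name BROWSEAI_API_KEY is our choice here, not one the platform requires):

```python
import os

# Read the key from an environment variable rather than hard-coding it.
# BROWSEAI_API_KEY is an illustrative name; any variable name works.
api_key = os.environ.get("BROWSEAI_API_KEY", "")

# The same header is reused for every API call.
headers = {"Authorization": f"Bearer {api_key}"}
```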

Step 2: Get your robot details

You'll need specific details about your approved robot for API calls.

  1. Go to your approved robot in the Browse AI dashboard.

  2. Click on the Integrate tab.

  3. In the "Integrate using Rest API" section, you'll find:

    1. Robot ID: Copy this unique identifier

    2. Workspace ID: Your workspace identifier

    3. Robot input parameters: The parameters your robot accepts (like originUrl)

Step 3: Test your connection

Make a simple API call to verify your setup works. We'll check the system status:

curl -X GET "https://api.browse.ai/v2/status" \
  -H "Authorization: Bearer YOUR_SECRET_API_KEY"

Expected response:

{
  "statusCode": 200,
  "messageCode": "success",
  "tasksQueueStatus": "OK"
}

Step 4: List your approved robots

Get your approved robot's details to verify everything is working:

curl -X GET "https://api.browse.ai/v2/robots" \
  -H "Authorization: Bearer YOUR_SECRET_API_KEY"

Response structure:

{
  "statusCode": 200,
  "messageCode": "success",
  "robots": {
    "totalCount": 5,
    "items": [
      {
        "id": "robot-uuid-here",
        "name": "Your Approved Robot Name",
        "createdAt": 1678795867879,
        "inputParameters": [
          {
            "name": "originUrl",
            "type": "url",
            "required": true
          }
        ]
      }
    ]
  }
}

Copy the id field from your approved robot; you'll need it in the next step.
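
If you are scripting this step, a small helper can pull the id out of the response shown above by robot name (assuming the response shape stays as documented):

```python
def find_robot_id(robots_response: dict, name: str):
    """Return the id of the first robot whose name matches, else None.

    Expects the /robots response shape shown above:
    {"robots": {"items": [{"id": ..., "name": ...}, ...]}}
    """
    for robot in robots_response.get("robots", {}).get("items", []):
        if robot.get("name") == name:
            return robot["id"]
    return None
```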

Step 5: Run your first task

Use your approved robot ID to run a data extraction task:

curl -X POST "https://api.browse.ai/v2/robots/ROBOT_ID/tasks" \
  -H "Authorization: Bearer YOUR_SECRET_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "inputParameters": {
      "originUrl": "https://example.com"
    }
  }'

Success response:

{
  "statusCode": 200,
  "messageCode": "success",
  "result": {
    "id": "task-uuid-here",
    "status": "running",
    "robotId": "robot-uuid-here"
  }
}

Step 6: Check task status and retrieve data

Monitor your task status and retrieve extracted data:

curl -X GET "https://api.browse.ai/v2/robots/ROBOT_ID/tasks/TASK_ID" \
  -H "Authorization: Bearer YOUR_SECRET_API_KEY"

When the task completes successfully:

{
  "statusCode": 200,
  "messageCode": "success",
  "result": {
    "id": "task-uuid-here",
    "status": "successful",
    "capturedTexts": {
      "title": "Example Page Title",
      "price": "$99.99",
      "description": "Product description here"
    },
    "finishedAt": 1678795867879
  }
}

Your extracted data appears in the capturedTexts field, keyed by the field names you configured when creating your robot.
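
Because tasks run asynchronously, a script typically polls until the status is terminal. A minimal polling sketch follows; "successful" comes from the response above, while "failed" and the interval/timeout values are our assumptions (check the API reference for the full status list, or use webhooks to avoid polling entirely):

```python
import time

import requests

API_BASE = "https://api.browse.ai/v2"

# "successful" appears in this guide; "failed" is an assumed
# terminal status -- consult the API reference for the full list.
TERMINAL_STATUSES = {"successful", "failed"}


def wait_for_task(api_key, robot_id, task_id, interval=10, timeout=600):
    """Poll a task until it reaches a terminal status, then return it."""
    headers = {"Authorization": f"Bearer {api_key}"}
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        resp = requests.get(
            f"{API_BASE}/robots/{robot_id}/tasks/{task_id}",
            headers=headers,
        )
        result = resp.json()["result"]
        if result["status"] in TERMINAL_STATUSES:
            return result
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish in {timeout}s")
```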

What you can do with approved robots

Once your robot is approved and you're making API calls successfully, you can:

  • Set up monitoring: automatically refresh your data at regular intervals.

  • Run bulk operations: extract data from up to 500,000 pages with a single command.

  • Use workflows: connect multiple robots for deep scraping scenarios.

  • Set up integrations: connect to 7,000+ apps and tools.
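
For bulk operations, the request body carries one set of input parameters per page. The payload builder below is a sketch; the exact schema (field names such as title, and the inputParameters array shape) is an assumption modeled on the single-task endpoint, so check the API reference before relying on it:

```python
def build_bulk_run_payload(title: str, urls: list) -> dict:
    """Build a bulk-run body with one inputParameters entry per URL.

    Schema is assumed from the single-task endpoint; verify against
    the POST /robots/{robotId}/bulk-runs reference before use.
    """
    return {
        "title": title,
        "inputParameters": [{"originUrl": url} for url in urls],
    }
```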

Rate limits and usage

  • Standard requests: 100 per minute per API key

  • Bulk operations: Up to 500,000 tasks per bulk run

  • Data retention: Varies by plan (check your account settings)
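
With a 100-requests-per-minute limit, it is worth retrying on HTTP 429 rather than failing outright. A generic retry wrapper might look like this (the retry count and backoff values are arbitrary choices, not Browse AI recommendations):

```python
import time


def with_retry(call, retries=5, base=1.0):
    """Invoke `call()` and retry on HTTP 429 with exponential backoff.

    `call` must return an object exposing a status_code attribute,
    such as a requests.Response. Backoff values are illustrative.
    """
    resp = call()
    for attempt in range(retries):
        if resp.status_code != 429:
            break
        time.sleep(base * (2 ** attempt))
        resp = call()
    return resp
```

Pass any zero-argument callable, e.g. `with_retry(lambda: requests.get(url, headers=headers))`.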
