Google Trends Scraper - Scrape Trending Search Data

RealdataAPI / google-trends-scraper

Scrape data for hundreds of search terms per run or per Google Sheet, set time ranges to get faster outputs, and choose geolocations and other trending-search filters on the Google Trends platform using Google Trends Scraper. Store your data in a usable format such as XML or CSV. Our Google Trends data collection tool is available in the USA, UAE, UK, Australia, France, Germany, Canada, Singapore, Mexico, Spain, Brazil, and other countries worldwide.

What is Google Trends Scraper and How Does It Work?

It is a data scraping tool that collects search-trend data from the Google Trends platform. Google Trends doesn't have an official API, but this unofficial scraper lets you extract Google Trends data directly and at scale. We have built it on a robust SDK, so you can run it locally or on our platform.

Why Extract Google Trends Data Using Our Scraper?

The Google Trends platform lets you discover what people around the world have been searching for on the internet. It also helps you find new ideas and topics along with their level of public interest. By studying Google Trends data at scale, you can decide where to direct your resources and investments.

Whether you're an SEO expert or digital marketer monitoring keywords, a journalist researching trending topics, an e-commerce seller looking for products to dropship, or a real estate developer tracking the future of property values, Google Trends will give you reliable data.

What is the Cost of Using Google Trends Scraper?

Estimating the resources required to scrape Google Trends data is difficult because use cases vary widely. The best way to estimate the cost is therefore to perform a trial run of the trending-search data scraper with sample data and a restricted output. This gives you a price per scrape, which you can multiply by the number of scrapes you plan to run.

You can check out our stepwise tutorial for Google Trends Scraper and its pricing. If you choose a higher plan for the long term, you will save money.

Important Note: This scraper works best when you enter more search queries per scrape. In our experience, a single scrape with one thousand queries performs better than one with a single keyword. The sketch below illustrates a batched input.
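
For illustration, a batched input might look like the following sketch (the terms and the time range are just examples; in practice the searchTerms array would carry hundreds of terms per run):

{
  "searchTerms": ["web scraping", "data extraction", "price monitoring"],
  "timeRange": "today 1-m",
  "proxyConfiguration": { "useRealdataAPIProxy": true }
}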

Input

Field Type Description
searchTerms array A list of search queries to scrape Google Trends data for. Required if spreadsheetId is not provided.
spreadsheetId string An optional Google Sheet ID pointing to the search queries to crawl.
isPublic boolean If true, you can load a publicly shared Google Sheet without authorization. To load private sheets, follow the authorization steps described below on this page. Defaults to false.
timeRange string The time range to search; defaults to the last year.
category string The search filter category; defaults to All categories.
geo string Restrict results to a specific geolocation; defaults to worldwide.
maxItems number An optional limit on the total number of items to scrape.
customTimeRange string An optional custom time range. If provided, it takes priority over the regular timeRange. See the custom time range section for format and examples.
extendOutputFunction string An optional function that receives the jQuery handle ($) and returns data to merge with the default result. See Extend Output Function below.
proxyConfiguration object An optional field to set up proxy servers.

Essential Notes on Spreadsheet Input

  • Google Sheets is the only acceptable datasheet format.
  • The spreadsheet should contain only one data column.
  • The scraper will consider the first row in the Google Sheet as the column title, not the search query.
  • Check the Sample Google Sheet.
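
For illustration, an acceptable sheet might look like the sketch below; the column title and the terms are only examples:

Keywords            <- first row, treated as the column title, not a search query
web scraping
google trends
data extraction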

Essential Notes on timeRange

You can use the drop-down menu to select the time range on our platform. If you supply JSON input instead, these are the possible timeRange values:

  • all (values from 2004 to the present)
  • today 5-y (values from the last five years)
  • today 1-y (the default; an empty string also maps to last year's values)
  • today 3-m (values from the last three months)
  • today 1-m (values from the last month)
  • now 7-d (values from the last week)
  • now 1-d (values from the last 24 hours)
  • now 4-H (values from the last four hours)
  • now 1-H (values from the last 60 minutes)

Input Example of Google Trends Scraper

{ "searchTerms": [ "test term", "another test term" ], "spreadsheetId": //spreadsheetId, "timeRange": "now 4-H", "isPublic": true, "maxItems": 100, "customTimeRange": "2020-03-24 2020-03-29", "extendOutputFunction": "($) => { return { test: 1234, test2: 5678 } }", "proxyConfiguration": { "useRealdataAPIProxy": true } }

Output Example of Google Trends Scraper

The Google Trends data scraper will save its output in a dataset. Each item contains the search term plus one field per date, holding the corresponding value.

Here is the sample output of an item:

{ "searchTerm": "CNN", "‪Jan 13, 2019‬": "92", "‪Jan 20, 2019‬": "100", "‪Jan 27, 2019‬": "86", "‪Feb 3, 2019‬": "82", //... }

If you set outputAsISODate to true, dates will be displayed as ISO date-time strings:

{ "Term / Date": "CNN", "2019-08-11T03:00:00.000Z": "43", "2019-08-18T03:00:00.000Z": "34", "2019-08-25T03:00:00.000Z": "34", // ... }

Authorization

If your Google spreadsheet is private, you'll need authorization.

The Google Trends data extractor internally runs the actor that imports and exports Google Sheets. You must complete the authorization by running that actor separately in your account. After a single run, it stores the token in the key-value store, and the Google Trends data extractor will use it automatically. Once you have run the Google Sheets import-export actor once, you can fully automate the Google Trends Scraper without setting any authorization parameters.

If you want to use Google Sheets from multiple Google accounts, you need a separate tokensStore for each account. You should also name each store clearly so you can track where each token is kept.

After the scraper run, you can export the resulting data in a structured spreadsheet format from your dataset.
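
As a minimal sketch of such an export, the snippet below reuses the client calls from the examples further down this page to list dataset items and write them to a local CSV file. The dataset ID is a placeholder, and the CSV assembly is our own illustration, not a built-in feature.

import { RealdataAPIClient } from 'RealdataAPI-Client';
import { writeFileSync } from 'node:fs';

const client = new RealdataAPIClient({ token: '<YOUR_API_TOKEN>' });

(async () => {
    // '<DATASET_ID>' stands for run.defaultDatasetId from a finished run.
    const { items } = await client.dataset('<DATASET_ID>').listItems();

    // Build a simple CSV: one column per key, one row per item.
    const columns = Object.keys(items[0] ?? {});
    const rows = items.map((item) => columns.map((col) => item[col]).join(','));
    writeFileSync('google-trends.csv', [columns.join(','), ...rows].join('\n'));
})();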

Customized Time Range

It is a string containing the custom time range, in startDate endDate order.

It has a YYYY-MM-DD YYYY-MM-DD format.

Samples:

2020-01-30 2020-03-29
2019-03-20 2019-03-26

Dates can include a time component only when the range spans seven days or fewer.

Samples:

2020-03-24T08 2020-03-29T15
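
For example, the following input sketch pins a run to the hour-precision range above; because customTimeRange is present, the regular timeRange value is ignored:

{
  "searchTerms": ["test term"],
  "timeRange": "today 1-m",
  "customTimeRange": "2020-03-24T08 2020-03-29T15"
}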

Extend Output Function

Update the default result of this scraper using the extend output function. You can select the data you need with the help of the jQuery handle ($) passed as an argument. The scraper will merge the function's output with the default result.

The function should return an object.

The returned fields let you do the following:

  • Change a field - return an existing field with a new value
  • Add a new field - return a field that is not in the default output
  • Remove a field - return an existing field with an undefined value

You'll see a new field in the following example:

($) => {
  return {
    comment: 'This is a comment',
  }
}

Also, get the related links and keyword trends using this example:

($) => {
  return {
    trends: $('a[href^="/trends/explore"] .label-text')
      .map((_, s) => ({ text: s.innerText, link: s.closest('a').href }))
      .toArray()
  }
}
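
As a combined sketch of the three behaviours listed above, the function below changes, adds, and removes fields in one pass; the field names are only illustrative:

($) => {
  return {
    searchTerm: 'overridden term',            // change: existing field, new value
    note: 'added by extendOutputFunction',    // add: field not in the default output
    unwantedField: undefined                  // remove: existing field, undefined value
  }
}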

Google Trends Scraper with Integrations

Lastly, you can connect the Google Trends data collection tool to any web application or cloud service using the integrations available on our platform. GitHub, Google Drive, Google Sheets, Slack, Zapier, Make, Airbyte, and other options are available. Use webhooks to trigger an action when an event occurs, such as getting an alert after the Google Trends Scraper finishes successfully.
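
As a rough sketch of the webhook idea, the snippet below registers a notification for successful runs. Note that the webhooks() helper, the event type string, and the payload fields are assumptions modelled on the actor and dataset helpers used elsewhere on this page, so check the platform documentation for the exact API.

import { RealdataAPIClient } from 'RealdataAPI-Client';

const client = new RealdataAPIClient({ token: '<YOUR_API_TOKEN>' });

(async () => {
    // Hypothetical webhook registration: the method name, event type, and
    // payload fields are assumptions, not a confirmed part of the client API.
    await client.webhooks().create({
        eventTypes: ['ACTOR.RUN.SUCCEEDED'],
        condition: { actorId: 'emastra/google-trends-scraper' },
        requestUrl: 'https://example.com/google-trends-alert',
    });
})();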

Using Google Trends Data Extractor with Real Data API Actor

Our actor gives you programmatic access to the platform. It is organized around RESTful HTTP endpoints that let you run, manage, and schedule scrapers on our platform. The actor also lets you track scraper performance, create and update versions, access datasets, and more.

Use our client PyPI and NPM packages to access the actor from Python and Node.js, respectively.

Develop a Customized Scraper With Real Data API

If you use our Google Trends data scraper and it doesn't cover what you need, you can develop a scraper with custom requirements on our platform. We provide many templates in TypeScript, JavaScript, and Python to get you started. You can write the scraper itself with Crawlee, the open-source crawling library; a minimal sketch follows.
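
As a minimal Crawlee sketch of what such a custom scraper can start from, the example below crawls a single page and stores one field; the target URL and the selector are placeholders you would replace with your own logic.

import { CheerioCrawler, Dataset } from 'crawlee';

const crawler = new CheerioCrawler({
    async requestHandler({ request, $ }) {
        // Extract whatever your use case needs; the selector is illustrative.
        await Dataset.pushData({ url: request.url, title: $('title').text() });
    },
});

await crawler.run(['https://example.com']);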

If you would rather not build it yourself, contact our team to design it for you.

Your Feedback

We have a dedicated team working on improving scraper performance on our platform. If you find any issues or want to suggest a feature for Google Trends Scraper, please visit the Issues tab in your console account and create an issue.

Industries

Check out how industries use Google Trends Scraper worldwide.

E-commerce & Retail

You should have a Real Data API account to execute the program examples. Replace <YOUR_API_TOKEN> in the program with your API token. Read the Real Data API docs for more details on the live APIs.

import { RealdataAPIClient } from 'RealdataAPI-Client';

// Initialize the RealdataAPIClient with API token
const client = new RealdataAPIClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare actor input
const input = {
    "searchTerms": [
        "webscraping"
    ],
    "timeRange": "",
    "proxyConfiguration": {
        "useRealdataAPIProxy": true
    }
};

(async () => {
    // Run the actor and wait for it to finish
    const run = await client.actor("emastra/google-trends-scraper").call(input);

    // Fetch and print actor results from the run's dataset (if any)
    console.log('Results from dataset');
    const { items } = await client.dataset(run.defaultDatasetId).listItems();
    items.forEach((item) => {
        console.dir(item);
    });
})();

from RealdataAPI_client import RealdataAPIClient

# Initialize the RealdataAPIClient with your API token
client = RealdataAPIClient("<YOUR_API_TOKEN>")

# Prepare the actor input
run_input = {
    "searchTerms": ["webscraping"],
    "timeRange": "",
    "proxyConfiguration": { "useRealdataAPIProxy": True },
}

# Run the actor and wait for it to finish
run = client.actor("emastra/google-trends-scraper").call(run_input=run_input)

# Fetch and print actor results from the run's dataset (if there are any)
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# Set API token
API_TOKEN=<YOUR_API_TOKEN>

# Prepare actor input
cat > input.json <<'EOF'
{
  "searchTerms": [
    "webscraping"
  ],
  "timeRange": "",
  "proxyConfiguration": {
    "useRealdataAPIProxy": true
  }
}
EOF

# Run the actor
curl "https://api.RealdataAPI.com/v2/acts/emastra~google-trends-scraper/runs?token=$API_TOKEN" /
  -X POST /
  -d @input.json /
  -H 'Content-Type: application/json'

Search Queries

searchTerms Optional Array

A list of search queries to scrape. It is required if spreadsheetId is not provided.

Multiple Terms

isMultiple Optional Boolean

If true, the scraper treats commas within a search term as separators for multiple searches.
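
As a sketch of our reading of this option, a comma-separated entry with isMultiple enabled would be treated as multiple searches; the terms are illustrative:

{
  "searchTerms": ["web scraping, data extraction"],
  "isMultiple": true
}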

Time Range

timeRange Optional Enum

Select a predefined time range for the search; the default is the last year.

Options:

all, today 5-y, today 3-m, today 1-m, now 7-d, now 1-d, now 4-H, now 1-H

Geolocation Area

geo Optional Enum

Gather output from a particular geolocation; the default is worldwide.

Options:

US, UK, UAE, CA, IN, DE, FR, AU, NZ, IT, BR, MK, MY, and many other countries and territories (for example, the Keeling Islands, the Malvinas, and Vatican City State).

Google Sheet ID

spreadsheetId Optional String

An optional Google Sheet ID used to load search queries, treating the Google Sheet as the data source. The sheet should have a single column, and its first row is treated as the column title.

Is Google Spreadsheet Public

isPublic Optional Boolean

If true, you can load a public Google Sheet without authorization. See the Authorization section of the Readme to load private sheets.

Trend Categories

category Optional Enum

Filter the search by category; the default is All Categories.

Options:

7, 3, 16, 12, 22, 71, 958, 533

Maximum Items

maxItems Optional Integer

Set a limit on the number of items to scrape. Enter 0 for no limit.

Custom Time Range

customTimeRange Optional String

Enter a custom time range in the YYYY-MM-DD YYYY-MM-DD format. If provided, it takes priority over the regular timeRange. See the Readme section for a detailed explanation and examples.

Extend Output Function

extendOutputFunction Optional String

A function whose returned object is merged with the default result to produce the scraper output.

Proxy Configuration

proxyConfiguration Required Object

Use a proxy setup so the scraper can run without restrictions. Multiple options are available; the default proxy country is the US.

Output as ISO Date

outputAsISODate Optional Boolean

If true, output dates are formatted as ISO date-time strings.

CSV-based Output

csvOutput Optional Boolean

If true, the output is shaped for CSV, with columns for keywords, dates, and values.

Maximum Concurrency

maxConcurrency Optional Integer

The maximum number of pages opened in parallel, as long as enough CPU is available.

Page Load Timeout

pageLoadTimeoutSecs Optional Integer

The maximum time, in seconds, to wait for a page to load before the request fails.

{
  "searchTerms": [
    "webscraping"
  ],
  "isMultiple": false,
  "timeRange": "",
  "geo": "",
  "isPublic": false,
  "category": "",
  "maxItems": 0,
  "extendOutputFunction": "($) => {/n    const result = {};/n    // Uncomment to add an extra field to each item of the output/n    // result.extraField = 'Test field';/n/n    return result;/n}",
  "proxyConfiguration": {
    "useRealdataAPIProxy": true
  },
  "outputAsISODate": false,
  "csvOutput": false,
  "maxConcurrency": 10,
  "pageLoadTimeoutSecs": 180
}