
Naver Stores Scraper – Naver Store API

RealdataAPI / naver-stores-scraper

Real Data API offers a powerful solution for businesses looking to gain deep insights through our advanced Naver Stores Scraper. With seamless and reliable Naver store data scraping, we help you collect essential information like product listings, pricing, inventory, and reviews from Naver e-commerce platforms. Our scalable and efficient Naver store API supports automated data extraction across key global markets including Australia, Canada, Germany, France, Singapore, USA, UK, UAE, and India. Whether you're focused on Naver product data extraction, full-scale Naver e-commerce scraping, or performing in-depth Naver store market analysis, our tool ensures fast and accurate results. Use our scraper for Naver sales data scraping to make informed business decisions, optimize pricing strategies, and monitor your competitors with ease. Stay ahead in the digital marketplace with Real Data API—your go-to solution for Naver data scraping.

What is Naver Stores Scraper, and How does it Work?

A Naver Stores Scraper is a powerful data extraction tool designed to collect structured information from Naver’s online retail platform. It automates the process of gathering key store data such as product titles, categories, prices, availability, images, and customer reviews. This tool is essential for businesses, researchers, and digital marketers who rely on real-time insights for competitive analysis, market research, or pricing strategies. Using advanced algorithms and custom scripts, the scraper performs web scraping for Naver stores by accessing store URLs, parsing HTML content, and extracting desired data points. Users can customize it for Naver product listings scraping, enabling efficient tracking of multiple SKUs across various categories. Additionally, it offers robust features for Naver store inventory tracking, helping businesses monitor stock levels in real-time. Another critical function is Naver store price scraping, which allows for automated comparison of product prices across sellers. The extracted data can be delivered in structured formats like CSV, JSON, or through APIs for easy integration into business dashboards or analytics tools. With a Naver Stores Scraper, businesses can stay informed, competitive, and data-driven in the rapidly evolving e-commerce landscape of South Korea and beyond.
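To illustrate the structured output formats mentioned above, a single scraped record can be delivered as JSON or CSV with only the standard library (the field names below are illustrative, not a fixed schema):

```python
import csv
import json

# A hypothetical scraped product record (field names are illustrative)
record = {
    "product_name": "Wireless Earbuds X100",
    "category": "Electronics",
    "price_krw": 45000,
    "in_stock": True,
    "rating": 4.5,
    "review_count": 123,
}

# Deliver as JSON for API consumers and dashboards...
print(json.dumps(record, ensure_ascii=False, indent=2))

# ...or append to a CSV file for spreadsheet users
with open("products.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=record.keys())
    writer.writeheader()
    writer.writerow(record)
```

In practice a scraper emits many such records per run; the same two serializers cover both one-off exports and streaming delivery.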

Why extract data from Naver Stores?

Extracting data from Naver Stores is essential for businesses looking to gain a competitive edge in South Korea’s rapidly growing e-commerce market. With millions of users actively shopping on the platform, leveraging a reliable Naver scraper provides direct access to valuable market insights. Using Naver store data scraping, businesses can collect real-time information on product listings, prices, stock levels, ratings, and customer reviews. This data is crucial for understanding consumer behavior, monitoring competitor activity, and identifying trending products. Whether you're a seller, aggregator, or analyst, tapping into this data allows for more informed decision-making. By integrating a powerful Naver store API, users can automate the entire process, ensuring accurate and up-to-date data delivery without manual effort. This saves time and resources while maintaining a competitive business strategy. With Naver product data extraction, companies can analyze pricing trends, optimize inventory planning, and tailor their marketing strategies based on real consumer demand. Ultimately, extracting data from Naver Stores allows businesses to scale faster, respond to market changes, and make smarter decisions using real, actionable data. Whether for pricing intelligence or trend analysis, the right Naver scraper is a game-changer in modern e-commerce.

Is it legal to extract Naver Stores data?

The legality of Naver e-commerce scraping depends on how the data is accessed and used. While publicly available information on Naver Stores can often be scraped for analysis, it's crucial to follow ethical scraping practices and comply with local laws, data usage policies, and Naver’s terms of service. When businesses engage in Naver sales data scraping for internal insights—like price monitoring, competitor benchmarking, or trend tracking—it’s generally considered legal if done without violating site security or user privacy. However, scraping sensitive or personal data, overloading servers, or bypassing authentication mechanisms may result in legal challenges or IP bans. To stay compliant, businesses should use respectful scraping techniques like rate limiting, user-agent identification, and honoring robots.txt. Alternatively, some opt for partnerships or tools that provide data via legitimate APIs, reducing risk while enabling accurate Naver store market analysis. In short, Naver e-commerce scraping can be legal when performed responsibly and within the platform’s guidelines. For best results, companies should consult legal professionals, use scraping services that follow ethical standards, and ensure the scraped data is used only for lawful and non-invasive purposes. This way, businesses can extract insights while minimizing compliance risks.
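The respectful techniques mentioned above can be sketched in a few lines: parse the site's robots.txt policy and honor its crawl delay before fetching. The rules and bot name below are illustrative, not Naver's actual policy:

```python
import time
import urllib.robotparser

# Parse a robots.txt policy before scraping. The rules below are illustrative;
# in practice, load the live file with rp.set_url(...) and rp.read()
rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
    "Crawl-delay: 2",
])

url = "https://smartstore.naver.com/store-example/products/123456"
if rp.can_fetch("MyScraperBot/1.0", url):
    delay = rp.crawl_delay("MyScraperBot/1.0") or 1
    time.sleep(delay)  # rate-limit: respect the crawl delay between requests
    # ... fetch the page here, with an identifying User-Agent header ...
```

Disallowed paths are skipped automatically by the `can_fetch` check, which covers the "honoring robots.txt" requirement with no extra bookkeeping.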

How can I extract data from Naver Stores?

Here’s a detailed step-by-step process to extract data from Naver Stores using either custom scripting or a Naver Scraping API, covering web scraping for Naver stores, Naver product listings scraping, Naver store inventory tracking, and Naver store price scraping.

Step-by-Step Process to Extract Data from Naver Stores

1. Identify Data Requirements

Decide what you want to extract:

  • Product titles, descriptions, images
  • Prices, discounts (Naver store price scraping)
  • Stock levels (Naver store inventory tracking)
  • Reviews and ratings (Naver product listings scraping)

2. Find Target URLs

Manually search or crawl Naver Stores to collect URLs of:

  • Individual product pages
  • Category listings
  • Store overview pages

This is essential for web scraping for Naver stores.

3. Choose Your Extraction Method

Option A: Use a Naver Scraping API

  • Ideal for businesses needing bulk, real-time data
  • No need to manage scraping infrastructure
  • Ensures consistent, structured output
  • Integrates easily with BI tools, CRMs, or Excel

Option B: Build a Custom Scraper

Use Python libraries like:

  • requests – to send HTTP requests
  • BeautifulSoup or lxml – to parse HTML
  • Selenium – for dynamic content loading

Example Python snippet (the CSS selectors below are illustrative; inspect the live page to find the actual ones):

import requests
from bs4 import BeautifulSoup

url = 'https://smartstore.naver.com/store-example/products/123456'
headers = {'User-Agent': 'Mozilla/5.0'}

response = requests.get(url, headers=headers)
response.raise_for_status()
soup = BeautifulSoup(response.text, 'html.parser')

# These selectors are placeholders; Smart Store pages often load content
# dynamically, in which case Selenium is needed instead of plain requests
name_tag = soup.select_one('h3.product-title')
price_tag = soup.select_one('span.product-price')

if name_tag and price_tag:
    print(name_tag.text.strip(), price_tag.text.strip())
else:
    print('Selectors not found; the page may render content via JavaScript')

4. Handle Pagination & Dynamic Content

Use loops or scripts to:

  • Navigate through multiple product pages
  • Extract data from dynamically loaded elements using Selenium
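The pagination loop might be sketched as follows. The `page` query parameter and URL pattern are assumptions; inspect the real listing pages to find their actual scheme:

```python
import requests

HEADERS = {"User-Agent": "Mozilla/5.0"}

def page_urls(base_url, max_pages):
    """Build listing-page URLs; the 'page' query parameter is an assumption."""
    return [f"{base_url}?page={n}" for n in range(1, max_pages + 1)]

def fetch_all(base_url, max_pages=5):
    """Fetch each listing page in turn, stopping when pages run out."""
    html_pages = []
    for url in page_urls(base_url, max_pages):
        resp = requests.get(url, headers=HEADERS, timeout=10)
        if resp.status_code != 200:
            break  # stop on errors or when the last page is reached
        html_pages.append(resp.text)
    return html_pages
```

For elements injected by JavaScript, the same loop structure applies with Selenium driving a real browser instead of `requests`.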

5. Implement Data Cleaning & Formatting

  • Convert currency to numeric formats
  • Remove HTML tags from descriptions
  • Structure your output into JSON/CSV for easy use
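Two typical cleaning helpers, as a sketch (the sample inputs are illustrative):

```python
import re

def clean_price(raw):
    """Convert a price string like '45,000원' or '₩45,000' to an integer KRW value."""
    digits = re.sub(r"[^\d]", "", raw)
    return int(digits) if digits else None

def strip_html(raw):
    """Remove HTML tags from a scraped description."""
    return re.sub(r"<[^>]+>", "", raw).strip()

print(clean_price("45,000원"))                    # -> 45000
print(strip_html("<p>Great <b>sound</b></p>"))    # -> Great sound
```

Once every field is numeric or plain text, the records can be written straight to JSON or CSV as shown earlier.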

6. Automate & Schedule

Set up periodic scraping:

  • With cron jobs or task schedulers
  • Daily/weekly for inventory or price tracking
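For quick experiments, a minimal in-process scheduler can stand in for cron (for production use, cron jobs or OS task schedulers are more robust; `run_scrape` is a placeholder for your own routine):

```python
import time
from datetime import datetime

def run_scrape():
    # Placeholder for your scraping routine
    print(f"Scrape started at {datetime.now():%Y-%m-%d %H:%M}")

def schedule_repeating(job, interval_seconds=24 * 60 * 60, iterations=None):
    """Run `job` every interval_seconds; iterations=None means run forever."""
    count = 0
    while iterations is None or count < iterations:
        job()
        count += 1
        if iterations is None or count < iterations:
            time.sleep(interval_seconds)
```

Calling `schedule_repeating(run_scrape)` gives a daily refresh; pass a smaller interval for price tracking.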

7. Store & Use the Data

  • Feed into analytics dashboards
  • Use for competitive analysis
  • Optimize your pricing strategy or inventory decisions

8. Ensure Legal & Ethical Compliance

  • Respect robots.txt
  • Avoid excessive server requests
  • Don’t scrape login-restricted or sensitive data
  • Prefer official Naver Scraping API when available

Whether you're performing Naver product listings scraping, tracking stock levels, or comparing prices across sellers, using a Naver Scraping API or custom script gives you the edge. It's scalable, flexible, and essential for smart e-commerce decisions.

Input Options

When setting up a Naver Stores Scraper—either through a Naver Scraping API or a custom web scraping solution—you’ll need to define clear input options. These inputs guide the scraper on what data to collect, from where, and how often.

Here’s a list of common and customizable input options:

1. Store URL

  • Single store page URL
  • List of multiple store URLs (batch scraping)
  • Specific product or category page

Example:

https://smartstore.naver.com/store-name

2. Keyword Search Input

  • Input keyword(s) to search Naver Smart Store
  • Scrape based on keyword relevance

Example:

"wireless headphones"

"organic skincare"

3. Product Category

  • Choose specific Naver store categories to extract relevant products only

Example:

  • Electronics
  • Fashion
  • Home & Living

4. Location Filter (Optional)

  • Filter products based on seller location or delivery regions (if available)

Countries:

South Korea (default), viewable for global trend mapping

5. Price Range

  • Set minimum and maximum price limits
  • Useful for Naver store price scraping

Example:

  • Min: 10,000 KRW
  • Max: 100,000 KRW

6. Pagination Control

  • Number of pages to scrape
  • Products per page

Example:

  • Pages 1–5
  • 20 products per page

7. Data Fields to Extract

Select which fields to include in the output:

  • Product title
  • Price
  • Stock availability (Naver store inventory tracking)
  • Ratings & reviews
  • Product image URL
  • Product description
  • Seller name

8. Output Format

  • CSV
  • JSON
  • Excel (.xlsx)
  • API response (for real-time delivery)

9. Schedule / Frequency

  • One-time
  • Daily / Weekly / Monthly (automated via scheduler)

10. Proxy / User-Agent Rotation

  • For custom scrapers: to avoid detection/blocking
  • Optional input for advanced users
Example Request Parameters for Naver Stores Scraping API:
{
  "store_url": "https://smartstore.naver.com/example-store",
  "search_keyword": "wireless earbuds",
  "category": "electronics",
  "min_price": 10000,
  "max_price": 100000,
  "pages": 3,
  "products_per_page": 20,
  "fields": [
    "product_name",
    "product_url",
    "price",
    "availability",
    "rating",
    "review_count",
    "image_url",
    "seller_name"
  ],
  "output_format": "json",
  "schedule": "once",
  "geo_filter": ["South Korea"],
  "use_proxy": true,
  "user_agent_rotation": true
}
Parameter Breakdown

| Parameter | Description |
| --- | --- |
| store_url | Naver Smart Store URL to target |
| search_keyword | Optional: keyword for filtering product listings |
| category | Specific product category to narrow down results |
| min_price / max_price | Price range filter for Naver store price scraping |
| pages | Number of pages to scrape |
| products_per_page | Limit per page |
| fields | Data points to extract, such as title, price, and stock (Naver store inventory tracking) |
| output_format | json, csv, or xlsx |
| schedule | once, daily, weekly, etc. |
| geo_filter | Location-based filtering (if supported) |
| use_proxy | Enable proxy rotation |
| user_agent_rotation | Avoid detection by rotating user agents |

Sample Result of Naver Stores Data Scraper

A sample result of a Naver Stores Data Scraper might include structured information about products listed on Naver's online marketplace. This data could be used for market analysis, consumer insights, and improving customer experience. Here's what a sample result could look like:

Sample Data for Naver Stores Scraper
Product Information:
| Product Name | Category | Price (KRW) | Stock Status | Ratings | Reviews | Seller Name |
| --- | --- | --- | --- | --- | --- | --- |
| Wireless Earbuds X100 | Electronics | 45,000 | In Stock | 4.5 | 123 | TechShop |
| Organic Green Tea 500g | Grocery | 8,000 | Out of Stock | 4.7 | 320 | HealthyLife Market |
| Winter Jacket - Red | Fashion | 120,000 | In Stock | 4.3 | 98 | Cozy Apparel |
| Gaming Mouse - Pro M3 | Electronics | 35,000 | In Stock | 4.6 | 215 | GamerZone |
| LED Desk Lamp | Home Decor | 25,000 | Low Stock | 4.8 | 45 | BrightLiving |
Seller Information:
| Seller Name | Store Rating | Total Products Listed | Location |
| --- | --- | --- | --- |
| TechShop | 4.5 | 120 | Seoul |
| HealthyLife Market | 4.6 | 85 | Busan |
| Cozy Apparel | 4.7 | 60 | Incheon |
| GamerZone | 4.8 | 150 | Daejeon |
| BrightLiving | 4.5 | 70 | Daegu |
Product Price Trends:
| Product | Last 7 Days Price Trend (KRW) | Price Change (%) |
| --- | --- | --- |
| Wireless Earbuds X100 | 45,000 → 43,500 → 44,000 | -2.22% |
| Organic Green Tea 500g | 8,000 → 8,500 → 8,000 | 0% |
| Winter Jacket - Red | 120,000 → 115,000 → 118,000 | -1.67% |
| Gaming Mouse - Pro M3 | 35,000 → 35,500 → 35,000 | 0% |
| LED Desk Lamp | 25,000 → 24,500 → 25,000 | 0% |
Review Insights:
| Product Name | Top Positive Comment | Top Negative Comment |
| --- | --- | --- |
| Wireless Earbuds X100 | "Great sound quality, very comfortable" | "Battery life is a bit short" |
| Organic Green Tea 500g | "The tea tastes amazing!" | "The packaging could be improved" |
| Winter Jacket - Red | "Keeps me warm even in the coldest weather" | "Color faded after one wash" |
| Gaming Mouse - Pro M3 | "Perfect for gaming, smooth response" | "A bit too light for my taste" |
| LED Desk Lamp | "Perfect lighting for work" | "A bit flimsy, could be sturdier" |
Pricing Analysis by Category:
| Category | Average Price (KRW) | Highest Price (KRW) | Lowest Price (KRW) |
| --- | --- | --- | --- |
| Electronics | 55,000 | 120,000 | 25,000 |
| Grocery | 8,000 | 15,000 | 5,000 |
| Fashion | 100,000 | 200,000 | 40,000 |
| Home Decor | 30,000 | 60,000 | 15,000 |
Use Cases:
  • Market Research: Understanding pricing trends and consumer preferences.
  • Competitive Analysis: Comparing products and prices across sellers.
  • Customer Insights: Analyzing reviews to improve products or services.
  • Stock and Price Monitoring: Real-time tracking of product availability and price fluctuations.

This sample result shows how a Naver Stores Data Scraper can provide valuable insights into product offerings, pricing strategies, stock levels, seller performance, and consumer feedback.

Integrations with Naver Stores Data Scraper

Integrating a Naver Stores Data Scraper with various tools and platforms can enhance the functionality of the scraper and streamline data collection, analysis, and reporting. Here are some key integrations that could improve your scraping process:

1. Database Integrations (MySQL, PostgreSQL, MongoDB)
  • Purpose: Store and manage the large datasets scraped from Naver Stores.
  • Benefit: Enables easy querying, analysis, and long-term storage of scraped data. Using relational databases (MySQL, PostgreSQL) or NoSQL (MongoDB) helps organize data efficiently.
  • Use Case: You can store product information, reviews, price trends, and seller details in a database for fast retrieval and analysis.
2. Data Analytics Tools (Power BI, Tableau, Google Data Studio)
  • Purpose: Visualize and analyze scraped data.
  • Benefit: Integrating with these tools allows you to create interactive dashboards, analyze trends, and make informed decisions.
  • Use Case: Create visual representations of pricing trends, product performance, and stock status across different sellers.
3. CRM Platforms (Salesforce, HubSpot)
  • Purpose: Integrate Naver store data with customer relationship management systems.
  • Benefit: Leverage customer data insights from reviews and product details for better engagement strategies and targeted marketing.
  • Use Case: Connect scraped product information and reviews to a CRM to track how specific products are received by customers.
4. Pricing Optimization Tools
  • Purpose: Use scraped product pricing data to automate pricing strategies and competitive pricing analysis.
  • Benefit: Ensure competitive pricing by using real-time data on Naver stores to optimize your own pricing structure.
  • Use Case: Implement dynamic pricing based on competitor price movements and stock levels.
5. E-Commerce Platforms (Shopify, WooCommerce)
  • Purpose: Export scraped product data to populate your own e-commerce platform.
  • Benefit: Allows you to automatically update your online store’s inventory and pricing based on the data scraped from Naver Stores.
  • Use Case: Use the scraped data to track products with high demand or adjust your stock based on competitors’ pricing and availability.
6. Email Marketing Platforms (Mailchimp, SendGrid)
  • Purpose: Automate email campaigns based on product trends, stock levels, and price changes.
  • Benefit: Use the scraped data to send personalized offers, product updates, and promotions to customers.
  • Use Case: Send targeted email campaigns to your users based on the new products or price drops observed from Naver Stores.
7. Web Scraping Automation (Zapier, Integromat)
  • Purpose: Automate scraping tasks and integrate with various apps and services without coding.
  • Benefit: Automate data collection and trigger workflows in other applications based on certain conditions (e.g., when a price drops below a certain threshold).
  • Use Case: Automatically scrape data from Naver Stores at regular intervals and push it into Google Sheets, databases, or analytics tools for further use.
8. APIs for Real-Time Data Integration (REST APIs, Webhooks)
  • Purpose: Enable real-time integration of Naver Stores data with external systems.
  • Benefit: Push real-time data to external platforms or applications like a pricing engine, analytics dashboards, or marketing systems.
  • Use Case: Set up APIs to send real-time product updates to your inventory management system or connect scraped data to customer engagement platforms.
9. Data Storage (AWS S3, Google Cloud Storage)
  • Purpose: Store large amounts of scraped data securely in the cloud.
  • Benefit: Scalable and cost-effective storage solutions for handling large data volumes. AWS S3 and Google Cloud provide easy access to your scraped datasets.
  • Use Case: Save product, review, and pricing data in the cloud for secure backup and easy sharing between teams.
10. Sentiment Analysis Tools (MonkeyLearn, TextRazor)
  • Purpose: Analyze customer reviews and feedback for sentiment and insights.
  • Benefit: Understand customer sentiment around specific products or sellers by analyzing reviews and feedback in real-time.
  • Use Case: Integrate sentiment analysis to assess how well a product is received by customers and adjust marketing strategies accordingly.
11. Product Recommendations (AI Models, Machine Learning APIs)
  • Purpose: Provide personalized product recommendations based on user behavior and product data.
  • Benefit: Use machine learning models to generate tailored product recommendations for users based on trends identified in scraped data.
  • Use Case: Integrate product recommendations into your e-commerce website or app based on the products people view or purchase most frequently.
Key Benefits of Integrating Naver Stores Data Scraper:
  • Streamlined Data Management: Automated workflows and real-time integrations make handling large volumes of data easier.
  • Improved Market Intelligence: Analyzing competitor pricing and stock trends helps in making better business decisions.
  • Enhanced Customer Engagement: Use data-driven insights to personalize marketing campaigns and optimize your online store.
  • Faster Decision-Making: With real-time data updates, you can respond faster to market changes and competitor strategies.

By leveraging these integrations, you can maximize the potential of your Naver Stores Data Scraper and turn the data into actionable insights for your business or marketing strategies.
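As a sketch of the webhook integration above, the snippet below builds a price-drop event from two scraped price points and posts it to a receiver. The endpoint URL and payload shape are placeholders, not a documented Real Data API feature:

```python
import requests

# Placeholder endpoint -- point this at your own webhook receiver
WEBHOOK_URL = "https://example.com/webhooks/naver-price-updates"

def build_event(product, old_price, new_price):
    """Build a price-drop event payload from two scraped price points."""
    return {
        "event": "price_drop",
        "product": product,
        "old_price": old_price,
        "new_price": new_price,
        "change_pct": round((new_price - old_price) / old_price * 100, 2),
    }

def send_event(payload):
    """POST the event to the webhook; raises on HTTP errors."""
    resp = requests.post(WEBHOOK_URL, json=payload, timeout=10)
    resp.raise_for_status()

# Example: Wireless Earbuds X100 dropped from 45,000 to 43,500 KRW
event = build_event("Wireless Earbuds X100", 45000, 43500)
```

A workflow tool like Zapier can subscribe to the same receiver, so one event fans out to Sheets, email, or a pricing engine.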

Executing Naver Stores Data Scraping with Real Data API Naver Stores Scraper

Executing Naver Stores Data Scraping using Real Data API involves a few key steps: setting up the scraper, configuring the API, and processing the data. Below is a general approach for executing the scraping process using the Real Data API for Naver Stores:

1. Set Up the Real Data API Naver Stores Scraper

Before you start scraping, you’ll need to ensure you have access to the Real Data API and configure it to extract Naver Stores data.

Steps:

  • Create an account with Real Data API: Sign up and get access to their API credentials (API key or token).
  • Obtain API Documentation: Download or access the API documentation that explains how to use endpoints, request formats, and available features for Naver Stores scraping.
  • Set Up the API in Your Environment: Use the credentials to set up the connection in your development environment. You can use Python, Node.js, or any other language that supports HTTP requests.
2. API Configuration for Scraping

Real Data API should provide endpoints for scraping product data, reviews, prices, sellers, etc. You’ll need to configure your scraper based on the specific endpoints available for Naver Stores.

Example API Request Configuration:
import requests

# Define API endpoint and your API key
api_url = "https://api.realdatapi.com/naver-stores/scrape"
api_key = "your_api_key"

# Define parameters (e.g., category, search keywords, location, etc.)
params = {
    "category": "electronics",  # or any relevant category
    "search": "wireless earbuds",  # keywords to search for
    "location": "seoul",  # location (optional)
    "page": 1,  # pagination (optional)
}

# Set up headers with authentication
headers = {
    "Authorization": f"Bearer {api_key}"
}

# Send GET request to the API endpoint
response = requests.get(api_url, headers=headers, params=params)

# Check response status
if response.status_code == 200:
    data = response.json()  # Parse the response data (JSON)
    print(data)
else:
    print(f"Failed to scrape data: {response.status_code}")
3. Extract Data Fields

Once the data is successfully scraped, the API response will contain valuable product information like:

  • Product Name
  • Price
  • Stock Status
  • Product Rating
  • Reviews
  • Seller Information
Example Response (JSON Format):
{
    "products": [
        {
            "name": "Wireless Earbuds X100",
            "category": "Electronics",
            "price": 45000,
            "rating": 4.5,
            "reviews_count": 123,
            "stock_status": "In Stock",
            "seller": "TechShop",
            "product_url": "https://store.naver.com/techshop/wireless-earbuds-x100"
        },
        {
            "name": "Gaming Mouse Pro M3",
            "category": "Electronics",
            "price": 35000,
            "rating": 4.6,
            "reviews_count": 215,
            "stock_status": "In Stock",
            "seller": "GamerZone",
            "product_url": "https://store.naver.com/gamerzone/gaming-mouse-pro-m3"
        }
    ],
    "pagination": {
        "current_page": 1,
        "total_pages": 10
    }
}
4. Process and Store the Data

Once you have scraped the data, you may want to store it in a database (e.g., MySQL, MongoDB) or an analytics platform (e.g., Power BI, Tableau) for further analysis.

Example of Storing Data in a MySQL Database:
import mysql.connector

# Set up the MySQL connection
conn = mysql.connector.connect(
    host="localhost",
    user="your_username",
    password="your_password",
    database="scraped_data"
)

cursor = conn.cursor()

# Define SQL Insert Query
insert_query = """
    INSERT INTO products (name, category, price, rating, reviews_count, stock_status, seller, product_url)
    VALUES (%s, %s, %s, %s, %s, %s, %s, %s)
"""

# Loop through the API response ('data' from the previous step) and insert into the database
for product in data['products']:
    cursor.execute(insert_query, (
        product['name'],
        product['category'],
        product['price'],
        product['rating'],
        product['reviews_count'],
        product['stock_status'],
        product['seller'],
        product['product_url']
    ))

# Commit and close connection
conn.commit()
conn.close()
5. Monitor and Schedule Data Scraping

To keep your data up-to-date, you can schedule the scraper to run at specific intervals using a task scheduler (e.g., cron for Linux or Task Scheduler for Windows).

  • Automate Scraping: Schedule the script to run every hour, day, or week to keep data refreshed.
  • Monitor Errors: Set up error handling in the script to log failed scraping attempts for future debugging.
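The error-monitoring advice above can be sketched as a small retry wrapper with logging (the `with_retries` helper is illustrative, not part of any API):

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("naver-scraper")

def with_retries(func, attempts=3, backoff=2):
    """Run a scraping call, retrying with backoff and logging each failure."""
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except Exception as exc:
            log.warning("Attempt %d/%d failed: %s", attempt, attempts, exc)
            if attempt == attempts:
                raise  # surface the error after the final attempt
            time.sleep(backoff * attempt)
```

Wrapping each scheduled run, e.g. `with_retries(lambda: requests.get(api_url, ...))`, leaves a log trail of failed attempts for later debugging.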
6. Data Analysis and Reporting

After scraping and storing the data, you can use analytics tools like Google Data Studio, Tableau, or Power BI to generate reports and dashboards, providing insights like:

  • Pricing trends across competitors.
  • Product availability and stock levels.
  • Popular products and consumer sentiment (from reviews).
  • Seller performance comparison.
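As a minimal sketch of such an analysis, the snippet below aggregates per-category price statistics from the two products in the example API response, with no external analytics tool required:

```python
from collections import defaultdict
from statistics import mean

# Records reuse the example API response shown earlier
products = [
    {"name": "Wireless Earbuds X100", "category": "Electronics", "price": 45000},
    {"name": "Gaming Mouse Pro M3", "category": "Electronics", "price": 35000},
]

def price_stats(products):
    """Compute average/min/max price per category from scraped records."""
    by_category = defaultdict(list)
    for p in products:
        by_category[p["category"]].append(p["price"])
    return {
        cat: {"avg": mean(prices), "min": min(prices), "max": max(prices)}
        for cat, prices in by_category.items()
    }

print(price_stats(products))
```

The same dictionary feeds directly into a BI dashboard or a CSV export for the pricing-trends report.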
Example Workflow:
  • Scrape: Use the Real Data API to extract product data.
  • Store: Save the data into a database (SQL, NoSQL, or cloud storage).
  • Process: Analyze trends and generate reports using analytics platforms.
  • Monitor: Set up regular scraping intervals to maintain up-to-date data.

By executing Naver Stores data scraping with Real Data API, you can efficiently gather and manage key data points from Naver Stores for competitive analysis, market research, or price optimization. The integration with databases and analytics tools further enhances the value derived from the scraped data.

Key Benefits of Real Data API Naver Stores Scraper

The Real Data API Naver Stores Scraper offers a range of powerful benefits for businesses looking to leverage Naver store data scraping for market research, competitive analysis, and product optimization. Here are the key advantages:

  • Comprehensive Data Extraction: With Naver product data extraction, businesses can access detailed information on products, prices, ratings, reviews, and seller data across Naver e-commerce scraping. This helps companies track product performance and market trends in real time, ensuring they stay ahead of competitors.
  • Real-Time Updates: The Naver store API provides real-time access to data, making it possible to monitor changes in stock levels, pricing, and promotions. This ensures that your product listings and marketing strategies are based on the most current information available, helping you optimize your operations.
  • Competitive Intelligence: By scraping Naver sales data, businesses gain a clear view of competitor pricing and stock statuses, enabling them to adjust their strategies accordingly. Whether you're looking to price match or identify gaps in the market, Naver store market analysis is a game-changer for e-commerce businesses.
  • Enhanced Decision-Making: The insights gathered from Naver store data scraping allow businesses to make data-driven decisions. From identifying high-demand products to understanding consumer sentiment through reviews, the data helps inform marketing campaigns, product selection, and pricing strategies.
  • Scalability and Automation: With Naver product data extraction, businesses can scale their scraping efforts efficiently. Automated data collection through the Naver store API ensures that you continuously gather fresh data without manual intervention, saving time and resources.
  • Improved Customer Engagement: Analyzing product reviews and ratings from Naver e-commerce scraping gives you a deeper understanding of customer needs and pain points. This enables you to tailor your marketing messages and improve your product offerings based on customer feedback.

In conclusion, the Real Data API Naver Stores Scraper empowers businesses with the tools needed for advanced market analysis, competitor tracking, and data-driven decision-making in the ever-competitive Naver e-commerce scraping landscape.

You should have a Real Data API account to execute the program examples. Replace the empty token placeholder in the examples with your actor's API token. See the Real Data API docs for more detail on the live APIs.

import { RealdataAPIClient } from 'RealDataAPI-client';

// Initialize the RealdataAPIClient with API token
const client = new RealdataAPIClient({
    token: '',
});

// Prepare actor input
const input = {
    "categoryOrProductUrls": [
        {
            "url": "https://www.amazon.com/s?i=specialty-aps&bbn=16225009011&rh=n%3A%2116225009011%2Cn%3A2811119011&ref=nav_em__nav_desktop_sa_intl_cell_phones_and_accessories_0_2_5_5"
        }
    ],
    "maxItems": 100,
    "proxyConfiguration": {
        "useRealDataAPIProxy": true
    }
};

(async () => {
    // Run the actor and wait for it to finish
    const run = await client.actor("junglee/amazon-crawler").call(input);

    // Fetch and print actor results from the run's dataset (if any)
    console.log('Results from dataset');
    const { items } = await client.dataset(run.defaultDatasetId).listItems();
    items.forEach((item) => {
        console.dir(item);
    });
})();

from realdataapi_client import RealdataAPIClient

# Initialize the RealdataAPIClient with your API token
client = RealdataAPIClient("")

# Prepare the actor input
run_input = {
    "categoryOrProductUrls": [{ "url": "https://www.amazon.com/s?i=specialty-aps&bbn=16225009011&rh=n%3A%2116225009011%2Cn%3A2811119011&ref=nav_em__nav_desktop_sa_intl_cell_phones_and_accessories_0_2_5_5" }],
    "maxItems": 100,
    "proxyConfiguration": { "useRealDataAPIProxy": True },
}

# Run the actor and wait for it to finish
run = client.actor("junglee/amazon-crawler").call(run_input=run_input)

# Fetch and print actor results from the run's dataset (if there are any)
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# Set API token
API_TOKEN=<YOUR_API_TOKEN>

# Prepare actor input
cat > input.json <<'EOF'
{
  "categoryOrProductUrls": [
    {
      "url": "https://www.amazon.com/s?i=specialty-aps&bbn=16225009011&rh=n%3A%2116225009011%2Cn%3A2811119011&ref=nav_em__nav_desktop_sa_intl_cell_phones_and_accessories_0_2_5_5"
    }
  ],
  "maxItems": 100,
  "proxyConfiguration": {
    "useRealDataAPIProxy": true
  }
}
EOF

# Run the actor
curl "https://api.realdataapi.com/v2/acts/junglee~amazon-crawler/runs?token=$API_TOKEN" \
  -X POST \
  -d @input.json \
  -H 'Content-Type: application/json'

Place the Amazon product URLs

productUrls Required Array

Provide one or more Amazon product URLs you wish to extract.

Max reviews

Max reviews Optional Integer

Set the maximum number of reviews to scrape. To scrape all reviews, leave this blank.

Link selector

linkSelector Optional String

A CSS selector stating which links on the page (<a> elements with an href attribute) should be followed and added to the request queue. To filter the links added to the queue, use the Pseudo-URLs and/or Glob patterns settings. If the Link selector is empty, page links are ignored. For details, see Link selector in the README.

Include personal data

includeGdprSensitive Optional Array

Personal information such as names, IDs, or profile pictures is protected by the GDPR in European countries and by other regulations worldwide. Do not extract personal information without a legal reason.

Reviews sort

sort Optional String

Choose the sort criteria for scraped reviews. The default is Amazon's HELPFUL.

Options:

RECENT, HELPFUL

Proxy configuration

proxyConfiguration Required Object

You can select proxy groups from specific countries. Amazon displays products deliverable to your location based on your proxy. If globally shipped products are sufficient, this is not a concern.

Extended output function

extendedOutputFunction Optional String

Enter a function that receives a jQuery handle as its argument and returns customized scraped data. This data is merged into the default result.

{
  "categoryOrProductUrls": [
    {
      "url": "https://www.amazon.com/s?i=specialty-aps&bbn=16225009011&rh=n%3A%2116225009011%2Cn%3A2811119011&ref=nav_em__nav_desktop_sa_intl_cell_phones_and_accessories_0_2_5_5"
    }
  ],
  "maxItems": 100,
  "detailedInformation": false,
  "useCaptchaSolver": false,
  "proxyConfiguration": {
    "useRealDataAPIProxy": true
  }
}