Bash
$ curl "http://api.infatica.io/?api_key=APIKEY&url=URL"

   <!DOCTYPE html>
     <html lang="en">
       <head>
         <meta charset="utf-8">
           ...

See the full documentation for more details.

Superior Web Scraper API: Headless Browsing (JS Rendering) and Proxy Rotation

Web scraping can take your product to the next level – but acquiring data from a major tech platform can be tricky. Scraper API is a modern scraping tool for professional data collection: extract data from websites in the format of your choice, without any roadblocks.
Free Trial

Web Scraping Should Be Simple

When designing Scraper API, we had a simple goal: Make it efficient for power users – and intuitive for home users. Data extraction sounds really complicated: Will this website break? What if JavaScript rendering or geotargeting is required? What about anti-scraping systems like reCAPTCHA and Cloudflare? Scraper API solves all of these problems – and finally makes web scraping simple.

  • Built for scalability: our large pool of residential proxies suits any large-scale project.
  • Free 24/7 support: our specialists are ready to troubleshoot any of your technical problems.
  • Easy to use: we handle the technical side of the scraping workflow (e.g. proxy management) and save you time.
  • Reliable and stable: we’ve designed this product with performance and connection stability in mind.
API Mode
Bash
curl "“http://api.infatica.io/?api_key=APIKEY&url=URL”             
Node
const request = require('request-promise');
  
request('http://api.infatica.io/?api_key=APIKEY&url=http://httpbin.org/ip')
   .then(response => {
      console.log(response)
   })
   .catch(error => {
      console.log(error)
   })                                  
PHP
<?php
$url = "http://api.infatica.io?api_key=APIKEY&url=http://httpbin.org/ip";
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
curl_setopt($ch, CURLOPT_HEADER, FALSE);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
$response = curl_exec($ch);
curl_close($ch);
print_r($response);
Python
import requests

payload = {'api_key': 'APIKEY', 'url': 'https://httpbin.org/ip'}
r = requests.get('http://api.infatica.io', params=payload)
print(r.text)

# Scrapy users can simply replace the URLs in their start_urls and parse function
# ...other scrapy setup code
# 'url' below is the target page you want to scrape
start_urls = ['http://api.infatica.io/?api_key=APIKEY&url=' + url]

def parse(self, response):
    # ...your parsing logic here
    yield scrapy.Request('http://api.infatica.io/?api_key=APIKEY&url=' + url, self.parse)
Ruby
require 'net/http'
require 'json'
params = {
  :api_key => "APIKEY",
  :url => "http://httpbin.org/ip"
}
uri = URI('http://api.infatica.io/')
uri.query = URI.encode_www_form(params)
website_content = Net::HTTP.get(uri)
print(website_content)          
Java
try {                     
   String apiKey = "APIKEY";
   String url = "http://api.infatica.io?api_key=" + apiKey + "&url=http://httpbin.org/ip";
   URL urlForGetRequest = new URL(url);
   String readLine = null;
   HttpURLConnection connection = (HttpURLConnection) urlForGetRequest.openConnection();
   connection.setRequestMethod("GET");
   int responseCode = connection.getResponseCode();
   if (responseCode == HttpURLConnection.HTTP_OK) {
      BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream()));
      StringBuffer response = new StringBuffer();
      while ((readLine = in.readLine()) != null) {
         response.append(readLine);
      }
      in.close();
      System.out.println(response.toString());
   } else {
      throw new Exception("Error in API Call");
   }
} catch (Exception ex) {
   ex.printStackTrace();
} 
Proxy Mode
Bash
curl -x "http://infatica:APIKEY@proxy-server.infatica.io:8001" -k "http://httpbin.org/ip"                  
Node
const request = require('request-promise');

const options = {
   method: 'GET',
   url: 'http://httpbin.org/ip',
   proxy: 'http://infatica:APIKEY@proxy-server.infatica.io:8001'
}
request(options)
   .then(response => {
      console.log(response)
   })
   .catch(error => {
      console.log(error)
   })  
PHP
<?php
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://httpbin.org/ip");
curl_setopt($ch, CURLOPT_PROXY, "http://infatica:APIKEY@proxy-server.infatica.io:8001");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
curl_setopt($ch, CURLOPT_HEADER, FALSE);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
$response = curl_exec($ch);
curl_close($ch);
var_dump($response);
Python
import requests
proxies = {
   "http": "http://infatica:APIKEY@proxy-server.infatica.io:8001"
}
r = requests.get('http://httpbin.org/ip', proxies=proxies, verify=False)
print(r.text)
                                       
# Scrapy users can likewise route requests through the proxy via the request meta.
# NB: Scrapy skips SSL verification by default.
# ...other scrapy setup code
start_urls = ['http://httpbin.org/ip']
meta = {
   "proxy": "http://infatica:APIKEY@proxy-server.infatica.io:8001"
}

def parse(self, response):
    # ...your parsing logic here
    yield scrapy.Request(url, callback=self.parse, meta=meta)
Ruby
require 'httparty'
HTTParty::Basement.default_options.update(verify: false)
response = HTTParty.get('http://httpbin.org/ip', {
   http_proxyaddr: "proxy-server.infatica.io",
   http_proxyport: "8001",
   http_proxyuser: "infatica",
   http_proxypass: "APIKEY"
})
results = response.body
puts results         
Java
try {
   String apiKey = "APIKEY";
   URL server = new URL("http://httpbin.org/ip");
   // Point the JVM at the proxy server; host and port are set separately
   Properties systemProperties = System.getProperties();
   systemProperties.setProperty("http.proxyHost", "proxy-server.infatica.io");
   systemProperties.setProperty("http.proxyPort", "8001");
   // Supply the proxy credentials (username "infatica", password = API key);
   // Authenticator and PasswordAuthentication come from java.net
   Authenticator.setDefault(new Authenticator() {
      protected PasswordAuthentication getPasswordAuthentication() {
         return new PasswordAuthentication("infatica", apiKey.toCharArray());
      }
   });
   HttpURLConnection httpURLConnection = (HttpURLConnection) server.openConnection();
   httpURLConnection.connect();
   String readLine = null;
   int responseCode = httpURLConnection.getResponseCode();
   if (responseCode == HttpURLConnection.HTTP_OK) {
      BufferedReader in = new BufferedReader(new InputStreamReader(httpURLConnection.getInputStream()));
      StringBuffer response = new StringBuffer();
      while ((readLine = in.readLine()) != null) {
         response.append(readLine);
      }
      in.close();
      System.out.println(response.toString());
   } else {
      throw new Exception("Error in API Call");
   }
} catch (Exception ex) {
   ex.printStackTrace();
}

Extract Data From Dynamic Websites

Dynamic content is the backbone of modern tech platforms: real-time price changes, product updates, messaging, efficient pagination, and much more. The constant flow of this data is enabled by web browsers’ JavaScript rendering capabilities – but this code can be problematic for parsers to process correctly.

Infatica’s Scraper API addresses this issue with its robust rendering engine, which features full JavaScript rendering, Ajax support, and pagination handlers – the latter let you parse both single-page and multi-page websites and their components. Working together, these features enable you to scrape any popular website without missing a single data point.
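As a minimal sketch (assuming a hypothetical render parameter – check the API documentation for the exact name), requesting a JavaScript-heavy page could look like this:

Python
import requests

# Hypothetical sketch: the 'render' flag is an assumed parameter name,
# not a confirmed one - consult the Scraper API docs for the real flag.
payload = {
    'api_key': 'APIKEY',
    'url': 'https://example.com/js-heavy-page',  # placeholder target
    'render': 'true',  # ask the engine to execute JavaScript before returning HTML
}
r = requests.get('http://api.infatica.io/', params=payload)
print(r.text)  # rendered HTML, including content injected by JS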

Export Your Data In CSV, XLSX, and JSON

Scraping data is half the job done – now you need a proper method of processing it. File formats are used to organize data in a machine-readable way, allowing human users to view and edit them easily. Popular response formats include CSV and XLSX for arranging tabular data (e.g. as Excel spreadsheets), and JSON for organizing data in web applications.

Infatica’s Scraper API supports all of these response formats, providing you full control over your data organization workflow. Export data in XLSX spreadsheets and CSV files to analyze in Excel or use JSON for easy API and webhook access.
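For illustration, if the API returns JSON, a few lines of pandas (our own helper choice here, not part of the API) can turn the response into CSV or XLSX files:

Python
import requests
import pandas as pd  # third-party library used only for the conversion

# Fetch data via the API (APIKEY and the target URL are placeholders)
payload = {'api_key': 'APIKEY', 'url': 'https://httpbin.org/json'}
r = requests.get('http://api.infatica.io/', params=payload)

# Flatten the JSON payload into a table...
df = pd.json_normalize(r.json())

# ...and export it in the formats mentioned above
df.to_csv('data.csv', index=False)
df.to_excel('data.xlsx', index=False)  # requires the openpyxl package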

Get Structured Data Fast – Without Roadblocks

Thousands of companies are investing their resources into web data extraction – and data owners respond by adopting anti-scraping systems like reCAPTCHA and Cloudflare. These security measures are designed to distinguish between real users and web scrapers, which they attempt to do by analyzing a set of factors. One of these factors is the IP address: If it raises suspicion, using a web scraper becomes much harder due to regular IP bans.

Scraper API solves this problem using Infatica’s own residential proxy network, which makes the requests your crawlers send appear human-like – and this helps to avoid triggering CAPTCHAs and IP address bans. Thanks to Infatica’s proxies and reliable server infrastructure, Scraper API achieves a high request success rate, low response time, maximum uptime, and best performance.
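In practice, resilience also comes from retrying failed requests while the network rotates IPs behind the scenes. Here is a sketch in Python; the retry policy is our own illustration, not a prescribed setting:

Python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Route traffic through the proxy endpoint; IP rotation happens server-side
proxies = {"http": "http://infatica:APIKEY@proxy-server.infatica.io:8001"}

# Retry transient failures a few times; these numbers are illustrative
retries = Retry(total=3, backoff_factor=1, status_forcelist=[429, 500, 502, 503])
session = requests.Session()
session.mount("http://", HTTPAdapter(max_retries=retries))

r = session.get("http://httpbin.org/ip", proxies=proxies)
print(r.text)  # each retried attempt may exit from a different residential IP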


Features of our advanced data collection suite

Millions of proxies & IPs

Infatica Scraper utilizes our own network of residential IP addresses across dozens of global ISPs, supporting real devices, smart retries, and proxy rotation.

100+ global locations

Choose from 100+ global locations via powerful geotargeting to send your web scraping API requests from – or simply use random geo-targets from a set of major cities all across the globe.
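A sketch of what a geotargeted request might look like, assuming a hypothetical country_code parameter (the actual parameter name may differ – see the API documentation):

Python
import requests

# Hypothetical sketch: 'country_code' is an assumed parameter name
payload = {
    'api_key': 'APIKEY',
    'url': 'https://httpbin.org/ip',
    'country_code': 'de',  # e.g. route the request through a German exit node
}
r = requests.get('http://api.infatica.io/', params=payload)
print(r.text)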

Robust infrastructure

Scrape the web at scale with unparalleled speed and enjoy advanced features like concurrent API requests, CAPTCHA solving, browser support, and JS rendering.
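Concurrency can also be driven from the client side; here is a sketch using Python's standard thread pool, keeping the worker count within your plan's thread allowance:

Python
import requests
from concurrent.futures import ThreadPoolExecutor

API = 'http://api.infatica.io/'
URLS = ['http://httpbin.org/ip', 'http://httpbin.org/headers']  # placeholder targets

def fetch(url):
    # Each worker sends one independent API request
    r = requests.get(API, params={'api_key': 'APIKEY', 'url': url})
    return url, r.status_code

# Keep max_workers within the thread limit of your pricing plan (e.g. 10)
with ThreadPoolExecutor(max_workers=10) as pool:
    for url, status in pool.map(fetch, URLS):
        print(url, status)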

Free and premium options

Are you here to test the API without any commitments? Sign up for our Free Plan. If you ever need more advanced access, premium pricing plans start at $19.99 per month.

What Customers Say about us

Our residential proxy users come from different backgrounds and create both small- and large-scale projects, utilizing millions of real IPs. Whatever your project may be, we would be pleased to have you as our client.

“I tried many providers of mobile proxies and chose Infatica as my favorite: the price is adequate, the reviews are good, and they performed well during the test period. I recommend them for cooperation.”

Explore More Reviews
Aug 10, 2022

I use proxies mainly for parsing

I use proxies mainly for parsing site positions and collecting large amounts of information from sites for analytics. I needed fast residential proxies, so I ordered them from Infatica. I have used many other services before, but I liked Infatica more for its interface and fast proxies. It is evident that they are doing their best to improve their services. Yes, there can be rough moments, but they are quickly solved with technical support. I can safely recommend them.

Stephanie K.
Jul 26, 2022

Affordability and quality

Speaking of prices, everything is affordable – it all depends on which plan you choose. I think this is an ideal and safe option for work. Quality is at a high level. I can recommend Infatica for its stability and speed. All the problems I had were solved with the help of technical support. Special thanks to the manager Alina for her help with paying for the plan (the problems were on my side).

Peyton Jackie
Jul 21, 2022

This proxy provider is a reliable one

Hello to all lovers of quality service! Two months ago, I had a chance to get a proxy here for myself. In general, I was satisfied! Technical support answered all my questions. I was very pleased with the proxy itself; everything worked without any interruptions. It fits my needs perfectly. Mainly I use it for parsing Google, Amazon, YouTube, etc. It's great for use with A-Parser. Infatica also has a new Scraping API; I haven't tried it out yet, but I will and update my review in the future.

Loyd
Jul 21, 2022

Nothing is hidden, proxies are affordable

The first thing that attracted me is the site. I have never used anything like this before, and it helped me to put things in order. I chose their Mobile Proxy service and I like it! Time will show if there are any minuses.

Raelene
May 2, 2022

Infatica keeps us excited with their continuous updates and add-ons.

If you want to get both residential and mobile proxy service, I believe, Infatica is the place to visit. Why? Because it has the best ROI and will give you a chance to anonymously and authentically gather info about the content and strategy from other marketers. We use Infatica to view PPC assets and study final destination URLs. We've also employed Infatica's proxies to help us by keeping bots away.

Vito Tskipurishvili
Apr 13, 2022

I tried many providers of mobile…

I tried many providers of mobile proxies and chose Infatica as my favorite: the price is adequate, the reviews are good, and they performed well during the test period. I recommend them for cooperation.

Nora Fay
Mar 12, 2022

We use proxies for marketing research

We use proxies for marketing research. Infatica proxies solve our needs in full. Thank you

Adele Osborne
Feb 4, 2022

After 5 months of using their proxies

After 5 months of using their proxies, I have not noticed any deterioration in success rates. Great product at the moment

Rin
Nov 22, 2021

Very good scraping success rates

Very good scraping success rates, including when we crawl social media. Quick replies from technical support

Silas Wegner
Rated 4.7 / 5 by Trustpilot users.

Use Scraper by yourself

Get Free Trial
Willing to be in charge of the process? No problem! Use Scraper to get the best results.
$25 /month

Small Project

Access to premium proxies and up to 250k monthly requests.

API Credits - 250K
  • JS Rendering - Yes
  • JSON parsing - Yes
  • Built-in residential proxy - Yes
  • US & EU Geotargeting - Yes
  • Threads - 10
  • Ticket support - Yes
Get Started
$90 /month

Medium Project

Access to premium proxies and up to 1 million monthly requests.

API Credits - 1M
  • JS Rendering - Yes
  • JSON parsing - Yes
  • Built-in residential proxy - Yes
  • US & EU Geotargeting - Yes
  • Threads - 50
  • Ticket support - Yes
Get Started
from $1000 /month

Enterprise

Enterprise level – everything we have, with a custom monthly request volume.

API Credits - Custom
  • JS Rendering - Yes
  • JSON parsing - Yes
  • Built-in residential proxy - Yes
  • US & EU Geotargeting - Yes
  • Threads - Custom
  • Ticket support - Yes

Commitment-Free Trial

Scraping solutions come in all shapes and sizes, so you may encounter a myriad of tools when searching for your next web scraper. We believe Infatica has the most to offer: your user experience isn't limited to technical factors like modular selector systems – it also includes 24/7 support, powerful geotargeting, ease of use, and more. Start a commitment-free trial to try these benefits for yourself.

Trial request count: 5,000 requests. Trial duration: 7 days.

1
Share your contact details with us.
2
We’ll send you a trial account login.
3
Try Infatica. Pay only if you stay.
Start your Free Trial

Frequently Asked Questions

  • Is web scraping legal? Generally, yes: As of 2022, intellectual property laws do not explicitly prohibit web scraping. A recent decision of the US Supreme Court states: If a website provides publicly available data and doesn’t require authorization, accessing this data is legal.
  • What is web scraping? Web scraping means automated collection of website data. The keyword here is automated: Although you can save web data manually, specialized software (e.g. scrapers and crawlers) enables this process to scale across thousands of websites – and this software can run efficiently even on a regular home computer.
  • What can scraped data be used for? Upon collecting data, you can analyze it to explain trends and make educated guesses. Good examples of such products include price aggregation platforms, e-commerce businesses, search engine optimization services, fraud protection software, and more.
  • How do you scrape a website? The simplest method is using software with a point-and-click interface: You click on the given website’s element (e.g. a table) and the program saves its data. Power users create more advanced scrapers that use the browser to read the website’s code, providing more control over their web scraping workflow.
  • Which programming language is best for web scraping? In data collection, Python is arguably the fan favorite thanks to its wide range of pre-made libraries for networking and file operations. Still, other languages (e.g. JavaScript) have web scraping utilities of their own, so choosing something other than Python shouldn’t present any problems.
  • What is a headless browser? Home versions of Google Chrome or Microsoft’s Edge browser aren’t suitable for scraping, so specialized versions are used instead. They are called headless browsers because they lack the graphical interface that we normally use to browse websites. Popular examples include Headless Chrome, Headless Firefox, and PhantomJS.
  • Is it legal to scrape Google? In general, intellectual property laws do not consider scraping platforms like Google or Amazon to be illegal. Google’s Terms of Service, however, prohibit automated access; the consequences of breaking the ToS may include IP blocks (making the web scraper’s job harder), but Google hasn’t actually sued any company for scraping its data.
  • Can you get sued for web scraping? Yes, but legal action isn’t the likely outcome in most web scraping scenarios. For this to happen, you would have to extract data from a website and republish it as-is. Conversely, transforming this data in a meaningful way (e.g. to create a price aggregator) falls under the fair use doctrine and is OK.
  • How much does web scraping cost? This largely depends on the scale of operation – a simple scraper for a small project will cost less. If you don’t want to run the web scraper from your home computer, virtual machines are available for rent starting at just a few dollars per month. Additionally, you will need proxies to protect the data miner’s requests: Their pricing starts at $3-4 per GB.
  • Is it legal to scrape Amazon? Yes, but there are some caveats. Amazon’s data is public, so accessing and collecting it using data miners is legal. To keep a pipeline that involves Amazon’s data legal, you need to transform it so that it offers a new perspective – a good example is a price monitoring website.
  • What is proxy rotation? Proxy rotation is a feature of Infatica’s proxy network: It monitors the entire pool of IP addresses and detects whether a given address has been blocked by the target website. If this happens, the blocked IP is replaced with a new one, keeping the scraping pipeline uninterrupted and making web data extraction quicker.
  • What is Scraper API? It is a powerful scraper that allows you to crawl various websites at a large scale, in real time. As a tool for professional data collection, Scraper API makes web scraping easier by automating different processes like bulk scrape jobs, scheduled scrapes, custom extraction rules, and more.
  • What is the best user agent for scraping? There is hardly a single “best” user agent – you only need one that isn’t deemed suspicious by the target server. The most common user agents for web scraping include combinations of Chrome 101.0 + Windows 10 (9.9% of users), Firefox 100.0 + Windows 10 (8.1% of users), and Chrome 101.0 + macOS (5.1% of users); see the sketch after this list for an example of setting one.
  • Can Scraper API scrape any website? Technically, yes – as long as the data is actually public and isn’t locked behind an authorization gateway. Thanks to Scraper API’s JavaScript rendering capabilities, you can extract data from any popular website – search engines, ecommerce platforms, knowledge bases, forums, newspapers, file archives, social media platforms, aggregators, and more.
  • Do web scraping APIs cost money? In most cases, yes. However, some APIs feature free and premium options, with the latter typically lifting the platform’s scraping restrictions. More importantly, large-scale scraping projects come with associated costs: renting a virtual machine and purchasing proxies are a must – without these upgrades, the scraping pipeline may be inefficient.
  • Which export formats does Infatica Scraper support? Infatica Scraper offers a set of response formats for exporting and organizing scraped data, including JSON and HTML. You can use them to arrange data in a tabular manner or feed them to your web application via an API.
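As promised above, here is a brief illustration of sending one of those common user agents with Python's requests library (the UA string below is a typical Chrome 101 on Windows 10 signature):

Python
import requests

# A common Chrome 101 + Windows 10 user-agent string (one of the combinations above)
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/101.0.4951.41 Safari/537.36"
}
r = requests.get("http://httpbin.org/headers", headers=headers)
print(r.text)  # echoes the headers the target server received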