coooold/crawler

Most powerful, popular and production crawling/scraping package for PHP

v1.0.0 2019-12-11 02:05 UTC

This package is auto-updated.

Last update: 2024-04-11 14:13:55 UTC


README

The most powerful, popular, production-ready crawling/scraping package for PHP, happy hacking :)

Features:

  • Server-side DOM and automatic DomParser injection with Symfony\Component\DomCrawler
  • Configurable pool size and retries
  • Rate-limit control
  • forceUTF8 mode that handles charset detection and conversion for you
  • Compatible with PHP 7.2
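
The features above map onto constructor options. A sketch of an options array combining them (forceUTF8 is assumed to be a boolean flag; the other keys appear in the examples below):

```php
// Illustrative options array; forceUTF8 is an assumed boolean flag,
// the other keys are used in the examples throughout this README.
$options = [
    'maxConnections' => 5,    // pool size
    'retries'        => 3,    // retry failed requests up to 3 times
    'rateLimit'      => 2,    // requests per second
    'forceUTF8'      => true, // auto charset detection/conversion (assumed flag)
    'domParser'      => true, // inject Symfony DomCrawler into Response::dom
];
```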

Thanks to

  • Amp a non-blocking concurrency framework for PHP
  • Artax An Asynchronous HTTP Client for PHP
  • node-crawler Most powerful, popular and production crawling/scraping package for Node

node-crawler is really a great crawler, and PHPCrawler does its best to stay close to its API.

中文说明 (documentation in Chinese)


Get started

Install

$ composer require "coooold/crawler"

Basic usage

use PHPCrawler\PHPCrawler;
use PHPCrawler\Response;
use Symfony\Component\DomCrawler\Crawler;

$logger = new Monolog\Logger("fox");
try {
    $logger->pushHandler(new \Monolog\Handler\StreamHandler(STDOUT, \Monolog\Logger::INFO));
} catch (\Exception $e) {
    // StreamHandler's constructor is annotated as throwing; safe to ignore for STDOUT
}

$crawler = new PHPCrawler([
    'maxConnections' => 2,
    'domParser' => true,
    'timeout' => 3000,
    'retries' => 3,
    'logger' => $logger,
]);

$crawler->on('response', function (Response $res) {
    if (!$res->success) {
        return;
    }

    $title = $res->dom->filter("title")->html();
    echo ">>> title: {$title}\n";
    $res->dom
        ->filter('.related-item a')
        ->each(function (Crawler $crawler) {
            echo ">>> links: ", $crawler->text(), "\n";
        });
});

$crawler->queue('https://www.foxnews.com/');
$crawler->run();

Slow down

Use rateLimit to throttle your requests to the sites you are visiting.

$crawler = new PHPCrawler([
    'maxConnections' => 10,
    'rateLimit' => 2,   // reqs per second
    'domParser' => true,
    'timeout' => 30000,
    'retries' => 3,
    'logger' => $logger,
]);

for ($page = 1; $page <= 100; $page++) {
    $crawler->queue([
        'uri' => "http://www.qbaobei.com/jiaoyu/gshb/List_{$page}.html",
        'type' => 'list',
    ]);
}

$crawler->run(); // with rateLimit = 2, the average gap between two tasks is 1000 / 2 = 500 ms
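
The throttling math is simple; a small standalone helper (not part of PHPCrawler's API) makes the relationship explicit:

```php
// Average gap between two tasks, in milliseconds, for a given
// rateLimit expressed in requests per second.
// Illustrative helper only, not part of PHPCrawler.
function averageGapMs(int $rateLimit): int
{
    return intdiv(1000, $rateLimit);
}

echo averageGapMs(2), " ms\n"; // 500 ms
```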

Custom parameters

Sometimes you need to access variables from the previous request/response session. To do so, pass custom parameters alongside the options:

$crawler->queue([
    'uri' => 'http://www.google.com',
    'parameter1' => 'value1',
    'parameter2' => 'value2',
]);

Then access them in the callback via $res->task['parameter1'], $res->task['parameter2'], and so on.
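
The mechanism is plain array pass-through: every key you queue rides along and comes back on $res->task. A self-contained sketch with a stand-in object (the real class is PHPCrawler\Response; this fake exists only to illustrate the lookup):

```php
// Stand-in for PHPCrawler\Response, purely for illustration;
// the real class carries the queued task array on a public property.
class FakeResponse
{
    public $task;

    public function __construct(array $task)
    {
        $this->task = $task;
    }
}

$res = new FakeResponse([
    'uri' => 'http://www.google.com',
    'parameter1' => 'value1',
    'parameter2' => 'value2',
]);

echo $res->task['parameter1'], "\n"; // value1
```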

Raw body

If you are downloading files such as images, PDFs, or Word documents, you need to save the raw response body, which means the crawler should not convert it to a string. To make that happen, set encoding to null:

$crawler = new PHPCrawler([
    'maxConnections' => 10,
    'rateLimit' => 2,   // reqs per second
    'domParser' => false,
    'encoding' => null, // keep the raw bytes, no string conversion
    'timeout' => 30000,
    'retries' => 3,
    'logger' => $logger,
]);

$crawler->on('response', function (Response $res, PHPCrawler $crawler) {
    if (!$res->success) {
        return;
    }

    echo "write ".$res->task['fileName']."\n";
    file_put_contents($res->task['fileName'], $res->body);
});

$crawler->queue([
    'uri' => "http://www.gutenberg.org/ebooks/60881.txt.utf-8",
    'fileName' => '60881.txt',
]);

$crawler->queue([
    'uri' => "http://www.gutenberg.org/ebooks/60882.txt.utf-8",
    'fileName' => '60882.txt',
]);

$crawler->queue([
    'uri' => "http://www.gutenberg.org/ebooks/60883.txt.utf-8",
    'fileName' => '60883.txt',
]);

$crawler->run();

Events

Event::RESPONSE

Triggered when a request is done.

$crawler->on('response', function (Response $res, PHPCrawler $crawler) {
    if (!$res->success) {
        return;
    }
});

Event::DRAIN

Triggered when the queue is empty.

$crawler->on('drain', function () {
    echo "queue is drained\n";
});

Advanced

Encoding

The HTTP body will be converted to UTF-8 from the configured source encoding.

$crawler = new PHPCrawler([
    'encoding' => 'gbk',
]);
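
Internally this amounts to a charset transcode before the body reaches your callback. A minimal standalone sketch using PHP's built-in iconv (illustrative only, not PHPCrawler code):

```php
// "中" (U+4E2D) encoded in GBK is the two bytes 0xD6 0xD0.
$gbkBody = "\xD6\xD0";

// Transcode the GBK bytes to UTF-8, as the crawler does for the HTTP body.
$utf8Body = iconv('GBK', 'UTF-8', $gbkBody);

echo bin2hex($utf8Body), "\n"; // e4b8ad, the UTF-8 encoding of "中"
```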

Logger

Any PSR-3 logger instance can be used.

$logger = new Monolog\Logger("fox");
$logger->pushHandler(new \Monolog\Handler\StreamHandler(STDOUT, \Monolog\Logger::INFO));

$crawler = new PHPCrawler([
    'logger' => $logger,
]);

See Monolog Reference.

Coroutine

PHPCrawler is built on the Amp non-blocking concurrency framework and works with coroutines, ensuring excellent performance. Amp async packages should be used in callbacks; that is to say, neither the native PHP MySQL client nor native PHP file I/O is recommended. The yield keyword, much like await in JavaScript, introduces non-blocking I/O.

// $cli is assumed to be an Amp\Artax HTTP client created beforehand
$crawler->on('response', function (Response $res) use ($cli) {
    /** @var \Amp\Artax\Response $res */
    $res = yield $cli->request("https://www.foxnews.com/politics/lindsey-graham-adam-schiff-is-doing-a-lot-of-damage-to-the-country-and-he-needs-to-stop");
    $body = yield $res->getBody();
    echo "=======> body " . strlen($body) . " bytes \n";
});

Work with DomParser

Symfony\Component\DomCrawler is a handy tool for crawling pages. Response::dom will be injected with an instance of Symfony\Component\DomCrawler\Crawler.

$crawler->on('response', function (Response $res) {
    if (!$res->success) {
        return;
    }

    $title = $res->dom->filter("title")->html();
    echo ">>> title: {$title}\n";
    $res->dom
        ->filter('.related-item a')
        ->each(function (Crawler $crawler) {
            echo ">>> links: ", $crawler->text(), "\n";
        });
});

See DomCrawler Reference.

Other

  • API reference
  • Configuration