spatie / crawler
Crawl all internal links found on a website
Installs: 9 471 859
Dependents: 47
Suggesters: 1
Security: 0
Stars: 2 518
Watchers: 66
Forks: 359
Open Issues: 2
Requires
- php: ^8.1
- guzzlehttp/guzzle: ^7.3
- guzzlehttp/psr7: ^2.0
- illuminate/collections: ^10.0|^11.0
- nicmart/tree: ^0.8.0
- spatie/browsershot: ^3.45|^4.0
- spatie/robots-txt: ^2.0
- symfony/dom-crawler: ^6.0|^7.0
Requires (Dev)
- pestphp/pest: ^2.0
- spatie/ray: ^1.37
- dev-main
- 8.2.3
- 8.2.2
- 8.2.1
- 8.2.0
- 8.1.0
- 8.0.4
- 8.0.3
- 8.0.2
- 8.0.1
- 8.0.0
- v7.x-dev
- 7.1.3
- 7.1.2
- 7.1.1
- 7.1.0
- 7.0.5
- 7.0.4
- 7.0.3
- 7.0.2
- 7.0.1
- 7.0.0
- v6.x-dev
- 6.0.2
- 6.0.1
- 6.0.0
- v5.x-dev
- 5.0.2
- 5.0.1
- 5.0.0
- v4.x-dev
- 4.7.6
- 4.7.5
- 4.7.4
- 4.7.3
- 4.7.2
- 4.7.1
- 4.7.0
- 4.6.9
- 4.6.8
- 4.6.7
- 4.6.6
- 4.6.5
- 4.6.4
- 4.6.3
- 4.6.2
- 4.6.1
- 4.6.0
- 4.5.0
- 4.4.3
- 4.4.2
- 4.4.1
- 4.4.0
- 4.3.2
- 4.3.1
- 4.3.0
- 4.2.0
- 4.1.7
- 4.1.6
- 4.1.5
- 4.1.4
- 4.1.3
- 4.1.2
- 4.1.1
- 4.1.0
- 4.0.5
- 4.0.4
- 4.0.3
- 4.0.2
- 4.0.1
- 4.0.0
- 3.2.1
- 3.2.0
- 3.1.3
- 3.1.2
- 3.1.1
- 3.1.0
- 3.0.1
- 3.0.0
- v2.x-dev
- 2.7.1
- 2.7.0
- 2.6.2
- 2.6.1
- 2.6.0
- 2.5.0
- 2.4.0
- 2.3.0
- 2.2.1
- 2.2.0
- 2.1.2
- 2.1.1
- 2.1.0
- 2.0.7
- 2.0.6
- 2.0.5
- 2.0.4
- 2.0.3
- 2.0.2
- 2.0.1
- 2.0.0
- 1.3.1
- 1.3.0
- 1.2.3
- 1.2.2
- 1.2.1
- 1.2.0
- 1.1.1
- 1.1.0
- 1.0.2
- 1.0.1
- 1.0.0
- 0.0.1
This package is auto-updated.
Last update: 2024-10-01 00:06:53 UTC
README
This package provides a class to crawl links on a website. Under the hood, Guzzle promises are used to crawl multiple URLs concurrently.
Because the crawler can execute JavaScript, it can crawl JavaScript-rendered sites. Under the hood, Chrome and Puppeteer are used to power this feature.
Support us
We invest a lot of resources into creating best in class open source packages. You can support us by buying one of our paid products.
We highly appreciate you sending us a postcard from your hometown, mentioning which of our package(s) you are using. You'll find our address on our contact page. We publish all received postcards on our virtual postcard wall.
Installation
This package can be installed via Composer:
composer require spatie/crawler
Usage
The crawler can be instantiated like this:
use Spatie\Crawler\Crawler;

Crawler::create()
    ->setCrawlObserver(<class that extends \Spatie\Crawler\CrawlObservers\CrawlObserver>)
    ->startCrawling($url);
The argument passed to setCrawlObserver must be an object that extends the \Spatie\Crawler\CrawlObservers\CrawlObserver abstract class:
namespace Spatie\Crawler\CrawlObservers;

use GuzzleHttp\Exception\RequestException;
use Psr\Http\Message\ResponseInterface;
use Psr\Http\Message\UriInterface;

abstract class CrawlObserver
{
    /**
     * Called when the crawler will crawl the url.
     */
    public function willCrawl(UriInterface $url, ?string $linkText): void
    {
    }

    /**
     * Called when the crawler has crawled the given url successfully.
     */
    abstract public function crawled(
        UriInterface $url,
        ResponseInterface $response,
        ?UriInterface $foundOnUrl = null,
        ?string $linkText = null,
    ): void;

    /**
     * Called when the crawler had a problem crawling the given url.
     */
    abstract public function crawlFailed(
        UriInterface $url,
        RequestException $requestException,
        ?UriInterface $foundOnUrl = null,
        ?string $linkText = null,
    ): void;

    /**
     * Called when the crawl has ended.
     */
    public function finishedCrawling(): void
    {
    }
}
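For example, a minimal observer that simply writes each result to the console could look like the sketch below. The class name and the echo-based reporting are illustrative only; the package itself does not ship such an observer.

use GuzzleHttp\Exception\RequestException;
use Psr\Http\Message\ResponseInterface;
use Psr\Http\Message\UriInterface;
use Spatie\Crawler\CrawlObservers\CrawlObserver;

// Hypothetical observer for illustration; only the CrawlObserver base class comes from the package.
class LoggingCrawlObserver extends CrawlObserver
{
    public function crawled(
        UriInterface $url,
        ResponseInterface $response,
        ?UriInterface $foundOnUrl = null,
        ?string $linkText = null,
    ): void {
        // Report the status code of every successfully crawled page.
        echo "Crawled {$url}: {$response->getStatusCode()}" . PHP_EOL;
    }

    public function crawlFailed(
        UriInterface $url,
        RequestException $requestException,
        ?UriInterface $foundOnUrl = null,
        ?string $linkText = null,
    ): void {
        // Report pages that could not be crawled.
        echo "Failed {$url}: {$requestException->getMessage()}" . PHP_EOL;
    }
}

With such an observer in place, the usage example above becomes Crawler::create()->setCrawlObserver(new LoggingCrawlObserver())->startCrawling($url);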
Using multiple observers
You can set multiple observers with setCrawlObservers:
Crawler::create()
    ->setCrawlObservers([
        <class that extends \Spatie\Crawler\CrawlObservers\CrawlObserver>,
        <class that extends \Spatie\Crawler\CrawlObservers\CrawlObserver>,
        ...
    ])
    ->startCrawling($url);
Alternatively, you can set multiple observers one by one with addCrawlObserver:
Crawler::create()
    ->addCrawlObserver(<class that extends \Spatie\Crawler\CrawlObservers\CrawlObserver>)
    ->addCrawlObserver(<class that extends \Spatie\Crawler\CrawlObservers\CrawlObserver>)
    ->addCrawlObserver(<class that extends \Spatie\Crawler\CrawlObservers\CrawlObserver>)
    ->startCrawling($url);
Executing JavaScript
By default, the crawler will not execute JavaScript. This is how you can enable the execution of JavaScript:
Crawler::create()
    ->executeJavaScript()
    ...
In order to make it possible to get the body HTML after the JavaScript has been executed, this package depends on our Browsershot package. This package uses Puppeteer under the hood. Here are some pointers on how to install it on your system.
Browsershot will make an educated guess as to where its dependencies are installed on your system.
By default, the Crawler will instantiate a new Browsershot instance. You may need to pass a custom-created instance using the setBrowsershot(Browsershot $browsershot) method.
Crawler::create()
    ->setBrowsershot($browsershot)
    ->executeJavaScript()
    ...
Note that the crawler will still work even if you don't have the system dependencies required by Browsershot.
These system dependencies are only required if you're calling executeJavaScript().
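If Browsershot can't find your binaries on its own, you can point it at them before handing the instance to the crawler. A minimal sketch, assuming Browsershot's setNodeBinary() and setNpmBinary() setters and example binary paths (adjust them for your system):

use Spatie\Browsershot\Browsershot;
use Spatie\Crawler\Crawler;

// Example paths; point these at wherever node and npm live on your machine.
$browsershot = (new Browsershot())
    ->setNodeBinary('/usr/local/bin/node')
    ->setNpmBinary('/usr/local/bin/npm');

Crawler::create()
    ->setBrowsershot($browsershot)
    ->executeJavaScript()
    ->setCrawlObserver(<class that extends \Spatie\Crawler\CrawlObservers\CrawlObserver>)
    ->startCrawling($url);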
Filtering certain urls
You can tell the crawler not to visit certain urls by using the setCrawlProfile function. That function expects an object that extends Spatie\Crawler\CrawlProfiles\CrawlProfile:
/*
 * Determine if the given url should be crawled.
 */
public function shouldCrawl(UriInterface $url): bool;
This package comes with three CrawlProfiles out of the box:
- CrawlAllUrls: this profile will crawl all urls on all pages, including urls to an external site.
- CrawlInternalUrls: this profile will only crawl the internal urls on the pages of a host.
- CrawlSubdomains: this profile will only crawl the internal urls and its subdomains on the pages of a host.
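If none of these fit, a custom profile only has to answer shouldCrawl(). Here is a minimal sketch; the class name and the path rule are made up for illustration:

use Psr\Http\Message\UriInterface;
use Spatie\Crawler\CrawlProfiles\CrawlProfile;

// Hypothetical profile that skips every url under /admin.
class IgnoreAdminUrls extends CrawlProfile
{
    public function shouldCrawl(UriInterface $url): bool
    {
        return ! str_starts_with($url->getPath(), '/admin');
    }
}

Crawler::create()
    ->setCrawlProfile(new IgnoreAdminUrls())
    ->setCrawlObserver(<class that extends \Spatie\Crawler\CrawlObservers\CrawlObserver>)
    ->startCrawling($url);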
Custom link extraction
You can customize how links are extracted from a page by passing a custom UrlParser to the crawler.
Crawler::create()
    ->setUrlParserClass(<class that implements \Spatie\Crawler\UrlParsers\UrlParser>::class)
    ...
By default, the LinkUrlParser is used. This parser will extract all links from the href attribute of <a> tags.
There is also a built-in SitemapUrlParser that will extract & crawl all links from a sitemap. It does support sitemap index files.
Crawler::create()
    ->setUrlParserClass(SitemapUrlParser::class)
    ...
Ignoring robots.txt and robots meta
By default, the crawler will respect robots data. It is possible to disable these checks like so:
Crawler::create()
    ->ignoreRobots()
    ...
Robots data can come from either a robots.txt file, meta tags or response headers.
More information on the spec can be found here: http://www.robotstxt.org/.
Parsing robots data is done by our package spatie/robots-txt.
Accept links with rel="nofollow" attribute
By default, the crawler will reject all links containing the rel="nofollow" attribute. It is possible to disable these checks like so:
Crawler::create()
    ->acceptNofollowLinks()
    ...
Using a custom User Agent
In order to respect robots.txt rules for a custom User Agent, you can specify your own custom User Agent.
Crawler::create()
    ->setUserAgent('my-agent')
You can add your specific crawl rule group for 'my-agent' in robots.txt. This example disallows crawling the entire site for crawlers identified by 'my-agent'.
// Disallow crawling for my-agent
User-agent: my-agent
Disallow: /
Setting the number of concurrent requests
To improve the speed of the crawl, the package concurrently crawls 10 URLs by default. If you want to change that number, you can use the setConcurrency method.
Crawler::create()
    ->setConcurrency(1) // now all urls will be crawled one by one
Defining Crawl Limits
By default, the crawler continues until it has crawled every page it can find. This behavior might cause issues if you are working in an environment with limitations such as a serverless environment.
The crawl behavior can be controlled with the following two options:
- Total Crawl Limit (setTotalCrawlLimit): This limit defines the maximal count of URLs to crawl.
- Current Crawl Limit (setCurrentCrawlLimit): This defines how many URLs are processed during the current crawl.
Let's take a look at some examples to clarify the difference between these two methods.
Example 1: Using the total crawl limit
The setTotalCrawlLimit method allows you to limit the total number of URLs to crawl, no matter how often you call the crawler.
$queue = <your selection/implementation of a queue>;

// Crawls 5 URLs and ends.
Crawler::create()
    ->setCrawlQueue($queue)
    ->setTotalCrawlLimit(5)
    ->startCrawling($url);

// Doesn't crawl further as the total limit is reached.
Crawler::create()
    ->setCrawlQueue($queue)
    ->setTotalCrawlLimit(5)
    ->startCrawling($url);
Example 2: Using the current crawl limit
The setCurrentCrawlLimit method sets a limit on how many URLs will be crawled per execution. This piece of code will process 5 pages with each execution, without a total limit of pages to crawl.
$queue = <your selection/implementation of a queue>;

// Crawls 5 URLs and ends.
Crawler::create()
    ->setCrawlQueue($queue)
    ->setCurrentCrawlLimit(5)
    ->startCrawling($url);

// Crawls the next 5 URLs and ends.
Crawler::create()
    ->setCrawlQueue($queue)
    ->setCurrentCrawlLimit(5)
    ->startCrawling($url);
Example 3: Combining the total and current crawl limit
Both limits can be combined to control the crawler:
$queue = <your selection/implementation of a queue>;

// Crawls 5 URLs and ends.
Crawler::create()
    ->setCrawlQueue($queue)
    ->setTotalCrawlLimit(10)
    ->setCurrentCrawlLimit(5)
    ->startCrawling($url);

// Crawls the next 5 URLs and ends.
Crawler::create()
    ->setCrawlQueue($queue)
    ->setTotalCrawlLimit(10)
    ->setCurrentCrawlLimit(5)
    ->startCrawling($url);

// Doesn't crawl further as the total limit is reached.
Crawler::create()
    ->setCrawlQueue($queue)
    ->setTotalCrawlLimit(10)
    ->setCurrentCrawlLimit(5)
    ->startCrawling($url);
Example 4: Crawling across requests
You can use the setCurrentCrawlLimit to break up long-running crawls. The following example demonstrates a (simplified) approach. It's made up of an initial request and any number of follow-up requests continuing the crawl.
Initial Request
To start crawling across different requests, you will need to create a new queue with your selected queue driver. Start by passing the queue instance to the crawler. The crawler will start filling the queue as pages are processed and new URLs are discovered. Serialize and store the queue reference after the crawler has finished (using the current crawl limit).
// Create a queue using your queue-driver.
$queue = <your selection/implementation of a queue>;

// Crawl the first set of URLs
Crawler::create()
    ->setCrawlQueue($queue)
    ->setCurrentCrawlLimit(10)
    ->startCrawling($url);

// Serialize and store your queue
$serializedQueue = serialize($queue);
Subsequent Requests
For any following requests you will need to unserialize your original queue and pass it to the crawler:
// Unserialize queue
$queue = unserialize($serializedQueue);

// Crawls the next set of URLs
Crawler::create()
    ->setCrawlQueue($queue)
    ->setCurrentCrawlLimit(10)
    ->startCrawling($url);

// Serialize and store your queue
$serializedQueue = serialize($queue);
The behavior is based on the information in the queue. The limits only work as described if the same queue instance is passed in. When a completely new queue is passed in, the limits of previous crawls (even for the same website) won't apply.
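As a rough, self-contained sketch of the pattern: persist the serialized queue between requests and reuse it on the next run. The file-based storage and the assumption that the built-in ArrayCrawlQueue lives in the Spatie\Crawler\CrawlQueues namespace are illustrative choices, not requirements of the package.

use Spatie\Crawler\Crawler;
use Spatie\Crawler\CrawlQueues\ArrayCrawlQueue;

$queueFile = __DIR__ . '/crawl-queue.dat'; // example location for the serialized queue

// Continue with the stored queue from a previous run, or start a fresh one.
$queue = file_exists($queueFile)
    ? unserialize(file_get_contents($queueFile))
    : new ArrayCrawlQueue();

Crawler::create()
    ->setCrawlQueue($queue)
    ->setCurrentCrawlLimit(10)
    ->startCrawling($url);

// Persist the queue so the next request picks up where this one stopped.
file_put_contents($queueFile, serialize($queue));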
An example with more details can be found here.
Setting the maximum crawl depth
By default, the crawler continues until it has crawled every page of the supplied URL. If you want to limit the depth of the crawler, you can use the setMaximumDepth method.
Crawler::create()
    ->setMaximumDepth(2)
Setting the maximum response size
Most HTML pages are quite small. But the crawler could accidentally pick up large files such as PDFs and MP3s. To keep memory usage low in such cases, the crawler will only use the responses that are smaller than 2 MB. If, when streaming a response, it becomes larger than 2 MB, the crawler will stop streaming the response. An empty response body will be assumed.
You can change the maximum response size.
// let's use a 3 MB maximum.
Crawler::create()
    ->setMaximumResponseSize(1024 * 1024 * 3)
Add a delay between requests
In some cases you might get rate-limited when crawling too aggressively. To circumvent this, you can use the setDelayBetweenRequests() method to add a pause between every request. This value is expressed in milliseconds.
Crawler::create()
    ->setDelayBetweenRequests(150) // After every page crawled, the crawler will wait for 150ms
Limiting which content-types to parse
By default, every found page will be downloaded (up to setMaximumResponseSize() in size) and parsed for additional links. You can limit which content-types should be downloaded and parsed by calling setParseableMimeTypes() with an array of allowed types.
Crawler::create()
    ->setParseableMimeTypes(['text/html', 'text/plain'])
This will prevent the crawler from downloading the body of pages with other mime types (binary files, audio, video, ...) that are unlikely to have links embedded in them. This feature mostly saves bandwidth.
Using a custom crawl queue
When crawling a site, the crawler will put urls to be crawled in a queue. By default, this queue is stored in memory using the built-in ArrayCrawlQueue.
When a site is very large, you may want to store that queue elsewhere, maybe in a database. In such cases, you can write your own crawl queue.
A valid crawl queue is any class that implements the Spatie\Crawler\CrawlQueues\CrawlQueue interface. You can pass your custom crawl queue via the setCrawlQueue method on the crawler.
Crawler::create()
    ->setCrawlQueue(<implementation of \Spatie\Crawler\CrawlQueues\CrawlQueue>)
Here are some crawl queue implementations:
- ArrayCrawlQueue
- RedisCrawlQueue (third-party package)
- CacheCrawlQueue for Laravel (third-party package)
- Laravel Model as Queue (third-party example app)
Change the default base url scheme
By default, the crawler will set the base url scheme to http if none is set. You can change that with setDefaultScheme.
Crawler::create()
    ->setDefaultScheme('https')
Changelog
Please see CHANGELOG for more information on what has changed recently.
Contributing
Please see CONTRIBUTING for details.
Testing
First, install the Puppeteer dependency, or your tests will fail.
npm install puppeteer
To run the tests, you'll have to start the included node-based server first in a separate terminal window.
cd tests/server
npm install
node server.js
With the server running, you can start testing.
composer test
Security
If you've found a bug regarding security, please mail security@spatie.be instead of using the issue tracker.
Postcardware
You're free to use this package, but if it makes it to your production environment we highly appreciate you sending us a postcard from your hometown, mentioning which of our package(s) you are using.
Our address is: Spatie, Kruikstraat 22, 2018 Antwerp, Belgium.
We publish all received postcards on our company website.
Credits
License
The MIT License (MIT). Please see License File for more information.