truercm / laravel-webscrape
Scrape web pages within a Laravel application
Requires
- php: ^8.0
- dbrekelmans/bdi: ^1.2
- frictionlessdigital/actions: ^9.0|^10.0|^11.0
- illuminate/contracts: ^8.0|^9.0|^10.0|^11.0
- spatie/laravel-package-tools: ^1.12
- symfony/browser-kit: ^6.0|^7.0
- symfony/http-client: ^6.0|^7.0
- symfony/panther: ^2.1
Requires (Dev)
- dg/bypass-finals: ^1.7
- nunomaduro/collision: ^5.0|^6.0|^7.0|^8.0
- orchestra/testbench: ^6.0|^7.0|^8.0|^9.0
- pestphp/pest: ^1.0|^2.0
- phpspec/prophecy: ~1.0
README
Webscrape
Scrape web pages with a Laravel application.
Installation
You can install the package via composer:
composer require truercm/laravel-webscrape
You can publish and run the migrations with:
php artisan vendor:publish --tag="laravel-webscrape-migrations"
php artisan migrate
You can publish the config file with:
php artisan vendor:publish --tag="laravel-webscrape-config"
This is the contents of the published config file:
```php
return [
    /*
    |--------------------------------------------------------------------------
    | Webscrape models
    |--------------------------------------------------------------------------
    */
    'models' => [
        /*
        |--------------------------------------------------------------------------
        | Subject model holds the credentials, target_id and the final scraping result
        |--------------------------------------------------------------------------
        */
        'subject' => TrueRcm\LaravelWebscrape\Models\CrawlSubject::class,

        /*
        |--------------------------------------------------------------------------
        | Target model stores the remote target, authentication url and processing job
        |--------------------------------------------------------------------------
        */
        'target' => TrueRcm\LaravelWebscrape\Models\CrawlTarget::class,

        /*
        |--------------------------------------------------------------------------
        | TargetUrl model collects all URLs for the Target
        |--------------------------------------------------------------------------
        */
        'target_url' => TrueRcm\LaravelWebscrape\Models\CrawlTargetUrl::class,

        /*
        |--------------------------------------------------------------------------
        | Url Result model stores processed results
        |--------------------------------------------------------------------------
        */
        'result' => TrueRcm\LaravelWebscrape\Models\CrawlResult::class,
    ],

    /*
    |--------------------------------------------------------------------------
    | Selenium driver url
    |--------------------------------------------------------------------------
    */
    'selenium_driver_url' => env('SELENIUM_DRIVER_URL', null),
];
```
Laravel Webscrape uses Selenium to crawl the pages, so make sure you have a browser driver or Selenium server available.
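Since `dbrekelmans/bdi` is already among the package's dependencies, one way to obtain a matching browser driver is to run it locally (the `drivers` output directory here is an arbitrary choice):

```bash
# Detect the locally installed browser and download a matching
# driver binary into the drivers/ directory.
vendor/bin/bdi detect drivers
```

Alternatively, point the package at a running Selenium server via the config's `SELENIUM_DRIVER_URL` environment variable (the URL below is just an example of a typical local Selenium address):

```bash
SELENIUM_DRIVER_URL=http://localhost:4444/wd/hub
```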
Usage
This is a generic package; you need to implement the crawling steps yourself.
The high concept overview involves:
- A CrawlTarget model, containing the entry point and the list of pages you need to crawl
- A CrawlSubject model, connecting the credentials with the crawl target
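For illustration, registering a target might look like the sketch below. The column names used here (`name`, `auth_url`, `crawling_job`, `url`) are assumptions inferred from the config notes above, not the package's documented schema, so check the published migrations for the real column names:

```php
use TrueRcm\LaravelWebscrape\Models\CrawlTarget;
use TrueRcm\LaravelWebscrape\Models\CrawlTargetUrl;

// Hypothetical field names -- verify against the package's migrations.
$target = CrawlTarget::create([
    'name' => 'example-portal',
    'auth_url' => 'https://portal.example.com/login',
    'crawling_job' => App\Jobs\CrawlExamplePortal::class,
]);

// Each URL the crawler should visit for this target.
CrawlTargetUrl::create([
    'crawl_target_id' => $target->id,
    'url' => 'https://portal.example.com/profile',
]);
```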
Once you have registered a target, you can:
- Initialize a subject with credentials and target URLs
- Start crawling the remote URLs and processing the results
```php
$crawlSubject = \TrueRcm\LaravelWebscrape\Actions\StoreCrawlSubject::run([
    'model_type' => App\Models\User::class,
    'model_id' => 1,
    'crawl_target_id' => 1,
    'credentials' => ['values' => 'that would be piped', 'into' => 'crawl target'],
]);
```
and from here:
```php
resolve($crawlSubject->crawlTarget->crawling_job)
    ->dispatch($crawlSubject);
```
- After the job finishes, the final result is available in the CrawlSubject's `result` column:

```php
$crawlSubject->result;
```
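The package leaves the parsing of each crawled page to your own processing job. As a plain-PHP illustration (this is not the package's API, just one way to extract data from fetched HTML), a handler could use PHP's built-in `DOMDocument`:

```php
<?php

// Illustrative helper: pull the <title> text out of a crawled page's HTML.
// A real processing job would extract whatever fields your application needs.
function extractTitle(string $html): string
{
    $document = new DOMDocument();

    // Suppress warnings from imperfect real-world markup.
    @$document->loadHTML($html);

    $titles = $document->getElementsByTagName('title');

    return $titles->length > 0 ? trim($titles->item(0)->textContent) : '';
}

$html = '<html><head><title>Provider Profile</title></head><body></body></html>';

echo extractTitle($html); // "Provider Profile"
```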
Testing
composer test
Changelog
Please see CHANGELOG for more information on what has changed recently.
Contributing
Please see CONTRIBUTING for details.
Security Vulnerabilities
Please review our security policy on how to report security vulnerabilities.
Credits
License
The MIT License (MIT). Please see License File for more information.