laoqianjunzi / phpquery
phpQuery is a server-side, open-source PHP project that lets PHP developers easily work with DOM document content. More interestingly, it adopts the ideas of jQuery, so you can process page content and extract the information you want just as you would with jQuery.
1.0
2023-10-11 05:04 UTC
Requires
- php: >=5.3.0
This package is not auto-updated.
Last update: 2024-10-05 04:42:40 UTC
README
Spider
Spider is a simple, elegant, extensible PHP web scraper, based on phpQuery.
Features
- The same CSS3 DOM selectors as jQuery
- The same DOM manipulation API as jQuery
- A generic list-crawling program
- A strong HTTP request suite that makes complex network requests easy to implement, such as simulated login, browser spoofing, and HTTP proxies
- A built-in solution for garbled text caused by encoding issues
- Powerful content filtering; you can use jQuery selectors to filter content
- A highly modular design with strong extensibility
- An expressive API
- A wealth of plug-ins
Through plug-ins you can easily implement things like:
- Multithreaded crawling
- Crawling JavaScript-rendered pages (PhantomJS/headless WebKit)
- Downloading images to local storage
- Simulating browser behavior, such as submitting forms
- Web crawlers
- .....
Requirements
- PHP >= 7.1
Installation
Install via Composer:
composer require laoqianjunzi/spider
Usage
DOM Traversal and Manipulation
- Crawl all image links on GitHub
Spider::get('https://github.com')->find('img')->attrs('src');
- Crawl Google search results
$ql = Spider::get('https://www.google.co.jp/search?q=Spider');
$ql->find('title')->text(); //The page title
$ql->find('meta[name=keywords]')->content; //The page keywords
$ql->find('h3>a')->texts(); //Get a list of search results titles
$ql->find('h3>a')->attrs('href'); //Get a list of search results links
$ql->find('img')->src; //Gets the link address of the first image
$ql->find('img:eq(1)')->src; //Gets the link address of the second image
$ql->find('img')->eq(2)->src; //Gets the link address of the third image
// Loop all the images
$ql->find('img')->map(function($img){
    echo $img->alt; //Print the alt attribute of the image
});
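Under the hood these selector calls operate on PHP's DOM layer via phpQuery. As a rough, library-free illustration of what `->find('img')->attrs('src')` produces, here is a sketch using only the core DOM extension on a hypothetical HTML fragment:

```php
<?php
// Plain-PHP sketch of what ->find('img')->attrs('src') does,
// using only the core DOM extension (no Spider/phpQuery).
$html = '<div><img src="/a.png" alt="A"><img src="/b.png" alt="B"></div>';

$doc = new DOMDocument();
$doc->loadHTML($html);

$srcs = [];
foreach ($doc->getElementsByTagName('img') as $img) {
    $srcs[] = $img->getAttribute('src');
}
// $srcs is now ['/a.png', '/b.png']
```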
- More usage
$ql->find('#head')->append('<div>Append content</div>')->find('div')->htmls();
$ql->find('.two')->children('img')->attrs('alt'); // Get the alt attribute of every img child of elements with class "two"
// Loop over all child nodes of elements with class "two"
$data = $ql->find('.two')->children()->map(function ($item){
    // Use "is" to determine the node type
    if($item->is('a')){
        return $item->text();
    }elseif($item->is('img')){
        return $item->alt;
    }
});
$ql->find('a')->attr('href', 'newVal')->removeClass('className')->html('newHtml')->...
$ql->find('div > p')->add('div > ul')->filter(':has(a)')->find('p:first')->nextAll()->andSelf()->...
$ql->find('div.old')->replaceWith( $ql->find('div.new')->clone())->appendTo('.trash')->prepend('Deleted')->...
List crawl
Crawl the title and link of the Google search results list:
$data = Spider::get('https://www.google.co.jp/search?q=Spider')
    // Set the crawl rules
    ->rules([
        'title' => ['h3','text'],
        'link' => ['h3>a','href']
    ])
    ->query()->getData();
print_r($data->all());
Results:
Array
(
[0] => Array
(
[title] => Angular - Spider
[link] => https://angular.io/api/core/Spider
)
[1] => Array
(
[title] => Spider | @angular/core - Angularリファレンス - Web Creative Park
[link] => http://www.webcreativepark.net/angular/Spider/
)
[2] => Array
(
[title] => SpiderにQueryを追加したり、追加されたことを感知する | TIPS ...
[link] => http://www.webcreativepark.net/angular/Spider_query_add_subscribe/
)
//...
)
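Each rule maps a field name to a `[selector, attribute]` pair, where the special attribute `text` yields the node text. A library-free sketch of the same title/link extraction, using core `DOMXPath` on a hypothetical fragment:

```php
<?php
// Sketch of the 'title' => ['h3', 'text'], 'link' => ['h3>a', 'href']
// rules using only core DOMDocument/DOMXPath (no Spider).
$html = '<div><h3><a href="https://example.com/a">First</a></h3>'
      . '<h3><a href="https://example.com/b">Second</a></h3></div>';

$doc = new DOMDocument();
$doc->loadHTML($html);
$xpath = new DOMXPath($doc);

$data = [];
foreach ($xpath->query('//h3') as $h3) {
    $link = $xpath->query('.//a', $h3)->item(0);
    $data[] = [
        'title' => trim($h3->textContent),          // the 'text' attribute
        'link'  => $link ? $link->getAttribute('href') : null,
    ];
}
```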
Encoding conversion
// Output charset: UTF-8
// Input charset: GB2312
Spider::get('https://top.etao.com')->encoding('UTF-8','GB2312')->find('a')->texts();
// Output charset: UTF-8
// Input charset: detected automatically
Spider::get('https://top.etao.com')->encoding('UTF-8')->find('a')->texts();
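`encoding($out, $in)` converts the fetched document from the input charset to the output charset before parsing. The underlying conversion can be sketched with plain `iconv` (assumed available); the sample string is hypothetical:

```php
<?php
// Sketch of the charset conversion behind ->encoding('UTF-8', 'GB2312'),
// using the iconv extension directly (no Spider).
$utf8 = '淘宝热卖';                       // page text as UTF-8
$gb   = iconv('UTF-8', 'GB2312', $utf8);  // simulate a GB2312-encoded response body
$back = iconv('GB2312', 'UTF-8', $gb);    // convert it back to UTF-8 for parsing
```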
HTTP Client (GuzzleHttp)
Log in to GitHub with a cookie
// Crawl GitHub content
$ql = Spider::get('https://github.com','param1=testvalue & params2=somevalue',[
    'headers' => [
        // Fill in the cookie from the browser
        'Cookie' => 'SINAGLOBAL=546064; wb_cmtLike_2112031=1; wvr=6;....'
    ]
]);
// echo $ql->getHtml();
$userName = $ql->find('.header-nav-current-user>.css-truncate-target')->text();
echo $userName;
Simulated login
// Post login
$ql = Spider::post('http://xxxx.com/login',[
    'username' => 'admin',
    'password' => '123456'
])->get('http://xxx.com/admin');
// Crawl pages that require login to access
$ql->get('http://xxx.com/admin/page');
// echo $ql->getHtml();
Submit forms
Login GitHub
// Get the Spider instance
$ql = Spider::getInstance();
// Get the login form
$form = $ql->get('https://github.com/login')->find('form');
// Fill in the GitHub username and password
$form->find('input[name=login]')->val('your github username or email');
$form->find('input[name=password]')->val('your github password');
// Serialize the form data
$formData = $form->serializeArray();
$postData = [];
foreach ($formData as $item) {
    $postData[$item['name']] = $item['value'];
}
// Submit the login form
$actionUrl = 'https://github.com'.$form->attr('action');
$ql->post($actionUrl,$postData);
// To determine whether the login is successful
// echo $ql->getHtml();
$userName = $ql->find('.header-nav-current-user>.css-truncate-target')->text();
if ($userName) {
    echo 'Login successful! Welcome: '.$userName;
} else {
    echo 'Login failed!';
}
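The `foreach` loop above only reshapes `serializeArray()`'s list of name/value pairs into the map that `post()` expects. In plain PHP the same reshaping can be written with `array_column`; the sample data below is hypothetical:

```php
<?php
// serializeArray() returns a list of ['name' => ..., 'value' => ...] pairs;
// array_column collapses it into the name => value map that post() expects.
$formData = [
    ['name' => 'login',    'value' => 'your github username or email'],
    ['name' => 'password', 'value' => 'your github password'],
];

$postData = array_column($formData, 'value', 'name');
// ['login' => '...', 'password' => '...']
```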
Bind function extension
Bind a custom `myHttp` method:
$ql = Spider::getInstance();
//Bind a `myHttp` method to the Spider object
$ql->bind('myHttp',function ($url){
    // $this is the current Spider object
    $html = file_get_contents($url);
    $this->setHtml($html);
    return $this;
});
// Then call it by the bound name
$data = $ql->myHttp('https://toutiao.io')->find('h3 a')->texts();
print_r($data->all());
Or wrap it in a class, then bind:
$ql->bind('myHttp',function ($url){
    return new MyHttp($this,$url);
});
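A `bind()`/`__call` pair like the one used above can be implemented with a closure map and `Closure::bind`. The following is a minimal, self-contained sketch of that mechanism, not Spider's actual implementation:

```php
<?php
// Minimal sketch of a bind()/__call mechanism; an illustration only,
// not Spider's actual implementation.
class Bindable
{
    private $methods = [];
    private $html = '';

    public function bind($name, Closure $fn)
    {
        // Rebind $this inside the closure to this object.
        $this->methods[$name] = Closure::bind($fn, $this, static::class);
    }

    public function __call($name, $args)
    {
        if (!isset($this->methods[$name])) {
            throw new BadMethodCallException("Undefined method: {$name}");
        }
        return call_user_func_array($this->methods[$name], $args);
    }

    public function setHtml($html) { $this->html = $html; return $this; }
    public function getHtml() { return $this->html; }
}

$ql = new Bindable();
$ql->bind('myHttp', function ($url) {
    // $this is the Bindable instance, as in the Spider example above.
    return $this->setHtml("<html>fetched from {$url}</html>");
});
$html = $ql->myHttp('https://toutiao.io')->getHtml();
```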
- Use the cURL multithreading plug-in to crawl GitHub Trending with multiple threads:
$ql = Spider::use(CurlMulti::class);
$ql->curlMulti([
    'https://github.com/trending/php',
    'https://github.com/trending/go',
    //.....more urls
])
// Called when a task succeeds
->success(function (Spider $ql, CurlMulti $curl, $r){
    echo "Current url:{$r['info']['url']} \r\n";
    $data = $ql->find('h3 a')->texts();
    print_r($data->all());
})
// Called when a task fails
->error(function ($errorInfo, CurlMulti $curl){
    echo "Current url:{$errorInfo['info']['url']} \r\n";
    print_r($errorInfo['error']);
})
->start([
    // Maximum number of threads
    'maxThread' => 10,
    // Number of retries on error
    'maxTry' => 3,
]);