Interactive logging via named pipes and sockets
LogPipe is a library and an application for monitoring PHP applications and Monolog logs in real time. It works by serializing the log data and sending the serialized blobs over one of the supported transports. The transport does not need to have a listener on the other end; LogPipe is designed to fail quietly without affecting your application. When you want insight into what is going on, fire up the dumper.
LogPipe is designed to have minimal impact on performance, and as such it will discard events if it encounters any issues while sending them.
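The fail-and-forget strategy boils down to wrapping every dispatch in a catch-all. A minimal sketch of the idea (the `dispatchEvent` function and its signature are illustrative, not LogPipe's actual API):

```php
<?php
// Illustrative sketch of fail-and-forget dispatch: any error while sending
// a log event is swallowed so the host application is never affected.
function dispatchEvent(callable $send, array $event): bool
{
    try {
        $send(serialize($event));   // serialize and hand off to the transport
        return true;
    } catch (\Throwable $e) {
        return false;               // discard the event, never rethrow
    }
}

// A transport that always fails does not disturb the caller:
$ok = dispatchEvent(function ($blob) {
    throw new \RuntimeException("no listener");
}, ["level" => "debug", "message" => "hello"]);
```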
To install into a project using composer:

```shell
$ composer require noccylabs/logpipe:@stable
```

To install globally, for use with shell scripts etc.:

```shell
$ composer global require noccylabs/logpipe:@stable
```
To use LogPipe with Monolog, push a `LogPipeHandler` onto your logger.
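Assuming the handler implements Monolog's handler interface, the wiring looks roughly like this (the constructor argument is taken from the Symfony service definition below; treat the exact signature as an assumption):

```php
<?php
require "vendor/autoload.php";

use Monolog\Logger;
use NoccyLabs\LogPipe\Handler\LogPipeHandler;

$logger = new Logger("app");

// Push the LogPipe handler onto the logger. The constructor argument is the
// transport URI; see the transports section for the available forms.
$logger->pushHandler(new LogPipeHandler("tcp:127.0.0.1:6601"));

$logger->debug("Hello from LogPipe!");
```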
To use LogPipe with Symfony you only need to register the handler as a service so that it can be used with Monolog.
This is preferably done in the `config_dev.yml` file. If there already is a `services:` block, add the sections to it; otherwise:
```yaml
services:
    logpipe.handler:
        class: NoccyLabs\LogPipe\Handler\LogPipeHandler
        arguments: [ "tcp:127.0.0.1:6601" ]
```
Then define the handler in the same file. By doing this in `config_dev.yml`, your live environment will not use the handler.
```yaml
monolog:
    handlers:
        ...
        logpipe:
            type: service
            id: logpipe.handler
```
LogPipe can be set up to automatically log exceptions and errors:
```php
use NoccyLabs\LogPipe\Handler\ConsoleHandler;

$handler = new ConsoleHandler("tcp:127.0.0.1:9999:serializer=json");
$handler->setExceptionReporting(true);
$handler->setErrorReporting(true);
```
You can also write events manually: (not implemented)
```php
$handler->debug("This is a debug message!");
$handler->warning("Danger! Danger!");
```
To start listening for and dumping events on the default transport (`tcp:127.0.0.1:6601`), just use the `dump` command:

```shell
$ bin/logpipe dump
```
You can also explicitly listen on a specific transport by providing it as an argument:

```shell
$ bin/logpipe dump tcp:0.0.0.0:9999
```
You can create some test events by using the test command in another terminal while the dump command is running:
```shell
$ bin/logpipe test
```
To put the transport under some serious stress by having several workers send a barrage of data to the dumper, you can run:

```shell
$ logpipe dump -t stress <transport>
```
To save the log while viewing it, try using tee:
```shell
$ bin/logpipe dump --no-ansi | tee messages.log
```

or:

```shell
$ bin/logpipe dump --tee messages.log
```
You can also write events from the console, or from scripts:
```shell
$ bin/logpipe write -c "cron" --error "Setup failed"
```
Or pass events straight through from stdin:
```shell
$ some_command | bin/logpipe log:pass
```
The connection is set up using a simple connection string, consisting of the desired transport and any parameters needed to set it up, separated by colons (`:`). Additional configuration can be added in query-string style after the last parameter.
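A few example connection strings, assembled from the transports that appear elsewhere in this document (the `?serializer=` query form in the last line is an assumption based on the description above):

```
tcp:127.0.0.1:6601
udp:127.0.0.1:6999
pipe:/var/run/foo.sock
tcp:0.0.0.0:9999?serializer=msgpack
```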
The default transport is UDP on port 6999. Messages sent over UDP are tagged with a 6-byte header specifying the size and crc32 checksum of the payload. Messages are serialized and transmitted, and are unserialized and parsed only once fully received with a valid checksum. Note that due to how UDP works, if you spawn another dumper on the same port, the first one will stop receiving data without any indication of an error.
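The framing described above can be sketched as follows. For illustration, assume the 6-byte header is a 16-bit payload size followed by a 32-bit crc32; the actual field order and widths in LogPipe's frame are an assumption:

```php
<?php
// Sketch of UDP framing: 6-byte header (16-bit size + 32-bit crc32, assumed
// layout), followed by the serialized payload.
function frame(string $payload): string
{
    return pack("nN", strlen($payload), crc32($payload)) . $payload;
}

function unframe(string $datagram): ?string
{
    if (strlen($datagram) < 6) {
        return null;                        // not even a full header
    }
    $h = unpack("nsize/Ncrc", substr($datagram, 0, 6));
    $payload = substr($datagram, 6);
    // Discard the message unless it is complete and the checksum matches.
    if (strlen($payload) !== $h["size"] || crc32($payload) !== $h["crc"]) {
        return null;
    }
    return $payload;
}
```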
The TCP transport works much like the UDP transport. However, since TCP is connection-oriented, some complications may occur if no dumper is available; this needs more testing. It should, however, be able to handle larger messages.
The pipe transport is the default when no colon is found in the transport URI. Thus, `/var/run/foo.sock` will be internally translated to `pipe:/var/run/foo.sock`. The `listen()` method will create the named pipe and start listening for connections. Only use the pipe transport if you really have to: concurrency can be a problem, and unexpected blocking may occur.
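What a named-pipe listener has to do can be sketched with PHP's posix extension; this illustrates the concept only and is not LogPipe's actual implementation:

```php
<?php
// Sketch: create a FIFO and open it for non-blocking reads. Requires the
// posix extension. Path and mode are illustrative.
$path = sys_get_temp_dir() . "/logpipe-demo.sock";

if (!file_exists($path)) {
    posix_mkfifo($path, 0600);      // create the named pipe
}

// Opening with "r+" keeps the FIFO open even with no writer attached, and
// non-blocking mode avoids the stalls mentioned above.
$fh = fopen($path, "r+");
stream_set_blocking($fh, false);

$data = fread($fh, 8192);           // "" when nothing has been written yet

fclose($fh);
unlink($path);
```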
The serializer to use is set on the sending transport. The serialization format is embedded in the message frame (together with the checksum, size and flags) so that the appropriate unserializer can be invoked. The supported serializers are:
- `php`: The built-in PHP serializer
- `json`: Uses JSON to serialize the data
- `msgpack`: Like binary JSON; should result in smaller messages. Requires the msgpack extension.
- `bson`: Like msgpack, but slightly larger in size. Requires the bson extension.
To use a custom serializer, provide it with the endpoint URI:
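For example, using the same colon-separated parameter form shown with the `ConsoleHandler` above:

```
tcp:127.0.0.1:9999:serializer=json
```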
As the policy is fail and forget, you will not receive any errors if the serializer is not supported. Note, however, that explicitly invoking a non-existent serializer will throw an exception.
When launching the dumper in interactive mode (by passing `--interactive`), some additional tools are available.
The last messages (normally 1000, but configurable with `-Cbuffer.size=N` on the command line, or `:set buffer.size N` while in the dumper) are stored in a buffer. You can search this buffer at any time and dump any matches. To do this, just press slash (`/`) and start typing. The input is parsed as a regular expression, so you can add modifiers at the end:

```
/exception/i   <- will perform a case-insensitive match
```
Currently the only supported command is `set`. You can invoke it by pressing colon (`:`) while in the dumper:
```
:set                  <- list all settings
:set buffer.size      <- show the value of buffer.size
:set buffer.size 999  <- set the buffer size to 999
```
Q: I can't see all logged messages!
LogPipe will fail quietly if anything goes wrong. This includes serialization of the log event, transport errors and more. This is done so that a problematic logger or transport will not cause the application being diagnosed to misbehave.
Q: LogPipe is causing my application to misbehave!
Please report this ASAP, unless you are able to fix the issue and submit a pull request. As previously mentioned, the strategy is fail and forget, meaning that any and all errors that occur should be silently consumed, so as to prevent the application from failing or misbehaving due to an auxiliary logger.
Q: The interactive mode doesn't work!
LogPipe uses stty to switch the terminal from line-buffered mode to raw character mode in order to implement custom readline functionality. In the long run, this means you will be able to enter commands or filter expressions while the log keeps updating, but today it means that certain platforms may encounter issues.