Convenience function that returns a logger according to configuration.
>>> from structlog import get_logger
>>> log = get_logger(y=23)
>>> log.msg('hello', x=42)
y=23 x=42 event='hello'
Parameters: initial_values – Values that are used to pre-populate your contexts.
See Configuration for details.
If you prefer CamelCase, there’s an alias for your reading pleasure: structlog.getLogger().
CamelCase alias for structlog.get_logger().
This function is supposed to be in every source file – I don’t want it to stick out like a sore thumb in frameworks like Twisted or Zope.
Create a new bound logger for an arbitrary logger.
Default values for processors, wrapper_class, and context_class can be set using configure().
If you set processors or context_class here, calls to configure() have no effect for the respective attribute.
In other words: selective overwriting of the defaults is possible.
Return type: A proxy that creates a correctly configured bound logger when necessary.
Configures the global defaults.
They are used if wrap_logger() has been called without arguments.
Also sets the global class attribute is_configured to True on first call. Can be called several times; keeping an argument at None leaves it unchanged from the current setting.
Use reset_defaults() to undo your changes.
Configures iff structlog isn’t configured yet.
It does not matter whether it was configured using configure() or configure_once() before.
Raises a RuntimeWarning if repeated configuration is attempted.
Resets global default values to builtins.
That means [format_exc_info(), KeyValueRenderer] for processors, BoundLogger for wrapper_class, OrderedDict for context_class, and PrintLogger for logger_factory.
Also sets the global class attribute is_configured to False.
Immutable, context-carrying wrapper.
Public only for sub-classing, not intended to be instantiated by yourself. See wrap_logger() and get_logger().
Clears the context and binds initial_values using bind().
Only necessary with dict implementations that keep global state like those wrapped by structlog.threadlocal.wrap_dict() when threads are re-used.
Return type: BoundLogger
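To see concretely why that is needed, here is a tiny pure-Python sketch (SharedContext is a made-up stand-in, not structlog’s actual wrap_dict() implementation) of a dict class whose instances share global state, and why clearing before binding matters:

```python
class SharedContext(dict):
    # Toy stand-in for a wrap_dict() class: every instance reads and
    # writes one shared store instead of carrying its own data.
    _store = {}

    def __init__(self, *args, **kwargs):
        self._store.update(dict(*args, **kwargs))

    def clear(self):
        self._store.clear()

    def as_plain_dict(self):
        return dict(self._store)

ctx1 = SharedContext(request_id='abc')
ctx2 = SharedContext()  # a "fresh" context still sees the old state
assert ctx2.as_plain_dict() == {'request_id': 'abc'}

# new() boils down to: clear the shared state first, then bind fresh values.
ctx2.clear()
ctx3 = SharedContext(user='alice')
assert ctx3.as_plain_dict() == {'user': 'alice'}
```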
Return a new logger with new_values added to the existing ones.
Return type: BoundLogger
Return a new logger with keys removed from the context.
Raises KeyError: If the key is not part of the context.
Return type: BoundLogger
Prints events into a file.
Parameters: file (file) – File to print to. (default: stdout)
>>> from structlog import PrintLogger
>>> PrintLogger().msg('hello')
hello
Useful if you just capture your stdout with tools like runit or if you forward your stderr to syslog.
Also very useful for testing and examples since logging is sometimes finicky in doctests.
Returns the string that it’s called with.
>>> from structlog import ReturnLogger
>>> ReturnLogger().msg('hello')
'hello'
Useful for unit tests.
If raised by a processor, the event gets silently dropped.
Derives from BaseException because it’s technically not an error.
Primitives to keep context global but thread (and greenlet) local.
Wrap a dict-like class and return the resulting class.
The wrapped class keeps its state global, but local to the current thread.
Parameters: dict_class (type) – Class used for keeping context.
Return type: type
Bind tmp_values to logger & memorize current state. Rewind afterwards.
>>> from structlog import wrap_logger, PrintLogger
>>> from structlog.threadlocal import tmp_bind, wrap_dict
>>> logger = wrap_logger(PrintLogger(), context_class=wrap_dict(dict))
>>> with tmp_bind(logger, x=5) as tmp_logger:
...     logger = logger.bind(y=3)
...     tmp_logger.msg('event')
y=3 x=5 event='event'
>>> logger.msg('event')
event='event'
Extract the context from a thread local logger into an immutable logger.
Parameters: logger (BoundLogger) – A logger with possibly thread local state.
Return type: BoundLogger with an immutable context.
Processors useful regardless of the logging framework.
Render the event_dict using json.dumps(event_dict, **json_kw).
>>> from structlog.processors import JSONRenderer
>>> JSONRenderer(sort_keys=True)(None, None, {'a': 42, 'b': [1, 2, 3]})
'{"a": 42, "b": [1, 2, 3]}'
Render event_dict as a list of Key=repr(Value) pairs.
>>> from structlog.processors import KeyValueRenderer
>>> KeyValueRenderer()(None, None, {'a': 42, 'b': [1, 2, 3]})
'a=42 b=[1, 2, 3]'
Parameters: sort_keys (bool) – Whether to sort keys when formatting.
Add a timestamp to event_dict.
>>> from structlog.processors import TimeStamper
>>> TimeStamper()(None, None, {})
{'timestamp': 1378994017}
>>> TimeStamper(fmt='iso')(None, None, {})
{'timestamp': '2013-09-12T13:54:26.996778Z'}
>>> TimeStamper(fmt='%Y')(None, None, {})
{'timestamp': '2013'}
Encode unicode values in event_dict.
Useful for KeyValueRenderer if you don’t want to see u-prefixes:
>>> from structlog.processors import KeyValueRenderer, UnicodeEncoder
>>> KeyValueRenderer()(None, None, {'foo': u'bar'})
"foo=u'bar'"
>>> KeyValueRenderer()(None, None,
... UnicodeEncoder()(None, None, {'foo': u'bar'}))
"foo='bar'"
Just put it in the processor chain before KeyValueRenderer.
Replace an exc_info field by an exception string field:
If event_dict contains the key exc_info, there are two possible behaviors:
If there is no exc_info key, the event_dict is not touched. This behavior is analogous to that of the stdlib’s logging.
>>> from structlog.processors import format_exc_info
>>> try:
...     raise ValueError
... except ValueError:
...     format_exc_info(None, None, {'exc_info': True})
{'exception': 'Traceback (most recent call last):...
Processors and helpers specific to the logging module from the Python standard library.
Build a standard library logger when an instance is called.
>>> from structlog import configure
>>> from structlog.stdlib import LoggerFactory
>>> configure(logger_factory=LoggerFactory())
Check whether logging is configured to accept messages from this log level.
Should be the first processor if stdlib’s filtering by level is used so possibly expensive processors like exception formatters are avoided in the first place.
>>> import logging
>>> from structlog.stdlib import filter_by_level
>>> logging.basicConfig(level=logging.WARN)
>>> logger = logging.getLogger()
>>> filter_by_level(logger, 'warn', {})
{}
>>> filter_by_level(logger, 'debug', {})
Traceback (most recent call last):
...
DropEvent
Processors and tools specific to the Twisted networking engine.
Build a Twisted logger when an instance is called.
>>> from structlog import configure
>>> from structlog.twisted import LoggerFactory
>>> configure(logger_factory=LoggerFactory())
Adapt an event_dict to the Twisted logging system.
Particularly, make a wrapped twisted.python.log.err behave as expected.
Must be the last processor in the chain and requires a dictFormatter for the actual formatting as a constructor argument in order to fully support the original behaviors of log.msg() and log.err().
Behaves like structlog.processors.JSONRenderer except that it formats tracebacks and failures itself if called with err().
Not an adapter like EventAdapter but a real formatter; nor does it need to be adapted using one.
structlog is licensed under the permissive Apache License, Version 2. The full license text can also be found in the source code repository.
structlog is written and maintained by Hynek Schlawack. It’s inspired by previous work done by Jean-Paul Calderone and David Reid.
The following folks helped form structlog into what it is now:
Some of them disapprove of the addition of thread local context data. :)
This chapter is intended to give you a taste of realistic usage of structlog.
In the simplest case, you bind a unique request ID to every incoming request so you can easily see which log entries belong to which request.
import uuid
import flask
import structlog
from .some_module import some_function
logger = structlog.get_logger()
app = flask.Flask(__name__)
@app.route('/login', methods=['POST', 'GET'])
def some_route():
    log = logger.new(
        request_id=str(uuid.uuid4()),
    )
    # do something
    # ...
    log.info('user logged in', user='test-user')
    # gives you:
    # request_id='ffcdc44f-b952-4b5f-95e6-0f1f3a9ee5fd' event='user logged in' user='test-user'
    # ...
    some_function()
    # ...

if __name__ == "__main__":
    from structlog.stdlib import LoggerFactory
    from structlog.threadlocal import wrap_dict
    structlog.configure(
        context_class=wrap_dict(dict),
        logger_factory=LoggerFactory(),
    )
    app.run()
some_module.py
from structlog import get_logger
logger = get_logger()
def some_function():
    # later then:
    logger.error('user did something', something='shot_in_foot')
    # gives you:
    # request_id='ffcdc44f-b952-4b5f-95e6-0f1f3a9ee5fd' something='shot_in_foot' event='user did something'
While wrapped loggers are immutable by default, this example demonstrates how to circumvent that for convenience, using a thread local dict implementation for the context data (hence the requirement to use new() for re-initializing the logger).
Please note that structlog.stdlib.LoggerFactory is a totally magic-free class that simply deduces the name of the caller’s module and calls logging.getLogger() with it. It’s used by structlog.get_logger() to rid you of logging boilerplate in application code.
If you prefer to log less but with more context in each entry, you can bind everything important to your logger and log it out with each log entry.
import sys
import uuid
import structlog
import twisted
from twisted.internet import protocol, reactor
logger = structlog.get_logger()
class Counter(object):
    i = 0

    def inc(self):
        self.i += 1

    def __repr__(self):
        return str(self.i)

class Echo(protocol.Protocol):
    def connectionMade(self):
        self._counter = Counter()
        self._log = logger.new(
            connection_id=str(uuid.uuid4()),
            peer=self.transport.getPeer().host,
            count=self._counter,
        )

    def dataReceived(self, data):
        self._counter.inc()
        log = self._log.bind(data=data)
        self.transport.write(data)
        log.msg('echoed data!')

if __name__ == "__main__":
    from structlog.twisted import LoggerFactory, EventAdapter
    structlog.configure(
        processors=[EventAdapter()],
        logger_factory=LoggerFactory(),
    )
    twisted.python.log.startLogging(sys.stderr)
    reactor.listenTCP(1234, protocol.Factory.forProtocol(Echo))
    reactor.run()
gives you something like:
... peer='127.0.0.1' connection_id='1c6c0cb5-...' count=1 data='123\n' event='echoed data!'
... peer='127.0.0.1' connection_id='1c6c0cb5-...' count=2 data='456\n' event='echoed data!'
... peer='127.0.0.1' connection_id='1c6c0cb5-...' count=3 data='foo\n' event='echoed data!'
... peer='10.10.0.1' connection_id='85234511-...' count=1 data='cba\n' event='echoed data!'
... peer='127.0.0.1' connection_id='1c6c0cb5-...' count=4 data='bar\n' event='echoed data!'
Since Twisted’s logging system is a bit peculiar, structlog ships with an adapter so it keeps behaving like you’d expect it to behave.
I’d also like to point out the Counter class: it doesn’t do anything spectacular, but it gets bound to the logger once per connection, and since its repr is the number itself, it’s logged out correctly for each event. This shows off the strength of keeping a dict of objects for context instead of passing around serialized strings.
Processors are both a simple and a powerful feature of structlog.
So you want timestamps as part of the structure of the log entry, censor passwords, filter out log entries below your log level before they even get rendered, and get your output as JSON for convenient parsing? Here you go:
>>> import datetime, logging, sys
>>> from structlog import wrap_logger
>>> from structlog.processors import JSONRenderer
>>> from structlog.stdlib import filter_by_level
>>> logging.basicConfig(stream=sys.stdout, format='%(message)s')
>>> def add_timestamp(_, __, event_dict):
...     event_dict['timestamp'] = datetime.datetime.utcnow()
...     return event_dict
>>> def censor_password(_, __, event_dict):
...     pw = event_dict.get('password')
...     if pw:
...         event_dict['password'] = '*CENSORED*'
...     return event_dict
>>> log = wrap_logger(
...     logging.getLogger(__name__),
...     processors=[
...         filter_by_level,
...         add_timestamp,
...         censor_password,
...         JSONRenderer(indent=1, sort_keys=True)
...     ]
... )
>>> log.info('something.filtered')
>>> log.warning('something.not_filtered', password='secret')
{
"event": "something.not_filtered",
"password": "*CENSORED*",
"timestamp": "datetime.datetime(..., ..., ..., ..., ...)"
}
structlog comes with many handy processors built right in – for a list of shipped processors, check out the API documentation.
A custom wrapper class helps you cast off the shackles of your underlying logging system even further and get rid of even more boilerplate.
>>> from structlog import BoundLogger, PrintLogger, wrap_logger
>>> class SemanticLogger(BoundLogger):
...     def msg(self, event, **kw):
...         if 'status' not in kw:
...             self.info(event, status='ok', **kw)
...         else:
...             self.info(event, **kw)
...
...     def user_error(self, event, **kw):
...         self.msg(event, status='user_error', **kw)
>>> log = wrap_logger(PrintLogger(), wrapper_class=SemanticLogger)
>>> log = log.bind(user='fprefect')
>>> log.user_error('user.forgot_towel')
user='fprefect' status='user_error' event='user.forgot_towel'
I like to have semantically meaningful logger names. If you agree, this is a nice way to achieve that.
Of course, you can configure default processors, the wrapper class and the context classes globally.
structlog can be easily installed using:
$ pip install structlog
If you’re running Python 2.6 and want to use OrderedDicts for your context (which is the default), you also have to install the respective compatibility package:
$ pip install ordereddict
If the order of the keys of your context doesn’t matter (e.g. if you’re logging JSON that gets parsed anyway), simply use a vanilla dict to avoid this dependency. See Configuration on how to achieve that.
A lot of effort went into making structlog accessible without reading pages of documentation. And indeed, the simplest possible usage looks like this:
>>> import structlog
>>> log = structlog.get_logger()
>>> log.msg('greeted', whom='world', more_than_a_string=[1, 2, 3])
whom='world' more_than_a_string=[1, 2, 3] event='greeted'
Here, structlog takes full advantage of its hopefully useful default settings:
It should be noted that even in most complex logging setups the example would still look just like that thanks to Configuration.
There you go, structured logging! However, this alone wouldn’t warrant its own package. After all, there’s even a recipe on structured logging for the standard library. So let’s go a step further.
Imagine a hypothetical web application that wants to log out all relevant data with just the API from above:
from structlog import get_logger
log = get_logger()
def view(request):
    user_agent = request.get('HTTP_USER_AGENT', 'UNKNOWN')
    peer_ip = request.client_addr
    if something:
        log.msg('something', user_agent=user_agent, peer_ip=peer_ip)
        return 'something'
    elif something_else:
        log.msg('something_else', user_agent=user_agent, peer_ip=peer_ip)
        return 'something_else'
    else:
        log.msg('else', user_agent=user_agent, peer_ip=peer_ip)
        return 'else'
The calls themselves are nice and straight to the point; however, you’re repeating yourself all over the place. At this point, you’ll be tempted to write a closure like
def log_closure(event):
    log.msg(event, user_agent=user_agent, peer_ip=peer_ip)
inside of the view. Problem solved? Not quite. What if the parameters are introduced step by step? Do you really want to have a logging closure in each of your views?
Let’s have a look at a better approach:
from structlog import get_logger
logger = get_logger()
def view(request):
    log = logger.bind(
        user_agent=request.get('HTTP_USER_AGENT', 'UNKNOWN'),
        peer_ip=request.client_addr,
    )
    foo = request.get('foo')
    if foo:
        log = log.bind(foo=foo)
    if something:
        log.msg('something')
        return 'something'
    elif something_else:
        log.msg('something_else')
        return 'something_else'
    else:
        log.msg('else')
        return 'else'
Suddenly your logger becomes your closure!
For structlog, a log entry is just a dictionary, called the event dict[ionary].
structlog’s primary application isn’t printing though. Instead, it’s intended to wrap your existing loggers and add structure and incremental context building to them. For that, structlog is completely agnostic of your underlying logger – you can use it with any logger you like.
The most prominent example of such an ‘existing logger’ is without doubt the logging module in the standard library. To make this common case as simple as possible, structlog comes with some tools to help you:
>>> import logging
>>> logging.basicConfig()
>>> from structlog import get_logger, configure
>>> from structlog.stdlib import LoggerFactory
>>> configure(logger_factory=LoggerFactory())
>>> log = get_logger()
>>> log.warn('it works!', difficulty='easy')
WARNING:structlog...:difficulty='easy' event='it works!'
In other words, you tell structlog that you would like to use the standard library logger factory and keep calling get_logger() like before.
structlog makes structured logging in Python easy by augmenting your existing logger. It’s licensed under the permissive Apache License, version 2, available from PyPI, and the source code can be found on GitHub. The full documentation is on Read the Docs.
structlog targets Python 2.6, 2.7, 3.2, and 3.3 as well as PyPy with no additional dependencies for core functionality.
The true power of structlog lies in its combinable log processors. A log processor is a regular callable, i.e. a function or an instance of a class with a __call__() method.
The processor chain is a list of processors. Each processor receives three positional arguments: the wrapped logger object, the name of the method that was called on the bound logger, and the event_dict holding the current context together with the current event.
The return value of each processor is passed on to the next one as event_dict until finally the return value of the last processor gets passed into the wrapped logging method.
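This folding can be sketched without structlog at all; run_chain and the two toy processors below are hypothetical names for illustration, not part of the library:

```python
def run_chain(processors, logger, method_name, event_dict):
    # Each processor's return value becomes the next one's event_dict.
    for processor in processors:
        event_dict = processor(logger, method_name, event_dict)
    return event_dict

def add_marker(logger, method_name, event_dict):
    event_dict['marker'] = True
    return event_dict

def render(logger, method_name, event_dict):
    # The last processor turns the dict into something the wrapped
    # logger understands -- here, a plain key=value string.
    return ' '.join('%s=%r' % (k, v) for k, v in sorted(event_dict.items()))

line = run_chain([add_marker, render], None, 'msg', {'event': 'some_event'})
print(line)  # event='some_event' marker=True
```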
If you set up your logger like:
from structlog import wrap_logger, PrintLogger
wrapped_logger = PrintLogger()
logger = wrap_logger(wrapped_logger, processors=[f1, f2, f3, f4])
log = logger.new(x=42)
and call log.msg('some_event', y=23), it results in the following call chain:
wrapped_logger.msg(
    f4(wrapped_logger, 'msg',
        f3(wrapped_logger, 'msg',
            f2(wrapped_logger, 'msg',
                f1(wrapped_logger, 'msg', {'event': 'some_event', 'x': 42, 'y': 23})
            )
        )
    )
)
In this case, f4 has to make sure it returns something wrapped_logger.msg can handle (see Adapting and Rendering).
The simplest modification a processor can make is adding new values to the event_dict. Parsing human-readable timestamps is tedious, not so UNIX timestamps – let’s add one to each log entry!
import calendar
import time
def timestamper(logger, log_method, event_dict):
    event_dict['timestamp'] = calendar.timegm(time.gmtime())
    return event_dict
Easy, isn’t it? Please note that structlog comes with such a processor built in: TimeStamper.
If a processor raises structlog.DropEvent, the event is silently dropped.
Therefore, the following processor drops every entry:
from structlog import DropEvent
def dropper(logger, method_name, event_dict):
    raise DropEvent
But we can do better than that!
How about dropping only log entries that are marked as coming from a certain peer (e.g. monitoring)?
from structlog import DropEvent
class ConditionalDropper(object):
    def __init__(self, peer_to_ignore):
        self._peer_to_ignore = peer_to_ignore

    def __call__(self, logger, method_name, event_dict):
        """
        >>> cd = ConditionalDropper('127.0.0.1')
        >>> cd(None, None, {'event': 'foo', 'peer': '10.0.0.1'})
        {'peer': '10.0.0.1', 'event': 'foo'}
        >>> cd(None, None, {'event': 'foo', 'peer': '127.0.0.1'})
        Traceback (most recent call last):
        ...
        DropEvent
        """
        if event_dict.get('peer') == self._peer_to_ignore:
            raise DropEvent
        else:
            return event_dict
An important role is played by the last processor because its duty is to adapt the event_dict into something the underlying logging method understands. With that, it’s also the only processor that needs to know anything about the underlying system.
For that, it can either return a string that is passed as the first (and only) positional argument to the underlying logger or a tuple of (args, kwargs) that are passed as log_method(*args, **kwargs). Therefore return 'hello world' is a shortcut for return (('hello world',), {}) (the example in Chains assumes this shortcut has been taken).
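The equivalence can be sketched with a small (hypothetical) helper that normalizes a renderer’s return value before handing it to the wrapped log method:

```python
def normalize(renderer_result):
    # A plain string is shorthand for ((string,), {}).
    if isinstance(renderer_result, str):
        return (renderer_result,), {}
    args, kwargs = renderer_result
    return args, kwargs

def call_wrapped(log_method, renderer_result):
    # Expand whichever form the last processor returned into
    # log_method(*args, **kwargs).
    args, kwargs = normalize(renderer_result)
    return log_method(*args, **kwargs)

# Both forms reach the wrapped method identically:
assert normalize('hello world') == (('hello world',), {})
assert normalize((('hello world',), {})) == (('hello world',), {})
```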
This should give you enough power to use structlog with any logging system while writing agnostic processors that operate on dictionaries.
Probably the most useful formatter for string-based loggers is JSONRenderer. Advanced log aggregation and analysis tools like logstash offer features like telling them “this is JSON, deal with it” instead of fiddling with regular expressions.
More examples can be found in the examples chapter. For a list of shipped processors, check out the API documentation.