Python Logging for Software Development Services

Many times when we provide software development services for application development, advanced features are required, because we work with some of the most complex technology in the world. This article is about the logging module built into Python. Logging keeps track of events while the software is running, through methods such as debug, info, warning, error and critical. This is really important for software quality engineering. In logging, some of these levels are given greater priority than others. Here are the relevant terms we are going to look over in more detail: loggers, levels, handlers, formatters and filters.


Loggers

This object is responsible for everything logging-related. Loggers are maintained in a hierarchy, with the "root" logger always at the top. To express the hierarchy, logger names are separated by periods: given the name "bob.bob1", the logger "bob" is the parent of the logger "bob1". Multiple calls to getLogger() with the same name return a reference to the same logger object. If a level is not set on a logger, the level of its parent is used. The root logger's level is set to WARNING by default.

import logging

logger = logging.getLogger("logr")
#creates a logger object with the name "logr"
#"logr" is the name output in log messages
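To illustrate the hierarchy and the behavior of getLogger(), here is a small sketch (the logger names are just examples):

```python
import logging

# Two calls with the same name return the same logger object.
a = logging.getLogger("logr")
b = logging.getLogger("logr")
print(a is b)                       # True

# Dotted names build the hierarchy: "logr.db" is a child of "logr".
child = logging.getLogger("logr.db")
print(child.parent is a)            # True

# With no level of its own, a logger falls back to its ancestors;
# the root logger's level defaults to WARNING.
print(logging.getLogger().getEffectiveLevel() == logging.WARNING)  # True
```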


Levels

Levels are associated with loggers. In your application development, levels are what determine whether something should be logged or not. If the level set on the logger is higher than the level of the method being called, no logging is done. CRITICAL is considered the highest level. Here are the various levels in order of increasing severity:

DEBUG, INFO, WARNING, ERROR, CRITICAL

logger = logging.getLogger('logr')
logger.setLevel(logging.DEBUG)
#setting the level on the logger
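To see how the level filters calls, here is a small sketch (the logger name is illustrative); isEnabledFor() reports whether a call at a given level would get through:

```python
import logging

logger = logging.getLogger("level_demo")
logger.setLevel(logging.WARNING)

# Calls below the logger's level are dropped.
print(logger.isEnabledFor(logging.DEBUG))   # False: DEBUG < WARNING
print(logger.isEnabledFor(logging.ERROR))   # True: ERROR >= WARNING

# The numeric values behind the level names:
print(logging.DEBUG, logging.INFO, logging.WARNING,
      logging.ERROR, logging.CRITICAL)      # 10 20 30 40 50
```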


Handlers

Handlers are responsible for dispatching log messages to specific destinations. Logger objects can have zero or more handlers. Depending on the severity of different issues, they may need to be logged to different locations; this is where handlers become useful, letting you create different log destinations according to severity. There are many useful handlers for you to explore. A few examples are: StreamHandler, FileHandler, RotatingFileHandler, SocketHandler, HTTPHandler.

logger = logging.getLogger('logr')

#create handler and set its level
hd = logging.StreamHandler()
hd.setLevel(logging.DEBUG)

#adding the handler to the logger
logger.addHandler(hd)
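As a sketch of severity-based routing, the following assumes a hypothetical errors.log file and sends only ERROR-and-above records there, while the console handler sees everything:

```python
import logging

logger = logging.getLogger("multi_dest")
logger.setLevel(logging.DEBUG)

# Console handler: sees everything from DEBUG up.
console = logging.StreamHandler()
console.setLevel(logging.DEBUG)
logger.addHandler(console)

# File handler: only ERROR and above land in errors.log.
errors = logging.FileHandler("errors.log")
errors.setLevel(logging.ERROR)
logger.addHandler(errors)

logger.info("shown on console only")
logger.error("shown on console and written to errors.log")
```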

[See also: The Rise of MEAN for Application Development]


Formatters

Formatters determine the format in which your log messages appear.

logger = logging.getLogger('simple_example')
logger.setLevel(logging.INFO)
hd = logging.StreamHandler()

You can format the log in a variety of ways depending on your requirements. Here is an example:

formatter = logging.Formatter('[%(asctime)s] [%(levelname)s] %(message)s')

Add the formatter to hd:

hd.setFormatter(formatter)

Add hd to logger:

logger.addHandler(hd)

Logging the message:

logger.info('info message')

Date and time:
%(asctime)s puts the date and time into log messages. The default format is equivalent to:

%Y-%m-%d %H:%M:%S  (followed by milliseconds, e.g. 2014-11-24 16:02:55,123)

Appending the level in log messages:

%(levelname)s

The message you want to log:

%(message)s
After running the code shown above, the format of the log message looks something like this:

[2014-11-24 16:02:55] [INFO] info message
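Putting the formatter pieces together, here is a complete, minimal runnable version of the snippet above:

```python
import logging

logger = logging.getLogger('simple_example')
logger.setLevel(logging.INFO)

hd = logging.StreamHandler()
formatter = logging.Formatter('[%(asctime)s] [%(levelname)s] %(message)s')
hd.setFormatter(formatter)
logger.addHandler(hd)

logger.info('info message')
# prints something like: [2014-11-24 16:02:55,123] [INFO] info message
```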

[See also: Using Angular.JS Directive for Product Development]


Filters

Filters add more control to logging for software quality engineering, which is important for many cloud computing companies. They can be added to both loggers and handlers with the help of their addFilter method. Before processing a message further, both the logger and the handler consult their filters for permission. If a filter returns a false value, the message is not processed further.
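A minimal sketch of a custom filter (the class name DropDebugFilter is hypothetical): it rejects DEBUG records and passes everything else.

```python
import logging

class DropDebugFilter(logging.Filter):
    """Hypothetical filter: reject DEBUG records, pass the rest."""
    def filter(self, record):
        # Returning a false value stops further processing of the record.
        return record.levelno > logging.DEBUG

logger = logging.getLogger("filtered")
logger.addFilter(DropDebugFilter())

# Filters can also be attached to handlers: handler.addFilter(DropDebugFilter())
```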

Configuring Logging

There are two convenient ways to configure logging: build a dictionary of configuration information and pass it to the dictConfig() function, or keep the configuration in a file and load it with fileConfig(). File configuration has the advantage that the configuration is separated from the code. The config file uses the ConfigParser (INI) format.


The fileConfig function is:

logging.config.fileConfig(fname, defaults=None, disable_existing_loggers=True)

  • “fname” is the name of the file
  • “defaults” are defaults that are to be passed to ConfigParser
  • “disable_existing_loggers” when this is false, loggers that exist when this call is made are left enabled

Configuring using a file:

import logging.config
logging.config.fileConfig('logging.conf')

The config file (named 'logging.conf' here for illustration) is something like this:

[loggers]
keys = root,foo

[handlers]
keys = fileHandler

[formatters]
keys = msgFormatter

[logger_root]
level = DEBUG
handlers = fileHandler

[logger_foo]
level = DEBUG
handlers = fileHandler
qualname = foo
propagate = 1

[handler_fileHandler]
class = handlers.RotatingFileHandler
level = DEBUG
formatter = msgFormatter
args = ('%(log_path)s','a',1048576,5)

[formatter_msgFormatter]
format = [%(asctime)s] [%(levelname)s] %(message)s
datefmt = %Y-%m-%d %H:%M:%S

[See also: Virtualization in Custom Software Development and Testing]

This is how you can make a simple config file. The file above contains two loggers (root and foo), one handler (a rotating file handler), and one formatter. If you want to pass a value to the config at runtime, you can use the following method:

logging.config.fileConfig("filepath/fname", {'log_path': log_path}, False)

To get the log_path value you can use the code shown below in the config file:

args = ('%(log_path)s','a',1048576,5)
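The dictConfig() route mentioned earlier expresses the same kind of configuration as a dictionary. Here is a minimal sketch, reusing the formatter name from the file above but swapping in a console handler for simplicity:

```python
import logging
import logging.config

config = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'msgFormatter': {
            'format': '[%(asctime)s] [%(levelname)s] %(message)s',
            'datefmt': '%Y-%m-%d %H:%M:%S',
        },
    },
    'handlers': {
        'consoleHandler': {
            'class': 'logging.StreamHandler',
            'level': 'DEBUG',
            'formatter': 'msgFormatter',
        },
    },
    'loggers': {
        'foo': {
            'level': 'DEBUG',
            'handlers': ['consoleHandler'],
        },
    },
}

logging.config.dictConfig(config)
logging.getLogger('foo').debug('configured via dictConfig')
```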

Manipulating the config file

If you want to change the config file at runtime, for example adding loggers, handlers, formatters and so on, you can use configparser:

from configparser import ConfigParser

config = ConfigParser()

The config-parser formatted file has three terms: section, option and value. In the following snippet, "loggers" is the section, "keys" is the option, and "root,foo" is the value given to keys:

[loggers]
keys = root,foo

Add a section using the add_section method in config-parser:

config.add_section("section")

And then set its option using the set method:

config.set("section", "option", "value")
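A short sketch of this in practice, adding a hypothetical logger_bar section and writing the file back out (the section, option, and file names are illustrative):

```python
from configparser import ConfigParser

config = ConfigParser()
# In practice you would first load the existing file:
# config.read("logging.conf")

# Add a new logger section.
config.add_section("logger_bar")
config.set("logger_bar", "level", "DEBUG")
config.set("logger_bar", "handlers", "fileHandler")
config.set("logger_bar", "qualname", "bar")

# Write the updated configuration back out.
with open("logging.conf", "w") as f:
    config.write(f)
```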

Real-world example

Recently, while building a cloud solution, I had a requirement to log messages module-wise. That means that when logging was done from one module, the logs needed to be written to a file named after that module; if logging was done afterwards from another module, those log messages needed to be written to a file named after the second module.

To tackle this challenge I created a logger for each module name, with an associated handler for each logger. Because each logger had its own handler, I could create a different log path/log file name for each logger; basically, I was able to log module-wise. All of this was achieved with the help of the config parser.
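A minimal sketch of that module-wise approach, done directly in code rather than via the config parser (the helper name and module names are hypothetical):

```python
import logging
import os

def get_module_logger(module_name, log_dir="."):
    """Hypothetical helper: one logger per module, each writing
    to its own file named after the module."""
    logger = logging.getLogger(module_name)
    if not logger.handlers:  # avoid adding duplicate handlers
        logger.setLevel(logging.DEBUG)
        handler = logging.FileHandler(os.path.join(log_dir, module_name + ".log"))
        handler.setFormatter(
            logging.Formatter('[%(asctime)s] [%(levelname)s] %(message)s'))
        logger.addHandler(handler)
    return logger

get_module_logger("billing").info("written to billing.log")
get_module_logger("auth").info("written to auth.log")
```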

Good luck coding with Python!
