How to Profile Python Code: Accurately Measure the Performance of your Software
As software engineers, we are paid to create high-quality code that performs.
Performance can be measured in a number of ways: the problems it solves, its uptime, or a slew of quantitative metrics.
Speed is unquestionably an important component of software performance. In this article, I’ll show you how to measure the speed of Python code using a process called “profiling”, step-by-step.
What is Profiling?
To measure the speed of Python code, we use a process called ‘profiling’. Profiling measures the speed of your code and also provides additional information like:
The number of times each method was called
The average runtime of each method per call
The total time spent running each function
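For a quick taste of this output, the standard library's cProfile module can be run on a toy function. This snippet is my own illustration, not part of the tutorial's code:

```python
import cProfile

def fib(n):
    # Deliberately inefficient recursion so there is work to measure
    return n if n < 2 else fib(n - 1) + fib(n - 2)

profiler = cProfile.Profile()
profiler.runcall(fib, 15)            # run fib(15) under the profiler
profiler.print_stats('cumulative')   # table of ncalls, tottime, cumtime per function
```

The printed table shows exactly the three pieces of information listed above: ncalls (how many times each function ran), tottime and percall (time per function and per call), and cumtime (total time including callees).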
There are a number of terms that you should familiarize yourself with before you start profiling your first application:
Method profiling: Measuring the performance of methods and functions
Line profiling: Measuring the performance of an application line-by-line
APM Tools: APM stands for application performance management. These are off-the-shelf tools that you can pair with your applications to monitor their performance in real-time. I would say that these could be useful for production-level business applications but might not be worth the cost for weekend warriors building side projects.
Fortunately, there are a number of Python modules that make it easy to profile your code. We’ll explore them in the next section of this article.
The Profiling Ecosystem
Python has a robust ecosystem for profiling code. The major components are:
cProfile: a C extension suitable for profiling long-running programs. cProfile is recommended for most Python users.
Profile: a pure Python module that adds significant overhead to profiled programs. Unless you’re trying to extend your profiler beyond the functionality provided by cProfile, I would recommend against using the profile module.
pstats: a Python module for formatting data and generating reports from the information generated by profile and cProfile.
In this article, I will demonstrate how to profile your code using the functionality provided by cProfile and pstats. Specifically, I will provide an easy template where you can paste in your code and get a nicely-formatted report on the runtimes of the functions it contains.
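As a preview of how cProfile and pstats fit together, here is a minimal, illustrative sketch. The function work and the file name work.stats are my own placeholders, not names from the tutorial's repository:

```python
import cProfile
import pstats

def work():
    # A stand-in for real application code (placeholder, not from the repo)
    total = 0
    for i in range(100_000):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.runcall(work)               # run work() under the profiler
profiler.dump_stats('work.stats')    # save the raw statistics to a file

# pstats reads the raw statistics and formats a report
stats = pstats.Stats('work.stats')
stats.strip_dirs().sort_stats('cumulative').print_stats(5)  # top 5 rows
```

Separating collection (cProfile) from reporting (pstats) means you can profile once and slice the results many ways afterward.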
Step 1: Finish Writing and Debugging Your Code
While this step is basically a formality, it’s worth mentioning that you need to finish and debug the code first. Said differently, your code needs to run properly before you can profile it. Spend time completing and debugging your code before moving on to the next step.
Step 2: Open the profiler.py File
Open up a Python environment of your choice (like a Jupyter Notebook, vim, or VS Code) and open the profiler.py file.
Step 3: Enter the Code You Wish to Profile
In the profiler.py file, you’ll see the following code block:
@profile
def code_to_profile():
    ###################################
    # Insert code to be profiled here
    ###################################
    pass
Replace the commented block with the code you’d like to profile. For the purposes of this tutorial, we’ll be profiling a very simple program called code-to-test.py, which imports population data from an open-source GitHub repository and builds a string with each country’s name and its population.
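The @profile decorator itself is defined in profiler.py. The repository's actual implementation may differ, but a decorator like this can be sketched with cProfile and pstats as follows:

```python
import cProfile
import functools
import io
import pstats

def profile(func):
    """Sketch of a @profile decorator built on cProfile and pstats.
    This is an illustration only; the real profiler.py may differ."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        profiler = cProfile.Profile()
        result = profiler.runcall(func, *args, **kwargs)  # run under the profiler
        buffer = io.StringIO()
        pstats.Stats(profiler, stream=buffer).sort_stats('cumulative').print_stats()
        print(buffer.getvalue())   # report the per-function timings
        return result
    return wrapper

@profile
def code_to_profile():
    return sum(range(1_000_000))
```

Calling code_to_profile() now returns its normal result and, as a side effect, prints a timing report for every function it called.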
The entire program we’re profiling is only 13 lines long. I have embedded it below.
import numpy as np
import pandas as pd
populationData = pd.read_json('https://raw.githubusercontent.com/samayo/country-json/master/src/country-by-population.json')
stringDataFrame = pd.DataFrame(np.empty((0, 1)))
for index, row in populationData.iterrows():
    stringDataFrame = stringDataFrame.append(pd.Series(row['country'] + ' has population ' + str(row['population'])), ignore_index=True)
i = 0
while i < 10**8:
    i += 1
print('The code is done!')
When the profiler finishes, it prints a table of statistics for each function. In that table, tottime is the total time spent in the function alone, while cumtime is the total time spent in the function plus all of the functions it called.
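A tiny illustration of the difference (the function names are mine, not from the tutorial):

```python
import cProfile
import pstats
import time

def inner():
    time.sleep(0.05)   # nearly all of the real time is spent here

def outer():
    for _ in range(3):
        inner()        # outer() does almost no work of its own

profiler = cProfile.Profile()
profiler.runcall(outer)
# In the report, outer() shows a tiny tottime but a cumtime of roughly
# 0.15 seconds, because cumtime includes the three calls to inner()
pstats.Stats(profiler).sort_stats('cumtime').print_stats()
```

Sorting by cumtime surfaces the functions at the top of expensive call chains; sorting by tottime surfaces the functions doing the expensive work themselves.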
From time to time when profiling code, you may also see something like this:
CPU times: user 2 µs, sys: 0 ns, total: 2 µs
Wall time: 5.72 µs
This means that the CPU actually performed calculations for 2 microseconds, while the program took a total of 5.72 microseconds to complete. If you’re ever confused about the difference, it is helpful to remember that ‘wall time’ really means ‘time as measured by a clock on the wall’.
Why are CPU time and wall time different? The gap usually comes from waiting: for example, the program sends a request across a network and must wait for a response before continuing, and during that wait the CPU does no work on the program’s behalf.
In this case, the pd.read_json() function sends a request to GitHub for the population statistics database and must wait for the request to be served before printing the strings specified by the program.
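You can observe the gap between the two clocks directly with the standard library's time module. Here is a small demonstration of my own, using time.sleep() to stand in for waiting on a network request:

```python
import time

start_wall = time.perf_counter()
start_cpu = time.process_time()    # counts only time the CPU spends on this process
time.sleep(0.2)                    # waiting, not computing: wall time grows, CPU time barely moves
wall_elapsed = time.perf_counter() - start_wall
cpu_elapsed = time.process_time() - start_cpu
print(f"Wall time: {wall_elapsed:.3f} s, CPU time: {cpu_elapsed:.3f} s")
```

Wall time comes out at a little over 0.2 seconds, while CPU time stays near zero, because sleeping (like waiting on GitHub to serve the request) consumes clock time but not processor time.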
Conclusion
In this article, I explained how to measure the performance of Python code using a profiler. I also included links to a GitHub repository with a profiler already built, and some prepopulated code for you to profile yourself.
One of my goals for 2020 is to create many tutorials on Python programming. If you’d like to suggest a tutorial, please email me. You can also join my mailing list to stay up-to-date on my latest articles.