A simple histogram can be a great first step in understanding a dataset. Earlier, we saw a preview of Matplotlib's histogram function (see Comparisons, Masks, and Boolean Logic), which creates a basic histogram in one line, once the normal boilerplate imports are done:
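The original code cell is not reproduced here; a minimal sketch of such a one-line histogram might look like this (the data array and seed are our own assumptions):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)      # hypothetical seed, for reproducibility
data = rng.standard_normal(1000)    # 1,000 draws from a standard normal

# one-line histogram with Matplotlib's defaults (10 equal-width bins)
counts, bins, patches = plt.hist(data)
```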
The hist() function has many options to tune both the calculation and the display;
here's an example of a more customized histogram:
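A customized call might look like the following sketch; the particular keyword values (bin count, color, transparency) are illustrative choices, not the only reasonable ones:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)      # hypothetical seed
data = rng.standard_normal(1000)

# normalize to a probability density, use 30 bins, and soften the style
counts, bins, _ = plt.hist(data, bins=30, density=True, alpha=0.5,
                           histtype='stepfilled', color='steelblue',
                           edgecolor='none')
```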
The plt.hist docstring has more information on other available customization options.
I find this combination of
histtype='stepfilled' along with some transparency
alpha to be very useful when comparing histograms of several distributions:
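For example, three overlapping distributions might be compared like this (the distribution parameters are invented for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
x1 = rng.normal(0, 0.8, 1000)    # three hypothetical distributions
x2 = rng.normal(-2, 1, 1000)
x3 = rng.normal(3, 2, 1000)

# shared style: filled steps with transparency so overlaps stay visible
kwargs = dict(histtype='stepfilled', alpha=0.3, density=True, bins=40)
plt.hist(x1, **kwargs)
plt.hist(x2, **kwargs)
plt.hist(x3, **kwargs)
```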
If you would like to simply compute the histogram (that is, count the number of points in a given bin) and not display it, the
np.histogram() function is available:
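A sketch of such a call (the data array and seed are assumptions; the bracketed counts printed after this cell came from one particular random draw, so your numbers will differ):

```python
import numpy as np

rng = np.random.default_rng(2)      # hypothetical seed
data = rng.standard_normal(1000)

# compute the binning only: counts per bin plus the bin edges
counts, bin_edges = np.histogram(data, bins=5)
print(counts)
```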
[ 12 190 468 301 29]
Just as we create histograms in one dimension by dividing the number-line into bins, we can also create histograms in two-dimensions by dividing points among two-dimensional bins.
We'll take a brief look at several ways to do this here.
We'll start by defining some data—an x array and y array drawn from a multivariate Gaussian distribution:
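A sketch of such data (the mean, covariance, and seed values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
mean = [0, 0]
cov = [[1, 1], [1, 2]]    # off-diagonal terms correlate x and y
x, y = rng.multivariate_normal(mean, cov, 10000).T
```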
plt.hist2d: Two-dimensional histogram
One straightforward way to plot a two-dimensional histogram is to use Matplotlib's plt.hist2d function:
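With x and y arrays like those above (redefined here so the sketch stands alone), a call might look like:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
x, y = rng.multivariate_normal([0, 0], [[1, 1], [1, 2]], 10000).T

# 30x30 grid of square bins, colored by count
counts, xedges, yedges, im = plt.hist2d(x, y, bins=30, cmap='Blues')
cb = plt.colorbar(im)
cb.set_label('counts in bin')
```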
Just as with plt.hist, plt.hist2d has a number of extra options to fine-tune the plot and the binning, which are nicely outlined in the function docstring.
Further, just as plt.hist has a counterpart in np.histogram, plt.hist2d has a counterpart in np.histogram2d, which can be used as follows:
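A sketch, again with locally defined data so it runs on its own:

```python
import numpy as np

rng = np.random.default_rng(3)
x, y = rng.multivariate_normal([0, 0], [[1, 1], [1, 2]], 10000).T

# compute the 2D binning without drawing anything
counts, xedges, yedges = np.histogram2d(x, y, bins=30)
```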
For the generalization of this histogram binning to dimensions higher than two, see the np.histogramdd function.
plt.hexbin: Hexagonal binnings
The two-dimensional histogram creates a tessellation of squares across the axes.
Another natural shape for such a tessellation is the regular hexagon.
For this purpose, Matplotlib provides the plt.hexbin routine, which represents a two-dimensional dataset binned within a grid of hexagons:
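A minimal sketch, using the same kind of correlated Gaussian data as before:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
x, y = rng.multivariate_normal([0, 0], [[1, 1], [1, 2]], 10000).T

# hexagonal bins; gridsize controls the number of hexagons across the x-axis
hb = plt.hexbin(x, y, gridsize=30, cmap='Blues')
plt.colorbar(label='count in bin')
```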
plt.hexbin has a number of interesting options, including the ability to specify weights for each point, and to change the output in each bin to any NumPy aggregate (mean of weights, standard deviation of weights, etc.).
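For instance, a per-point weight can be aggregated per hexagon via the C and reduce_C_function arguments; the weight used here is invented purely for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
x, y = rng.multivariate_normal([0, 0], [[1, 1], [1, 2]], 10000).T
w = x * y                            # hypothetical per-point weight

# color each hexagon by the mean weight of the points it contains
hb = plt.hexbin(x, y, C=w, reduce_C_function=np.mean,
                gridsize=30, cmap='viridis')
plt.colorbar(label='mean weight in bin')
```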
Another common method of evaluating densities in multiple dimensions is kernel density estimation (KDE).
This will be discussed more fully in In-Depth: Kernel Density Estimation, but for now we'll simply mention that KDE can be thought of as a way to "smear out" the points in space and add up the result to obtain a smooth function.
One extremely quick and simple KDE implementation exists in the scipy.stats package.
Here is a quick example of using the KDE on this data:
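A sketch along these lines, assuming SciPy's gaussian_kde and the same kind of two-dimensional Gaussian data as above (the grid limits are chosen by eye):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
x, y = rng.multivariate_normal([0, 0], [[1, 1], [1, 2]], 10000).T

# fit the KDE on an array of shape (n_dims, n_samples)
kde = gaussian_kde(np.vstack([x, y]))

# evaluate the smooth density estimate on a regular grid
xgrid = np.linspace(-3.5, 3.5, 40)
ygrid = np.linspace(-6, 6, 40)
Xgrid, Ygrid = np.meshgrid(xgrid, ygrid)
Z = kde.evaluate(np.vstack([Xgrid.ravel(), Ygrid.ravel()]))

# show the density as an image
plt.imshow(Z.reshape(Xgrid.shape), origin='lower', aspect='auto',
           extent=[-3.5, 3.5, -6, 6], cmap='Blues')
plt.colorbar(label='density')
```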