Example ADM analysis¶
This notebook introduces the ADMDatamart class and gives an overview of the currently implemented features in the Python version of CDH Tools. If you have any suggestions for new features, please do not hesitate to raise an issue on GitHub, or even better: create a pull request yourself!
This notebook builds upon the Getting Started guide.
Reading the data¶
Reading the data is quite simple. All you need to do is give a directory location to the ADMDatamart class and it will automatically detect the latest files and import them. There is also a convenience function to import the CDH Sample dataset directly from the internet, as you can see below:
[2]:
from pdstools import ADMDatamart, datasets
import polars as pl
CDHSample = datasets.cdh_sample()
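If you want to load your own datamart export instead, a minimal sketch could look like this (assuming a recent pdstools version that provides the from_ds_export constructor; the folder path is purely illustrative):

from pathlib import Path

# A sketch for reading your own data: point from_ds_export at the folder
# containing your datamart export files and it will pick up the latest ones
my_dm = ADMDatamart.from_ds_export(base_path=Path("~/Downloads").expanduser())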
Bubble Chart¶
Let's start with the bubble chart, which we can create by simply calling bubble_chart on the plot attribute of our main class.
[3]:
CDHSample.plot.bubble_chart()
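The plot methods return regular Plotly figures, so you can customise or export the chart with standard Plotly calls. For example (the title and file name here are arbitrary):

fig = CDHSample.plot.bubble_chart()
fig.update_layout(title="ADM model overview")  # any Plotly layout tweak works
fig.write_html("bubble_chart.html")  # save the interactive chart for sharing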
This looks like a healthy bubble plot, but sometimes it is useful to consider only certain models in the analysis. Note that the bubble chart automatically considers only the last snapshot by default, though this can be changed with a parameter.
To reduce the information, let’s only consider models with more than 500 responses within the CreditCards group.
[4]:
query = (pl.col("ResponseCount") > 500) & (pl.col("Group") == "CreditCards")
CDHSample.plot.bubble_chart(query=query)
Alternatively, we could only look at the top n best performing models within our query. To do this, we need to supply a list of model IDs, which we can easily extract from the data as shown below.
Note the alternative querying syntax used here, which was the default in the previous version of CDH Tools: if you want to subset a column by a list of values, you can simply supply a dictionary mapping the column name to that list, and only rows whose value for that column is in the list are kept.
[5]:
top30ids = (
CDHSample.aggregates.last()
.sort("Performance", descending=True)
.select("ModelID")
.head(30)
.collect()
.to_series()
.to_list()
)
CDHSample.plot.bubble_chart(query={"ModelID": top30ids})
The bubble chart gives some information about which models perform well, but that is not always informative: if we don't know in which channels, issues or groups the problems lie, then we may not be looking in the right place. This is where the Treemap visualisation comes in handy.
[6]:
CDHSample.plot.tree_map()
By default the Treemap shows the weighted performance, where the performance is weighted by the response count. The squares represent Model IDs: the larger a square, the more model IDs are within that combination of context keys. We can also color the Treemap by another variable, such as the SuccessRate:
[7]:
CDHSample.plot.tree_map("SuccessRate")
Similar to the response counts, the success rate over time can also be of interest. With 'over_time', you can plot the success rate of different models as it develops over time.
[8]:
CDHSample.plot.over_time("SuccessRate", by="ModelID", query=pl.col("Channel") == "Web")
And if the trend over time is not of interest, there is also 'proposition_success_rates', which by default considers the last state of the models and plots a histogram of their success rates.
[9]:
CDHSample.plot.proposition_success_rates(query=pl.col("Channel") == "Web")
If we want to look at the distribution of responses and their propensities for a given model, we can subset that model and call score_distribution. Note that here we subset the model by its ID.
[10]:
CDHSample.plot.score_distribution(model_id="08ca1302-9fc0-57bf-9031-d4179d400493")
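If you don't know a model's ID, you can look it up in the last snapshot first. A small sketch, reusing the querying pattern from above (the name "HomeOwners" is just an example from this dataset):

# Find the ModelID(s) behind a model name; "HomeOwners" is illustrative
homeowner_ids = (
    CDHSample.aggregates.last()
    .filter(pl.col("Name") == "HomeOwners")
    .select("ModelID")
    .collect()
    .to_series()
    .to_list()
)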
Alternatively, we can also subset models by their name, and then drill down further by group, issue, channel and configuration; multiple_score_distributions then shows the score distribution for each matching model. See the example below.
[11]:
CDHSample.plot.multiple_score_distributions(
query=(pl.col("Name") == "HomeOwners")
& (pl.col("Group") == "Bundles")
& (pl.col("Issue") == "Sales")
& (pl.col("Channel") == "Web")
& (pl.col("Configuration") == "OmniAdaptiveModel"),
show_all=True,
);
Similarly, we can also display the distribution of a single predictor and its binning. This function loops through each predictor of a model and generates the binning image for that predictor. For that reason we recommend subsetting the predictor names ahead of time; otherwise, depending on how many predictors the model has, a lot of images will be generated.
[12]:
CDHSample.plot.multiple_predictor_binning(
model_id="08ca1302-9fc0-57bf-9031-d4179d400493",
query=(
pl.col("PredictorName").is_in(
[
"Customer.Age",
"Customer.AnnualIncome",
"IH.Email.Outbound.Accepted.pxLastGroupID",
]
)
),
show_all=True,
);
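To decide which predictors to subset on, you can first list the distinct predictor names present in the data. A quick sketch, assuming the predictor-level snapshot is exposed as the predictor_data LazyFrame:

# List the distinct predictor names to pick a subset from; assumes the
# predictor-level data is available as the predictor_data attribute
predictor_names = (
    CDHSample.predictor_data.select("PredictorName")
    .unique()
    .collect()
    .to_series()
    .sort()
    .to_list()
)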
Alternatively, we can look at the performance of predictors across multiple models. Again, we recommend subsetting the predictor names with a list to keep the plot legible.
[13]:
CDHSample.plot.predictor_performance(
query=pl.col("PredictorName").is_in(
[
"Customer.Age",
"Customer.AnnualIncome",
"IH.Email.Outbound.Accepted.pxLastGroupID",
]
)
)
What the two previous visualisations could not represent very well is how predictor performance varies across different models. That is what the predictor_performance_heatmap function does; again, subsetting the predictors is a recommended step.
[14]:
CDHSample.plot.predictor_performance_heatmap(
query=pl.col("PredictorName").is_in(
[
"Customer.Age",
"Customer.AnnualIncome",
"IH.Email.Outbound.Accepted.pxLastGroupID",
]
)
)
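Finally, if you would rather work with the numbers behind a visualisation than with the figure itself, most plot methods accept a return_df argument that returns the aggregated data instead of the chart (check that your pdstools version supports it). A sketch using the heatmap as an example:

# Retrieve the aggregated data behind the heatmap instead of the figure;
# return_df is assumed to be supported here, as on most plot methods
heatmap_data = CDHSample.plot.predictor_performance_heatmap(
    query=pl.col("PredictorName").is_in(["Customer.Age", "Customer.AnnualIncome"]),
    return_df=True,
)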