pdstools

Pega Data Scientist Tools Python library

Submodules

Classes

ADMDatamart

Monitor and analyze ADM data from the Pega Datamart.

IH

ImpactAnalyzer

Prediction

Monitor Pega Prediction Studio Predictions

ValueFinder

Analyze the Value Finder dataset for detailed insights

Functions

read_ds_export(→ Optional[polars.LazyFrame])

Read in most out of the box Pega dataset export formats

default_predictor_categorization(→ polars.Expr)

Function to determine the 'category' of a predictor.

cdh_sample(→ pdstools.adm.ADMDatamart.ADMDatamart)

Import a sample dataset from the CDH Sample application

sample_value_finder(...)

Import a sample dataset of a Value Finder simulation

show_versions(…)

Get a list of currently installed versions of pdstools and its dependencies.

Package Contents

class ADMDatamart(model_df: polars.LazyFrame | None = None, predictor_df: polars.LazyFrame | None = None, *, query: pdstools.utils.types.QUERY | None = None, extract_pyname_keys: bool = True)

Monitor and analyze ADM data from the Pega Datamart.

To initialize this class, either:

  1. Initialize directly with the model_df and predictor_df polars LazyFrames

  2. Use one of the class methods: from_ds_export, from_s3, from_dataflow_export etc.

This class will read in the data from different sources, properly structure it for further analysis, and apply correct typing and useful renaming.

There are also a few “namespaces” that you can call from this class:

  • .plot contains ready-made plots to analyze the data with

  • .aggregates contains mostly internal data aggregation queries

  • .agb contains analysis utilities for Adaptive Gradient Boosting models

  • .generate leads to some ready-made reports, such as the Health Check

  • .bin_aggregator allows you to compare the bins across various models

Parameters:
  • model_df (pl.LazyFrame, optional) – The Polars LazyFrame representation of the model snapshot table.

  • predictor_df (pl.LazyFrame, optional) – The Polars LazyFrame representation of the predictor binning table.

  • query (QUERY, optional) – An optional query to apply to the input data. For details, see pdstools.utils.cdh_utils._apply_query().

  • extract_pyname_keys (bool, default = True) – Whether to extract extra keys from the pyName column. In older Pega versions, this contained pyTreatment among other (customizable) fields. By default True

Examples

>>> import polars as pl
>>> from pdstools import ADMDatamart
>>> from glob import glob
>>> dm = ADMDatamart(
         model_df = pl.scan_parquet('models.parquet'),
         predictor_df = pl.scan_parquet('predictors.parquet'),
         query = {"Configuration": ["Web_Click_Through"]}
         )
>>> dm = ADMDatamart.from_ds_export(base_path='/my_export_folder')
>>> dm = ADMDatamart.from_s3("pega_export")
>>> dm = ADMDatamart.from_dataflow_export(glob("data/models*"), glob("data/preds*"))

Note

This class depends on two datasets:

  • pyModelSnapshots corresponds to the model_data attribute

  • pyADMPredictorSnapshots corresponds to the predictor_data attribute

For instructions on how to download these datasets, please refer to the following article: https://docs.pega.com/bundle/platform/page/platform/decision-management/exporting-monitoring-database.html

See also

pdstools.adm.Plots

The out of the box plots on the Datamart data

pdstools.adm.Reports

Methods to generate the Health Check and Model Report

pdstools.utils.cdh_utils._apply_query

How to query the ADMDatamart class and methods

model_data: polars.LazyFrame | None
predictor_data: polars.LazyFrame | None
combined_data: polars.LazyFrame | None
plot: pdstools.adm.Plots.Plots
aggregates: pdstools.adm.Aggregates.Aggregates
agb: pdstools.adm.ADMTrees.AGB
generate: pdstools.adm.Reports.Reports
cdh_guidelines: pdstools.adm.CDH_Guidelines.CDHGuidelines
bin_aggregator: pdstools.adm.BinAggregator.BinAggregator
first_action_dates: polars.LazyFrame | None
context_keys: List[str] = ['Channel', 'Direction', 'Issue', 'Group', 'Name']
_get_first_action_dates(df: polars.LazyFrame | None) → polars.LazyFrame
Parameters:

df (Optional[polars.LazyFrame])

Return type:

polars.LazyFrame

classmethod from_ds_export(model_filename: str | None = None, predictor_filename: str | None = None, base_path: os.PathLike | str = '.', *, query: pdstools.utils.types.QUERY | None = None, extract_pyname_keys: bool = True)

Import the ADMDatamart class from a Pega Dataset Export

Parameters:
  • model_filename (Optional[str], optional) – The full path or name (if base_path is given) to the model snapshot files, by default None

  • predictor_filename (Optional[str], optional) – The full path or name (if base_path is given) to the predictor binning snapshot files, by default None

  • base_path (Union[os.PathLike, str], optional) – A base path to provide so that we can automatically find the most recent files for both the model and predictor snapshots, if model_filename and predictor_filename are not given as full paths, by default “.”

  • query (Optional[QUERY], optional) – An optional argument to filter out selected data, by default None

  • extract_pyname_keys (bool, optional) – Whether to extract additional keys from the pyName column, by default True

Returns:

The properly initialized ADMDatamart class

Return type:

ADMDatamart

Examples

>>> from pdstools import ADMDatamart
>>> # To automatically find the most recent files in the 'my_export_folder' dir:
>>> dm = ADMDatamart.from_ds_export(base_path='/my_export_folder')
>>> # To specify individual files:
>>> dm = ADMDatamart.from_ds_export(
        model_filename='/Downloads/model_snapshots.parquet',
        predictor_filename='/Downloads/predictor_snapshots.parquet'
        )

Note

By default, the dataset export in Infinity returns a zip file per table. You do not need to open up this zip file! You can simply point to the zip, and this method will be able to read in the underlying data.
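That convenience boils down to reading members straight out of the archive. A minimal stdlib sketch of the idea (the file name and fields below are made up for illustration; real exports may contain differently named or compressed members):

```python
import io
import json
import zipfile

# Build a small zipped "export" in memory to stand in for a dataset export.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("data.json", json.dumps([{"ModelID": "abc", "Positives": 10}]))

# Read the JSON member straight from the zip, without unpacking to disk.
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    with zf.open("data.json") as member:
        records = json.load(member)

print(records[0]["ModelID"])  # abc
```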

See also

pdstools.pega_io.File.read_ds_export

More information on file compatibility

pdstools.utils.cdh_utils._apply_query

How to query the ADMDatamart class and methods

classmethod from_s3()

Not implemented yet. Please let us know if you would like this functionality!

classmethod from_dataflow_export(model_data_files: Iterable[str] | str, predictor_data_files: Iterable[str] | str, *, query: pdstools.utils.types.QUERY | None = None, extract_pyname_keys: bool = True, cache_file_prefix: str = '', extension: Literal['json'] = 'json', compression: Literal['gzip'] = 'gzip', cache_directory: os.PathLike | str = 'cache')

Read in data generated by a data flow, such as the Prediction Studio export.

Dataflows are able to export data from and to various sources. As they are meant to be used in production, they are highly resilient. For every partition and every node, a dataflow will output a small json file every few seconds. While this is great for production loads, it can be trickier to read in the data for smaller-scale and ad-hoc analyses.

This method aims to make the ingestion of such highly partitioned data easier. It reads in every individual small json file that the dataflow has output, and caches them to a parquet file in the cache_directory folder. As such, if you re-run this method later with more data added since the last export, we will not read in from the (slow) dataflow files, but rather from the (much faster) cache.
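The read-once-then-cache flow can be sketched with just the standard library. pdstools itself caches to parquet via Polars; the sketch below uses plain JSON and an invented read_with_cache helper to show only the shape of the idea:

```python
import json
from pathlib import Path

def read_with_cache(part_files, cache_path):
    """Merge many small JSON part files, caching the combined result.

    On the first call every part file is parsed and the combined records
    are written to cache_path; subsequent calls skip the slow per-part
    reads and load the single cached file instead.
    """
    cache = Path(cache_path)
    if cache.exists():
        return json.loads(cache.read_text())
    records = []
    for part in part_files:
        records.extend(json.loads(Path(part).read_text()))
    cache.write_text(json.dumps(records))
    return records
```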

Parameters:
  • model_data_files (Union[Iterable[str], str]) – A list of files to read in as the model snapshots

  • predictor_data_files (Union[Iterable[str], str]) – A list of files to read in as the predictor snapshots

  • query (Optional[QUERY], optional) – An optional argument to filter out selected data, by default None

  • extract_pyname_keys (bool, optional) – Whether to extract extra keys from the pyName column, by default True

  • cache_file_prefix (str, optional) – An optional prefix for the cache files, by default “”

  • extension (Literal["json"], optional) – The extension of the source data, by default “json”

  • compression (Literal["gzip"], optional) – The compression of the source files, by default “gzip”

  • cache_directory (Union[os.PathLike, str], optional) – Where to store the cached files, by default “cache”

Returns:

An initialized instance of the datamart class

Return type:

ADMDatamart

Examples

>>> from pdstools import ADMDatamart
>>> import glob
>>> dm = ADMDatamart.from_dataflow_export(glob("data/models*"), glob("data/preds*"))

See also

pdstools.utils.cdh_utils._apply_query

How to query the ADMDatamart class and methods

glob

Makes creating lists of files much easier

classmethod from_pdc(df: polars.LazyFrame, return_df=False)
Parameters:

df (polars.LazyFrame)

_validate_model_data(df: polars.LazyFrame | None, extract_pyname_keys: bool = True) → polars.LazyFrame | None

Internal method to validate model data

Parameters:
  • df (Optional[polars.LazyFrame])

  • extract_pyname_keys (bool)

Return type:

Optional[polars.LazyFrame]

_validate_predictor_data(df: polars.LazyFrame | None) → polars.LazyFrame | None

Internal method to validate predictor data

Parameters:

df (Optional[polars.LazyFrame])

Return type:

Optional[polars.LazyFrame]

apply_predictor_categorization(df: polars.LazyFrame | None = None, categorization: polars.Expr | Callable[Ellipsis, polars.Expr] = cdh_utils.default_predictor_categorization)

Apply a new predictor categorization to the datamart tables

In certain plots, we use the predictor categorization to indicate what ‘kind’ a certain predictor is, such as IH, Customer, etc. Call this method with a custom Polars Expression (or a method that returns one) - and it will be applied to the predictor data (and the combined dataset too).

For a reference implementation of a custom predictor categorization, refer to pdstools.utils.cdh_utils.default_predictor_categorization.

Parameters:
  • df (Optional[pl.LazyFrame], optional) – A Polars Lazyframe to apply the categorization to. If not provided, applies it over the predictor data and combined datasets. By default, None

  • categorization (Union[pl.Expr, Callable[..., pl.Expr]]) – A polars Expression (or method that returns one) to apply the mapping with. Should be based on Polars’ when.then.otherwise syntax. By default, pdstools.utils.cdh_utils.default_predictor_categorization

Examples

>>> dm = ADMDatamart(my_data) # uses the OOTB predictor categorization
>>> dm.apply_predictor_categorization(categorization=pl.when(
...     pl.col("PredictorName").cast(pl.Utf8).str.contains("Propensity")
... ).then(pl.lit("External Model")
... ).otherwise(pl.lit("Adaptive Model")))
>>> # Now, every subsequent plot will use the custom categorization

save_data(path: os.PathLike | str = '.', selected_model_ids: List[str] | None = None) → Tuple[pathlib.Path | None, pathlib.Path | None]

Caches model_data and predictor_data to files.

Parameters:
  • path (str) – Where to place the files

  • selected_model_ids (List[str]) – Optional list of model IDs to restrict to

Returns:

The paths to the model and predictor data files

Return type:

(Optional[Path], Optional[Path])

property unique_channels

A consistently ordered set of unique channels in the data

Used for making the color schemes in different plots consistent

property unique_configurations

A consistently ordered set of unique configurations in the data

Used for making the color schemes in different plots consistent

property unique_channel_direction

A consistently ordered set of unique channel+direction combinations in the data. Used for making the color schemes in different plots consistent

property unique_configuration_channel_direction

A consistently ordered set of unique configuration+channel+direction combinations in the data. Used for making the color schemes in different plots consistent

property unique_predictor_categories

A consistently ordered set of unique predictor categories in the data. Used for making the color schemes in different plots consistent

classmethod _minMaxScoresPerModel(bin_data: polars.LazyFrame) → polars.LazyFrame
Parameters:

bin_data (polars.LazyFrame)

Return type:

polars.LazyFrame

active_ranges(model_ids: str | List[str] | None = None) → polars.LazyFrame

Calculate the active, reachable bins in classifiers.

The classifiers exported by Pega contain (in certain product versions) more than the bins that can be reached given the current state of the predictors. This method first calculates the min and max score range from the predictor log odds, then maps that to the interval boundaries of the classifier(s) to find the min and max index.

It returns a LazyFrame with the score min/max, the min/max index, as well as the AUC as reported in the datamart data, when calculated from the full range, and when calculated from the reachable bins only.

This information can be used in the Health Check documents or when verifying the AUC numbers from the datamart.

Parameters:

model_ids (Optional[Union[str, List[str]]], optional) – An optional list of model id’s, or just a single one, to report on. When not given, the information is returned for all models.

Returns:

A table with all the index and AUC information for all the models with the following fields:

Model Identification:

  • ModelID - The unique identifier for the model

AUC Metrics:

  • AUC_Datamart - The AUC value as reported in the datamart

  • AUC_FullRange - The AUC calculated from the full range of bins in the classifier

  • AUC_ActiveRange - The AUC calculated from only the active/reachable bins

Classifier Information:

  • Bins - The total number of bins in the classifier

  • nActivePredictors - The number of active predictors in the model

Log Odds Information (mostly for internal use):

  • classifierLogOffset - The log offset of the classifier (baseline log odds)

  • sumMinLogOdds - The sum of minimum log odds across all active predictors

  • sumMaxLogOdds - The sum of maximum log odds across all active predictors

  • score_min - The minimum score (normalized sum of log odds including classifier offset)

  • score_max - The maximum score (normalized sum of log odds including classifier offset)

Active Range Information:

  • idx_min - The minimum bin index that can be reached given the current binning of all predictors

  • idx_max - The maximum bin index that can be reached given the current binning of all predictors

Return type:

pl.LazyFrame
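The score-range-to-bin-index mapping described above can be sketched in plain Python. The normalization used here (dividing the summed log odds plus offset by one more than the number of active predictors) is an assumption for illustration; the authoritative computation lives in this method:

```python
from bisect import bisect_right

def reachable_bins(min_log_odds, max_log_odds, offset, boundaries):
    """Map the attainable score range onto classifier bin indices.

    min_log_odds / max_log_odds: per-predictor extreme log odds.
    offset: the classifier's log offset (baseline log odds).
    boundaries: ascending interval boundaries of the classifier bins.
    """
    n = len(min_log_odds)
    score_min = (offset + sum(min_log_odds)) / (1 + n)
    score_max = (offset + sum(max_log_odds)) / (1 + n)

    def bin_index(score):
        # Bin i covers [boundaries[i], boundaries[i + 1]); clamp to valid bins.
        return min(max(bisect_right(boundaries, score) - 1, 0), len(boundaries) - 2)

    return bin_index(score_min), bin_index(score_max)
```

Bins outside the returned index range can never be reached, which is why an AUC computed over the full range can differ from one computed over the active range only.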

class IH(data: polars.LazyFrame)
Parameters:

data (polars.LazyFrame)

data: polars.LazyFrame
positive_outcome_labels: Dict[str, List[str]]
aggregates
plot
negative_outcome_labels
classmethod from_ds_export(ih_filename: os.PathLike | str, query: pdstools.utils.types.QUERY | None = None)

Create an IH instance from a file with Pega Dataset Export

Parameters:
  • ih_filename (Union[os.PathLike, str]) – The full path to the dataset files

  • query (Optional[QUERY], optional) – An optional argument to filter out selected data, by default None

Returns:

The properly initialized IH object

Return type:

IH

classmethod from_s3()

Not implemented yet. Please let us know if you would like this functionality!

classmethod from_mock_data(days=90, n=100000)

Initialize an IH instance with sample data

Parameters:
  • days (int, optional) – The number of days of data to generate, by default 90

  • n (int, optional) – The number of interaction data records, by default 100000

Returns:

The properly initialized IH object

Return type:

IH

get_sequences(positive_outcome_label: str, level: str, outcome_column: str, customerid_column: str) → tuple[list[tuple[str, Ellipsis]], list[tuple[int, Ellipsis]], list[collections.defaultdict[tuple[str], int]], list[collections.defaultdict[tuple[str, Ellipsis], int]]]

Generates customer sequences, outcome labels, and the counts needed for PMI (Pointwise Mutual Information) calculations.

This function processes customer interaction data to produce:

  1. Action sequences per customer.

  2. Corresponding binary outcome sequences (1 for positive outcome, 0 otherwise).

  3. Counts of bigrams and ≥3-grams that end with a positive outcome.

  4. Counts of all possible bigrams within that corpus.

Parameters:
  • positive_outcome_label (str) – The outcome label that marks the final event in a sequence.

  • level (str) – Column name that contains the action (offer / treatment).

  • outcome_column (str) – Column name that contains the outcome label.

  • customerid_column (str) – Column name that identifies a unique customer / subject.

Returns:

  • customer_sequences (list[tuple[str, …]]) – Sequences of actions per customer.

  • customer_outcomes (list[tuple[int, …]]) – Binary outcomes (0 or 1) for each customer action sequence.

  • count_actions (list[defaultdict[tuple[str], int]]) – Action frequency counts. Index 0 = count of the first element in all bigrams; Index 1 = count of the second element in all bigrams.

  • count_sequences (list[defaultdict[tuple[str, …], int]]) – Sequence frequency counts. Index 0 = all bigrams; Index 1 = ≥3-grams that end with a positive outcome; Index 2 = bigrams that end with a positive outcome; Index 3 = unique n-grams per customer.

Return type:

tuple[list[tuple[str, Ellipsis]], list[tuple[int, Ellipsis]], list[collections.defaultdict[tuple[str], int]], list[collections.defaultdict[tuple[str, Ellipsis], int]]]
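The first two outputs (per-customer action sequences and their binary outcomes) and the bigram counts can be illustrated on a tiny, made-up interaction log (the record layout and outcome labels here are assumptions, not IH's actual schema):

```python
from collections import defaultdict

# Hypothetical interaction records: (customer, action, outcome).
events = [
    ("c1", "OfferA", "Impression"),
    ("c1", "OfferB", "Accepted"),
    ("c2", "OfferA", "Impression"),
    ("c2", "OfferA", "Impression"),
]

# 1. Action sequences per customer, in event order.
sequences = defaultdict(list)
# 2. Matching binary outcome sequences (1 = positive outcome).
outcomes = defaultdict(list)
for customer, action, outcome in events:
    sequences[customer].append(action)
    outcomes[customer].append(1 if outcome == "Accepted" else 0)

# Bigram counts across all customers' sequences.
bigram_counts = defaultdict(int)
for actions in sequences.values():
    for pair in zip(actions, actions[1:]):
        bigram_counts[pair] += 1

print(dict(bigram_counts))  # {('OfferA', 'OfferB'): 1, ('OfferA', 'OfferA'): 1}
```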

static calculate_pmi(count_actions: list[collections.defaultdict[tuple[str], int]], count_sequences: list[collections.defaultdict[tuple[str, Ellipsis], int]]) → tuple[dict[tuple[str, str], float], dict[tuple[str, Ellipsis], float]]

Computes PMI scores for n-grams (n ≥ 2) in customer action sequences. Returns an unsorted dictionary mapping sequences to their PMI values, providing insight into significant action associations.

Bigram values are calculated directly by PMI. N-gram values are computed by averaging the PMI of their constituent bigrams. Higher values indicate more informative or surprising paths.

Parameters:
  • count_actions (list[defaultdict[tuple[str], int]]) – Action frequency counts. Index 0 = count of the first element in all bigrams; Index 1 = count of the second element in all bigrams.

  • count_sequences (list[defaultdict[tuple[str, …], int]]) – Sequence frequency counts. Index 0 = all bigrams; Index 1 = ≥3-grams that end with a positive outcome; Index 2 = bigrams that end with a positive outcome; Index 3 = unique n-grams per customer.

Returns:

ngrams_pmi – Dictionary containing PMI information for bigrams and n-grams. For bigrams, the value is a float representing the PMI value. For higher-order n-grams, the value is a dictionary with:

  • ’average_pmi’: The average PMI value.

  • ’links’: A dictionary mapping each constituent bigram to its PMI value.

Return type:

dict[tuple[str, …], float | dict[str, float | dict[tuple[str, str], float]]]
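The underlying formula is the classic pointwise mutual information, pmi(x, y) = log(p(x, y) / (p(x)·p(y))), with longer n-grams scored by the average PMI of their constituent bigrams. A minimal sketch (the count bookkeeping in the real method carries more indices than shown here):

```python
import math

def bigram_pmi(first_counts, second_counts, bigram_counts):
    """PMI per bigram: log of p(x, y) / (p(x) * p(y)), estimated from counts."""
    total = sum(bigram_counts.values())
    return {
        (x, y): math.log(
            (count / total) / ((first_counts[x] / total) * (second_counts[y] / total))
        )
        for (x, y), count in bigram_counts.items()
    }

def ngram_avg_pmi(ngram, pmi):
    """Score an n-gram by averaging the PMI of its consecutive bigram links."""
    links = list(zip(ngram, ngram[1:]))
    return sum(pmi[link] for link in links) / len(links)
```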

static pmi_overview(ngrams_pmi: Dict[str, Dict[str, Dict[str, float] | float]], count_sequences: list[collections.defaultdict[tuple[str, Ellipsis], int]], customer_sequences: list[tuple[str, Ellipsis]], customer_outcomes: list[tuple[int, Ellipsis]]) → polars.DataFrame

Analyzes customer sequences to identify patterns linked to positive outcomes. Returns a sorted Polars DataFrame of significant n-grams.

Parameters:
  • ngrams_pmi (dict[tuple[str, ...], float | dict[str, float | dict[tuple[str, str], float]]]) –

    Dictionary containing PMI information for bigrams and n-grams. For bigrams, the value is a float representing the PMI value. For higher-order n-grams, the value is a dictionary with:

    • ’average_pmi’: The average PMI value.

    • ’links’: A dictionary mapping each constituent bigram to its PMI value.

  • count_sequences (list[defaultdict[tuple[str, ...], int]]) – Sequence frequency counts. Index 1 = ≥3-grams ending in positive outcome. Index 2 = bigrams ending in positive outcome.

  • customer_sequences (list[tuple[str, ...]]) – Sequences of actions per customer.

  • customer_outcomes (list[tuple[int, ...]]) – Binary outcomes (0 or 1) for each customer action sequence.

Returns:

DataFrame containing:

  • ‘Sequence’: the action sequence

  • ‘Length’: number of actions

  • ‘Avg PMI’: average PMI value

  • ‘Frequency’: number of times the sequence appears

  • ‘Unique freq’: number of unique customers who had this sequence ending in a positive outcome

  • ‘Score’: Avg PMI x log(Frequency), sorted descending

Return type:

pl.DataFrame

class ImpactAnalyzer(raw_data: polars.LazyFrame)
Parameters:

raw_data (polars.LazyFrame)

ia_data: polars.LazyFrame
default_ia_experiments
default_ia_controlgroups
plot
classmethod from_pdc(pdc_source: os.PathLike | str | dict, *, query: pdstools.utils.types.QUERY | None = None, return_input_df: bool | None = False, return_df: bool | None = False)

Create an ImpactAnalyzer instance from a PDC file

Parameters:
  • pdc_source (Union[os.PathLike, str, dict]) – The full path to the PDC file, or the PDC data already parsed into a dict

  • query (Optional[QUERY], optional) – An optional argument to filter out selected data, by default None

  • return_input_df (Optional[bool], optional) – Debugging option to return the wide data from the raw JSON file as a DataFrame, by default False

  • return_df (Optional[bool], optional) – Returns the processed input data as a DataFrame. Multiple of these can be stacked up and used to initialize the ImpactAnalyzer class, by default False

Returns:

The properly initialized ImpactAnalyzer object

Return type:

ImpactAnalyzer

classmethod _from_pdc_json(json_data: dict, *, query: pdstools.utils.types.QUERY | None = None, return_input_df: bool | None = False, return_df: bool | None = False)

Internal method to create an ImpactAnalyzer instance from PDC JSON data

The PDC data is really structured as a list of experiments: control group A vs control group B. There is no explicit indicator whether the B’s are really the same customers or not. The PDC data also contains a lot of UI-related information that is not necessary.

We turn this data into a series of control groups with just counts of impressions and accepts. This does rely on a few implicit assumptions.

Parameters:
  • json_data (dict)

  • query (Optional[pdstools.utils.types.QUERY])

  • return_input_df (Optional[bool])

  • return_df (Optional[bool])

summary_by_channel() → polars.LazyFrame

Summarization of the experiments in Impact Analyzer split by Channel.

Returns:

Summary across all running Impact Analyzer experiments as a dataframe with the following fields:

Channel Identification:

  • Channel: The channel name

Performance Metrics:

  • CTR_Lift Adaptive Models vs Random Propensity: Lift in Engagement when testing prioritization with just Adaptive Models vs just Random Propensity

  • CTR_Lift NBA vs No Levers: Lift in Engagement for the full NBA Framework as configured vs prioritization without levers (only p, V and C)

  • CTR_Lift NBA vs Only Eligibility Rules: Lift in Engagement for the full NBA Framework as configured vs Only Eligibility policies applied (no Applicability or Suitability, and prioritized with pVCL)

  • CTR_Lift NBA vs Propensity Only: Lift in Engagement for the full NBA Framework as configured vs prioritization with model propensity only (no V, C or L)

  • CTR_Lift NBA vs Random: Lift in Engagement for the full NBA Framework as configured vs a Random eligible action (all engagement policies but randomly prioritized)

  • Value_Lift Adaptive Models vs Random Propensity: Lift in Expected Value when testing prioritization with just Adaptive Models vs just Random Propensity

  • Value_Lift NBA vs No Levers: Lift in Expected Value for the full NBA Framework as configured vs prioritization without levers (only p, V and C)

  • Value_Lift NBA vs Only Eligibility Rules: Lift in Expected Value for the full NBA Framework as configured vs Only Eligibility policies applied (no Applicability or Suitability, and prioritized with pVCL)

  • Value_Lift NBA vs Propensity Only: Lift in Expected Value for the full NBA Framework as configured vs prioritization with model propensity only (no V, C or L)

  • Value_Lift NBA vs Random: Lift in Expected Value for the full NBA Framework as configured vs a Random eligible action (all engagement policies but randomly prioritized)

Return type:

pl.LazyFrame
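The lift figures above are relative CTR (or value) lifts between an experiment arm and its control arm. The sketch below shows the textbook formula only; whether Impact Analyzer applies additional weighting is not shown here:

```python
def ctr(positives, negatives):
    """Clickthrough rate: positives over all responses."""
    return positives / (positives + negatives)

def ctr_lift(test_pos, test_neg, control_pos, control_neg):
    """Relative lift of the test group's CTR over the control group's CTR."""
    return (ctr(test_pos, test_neg) - ctr(control_pos, control_neg)) / ctr(
        control_pos, control_neg
    )

# e.g. an adaptive-model arm at 3% CTR vs a random-propensity arm at 2% CTR:
print(ctr_lift(30, 970, 20, 980))  # ≈ 0.5, i.e. about +50% engagement
```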

overall_summary() → polars.LazyFrame

Summarization of the experiments in Impact Analyzer.

Returns:

Summary across all running Impact Analyzer experiments as a dataframe with the following fields:

Performance Metrics:

  • CTR_Lift Adaptive Models vs Random Propensity: Lift in Engagement when testing prioritization with just Adaptive Models vs just Random Propensity

  • CTR_Lift NBA vs No Levers: Lift in Engagement for the full NBA Framework as configured vs prioritization without levers (only p, V and C)

  • CTR_Lift NBA vs Only Eligibility Rules: Lift in Engagement for the full NBA Framework as configured vs Only Eligibility policies applied (no Applicability or Suitability, and prioritized with pVCL)

  • CTR_Lift NBA vs Propensity Only: Lift in Engagement for the full NBA Framework as configured vs prioritization with model propensity only (no V, C or L)

  • CTR_Lift NBA vs Random: Lift in Engagement for the full NBA Framework as configured vs a Random eligible action (all engagement policies but randomly prioritized)

  • Value_Lift Adaptive Models vs Random Propensity: Lift in Expected Value when testing prioritization with just Adaptive Models vs just Random Propensity

  • Value_Lift NBA vs No Levers: Lift in Expected Value for the full NBA Framework as configured vs prioritization without levers (only p, V and C)

  • Value_Lift NBA vs Only Eligibility Rules: Lift in Expected Value for the full NBA Framework as configured vs Only Eligibility policies applied (no Applicability or Suitability, and prioritized with pVCL)

  • Value_Lift NBA vs Propensity Only: Lift in Expected Value for the full NBA Framework as configured vs prioritization with model propensity only (no V, C or L)

  • Value_Lift NBA vs Random: Lift in Expected Value for the full NBA Framework as configured vs a Random eligible action (all engagement policies but randomly prioritized)

Return type:

pl.LazyFrame

summarize_control_groups(by: List[str] | str | None = None, drop_internal_cols=True) → polars.LazyFrame
Parameters:

by (Optional[Union[List[str], str]])

Return type:

polars.LazyFrame

summarize_experiments(by: List[str] | str | None = None) → polars.LazyFrame
Parameters:

by (Optional[Union[List[str], str]])

Return type:

polars.LazyFrame

read_ds_export(filename: str | io.BytesIO, path: str | os.PathLike = '.', verbose: bool = False, **reading_opts) → polars.LazyFrame | None

Read in most out of the box Pega dataset export formats. Accepts one of the following formats:

  • .csv

  • .json

  • .zip (zipped json or CSV)

  • .feather

  • .ipc

  • .parquet

It automatically infers the default file names for both model data as well as predictor data. If you supply either ‘modelData’ or ‘predictorData’ as the ‘filename’ argument, it will search for them. If you supply the full name of the file in the ‘path’ directory, it will import that instead. Since pdstools V3.x, this function returns a Polars LazyFrame. Simply call .collect() to get an eager frame.

Parameters:
  • filename (Union[str, BytesIO]) – Can be one of the following:

    • A string with the full path to the file

    • A string with the name of the file (to be searched in the given path)

    • A BytesIO object containing the file data (e.g., from an uploaded file in a webapp)

  • path (str, default = '.') – The location of the file

  • verbose (bool, default = False) – Whether to print out which file will be imported

Keyword Arguments:

Any – Any arguments to plug into the scan_* function from Polars.

Returns:

The (lazy) dataframe

Return type:

Optional[polars.LazyFrame]

Examples

>>> df = read_ds_export(filename='full/path/to/ModelSnapshot.json')
>>> df = read_ds_export(filename='ModelSnapshot.json', path='data/ADMData')
>>> df = read_ds_export(filename=uploaded_file) # Where uploaded_file is a BytesIO object
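Internally, picking a reader comes down to a dispatch on the file extension. A stdlib sketch of that idea (the mapping below is illustrative, not pdstools' actual dispatch table):

```python
from pathlib import Path

# Illustrative extension-to-reader mapping; the real implementation
# dispatches to the corresponding Polars scan_*/read_* functions.
READERS = {
    ".csv": "scan_csv",
    ".json": "read_json",
    ".parquet": "scan_parquet",
    ".feather": "scan_ipc",
    ".ipc": "scan_ipc",
    ".zip": "zipped csv/json",
}

def infer_reader(filename):
    """Pick a reader name based on the file's extension (None if unknown)."""
    return READERS.get(Path(filename).suffix.lower())

print(infer_reader("ModelSnapshot.parquet"))  # scan_parquet
```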

class Prediction(df: polars.LazyFrame, *, query: pdstools.utils.types.QUERY | None = None)

Monitor Pega Prediction Studio Predictions

Parameters:
  • df (polars.LazyFrame)

  • query (Optional[pdstools.utils.types.QUERY])

predictions: polars.LazyFrame
plot: PredictionPlots
prediction_validity_expr
cdh_guidelines
classmethod from_pdc(df: polars.LazyFrame, return_df=False)
Parameters:

df (polars.LazyFrame)

static from_mock_data(days=70)
property is_available: bool
Return type:

bool

property is_valid: bool
Return type:

bool

summary_by_channel(custom_predictions: List[List] | None = None, *, start_date: datetime.datetime | None = None, end_date: datetime.datetime | None = None, window: int | datetime.timedelta | None = None, by_period: str | None = None, debug: bool = False) → polars.LazyFrame

Summarize prediction per channel

Parameters:
  • custom_predictions (Optional[List[CDH_Guidelines.NBAD_Prediction]], optional) – Optional list with custom prediction name to channel mappings. Defaults to None.

  • start_date (datetime.datetime, optional) – Start date of the summary period. If None (default) uses the end date minus the window, or if both absent, the earliest date in the data

  • end_date (datetime.datetime, optional) – End date of the summary period. If None (default) uses the start date plus the window, or if both absent, the latest date in the data

  • window (int or datetime.timedelta, optional) – Number of days to use for the summary period or an explicit timedelta. If None (default) uses the whole period. Can’t be given if start and end date are also given.

  • by_period (str, optional) – Optional additional grouping by time period. Format string as in polars.Expr.dt.truncate (https://docs.pola.rs/api/python/stable/reference/expressions/api/polars.Expr.dt.truncate.html), for example “1mo”, “1w”, “1d” for calendar month, week day. Defaults to None.

  • debug (bool, optional) – If True, enables debug mode for additional logging or outputs. Defaults to False.

Returns:

Summary across all Predictions as a dataframe with the following fields:

Time and Configuration Fields:

  • DateRange Min - The minimum date in the summary time range

  • DateRange Max - The maximum date in the summary time range

  • Duration - The duration in seconds between the minimum and maximum snapshot times

  • Prediction: The prediction name

  • Channel: The channel name

  • Direction: The direction (e.g., Inbound, Outbound)

  • ChannelDirectionGroup: Combined Channel/Direction identifier

  • isValid: Boolean indicating if the prediction data is valid

  • isStandardNBADPrediction: Boolean indicating if this is a standard NBAD prediction

  • isMultiChannelPrediction: Boolean indicating if this is a multichannel prediction

  • ControlPercentage: Percentage of responses in the control group

  • TestPercentage: Percentage of responses in the test group

Performance Metrics:

  • Performance: Weighted model performance (AUC)

  • Positives: Sum of positive responses

  • Negatives: Sum of negative responses

  • Responses: Sum of all responses

  • Positives_Test: Sum of positive responses in the test group

  • Positives_Control: Sum of positive responses in the control group

  • Positives_NBA: Sum of positive responses in the NBA group

  • Negatives_Test: Sum of negative responses in the test group

  • Negatives_Control: Sum of negative responses in the control group

  • Negatives_NBA: Sum of negative responses in the NBA group

  • CTR: Clickthrough rate (Positives over Positives + Negatives)

  • CTR_Test: Clickthrough rate for the test group (model propensities)

  • CTR_Control: Clickthrough rate for the control group (random propensities)

  • CTR_NBA: Clickthrough rate for the NBA group (available only when Impact Analyzer is used)

  • Lift: Lift in Engagement when testing prioritization with just Adaptive Models vs just Random Propensity

Technology Usage Indicators:

  • usesImpactAnalyzer: Boolean indicating if Impact Analyzer is used

Return type:

pl.LazyFrame

overall_summary(custom_predictions: List[List] | None = None, *, start_date: datetime.datetime | None = None, end_date: datetime.datetime | None = None, window: int | datetime.timedelta | None = None, by_period: str | None = None, debug: bool = False) → polars.LazyFrame

Overall prediction summary. Only valid prediction data is included.

Parameters:
  • custom_predictions (Optional[List[CDH_Guidelines.NBAD_Prediction]], optional) – Optional list with custom prediction name to channel mappings. Defaults to None.

  • start_date (datetime.datetime, optional) – Start date of the summary period. If None (default) uses the end date minus the window, or if both absent, the earliest date in the data

  • end_date (datetime.datetime, optional) – End date of the summary period. If None (default) uses the start date plus the window, or if both absent, the latest date in the data

  • window (int or datetime.timedelta, optional) – Number of days to use for the summary period or an explicit timedelta. If None (default) uses the whole period. Can’t be given if start and end date are also given.

  • by_period (str, optional) – Optional additional grouping by time period. Format string as in polars.Expr.dt.truncate (https://docs.pola.rs/api/python/stable/reference/expressions/api/polars.Expr.dt.truncate.html), for example “1mo”, “1w” or “1d” for calendar month, week or day. Defaults to None.

  • debug (bool, optional) – If True, enables debug mode for additional logging or outputs. Defaults to False.

Returns:

Summary across all Predictions as a dataframe with the following fields:

Time and Configuration Fields:
  • DateRange Min: The minimum date in the summary time range
  • DateRange Max: The maximum date in the summary time range
  • Duration: The duration in seconds between the minimum and maximum snapshot times
  • ControlPercentage: Weighted average percentage of control group responses
  • TestPercentage: Weighted average percentage of test group responses

Performance Metrics:
  • Performance: Weighted average performance across all valid channels
  • Positives Inbound: Sum of positive responses across all valid inbound channels
  • Positives Outbound: Sum of positive responses across all valid outbound channels
  • Responses Inbound: Sum of all responses across all valid inbound channels
  • Responses Outbound: Sum of all responses across all valid outbound channels
  • Overall Lift: Weighted average lift across all valid channels
  • Minimum Negative Lift: The lowest negative lift value found

Channel Statistics:
  • Number of Valid Channels: Count of unique valid channel/direction combinations
  • Channel with Minimum Negative Lift: Channel with the lowest negative lift value

Technology Usage Indicators:
  • usesImpactAnalyzer: Boolean indicating if any channel uses Impact Analyzer

Return type:

pl.LazyFrame

default_predictor_categorization(x: str | polars.Expr = pl.col('PredictorName')) → polars.Expr

Function to determine the ‘category’ of a predictor.

It is possible to supply a custom function. Such a function can accept an optional column as input and should return a Polars expression. The most straightforward way to implement this is with pl.when().then().otherwise(), which you can chain.

By default, this function returns “Primary” whenever there is no ‘.’ anywhere in the name string, and otherwise returns the substring before the first period.

Parameters:

x (Union[str, pl.Expr], default = pl.col('PredictorName')) – The column to parse

Return type:

polars.Expr

cdh_sample(query: pdstools.utils.types.QUERY | None = None) → pdstools.adm.ADMDatamart.ADMDatamart

Import a sample dataset from the CDH Sample application

Parameters:

query (Optional[QUERY], optional) – An optional query to apply to the data, by default None

Returns:

The ADM Datamart class populated with CDH Sample data

Return type:

ADMDatamart

sample_value_finder(threshold: float | None = None) → pdstools.valuefinder.ValueFinder.ValueFinder

Import a sample dataset of a Value Finder simulation

This simulation was run on a stock CDH Sample system.

Parameters:

threshold (Optional[float], optional) – Optional override of the propensity threshold in the system, by default None

Returns:

The Value Finder class populated with the Value Finder simulation data

Return type:

ValueFinder

show_versions(print_output: Literal[True] = True) → None
show_versions(print_output: Literal[False] = False) → str

Get a list of currently installed versions of pdstools and its dependencies.

Parameters:

print_output (bool, optional) – If True, print the version information to stdout. If False, return the version information as a string. Default is True.

Returns:

Version information as a string if print_output is False, else None.

Return type:

Optional[str]

Examples

>>> from pdstools import show_versions
>>> show_versions()
--- Version info ---
pdstools: 4.0.0-alpha
Platform: macOS-14.7-arm64-arm-64bit
Python: 3.12.4 (main, Jun  6 2024, 18:26:44) [Clang 15.0.0 (clang-1500.3.9.4)]

--- Dependencies ---
typing_extensions: 4.12.2
polars>=1.9: 1.9.0

--- Dependency group: adm ---
plotly>=5.5.0: 5.24.1

--- Dependency group: api ---
pydantic: 2.9.2
httpx: 0.27.2

class ValueFinder(df: polars.LazyFrame, *, query: pdstools.utils.types.QUERY | None = None, n_customers: int | None = None, threshold: float | None = None)

Analyze the Value Finder dataset for detailed insights

Parameters:
  • df (polars.LazyFrame)

  • query (Optional[pdstools.utils.types.QUERY])

  • n_customers (Optional[int])

  • threshold (Optional[float])

df: polars.LazyFrame
n_customers: int
nbad_stages = ['Eligibility', 'Applicability', 'Suitability', 'Arbitration']
aggregates
plot
classmethod from_ds_export(filename: str | None = None, base_path: os.PathLike | str = '.', *, query: pdstools.utils.types.QUERY | None = None, n_customers: int | None = None, threshold: float | None = None)
Parameters:
  • filename (Optional[str])

  • base_path (Union[os.PathLike, str])

  • query (Optional[pdstools.utils.types.QUERY])

  • n_customers (Optional[int])

  • threshold (Optional[float])

classmethod from_dataflow_export(files: Iterable[str] | str, *, query: pdstools.utils.types.QUERY | None = None, n_customers: int | None = None, threshold: float | None = None, cache_file_prefix: str = '', extension: Literal['json'] = 'json', compression: Literal['gzip'] = 'gzip', cache_directory: os.PathLike | str = 'cache')
Parameters:
  • files (Union[Iterable[str], str])

  • query (Optional[pdstools.utils.types.QUERY])

  • n_customers (Optional[int])

  • threshold (Optional[float])

  • cache_file_prefix (str)

  • extension (Literal['json'])

  • compression (Literal['gzip'])

  • cache_directory (Union[os.PathLike, str])

set_threshold(new_threshold: float | None = None)
Parameters:

new_threshold (Optional[float])

property threshold
save_data(path: os.PathLike | str = '.') → pathlib.Path | None

Cache the pyValueFinder dataset to a Parquet file

Parameters:

path (Union[os.PathLike, str], optional) – Where to place the file, by default the current directory

Returns:

The path to the cached data file

Return type:

Optional[pathlib.Path]