pdstools.adm.ADMTrees¶
Classes¶
Functions for ADM Gradient boosting
Module Contents¶
- class AGB(datamart: pdstools.adm.ADMDatamart.ADMDatamart)¶
- Parameters:
datamart (pdstools.adm.ADMDatamart.ADMDatamart)
- datamart¶
- discover_model_types(df: polars.LazyFrame, by: str = 'Configuration') Dict¶
Discovers the type of model embedded in the pyModelData column.
By default, we group by Configuration, because a model rule can only contain one type of model. Then, for each configuration, we look into the pyModelData blob, find the _serialClass, and return it in a dict.
- Parameters:
df (polars.LazyFrame) – The input data, containing the pyModelData column
by (str, default = 'Configuration') – The column to group by
- Return type:
Dict
- get_agb_models(last: bool = False, by: str = 'Configuration', n_threads: int = 1, query: pdstools.utils.types.QUERY | None = None, verbose: bool = True, **kwargs) ADMTrees¶
Method to automatically extract AGB models.
It is recommended to subset the data with the query argument to cut down on execution time, because the method checks each model ID individually. If only AGB models remain after the query, only proper AGB models are returned.
- Parameters:
last (bool, default = False) – Whether to only look at the last snapshot for each model
by (str, default = 'Configuration') – Which column to determine unique models with
n_threads (int, default = 1) – The number of threads to use for extracting the models. Since extraction is multithreaded, setting this to a reasonable value helps speed up the import.
query (Optional[Union[pl.Expr, List[pl.Expr], str, Dict[str, list]]]) – Please refer to _apply_query()
verbose (bool, default = True) – Whether to print out information while importing
- Return type:
ADMTrees
- class ADMTrees¶
- static get_multi_trees(file: polars.DataFrame, n_threads=1, verbose=True, **kwargs)¶
- Parameters:
file (polars.DataFrame)
- class ADMTreesModel(file: str, **kwargs)¶
Functions for ADM Gradient boosting
ADM Gradient boosting models consist of multiple trees, which build upon each other in a ‘boosting’ fashion. This class provides some functions to extract data from these trees, such as the features on which the trees split, important values for these features, statistics about the trees, or a visualisation of each individual tree.
- Parameters:
file (str) – The input file as a json (see notes)
- trees¶
- Type:
Dict
- properties¶
- Type:
Dict
- model¶
- Type:
Dict
- treeStats¶
- Type:
Dict
- splitsPerTree¶
- Type:
Dict
- gainsPerTree¶
- Type:
Dict
- gainsPerSplit¶
- Type:
pl.DataFrame
- groupedGainsPerSplit¶
- Type:
Dict
- predictors¶
- Type:
Set
- allValuesPerSplit¶
- Type:
Dict
Notes
The input file is the extracted json file of the ‘save model’ action in Prediction Studio. The Datamart column ‘pyModelData’ also contains this information, but it is compressed and the values for each split are encoded. Only the data exported via the ‘save model’ button is decompressed and decoded.
- nospaces = True¶
- _read_model(file, **kwargs)¶
- _decode_trees()¶
- _post_import_cleanup(decode, **kwargs)¶
- _depth(d: Dict) int¶
Calculates the depth of the tree, used in TreeStats.
- Parameters:
d (Dict)
- Return type:
int
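The depth computation can be sketched as a recursive walk over a nested node dict. The 'left'/'right'/'split'/'score' key names below are illustrative assumptions, not the exact pdstools tree schema:

```python
from typing import Any, Dict

def tree_depth(node: Dict[str, Any]) -> int:
    # A leaf (no children) counts as depth 1; a split adds one level
    # on top of its deepest child. Key names are assumptions.
    children = [node[k] for k in ("left", "right") if k in node]
    if not children:
        return 1
    return 1 + max(tree_depth(child) for child in children)

tree = {
    "split": "Age < 30",
    "left": {"score": 0.1},
    "right": {"split": "Income < 50000",
              "left": {"score": 0.2}, "right": {"score": 0.3}},
}
print(tree_depth(tree))  # → 3
```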
- _safe_numeric_compare(left: int | float, operator: str, right: int | float) bool¶
Safely compare two numeric values without using eval().
This method replaces dangerous eval() calls with safe numeric comparisons.
- Parameters:
left (Union[int, float]) – The left-hand operand
operator (str) – The comparison operator (e.g. '<', '>=', '==')
right (Union[int, float]) – The right-hand operand
- Returns:
Result of the comparison
- Return type:
bool
- Raises:
ValueError – If operator is not supported
- _safe_condition_evaluate(value: Any, operator: str, comparison_set: set | float | str) bool¶
Safely evaluate conditions without using eval().
This method replaces dangerous eval() calls with safe condition evaluation.
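The eval()-free evaluation described above can be sketched with a whitelist of operator functions. The function names below mirror the documented methods but are a standalone illustration, not the actual private implementations:

```python
import operator
from typing import Any, Union

# Fixed dispatch table: only these operators are ever executed.
_OPS = {"<": operator.lt, "<=": operator.le, ">": operator.gt,
        ">=": operator.ge, "==": operator.eq, "!=": operator.ne}

def safe_numeric_compare(left: Union[int, float], op: str,
                         right: Union[int, float]) -> bool:
    # Unknown operators raise ValueError instead of executing arbitrary code.
    if op not in _OPS:
        raise ValueError(f"Unsupported operator: {op}")
    return _OPS[op](left, right)

def safe_condition_evaluate(value: Any, op: str, comparison: Any) -> bool:
    # 'in' is a membership check; everything else is a numeric compare.
    if op == "in":
        return value in comparison
    return safe_numeric_compare(value, op, comparison)

print(safe_numeric_compare(25, "<", 30))                  # → True
print(safe_condition_evaluate("UK", "in", {"UK", "NL"}))  # → True
```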
- property metrics: dict[str, Any]¶
Compute CDH_ADM005-style diagnostic metrics for this model.
Returns a flat dictionary of key/value pairs aligned with the CDH_ADM005 telemetry event specification. Metrics that cannot be computed from an exported model (e.g. saturation counts that require bin-level data) are omitted.
See also
Pega
- static metric_descriptions() dict[str, str]¶
Return a dictionary mapping metric names to human-readable descriptions.
These descriptions document every metric returned by the metrics property. They can be used programmatically to annotate reports or plots.
- _compute_metrics() dict[str, Any]¶
Walk the trees once to gather all diagnostic metrics.
For the full list of returned keys and their descriptions, see metric_descriptions(). For exported (decoded) models, predictor types are inferred from split operators (< → numeric, in/== → symbolic). For encoded models (from datamart blobs with inputsEncoder), the encoder metadata provides authoritative type information.
- static _accumulate_gain(tree: Dict, var_gain: dict[str, float]) None¶
Recursively accumulate gain per predictor variable.
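A minimal sketch of per-predictor gain accumulation, assuming each split node carries a 'split' string (variable name first) and a 'gain' value; these key names and the split format are assumptions:

```python
from collections import defaultdict
from typing import Dict

def accumulate_gain(node: Dict, var_gain: Dict[str, float]) -> None:
    # Credit each split's gain to the variable it splits on, then recurse.
    if "split" in node:
        variable = node["split"].split(" ")[0]  # name precedes the operator
        var_gain[variable] += node.get("gain", 0.0)
    for child in ("left", "right"):
        if child in node:
            accumulate_gain(node[child], var_gain)

tree = {"split": "Age < 30", "gain": 0.5,
        "left": {"split": "Age < 18", "gain": 0.25, "left": {}, "right": {}},
        "right": {"split": "Income < 50000", "gain": 0.125,
                  "left": {}, "right": {}}}
gains: Dict[str, float] = defaultdict(float)
accumulate_gain(tree, gains)
print(dict(gains))  # → {'Age': 0.75, 'Income': 0.125}
```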
- _get_encoder_info() dict[str, dict[str, Any]] | None¶
Extract predictor metadata from the inputsEncoder if present.
Returns a dict mapping predictor name to a dict with keys:
- type: "numeric" or "symbolic"
- used_bins: number of bins currently used
- max_bins: maximum bins allowed (None if unknown)
Returns None if no encoder metadata is available (e.g. for exported/decoded models).
- property predictors¶
- property tree_stats¶
- property splits_per_tree¶
- property gains_per_tree¶
- property gains_per_split¶
- property grouped_gains_per_split¶
- property all_values_per_split¶
- property splits_per_variable_type¶
- parse_split_values(value) Tuple[str, str, str]¶
Parses the raw ‘split’ string into its three components.
Once the split is parsed, it can be evaluated safely in Python.
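Splitting a string like "Age < 30" into (variable, operator, value) can be sketched with a regular expression; the exact raw split format in the model json may differ:

```python
import re
from typing import Tuple

# Variable name, a whitelisted operator, then the raw value(s).
_SPLIT_RE = re.compile(r"^(\S+)\s+(in|==|<=|>=|<|>)\s+(.*)$")

def parse_split(split: str) -> Tuple[str, str, str]:
    match = _SPLIT_RE.match(split.strip())
    if match is None:
        raise ValueError(f"Cannot parse split: {split!r}")
    return match.group(1), match.group(2), match.group(3)

print(parse_split("Age < 30"))             # → ('Age', '<', '30')
print(parse_split("Country in {UK, NL}"))  # → ('Country', 'in', '{UK, NL}')
```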
- get_predictors() Dict | None¶
Extract predictor names and types from model metadata.
Tries to find predictor metadata in the configuration section of the JSON. Models exported via the Prediction Studio “Save Model” button include a configuration key with an explicit predictor list. However, models exported in the newer format (e.g. via automated pipelines or newer Pega versions) may omit the configuration section entirely, containing only type, modelVersion, algorithm, trainingStats, auc, etc. at the top level. In that case, predictor names and types are inferred from the tree split nodes instead.
- Return type:
Optional[Dict]
- _infer_predictors_from_splits() Dict | None¶
Infer predictor names and types from tree split nodes.
When no explicit predictor metadata is available (e.g. in exported models without a configuration section), we walk the trees and derive predictor names from splits. The operator determines the type: < → numeric, in/== → symbolic.
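The operator-to-type rule can be sketched on a flat list of split strings (the real method walks the tree nodes; this simplified input format is an assumption):

```python
from typing import Dict, List

def infer_predictors(splits: List[str]) -> Dict[str, str]:
    # First token is the variable name, second the operator;
    # '<' implies a numeric predictor, 'in'/'==' a symbolic one.
    predictors: Dict[str, str] = {}
    for split in splits:
        name, op, _ = split.split(" ", 2)
        predictors.setdefault(name, "numeric" if op == "<" else "symbolic")
    return predictors

print(infer_predictors(["Age < 30", "Country in {UK, NL}", "Age < 18"]))
# → {'Age': 'numeric', 'Country': 'symbolic'}
```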
- get_gains_per_split() Tuple[Dict, Dict, polars.DataFrame]¶
Function to compute the gains of each split in each tree.
- Return type:
Tuple[Dict, Dict, polars.DataFrame]
- get_grouped_gains_per_split() polars.DataFrame¶
Function to get the gains per split, grouped by split.
It adds some additional information, such as the possible values, the mean gains, and the number of times the split is performed.
- Return type:
polars.DataFrame
- get_splits_recursively(tree: Dict, splits: List, gains: List) Tuple[List, List]¶
Recursively finds splits and their gains for each node.
Because Python lists are mutable, the easiest way to achieve this is to explicitly supply the function with empty lists. Therefore, the ‘splits’ and ‘gains’ parameters expect empty lists when initially called.
- Parameters:
tree (Dict)
splits (List)
gains (List)
- Returns:
Each split, and its corresponding gain
- Return type:
Tuple[List, List]
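The empty-list calling convention described above can be illustrated with mutable accumulator lists; the tree key names ('split', 'gain', 'left', 'right') are assumptions:

```python
from typing import Dict, List, Tuple

def collect_splits(tree: Dict, splits: List, gains: List) -> Tuple[List, List]:
    # The caller passes fresh empty lists; the recursion appends to them
    # in place, relying on Python's mutable-list semantics.
    if "split" in tree:
        splits.append(tree["split"])
        gains.append(tree.get("gain", 0.0))
    for child in ("left", "right"):
        if child in tree:
            collect_splits(tree[child], splits, gains)
    return splits, gains

tree = {"split": "Age < 30", "gain": 0.4,
        "left": {"score": 0.1},
        "right": {"split": "Income < 50000", "gain": 0.2,
                  "left": {}, "right": {}}}
splits, gains = collect_splits(tree, [], [])
print(splits)  # → ['Age < 30', 'Income < 50000']
print(gains)   # → [0.4, 0.2]
```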
- plot_splits_per_variable(subset: Set | None = None, show=True)¶
Plots the splits for each variable in the tree.
- Parameters:
subset (Optional[Set]) – Optional parameter to subset the variables to plot
show (bool) – Whether to display each plot
- Return type:
plt.figure
- get_tree_stats() polars.DataFrame¶
Generate a dataframe with useful stats for each tree
- Return type:
polars.DataFrame
- get_all_values_per_split() Dict¶
Generate a dictionary with the possible values for each split
- Return type:
Dict
- get_nodes_recursively(tree: Dict, nodelist: Dict, counter: List, childs: Dict) Tuple[Dict, Dict]¶
Recursively walks through each node, used for tree representation.
As with get_splits_recursively(), nodelist, counter, and childs expect an empty dict, list, and dict, respectively, when initially called.
- Parameters:
tree (Dict)
nodelist (Dict)
counter (List)
childs (Dict)
- Returns:
The dictionary of nodes and the dictionary of child nodes
- Return type:
Tuple[Dict, Dict]
- static _fill_child_node_ids(nodeinfo: Dict, childs: Dict) Dict¶
Utility function to add child info to nodes
- Parameters:
nodeinfo (Dict)
childs (Dict)
- Return type:
Dict
- get_tree_representation(tree_number: int) Dict¶
Generates a more usable tree representation.
In this tree representation, each node has an ID, and its attributes are the attributes, with parent and child nodes added as well.
- Parameters:
tree_number (int) – The number of the tree, in order of the original json
- Return type:
Dict
- plot_tree(tree_number: int, highlighted: Dict | List | None = None, show=True) pydot.Graph¶
Plots the chosen decision tree.
- Parameters:
tree_number (int) – The number of the tree to visualise
highlighted (Optional[Union[Dict, List]]) – Optional parameter to highlight nodes in green. If a dictionary, it expects an ‘x’: features with their corresponding values. If a list, it expects a list of node IDs for that tree.
- Return type:
pydot.Graph
- get_visited_nodes(treeID: int, x: Dict, save_all: bool = False) Tuple[List, float, List]¶
Finds all visited nodes for a given tree, given a feature set x.
- Parameters:
treeID (int)
x (Dict) – Features to split on, with their values
save_all (bool, default = False)
- Returns:
The list of visited nodes, the score of the final leaf node, and the gains for each split in the visited nodes
- Return type:
Tuple[List, float, List]
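Scoring a single tree for a feature dict x can be sketched as follows; the node layout, the split-string format, and the simplified substring-based 'in' check are all assumptions for illustration:

```python
from typing import Dict, List, Tuple

def visit_tree(node: Dict, x: Dict) -> Tuple[List[str], float]:
    # Follow the split decisions down to a leaf, recording each visited split.
    visited: List[str] = []
    while "split" in node:
        name, op, value = node["split"].split(" ", 2)
        if op == "<":
            go_left = float(x[name]) < float(value)
        else:  # 'in': simplified membership test on the raw value string
            go_left = str(x[name]) in value
        visited.append(node["split"])
        node = node["left"] if go_left else node["right"]
    return visited, node.get("score", 0.0)

tree = {"split": "Age < 30",
        "left": {"score": 0.25},
        "right": {"split": "Country in {UK, NL}",
                  "left": {"score": 0.5}, "right": {"score": 0.75}}}
path, score = visit_tree(tree, {"Age": 40, "Country": "UK"})
print(path, score)  # → ['Age < 30', 'Country in {UK, NL}'] 0.5
```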
- get_all_visited_nodes(x: Dict) polars.DataFrame¶
Loops through each tree and records the scoring info.
- Parameters:
x (Dict) – Features to split on, with their values
- Return type:
pl.DataFrame
- plot_contribution_per_tree(x: Dict, show=True)¶
Plots the contribution of each tree towards the final propensity.
- Parameters:
x (Dict)
- compute_categorization_over_time(predictorCategorization=None, context_keys=None)¶
- plot_splits_per_variable_type(predictor_categorization=None, **kwargs)¶