{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Explainability Extract Analysis\n", "\n", "__Pega__\n", "\n", "__2024-06-11__\n", "\n", "Welcome to the Explainability Extract Demo. This notebook is designed to guide you through the analysis of Explainability Extract v1 dataset using the DecisionAnalyzer class of pdstools library. At this point, the dataset is particularly targeted on the \"Arbitration\" stage, but we have intentions to widen the data scope in subsequent iterations. This dataset can be extracted from Infinity 24.1 and its preceding versions.\n", "\n", "We developed this notebook with a dual purpose. Firstly, we aim to familiarize you with the various functions and visualizations available in the DecisionAnalyzer class. You'll learn how to aggregate and visualize data in ways that are meaningful and insightful for your specific use cases.\n", "\n", "Secondly, we hope this notebook will inspire you. The analysis and visualizations demonstrated here are only the tip of the iceberg. We encourage you to think creatively and explore other ways this data can be utilized. Consider this notebook a springboard for your analysis journey.\n", "\n", "Each data point represents a decision made in real-time, providing a snapshot of the arbitration process. By examining this data, you have the opportunity to delve into the intricacies of these decisions, gaining a deeper understanding of the decision-making process.\n", "\n", "As you navigate through this notebook, remember that it is interactive. This means you can not only run each code cell to see the results but also tweak the code and experiment as you go along." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "nbsphinx": "hidden" }, "outputs": [], "source": [ "# These lines are only for rendering in the docs, and are hidden through Jupyter tags\n", "# Do not run if you're running the notebook seperately\n", "\n", "import plotly.io as pio\n", "\n", "pio.renderers.default = \"notebook_connected\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from pdstools.decision_analyzer.data_read_utils import read_data\n", "from pdstools.decision_analyzer.decision_data import DecisionAnalyzer\n", "from pdstools import read_ds_export\n", "import polars as pl" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Read the Data and create DecisionData instance" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = read_ds_export(\n", " filename=\"sample_explainability_extract.parquet\",\n", " path=\"https://raw.githubusercontent.com/pegasystems/pega-datascientist-tools/master/data\",\n", ")\n", "decision_data = DecisionAnalyzer(df)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Overview" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`get_overview_stats` property of `DecisionData` shows general statistics of the data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "decision_data.get_overview_stats" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Lets take a look at 1 decision. From the height of the dataframe you can see how many actions are available at the Arbitration Stage for this interaction of a customer. `pxRank` column shows the ranks of actions in the arbitration." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "selected_interaction_id = (\n", " decision_data.unfiltered_raw_decision_data.select(\"pxInteractionID\")\n", " .first()\n", " .collect()\n", " .row(0)[0]\n", ")\n", "print(f\"{selected_interaction_id=}\")\n", "decision_data.unfiltered_raw_decision_data.filter(\n", " pl.col(\"pxInteractionID\") == selected_interaction_id\n", ").sort(\"pxRank\").collect()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Action Distribution\n", "\n", "Shows the overall distribution of actions at the Arbitration Stage. One can detect if a group of actions survive rarely until Arbitration." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "stage = \"Arbitration\"\n", "scope_options = [\"pyIssue\", \"pyGroup\", \"pyName\"]\n", "distribution_data = decision_data.getDistributionData(stage, scope_options)\n", "fig = decision_data.plot.distribution_as_treemap(\n", " df=distribution_data, stage=stage, scope_options=scope_options\n", ")\n", "fig.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Global Sensitivity\n", "\n", "The Global Sensitivity chart helps us understand how the 4 Arbitration factors propensity, value, levers, and context weights together affect the decision-making process. \n", "**Sensitivity** refers to the impact on our top actions if one of these factors is omitted. The percentages indicate the potential change in our final decisions due to the absence of each factor. \n", " \n", "- **X-Axis (Decisions)**: Represents the number of decisions affected by the exclusion of each factor. \n", "- **Y-Axis (Prioritization Factor)**: Lists the Arbitration formula components.\n", "- **Bars**: Each bar represents the percentage of decisions affected by the absence of the corresponding factor. \n", " \n", " \n", "- By identifying the most impactful factors, stakeholders can make strategic adjustments to enhance decision-making accuracy. \n", " \n", "- It highlights which factors need more attention or refinement. For instance, if \"Levers\" were to show a significant percentage, it would indicate a need for closer examination and potential improvement. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "decision_data.plot.sensitivity(win_rank=1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Wins and Losses in Arbitration" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Displays the distribution of wins and losses for different \"Issues\" in the arbitration stage. You can change the level to \"Group\" or \"Action\". Based on the `win_rank` actions are classified as either winning or losing. \n", " \n", "**X-Axis (Percentage)**: Represents the percentage of actions that are either wins or losses. \n", "**Y-Axis (Status)**: Differentiates between wins and losses. \n", "**Color Legend (Issue)**: Each color represents a different issue category, such as \"Retention,\" \"Service,\" \"Growth,\" etc. \n", " \n", "#### How to Interpret the Visual: \n", " \n", "- **Dominant Issues**: The length of the bars helps identify which issues have the highest and lowest win and loss percentages. For example, if \"Retention\" has a longer bar in the Wins section, it indicates a higher percentage of winning actions for that issue. 
\n", "- **Comparative Analysis**: By comparing the bars, you can quickly see which issues are performing better in terms of winning in arbitration and which are underperforming. \n", "- **Resource Allocation**: By understanding which issues have higher loss percentages, resources can be reallocated to improve strategies in those areas. \n", "- **Decision-Making**: Provides a clear visual representation of how decisions are distributed across different issues, aiding in making data-driven decisions for future actions. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "decision_data.plot.global_winloss_distribution(level=\"pyIssue\", win_rank=1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Personalization Analysis\n", " \n", "The Personalization Analysis chart helps us understand the availability of options to present to customers during the arbitration stage. If only a limited number of actions survive to this stage, it becomes challenging to personalize offers effectively. Some customers may have only one action available, limiting the ability of our machine learning algorithm to arbitrate effectively.\n", "\n", "In the chart:\n", "\n", "The x-axis represents the number of actions available per customer.\n", "The left y-axis shows the number of decisions.\n", "The right y-axis shows the propensity percentage.\n", "\n", "The bars (Optionality) indicate the number of decisions where customers had a specific number of actions available. For instance, a high bar at \"2\" means many customers had exactly two actions available in arbitration.\n", "The line (Propensity) represents the average propensity of the top-ranking actions within each bin.\n", "\n", "This analysis helps in understanding the distribution of available actions. We expect the average propensity to increase as the number of available actions increase. If there are many customers with little or no actions available, it should be investigated" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "decision_data.plot.propensity_vs_optionality(stage=\"Arbitration\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Win/Loss Analysis\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Win Analysis\n", "Let's select an action to determine how often it wins and identify which actions it defeats in arbitration." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "win_rank = 1\n", "selected_action = (\n", " decision_data.unfiltered_raw_decision_data.filter(pl.col(\"pxRank\") == 1)\n", " .group_by(\"pyName\")\n", " .len()\n", " .sort(\"len\", descending=True)\n", " .collect()\n", " .get_column(\"pyName\")\n", " .to_list()[1]\n", ")\n", "filter_statement = pl.col(\"pyName\") == selected_action\n", "\n", "interactions_where_comparison_group_wins = (\n", " decision_data.get_winning_or_losing_interactions(\n", " win_rank=win_rank,\n", " group_filter=filter_statement,\n", " win=True,\n", " )\n", ")\n", "\n", "print(\n", " f\"selected action '{selected_action}' wins(Rank{win_rank}) in {interactions_where_comparison_group_wins.collect().height} interactions.\"\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Graph below shows the competing actions going into arbitration together with the selected action and how many times they lose. It highlights which actions are being surpassed by the chosen action." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Losing actions in interactions where the selected action wins.\n", "groupby_cols = [\"pyIssue\", \"pyGroup\", \"pyName\"]\n", "winning_from = decision_data.winning_from(\n", " interactions=interactions_where_comparison_group_wins,\n", " win_rank=win_rank,\n", " groupby_cols=groupby_cols,\n", " top_k=20,\n", ")\n", "\n", "decision_data.plot.distribution_as_treemap(\n", " df=winning_from, stage=\"Arbitration\", scope_options=groupby_cols\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Loss Analysis\n", "Let's analyze which actions come out on top when the selected action fails" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "interactions_where_comparison_group_loses = (\n", " decision_data.get_winning_or_losing_interactions(\n", " win_rank=win_rank,\n", " group_filter=filter_statement,\n", " win=False,\n", " )\n", ")\n", "\n", "print(\n", " f\"selected action '{selected_action}' loses in {interactions_where_comparison_group_loses.collect().height} interactions.\"\n", ")\n", "# Winning actions in interactions where the selected action loses.\n", "losing_to = decision_data.winning_from(\n", " interactions=interactions_where_comparison_group_loses,\n", " win_rank=win_rank,\n", " groupby_cols=groupby_cols,\n", " top_k=20,\n", ")\n", "\n", "decision_data.plot.distribution_as_treemap(\n", " df=losing_to, stage=\"Arbitration\", scope_options=groupby_cols\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### What are the Prioritization Factors that make these actions win or lose?\n", "The analysis below shows the change in the number of times an action wins when each factor is individually removed from the prioritization calculation. Unlike the Global Sensitivity Analysis above, this chart can show negative numbers. A negative value means that the selected action would win more often if that component were removed from the arbitration process. Therefore, a component with a negative value is contributing to the action's loss." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "decision_data.plot.sensitivity(\n", " limit_xaxis_range=False, reference_group=pl.col(\"pyName\") == selected_action\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Why are the actions winning\n", "\n", "Here we show the distribution of the various arbitration factors of the comparison group vs the other actions that make it to arbitration for the same interactions." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, warning_message = decision_data.plot.prio_factor_boxplots(\n", " reference=pl.col(\"pyName\") == selected_action,\n", " sample_size=10000,\n", ")\n", "if warning_message:\n", " print(warning_message)\n", "else:\n", " fig.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Rank Distribution of Comparison Group\n", "\n", "Showing the distribution of the prioritization rank of the selected actions.\n", "If the rank is low, the selected actions are not (often) winning." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "decision_data.plot.rank_boxplot(\n", " reference=pl.col(\"pyName\") == selected_action,\n", ")" ] } ], "metadata": { "kernelspec": { "display_name": ".venv", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3" } }, "nbformat": 4, "nbformat_minor": 2 }