{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "nbsphinx": "hidden" }, "outputs": [], "source": [ "# --- IGNORE ---\n", "import plotly.io as pio\n", "\n", "pio.renderers.default = \"notebook_connected\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Example Prediction Studio Analysis\n", "\n", "__Pega__\n", "\n", "__2025-07-04__\n", "\n", "This is a small notebook to report on and analyse Prediction Studio data on Predictions. The underlying data comes from the Data-DM-Snapshot table, which is used to populate the Prediction Studio screen with Prediction Performance, Lift, CTR, etc.\n", "\n", "Data can be exported from the **pyGetSnapshot** dataset in Pega Infinity, from both Dev Studio and Prediction Studio.\n", "\n", "The Datamart tables are described [here](https://docs.pega.com/bundle/platform/page/platform/decision-management/database-tables-monitoring-models.html)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Raw data\n", "\n", "First, we're going to load the raw data. The raw data is in a \"long\" format, with e.g. test and control groups in separate rows." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from pathlib import Path\n", "from pdstools import Prediction\n", "\n", "# Path to the dataset export goes here,\n", "# e.g. PR_DATA_DM_SNAPSHOTS.parquet\n", "data_export = \"\"\n", "\n", "if Path(data_export).exists():\n", "    prediction = Prediction.from_ds_export(data_export)\n", "else:\n", "    prediction = Prediction.from_mock_data(days=60)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Prediction Data\n", "\n", "The actual prediction data is in a \"wide\" format, with separate fields for the Test and Control groups. Also, it contains only the \"daily\" snapshots, and the numbers and dates are cast to proper Polars types."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prediction.predictions.head().collect()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Summary by Channel\n", "\n", "Standard functionality exists to summarize the predictions per channel. Note that the data does not contain the prediction-to-channel mapping (this is an outstanding product issue), so we apply the implicit naming conventions of NBAD. For a specific customer, custom mappings can be passed into the summarization function." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prediction.summary_by_channel().collect()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Prediction Trends\n", "\n", "By default, summarization is over all time. You can pass in an argument to summarize by day, week, or any other period supported by the [Polars time offset string language](https://docs.pola.rs/api/python/stable/reference/expressions/api/polars.Expr.dt.offset_by.html).\n", "\n", "This trend data can then easily be visualized."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig = prediction.plot.performance_trend(\"1w\")\n", "fig.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig = prediction.plot.lift_trend(\"1w\")\n", "fig.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig = prediction.plot.ctr_trend(\"1w\", facetting=False)\n", "fig.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig = prediction.plot.responsecount_trend(\"1w\", facetting=False)\n", "fig.show()" ] } ], "metadata": { "kernelspec": { "display_name": "pdstools", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.13.7" } }, "nbformat": 4, "nbformat_minor": 2 }