pdstools.utils.report_utils._tables
===================================

.. py:module:: pdstools.utils.report_utils._tables

.. autoapi-nested-parse::

   RAG-coloured metric table builders (itables and great_tables).

Functions
---------

.. autoapisummary::

   pdstools.utils.report_utils._tables.create_metric_itable
   pdstools.utils.report_utils._tables.create_metric_gttable

Module Contents
---------------

.. py:function:: create_metric_itable(source_table: polars.DataFrame, column_to_metric: dict | None = None, column_descriptions: dict[str, str] | None = None, color_background: bool = False, strict_metric_validation: bool = True, highlight_issues_only: bool = False, rag_source: polars.DataFrame | None = None, **itable_kwargs)

   Create an interactive table with RAG coloring for metric columns.

   Displays the table using itables, with cells colored according to their RAG
   (Red/Amber/Green) status derived from metric thresholds.

   :param source_table: DataFrame containing the data columns to be colored.
   :type source_table: pl.DataFrame
   :param column_to_metric: Mapping from column names (or tuples of column names) to one of:

                            - **str**: a metric ID to look up in MetricLimits.csv
                            - **callable**: a function(value) -> "RED" | "AMBER" | "YELLOW" | "GREEN" | None
                            - **tuple**: (metric_id, value_mapping), where value_mapping is a dict
                              that maps column values to metric values before evaluation. Tuple keys
                              map multiple column values at once: {("Yes", "yes"): True}

                            If a column is not in this dict, its name is used as the metric ID.
   :type column_to_metric: dict, optional
   :param column_descriptions: Mapping from column names to tooltip descriptions. When provided,
                               column headers display the description as a tooltip on hover.
                               Example: {"Performance": "Model AUC performance metric"}
   :type column_descriptions: dict, optional
   :param color_background: If True, colors the cell background; if False, colors the text
                            (foreground).
   :type color_background: bool, default False
   :param strict_metric_validation: If True, raises an exception when a metric ID in column_to_metric
                                    is not found in MetricLimits.csv. Set to False to skip validation.
   :type strict_metric_validation: bool, default True
   :param highlight_issues_only: If True, only RED/AMBER/YELLOW values are styled (GREEN is left
                                 unstyled). Set to False to also highlight GREEN values.
   :type highlight_issues_only: bool, default False
   :param rag_source: If provided, RAG thresholds are evaluated against this DataFrame instead
                      of ``source_table``. Use this when ``source_table`` contains non-numeric
                      display values (e.g. HTML strings) but you still want RAG coloring based
                      on the original numeric data. Must have the same columns and row order
                      as ``source_table``.
   :type rag_source: pl.DataFrame, optional
   :param \*\*itable_kwargs: Additional keyword arguments passed to itables.show(). Common options
                             include: lengthMenu, paging, searching, ordering.

   :returns: An itables display object that renders in Jupyter/Quarto.
   :rtype: itables HTML display

   .. rubric:: Examples

   >>> from pdstools.utils.report_utils import create_metric_itable
   >>> create_metric_itable(
   ...     df,
   ...     column_to_metric={
   ...         # Simple metric ID
   ...         "Performance": "ModelPerformance",
   ...         # Custom RAG function
   ...         "Channel": standard_NBAD_channels_rag,
   ...         # Value mapping: column values -> metric values
   ...         "AGB": ("UsingAGB", {"Yes": True, "No": False}),
   ...         # Or, with tuple keys, map multiple column values to the
   ...         # same metric value:
   ...         # "AGB": ("UsingAGB", {("Yes", "yes", "YES"): True, "No": False}),
   ...     },
   ...     column_descriptions={
   ...         "Performance": "Model AUC performance metric",
   ...         "Channel": "Communication channel for the action",
   ...     },
   ...     paging=False,
   ... )
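The **callable** form of ``column_to_metric`` is just a plain function that maps
a cell value to a RAG status string, or None to leave the cell unstyled. A
minimal sketch, assuming AUC-style thresholds (the function name and the
0.55/0.65 cut-offs are illustrative, not taken from MetricLimits.csv):

```python
def auc_rag(value):
    """Illustrative RAG callable: classify an AUC-style performance value.

    Returns one of "RED", "AMBER", "GREEN", or None (no styling).
    The 0.55 / 0.65 cut-offs are made up for this example.
    """
    if value is None:
        return None  # missing values stay unstyled
    if value < 0.55:
        return "RED"
    if value < 0.65:
        return "AMBER"
    return "GREEN"

# Such a function can then be passed directly, e.g.:
#   create_metric_itable(df, column_to_metric={"Performance": auc_rag})
```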
.. py:function:: create_metric_gttable(source_table: polars.DataFrame, title: str | None = None, subtitle: str | None = None, column_to_metric: dict | None = None, column_descriptions: dict[str, str] | None = None, color_background: bool = True, strict_metric_validation: bool = True, highlight_issues_only: bool = True, **gt_kwargs)

   Create a great_tables table with RAG coloring for metric columns.

   Displays the table using great_tables, with cells colored according to their
   RAG (Red/Amber/Green) status derived from metric thresholds.

   :param source_table: DataFrame containing the data columns to be colored.
   :type source_table: pl.DataFrame
   :param title: Table title.
   :type title: str, optional
   :param subtitle: Table subtitle.
   :type subtitle: str, optional
   :param column_to_metric: Mapping from column names (or tuples of column names) to one of:

                            - **str**: a metric ID to look up in MetricLimits.csv
                            - **callable**: a function(value) -> "RED" | "AMBER" | "YELLOW" | "GREEN" | None
                            - **tuple**: (metric_id, value_mapping), where value_mapping is a dict
                              that maps column values to metric values before evaluation. Tuple keys
                              map multiple column values at once: {("Yes", "yes"): True}

                            If a column is not in this dict, its name is used as the metric ID.
   :type column_to_metric: dict, optional
   :param column_descriptions: Mapping from column names to tooltip descriptions. When provided,
                               column headers display the description as a tooltip on hover.
                               Example: {"Performance": "Model AUC performance metric"}
   :type column_descriptions: dict, optional
   :param color_background: If True, colors the cell background; if False, colors the text.
   :type color_background: bool, default True
   :param strict_metric_validation: If True, raises an exception when a metric ID in column_to_metric
                                    is not found in MetricLimits.csv. Set to False to skip validation.
   :type strict_metric_validation: bool, default True
   :param highlight_issues_only: If True, only RED/AMBER/YELLOW values are styled (GREEN is left
                                 unstyled). Set to False to also highlight GREEN values.
   :type highlight_issues_only: bool, default True
   :param \*\*gt_kwargs: Additional keyword arguments passed to the great_tables.GT constructor.
                         Common options include: rowname_col, groupname_col.

   :returns: A great_tables instance with RAG coloring applied.
   :rtype: great_tables.GT

   .. rubric:: Examples

   >>> from pdstools.utils.report_utils import create_metric_gttable
   >>> create_metric_gttable(
   ...     df,
   ...     title="Model Overview",
   ...     column_to_metric={
   ...         # Simple metric ID
   ...         "Performance": "ModelPerformance",
   ...         # Custom RAG function
   ...         "Channel": standard_NBAD_channels_rag,
   ...         # Value mapping: column values -> metric values
   ...         "AGB": ("UsingAGB", {"Yes": True, "No": False}),
   ...         # Or, with tuple keys, map multiple column values to the
   ...         # same metric value:
   ...         # "AGB": ("UsingAGB", {("Yes", "yes", "YES"): True, "No": False}),
   ...     },
   ...     column_descriptions={
   ...         "Performance": "Model AUC performance metric",
   ...         "Channel": "Communication channel for the action",
   ...     },
   ...     rowname_col="Name",
   ... )
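The tuple-key convention for value mappings described above, e.g.
``{("Yes", "yes"): True}``, can be understood as flattening each tuple key into
individual entries before the metric thresholds are evaluated. A sketch of that
flattening (this helper is hypothetical, for illustration only, and is not part
of the pdstools API):

```python
def expand_value_mapping(value_mapping):
    """Flatten tuple keys so each individual column value maps directly
    to its metric value. Hypothetical helper, for illustration only."""
    flat = {}
    for key, metric_value in value_mapping.items():
        if isinstance(key, tuple):
            # a tuple key stands for several equivalent column values
            for column_value in key:
                flat[column_value] = metric_value
        else:
            flat[key] = metric_value
    return flat

# {("Yes", "yes", "YES"): True, "No": False} flattens to
# {"Yes": True, "yes": True, "YES": True, "No": False}
```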