Excess deaths due to COVID-19#

import pandas as pd
from pymc_extras.prior import Prior

import causalpy as cp
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format = 'retina'
seed = 42

Load data#

df = (
    cp.load_data("covid")
    .assign(date=lambda x: pd.to_datetime(x["date"]))
    .set_index("date")
)

treatment_time = pd.to_datetime("2020-01-01")
df.head()
             temp  deaths  year  month  t  pre
date
2006-01-01    3.8   49124  2006      1  0  True
2006-02-01    3.4   42664  2006      2  1  True
2006-03-01    3.9   49207  2006      3  2  True
2006-04-01    7.4   40645  2006      4  3  True
2006-05-01   10.7   42425  2006      5  4  True

The columns are:

  • date + year: self-explanatory

  • month: month of the year, numerically encoded (1-12). This needs to be treated as a categorical variable

  • temp: average UK temperature (Celsius)

  • t: time, as an integer index of months since the start of the series

  • pre: boolean flag indicating whether an observation is pre- or post-intervention
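
The derived columns (`year`, `month`, `t`, `pre`) can be reconstructed from the date index alone. A minimal sketch with pandas; the toy date range here is an assumption for illustration, not the actual dataset:

```python
import pandas as pd

# Toy monthly date index spanning a hypothetical intervention
dates = pd.date_range("2019-11-01", "2020-02-01", freq="MS")
df = pd.DataFrame(index=dates)
treatment_time = pd.to_datetime("2020-01-01")

df["year"] = df.index.year
df["month"] = df.index.month           # 1-12; treat as categorical in the model
df["t"] = range(len(df))               # integer time index
df["pre"] = df.index < treatment_time  # True before the intervention

print(df)
```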

Run the analysis#

In this example we standardize the data, so we need to be careful about how we interpret the inferred regression coefficients: the posterior predictions will also be in this standardized space.
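
Concretely, `standardize()` in the formula z-scores a variable: subtract its mean and divide by its standard deviation. A minimal sketch of the transform, assuming the population convention (`ddof=0`):

```python
import numpy as np

def standardize(x, ddof=0):
    """Z-score a variable: zero mean, unit standard deviation."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=ddof)

# First five values of the deaths column from df.head()
deaths = np.array([49124, 42664, 49207, 40645, 42425], dtype=float)
z = standardize(deaths)
print(z.mean(), z.std())  # approximately 0 and 1
```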

Note

The random_seed keyword argument for the PyMC sampler is not necessary. We use it here so that the results are reproducible.

model = cp.pymc_models.LinearRegression(
    sample_kwargs={"random_seed": seed},
    priors={
        "beta": Prior(
            "Normal",
            mu=[42_000, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
            sigma=10_000,
            dims=["treated_units", "coeffs"],
        ),
        "y_hat": Prior(
            "Normal",
            sigma=Prior("HalfNormal", sigma=10_000, dims=["treated_units"]),
            dims=["obs_ind", "treated_units"],
        ),
    },
)
result = cp.InterruptedTimeSeries(
    df,
    treatment_time,
    formula="standardize(deaths) ~ 0 + standardize(t) + C(month) + standardize(temp)",
    model=model,
)
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [beta, y_hat_sigma]

Sampling 4 chains for 1_000 tune and 1_000 draw iterations (4_000 + 4_000 draws total) took 4 seconds.
Sampling: [beta, y_hat, y_hat_sigma]
Sampling: [y_hat]
Sampling: [y_hat]
Sampling: [y_hat]
Sampling: [y_hat]
fig, ax = result.plot()
[Figure: interrupted time series plot of the model fit and the estimated causal impact of the intervention]
result.summary()
==================================Pre-Post Fit==================================
Formula: standardize(deaths) ~ 0 + standardize(t) + C(month) + standardize(temp)
Model coefficients:
    C(month)[1]        1.6, 94% HDI [1.1, 2]
    C(month)[2]        -0.2, 94% HDI [-0.66, 0.27]
    C(month)[3]        0.26, 94% HDI [-0.11, 0.64]
    C(month)[4]        -0.033, 94% HDI [-0.32, 0.25]
    C(month)[5]        -0.15, 94% HDI [-0.46, 0.15]
    C(month)[6]        -0.21, 94% HDI [-0.64, 0.2]
    C(month)[7]        -0.026, 94% HDI [-0.54, 0.5]
    C(month)[8]        -0.42, 94% HDI [-0.91, 0.056]
    C(month)[9]        -0.45, 94% HDI [-0.82, -0.062]
    C(month)[10]       -0.06, 94% HDI [-0.34, 0.23]
    C(month)[11]       -0.36, 94% HDI [-0.7, -0.029]
    C(month)[12]       0.072, 94% HDI [-0.36, 0.49]
    standardize(t)     0.23, 94% HDI [0.16, 0.31]
    standardize(temp)  -0.45, 94% HDI [-0.74, -0.15]
    y_hat_sigma        0.55, 94% HDI [0.5, 0.62]

Effect Summary Reporting#

For decision-making, you often need a concise summary of the causal effect with key statistics. The effect_summary() method provides a decision-ready report with average and cumulative effects, HDI intervals, tail probabilities, and relative effects, without any manual post-processing.
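
For intuition, a tail probability such as `p_gt_0` is simply the share of posterior draws of the effect that are greater than zero. A sketch on synthetic draws (the real method works on the model's posterior, not on random numbers):

```python
import numpy as np

rng = np.random.default_rng(42)
# Stand-in for posterior draws of the average effect (illustrative only)
effect_draws = rng.normal(loc=0.9, scale=0.1, size=4_000)

# Tail probability: fraction of draws above zero
p_gt_0 = (effect_draws > 0).mean()
print(f"posterior probability of an increase: {p_gt_0:.3f}")
```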

Note

In this example the data have been standardized, so the effect estimates are in standardized units. When interpreting the results, keep in mind that the effects are relative to the standardized scale of the outcome variable.
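
If you need effects on the original scale (deaths per month), one option is to rescale by the outcome's standard deviation, since standardize() divides by it. This is a sketch under that assumption; the numbers below are illustrative, and in practice you would compute the standard deviation from df["deaths"]:

```python
# Standardized average effect from the effect summary (illustrative value)
avg_effect_std = 0.91

# Hypothetical standard deviation of the deaths column on the original scale
deaths_sd = 4_500.0

# Undo the standardization of the outcome to get excess deaths per month
avg_effect_deaths = avg_effect_std * deaths_sd
print(f"average excess deaths per month: {avg_effect_deaths:,.0f}")
```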

# Generate effect summary for the full post-period
stats = result.effect_summary()
stats.table
             mean       median     hdi_lower  hdi_upper  p_gt_0  relative_mean  relative_hdi_lower  relative_hdi_upper
average       0.912301   0.914077   0.728512   1.110732     1.0     181.320390           86.884940          291.755568
cumulative   26.456723  26.508243  21.126834  32.211233     1.0     181.320393           86.884942          291.755575
# View the prose summary
print(stats.text)
Post-period (2020-01-01 00:00:00 to 2022-05-01 00:00:00), the average effect was 0.91 (95% HDI [0.73, 1.11]), with a posterior probability of an increase of 1.000. The cumulative effect was 26.46 (95% HDI [21.13, 32.21]); probability of an increase 1.000. Relative to the counterfactual, this equals 181.32% on average (95% HDI [86.88%, 291.76%]).
# You can also analyze a specific time window, e.g., the first 6 months of 2020
stats_window = result.effect_summary(
    window=(pd.to_datetime("2020-01-01"), pd.to_datetime("2020-06-30"))
)
stats_window.table
             mean       median     hdi_lower  hdi_upper  p_gt_0  relative_mean  relative_hdi_lower  relative_hdi_upper
average       1.984056   1.984917   1.792856   2.195756     1.0     292.810259          182.813253          414.928539
cumulative   11.904334  11.909502  10.757138  13.174536     1.0     292.810263          182.813255          414.928545
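
As a sanity check on the windowed summary, the cumulative effect should equal the average effect multiplied by the number of observations in the window (six monthly observations for January to June 2020):

```python
# Values from stats_window.table above
avg_effect = 1.984056
cumulative_effect = 11.904334
n_months = 6  # Jan-Jun 2020, monthly data

# The cumulative effect is the average effect summed over the window
print(avg_effect * n_months)  # matches cumulative_effect up to rounding
```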