Commit

update
abdullahau committed Feb 14, 2025
1 parent d447be0 commit ae4cd46
Showing 4 changed files with 309 additions and 0 deletions.
7 changes: 7 additions & 0 deletions 03c - Geocentric Models.ipynb
@@ -463463,6 +463463,13 @@
"\n",
"ax.set(ylabel='Day of Year', xlabel='Year');"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This looks a lot like our original model, except the left-hand side of the spline is pulled down. This is likely due to the prior on `w`. The prior is centered on 0, but that assumes an intercept is present (i.e., the curves of the spline average a deviation of 0 from the mean). Without the intercept, the prior drags the line down toward actual zero where only the first basis function is non-zero."
]
}
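The mechanism described above can be sketched with ridge regression, the penalized-least-squares analogue of a zero-centered Normal prior on the spline weights. This is an illustrative toy (simulated data, an assumed penalty strength `lam`), not the notebook's actual model: without an intercept, every weight is shrunk toward zero, so the fit sags toward zero; with an unpenalized intercept column, the intercept absorbs the mean and only the deviations shrink.

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 200)
y = 100.0 + np.sin(x) + rng.normal(0.0, 0.5, x.size)  # data far above zero

k = 3  # cubic B-splines with clamped (repeated) boundary knots
t = np.concatenate([np.zeros(k), np.linspace(0.0, 10.0, 8), np.full(k, 10.0)])
B = BSpline.design_matrix(x, t, k).toarray()  # (200, 10) basis matrix

lam = 50.0  # penalty strength, standing in for a Normal(0, .) prior on w


def ridge_fit(X, y, penalized):
    # solve (X'X + lam * diag(penalized)) w = X'y, then return fitted values
    P = lam * np.diag(penalized.astype(float))
    w = np.linalg.solve(X.T @ X + P, X.T @ y)
    return X @ w


# no intercept: all weights shrink toward zero, so predictions sag toward 0
mu_no_intercept = ridge_fit(B, y, np.ones(B.shape[1], dtype=bool))

# unpenalized intercept column: it absorbs the mean; only deviations shrink
X = np.column_stack([np.ones_like(x), B])
penalized = np.ones(X.shape[1], dtype=bool)
penalized[0] = False
mu_intercept = ridge_fit(X, y, penalized)

print(mu_no_intercept.mean(), mu_intercept.mean())
```

The fit without an intercept lands far below the data mean of ~100, while the fit with an unpenalized intercept tracks it, mirroring the pulled-down left edge seen in the plot above.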
],
"metadata": {
231 changes: 231 additions & 0 deletions 05 - The Many Variables & The Suprious Waffles.ipynb
@@ -0,0 +1,231 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 05 - The Many Variables & The Spurious Waffles"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Imports"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[31mModule aliases imported by init_notebook.py:\n",
"--------------------------------------------\n",
"\u001b[32mimport\u001b[34m numpy \u001b[32mas\u001b[34m np\n",
"\n",
"\u001b[32mimport\u001b[34m pandas \u001b[32mas\u001b[34m pd\n",
"\n",
"\u001b[32mimport\u001b[34m statsmodels.formula.api \u001b[32mas\u001b[34m smf\n",
"\n",
"\u001b[32mimport\u001b[34m pymc \u001b[32mas\u001b[34m pm\n",
"\n",
"\u001b[32mimport\u001b[34m xarray \u001b[32mas\u001b[34m xr\n",
"\n",
"\u001b[32mimport\u001b[34m arviz \u001b[32mas\u001b[34m az\n",
"\n",
"\u001b[32mimport\u001b[34m utils \u001b[32mas\u001b[34m utils\n",
"\n",
"\u001b[32mimport\u001b[34m seaborn \u001b[32mas\u001b[34m sns\n",
"\n",
"\u001b[32mfrom\u001b[34m scipy \u001b[32mimport\u001b[34m stats \u001b[32mas\u001b[34m stats\n",
"\n",
"\u001b[32mfrom\u001b[34m matplotlib \u001b[32mimport\u001b[34m pyplot \u001b[32mas\u001b[34m plt\n",
"\n",
"\u001b[31mWatermark:\n",
"----------\n",
"\u001b[34mLast updated: 2025-02-14T19:14:08.536139+04:00\n",
"\n",
"Python implementation: CPython\n",
"Python version : 3.12.8\n",
"IPython version : 8.32.0\n",
"\n",
"Compiler : Clang 18.1.8 \n",
"OS : Darwin\n",
"Release : 24.3.0\n",
"Machine : arm64\n",
"Processor : arm\n",
"CPU cores : 8\n",
"Architecture: 64bit\n",
"\n",
"\u001b[34mxarray : 2025.1.2\n",
"watermark : 2.5.0\n",
"pymc : 5.20.1\n",
"arviz : 0.20.0\n",
"scipy : 1.12.0\n",
"numpy : 1.26.4\n",
"matplotlib : 3.10.0\n",
"seaborn : 0.13.2\n",
"statsmodels: 0.14.4\n",
"pandas : 2.2.3\n",
"\n"
]
}
],
"source": [
"# ruff: noqa: F405\n",
"from init_notebook import *\n",
"\n",
"%config InlineBackend.figure_formats = ['svg']"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Introduction"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"One of the most reliable sources of waffles in North America, if not the entire world, is a Waffle House diner. Waffle House is nearly always open, even just after a hurricane. Most diners invest in disaster preparedness, including having their own electrical generators. As a consequence, the United States’ disaster relief agency (FEMA) informally uses Waffle House as an index of disaster severity. If the Waffle House is closed, that’s a serious event.\n",
"\n",
"It is ironic then that steadfast Waffle House is associated with the nation’s highest divorce rates. States with many Waffle Houses per person, like Georgia and Alabama, also have some of the highest divorce rates in the United States. The lowest divorce rates are found where there are zero Waffle Houses. Could always-available waffles and hash brown potatoes put marriage at risk?\n",
"\n",
"Probably not. This is an example of a misleading correlation. No one thinks there is any plausible mechanism by which Waffle House diners make divorce more likely. Instead, when we see a correlation of this kind, we immediately start asking about other variables that are really driving the relationship between waffles and divorce. In this case, Waffle House began in Georgia in the year 1955. Over time, the diners spread across the Southern United States, remaining largely within it. So Waffle House is associated with the South. Divorce is not a uniquely Southern institution, but the Southern United States has some of the highest divorce rates in the nation. So it’s probably just an accident of history that Waffle House and high divorce rates both occur in the South."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Such accidents are commonplace. It is not surprising that Waffle House is correlated with divorce, because correlation in general is not surprising. In large data sets, every pair of variables has a statistically discernible non-zero correlation. But since most correlations do not indicate causal relationships, we need tools for distinguishing mere association from evidence of causation. This is why so much effort is devoted to **multiple regression**, using more than one predictor variable to simultaneously model an outcome. Reasons given for multiple regression models include:\n",
"\n",
"1) Statistical “control” for **confounds**. A confound is something that misleads us about a causal influence—there will be a more precise definition in the next chapter. The spurious waffles and divorce correlation is one type of confound, where southern-ness makes a variable with no real importance (Waffle House density) appear to be important. But confounds are diverse. They can hide important effects just as easily as they can produce false ones.\n",
"2) **Multiple and complex causation**. A phenomenon may arise from multiple simultaneous causes, and causes can cascade in complex ways. And since one cause can hide another, they must be measured simultaneously.\n",
"3) **Interactions**. The importance of one variable may depend upon another. For example, plants benefit from both light and water. But in the absence of either, the other is no benefit at all. Such interactions occur very often. Effective inference about one variable will often depend upon consideration of others."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this chapter, we begin to deal with the first two of these, using multiple regression to deal with simple confounds and to take multiple measurements of association. You’ll see how to include any arbitrary number of *main effects* in your linear model of the Gaussian mean. These main effects are additive combinations of variables, the simplest type of multiple variable model. We’ll focus on two valuable things these models can help us with: (1) revealing *spurious* correlations like the Waffle House correlation with divorce and (2) revealing important correlations that may be masked by unrevealed correlations with other variables. Along the way, you’ll meet **categorical variables**, which require special handling compared to continuous variables.\n",
"\n",
"However, multiple regression can be worse than useless, if we don’t know how to use it. Just adding variables to a model can do a lot of damage. In this chapter, we’ll begin to think formally about **causal inference** and introduce graphical causal models as a way to design and interpret regression models. The next chapter continues on this theme, describing some serious and common dangers of adding predictor variables, ending with a unifying framework for understanding the examples in both this chapter and the next."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Causal inference**. Despite its central importance, there is no unified approach to causal inference yet in the sciences. There are even people who argue that cause does not really exist; it’s just a psychological illusion. And in complex dynamical systems, everything seems to cause everything else. “Cause” loses intuitive value. About one thing, however, there is general agreement: Causal inference always depends upon unverifiable assumptions. Another way to say this is that it’s always possible to imagine some way in which your inference about cause is mistaken, no matter how careful the design or analysis. A lot can be accomplished, despite this barrier."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Spurious Association"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let’s leave waffles behind, at least for the moment. An example that is easier to understand is the correlation between divorce rate and marriage rate. The rate at which adults marry is a great predictor of divorce rate, as seen in the left-hand plot below. But does marriage cause divorce? In a trivial sense it obviously does: One cannot get a divorce without first getting married. But there’s no reason high marriage rate must cause more divorce. It’s easy to imagine high marriage rate indicating high cultural valuation of marriage and therefore being associated with low divorce rate.\n",
"\n",
"Another predictor associated with divorce is the median age at marriage, displayed in the right-hand plot below. Age at marriage is also a good predictor of divorce rate — higher age at marriage predicts less divorce. But there is no reason this has to be causal, either, unless age at marriage is very late and the spouses do not live long enough to get a divorce."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let’s load these data and standardize the variables of interest:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# load data\n",
"d = pd.read_csv(\"data/WaffleDivorce.csv\", sep=';')\n",
"\n",
"# standardize variables\n",
"d['D'] = utils.standardize(d.Divorce)\n",
"d['M'] = utils.standardize(d.Marriage)\n",
"d['A'] = utils.standardize(d.MedianAgeMarriage)"
]
},
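`utils.standardize` is a helper from this repository; a minimal equivalent, under the assumption that it simply centers a series on zero and scales it to unit standard deviation (matching `standardize` in the rethinking R package), would be:

```python
import pandas as pd


def standardize(x: pd.Series) -> pd.Series:
    """Center on 0, scale to standard deviation 1 (sketch of utils.standardize)."""
    return (x - x.mean()) / x.std()


# example on a few median-age-at-marriage-like values
s = standardize(pd.Series([24.3, 25.2, 26.8, 23.2]))
print(s.mean(), s.std())
```

After this transform, a value of the new variable is read as "this many standard deviations from the mean", which is what makes the priors below easy to reason about.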
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Median age of marriage and divorce rate linear regression model:\n",
"\n",
"$$\n",
"\\begin{align*}\n",
" D_i &\\sim \\text{Normal}(\\mu_i,\\sigma) \\\\\n",
" \\mu_i &= \\alpha + \\beta_A A_{i} \\\\\n",
" \\alpha &\\sim \\text{Normal}(0, 0.2) \\\\\n",
" \\beta_A &\\sim \\text{Normal}(0,0.5) \\\\\n",
" \\sigma &\\sim \\text{Exponential}(1)\n",
"\\end{align*}\n",
"$$\n",
"\n",
"$D_i$ is the standardized (zero centered, standard deviation one) divorce rate for State $i$, and $A_i$ is State $i$’s standardized median age at marriage. \n",
"\n",
"What about those priors? Since the outcome and the predictor are both standardized, the intercept $\\alpha$ should end up very close to zero. What does the prior slope $\\beta_A$ imply? If $\\beta_A = 1$, that would imply that a change of one standard deviation in age at marriage is associated likewise with a change of one standard deviation in divorce. To know whether or not that is a strong relationship, you need to know how big a standard deviation of age at marriage is:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1.2436303013880823"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"d.MedianAgeMarriage.std()"
]
}
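To see what these priors imply jointly, one can simulate regression lines from them over the standardized range of $A$, a quick numpy sketch of the graphical prior check the book performs:

```python
import numpy as np

rng = np.random.default_rng(42)
n_lines = 50
a = rng.normal(0.0, 0.2, n_lines)   # prior draws for the intercept
bA = rng.normal(0.0, 0.5, n_lines)  # prior draws for the slope
A_grid = np.array([-2.0, 2.0])      # +/- 2 standard deviations of A

# one prior regression line per row, evaluated at the two endpoints
lines = a[:, None] + bA[:, None] * A_grid

# fraction of endpoint values within +/- 2 sd of the (standardized) outcome
print((np.abs(lines) < 2.0).mean())
```

Most of these lines stay within plausible standardized outcome values, which is the sense in which Normal(0, 0.5) is a weakly regularizing prior for the slope here.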
],
"metadata": {
"kernelspec": {
"display_name": "base",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.8"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
20 changes: 20 additions & 0 deletions R_Notebook.ipynb
@@ -1685,6 +1685,26 @@
"plot( NULL , xlim=range(d2$temp) , ylim=c(0,1) , xlab=\"year\" , ylab=\"basis\" )\n",
"for ( i in 1:ncol(B) ) lines( d2$temp , B[,i] )"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"vscode": {
"languageId": "r"
}
},
"outputs": [],
"source": [
"# load data and copy\n",
"library(rethinking)\n",
"data(WaffleDivorce)\n",
"d <- WaffleDivorce\n",
"# standardize variables\n",
"d$D <- standardize( d$Divorce )\n",
"d$M <- standardize( d$Marriage )\n",
"d$A <- standardize( d$MedianAgeMarriage )"
]
}
],
"metadata": {
51 changes: 51 additions & 0 deletions data/WaffleDivorce.csv
@@ -0,0 +1,51 @@
Location;Loc;Population;MedianAgeMarriage;Marriage;Marriage SE;Divorce;Divorce SE;WaffleHouses;South;Slaves1860;Population1860;PropSlaves1860
Alabama;AL;4.78;25.3;20.2;1.27;12.7;0.79;128;1;435080;964201;0.45
Alaska;AK;0.71;25.2;26.0;2.93;12.5;2.05;0;0;0;0;0
Arizona;AZ;6.33;25.8;20.3;0.98;10.8;0.74;18;0;0;0;0
Arkansas;AR;2.92;24.3;26.4;1.70;13.5;1.22;41;1;111115;435450;0.26
California;CA;37.25;26.8;19.1;0.39;8.0;0.24;0;0;0;379994;0
Colorado;CO;5.03;25.7;23.5;1.24;11.6;0.94;11;0;0;34277;0
Connecticut;CT;3.57;27.6;17.1;1.06;6.7;0.77;0;0;0;460147;0
Delaware;DE;0.90;26.6;23.1;2.89;8.9;1.39;3;0;1798;112216;0.016
District of Columbia;DC;0.60;29.7;17.7;2.53;6.3;1.89;0;0;0;75080;0
Florida;FL;18.80;26.4;17.0;0.58;8.5;0.32;133;1;61745;140424;0.44
Georgia;GA;9.69;25.9;22.1;0.81;11.5;0.58;381;1;462198;1057286;0.44
Hawaii;HI;1.36;26.9;24.9;2.54;8.3;1.27;0;0;0;0;0
Idaho;ID;1.57;23.2;25.8;1.84;7.7;1.05;0;0;0;0;0
Illinois;IL;12.83;27.0;17.9;0.58;8.0;0.45;2;0;0;1711951;0
Indiana;IN;6.48;25.7;19.8;0.81;11.0;0.63;17;0;0;1350428;0
Iowa;IA;3.05;25.4;21.5;1.46;10.2;0.91;0;0;0;674913;0
Kansas;KS;2.85;25.0;22.1;1.48;10.6;1.09;6;0;2;107206;0.000019
Kentucky;KY;4.34;24.8;22.2;1.11;12.6;0.75;64;1;225483;1155684;0
Louisiana;LA;4.53;25.9;20.6;1.19;11.0;0.89;66;1;331726;708002;0.47
Maine;ME;1.33;26.4;13.5;1.40;13.0;1.48;0;0;0;628279;0
Maryland;MD;5.77;27.3;18.3;1.02;8.8;0.69;11;0;87189;687049;0.13
Massachusetts;MA;6.55;28.5;15.8;0.70;7.8;0.52;0;0;0;1231066;0
Michigan;MI;9.88;26.4;16.5;0.69;9.2;0.53;0;0;0;749113;0
Minnesota;MN;5.30;26.3;15.3;0.77;7.4;0.60;0;0;0;172023;0
Mississippi;MS;2.97;25.8;19.3;1.54;11.1;1.01;72;1;436631;791305;0.55
Missouri;MO;5.99;25.6;18.6;0.81;9.5;0.67;39;1;114931;1182012;0.097
Montana;MT;0.99;25.7;18.5;2.31;9.1;1.71;0;0;0;0;0
Nebraska;NE;1.83;25.4;19.6;1.44;8.8;0.94;0;0;15;28841;0.00052
New Hampshire;NH;1.32;26.8;16.7;1.76;10.1;1.61;0;0;0;326073;0
New Jersey;NJ;8.79;27.7;14.8;0.59;6.1;0.46;0;0;18;672035;0.000027
New Mexico;NM;2.06;25.8;20.4;1.90;10.2;1.11;2;0;0;93516;0
New York;NY;19.38;28.4;16.8;0.47;6.6;0.31;0;0;0;3880735;0
North Carolina;NC;9.54;25.7;20.4;0.98;9.9;0.48;142;1;331059;992622;0.33
North Dakota;ND;0.67;25.3;26.7;2.93;8.0;1.44;0;0;0;0;0
Ohio;OH;11.54;26.3;16.9;0.61;9.5;0.45;64;0;0;2339511;0
Oklahoma;OK;3.75;24.4;23.8;1.29;12.8;1.01;16;0;0;0;0
Oregon;OR;3.83;26.0;18.9;1.10;10.4;0.80;0;0;0;52465;0
Pennsylvania;PA;12.70;27.1;15.5;0.48;7.7;0.43;11;0;0;2906215;0
Rhode Island;RI;1.05;28.2;15.0;2.11;9.4;1.79;0;0;0;174620;0
South Carolina;SC;4.63;26.4;18.1;1.18;8.1;0.70;144;1;402406;703708;0.57
South Dakota;SD;0.81;25.6;20.1;2.64;10.9;2.50;0;0;0;4837;0
Tennessee;TN;6.35;25.2;19.4;0.85;11.4;0.75;103;1;275719;1109801;0.2
Texas;TX;25.15;25.2;21.5;0.61;10.0;0.35;99;1;182566;604215;0.30
Utah;UT;2.76;23.3;29.6;1.77;10.2;0.93;0;0;0;40273;0
Vermont;VT;0.63;26.9;16.4;2.40;9.6;1.87;0;0;0;315098;0
Virginia;VA;8.00;26.4;20.5;0.83;8.9;0.52;40;1;490865;1219630;0.40
Washington;WA;6.72;25.9;21.4;1.00;10.0;0.65;0;0;0;11594;0
West Virginia;WV;1.85;25.0;22.2;1.69;10.9;1.34;4;1;18371;376688;0.049
Wisconsin;WI;5.69;26.3;17.2;0.79;8.3;0.57;0;0;0;775881;0
Wyoming;WY;0.56;24.2;30.7;3.92;10.3;1.9;0;0;0;0;0
