# (PART) Conclusion {-}
# Tell Your Story with Data {#thinking-with-data}
```{r setup_thinking_with_data, include=FALSE, purl=FALSE}
# Used to define Learning Check numbers:
chap <- 11
lc <- 0
# Set R code chunk defaults:
opts_chunk$set(
echo = TRUE,
eval = TRUE,
warning = FALSE,
message = TRUE,
tidy = FALSE,
purl = TRUE,
out.width = "\\textwidth",
fig.height = 4,
fig.align = "center"
)
# Set output digit precision
options(scipen = 99, digits = 3)
# Set random number generator seed value for replicable pseudorandomness.
set.seed(76)
```
Recall that in the Preface and at the end of chapters throughout this book, we displayed the "*ModernDive* flowchart" mapping your journey through this book.
(ref:finalflowchart) *ModernDive* flowchart.
```{r moderndive-figure-conclusion, echo=FALSE, out.width="100%", out.height="100%", fig.cap="(ref:finalflowchart)", purl=FALSE}
include_graphics("images/flowcharts/flowchart/flowchart.002.png")
```
## Review
Let's go over a refresher of what you've covered so far. You first got started with data in Chapter \@ref(getting-started) where you learned about the difference between R and RStudio, started coding in R, installed and loaded your first R packages, and explored your first dataset: all domestic departure `flights` from a major New York City airport in 2013. Then you covered the following three parts of this book (Parts 2 and 4 are combined into a single portion):
1. Data science with `tidyverse`. You assembled your data science toolbox using `tidyverse` packages. In particular, you
+ Ch.\@ref(viz): Visualized data using the `ggplot2` package.
+ Ch.\@ref(wrangling): Wrangled data using the `dplyr` package.
+ Ch.\@ref(tidy): Learned about the concept of "tidy" data as a standardized data frame input and output format for all packages in the `tidyverse`. Furthermore, you learned how to import spreadsheet files into R using the `readr` package.
2. Data modeling with `moderndive`. Using these data science tools and helper functions from the `moderndive` package, you fit your first data models. In particular, you
+ Ch.\@ref(regression): Discovered basic regression models with only one explanatory variable.
+ Ch.\@ref(multiple-regression): Examined multiple regression models with more than one explanatory variable.
3. Statistical inference with `infer`. Once again using your newly acquired data science tools, you unpacked statistical inference using the `infer` package. In particular, you
+ Ch.\@ref(sampling): Learned about the role that sampling variability plays in statistical inference and the role that sample size plays in this sampling variability.
+ Ch.\@ref(confidence-intervals): Constructed confidence intervals using bootstrapping.
+ Ch.\@ref(hypothesis-testing): Conducted hypothesis tests using permutation.
4. Data modeling with `moderndive` (revisited): Armed with your understanding of statistical inference, you revisited and reviewed the models you constructed in Ch.\@ref(regression) and Ch.\@ref(multiple-regression). In particular, you
+ Ch.\@ref(inference-for-regression): Interpreted confidence intervals and hypothesis tests in a regression setting.
We've guided you through your first experiences of ["thinking with data,"](https://arxiv.org/pdf/1410.3127.pdf) an expression originally coined by \index{Lambert, Diane} Dr.\ Diane Lambert. The philosophy underlying this expression guided your path in the flowchart in Figure \@ref(fig:moderndive-figure-conclusion).
This philosophy is also well-summarized in ["Practical Data Science for Stats"](https://peerj.com/collections/50-practicaldatascistats/): a collection of pre-prints focusing on the practical side of data science workflows and statistical analysis curated by [Dr.\ Jennifer Bryan](https://twitter.com/jennybryan) \index{Bryan, Jenny} and [Dr.\ Hadley Wickham](https://twitter.com/hadleywickham). They quote:
> There are many aspects of day-to-day analytical work that are almost absent from the conventional statistics literature and curriculum. And yet these activities account for a considerable share of the time and effort of data analysts and applied statisticians. The goal of this collection is to increase the visibility and adoption of modern data analytical workflows. We aim to facilitate the transfer of tools and frameworks between industry and academia, between software engineering and statistics and computer science, and across different domains.
In other words, to be equipped to "think with data" in the 21st century, analysts need practice going through the ["data/science pipeline"](http://r4ds.had.co.nz/explore-intro.html) we saw in the Preface (re-displayed in Figure \@ref(fig:pipeline-figure-conclusion)). It is our opinion that, for too long, statistics education has only focused on parts of this pipeline, instead of going through it in its *entirety*.
```{r pipeline-figure-conclusion, fig.cap="Data/science pipeline.", out.height="70%", out.width="70%", echo=FALSE, purl = FALSE}
include_graphics("images/r4ds/data_science_pipeline.png")
```
To conclude this book, we'll present you with some additional case studies of working with data. In Section \@ref(seattle-house-prices) we'll take you through a full pass of the "data/science pipeline" in order to analyze the sale price of houses in Seattle, WA, USA. In Section \@ref(data-journalism), we'll present you with some examples of effective data storytelling drawn from the data journalism website, [FiveThirtyEight.com](https://fivethirtyeight.com/)\index{FiveThirtyEight}. We present these case studies to you because we believe that you should not only be able to "think with data," but also be able to "tell your story with data." Let's explore how to do this!
### Needed packages {-#story-packages}
Let's load all the packages needed for this chapter (this assumes you've already installed them). Read Section \@ref(packages) for information on how to install and load R packages.
```{r, eval=FALSE}
library(tidyverse)
library(moderndive)
library(skimr)
library(fivethirtyeight)
```
```{r, echo=FALSE, message=FALSE, purl=TRUE}
# The code presented to the reader in the chunk above is different than the code
# in this chunk that is actually run to build the book. In particular we do not
# load the skimr package.
#
# This is because skimr v1.0.6 which we used for the book causes all
# kable() code to break for the remaining chapters in the book. v2 might
# fix these issues:
# https://github.com/moderndive/ModernDive_book/issues/271
# As a workaround for v1 of ModernDive, all skimr::skim() output in this chapter
# has been hard coded.
library(tidyverse)
library(moderndive)
# library(skimr)
library(fivethirtyeight)
```
```{r message=FALSE, echo=FALSE, purl=FALSE}
# Packages needed internally, but not in text.
library(kableExtra)
library(patchwork)
library(scales)
```
## Case study: Seattle house prices {#seattle-house-prices}
[Kaggle.com](https://www.kaggle.com/) is a machine learning and predictive modeling competition website that hosts datasets uploaded by companies, governmental organizations, and other individuals. One of their datasets is the ["House Sales in King County, USA"](https://www.kaggle.com/harlfoxem/housesalesprediction). It consists of sale prices of homes sold between May 2014 and May 2015 in King County, Washington, USA, which includes the greater Seattle metropolitan area. This dataset is in the `house_prices` data frame included in the `moderndive` package.
The dataset consists of `r house_prices %>% nrow() %>% comma()` houses and `r house_prices %>% ncol()` variables describing these houses (for a full list and description of these variables, see the help file by running `?house_prices` in the console). In this case study, we'll create a multiple regression model where:
- The outcome variable $y$ is the sale `price` of houses.
- Two explanatory variables:
1. A numerical explanatory variable $x_1$: house size `sqft_living` as measured in square feet of living space. Note that 1 square foot is about 0.09 square meters.
1. A categorical explanatory variable $x_2$: house `condition`, a categorical variable with five levels where `1` indicates "poor" and `5` indicates "excellent."
### Exploratory data analysis: Part I {#house-prices-EDA-I}
As we've said numerous times throughout this book, a crucial first step when presented with data is to perform an exploratory data analysis (EDA). Exploratory data analysis can give you a sense of your data, help identify issues with your data, bring to light any outliers, and help inform model construction.
Recall the three common steps in an exploratory data analysis we introduced in Subsection \@ref(model1EDA):
1. Looking at the raw data values.
1. Computing summary statistics.
1. Creating data visualizations.
First, let's look at the raw data using `View()` to bring up RStudio's spreadsheet viewer and the `glimpse()` function from the `dplyr` package:
```{r, eval=FALSE}
View(house_prices)
glimpse(house_prices)
```
```{r, echo=FALSE, purl=FALSE}
glimpse(house_prices)
```
Here are some questions you can ask yourself at this stage of an EDA: Which variables are numerical? Which are categorical? For the categorical variables, what are their levels? Besides the variables we'll be using in our regression model, what other variables do you think would be useful to use in a model for house price?
Observe, for example, that while the `condition` variable has values `1` through `5`, these are saved in R as `fct` standing for "factors." This is one of R's ways of saving categorical variables. So you should think of these as the "labels" `1` through `5` and not the numerical values `1` through `5`.
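For instance, you can confirm how R has stored this variable with a quick check (a small sketch using base R's `class()` and `levels()` functions):
```{r, eval=FALSE}
# Confirm that condition is stored as a factor with the labels "1" through "5":
class(house_prices$condition)
levels(house_prices$condition)
```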
Let's now perform the second step in an EDA: computing summary statistics. Recall from Section \@ref(summarize) that *summary statistics* are single numerical values that summarize a large number of values. Examples of summary statistics include the mean, the median, the standard deviation, and various percentiles.
We could do this using the `summarize()` function in the `dplyr` package along with R's built-in *summary functions*, like `mean()` and `median()`. However, recall in Section \@ref(mutate), we saw the following code that computes a variety of summary statistics of the variable `gain`, which is the amount of time that a flight makes up mid-air:
```{r, eval=FALSE}
gain_summary <- flights %>%
summarize(
min = min(gain, na.rm = TRUE),
q1 = quantile(gain, 0.25, na.rm = TRUE),
median = quantile(gain, 0.5, na.rm = TRUE),
q3 = quantile(gain, 0.75, na.rm = TRUE),
max = max(gain, na.rm = TRUE),
mean = mean(gain, na.rm = TRUE),
sd = sd(gain, na.rm = TRUE),
missing = sum(is.na(gain))
)
```
To repeat this for all three `price`, `sqft_living`, and `condition` variables would be tedious to code up. So instead, let's use the convenient `skim()` function from the `skimr` package we first used in Subsection \@ref(model4EDA)\index{R packages!skimr!skim()}, being sure to only `select()` the variables of interest for our model:
```{r, eval=FALSE}
house_prices %>%
select(price, sqft_living, condition) %>%
skim()
```
```
Skim summary statistics
n obs: 21613
n variables: 3
── Variable type:factor
variable missing complete n n_unique top_counts ordered
condition 0 21613 21613 5 3: 14031, 4: 5679, 5: 1701, 2: 172 FALSE
── Variable type:integer
variable missing complete n mean sd p0 p25 p50 p75 p100
sqft_living 0 21613 21613 2079.9 918.44 290 1427 1910 2550 13540
── Variable type:numeric
variable missing complete n mean sd p0 p25 p50 p75 p100
price 0 21613 21613 540088.14 367127.2 75000 321950 450000 645000 7700000
```
Observe that the mean `price` of `r mean(house_prices$price) %>% dollar()` is larger than the median of `r median(house_prices$price) %>% dollar()`. This is because a small number of very expensive houses are inflating the average. In other words, there are "outlier" house prices in our dataset. (This fact will become even more apparent when we create our visualizations next.)
However, the median is not as sensitive to such outlier house prices. This is why news about the real estate market generally reports median house prices and not mean/average house prices. We say here that the median is more *robust to outliers* than the mean. Similarly, while the standard deviation and the interquartile range (IQR) are both measures of spread and variability, the IQR is more *robust to outliers*.\index{outliers}
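To see this robustness for yourself, here is a small sketch comparing the mean to the median and the standard deviation to the IQR for `price` (the values match those reported by `skim()` above):
```{r, eval=FALSE}
# The mean and standard deviation are pulled upward by the outlier prices,
# whereas the median and IQR are much less affected:
house_prices %>%
  summarize(
    mean_price = mean(price),
    median_price = median(price),
    sd_price = sd(price),
    IQR_price = IQR(price)
  )
```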
Let's now perform the last of the three common steps in an exploratory data analysis: creating data visualizations. Let's first create *univariate* visualizations. These are plots focusing on a single variable at a time. Since `price` and `sqft_living` are numerical variables, we can visualize their distributions using a `geom_histogram()` as seen in Section \@ref(histograms) on histograms. On the other hand, since `condition` is categorical, we can visualize its distribution using a `geom_bar()`. Recall from Section \@ref(geombar) on barplots that since `condition` is not "pre-counted", we use a `geom_bar()` and not a `geom_col()`.
```{r, eval=FALSE, message=FALSE}
# Histogram of house price:
ggplot(house_prices, aes(x = price)) +
geom_histogram(color = "white") +
labs(x = "price (USD)", title = "House price")
# Histogram of sqft_living:
ggplot(house_prices, aes(x = sqft_living)) +
geom_histogram(color = "white") +
labs(x = "living space (square feet)", title = "House size")
# Barplot of condition:
ggplot(house_prices, aes(x = condition)) +
geom_bar() +
labs(x = "condition", title = "House condition")
```
In Figure \@ref(fig:house-prices-viz), we display all three of these visualizations at once.
```{r house-prices-viz, echo=FALSE, message=FALSE, fig.cap="Exploratory visualizations of Seattle house prices data.", fig.height=4.8, purl=FALSE}
p1 <- ggplot(house_prices, aes(x = price)) +
geom_histogram(color = "white") +
labs(x = "price (USD)", title = "House price")
p2 <- ggplot(house_prices, aes(x = sqft_living)) +
geom_histogram(color = "white") +
labs(x = "living space (square feet)", title = "House size")
p3 <- ggplot(house_prices, aes(x = condition)) +
geom_bar() +
labs(x = "condition", title = "House condition")
p1 + p2 + p3 + plot_layout(ncol = 2)
```
First, observe in the bottom plot that most houses are of condition "3", with a few more of conditions "4" and "5", and almost none that are "1" or "2".
Next, observe in the histogram for `price` in the top-left plot that a majority of houses are less than two million dollars. Observe also that the x-axis stretches out to 8 million dollars, even though there do not appear to be any houses close to that price. This is because there are a *very small number* of houses with prices closer to 8 million. These are the outlier house prices we mentioned earlier. We say that the variable `price` is *right-skewed* as exhibited by the long right tail.\index{skew}
Further, observe in the histogram of `sqft_living` in the top-right plot that most houses appear to have less than 5000 square feet of living space. For comparison, a football field in the US is about 57,600 square feet, whereas a standard soccer/association football field is about 64,000 square feet. Observe that this variable is also right-skewed, although not as drastically as the `price` variable.
For both the `price` and `sqft_living` variables, the right-skew makes distinguishing houses at the lower end of the x-axis hard. This is because the scale of the x-axis is compressed by the small number of very expensive and very large houses.
So what can we do about this skew? Let's apply a *log10 transformation* to these variables. If you are unfamiliar with such transformations, we highly recommend you read Appendix \@ref(appendix-log10-transformations) on logarithmic (log) transformations.\index{log transformations} In summary, log transformations allow us to alter the scale of a variable to focus on *multiplicative* changes instead of *additive* changes. In other words, they shift the view to be on *relative* changes instead of *absolute* changes. Such multiplicative/relative changes are also called changes in *orders of magnitude*.
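As a quick numerical illustration of these orders of magnitude (a small sketch): each one-unit increase on the log10 scale corresponds to a tenfold multiplicative increase on the original scale.
```{r, eval=FALSE}
# Each one-unit increase in log10 corresponds to multiplying by 10:
log10(100)    # 2
log10(1000)   # 3
log10(10000)  # 4
10^3          # undoing the transformation returns 1000
```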
Let's create new log10 transformed versions of the right-skewed variables `price` and `sqft_living` using the `mutate()` function from Section \@ref(mutate), but we'll give the latter the name `log10_size`, which is shorter and easier to understand than the name `log10_sqft_living`.
```{r}
house_prices <- house_prices %>%
mutate(
log10_price = log10(price),
log10_size = log10(sqft_living)
)
```
Let's display the before and after effects of this transformation on these variables for only the first 10 rows of `house_prices`:
```{r}
house_prices %>%
select(price, log10_price, sqft_living, log10_size)
```
Observe in particular the houses in the sixth and third rows. The house in the sixth row has `price` \$`r house_prices$price[6] %>% comma()`, which is just above one million dollars. Since $10^6$ is one million, its `log10_price` is around `r house_prices$log10_price[6] %>% round(2)`.
Contrast this with all other houses with `log10_price` less than six, since they all have `price` less than \$1,000,000. The house in the third row is the only house with `sqft_living` less than 1000. Since $1000 = 10^3$, it's the lone house with `log10_size` less than 3.
Let's now visualize the before and after effects of this transformation for `price` in Figure \@ref(fig:log10-price-viz).
```{r, eval=FALSE}
# Before log10 transformation:
ggplot(house_prices, aes(x = price)) +
geom_histogram(color = "white") +
labs(x = "price (USD)", title = "House price: Before")
# After log10 transformation:
ggplot(house_prices, aes(x = log10_price)) +
geom_histogram(color = "white") +
labs(x = "log10 price (USD)", title = "House price: After")
```
```{r log10-price-viz, echo=FALSE, message=FALSE, fig.cap="House price before and after log10 transformation.", fig.height=2.3, purl=FALSE}
p1 <- ggplot(house_prices, aes(x = price)) +
geom_histogram(color = "white") +
labs(x = "price (USD)", title = "House price: Before")
p2 <- ggplot(house_prices, aes(x = log10_price)) +
geom_histogram(color = "white") +
labs(x = "log10 price (USD)", title = "House price: After")
p1 + p2
```
Observe that after the transformation, the distribution is much less skewed, and in this case, more symmetric and more bell-shaped. Now you can more easily distinguish the lower priced houses.
Let's do the same for house size, where the variable `sqft_living` was log10 transformed to `log10_size`.
```{r, eval=FALSE}
# Before log10 transformation:
ggplot(house_prices, aes(x = sqft_living)) +
geom_histogram(color = "white") +
labs(x = "living space (square feet)", title = "House size: Before")
# After log10 transformation:
ggplot(house_prices, aes(x = log10_size)) +
geom_histogram(color = "white") +
labs(x = "log10 living space (square feet)", title = "House size: After")
```
```{r log10-size-viz, echo=FALSE, message=FALSE, fig.cap="House size before and after log10 transformation.", fig.height=2.3, purl=FALSE}
p1 <- ggplot(house_prices, aes(x = sqft_living)) +
geom_histogram(color = "white") +
labs(
x = "living space (square feet)",
title = "House size: Before"
)
p2 <- ggplot(house_prices, aes(x = log10_size)) +
geom_histogram(color = "white") +
labs(
x = "log10 living space (square feet)",
title = "House size: After"
)
p1 + p2
```
Observe in Figure \@ref(fig:log10-size-viz) that the log10 transformation has a similar effect of unskewing the variable. We emphasize that while in these two cases the resulting distributions are more symmetric and bell-shaped, this is not always necessarily the case.
Given the now symmetric nature of `log10_price` and `log10_size`, we are going to revise our multiple regression model to use our new variables:
1. The outcome variable $y$ is the sale `log10_price` of houses.
1. Two explanatory variables:
1. A numerical explanatory variable $x_1$: house size `log10_size` as measured in log base 10 square feet of living space.
1. A categorical explanatory variable $x_2$: house `condition`, a categorical variable with five levels where `1` indicates "poor" and `5` indicates "excellent."
### Exploratory data analysis: Part II {#house-prices-EDA-II}
Let's now continue our EDA by creating *multivariate* visualizations. Unlike the *univariate* histograms and barplot in the earlier Figures \@ref(fig:house-prices-viz), \@ref(fig:log10-price-viz), and \@ref(fig:log10-size-viz), *multivariate* visualizations show relationships between more than one variable. This is an important step of an EDA to perform since the goal of modeling is to explore relationships between variables.
Since our model involves a numerical outcome variable, a numerical explanatory variable, and a categorical explanatory variable, we are in a similar regression modeling situation as in Section \@ref(model4) where we studied the UT Austin teaching scores dataset. Recall in that case the numerical outcome variable was teaching `score`, the numerical explanatory variable was instructor `age`, and the categorical explanatory variable was (binary) `gender`.
We thus have two choices of models we can fit: either (1) an *interaction model* where the regression line for each `condition` level will have both a different slope and a different intercept or (2) a *parallel slopes model* where the regression line for each `condition` level will have the same slope but different intercepts.
Recall from Subsection \@ref(model4table) that the `geom_parallel_slopes()` function is a special purpose function that Evgeni Chasnovski created and included in the `moderndive` package, since the `geom_smooth()` method in the `ggplot2` package does not have a convenient way to plot parallel slopes models. We plot both resulting models in Figure \@ref(fig:house-price-parallel-slopes), with the interaction model on the left.
```{r, eval=FALSE}
# Plot interaction model
ggplot(house_prices,
aes(x = log10_size, y = log10_price, col = condition)) +
geom_point(alpha = 0.05) +
geom_smooth(method = "lm", se = FALSE) +
labs(y = "log10 price",
x = "log10 size",
title = "House prices in Seattle")
# Plot parallel slopes model
ggplot(house_prices,
aes(x = log10_size, y = log10_price, col = condition)) +
geom_point(alpha = 0.05) +
geom_parallel_slopes(se = FALSE) +
labs(y = "log10 price",
x = "log10 size",
title = "House prices in Seattle")
```
```{r house-price-parallel-slopes, echo=FALSE, message=FALSE, fig.cap="Interaction and parallel slopes models.", purl=FALSE}
interaction <-
ggplot(
house_prices,
aes(x = log10_size, y = log10_price, col = condition)
) +
geom_point(alpha = 0.05) +
labs(y = "log10 price", x = "log10 size") +
geom_smooth(method = "lm", se = FALSE) +
guides(color = FALSE) +
labs(
title = "House prices in Seattle",
x = "log10 size",
y = "log10 price"
)
parallel_slopes <-
ggplot(
house_prices,
aes(x = log10_size, y = log10_price, col = condition)
) +
geom_point(alpha = 0.05) +
geom_parallel_slopes(se = FALSE) +
labs(y = NULL, x = "log10 size")
if (is_html_output()) {
interaction + parallel_slopes
} else {
(interaction + scale_color_grey()) +
(parallel_slopes + scale_color_grey())
}
```
In both cases, we see there is a positive relationship between house price and size, meaning as houses are larger, they tend to be more expensive. Furthermore, in both plots it seems that houses of condition 5 tend to be the most expensive for most house sizes as evidenced by the fact that the line for condition 5 is highest, followed by conditions 4 and 3. As for conditions 1 and 2, this pattern isn't as clear. Recall from the univariate barplot of `condition` in Figure \@ref(fig:house-prices-viz), there are only a few houses of condition 1 or 2.
Let's also show a faceted version of just the interaction model in Figure \@ref(fig:house-price-interaction-2). It is now much more apparent just how few houses are of condition 1 or 2.
```{r eval=FALSE}
ggplot(house_prices,
aes(x = log10_size, y = log10_price, col = condition)) +
geom_point(alpha = 0.4) +
geom_smooth(method = "lm", se = FALSE) +
labs(y = "log10 price",
x = "log10 size",
title = "House prices in Seattle") +
facet_wrap(~ condition)
```
```{r house-price-interaction-2, echo=FALSE, message=FALSE, fig.cap="Faceted plot of interaction model.", purl=FALSE}
interaction_2_plot <- ggplot(
house_prices,
aes(
x = log10_size, y = log10_price,
col = condition
)
) +
geom_point(alpha = 0.4) +
geom_smooth(method = "lm", se = FALSE) +
labs(
y = "log10 price", x = "log10 size",
title = "House prices in Seattle"
) +
facet_wrap(~condition)
if (!is_latex_output()) {
interaction_2_plot
} else {
interaction_2_plot +
scale_color_grey() +
theme(
strip.text = element_text(colour = "black"),
strip.background = element_rect(fill = "grey93")
)
}
```
Which exploratory visualization of the interaction model is better, the one in the left-hand plot of Figure \@ref(fig:house-price-parallel-slopes) or the faceted version in Figure \@ref(fig:house-price-interaction-2)? There is no universal right answer. You need to make a choice depending on what you want to convey, and own that choice; including and discussing both is also an option if needed.
### Regression modeling {#house-prices-regression}
Which of the two models in Figure \@ref(fig:house-price-parallel-slopes) is "better"? The interaction model in the left-hand plot or the parallel slopes model in the right-hand plot?
We had a similar discussion in Subsection \@ref(model-selection) on *model selection*. Recall that we stated that we should only favor more complex models if the additional complexity is *warranted*. In this case, the more complex model is the interaction model since it considers five intercepts and five slopes total. This is in contrast to the parallel slopes model which considers five intercepts but only one common slope.
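In terms of R code, the difference between the two models is whether the explanatory variables are joined with `*` (interaction) or `+` (parallel slopes) in the model formula, as we saw in Chapter \@ref(multiple-regression). As a sketch of the two calls (we formally fit and interpret the interaction model in a moment):
```{r, eval=FALSE}
# Interaction model: separate intercepts and separate slopes per condition
model_interaction <- lm(log10_price ~ log10_size * condition,
                        data = house_prices)
# Parallel slopes model: separate intercepts but one common slope
model_parallel_slopes <- lm(log10_price ~ log10_size + condition,
                            data = house_prices)
```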
Is the additional complexity of the interaction model warranted? Looking at the left-hand plot in Figure \@ref(fig:house-price-parallel-slopes), we're of the opinion that it is, as evidenced by the slight x-like pattern to some of the lines. Therefore, we'll focus the rest of this analysis only on the interaction model. This visual approach is somewhat subjective, however, so feel free to disagree! What are the five different slopes and five different intercepts for the interaction model? We can obtain these values from the regression table. Recall our two-step process for getting the regression table:
```{r, eval=FALSE}
# Fit regression model:
price_interaction <- lm(log10_price ~ log10_size * condition,
data = house_prices)
# Get regression table:
get_regression_table(price_interaction)
```
```{r seattle-interaction, echo=FALSE, purl=FALSE}
price_interaction <- lm(log10_price ~ log10_size * condition,
data = house_prices
)
get_regression_table(price_interaction) %>%
kable(
digits = 3,
caption = "Regression table for interaction model",
booktabs = TRUE,
linesep = ""
) %>%
kable_styling(
font_size = ifelse(is_latex_output(), 10, 16),
latex_options = c("hold_position")
)
```
Recall we saw in Subsection \@ref(model4interactiontable) how to interpret a regression table when there are both numerical and categorical explanatory variables. Let's now do the same for all 10 values in the `estimate` column of Table \@ref(tab:seattle-interaction).
In this case, the "baseline for comparison" group for the categorical variable `condition` are the condition 1 houses, since "1" comes first alphanumerically. Thus, the `intercept` and `log10_size` values are the intercept and slope for `log10_size` for this baseline group. Next, the `condition2` through `condition5` terms are the *offsets* in intercepts relative to the condition 1 intercept. Finally, the `log10_size:condition2` through `log10_size:condition5` are the *offsets* in slopes for `log10_size` relative to the condition 1 slope for `log10_size`.
Let's simplify this by writing out the equation of each of the five regression lines using these 10 `estimate` values. We'll write out each line in the following format:
$$
\widehat{\log10(\text{price})} = \hat{\beta}_0 + \hat{\beta}_{\text{size}} \cdot \log10(\text{size})
$$
```{r echo=FALSE, purl=FALSE}
# This code is used for dynamic non-static in-line text output purposes
intercept <- get_regression_table(price_interaction) %>%
filter(term == "intercept") %>%
pull(estimate)
offset_log10_size <- get_regression_table(price_interaction) %>%
filter(term == "log10_size") %>%
pull(estimate)
offset_condition2 <- get_regression_table(price_interaction) %>%
filter(term == "condition: 2") %>%
pull(estimate)
offset_log10_size_condition2 <- get_regression_table(price_interaction) %>%
filter(term == "log10_size:condition2") %>%
pull(estimate)
offset_condition3 <- get_regression_table(price_interaction) %>%
filter(term == "condition: 3") %>%
pull(estimate)
offset_log10_size_condition3 <- get_regression_table(price_interaction) %>%
filter(term == "log10_size:condition3") %>%
pull(estimate)
offset_condition4 <- get_regression_table(price_interaction) %>%
filter(term == "condition: 4") %>%
pull(estimate)
offset_log10_size_condition4 <- get_regression_table(price_interaction) %>%
filter(term == "log10_size:condition4") %>%
pull(estimate)
offset_condition5 <- get_regression_table(price_interaction) %>%
filter(term == "condition: 5") %>%
pull(estimate)
offset_log10_size_condition5 <- get_regression_table(price_interaction) %>%
filter(term == "log10_size:condition5") %>%
pull(estimate)
```
1. Condition 1:
<!--
Note: Even though markdown preview of the following LaTeX looks garbled, it
comes out correct in the HTML output.
-->
$$\widehat{\log10(\text{price})} = `r intercept` + `r offset_log10_size` \cdot \log10(\text{size})$$
2. Condition 2:
$$
\begin{aligned}
\widehat{\log10(\text{price})} &= (`r intercept` + `r offset_condition2`) + (`r offset_log10_size` - `r -1*offset_log10_size_condition2`) \cdot \log10(\text{size}) \\
&= `r intercept + offset_condition2` + `r offset_log10_size + offset_log10_size_condition2` \cdot \log10(\text{size})
\end{aligned}
$$
3. Condition 3:
$$
\begin{aligned}
\widehat{\log10(\text{price})} &= (`r intercept` - `r -1*offset_condition3`) + (`r offset_log10_size` + `r offset_log10_size_condition3`) \cdot \log10(\text{size}) \\
&= `r intercept + offset_condition3` + `r offset_log10_size + offset_log10_size_condition3` \cdot \log10(\text{size})
\end{aligned}
$$
4. Condition 4:
$$
\begin{aligned}
\widehat{\log10(\text{price})} &= (`r intercept` - `r -1*offset_condition4`) + (`r offset_log10_size` + `r offset_log10_size_condition4`) \cdot \log10(\text{size}) \\
&= `r intercept + offset_condition4` + `r offset_log10_size + offset_log10_size_condition4` \cdot \log10(\text{size})
\end{aligned}
$$
5. Condition 5:
$$
\begin{aligned}
\widehat{\log10(\text{price})} &= (`r intercept` - `r -1*offset_condition5`) + (`r offset_log10_size` + `r offset_log10_size_condition5`) \cdot \log10(\text{size}) \\
&= `r intercept + offset_condition5` + `r offset_log10_size + offset_log10_size_condition5` \cdot \log10(\text{size})
\end{aligned}
$$
These correspond to the regression lines in the left-hand plot of Figure \@ref(fig:house-price-parallel-slopes) and the faceted plot in Figure \@ref(fig:house-price-interaction-2). For homes of all five condition types, as the size of the house increases, the price increases. This is what most would expect. However, the rate of increase of price with size is fastest for homes of conditions 3, 4, and 5, with slopes of `r offset_log10_size + offset_log10_size_condition3`, `r offset_log10_size + offset_log10_size_condition4`, and `r offset_log10_size + offset_log10_size_condition5`, respectively. These are the three largest slopes out of the five.
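If you would rather not compute the five intercepts and five slopes by hand, here is one way to reconstruct them programmatically (a sketch assuming the `price_interaction` model fit above; the coefficient names follow base R's convention of appending the factor level to the variable name):
```{r, eval=FALSE}
# Add each offset to the condition 1 baseline intercept and slope:
coefs <- coef(price_interaction)
condition_offsets <- c(0, coefs[c("condition2", "condition3",
                                  "condition4", "condition5")])
slope_offsets <- c(0, coefs[c("log10_size:condition2", "log10_size:condition3",
                              "log10_size:condition4", "log10_size:condition5")])
tibble(
  condition = 1:5,
  intercept = coefs["(Intercept)"] + condition_offsets,
  slope = coefs["log10_size"] + slope_offsets
)
```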
### Making predictions {#house-prices-making-predictions}
```{r echo=FALSE, purl=FALSE}
# This code is used for dynamic non-static in-line text output purposes
ex_condition <- 5L
ex_size <- 1900L
ex_size_log10 <- log10(ex_size) %>% round(2)
ex_log10_price_prediction <- 5.75
```
Say you're a realtor and someone calls you asking you how much their home will sell for. They tell you that it's in condition = `r ex_condition` and is sized `r ex_size` square feet. What do you tell them? Let's use the interaction model we fit to make predictions!
We first make this prediction visually in Figure \@ref(fig:house-price-interaction-3). The predicted `log10_price` of this house is marked with a black dot. This is where the following two lines intersect:
* The regression line for the condition = `r ex_condition` homes and
* The vertical dashed black line at `log10_size` equals `r ex_size_log10`, since our predictor variable is the log10 transformed square feet of living space of $\log10(`r ex_size`) = `r ex_size_log10`$.
```{r house-price-interaction-3, echo=FALSE, message=FALSE, fig.cap="Interaction model with prediction.", fig.height=5, purl=FALSE}
new_house <- tibble(log10_size = log10(1900), condition = factor(5)) %>%
get_regression_points(price_interaction, newdata = .)
with_prediction_plot <- ggplot(house_prices, aes(x = log10_size, y = log10_price, col = condition)) +
geom_point(alpha = 0.05) +
labs(y = "log10 price", x = "log10 size", title = "House prices in Seattle") +
geom_smooth(method = "lm", se = FALSE) +
geom_vline(xintercept = log10(1900), linetype = "dashed", size = 1) +
geom_point(data = new_house, aes(y = log10_price_hat), col = "black", size = 3)
if (is_html_output()) {
with_prediction_plot
} else {
with_prediction_plot + scale_color_grey()
}
```
Eyeballing it, the predicted `log10_price` seems to be around `r ex_log10_price_prediction`. Let's now obtain the exact numerical value for the prediction using the equation of the regression line for the condition = `r ex_condition` houses, being sure to `log10()` the square footage first.
```{r}
2.45 + 1 * log10(1900)
```
This value is very close to our earlier visually made prediction of `r ex_log10_price_prediction`. But wait! Is our prediction for the price of this house \$`r ex_log10_price_prediction`? No! Remember that we are using `log10_price` as our outcome variable! So, if we want a prediction in dollar units of `price`, we need to unlog this by taking a power of 10 as described in Appendix \@ref(appendix-log10-transformations).
```{r}
10^(2.45 + 1 * log10(1900))
```
So our predicted price for this home of condition `r ex_condition` and of size `r ex_size` square feet is `r 10^(2.45 + 1*log10(ex_size)) %>% dollar()`.
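Rather than plugging values into the regression equation by hand, we could also obtain this prediction using the `get_regression_points()` function from the `moderndive` package with its `newdata` argument (a sketch; the new data frame must use the same variable names that the model was fit with):
```{r, eval=FALSE}
# A one-row data frame describing the house in question:
new_house <- tibble(log10_size = log10(1900), condition = factor(5))
# Predicted log10_price, then unlogged back to dollars:
get_regression_points(price_interaction, newdata = new_house) %>%
  mutate(price_hat = 10^log10_price_hat)
```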
<!--
v2 TODO: Inference for regression for Seattle house prices
### Inference for regression {#house-prices-inference-for-regression}
- Interpret offset in slope terms
- All possible partial residual plots
- Then add back LC below
-->
```{block, type="learncheck", purl=FALSE}
\vspace{-0.15in}
**_Learning check_**
\vspace{-0.1in}
```
<!--
v2 TODO: Inference for regression for Seattle house prices
**` paste0("(LC", chap, ".", (lc <- lc + 1), ")")`** Check that the LINE conditions are met for inference to be made in this Seattle house prices example.
-->
**`r paste0("(LC", chap, ".", (lc <- lc + 1), ")")`** Repeat the regression modeling in Subsection \@ref(house-prices-regression) and the prediction making you just did on the house of condition `r ex_condition` and size `r ex_size` square feet in Subsection \@ref(house-prices-making-predictions), but using the parallel slopes model you visualized in Figure \@ref(fig:house-price-parallel-slopes). Show that it's `r 10^5.72 %>% dollar()`!
```{block, type="learncheck", purl=FALSE}
\vspace{-0.25in}
\vspace{-0.25in}
```
## Case study: Effective data storytelling {#data-journalism}
As we've progressed throughout this book, you've seen how to work with data in a variety of ways. You've learned effective strategies for plotting data by understanding which types of plots work best for which combinations of variable types. You've summarized data in spreadsheet form and calculated summary statistics for a variety of different variables. Furthermore, you've seen the value of statistical inference as a process to come to conclusions about a population by using sampling. Lastly, you've explored how to fit linear regression models and the importance of checking the conditions required so that all confidence intervals and hypothesis tests have valid interpretations. All throughout, you've learned many computational techniques and focused on writing R code that's reproducible.
We now present another set of case studies, but this time on the "effective data storytelling" done by data journalists around the world. Great data stories don't mislead the reader, but rather draw them in, using storytelling to convey the importance that data plays in our lives.
### Bechdel test for Hollywood gender representation
We recommend you read and analyze Walt Hickey's\index{Hickey, Walt} FiveThirtyEight.com article, ["The Dollar-And-Cents Case Against Hollywood’s Exclusion of Women."](http://fivethirtyeight.com/features/the-dollar-and-cents-case-against-hollywoods-exclusion-of-women/) In it, Walt completed a multidecade study of how many movies pass the [Bechdel test](https://bechdeltest.com/), an informal test of gender representation in a movie that was created by \index{Bechdel, Alison} Alison Bechdel.
As you read over the article, think carefully about how Walt Hickey is using data, graphics, and analyses to tell the reader a story. In the spirit of reproducibility, FiveThirtyEight have also shared the [data and R code](https://github.com/fivethirtyeight/data/tree/master/bechdel) that they used for this article. You can also find the data used in many more of their articles on their [GitHub](https://github.com/fivethirtyeight/data) page.
*ModernDive* co-authors Chester Ismay and Albert Y. Kim, along with Jennifer Chunn, went one step further by creating the `fivethirtyeight` package, which provides easier access to these datasets in R. For a complete list of all `r nrow(data(package = "fivethirtyeight")[[3]])` datasets included in the `fivethirtyeight` package, check out the package webpage at <https://fivethirtyeight-r.netlify.app/articles/fivethirtyeight.html>.\index{R packages!fivethirtyeight}
Furthermore, example "vignettes" of fully reproducible start-to-finish analyses of some of these data using `dplyr`, `ggplot2`, and other packages in the `tidyverse` are available [here](https://fivethirtyeight-r.netlify.app/articles/). For example, a vignette showing how to reproduce one of the plots at the end of the article on the Bechdel test is available [here](https://fivethirtyeightdata.github.io/fivethirtyeightdata/articles/bechdel.html).
### US Births in 1999
The `US_births_1994_2003` data frame included in the `fivethirtyeight` package provides information about the number of daily births in the United States between 1994 and 2003. For more information on this data frame including a link to the original article on FiveThirtyEight.com, check out the help file by running `?US_births_1994_2003` in the console.
It's always a good idea to preview your data, either by using RStudio's spreadsheet `View()` function or using `glimpse()` from the `dplyr` package:
```{r}
glimpse(US_births_1994_2003)
```
We'll focus on the number of `births` for each `date`, but only for births that occurred in 1999. Recall from Section \@ref(filter) we can do this using the `filter()` function from the `dplyr` package:
```{r}
US_births_1999 <- US_births_1994_2003 %>%
filter(year == 1999)
```
As discussed in Section \@ref(linegraphs), since `date` is a notion of time and thus has sequential ordering to it, a linegraph would be a more appropriate visualization to use than a scatterplot. In other words, we should use a `geom_line()` instead of `geom_point()`. Recall that such plots are called \index{time series plots} *time series* plots.
```{r us-births, fig.cap="Number of births in the US in 1999.", fig.height=6.4}
ggplot(US_births_1999, aes(x = date, y = births)) +
geom_line() +
labs(x = "Date",
y = "Number of births",
title = "US Births in 1999")
```
We see a big dip occurring just before January 1st, 2000, most likely due to the holiday season. However, what about the large spike of over 14,000 births occurring just before October 1st, 1999? What could be the reason for this anomalously high spike?
Let's sort the rows of `US_births_1999` in descending order of the number of births. Recall from Section \@ref(arrange) that we can use the `arrange()` function from the `dplyr` package to do this, making sure to sort `births` in `desc`ending order:
```{r}
US_births_1999 %>%
arrange(desc(births))
```
The date with the highest number of births (14,540) is in fact 1999-09-09. If we write down this date in month/day/year format (a standard format in the US), the date with the highest number of births is 9/9/99! All nines! Could it be that parents deliberately induced labor at a higher rate on this date? Maybe? Whatever the cause may be, this fact makes a fun story!
```{block, type="learncheck", purl=FALSE}
\vspace{-0.15in}
**_Learning check_**
\vspace{-0.1in}
```
**`r paste0("(LC", chap, ".", (lc <- lc + 1), ")")`** What date between 1994 and 2003 has the fewest number of births in the US? What story could you tell about why this is the case?
```{block, type="learncheck", purl=FALSE}
\vspace{-0.25in}
\vspace{-0.25in}
```
Time to think with data and further tell your story with data! How could statistical modeling help you here? What types of statistical inference would be helpful? What else can you find and where can you take this analysis? What assumptions did you make in this analysis? We leave these questions to you as the reader to explore and examine.
Remember to get in touch with us via our contact info in the Preface. We’d love to see what you come up with!
Please check out additional problem sets and labs at <https://moderndive.com/labs> as well.
### Scripts of R code
```{r echo=FALSE, purl=FALSE, results="asis"}
generate_r_file_link("11-tell-your-story-with-data.R")
```
R code files saved as *.R files for all relevant chapters throughout the entire book are in the following table.
```{r script-files-table, echo=FALSE, message=FALSE, purl=FALSE}
if (!file.exists("rds/chapter_script_pub_files.rds")) {
chapter_script_pub_files <- "https://docs.google.com/spreadsheets/d/e/2PACX-1vTtqCUn7IdKJMgQ8LmXl7-us2DVxKCPz0w5BHhO5JLOof0gRfmv0DK1xw1PDC7PhIhUglb4Q_JA2zsg/pub?gid=0&single=true&output=csv" %>%
read_csv(na = "")
write_rds(chapter_script_pub_files, "rds/chapter_script_pub_files.rds")
} else {
chapter_script_pub_files <- read_rds("rds/chapter_script_pub_files.rds")
}
if (!file.exists("rds/chapter_script_dev_files.rds")) {
chapter_script_dev_files <- "https://docs.google.com/spreadsheets/d/e/2PACX-1vTtqCUn7IdKJMgQ8LmXl7-us2DVxKCPz0w5BHhO5JLOof0gRfmv0DK1xw1PDC7PhIhUglb4Q_JA2zsg/pub?gid=490443444&single=true&output=csv" %>%
read_csv(na = "")
write_rds(chapter_script_dev_files, "rds/chapter_script_dev_files.rds")
} else {
chapter_script_dev_files <- read_rds("rds/chapter_script_dev_files.rds")
}
if (dev_version & is_html_output()) {
chapter_script_dev_files %>%
select(chapter, link) %>%
kable()
} else {
chapter_script_pub_files %>%
select(chapter, link) %>%
kable()
}
```
## Concluding remarks {-}
Now that you've made it to this point in the book, we suspect that you know a thing or two about how to work with data in R! You've also gained a lot of knowledge about how to use simulation-based techniques for statistical inference and how these techniques help build intuition about traditional theory-based inferential methods like the $t$-test.
The hope is that you've come to appreciate the power of data in all respects, such as data wrangling, tidying datasets, data visualization, data modeling, and statistical inference. In our opinion, while each of these is important, data visualization may be the most important tool for a citizen or professional data scientist to have in their toolbox. If you can create truly beautiful graphics that display information in ways that the reader can clearly understand, you have great power to tell your tale with data. Let's hope that these skills help you tell great stories with data into the future. Thanks for coming along this journey as we dove into modern data analysis using R and the `tidyverse`!