<!DOCTYPE html>
<html>
<head>
<title>Missing Value Treatment</title>
<meta charset="utf-8">
<meta name="Description" content="R Language Tutorials for Advanced Statistics">
<meta name="Keywords" content="R, Tutorial, Machine learning, Statistics, Data Mining, Analytics, Data science, Linear Regression, Logistic Regression, Time series, Forecasting">
<meta name="Distribution" content="Global">
<meta name="Author" content="Selva Prabhakaran">
<meta name="Robots" content="index, follow">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="shortcut icon" href="/screenshots/iconb-64.png" type="image/x-icon" />
<link href="www/bootstrap.min.css" rel="stylesheet">
<link href="www/highlight.css" rel="stylesheet">
<link href='http://fonts.googleapis.com/css?family=Inconsolata:400,700'
rel='stylesheet' type='text/css'>
<!-- Color Script -->
<style type="text/css">
a {
color: #3675C5;
color: rgb(25, 145, 248);
color: #4582ec;
color: #3F73D8;
}
li {
line-height: 1.65;
}
/* reduce spacing around math formula*/
.MathJax_Display {
margin: 0em 0em;
}
</style>
<!-- Add Google search -->
<script language="Javascript" type="text/javascript">
function my_search_google()
{
var query = document.getElementById("my-google-search").value;
window.open("http://google.com/search?q=" + query
+ "%20site:" + "http://r-statistics.co");
}
</script>
</head>
<body>
<div class="container">
<div class="masthead">
<!--
<ul class="nav nav-pills pull-right">
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown">
Table of contents<b class="caret"></b>
</a>
<ul class="dropdown-menu pull-right" role="menu">
<li class="dropdown-header"></li>
<li class="dropdown-header">Tutorial</li>
<li><a href="R-Tutorial.html">R Tutorial</a></li>
<li class="dropdown-header">ggplot2</li>
<li><a href="ggplot2-Tutorial-With-R.html">ggplot2 Short Tutorial</a></li>
<li><a href="Complete-Ggplot2-Tutorial-Part1-With-R-Code.html">ggplot2 Tutorial 1 - Intro</a></li>
<li><a href="Complete-Ggplot2-Tutorial-Part2-Customizing-Theme-With-R-Code.html">ggplot2 Tutorial 2 - Theme</a></li>
<li><a href="Top50-Ggplot2-Visualizations-MasterList-R-Code.html">ggplot2 Tutorial 3 - Masterlist</a></li>
<li><a href="ggplot2-cheatsheet.html">ggplot2 Quickref</a></li>
<li class="dropdown-header">Foundations</li>
<li><a href="Linear-Regression.html">Linear Regression</a></li>
<li><a href="Statistical-Tests-in-R.html">Statistical Tests</a></li>
<li><a href="Missing-Value-Treatment-With-R.html">Missing Value Treatment</a></li>
<li><a href="Outlier-Treatment-With-R.html">Outlier Analysis</a></li>
<li><a href="Variable-Selection-and-Importance-With-R.html">Feature Selection</a></li>
<li><a href="Model-Selection-in-R.html">Model Selection</a></li>
<li><a href="Logistic-Regression-With-R.html">Logistic Regression</a></li>
<li><a href="Environments.html">Advanced Linear Regression</a></li>
<li class="dropdown-header">Advanced Regression Models</li>
<li><a href="adv-regression-models.html">Advanced Regression Models</a></li>
<li class="dropdown-header">Time Series</li>
<li><a href="Time-Series-Analysis-With-R.html">Time Series Analysis</a></li>
<li><a href="Time-Series-Forecasting-With-R.html">Time Series Forecasting </a></li>
<li><a href="Time-Series-Forecasting-With-R-part2.html">More Time Series Forecasting</a></li>
<li class="dropdown-header">High Performance Computing</li>
<li><a href="Parallel-Computing-With-R.html">Parallel computing</a></li>
<li><a href="Strategies-To-Improve-And-Speedup-R-Code.html">Strategies to Speedup R code</a></li>
<li class="dropdown-header">Useful Techniques</li>
<li><a href="Association-Mining-With-R.html">Association Mining</a></li>
<li><a href="Multi-Dimensional-Scaling-With-R.html">Multi Dimensional Scaling</a></li>
<li><a href="Profiling.html">Optimization</a></li>
<li><a href="Information-Value-With-R.html">InformationValue package</a></li>
</ul>
</li>
</ul>
-->
<ul class="nav nav-pills pull-right">
<div class="input-group">
<form onsubmit="my_search_google()">
<input type="text" class="form-control" id="my-google-search" placeholder="Search..">
</form>
</div><!-- /input-group -->
</ul><!-- /.col-lg-6 -->
<h3 class="muted"><a href="/">r-statistics.co</a><small> by Selva Prabhakaran</small></h3>
<hr>
</div>
<div class="row">
<div class="col-xs-12 col-sm-3" id="nav">
<div class="well">
<li>
<ul class="list-unstyled">
<li class="dropdown-header"></li>
<li class="dropdown-header">Tutorial</li>
<li><a href="R-Tutorial.html">R Tutorial</a></li>
<li class="dropdown-header">ggplot2</li>
<li><a href="ggplot2-Tutorial-With-R.html">ggplot2 Short Tutorial</a></li>
<li><a href="Complete-Ggplot2-Tutorial-Part1-With-R-Code.html">ggplot2 Tutorial 1 - Intro</a></li>
<li><a href="Complete-Ggplot2-Tutorial-Part2-Customizing-Theme-With-R-Code.html">ggplot2 Tutorial 2 - Theme</a></li>
<li><a href="Top50-Ggplot2-Visualizations-MasterList-R-Code.html">ggplot2 Tutorial 3 - Masterlist</a></li>
<li><a href="ggplot2-cheatsheet.html">ggplot2 Quickref</a></li>
<li class="dropdown-header">Foundations</li>
<li><a href="Linear-Regression.html">Linear Regression</a></li>
<li><a href="Statistical-Tests-in-R.html">Statistical Tests</a></li>
<li><a href="Missing-Value-Treatment-With-R.html">Missing Value Treatment</a></li>
<li><a href="Outlier-Treatment-With-R.html">Outlier Analysis</a></li>
<li><a href="Variable-Selection-and-Importance-With-R.html">Feature Selection</a></li>
<li><a href="Model-Selection-in-R.html">Model Selection</a></li>
<li><a href="Logistic-Regression-With-R.html">Logistic Regression</a></li>
<li><a href="Environments.html">Advanced Linear Regression</a></li>
<li class="dropdown-header">Advanced Regression Models</li>
<li><a href="adv-regression-models.html">Advanced Regression Models</a></li>
<li class="dropdown-header">Time Series</li>
<li><a href="Time-Series-Analysis-With-R.html">Time Series Analysis</a></li>
<li><a href="Time-Series-Forecasting-With-R.html">Time Series Forecasting </a></li>
<li><a href="Time-Series-Forecasting-With-R-part2.html">More Time Series Forecasting</a></li>
<li class="dropdown-header">High Performance Computing</li>
<li><a href="Parallel-Computing-With-R.html">Parallel computing</a></li>
<li><a href="Strategies-To-Improve-And-Speedup-R-Code.html">Strategies to Speedup R code</a></li>
<li class="dropdown-header">Useful Techniques</li>
<li><a href="Association-Mining-With-R.html">Association Mining</a></li>
<li><a href="Multi-Dimensional-Scaling-With-R.html">Multi Dimensional Scaling</a></li>
<li><a href="Profiling.html">Optimization</a></li>
<li><a href="Information-Value-With-R.html">InformationValue package</a></li>
</ul>
</li>
</div>
<div class="well">
<p>Stay up-to-date. <a href="https://docs.google.com/forms/d/1xkMYkLNFU9U39Dd8S_2JC0p8B5t6_Yq6zUQjanQQJpY/viewform">Subscribe!</a></p>
<p><a href="https://docs.google.com/forms/d/13GrkCFcNa-TOIllQghsz2SIEbc-YqY9eJX02B19l5Ow/viewform">Chat!</a></p>
</div>
<h4>Contents</h4>
<ul class="list-unstyled" id="toc"></ul>
<!--
<hr>
<p><a href="/contribute.html">How to contribute</a></p>
<p><a class="btn btn-primary" href="">Edit this page</a></p>
-->
</div>
<div id="content" class="col-xs-12 col-sm-8 pull-right">
<h1>Missing Value Treatment</h1>
<blockquote>
<p>Missing values in data are a common phenomenon in real-world problems. Knowing how to handle them effectively is a required step to reduce bias and to produce powerful models. Let’s explore the various options for dealing with missing values and how to implement them.</p>
</blockquote>
<h2>Data prep and pattern</h2>
<p>Let’s use the <code>BostonHousing</code> dataset in the <code>mlbench</code> package to discuss the various approaches to treating missing values. Though the original <code>BostonHousing</code> data doesn’t have missing values, I am going to randomly introduce some. This way, we can validate the imputed values against the actuals and see how effective each approach is at reproducing the actual data. Let’s begin by importing the data from the <code>mlbench</code> package and randomly inserting missing values (NA).</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"><span class="kw">data</span> (<span class="st">"BostonHousing"</span>, <span class="dt">package=</span><span class="st">"mlbench"</span>) <span class="co"># initialize the data # load the data</span>
original <-<span class="st"> </span>BostonHousing <span class="co"># backup original data</span>
<span class="co"># Introduce missing values</span>
<span class="kw">set.seed</span>(<span class="dv">100</span>)
BostonHousing[<span class="kw">sample</span>(<span class="dv">1</span>:<span class="kw">nrow</span>(BostonHousing), <span class="dv">40</span>), <span class="st">"rad"</span>] <-<span class="st"> </span><span class="ot">NA</span>
BostonHousing[<span class="kw">sample</span>(<span class="dv">1</span>:<span class="kw">nrow</span>(BostonHousing), <span class="dv">40</span>), <span class="st">"ptratio"</span>] <-<span class="st"> </span><span class="ot">NA</span>
<span class="kw">head</span>(BostonHousing)
<span class="co">#> crim zn indus chas nox rm age dis rad tax ptratio b lstat medv</span>
<span class="co">#> 1 0.00632 18 2.31 0 0.538 6.575 65.2 4.0900 1 296 15.3 396.90 4.98 24.0</span>
<span class="co">#> 2 0.02731 0 7.07 0 0.469 6.421 78.9 4.9671 2 242 17.8 396.90 9.14 21.6</span>
<span class="co">#> 3 0.02729 0 7.07 0 0.469 7.185 61.1 4.9671 2 242 17.8 392.83 4.03 34.7</span>
<span class="co">#> 4 0.03237 0 2.18 0 0.458 6.998 45.8 6.0622 3 222 18.7 394.63 2.94 33.4</span>
<span class="co">#> 5 0.06905 0 2.18 0 0.458 7.147 54.2 6.0622 3 222 18.7 396.90 5.33 36.2</span>
<span class="co">#> 6 0.02985 0 2.18 0 0.458 6.430 58.7 6.0622 3 222 18.7 394.12 5.21 28.7</span></code></pre></div>
<p>The missing values have been injected. Though we know where they are, let’s quickly check the missingness pattern using <code>mice::md.pattern</code>.</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"><span class="kw">library</span>(mice)
<span class="kw">md.pattern</span>(BostonHousing) <span class="co"># pattern or missing values in data.</span>
<span class="co">#> crim zn indus chas nox rm age dis tax b lstat medv rad ptratio </span>
<span class="co">#> 431 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0</span>
<span class="co">#> 35 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1</span>
<span class="co">#> 35 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1</span>
<span class="co">#> 5 1 1 1 1 1 1 1 1 1 1 1 1 0 0 2</span>
<span class="co">#> 0 0 0 0 0 0 0 0 0 0 0 0 40 40 80</span></code></pre></div>
<h4>There are really four ways you can handle missing values:</h4>
<h2>1. Deleting the observations</h2>
<p>If you have a large number of observations in your dataset, and all the classes to be predicted are sufficiently represented in the training data, then try deleting (or excluding during model building, for example by setting <code>na.action=na.omit</code>) the observations (rows) that contain missing values. After deleting the observations, make sure that you:</p>
<ol style="list-style-type: decimal">
<li>Still have sufficient data points, so the model doesn’t lose power.</li>
<li>Have not introduced bias (that is, a disproportionate representation or non-representation of classes).</li>
</ol>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"><span class="co"># Example</span>
<span class="kw">lm</span>(medv ~<span class="st"> </span>ptratio +<span class="st"> </span>rad, <span class="dt">data=</span>BostonHousing, <span class="dt">na.action=</span>na.omit) <span class="co"># though na.omit is default in lm()</span></code></pre></div>
<h2>2. Deleting the variable</h2>
<p>If a particular variable has more missing values than the rest of the variables in the dataset, and if by removing that one variable you can save many observations, then you are better off without that variable, unless it is a really important predictor that makes a lot of business sense. It is a matter of weighing the importance of the variable against the number of observations you would lose.</p>
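<p>Before deciding, it helps to see how many values are missing in each variable. A minimal sketch, with <code>rad</code> dropped purely for illustration:</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"># count missing values per column
colSums(is.na(BostonHousing))
# drop the chosen variable (here 'rad', for illustration only)
BostonHousing_reduced <- BostonHousing[, names(BostonHousing) != "rad"]</code></pre></div>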
<h2>3. Imputation with mean / median / mode</h2>
<p>Replacing the missing values with the mean, median, or mode is a crude way of treating missing values. Depending on the context, such as when the variation is low or when the variable has low leverage over the response, such a rough approximation is acceptable and could possibly give satisfactory results.</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"><span class="kw">library</span>(Hmisc)
<span class="kw">impute</span>(BostonHousing$ptratio, mean) <span class="co"># replace with mean</span>
<span class="kw">impute</span>(BostonHousing$ptratio, median) <span class="co"># median</span>
<span class="kw">impute</span>(BostonHousing$ptratio, <span class="dv">20</span>) <span class="co"># replace specific number</span>
<span class="co"># or if you want to impute manually</span>
BostonHousing$ptratio[<span class="kw">is.na</span>(BostonHousing$ptratio)] <-<span class="st"> </span><span class="kw">mean</span>(BostonHousing$ptratio, <span class="dt">na.rm =</span> T) <span class="co"># not run</span></code></pre></div>
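<p>Since <code>Hmisc::impute()</code> is shown above for the mean and median, here is a minimal sketch of mode imputation for a factor such as <code>rad</code>, assuming the mode is taken as the most frequent level:</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"># most frequent level of 'rad' among the non-missing values
mode_rad <- names(which.max(table(BostonHousing$rad)))
BostonHousing$rad[is.na(BostonHousing$rad)] <- mode_rad  # not run</code></pre></div>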
<p>Let’s compute the accuracy when <code>ptratio</code> is imputed with the mean.</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"><span class="kw">library</span>(DMwR)
actuals <-<span class="st"> </span>original$ptratio[<span class="kw">is.na</span>(BostonHousing$ptratio)]
predicteds <-<span class="st"> </span><span class="kw">rep</span>(<span class="kw">mean</span>(BostonHousing$ptratio, <span class="dt">na.rm=</span>T), <span class="kw">length</span>(actuals))
<span class="kw">regr.eval</span>(actuals, predicteds)
<span class="co">#> mae mse rmse mape </span>
<span class="co">#> 1.62324034 4.19306071 2.04769644 0.09545664</span></code></pre></div>
<h2>4. Prediction</h2>
<h2>4.1. kNN Imputation</h2>
<p><code>DMwR::knnImputation</code> uses a k-Nearest Neighbours approach to impute missing values. In simpler terms, what kNN imputation does is this: for every observation to be imputed, it identifies the ‘k’ closest observations based on Euclidean distance and computes a weighted average (weighted by distance) of these ‘k’ observations.</p>
<p>The advantage is that you can impute all the missing values in all variables with one call to the function. It takes the whole data frame as its argument, and you don’t even have to specify which variable you want to impute. But be careful not to include the response variable while imputing, because when imputing in a test/production environment, if your data contains missing values, the response is unknown at that time and cannot be used.</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"><span class="kw">library</span>(DMwR)
knnOutput <-<span class="st"> </span><span class="kw">knnImputation</span>(BostonHousing[, !<span class="kw">names</span>(BostonHousing) %in%<span class="st"> "medv"</span>]) <span class="co"># perform knn imputation.</span>
<span class="kw">anyNA</span>(knnOutput)
<span class="co">#> FALSE</span></code></pre></div>
<p>Let’s compute the accuracy.</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">actuals <-<span class="st"> </span>original$ptratio[<span class="kw">is.na</span>(BostonHousing$ptratio)]
predicteds <-<span class="st"> </span>knnOutput[<span class="kw">is.na</span>(BostonHousing$ptratio), <span class="st">"ptratio"</span>]
<span class="kw">regr.eval</span>(actuals, predicteds)
<span class="co">#> mae mse rmse mape </span>
<span class="co">#> 1.00188715 1.97910183 1.40680554 0.05859526 </span></code></pre></div>
<p>The mean absolute percentage error (mape) has improved by ~ 39% compared to the imputation by mean. Good.</p>
<h2>4.2 rpart</h2>
<p>The limitation of <code>DMwR::knnImputation</code> is that it may not always be appropriate when the missing values come from a factor variable. Both <code>rpart</code> and <code>mice</code> have the flexibility to handle that scenario. The advantage with <code>rpart</code> is that it needs only one of the predictor variables to be non-NA for an observation.</p>
<p>The idea here is to use <code>rpart</code> to predict the missing values instead of kNN. To handle a factor variable, set <code>method="class"</code> when calling <code>rpart()</code>. For numerics, use <code>method="anova"</code>. Here again, we need to make sure not to train <code>rpart</code> on the response variable (<code>medv</code>).</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"><span class="kw">library</span>(rpart)
class_mod <-<span class="st"> </span><span class="kw">rpart</span>(rad ~<span class="st"> </span>. -<span class="st"> </span>medv, <span class="dt">data=</span>BostonHousing[!<span class="kw">is.na</span>(BostonHousing$rad), ], <span class="dt">method=</span><span class="st">"class"</span>, <span class="dt">na.action=</span>na.omit) <span class="co"># since rad is a factor</span>
anova_mod <-<span class="st"> </span><span class="kw">rpart</span>(ptratio ~<span class="st"> </span>. -<span class="st"> </span>medv, <span class="dt">data=</span>BostonHousing[!<span class="kw">is.na</span>(BostonHousing$ptratio), ], <span class="dt">method=</span><span class="st">"anova"</span>, <span class="dt">na.action=</span>na.omit) <span class="co"># since ptratio is numeric.</span>
rad_pred <-<span class="st"> </span><span class="kw">predict</span>(class_mod, BostonHousing[<span class="kw">is.na</span>(BostonHousing$rad), ])
ptratio_pred <-<span class="st"> </span><span class="kw">predict</span>(anova_mod, BostonHousing[<span class="kw">is.na</span>(BostonHousing$ptratio), ])</code></pre></div>
<p>Let’s compute the accuracy for <code>ptratio</code>.</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">actuals <-<span class="st"> </span>original$ptratio[<span class="kw">is.na</span>(BostonHousing$ptratio)]
predicteds <-<span class="st"> </span>ptratio_pred
<span class="kw">regr.eval</span>(actuals, predicteds)
<span class="co">#> mae mse rmse mape </span>
<span class="co">#> 0.71061673 0.99693845 0.99846805 0.04099908 </span></code></pre></div>
<p>The mean absolute percentage error (mape) has improved by a further ~ 30% compared to <code>knnImputation</code>. Very good.</p>
<p>And the accuracy for <code>rad</code>:</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">actuals <-<span class="st"> </span>original$rad[<span class="kw">is.na</span>(BostonHousing$rad)]
predicteds <-<span class="st"> </span><span class="kw">as.numeric</span>(<span class="kw">colnames</span>(rad_pred)[<span class="kw">apply</span>(rad_pred, <span class="dv">1</span>, which.max)])
<span class="kw">mean</span>(actuals !=<span class="st"> </span>predicteds) <span class="co"># compute misclass error.</span>
<span class="co">#> 0.25 </span></code></pre></div>
<p>This yields a mis-classification error of 25%. Not bad for a factor variable!</p>
<h2>4.3 mice</h2>
<p><code>mice</code>, short for <a href="http://www.jstatsoft.org/article/view/v045i03/v45i03.pdf">Multivariate Imputation by Chained Equations</a>, is an R package that provides advanced features for missing value treatment. It implements the imputation in a slightly uncommon two-step way: <code>mice()</code> builds the imputation model and <code>complete()</code> generates the completed data. The <code>mice(df)</code> function produces multiple complete copies of <code>df</code>, each with different imputations of the missing data, and the <a href="http://www.inside-r.org/packages/cran/mice/docs/complete"><code>complete()</code></a> function returns one or several of these data sets, with the default being the first. Let’s see how to impute <code>rad</code> and <code>ptratio</code>:</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"><span class="kw">library</span>(mice)
miceMod <-<span class="st"> </span><span class="kw">mice</span>(BostonHousing[, !<span class="kw">names</span>(BostonHousing) %in%<span class="st"> "medv"</span>], <span class="dt">method=</span><span class="st">"rf"</span>) <span class="co"># perform mice imputation, based on random forests.</span>
miceOutput <-<span class="st"> </span><span class="kw">complete</span>(miceMod) <span class="co"># generate the completed data.</span>
<span class="kw">anyNA</span>(miceOutput)
<span class="co">#> FALSE</span></code></pre></div>
<p>Let’s compute the accuracy of <code>ptratio</code>.</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">actuals <-<span class="st"> </span>original$ptratio[<span class="kw">is.na</span>(BostonHousing$ptratio)]
predicteds <-<span class="st"> </span>miceOutput[<span class="kw">is.na</span>(BostonHousing$ptratio), <span class="st">"ptratio"</span>]
<span class="kw">regr.eval</span>(actuals, predicteds)
<span class="co">#> mae mse rmse mape </span>
<span class="co">#> 0.36500000 0.78100000 0.88374204 0.02121326</span></code></pre></div>
<p>The mean absolute percentage error (mape) has improved by a further ~ 48% compared to <code>rpart</code>. Excellent!</p>
<p>Let’s compute the accuracy of <code>rad</code>.</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r">actuals <-<span class="st"> </span>original$rad[<span class="kw">is.na</span>(BostonHousing$rad)]
predicteds <-<span class="st"> </span>miceOutput[<span class="kw">is.na</span>(BostonHousing$rad), <span class="st">"rad"</span>]
<span class="kw">mean</span>(actuals !=<span class="st"> </span>predicteds) <span class="co"># compute misclass error.</span>
<span class="co">#> 0.15</span></code></pre></div>
<p>The mis-classification error reduced to 15%, which is 6 out of 40 observations. This is a good improvement compared to rpart’s 25%.</p>
<p>If you’d like to dig in deeper, here is the <a href="http://www.stefvanbuuren.nl/publications/MICE%20V1.0%20Manual%20TNO00038%202000.pdf">manual</a>.</p>
<p>Though we have an idea of how each method performs, there is not enough evidence to conclude which method is better or worse. But these are definitely worth testing out the next time you impute missing values.</p>
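<p>For a quick side-by-side, here are the mape values for <code>ptratio</code> reported by each approach above, collected into a named vector (numbers rounded from the outputs shown earlier):</p>
<div class="sourceCode"><pre class="sourceCode r"><code class="sourceCode r"># mape for 'ptratio' from each approach, copied (rounded) from the regr.eval() outputs above
mape <- c(mean_imputation = 0.0955, kNN = 0.0586, rpart = 0.0410, mice = 0.0212)
sort(mape)  # lower is better</code></pre></div>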
</div>
</div>
<div class="footer">
<hr>
<p>© 2016-17 Selva Prabhakaran. Powered by <a href="http://jekyllrb.com/">jekyll</a>,
<a href="http://yihui.name/knitr/">knitr</a>, and
<a href="http://johnmacfarlane.net/pandoc/">pandoc</a>.
This work is licensed under the <a href="http://creativecommons.org/licenses/by-nc/3.0/">Creative Commons License.</a>
</p>
</div>
</div> <!-- /container -->
<script src="//code.jquery.com/jquery.js"></script>
<script src="www/bootstrap.min.js"></script>
<script src="www/toc.js"></script>
<!-- MathJax Script -->
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}
});
</script>
<script type="text/javascript"
src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
</script>
<!-- Google Analytics Code -->
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-69351797-1', 'auto');
ga('send', 'pageview');
</script>
<style type="text/css">
/* reduce spacing around math formula*/
.MathJax_Display {
margin: 0em 0em;
}
body {
font-family: 'Helvetica Neue', Roboto, Arial, sans-serif;
font-size: 16px;
line-height: 27px;
font-weight: 400;
}
blockquote p {
line-height: 1.75;
color: #717171;
}
.well li{
line-height: 28px;
}
li.dropdown-header {
display: block;
padding: 0px;
font-size: 14px;
}
</style>
</body>
</html>