diff --git a/Predicting Reddit News Sentiment/Predicting Reddit News Sentiment with Naive Bayes and Other Text Classifiers.ipynb b/Predicting Reddit News Sentiment/Predicting Reddit News Sentiment with Naive Bayes and Other Text Classifiers.ipynb new file mode 100644 index 0000000..73df55c --- /dev/null +++ b/Predicting Reddit News Sentiment/Predicting Reddit News Sentiment with Naive Bayes and Other Text Classifiers.ipynb @@ -0,0 +1,1125 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Text Classification\n", + "\n", + "In our previous post, we covered some of the [basics of sentiment analysis](/sentiment-analysis-reddit-headlines-pythons-nltk/), where we gathered and categorized political headlines. Now we can use that data to train a binary classifier to predict if a headline is positive or negative.\n", + "\n", + "## A Brief Intro to Classification and Some Problems We Face\n", + "\n", + "Classification is the process of identifying the category of a new, unseen observation based on a training set of data whose categories are known.\n", + "\n", + "In our case, our headlines are the observations and the positive/negative sentiment labels are the categories. This is a **binary classification** problem -- we're trying to predict whether a headline is positive or negative.\n", + "\n", + "### First Problem: Imbalanced Dataset\n", + "\n", + "One of the most common problems in machine learning is working with an imbalanced dataset. As we'll see below, we have a *slightly* imbalanced dataset, where there are more negatives than positives.\n", + "\n", + "Compared to some problems, like fraud detection, our dataset isn't super imbalanced. Sometimes you'll have datasets where the positive class is only 1% of the training data, the rest being negatives.\n", + "\n", + "We want to be careful when interpreting results from imbalanced data. When scoring our classifier, we may see accuracy as high as 90%, which is commonly known as the [Accuracy Paradox](https://en.wikipedia.org/wiki/Accuracy_paradox).\n", + "\n", + "The reason we might get 90% accuracy is that the model can simply learn to always predict *negative*, which scores well only because negatives dominate the data.\n", + "\n", + "There are a number of ways to counter this problem, such as:\n", + "\n", + "* **Collect more data:** could help balance the dataset by adding more minority class examples.\n", + "* **Change your metric:** use either the Confusion Matrix, Precision, Recall, or F1 score (a combination of precision and recall).\n", + "* **Oversample the data:** randomly sample the attributes from examples in the minority class to create more 'fake' data.\n", + "* **Penalized model:** impose an additional cost on the model for making classification mistakes on the minority class during training. These penalties bias the model towards the minority class (a minimal sketch follows below).\n",
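 + "\n", + "Purely as an illustration of that last option: scikit-learn's Naive Bayes classes don't take a `class_weight` argument, but many other estimators do, for example `LogisticRegression`:\n", + "\n", + "```python\n", + "from sklearn.linear_model import LogisticRegression\n", + "\n", + "# class_weight='balanced' weights errors inversely to class frequency,\n", + "# so mistakes on the rarer positive class cost more during training.\n", + "clf = LogisticRegression(class_weight='balanced')\n", + "```\n",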
+ "\n", + "In our dataset, we have fewer positive examples than negative examples, and we will explore both alternative metrics and an oversampling technique called SMOTE.\n", + "\n", + "Let's establish a few basic imports:" + ] + }, + { + "cell_type": "code", + "execution_count": 229, + "metadata": {}, + "outputs": [], + "source": [ + "import math\n", + "import random\n", + "from collections import defaultdict\n", + "from pprint import pprint\n", + "\n", + "# Prevent future/deprecation warnings from showing in output\n", + "import warnings\n", + "warnings.filterwarnings(action='ignore')\n", + "\n", + "import seaborn as sns\n", + "import matplotlib.pyplot as plt\n", + "import numpy as np\n", + "import pandas as pd\n", + "\n", + "# Set global styles for plots\n", + "sns.set_style(style='white')\n", + "sns.set_context(context='notebook', font_scale=1.3, rc={'figure.figsize': (16,9)})" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "These are basic imports used across the entire notebook, and they show up in almost every data science project. The more specific imports from sklearn and other libraries will be brought up when we use them." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Loading the Dataset\n", + "\n", + "First, let's load the dataset that we created in the last article:" + ] + }, + { + "cell_type": "code", + "execution_count": 230, + "metadata": { + "scrolled": true + }, + "outputs": [ + { + "data": { + "text/html": [ + "
<div>\n", + "<table border=\"1\" class=\"dataframe\">\n", + "  <thead>\n", + "    <tr style=\"text-align: right;\">\n", + "      <th></th>\n", + "      <th>headline</th>\n", + "      <th>label</th>\n", + "    </tr>\n", + "  </thead>\n", + "  <tbody>\n", + "    <tr>\n", + "      <th>0</th>\n", + "      <td>Gillespie Victory In Virginia Would Vindicate ...</td>\n", + "      <td>0</td>\n", + "    </tr>\n", + "    <tr>\n", + "      <th>1</th>\n", + "      <td>Screw Ron Paul and all of his \"if he can't aff...</td>\n", + "      <td>-1</td>\n", + "    </tr>\n", + "    <tr>\n", + "      <th>2</th>\n", + "      <td>Corker: Trump, 'perfectly fine,' with scrappin...</td>\n", + "      <td>1</td>\n", + "    </tr>\n", + "    <tr>\n", + "      <th>3</th>\n", + "      <td>Concerning Recent Changes in Allowed Domains</td>\n", + "      <td>0</td>\n", + "    </tr>\n", + "    <tr>\n", + "      <th>4</th>\n", + "      <td>Trump confidantes Bossie, Lewandowski urge aga...</td>\n", + "      <td>-1</td>\n", + "    </tr>\n", + "  </tbody>\n", + "</table>\n", + "</div>
" + ], + "text/plain": [ + " headline label\n", + "0 Gillespie Victory In Virginia Would Vindicate ... 0\n", + "1 Screw Ron Paul and all of his \"if he can't aff... -1\n", + "2 Corker: Trump, 'perfectly fine,' with scrappin... 1\n", + "3 Concerning Recent Changes in Allowed Domains 0\n", + "4 Trump confidantes Bossie, Lewandowski urge aga... -1" + ] + }, + "execution_count": 230, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "df = pd.read_csv('reddit_headlines_labels.csv', encoding='utf-8')\n", + "df.head()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now that we have the dataset in a dataframe, let's remove the neutral (0) headlines labels so we can focus on only classifying positive or negative:" + ] + }, + { + "cell_type": "code", + "execution_count": 231, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "-1 758\n", + " 1 496\n", + "Name: label, dtype: int64" + ] + }, + "execution_count": 231, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "df = df[df.label != 0]\n", + "df.label.value_counts()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "collapsed": true + }, + "source": [ + "Our dataframe now only contains positive and negative examples, and we've confirmed again that we have more negatives than positives.\n", + "\n", + "Let's move into featurization of the headlines." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Transform Headlines into Features\n", + "\n", + "In order to train our classifier, we need to transform our headlines of words into numbers, since algorithms only know how to work with numbers.\n", + "\n", + "To do this transformation, we're going to use `CountVectorizer` from sklearn. This is a very straightforward class for converting words into features.\n", + "\n", + "Unlike in the last tutorial where we manually tokenized and lowercased the text, `CountVectorizer` will handle this step for us. All we need to do is pass it the headlines.\n", + "\n", + "Let's work with a tiny example to show how vectorizing words into numbers works:" + ] + }, + { + "cell_type": "code", + "execution_count": 232, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "array([[1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1],\n", + " [0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0]], dtype=int64)" + ] + }, + "execution_count": 232, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "from sklearn.feature_extraction.text import CountVectorizer\n", + "\n", + "s1 = \"Senate panel moving ahead with Mueller bill despite McConnell opposition\"\n", + "s2 = \"Bill protecting Robert Mueller to get vote despite McConnell opposition\"\n", + "\n", + "vect = CountVectorizer(binary=True)\n", + "X = vect.fit_transform([s1, s2])\n", + "\n", + "X.toarray()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "What we've done here is take two headlines about a similar topic and vectorized them.\n", + "\n", + "`vect` is set up with default params to tokenize and lowercase words. On top of that, we have set `binary=True` so we get an output of 0 (word doesn't exist in that sentence) or 1 (word exists in that sentence).\n", + "\n", + "`vect` builds a vocabulary from all the words it sees in all the text you give it, then assigns a 0 or 1 if that word exists in the current sentence. 
To see this more clearly, let's check out the feature names mapped to the first sentence:" + ] + }, + { + "cell_type": "code", + "execution_count": 234, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "[(1, 'ahead'), (1, 'bill'), (1, 'despite'), (0, 'get'), (1, 'mcconnell'), (1, 'moving'), (1, 'mueller'), (1, 'opposition'), (1, 'panel'), (0, 'protecting'), (0, 'robert'), (1, 'senate'), (0, 'to'), (0, 'vote'), (1, 'with')]" + ] + }, + "execution_count": 234, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "list(zip(X.toarray()[0], vect.get_feature_names()))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This is the vectorization mapping of the first sentence. You can see that there's a 1 mapped to 'ahead' because 'ahead' shows up in `s1`. But if we look at `s2`:" + ] + }, + { + "cell_type": "code", + "execution_count": 176, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "[(0, 'ahead'), (1, 'bill'), (1, 'despite'), (1, 'get'), (1, 'mcconnell'), (0, 'moving'), (1, 'mueller'), (1, 'opposition'), (0, 'panel'), (1, 'protecting'), (1, 'robert'), (0, 'senate'), (1, 'to'), (1, 'vote'), (0, 'with')]" + ] + }, + "execution_count": 176, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "list(zip(X.toarray()[1], vect.get_feature_names()))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "There's a 0 at 'ahead' since that word doesn't show up in `s2`. But notice that each row contains **every** word seen so far.\n", + "\n", + "When we expand this to all of the headlines in the dataset, this vocabulary will grow by a lot. Each mapping like the one printed above will end up being the length of all words the vectorizer encounters.\n", + "\n", + "Let's now apply the vectorizer to all of our headlines:" + ] + }, + { + "cell_type": "code", + "execution_count": 177, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "array([[0, 0, 0, ..., 0, 1, 0],\n", + " [0, 0, 0, ..., 0, 0, 0],\n", + " [0, 0, 0, ..., 0, 0, 0],\n", + " ...,\n", + " [0, 0, 0, ..., 0, 0, 0],\n", + " [0, 0, 0, ..., 0, 0, 0],\n", + " [0, 0, 0, ..., 0, 0, 0]], dtype=int64)" + ] + }, + "execution_count": 177, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "vect = CountVectorizer(max_features=1000, binary=True)\n", + "X = vect.fit_transform(df.headline)\n", + "\n", + "X.toarray()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Notice that the vectorizer by default stores everything in a **sparse array**, and using `X.toarray()` shows us the dense version. A sparse array is much more efficient since most values in each row are 0. In other words, most headlines are only a dozen or so words and each row contains every word ever seen, and sparse arrays only store the non-zero value indices.\n", + "\n", + "You'll also notice that we have a new keyword argument; `max_features`. This is essentially the number of words to consider, ranked by frequency. So the 1000 value means we only want to look at the 1000 most common words as features.\n", + "\n", + "Now that we know how vectorization works, let's use it in action." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Preparing for Training\n", + "\n", + "Before training, and even vectorizing, let's split our data into training and testing sets. It's important to do this before doing anything with the data so we have a fresh test set." 
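, + "\n", + "With an imbalanced dataset it can also be worth passing `stratify=y` so that the split preserves the class ratio; the walkthrough below simply uses a plain random split. A minimal sketch of the stratified variant, assuming the `X` and `y` defined in the next cell:\n", + "\n", + "```python\n", + "from sklearn.model_selection import train_test_split\n", + "\n", + "# Same 80/20 split, but keep the -1/+1 proportions identical in train and test\n", + "X_train, X_test, y_train, y_test = train_test_split(\n", + "    X, y, test_size=0.2, stratify=y)\n", + "```"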
+ ] + }, + { + "cell_type": "code", + "execution_count": 235, + "metadata": {}, + "outputs": [], + "source": [ + "from sklearn.model_selection import train_test_split\n", + "\n", + "X = df.headline\n", + "y = df.label\n", + "\n", + "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Our test size is 0.2, or 20%. This means that `X_test` and `y_test` contain 20% of our data, which we reserve for testing." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's now fit the vectorizer on the training set only and perform the vectorization.\n", + "\n", + "Just to reiterate, it's important not to fit the vectorizer on all of the data, since we want a clean test set for evaluating performance. Fitting the vectorizer on everything would result in *data leakage*, causing unreliable results, since the vectorizer shouldn't know about future data.\n", + "\n", + "We can fit the vectorizer and transform `X_train` in one step:" + ] + }, + { + "cell_type": "code", + "execution_count": 236, + "metadata": {}, + "outputs": [], + "source": [ + "from sklearn.feature_extraction.text import CountVectorizer\n", + "\n", + "vect = CountVectorizer(max_features=1000, binary=True)\n", + "\n", + "X_train_vect = vect.fit_transform(X_train)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "`X_train_vect` is now transformed into the right format to give to the Naive Bayes model, but let's first look into balancing the data.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Balancing the data\n", + "\n", + "It seems that there may be a lot more negative headlines than positive headlines (hmm), and so we have a lot more negative labels than positive labels." + ] + }, + { + "cell_type": "code", + "execution_count": 237, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "-1 758\n", + " 1 496\n", + "Name: label, dtype: int64\n", + "\n", + "Predicting only -1 = 60.45% accuracy\n" + ] + } + ], + "source": [ + "counts = df.label.value_counts()\n", + "print(counts)\n", + "\n", + "print(\"\\nPredicting only -1 = {:.2f}% accuracy\".format(counts[-1] / sum(counts) * 100))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can see from the output above that we have slightly more negatives than positives, making our dataset slightly imbalanced.\n", + "\n", + "If our model only ever predicted -1, the larger class, it would get ~60% accuracy. This means that in our binary classification problem, where random chance is 50%, a 60% accuracy wouldn't tell us much. We would definitely want to look at precision and recall more than accuracy.\n", + "\n", + "We can balance our data by using a form of **oversampling** called SMOTE. SMOTE looks at the minority class, positives in our case, and creates new, synthetic training examples. Read more about the algorithm [here](https://www.jair.org/media/953/live-953-2037-jair.pdf).\n", + "\n", + "Note: We have to make sure we only oversample the **train** data so we don't leak any information to the test set.\n",
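 + "\n", + "Under the hood, each synthetic example is a simple interpolation: for a minority-class point $x_i$ and one of its $k$ nearest minority-class neighbors $x_{nn}$, SMOTE adds $x_{new} = x_i + \\lambda (x_{nn} - x_i)$ with $\\lambda$ drawn uniformly from $[0, 1]$.\n", + "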
\n", + "\n", + "Let's perform SMOTE with the `imblearn` library:" + ] + }, + { + "cell_type": "code", + "execution_count": 238, + "metadata": {}, + "outputs": [], + "source": [ + "from imblearn.over_sampling import SMOTE\n", + "\n", + "sm = SMOTE()\n", + "\n", + "X_train_res, y_train_res = sm.fit_sample(X_train_vect, y_train)" + ] + }, + { + "cell_type": "code", + "execution_count": 239, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[(-1, 601), (1, 601)]\n" + ] + } + ], + "source": [ + "unique, counts = np.unique(y_train_res, return_counts=True)\n", + "print(list(zip(unique, counts)))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The classes are now balanced for the train set. We can move onto training a Naive Bayes model." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Naive Bayes" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "For our first algorithm, we're going to use the extremely fast and versatile Naive Bayes model.\n", + "\n", + "Let's instantiate one from sklearn and fit it to our training data:" + ] + }, + { + "cell_type": "code", + "execution_count": 240, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "0.9201331114808652" + ] + }, + "execution_count": 240, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "from sklearn.naive_bayes import MultinomialNB\n", + "\n", + "nb = MultinomialNB()\n", + "\n", + "nb.fit(X_train_res, y_train_res)\n", + "\n", + "nb.score(X_train_res, y_train_res)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Naive Bayes has successfully fit all of our training data and is ready to make predictions. You'll notice that we have a score of ~92%. This is the *fit* score, and not the actual accuracy score. You'll see next that we need to use our test set in order to get a good estimate of accuracy.\n", + "\n", + "Let's vectorize the test set, then use that test set to predict if each test headline is either positive or negative. Since we're avoiding any data leakage, we are only transforming, not refitting. And we won't be using SMOTE to oversample either." 
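, + "\n", + "For context, the label that each headline receives below is just the class with the higher score under the multinomial Naive Bayes model: $\\hat{y} = \\arg\\max_c \\left( \\log P(c) + \\sum_i x_i \\log P(w_i \\mid c) \\right)$, where the $x_i$ are the 0/1 word indicators from our vectorizer and the probabilities were estimated during `fit`."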
+ ] + }, + { + "cell_type": "code", + "execution_count": 241, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "array([-1, -1, -1, -1, -1, -1, 1, 1, 1, 1, 1, -1, 1, -1, 1, 1, 1,\n", + " 1, -1, -1, 1, -1, -1, -1, -1, 1, 1, 1, -1, -1, 1, -1, 1, 1,\n", + " -1, -1, 1, 1, 1, -1, 1, 1, 1, -1, 1, -1, 1, -1, 1, -1, 1,\n", + " 1, 1, 1, 1, -1, -1, 1, 1, -1, -1, -1, 1, 1, 1, 1, -1, -1,\n", + " -1, -1, 1, -1, 1, -1, -1, -1, -1, 1, -1, 1, 1, -1, -1, -1, -1,\n", + " -1, -1, -1, -1, -1, -1, 1, 1, -1, 1, 1, 1, -1, -1, -1, -1, 1,\n", + " 1, 1, 1, 1, -1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, -1, 1,\n", + " -1, -1, -1, -1, -1, 1, -1, 1, -1, 1, 1, -1, 1, 1, 1, -1, -1,\n", + " -1, -1, 1, -1, -1, -1, 1, 1, 1, -1, -1, -1, -1, 1, -1, 1, -1,\n", + " -1, 1, -1, 1, -1, -1, -1, -1, -1, 1, 1, 1, 1, 1, -1, 1, -1,\n", + " 1, -1, -1, -1, -1, 1, -1, 1, 1, 1, 1, -1, -1, -1, 1, -1, -1,\n", + " -1, 1, -1, -1, -1, -1, -1, -1, -1, 1, 1, -1, 1, -1, -1, -1, 1,\n", + " -1, 1, -1, -1, 1, -1, -1, 1, -1, -1, 1, -1, 1, -1, -1, -1, -1,\n", + " -1, 1, 1, -1, 1, -1, -1, 1, 1, 1, -1, -1, 1, -1, 1, 1, -1,\n", + " -1, -1, 1, -1, 1, 1, 1, -1, 1, -1, 1, -1, -1], dtype=int64)" + ] + }, + "execution_count": 241, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "X_test_vect = vect.transform(X_test)\n", + "\n", + "y_pred = nb.predict(X_test_vect)\n", + "\n", + "y_pred" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "`y_pred` now contains a prediction for every row of the test set. With this prediction result, we can pass it into an sklearn metric with the true labels to get an accuracy score, F1 score, and generate a confusion matrix: " + ] + }, + { + "cell_type": "code", + "execution_count": 243, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Accuracy: 74.50%\n", + "\n", + "F1 Score: 68.93\n", + "\n", + "COnfusion Matrix:\n", + " [[116 41]\n", + " [ 23 71]]\n" + ] + } + ], + "source": [ + "from sklearn.metrics import accuracy_score, f1_score, confusion_matrix\n", + "\n", + "print(\"Accuracy: {:.2f}%\".format(accuracy_score(y_test, y_pred) * 100))\n", + "print(\"\\nF1 Score: {:.2f}\".format(f1_score(y_test, y_pred) * 100))\n", + "print(\"\\nCOnfusion Matrix:\\n\", confusion_matrix(y_test, y_pred))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can see that our model has predicted the sentiment of headlines with a 75% accuracy, but looking at the confusion matrix we can see it's not doing that great of a job classifying.\n", + "\n", + "For a breakdown of the confusion matrix, we have:\n", + "+ 116 predicted negative (-1), and was negative (-1). **True Negative**.\n", + "+ 71 predicted positive (+1), and was positive (+1). **True Positive**.\n", + "+ 23 predicted negative (-1), but was positive (+1). **False Negative**.\n", + "+ 41 predicted positive (+1), but was negative (-1). **False Positive**.\n", + "\n", + "So our classifier is getting a lot of the negatives right, but there's a high number of false predictions. We'll see if we can improve these metrics with other classifiers below." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Cross Validation\n", + "\n", + "Lets now utilize **cross validation**, where we generate a training and testing set 10 different times on the same data in different positions.\n", + "\n", + "Right now, we are set up with the usual 80% of the data as training and 20% as the test. 
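\n", + "\n", + "For reference, the per-fold scores we will be averaging are computed exactly as in the single split above: precision = TP / (TP + FP) = 71/112 = 0.63, recall = TP / (TP + FN) = 71/94 = 0.76, F1 = 2 * precision * recall / (precision + recall) = 0.69, and accuracy = (TP + TN) / total = 187/251 = 74.5%, which matches the scores reported earlier.\n", + "\n",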
+ "The accuracy of prediction on a single test set doesn't say much about generalization. To get better insight into our classifier's ability to generalize, there are two different techniques we can use:\n", + "\n", + "1) **K-fold cross-validation**: The examples are randomly partitioned into $k$ equal-sized subsets (usually 10). Out of the $k$ subsets, a single subset is used for testing the model and the remaining $k-1$ subsets are used as training data. The procedure is then repeated $k$ times, so that each subset is used exactly once as part of the test set. Finally, the average of the $k$ runs is computed. The advantage of this method is that every example is used in both the training and the test set.\n", + "\n", + "2) **Monte Carlo cross-validation**: The dataset is randomly split into train and test data, the model is run, and the results are then averaged. The advantage of this method is that the proportion of the train/test split does not depend on the number of iterations, which is useful for very large datasets. On the other hand, the disadvantage is that, if you don't run enough iterations, some examples may never be selected for the test subset, whereas others may be selected more than once.\n", + "\n", + "For an even better explanation of the differences between these two methods, check out this answer: https://stats.stackexchange.com/a/60967\n", + "\n", + "The relevant class from the sklearn library is `ShuffleSplit`. This performs a shuffle first and then a split of the data into train/test. Since it's an iterator, it will perform a random shuffle and split for each iteration. This is an example of the Monte Carlo method mentioned above.\n", + "\n", + "Normally, we could just use `sklearn.model_selection.cross_val_score`, which automatically calculates a score for each fold, but we're going to show the manual splitting with `ShuffleSplit`.\n", + "\n", + "Also, if you're familiar with `cross_val_score` you'll notice that `ShuffleSplit` works differently. The `n_splits` parameter in `ShuffleSplit` is the number of times to randomize the data and then split it 80/20, whereas the `cv` parameter in `cross_val_score` is the number of folds. By using a large `n_splits`, we can get a good approximation of the true performance on larger datasets, but it's harder to plot." + ] + }, + { + "cell_type": "code", + "execution_count": 262, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\n", + "Average accuracy across folds: 72.95%\n", + "\n", + "Average F1 score across folds: 66.43%\n", + "\n", + "Average Confusion Matrix across folds: \n", + " [[115.6 39. 
]\n", + " [ 28.9 67.5]]\n" + ] + } + ], + "source": [ + "from sklearn.model_selection import ShuffleSplit\n", + "\n", + "X = df.headline\n", + "y = df.label\n", + "\n", + "ss = ShuffleSplit(n_splits=10, test_size=0.2)\n", + "sm = SMOTE()\n", + "\n", + "accs = []\n", + "f1s = []\n", + "cms = []\n", + "\n", + "for train_index, test_index in ss.split(X):\n", + " \n", + " X_train, X_test = X.iloc[train_index], X.iloc[test_index]\n", + " y_train, y_test = y.iloc[train_index], y.iloc[test_index]\n", + " \n", + " # Fit vectorizer and transform X train, then transform X test\n", + " X_train_vect = vect.fit_transform(X_train)\n", + " X_test_vect = vect.transform(X_test)\n", + " \n", + " # Oversample\n", + " X_train_res, y_train_res = sm.fit_sample(X_train_vect, y_train)\n", + " \n", + " # Fit Naive Bayes on the vectorized X with y train labels, \n", + " # then predict new y labels using X test\n", + " nb.fit(X_train_res, y_train_res)\n", + " y_pred = nb.predict(X_test_vect)\n", + " \n", + " # Determine test set accuracy and f1 score on this fold using the true y labels and predicted y labels\n", + " accs.append(accuracy_score(y_test, y_pred))\n", + " f1s.append(f1_score(y_test, y_pred))\n", + " cms.append(confusion_matrix(y_test, y_pred))\n", + " \n", + "print(\"\\nAverage accuracy across folds: {:.2f}%\".format(sum(accs) / len(accs) * 100))\n", + "print(\"\\nAverage F1 score across folds: {:.2f}%\".format(sum(f1s) / len(f1s) * 100))\n", + "print(\"\\nAverage Confusion Matrix across folds: \\n {}\".format(sum(cms) / len(cms)))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Looks like the average accuracy and F1 score are both similar to what we saw on a single fold above." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Let's Plot our Results" + ] + }, + { + "cell_type": "code", + "execution_count": 263, + "metadata": {}, + "outputs": [ + { + "data": { + "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAA7kAAAIuCAYAAAB6s8+PAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAIABJREFUeJzs3XlUVeX+x/EPyHjEHHIK4SKpOSMOmWMq5AxqOVRamZUjRuaAI6LiWGlOOWblVJlzoV2nNEvKqdSuA6Kg1+Jq4ZgIyPT7w8X5eYKDRwXB7fu1lmt1nr3Z+3tOnuLz7P18t11GRkaGAAAAAAAwAPv8LgAAAAAAgNxCyAUAAAAAGAYhFwAAAABgGIRcAAAAAIBhEHIBAAAAAIZByAUAAAAAGAYhFwAAK1599VXVrl1bcXFxWbatW7dOlStXVnJyss3H8/Pz0wcffJCbJVrYu3evKleubPGndu3a6tatm3744Yc8Oy8AAAWJQ34XAABAQXbjxg1NmDBBCxYsuO9jzZ07V8WLF8+FqnL24Ycfyt3dXRkZGbp27Zq++eYb9e/fX2vWrFGVKlXy/PwAAOQnQi4AADkoUqSIdu7cqW3btqlly5b3daxq1arlUlU5q1y5sipUqGB+3bRpU+3du1cbN24k5AIADI/blQEAyEGTJk1Ur149TZw4UQkJCTnuu27dOnXq1Ek+Pj6qXbu2evXqpVOnTpm3Z96ufOPGDfn6+uqzzz6z+PmtW7eqWrVqio+PlySdPn1ab731lnx9ffXMM88oPDxcSUlJd/0e7O3t5ebmZjF27do1jR8/Xs8++6xq1KihJk2aaPLkyUpJSZEkBQYGavDgwRY/c+7cOVWuXFn79++XJF24cEGDBg1S3bp1VbduXQ0bNkyXL18275+QkKCRI0eqcePG8vHx0UsvvaSffvrprusHAOBuEHIBAMiBnZ2dJkyYoIsXL2rmzJlW99u8ebNGjx6ttm3b6uOPP1ZYWJhiYmI0ZsyYLPuaTCY1b95cW7dutRjfsmWL6tevr5IlS+rChQvq0aOHEhISNGPGDIWEhGjTpk0KCQm5Y83p6elKTU1Vamqqrl69qqVLl+r3339Xx44dzfsMHjxYkZGRGjlypBYvXqwXXnhBS5cu1fr16yVJHTt21M6dO5WYmGjxHt3d3VWvXj3duHFDr732mk6ePKlJkyYpPDxcv/76q/r27au0tDRJ0qRJk7R//36NHTtWCxcuVPHixdW/f39dunTpju8BAIB7xe3KAADcQYUKFfTmm29q8eLF6tSpk6pXr55ln3Pnzun1119X3759zWNXrlzR1KlTlZ6eLnt7y3nl9u3bKzg4WH/99ZdKlSqlmzdvaufOnRoxYoQkaenSpbKzs9PixYvNV2Hd3d31+uuv6/jx46patarVegMCArKM9enTx3yrclJSktLS0jR+/Hg1aNBAktSwYUPt2rVLBw8eVLdu3RQYGKjp06dr586dateunSQpIiJC7du3l52dndavX68//vhDW7ZsUbly5STduh27bdu22rlzp5577jn98ssvatSokVq3bm3evmDBAovgDABAbiPkAgBggwEDBmjz5s0aO3asVq9enWV7Zri9cuWKTp8+rZiYGO3cuVMZGRlKTU2Vk5OTxf7PPvusTCaTtm/frpdfflk//PCDkpOTzet+9+/fr7p168rFxUWpqamSZH69d+/eHEPu7Nmz5e7uLunWLcM//fSTFi1apGLFiunNN9+Ui4uLPv30U2VkZOi///2vYmNjFRUVpYsXL5pvVy5TpoyeeeYZbdq0Se3atVN0dLROnjyp999/31xfpUqVVKZMGXN9Hh4e+te//qWff/5Zzz33nOrVq6evvvpK8fHx8vPzk5+fn4YPH34//xoAALgjQi4AADZwdnbWuHHj9MYbb2jlypUqXLiwxfYLFy5o1KhR+vHHH+Xi4qLKlSurSJEikqSMjIxsj+fv76+tW7fq5Zdf1pYtW9SwYUNz9+UrV67oyJEj2V41/vPPP3OstWLFihaNpxo0aKDLly/ro48+0uuvv65ChQpp+/btmjRpkuLi4lSyZEn5+vrK2dnZotYOHTooLCxM169fV0REhJ566inz1eArV67o2LFj2daXuc+YMWNUsmRJbdy4UTt27JCDg4Patm2r8PBwubq65vgeAAC4V4RcAABs1LhxYwUEBGjmzJnq3bu3xbbMpkvr169X5cqVVahQIX3++ef68ccfrR6vXbt2CgoK0sWLF7Vz506NHDnSvK1IkSJq3bp1lvNIUsmSJe+69sqVK2vVqlW6dOmSEhISNGjQIHXv3l29e/dWqVKlJEldu3a1+JlWrVpp/Pjx2r17t7Zt26ZOnTqZtz322GOqXbu2Ro8eneVcRYsWlSS5uLho0KBBGjRokE6ePKlNmzZp0aJFqlSpksVt3QAA5CYaTwEAcBdGjhwpe3t7LV682GL80KFD6tChg6pVq6ZChQpJkiIjIyXdagSVncaNG6tw4cKaNWuWEhMT9dxzz5m31alTRzExMapWrZpq1qypmjVrqkyZMpoxY4bOnDlz13UfO3ZMbm5uKl68uI4dO6aUlBT17dvXHHDj4+N18uRJi1rd3Nzk7++vFStWKCYmRu3btzdvq127ts6ePStvb29zfZUqVdKcOXP022+/KSMjQ506dTJ3kH7qqaf07rvvqnz58jp//vxd1w8AgK24kgsAwF0oWbKkhgwZorCwMIvxGjVq6KuvvlL58uXl6uqqr7/+Wtu3b5ckJSYmZnt7rqOjo1q2bKnVq1erWbNmeuyxx8zbevbsqfXr1ysoKEgvvfSSUlNTNXfuXMXHx9/xWbdRUVH6+++/JUmpqanas2eP1q5dqz59+sjBwUFVqlRRoUKFNHXqVHXu3Fl//vmnFixYoOTk5CxNoTp27Kg+ffqobt265gZTktS5c2ctXbpUb731lt588005OTnpk08+0ZEjRzRixAjZ2dnJx8dH8+bNk8lkkpeXl3766SfFxsYqNDT07j50AADuAiEXAIC79OKLL2rDhg369ddfzWNTpkzR2LFjFRISIldXV9WsWVNLlizRG2+8oUOHDsnPzy/bY7Vv315r1qwxdzDO5OnpqRUrVuj999/XoEGD5OTkpLp162rGjBnmdbvWvPvuu+Z/dnR0lKenpwYNGmS+9fnJJ5/U5MmT9dFHH2nLli0qU6aM2rRpo1atWmnVqlVKS0szX41u2LChHBwcFBgYaHGOxx57TMuXL9d7771n7ghdo0YNLV26VE8++aSkW1e9HR0dNXfuXF26dEleXl6aOnWqGjVqZMvHDADAPbHLyK4bBgAAgKTdu3crKChIP/74o3mtLQAABRlXcgEAQBaHDh3S7t27tWbNGnXs2JGACwB4aORL46kjR46oSZMm5tdXr15VUFCQ6tatq+bNm1s8fzAjI0PTp09XgwYN9PTTT2vixIlKS0vLj7IBAHhkXL9+XZ9++qk8PT01ZMiQ/C4HAACbPdAruRkZGVq7dq2mTp1qXusjSaGhoTKZTIqMjFRUVJR69+6tSpUqydfXVytXrtSuXbv09ddfy87OTn379tUnn3yS7SMVAABA7mjSpInFmmMAAB4WD/RK7oIFC7Rs2TL169fPPJ
aQkKDt27crODhYzs7O8vHxUUBAgDZs2CBJ2rhxo3r27KnSpUurVKlS6tu3r9avX/8gywYAAAAAPCQeaMjt3LmzNm7cqJo1a5rHzp49KwcHB3l6eprHvL29FRMTI0mKiYlRxYoVLbbFxsaKflkAAAAAgH96oCG3dOnSsrOzsxi7ceOGXFxcLMZcXFyUlJQk6dazBW/f7urqqvT0dN28eTPvCwYAAAAAPFTypfHU7VxdXZWcnGwxlpSUJJPJJOlW4L19e2JiohwcHOTs7PxA6wQAAAAAFHz5HnK9vLyUkpKiuLg481hsbKz5FuUKFSooNjbWYlvmQ+YBAAAAALhdvodcNzc3+fv7a/r06UpMTNSRI0cUERGhwMBASVKHDh20ZMkSnT9/XvHx8Vq4cKE6duyYz1UDAAAAAAqiB/oIIWvCw8MVFhamZs2ayWQyadiwYapVq5YkqXv37oqPj1eXLl2UkpKiwMBA9erVK58rBgAAAAAURHYZtCkGAAAAABhEvt+uDAAAAABAbiHkAgAAAAAMg5ALAAAAADAMQi4AAAAAwDAIuQAAAAAAwyDkAgAAAAAMg5ALAAAAADAMQi4AAAAAwDAIuQAAAAAAwyDkAgAAAAAMg5ALAAAAADAMQi4AAAAAwDAIuQAAAAAAwyDkAgAAAAAMg5ALAAAAADAMQi4AAAAAwDAIuQAAAAAAwyDkAgAAAAAMg5ALAAAAADAMQi4AAAAAwDAIuQAAAAAAwyDkAgAAAAAMg5ALAAAAADAMQi4AAAAAwDAIuQAAAAAAwyDkAgAAAAAMg5ALAACs2rt3rypXrmz1jyRdu3ZNo0ePVuPGjdWgQQMNHTpUly5dsvkcn332mbp165ZlfNq0aapfv778/f21detWi20nTpxQmzZtlJaWdn9vEABgOA75XQAAACi4qlevrlWrVlmMXb16VW+//bYCAgIkSSEhIfrtt980fPhwubq66r333tPAgQO1cuVK2dnZ5Xj8HTt26IMPPlC1atUsxnfv3q2VK1dq2rRpOn36tEJCQtSgQQM99thjkqSZM2cqKChIhQoVysV3CwAwAkIuAACwys3NTb6+vhZjgwcPVtmyZRUaGqpr165p586dmjx5sjp06CBJMplMeuONNxQTE6MKFSpke9zExEQtWLBAixYtUpEiRbJsj4qKUtWqVdW2bVslJydrzpw5OnPmjHx8fHT48GH98ccfat++fe6/YQDAQ4+QCwAAbHbw4EFt2rRJ8+fPl6urqxISEiTdCsOZihYtKkm6cuWK1eN8++23WrNmjd577z3t2bNHMTExFtvLlSunM2fO6Pfff1d0dLTs7e1VtmxZSbeu4g4cOFD29qy6AgBkRcgFAAA2mzlzpp5++mn5+flJkkqWLKnnnntOCxcuVOXKleXq6qoPP/xQHh4eqlmzptXjNGjQQNu2bZPJZNKePXuybG/ZsqVWrVolf39/2dvb65133lHp0qW1b98+Xb16Va1atcqz9wgAeLgxBQoAAGxy4sQJ7du3T2+++abF+IgRI3T16lW1bt1azz77rH777TfNnz9fTk5OVo/l7u4uk8lkdbujo6M+++wzbdu2TXv27FG/fv0k3QrZwcHBOn36tHr06KFOnTppx44dufMGgUeMLY3lMjIytGjRIvn5+alWrVrq1q2bDhw4YPM5aCyH/MCVXAAAYJPVq1fL09NTzZs3N4/9+eefevnll1W6dGmNGjVKDg4O+vjjj9WnTx+tWrVKZcqUuefz2dnZ6V//+pf59e7du5WWlqbmzZsrMDBQrVu3Vs2aNRUcHKzt27erVKlS9/P2gEeOLY3l5syZo8WLF2vo0KGqWLGiVqxYoT59+igiIkLu7u45Hp/GcsgvXMkFAAA22bFjh9q0aWPRMXnNmjVKSEjQxx9/LH9/fzVr1kyLFi1Senq6lixZkqvnnzlzpt555x3997//1cmTJ9W9e3c1a9ZMnp6e2r17d66eC3gUZDaWu/3Pxo0bLRrLLV68WKNGjVLPnj3VuHFjzZw5UyVLltS+ffusHjcxMVEffvihBg4cmO0dG7c3luvdu7cSExN15swZSaKxHHJFgQm5v/zyi1544QXVqVNHrVu31jfffCPp1mxSUFCQ6tatq+bNm2v16tX5XCkAAI+eU6dO6X//+1+WtbDnz5+Xp6enSpQoYR5zdXVV1apVdfr06Vw7/7Zt21S4cGE1atTI/AzezKs+bm5uunjxYq6dC3hUZTaWGzFihFxdXfXjjz9Kkp5//nnzPs7Oztq6das6depk9Ti3N5bLXL9/u9sby0VGRtJYDrmuQPztSUtLU1BQkPr06aNffvlFkyZN0ogRI/T7778rNDRUJpNJkZGRmj17tj744AMdOnQov0sGABRgtqwzu93BgwdVpUoV7d271+ZzPGrrzP7zn//I0dFRVatWtRj38vLS2bNnzcFTkpKTkxUVFaVy5crlyrnT09M1e/ZsvfPOO5JkDtSXL1+WJF26dMkiZMPY+H7nnX82louOjla5cuW0d+9eBQYGqnr16urUqZN+/fXXHI+T2VguMDAw2+0tW7ZUlSpV5O/vrwEDBtBYDrmuQKzJvXbtmi5duqS0tDRlZGTIzs5Ojo6OKlSokLZv364tW7bI2dlZPj4+CggI0IYNG7I8sw8AgEy2rDPLdPPmTYWGhiojI8Pm4z+K68yio6Pl6ekpR0dHi/HOnTtr6dKleuutt9S/f385Ojpq2bJlunLlinr27Cnp1md87NgxlS1b1ny15m5s2rRJpUuXVr169SRJnp6eKl++vObMmaMqVaooLi5OjRs3vv83iYcC3++8kdlYbsGCBeaxS5cu6fLlyxo9erQGDRqkMmXK6OOPP9Zbb72lLVu2qGTJktke605rdTMby507d05ubm7mSarbG8uFhYUpISFBb7/9tvz9/XPvjeKRUCCu5BYvXlzdu3fX4MGDVb16dfXo0UOhoaG6fPmyHBwc5Onpad7X29s7y7P0HiW2zF5euHBBwcHBql+/vho0aKDw8HDduHEjx+Neu3ZNY8aMUaNGjVSvXj0NHTo0y61fRp+9BGAcd1pndruFCxfq+vXrNh33UV5ndunSJfMv87crVqyYVq5cqbJly2r48OEaPny4ChUqpK+++koVKlSQdKs51YsvvnhPS47S0tI0d+5c81Vc6VZDqvfee0979+7VvHnzNHHiRD3xxBP3/ubwUOH7nTeyayyXmpqqK1euaNy4cerSpYuaNm2qjz76SA4ODlq+fPl9nS+zsVxmwL29sdy7776rhg0b6t1339XgwYP1119/3de58OgpEFdy09PT5eLiolmzZsnPz0+RkZEaMmSI5s+fLxcXF4t9XVxclJSUlE+V5r87zV7evHlTffr0kSRNmTJFiYmJmjZtmuLj4zVr1iyrxx08eLBiYmI0ZswYOTk5acaMGerXr5/5FxKjz17i3u3du1evvfaa1e1RUVGKj4/X9OnT9eOPPyoxMVHVq1fXiBEjstz2eLu//vpLU6ZM0ffffy8HBwc1a9ZMw4cP1+OPP27eZ9q0aVq7dq2KFCmi4
cOHW9zedOLECQ0aNEibNm16aP5e8lnmncx1ZvPnz5erq6t5/NSpU/rkk080adIkvfvuu3c8zu3rzPbs2ZNl0vX2dWbR0dGGWmc2ZcoUq9s8PT01b948q9s9PDwUFRVldfvUqVOtbitUqJC2bNmSZbxWrVrZjuPRw/c7d+zYsUMBAQEWjeUyw/7td0q4ubmpZs2aio6OztXzz5w5U0OHDjU3llu6dKlKlChhbizXuXPnXD0fjK1AhNytW7fqyJEjGj58uCSpefPmat68uebMmaPk5GSLfZOSknJ8rp7RZc5e3m7w4MHm2cuffvpJJ06c0NatW+Xl5SXp1izciBEjdOXKFRUrVizLMU+dOqUffvhBn376qRo1aiRJKlq0qF555RUdO3ZM1apVs5i9TE5O1pw5c3TmzBn5+PgYYvYS9+5OEy9paWnq37+/rly5ouHDh6tIkSL67LPP1KNHD23atCnbqy9paWnq27evLly4oBEjRqh06dJasWKFXn31VW3YsEFOTk6GnHjhs8w7/1xnJt169uOYMWPUo0ePbNfxZSdznZnJZNKePXuybG/ZsqVWrVolf39/2dvbs84MeAD4ft8/a43lMh/hlZKSYjGBkJqaKmdn51w7/+2N5TJ779BYDvejQITc//3vf7p586bFmIODg6pXr66DBw8qLi7OfG9/bGysKlasmB9lFkj/nL2sV6+eVq1aZQ640q11DxkZGUpJScn2GJ6enlq1apVq1Khh8TOSzP9ejD57iXt3p4mXgwcP6siRI1q/fr15fVP9+vXVokULrV27VgMHDsxyzN27d+vo0aNauXKleQ1ew4YN1aZNG61atUqvvvqqISde+CzzRnbrzCTp888/V3x8vAYOHKg//vjDpmOxzgwoWPh+5w5rjeUyL378+9//NjfiunLlio4cOaL+/fvnyrkzG8uFhYVJsmwsV6pUKRrL4Z4UiFTSqFEjHT9+XGvXrlVGRob27dunbdu2qX379vL399f06dOVmJioI0eOKCIiwmqntkfRP2cvCxcubP4lOTk5WQcOHNDMmTPVvHlzlSpVKttjODs7y9fXVw4ODkpJSdHRo0c1ceJEVa5cWTVr1pREFzzY7p+PH3ByctKLL75o0cDD1dVVTzzxhNVfPGJiYmQymcyhTJKcnJxUo0YN8+z6o/D4AT7L3JHdOrMLFy5oxowZCgsLy7Is5n6xzgx4cPh+5w5rjeUqVKigwMBATZkyRStXrtT333+v/v37y9XV1Rx6b968qUOHDun8+fP3dO6cGst9/vnnNJbDPSkQV3IrV66s2bNna9asWZo0aZLc3d01bdo01axZU+Hh4QoLC1OzZs1kMpk0bNgw1apVK79LLhCszV5mevHFF3X8+HEVK1ZMgwcPtumYgwYN0vbt2+Xs7KyFCxeab0808uwlctc/J14ym4Lc7o8//lB0dLTatm2b7TFKliypxMTELLO3v//+u/mOBCPfNpaJzzJ3ZLfObNy4cXr22WfVsGFDpaamKj09XdKtKwppaWm5emt2QVpnVn7Epgd2roLszNSH984EWOL7nTusNZaTpMmTJ2vWrFmaP3++rl+/rjp16mj58uUqWrSopP9vLDdw4EC9/fbbd3XezMZy77//vnkss7FcSEiIvvvuOxrL4Z4UiJArSX5+ftk+LLpYsWI5Nkx6lGU3e3m7kSNHKjk5WYsXL9Yrr7yitWvXmtdWWPPWW2+pR48e+vLLL9W7d28tW7ZMderUkfT/s5eZbp+9DAwMVOvWrVWzZk0FBwdr+/btVq8cw7juNPEi3VrHM3bsWLm6uqpLly7Z7tO0aVMVLVpUgwcP1tixY1W8eHGtWLFC0dHRKlOmjCTjT7zwWeYOa+vMvvvuO0nS5s2bLcZff/111a9f/767hmZinRluZ0tzucTERE2dOlVbtmxRamqqWrZsqVGjRqlIkSJWf+7w4cPZPtN13LhxevnllyUZs7kc3+/ck1NjOScnJw0bNkzDhg3LdjuN5VAQFZiQi7uX3ezl7Z555hlJUp06deTn56fVq1dryJAhOR6zdu3a5p8NCAjQihUrzCH3n4wye4ncc6eJl9TUVIWEhOinn37S3Llzra6xKVGihD766COFhISYr1D6+fmpW7du2rdvn3k/I0+88FnmDmvrzNasWWPxOi4uTsHBwRo/frzq16+fK+dmnRn+yZbnu2Y2kRw9erQyMjI0bdo0Xb16Ncfu1SdPnlTx4sWzTIplPoLRqM3l+H4DsIaQ+5CyNnsZFRWl06dPq127duYxNzc3eXh4WF0bcu7cOe3fv18vvPCCeaxQoUKqVKmS1Z8xyuzld999p1mzZik2NlZPPPGE3nzzTfNseGxsrKZMmaJ9+/bJZDKpXbt2Gjx4sNXu3uvWrdPIkSOz3VauXDnzzLIRZ9Mz5TTxkpiYqODgYEVGRmratGnZ3rlxu3r16mnHjh36/fff5eTkpDJlyigkJCTHqxlGmnjhs8wd1taZZfYbyJT5vfb29taTTz4p6dY6s2PHjqls2bLmNcp3I6d1ZlWqVGGd2SPoTs3lzp49q4iICC1YsMA8wVWmTBm99tprio6OVqVKlbI9blRUlCpXrpzl2LdvN2JzOb7fAKx5eDuJPOKszV4ePHhQQ4cOtVj8/+effyomJsZqV+qYmBiNHDnSHFalW79EHz58ONufyZy9fOeddyRZzl5KemhmLyMjIxUUFKQ6depo4cKFatWqlUJDQ7Vt2zZdv35dvXr10pkzZzRx4kSFh4fr8OHDOXYSbN68uVatWmXx57333pOdnZ05EGTOpo8fP17PP/+8QkJCdO3aNfMxHtbZdMn6xIskXb9+Xa+//rp+/vlnzZw5847N4y5duqS1a9cqOTlZnp6e5ttqT5w4oSpVqmT7M7dPvFy6dEnSwznxIvFZ5qac1pndSeY6s8znhd+NzHVmmf+dlP5/ndnevXs1b9481pkhS3O5vXv3ytHR0SIc1a9fX0WKFMn2kTaZoqKi9NRTT1ndbtTmcny/AVjDldyHlLXZy8DAQH388ccaMGCABg4cqJs3b+qjjz5SiRIl9OKLL0rKOnvZqFEj1axZU8OGDdO7774rFxcXffLJJ0pMTNSbb76Z5dxGmb388MMP1aZNG/OtRg0bNtTZs2e1Z88eXbhwQfHx8dq8ebP5Fs7atWvL399f3333XbZXzkqUKGER7jMyMjR16lTVrVvXHI6NOpsuWZ94ycjI0DvvvKMTJ05owYIFNv3dSElJ0ahRo1SyZEk1a9ZMknTgwAFFRUVle8u90W4b47PMPTmtM7tdhQoVsqwpY50Z8to/m8vFxsbK3d3d4v/tdnZ2cnd319mzZ60e5+TJkzKZTAoICFBsbKzKly+vkJAQ83feqM3l+H5borHcLTSWg0TIfWhZm70sUqSIli1bpmnTpmnkyJFKTU1VkyZNNHLkSPOtif/sgufo6KhFixbp/fff16RJk5SQkKB69erp888/l4eHh8XxjdIFLz4+XkeOHMnSdXr27NmSpAkTJsjLy8tijWKJEiX05JNPas+ePXe8PVSSvvnm
Gx0+fFjr1q0zz5Ab+XnD1iZeNm3apB9//FGvvPKKChcubHHHQPHixeXl5ZVl4qVMmTJq0aKFJk6cqJSUFCUlJWny5Mlq1KiR+Ze2f57DCBMvmfgsAePLrrlcQkKCChcunGXfwoULKyEhIdvjXLhwQVeuXNHZs2c1ZMgQubi4aNWqVerfv7/Wrl2rqlWrGrq5HABkh5D7kMpp9tLDw0Nz5szJcfs/Zy9LlChh04xoQZ29vFvR0dGSbnWV7dWrl/bv36/HH39cAwcOVNeuXVWyZEnFx8fr5s2bcnJyknSr0c/58+dteqh85tWwDh06WFyNM+psumR94mXHjh2SpBUrVmjFihUW29q1a6cPP/ww28dNCfBEAAAgAElEQVQPTJkyRRMnTtSoUaPk6OioNm3aZHvl0SgTL7fjswSML7vmcunp6VabSVobf+yxx/Txxx+ratWqKlmypCSpcePG6tixo+bPn2+evDVqczkAyI5dRkZGRn4XATxomzdv1rvvvquyZcuqS5cuqlevnrZv364VK1Zo0aJF8vT0VMeOHdWqVSsNGTJEDg4OmjNnjjZs2KA6depo6dKlOR5/x44dGjBggL7++mtVrlzZYltGRkaW2fTu3burT58+8vDwYDYdQJ7hdsZbCsLtjM2bN1dAQICGDh1qHps2bZp27dqlb7/91mLfDh06qH79+hozZozNx588ebK+//57qxPQL7zwgoYOHSoPDw+1bNlSP/30k0qUKKGAgAD16tXroWkuh//H9/uW/P5+W2tqOmfOHM2dOzfbn7H2aCuamt47ruTikZSSkiJJatWqlflqV8OGDRUbG6sFCxboiy++0IwZMxQaGqqIiAjZ29vr+eefl5+fn5KTk+94/NWrV6t+/fpZAq7EbDpwt/jF7Zb8/sUNucdaczkvLy/973//U1pamvmX0YyMDMXFxal8+fLZHuvMmTOKjIxU165dLZY4JCcnZ3vrs1SwnpDA9/v/8R1/+GU2NX3ppZc0YsQIRUZGKjQ0VMWLF1fXrl3VtGlTi/0PHjyo9957z+qkUmZT09udPXtWw4cPz9LU1GiPCLtfhFw8kjL/x9+kSROL8QYNGmjRokWSbt1a7Ofnp//+978qWrSoSpQooR49esjd3T3HY9+4cUORkZFWZ97+ySiPagEA2MZac7kGDRooMTFRkZGR5l+G9+3bp7///tvq810vXLig8ePHq1y5cuZ19snJyfrhhx+yvRvIaM3lgIIkp6amLVu2tHhcVWJiooYOHaqAgAB16tQp2+M96k1N7wchNx8xe3lLfsxcenp6Svr/K7qZUlNTZWdnpz/++EP79u3T888/L29vb0m31itGR0ffsenU/v37lZycrJYtW96xjoI0mw4AeDCsNZcrX768WrZsqWHDhmnEiBEqVKiQpk2bJn9/f/Mjgv7ZXK5evXqqXbu2xowZoyFDhqho0aL65JNPdP36db311ltZzk1zOSBv3Kmp6T999tlnunjxooYPH27zOR61pqb3g5CLR1KlSpVUqlQpffvtt3ruuefM4z/88IN8fX11/vx5jRgxQjVr1jQ/KzgiIkLXrl3LtiPt7X777Td5eHiYG4BYUxBn05l4uSU3Jl74LG/h9jsgq5ye7zp16lRNmjRJ4eHhcnBwkJ+fn0aPHm3e/s/mcoUKFdL8+fM1ffp0ffDBB/r7779Vp04dLV++3Pxc7Ew0lwPyzp2amt7u77//1scff6xevXqpdOnSNh3/UWxqej8IuXgk2dvbKzg4WKGhoSpdurSaNm2qTZs26dChQ1q+fLl8fX1VtWpVDR8+XO+8847Onz+vyZMnq0uXLubQ+8/Z9EynTp0yX/3NCbPpAPBoyulpBm5ubpoyZYrVfbJ7QkLx4sU1ceLEO57XKE9IAAqiy5cvS5KGDBmiLl26qE+fPtq+fbvGjBmj0qVLW1wkWb9+vVJSUvTKK6/YfPydO3fq3Llz+uijjyzGeURY9gi5eGR169ZNhQoV0uLFi7VixQp5e3tr/vz55tA5d+5cTZgwQYMGDdJjjz2mXr16KSgoyPzz2T2qRbo1Q3+nq7jMpgMAABjHnZqa3h5yV69erfbt29/VHXs0Nb07hFw80jp37my1sZOHh4e5CZW17f+cTZekZcuW3fG8zKYDAAAYhy1NTSXp3LlzOnnypIYNG2bzsWlqevcevVXIAAAAAJCL7tTUNNPu3bvl5uamhg0b2nzse21qeunSJUmPZlNTruQCAADcBRrL/T+aywG33Kmpaab//Oc/ql69epbu6jl5mJua5hdCLgAAAADchzs1Nc106tQpVatWLdtj0NQ09xBy8dBjRv3/MaMOAACQP+7U1FTK+RFiNDXNPYRcAAAAAMgFOTU1laQdO3ZY3UZT09xD4ykAAAAAgGHc1ZXc2NhYXbx4Ufb29ipVqpS5ixgAAAAAAAXBHUPuwYMHtWLFCu3Zs0fXrl0zj9vZ2alo0aJq2rSpXn75ZdWpUydPCwUAAACAvEa/l1se5l4vVkPu2bNnNXbsWJ0/f17+/v6aNWuWKlSooGLFiik9PV2XL1/WiRMndODAAQ0ZMkQeHh6aMGGCTZ2/AAAAAADIC1ZD7vDhwxUUFKSmTZtmu/2JJ57QE088oRYtWmjo0KH67rvvNGLECK1atSrPigUAAAAAICdWQ+6XX35p80Hs7Ozk7+8vf3//XCkKAAAAAIB7cU/dla9fv67r16/ndi0AAAAAANyXuwq5x48fV2BgoOrVq6enn35a7du315EjR/KqNgAAAAAA7spdhdzRo0crODhYhw8f1v79+9W1a1eFhITkVW0AAAAAANwVqyE3ODhYp06dshhLSEiQp6ennJ2d5ebmJnd3d25bBgAAAAAUGFYbT3Xs2FEhISEqX7683n77bXl7eyskJEQ9e/aUo6Oj0tPTlZqaqvHjxz/IegEAAAAAsMpqyM3slvzvf/9b77zzjqpUqaKBAwdq9+7dOn36tOzt7eXl5SVXV9cHWS8AAAAAAFZZDbmZ2rRpo9atW+ubb75R//79VatWLQ0cOFDu7u4Poj4AAAAAAGyWY8j9/vvvderUKZUrV04BAQEKCAjQ+vXr9cYbb6h+/foaMGCAypYt+6BqBQAAAAAgR1YbT02cOFHjx4/XsWPHNHfuXPXv31/29vbq3LmzIiIiVLVqVb322msKDw9/kPUCAAAAAGCV1Su5Gzdu1JdffqkKFSooMTFR9erV0+XLl1W8eHE5ODjo5ZdfVufOnbVq1aoHWS8AAAAAAFZZvZLr6emplStXKjIyUsuWLVORIkX02GOPWezj5OSkV199NVcKOX/+vPr27as6dero2Wef1bJlyyRJV69eVVBQkOrWravmzZtr9erVuXI+AAAAAIDxWL2S++GHH2r69OmaOHGi3N3dtXDhQhUqVChPisjIyNCAAQP0zDPPaO7cuTpz5ox69OihGjVq6LPPPpPJZFJkZKSioqLUu3dvVapUSb6+vnlSCwAAAADg4WU15Hp5eWn27NkPpIjDhw/rzz//1NChQ1WoUCFVqlRJX375pZydnbV9+3Zt2bJ
Fzs7O8vHxUUBAgDZs2EDIBQAAAABkYfV25UWLFunmzZs2HygxMVELFiy4pyKOHj2qSpUq6f3331fjxo3VunVrHT58WFevXpWDg4M8PT3N+3p7eysmJuaezgMAAAAAMDarIVeS2rdvr1mzZuno0aNW9zl+/LimTZumdu3ayc7O7p6KuHr1qvbu3avixYtr586dmjJlisLDw3Xjxg25uLhY7Ovi4qKkpKR7Og8AAAAAwNis3q7cp08ftWvXTkuWLNErr7wiZ2dnVaxYUcWLF1d6erouX76s6Ohopaam6vnnn9fy5cvl4eFxT0U4OTmpaNGi6tu3rySpTp06at26tWbPnq3k5GSLfZOSkmQyme7pPAAAAAAAY7MaciXJw8NDYWFhGjp0qPbt26ejR4/q4sWLsre3V40aNdSvXz81aNBATk5O91WEt7e30tLSlJaWZm5ulZaWpmrVqunAgQOKi4uTu7u7JCk2NlYVK1a8r/MBAAAAAIwpx5CbqXDhwmrRooVatGiRJ0U0btxYLi4umjt3roKCgnTkyBFt27ZNn376qf744w9zl+fo6GhFRERo0aJFeVIHAAAAAODhZlPIzWsuLi5avny5JkyYoEaNGsnNzU1jxoyRr6+vwsPDFRYWpmbNmslkMmnYsGGqVatWfpcMAAAAACiACkTIlW49smjJkiVZxosVK6ZZs2blQ0UAAAAAgIdNjt2VAQAAAAB4mNgUco8fP57XdQAAAAAAcN9sCrldu3ZV+/btNX/+fJ07dy6vawIAAAAA4J7YFHL37Nmjnj176ueff1abNm300ksvaeXKlbp06VJe1wcAAAAAgM1sCrlFixZVt27dtHTpUu3cuVMBAQH67rvv5O/vr969eysiIkI3b97M61oBAAAAAMjRXTeeSkpKUkJCgq5fv66UlBSlp6dr0aJFatGihb7//vu8qBEAAAAAAJvY9AihuLg4ffvtt9q0aZOOHz8uHx8fBQQEaN68eXr88cclSTNmzNDIkSMVGRmZpwUDAAAAAGCNTSHXz89PXl5eCgwM1IcffigvL68s+zz99NM6ceJErhcIAAAAAICtbAq5X331lXx8fCzGrl+/Ljc3N/Prpk2bqmnTprlbHQAAAAAAd8GmNbkeHh7q16+fZs+ebR5r06aNgoKCdPXq1TwrDgAAAACAu2FTyB03bpyuX7+u9u3bm8eWLFmia9euadKkSXlWHAAAAAAAd8Om25UjIyO1atUqVahQwTxWuXJljRkzRq+99lqeFQcAAAAAwN2w6Uqus7OzLl26lGU8ISEh1wsCAAAAAOBe2RRy27VrpzFjxuiHH37Q5cuXdfnyZUVGRiosLExt2rTJ6xoBAAAAALCJTbcrDxs2TNeuXVP//v2VlpYmSbK3t1eXLl00YsSIPC0QAAAAAABb2RRynZycNG3aNIWGhio2NlaOjo7y9PRU4cKF87o+AAAAAABsZlPIlaQLFy4oJibGfCX3r7/+0s2bN3X06FEFBwfnWYEAAAAAANjKppC7cuVKTZ48WWlpabKzs1NGRoYkyc7OTrVq1SLkAgAAAAAKBJsaTy1ZskT9+/fXb7/9pscff1y7du1SRESEqlSpopYtW+Z1jQAAAAAA2MSmkPvnn3+qY8eOcnR0VNWqVXXo0CFVrFhRI0eO1OrVq/O6RgAAAAAAbGJTyC1WrJj+/vtvSZK3t7eioqIkSeXKldP58+fzrjoAAAAAAO6CTSG3RYsWGjt2rE6cOKEGDRpo48aN+uWXX7R8+XI98cQTeV0jAAAAAAA2sSnkjhgxQlWqVNGJEyfk5+enp59+Wt27d9eaNWt4Ti4AAAAAoMCwqbvyDz/8oGHDhqlo0aKSpGnTpmnkyJFyc3OTg4PNTyECAAAAACBP2XQld+zYsYqPj7cYK1asGAEXAAAAAFCg2BRya9Sood27d+d1LQAAAAAA3BebLsU6OTlp2rRp+uijj+Th4SEXFxeL7V9++WWeFAcAAAAAwN2wKeTWqFFDNWrUyOtaAAAAAAC4LzaF3IEDB+Z1HQAAAAAA3DebQu7IkSNz3D5lypRcKQYAAAAAgPthU8hNTk62eJ2amqrff/9dp0+f1ksvvZQnhQEAAAAAcLdsCrkzZszIdnzevHmKi4vL1YIAAAAAALhXNj1CyJoOHTpo8+bNuVULAAAAAAD35b5C7rfffqvChQvnVi0AAAAAANwXm25XbtKkSZaxGzduKDEx8Y5Nqe5WfHy8AgMDNXnyZLVo0UJXr17VqFGj9PPPP6tIkSIKCgpS165dc/WcAAAAAABjsCnkDhkyxOK1nZ2dHB0dVaNGDXl5eeVqQaNHj9aVK1fMr0NDQ2UymRQZGamoqCj17t1blSpVkq+vb66eFwAAAADw8LMp5D7//PM6f/68rl+/rooVK0qS1q9fLycnp1wt5osvvpCrq6ueeOIJSVJCQoK2b9+uLVu2yNnZWT4+PgoICNCGDRsIuQAAAACALGxak/vjjz+qTZs2Fk2m1qxZo4CAAB04cCBXComNjdWnn36qcePGmcfOnj0rBwcHeXp6mse8vb0VExOTK+cEAAAAABiLTSH3gw8+0IABAxQcHGweW7lypfr06aMpU6bcdxGpqakKCQnR6NGjVaxYMfP4jRs35OLiYrGvi4uLkpKS7vucAAAAAADjsSnkxsbGqm3btlnG27Vrp1OnTt13EfPmzVPVqlXVrFkzi3FXV1clJydbjCUlJclkMt33OQEAAAAAxmNTyPXy8tKuXbuyjEdGRqps2bL3XcTmzZu1adMm1atXT/Xq1VNcXJwGDx6sXbt2KSUlRXFxceZ9Y2NjzeuCAQAAAAC4nU2NpwYMGKAhQ4bol19+Uc2aNSVJx44d05YtW3LlduV///vfFq/9/PwUGhqqFi1a6MSJE5o+fbomTpyo6OhoRUREaNGiRfd9TgAAAACA8dgUctu0aaNixYrpiy++0Lp16+To6Kjy5ctr+fLled7lODw8XGFhYWrWrJlMJpOGDRumWrVq5ek5AQAAAAAPJ5tCriTVrl1b3t7eKlOmjCTpp59+0lNPPZUnRX333Xfmfy5WrJhmzZqVJ+cBAAAAABiLTWtyf/vtN7Vo0UKfffaZeWzs2LFq27atTp48mVe1AQAAAABwV2wKuZMmTVK7du00ePBg89jWrVv13HPPKTw8PM+KAwAAAADgbtgUck+cOKGePXvK0dHRPGZnZ6eePXvqP//5T54VBwAAAADA3bAp5JYpU0a//vprlvGjR4+qWLFiuV4UAAAAAAD3wqbGUz179lRYWJiio6NVo0YNSbceIbRy5UoNHDgwTwsEAAAAAMBWNoXc7t27y9nZWV988YVWrFghR0dHeXt7a8yYMXJ2ds7rGgEAAAAAsInNjxDq3LmzOnfuLOnWbcrr1q3TlClTdO3aNbVt2zbPCgQAAAAAwFY2h9zLly/r66+/1rp163Ty5Ek5OjqqVatW6tGjR17WBwAAAACAzXIMuenp6fr++++1bt067dq1SykpKapRo4bs7Oy0YsUK+fj4PKg6AQAAAAC4I6sh97333tPXX3+tK1
euyNfXV0OGDFGrVq3k7u6u6tWry2QyPcg6AQAAAAC4I6sh95NPPpGXl5dCQkLk5+cnNze3B1kXAAAAAAB3zepzchcuXCgfHx+FhYWpQYMGevPNN/XVV1/p4sWLD7I+AAAAAABsZjXkNmvWTO+//74iIyM1ZcoUOTg4aMKECXr22WeVnp6unTt3KjEx8UHWCgAAAABAjqyG3Eyurq4KDAzUwoULtXv3bo0aNUq1atXS9OnT1aRJE4WFhT2IOgEAAAAAuCObHyEkSSVKlFCPHj3Uo0cPnTt3Tt988402bdqUV7UBAAAAAHBX7ngl1xpPT08NGDCAkAsAAAAAKDDuOeQCAAAAAFDQEHIBAAAAAIZByAUAAAAAGAYhFwAAAABgGIRcAAAAAIBhEHIBAAAAAIZByAUAAAAAGAYhFwAAAABgGIRcAAAAAIBhEHIBAAAAAIZByAUAAAAAGAYhFwAAAABgGIRcAAAAAIBhEHIBAAAAAIZByAUAAAAAGAYhFwAAAABgGIRcAAAAAIBhEHIBAAAAAIZByAUAAAAAGEaBCbkHDhxQ165dVbduXT333HP68ssvJUlXr15VUFCQ6tatq+bNm2v16tX5XCkAAAAAoKByyO8CpFtBdsCAAQoNDVX79u11/Phx9erVS//617/05ZdfymQyKTIyUlFRUerdu7cqVaokX1/f/C4bAAAAAFDAFIgruXFxcWrWrJkCAwNlb2+v6tWr65lnntEvv/yi7du3Kzg4WM7OzvLx8VFAQIA2bNiQ3yUDAAAAAAqgAhFyq1atqvfff9/8+urVqzpw4IAkycHBQZ6enuZt3t7eiomJeeA1AgAAAAAKvgIRcm/3999/q1+/fuaruS4uLhbbXVxclJSUlE/VAQAAAAAKsgIVcs+dO6eXXnpJRYsW1dy5c2UymZScnGyxT1JSkkwmUz5VCAAAAAAoyApMyD169Ki6deumJk2aaN68eXJxcZGXl5dSUlIUFxdn3i82NlYVK1bMx0oBAAAAAAVVgQi58fHxeuutt9SrVy+NHDlS9va3ynJzc5O/v7+mT5+uxMREHTlyRBEREQoMDMznigEAAAAABVGBeITQmjVrdOnSJc2fP1/z5883j7/22msKDw9XWFiYmjVrJpPJpGHDhqlWrVr5WC0AAAAAoKAqECG3X79+6tevn9Xts2bNeoDVAAAAAAAeVgXidmUAAAAAAHIDIRcAAAAAYBiEXAAAAACAYRByAQAAAACGQcgFAAAAABgGIRcAAAAAYBiEXAAAAACAYRByAQAAAACGQcgFAAAAABgGIRcAAAAAYBiEXAAAAACAYRByAQAAAACGQcgFAAAAABgGIRcAAAAAYBiEXAAAAACAYRByAQAAAACGQcgFAAAAABgGIRcAAAAAYBiEXAAAAACAYRByAQAAAACGQcgFAAAAABgGIRcAAAAAYBiEXAAAAACAYRByAQAAAACGQcgFAAAAABgGIRcAAAAAYBiEXAAAAACAYRByAQAAAACGQcgFAAAAABgGIRcAAAAAYBiEXAAAAACAYRByAQAAAACGQcgFAAAAABgGIRcAAAAAYBgPRcg9duyYunTpIl9fX3Xs2FGHDh3K75IAAAAAAAVQgQ+5ycnJ6tevn1544QXt379fr776qvr376+EhIT8Lg0AAAAAUMAU+JD7888/y97eXt27d5ejo6O6dOmikiVL6vvvv8/v0gAAAAAABUyBD7mxsbGqUKGCxZi3t7diYmLyqSIAAAAAQEFll5GRkZHfReRk3rx5OnbsmObOnWseCwkJUenSpTV06NB8rAwAAAAAUNAU+Cu5rq6uSkpKshhLSkqSyWTKp4oAAAAAAAVVgQ+5Tz75pGJjYy3GYmNjVbFixXyqCAAAAABQUBX4kNuwYUPdvHlTy5cvV0pKitasWaP4+Hg1adIkv0sDAAAAABQwBX5NriSdOHFC48aNU1RUlLy8vDRu3Dj5+vrmd1kAAAAAgALmoQi5AAAAAADYosDfrgwAAAAAgK0IuQAAAAAAwyDkAgAAAAAMg5ALAAAAADAMQi4AAAAAwDAIuQAAAAAAwyDkAgAAAAAMg5ALAAAAADAMQi4AAAAAwDAIuQAAAAAAwyDkAgAAAAAMg5ALAAAAADAMQi4AAAAAwDAIuQAAAAAAwyDkAgAAAAAMg5ALAAAAADAMQi4AAAAAwDAIuQAAAAAAwyDkAgAAAAAMg5ALAAAAADAMQi4AAAAAwDAIuQAAAAAAwyDkAgAAAAAMg5ALAAAAADAMQi4AAAAAwDAIuQAAAAAAwyDkAgAAAAAMg5ALAAAAADAMQi4AAAAAwDAIuQAAAAAAwyDkAgAAAAAMg5ALAAAAADAMQi4AAAAAwDAIuQAAAAAAwyDkAgAAAAAMI19C7pEjR9SkSRPz66tXryooKEh169ZV8+bNtXr1avO2jIwMTZ8+XQ0aNNDTTz+tiRMnKi0tLT/KBgAAAAAUcA805GZkZGjNmjV64403lJKSYh4PDQ2VyWRSZGSkZs+erQ8++ECHDh2SJK1cuVK7du3S119/rc2bN+uXX37RJ5988iDLBgAAAAA8JB5oyF2wYIGWLVumfv36mccSEhK0fft2BQcHy9nZWT4+PgoICNCGDRskSRs3blTPnj1VunRplSpVSn379tX69esfZNkAAAAAgIfEAw25nTt31saNG1WzZk3z2NmzZ+Xg4CBPT0/zmLe3t2JiYiRJMTExqlixosW22NhYZWRkPLjCAQAAAAAPhQcackuXLi07OzuLsRs3bsjFxcVizMXFRUlJSZKkxMREi+2urq5KT0/XzZs3875gAAAAAMBDJd+7K7u6uio5OdliLCkpSSaTSdKtwHv79sTERDk4OMjZ2fmB1gkAAAAAKPjyPeR6eXkpJSVFcXFx5rHY2FjzLcoVKlRQbGysxbYnn3zygdcJAAAAACj48j3kurm5yd/fX9OnT1diYqKOHDmiiIgIBQYGSpI6dOigJUuW6Pz584qPj9fChQvVsWPHfK4aAAAAAFAQOeR3AZIUHh6usLAwNWvWTCaTScOGDVOtWrUkSd27d1d8fLy6dOmilJQUBQYGqlevXvlcMQAAAACgILLLoE0xAAAAAMAg8v12ZQAAAAAAcgshFwAAAABgGIRcAAAAAIBhEHIBAAAAAIZByAUAAAAAGAYhFwAAAABgGIRcAAAAAIBhEHIBAAAAAIZByAUAAAAAGAYhFwAAAABgGIRcAAAAAIBhEHIBAAAA/F97dx5VdZ3/cfwFsosL5K6E4h4upGZqKkiSTuq0ueVW5poY7qgo6YiimOaG4q65pLmUTWrlOtVIi2ZmqbjBr1S0wl1jl98fHO54leWWwL1cno9zOmf83O/98P42Hfm+vp8NsBqEXAAAAACA1SDkAgAAAACsBiEXAAAAAGA1CLkAAAAAAKtByAUAAAAAWA1CLgAAAADAahByAQAAAABWg5ALAAAAALAahFwAAAAAgNUg5AIAAAAArAYhFwAAAABgNQi5AAAAAACrQcgFAAAAAFgNQi4AAAAAwGoQcgEAAAAAVoOQCwAAAACwGoRcA
AAAAIDVIOQCAAAAAKwGIRcAAAAAYDUIuQAAAAAAq0HIBQAAAABYDTtzFwAAACzXt99+q379+uX4+enTp7V7926NGjXqoc9WrFihtm3b5vkz1q5dq927d2vLli05XjN06FDVrVs3258DAMD9LCbkHj16VNOnT9f//d//qXz58ho+fLi6dOmimzdvKiQkRN98841KlSqlwMBAdevWzdzlAgBQLHh7e+uDDz4wart586beeustde7cWVJm0K1Tp47CwsKMrqtZs2ae/e/fv19z5szRE088ke3nGRkZmj17tg4ePKi6dev+zbsAABQnFhFy09PTFRgYqClTpqhjx446cuSIXnvtNT355JOaPXu2XFxcFB0drdOnT2vQoEGqXbu2fHx8zF02AABWz9XV9aHfuaNHj1alSpUUGhoqKTPkNmzY8C/9bk5MTNTSpUu1fPlylSpVKttr4uPjFRYWpujoaDk6Ov79mwAAFCsWsSb31q1bunbtmtLT05WRkSEbGxvZ29urRIkS2rdvn4KCguTo6KhGjRqpc+fO2rFjh7lLBgCgWPr++++1a9cuTZgwQc7OzpKkM2fOqE6dOn+pn08//VTbtm3T7Nmz5e/vn+017777rq5cuaJNmzapbNmyj41uN1IAACAASURBVFw7AKB4sIiRXDc3N/Xq1UujR4/WuHHjdO/ePc2YMUPXr1+XnZ2dPDw8DNfWqFFDe/bsMWO1AAAUX/Pnz9dTTz1lCKZ37txRfHy8jh07pvbt2+vy5ct64oknNHnyZDVu3DjHflq0aKG9e/fKxcVFhw4dyvaaN998U15eXrKxsSmQewEAWCeLGMm9d++enJyctGDBAh07dkxLly5VeHi47ty5IycnJ6NrnZyclJSUZKZKAQAovmJiYvTdd99pwIABhrYzZ84oIyNDly9fVmhoqCIjI+Xg4KA33nhDv/32W459ValSRS4uLrn+vJo1axJwAQB/mUWE3D179uj48ePq2LGjHBwc5OfnJz8/Py1atEjJyclG1yYlJeX5SxEAAOS/rVu3ysPDQ35+foa2WrVqadmyZVq9erV8fX3Vrl07LV++XI6Ojlq7dq3ZagUAFF8WEXIvX76slJQUozY7Ozt5e3srNTVV8fHxhva4uDjVqlWrsEsEAKDY279/vzp27Gg0ulq6dGn5+fmpZMmShraSJUvqySef1JkzZ8xRJgCgmLOIkNuqVSudOnVK27dvV0ZGhr777jvt3btXnTp10rPPPqu5c+cqMTFRx48f186dO9WlSxdzlwwAQLFy7tw5Xb58Wc8995xR+6lTp7Rt27aHrk9OTmbmFQDALCwi5NatW1cLFy7UunXr1LRpU02bNk0RERFq2LChwsLClJaWJl9fXwUFBWncuHG5bmQBAADy388//yx7e3vVr1/fqP3UqVOaNGmSzp8/b2i7evWqjh49qqZNmxZ2mQAAWMbuypLk7++f7RECZcuW1YIFC8xQEQAAyHL27Fl5eHjI3t7eqL1Dhw6KiorS8OHDNXLkSNnY2CgyMlJubm7q0aOHJCklJUUnT55UpUqVVKlSJXOUDwAoRixiJBcAAFi2a9euqXTp0g+1lyxZUmvXrlWdOnU0ZcoUTZgwQZ6enlq/fr3hHN3ff/9dPXr00NatWwu7bABAMWSTkZGRYe4iAAAAAADID4zkAgAAAMXYgQMH9MILL6hRo0bq0KGDtmzZYvgsLi5OgwcPlo+Pj1q1aqXp06frzz//zLW/W7duafLkyWrVqpWaNWumsWPH6urVq0bXHD9+XH369FGjRo0MR4emp6cXyP2h+GEkFwBgtQ4cOKAFCxYoLi5OlStX1oABA9S9e3dJmQ9hERER+s9//qP09HS1bt1aISEhcnd3z7G/P/74QzNnztQXX3whOzs7+fr6avz48XrssccK65YAIF9FR0drwIAB6tmzp5577jlFR0dr+fLlioyMVMuWLdW5c2c5ODgoKChIzs7OWrp0qVxcXPTee+/l2OfAgQMVGxursWPHysHBQe+++65KlixpWLJw+fJlderUSTVq1NCbb76plJQULViwQE2aNNHMmTML69ZhxQi5AACrlNuDW0BAgIYOHaqffvpJ48ePl7Ozs2bPnq3y5ctr48aNRufAZklPT1e3bt3022+/aeTIkapQoYI2bNigS5cuaceOHXJwcDDDXQLAo+nWrZuqVaumefPmGdqCgoLk7u6uWrVqadasWdq9e7cef/xxSZnr87OO+Mxu09hz586pU6dOWrNmjVq1aiVJOnz4sPr06aOPPvpITzzxhCIiIvThhx9qz549KlOmjCQpNjZWzz//vHbs2KF69eoVwp3DmlnM7soAiracRswWLVqkyMjIbL/TvHlzrV+/PtvPTBkx+/XXXzV79mx9//33unfvnpo2baqJEyfKw8OjQO4RRcu8efPUsWNHTZkyRZLUsmVL/fLLLzp06JCefvppHTx4UOHh4frnP/8pSXJxcdEbb7yh2NhY1axZ86H+vvzyS504cUIbN25Us2bNDH127NhRH3zwgfr27Vt4N/c3dV/zH3OXYBG29PczdwmARUhISNDx48c1evRoo/aFCxdKkqZNmyZPT09DwJUkd3d3eXl56dChQ9mGXA8PD33wwQdq0KCBoS1rV/aUlBRJmYG2cePGhoArSV5eXipbtqyio6MJuXhkrMkF8Miio6MVGBioJk2aaNmyZXruuecUGhqqvXv3qlu3bvrggw+M/gkODpYkvfLKK9n2l56eriFDhujbb7/VhAkTNHv2bF2/fl19+/Y1/IK8e/euXn/9dcXHx2vatGmaMWOGrly5or59++rOnTuFdu+wTFkPbllTk7MsXLhQU6dONfx35Orqavgs62Hrxo0b2fYZGxsrFxcXQ8CVJAcHBzVo0ECHDh3K71sAkIec1pEuWrRIdevWzfaf3F5GmbKO9H6ffPKJ6tatq4sXL+b7vRWWs2fPSsoMof3791eDBg3k6+trmFZcrlw5JSQkGP7OlKS0tDRduXJFly5dyrZPR0dH+fj4yM7OTqmpqTpx4oSmT5+uunXrqmHDhoZ+L1++bPS9W7du6datW4qPjy+IW0Uxw0gugEeW24hZQECA0bmYiYmJGjt2rDp37qwXX3wx2/5MGTHbs2ePfv/9d33wwQcqX768JKlx48by9fXV3r179dJLLxXwXcOSPfjgdvjwYT322GMaPny4unXrpnLlyql9+/ZatmyZ6tatK2dnZ82bN0/VqlUzPIQ9qFy5ckpMTNS1a9eM1u1evHhRqamphXJfADJlvVzt2bOnJkyYoOjoaIWGhsrNzU3dunVTmzZtjK7//vvvNXv27BxfrkrS6NGjFRsbq8mTJxvWkQ4dOjTbo6+uX7+u8PDwfL+vwnb9+nVJ0pgxY9S1a1cNHjxY+/bt0+TJk1WhQgV17NhRUVFRmjhxosaMGSM7OzstWrRIt27dUmJiYp79jxw5Uvv27ZOjo6OWLVumEiVKSJK6dOmibdu2aebMmRo0aJCSkpI0ffp0lShRIs9NrQBTEHIBPJK8pjo9aO3atbp69arGjx+f
Y595jZj17dtX7u7u6t+/vyHgSlL58uXl6uqa49tlFB95Pbj5+vpqwoQJev3119WhQwdJmSO5GzZsyHFtbZs2bVSmTBmNHj1ab7/9ttzc3LRhwwadPXtWFStWLLR7A5D/L1fPnTunr776ymgdaZkyZdSnTx+dPHlSTzzxhNH1ERERhim4RVnWC7rnnntOb731lqTMf5dxcXFaunSpNm3apHfffVehoaHauXOnbG1t9dJLL8nf31/Jycl59j9w4ED17t1bmzdv1qBBg7Ru3To1adJELVq00NSpUzV79mytXbtW9vb2GjBggG7cuGE4Xxt4FIRcAI8krxGz+92+fVsrV65U//79VaFChRz7NGXEzNfXV76+vkbf++GHH3Tz5k15eXnl1+2hiMrrwa1+/fp69dVXVaFCBYWEhMjOzk4rV67U4MGD9cEHH2QbWt3d3bV48WIFBwfrH//4hyTJ399f3bt313fffVd4NwcUcwXxctWUdaRZoqOjtX//fk2YMEEhISF/9zYsQsmSJSVJrVu3Nmpv0aKFli9fLkkKCAiQv7+/fv31V5UpU0bu7u7q3bu3qlSpkmf/Tz75pCTp6aefVufOnbVhwwY1adJEkvTqq6+qa9eu+vXXX1W+fHmVLl1afn5+atGiRX7eIoop1uQCeCT3j5g1adJEK1asUPv27TV58mR98cUXRtd+9NFHSk1NVZ8+fXLt8/4Rs9jYWF2/fl2LFi3S2bNnc5wedffuXU2ZMkUeHh5q3759/twciqzcHtzOnj2rbdu26e7du1q5cqWeffZZ+fr6avny5bp3755WrVqVY7/NmjXT/v37tW/fPn355ZeKiorS7du3VapUqQK9HwD/k9c60vuZ+nLVlHWkkpSUlKS3335bo0ePNppJVFRlbdT44JKLtLQ02djY6NKlS/roo49UokQJ1ahRQ+7u7kpPT9fZs2dz3BzqwoUL+vDDD43aSpQoodq1a+uPP/6QlDly/tlnn8ne3l41a9ZU6dKldf36df32229sOoV8QcgtonI7tHv37t3Zbrbw5Zdf5tjfnTt3FB4eLn9/fz355JPq0aOHvv76a6NrfvvtNwUFBenpp59WmzZtFBYWxroJPDRi1rJlS4WGhuqZZ57R0qVLja7dunWrOnXqlOs5pNL/Rsx+/fVX/eMf/1CLFi108uRJde/ePdtpTHfv3tXQoUN14cIFzZs3j6NckOeD25UrV+Th4WH036Kzs7Pq16+v8+fPZ9vntWvXtH37diUnJ8vDw8Mw2hsTE8NDGVCICuLl6v1Gjhypl19+WWfOnNHEiRMN60ilzNHicuXKqWfPnvlzM2ZWu3ZtlS9fXp9++qlR+1dffSUfHx9duXJFEyZM0Llz5wyf7dy5U7du3XpoNlWW2NhYTZw4UceOHTO0JSYm6scff1StWrUkSSdPntTYsWN18+ZNwzWbNm2So6Ojnn766fy8RRRTTFcugnLbbCEgIECnT59WnTp1FBYWZvS97I7EyBISEqIjR45o1KhRqly5sj766CMNGDBAW7ZsUYMGDZSenq6hQ4fqzz//1IwZM3T37l3NmjVLN27c0Ny5cwv6lmHBTJnqJGW+2T1z5ozGjRtnUr9ZI2YXL16Ug4ODKlasqODg4IdGzK5du6ZBgwYpNjZWS5YsyXHTIBQv9z+43T+yn/Xg5unpqY8//thoSnxycrJOnz6ttm3bZttnamqqQkJCVK5cOcPD3ZEjR3T69GmNGTOm4G8KgKS8lyPcH75Mfbl6v5zWkZ48eVIbN27U1q1bsz1LuyiytbVVUFCQQkNDVaFCBbVp00a7du3SsWPHtH79evn4+Kh+/foaP368RowYoStXrig8PFxdu3Y1BNaUlBSdPHlSlSpVUqVKldSqVSs1bNhQ48aN06hRo+Tk5KTVq1crMTFRAwYMkJS51MPNzU1jxozRG2+8oZMnT2rRokUKCgqSm5ubOf+VwEoQcougvDZbOH36tBo2bCgfHx+T+rt06ZI+//xzRUZGKiAgQJLUqlUrnTlzRhs3btTMmTN19uxZnTx5Uu+9955hrURycrKmTJmisLAwubi4FMzNwuLlNWKW5csvv5Srq6tatmyZZ5/Xrl3TwYMH1alTJ6Mzb2NiYow2o/rtt9/Ur18/Xbt2TatWrTKs8ynqcjpz+EHff/+9evfurffeey/XN9937tzRwoULtW/fPl2/fl116tTRyJEjc/z/YtiwYfLy8tLYsWPz7Z4KW14PbrVq1dJ7772ngQMH6s0335S9vb3WrVunGzdu6LXXXpP08INbxYoV1a5dO02fPl2pqalKSkpSeHi4WrVqleOIBoD8V1AvV7Nkt460cePGmjx5svr27SsvLy+lpaUpIyNDknTv3j3du3dPtrZFc4Jk9+7dVaJECa1YsUIbNmxQjRo1FBUVZfh9GxkZqWnTpmnkyJEqXbq0+vfvr8DAQMP3f//9d/Xo0UPDhw/XW2+9JXt7ey1fvlzvvPOOYWCkWbNmev/991WtWjVJmce3rVixQmFhYQoMDFT58uU1YcIEw9+/wKMi5BYxpmy2cObMGfXr18/kPlNSUtSjRw899dRThjZbW1t5enoazn7L2kHvwTMl7927p9u3bxNyi7G8Rsyy/Pzzz/L29jZpN0pTRsySk5M1cOBA3bhxQ+vXr7ea6aJ5zdTIkpKSotDQUMNDVm7ymqlxv7lz52r//v1WsXlXXg9uWS/xxo8fL3t7ezVq1EhbtmwxzHp58MFNkmbOnKnp06crJCRE9vb26tixI6O4QCEriJerFy5c0OHDh/Xyyy8b2u5fR3r58mWdOHFCJ06c0IoVK4y+GxAQoJdeekmzZs16lNsyq1deeSXH45WqVatm9PIgu89Pnz5t1Obu7q6ZM2fm+jPr1aunjRs3/vViARMQcouYvHayvXPnjuLj43Xs2DG1b99ely9f1hNPPKHJkyercePG2fZZo0YNTZs2zajtzp07OnLkiOFojQYNGsjb21vz5s1TeHi4EhMTFRUVpSZNmhTpozNyGzH77bffNGPGDH3zzTeytbVVp06dNGbMGJMD/a+//qouXbpoypQpRr80U1JSNH/+fP373//W3bt31bhxY4WEhKhOnToFco8FLa8Rsyznzp176AiGLH9nxGzt2rU6c+aMRo0apaSkJKO1PxUrVlTlypUL9sYLSF4zNbIsW7ZMd+7cybM/U2ZqSNKVK1cUFhamr776Sk5OTgVwZ+aR24Obh4eHlixZkuN3s3twc3NzY4kGYGYF8XI1ax2pl5eXoY+sdaT+/v6qUKGCtm3bZvSdY8eOafr06YqKiiqyv8MBa0XILWLyOvuxVKlSysjI0OXLlxUaGqp79+5p5cqVeuONN7R7926TA2l4eLju3Lmjvn37Ssp8m/mvf/1LAwcONKxXq1q1qpYtW1YwN1oIchsx8/X11eDBgyVljtwkJiYqIiJCCQkJWrBggUn9v/3220pKSnqoPTQ0VAcPHtT48eNVrlw5RUVFadCgQdq1a5fRSHlRkteImZQ5Bbl06dLZfv/vjJjt379fUmYofNCgQYOK5FR
(remaining base64-encoded PNG data for the Naive Bayes accuracy/F1 bar chart omitted)\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(16,9))\n",
+ "\n",
+ "acc_scores = [round(a * 100, 1) for a in accs]\n",
+ "f1_scores = [round(f * 100, 2) for f in f1s]\n",
+ "\n",
+ "x1 = np.arange(len(acc_scores))\n",
+ "x2 = np.arange(len(f1_scores))\n",
+ "\n",
+ "ax1.bar(x1, acc_scores)\n",
+ "ax2.bar(x2, f1_scores, color='#559ebf')\n",
+ "\n",
+ "# Place values on top of bars\n",
+ "for i, v in enumerate(list(zip(acc_scores, f1_scores))):\n",
+ "    ax1.text(i - 0.25, v[0] + 2, str(v[0]) + '%')\n",
+ "    ax2.text(i - 0.25, v[1] + 2, str(v[1]))\n",
+ "\n",
+ "ax1.set_ylabel('Accuracy (%)')\n",
+ "ax1.set_title('Naive Bayes')\n",
+ "ax1.set_ylim([0, 100])\n",
+ "\n",
+ "ax2.set_ylabel('F1 Score')\n",
+ "ax2.set_xlabel('Runs')\n",
+ "ax2.set_ylim([0, 100])\n",
+ "\n",
+ "sns.despine(bottom=True, left=True)  # Remove the ticks on axes for cleaner presentation\n",
+ "\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The F1 score fluctuates by more than 15 points between some runs, which could likely be remedied with a larger dataset. Let's see how other algorithms do.\n",
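+ "\n",
+ "(Before we do: if you want to put a number on that run-to-run spread, a quick check along these lines would work. It's a sketch that reuses the `accs` and `f1s` lists from the loop above and isn't run in this notebook.)\n",
+ "\n",
+ "```python\n",
+ "# Spread of the scores across the shuffle-split runs (scores are fractions, so scale to points)\n",
+ "print('Accuracy spread: {:.1f} points'.format((max(accs) - min(accs)) * 100))\n",
+ "print('F1 spread: {:.1f} points'.format((max(f1s) - min(f1s)) * 100))\n",
+ "```"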
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Other Classification Algorithms in scikit-learn\n", + "\n", + "As you can see Naive Bayes performed pretty well, so let’s experiment with other classifiers.\n", + "\n", + "We'll use the same shuffle splitting as before, but now we'll run several types of models in each loop:" + ] + }, + { + "cell_type": "code", + "execution_count": 264, + "metadata": {}, + "outputs": [], + "source": [ + "from sklearn.naive_bayes import BernoulliNB\n", + "from sklearn.linear_model import LogisticRegression, SGDClassifier\n", + "from sklearn.svm import LinearSVC\n", + "from sklearn.ensemble import RandomForestClassifier\n", + "from sklearn.neural_network import MLPClassifier\n", + "\n", + "X = df.headline\n", + "y = df.label\n", + "\n", + "cv = ShuffleSplit(n_splits=20, test_size=0.2)\n", + "\n", + "models = [\n", + " MultinomialNB(),\n", + " BernoulliNB(),\n", + " LogisticRegression(),\n", + " SGDClassifier(),\n", + " LinearSVC(),\n", + " RandomForestClassifier(),\n", + " MLPClassifier()\n", + "]\n", + "\n", + "sm = SMOTE()\n", + "\n", + "# Init a dictionary for storing results of each run for each model\n", + "results = {\n", + " model.__class__.__name__: {\n", + " 'accuracy': [], \n", + " 'f1_score': [],\n", + " 'confusion_matrix': []\n", + " } for model in models\n", + "}\n", + "\n", + "for train_index, test_index in cv.split(X):\n", + " X_train, X_test = X.iloc[train_index], X.iloc[test_index]\n", + " y_train, y_test = y.iloc[train_index], y.iloc[test_index]\n", + " \n", + " X_train_vect = vect.fit_transform(X_train) \n", + " X_test_vect = vect.transform(X_test)\n", + " \n", + " X_train_res, y_train_res = sm.fit_sample(X_train_vect, y_train)\n", + " \n", + " for model in models:\n", + " model.fit(X_train_res, y_train_res)\n", + " y_pred = model.predict(X_test_vect)\n", + " \n", + " acc = accuracy_score(y_test, y_pred)\n", + " f1 = f1_score(y_test, y_pred)\n", + " cm = confusion_matrix(y_test, y_pred)\n", + " \n", + " results[model.__class__.__name__]['accuracy'].append(acc)\n", + " results[model.__class__.__name__]['f1_score'].append(f1)\n", + " results[model.__class__.__name__]['confusion_matrix'].append(cm) " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We now have a bunch of accuracy scores, f1 scores, and confusion matrices stored for each model. Let's average these together to get average scores across models and folds:" + ] + }, + { + "cell_type": "code", + "execution_count": 265, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "MultinomialNB\n", + "------------------------------\n", + " Avg. Accuracy: 74.70%\n", + " Avg. F1 Score: 69.63\n", + " Avg. Confusion Matrix: \n", + " \n", + "[[114.05 36.4 ]\n", + " [ 27.1 73.45]]\n", + " \n", + "BernoulliNB\n", + "------------------------------\n", + " Avg. Accuracy: 75.32%\n", + " Avg. F1 Score: 67.96\n", + " Avg. Confusion Matrix: \n", + " \n", + "[[122.75 27.7 ]\n", + " [ 34.25 66.3 ]]\n", + " \n", + "LogisticRegression\n", + "------------------------------\n", + " Avg. Accuracy: 74.80%\n", + " Avg. F1 Score: 68.31\n", + " Avg. Confusion Matrix: \n", + " \n", + "[[119.2 31.25]\n", + " [ 32. 68.55]]\n", + " \n", + "SGDClassifier\n", + "------------------------------\n", + " Avg. Accuracy: 71.75%\n", + " Avg. F1 Score: 65.31\n", + " Avg. Confusion Matrix: \n", + " \n", + "[[112.6 37.85]\n", + " [ 33.05 67.5 ]]\n", + " \n", + "LinearSVC\n", + "------------------------------\n", + " Avg. 
Accuracy: 73.01%\n",
+ "    Avg. F1 Score: 66.61\n",
+ "    Avg. Confusion Matrix: \n",
+ "    \n",
+ "[[115.55  34.9 ]\n",
+ " [ 32.85  67.7 ]]\n",
+ "    \n",
+ "RandomForestClassifier\n",
+ "------------------------------\n",
+ "    Avg. Accuracy: 69.64%\n",
+ "    Avg. F1 Score: 52.74\n",
+ "    Avg. Confusion Matrix: \n",
+ "    \n",
+ "[[132.    18.45]\n",
+ " [ 57.75  42.8 ]]\n",
+ "    \n",
+ "MLPClassifier\n",
+ "------------------------------\n",
+ "    Avg. Accuracy: 74.14%\n",
+ "    Avg. F1 Score: 67.43\n",
+ "    Avg. Confusion Matrix: \n",
+ "    \n",
+ "[[118.75  31.7 ]\n",
+ " [ 33.2   67.35]]\n",
+ "    \n"
+ ]
+ }
+ ],
+ "source": [
+ "for model, d in results.items():\n",
+ "    avg_acc = sum(d['accuracy']) / len(d['accuracy']) * 100\n",
+ "    avg_f1 = sum(d['f1_score']) / len(d['f1_score']) * 100\n",
+ "    avg_cm = sum(d['confusion_matrix']) / len(d['confusion_matrix'])\n",
+ "    \n",
+ "    slashes = '-' * 30\n",
+ "    \n",
+ "    s = f\"\"\"{model}\\n{slashes}\n",
+ "    Avg. Accuracy: {avg_acc:.2f}%\n",
+ "    Avg. F1 Score: {avg_f1:.2f}\n",
+ "    Avg. Confusion Matrix: \n",
+ "    \\n{avg_cm}\n",
+ "    \"\"\"\n",
+ "    print(s)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We've gotten some pretty decent results, but overall it looks like we need more data before we can say which model performs best.\n",
+ "\n",
+ "Since we're only computing metrics on a test set of about 300 examples, a 0.5% difference in accuracy means only ~2 more examples are classified correctly versus the other model(s). If we had a test set of 10,000, a 0.5% difference in accuracy would equal 50 more correctly classified headlines, which is much more reassuring.\n",
+ "\n",
+ "The difference between Random Forest and Multinomial Naive Bayes is quite clear, but the difference between Multinomial and Bernoulli Naive Bayes isn't. To compare these two further, we need more data.\n",
+ "\n",
+ "Let's see whether ensembling makes a difference."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Ensembling Classifiers\n",
+ "\n",
+ "Now that we've evaluated each classifier individually, let's see whether ensembling them improves our metrics.\n",
+ "\n",
+ "We're going to use sklearn's `VotingClassifier`, which defaults to *majority rule* (hard) voting.\n",
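+ "\n",
+ "As an aside, `VotingClassifier` also supports `voting='soft'`, which averages predicted class probabilities instead of counting votes. We stick with hard voting here because soft voting requires every estimator to implement `predict_proba`, which `LinearSVC` and the default hinge-loss `SGDClassifier` don't. A sketch of what soft voting could look like with a reduced estimator list (hypothetical, not run in this notebook):\n",
+ "\n",
+ "```python\n",
+ "# Hypothetical: soft voting over probability-capable models only\n",
+ "soft_models = [('nb', MultinomialNB()), ('bnb', BernoulliNB()),\n",
+ "               ('lr', LogisticRegression()), ('mlp', MLPClassifier())]\n",
+ "soft_vc = VotingClassifier(estimators=soft_models, voting='soft')\n",
+ "```"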
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 266,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from sklearn.ensemble import VotingClassifier\n",
+ "\n",
+ "X = df.headline\n",
+ "y = df.label\n",
+ "\n",
+ "cv = ShuffleSplit(n_splits=10, test_size=0.2)\n",
+ "\n",
+ "models = [\n",
+ "    MultinomialNB(),\n",
+ "    BernoulliNB(),\n",
+ "    LogisticRegression(),\n",
+ "    SGDClassifier(),\n",
+ "    LinearSVC(),\n",
+ "    RandomForestClassifier(),\n",
+ "    MLPClassifier()\n",
+ "]\n",
+ "\n",
+ "m_names = [m.__class__.__name__ for m in models]\n",
+ "\n",
+ "models = list(zip(m_names, models))\n",
+ "vc = VotingClassifier(estimators=models)\n",
+ "\n",
+ "sm = SMOTE()\n",
+ "\n",
+ "# No need for dictionary now\n",
+ "accs = []\n",
+ "f1s = []\n",
+ "cms = []\n",
+ "\n",
+ "for train_index, test_index in cv.split(X):\n",
+ "    X_train, X_test = X.iloc[train_index], X.iloc[test_index]\n",
+ "    y_train, y_test = y.iloc[train_index], y.iloc[test_index]\n",
+ "    \n",
+ "    X_train_vect = vect.fit_transform(X_train)\n",
+ "    X_test_vect = vect.transform(X_test)\n",
+ "    \n",
+ "    X_train_res, y_train_res = sm.fit_sample(X_train_vect, y_train)\n",
+ "    \n",
+ "    vc.fit(X_train_res, y_train_res)\n",
+ "    \n",
+ "    y_pred = vc.predict(X_test_vect)\n",
+ "    \n",
+ "    accs.append(accuracy_score(y_test, y_pred))\n",
+ "    f1s.append(f1_score(y_test, y_pred))\n",
+ "    cms.append(confusion_matrix(y_test, y_pred))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 267,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Voting Classifier\n",
+ "------------------------------\n",
+ "Avg. Accuracy: 75.78%\n",
+ "Avg. F1 Score: 68.51\n",
+ "Confusion Matrix:\n",
+ " [[123.7  28.7]\n",
+ " [ 32.1  66.5]]\n"
+ ]
+ }
+ ],
+ "source": [
+ "print(\"Voting Classifier\")\n",
+ "print(\"-\" * 30)\n",
+ "print(\"Avg. Accuracy: {:.2f}%\".format(sum(accs) / len(accs) * 100))\n",
+ "print(\"Avg. F1 Score: {:.2f}\".format(sum(f1s) / len(f1s) * 100))\n",
+ "print(\"Confusion Matrix:\\n\", sum(cms) / len(cms))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Although our majority-vote classifier performed well, it didn't differ much from the results we got from Multinomial Naive Bayes alone, which may be surprising. You might expect that combining several models would always do better, but the lack of improvement suggests there are still several areas worth exploring. For example:\n",
+ "+ How more data affects performance (the best place to start, given our small dataset)\n",
+ "+ Grid searching different parameters for each model (see the sketch below)\n",
+ "+ Debugging the ensemble by looking at model correlations\n",
+ "+ Trying different styles of [bagging, boosting, and stacking](https://stats.stackexchange.com/a/19053)\n",
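+ "\n",
+ "To make the grid-search idea concrete, a first pass might look something like the sketch below. It is illustrative only and not run in this notebook: it assumes a TF-IDF vectorizer, the parameter values are guesses rather than tuned results, and the SMOTE oversampling step is left out for simplicity (including it inside cross-validation would require imbalanced-learn's own pipeline).\n",
+ "\n",
+ "```python\n",
+ "# Illustrative grid search over the vectorizer and one of the stronger models\n",
+ "from sklearn.pipeline import Pipeline\n",
+ "from sklearn.model_selection import GridSearchCV\n",
+ "from sklearn.feature_extraction.text import TfidfVectorizer\n",
+ "from sklearn.naive_bayes import MultinomialNB\n",
+ "\n",
+ "pipe = Pipeline([\n",
+ "    ('tfidf', TfidfVectorizer()),\n",
+ "    ('clf', MultinomialNB()),\n",
+ "])\n",
+ "\n",
+ "param_grid = {\n",
+ "    'tfidf__ngram_range': [(1, 1), (1, 2)],\n",
+ "    'tfidf__min_df': [1, 2, 5],\n",
+ "    'clf__alpha': [0.1, 0.5, 1.0],\n",
+ "}\n",
+ "\n",
+ "grid = GridSearchCV(pipe, param_grid, scoring='f1', cv=5)\n",
+ "grid.fit(df.headline, df.label)\n",
+ "print(grid.best_params_, grid.best_score_)\n",
+ "```\n",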
+ "\n",
+ "## Final words and where to go from here\n",
+ "\n",
+ "So far we've:\n",
+ "+ Mined data from Reddit's /r/politics\n",
+ "+ Obtained sentiment scores for headlines\n",
+ "+ Vectorized the data\n",
+ "+ Run the data through several types of models\n",
+ "+ Ensembled models together\n",
+ "\n",
+ "Unfortunately, there isn't an obvious winning model. A couple of the classifiers clearly perform poorly, while most of the rest hover around the same accuracy. Additionally, the confusion matrices show that roughly a quarter to a half of the positive headlines, depending on the model, are still being misclassified, so there's a lot more work to be done.\n",
+ "\n",
+ "Now that you've seen how this pipeline works, there's a lot of room for improvement in both the code architecture and the modeling. I encourage you to try all of this out in the provided notebook. See what other subreddits you can tap into for sentiment, such as stocks, companies, or products. There's a lot of valuable data to be had!\n",
+ "\n",
+ "### Help us make this article and series better\n",
+ "\n",
+ "If you'd like to see this article and series expanded into any of these areas, drop a comment below and we'll add it to the content pipeline.\n",
+ "\n",
+ "Thanks for reading!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.6.4"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}