From 18d1389f9347fce83a393f2cc52775fed0589aac Mon Sep 17 00:00:00 2001
From: Afonja Tejumade
Date: Thu, 27 Aug 2020 00:09:25 +0200
Subject: [PATCH] Index out of bounds

Index 50 is out of bounds for axis 0 with size 50
---
 lab2/solutions/Part2_Debiasing_Solution.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lab2/solutions/Part2_Debiasing_Solution.ipynb b/lab2/solutions/Part2_Debiasing_Solution.ipynb
index 29a52601..ffe8745c 100644
--- a/lab2/solutions/Part2_Debiasing_Solution.ipynb
+++ b/lab2/solutions/Part2_Debiasing_Solution.ipynb
@@ -1 +1 @@
-{"nbformat":4,"nbformat_minor":0,"metadata":{"colab":{"name":"Part2_Debiasing_Solution.ipynb","provenance":[{"file_id":"https://github.com/aamini/introtodeeplearning/blob/master/lab2/Part2_debiasing_solution.ipynb","timestamp":1578015896226}],"collapsed_sections":["Ag_e7xtTzT1W","NDj7KBaW8Asz"]},"kernelspec":{"name":"python3","display_name":"Python 3"},"accelerator":"GPU"},"cells":[{"cell_type":"markdown","metadata":{"id":"Ag_e7xtTzT1W","colab_type":"text"},"source":["\n"," \n"," \n"," \n","
\n"," \n"," Visit MIT Deep Learning\n"," Run in Google Colab\n"," View Source on GitHub
\n","\n","# Copyright Information"]},{"cell_type":"code","metadata":{"id":"rNbf1pRlSDby","colab_type":"code","colab":{}},"source":["# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.\n","# \n","# Licensed under the MIT License. You may not use this file except in compliance\n","# with the License. Use and/or modification of this code outside of 6.S191 must\n","# reference:\n","#\n","# © MIT 6.S191: Introduction to Deep Learning\n","# http://introtodeeplearning.com\n","#"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"QOpPUH3FR179","colab_type":"text"},"source":["# Laboratory 2: Computer Vision\n","\n","# Part 2: Debiasing Facial Detection Systems\n","\n","In the second portion of the lab, we'll explore two prominent aspects of applied deep learning: facial detection and algorithmic bias. \n","\n","Deploying fair, unbiased AI systems is critical to their long-term acceptance. Consider the task of facial detection: given an image, is it an image of a face? This seemingly simple, but extremely important, task is subject to significant amounts of algorithmic bias among select demographics. \n","\n","In this lab, we'll investigate [one recently published approach](http://introtodeeplearning.com/AAAI_MitigatingAlgorithmicBias.pdf) to addressing algorithmic bias. We'll build a facial detection model that learns the *latent variables* underlying face image datasets and uses this to adaptively re-sample the training data, thus mitigating any biases that may be present in order to train a *debiased* model.\n","\n","\n","Run the next code block for a short video from Google that explores how and why it's important to consider bias when thinking about machine learning:"]},{"cell_type":"code","metadata":{"id":"XQh5HZfbupFF","colab_type":"code","colab":{}},"source":["import IPython\n","IPython.display.YouTubeVideo('59bMh59JQDo')"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"3Ezfc6Yv6IhI","colab_type":"text"},"source":["Let's get started by installing the relevant dependencies:"]},{"cell_type":"code","metadata":{"id":"E46sWVKK6LP9","colab_type":"code","colab":{}},"source":["# Import Tensorflow 2.0\n","%tensorflow_version 2.x\n","import tensorflow as tf\n","\n","import IPython\n","import functools\n","import matplotlib.pyplot as plt\n","import numpy as np\n","from tqdm import tqdm\n","\n","# Download and import the MIT 6.S191 package\n","!pip install mitdeeplearning\n","import mitdeeplearning as mdl"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"V0e77oOM3udR","colab_type":"text"},"source":["## 2.1 Datasets\n","\n","We'll be using three datasets in this lab. In order to train our facial detection models, we'll need a dataset of positive examples (i.e., of faces) and a dataset of negative examples (i.e., of things that are not faces). We'll use these data to train our models to classify images as either faces or not faces. Finally, we'll need a test dataset of face images. Since we're concerned about the potential *bias* of our learned models against certain demographics, it's important that the test dataset we use has equal representation across the demographics or features of interest. In this lab, we'll consider skin tone and gender. \n","\n","1. **Positive training data**: [CelebA Dataset](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html). A large-scale (over 200K images) of celebrity faces. \n","2. **Negative training data**: [ImageNet](http://www.image-net.org/). 
Many images across many different categories. We'll take negative examples from a variety of non-human categories. \n","[Fitzpatrick Scale](https://en.wikipedia.org/wiki/Fitzpatrick_scale) skin type classification system, with each image labeled as \"Lighter'' or \"Darker''.\n","\n","Let's begin by importing these datasets. We've written a class that does a bit of data pre-processing to import the training data in a usable format."]},{"cell_type":"code","metadata":{"id":"RWXaaIWy6jVw","colab_type":"code","colab":{}},"source":["# Get the training data: both images from CelebA and ImageNet\n","path_to_training_data = tf.keras.utils.get_file('train_face.h5', 'https://www.dropbox.com/s/bp54q547mfg15ze/train_face.h5?dl=1')\n","# Instantiate a TrainingDatasetLoader using the downloaded dataset\n","loader = mdl.lab2.TrainingDatasetLoader(path_to_training_data)"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"yIE321rxa_b3","colab_type":"text"},"source":["We can look at the size of the training dataset and grab a batch of size 100:"]},{"cell_type":"code","metadata":{"id":"DjPSjZZ_bGqe","colab_type":"code","colab":{}},"source":["number_of_training_examples = loader.get_train_size()\n","(images, labels) = loader.get_batch(100)"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"sxtkJoqF6oH1","colab_type":"text"},"source":["Play around with displaying images to get a sense of what the training data actually looks like!"]},{"cell_type":"code","metadata":{"id":"Jg17jzwtbxDA","colab_type":"code","colab":{}},"source":["### Examining the CelebA training dataset ###\n","\n","#@title Change the sliders to look at positive and negative training examples! { run: \"auto\" }\n","\n","face_images = images[np.where(labels==1)[0]]\n","not_face_images = images[np.where(labels==0)[0]]\n","\n","idx_face = 23 #@param {type:\"slider\", min:0, max:50, step:1}\n","idx_not_face = 9 #@param {type:\"slider\", min:0, max:50, step:1}\n","\n","plt.figure(figsize=(5,5))\n","plt.subplot(1, 2, 1)\n","plt.imshow(face_images[idx_face])\n","plt.title(\"Face\"); plt.grid(False)\n","\n","plt.subplot(1, 2, 2)\n","plt.imshow(not_face_images[idx_not_face])\n","plt.title(\"Not Face\"); plt.grid(False)"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"NDj7KBaW8Asz","colab_type":"text"},"source":["### Thinking about bias\n","\n","Remember we'll be training our facial detection classifiers on the large, well-curated CelebA dataset (and ImageNet), and then evaluating their accuracy by testing them on an independent test dataset. Our goal is to build a model that trains on CelebA *and* achieves high classification accuracy on the the test dataset across all demographics, and to thus show that this model does not suffer from any hidden bias. \n","\n","What exactly do we mean when we say a classifier is biased? In order to formalize this, we'll need to think about [*latent variables*](https://en.wikipedia.org/wiki/Latent_variable), variables that define a dataset but are not strictly observed. As defined in the generative modeling lecture, we'll use the term *latent space* to refer to the probability distributions of the aforementioned latent variables. Putting these ideas together, we consider a classifier *biased* if its classification decision changes after it sees some additional latent features. This notion of bias may be helpful to keep in mind throughout the rest of the lab. 
"]},{"cell_type":"markdown","metadata":{"id":"AIFDvU4w8OIH","colab_type":"text"},"source":["## 2.2 CNN for facial detection \n","\n","First, we'll define and train a CNN on the facial classification task, and evaluate its accuracy. Later, we'll evaluate the performance of our debiased models against this baseline CNN. The CNN model has a relatively standard architecture consisting of a series of convolutional layers with batch normalization followed by two fully connected layers to flatten the convolution output and generate a class prediction. \n","\n","### Define and train the CNN model\n","\n","Like we did in the first part of the lab, we'll define our CNN model, and then train on the CelebA and ImageNet datasets using the `tf.GradientTape` class and the `tf.GradientTape.gradient` method."]},{"cell_type":"code","metadata":{"id":"82EVTAAW7B_X","colab_type":"code","colab":{}},"source":["### Define the CNN model ###\n","\n","n_filters = 12 # base number of convolutional filters\n","\n","'''Function to define a standard CNN model'''\n","def make_standard_classifier(n_outputs=1):\n"," Conv2D = functools.partial(tf.keras.layers.Conv2D, padding='same', activation='relu')\n"," BatchNormalization = tf.keras.layers.BatchNormalization\n"," Flatten = tf.keras.layers.Flatten\n"," Dense = functools.partial(tf.keras.layers.Dense, activation='relu')\n","\n"," model = tf.keras.Sequential([\n"," Conv2D(filters=1*n_filters, kernel_size=5, strides=2),\n"," BatchNormalization(),\n"," \n"," Conv2D(filters=2*n_filters, kernel_size=5, strides=2),\n"," BatchNormalization(),\n","\n"," Conv2D(filters=4*n_filters, kernel_size=3, strides=2),\n"," BatchNormalization(),\n","\n"," Conv2D(filters=6*n_filters, kernel_size=3, strides=2),\n"," BatchNormalization(),\n","\n"," Flatten(),\n"," Dense(512),\n"," Dense(n_outputs, activation=None),\n"," ])\n"," return model\n","\n","standard_classifier = make_standard_classifier()"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"c-eWf3l_lCri","colab_type":"text"},"source":["Now let's train the standard CNN!"]},{"cell_type":"code","metadata":{"colab_type":"code","id":"eJlDGh1o31G1","colab":{}},"source":["### Train the standard CNN ###\n","\n","# Training hyperparameters\n","batch_size = 32\n","num_epochs = 2 # keep small to run faster\n","learning_rate = 5e-4\n","\n","optimizer = tf.keras.optimizers.Adam(learning_rate) # define our optimizer\n","loss_history = mdl.util.LossHistory(smoothing_factor=0.99) # to record loss evolution\n","plotter = mdl.util.PeriodicPlotter(sec=2, scale='semilogy')\n","if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists\n","\n","@tf.function\n","def standard_train_step(x, y):\n"," with tf.GradientTape() as tape:\n"," # feed the images into the model\n"," logits = standard_classifier(x) \n"," # Compute the loss\n"," loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits)\n","\n"," # Backpropagation\n"," grads = tape.gradient(loss, standard_classifier.trainable_variables)\n"," optimizer.apply_gradients(zip(grads, standard_classifier.trainable_variables))\n"," return loss\n","\n","# The training loop!\n","for epoch in range(num_epochs):\n"," for idx in tqdm(range(loader.get_train_size()//batch_size)):\n"," # Grab a batch of training data and propagate through the network\n"," x, y = loader.get_batch(batch_size)\n"," loss = standard_train_step(x, y)\n","\n"," # Record the loss and plot the evolution of the loss as a function of training\n"," loss_history.append(loss.numpy().mean())\n"," 
plotter.plot(loss_history.get())"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"AKMdWVHeCxj8","colab_type":"text"},"source":["### Evaluate performance of the standard CNN\n","\n","Next, let's evaluate the classification performance of our CelebA-trained standard CNN on the training dataset.\n"]},{"cell_type":"code","metadata":{"colab_type":"code","id":"35-PDgjdWk6_","colab":{}},"source":["### Evaluation of standard CNN ###\n","\n","# TRAINING DATA\n","# Evaluate on a subset of CelebA+Imagenet\n","(batch_x, batch_y) = loader.get_batch(5000)\n","y_pred_standard = tf.round(tf.nn.sigmoid(standard_classifier.predict(batch_x)))\n","acc_standard = tf.reduce_mean(tf.cast(tf.equal(batch_y, y_pred_standard), tf.float32))\n","\n","print(\"Standard CNN accuracy on (potentially biased) training set: {:.4f}\".format(acc_standard.numpy()))"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"Qu7R14KaEEvU","colab_type":"text"},"source":["We will also evaluate our networks on an independent test dataset containing faces that were not seen during training. For the test data, we'll look at the classification accuracy across four different demographics, based on the Fitzpatrick skin scale and sex-based labels: dark-skinned male, dark-skinned female, light-skinned male, and light-skinned female. \n","\n","Let's take a look at some sample faces in the test set. "]},{"cell_type":"code","metadata":{"colab_type":"code","id":"vfDD8ztGWk6x","colab":{}},"source":["### Load test dataset and plot examples ###\n","\n","test_faces = mdl.lab2.get_test_faces()\n","keys = [\"Light Female\", \"Light Male\", \"Dark Female\", \"Dark Male\"]\n","for group, key in zip(test_faces,keys): \n"," plt.figure(figsize=(5,5))\n"," plt.imshow(np.hstack(group))\n"," plt.title(key, fontsize=15)"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"uo1z3cdbEUMM","colab_type":"text"},"source":["Now, let's evaluated the probability of each of these face demographics being classified as a face using the standard CNN classifier we've just trained. "]},{"cell_type":"code","metadata":{"id":"GI4O0Y1GAot9","colab_type":"code","colab":{}},"source":["### Evaluate the standard CNN on the test data ### \n","\n","standard_classifier_logits = [standard_classifier(np.array(x, dtype=np.float32)) for x in test_faces]\n","standard_classifier_probs = tf.squeeze(tf.sigmoid(standard_classifier_logits))\n","\n","# Plot the prediction accuracies per demographic\n","xx = range(len(keys))\n","yy = standard_classifier_probs.numpy().mean(1)\n","plt.bar(xx, yy)\n","plt.xticks(xx, keys)\n","plt.ylim(max(0,yy.min()-yy.ptp()/2.), yy.max()+yy.ptp()/2.)\n","plt.title(\"Standard classifier predictions\");"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"j0Cvvt90DoAm","colab_type":"text"},"source":["Take a look at the accuracies for this first model across these four groups. What do you observe? Would you consider this model biased or unbiased? What are some reasons why a trained model may have biased accuracies? "]},{"cell_type":"markdown","metadata":{"id":"0AKcHnXVtgqJ","colab_type":"text"},"source":["## 2.3 Mitigating algorithmic bias\n","\n","Imbalances in the training data can result in unwanted algorithmic bias. For example, the majority of faces in CelebA (our training set) are those of light-skinned females. 
As a result, a classifier trained on CelebA will be better suited at recognizing and classifying faces with features similar to these, and will thus be biased.\n","\n","How could we overcome this? A naive solution -- and on that is being adopted by many companies and organizations -- would be to annotate different subclasses (i.e., light-skinned females, males with hats, etc.) within the training data, and then manually even out the data with respect to these groups.\n","\n","But this approach has two major disadvantages. First, it requires annotating massive amounts of data, which is not scalable. Second, it requires that we know what potential biases (e.g., race, gender, pose, occlusion, hats, glasses, etc.) to look for in the data. As a result, manual annotation may not capture all the different features that are imbalanced within the training data.\n","\n","Instead, let's actually **learn** these features in an unbiased, unsupervised manner, without the need for any annotation, and then train a classifier fairly with respect to these features. In the rest of this lab, we'll do exactly that."]},{"cell_type":"markdown","metadata":{"id":"nLemS7dqECsI","colab_type":"text"},"source":["## 2.4 Variational autoencoder (VAE) for learning latent structure\n","\n","As you saw, the accuracy of the CNN varies across the four demographics we looked at. To think about why this may be, consider the dataset the model was trained on, CelebA. If certain features, such as dark skin or hats, are *rare* in CelebA, the model may end up biased against these as a result of training with a biased dataset. That is to say, its classification accuracy will be worse on faces that have under-represented features, such as dark-skinned faces or faces with hats, relevative to faces with features well-represented in the training data! This is a problem. \n","\n","Our goal is to train a *debiased* version of this classifier -- one that accounts for potential disparities in feature representation within the training data. Specifically, to build a debiased facial classifier, we'll train a model that **learns a representation of the underlying latent space** to the face training data. The model then uses this information to mitigate unwanted biases by sampling faces with rare features, like dark skin or hats, *more frequently* during training. The key design requirement for our model is that it can learn an *encoding* of the latent features in the face data in an entirely *unsupervised* way. To achieve this, we'll turn to variational autoencoders (VAEs).\n","\n","![The concept of a VAE](http://kvfrans.com/content/images/2016/08/vae.jpg)\n","\n","As shown in the schematic above and in Lecture 4, VAEs rely on an encoder-decoder structure to learn a latent representation of the input data. In the context of computer vision, the encoder network takes in input images, encodes them into a series of variables defined by a mean and standard deviation, and then draws from the distributions defined by these parameters to generate a set of sampled latent variables. The decoder network then \"decodes\" these variables to generate a reconstruction of the original image, which is used during training to help the model identify which latent variables are important to learn. 
\n","\n","Let's formalize two key aspects of the VAE model and define relevant functions for each.\n"]},{"cell_type":"markdown","metadata":{"id":"KmbXKtcPkTXA","colab_type":"text"},"source":["### Understanding VAEs: loss function\n","\n","In practice, how can we train a VAE? In learning the latent space, we constrain the means and standard deviations to approximately follow a unit Gaussian. Recall that these are learned parameters, and therefore must factor into the loss computation, and that the decoder portion of the VAE is using these parameters to output a reconstruction that should closely match the input image, which also must factor into the loss. What this means is that we'll have two terms in our VAE loss function:\n","\n","1. **Latent loss ($L_{KL}$)**: measures how closely the learned latent variables match a unit Gaussian and is defined by the Kullback-Leibler (KL) divergence.\n","2. **Reconstruction loss ($L_{x}{(x,\\hat{x})}$)**: measures how accurately the reconstructed outputs match the input and is given by the $L^1$ norm of the input image and its reconstructed output. \n","\n","The equations for both of these losses are provided below:\n","\n","$$ L_{KL}(\\mu, \\sigma) = \\frac{1}{2}\\sum\\limits_{j=0}^{k-1}\\small{(\\sigma_j + \\mu_j^2 - 1 - \\log{\\sigma_j})} $$\n","\n","$$ L_{x}{(x,\\hat{x})} = ||x-\\hat{x}||_1 $$ \n","\n","Thus for the VAE loss we have: \n","\n","$$ L_{VAE} = c\\cdot L_{KL} + L_{x}{(x,\\hat{x})} $$\n","\n","where $c$ is a weighting coefficient used for regularization. \n","\n","Now we're ready to define our VAE loss function:"]},{"cell_type":"code","metadata":{"id":"S00ASo1ImSuh","colab_type":"code","colab":{}},"source":["### Defining the VAE loss function ###\n","\n","''' Function to calculate VAE loss given:\n"," an input x, \n"," reconstructed output x_recon, \n"," encoded means mu, \n"," encoded log of standard deviation logsigma, \n"," weight parameter for the latent loss kl_weight\n","'''\n","def vae_loss_function(x, x_recon, mu, logsigma, kl_weight=0.0005):\n"," # TODO: Define the latent loss. Note this is given in the equation for L_{KL}\n"," # in the text block directly above\n"," latent_loss = 0.5 * tf.reduce_sum(tf.exp(logsigma) + tf.square(mu) - 1.0 - logsigma, axis=1)\n"," # latent_loss = # TODO\n","\n"," # TODO: Define the reconstruction loss as the mean absolute pixel-wise \n"," # difference between the input and reconstruction. Hint: you'll need to \n"," # use tf.reduce_mean, and supply an axis argument which specifies which \n"," # dimensions to reduce over. For example, reconstruction loss needs to average \n"," # over the height, width, and channel image dimensions.\n"," # https://www.tensorflow.org/api_docs/python/tf/math/reduce_mean\n"," reconstruction_loss = tf.reduce_mean(tf.abs(x-x_recon), axis=(1,2,3))\n"," # reconstruction_loss = # TODO\n","\n"," # TODO: Define the VAE loss. Note this is given in the equation for L_{VAE}\n"," # in the text block directly above\n"," vae_loss = kl_weight * latent_loss + reconstruction_loss\n"," # vae_loss = # TODO\n"," \n"," return vae_loss"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"E8mpb3pJorpu","colab_type":"text"},"source":["Great! 
Now that we have a more concrete sense of how VAEs work, let's explore how we can leverage this network structure to train a *debiased* facial classifier."]},{"cell_type":"markdown","metadata":{"id":"DqtQH4S5fO8F","colab_type":"text"},"source":["### Understanding VAEs: reparameterization \n","\n","As you may recall from lecture, VAEs use a \"reparameterization trick\" for sampling learned latent variables. Instead of the VAE encoder generating a single vector of real numbers for each latent variable, it generates a vector of means and a vector of standard deviations that are constrained to roughly follow Gaussian distributions. We then sample from the standard deviations and add back the mean to output this as our sampled latent vector. Formalizing this for a latent variable $z$ where we sample $\\epsilon \\sim \\mathcal{N}(0,(I))$ we have: \n","\n","$$ z = \\mathbb{\\mu} + e^{\\left(\\frac{1}{2} \\cdot \\log{\\Sigma}\\right)}\\circ \\epsilon $$\n","\n","where $\\mu$ is the mean and $\\Sigma$ is the covariance matrix. This is useful because it will let us neatly define the loss function for the VAE, generate randomly sampled latent variables, achieve improved network generalization, **and** make our complete VAE network differentiable so that it can be trained via backpropagation. Quite powerful!\n","\n","Let's define a function to implement the VAE sampling operation:"]},{"cell_type":"code","metadata":{"id":"cT6PGdNajl3K","colab_type":"code","colab":{}},"source":["### VAE Reparameterization ###\n","\n","\"\"\"Reparameterization trick by sampling from an isotropic unit Gaussian.\n","# Arguments\n"," z_mean, z_logsigma (tensor): mean and log of standard deviation of latent distribution (Q(z|X))\n","# Returns\n"," z (tensor): sampled latent vector\n","\"\"\"\n","def sampling(z_mean, z_logsigma):\n"," # By default, random.normal is \"standard\" (ie. mean=0 and std=1.0)\n"," batch, latent_dim = z_mean.shape\n"," epsilon = tf.random.normal(shape=(batch, latent_dim))\n","\n"," # TODO: Define the reparameterization computation!\n"," # Note the equation is given in the text block immediately above.\n"," z = z_mean + tf.math.exp(0.5 * z_logsigma) * epsilon\n"," # z = # TODO\n"," return z"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"qtHEYI9KNn0A","colab_type":"text"},"source":["## 2.5 Debiasing variational autoencoder (DB-VAE)\n","\n","Now, we'll use the general idea behind the VAE architecture to build a model, termed a [*debiasing variational autoencoder*](https://lmrt.mit.edu/sites/default/files/AIES-19_paper_220.pdf) or DB-VAE, to mitigate (potentially) unknown biases present within the training idea. We'll train our DB-VAE model on the facial detection task, run the debiasing operation during training, evaluate on the PPB dataset, and compare its accuracy to our original, biased CNN model. \n","\n","### The DB-VAE model\n","\n","The key idea behind this debiasing approach is to use the latent variables learned via a VAE to adaptively re-sample the CelebA data during training. Specifically, we will alter the probability that a given image is used during training based on how often its latent features appear in the dataset. So, faces with rarer features (like dark skin, sunglasses, or hats) should become more likely to be sampled during training, while the sampling probability for faces with features that are over-represented in the training dataset should decrease (relative to uniform random sampling across the training data). 
\n","\n","A general schematic of the DB-VAE approach is shown here:\n","\n","![DB-VAE](https://raw.githubusercontent.com/aamini/introtodeeplearning/2019/lab2/img/DB-VAE.png)"]},{"cell_type":"markdown","metadata":{"id":"ziA75SN-UxxO","colab_type":"text"},"source":["Recall that we want to apply our DB-VAE to a *supervised classification* problem -- the facial detection task. Importantly, note how the encoder portion in the DB-VAE architecture also outputs a single supervised variable, $z_o$, corresponding to the class prediction -- face or not face. Usually, VAEs are not trained to output any supervised variables (such as a class prediction)! This is another key distinction between the DB-VAE and a traditional VAE. \n","\n","Keep in mind that we only want to learn the latent representation of *faces*, as that's what we're ultimately debiasing against, even though we are training a model on a binary classification problem. We'll need to ensure that, **for faces**, our DB-VAE model both learns a representation of the unsupervised latent variables, captured by the distribution $q_\\phi(z|x)$, **and** outputs a supervised class prediction $z_o$, but that, **for negative examples**, it only outputs a class prediction $z_o$."]},{"cell_type":"markdown","metadata":{"id":"XggIKYPRtOZR","colab_type":"text"},"source":["### Defining the DB-VAE loss function\n","\n","This means we'll need to be a bit clever about the loss function for the DB-VAE. The form of the loss will depend on whether it's a face image or a non-face image that's being considered. \n","\n","For **face images**, our loss function will have two components:\n","\n","\n","1. **VAE loss ($L_{VAE}$)**: consists of the latent loss and the reconstruction loss.\n","2. **Classification loss ($L_y(y,\\hat{y})$)**: standard cross-entropy loss for a binary classification problem. \n","\n","In contrast, for images of **non-faces**, our loss function is solely the classification loss. \n","\n","We can write a single expression for the loss by defining an indicator variable $\\mathcal{I}_f$which reflects which training data are images of faces ($\\mathcal{I}_f(y) = 1$ ) and which are images of non-faces ($\\mathcal{I}_f(y) = 0$). 
Using this, we obtain:\n","\n","$$L_{total} = L_y(y,\\hat{y}) + \\mathcal{I}_f(y)\\Big[L_{VAE}\\Big]$$\n","\n","Let's write a function to define the DB-VAE loss function:\n","\n"]},{"cell_type":"code","metadata":{"id":"VjieDs8Ovcqs","colab_type":"code","colab":{}},"source":["### Loss function for DB-VAE ###\n","\n","\"\"\"Loss function for DB-VAE.\n","# Arguments\n"," x: true input x\n"," x_pred: reconstructed x\n"," y: true label (face or not face)\n"," y_logit: predicted labels\n"," mu: mean of latent distribution (Q(z|X))\n"," logsigma: log of standard deviation of latent distribution (Q(z|X))\n","# Returns\n"," total_loss: DB-VAE total loss\n"," classification_loss = DB-VAE classification loss\n","\"\"\"\n","def debiasing_loss_function(x, x_pred, y, y_logit, mu, logsigma):\n","\n"," # TODO: call the relevant function to obtain VAE loss\n"," vae_loss = vae_loss_function(x, x_pred, mu, logsigma)\n"," # vae_loss = vae_loss_function('''TODO''') # TODO\n","\n"," # TODO: define the classification loss using sigmoid_cross_entropy\n"," # https://www.tensorflow.org/api_docs/python/tf/nn/sigmoid_cross_entropy_with_logits\n"," classification_loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=y_logit)\n"," # classification_loss = # TODO\n","\n"," # Use the training data labels to create variable face_indicator:\n"," # indicator that reflects which training data are images of faces\n"," face_indicator = tf.cast(tf.equal(y, 1), tf.float32)\n","\n"," # TODO: define the DB-VAE total loss! Use tf.reduce_mean to average over all\n"," # samples\n"," total_loss = tf.reduce_mean(\n"," classification_loss + \n"," face_indicator * vae_loss\n"," )\n"," # total_loss = # TODO\n","\n"," return total_loss, classification_loss"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"YIu_2LzNWwWY","colab_type":"text"},"source":["### DB-VAE architecture\n","\n","Now we're ready to define the DB-VAE architecture. To build the DB-VAE, we will use the standard CNN classifier from above as our encoder, and then define a decoder network. We will create and initialize the two models, and then construct the end-to-end VAE. 
We will use a latent space with 100 latent variables.\n","\n","The decoder network will take as input the sampled latent variables, run them through a series of deconvolutional layers, and output a reconstruction of the original input image."]},{"cell_type":"code","metadata":{"id":"JfWPHGrmyE7R","colab_type":"code","colab":{}},"source":["### Define the decoder portion of the DB-VAE ###\n","\n","n_filters = 12 # base number of convolutional filters, same as standard CNN\n","latent_dim = 100 # number of latent variables\n","\n","def make_face_decoder_network():\n"," # Functionally define the different layer types we will use\n"," Conv2DTranspose = functools.partial(tf.keras.layers.Conv2DTranspose, padding='same', activation='relu')\n"," BatchNormalization = tf.keras.layers.BatchNormalization\n"," Flatten = tf.keras.layers.Flatten\n"," Dense = functools.partial(tf.keras.layers.Dense, activation='relu')\n"," Reshape = tf.keras.layers.Reshape\n","\n"," # Build the decoder network using the Sequential API\n"," decoder = tf.keras.Sequential([\n"," # Transform to pre-convolutional generation\n"," Dense(units=4*4*6*n_filters), # 4x4 feature maps (with 6N occurances)\n"," Reshape(target_shape=(4, 4, 6*n_filters)),\n","\n"," # Upscaling convolutions (inverse of encoder)\n"," Conv2DTranspose(filters=4*n_filters, kernel_size=3, strides=2),\n"," Conv2DTranspose(filters=2*n_filters, kernel_size=3, strides=2),\n"," Conv2DTranspose(filters=1*n_filters, kernel_size=5, strides=2),\n"," Conv2DTranspose(filters=3, kernel_size=5, strides=2),\n"," ])\n","\n"," return decoder"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"yWCMu12w1BuD","colab_type":"text"},"source":["Now, we will put this decoder together with the standard CNN classifier as our encoder to define the DB-VAE. Note that at this point, there is nothing special about how we put the model together that makes it a \"debiasing\" model -- that will come when we define the training operation. Here, we will define the core VAE architecture by sublassing the `Model` class; defining encoding, reparameterization, and decoding operations; and calling the network end-to-end."]},{"cell_type":"code","metadata":{"id":"dSFDcFBL13c3","colab_type":"code","colab":{}},"source":["### Defining and creating the DB-VAE ###\n","\n","class DB_VAE(tf.keras.Model):\n"," def __init__(self, latent_dim):\n"," super(DB_VAE, self).__init__()\n"," self.latent_dim = latent_dim\n","\n"," # Define the number of outputs for the encoder. 
Recall that we have \n"," # `latent_dim` latent variables, as well as a supervised output for the \n"," # classification.\n"," num_encoder_dims = 2*self.latent_dim + 1\n","\n"," self.encoder = make_standard_classifier(num_encoder_dims)\n"," self.decoder = make_face_decoder_network()\n","\n"," # function to feed images into encoder, encode the latent space, and output\n"," # classification probability \n"," def encode(self, x):\n"," # encoder output\n"," encoder_output = self.encoder(x)\n","\n"," # classification prediction\n"," y_logit = tf.expand_dims(encoder_output[:, 0], -1)\n"," # latent variable distribution parameters\n"," z_mean = encoder_output[:, 1:self.latent_dim+1] \n"," z_logsigma = encoder_output[:, self.latent_dim+1:]\n","\n"," return y_logit, z_mean, z_logsigma\n","\n"," # VAE reparameterization: given a mean and logsigma, sample latent variables\n"," def reparameterize(self, z_mean, z_logsigma):\n"," # TODO: call the sampling function defined above\n"," z = sampling(z_mean, z_logsigma)\n"," # z = # TODO\n"," return z\n","\n"," # Decode the latent space and output reconstruction\n"," def decode(self, z):\n"," # TODO: use the decoder to output the reconstruction\n"," reconstruction = self.decoder(z)\n"," # reconstruction = # TODO\n"," return reconstruction\n","\n"," # The call function will be used to pass inputs x through the core VAE\n"," def call(self, x): \n"," # Encode input to a prediction and latent space\n"," y_logit, z_mean, z_logsigma = self.encode(x)\n","\n"," # TODO: reparameterization\n"," z = self.reparameterize(z_mean, z_logsigma)\n"," # z = # TODO\n","\n"," # TODO: reconstruction\n"," recon = self.decode(z)\n"," # recon = # TODO\n"," return y_logit, z_mean, z_logsigma, recon\n","\n"," # Predict face or not face logit for given input x\n"," def predict(self, x):\n"," y_logit, z_mean, z_logsigma = self.encode(x)\n"," return y_logit\n","\n","dbvae = DB_VAE(latent_dim)"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"M-clbYAj2waY","colab_type":"text"},"source":["As stated, the encoder architecture is identical to the CNN from earlier in this lab. Note the outputs of our constructed DB_VAE model in the `call` function: `y_logit, z_mean, z_logsigma, z`. Think carefully about why each of these are outputted and their significance to the problem at hand.\n","\n"]},{"cell_type":"markdown","metadata":{"id":"nbDNlslgQc5A","colab_type":"text"},"source":["### Adaptive resampling for automated debiasing with DB-VAE\n","\n","So, how can we actually use DB-VAE to train a debiased facial detection classifier?\n","\n","Recall the DB-VAE architecture: as input images are fed through the network, the encoder learns an estimate $\\mathcal{Q}(z|X)$ of the latent space. We want to increase the relative frequency of rare data by increased sampling of under-represented regions of the latent space. We can approximate $\\mathcal{Q}(z|X)$ using the frequency distributions of each of the learned latent variables, and then define the probability distribution of selecting a given datapoint $x$ based on this approximation. These probability distributions will be used during training to re-sample the data.\n","\n","You'll write a function to execute this update of the sampling probabilities, and then call this function within the DB-VAE training loop to actually debias the model. 
"]},{"cell_type":"markdown","metadata":{"id":"Fej5FDu37cf7","colab_type":"text"},"source":["First, we've defined a short helper function `get_latent_mu` that returns the latent variable means returned by the encoder after a batch of images is inputted to the network:"]},{"cell_type":"code","metadata":{"id":"ewWbf7TE7wVc","colab_type":"code","colab":{}},"source":["# Function to return the means for an input image batch\n","def get_latent_mu(images, dbvae, batch_size=1024):\n"," N = images.shape[0]\n"," mu = np.zeros((N, latent_dim))\n"," for start_ind in range(0, N, batch_size):\n"," end_ind = min(start_ind+batch_size, N+1)\n"," batch = (images[start_ind:end_ind]).astype(np.float32)/255.\n"," _, batch_mu, _ = dbvae.encode(batch)\n"," mu[start_ind:end_ind] = batch_mu\n"," return mu"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"wn4yK3SC72bo","colab_type":"text"},"source":["Now, let's define the actual resampling algorithm `get_training_sample_probabilities`. Importantly note the argument `smoothing_fac`. This parameter tunes the degree of debiasing: for `smoothing_fac=0`, the re-sampled training set will tend towards falling uniformly over the latent space, i.e., the most extreme debiasing."]},{"cell_type":"code","metadata":{"id":"HiX9pmmC7_wn","colab_type":"code","colab":{}},"source":["### Resampling algorithm for DB-VAE ###\n","\n","'''Function that recomputes the sampling probabilities for images within a batch\n"," based on how they distribute across the training data'''\n","def get_training_sample_probabilities(images, dbvae, bins=10, smoothing_fac=0.001): \n"," print(\"Recomputing the sampling probabilities\")\n"," \n"," # TODO: run the input batch and get the latent variable means\n"," mu = get_latent_mu(images, dbvae)\n"," # mu = get_latent_mu('''TODO''') # TODO\n","\n"," # sampling probabilities for the images\n"," training_sample_p = np.zeros(mu.shape[0])\n"," \n"," # consider the distribution for each latent variable \n"," for i in range(latent_dim):\n"," \n"," latent_distribution = mu[:,i]\n"," # generate a histogram of the latent distribution\n"," hist_density, bin_edges = np.histogram(latent_distribution, density=True, bins=bins)\n","\n"," # find which latent bin every data sample falls in \n"," bin_edges[0] = -float('inf')\n"," bin_edges[-1] = float('inf')\n"," \n"," # TODO: call the digitize function to find which bins in the latent distribution \n"," # every data sample falls in to\n"," # https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.digitize.html\n"," bin_idx = np.digitize(latent_distribution, bin_edges)\n"," # bin_idx = np.digitize('''TODO''', '''TODO''') # TODO\n","\n"," # smooth the density function\n"," hist_smoothed_density = hist_density + smoothing_fac\n"," hist_smoothed_density = hist_smoothed_density / np.sum(hist_smoothed_density)\n","\n"," # invert the density function \n"," p = 1.0/(hist_smoothed_density[bin_idx-1])\n"," \n"," # TODO: normalize all probabilities\n"," p = p / np.sum(p)\n"," # p = # TODO\n"," \n"," # TODO: update sampling probabilities by considering whether the newly\n"," # computed p is greater than the existing sampling probabilities.\n"," training_sample_p = np.maximum(p, training_sample_p)\n"," # training_sample_p = # TODO\n"," \n"," # final normalization\n"," training_sample_p /= np.sum(training_sample_p)\n","\n"," return training_sample_p"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"pF14fQkVUs-a","colab_type":"text"},"source":["Now that we've defined the 
resampling update, we can train our DB-VAE model on the CelebA/ImageNet training data, and run the above operation to re-weight the importance of particular data points as we train the model. Remember again that we only want to debias for features relevant to *faces*, not the set of negative examples. Complete the code block below to execute the training loop!"]},{"cell_type":"code","metadata":{"id":"xwQs-Gu5bKEK","colab_type":"code","colab":{}},"source":["### Training the DB-VAE ###\n","\n","# Hyperparameters\n","batch_size = 32\n","learning_rate = 5e-4\n","latent_dim = 100\n","\n","# DB-VAE needs slightly more epochs to train since its more complex than \n","# the standard classifier so we use 6 instead of 2\n","num_epochs = 6 \n","\n","# instantiate a new DB-VAE model and optimizer\n","dbvae = DB_VAE(100)\n","optimizer = tf.keras.optimizers.Adam(learning_rate)\n","\n","# To define the training operation, we will use tf.function which is a powerful tool \n","# that lets us turn a Python function into a TensorFlow computation graph.\n","@tf.function\n","def debiasing_train_step(x, y):\n","\n"," with tf.GradientTape() as tape:\n"," # Feed input x into dbvae. Note that this is using the DB_VAE call function!\n"," y_logit, z_mean, z_logsigma, x_recon = dbvae(x)\n","\n"," '''TODO: call the DB_VAE loss function to compute the loss'''\n"," loss, class_loss = debiasing_loss_function(x, x_recon, y, y_logit, z_mean, z_logsigma)\n"," # loss, class_loss = debiasing_loss_function('''TODO arguments''') # TODO\n"," \n"," '''TODO: use the GradientTape.gradient method to compute the gradients.\n"," Hint: this is with respect to the trainable_variables of the dbvae.'''\n"," grads = tape.gradient(loss, dbvae.trainable_variables)\n"," # grads = tape.gradient('''TODO''', '''TODO''') # TODO\n","\n"," # apply gradients to variables\n"," optimizer.apply_gradients(zip(grads, dbvae.trainable_variables))\n"," return loss\n","\n","# get training faces from data loader\n","all_faces = loader.get_all_train_faces()\n","\n","if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists\n","\n","# The training loop -- outer loop iterates over the number of epochs\n","for i in range(num_epochs):\n","\n"," IPython.display.clear_output(wait=True)\n"," print(\"Starting epoch {}/{}\".format(i+1, num_epochs))\n","\n"," # Recompute data sampling proabilities\n"," '''TODO: recompute the sampling probabilities for debiasing'''\n"," p_faces = get_training_sample_probabilities(all_faces, dbvae)\n"," # p_faces = get_training_sample_probabilities('''TODO''', '''TODO''') # TODO\n"," \n"," # get a batch of training data and compute the training step\n"," for j in tqdm(range(loader.get_train_size() // batch_size)):\n"," # load a batch of data\n"," (x, y) = loader.get_batch(batch_size, p_pos=p_faces)\n"," # loss optimization\n"," loss = debiasing_train_step(x, y)\n"," \n"," # plot the progress every 200 steps\n"," if j % 500 == 0: \n"," mdl.util.plot_sample(x, y, dbvae)"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"uZBlWDPOVcHg","colab_type":"text"},"source":["Wonderful! Now we should have a trained and (hopefully!) 
debiased facial classification model, ready for evaluation!"]},{"cell_type":"markdown","metadata":{"id":"Eo34xC7MbaiQ","colab_type":"text"},"source":["## 2.6 Evaluation of DB-VAE on Test Dataset\n","\n","Finally let's test our DB-VAE model on the test dataset, looking specifically at its accuracy on each the \"Dark Male\", \"Dark Female\", \"Light Male\", and \"Light Female\" demographics. We will compare the performance of this debiased model against the (potentially biased) standard CNN from earlier in the lab."]},{"cell_type":"code","metadata":{"id":"bgK77aB9oDtX","colab_type":"code","colab":{}},"source":["dbvae_logits = [dbvae.predict(np.array(x, dtype=np.float32)) for x in test_faces]\n","dbvae_probs = tf.squeeze(tf.sigmoid(dbvae_logits))\n","\n","xx = np.arange(len(keys))\n","plt.bar(xx, standard_classifier_probs.numpy().mean(1), width=0.2, label=\"Standard CNN\")\n","plt.bar(xx+0.2, dbvae_probs.numpy().mean(1), width=0.2, label=\"DB-VAE\")\n","plt.xticks(xx, keys); \n","plt.title(\"Network predictions on test dataset\")\n","plt.ylabel(\"Probability\"); plt.legend(bbox_to_anchor=(1.04,1), loc=\"upper left\");\n"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"rESoXRPQo_mq","colab_type":"text"},"source":["## 2.7 Conclusion \n","\n","We encourage you to think about and maybe even address some questions raised by the approach and results outlined here:\n","\n","* How does the accuracy of the DB-VAE across the four demographics compare to that of the standard CNN? Do you find this result surprising in any way?\n","* How can the performance of the DB-VAE classifier be improved even further? We purposely did not optimize hyperparameters to leave this up to you! If you want to go further, try to optimize your model to achieve the best performance. **[Email us](mailto:introtodeeplearning-staff@mit.edu) a copy of your notebook with the 2.6 bar plot executed, and we'll give out prizes to the best performers!** \n","* In which applications (either related to facial detection or not!) would debiasing in this way be desired? Are there applications where you may not want to debias your model? \n","* Do you think it should be necessary for companies to demonstrate that their models, particularly in the context of tasks like facial detection, are not biased? If so, do you have thoughts on how this could be standardized and implemented?\n","* Do you have ideas for other ways to address issues of bias, particularly in terms of the training data?\n","\n","Hopefully this lab has shed some light on a few concepts, from vision based tasks, to VAEs, to algorithmic bias. We like to think it has, but we're biased ;). \n","\n","![Faces](https://media1.tenor.com/images/44e1f590924eca94fe86067a4cf44c72/tenor.gif?itemid=3394328)"]}]} +{"nbformat":4,"nbformat_minor":0,"metadata":{"colab":{"name":"Part2_Debiasing_Solution.ipynb","provenance":[{"file_id":"https://github.com/aamini/introtodeeplearning/blob/master/lab2/Part2_debiasing_solution.ipynb","timestamp":1578015896226}],"collapsed_sections":["Ag_e7xtTzT1W","NDj7KBaW8Asz"]},"kernelspec":{"name":"python3","display_name":"Python 3"},"accelerator":"GPU"},"cells":[{"cell_type":"markdown","metadata":{"id":"Ag_e7xtTzT1W","colab_type":"text"},"source":["\n"," \n"," \n"," \n","
\n"," \n"," Visit MIT Deep Learning\n"," Run in Google Colab\n"," View Source on GitHub
\n","\n","# Copyright Information"]},{"cell_type":"code","metadata":{"id":"rNbf1pRlSDby","colab_type":"code","colab":{}},"source":["# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.\n","# \n","# Licensed under the MIT License. You may not use this file except in compliance\n","# with the License. Use and/or modification of this code outside of 6.S191 must\n","# reference:\n","#\n","# © MIT 6.S191: Introduction to Deep Learning\n","# http://introtodeeplearning.com\n","#"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"QOpPUH3FR179","colab_type":"text"},"source":["# Laboratory 2: Computer Vision\n","\n","# Part 2: Debiasing Facial Detection Systems\n","\n","In the second portion of the lab, we'll explore two prominent aspects of applied deep learning: facial detection and algorithmic bias. \n","\n","Deploying fair, unbiased AI systems is critical to their long-term acceptance. Consider the task of facial detection: given an image, is it an image of a face? This seemingly simple, but extremely important, task is subject to significant amounts of algorithmic bias among select demographics. \n","\n","In this lab, we'll investigate [one recently published approach](http://introtodeeplearning.com/AAAI_MitigatingAlgorithmicBias.pdf) to addressing algorithmic bias. We'll build a facial detection model that learns the *latent variables* underlying face image datasets and uses this to adaptively re-sample the training data, thus mitigating any biases that may be present in order to train a *debiased* model.\n","\n","\n","Run the next code block for a short video from Google that explores how and why it's important to consider bias when thinking about machine learning:"]},{"cell_type":"code","metadata":{"id":"XQh5HZfbupFF","colab_type":"code","colab":{}},"source":["import IPython\n","IPython.display.YouTubeVideo('59bMh59JQDo')"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"3Ezfc6Yv6IhI","colab_type":"text"},"source":["Let's get started by installing the relevant dependencies:"]},{"cell_type":"code","metadata":{"id":"E46sWVKK6LP9","colab_type":"code","colab":{}},"source":["# Import Tensorflow 2.0\n","%tensorflow_version 2.x\n","import tensorflow as tf\n","\n","import IPython\n","import functools\n","import matplotlib.pyplot as plt\n","import numpy as np\n","from tqdm import tqdm\n","\n","# Download and import the MIT 6.S191 package\n","!pip install mitdeeplearning\n","import mitdeeplearning as mdl"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"V0e77oOM3udR","colab_type":"text"},"source":["## 2.1 Datasets\n","\n","We'll be using three datasets in this lab. In order to train our facial detection models, we'll need a dataset of positive examples (i.e., of faces) and a dataset of negative examples (i.e., of things that are not faces). We'll use these data to train our models to classify images as either faces or not faces. Finally, we'll need a test dataset of face images. Since we're concerned about the potential *bias* of our learned models against certain demographics, it's important that the test dataset we use has equal representation across the demographics or features of interest. In this lab, we'll consider skin tone and gender. \n","\n","1. **Positive training data**: [CelebA Dataset](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html). A large-scale (over 200K images) of celebrity faces. \n","2. **Negative training data**: [ImageNet](http://www.image-net.org/). 
Many images across many different categories. We'll take negative examples from a variety of non-human categories. \n","[Fitzpatrick Scale](https://en.wikipedia.org/wiki/Fitzpatrick_scale) skin type classification system, with each image labeled as \"Lighter'' or \"Darker''.\n","\n","Let's begin by importing these datasets. We've written a class that does a bit of data pre-processing to import the training data in a usable format."]},{"cell_type":"code","metadata":{"id":"RWXaaIWy6jVw","colab_type":"code","colab":{}},"source":["# Get the training data: both images from CelebA and ImageNet\n","path_to_training_data = tf.keras.utils.get_file('train_face.h5', 'https://www.dropbox.com/s/bp54q547mfg15ze/train_face.h5?dl=1')\n","# Instantiate a TrainingDatasetLoader using the downloaded dataset\n","loader = mdl.lab2.TrainingDatasetLoader(path_to_training_data)"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"yIE321rxa_b3","colab_type":"text"},"source":["We can look at the size of the training dataset and grab a batch of size 100:"]},{"cell_type":"code","metadata":{"id":"DjPSjZZ_bGqe","colab_type":"code","colab":{}},"source":["number_of_training_examples = loader.get_train_size()\n","(images, labels) = loader.get_batch(100)"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"sxtkJoqF6oH1","colab_type":"text"},"source":["Play around with displaying images to get a sense of what the training data actually looks like!"]},{"cell_type":"code","metadata":{"id":"Jg17jzwtbxDA","colab_type":"code","colab":{}},"source":["### Examining the CelebA training dataset ###\n","\n","#@title Change the sliders to look at positive and negative training examples! { run: \"auto\" }\n","\n","face_images = images[np.where(labels==1)[0]]\n","not_face_images = images[np.where(labels==0)[0]]\n","\n","idx_face = 23 #@param {type:\"slider\", min:0, max:49, step:1}\n","idx_not_face = 9 #@param {type:\"slider\", min:0, max:49, step:1}\n","\n","plt.figure(figsize=(5,5))\n","plt.subplot(1, 2, 1)\n","plt.imshow(face_images[idx_face])\n","plt.title(\"Face\"); plt.grid(False)\n","\n","plt.subplot(1, 2, 2)\n","plt.imshow(not_face_images[idx_not_face])\n","plt.title(\"Not Face\"); plt.grid(False)"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"NDj7KBaW8Asz","colab_type":"text"},"source":["### Thinking about bias\n","\n","Remember we'll be training our facial detection classifiers on the large, well-curated CelebA dataset (and ImageNet), and then evaluating their accuracy by testing them on an independent test dataset. Our goal is to build a model that trains on CelebA *and* achieves high classification accuracy on the the test dataset across all demographics, and to thus show that this model does not suffer from any hidden bias. \n","\n","What exactly do we mean when we say a classifier is biased? In order to formalize this, we'll need to think about [*latent variables*](https://en.wikipedia.org/wiki/Latent_variable), variables that define a dataset but are not strictly observed. As defined in the generative modeling lecture, we'll use the term *latent space* to refer to the probability distributions of the aforementioned latent variables. Putting these ideas together, we consider a classifier *biased* if its classification decision changes after it sees some additional latent features. This notion of bias may be helpful to keep in mind throughout the rest of the lab. 
"]},{"cell_type":"markdown","metadata":{"id":"AIFDvU4w8OIH","colab_type":"text"},"source":["## 2.2 CNN for facial detection \n","\n","First, we'll define and train a CNN on the facial classification task, and evaluate its accuracy. Later, we'll evaluate the performance of our debiased models against this baseline CNN. The CNN model has a relatively standard architecture consisting of a series of convolutional layers with batch normalization followed by two fully connected layers to flatten the convolution output and generate a class prediction. \n","\n","### Define and train the CNN model\n","\n","Like we did in the first part of the lab, we'll define our CNN model, and then train on the CelebA and ImageNet datasets using the `tf.GradientTape` class and the `tf.GradientTape.gradient` method."]},{"cell_type":"code","metadata":{"id":"82EVTAAW7B_X","colab_type":"code","colab":{}},"source":["### Define the CNN model ###\n","\n","n_filters = 12 # base number of convolutional filters\n","\n","'''Function to define a standard CNN model'''\n","def make_standard_classifier(n_outputs=1):\n"," Conv2D = functools.partial(tf.keras.layers.Conv2D, padding='same', activation='relu')\n"," BatchNormalization = tf.keras.layers.BatchNormalization\n"," Flatten = tf.keras.layers.Flatten\n"," Dense = functools.partial(tf.keras.layers.Dense, activation='relu')\n","\n"," model = tf.keras.Sequential([\n"," Conv2D(filters=1*n_filters, kernel_size=5, strides=2),\n"," BatchNormalization(),\n"," \n"," Conv2D(filters=2*n_filters, kernel_size=5, strides=2),\n"," BatchNormalization(),\n","\n"," Conv2D(filters=4*n_filters, kernel_size=3, strides=2),\n"," BatchNormalization(),\n","\n"," Conv2D(filters=6*n_filters, kernel_size=3, strides=2),\n"," BatchNormalization(),\n","\n"," Flatten(),\n"," Dense(512),\n"," Dense(n_outputs, activation=None),\n"," ])\n"," return model\n","\n","standard_classifier = make_standard_classifier()"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"c-eWf3l_lCri","colab_type":"text"},"source":["Now let's train the standard CNN!"]},{"cell_type":"code","metadata":{"colab_type":"code","id":"eJlDGh1o31G1","colab":{}},"source":["### Train the standard CNN ###\n","\n","# Training hyperparameters\n","batch_size = 32\n","num_epochs = 2 # keep small to run faster\n","learning_rate = 5e-4\n","\n","optimizer = tf.keras.optimizers.Adam(learning_rate) # define our optimizer\n","loss_history = mdl.util.LossHistory(smoothing_factor=0.99) # to record loss evolution\n","plotter = mdl.util.PeriodicPlotter(sec=2, scale='semilogy')\n","if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists\n","\n","@tf.function\n","def standard_train_step(x, y):\n"," with tf.GradientTape() as tape:\n"," # feed the images into the model\n"," logits = standard_classifier(x) \n"," # Compute the loss\n"," loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits)\n","\n"," # Backpropagation\n"," grads = tape.gradient(loss, standard_classifier.trainable_variables)\n"," optimizer.apply_gradients(zip(grads, standard_classifier.trainable_variables))\n"," return loss\n","\n","# The training loop!\n","for epoch in range(num_epochs):\n"," for idx in tqdm(range(loader.get_train_size()//batch_size)):\n"," # Grab a batch of training data and propagate through the network\n"," x, y = loader.get_batch(batch_size)\n"," loss = standard_train_step(x, y)\n","\n"," # Record the loss and plot the evolution of the loss as a function of training\n"," loss_history.append(loss.numpy().mean())\n"," 
plotter.plot(loss_history.get())"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"AKMdWVHeCxj8","colab_type":"text"},"source":["### Evaluate performance of the standard CNN\n","\n","Next, let's evaluate the classification performance of our CelebA-trained standard CNN on the training dataset.\n"]},{"cell_type":"code","metadata":{"colab_type":"code","id":"35-PDgjdWk6_","colab":{}},"source":["### Evaluation of standard CNN ###\n","\n","# TRAINING DATA\n","# Evaluate on a subset of CelebA+Imagenet\n","(batch_x, batch_y) = loader.get_batch(5000)\n","y_pred_standard = tf.round(tf.nn.sigmoid(standard_classifier.predict(batch_x)))\n","acc_standard = tf.reduce_mean(tf.cast(tf.equal(batch_y, y_pred_standard), tf.float32))\n","\n","print(\"Standard CNN accuracy on (potentially biased) training set: {:.4f}\".format(acc_standard.numpy()))"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"Qu7R14KaEEvU","colab_type":"text"},"source":["We will also evaluate our networks on an independent test dataset containing faces that were not seen during training. For the test data, we'll look at the classification accuracy across four different demographics, based on the Fitzpatrick skin scale and sex-based labels: dark-skinned male, dark-skinned female, light-skinned male, and light-skinned female. \n","\n","Let's take a look at some sample faces in the test set. "]},{"cell_type":"code","metadata":{"colab_type":"code","id":"vfDD8ztGWk6x","colab":{}},"source":["### Load test dataset and plot examples ###\n","\n","test_faces = mdl.lab2.get_test_faces()\n","keys = [\"Light Female\", \"Light Male\", \"Dark Female\", \"Dark Male\"]\n","for group, key in zip(test_faces,keys): \n"," plt.figure(figsize=(5,5))\n"," plt.imshow(np.hstack(group))\n"," plt.title(key, fontsize=15)"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"uo1z3cdbEUMM","colab_type":"text"},"source":["Now, let's evaluated the probability of each of these face demographics being classified as a face using the standard CNN classifier we've just trained. "]},{"cell_type":"code","metadata":{"id":"GI4O0Y1GAot9","colab_type":"code","colab":{}},"source":["### Evaluate the standard CNN on the test data ### \n","\n","standard_classifier_logits = [standard_classifier(np.array(x, dtype=np.float32)) for x in test_faces]\n","standard_classifier_probs = tf.squeeze(tf.sigmoid(standard_classifier_logits))\n","\n","# Plot the prediction accuracies per demographic\n","xx = range(len(keys))\n","yy = standard_classifier_probs.numpy().mean(1)\n","plt.bar(xx, yy)\n","plt.xticks(xx, keys)\n","plt.ylim(max(0,yy.min()-yy.ptp()/2.), yy.max()+yy.ptp()/2.)\n","plt.title(\"Standard classifier predictions\");"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"j0Cvvt90DoAm","colab_type":"text"},"source":["Take a look at the accuracies for this first model across these four groups. What do you observe? Would you consider this model biased or unbiased? What are some reasons why a trained model may have biased accuracies? "]},{"cell_type":"markdown","metadata":{"id":"0AKcHnXVtgqJ","colab_type":"text"},"source":["## 2.3 Mitigating algorithmic bias\n","\n","Imbalances in the training data can result in unwanted algorithmic bias. For example, the majority of faces in CelebA (our training set) are those of light-skinned females. 
As a result, a classifier trained on CelebA will be better suited at recognizing and classifying faces with features similar to these, and will thus be biased.\n","\n","How could we overcome this? A naive solution -- and one that is being adopted by many companies and organizations -- would be to annotate different subclasses (i.e., light-skinned females, males with hats, etc.) within the training data, and then manually even out the data with respect to these groups.\n","\n","But this approach has two major disadvantages. First, it requires annotating massive amounts of data, which is not scalable. Second, it requires that we know what potential biases (e.g., race, gender, pose, occlusion, hats, glasses, etc.) to look for in the data. As a result, manual annotation may not capture all the different features that are imbalanced within the training data.\n","\n","Instead, let's actually **learn** these features in an unbiased, unsupervised manner, without the need for any annotation, and then train a classifier fairly with respect to these features. In the rest of this lab, we'll do exactly that."]},{"cell_type":"markdown","metadata":{"id":"nLemS7dqECsI","colab_type":"text"},"source":["## 2.4 Variational autoencoder (VAE) for learning latent structure\n","\n","As you saw, the accuracy of the CNN varies across the four demographics we looked at. To think about why this may be, consider the dataset the model was trained on, CelebA. If certain features, such as dark skin or hats, are *rare* in CelebA, the model may end up biased against these as a result of training with a biased dataset. That is to say, its classification accuracy will be worse on faces that have under-represented features, such as dark-skinned faces or faces with hats, relative to faces with features well-represented in the training data! This is a problem. \n","\n","Our goal is to train a *debiased* version of this classifier -- one that accounts for potential disparities in feature representation within the training data. Specifically, to build a debiased facial classifier, we'll train a model that **learns a representation of the underlying latent space** of the face training data. The model then uses this information to mitigate unwanted biases by sampling faces with rare features, like dark skin or hats, *more frequently* during training. The key design requirement for our model is that it can learn an *encoding* of the latent features in the face data in an entirely *unsupervised* way. To achieve this, we'll turn to variational autoencoders (VAEs).\n","\n","![The concept of a VAE](http://kvfrans.com/content/images/2016/08/vae.jpg)\n","\n","As shown in the schematic above and in Lecture 4, VAEs rely on an encoder-decoder structure to learn a latent representation of the input data. In the context of computer vision, the encoder network takes in input images, encodes them into a series of variables defined by a mean and standard deviation, and then draws from the distributions defined by these parameters to generate a set of sampled latent variables. The decoder network then \"decodes\" these variables to generate a reconstruction of the original image, which is used during training to help the model identify which latent variables are important to learn. 
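\n","\n","To make that data flow concrete, here is a minimal, hypothetical sketch of one VAE forward pass (the tiny encoder, decoder, image size, and `latent_dim` here are placeholders for illustration only; the lab's actual encoder and decoder are defined elsewhere in this notebook):\n","\n","```python\n","import tensorflow as tf\n","\n","latent_dim = 2                                   # toy number of latent variables\n","x = tf.random.uniform((8, 64, 64, 3))            # toy batch of 8 images\n","\n","# toy encoder: outputs a mean and a log standard deviation for each latent variable\n","encoder = tf.keras.Sequential([tf.keras.layers.Flatten(), tf.keras.layers.Dense(2*latent_dim)])\n","# toy decoder: maps a sampled latent vector back to an image-shaped reconstruction\n","decoder = tf.keras.Sequential([tf.keras.layers.Dense(64*64*3), tf.keras.layers.Reshape((64, 64, 3))])\n","\n","encoder_output = encoder(x)\n","z_mean, z_logsigma = encoder_output[:, :latent_dim], encoder_output[:, latent_dim:]\n","epsilon = tf.random.normal(shape=z_mean.shape)   # random draw from a unit Gaussian\n","z = z_mean + tf.exp(0.5 * z_logsigma) * epsilon  # sampled latent variables\n","x_recon = decoder(z)                             # reconstruction used to compute the training loss\n","```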
\n","\n","Let's formalize two key aspects of the VAE model and define relevant functions for each.\n"]},{"cell_type":"markdown","metadata":{"id":"KmbXKtcPkTXA","colab_type":"text"},"source":["### Understanding VAEs: loss function\n","\n","In practice, how can we train a VAE? In learning the latent space, we constrain the means and standard deviations to approximately follow a unit Gaussian. Recall that these are learned parameters, and therefore must factor into the loss computation, and that the decoder portion of the VAE is using these parameters to output a reconstruction that should closely match the input image, which also must factor into the loss. What this means is that we'll have two terms in our VAE loss function:\n","\n","1. **Latent loss ($L_{KL}$)**: measures how closely the learned latent variables match a unit Gaussian and is defined by the Kullback-Leibler (KL) divergence.\n","2. **Reconstruction loss ($L_{x}{(x,\\hat{x})}$)**: measures how accurately the reconstructed outputs match the input and is given by the $L^1$ norm of the input image and its reconstructed output. \n","\n","The equations for both of these losses are provided below:\n","\n","$$ L_{KL}(\\mu, \\sigma) = \\frac{1}{2}\\sum\\limits_{j=0}^{k-1}\\small{(\\sigma_j + \\mu_j^2 - 1 - \\log{\\sigma_j})} $$\n","\n","$$ L_{x}{(x,\\hat{x})} = ||x-\\hat{x}||_1 $$ \n","\n","Thus for the VAE loss we have: \n","\n","$$ L_{VAE} = c\\cdot L_{KL} + L_{x}{(x,\\hat{x})} $$\n","\n","where $c$ is a weighting coefficient used for regularization. \n","\n","Now we're ready to define our VAE loss function:"]},{"cell_type":"code","metadata":{"id":"S00ASo1ImSuh","colab_type":"code","colab":{}},"source":["### Defining the VAE loss function ###\n","\n","''' Function to calculate VAE loss given:\n"," an input x, \n"," reconstructed output x_recon, \n"," encoded means mu, \n"," encoded log of standard deviation logsigma, \n"," weight parameter for the latent loss kl_weight\n","'''\n","def vae_loss_function(x, x_recon, mu, logsigma, kl_weight=0.0005):\n"," # TODO: Define the latent loss. Note this is given in the equation for L_{KL}\n"," # in the text block directly above\n"," latent_loss = 0.5 * tf.reduce_sum(tf.exp(logsigma) + tf.square(mu) - 1.0 - logsigma, axis=1)\n"," # latent_loss = # TODO\n","\n"," # TODO: Define the reconstruction loss as the mean absolute pixel-wise \n"," # difference between the input and reconstruction. Hint: you'll need to \n"," # use tf.reduce_mean, and supply an axis argument which specifies which \n"," # dimensions to reduce over. For example, reconstruction loss needs to average \n"," # over the height, width, and channel image dimensions.\n"," # https://www.tensorflow.org/api_docs/python/tf/math/reduce_mean\n"," reconstruction_loss = tf.reduce_mean(tf.abs(x-x_recon), axis=(1,2,3))\n"," # reconstruction_loss = # TODO\n","\n"," # TODO: Define the VAE loss. Note this is given in the equation for L_{VAE}\n"," # in the text block directly above\n"," vae_loss = kl_weight * latent_loss + reconstruction_loss\n"," # vae_loss = # TODO\n"," \n"," return vae_loss"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"E8mpb3pJorpu","colab_type":"text"},"source":["Great! 
Now that we have a more concrete sense of how VAEs work, let's explore how we can leverage this network structure to train a *debiased* facial classifier."]},{"cell_type":"markdown","metadata":{"id":"DqtQH4S5fO8F","colab_type":"text"},"source":["### Understanding VAEs: reparameterization \n","\n","As you may recall from lecture, VAEs use a \"reparameterization trick\" for sampling learned latent variables. Instead of the VAE encoder generating a single vector of real numbers for each latent variable, it generates a vector of means and a vector of standard deviations that are constrained to roughly follow Gaussian distributions. We then scale a random noise sample by the standard deviation and add back the mean to output the result as our sampled latent vector. Formalizing this for a latent variable $z$ where we sample $\\epsilon \\sim \\mathcal{N}(0,I)$ we have: \n","\n","$$ z = \\mu + e^{\\left(\\frac{1}{2} \\cdot \\log{\\Sigma}\\right)}\\circ \\epsilon $$\n","\n","where $\\mu$ is the mean and $\\Sigma$ is the covariance matrix. This is useful because it will let us neatly define the loss function for the VAE, generate randomly sampled latent variables, achieve improved network generalization, **and** make our complete VAE network differentiable so that it can be trained via backpropagation. Quite powerful!\n","\n","Let's define a function to implement the VAE sampling operation:"]},{"cell_type":"code","metadata":{"id":"cT6PGdNajl3K","colab_type":"code","colab":{}},"source":["### VAE Reparameterization ###\n","\n","\"\"\"Reparameterization trick by sampling from an isotropic unit Gaussian.\n","# Arguments\n"," z_mean, z_logsigma (tensor): mean and log of standard deviation of latent distribution (Q(z|X))\n","# Returns\n"," z (tensor): sampled latent vector\n","\"\"\"\n","def sampling(z_mean, z_logsigma):\n"," # By default, random.normal is \"standard\" (i.e. mean=0 and std=1.0)\n"," batch, latent_dim = z_mean.shape\n"," epsilon = tf.random.normal(shape=(batch, latent_dim))\n","\n"," # TODO: Define the reparameterization computation!\n"," # Note the equation is given in the text block immediately above.\n"," z = z_mean + tf.math.exp(0.5 * z_logsigma) * epsilon\n"," # z = # TODO\n"," return z"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"qtHEYI9KNn0A","colab_type":"text"},"source":["## 2.5 Debiasing variational autoencoder (DB-VAE)\n","\n","Now, we'll use the general idea behind the VAE architecture to build a model, termed a [*debiasing variational autoencoder*](https://lmrt.mit.edu/sites/default/files/AIES-19_paper_220.pdf) or DB-VAE, to mitigate (potentially) unknown biases present within the training data. We'll train our DB-VAE model on the facial detection task, run the debiasing operation during training, evaluate on the PPB dataset, and compare its accuracy to our original, biased CNN model. \n","\n","### The DB-VAE model\n","\n","The key idea behind this debiasing approach is to use the latent variables learned via a VAE to adaptively re-sample the CelebA data during training. Specifically, we will alter the probability that a given image is used during training based on how often its latent features appear in the dataset. So, faces with rarer features (like dark skin, sunglasses, or hats) should become more likely to be sampled during training, while the sampling probability for faces with features that are over-represented in the training dataset should decrease (relative to uniform random sampling across the training data). 
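\n","\n","As a toy illustration of this inverse-frequency idea (hypothetical numbers only -- the lab's actual resampling function is implemented below), weighting samples by the inverse of their histogram density gives rare values of a latent variable a larger chance of being drawn:\n","\n","```python\n","import numpy as np\n","\n","np.random.seed(0)\n","latent_values = np.random.normal(size=1000)             # pretend values of one latent variable\n","hist_density, bin_edges = np.histogram(latent_values, bins=10, density=True)\n","bin_idx = np.digitize(latent_values, bin_edges[1:-1])   # histogram bin of every sample (0..9)\n","weights = 1.0 / (hist_density[bin_idx] + 1e-3)          # rare bins -> large weights (smoothed)\n","sample_p = weights / weights.sum()                      # normalized sampling probabilities\n","```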
\n","\n","A general schematic of the DB-VAE approach is shown here:\n","\n","![DB-VAE](https://raw.githubusercontent.com/aamini/introtodeeplearning/2019/lab2/img/DB-VAE.png)"]},{"cell_type":"markdown","metadata":{"id":"ziA75SN-UxxO","colab_type":"text"},"source":["Recall that we want to apply our DB-VAE to a *supervised classification* problem -- the facial detection task. Importantly, note how the encoder portion in the DB-VAE architecture also outputs a single supervised variable, $z_o$, corresponding to the class prediction -- face or not face. Usually, VAEs are not trained to output any supervised variables (such as a class prediction)! This is another key distinction between the DB-VAE and a traditional VAE. \n","\n","Keep in mind that we only want to learn the latent representation of *faces*, as that's what we're ultimately debiasing against, even though we are training a model on a binary classification problem. We'll need to ensure that, **for faces**, our DB-VAE model both learns a representation of the unsupervised latent variables, captured by the distribution $q_\\phi(z|x)$, **and** outputs a supervised class prediction $z_o$, but that, **for negative examples**, it only outputs a class prediction $z_o$."]},{"cell_type":"markdown","metadata":{"id":"XggIKYPRtOZR","colab_type":"text"},"source":["### Defining the DB-VAE loss function\n","\n","This means we'll need to be a bit clever about the loss function for the DB-VAE. The form of the loss will depend on whether it's a face image or a non-face image that's being considered. \n","\n","For **face images**, our loss function will have two components:\n","\n","\n","1. **VAE loss ($L_{VAE}$)**: consists of the latent loss and the reconstruction loss.\n","2. **Classification loss ($L_y(y,\\hat{y})$)**: standard cross-entropy loss for a binary classification problem. \n","\n","In contrast, for images of **non-faces**, our loss function is solely the classification loss. \n","\n","We can write a single expression for the loss by defining an indicator variable $\\mathcal{I}_f$which reflects which training data are images of faces ($\\mathcal{I}_f(y) = 1$ ) and which are images of non-faces ($\\mathcal{I}_f(y) = 0$). 
Using this, we obtain:\n","\n","$$L_{total} = L_y(y,\\hat{y}) + \\mathcal{I}_f(y)\\Big[L_{VAE}\\Big]$$\n","\n","Let's write a function to define the DB-VAE loss function:\n","\n"]},{"cell_type":"code","metadata":{"id":"VjieDs8Ovcqs","colab_type":"code","colab":{}},"source":["### Loss function for DB-VAE ###\n","\n","\"\"\"Loss function for DB-VAE.\n","# Arguments\n"," x: true input x\n"," x_pred: reconstructed x\n"," y: true label (face or not face)\n"," y_logit: predicted labels\n"," mu: mean of latent distribution (Q(z|X))\n"," logsigma: log of standard deviation of latent distribution (Q(z|X))\n","# Returns\n"," total_loss: DB-VAE total loss\n"," classification_loss = DB-VAE classification loss\n","\"\"\"\n","def debiasing_loss_function(x, x_pred, y, y_logit, mu, logsigma):\n","\n"," # TODO: call the relevant function to obtain VAE loss\n"," vae_loss = vae_loss_function(x, x_pred, mu, logsigma)\n"," # vae_loss = vae_loss_function('''TODO''') # TODO\n","\n"," # TODO: define the classification loss using sigmoid_cross_entropy\n"," # https://www.tensorflow.org/api_docs/python/tf/nn/sigmoid_cross_entropy_with_logits\n"," classification_loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=y_logit)\n"," # classification_loss = # TODO\n","\n"," # Use the training data labels to create variable face_indicator:\n"," # indicator that reflects which training data are images of faces\n"," face_indicator = tf.cast(tf.equal(y, 1), tf.float32)\n","\n"," # TODO: define the DB-VAE total loss! Use tf.reduce_mean to average over all\n"," # samples\n"," total_loss = tf.reduce_mean(\n"," classification_loss + \n"," face_indicator * vae_loss\n"," )\n"," # total_loss = # TODO\n","\n"," return total_loss, classification_loss"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"YIu_2LzNWwWY","colab_type":"text"},"source":["### DB-VAE architecture\n","\n","Now we're ready to define the DB-VAE architecture. To build the DB-VAE, we will use the standard CNN classifier from above as our encoder, and then define a decoder network. We will create and initialize the two models, and then construct the end-to-end VAE. 
We will use a latent space with 100 latent variables.\n","\n","The decoder network will take as input the sampled latent variables, run them through a series of deconvolutional layers, and output a reconstruction of the original input image."]},{"cell_type":"code","metadata":{"id":"JfWPHGrmyE7R","colab_type":"code","colab":{}},"source":["### Define the decoder portion of the DB-VAE ###\n","\n","n_filters = 12 # base number of convolutional filters, same as standard CNN\n","latent_dim = 100 # number of latent variables\n","\n","def make_face_decoder_network():\n"," # Functionally define the different layer types we will use\n"," Conv2DTranspose = functools.partial(tf.keras.layers.Conv2DTranspose, padding='same', activation='relu')\n"," BatchNormalization = tf.keras.layers.BatchNormalization\n"," Flatten = tf.keras.layers.Flatten\n"," Dense = functools.partial(tf.keras.layers.Dense, activation='relu')\n"," Reshape = tf.keras.layers.Reshape\n","\n"," # Build the decoder network using the Sequential API\n"," decoder = tf.keras.Sequential([\n"," # Transform to pre-convolutional generation\n"," Dense(units=4*4*6*n_filters), # 4x4 feature maps (with 6N occurrences)\n"," Reshape(target_shape=(4, 4, 6*n_filters)),\n","\n"," # Upscaling convolutions (inverse of encoder)\n"," Conv2DTranspose(filters=4*n_filters, kernel_size=3, strides=2),\n"," Conv2DTranspose(filters=2*n_filters, kernel_size=3, strides=2),\n"," Conv2DTranspose(filters=1*n_filters, kernel_size=5, strides=2),\n"," Conv2DTranspose(filters=3, kernel_size=5, strides=2),\n"," ])\n","\n"," return decoder"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"yWCMu12w1BuD","colab_type":"text"},"source":["Now, we will put this decoder together with the standard CNN classifier as our encoder to define the DB-VAE. Note that at this point, there is nothing special about how we put the model together that makes it a \"debiasing\" model -- that will come when we define the training operation. Here, we will define the core VAE architecture by subclassing the `Model` class; defining encoding, reparameterization, and decoding operations; and calling the network end-to-end."]},{"cell_type":"code","metadata":{"id":"dSFDcFBL13c3","colab_type":"code","colab":{}},"source":["### Defining and creating the DB-VAE ###\n","\n","class DB_VAE(tf.keras.Model):\n"," def __init__(self, latent_dim):\n"," super(DB_VAE, self).__init__()\n"," self.latent_dim = latent_dim\n","\n"," # Define the number of outputs for the encoder. 
Recall that we have \n"," # `latent_dim` latent variables, as well as a supervised output for the \n"," # classification.\n"," num_encoder_dims = 2*self.latent_dim + 1\n","\n"," self.encoder = make_standard_classifier(num_encoder_dims)\n"," self.decoder = make_face_decoder_network()\n","\n"," # function to feed images into encoder, encode the latent space, and output\n"," # classification probability \n"," def encode(self, x):\n"," # encoder output\n"," encoder_output = self.encoder(x)\n","\n"," # classification prediction\n"," y_logit = tf.expand_dims(encoder_output[:, 0], -1)\n"," # latent variable distribution parameters\n"," z_mean = encoder_output[:, 1:self.latent_dim+1] \n"," z_logsigma = encoder_output[:, self.latent_dim+1:]\n","\n"," return y_logit, z_mean, z_logsigma\n","\n"," # VAE reparameterization: given a mean and logsigma, sample latent variables\n"," def reparameterize(self, z_mean, z_logsigma):\n"," # TODO: call the sampling function defined above\n"," z = sampling(z_mean, z_logsigma)\n"," # z = # TODO\n"," return z\n","\n"," # Decode the latent space and output reconstruction\n"," def decode(self, z):\n"," # TODO: use the decoder to output the reconstruction\n"," reconstruction = self.decoder(z)\n"," # reconstruction = # TODO\n"," return reconstruction\n","\n"," # The call function will be used to pass inputs x through the core VAE\n"," def call(self, x): \n"," # Encode input to a prediction and latent space\n"," y_logit, z_mean, z_logsigma = self.encode(x)\n","\n"," # TODO: reparameterization\n"," z = self.reparameterize(z_mean, z_logsigma)\n"," # z = # TODO\n","\n"," # TODO: reconstruction\n"," recon = self.decode(z)\n"," # recon = # TODO\n"," return y_logit, z_mean, z_logsigma, recon\n","\n"," # Predict face or not face logit for given input x\n"," def predict(self, x):\n"," y_logit, z_mean, z_logsigma = self.encode(x)\n"," return y_logit\n","\n","dbvae = DB_VAE(latent_dim)"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"M-clbYAj2waY","colab_type":"text"},"source":["As stated, the encoder architecture is identical to the CNN from earlier in this lab. Note the outputs of our constructed DB_VAE model in the `call` function: `y_logit, z_mean, z_logsigma, recon`. Think carefully about why each of these is output and its significance to the problem at hand.\n","\n"]},{"cell_type":"markdown","metadata":{"id":"nbDNlslgQc5A","colab_type":"text"},"source":["### Adaptive resampling for automated debiasing with DB-VAE\n","\n","So, how can we actually use DB-VAE to train a debiased facial detection classifier?\n","\n","Recall the DB-VAE architecture: as input images are fed through the network, the encoder learns an estimate $\\mathcal{Q}(z|X)$ of the latent space. We want to increase the relative frequency of rare data by increased sampling of under-represented regions of the latent space. We can approximate $\\mathcal{Q}(z|X)$ using the frequency distributions of each of the learned latent variables, and then define the probability distribution of selecting a given datapoint $x$ based on this approximation. These probability distributions will be used during training to re-sample the data.\n","\n","You'll write a function to execute this update of the sampling probabilities, and then call this function within the DB-VAE training loop to actually debias the model. 
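\n","\n","To see what such a probability vector does, here is a small, hypothetical example (the lab's `loader.get_batch(batch_size, p_pos=...)` call below applies the probabilities internally; this is only an illustration of the mechanics):\n","\n","```python\n","import numpy as np\n","\n","# pretend sampling probabilities for N = 5 face images (they sum to 1)\n","training_sample_p = np.array([0.05, 0.05, 0.10, 0.30, 0.50])\n","# rare-feature faces (here, indices 3 and 4) are drawn much more often than common ones\n","batch_idx = np.random.choice(len(training_sample_p), size=3, p=training_sample_p)\n","print(batch_idx)\n","```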
"]},{"cell_type":"markdown","metadata":{"id":"Fej5FDu37cf7","colab_type":"text"},"source":["First, we've defined a short helper function `get_latent_mu` that returns the latent variable means returned by the encoder after a batch of images is inputted to the network:"]},{"cell_type":"code","metadata":{"id":"ewWbf7TE7wVc","colab_type":"code","colab":{}},"source":["# Function to return the means for an input image batch\n","def get_latent_mu(images, dbvae, batch_size=1024):\n"," N = images.shape[0]\n"," mu = np.zeros((N, latent_dim))\n"," for start_ind in range(0, N, batch_size):\n"," end_ind = min(start_ind+batch_size, N+1)\n"," batch = (images[start_ind:end_ind]).astype(np.float32)/255.\n"," _, batch_mu, _ = dbvae.encode(batch)\n"," mu[start_ind:end_ind] = batch_mu\n"," return mu"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"wn4yK3SC72bo","colab_type":"text"},"source":["Now, let's define the actual resampling algorithm `get_training_sample_probabilities`. Importantly note the argument `smoothing_fac`. This parameter tunes the degree of debiasing: for `smoothing_fac=0`, the re-sampled training set will tend towards falling uniformly over the latent space, i.e., the most extreme debiasing."]},{"cell_type":"code","metadata":{"id":"HiX9pmmC7_wn","colab_type":"code","colab":{}},"source":["### Resampling algorithm for DB-VAE ###\n","\n","'''Function that recomputes the sampling probabilities for images within a batch\n"," based on how they distribute across the training data'''\n","def get_training_sample_probabilities(images, dbvae, bins=10, smoothing_fac=0.001): \n"," print(\"Recomputing the sampling probabilities\")\n"," \n"," # TODO: run the input batch and get the latent variable means\n"," mu = get_latent_mu(images, dbvae)\n"," # mu = get_latent_mu('''TODO''') # TODO\n","\n"," # sampling probabilities for the images\n"," training_sample_p = np.zeros(mu.shape[0])\n"," \n"," # consider the distribution for each latent variable \n"," for i in range(latent_dim):\n"," \n"," latent_distribution = mu[:,i]\n"," # generate a histogram of the latent distribution\n"," hist_density, bin_edges = np.histogram(latent_distribution, density=True, bins=bins)\n","\n"," # find which latent bin every data sample falls in \n"," bin_edges[0] = -float('inf')\n"," bin_edges[-1] = float('inf')\n"," \n"," # TODO: call the digitize function to find which bins in the latent distribution \n"," # every data sample falls in to\n"," # https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.digitize.html\n"," bin_idx = np.digitize(latent_distribution, bin_edges)\n"," # bin_idx = np.digitize('''TODO''', '''TODO''') # TODO\n","\n"," # smooth the density function\n"," hist_smoothed_density = hist_density + smoothing_fac\n"," hist_smoothed_density = hist_smoothed_density / np.sum(hist_smoothed_density)\n","\n"," # invert the density function \n"," p = 1.0/(hist_smoothed_density[bin_idx-1])\n"," \n"," # TODO: normalize all probabilities\n"," p = p / np.sum(p)\n"," # p = # TODO\n"," \n"," # TODO: update sampling probabilities by considering whether the newly\n"," # computed p is greater than the existing sampling probabilities.\n"," training_sample_p = np.maximum(p, training_sample_p)\n"," # training_sample_p = # TODO\n"," \n"," # final normalization\n"," training_sample_p /= np.sum(training_sample_p)\n","\n"," return training_sample_p"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"pF14fQkVUs-a","colab_type":"text"},"source":["Now that we've defined the 
resampling update, we can train our DB-VAE model on the CelebA/ImageNet training data, and run the above operation to re-weight the importance of particular data points as we train the model. Remember again that we only want to debias for features relevant to *faces*, not the set of negative examples. Complete the code block below to execute the training loop!"]},{"cell_type":"code","metadata":{"id":"xwQs-Gu5bKEK","colab_type":"code","colab":{}},"source":["### Training the DB-VAE ###\n","\n","# Hyperparameters\n","batch_size = 32\n","learning_rate = 5e-4\n","latent_dim = 100\n","\n","# DB-VAE needs slightly more epochs to train since it's more complex than \n","# the standard classifier, so we use 6 instead of 2\n","num_epochs = 6 \n","\n","# instantiate a new DB-VAE model and optimizer\n","dbvae = DB_VAE(100)\n","optimizer = tf.keras.optimizers.Adam(learning_rate)\n","\n","# To define the training operation, we will use tf.function which is a powerful tool \n","# that lets us turn a Python function into a TensorFlow computation graph.\n","@tf.function\n","def debiasing_train_step(x, y):\n","\n"," with tf.GradientTape() as tape:\n"," # Feed input x into dbvae. Note that this is using the DB_VAE call function!\n"," y_logit, z_mean, z_logsigma, x_recon = dbvae(x)\n","\n"," '''TODO: call the DB_VAE loss function to compute the loss'''\n"," loss, class_loss = debiasing_loss_function(x, x_recon, y, y_logit, z_mean, z_logsigma)\n"," # loss, class_loss = debiasing_loss_function('''TODO arguments''') # TODO\n"," \n"," '''TODO: use the GradientTape.gradient method to compute the gradients.\n"," Hint: this is with respect to the trainable_variables of the dbvae.'''\n"," grads = tape.gradient(loss, dbvae.trainable_variables)\n"," # grads = tape.gradient('''TODO''', '''TODO''') # TODO\n","\n"," # apply gradients to variables\n"," optimizer.apply_gradients(zip(grads, dbvae.trainable_variables))\n"," return loss\n","\n","# get training faces from data loader\n","all_faces = loader.get_all_train_faces()\n","\n","if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists\n","\n","# The training loop -- outer loop iterates over the number of epochs\n","for i in range(num_epochs):\n","\n"," IPython.display.clear_output(wait=True)\n"," print(\"Starting epoch {}/{}\".format(i+1, num_epochs))\n","\n"," # Recompute data sampling probabilities\n"," '''TODO: recompute the sampling probabilities for debiasing'''\n"," p_faces = get_training_sample_probabilities(all_faces, dbvae)\n"," # p_faces = get_training_sample_probabilities('''TODO''', '''TODO''') # TODO\n"," \n"," # get a batch of training data and compute the training step\n"," for j in tqdm(range(loader.get_train_size() // batch_size)):\n"," # load a batch of data\n"," (x, y) = loader.get_batch(batch_size, p_pos=p_faces)\n"," # loss optimization\n"," loss = debiasing_train_step(x, y)\n"," \n"," # plot the progress every 500 steps\n"," if j % 500 == 0: \n"," mdl.util.plot_sample(x, y, dbvae)"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"uZBlWDPOVcHg","colab_type":"text"},"source":["Wonderful! Now we should have a trained and (hopefully!) 
debiased facial classification model, ready for evaluation!"]},{"cell_type":"markdown","metadata":{"id":"Eo34xC7MbaiQ","colab_type":"text"},"source":["## 2.6 Evaluation of DB-VAE on Test Dataset\n","\n","Finally, let's test our DB-VAE model on the test dataset, looking specifically at its accuracy on each of the \"Dark Male\", \"Dark Female\", \"Light Male\", and \"Light Female\" demographics. We will compare the performance of this debiased model against the (potentially biased) standard CNN from earlier in the lab."]},{"cell_type":"code","metadata":{"id":"bgK77aB9oDtX","colab_type":"code","colab":{}},"source":["dbvae_logits = [dbvae.predict(np.array(x, dtype=np.float32)) for x in test_faces]\n","dbvae_probs = tf.squeeze(tf.sigmoid(dbvae_logits))\n","\n","xx = np.arange(len(keys))\n","plt.bar(xx, standard_classifier_probs.numpy().mean(1), width=0.2, label=\"Standard CNN\")\n","plt.bar(xx+0.2, dbvae_probs.numpy().mean(1), width=0.2, label=\"DB-VAE\")\n","plt.xticks(xx, keys); \n","plt.title(\"Network predictions on test dataset\")\n","plt.ylabel(\"Probability\"); plt.legend(bbox_to_anchor=(1.04,1), loc=\"upper left\");\n"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"rESoXRPQo_mq","colab_type":"text"},"source":["## 2.7 Conclusion \n","\n","We encourage you to think about and maybe even address some questions raised by the approach and results outlined here:\n","\n","* How does the accuracy of the DB-VAE across the four demographics compare to that of the standard CNN? Do you find this result surprising in any way?\n","* How can the performance of the DB-VAE classifier be improved even further? We purposely did not optimize hyperparameters to leave this up to you! If you want to go further, try to optimize your model to achieve the best performance. **[Email us](mailto:introtodeeplearning-staff@mit.edu) a copy of your notebook with the 2.6 bar plot executed, and we'll give out prizes to the best performers!** \n","* In which applications (either related to facial detection or not!) would debiasing in this way be desired? Are there applications where you may not want to debias your model? \n","* Do you think it should be necessary for companies to demonstrate that their models, particularly in the context of tasks like facial detection, are not biased? If so, do you have thoughts on how this could be standardized and implemented?\n","* Do you have ideas for other ways to address issues of bias, particularly in terms of the training data?\n","\n","Hopefully this lab has shed some light on a few concepts, from vision-based tasks, to VAEs, to algorithmic bias. We like to think it has, but we're biased ;). \n","\n","![Faces](https://media1.tenor.com/images/44e1f590924eca94fe86067a4cf44c72/tenor.gif?itemid=3394328)"]}]}