Project: Data Lake

Introduction

A music streaming startup, Sparkify, has grown its user base and song database substantially, prompting a move from a data warehouse to a data lake. Their data resides in S3: a directory of JSON logs of user activity on the app, and a directory of JSON metadata for the songs in their catalog.

To create the data lake, an ETL pipeline will extract the data from S3, process it with Spark, and load it back into S3 as a set of dimensional tables. This will let the analytics team continue finding insights into what songs their users are listening to.

S3 data

Song data: s3://udacity-dend/song_data
Log data: s3://udacity-dend/log_data

Song Dataset

The first dataset is a subset of real data from the Million Song Dataset. Each file is in JSON format and contains metadata about a song and the artist of that song. The files are partitioned by the first three letters of each song's track ID. For example, here are the filepaths to two files in this dataset:

song_data/A/B/C/TRABCEI128F424C983.json
song_data/A/A/B/TRAABJL12903CDCF1A.json

And below is an example of what a single song file, TRAABJL12903CDCF1A.json, looks like.

{"num_songs": 1, "artist_id": "ARJIE2Y1187B994AB7", "artist_latitude": null, "artist_longitude": null, "artist_location": "", "artist_name": "Line Renaud", "song_id": "SOUPIRU12A6D4FA1E1", "title": "Der Kleine Dompfaff", "duration": 152.92036, "year": 0}

Log Dataset

The second dataset consists of log files in JSON format generated by an event simulator based on the songs in the dataset above. These simulate activity logs from an imaginary music streaming app, based on configuration settings. The log files are partitioned by year and month. For example, here are the filepaths to two files in this dataset:

log_data/2018/11/2018-11-12-events.json
log_data/2018/11/2018-11-13-events.json

Each log record has the following columns:

artist, auth, firstName, gender, itemInSession, lastName, length, level, location, method, page, registration, sessionId, song, status, ts, userAgent, userId
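
Continuing the sketch above, the log files can be read the same way. Filtering to page == NextSong and converting ts are assumptions consistent with the songplays and time tables described below (ts is taken to be an epoch timestamp in milliseconds):

```python
from pyspark.sql import functions as F

# Log files are partitioned by year and month, so two wildcard levels suffice.
log_df = spark.read.json("s3a://udacity-dend/log_data/*/*/*.json")

# Only NextSong events represent actual song plays (see the songplays
# fact table below).
log_df = log_df.filter(F.col("page") == "NextSong")

# ts is assumed to be epoch milliseconds; casting seconds to a timestamp
# yields the start_time column the time table derives its units from.
log_df = log_df.withColumn("start_time", (F.col("ts") / 1000).cast("timestamp"))
```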

Schema for Song Play Analysis

The song and log data will be copied from S3, processed in Spark, then loaded back into S3 as parquet files. The schema will be as follows:

Fact table:

songplays - records in event data associated with song plays, i.e. records with page NextSong
columns: songplay_id, start_time, user_id, level, song_id, artist_id, session_id, location, user_agent

Dimension tables:

users - users in the app
columns: user_id, first_name, last_name, gender, level

songs - songs in the music database
columns: song_id, title, artist_id, year, duration

artists - artists in the music database
columns: artist_id, name, location, latitude, longitude

time - timestamps of records in songplays broken down into specific units
columns: start_time, hour, day, week, month, year, weekday
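
Continuing from the song_df and log_df reads above, here is a minimal sketch of deriving two of these tables and writing them back to S3 as parquet. The partition columns and the output bucket name are assumptions, not something this README specifies:

```python
from pyspark.sql import functions as F

OUTPUT = "s3a://sparkify-lake-output"  # hypothetical output bucket

# songs dimension: one row per song, straight from the song dataset.
songs_table = (song_df
               .select("song_id", "title", "artist_id", "year", "duration")
               .dropDuplicates(["song_id"]))

# Partitioning by year and artist_id is an assumed layout choice that lets
# queries filtering on either column prune parquet files.
(songs_table.write.mode("overwrite")
            .partitionBy("year", "artist_id")
            .parquet(f"{OUTPUT}/songs/"))

# time dimension: calendar units derived from the converted start_time column.
time_table = (log_df
              .select("start_time",
                      F.hour("start_time").alias("hour"),
                      F.dayofmonth("start_time").alias("day"),
                      F.weekofyear("start_time").alias("week"),
                      F.month("start_time").alias("month"),
                      F.year("start_time").alias("year"),
                      F.dayofweek("start_time").alias("weekday"))
              .dropDuplicates(["start_time"]))

(time_table.write.mode("overwrite")
           .partitionBy("year", "month")
           .parquet(f"{OUTPUT}/time/"))
```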

Pipeline considerations

The pipeline runs in two phases. The first processes the song data into the songs and artists tables. The second processes the log data into the users and time tables, then reads the song data back in and joins it with the log dataframe to create the songplays table. Although everything could be done in one run, splitting the pipeline into two phases makes troubleshooting easier, since song processing and log processing are isolated from each other. A sketch of this two-phase structure follows.
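
In outline, the two phases might be organized as below. The function names, output bucket, and join keys are assumptions; since the logs carry no song_id, matching on song title, artist name, and duration is one common choice:

```python
from pyspark.sql import SparkSession, functions as F

INPUT = "s3a://udacity-dend"           # source bucket from the S3 data section
OUTPUT = "s3a://sparkify-lake-output"  # hypothetical output bucket

def process_song_data(spark):
    """Phase 1: song dataset -> songs and artists tables."""
    song_df = spark.read.json(f"{INPUT}/song_data/*/*/*/*.json")
    # ... derive and write the songs/ and artists/ parquet tables ...

def process_log_data(spark):
    """Phase 2: log dataset -> users, time, and songplays tables."""
    log_df = (spark.read.json(f"{INPUT}/log_data/*/*/*.json")
                   .filter(F.col("page") == "NextSong"))

    # Phase 2 reads the song metadata itself, so it can run and be
    # debugged independently of phase 1.
    song_df = spark.read.json(f"{INPUT}/song_data/*/*/*/*.json")

    # Logs carry no song_id, so the join matches on title, artist name,
    # and duration (an assumed matching rule).
    songplays = log_df.join(
        song_df,
        (log_df.song == song_df.title)
        & (log_df.artist == song_df.artist_name)
        & (log_df.length == song_df.duration),
        "left",
    )
    # ... select the songplays columns and write the parquet table ...

def main():
    spark = SparkSession.builder.appName("sparkify-etl").getOrCreate()
    process_song_data(spark)  # phase 1
    process_log_data(spark)   # phase 2

if __name__ == "__main__":
    main()
```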
