
Master's Final Degree Project

Artificial Intelligence and Big Data

The motivation behind the project is to work as a team with the idea of bringing together everything we've seen; in other words:

Being able to design, research, develop, and deploy a Data Science idea: designing a Big Data architecture from which to train a model with a conclusion in mind, all while being ethical and not breaking any EU laws.

For a reference of the changes, please check out our CHANGELOG.

Grade

To be graded

Table of Contents

  1. Title
  2. Description
  3. Documentation
  4. Objectives
  5. Ethics
  6. Design
    1. Flow of the Data
    2. Data Structure
    3. Data Sources
  7. Product
    1. Product Roadmap
    2. How is the Product managed?
  8. Methodology
    1. Product Owner
    2. Tech/Team Lead
    3. Scrum Master
    4. Software
  9. Tech Stack
    1. Programming Language
    2. ETL
    3. Database
    4. Cloud computing
    5. Infrastructure
  10. Usage
    1. Requirements
    2. Install the project
    3. How to boot it
    4. Stop the execution
    5. Deployment
  11. Team
    1. Infrastructure (Big Data Architecture)
    2. Data Extraction/Mining
    3. Data Normalization
    4. Data Storage/Loading
    5. Data Cleansing
    6. Data Science/Modeling (AI Engineering, sort of)
    7. Data Visualization
    8. Deploy (CI/CD integration)
  12. License
  13. Legal Notice
  14. Use of the Data
  15. Credits
  16. Gratitude

Title

↑ Back to top

"Hype" is all you need

Description

↑ Back to top

This is research into what defines the success of a film, and into whether that success can be predicted (proportionally) from the hype (expectation) generated around it; the approach is meant to be expandable to series, anime, video games, or any other type of multimedia content.

As possible definitions of a film's success, the intent is to be able to predict:

  • The profit generated by a film relative to its initial investment, and how well it will be received
  • The acceptance/acclaim of a film with respect to the initial "hype"
  • The rating on IMDB one week after release; and where we say IMDB, read other platforms too (Rotten Tomatoes, Metacritic)
  • Its success (as defined above) one week after its release

For this, various data sources will be used, such as Twitter, Reddit, YouTube, IMDB, and whichever others we discover as the research progresses. One of the main and central components of the application is sentiment analysis, which becomes the main driver of the prediction.

Documentation

↑ Back to top

For the official documentation, visit the /docs folder.

Objectives

↑ Back to top

In no specific order:

  • Work as a team of Data Scientists with (almost) no experience in the data field.
  • Use knowledge from every subject seen in the degree.
  • Develop all the required components and integrate them.
  • Design a Data Infrastructure.
  • Research a movie's hype, its success, and its total box office.
  • Manage and develop an E2E (end-to-end) Big Data project, from idea to analysis/visualizations.
  • Apply AI Engineering techniques to deliver a product that showcases our conclusion.
  • Develop the (A.I. and machine learning) models required for the desired outcome.
  • Use Cloud Computing services where needed and learn to work with them.
  • Fulfill a Data Science project's requirements with a Data Team.
  • Try to understand and predict the box office of (mainly) blockbuster movies, whether independent or part of a franchise.

Ethics

↑ Back to top

Our idea is to have a non-biased model that does not get influenced by people's opinions; rather, one that can tell the difference between the general sentiment and how well that sentiment reflects the movie's success.

Regarding ethics, our goal wouldn't be to force-feed certain movies, nor to dictate what people should do or watch; it'd be to offer just another tool for deciding what you may want to see.

Design

↑ Back to top

Flow of the Data

↑ To the section

  1. Node-RED sniffs the data and sends it to
  2. Kafka, which distributes it to
  3. Spark, where it is transformed and stored in
  4. MongoDB, to be later retrieved with
  5. Google Colab/Python,
  6. to be trained with Spark, saving the predictions in
  7. MongoDB, so they can be accessed from
  8. PowerBI/Tableau, and displayed in
  9. an Azure Web Service, with a simple front end and an even simpler interaction
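
To make steps 2-4 concrete, here is a minimal PySpark Structured Streaming sketch, assuming JSON events; the topic, schema, database, and collection names are illustrative assumptions, not the project's actual ones, and the spark-sql-kafka and MongoDB Spark connector packages are assumed to be available.

# Minimal sketch of steps 2-4: consume JSON events from Kafka,
# parse them in Spark, and append them to MongoDB.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType

spark = SparkSession.builder.appName("hype-etl").getOrCreate()

# Hypothetical shape of an incoming event; every source carries an "origin" tag
schema = StructType([
    StructField("origin", StringType()),  # twitter, reddit, youtube, ...
    StructField("movie", StringType()),
    StructField("text", StringType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "hype-events")  # assumed topic name
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("doc"))
    .select("doc.*")
)

def save_batch(batch_df, _epoch_id):
    # Append each micro-batch to the raw collection (connector v10-style options)
    (batch_df.write.format("mongodb").mode("append")
     .option("connection.uri", "mongodb://localhost:27017")
     .option("database", "hype")
     .option("collection", "raw_events")
     .save())

events.writeStream.foreachBatch(save_batch).start().awaitTermination()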

Data Structure

↑ To the section

All the data will carry an origin tag/field so as to better identify its provenance and properties.
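
For illustration, a stored document could look like the following Python dict; everything except the origin field is a made-up assumption:

# Hypothetical example of a collected document; only the "origin"
# tag/field is prescribed by our data structure, the rest is illustrative.
document = {
    "origin": "twitter",                   # which data source produced it
    "movie": "Some Upcoming Blockbuster",  # made-up title
    "text": "That trailer was unreal, day-one watch for me",
    "collected_at": "2022-05-01T12:00:00Z",
}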

Data Lake

Instead of following the classic ETL paradigm (first extract the data, then transform it BEFORE loading it), a Data Lake strives for ELT: extract the data, load it FIRST, then transform it when you need to use it.

We'll be using it to store all the (raw) data we collect over the span of the project. We'll have Diogenes syndrome towards the data: we'd rather have to delete data later than not have enough.

Data Warehouse

From this point forward we should have quality data, data that is "clean". Following the aforementioned ELT paradigm, a Data Warehouse is where the information is loaded ONCE transformed.

It will serve as the main storage for our models: all the data that reaches this point should and must be clean, standardized, normalized, and regularized. It should be as ready as possible for the model.
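
As a toy sketch of what "clean, standardized, normalized" can mean before modeling (the column names and the scaler choice are assumptions, not the project's actual pipeline):

import pandas as pd
from sklearn.preprocessing import StandardScaler

# Made-up warehouse-bound features for three movies
df = pd.DataFrame({
    "movie": ["A", "B", "C"],
    "mentions": [1200.0, 540.0, None],
    "avg_sentiment": [0.8, -0.1, 0.3],
})

df = df.dropna()  # cleaning: incomplete rows never reach the warehouse
# standardizing: zero mean, unit variance, so the features share a scale
df[["mentions", "avg_sentiment"]] = StandardScaler().fit_transform(
    df[["mentions", "avg_sentiment"]]
)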

Data Sources

↑ To the section

  • IMDB
  • Twitter
  • YouTube
  • Reddit
  • Google Trends

Product

↑ Back to top

We're not going to sell anything, but our Product idea is to have a model that retrains on different sources of information, displaying the outcome on the web along with some storytelling around the conclusion.

Product Roadmap

↑ To the section

Original estimation

↑ To the section

The initial estimation; it should be updated with the real roadmap at the end.

Initial product roadmap

Real

↑ To the section

The project has not been finished yet.

How is the Product managed?

↑ To the section

We've split the product into different phases: the traditional Product phases, with the Data Science development ones expanded:

Traditional

↑ To the section

  • Product Identification
  • Product Planning
  • Product Development
  • Product Control
  • Product Closure

Product Development

↑ To the section

  • Infrastructure
  • Data Extraction
  • Data Normalization
  • Data Storage/Loading
  • Data Cleansing
  • Data Science/Modeling
  • Data Visualization
  • Deploy
  • Documentation Draft
  • Validation

Methodology

↑ Back to top

SCRUM

  • Kanban Board
  • Planning Poker

Product Owner

↑ To the section

Pepe

Tech/Team Lead

↑ To the section

Pepe

Scrum Master

↑ To the section

Our teachers

Software

↑ To the section

  • Trello

Tech Stack

↑ Back to top

Programming Language

↑ To the section

  • Python
    An easy-to-learn language, chosen mainly because it's what the team is most comfortable with for Big Data and A.I. technologies and their usage. There were alternatives such as Scala, C++, or Java.

ETL

↑ To the section

  1. Node-RED
    A lightweight, graph/node-based npm package for flow development, used to connect services such as APIs and IoT devices.
  2. Kafka
    A data broker, one of the most used ones, if not the most used; it is meant to be used with Java or Scala, but it can be interacted with through plugins, add-ons, and shell scripts.
  3. Spark
    A highly efficient cluster computation and parallelization engine. Its API allows for Python (PySpark), Java, Scala, R, and SQL, which makes it a perfect fit for our team. It is in high demand nowadays.

Database

↑ To the section

  • MongoDB
    An open-source NoSQL document-based database; it has a great community and multiple implementations and integrations.
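
As a small sketch of how stored documents could later be retrieved from Python/Google Colab (steps 4-5 of the data flow) using pymongo; the connection string, database, collection, and field names are the same illustrative assumptions as above:

from pymongo import MongoClient

# Assumed connection string and names, matching the earlier sketches
client = MongoClient("mongodb://localhost:27017")
raw_events = client["hype"]["raw_events"]

# The shared "origin" field lets us filter per data source
for doc in raw_events.find({"origin": "twitter"}).limit(5):
    print(doc["movie"], "->", doc["text"])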

Cloud computing

↑ To the section

  • AWS or Azure
    Both are great cloud computing providers that offer similar services, each with their own pros and cons, but both are top-notch in the world of cloud computing, data science, and DaaS (Data as a Service).
  • Terraform (and maybe AWS CloudFormation)
    IaC (Infrastructure as Code) is the way to go. CloudFormation forces/restricts us to one provider, but what matters is that, however we end up developing and deploying our cloud infrastructure (if we ever do), it is cloud agnostic if possible, easily replicable, and highly reliable: it should always produce the same output, the same outcome, with as little room for human mistake as possible.

Infrastructure

↑ To the section

  • Docker
    An open-source software container service that adds an extra layer of abstraction for packaging software solutions.
  • Compose
    A cloud-agnostic standard for container orchestration maintained by Docker and supported by Docker Swarm, AWS ECS, Azure Container Instances, and many more.

Usage

↑ Back to top

Requirements

↑ To the section

  • Docker
    • Engine Version 20.10

    • Compose Version 1.29.2

  • Python
    • >= 3.6.x

  • Node
    • >= v15.14.0

All image versions will be pinned in each Dockerfile to an exact version; avoid latest for security reasons. Upgrades will be manual.

Install the project

↑ To the section

Execute the following commands in the folder you want to store the project in:

git clone https://github.com/jofaval/tfm-iabd.git
cd tfm-iabd

And now configure the project's branches with Git flow:

For Windows

cd tools/windows/git/
git-flow.bat

For Linux

cd tools/linux/git/
./git-flow.sh

How to boot it

↑ To the section

Execute the corresponding boot script under tools/windows/infra/ or tools/linux/infra/

or execute the following commands in the shell:

cd app/infra
docker-compose up -d

Stop the execution

↑ To the section

Execute the tools/windows/infra/stop.bat or the tools/linux/infra/stop.sh file

or execute the following commands in the shell:

cd app/infra
docker-compose down

Deployment

↑ To the section

Handled by the GitHub Actions workflow.

Team

↑ Back to top

Name                 Role
Diego del Caño       Data Scientist / Data Analyst
Juan Crespin Valero  Data Analyst / SysAdmin
Nerea Gluskova       Data Engineer / SysAdmin
Pepe Fabra Valverde  Data Architect / Data Engineer / Data Scientist

Table generated with: https://www.tablesgenerator.com/markdown_tables

I (Pepe) will be supervising each task, but we're all out here to help each other.

Infrastructure (Big Data Architecture)

↑ To the section

Description

Defined as: preparation of the Docker images, ready and interconnected so as to support the architecture.

Software

Docker (docker-compose), Linux and, if cloud computing were to be required, AWS, Azure, or Google Cloud.

Elements

The information regarding the infrastructure is in the Infrastructure section.

Assignees

  • Nerea
  • Juan
  • Pepe (only if cloud computing is required)

Data Extraction/Mining

↑ To the section

Description

Defined as: retrieving all the data necessary for the project's work (JUST retrieving data).

Software

Node-RED

Assignees

  • Nerea
  • Pepe
  • Everyone to search for Data Sources

Data Sources

  • Twitter Developer API
  • IMDB API
  • YouTube API
  • Reddit API
  • Google Trends

Data Normalization

↑ To the section

Description

Defined as: after the data has been retrieved, creating a middle ground with the common data that may be needed, so that all sources end up with the same Data Model; in other words, standardizing the sources.

Software

Node-RED

Assignees

  • Diego
  • Nerea
  • Juan
  • Pepe

Data Storage/Loading

↑ To the section

Description

Defined as: storing the normalized data in the NoSQL DB (most likely MongoDB).

Software

Node-RED

Assignees

  • Nerea

Data Cleansing

↑ To the section

Description

Defined as: at this point the data has been normalized but not cleaned; after this step, the data should be ready for the model to train with.

Software

Python (Google Colab?)

Assignees

  • Diego
  • Juan
  • Pepe

Data Science/Modeling (AI Engineering, sort of)

↑ To the section

Description

Defined as: developing and implementing the model(s) required for the desired performance and outcome.

Artificial Intelligence and/or Machine Learning.

Software

Python (Google Colab?)

Assignees

  • Diego
  • Pepe

Data Visualization

↑ To the section

Description

Defined as: designing and developing the story (storytelling) and all the required/desired visualizations for whatever outcome(s) we want.

Software

PowerBI or Tableau, depending on taste.

Assignees

  • Juan
  • Nerea
  • Diego

Deploy (CI/CD integration)

↑ To the section

Description

Defined as: preparing the connections and the proper usage of the model via endpoints and utilities.

Software

Cloud Platform (if used), Git (Github)

Assignees

  • Diego
  • Pepe

License

↑ Back to top

The license used (the MIT License) can be seen here, or you can read it locally by downloading the LICENSE file.

Legal Notice

↑ Back to top

All the data is used and stored in compliance with the European Union's legislation, more precisely with Spain's laws, which comply with the E.U.'s GDPR (General Data Protection Regulation), and following the standards described in the Charter of European Digital Rights (EDRi, the EDR initiative) surrounding the use of A.I. for sentiment analysis and, overall, the possible bias it may present to the user; all of this so as to be ethical and prepare the model for the years to come.

For more information about the ethics of our model, please refer to the Ethics section.

Use of the Data

↑ Back to top

We plan to use the extracted data, and the data it provides, to better analyze the sentiments of users all around the world about the hype generated by a movie, whether from its announcement, a trailer, or some celebrity talking about it.

By analyzing the general feeling, whether positive, negative, or neutral, we can determine, one user at a time, whether they had a good or bad experience, whether they were hyped or not, so that we can later steer our model towards the idea people have/had of the movie.

We'll collect the raw text data (the more of a thread there is, the more information we'll collect) so we can tokenize, lemmatize, preprocess, and prepare the text. Our methodology is to preprocess and clean the data, tokenize it into a word embedding, and use Transformers, maybe Siamese Neural Networks, but most likely a HuggingFace model such as mT5 or BERT, to derive logical entailment with NLI so that we can "classify the data".
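
As a minimal sketch of that classification idea using the HuggingFace transformers library (the zero-shot pipeline runs NLI under the hood; the default model and the candidate labels are assumptions, not the project's final setup):

from transformers import pipeline

# Zero-shot classification tests each candidate label as a hypothesis
# entailed (or not) by the post. The default model is an assumption;
# the project may swap in an mT5/BERT-based setup instead.
classifier = pipeline("zero-shot-classification")

post = "That trailer was unreal, I already booked opening night."
result = classifier(post, candidate_labels=["hyped", "indifferent", "disappointed"])

# Most likely label first, with its entailment-derived score
print(result["labels"][0], result["scores"][0])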

We may even use reviews or the general feeling; in the case of adaptations, we'd have even more information.

To display the conclusions obtained thanks to the insights from the extracted data, we'll use personal websites, GitHub of course, and a Medium article. We'd also like to research and develop a paper so that we can more clearly provide, document, and explain the results obtained and their conclusions.

As for the tools: Tableau, though maybe we could get PowerBI through a student license; it's unclear at the moment.

Credits

↑ Back to top

  • Ismael, for the idea

Gratitude

↑ Back to top

TODO