- Introduction
- Dataset overview
- Data collection and processing
- Attribute information
- Objective
- Insights
- File overview
Modern smartwatches and smartphones have built-in sensors that can track what a person is doing in real time. These sensors help detect movements, steps, body position, and other activities, which makes it possible to monitor health, fitness, and daily behavior easily and accurately.
This project uses data collected from smartphones to classify six different human activities. The aim is to predict which activity a person is performing (such as walking or sitting) from motion sensor data recorded by a waist-mounted smartphone.
The dataset was built from 30 volunteers aged between 19 and 48. Each participant wore a Samsung Galaxy S II smartphone on their waist and performed six daily activities:
- WALKING
- WALKING_UPSTAIRS
- WALKING_DOWNSTAIRS
- SITTING
- STANDING
- LAYING
The phone's accelerometer and gyroscope recorded 3-axis linear acceleration and angular velocity at 50Hz. The data was then manually labeled using video recordings.
- Sampling Rate: 50Hz
- Windowing: 2.56 seconds per window (128 readings), with 50% overlap
- Noise Filtering: Applied low-pass Butterworth filter (see the sketch after this list)
- Feature Extraction:
  - Time-Domain: Mean, standard deviation, max, min, MAD, energy, entropy, etc.
  - Frequency-Domain: Fast Fourier Transform (FFT) based features like frequency mean, frequency energy, etc.
- Axes: X, Y, Z for both accelerometer and gyroscope data
- Prefixes: `t` for time-domain features, `f` for frequency-domain features
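The windowing and filtering steps above can be sketched in a few lines of Python. The 50 Hz rate, 128-sample windows, and 50% overlap follow the description above; the 20 Hz cutoff, the filter order, and the specific features computed here are illustrative assumptions, not necessarily the exact values used to build the original dataset.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 50              # sampling rate in Hz
WINDOW = 128         # 2.56 s per window at 50 Hz
STEP = WINDOW // 2   # 50% overlap

def low_pass(signal, cutoff=20.0, order=3):
    """Zero-phase low-pass Butterworth filter on a 1-D signal (cutoff is an assumption)."""
    b, a = butter(order, cutoff, btype="low", fs=FS)
    return filtfilt(b, a, signal)

def sliding_windows(signal):
    """Split a 1-D signal into overlapping 128-sample windows with 50% overlap."""
    n_windows = (len(signal) - WINDOW) // STEP + 1
    return np.stack([signal[i * STEP:i * STEP + WINDOW] for i in range(n_windows)])

# Filter and window one sensor axis, then compute a few of the
# time-domain features listed above (mean, std, MAD, energy).
raw = np.random.randn(1000)                  # placeholder for one raw accelerometer axis
windows = sliding_windows(low_pass(raw))
medians = np.median(windows, axis=1, keepdims=True)
features = np.column_stack([
    windows.mean(axis=1),                          # mean
    windows.std(axis=1),                           # standard deviation
    np.median(np.abs(windows - medians), axis=1),  # median absolute deviation (MAD)
    (windows ** 2).sum(axis=1) / WINDOW,           # energy (mean of squares)
])
print(features.shape)  # (number of windows, 4)
```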
Each record includes the following (see the loading sketch after this list):
- Triaxial total acceleration (accelerometer)
- Triaxial body acceleration (after removing gravity)
- Triaxial angular velocity (gyroscope)
- 561 features derived from the time and frequency domains
- Activity label (one of six)
- Subject ID (person who performed the activity)
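A minimal loading sketch, assuming the standard UCI HAR text-file layout (X_train.txt, y_train.txt, subject_train.txt); if the project stores the data differently, only the paths change.

```python
import pandas as pd

X = pd.read_csv("X_train.txt", sep=r"\s+", header=None)                      # 561 derived features
y = pd.read_csv("y_train.txt", header=None, names=["activity"])              # activity labels (1-6)
subjects = pd.read_csv("subject_train.txt", header=None, names=["subject"])  # subject IDs

print(X.shape)                       # (number of windows, 561)
print(y["activity"].value_counts())  # distribution over the six activities
```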
To train and evaluate machine learning models that can accurately classify human activities from sensor data. Feature selection techniques were applied to handle the high-dimensional dataset, improving model performance and reducing computational cost.
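As one hedged example of this step, the sketch below pairs univariate feature selection (SelectKBest) with a random forest in scikit-learn; the project's own code may use different selection methods and classifiers.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Same file layout as the loading sketch above.
X = pd.read_csv("X_train.txt", sep=r"\s+", header=None)
y = pd.read_csv("y_train.txt", header=None, names=["activity"])["activity"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

model = Pipeline([
    ("select", SelectKBest(score_func=f_classif, k=200)),  # keep 200 of the 561 features
    ("clf", RandomForestClassifier(n_estimators=200, random_state=42)),
])
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

A stricter evaluation would split by subject (for example with GroupShuffleSplit on the subject IDs) so that windows from the same person never appear in both the training and test sets.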
This project demonstrates how smartphones can detect and classify user activities with high accuracy using sensor data and machine learning. It showcases several important concepts, which the sketch after this list combines:
- Sensor data preprocessing
- Feature engineering
- Dimensionality reduction
- Supervised classification
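One way to combine these concepts in a single scikit-learn pipeline is sketched below: standardization (preprocessing), PCA (dimensionality reduction), and a linear SVM (supervised classification). This is illustrative, not necessarily the exact pipeline used in the project.

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Same file layout as the loading sketch above.
X = pd.read_csv("X_train.txt", sep=r"\s+", header=None)
y = pd.read_csv("y_train.txt", header=None, names=["activity"])["activity"]

pipeline = Pipeline([
    ("scale", StandardScaler()),        # sensor-feature preprocessing
    ("pca", PCA(n_components=0.95)),    # keep components explaining 95% of the variance
    ("svm", LinearSVC(max_iter=5000)),  # supervised classifier
])
scores = cross_val_score(pipeline, X, y, cv=5)  # 5-fold cross-validated accuracy
print("mean CV accuracy:", scores.mean())
```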
- human_activity_recog -> contains various algorithms and prepares the trained model
- activity_with_tkinter -> contains a GUI window that makes predictions using the model
- run_model -> takes the path of an input file and generates a CSV file with the predicted results (a sketch of such a script follows below)
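For illustration, a run_model-style batch prediction script might look like the sketch below. The saved-model filename (har_model.pkl) and the assumption that the input CSV holds one window of the 561 features per row are hypothetical, not the project's actual interface.

```python
import sys

import joblib
import pandas as pd

def predict_file(input_path: str, output_path: str = "predictions.csv") -> None:
    """Predict an activity for each row of the input file and save the result as CSV."""
    model = joblib.load("har_model.pkl")  # hypothetical filename for the saved model
    features = pd.read_csv(input_path)    # assumes one window of 561 features per row
    predictions = model.predict(features)
    features.assign(predicted_activity=predictions).to_csv(output_path, index=False)

if __name__ == "__main__":
    predict_file(sys.argv[1])
```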