The Multi-Tiered Caching (MTC) algorithm #1852

Merged
2 commits merged on Nov 10, 2024
91 changes: 91 additions & 0 deletions Miscellaneous Algorithms/Multi-Tiered Caching Algorithm/MTC.c
@@ -0,0 +1,91 @@
#include <stdio.h>
#include <stdlib.h>

#define L1_CACHE_SIZE 3
#define L2_CACHE_SIZE 5

typedef struct Cache {
    int *data;   // backing array of cached values
    int size;    // capacity of this tier
    int count;   // number of entries currently held
} Cache;

// Allocate a cache tier with a fixed capacity
Cache *initialize_cache(int size) {
    Cache *cache = malloc(sizeof(Cache));
    int *data = malloc(size * sizeof(int));
    if (cache == NULL || data == NULL) {
        fprintf(stderr, "Cache allocation failed.\n");
        exit(EXIT_FAILURE);
    }
    cache->data = data;
    cache->size = size;
    cache->count = 0;
    return cache;
}

// Check if a value exists in cache and return its position
int find_in_cache(Cache *cache, int value) {
    for (int i = 0; i < cache->count; i++) {
        if (cache->data[i] == value) {
            return i;
        }
    }
    return -1;
}

// Add value to cache with FIFO replacement
void add_to_cache(Cache *cache, int value) {
    if (cache->count < cache->size) {
        cache->data[cache->count++] = value;
    } else {
        // Shift data and add new value at the end
        for (int i = 1; i < cache->size; i++) {
            cache->data[i - 1] = cache->data[i];
        }
        cache->data[cache->size - 1] = value;
    }
}

// Multi-tiered lookup: check L1 first, then L2 (promoting to L1 on a hit),
// otherwise insert the value into both tiers
void multi_tiered_cache(Cache *L1, Cache *L2, int value) {
    int pos_in_L1 = find_in_cache(L1, value);
    int pos_in_L2 = find_in_cache(L2, value);

    if (pos_in_L1 != -1) {
        printf("Value %d found in L1 cache.\n", value);
    } else if (pos_in_L2 != -1) {
        printf("Value %d found in L2 cache. Moving to L1.\n", value);
        // Promote from L2 to L1
        add_to_cache(L1, value);
        // Remove from L2 by shifting later entries left
        for (int i = pos_in_L2; i < L2->count - 1; i++) {
            L2->data[i] = L2->data[i + 1];
        }
        L2->count--;
    } else {
        printf("Value %d not found in L1 or L2. Adding to L1 and L2.\n", value);
        add_to_cache(L1, value);
        add_to_cache(L2, value);
    }
}

// Free allocated memory for cache
void free_cache(Cache *cache) {
    free(cache->data);
    free(cache);
}

// Main function to test multi-tiered caching
int main() {
    Cache *L1 = initialize_cache(L1_CACHE_SIZE);
    Cache *L2 = initialize_cache(L2_CACHE_SIZE);

    int requests[] = {10, 20, 10, 30, 40, 50, 20, 60, 70, 10};
    int num_requests = sizeof(requests) / sizeof(requests[0]);

    for (int i = 0; i < num_requests; i++) {
        multi_tiered_cache(L1, L2, requests[i]);
    }

    // Free memory
    free_cache(L1);
    free_cache(L2);

    return 0;
}
61 changes: 61 additions & 0 deletions Miscellaneous Algorithms/Multi-Tiered Caching Algorithm/Readme.md
@@ -0,0 +1,61 @@
# Multi-Tiered Caching (MTC) Algorithm

This project implements a Multi-Tiered Caching (MTC) algorithm in C. The MTC algorithm manages multiple cache levels to improve data retrieval efficiency by keeping recently accessed data in the faster, higher-priority cache. It moves data between cache levels based on access patterns, reducing retrieval time and making better use of limited cache space under large data workloads.

## Table of Contents

- [Overview](#overview)
- [Features](#features)
- [Algorithm Explanation](#algorithm-explanation)
- [Input and Output](#input-and-output)


## Overview

The Multi-Tiered Caching (MTC) algorithm uses multiple cache levels (e.g., L1, L2) to store frequently accessed data closer to the processor, reducing data retrieval time. This approach is particularly useful for systems with limited memory and a high volume of data requests, as it minimizes access time and improves memory management.

## Features

- Multi-tiered caching system with multiple cache levels (e.g., L1 and L2).
- Promotion of data from L2 to L1 when it is accessed, so recently used values sit in the faster tier.
- Simple FIFO (First-In-First-Out) replacement policy in each cache tier.
- Efficient data access management for large datasets and high-throughput applications.

## Algorithm Explanation

1. **Cache Initialization**: Allocate memory for each cache level with a predefined size (L1 and L2).
2. **Data Lookup**:
   - Check whether the data exists in the higher-priority cache (L1).
   - If not in L1, search the lower-priority cache (L2).
3. **Data Movement**:
   - If found in L2, promote the data to L1 (removing it from L2) so future requests hit the faster tier.
   - If not found in either cache, add it to both L1 and L2.
4. **Replacement Policy**: Each tier uses a First-In-First-Out (FIFO) approach, evicting the oldest entry when the tier is full. A generalized sketch of this flow is shown below.
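
The following is a condensed, self-contained sketch, not part of the submitted MTC.c, showing how the same lookup-and-promote flow generalizes from two hard-coded tiers to an array of tiers. The `Tier` struct, `tier_add()`, and `tier_find()` mirror `Cache`, `add_to_cache()`, and `find_in_cache()` in MTC.c; the names, tier count, and capacities here are illustrative, and error handling is omitted for brevity.

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int *data;   // backing array
    int size;    // capacity
    int count;   // entries currently held
} Tier;

// FIFO insert, same policy as add_to_cache() in MTC.c
static void tier_add(Tier *t, int value) {
    if (t->count < t->size) {
        t->data[t->count++] = value;
        return;
    }
    for (int i = 1; i < t->size; i++) {
        t->data[i - 1] = t->data[i];
    }
    t->data[t->size - 1] = value;
}

// Linear scan, same as find_in_cache() in MTC.c
static int tier_find(const Tier *t, int value) {
    for (int i = 0; i < t->count; i++) {
        if (t->data[i] == value) {
            return i;
        }
    }
    return -1;
}

// Search tiers from fastest (index 0) to slowest; promote to tier 0 on a hit
// below the top tier; insert into every tier on a complete miss.
static void access_value(Tier *tiers, int n_tiers, int value) {
    for (int level = 0; level < n_tiers; level++) {
        int pos = tier_find(&tiers[level], value);
        if (pos < 0) {
            continue;
        }
        printf("Value %d found in tier %d.\n", value, level);
        if (level > 0) {
            for (int i = pos; i < tiers[level].count - 1; i++) {
                tiers[level].data[i] = tiers[level].data[i + 1];
            }
            tiers[level].count--;
            tier_add(&tiers[0], value);
        }
        return;
    }
    printf("Value %d missed every tier. Adding to all tiers.\n", value);
    for (int level = 0; level < n_tiers; level++) {
        tier_add(&tiers[level], value);
    }
}

int main(void) {
    enum { N_TIERS = 3 };
    int sizes[N_TIERS] = {3, 5, 8};   // illustrative capacities, fastest first
    Tier tiers[N_TIERS];
    for (int i = 0; i < N_TIERS; i++) {
        tiers[i].data = malloc(sizes[i] * sizeof(int));
        tiers[i].size = sizes[i];
        tiers[i].count = 0;
    }

    int requests[] = {10, 20, 10, 30, 40, 50, 20};
    int n = sizeof(requests) / sizeof(requests[0]);
    for (int i = 0; i < n; i++) {
        access_value(tiers, N_TIERS, requests[i]);
    }

    for (int i = 0; i < N_TIERS; i++) {
        free(tiers[i].data);
    }
    return 0;
}
```

With two tiers of sizes 3 and 5, `access_value()` behaves the same way as `multi_tiered_cache()` in MTC.c.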

## Input and Output

### Input

- **Data Requests**: An array of integers representing the data access requests.
- **L1 and L2 Cache Sizes**: Fixed cache sizes for each level (e.g., L1 with 3 slots, L2 with 5 slots).
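
Both inputs are compiled into MTC.c rather than read at runtime. To try a different workload or different tier sizes, edit the following lines (shown as they appear in MTC.c) and rebuild:

```c
#define L1_CACHE_SIZE 3   // slots in the fast tier
#define L2_CACHE_SIZE 5   // slots in the larger, slower tier

int requests[] = {10, 20, 10, 30, 40, 50, 20, 60, 70, 10};   // in main()
```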

### Output

For each request, the program prints:
- Whether the requested data was found in L1, found in L2, or missed both caches.
- Any movement between cache levels (promotion from L2 to L1, or insertion into both tiers on a miss).
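
To reproduce the example run below, compile MTC.c with any C compiler and execute the resulting binary, for example `gcc MTC.c -o mtc && ./mtc` (the exact invocation depends on your toolchain).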

#### Example Input
Requests: {10, 20, 10, 30, 40, 50, 20, 60, 70, 10}
L1 Cache Size: 3
L2 Cache Size: 5

#### Example Output

Value 10 not found in L1 or L2. Adding to L1 and L2.
Value 20 not found in L1 or L2. Adding to L1 and L2.
Value 10 found in L1 cache.
Value 30 not found in L1 or L2. Adding to L1 and L2.
Value 40 not found in L1 or L2. Adding to L1 and L2.
Value 50 not found in L1 or L2. Adding to L1 and L2.
Value 20 found in L2 cache. Moving to L1.
Value 60 not found in L1 or L2. Adding to L1 and L2.
Value 70 not found in L1 or L2. Adding to L1 and L2.
Value 10 not found in L1 or L2. Adding to L1 and L2.

The second request for 20 hits L2 rather than L1 because 20 has already been evicted from the three-slot L1 by the time it recurs; the final request for 10 misses both tiers because newer values have pushed it out of L1 and L2.