This repository provides a simple example of setting up a FastAPI application with a PostgreSQL database using Docker for containerization. It demonstrates how to create a RESTful API using FastAPI and use a PostgreSQL database as the backend storage.
Before you begin, ensure you have the following installed on your system:

- Docker
- Docker Compose
Follow these steps to get the project up and running:
- Clone the repository to your local machine:

  ```bash
  git clone https://github.com/raushanraj9427/Yogi-v1.git
  ```
- Navigate to the project directory:

  ```bash
  cd app
  ```
- Build the Docker images using Docker Compose:

  ```bash
  docker-compose build
  ```
- Run the Docker containers:

  ```bash
  docker-compose up
  ```
- Once the containers are up and running, you can access the FastAPI application at http://localhost:8000.
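For reference, a `docker-compose.yml` wiring FastAPI to PostgreSQL for a setup like this might look as follows. This is a sketch, not the repository's actual file: the service names, image tag, port, and database credentials are illustrative assumptions.

```yaml
version: "3.8"

services:
  web:
    build: .                  # build the FastAPI image from the local Dockerfile
    ports:
      - "8000:8000"           # expose the API at http://localhost:8000
    depends_on:
      - db
    environment:
      # Hypothetical connection string; adjust to the project's settings.
      DATABASE_URL: postgresql://postgres:postgres@db:5432/plants

  db:
    image: postgres:15
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: plants     # hypothetical database name
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
```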
The FastAPI application provides the following endpoints:
- `GET /plants/{plant_id}`: Retrieve the plant with the given `plant_id`.
- `GET /details/{plant_id}`: Retrieve details of the plant with the given `plant_id`.
- `POST /plants/`: Create a plant with `plant_text` and other details.
- `POST /predict_plant`: Predict a plant from an uploaded image.
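The endpoints above can be called from Python like this. A minimal sketch using `requests`: the multipart field name `"file"` and the response formats are assumptions, so check the endpoint signatures before relying on them.

```python
import requests

BASE_URL = "http://localhost:8000"  # the address from the setup steps above


def get_plant(plant_id: int) -> dict:
    """Fetch a plant record by its id via GET /plants/{plant_id}."""
    resp = requests.get(f"{BASE_URL}/plants/{plant_id}")
    resp.raise_for_status()
    return resp.json()


def predict_plant(image_path: str) -> dict:
    """Upload an image to POST /predict_plant for classification.

    The multipart field name ("file") is an assumption; adjust it to
    match the endpoint's UploadFile parameter name.
    """
    with open(image_path, "rb") as f:
        resp = requests.post(f"{BASE_URL}/predict_plant", files={"file": f})
    resp.raise_for_status()
    return resp.json()
```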
EffNet is a PyTorch-based deep learning model designed for image classification tasks. It uses the EfficientNet architecture as its backbone for feature extraction and classification. This model provides a powerful yet lightweight solution for a variety of image classification applications.
EffNet consists of the following components:
- Backbone: EffNet uses the `efficientnet_b1` architecture as its backbone. This backbone is pretrained on a large dataset and can extract meaningful features from input images.
- Classifier: The classifier head of EffNet consists of multiple fully connected layers with batch normalization and PReLU activation functions. It reduces the feature dimensionality to the desired embedding size and performs the final classification.
The `predict` function loads the trained weights, preprocesses the uploaded image, and returns the predicted class label:

```python
import io

import torch
from PIL import Image

# EffNet, NUM_CLASSES, EMBEDDING_SIZE, W_PATH, my_transforms and
# CLASS_LABEL are defined elsewhere in the project.


def predict(img: bytes) -> str:
    # Use the GPU when available, otherwise fall back to the CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Rebuild the model and load the trained weights.
    model = EffNet(num_classes=NUM_CLASSES, embedding_size=EMBEDDING_SIZE)
    model.load_state_dict(torch.load(W_PATH, map_location=device))
    model.to(device)
    model.eval()

    # Decode the raw image bytes and apply the preprocessing pipeline.
    image = Image.open(io.BytesIO(img)).convert("RGB")
    image = my_transforms(image)
    image = image.unsqueeze(0).to(device)  # add a batch dimension

    with torch.inference_mode():
        with torch.autocast(device_type=device.type):
            logits = model(image)
        y_pred = torch.softmax(logits, dim=1).argmax(dim=1).cpu()

    return CLASS_LABEL[y_pred.item()]
```