Follow the steps below to set up and run the application.
- **Install Dependencies**

  Navigate to the `code` directory and install the required Python packages:

  ```sh
  cd soft-engg-project-may-2024-se-team-1\Milestone 6\code
  pip install -r new_requirements.txt
  ```
- **Delete Existing Database**

  If a previous version of `seek_app.db` exists, delete it from the `code` directory to avoid conflicts.
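If you prefer to script this step, the stale database file can be removed from Python as well; a minimal sketch, assuming you run it from the `code` directory where `seek_app.db` lives:

```python
from pathlib import Path

# Remove a stale seek_app.db so populate_data.py can start fresh.
db = Path("seek_app.db")
if db.exists():
    db.unlink()
    print("Removed old seek_app.db")
else:
    print("No existing seek_app.db found")
```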
- **Populate the Database**

  Create the database and populate it with initial data:

  ```sh
  python populate_data.py
  ```

  This will create a new `seek_app.db` file.
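The details of `populate_data.py` are project-specific, but its job can be sketched as follows. Everything here is a hypothetical stand-in — the `users` table, its columns, and the seed rows are not the project's actual schema — and it assumes SQLite, which is what a single `seek_app.db` file suggests:

```python
import sqlite3

# Hypothetical sketch of what populate_data.py does: create seek_app.db
# and seed it with initial rows. The schema and data below are
# illustrative only, not the project's actual data model.
conn = sqlite3.connect("seek_app.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT, role TEXT)"
)
conn.executemany(
    "INSERT INTO users (name, role) VALUES (?, ?)",
    [("alice", "student"), ("bob", "instructor")],
)
conn.commit()
conn.close()
```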
- **Run Flask**

  Run the Flask application to start the server on localhost:

  ```sh
  flask run
  ```
- **Install Frontend Dependencies**

  Navigate to the `frontend` directory and install the required npm packages:

  ```sh
  cd soft-engg-project-may-2024-se-team-1\Milestone 6\code\frontend
  npm install
  ```
- **Run the Frontend Development Server**

  Start the frontend development server:

  ```sh
  npm run dev
  ```

  The frontend should now be running and accessible in your browser.
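A quick way to confirm both servers are up is to probe their ports. This sketch assumes the Flask default port 5000 and the Vite default dev-server port 5173 — adjust the URLs if your setup differs:

```python
import urllib.request
import urllib.error

def is_up(url, timeout=2):
    """Return True if something answers HTTP at `url`."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # the server answered, even if with an error status
    except (urllib.error.URLError, OSError):
        return False  # nothing listening at that address

print("backend :", is_up("http://127.0.0.1:5000/"))
print("frontend:", is_up("http://127.0.0.1:5173/"))
```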
The GenAI features of this project will only function if Ollama is installed and running locally. We recommend using the `gemma2:2b` model for optimal performance.
**Windows**

- **Download Ollama**

  Visit the Ollama website and download the Windows installer.

- **Install Ollama**

  Run the installer and follow the on-screen instructions to complete the installation.

- **Run Ollama**

  Once installed, open Command Prompt or PowerShell and run the following command to start the Ollama server:

  ```sh
  ollama serve
  ```

- **Load Gemma2:2b Model**

  To load the `gemma2:2b` model, execute the following command:

  ```sh
  ollama run gemma2:2b
  ```
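To verify that Ollama is actually serving before moving on, you can query its local REST endpoint — Ollama listens on port 11434 by default and exposes the models it has downloaded at `/api/tags`:

```python
import json
import urllib.request
import urllib.error

def list_ollama_models(base="http://127.0.0.1:11434"):
    """Return locally available model names, or None if Ollama is unreachable."""
    try:
        with urllib.request.urlopen(base + "/api/tags", timeout=2) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return None

models = list_ollama_models()
if models is None:
    print("Ollama is not running on port 11434")
else:
    print("Available models:", models)
```

If `gemma2:2b` appears in the list, the GenAI features have everything they need.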
**macOS**

- **Download Ollama**

  Visit the Ollama website and download the macOS installer.

- **Install Ollama**

  Open the downloaded `.dmg` file and drag the Ollama application to your Applications folder.

- **Run Ollama**

  Open Terminal and start the Ollama server with the following command:

  ```sh
  ollama serve
  ```

- **Load Gemma2:2b Model**

  Load the `gemma2:2b` model by running the following command in Terminal:

  ```sh
  ollama run gemma2:2b
  ```
**Linux**

- **Install Ollama**

  Ollama provides a Linux binary that can be downloaded from their website. Alternatively, use `curl` to fetch and run the official install script directly:

  ```sh
  curl -fsSL https://ollama.com/install.sh | sh
  ```

- **Run Ollama**

  Start the Ollama server by running the following command in your terminal:

  ```sh
  ollama serve
  ```

- **Load Gemma2:2b Model**

  Use the following command to load the `gemma2:2b` model:

  ```sh
  ollama run gemma2:2b
  ```
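Once the model is loaded, the GenAI backend talks to Ollama over its local HTTP API. The sketch below shows the shape of a non-streaming call to Ollama's `/api/generate` endpoint; the prompt and helper names are illustrative, not taken from this project's code:

```python
import json
import urllib.request
import urllib.error

OLLAMA_URL = "http://127.0.0.1:11434/api/generate"

def build_request(prompt, model="gemma2:2b"):
    """Build a non-streaming generate request for Ollama's HTTP API."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def generate(prompt):
    """Send the prompt to a locally running Ollama; None if unreachable."""
    try:
        with urllib.request.urlopen(build_request(prompt), timeout=60) as resp:
            return json.load(resp)["response"]
    except (urllib.error.URLError, OSError):
        return None
```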
- Ensure that the backend is running before starting the frontend.
- For any issues, ensure all dependencies are correctly installed and that the database is properly set up.
- Ensure that Ollama is running before you start the backend of this project. If Ollama is not running, the GenAI features will not be available.
- For optimal performance, keep Ollama running throughout your development session.
- In case of issues with model loading, verify your internet connection, as Ollama may need to download the `gemma2:2b` model initially.