
Commit 303e315

authored
Update structure of the repo
1 parent 1154e7e commit 303e315

File tree

9 files changed: +72 −62 lines changed

README.md

Lines changed: 7 additions & 62 deletions
```diff
@@ -1,68 +1,13 @@
-# BME Lab
+# Hands-on Machine Learning Lab for Biomedical Engineering Students
 
 This is part of the code for BME Lab at Mahidol University.
-Lab II teaches 3rd-year students, where we aim to have them run a large language model.
 
-## Lab II: Running Large Language Model
+### Building Machine Learning Application with Gradio
 
-**Task 1: Running Open WebUI**
+Gradio is a powerful tool for creating web interfaces for machine learning models.
+In this lab, students will learn to build practical medical applications using Gradio's intuitive interface.
 
-- Download [Ollama](https://ollama.com/) on your laptop, then run `ollama run llama3.1`
+### Generative AI
 
-Here, you can prompt Llama3.1 on your local machine. After running Ollama, you can connect it with Open WebUI:
-
-- Download [Docker](https://www.docker.com/products/docker-desktop/) on your laptop
-- Run [Open WebUI](https://github.com/open-webui/open-webui) using `docker`, then go to http://localhost:3000/ to prompt! You can check the section "If Ollama is on your computer, use this command: ..."
-
-**Task 2: Connect Python with Ollama**
-
-You can connect Python to Ollama using [Langchain](https://www.langchain.com/).
-Use [`00_Connect_Ollama_with_Langchain`](https://github.com/biodatlab/bme-labs/blob/main/notebooks/00_Connect_Ollama_with_Langchain.ipynb) to try it out after running Ollama.
-
-```py
-%%capture
-!pip install langchain
-!pip install langchain_community
-```
-
-```py
-# Code from https://stackoverflow.com/a/78430197/3626961
-from langchain_community.llms import Ollama
-from langchain import PromptTemplate  # Added
-
-llm = Ollama(model="llama3.1", stop=["<|eot_id|>"])  # Added stop token
-
-def get_model_response(user_prompt, system_prompt):
-    # NOTE: No f-string and no whitespace in curly braces
-    template = """
-    <|begin_of_text|>
-    <|start_header_id|>system<|end_header_id|>
-    {system_prompt}
-    <|eot_id|>
-    <|start_header_id|>user<|end_header_id|>
-    {user_prompt}
-    <|eot_id|>
-    <|start_header_id|>assistant<|end_header_id|>
-    """
-
-    # Added prompt template
-    prompt = PromptTemplate(
-        input_variables=["system_prompt", "user_prompt"],
-        template=template
-    )
-
-    # Modified invoking the model
-    response = llm(prompt.format(system_prompt=system_prompt, user_prompt=user_prompt))
-
-    return response
-
-# Example
-user_prompt = "What is 1 + 1?"
-system_prompt = "You are a helpful assistant doing as the given prompt."
-get_model_response(user_prompt, system_prompt)
-```
-
-**Task 3: Transcribe a Youtube video and summarize the transcription**
-
-- Use [`01_Thonburian_Whisper_Longform_Youtube`](https://github.com/biodatlab/bme-labs/blob/main/notebooks/01_Thonburian_Whisper_Longform_Youtube.ipynb) to transcribe the text from a Youtube URL (select one URL) and [`02_Ollama_Summarize_Transcript`](https://github.com/biodatlab/bme-labs/blob/main/notebooks/02_Ollama_Summarize_Transcript.ipynb) to write a summarization prompt
-- Your task is to write a prompt to summarize the text from your selected Youtube URL
+This lab will explore the practical applications of Large Language Models (LLMs) using Ollama and Open WebUI.
+Students will work with locally-hosted models, learning how to interact with them.
```

generative-ai/README.md

Lines changed: 64 additions & 0 deletions
```diff
@@ -0,0 +1,64 @@
+# Generative AI Lab: Up and Running Your First Large Language Model (LLM)
+
+### Task 1: Setting up Ollama and Running Open WebUI
+
+**Setting up Ollama**
+- Download [Ollama](https://ollama.com/) on your laptop, then run `ollama run llama3.1`
+- Here, you can prompt Llama3.1 on your local machine. After running Ollama, you can connect it with Open WebUI:
+
+**Setting up Open WebUI**
+- Download [Docker](https://www.docker.com/products/docker-desktop/) on your laptop
+- Run [Open WebUI](https://github.com/open-webui/open-webui) using `docker`, then go to http://localhost:3000/ to prompt! You can check the section "If Ollama is on your computer, use this command: ..."
+
+### Task 2: Connect Python with Ollama
+
+You can connect Python to Ollama using [Langchain](https://www.langchain.com/).
+Use [`00_Connect_Ollama_with_Langchain`](https://github.com/biodatlab/bme-labs/blob/main/notebooks/00_Connect_Ollama_with_Langchain.ipynb) to try it out after running Ollama.
+
+```py
+%%capture
+!pip install langchain
+!pip install langchain_community
+```
+
+```py
+# Code from https://stackoverflow.com/a/78430197/3626961
+from langchain_community.llms import Ollama
+from langchain import PromptTemplate  # Added
+
+llm = Ollama(model="llama3.1", stop=["<|eot_id|>"])  # Added stop token
+
+def get_model_response(user_prompt, system_prompt):
+    # NOTE: No f-string and no whitespace in curly braces
+    template = """
+    <|begin_of_text|>
+    <|start_header_id|>system<|end_header_id|>
+    {system_prompt}
+    <|eot_id|>
+    <|start_header_id|>user<|end_header_id|>
+    {user_prompt}
+    <|eot_id|>
+    <|start_header_id|>assistant<|end_header_id|>
+    """
+
+    # Added prompt template
+    prompt = PromptTemplate(
+        input_variables=["system_prompt", "user_prompt"],
+        template=template
+    )
+
+    # Modified invoking the model
+    response = llm(prompt.format(system_prompt=system_prompt, user_prompt=user_prompt))
+
+    return response
+
+# Example
+user_prompt = "What is 1 + 1?"
+system_prompt = "You are a helpful assistant doing as the given prompt."
+get_model_response(user_prompt, system_prompt)
+```
+
```
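The `PromptTemplate` in the Task 2 code is, at bottom, plain string substitution into the Llama 3 chat format. A minimal stdlib-only sketch of that formatting step (the `build_prompt` helper is illustrative, not part of the lab code):

```python
# Stdlib-only sketch: what PromptTemplate does with the Llama 3 template.
# The special tokens frame the system and user turns; str.format fills the
# {system_prompt} and {user_prompt} slots (plain .format, not an f-string).
LLAMA3_TEMPLATE = (
    "<|begin_of_text|>"
    "<|start_header_id|>system<|end_header_id|>\n"
    "{system_prompt}\n"
    "<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>\n"
    "{user_prompt}\n"
    "<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n"
)

def build_prompt(user_prompt: str, system_prompt: str) -> str:
    """Fill the chat template; the model's reply starts after the assistant header."""
    return LLAMA3_TEMPLATE.format(system_prompt=system_prompt,
                                  user_prompt=user_prompt)

prompt = build_prompt("What is 1 + 1?", "You are a helpful assistant.")
print(prompt)
```

This is also why the lab sets `stop=["<|eot_id|>"]`: the model is expected to end its turn with that token, and stopping there keeps the extra tokens out of the response.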
```diff
+**Task 3: Transcribe a Youtube video and summarize the transcription**
+
+- Use [`01_Thonburian_Whisper_Longform_Youtube`](https://github.com/biodatlab/bme-labs/blob/main/notebooks/01_Thonburian_Whisper_Longform_Youtube.ipynb) to transcribe the text from a Youtube URL (select one URL) and [`02_Ollama_Summarize_Transcript`](https://github.com/biodatlab/bme-labs/blob/main/notebooks/02_Ollama_Summarize_Transcript.ipynb) to write a summarization prompt
+- Your task is to write a prompt to summarize the text from your selected Youtube URL
```
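For Task 3, one possible shape for the summarization prompt is sketched below; the variable names are illustrative, and the transcript would come from the `01_Thonburian_Whisper_Longform_Youtube` notebook rather than the placeholder shown here.

```python
# Hypothetical summarization prompt for Task 3. The transcript string would
# come from the Whisper notebook; here it is a placeholder only.
transcript = "..."  # paste or load your transcription text here

system_prompt = "You summarize lecture transcripts for biomedical engineering students."
user_prompt = (
    "Summarize the following transcript in five bullet points, "
    "keeping medical terminology intact:\n\n" + transcript
)

# With the Task 2 helper and a local Ollama server running:
# summary = get_model_response(user_prompt, system_prompt)
```

Constraining the format (bullet count, terminology to preserve) in the prompt usually gives more consistent summaries than a bare "summarize this".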
File renamed without changes.
File renamed without changes.
File renamed without changes.

ml-app-gradio/README.md

Lines changed: 1 addition & 0 deletions
```diff
@@ -0,0 +1 @@
+# Building ML Application with Gradio
```
