# Hands-on Machine Learning Lab for Biomedical Engineering Students

This is part of the code for the BME Lab at Mahidol University.
Lab II teaches 3rd-year students, where we aim to have them run large language models on their own laptops.

### Building Machine Learning Applications with Gradio

Gradio is a powerful tool for creating web interfaces for machine learning models.
In this lab, students will learn to build practical medical applications using Gradio's intuitive interface.
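
To see what this looks like in practice, here is a minimal sketch of a Gradio app: a simple BMI calculator. The example, its function name, and its labels are our own illustration rather than part of the lab notebooks; install Gradio first with `pip install gradio`.

```py
import gradio as gr

def bmi_report(weight_kg: float, height_cm: float) -> str:
    """Compute body-mass index from weight (kg) and height (cm)."""
    height_m = height_cm / 100
    bmi = weight_kg / (height_m ** 2)
    return f"BMI = {bmi:.1f}"

# gr.Interface wires a Python function to web-form inputs and outputs
demo = gr.Interface(
    fn=bmi_report,
    inputs=[gr.Number(label="Weight (kg)"), gr.Number(label="Height (cm)")],
    outputs=gr.Textbox(label="Result"),
    title="BMI Calculator",
)

demo.launch()  # serves the app locally, by default at http://127.0.0.1:7860/
```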

### Generative AI

This lab will explore the practical applications of Large Language Models (LLMs) using Ollama and Open WebUI.
Students will work with locally hosted models, learning how to interact with them.

**Task 1: Running Open WebUI**

- Download [Ollama](https://ollama.com/) on your laptop, then run `ollama run llama3.1`

Here, you can prompt Llama 3.1 on your local machine. After running Ollama, you can connect it with Open WebUI:

- Download [Docker](https://www.docker.com/products/docker-desktop/) on your laptop
- Run [Open WebUI](https://github.com/open-webui/open-webui) using `docker` and go to http://localhost:3000/ to prompt! You can check the section "If Ollama is on your computer, use this command: ..." in the Open WebUI README.
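
You can also sanity-check from Python that the Ollama server is reachable. The snippet below is our own illustration and assumes Ollama's default port `11434` and its `/api/tags` endpoint, which lists the models pulled on your machine.

```py
import requests  # pip install requests

# Ollama exposes a local HTTP API; 11434 is its default port.
resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()

# Print the names of the locally available models, e.g. "llama3.1:latest".
for model in resp.json().get("models", []):
    print(model["name"])
```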

**Task 2: Connect Python with Ollama**

You can connect Python to Ollama using [Langchain](https://www.langchain.com/).
Use [`00_Connect_Ollama_with_Langchain`](https://github.com/biodatlab/bme-labs/blob/main/notebooks/00_Connect_Ollama_with_Langchain.ipynb) to try it out after running Ollama.

```py
%%capture
!pip install langchain
!pip install langchain_community
```

```py
# Adapted from https://stackoverflow.com/a/78430197/3626961
from langchain_community.llms import Ollama
from langchain.prompts import PromptTemplate

# Stop generation at Llama 3.1's end-of-turn token
llm = Ollama(model="llama3.1", stop=["<|eot_id|>"])

def get_model_response(user_prompt, system_prompt):
    # Llama 3.1 chat template; NOTE: not an f-string, and no whitespace inside the curly braces
    template = """
    <|begin_of_text|>
    <|start_header_id|>system<|end_header_id|>
    {system_prompt}
    <|eot_id|>
    <|start_header_id|>user<|end_header_id|>
    {user_prompt}
    <|eot_id|>
    <|start_header_id|>assistant<|end_header_id|>
    """

    # Prompt template with placeholders for the system and user prompts
    prompt = PromptTemplate(
        input_variables=["system_prompt", "user_prompt"],
        template=template
    )

    # Format the prompt and invoke the local Llama 3.1 model
    response = llm.invoke(prompt.format(system_prompt=system_prompt, user_prompt=user_prompt))

    return response

# Example
user_prompt = "What is 1 + 1?"
system_prompt = "You are a helpful assistant doing as the given prompt."
get_model_response(user_prompt, system_prompt)
```
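
Writing the Llama 3.1 special tokens by hand works, but you can also let a chat-model wrapper handle the template. The sketch below is an alternative of ours (not from the lab notebooks) using `ChatOllama` from `langchain_community`, which sends messages through Ollama's chat endpoint so the model's own template is applied for you.

```py
from langchain_community.chat_models import ChatOllama
from langchain_core.messages import HumanMessage, SystemMessage

# ChatOllama talks to the local Ollama server and uses the model's chat template,
# so no manual <|start_header_id|>/<|eot_id|> tokens are needed.
chat = ChatOllama(model="llama3.1")

reply = chat.invoke([
    SystemMessage(content="You are a helpful assistant doing as the given prompt."),
    HumanMessage(content="What is 1 + 1?"),
])
print(reply.content)
```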

**Task 3: Transcribe a YouTube video and summarize the transcription**

- Use [`01_Thonburian_Whisper_Longform_Youtube`](https://github.com/biodatlab/bme-labs/blob/main/notebooks/01_Thonburian_Whisper_Longform_Youtube.ipynb) to transcribe the text from a YouTube URL (select one URL) and [`02_Ollama_Summarize_Transcript`](https://github.com/biodatlab/bme-labs/blob/main/notebooks/02_Ollama_Summarize_Transcript.ipynb) to write a summarization prompt
- Your task is to write a prompt that summarizes the text from your selected YouTube URL; a starting point is sketched after this list
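
As a starting point, the sketch below reuses the `get_model_response` helper from Task 2. The transcript placeholder and the wording of the prompt are our own illustration; replace them with the transcription you get from the Whisper notebook and your own instructions.

```py
# Requires Ollama running with llama3.1 pulled, and get_model_response from Task 2.
transcript = "..."  # paste the transcription from 01_Thonburian_Whisper_Longform_Youtube here

system_prompt = "You are a helpful assistant that summarizes video transcripts."
user_prompt = (
    "Summarize the following transcript in 3-5 bullet points, "
    "keeping technical and medical terms intact:\n\n" + transcript
)

print(get_model_response(user_prompt, system_prompt))
```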