Commit

Improve docs
snekkenull authored Aug 4, 2024
1 parent 86c38fb commit b53e355
Showing 2 changed files with 9 additions and 16 deletions.
3 changes: 3 additions & 0 deletions README.md
@@ -48,6 +48,9 @@ translation = ta.translate(source_lang, target_lang, source_text, country)
```
See examples/example_script.py for an example script to try out.

### WebUI App
See the [guide](app/README.md) for more information on running the WebUI app.

## License

Translation Agent is released under the **MIT License**. You are free to use, modify, and distribute the code
22 changes: 6 additions & 16 deletions app/README.md
@@ -1,7 +1,7 @@

## Translation Agent WebUI

This repository contains a Gradio web UI for a translation agent that utilizes various language models for translation.
A Translation-Agent web UI built with the Gradio library 🤗

### Preview

@@ -11,15 +11,9 @@ This repository contains a Gradio web UI for a translation agent that utilizes v

- **Tokenized Text:** Displays translated text with tokenization, highlighting differences between original and translated words.
- **Document Upload:** Supports uploading various document formats (PDF, TXT, DOC, etc.) for translation.
- **Multiple API Support:** Integrates with popular LLM providers, including:
  - Groq
  - OpenAI
  - Ollama
  - Together AI
  - ...
- **OpenAI-Compatible APIs:** Supports any custom OpenAI-compatible API endpoint.
- **Different LLM for Reflection:** Enable a second endpoint to use another LLM for the reflection step.


**Getting Started**

1. **Install Dependencies:**
@@ -72,17 +66,13 @@ This repository contains a Gradio web UI for a translation agent that utilizes v
5. Enable Second Endpoint to add another endpoint with a different LLM for reflection.
6. To use a custom endpoint, enter an OpenAI-compatible API base URL.

**Customization:**

- **Add New LLMs:** Modify the `patch.py` file to integrate additional LLMs.

**Contributing:**
**Advanced Options:**

Contributions are welcome! Feel free to open issues or submit pull requests.
- **Max Tokens Per Chunk:** Breaks the text into smaller chunks. LLMs have a limited context window; setting this appropriately for your model ensures that each chunk has enough context to be understood and translated accurately. Defaults to 1000.

**License:**
- **Temperature:** The sampling temperature controlling the randomness of the generated text. Defaults to 0.3.

This project is licensed under the MIT License.
- **Requests Per Minute:** Caps the request rate. Rate limits such as RPM (requests per minute) and TPM (tokens per minute) are common practice for APIs; check your API provider's limits and set this value accordingly. Defaults to 60.
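The chunking option above can be illustrated with a minimal sketch. This is a simplified stand-in that counts whitespace-separated words rather than real model tokens; the app's actual splitter may count tokens with a tokenizer and respect sentence boundaries:

```python
def split_into_chunks(text, max_tokens_per_chunk=1000):
    """Greedily group words so each chunk stays within the budget.

    Hypothetical illustration only: uses whitespace words as a crude
    token proxy. A real implementation would count model tokens and
    prefer splitting on sentence boundaries.
    """
    words = text.split()
    chunks, current = [], []
    for word in words:
        if len(current) >= max_tokens_per_chunk:
            chunks.append(" ".join(current))
            current = []
        current.append(word)
    if current:
        chunks.append(" ".join(current))
    return chunks

chunks = split_into_chunks("one two three four five six", max_tokens_per_chunk=2)
# chunks == ["one two", "three four", "five six"]
```

Each chunk is then translated on its own, which is why the budget must leave the model enough room to understand the chunk in isolation.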

**DEMO:**

