diff --git a/README.md b/README.md
index c2f3e80..4480df9 100644
--- a/README.md
+++ b/README.md
@@ -48,6 +48,9 @@ translation = ta.translate(source_lang, target_lang, source_text, country)
 ```
 See examples/example_script.py for an example script to try out.

+### WebUI App
+See the [guide](app/README.md) for more information on running the WebUI app.
+
 ## License

 Translation Agent is released under the **MIT License**. You are free to use, modify, and distribute the code
diff --git a/app/README.md b/app/README.md
index 1d84349..582dbf3 100644
--- a/app/README.md
+++ b/app/README.md
@@ -1,7 +1,7 @@
 ## Translation Agent WebUI

-This repository contains a Gradio web UI for a translation agent that utilizes various language models for translation.
+A web UI for Translation Agent, built on the Gradio library 🤗

 ### Preview

@@ -11,15 +11,9 @@ This repository contains a Gradio web UI for a translation agent that utilizes v
 - **Tokenized Text:** Displays translated text with tokenization, highlighting differences between original and translated words.
 - **Document Upload:** Supports uploading various document formats (PDF, TXT, DOC, etc.) for translation.
-- **Multiple API Support:** Integrates with popular language models like:
-  - Groq
-  - OpenAI
-  - Ollama
-  - Together AI
-  ...
+- **OpenAI-Compatible API Support:** Works with any OpenAI-compatible API via a customizable base URL.
 - **Different LLM for Reflection:** You can now enable a second endpoint to use another LLM for reflection.
-
 **Getting Started**

 1. **Install Dependencies:**

@@ -72,17 +66,13 @@ This repository contains a Gradio web UI for a translation agent that utilizes v
 5. Enable Second Endpoint to add another endpoint that uses a different LLM for reflection.
 6. Using a custom endpoint, you can enter an OpenAI-compatible API base URL.

-**Customization:**
-
-- **Add New LLMs:** Modify the `patch.py` file to integrate additional LLMs.
-
-**Contributing:**
+**Advanced Options:**

-Contributions are welcome! Feel free to open issues or submit pull requests.
+- **Max Tokens Per Chunk:** Breaks the text down into smaller chunks. LLMs have a limited context window, so an appropriate setting for your model ensures that each chunk carries enough context to be understood and translated accurately. Defaults to 1000.

-**License:**
+- **Temperature:** The sampling temperature, controlling the randomness of the generated text. Defaults to 0.3.

-This project is licensed under the MIT License.
+- **Requests Per Minute:** Limits the request rate. Rate limits such as RPM (requests per minute) and TPM (tokens per minute) are common practice for APIs; refer to your API provider's documentation and set this value accordingly. Defaults to 60.

 **DEMO:**
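As a side note on the OpenAI-compatible API support added in app/README.md: pointing a client at a custom endpoint usually comes down to overriding the base URL. The sketch below uses the official `openai` Python client; the Ollama URL, dummy key, and model name are placeholders, and this is not necessarily how the app wires it internally.

```python
# Sketch: talking to any OpenAI-compatible endpoint by overriding the base URL.
# The URL and model below are placeholders (a local Ollama server is assumed).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # any OpenAI-compatible endpoint
    api_key="ollama",  # many local servers accept a dummy key
)

response = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Translate 'hello' to French."}],
)
print(response.choices[0].message.content)
```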
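The Max Tokens Per Chunk option can be pictured with a short sketch. This is illustrative only, not the app's actual splitter; it assumes the `tiktoken` tokenizer, and `split_into_chunks` is a hypothetical helper name.

```python
# Sketch of max-tokens-per-chunk splitting (illustrative; not the app's actual code).
# Assumes the `tiktoken` tokenizer; `split_into_chunks` is a hypothetical helper.
import tiktoken


def split_into_chunks(text: str, max_tokens_per_chunk: int = 1000) -> list[str]:
    """Greedily pack sentences into chunks of at most max_tokens_per_chunk tokens."""
    enc = tiktoken.get_encoding("cl100k_base")
    chunks: list[str] = []
    current: list[str] = []
    current_tokens = 0
    for sentence in text.split(". "):
        n_tokens = len(enc.encode(sentence))
        # Close the current chunk once adding this sentence would exceed the cap.
        if current and current_tokens + n_tokens > max_tokens_per_chunk:
            chunks.append(". ".join(current))
            current, current_tokens = [], 0
        current.append(sentence)
        current_tokens += n_tokens
    if current:
        chunks.append(". ".join(current))
    return chunks
```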
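Similarly, the Requests Per Minute option amounts to client-side throttling. A minimal sketch, assuming a simple blocking loop rather than whatever scheduling the app actually uses:

```python
# Sketch of client-side requests-per-minute throttling (illustrative only).
import time
from typing import Callable, Iterable, Iterator


def throttled(call: Callable[[str], str], payloads: Iterable[str],
              requests_per_minute: int = 60) -> Iterator[str]:
    """Space out calls so at most requests_per_minute are issued per minute."""
    min_interval = 60.0 / requests_per_minute  # seconds between consecutive calls
    last_sent = float("-inf")  # first call goes through immediately
    for payload in payloads:
        wait = min_interval - (time.monotonic() - last_sent)
        if wait > 0:
            time.sleep(wait)
        last_sent = time.monotonic()
        yield call(payload)
```

Each chunk produced by a splitter like the one above would then pass through such a throttle before reaching the endpoint.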