
Releases: kspviswa/pyOllaMx

v0.0.7

02 Mar 19:37
  1. Added support for thinking tokens in reasoning models. You can now "collapse thinking tokens" to see how the model reasons. This appears only if the model response contains `<think>` tags.

(Screenshot: thinking-token support)

  2. A few other usability bug fixes
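The collapse behavior keys off the `<think>` tags mentioned above. A minimal sketch of how such a response could be split into a collapsible "thinking" part and the visible answer (hypothetical helper, not PyOllaMx's actual code):

```python
import re

def split_thinking(response: str) -> tuple[str, str]:
    """Split a model response into (thinking, answer) parts.

    Reasoning models wrap their chain-of-thought in <think>...</think>;
    everything outside the tags is the user-facing answer.
    """
    match = re.search(r"<think>(.*?)</think>", response, flags=re.DOTALL)
    if not match:
        # No thinking tokens: nothing to collapse.
        return "", response.strip()
    thinking = match.group(1).strip()
    answer = (response[:match.start()] + response[match.end():]).strip()
    return thinking, answer
```

The UI would then render `thinking` inside a collapsible widget only when it is non-empty.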

v0.0.6

16 Feb 22:50

Fixed the PyOllaMx startup issues #7 & #8. Also added helper text on the settings page when either Ollama or PyOMlx is not running


v0.0.5

25 Jan 04:50
  • Updated the PyOMlx client to use the OpenAI Chat Completions endpoint
  • Fixed a multi-line copy issue
  • Added the version string to the settings page
  • Reorganized parts of the code
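Since the PyOMlx client now speaks the OpenAI Chat Completions protocol, any OpenAI-compatible client can talk to it. A stdlib-only sketch, where the port (11435) and the model name are assumptions to adjust for your install:

```python
import json
import urllib.request

# PyOMlx exposes an OpenAI-compatible Chat Completions endpoint.
# The port and model name below are assumptions -- check your setup.
BASE_URL = "http://localhost:11435/v1"

def build_chat_request(model: str, prompt: str, temperature: float = 0.7) -> dict:
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def chat(model: str, prompt: str) -> str:
    payload = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("mlx-community/Mistral-7B-Instruct-v0.2-4bit", "Hello!"))
```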

v0.0.4

04 Jul 01:31

New Functionality

  1. You can now download Ollama models right within 🤌🏻 PyOllaMx's Model Hub tab. You can also inspect existing models 🧐 and delete models 🗑️ right within PyOllaMx instead of using the Ollama CLI. This greatly simplifies the user experience 🤩🤩. And before you ask: yes, I'm working to bring similar functionality for MLX models from the Hugging Face hub. Please stay tuned 😎

(Screenshot: v0.0.4)

BugFixes

  1. Updated the DDGS dependency to fix some of the rate-limit issues

v0.0.3

02 Mar 04:36

Dark Mode Support

Toggle between Dark & Light mode with a click of the icon

(Screenshot: dark-mode toggle)

Model settings menu

Brand-new settings menu to set the model name and temperature, along with an Ollama & MLX model toggle

(Screenshot: settings menu)

Streaming support

Streaming support for both chat & search tasks

(Screenshot: streaming support)
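Ollama streams chat responses as newline-delimited JSON: one object per line, each carrying a partial message, with the final object flagged `"done": true`. A rough sketch of a stream consumer (not PyOllaMx's actual code):

```python
import json
import urllib.request

def iter_stream_chunks(lines):
    """Yield content chunks from Ollama-style NDJSON streaming responses."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        obj = json.loads(line)
        if obj.get("done"):
            break
        yield obj.get("message", {}).get("content", "")

if __name__ == "__main__":
    # Stream a chat response from a local Ollama server (default port 11434).
    payload = json.dumps({
        "model": "dolphin-mistral:7b",  # example model name
        "messages": [{"role": "user", "content": "Hello!"}],
        "stream": True,
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        for chunk in iter_stream_chunks(resp):
            print(chunk, end="", flush=True)
```

The UI appends each chunk to the chat bubble as it arrives, which is what makes the response appear to "type itself".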

Brand New Status bar

Status bar that displays the selected model name, model type & model temperature

(Screenshot: status bar)

Web search enabled for Apple MLX models

Now you can use Apple MLX models to power web search from the search tab

v0.0.2

11 Feb 23:37
  1. Web search capability (powered by the DuckDuckGo search engine via https://github.com/deedy5/duckduckgo_search)
    a. Web search is powered via basic RAG using prompt engineering. More advanced techniques are in the pipeline
    b. Search responses cite clickable sources for easy follow-up / deep dives
    c. Beneath every search response, the search keywords are shown so you can verify the search scope
    d. Easy toggle between chat and search operations
  2. Clear / erase history
  3. Automatic scrolling of chat messages for a better user experience
  4. Basic error & exception handling for searches
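The "basic RAG via prompt engineering" described in 1a can be sketched roughly like this: fetch snippets with the duckduckgo_search package and inline them into the prompt with numbered citations. The `build_search_prompt` helper and its wording are hypothetical, not PyOllaMx's actual code:

```python
def build_search_prompt(question: str, results: list[dict]) -> str:
    """Inline DuckDuckGo snippets into the prompt; the model cites [n] sources.

    Each result dict is expected to have "title", "body", and "href" keys,
    matching the shape returned by duckduckgo_search's DDGS().text().
    """
    context = "\n".join(
        f"[{i}] {r['title']}: {r['body']} ({r['href']})"
        for i, r in enumerate(results, 1)
    )
    return (
        "Answer the question using only the sources below. "
        "Cite sources as [n].\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}"
    )

if __name__ == "__main__":
    # pip install duckduckgo-search
    from duckduckgo_search import DDGS

    results = DDGS().text("what is Apple MLX", max_results=5)
    print(build_search_prompt("What is Apple MLX?", results))
```

Because the retrieval is plain prompt engineering (no RAG framework), answer quality depends heavily on which snippets DuckDuckGo returns, which is consistent with the limitations noted below.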

Limitations:

  • Web search is enabled only for Ollama models. Use the dolphin-mistral:7b model for better results. MLX model support is planned for the next release
  • Search results aren't deterministic and vary vastly among the chosen models, so play with different models to find your optimum
  • Sometimes search results are gibberish. This is because the search-engine RAG is vanilla, i.e. done via basic prompt engineering without any library support. If the results aren't satisfactory, re-trigger the same search prompt and check the response again.


v0.0.1

03 Feb 10:19

v0.0.1 Features

  • Auto-discovery of Ollama & MLX models. Simply download the models as you would with the respective tools, and pyOllaMx will discover and use them seamlessly
  • Markdown support in chat messages for programming code
  • Selectable text
  • Temperature control
  • Basic error handling