🤘 awesome-semantic-segmentation
☁️ 🚀 📊 📈 Evaluating the state of the art in AI
Python package for the evaluation of odometry and SLAM
End-to-end Automatic Speech Recognition for Mandarin and English in TensorFlow
(IROS 2020, ECCVW 2020) Official Python Implementation for "3D Multi-Object Tracking: A Baseline and New Evaluation Metrics"
TCExam is a CBA (Computer-Based Assessment) system (e-exam, CBT - Computer-Based Testing) for universities, schools, and companies that enables educators and trainers to author, schedule, deliver, and report on surveys, quizzes, tests, and exams.
🪢 Open source LLM engineering platform: Observability, metrics, evals, prompt management, playground, datasets. Integrates with LlamaIndex, Langchain, OpenAI SDK, LiteLLM, and more. 🍊YC W23
Avalanche: an End-to-End Library for Continual Learning based on PyTorch.
OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMA2, Qwen, GLM, Claude, etc.) across 100+ datasets.
Building a modern functional compiler from first principles. (http://dev.stephendiehl.com/fun/)
FuzzBench - Fuzzer benchmarking as a service.
🤗 Evaluate: A library for easily evaluating machine learning models and datasets.
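As a hedged sketch of how such a metric library is typically used (the metric name and toy data below are illustrative, not drawn from the entry above):

```python
# Minimal sketch: load a metric and compute it on toy data with the
# Hugging Face `evaluate` library. The "accuracy" metric and the
# example labels are assumptions for illustration only.
import evaluate

accuracy = evaluate.load("accuracy")
result = accuracy.compute(
    predictions=[0, 1, 1, 0],
    references=[0, 1, 0, 0],
)
print(result)  # e.g. {'accuracy': 0.75}
```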
Evaluation code for various unsupervised automated metrics for Natural Language Generation.
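One such automated metric is BLEU; the sketch below computes it with NLTK purely for illustration and is not the repository's own code:

```python
# Hedged illustration of one unsupervised NLG metric (BLEU), computed
# with NLTK rather than the repository's own implementation.
from nltk.translate.bleu_score import sentence_bleu

reference = "the quick brown fox jumps over the lazy dog".split()
hypothesis = "the quick brown fox jumped over the lazy dog".split()

# BLEU compares n-gram overlap between the hypothesis and reference(s).
score = sentence_bleu([reference], hypothesis)
print(f"BLEU: {score:.3f}")
```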
Recommender system library for the CLR (.NET)
Test your prompts, models, and RAGs. Catch regressions and improve prompt quality. LLM evals for OpenAI, Azure, Anthropic, Gemini, Mistral, Llama, Bedrock, Ollama, and other local & private models with CI/CD integration.
SemanticKITTI API for visualizing dataset, processing data, and evaluating results.
Python implementation of the IOU Tracker
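The IOU Tracker associates detections from frame to frame purely by bounding-box overlap; as an illustration of that underlying metric (not the repository's actual API, and the (x1, y1, x2, y2) box format is an assumption), a minimal IoU computation looks like this:

```python
# Minimal sketch of the intersection-over-union (IoU) computation that
# IoU-based tracking relies on. Box format (x1, y1, x2, y2) is assumed
# for illustration.
def iou(box_a, box_b):
    """Return IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    if inter == 0.0:
        return 0.0
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ≈ 0.143
```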
Visual Object Tracking (VOT) challenge evaluation toolkit
UpTrain is an open-source unified platform to evaluate and improve Generative AI applications. We provide grades for 20+ preconfigured checks (covering language, code, and embedding use cases), perform root cause analysis on failure cases, and give insights on how to resolve them.