Course activities for IA024 - Deep Neural Networks for Natural Language Processing, FEEC-Unicamp, 1s2024
A high-performance inference system for large language models, designed for production environments.
A high-throughput and memory-efficient inference and serving engine for LLMs
JetStream is a throughput- and memory-optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in the future -- PRs welcome).
Sentiment analysis using pretrained models (BERT, BiLSTM)
A large-scale simulation framework for LLM inference
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
This repo contains everything about transformers and NLP.
Large Language Model Text Generation Inference
Scalable and user-friendly neural 🧠 forecasting algorithms.
Port of OpenAI's Whisper model in C/C++
Import SVG files in your React Native project the same way that you would in a Web application.
Implementations of Deep Learning Techniques
A simple transformer-based autoencoder model
Lumina-T2X is a unified framework for Text to Any Modality Generation
A low-latency & high-throughput serving engine for LLMs
A list of recent Transformer-based computer vision (CV) papers.