Foundation Models: Emphasizes the creation and application of large-scale models that can be adapted to a wide range of tasks with minimal task-specific tuning.
Predictive Human Preference (PHP): Leverages human feedback in the training loop to refine model outputs or predictions toward what humans prefer.
- Predictive Human Preference - php
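One common way to model human preference between two systems is an Elo / Bradley-Terry rating fitted to pairwise human votes. A minimal sketch; the model names, K-factor, and vote sequence below are invented for illustration:

```python
def expected_win_prob(r_a, r_b):
    """Bradley-Terry / Elo expected probability that A is preferred over B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update_elo(r_a, r_b, a_won, k=32):
    """Update both ratings after one human preference vote (zero-sum)."""
    e_a = expected_win_prob(r_a, r_b)
    score = 1.0 if a_won else 0.0
    return r_a + k * (score - e_a), r_b + k * (e_a - score)

# Simulated votes: humans prefer model A in 3 of 4 comparisons.
ratings = {"model_a": 1000.0, "model_b": 1000.0}
for a_won in [True, True, True, False]:
    ratings["model_a"], ratings["model_b"] = update_elo(
        ratings["model_a"], ratings["model_b"], a_won)
```

Once fitted, the ratings let you predict which model a human would likely prefer on a new comparison.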
Fine-Tuning: The process of further training a pre-trained model on a specific task or dataset to improve its performance on that task.
- https://llama.meta.com/docs/how-to-guides/fine-tuning
- https://github.com/hiyouga/LLaMA-Factory
- TODO: Link to Javelin Open Source Fine Tuning Repo goes here
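Library-agnostic sketch of the core idea: start from pretrained parameters and continue gradient descent on the new task's data rather than training from scratch. The toy linear model, task data, and hyperparameters are invented for illustration:

```python
def predict(w, b, x):
    return w * x + b

def fine_tune(w, b, data, lr=0.02, epochs=500):
    """Continue gradient descent from pretrained (w, b) on new task data (MSE loss)."""
    for _ in range(epochs):
        for x, y in data:
            err = predict(w, b, x) - y  # gradient of 0.5 * err**2
            w -= lr * err * x
            b -= lr * err
    return w, b

# "Pretrained" parameters fit y = 2x; the new task is y = 2x + 1.
task_data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]
w, b = fine_tune(2.0, 0.0, task_data)
```

The same pattern, with a much larger model and a small learning rate, is what the LLaMA fine-tuning guides linked above implement.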
Cross-cutting Themes:
"Our results show conditioning away risk of attack remains an unsolved problem; for example, all tested models showed between 25% and 50% successful prompt injection tests."
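As the quote notes, reliably blocking prompt injection remains unsolved. For illustration only, a naive keyword heuristic (the marker list is invented and trivially bypassed; it is not a real defense) might look like:

```python
# Naive, illustrative heuristic only -- keyword lists have a high
# false-negative rate and should not be relied on in a real system.
INJECTION_MARKERS = [
    "ignore previous instructions",
    "disregard the system prompt",
    "you are now",
]

def looks_like_injection(user_input: str) -> bool:
    """Flag inputs containing common injection phrasings."""
    text = user_input.lower()
    return any(marker in text for marker in INJECTION_MARKERS)
```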
Personally Identifiable Information (PII) and Security: These considerations are crucial for ensuring that ML models respect privacy and are secure against potential threats.
- Personally Identifiable Information - pii
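A minimal sketch of rule-based PII redaction, assuming regex patterns for emails and US-style phone numbers (invented for illustration; real systems need far broader pattern coverage plus NER-based detection):

```python
import re

# Illustrative patterns only; these miss many real-world formats.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b")

def redact_pii(text: str) -> str:
    """Replace emails and US-style phone numbers with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text
```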
Code, SQL, Genomics, and More: These areas highlight the interdisciplinary nature of ML, where knowledge in programming, databases, biology, and other fields converges to advance ML applications.
- Biology (Collab w/ Ashish Phal) - genomics
Neural Architecture Search (NAS): Highlights the automation of the design of neural network architectures to optimize performance for specific tasks.
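A minimal sketch of the simplest NAS baseline, random search over a hand-defined architecture space. The search space and scoring function below are invented stand-ins; a real run trains and validates each candidate:

```python
import random

SEARCH_SPACE = {  # invented toy space
    "depth": [2, 4, 8],
    "width": [64, 128, 256],
    "activation": ["relu", "gelu"],
}

def score(arch):
    """Invented stand-in for validation accuracy; real NAS trains and
    evaluates each candidate architecture here."""
    act_bonus = 0.05 if arch["activation"] == "gelu" else 0.0
    return 0.1 * arch["depth"] + arch["width"] / 256 + act_bonus

def random_search(n_trials=200, seed=0):
    """Sample architectures at random, keep the best-scoring one."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        arch = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        if score(arch) > best_score:
            best, best_score = arch, score(arch)
    return best
```

Random search is a surprisingly strong NAS baseline; more elaborate methods (evolutionary search, gradient-based NAS) replace only the sampling strategy.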
Few-Shot and Zero-Shot Learning: Points to learning paradigms that aim to reduce the dependency on large labeled datasets for training models.
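A minimal sketch of one few-shot approach: nearest-centroid classification over a handful of labeled embeddings per class (the 2-D "embeddings" below are toy values):

```python
def centroid(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def few_shot_classify(support, query):
    """support maps each label to just a few example embeddings;
    classify the query by its nearest class centroid."""
    centroids = {label: centroid(vs) for label, vs in support.items()}
    return min(centroids, key=lambda label: sq_dist(centroids[label], query))

# Toy 2-D "embeddings", two labeled examples per class.
support = {
    "cat": [[1.0, 0.1], [0.9, 0.2]],
    "dog": [[0.1, 1.0], [0.2, 0.9]],
}
```

With a good pretrained embedding model, a few examples per class can be enough; zero-shot variants replace the support examples with embeddings of the label names themselves.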
Federated Learning: Focuses on privacy-preserving techniques that enable model training across multiple decentralized devices or servers holding local data samples.
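The core aggregation step in federated learning (FedAvg) is a data-size-weighted average of client parameters; the server never sees the raw local data. A minimal sketch with flat parameter vectors:

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: average client parameter vectors, weighted by
    each client's local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]
```

In a full system this runs once per round: clients train locally, send updated weights, and receive the aggregated model back.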
Transformers in Vision and Beyond: Discusses the application of transformer models, originally designed for NLP tasks, in other domains like vision and audio processing.
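Vision Transformers first split an image into fixed-size patches that are flattened into tokens before the standard transformer layers. A dependency-free sketch of that patchify step on a toy 4x4 "image":

```python
def patchify(image, patch):
    """Split an H x W image (list of rows) into flattened patch-size tokens,
    as a Vision Transformer does before its linear projection."""
    h, w = len(image), len(image[0])
    tokens = []
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            tokens.append([image[r + dr][c + dc]
                           for dr in range(patch) for dc in range(patch)])
    return tokens

image = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 toy "image"
```

After this step, the patch tokens are treated exactly like word tokens in NLP, which is why the same architecture transfers across domains.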
Reinforcement Learning Enhancements: Looks at advancements in RL techniques that improve efficiency and applicability in various decision-making contexts.
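A minimal sketch of tabular Q-learning on a toy chain environment; the states, reward, and hyperparameters are invented for illustration:

```python
import random

def q_learning(n_states=4, episodes=300, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a chain: move left/right from state 0,
    reward 1.0 for reaching the last state."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            if rng.random() < eps:
                a = rng.choice([0, 1])               # explore
            else:
                a = 0 if q[s][0] > q[s][1] else 1    # exploit (ties go right)
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Temporal-difference update toward r + gamma * max_a' Q(s', a')
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

After training, the learned Q-values prefer moving right (toward the reward) in every non-terminal state.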
MLOps and AutoML: Concentrates on the operationalization of ML models and the automation of the ML pipeline to streamline development and deployment processes.
Hybrid Models: Explores the integration of different model types or AI approaches to leverage their respective strengths in solving complex problems.
AI Ethics and Bias Mitigation: Underlines the importance of developing fair and ethical AI systems by addressing and mitigating biases in ML models.
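One simple, measurable notion of bias is the demographic parity gap: the difference in positive-prediction rates across groups. A minimal sketch (the predictions and group labels are toy values):

```python
def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    rates = []
    for g in set(groups):
        group_preds = [p for p, gi in zip(predictions, groups) if gi == g]
        rates.append(sum(group_preds) / len(group_preds))
    return max(rates) - min(rates)

# Toy binary predictions for two groups of four people each.
preds = [1, 1, 0, 0, 1, 0, 0, 0]
groups = ["a"] * 4 + ["b"] * 4
```

A gap of 0 means both groups receive positive predictions at the same rate; mitigation techniques aim to shrink this (and other fairness metrics) without destroying accuracy.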
Energy-Efficient ML: Addresses the growing need for environmentally sustainable AI by developing models that require less computational power and energy.
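One common compute- and energy-saving technique is post-training quantization: storing weights in low-precision integers instead of floats. A minimal sketch of symmetric int8 quantization (illustrative, not a production scheme):

```python
def quantize_int8(values):
    """Symmetric linear quantization of floats into the int8 range [-127, 127]."""
    scale = max(abs(v) for v in values) / 127 or 1.0  # guard all-zero input
    return [round(v / scale) for v in values], scale

def dequantize(quantized, scale):
    """Map int8 values back to approximate floats."""
    return [q * scale for q in quantized]

weights = [-1.0, 0.0, 0.5, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
```

Each weight now fits in one byte instead of four, at the cost of a bounded rounding error of at most half the scale per value.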
Hardware: Points to the importance of developing and utilizing hardware optimized for ML tasks to improve efficiency and performance.