
Greetings AI Community 👋 | About Me

**AWS & AI Research Specialist | Principal Applied AI Product Engineer [Product Owner] & Enterprise Architect @PepsiCo | IIM-A | Community Member @Landing AI | AI Research Specialist [Portfolio] | Author | Quantum AI | Mojo | Bootstrap | React JS | 8+ Years of Experience in Fortune 50 Product Companies**

**Global Top AI Community Member @Landing.AI, @MLOps Community, @Pandas AI, @Full Stack Deep Learning, @H2O.ai Generative AI, @Modular, @Cohere AI, @Hugging Face Research Papers Group, @Papers with Code** **Completed 90+ online technical courses from Udemy & Coursera, as I believe in continuous learning and a growth mindset**

**Aditi Khare @ AI Research Junction Newsletter**

Thank you so much for visiting my AI Research Profile. Happy Reading!

AI RESEARCH PAPERS SUMMARIES

**GENERATIVE AI & QUANTUM AI RESEARCH PAPERS SUMMARIES**

**MAY 2024**

| AI Research Papers | Papers Summaries | Resource Links | Papers Category |
| --- | --- | --- | --- |
| 1. JPMorgan Chase & AWS Quantum Solutions Lab on near-term Rydberg atom arrays | JPMorgan Chase and AWS study the prospects for quantum speedups with near-term Rydberg atom arrays. | Amazon Blog | QUANTUM AI |
| 2. Learning Quantum Computing | A resource collection for learning quantum computing. | GitHub | QUANTUM AI |
| 3. Meta's Llama 3 | Extending Llama-3's Context Ten-Fold Overnight. | Paper | GENERATIVE AI |
| 4. Meta's Multi-Token Prediction | Better & Faster Large Language Models via Multi-token Prediction. | Paper | GENERATIVE AI |
| 5. Comprehensive Library of Variational LSE Solvers | A comprehensive library of variational linear-system-equation (LSE) solvers. | Paper | QUANTUM AI |
| 6. Evaluating LLM Generations with a Panel of Diverse Models | Evaluates LLM generations with a panel of diverse judge models. | Paper | GENERATIVE AI |
| 7. Snowflake Arctic: Best LLM for Enterprise AI | Efficiently intelligent and truly open; top-tier enterprise intelligence at incredibly low training cost. | Snowflake Blog | GENERATIVE AI |
| 8. Photonic Quantum Memory Capacity Expanded | Expanded photonic quantum memory capacity paves the way for the quantum internet. | Quantum Zeitgeist | QUANTUM AI |
| 9. Phi-3 Technical Report | A highly capable language model running locally on your phone. | Paper | GENERATIVE AI |
| 10. Make Your LLM Fully Utilize the Context | Make your LLM fully utilize the context. | Paper | GENERATIVE AI |
| 11. Production Guides: Implementing FrugalGPT | Reducing LLM costs and improving performance (see the sketch after this table). | Blog | GENERATIVE AI |
| 12. Quantifying Language Models' Sensitivity to Spurious Features in Prompt Design | Or: how I learned to start worrying about prompt formatting; quantifies how strongly LLM performance varies with prompt formatting choices. | Paper | GENERATIVE AI |
| 13. Inner Workings of Transformer-based Language Models | An overview of the inner workings of Transformer-based language models. | Paper | GENERATIVE AI |
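
The FrugalGPT guide in row 11 centers on an LLM cascade: query cheaper models first and escalate only when a scorer distrusts the answer. Below is a minimal sketch of that idea; `call_model` and `score_answer` are hypothetical stand-ins (the paper trains a small reliability scorer), not APIs from the paper or blog.

```python
# Minimal sketch of an LLM cascade in the spirit of FrugalGPT.
# `call_model` and `score_answer` are hypothetical stand-ins.

from typing import Callable, List, Tuple

def cascade(prompt: str,
            models: List[Tuple[str, float]],           # (model_name, cost_per_call)
            call_model: Callable[[str, str], str],     # (model_name, prompt) -> answer
            score_answer: Callable[[str, str], float], # (prompt, answer) -> reliability in [0, 1]
            threshold: float = 0.8) -> Tuple[str, float]:
    """Try models from cheapest to most expensive; stop at the first
    answer the scorer considers reliable enough."""
    spent = 0.0
    answer = ""
    for name, cost in sorted(models, key=lambda m: m[1]):
        answer = call_model(name, prompt)
        spent += cost
        if score_answer(prompt, answer) >= threshold:
            break  # the cheap model was good enough; skip the expensive ones
    return answer, spent
```

The design choice is simply that most queries never reach the most expensive model, which is where the cost savings come from.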

**JAN 2024 - APRIL 2024**

| AI Research Papers | Papers Summaries | Resource Links | Papers Category |
| --- | --- | --- | --- |
| 1. Practical application of quantum neural networks to materials informatics | Constructs a QNN model to predict the melting points of metal oxides as an example of a multivariate regression task for the MI problem; explores different architectures (encoding methods and entangler arrangements) to create an effective QNN model. | Paper | QUANTUM AI |
| 2. BCQQ: Batch-Constraint Quantum Q-Learning with Cyclic Data Re-uploading | Proposes a batch RL algorithm that uses VQCs as function approximators within the discrete batch-constraint deep Q-learning (BCQ) algorithm, and introduces a novel data re-uploading scheme that cyclically shifts the order of input variables in the data encoding layers (see the sketch after this table); evaluated on the OpenAI CartPole environment against classical neural-network-based discrete BCQ. | Paper | QUANTUM AI |
| 3. Quantum Algorithms: A New Frontier in Financial Crime Prevention | Showcases Quantum Machine Learning (QML) and Quantum Artificial Intelligence (QAI) as powerful solutions for detecting and preventing financial crimes, including money laundering, cryptocurrency attacks, and market manipulation. | Paper | QUANTUM AI / MACHINE LEARNING |
| 4. Ground state-based quantum feature maps | Introduces a quantum data embedding protocol based on preparing the ground state of a parameterized Hamiltonian. | Paper | QUANTUM PHYSICS |
| 5. Satellite-based entanglement distribution and quantum teleportation with continuous variables | Studies the effects of atmospheric turbulence on continuous-variable entanglement distribution and quantum teleportation in the optical regime between a ground station and a satellite. | Paper | QUANTUM PHYSICS |
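
The cyclic re-uploading idea in row 2 is easy to picture in isolation: each data-encoding layer re-uploads the same features in a cyclically shifted order. A plain-Python illustration of just the ordering (a real VQC would encode each feature as a rotation angle on a qubit, e.g. with PennyLane or Qiskit):

```python
# Illustrative sketch of cyclic data re-uploading: each encoding layer
# sees the input features in a cyclically shifted order.

def cyclic_encoding_orders(features, n_layers):
    """Return the feature order used by each encoding layer."""
    n = len(features)
    return [[features[(i + layer) % n] for i in range(n)]
            for layer in range(n_layers)]

# Example: 4 input variables re-uploaded across 3 encoding layers.
print(cyclic_encoding_orders(["x0", "x1", "x2", "x3"], 3))
# [['x0','x1','x2','x3'], ['x1','x2','x3','x0'], ['x2','x3','x0','x1']]
```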

**GENERATIVE AI RESEARCH PAPERS SUMMARIES**

**FEB-MARCH-APRIL 2024**

| AI Research Papers | Papers Summaries | Resource Links | Papers Category |
| --- | --- | --- | --- |
| 1. Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention | Introduces an efficient method to scale Transformer-based Large Language Models (LLMs) to infinitely long inputs with bounded memory and computation, built around a new attention technique dubbed Infini-attention. | Paper, GitHub, Hugging Face | Generative AI |
| 2. Google Research's CodecLM | A general framework for adaptively generating high-quality synthetic data for LLM alignment with different downstream instruction distributions and LLMs. | Paper | LLM Synthetic Data |
| 3. Multilingual Large Language Models: An Overview | Multilingual LLMs use powerful LLMs to handle and respond to queries in multiple languages, achieving remarkable success in multilingual NLP tasks; despite these breakthroughs, a comprehensive survey of existing approaches has been lacking. | Paper | MLLMs |
| 4. Google DeepMind's Mixture-of-Depths | Introduces an approach for dynamically allocating compute in Transformer-based language models (see the sketch after this table). | Paper | Generative AI |
| 5. Giskard | Open-source evaluation and testing framework for LLMs and ML models. | Docs, GitHub, Hugging Face | Generative AI |
| 6. Databricks Mosaic AI Research's DBRX | State-of-the-art quality with marked improvements in training and inference performance; DBRX advances the efficiency state of the art among open models thanks to its fine-grained mixture-of-experts (MoE) architecture. | Hugging Face, GitHub, Blog | Generative AI |
| 7. AI21's Jamba | SSM-Transformer hybrid model from AI21 Labs. | Blog, Website, Hugging Face | Generative AI |
| 8. Google DeepMind's Genie | Generative Interactive Environments. | Paper | Generative AI |
| 9. Allen Institute for AI's OLMo | Open Language Model: Accelerating the Science of Language Models, with released weights. | Paper, GitHub, Hugging Face, Website | Generative AI |
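
Mixture-of-Depths (row 4) routes only a top-k subset of tokens through each block; the rest skip it via the residual stream. A toy numpy sketch of one such layer, assuming a single scalar router and a generic `block` callable (the real method uses learned per-block routers inside a full Transformer):

```python
# Conceptual sketch of Mixture-of-Depths routing: a router picks the top-k
# tokens to process in a block; the rest pass through unchanged.

import numpy as np

def mixture_of_depths_layer(x, w_router, block, k):
    """x: (seq, dim) activations; w_router: (dim,) routing weights;
    block: callable applied only to the selected tokens."""
    scores = x @ w_router                 # one routing score per token
    top = np.argsort(scores)[-k:]         # indices of the k tokens to compute
    out = x.copy()                        # unselected tokens skip the block
    out[top] = x[top] + scores[top, None] * block(x[top])  # scored residual update
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
w = rng.normal(size=16)
dense = rng.normal(size=(16, 16)) / 4.0
y = mixture_of_depths_layer(x, w, lambda h: np.tanh(h @ dense), k=2)
print(y.shape)  # (8, 16); only 2 of 8 tokens went through the block
```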

**JAN 2024**

  1. Amazon's Diffuse to Choose: Enriching Image Conditioned Inpainting in Latent Diffusion Models - https://arxiv.org/abs/2401.13795 - Computer Vision and Pattern Recognition
  2. Quantum-Inspired Machine Learning for Molecular Docking - https://arxiv.org/abs/2401.12999 - Quantum Computing
  3. ChatQA - NVIDIA's GPT-4 Level Conversational QA Models - https://arxiv.org/pdf/2401.10225v1.pdf - Generative AI
  4. Meta's Self-Rewarding Language Models - https://arxiv.org/abs/2401.10020 - Generative AI
  5. Chainpoll - A high-efficacy method for LLM hallucination detection (see the sketch after this list) - https://arxiv.org/pdf/2310.18344v1.pdf
  6. AI-Optimized Catheter Design Could Prevent Urinary Tract Infections Without Drugs - https://www.scientificamerican.com/article/ai-optimized-catheter-design-could-prevent-urinary-tract-infections-without-drugs/
  7. TrustLLM - Trustworthiness in Large Language Models - https://arxiv.org/abs/2308.05374
  8. LLaMA Pro: Progressive LLaMA with Block Expansion - https://arxiv.org/abs/2401.02415
  9. DeepSeekMoE - Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models - https://arxiv.org/abs/2401.06066
  10. VAST AI releases Triplane Meets Gaussian Splatting on Hugging Face - Fast and Generalizable Single-View 3D Reconstruction with Transformers Demo - https://arxiv.org/abs/2312.09147
  11. Masked Audio Generation using a Single Non-Autoregressive Transformer - https://arxiv.org/abs/2401.04577
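
ChainPoll-style detection (item 5) polls a judge LLM several times with a chain-of-thought prompt and treats the fraction of "hallucinated" votes as a score. A hedged sketch, where `ask_judge` is a hypothetical stand-in for an LLM API call and the prompt wording is illustrative:

```python
# Sketch of a ChainPoll-style hallucination score: poll a judge LLM
# n times and report the fraction of "yes, hallucinated" votes.

def chainpoll_score(question, answer, ask_judge, n_polls=5):
    prompt = (
        "Does the answer below contain information not supported by the "
        "question's context? Think step by step, then reply YES or NO.\n"
        f"Question: {question}\nAnswer: {answer}"
    )
    votes = [ask_judge(prompt) for _ in range(n_polls)]
    return sum("YES" in v.upper() for v in votes) / n_polls  # 1.0 = surely hallucinated
```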

AI LEARNING RESOURCES

| AI Learning Resources | Resource Links | AI Learning Category |
| --- | --- | --- |
| 1. OpenAI's Prompt Engineering Handbook | Website, GitHub | GENERATIVE AI |
| 2. Lilian Weng's GitHub resources & blogs | GitHub | GENERATIVE AI |
| 3. Real-Time Machine Learning: Challenges & Solutions, by Chip Huyen | Blog | PRODUCTION-GRADE MACHINE LEARNING SYSTEMS |
| 4. Building LLM Applications for Production | Blog | PRODUCTION-GRADE LLM APPLICATIONS |

**AI RESEARCH PAPERS COLLECTION [JAN 2023 - DEC 2023]**

**DEC 2023**

  1. 11th Dec 2023 - **Mistral-embed: an embedding model with a 1024 embedding dimension, achieving 55.26 on MTEB** - https://mistral.ai/news/mixtral-of-experts/
  2. 11th Dec 2023 - **LLM360: Fully Transparent Open-Source LLMs** - https://arxiv.org/pdf/2312.06550.pdf
  3. 12th Dec 2023 - **Mathematical Language Models: A Survey** - https://arxiv.org/abs/2312.07622
  4. 13th Dec 2023 - **PromptBench: A Library for Evaluation of Large Language Models** - https://arxiv.org/pdf/2312.07910.pdf
  5. 1st Dec 2023 - **Mamba: Linear-Time Sequence Modeling with Selective State Spaces** - https://arxiv.org/ftp/arxiv/papers/2312/2312.00752.pdf
  6. 14th Dec 2023 - **Distributed Representations of Words and Phrases and their Compositionality (Word2vec)** - https://papers.nips.cc/paper_files/paper/2013/file/9aa42b31882ec039965f3c4923ce901b-Paper.pdf
  7. 11th Dec 2023 - **Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models** - https://arxiv.org/abs/2312.06585
     • An approach to self-training with feedback that can substantially reduce dependence on human-generated data.
     • Combining model-generated data with a reward function improves the performance of LLMs on problem-solving tasks (see the sketch after this list).
  8. 21st Dec 2023 - **Exploiting Novel GPT-4 APIs** - https://arxiv.org/abs/2312.14302
  9. 18th Dec 2023 - **Design of a Quantum Machine Learning Course for a Computer Science Program** - https://ieeexplore.ieee.org/document/10313632
  10. 2nd Dec 2023 - **Hybrid Quantum Neural Network in High-dimensional Data Classification** - https://arxiv.org/abs/2312.01024
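
Item 7's loop (generate candidate solutions, keep only those a reward function verifies, fine-tune on the survivors) can be sketched in a few lines. `generate`, `reward`, and `fine_tune` are hypothetical stand-ins for a sampling call, a task-specific checker (e.g. unit tests or a math verifier), and a training step; they are not APIs from the paper:

```python
# Minimal sketch of reward-filtered self-training: sample solutions,
# keep the verified ones, fine-tune, and iterate from the improved model.

def self_training_round(model, problems, generate, reward, fine_tune,
                        samples_per_problem=8, keep_threshold=1.0):
    dataset = []
    for problem in problems:
        for _ in range(samples_per_problem):
            solution = generate(model, problem)
            if reward(problem, solution) >= keep_threshold:  # keep only verified solutions
                dataset.append((problem, solution))
    return fine_tune(model, dataset)  # next round starts from the improved model
```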

**NOV 2023**

  1. 9th Nov 2023 - A Survey of Large Language Models in Medicine: Principles, Applications, and Challenges - https://arxiv.org/abs/2311.05112
  2. System 2 Attention - https://arxiv.org/abs/2311.11829
  3. Advancing Long-Context LLMs - Overview of methodologies for enhancing Transformer architecture modules to optimize long-context capabilities across all stages, from pre-training to inference - https://arxiv.org/abs/2311.12351
  4. Parallel Speculative Sampling (see the sketch after this list) - https://arxiv.org/abs/2311.13581
  5. Mirasol3B - https://arxiv.org/abs/2311.05698
  6. GPQA - https://arxiv.org/abs/2311.12022
  7. Chain-of-Thought Reasoning to Language Agents - Summary of CoT reasoning, the foundational mechanics underpinning CoT techniques, and their application to language agent frameworks - https://arxiv.org/abs/2311.11797
  8. GAIA - https://arxiv.org/abs/2311.12983
  9. LLMs for Scientific Discovery - https://arxiv.org/abs/2311.08401
  10. Contrastive CoT Prompting - https://arxiv.org/abs/2311.09277
  11. A Survey on Language Models for Code - https://arxiv.org/abs/2311.07989v1
  12. JARVIS-1 - Open-world agent that can perceive multimodal input - https://arxiv.org/abs/2311.05997
  13. Learning to Filter Context for RAG - https://arxiv.org/abs/2311.08377v1
  14. MART - https://arxiv.org/abs/2311.07689
  15. LLMs can Deceive Users - Explores an autonomous stock-trading agent powered by LLMs; finds that the agent acts on insider tips and hides the reason behind its trading decisions, showing that helpful and safe LLMs can strategically deceive users in a realistic situation without direct instructions or training for deception - https://arxiv.org/abs/2311.07590
  16. Hallucination in LLMs - A comprehensive survey - https://arxiv.org/abs/2311.05232
  17. GPT4All - Outlines technical details of the GPT4All model family along with the open-source repository that aims to democratize access to LLMs - https://arxiv.org/abs/2311.04931
  18. FreshLLMs - https://arxiv.org/abs/2310.03214
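
Speculative sampling (item 4) follows a draft-then-verify pattern: a small model proposes a few tokens cheaply and the large target model checks them in one pass. Below is a greedy illustration of that general pattern, not the specific parallel variant in the paper; the real algorithm accepts or rejects tokens probabilistically so the output distribution matches the target model. `draft_next` and `target_argmax_all` are hypothetical stand-ins:

```python
# Greedy sketch of draft-then-verify decoding: accept the drafted prefix
# that agrees with the target model, replace the first mismatch.

def speculative_step(context, draft_next, target_argmax_all, k=4):
    proposal = list(context)
    for _ in range(k):                       # draft model proposes k tokens cheaply
        proposal.append(draft_next(proposal))
    drafted = proposal[len(context):]
    checked = target_argmax_all(proposal)    # target's next-token argmax at every position
    accepted = []
    for i, tok in enumerate(drafted):
        if checked[len(context) + i - 1] != tok:
            accepted.append(checked[len(context) + i - 1])  # replace first mismatch
            break
        accepted.append(tok)                 # prefix agreed with the target model
    return list(context) + accepted
```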

**OCT 2023**

  1. Spectron - https://arxiv.org/abs/2305.15255
  2. LLMs Meet New Knowledge - https://arxiv.org/abs/2310.14820
  3. Detecting Pretraining Data from LLMs - https://arxiv.org/abs/2310.16764
  4. Managing AI Risks - https://managing-ai-risks.com/managing_ai_risks.pdf
  5. Branch-Solve-Merge Reasoning in LLMs - https://arxiv.org/abs/2310.15123
  6. LLMs for Software Engineering - https://arxiv.org/abs/2310.11511
  7. Retrieval-Augmentation for Long-form Question Answering - https://arxiv.org/abs/2310.12150
  8. A Study of LLM-Generated Self-Explanations - https://arxiv.org/abs/2310.11207
  9. OpenAgents - https://arxiv.org/abs/2310.10634v1
  10. LLMs can Learn Rules - https://arxiv.org/abs/2310.07064
  11. Meta Chain-of-Thought Prompting - A generalizable chain-of-thought technique - https://arxiv.org/abs/2310.06692
  12. Improving Retrieval-Augmented LMs with Compressors - https://arxiv.org/abs/2310.04408
  13. Retrieval meets Long Context LLMs - https://arxiv.org/abs/2310.03025
  14. StreamingLLM - https://arxiv.org/abs/2309.17453
  15. The Dawn of LMMs - Comprehensive analysis of GPT-4V to deepen understanding of large multimodal models - https://arxiv.org/abs/2309.17421
  16. Training LLMs with Pause Tokens - https://arxiv.org/abs/2310.02226
  17. Analogical Prompting (see the sketch after this list) - https://arxiv.org/abs/2310.01714
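
Analogical prompting (item 17) replaces hand-written few-shot examples by asking the model to recall relevant exemplars itself before solving. A small sketch; the template wording is an illustrative assumption, not the paper's exact prompt:

```python
# Sketch of an analogical prompt: self-generated exemplars, then the solve.

def analogical_prompt(problem: str, n_exemplars: int = 3) -> str:
    return (
        f"Problem: {problem}\n\n"
        f"First, recall {n_exemplars} relevant problems you know and "
        "solve each briefly.\n"
        "Then, using those solutions as guidance, solve the original "
        "problem step by step."
    )

print(analogical_prompt("Compute the probability of rolling two sixes "
                        "with two fair dice."))
```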

**SEPT 2023**

  1. AlphaMissense - https://www.science.org/doi/10.1126/science.adg7492
  2. Chain-of-Verification Reduces Hallucination in LLMs - Develops a method that lets LLMs "deliberate" on responses to correct mistakes, in four steps: 1) draft an initial response, 2) plan verification questions to fact-check the draft, 3) answer those questions independently of the draft, and 4) generate a final, verified response (see the sketch after this list) - https://arxiv.org/abs/2309.11495
  3. Contrastive Decoding Improves Reasoning in Large Language Models - https://arxiv.org/abs/2309.09117
  4. LongLoRA - Efficient fine-tuning approach to significantly extend the context windows of pre-trained LLMs; implements shift short attention, a substitute that approximates the standard self-attention pattern during training, with lower GPU memory cost and training time than full fine-tuning while not compromising accuracy - https://arxiv.org/abs/2309.12307
  5. Textbooks Are All You Need II - New 1.3-billion-parameter model trained on 30 billion tokens; the dataset consists of "textbook-quality" synthetically generated data; phi-1.5 competes with or outperforms larger models on reasoning tasks, suggesting that data quality plays a more important role than previously thought - https://arxiv.org/abs/2309.05463
  6. The Rise and Potential of LLM-Based Agents - A comprehensive overview of LLM-based agents, from how to construct them to how to harness them for good - https://arxiv.org/abs/2309.07864
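
The four Chain-of-Verification steps from item 2 compose naturally into a pipeline. A hedged sketch, where `llm` is a hypothetical single-turn completion function and the prompt texts are illustrative:

```python
# Sketch of the Chain-of-Verification pipeline: draft, plan checks,
# answer checks independently, then revise.

def chain_of_verification(question, llm):
    draft = llm(f"Answer the question.\nQ: {question}")
    plan = llm("List short fact-checking questions for this draft answer, "
               f"one per line.\nQ: {question}\nDraft: {draft}")
    # Answer each verification question independently of the draft,
    # so the model cannot simply repeat its own mistakes.
    checks = [(q, llm(f"Q: {q}")) for q in plan.splitlines() if q.strip()]
    findings = "\n".join(f"{q} -> {a}" for q, a in checks)
    return llm("Revise the draft so it is consistent with the verified "
               f"facts.\nQ: {question}\nDraft: {draft}\nFacts:\n{findings}")
```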

**AUG 2023**

  1. Open Problems and Limitations of RLHF - Provides an overview of open problems and the limitations of RLHF - https://arxiv.org/abs/2307.15217
  2. Skeleton-of-Thought - Proposes a prompting strategy that first generates an answer skeleton and then performs parallel API calls to generate the content of each skeleton point; reports quality improvements in addition to speed-ups of up to 2.39x (see the sketch after this list) - https://arxiv.org/abs/2307.15337
  3. MetaGPT - A framework of LLM-based multi-agents that encodes human standardized operating procedures (SOPs) to extend complex problem-solving capabilities, mimicking efficient human workflows; this enables MetaGPT to perform multifaceted software development, code generation, and even data analysis tasks - https://arxiv.org/abs/2308.00352v2
  4. OpenFlamingo - Introduces a family of autoregressive vision-language models ranging from 3B to 9B parameters; the technical report describes the models, training data, and evaluation suite - https://arxiv.org/abs/2308.01390
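
The Skeleton-of-Thought speed-up in item 2 comes from expanding all skeleton points concurrently instead of decoding one long answer serially. A minimal sketch, assuming a hypothetical `llm` completion function and illustrative prompt wording:

```python
# Sketch of Skeleton-of-Thought: outline first, then expand every
# point in parallel API calls.

from concurrent.futures import ThreadPoolExecutor

def skeleton_of_thought(question, llm, max_points=5):
    skeleton = llm(f"Give at most {max_points} short bullet points "
                   f"outlining an answer to: {question}")
    points = [p.strip("-• ").strip() for p in skeleton.splitlines() if p.strip()]
    with ThreadPoolExecutor() as pool:   # expand all points concurrently
        bodies = list(pool.map(
            lambda p: llm(f"Question: {question}\nExpand this point in "
                          f"1-2 sentences: {p}"), points))
    return "\n".join(f"{p}: {b}" for p, b in zip(points, bodies))
```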

**JULY 2023**

  1. Universal Adversarial LLM Attacks - Finds universal and transferable adversarial attacks that cause aligned models like ChatGPT and Bard to generate objectionable behaviors; the approach automatically produces adversarial suffixes using greedy and gradient-based search - https://arxiv.org/abs/2307.15043
  2. A Survey on Evaluation of LLMs - Comprehensive overview of evaluation methods for LLMs, focusing on what to evaluate, where to evaluate, and how to evaluate - https://arxiv.org/abs/2307.03109
  3. How Language Models Use Long Contexts - Finds that LM performance is often highest when relevant information occurs at the beginning or end of the input context, and degrades when relevant information sits in the middle of a long context - https://arxiv.org/abs/2307.03172
  4. LLMs as Effective Text Rankers - Proposes a pairwise ranking prompting technique that enables open-source LLMs to perform state-of-the-art text ranking on standard benchmarks (see the sketch after this list) - https://arxiv.org/abs/2306.17563
  5. Multimodal Generation with Frozen LLMs - Introduces an approach that effectively maps images into the token space of LLMs, enabling models like PaLM and GPT-4 to tackle visual tasks without parameter updates, using in-context learning for various multimodal tasks - https://arxiv.org/abs/2306.17842
  6. CodeGen2.5 - New code LLM trained on 1.5T tokens; the 7B model is on par with >15B code-generation models and is optimized for fast sampling - https://arxiv.org/abs/2305.02309
  7. InterCode - Framework for interactive coding as a reinforcement learning environment, unlike typical coding benchmarks that assume a static sequence-to-sequence process - https://arxiv.org/abs/2306.14898
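
Pairwise ranking prompting (item 4) asks the LLM which of two passages better answers a query and aggregates the comparisons into an ordering. A simplified sketch using a comparison sort; `llm` and the prompt wording are hypothetical stand-ins, and the paper itself studies more efficient aggregation schemes than a full sort:

```python
# Sketch of pairwise ranking prompting: LLM as a pairwise comparator.

from functools import cmp_to_key

def rank_passages(query, passages, llm):
    def compare(a, b):
        verdict = llm(f"Query: {query}\nPassage A: {a}\nPassage B: {b}\n"
                      "Which passage answers the query better? Reply A or B.")
        return -1 if verdict.strip().upper().startswith("A") else 1
    return sorted(passages, key=cmp_to_key(compare))  # best passage first
```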

**JUNE 2023**

  1. LeanDojo - Open-source Lean playground consisting of toolkits, data, models, and benchmarks for theorem proving; also develops ReProver, a retrieval-augmented LLM-based prover that selects premises from a vast math library - https://arxiv.org/abs/2306.15626
  2. Extending Context Window of LLMs via Positional Interpolation (see the sketch after this list) - https://arxiv.org/abs/2306.15595
  3. Computer Vision Through the Lens of Natural Language - https://arxiv.org/abs/2306.16410
  4. Understanding Theory-of-Mind in LLMs with LLMs - Framework for procedurally generating evaluations with LLMs; proposes a benchmark to study the social reasoning capabilities of LLMs - https://arxiv.org/abs/2306.15448
  5. Evaluations with No Labels - https://arxiv.org/abs/2306.13651v1
  6. Long-range Language Modeling with Self-Retrieval - https://arxiv.org/abs/2306.13421
  7. Scaling MLPs: A Tale of Inductive Bias - Shows that the performance of MLPs improves with scale, highlighting that a lack of inductive bias can be compensated for - https://arxiv.org/abs/2306.13575
  8. Textbooks Are All You Need - https://arxiv.org/abs/2306.11644
  9. RoboCat - New foundation agent that can operate different robotic arms and solve tasks from as few as 100 demonstrations; the self-improving agent can generate new training data to improve its technique and adapt to new tasks more efficiently - https://arxiv.org/abs/2306.11706
  10. ClinicalGPT - Language model optimized with extensive and diverse medical data, including medical records, domain-specific knowledge, and multi-round dialogue consultations - https://arxiv.org/abs/2306.09968
  11. An Overview of Catastrophic AI Risks - Provides an overview of the main sources of catastrophic AI risks, aiming to foster understanding of these risks and ensure AI systems are developed safely - https://arxiv.org/abs/2306.12001v1
  12. AudioPaLM - Fuses text-based and speech-based LMs, PaLM-2 and AudioLM, into a multimodal architecture that supports speech understanding and generation; outperforms existing systems for speech translation with zero-shot speech-to-text translation capabilities - https://arxiv.org/abs/2306.12925v1
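
Positional interpolation (item 2) extends the context window by rescaling rotary position indices so a longer sequence is squeezed into the position range seen during pre-training. A numpy sketch of the core rescaling; the RoPE dimensions and lengths are illustrative:

```python
# Sketch of positional interpolation: compress positions of a long
# sequence into the range the model was trained on, then compute the
# usual rotary (RoPE) angles from the rescaled positions.

import numpy as np

def rope_angles(positions, dim, base=10000.0):
    inv_freq = 1.0 / base ** (np.arange(0, dim, 2) / dim)
    return np.outer(positions, inv_freq)        # (seq, dim/2) rotation angles

def interpolated_positions(seq_len, trained_len):
    scale = min(1.0, trained_len / seq_len)     # < 1 when extending the context
    return np.arange(seq_len) * scale           # positions squeezed into [0, trained_len)

angles = rope_angles(interpolated_positions(8192, 2048), dim=128)
print(angles.shape, angles[-1, 0])  # last position behaves like position ~2047
```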

**MAY 2023**

  1. Gorilla - Fine-tuned LLaMA-based model that surpasses GPT-4 at writing API calls; this capability helps identify the right API, boosting the ability of LLMs to interact with external tools to complete specific tasks - https://arxiv.org/abs/2305.15334
  2. The False Promise of Imitating Proprietary LLMs - Critical analysis of models fine-tuned on the outputs of a stronger model; argues that model imitation is a false promise and that the higher-leverage action for improving open-source models is developing better base models - https://arxiv.org/abs/2305.15717
  3. InstructBLIP - Explores visual-language instruction tuning based on the pre-trained BLIP-2 models; achieves state-of-the-art zero-shot performance on 13 held-out datasets, outperforming BLIP-2 and Flamingo - https://arxiv.org/abs/2305.06500
  4. Active Retrieval Augmented LLMs - Introduces FLARE, retrieval-augmented generation that improves the reliability of LLMs; FLARE actively decides when and what to retrieve over the course of generation, and demonstrates superior or competitive performance on long-form knowledge-intensive generation tasks (see the sketch after this list) - https://arxiv.org/abs/2305.06983
  5. AudioGPT: Understanding and Generating Speech, Music, Sound, and Talking Head - Connects ChatGPT with audio foundation models to handle challenging audio tasks, with a modality transformation interface to enable spoken dialogue - https://arxiv.org/abs/2304.12995
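
FLARE's "when and what to retrieve" decision (item 4) can be sketched as: generate a tentative next sentence, and if any token is low-confidence, retrieve with that sentence as the query and regenerate it with the fetched evidence. `generate_sentence` (returning token probabilities), `retrieve`, and the threshold are hypothetical stand-ins:

```python
# Hedged sketch of FLARE-style active retrieval.

def flare_generate(question, generate_sentence, retrieve,
                   max_sentences=10, min_prob=0.5):
    answer = []
    for _ in range(max_sentences):
        sent, token_probs = generate_sentence(question, answer, evidence=None)
        if not sent:
            break                            # model finished the answer
        if min(token_probs) < min_prob:      # low confidence -> look things up
            docs = retrieve(sent)            # tentative sentence as the query
            sent, _ = generate_sentence(question, answer, evidence=docs)
        answer.append(sent)
    return " ".join(answer)
```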

**MARCH 2023**

  1. GPT-4 Technical Report - https://arxiv.org/abs/2303.08774v2
  2. An Overview of Language Models: Recent Developments and Outlook - Overview of language models covering recent developments and future directions, including linguistic units, structures, training methods, evaluation, and applications - https://arxiv.org/abs/2303.05759
  3. Eliciting Latent Predictions from Transformers with the Tuned Lens - Method for transformer interpretability that can trace a language model's predictions as they develop layer by layer - https://arxiv.org/abs/2303.08112

**FEB 2023**

  1. Multimodal Chain-of-Thought Reasoning in Language Models - https://arxiv.org/abs/2302.00923
  2. Dreamix: Video Diffusion Models are General Video Editors - A diffusion model that performs text-based motion and appearance editing of general videos
  3. Benchmarking Large Language Models for News Summarization - https://arxiv.org/abs/2301.13848

**JAN 2023**

  1. Rethinking with Retrieval: Faithful Large Language Model Inference - Shows the potential of enhancing LLMs by retrieving relevant external knowledge based on decomposed reasoning steps obtained through chain-of-thought prompting - https://arxiv.org/abs/2301.00303
  2. SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot - Presents a technique for compressing large language models without sacrificing performance: models can be "pruned to at least 50% sparsity in one-shot, without any retraining" (see the sketch after this list) - https://arxiv.org/abs/2301.00774
  3. ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders - https://arxiv.org/abs/2301.00808
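
To make the 50% one-shot sparsity target in item 2 concrete, here is a plain magnitude-pruning sketch. Note the hedge: SparseGPT itself uses an approximate second-order (Hessian-based) solver that also updates the remaining weights to compensate; this sketch only illustrates the sparsity pattern, not the paper's method:

```python
# Illustration of one-shot pruning to 50% sparsity via plain magnitude
# pruning (NOT SparseGPT's Hessian-based solver).

import numpy as np

def prune_to_sparsity(w, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights, in one shot."""
    k = int(w.size * sparsity)
    threshold = np.partition(np.abs(w), k, axis=None)[k]
    return np.where(np.abs(w) < threshold, 0.0, w)

w = np.random.default_rng(0).normal(size=(4, 8))
pruned = prune_to_sparsity(w)
print((pruned == 0).mean())  # ~0.5 of the weights are now zero
```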

**Thank you so much for visiting my AI Research Junction @Aditi Khare - Research Paper Summaries @Generative AI @Computer Vision @Quantum AI**

**If you find my AI Research Junction @Aditi Khare useful, please star this repository to support my work. Happy Learning!**