Heavyweight research arrived every month: the 2024 papers worth reading, all in one place

Published by 机器之心 on 2025-01-01

2024 was an exciting year for the AI field. Over the course of the year, major tech companies and research institutions released countless studies.

From Sora at the start of the year to DeepSeek-V3 at its end, we witnessed AI drop one bombshell after another, delivering surprises no one expected.

AI papers also poured out in a steady stream throughout the year. Looking back at 2024, which papers are worth reading and rereading? Well-known machine learning and AI researcher Sebastian Raschka has compiled an LLM reading list that lays out the important papers published in each month.

Original post: https://sebastianraschka.com/blog/2024/llm-research-papers-the-2024-list.html

January Papers

論文標題:Astraios: Parameter-Efficient Instruction Tuning Code Large Language Models

論文連結:https://arxiv.org/abs/2401.00788

論文標題:A Comprehensive Study of Knowledge Editing for Large Language Models

論文連結:https://arxiv.org/abs/2401.01286

論文標題:LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning

論文連結:https://arxiv.org/abs/2401.01325

論文標題:Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models

論文連結:https://arxiv.org/abs/2401.01335

論文標題:LLaMA Beyond English: An Empirical Study on Language Capability Transfer

論文連結:https://arxiv.org/abs/2401.01055

論文標題:A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity

論文連結:https://arxiv.org/abs/2401.01967

論文標題:LLaMA Pro: Progressive LLaMA with Block Expansion

論文連結:https://arxiv.org/abs/2401.02415

論文標題:LLM Augmented LLMs: Expanding Capabilities through Composition

論文連結:https://arxiv.org/abs/2401.02412

論文標題: Blending Is All You Need: Cheaper, Better Alternative to Trillion-Parameters LLM

論文連結: https://arxiv.org/abs/2401.02994

論文標題:DeepSeek LLM: Scaling Open-Source Language Models with Longtermism

論文連結:https://arxiv.org/abs/2401.02954

論文標題:Denoising Vision Transformers

論文連結:https://arxiv.org/abs/2401.02957

論文標題:Long Context Compression with Activation Beacon

論文連結:https://arxiv.org/abs/2401.03462

論文標題:Mixtral of Experts

論文連結: https://arxiv.org/abs/2401.04088

論文標題:MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts

論文連結:https://arxiv.org/abs/2401.04081

論文標題:A Minimaximalist Approach to Reinforcement Learning from Human Feedback

論文連結:https://arxiv.org/abs/2401.04056

論文標題:RoSA: Accurate Parameter-Efficient Fine-Tuning via Robust Adaptation

論文連結: https://arxiv.org/abs/2401.04679

論文標題: Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training

論文連結:https://arxiv.org/abs/2401.05566

論文標題:Transformers are Multi-State RNNs

論文連結:https://arxiv.org/abs/2401.06104

論文標題:A Closer Look at AUROC and AUPRC under Class Imbalance

論文連結:https://arxiv.org/abs/2401.06091

論文標題:An Experimental Design Framework for Label-Efficient Supervised Finetuning of Large Language Models

論文連結:https://arxiv.org/abs/2401.06692

論文標題:Tuning Language Models by Proxy

論文連結: https://arxiv.org/abs/2401.08565

論文標題:Scalable Pre-training of Large Autoregressive Image Models

論文連結:https://arxiv.org/abs/2401.08541

論文標題:Code Generation with AlphaCodium: From Prompt Engineering to Flow Engineering

論文連結:https://arxiv.org/abs/2401.08500

論文標題:RAG vs Fine-tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture

論文連結: https://arxiv.org/abs/2401.08406

論文標題:ReFT: Reasoning with Reinforced Fine-Tuning

論文連結: https://arxiv.org/abs/2401.08967

論文標題:DiffusionGPT: LLM-Driven Text-to-Image Generation System

論文連結: https://arxiv.org/abs/2401.10061

論文標題:Self-Rewarding Language Models

論文連結:https://arxiv.org/abs/2401.10020

論文標題:VMamba: Visual State Space Model

論文連結: https://arxiv.org/abs/2401.10166

論文標題:Knowledge Fusion of Large Language Models

論文連結: https://arxiv.org/abs/2401.10491

論文標題:SpatialVLM: Endowing Vision-Language Models with Spatial Reasoning Capabilities

論文連結:https://arxiv.org/abs/2401.12168

論文標題:WARM: On the Benefits of Weight Averaged Reward Models

論文連結: https://arxiv.org/abs/2401.12187

論文標題: Spotting LLMs With Binoculars: Zero-Shot Detection of Machine-Generated Text

論文連結: https://arxiv.org/abs/2401.12070

論文標題:MambaByte: Token-free Selective State Space Model

論文連結:https://arxiv.org/abs/2401.13660

論文標題:SpacTor-T5: Pre-training T5 Models with Span Corruption and Replaced Token Detection

論文連結:https://arxiv.org/abs/2401.13160

論文標題:Rethinking Patch Dependence for Masked Autoencoders

論文連結:https://arxiv.org/abs/2401.14391

論文標題:Pix2gestalt: Amodal Segmentation by Synthesizing Wholes

論文連結:https://arxiv.org/abs/2401.14398

論文標題:Multimodal Pathway: Improve Transformers with Irrelevant Data from Other Modalities

論文連結:https://arxiv.org/abs/2401.14405

論文標題:EAGLE: Speculative Sampling Requires Rethinking Feature Uncertainty

論文連結:https://arxiv.org/abs/2401.15077

論文標題:MoE-LLaVA: Mixture of Experts for Large Vision-Language Models

論文連結:https://arxiv.org/abs/2401.15947

論文標題:Rephrasing the Web: A Recipe for Compute and Data-Efficient Language Modeling

論文連結: https://arxiv.org/abs/2401.16380

論文標題:KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization

論文連結:https://arxiv.org/abs/2401.18079

February Papers

論文標題:Efficient Exploration for LLMs

論文連結:https://arxiv.org/abs/2402.00396

論文標題:OLMo: Accelerating the Science of Language Models

論文連結:https://arxiv.org/abs/2402.00838

論文標題:Tiny Titans: Can Smaller Large Language Models Punch Above Their Weight in the Real World for Meeting Summarization?

論文連結:https://arxiv.org/abs/2402.00841

論文標題:Repeat After Me: Transformers are Better than State Space Models at Copying

論文連結:https://arxiv.org/abs/2402.01032

論文標題:LiPO: Listwise Preference Optimization through Learning-to-Rank

論文連結:https://arxiv.org/abs/2402.01878

論文標題:FindingEmo: An Image Dataset for Emotion Recognition in the Wild

論文連結: https://arxiv.org/abs/2402.01355

論文標題:More Agents Is All You Need

論文連結:https://arxiv.org/abs/2402.05120

論文標題:DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models

論文連結: https://arxiv.org/abs/2402.03300

論文標題:MobileVLM V2: Faster and Stronger Baseline for Vision Language Model

論文連結: https://arxiv.org/abs/2402.03766

論文標題:A Phase Transition Between Positional and Semantic Learning in a Solvable Model of Dot-Product Attention

論文連結:https://arxiv.org/abs/2402.03902

論文標題:Scaling Laws for Downstream Task Performance of Large Language Models

論文連結:https://arxiv.org/abs/2402.04177

論文標題:MOMENT: A Family of Open Time-series Foundation Models

論文連結: https://arxiv.org/abs/2402.03885

論文標題:Vision Superalignment: Weak-to-Strong Generalization for Vision Foundation Models

論文連結:https://arxiv.org/abs/2402.03749

論文標題:Self-Discover: Large Language Models Self-Compose Reasoning Structures

論文連結:https://arxiv.org/abs/2402.03620

論文標題:Grandmaster-Level Chess Without Search

論文連結: https://arxiv.org/abs/2402.04494

論文標題:Direct Language Model Alignment from Online AI Feedback

論文連結: https://arxiv.org/abs/2402.04792

論文標題:Buffer Overflow in Mixture of Experts

論文連結: https://arxiv.org/abs/2402.05526

論文標題:The Boundary of Neural Network Trainability is Fractal

論文連結: https://arxiv.org/abs/2402.06184

論文標題:ODIN: Disentangled Reward Mitigates Hacking in RLHF

論文連結: https://arxiv.org/abs/2402.07319

論文標題:Policy Improvement using Language Feedback Models

論文連結: https://arxiv.org/abs/2402.07876

論文標題:Scaling Laws for Fine-Grained Mixture of Experts

論文連結:https://arxiv.org/abs/2402.07871

論文標題:Aya Model: An Instruction Finetuned Open-Access Multilingual Language Model

論文連結: https://arxiv.org/abs/2402.07610

論文標題:Step-On-Feet Tuning: Scaling Self-Alignment of LLMs via Bootstrapping

論文連結: https://arxiv.org/abs/2402.07610

論文標題:Suppressing Pink Elephants with Direct Principle Feedback

論文連結: https://arxiv.org/abs/2402.07896

論文標題:World Model on Million-Length Video And Language With RingAttention

論文連結:https://arxiv.org/abs/2402.08268

論文標題:Mixtures of Experts Unlock Parameter Scaling for Deep RL

論文連結: https://arxiv.org/abs/2402.08609

論文標題:DoRA: Weight-Decomposed Low-Rank Adaptation

論文連結:https://arxiv.org/abs/2402.09353

論文標題:Transformers Can Achieve Length Generalization But Not Robustly

論文連結: https://arxiv.org/abs/2402.09371

論文標題:BASE TTS: Lessons From Building a Billion-Parameter Text-to-Speech Model on 100K Hours of Data

論文連結:https://arxiv.org/abs/2402.08093

論文標題:Recovering the Pre-Fine-Tuning Weights of Generative Models

論文連結: https://arxiv.org/abs/2402.10208

論文標題:Generative Representational Instruction Tuning

論文連結: https://arxiv.org/abs/2402.09906

論文標題:FinTral: A Family of GPT-4 Level Multimodal Financial Large Language Models

論文連結: https://arxiv.org/abs/2402.10986

論文標題:OneBit: Towards Extremely Low-bit Large Language Models

論文連結: https://arxiv.org/abs/2402.11295

論文標題:LongAgent: Scaling Language Models to 128k Context through Multi-Agent Collaboration

論文連結:https://arxiv.org/abs/2402.11550

論文標題:Reformatted Alignment

論文連結: https://arxiv.org/abs/2402.12219

論文標題:AnyGPT: Unified Multimodal LLM with Discrete Sequence Modeling

論文連結: https://arxiv.org/abs/2402.12226

論文標題:Towards Cross-Tokenizer Distillation: the Universal Logit Distillation Loss for LLMs

論文連結: https://arxiv.org/abs/2402.12030

論文標題:LoRA+: Efficient Low Rank Adaptation of Large Models

論文連結: https://arxiv.org/abs/2402.12354

論文標題:Neural Network Diffusion

論文連結: https://arxiv.org/abs/2402.13144

論文標題:YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information

論文連結:https://arxiv.org/abs/2402.13616

論文標題:LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens

論文連結:https://arxiv.org/abs/2402.13753

論文標題:Large Language Models for Data Annotation: A Survey

論文連結:https://arxiv.org/abs/2402.13446

論文標題:TinyLLaVA: A Framework of Small-scale Large Multimodal Models

論文連結:https://arxiv.org/abs/2402.14289

論文標題:Back to Basics: Revisiting REINFORCE Style Optimization for Learning from Human Feedback in LLMs

論文連結:https://arxiv.org/abs/2402.14740

論文標題: Genie: Generative Interactive Environments

論文連結:https://arxiv.org/abs/2402.15391

論文標題:CARTE: Pretraining and Transfer for Tabular Learning

論文連結:https://arxiv.org/abs/2402.16785

論文標題:The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits

論文連結:https://arxiv.org/abs/2402.17764

論文標題:Sora Generates Videos with Stunning Geometrical Consistency

論文連結:https://arxiv.org/abs/2402.17403

論文標題:When Scaling Meets LLM Finetuning: The Effect of Data, Model and Finetuning Method

論文連結:https://arxiv.org/abs/2402.17193

論文標題:Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models

論文連結:https://arxiv.org/abs/2402.19427

March Papers

論文標題:Learning and Leveraging World Models in Visual Representation Learning

論文連結: https://arxiv.org/abs/2403.00504

論文標題:Improving LLM Code Generation with Grammar Augmentation

論文連結: https://arxiv.org/abs/2403.01632

論文標題:The Hidden Attention of Mamba Models

論文連結: https://arxiv.org/abs/2403.01590

論文標題:Training-Free Pretrained Model Merging

論文連結: https://arxiv.org/abs/2403.01753

論文標題:Vision-RWKV: Efficient and Scalable Visual Perception with RWKV-Like Architectures

論文連結: https://arxiv.org/abs/2403.02308

論文標題:The WMDP Benchmark: Measuring and Reducing Malicious Use With Unlearning

論文連結:https://arxiv.org/abs/2403.03218

論文標題:Evolution Transformer: In-Context Evolutionary Optimization

論文連結: https://arxiv.org/abs/2403.02985

論文標題:Enhancing Vision-Language Pre-training with Rich Supervisions

論文連結: https://arxiv.org/abs/2403.03346

論文標題:Scaling Rectified Flow Transformers for High-Resolution Image Synthesis

論文連結:https://arxiv.org/abs/2403.03206

論文標題:Design2Code: How Far Are We From Automating Front-End Engineering?

論文連結: https://arxiv.org/abs/2403.03163

論文標題:ShortGPT: Layers in Large Language Models are More Redundant Than You Expect

論文連結: https://arxiv.org/abs/2403.03853

論文標題:Backtracing: Retrieving the Cause of the Query

論文連結: https://arxiv.org/abs/2403.03956

論文標題:Learning to Decode Collaboratively with Multiple Language Models

論文連結: https://arxiv.org/abs/2403.03870

論文標題:SaulLM-7B: A pioneering Large Language Model for Law

論文連結: https://arxiv.org/abs/2403.03883

論文標題:Are Language Models Puzzle Prodigies? Algorithmic Puzzles Unveil Serious Challenges in Multimodal Reasoning

論文連結: https://arxiv.org/abs/2403.03864

論文標題:3D Diffusion Policy

論文連結: https://arxiv.org/abs/2403.03954

論文標題:MedMamba: Vision Mamba for Medical Image Classification

論文連結: https://arxiv.org/abs/2403.03849

論文標題:GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection

論文連結: https://arxiv.org/abs/2403.03507

論文標題:Stop Regressing: Training Value Functions via Classification for Scalable Deep RL

論文連結: https://arxiv.org/abs/2403.03950

論文標題:How Far Are We from Intelligent Visual Deductive Reasoning?

論文連結:https://arxiv.org/abs/2403.04732

論文標題:Common 7B Language Models Already Possess Strong Math Capabilities

論文連結:https://arxiv.org/abs/2403.04706

論文標題:Gemini 1.5: Unlocking Multimodal Understanding Across Millions of Tokens of Context

論文連結: https://arxiv.org/abs/2403.05530

論文標題:Is Cosine-Similarity of Embeddings Really About Similarity?

論文連結:https://arxiv.org/abs/2403.05440

論文標題:LLM4Decompile: Decompiling Binary Code with Large Language Models

論文連結: https://arxiv.org/abs/2403.05286

論文標題:Algorithmic Progress in Language Models

論文連結:https://arxiv.org/abs/2403.05812

論文標題:Stealing Part of a Production Language Model

論文連結: https://arxiv.org/abs/2403.06634

論文標題:Chronos: Learning the Language of Time Series

論文連結:https://arxiv.org/abs/2403.07815

論文標題:Simple and Scalable Strategies to Continually Pre-train Large Language Models

論文連結:https://arxiv.org/abs/2403.08763

論文標題:Language Models Scale Reliably With Over-Training and on Downstream Tasks

論文連結:https://arxiv.org/abs/2403.08540

論文標題:BurstAttention: An Efficient Distributed Attention Framework for Extremely Long Sequences

論文連結:https://arxiv.org/abs/2403.09347

論文標題: LocalMamba: Visual State Space Model with Windowed Selective Scan

論文連結:https://arxiv.org/abs/2403.09338

論文標題:GiT: Towards Generalist Vision Transformer through Universal Language Interface

論文連結:https://arxiv.org/abs/2403.09394

論文標題:MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training

論文連結: https://arxiv.org/abs/2403.09611

論文標題: RAFT: Adapting Language Model to Domain Specific RAG

論文連結: https://arxiv.org/abs/2403.10131

論文標題:TnT-LLM: Text Mining at Scale with Large Language Models

論文連結: https://arxiv.org/abs/2403.12173

論文標題: Decoding Compressed Trust: Scrutinizing the Trustworthiness of Efficient LLMs Under Compression

論文連結: https://arxiv.org/abs/2403.15447

論文標題: PERL: Parameter Efficient Reinforcement Learning from Human Feedback

論文連結: https://arxiv.org/abs/2403.10704

論文標題:RewardBench: Evaluating Reward Models for Language Modeling

論文連結:https://arxiv.org/abs/2403.13787

論文標題:LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models

論文連結: https://arxiv.org/abs/2403.13372

論文標題:RakutenAI-7B: Extending Large Language Models for Japanese

論文連結: https://arxiv.org/abs/2403.15484

論文標題:SiMBA: Simplified Mamba-Based Architecture for Vision and Multivariate Time Series

論文連結:https://arxiv.org/abs/2403.15360

論文標題:Can Large Language Models Explore In-Context?

論文連結:https://arxiv.org/abs/2403.15371

論文標題:LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement

論文連結:https://arxiv.org/abs/2403.15042

論文標題: LLM Agent Operating System

論文連結:https://arxiv.org/abs/2403.16971

論文標題:The Unreasonable Ineffectiveness of the Deeper Layers

論文連結:https://arxiv.org/abs/2403.17887

論文標題:BioMedLM: A 2.7B Parameter Language Model Trained On Biomedical Text

論文連結:https://arxiv.org/abs/2403.18421

論文標題:ViTAR: Vision Transformer with Any Resolution

論文連結:https://arxiv.org/abs/2403.18361

論文標題:Long-form Factuality in Large Language Models

論文連結:https://arxiv.org/abs/2403.18802

論文標題:Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models

論文連結: https://arxiv.org/abs/2403.18814

論文標題:LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning

論文連結:https://arxiv.org/abs/2403.17919

論文標題:Mechanistic Design and Scaling of Hybrid Architectures

論文連結:https://arxiv.org/abs/2403.17844

論文標題:MagicLens: Self-Supervised Image Retrieval with Open-Ended Instructions

論文連結:https://arxiv.org/abs/2403.19651

論文標題:Model Stock: All We Need Is Just a Few Fine-Tuned Models

論文連結:https://arxiv.org/abs/2403.19522

April Papers

論文標題: Do Language Models Plan Ahead for Future Tokens?

論文連結: https://arxiv.org/abs/2404.00859

論文標題:Bigger is not Always Better: Scaling Properties of Latent Diffusion Models

論文連結:https://arxiv.org/abs/2404.01367

論文標題:The Fine Line: Navigating Large Language Model Pretraining with Down-streaming Capability Analysis

論文連結: https://arxiv.org/abs/2404.01204

論文標題:Diffusion-RWKV: Scaling RWKV-Like Architectures for Diffusion Models

論文連結:https://arxiv.org/abs/2404.04478

論文標題:Mixture-of-Depths: Dynamically Allocating Compute in Transformer-Based Language Models

論文連結:https://arxiv.org/abs/2404.02258

論文標題:Long-context LLMs Struggle with Long In-context Learning

論文連結:https://arxiv.org/abs/2404.02060

論文標題:Emergent Abilities in Reduced-Scale Generative Language Models

論文連結: https://arxiv.org/abs/2404.02204

論文標題:Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks

論文連結: https://arxiv.org/abs/2404.02151

論文標題:On the Scalability of Diffusion-based Text-to-Image Generation

論文連結: https://arxiv.org/abs/2404.02883

論文標題:BAdam: A Memory Efficient Full Parameter Training Method for Large Language Models

論文連結: https://arxiv.org/abs/2404.02827

論文標題:Cross-Attention Makes Inference Cumbersome in Text-to-Image Diffusion Models

論文連結: https://arxiv.org/abs/2404.02747

論文標題:Direct Nash Optimization: Teaching Language Models to Self-Improve with General Preferences

論文連結: https://arxiv.org/abs/2404.02151

論文標題:Training LLMs over Neurally Compressed Text

論文連結: https://arxiv.org/abs/2404.03626

論文標題:CantTalkAboutThis: Aligning Language Models to Stay on Topic in Dialogues

論文連結: https://arxiv.org/abs/2404.03820

論文標題:ReFT: Representation Finetuning for Language Models

論文連結: https://arxiv.org/abs/2404.03592

論文標題:Verifiable by Design: Aligning Language Models to Quote from Pre-Training Data

論文連結: https://arxiv.org/abs/2404.03862

論文標題:Sigma: Siamese Mamba Network for Multi-Modal Semantic Segmentation

論文連結: https://arxiv.org/abs/2404.04256

論文標題:AutoCodeRover: Autonomous Program Improvement

論文連結: https://arxiv.org/abs/2404.05427

論文標題:Eagle and Finch: RWKV with Matrix-Valued States and Dynamic Recurrence

論文連結: https://arxiv.org/abs/2404.05892

論文標題:CodecLM: Aligning Language Models with Tailored Synthetic Data

論文連結: https://arxiv.org/abs/2404.05875

論文標題:MiniCPM: Unveiling the Potential of Small Language Models with Scalable Training Strategies

論文連結: https://arxiv.org/abs/2404.06395

論文標題:Elephants Never Forget: Memorization and Learning of Tabular Data in Large Language Models

論文連結: https://arxiv.org/abs/2404.06209

論文標題:LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders

論文連結: https://arxiv.org/abs/2404.05961

論文標題:Adapting LLaMA Decoder to Vision Transformer

論文連結: https://arxiv.org/abs/2404.06773

論文標題: Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention

論文連結: https://arxiv.org/abs/2404.07143

論文標題:LLoCO: Learning Long Contexts Offline

論文連結: https://arxiv.org/abs/2404.07979

論文標題:JetMoE: Reaching Llama2 Performance with 0.1M Dollars

論文連結: https://arxiv.org/abs/2404.07413

論文標題: Best Practices and Lessons Learned on Synthetic Data for Language Models

論文連結: https://arxiv.org/abs/2404.07503

論文標題:Rho-1: Not All Tokens Are What You Need

論文連結: https://arxiv.org/abs/2404.07965

論文標題:Pre-training Small Base LMs with Fewer Tokens

論文連結: https://arxiv.org/abs/2404.08634

論文標題:Dataset Reset Policy Optimization for RLHF

論文連結: https://arxiv.org/abs/2404.08495

論文標題:LLM In-Context Recall is Prompt Dependent

論文連結: https://arxiv.org/abs/2404.08865

論文標題:State Space Model for New-Generation Network Alternative to Transformers: A Survey

論文連結: https://arxiv.org/abs/2404.09516

論文標題:Chinchilla Scaling: A Replication Attempt

論文連結: https://arxiv.org/abs/2404.10102

論文標題:Learn Your Reference Model for Real Good Alignment

論文連結: https://arxiv.org/abs/2404.09656

論文標題:Is DPO Superior to PPO for LLM Alignment? A Comprehensive Study

論文連結: https://arxiv.org/abs/2404.10719

論文標題:Scaling (Down) CLIP: A Comprehensive Analysis of Data, Architecture, and Training Strategies

論文連結: https://arxiv.org/abs/2404.08197

論文標題:How Faithful Are RAG Models? Quantifying the Tug-of-War Between RAG and LLMs’ Internal Prior

論文連結: https://arxiv.org/abs/2404.10198

論文標題:A Survey on Retrieval-Augmented Text Generation for Large Language Models

論文連結:https://arxiv.org/abs/2404.10981

論文標題:When LLMs are Unfit Use FastFit: Fast and Effective Text Classification with Many Classes

論文連結: https://arxiv.org/abs/2404.12365

論文標題:Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing

論文連結: https://arxiv.org/abs/2404.12253

論文標題:OpenBezoar: Small, Cost-Effective and Open Models Trained on Mixes of Instruction Data

論文連結: https://arxiv.org/abs/2404.12195

論文標題:The Instruction Hierarchy: Training LLMs to Prioritize Privileged Instructions

論文連結: https://arxiv.org/abs/2404.13208

論文標題:An Empirical Study of LLaMA3 Quantization: From LLMs to MLLMs

論文連結: https://arxiv.org/abs/2404.14047

論文標題:Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone

論文連結: https://arxiv.org/abs/2404.14219

論文標題: OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework

論文連結: https://arxiv.org/abs/2404.14619

論文標題: A Survey on Self-Evolution of Large Language Models

論文連結: https://arxiv.org/abs/2404.14662

論文標題: Multi-Head Mixture-of-Experts

論文連結: https://arxiv.org/abs/2404.15045

論文標題:NExT: Teaching Large Language Models to Reason about Code Execution

論文連結: https://arxiv.org/abs/2404.14662

論文標題:Graph Machine Learning in the Era of Large Language Models (LLMs)

論文連結: https://arxiv.org/abs/2404.14928

論文標題:Retrieval Head Mechanistically Explains Long-Context Factuality

論文連結: https://arxiv.org/abs/2404.15574

論文標題:Layer Skip: Enabling Early Exit Inference and Self-Speculative Decoding

論文連結: https://arxiv.org/abs/2404.16710

論文標題:Make Your LLM Fully Utilize the Context

論文連結:https://arxiv.org/abs/2404.16811

論文標題:LoRA Land: 310 Fine-tuned LLMs that Rival GPT-4, A Technical Report

論文連結: https://arxiv.org/abs/2405.00732

論文標題:Better & Faster Large Language Models via Multi-token Prediction

論文連結: https://arxiv.org/abs/2404.19737

論文標題:RAG and RAU: A Survey on Retrieval-Augmented Language Model in Natural Language Processing

論文連結: https://arxiv.org/abs/2404.19543

論文標題:A Primer on the Inner Workings of Transformer-based Language Models

論文連結: https://arxiv.org/abs/2405.00208

論文標題:When to Retrieve: Teaching LLMs to Utilize Information Retrieval Effectively

論文連結:https://arxiv.org/abs/2404.19705

論文標題:KAN: Kolmogorov–Arnold Networks

論文連結: https://arxiv.org/abs/2404.19756

May Papers

論文標題:Is Bigger Edit Batch Size Always Better? An Empirical Study on Model Editing with Llama-3

論文連結:https://arxiv.org/abs/2405.00664

論文標題:Self-Play Preference Optimization for Language Model Alignment

論文連結: https://arxiv.org/abs/2405.00675

論文標題:A Careful Examination of Large Language Model Performance on Grade School Arithmetic

論文連結: https://arxiv.org/abs/2405.00332

論文標題:Prometheus 2: An Open Source Language Model Specialized in Evaluating Other Language Models

論文連結: https://arxiv.org/abs/2405.01535

論文標題:What Matters When Building Vision-Language Models?

論文連結: https://arxiv.org/abs/2405.02246

論文標題:Is Flash Attention Stable?

論文連結:https://arxiv.org/abs/2405.02803

論文標題:vAttention: Dynamic Memory Management for Serving LLMs without PagedAttention

論文連結: https://arxiv.org/abs/2405.04437

論文標題:xLSTM: Extended Long Short-Term Memory

論文連結:https://arxiv.org/abs/2405.04517

論文標題:You Only Cache Once: Decoder-Decoder Architectures for Language Models

論文連結: https://arxiv.org/abs/2405.05254

論文標題:DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model

論文連結: https://arxiv.org/abs/2405.04434

論文標題:Fishing for Magikarp: Automatically Detecting Under-trained Tokens in Large Language Models

論文連結: https://arxiv.org/abs/2405.05417

論文標題:Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations?

論文連結:https://arxiv.org/abs/2405.05904

論文標題:Value Augmented Sampling for Language Model Alignment and Personalization

論文連結: https://arxiv.org/abs/2405.06639

論文標題:PHUDGE: Phi-3 as Scalable Judge

論文連結: https://arxiv.org/abs/2405.08029

論文標題:RLHF Workflow: From Reward Modeling to Online RLHF

論文連結:https://arxiv.org/abs/2405.07863

論文標題:LoRA Learns Less and Forgets Less

論文連結:https://arxiv.org/abs/2405.09673

論文標題:Xmodel-VLM: A Simple Baseline for Multimodal Vision Language Model

論文連結:https://arxiv.org/abs/2405.09215

論文標題:Chameleon: Mixed-Modal Early-Fusion Foundation Models

論文連結: https://arxiv.org/abs/2405.09818

論文標題:Towards Modular LLMs by Building and Reusing a Library of LoRAs

論文連結:https://arxiv.org/abs/2405.11157

論文標題:SLAB: Efficient Transformers with Simplified Linear Attention and Progressive Re-parameterized Batch Normalization

論文連結:https://arxiv.org/abs/2405.11582

論文標題:MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning

論文連結:https://arxiv.org/abs/2405.12130

論文標題:Attention as an RNN

論文連結:https://arxiv.org/abs/2405.13956

論文標題:Dense Connector for MLLMs

論文連結: https://arxiv.org/abs/2405.13800

論文標題:AlignGPT: Multi-modal Large Language Models with Adaptive Alignment Capability

論文連結: https://arxiv.org/abs/2405.14129

論文標題: SimPO: Simple Preference Optimization with a Reference-Free Reward

論文連結: https://arxiv.org/abs/2405.14734

論文標題:Instruction Tuning With Loss Over Instructions

論文連結:https://arxiv.org/abs/2405.14394

論文標題:The Road Less Scheduled

論文連結:https://arxiv.org/abs/2405.15682

論文標題:Stacking Your Transformers: A Closer Look at Model Growth for Efficient LLM Pre-Training

論文連結: https://arxiv.org/abs/2405.15319

論文標題:gzip Predicts Data-dependent Scaling Laws

論文連結:https://arxiv.org/abs/2405.16684

論文標題:Trans-LoRA: Towards Data-free Transferable Parameter Efficient Finetuning

論文連結: https://arxiv.org/abs/2405.17258

論文標題:VeLoRA: Memory Efficient Training using Rank-1 Sub-Token Projections

論文連結:https://arxiv.org/abs/2405.17991

論文標題:LLaMA-NAS: Efficient Neural Architecture Search for Large Language Models

論文連結: https://arxiv.org/abs/2405.18377

論文標題:Contextual Position Encoding: Learning to Count What’s Important

論文連結:https://arxiv.org/abs/2405.18719

June Papers

論文標題:Show, Don’t Tell: Aligning Language Models with Demonstrated Feedback

論文連結: https://arxiv.org/abs/2406.00888

論文標題:Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models

論文連結:https://arxiv.org/abs/2406.06563

論文標題:OLoRA: Orthonormal Low-Rank Adaptation of Large Language Models

論文連結:https://arxiv.org/abs/2406.01775

論文標題:The Geometry of Categorical and Hierarchical Concepts in Large Language Models

論文連結: https://arxiv.org/abs/2406.01506

論文標題:Towards Scalable Automated Alignment of LLMs: A Survey

論文連結:https://arxiv.org/abs/2406.01252

論文標題:Scalable MatMul-free Language Modeling

論文連結:https://arxiv.org/abs/2406.02528

論文標題:Block Transformer: Global-to-Local Language Modeling for Fast Inference

論文連結: https://arxiv.org/abs/2406.02657

論文標題:Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models

論文連結:https://arxiv.org/abs/2406.04271

論文標題:The Prompt Report: A Systematic Survey of Prompting Techniques

論文連結: https://arxiv.org/abs/2406.06608

論文標題:Transformers Need Glasses! Information Over-Squashing in Language Tasks

論文連結: https://arxiv.org/abs/2406.04267

論文標題:Are We Done with MMLU?

論文連結:https://arxiv.org/abs/2406.04127

論文標題:Step-aware Preference Optimization: Aligning Preference with Denoising Performance at Each Step

論文連結: https://arxiv.org/abs/2406.04314

論文標題:Boosting Large-scale Parallel Training Efficiency with C4: A Communication-Driven Approach

論文連結: https://arxiv.org/abs/2406.04594

論文標題:CRAG – Comprehensive RAG Benchmark

論文連結:https://arxiv.org/abs/2406.04744

論文標題:WildBench: Benchmarking LLMs with Challenging Tasks from Real Users in the Wild

論文連結: https://arxiv.org/abs/2406.04770

論文標題:Mixture-of-Agents Enhances Large Language Model Capabilities

論文連結:https://arxiv.org/abs/2406.04692

論文標題:BERTs are Generative In-Context Learners

論文連結:https://arxiv.org/abs/2406.04823

論文標題:3D-GRAND: A Million-Scale Dataset for 3D-LLMs with Better Grounding and Less Hallucination

論文連結: https://arxiv.org/abs/2406.05132

論文標題:Creativity Has Left the Chat: The Price of Debiasing Language Models

論文連結:https://arxiv.org/abs/2406.05587

論文標題:Autoregressive Model Beats Diffusion: Llama for Scalable Image Generation

論文連結: https://arxiv.org/abs/2406.06525

論文標題:Margin-aware Preference Optimization for Aligning Diffusion Models Without Reference

論文連結: https://arxiv.org/abs/2406.06424

論文標題:Husky: A Unified, Open-Source Language Agent for Multi-Step Reasoning

論文連結: https://arxiv.org/abs/2406.06469

論文標題: Turbo Sparse: Achieving LLM SOTA Performance with Minimal Activated Parameters

論文連結: https://arxiv.org/abs/2406.05955

論文標題:Self-Tuning: Instructing LLMs to Effectively Acquire New Knowledge through Self-Teaching

論文連結: https://arxiv.org/abs/2406.06326

論文標題:An Image is Worth 32 Tokens for Reconstruction and Generation

論文連結: https://arxiv.org/abs/2406.07550

論文標題:TextGrad: Automatic “Differentiation” via Text

論文連結:https://arxiv.org/abs/2406.07496

論文標題:Simple and Effective Masked Diffusion Language Models

論文連結:https://arxiv.org/abs/2406.07524

論文標題:Never Miss A Beat: An Efficient Recipe for Context Window Extension of Large Language Models with Consistent “Middle” Enhancement

論文連結:https://arxiv.org/abs/2406.07138

論文標題:Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling

論文連結: https://arxiv.org/abs/2406.07522

論文標題:Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing

論文連結: https://arxiv.org/abs/2406.08464

論文標題:What If We Recaption Billions of Web Images with LLaMA-3?

論文連結:https://arxiv.org/abs/2406.08478

論文標題:Large Language Model Unlearning via Embedding-Corrupted Prompts

論文連結:https://arxiv.org/abs/2406.07933

論文標題:Large Language Models Must Be Taught to Know What They Don’t Know

論文連結: https://arxiv.org/abs/2406.08391

論文標題:An Empirical Study of Mamba-based Language Models

論文連結:https://arxiv.org/abs/2406.07887

論文標題: Discovering Preference Optimization Algorithms with and for Large Language Models

論文連結: https://arxiv.org/abs/2406.08414

論文標題:Transformers Meet Neural Algorithmic Reasoners

論文連結: https://arxiv.org/abs/2406.09308

論文標題:MLKV: Multi-Layer Key-Value Heads for Memory Efficient Transformer Decoding

論文連結: https://arxiv.org/abs/2406.09297

論文標題:An Image is Worth More Than 16x16 Patches: Exploring Transformers on Individual Pixels

論文連結: https://arxiv.org/abs/2406.09415

論文標題:FouRA: Fourier Low Rank Adaptation

論文連結:https://arxiv.org/abs/2406.08798

論文標題: Bootstrapping Language Models with DPO Implicit Rewards

論文連結:https://arxiv.org/abs/2406.09760

論文標題:Be like a Goldfish, Don’t Memorize! Mitigating Memorization in Generative LLMs

論文連結: https://arxiv.org/abs/2406.10209

論文標題:Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs

論文連結: https://arxiv.org/abs/2406.10216

論文標題:THEANINE: Revisiting Memory Management in Long-term Conversations with Timeline-augmented Response Generation

論文連結:https://arxiv.org/abs/2406.10996

論文標題:Task Me Anything

論文連結: https://arxiv.org/abs/2406.11775

論文標題:How Do Large Language Models Acquire Factual Knowledge During Pretraining?

論文連結: https://arxiv.org/abs/2406.11813

論文標題:mDPO: Conditional Preference Optimization for Multimodal Large Language Models

論文連結: https://arxiv.org/abs/2406.11839

論文標題:Nemotron-4 340B Technical Report

論文連結:https://arxiv.org/abs/2406.11704

論文標題:DataComp-LM: In Search of the Next Generation of Training Sets for Language Models

論文連結:https://arxiv.org/abs/2406.11794

論文標題:Tokenization Falling Short: The Curse of Tokenization

論文連結: https://arxiv.org/abs/2406.11687

論文標題: DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence

論文連結: https://arxiv.org/abs/2406.11931

論文標題:Unveiling Encoder-Free Vision-Language Models

論文連結:https://arxiv.org/abs/2406.11832

論文標題:Iterative Length-Regularized Direct Preference Optimization: A Case Study on Improving 7B Language Models to GPT-4 Level

論文連結: https://arxiv.org/abs/2406.11817

論文標題:HARE: HumAn pRiors, a key to small language model Efficiency

論文連結:https://arxiv.org/abs/2406.11410

論文標題:Measuring memorization in RLHF for code completion

論文連結: https://arxiv.org/abs/2406.11715

論文標題:Self-MoE: Towards Compositional Large Language Models with Self-Specialized Experts

論文連結: https://arxiv.org/abs/2406.12034

論文標題:From RAGs to Rich Parameters: Probing How Language Models Utilize External Knowledge Over Parametric Information for Factual Queries

論文連結: https://arxiv.org/abs/2406.12824

論文標題:Judging the Judges: Evaluating Alignment and Vulnerabilities in LLMs-as-Judges

論文連結: https://arxiv.org/abs/2406.12624

論文標題:Can Long-Context Language Models Subsume Retrieval, RAG, SQL, and More?

論文連結: https://arxiv.org/abs/2406.13121

論文標題:Instruction Pre-Training: Language Models are Supervised Multitask Learners

論文連結: https://arxiv.org/abs/2406.14491

論文標題:Can LLMs Learn by Teaching? A Preliminary Study

論文連結:https://arxiv.org/abs/2406.14629

論文標題:A Tale of Trust and Accuracy: Base vs. Instruct LLMs in RAG Systems

論文連結:https://arxiv.org/abs/2406.14972

論文標題: LongRAG: Enhancing Retrieval-Augmented Generation with Long-context LLMs

論文連結: https://arxiv.org/abs/2406.15319

論文標題:MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression

論文連結: https://arxiv.org/abs/2406.14909

論文標題:Efficient Continual Pre-training by Mitigating the Stability Gap

論文連結:https://arxiv.org/abs/2406.14833

論文標題:Sparser is Faster and Less is More: Efficient Sparse Attention for Long-Range Transformers

論文連結: https://arxiv.org/abs/2406.16747

論文標題:WARP: On the Benefits of Weight Averaged Rewarded Policies

論文連結:https://arxiv.org/abs/2406.16768

論文標題:Adam-mini: Use Fewer Learning Rates To Gain More

論文連結:https://arxiv.org/abs/2406.16793

論文標題:The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale

論文連結: https://arxiv.org/abs/2406.17557

論文標題:LongIns: A Challenging Long-context Instruction-based Exam for LLMs

論文連結: https://arxiv.org/abs/2406.17588

論文標題:Following Length Constraints in Instructions

論文連結:https://arxiv.org/abs/2406.17744

論文標題:A Closer Look into Mixture-of-Experts in Large Language Models

論文連結:https://arxiv.org/abs/2406.18219

論文標題: RouteLLM: Learning to Route LLMs with Preference Data

論文連結: https://arxiv.org/abs/2406.18665

論文標題:Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs

論文連結: https://arxiv.org/abs/2406.18629

論文標題:Dataset Size Recovery from LoRA Weights

論文連結: https://arxiv.org/abs/2406.19395

論文標題:From Artificial Needles to Real Haystacks: Improving Retrieval Capabilities in LLMs by Finetuning on Synthetic Data

論文連結: https://arxiv.org/abs/2406.19292

論文標題:Changing Answer Order Can Decrease MMLU Accuracy

論文連結: https://arxiv.org/abs/2406.19470

論文標題:Direct Preference Knowledge Distillation for Large Language Models

論文連結: https://arxiv.org/abs/2406.19774

論文標題:LLM Critics Help Catch LLM Bugs

論文連結:https://arxiv.org/abs/2407.00215

論文標題:Scaling Synthetic Data Creation with 1,000,000,000 Personas

論文連結: https://arxiv.org/abs/2406.20094

July Papers

論文標題:LLM See, LLM Do: Guiding Data Generation to Target Non-Differentiable Objectives

論文連結:https://arxiv.org/abs/2407.01490

論文標題:Searching for Best Practices in Retrieval-Augmented Generation

論文連結:https://arxiv.org/abs/2407.01219

論文標題:Let the Expert Stick to His Last: Expert-Specialized Fine-Tuning for Sparse Architectural Large Language Models

論文連結:https://arxiv.org/abs/2407.01906

論文標題:Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion

論文連結:https://arxiv.org/abs/2407.01392

論文標題:Eliminating Position Bias of Language Models: A Mechanistic Approach

論文連結:https://arxiv.org/abs/2407.01100

論文標題:MInference 1.0: Accelerating Pre-filling for Long-Context LLMs via Dynamic Sparse Attention

論文連結:https://arxiv.org/abs/2407.02490

論文標題:TokenPacker: Efficient Visual Projector for Multimodal LLM

論文連結:https://arxiv.org/abs/2407.02392

論文標題:Reasoning in Large Language Models: A Geometric Perspective

論文連結:https://arxiv.org/abs/2407.02678

論文標題:RankRAG: Unifying Context Ranking with Retrieval-Augmented Generation in LLMs

論文連結:https://arxiv.org/abs/2407.02485

論文標題:AgentInstruct: Toward Generative Teaching with Agentic Flows

論文連結:https://arxiv.org/abs/2407.03502

論文標題:HEMM: Holistic Evaluation of Multimodal Foundation Models

論文連結:https://arxiv.org/abs/2407.03418

論文標題:Mixture of A Million Experts

論文連結:https://arxiv.org/abs/2407.04153

論文標題:Learning to (Learn at Test Time): RNNs with Expressive Hidden States

論文連結:https://arxiv.org/abs/2407.04620

論文標題:Vision Language Models Are Blind

論文連結:https://arxiv.org/abs/2407.06581

論文標題:Self-Recognition in Language Models

論文連結:https://arxiv.org/abs/2407.06946

論文標題:Inference Performance Optimization for Large Language Models on CPUs

論文連結:https://arxiv.org/abs/2407.07304

論文標題:Gradient Boosting Reinforcement Learning

論文連結:https://arxiv.org/abs/2407.08250

論文標題:FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-precision

論文連結:https://arxiv.org/abs/2407.08608

論文標題:SpreadsheetLLM: Encoding Spreadsheets for Large Language Models

論文連結:https://arxiv.org/abs/2407.09025

論文標題:New Desiderata for Direct Preference Optimization

論文連結:https://arxiv.org/abs/2407.09072

論文標題:Context Embeddings for Efficient Answer Generation in RAG

論文連結:https://arxiv.org/abs/2407.09252

論文標題:Qwen2 Technical Report

論文連結:https://arxiv.org/abs/2407.10671

論文標題:The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism

論文連結:https://arxiv.org/abs/2407.10457

論文標題:From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients

論文連結:https://arxiv.org/abs/2407.11239

論文標題:GoldFinch: High Performance RWKV/Transformer Hybrid with Linear Pre-Fill and Extreme KV-Cache Compression

論文連結:https://arxiv.org/abs/2407.12077

論文標題:Scaling Diffusion Transformers to 16 Billion Parameters

論文連結:https://arxiv.org/abs/2407.11633

論文標題:NeedleBench: Can LLMs Do Retrieval and Reasoning in 1 Million Context Window?

論文連結:https://arxiv.org/abs/2407.11963

論文標題:Patch-Level Training for Large Language Models

論文連結:https://arxiv.org/abs/2407.12665

論文標題:LMMs-Eval: Reality Check on the Evaluation of Large Multimodal Models

論文連結:https://arxiv.org/abs/2407.12772

論文標題:A Survey of Prompt Engineering Methods in Large Language Models for Different NLP Tasks

論文連結:https://arxiv.org/abs/2407.12994

論文標題:Spectra: A Comprehensive Study of Ternary, Quantized, and FP16 Language Models

論文連結:https://arxiv.org/abs/2407.12327

論文標題:Attention Overflow: Language Model Input Blur during Long-Context Missing Items Recommendation

論文連結:https://arxiv.org/abs/2407.13481

論文標題:Weak-to-Strong Reasoning

論文連結:https://arxiv.org/abs/2407.13647

論文標題:Understanding Reference Policies in Direct Preference Optimization

論文連結:https://arxiv.org/abs/2407.13709

論文標題:Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies

論文連結:https://arxiv.org/abs/2407.13623

論文標題:BOND: Aligning LLMs with Best-of-N Distillation

論文連結:https://arxiv.org/abs/2407.14622

論文標題:Compact Language Models via Pruning and Knowledge Distillation

論文連結:https://arxiv.org/abs/2407.14679

論文標題:LazyLLM: Dynamic Token Pruning for Efficient Long Context LLM Inference

論文連結:https://arxiv.org/abs/2407.14057

論文標題:Mini-Sequence Transformer: Optimizing Intermediate Memory for Long Sequences Training

論文連結:https://arxiv.org/abs/2407.15892

論文標題:DDK: Distilling Domain Knowledge for Efficient Large Language Models

論文連結:https://arxiv.org/abs/2407.16154

論文標題:Generation Constraint Scaling Can Mitigate Hallucination

論文連結:https://arxiv.org/abs/2407.16908

論文標題:Retrieval Augmented Generation or Long-Context LLMs? A Comprehensive Study and Hybrid Approach

論文連結:https://arxiv.org/abs/2407.16833

論文標題:Course-Correction: Safety Alignment Using Synthetic Preferences

論文連結:https://arxiv.org/abs/2407.16637

論文標題:Data Mixture Inference: What do BPE Tokenizers Reveal about their Training Data?

論文連結:https://arxiv.org/abs/2407.16607

論文標題:Meta-Rewarding Language Models: Self-Improving Alignment with LLM-as-a-Meta-Judge

論文連結:https://arxiv.org/abs/2407.19594

論文標題:Improving Retrieval Augmented Language Model with Self-Reasoning

論文連結:https://arxiv.org/abs/2407.19813

論文標題:Apple Intelligence Foundation Language Models

論文連結:https://arxiv.org/abs/2407.21075

論文標題:ThinK: Thinner Key Cache by Query-Driven Pruning

論文連結:https://arxiv.org/abs/2407.21018

論文標題:The Llama 3 Herd of Models

論文連結:https://arxiv.org/abs/2407.21783

論文標題:Gemma 2: Improving Open Language Models at a Practical Size

論文連結:https://arxiv.org/abs/2408.00118

August Papers

論文標題:SAM 2: Segment Anything in Images and Videos

論文連結:https://arxiv.org/abs/2408.00714

論文標題:POA: Pre-training Once for Models of All Sizes

論文連結:https://arxiv.org/abs/2408.01031

論文標題:RAGEval: Scenario Specific RAG Evaluation Dataset Generation Framework

論文連結:https://arxiv.org/abs/2408.01262

論文標題:A Survey of Mamba

論文連結:https://arxiv.org/abs/2408.01129

論文標題:MiniCPM-V: A GPT-4V Level MLLM on Your Phone

論文連結:https://arxiv.org/abs/2408.01800

論文標題:RAG Foundry: A Framework for Enhancing LLMs for Retrieval Augmented Generation

論文連結:https://arxiv.org/abs/2408.02545

論文標題:Self-Taught Evaluators

論文連結:https://arxiv.org/abs/2408.02666

論文標題:BioMamba: A Pre-trained Biomedical Language Representation Model Leveraging Mamba

論文連結:https://arxiv.org/abs/2408.02600

論文標題:EXAONE 3.0 7.8B Instruction Tuned Language Model

論文連結:https://arxiv.org/abs/2408.03541

論文標題:1.5-Pints Technical Report: Pretraining in Days, Not Months – Your Language Model Thrives on Quality Data

論文連結:https://arxiv.org/abs/2408.03506

論文標題:Conversational Prompt Engineering

論文連結:https://arxiv.org/abs/2408.04560

論文標題:Trans-Tokenization and Cross-lingual Vocabulary Transfers: Language Adaptation of LLMs for Low-Resource NLP

論文連結:https://arxiv.org/abs/2408.04303

論文標題:The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery

論文連結:https://arxiv.org/abs/2408.06292

論文標題:Hermes 3 Technical Report

論文連結:https://arxiv.org/abs/2408.12570

論文標題:Customizing Language Models with Instance-wise LoRA for Sequential Recommendation

論文連結:https://arxiv.org/abs/2408.10159

論文標題:Enhancing Robustness in Large Language Models: Prompting for Mitigating the Impact of Irrelevant Information

論文連結:https://arxiv.org/abs/2408.10615

論文標題:To Code, or Not To Code? Exploring Impact of Code in Pre-training

論文連結:https://arxiv.org/abs/2408.10914

論文標題:LLM Pruning and Distillation in Practice: The Minitron Approach

論文連結:https://arxiv.org/abs/2408.11796

論文標題:Jamba-1.5: Hybrid Transformer-Mamba Models at Scale

論文連結:https://arxiv.org/abs/2408.12570

論文標題:Controllable Text Generation for Large Language Models: A Survey

論文連結:https://arxiv.org/abs/2408.12599

論文標題:Multi-Layer Transformers Gradient Can be Approximated in Almost Linear Time

論文連結:https://arxiv.org/abs/2408.13233

論文標題:A Practitioner's Guide to Continual Multimodal Pretraining

論文連結:https://arxiv.org/abs/2408.14471

論文標題:Building and better understanding vision-language models: insights and future directions

論文連結:https://arxiv.org/abs/2408.12637

論文標題:CURLoRA: Stable LLM Continual Fine-Tuning and Catastrophic Forgetting Mitigation

論文連結:https://arxiv.org/abs/2408.14572

論文標題:The Mamba in the Llama: Distilling and Accelerating Hybrid Models

論文連結:https://arxiv.org/abs/2408.15237

論文標題:ReMamba: Equip Mamba with Effective Long-Sequence Modeling

論文連結:https://arxiv.org/abs/2408.15496

論文標題:Smaller, Weaker, Yet Better: Training LLM Reasoners via Compute-Optimal Sampling

論文連結:https://arxiv.org/abs/2408.16737

論文標題:LongRecipe: Recipe for Efficient Long Context Generalization in Large Language Models

論文連結:https://arxiv.org/abs/2409.00509

September Papers

論文標題:OLMoE: Open Mixture-of-Experts Language Models

論文連結:https://arxiv.org/abs/2409.02060

論文標題:In Defense of RAG in the Era of Long-Context Language Models

論文連結:https://arxiv.org/abs/2409.01666

論文標題:Attention Heads of Large Language Models: A Survey

論文連結:https://arxiv.org/abs/2409.03752

論文標題:LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA

論文連結:https://arxiv.org/abs/2409.02897

論文標題:How Do Your Code LLMs Perform? Empowering Code Instruction Tuning with High-Quality Data

論文連結:https://arxiv.org/abs/2409.03810

論文標題:Theory, Analysis, and Best Practices for Sigmoid Self-Attention

論文連結:https://arxiv.org/abs/2409.04431

論文標題:LLaMA-Omni: Seamless Speech Interaction with Large Language Models

論文連結:https://arxiv.org/abs/2409.06666

論文標題:What is the Role of Small Models in the LLM Era: A Survey

論文連結:https://arxiv.org/abs/2409.06857

論文標題:Policy Filtration in RLHF to Fine-Tune LLM for Code Generation

論文連結:https://arxiv.org/abs/2409.06957

論文標題:RetrievalAttention: Accelerating Long-Context LLM Inference via Vector Retrieval

論文連結:https://arxiv.org/abs/2409.10516

論文標題:Qwen2.5-Math Technical Report: Toward Mathematical Expert Model via Self-Improvement

論文連結:https://arxiv.org/abs/2409.12122

論文標題:Qwen2.5-Coder Technical Report

論文連結:https://arxiv.org/abs/2409.12186

論文標題:Instruction Following without Instruction Tuning

論文連結:https://arxiv.org/abs/2409.14254

論文標題:Is Preference Alignment Always the Best Option to Enhance LLM-Based Translation? An Empirical Analysis

論文連結:https://arxiv.org/abs/2409.20059

論文標題:The Perfect Blend: Redefining RLHF with Mixture of Judges

論文連結:https://arxiv.org/abs/2409.20370

October Papers

論文標題:Addition is All You Need for Energy-efficient Language Models

論文連結:https://arxiv.org/abs/2410.00907

論文標題:Quantifying Generalization Complexity for Large Language Models

論文連結:https://arxiv.org/abs/2410.01769

論文標題:When a language model is optimized for reasoning, does it still show embers of autoregression? An analysis of OpenAI o1

論文連結:https://arxiv.org/abs/2410.01792

論文標題:Were RNNs All We Needed?

論文連結:https://arxiv.org/abs/2410.01201

論文標題:Selective Attention Improves Transformer

論文連結:https://arxiv.org/abs/2410.02703

論文標題:LLMs Know More Than They Show: On the Intrinsic Representation of LLM Hallucinations

論文連結:https://arxiv.org/abs/2410.02707

論文標題:LLaVA-Critic: Learning to Evaluate Multimodal Models

論文連結:https://arxiv.org/abs/2410.02712

論文標題:Differential Transformer

論文連結:https://arxiv.org/abs/2410.05258

論文標題:GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models

論文連結:https://arxiv.org/abs/2410.05229

論文標題:ARIA: An Open Multimodal Native Mixture-of-Experts Model

論文連結:https://arxiv.org/abs/2410.05993

論文標題:O1 Replication Journey: A Strategic Progress Report – Part 1

論文連結:https://arxiv.org/abs/2410.18982

論文標題:Long-Context LLMs Meet RAG: Overcoming Challenges for Long Inputs in RAG

論文連結:https://arxiv.org/abs/2410.05983

論文標題:From Generalist to Specialist: Adapting Vision Language Models via Task-Specific Visual Instruction Tuning

論文連結:https://arxiv.org/abs/2410.06456

論文標題:KV Prediction for Improved Time to First Token

論文連結:https://arxiv.org/abs/2410.08391

論文標題:Baichuan-Omni Technical Report

論文連結:https://arxiv.org/abs/2410.08565

論文標題:MMIE: Massive Multimodal Interleaved Comprehension Benchmark for Large Vision-Language Models

論文連結:https://arxiv.org/abs/2410.10139

論文標題:LOKI: A Comprehensive Synthetic Data Detection Benchmark using Large Multimodal Models

論文連結:https://arxiv.org/abs/2410.09732

論文標題:AFlow: Automating Agentic Workflow Generation

論文連結:https://arxiv.org/abs/2410.10762

論文標題:Toward General Instruction-Following Alignment for Retrieval-Augmented Generation

論文連結:https://arxiv.org/abs/2410.09584

論文標題:Pre-training Distillation for Large Language Models: A Design Space Exploration

論文連結:https://arxiv.org/abs/2410.16215

論文標題:MIA-DPO: Multi-Image Augmented Direct Preference Optimization For Large Vision-Language Models

論文連結:https://arxiv.org/abs/2410.17637

論文標題:Scalable Ranked Preference Optimization for Text-to-Image Generation

論文連結:https://arxiv.org/abs/2410.18013

論文標題:Scaling Diffusion Language Models via Adaptation from Autoregressive Models

論文連結:https://arxiv.org/abs/2410.17891

論文標題:Hybrid Preferences: Learning to Route Instances for Human vs. AI Feedback

論文連結:https://arxiv.org/abs/2410.19133

論文標題:Counting Ability of Large Language Models and Impact of Tokenization

論文連結:https://arxiv.org/abs/2410.19730

論文標題:A Survey of Small Language Models

論文連結:https://arxiv.org/abs/2410.20011

論文標題:Accelerating Direct Preference Optimization with Prefix Sharing

論文連結:https://arxiv.org/abs/2410.20305

論文標題:Mind Your Step (by Step): Chain-of-Thought can Reduce Performance on Tasks where Thinking Makes Humans Worse

論文連結:https://arxiv.org/abs/2410.21333

論文標題:LongReward: Improving Long-context Large Language Models with AI Feedback

論文連結:https://arxiv.org/abs/2410.21252

論文標題:ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference

論文連結:https://arxiv.org/abs/2410.21465

論文標題:Beyond Text: Optimizing RAG with Multimodal Inputs for Industrial Applications

論文連結:https://arxiv.org/abs/2410.21943

論文標題:CORAL: Benchmarking Multi-turn Conversational Retrieval-Augmentation Generation

論文連結:https://arxiv.org/abs/2410.23090

論文標題:What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective

論文連結:https://arxiv.org/abs/2410.23743

論文標題:GPT or BERT: why not both?

論文連結:https://arxiv.org/abs/2410.24159

論文標題:Language Models can Self-Lengthen to Generate Long Texts

論文連結:https://arxiv.org/abs/2410.23933

November Papers

論文標題:Adding Error Bars to Evals: A Statistical Approach to Language Model Evaluations

論文連結:https://arxiv.org/abs/2411.00640

論文標題:Adapting While Learning: Grounding LLMs for Scientific Problems with Intelligent Tool Usage Adaptation

論文連結:https://arxiv.org/abs/2411.00412

論文標題:Multi-expert Prompting Improves Reliability, Safety, and Usefulness of Large Language Models

論文連結:https://arxiv.org/abs/2411.00492

論文標題:Sample-Efficient Alignment for LLMs

論文連結:https://arxiv.org/abs/2411.01493

論文標題:A Comprehensive Survey of Small Language Models in the Era of Large Language Models: Techniques, Enhancements, Applications, Collaboration with LLMs, and Trustworthiness

論文連結:https://arxiv.org/abs/2411.03350

論文標題:"Give Me BF16 or Give Me Death"? Accuracy-Performance Trade-Offs in LLM Quantization

論文連結:https://arxiv.org/abs/2411.02355

論文標題:Parameter-Efficient Fine-Tuning of Large Language Models for Unit Test Generation: An Empirical Study

論文連結:https://arxiv.org/abs/2411.02462

論文標題:HtmlRAG: HTML is Better Than Plain Text for Modeling Retrieved Knowledge in RAG Systems

論文連結:https://arxiv.org/abs/2411.02959

論文標題:Both Text and Images Leaked! A Systematic Analysis of Multimodal LLM Data Contamination

論文連結:https://arxiv.org/abs/2411.03823

論文標題:Language Models are Hidden Reasoners: Unlocking Latent Reasoning Capabilities via Self-Rewarding

論文連結:https://arxiv.org/abs/2411.04282

論文標題:Number Cookbook: Number Understanding of Language Models and How to Improve It

論文連結:https://arxiv.org/abs/2411.03766

論文標題:Mixture-of-Transformers: A Sparse and Scalable Architecture for Multi-Modal Foundation Models

論文連結:https://arxiv.org/abs/2411.04996

論文標題:BitNet a4.8: 4-bit Activations for 1-bit LLMs

論文連結:https://arxiv.org/abs/2411.04965

論文標題:Scaling Laws for Precision

論文連結:https://arxiv.org/abs/2411.04330

論文標題:Energy Efficient Protein Language Models: Leveraging Small Language Models with LoRA for Controllable Protein Generation

論文連結:https://arxiv.org/abs/2411.05966

論文標題:Balancing Pipeline Parallelism with Vocabulary Parallelism

論文連結:https://arxiv.org/abs/2411.05288

論文標題:Toward Optimal Search and Retrieval for RAG

論文連結:https://arxiv.org/abs/2411.07396

論文標題:Large Language Models Can Self-Improve in Long-context Reasoning

論文連結:https://arxiv.org/abs/2411.08147

論文標題:Stronger Models are NOT Stronger Teachers for Instruction Tuning

論文連結:https://arxiv.org/abs/2411.07133

論文標題:Direct Preference Optimization Using Sparse Feature-Level Constraints

論文連結:https://arxiv.org/abs/2411.07618

論文標題:Cut Your Losses in Large-Vocabulary Language Models

論文連結:https://arxiv.org/abs/2411.09009

論文標題:Does Prompt Formatting Have Any Impact on LLM Performance?

論文連結:https://arxiv.org/abs/2411.10541

論文標題:SymDPO: Boosting In-Context Learning of Large Multimodal Models with Symbol Demonstration Direct Preference Optimization

論文連結:https://arxiv.org/abs/2411.11909

論文標題:SageAttention2 Technical Report: Accurate 4 Bit Attention for Plug-and-play Inference Acceleration

論文連結:https://arxiv.org/abs/2411.10958

論文標題:Bi-Mamba: Towards Accurate 1-Bit State Space Models

論文連結:https://arxiv.org/abs/2411.11843

論文標題:RedPajama: an Open Dataset for Training Large Language Models

論文連結:https://arxiv.org/abs/2411.12372

論文標題:Hymba: A Hybrid-head Architecture for Small Language Models

論文連結:https://arxiv.org/abs/2411.13676

論文標題:Loss-to-Loss Prediction: Scaling Laws for All Datasets

論文連結:https://arxiv.org/abs/2411.12925

論文標題:When Precision Meets Position: BFloat16 Breaks Down RoPE in Long-Context Training

論文連結:https://arxiv.org/abs/2411.13476

論文標題:Multimodal Autoregressive Pre-training of Large Vision Encoders

論文連結:https://arxiv.org/abs/2411.14402

論文標題:Natural Language Reinforcement Learning

論文連結:https://arxiv.org/abs/2411.14251

論文標題:Large Multi-modal Models Can Interpret Features in Large Multi-modal Models

論文連結:https://arxiv.org/abs/2411.14982

論文標題:TÜLU 3: Pushing Frontiers in Open Language Model Post-Training

論文連結:https://arxiv.org/abs/2411.15124

論文標題:MME-Survey: A Comprehensive Survey on Evaluation of Multimodal LLMs

論文連結:https://arxiv.org/abs/2411.15296

論文標題:LLMs Do Not Think Step-by-step In Implicit Reasoning

論文連結:https://arxiv.org/abs/2411.15862

論文標題:O1 Replication Journey – Part 2: Surpassing O1-preview through Simple Distillation, Big Progress or Bitter Lesson?

論文連結:https://arxiv.org/abs/2411.16489

論文標題:Star Attention: Efficient LLM Inference over Long Sequences

論文連結:https://arxiv.org/abs/2411.17116

論文標題:Low-Bit Quantization Favors Undertrained LLMs: Scaling Laws for Quantized LLMs with 100T Training Tokens

論文連結:https://arxiv.org/abs/2411.17691

論文標題:Rethinking Token Reduction in MLLMs: Towards a Unified Paradigm for Training-Free Acceleration

論文連結:https://arxiv.org/abs/2411.17686

論文標題:Reverse Thinking Makes LLMs Stronger Reasoners

論文連結:https://arxiv.org/abs/2411.19865

論文標題:Critical Tokens Matter: Token-Level Contrastive Estimation Enhances LLM's Reasoning Capability

論文連結:https://arxiv.org/abs/2411.19943

December Papers

論文標題:Designing Scale-Wise Transformers for Text-to-Image Synthesis

論文連結:https://arxiv.org/abs/2412.01819

論文標題:X-Prompt: Towards Universal In-Context Image Generation in Auto-Regressive Vision Language Foundation Models

論文連結:https://arxiv.org/abs/2412.01824

論文標題:Free Process Rewards without Process Labels

論文連結:https://arxiv.org/abs/2412.01981

論文標題:Scaling Image Tokenizers with Grouped Spherical Quantization

論文連結:https://arxiv.org/abs/2412.02632

論文標題:RARE: Retrieval-Augmented Reasoning Enhancement for Large Language Models

論文連結:https://arxiv.org/abs/2412.02830

論文標題:Perception Tokens Enhance Visual Reasoning in Multimodal Language Models

論文連結:https://arxiv.org/abs/2412.03548

論文標題:Evaluating Language Models as Synthetic Data Generators

論文連結:https://arxiv.org/abs/2412.03679

論文標題:Best-of-N Jailbreaking

論文連結:https://arxiv.org/abs/2412.03556

論文標題:PaliGemma 2: A Family of Versatile VLMs for Transfer

論文連結:https://arxiv.org/abs/2412.03555

論文標題:VisionZip: Longer is Better but Not Necessary in Vision Language Models

論文連結:https://arxiv.org/abs/2412.04467

論文標題:Evaluating and Aligning CodeLLMs on Human Preference

論文連結:https://arxiv.org/abs/2412.05210

論文標題:MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale

論文連結:https://arxiv.org/abs/2412.05237

論文標題:Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling

論文連結:https://arxiv.org/abs/2412.05271

論文標題:LLMs-as-Judges: A Comprehensive Survey on LLM-based Evaluation Methods

論文連結:https://arxiv.org/abs/2412.05579

論文標題:Does RLHF Scale? Exploring the Impacts From Data, Model, and Method

論文連結:https://arxiv.org/abs/2412.06000

論文標題:Unraveling the Complexity of Memory in RL Agents: An Approach for Classification and Evaluation

論文連結:https://arxiv.org/abs/2412.06531

論文標題:Training Large Language Models to Reason in a Continuous Latent Space

論文連結:https://arxiv.org/abs/2412.06769

論文標題:AutoReason: Automatic Few-Shot Reasoning Decomposition

論文連結:https://arxiv.org/abs/2412.06975

論文標題:Large Concept Models: Language Modeling in a Sentence Representation Space

論文連結:https://arxiv.org/abs/2412.08821

論文標題:Phi-4 Technical Report

論文連結:https://arxiv.org/abs/2412.08905

論文標題:Byte Latent Transformer: Patches Scale Better Than Tokens

論文連結:https://arxiv.org/abs/2412.09871

論文標題:SCBench: A KV Cache-Centric Analysis of Long-Context Methods

論文連結:https://arxiv.org/abs/2412.10319

論文標題:Cultural Evolution of Cooperation among LLM Agents

論文連結:https://arxiv.org/abs/2412.10270

論文標題:DeepSeek-VL2: Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding

論文連結:https://arxiv.org/abs/2412.10302

論文標題:No More Adam: Learning Rate Scaling at Initialization is All You Need

論文連結:https://arxiv.org/abs/2412.11768

論文標題:Precise Length Control in Large Language Models

論文連結:https://arxiv.org/abs/2412.11937

論文標題:The Open Source Advantage in Large Language Models (LLMs)

論文連結:https://arxiv.org/abs/2412.12004

論文標題:A Survey of Mathematical Reasoning in the Era of Multimodal Large Language Model: Benchmark, Method & Challenges

論文連結:https://arxiv.org/abs/2412.11936

論文標題:Are Your LLMs Capable of Stable Reasoning?

論文連結:https://arxiv.org/abs/2412.13147

論文標題:LLM Post-Training Recipes, Improving Reasoning in LLMs

論文連結:https://arxiv.org/abs/2412.14135

論文標題:Hansel: Output Length Controlling Framework for Large Language Models

論文連結:https://arxiv.org/abs/2412.14033

論文標題:Mind Your Theory: Theory of Mind Goes Deeper Than Reasoning

論文連結:https://arxiv.org/abs/2412.1363

論文標題:Alignment Faking in Large Language Models

論文連結:https://arxiv.org/abs/2412.14093

論文標題:SCOPE: Optimizing Key-Value Cache Compression in Long-Context Generation

論文連結:https://arxiv.org/abs/2412.13649

論文標題:LongBench v2: Towards Deeper Understanding and Reasoning on Realistic Long-Context Multitasks

論文連結:https://arxiv.org/abs/2412.15204

論文標題:Offline Reinforcement Learning for LLM Multi-Step Reasoning

論文連結:https://arxiv.org/abs/2412.16145

論文標題:Mulberry: Empowering MLLM with O1-like Reasoning and Reflection via Collective Monte Carlo Tree Search

論文連結:https://arxiv.org/abs/2412.18319
