A deep dive into the frontier of mathematical LLMs: from the current SFT and GRPO recipes to the introduction of formal mathematics (Lean), a dissection of the AlphaProof workflow, symbolic reasoning pruning (LIPS), and the evaluation challenges of autoformalization.
A comprehensive deep dive into Tool Agents: from Toolken vocabulary injection and CodeAct execution, to DocPrompting, Toolformer self-learning, visual Set-of-Mark grounding, and autonomous environment exploration.
A comprehensive deep dive into Coding Agents, detailing fine-grained evaluation benchmarks (SWE-bench, LiveCodeBench), agentic frameworks (SWE-agent vs. Agentless), and the sophisticated mechanisms of code localization, code efficiency, and LLM safety.
From the definitions of Agents and Language Agents and their three generations, through memory (episodic/semantic/procedural, RAG, HippoRAG), reasoning (ReAct's interleaving of thought and action), and planning (reactive, tree search, world models, WebDreamer), to a unified picture and the Bitter Lesson.
From CoT and analogical prompting to self-consistency, ORM/PRM verification, tree-of-thoughts, multi-round self-reflection and token budget allocation, with the Bitter Lesson in mind.
From reward design, policy gradients, and PPO to RLHF/RLVR, then inference-time sampling and verification, Archon architecture search, and when to prefer RL over test-time scaling.
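As a concrete taste of the test-time scaling entries above, self-consistency reduces to a majority vote over the final answers of independently sampled reasoning chains. A minimal sketch (the `self_consistency` helper and the sample answers are illustrative, not taken from any of the posts):

```python
from collections import Counter

def self_consistency(answers):
    """Majority-vote over sampled chain-of-thought answers.

    `answers` is a list of final answers extracted from independently
    sampled reasoning chains; the most frequent answer wins.
    """
    counts = Counter(answers)
    best, _ = counts.most_common(1)[0]
    return best

# Five sampled chains might yield these final answers:
print(self_consistency(["42", "42", "41", "42", "40"]))  # prints 42
```

In practice the chains are sampled from the model at a nonzero temperature, and the answer-extraction step (parsing the final number or option from each chain) does most of the heavy lifting.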