id stringlengths 10 10 | url stringlengths 42 42 | title stringlengths 5 214 | average_rating float64 -1 8.5 | average_confidence float64 -1 5 | ratings listlengths 0 9 | confidences listlengths 0 9 | reviewers_num int64 0 9 | keywords listlengths 1 42 | abstract stringlengths 26 4.31k | tldr stringlengths 0 250 | primary_area stringclasses 21 values | pdf_url stringlengths 40 40 | submission_date timestamp[s]date 2025-09-01 19:59:51 2025-09-20 20:18:08 | total_reviews int64 0 18 | reviews listlengths 0 9 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
gdjCL4vQM6 | https://openreview.net/forum?id=gdjCL4vQM6 | LIGHT: LLM-guided Graph Expert Routing for Semi-supervised Domain Generalization | 3.333333 | 4 | [
4,
4,
2
] | [
3,
5,
4
] | 3 | [
"GNNs",
"OOD Generalization",
"Domain Generalization",
"Semi-supervised Learning",
"LLMs",
"MoE"
] | Although graph neural networks (GNNs) have shown remarkable performance in graph machine learning, their effectiveness in practice often suffers from realistic challenges including distribution shifts and label scarcity. Towards this end, this paper studies the problem of semi-supervised domain generalization, which aims to improve the performance of GNNs on unseen graphs using both labeled and unlabeled data. We propose a novel approach named $\underline{L}$LM-Gu$\underline{i}$ded $\underline{G}$rap$\underline{h}$ Expert Rou$\underline{t}$ing (LIGHT) for semi-supervised domain generalization. The core idea of LIGHT is to distill the knowledge from LLM-as-a-judge to determine context-aware routing weights for a multi-hop graph mixture-of-experts framework. In particular, our LIGHT employs diverse graph experts that explore neighborhood information at varying depths. More importantly, we leverage LLMs to provide judgments of which graph experts are most reliable for crucial nodes, which provides context-aware routing guidance with high generalizability for knowledge distillation. To further address label scarcity, we introduce an expert-aware dynamic pseudo-labeling strategy that selects reliable nodes for additional training. Extensive experiments on various benchmark datasets validate the effectiveness of the proposed LIGHT in comparison with competing approaches. Our source code can be found at $\url{https://anonymous.4open.science/r/LIGHT-A817}$. | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=gdjCL4vQM6 | 2025-09-19T20:27:14 | 3 | [
{
"id": "DVIzKzkAzN",
"forum": "gdjCL4vQM6",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18186/Reviewer_1qUe",
"reviewer_name": "Reviewer_1qUe",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This ... | |
6mpv9kG81P | https://openreview.net/forum?id=6mpv9kG81P | Domain-Specific Data Synthesis for LLMs via Minimal Sufficient Representation Learning | 4.666667 | 3.666667 | [
6,
2,
6
] | [
4,
3,
4
] | 3 | [
"synthetic data",
"representation learning"
] | Large Language Models have demonstrated remarkable progress in general-purpose capabilities and can achieve strong performance in specific domains through fine-tuning on domain-specific data. However, acquiring high-quality data for target domains remains a significant challenge. Existing data synthesis approaches follow a
deductive paradigm, heavily relying on explicit domain descriptions expressed in natural language and careful prompt engineering, limiting their applicability in real-world scenarios where domains are difficult to describe or formally articulate. In this work, we tackle the underexplored problem of domain-specific data synthesis through an inductive paradigm, where the target domain is defined only through a set of reference examples, particularly when domain characteristics are difficult to articulate in natural language. We propose a novel framework, DOMINO, that learns a minimal sufficient domain representation from reference samples and leverages it to guide the generation of domain-aligned synthetic data. DOMINO integrates prompt tuning with a contrastive disentanglement objective to separate domain-level patterns from sample-specific noise, mitigating overfitting while preserving core domain characteristics. Theoretically, we prove that DOMINO expands the support of the synthetic data distribution, ensuring greater diversity. Empirically, on challenging
coding benchmarks where domain definitions are implicit, fine-tuning on data synthesized by DOMINO improves Pass@1 accuracy by up to 4.63\% over strong, instruction-tuned backbones, demonstrating its effectiveness and robustness. This work establishes a new paradigm for domain-specific data synthesis, enabling practical and scalable domain adaptation without manual prompt design or natural language domain specifications. | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=6mpv9kG81P | 2025-09-19T14:15:58 | 3 | [
{
"id": "r8LUuHMMUc",
"forum": "6mpv9kG81P",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16272/Reviewer_tYar",
"reviewer_name": "Reviewer_tYar",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The p... | |
VJqfoHU4Op | https://openreview.net/forum?id=VJqfoHU4Op | XDex: Learning Cross-Embodiment Dexterous Grasping with 1000 Hands | 4.5 | 3.75 | [
4,
6,
2,
6
] | [
4,
3,
4,
4
] | 4 | [
"Dexterous Grasping",
"Cross-embodiment"
] | Synthesizing dexterous grasps across various hands remains a fundamental challenge in robotic manipulation due to morphology gaps in geometry, topology, and kinematics. We hypothesize that scaling the diversity and number of hand embodiments improves generalization to unseen hands. To this end, we introduce XDex, a framework trained on the largest cross-embodiment grasping dataset, which we built using 1,000 diverse hands. XDex features an embodiment transformer that jointly encodes hand geometry and topology to learn from this large-scale dataset. Additionally, we enforce grasp consistency across embodiments by training on a paired grasping dataset and introducing a retargeting loss. The paired data are generated by first synthesizing grasps for a source hand and then translating them to diverse target hands. XDex significantly outperforms prior methods in grasp quality, consistency, and diversity, and demonstrates strong generalization to unseen hands in real-world settings. | applications to robotics, autonomy, planning | https://openreview.net/pdf?id=VJqfoHU4Op | 2025-09-03T04:55:15 | 4 | [
{
"id": "1gUtlOyV76",
"forum": "VJqfoHU4Op",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission1130/Reviewer_2jri",
"reviewer_name": "Reviewer_2jri",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This p... | |
JbyaS5zhcB | https://openreview.net/forum?id=JbyaS5zhcB | Geo-Invariant Scoring Lead with Domain-Adversarial Transformers | 3.6 | 3 | [
6,
2,
2,
6,
2
] | [
3,
4,
3,
3,
2
] | 5 | [
"domain adaptation",
"transformers",
"lead scoring",
"adversarial learning",
"geographic fairness",
"DANN",
"sequential modeling"
] | Predicting B2B lead conversion requires not only modeling long‑range dependencies in richly sequenced customer interactions but also ensuring fair performance across under‑represented geographies. While our DeepScore transformer backbone improved overall AUPR from $0.266$ to $0.360$, it exhibited significant geo‑skew: majority‑region (America) signals dominated feature learning (AUPR $0.474$), leaving East-Asia ($0.262$) under‑served. To address this, we embed a Domain‑Adversarial Neural Network (DANN) module into DeepScore’s architecture. A gradient‑reversal layer connects a geo‑discriminator to the shared transformer encoder, enforcing a minimax game that drives hidden representations to be predictive of conversion yet uninformative of geography. Simultaneously, lightweight geo‑specific classifier heads learn residual region‑nuances without re‑introducing large divergence. DeepScore + geo‑DANN achieves a $4.3 \%$ relative gain in macro‑AUPR and reduces inter‑region AUPR gaps by up to $12.3\%$ , all without degrading America accuracy. To our knowledge, this is the first demonstration of adversarial domain adaptation in large‑scale B2B lead scoring, offering a scalable path to equitable, high‑fidelity predictions across heterogeneous markets. | We augment transformer-based B2B lead scoring with domain-adversarial training to achieve geography-invariant representations, reducing regional performance gaps by 12.3% without degrading majority-region accuracy. | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=JbyaS5zhcB | 2025-09-20T05:41:58 | 5 | [
{
"id": "1hgTgy5zgg",
"forum": "JbyaS5zhcB",
"review_number": 5,
"reviewer_id": "ICLR.cc/2026/Conference/Submission21486/Reviewer_xMND",
"reviewer_name": "Reviewer_xMND",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This ... |
kIT1aA8SbY | https://openreview.net/forum?id=kIT1aA8SbY | Stability in Training PINNs for Stiff PDEs: Why Initial Conditions Matter | 3 | 3.5 | [
2,
6,
2,
2
] | [
3,
3,
4,
4
] | 4 | [
"Physics-Informed Neural Networks",
"Stiff PDEs",
"Hard Constraints",
"Initial Condition",
"Ablation Study",
"Neural Tangent Kernel"
] | Training Physics-Informed Neural Networks (PINNs) on stiff time-dependent PDEs remains highly unstable. Through rigorous ablation studies, we identify a surprisingly critical factor: the enforcement of initial conditions. We present the first systematic ablation of two core strategies, hard initial-condition constraints and adaptive loss weighting. Across challenging benchmarks (sharp transitions, higher-order derivatives, coupled systems, and high frequency modes), we find that exact enforcement of initial conditions (ICs) is not optional but essential. Our study demonstrates that stability and efficiency in PINN training fundamentally depend on ICs, paving the way toward more reliable PINN solvers in stiff regimes. | neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.) | https://openreview.net/pdf?id=kIT1aA8SbY | 2025-09-20T05:25:52 | 6 | [
{
"id": "uYQasFfK0F",
"forum": "kIT1aA8SbY",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission21400/Reviewer_q5o1",
"reviewer_name": "Reviewer_q5o1",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "This ... | |
s0nYSwlV3I | https://openreview.net/forum?id=s0nYSwlV3I | Influence without Confounding: Causal Discovery from Temporal Data with Long-term Carry-over Effects | 5 | 3.75 | [
8,
2,
4,
6
] | [
4,
4,
3,
4
] | 4 | [
"Causal Discovery",
"Reinforcement Learning",
"QR Decomposition",
"Long-term Carry-over Effects"
] | Learning causal structures from temporal data is fundamental to many practical tasks, such as physical law discovery and root cause localization.
Real-world systems often exhibit long-term carry-over effects, where the value of a variable at the current time can be influenced by distant past values of other variables. These effects, due to their large temporal span, are challenging to observe or model. Existing methods typically consider finite lag orders, which may lead to confounding from early historical data. Moreover, incorporating historical information often results in computational scalability issues.
In this paper, we establish a theoretical framework for causal discovery in complex temporal scenarios where observational data exhibit long-term carry-over effect, and propose LEVER, a theoretically guaranteed novel causal discovery method for incomplete temporal data. Specifically, based on the \textit{Limited-history Causal Identifiability Theorem}, we refine the variable values at each time step with data at a few preceding steps to mitigate long-term historical influences. Furthermore, we establish a theoretical connection between QR decomposition and causal discovery, and design an efficient reinforcement learning process to determine the optimal variable ordering. Finally, we recover the causal structure from the R matrix.
We evaluate LEVER on both synthetic and real-world datasets. In static cases, LEVER reduces SHD by 17.29\%-40.00\% and improves the F1-score by 5.30\%-8.79\% compared to the best baseline. In temporal cases, it achieves a 64\% reduction in SHD and a 45\% improvement in F1-score. Additionally, LEVER demonstrates significantly higher precision on real-world data compared to baseline methods. | causal reasoning | https://openreview.net/pdf?id=s0nYSwlV3I | 2025-09-20T00:03:50 | 4 | [
{
"id": "CP1IWz9yAq",
"forum": "s0nYSwlV3I",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19630/Reviewer_eY1p",
"reviewer_name": "Reviewer_eY1p",
"rating": 8,
"confidence": 4,
"soundness": 4,
"contribution": 4,
"presentation": 3,
"summary": "The p... | |
uikLSN1yot | https://openreview.net/forum?id=uikLSN1yot | The SMeL Test: A simple benchmark for media literacy in language models | 4 | 3.5 | [
2,
4,
8,
2
] | [
3,
4,
4,
3
] | 4 | [
"media literacy",
"benchmark",
"LLM"
] | The internet is rife with unattributed, deliberately misleading, or otherwise untrustworthy content. Though large language models (LLMs) are often tasked with autonomous web browsing, the extent to which they have learned the simple heuristics human researchers use to navigate this noisy environment is not currently known. In this paper, we introduce the Synthetic Media Literacy Test (SMeL Test), a minimal benchmark that tests the ability of language models to actively filter out untrustworthy and fictional information in context. We benchmark a variety of commonly used instruction-tuned LLMs, including "reasoning" models, and find that no model consistently succeeds; while reasoning in particular is associated with higher scores, even the best API model we test hallucinates up to 70% of the time. Remarkably, larger and more capable models do not necessarily outperform their smaller counterparts. We hope our work sheds more light on this important form of hallucination and guides the development of new methods to combat it. | Current language models are incapable of filtering out untrustworthy information in context. | datasets and benchmarks | https://openreview.net/pdf?id=uikLSN1yot | 2025-09-20T03:10:48 | 4 | [
{
"id": "HXHpX9ByYu",
"forum": "uikLSN1yot",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20689/Reviewer_aMix",
"reviewer_name": "Reviewer_aMix",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 1,
"presentation": 2,
"summary": "The w... |
9b6Ox5MVhq | https://openreview.net/forum?id=9b6Ox5MVhq | FewGAD: Few-Shot Enhanced Graph Anomaly Detection via Generative Contrastive Learning | 3 | 4.25 | [
4,
2,
4,
2
] | [
5,
4,
4,
4
] | 4 | [
"Anomaly Detection",
"Graph Neural Network",
"Few-shot Learning"
] | Graph anomaly detection (GAD) is critical in domains such as fraud detection, cybersecurity, and social network monitoring. However, existing approaches face two major challenges: the inherent scarcity of labeled anomalies in practical scenarios, and the widespread reliance on graph augmentation, which often distorts anomaly semantics and undermines model robustness. To address these issues, we propose FewGAD, a framework that leverages limited anomaly labels to enhance contrastive discrimination through high-order subgraph sampling without augmentation. By avoiding augmentation-induced distortion, this design fundamentally improves the robustness and semantic validity of learned representations, thereby enabling clearer separation between normal and anomalous nodes. Furthermore, a kernel density estimation mechanism expands the utility of scarce labels, enhancing data efficiency and strengthening anomaly discrimination under few-shot settings. Extensive experiments on five benchmark datasets demonstrate that FewGAD consistently surpasses state-of-the-art unsupervised and few-shot GAD methods, achieving an average AUC gain of 6.2\%. | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=9b6Ox5MVhq | 2025-09-15T14:29:06 | 4 | [
{
"id": "BKrH1rcmeo",
"forum": "9b6Ox5MVhq",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5520/Reviewer_oEJ9",
"reviewer_name": "Reviewer_oEJ9",
"rating": 4,
"confidence": 5,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This p... | |
kbP98SvPGV | https://openreview.net/forum?id=kbP98SvPGV | Boosting Verifiable Industrial Code Generation by Reliable Task Generation at Scale | 5.5 | 3.25 | [
4,
6,
4,
8
] | [
4,
3,
3,
3
] | 4 | [
"Industrial Control Systems",
"Programmable Logic Controllers",
"Data Augmentation",
"Code Generation"
] | Recent advances in industrial copilots (e.g., from Siemens, Rockwell and Schneider) for Programmable Logic Controllers (PLCs) have the potential to transform the way control engineers program. However, the closed-source nature and scarcity of data for Industrial Control System (ICS) programming tasks pose fundamental challenges to utilizing LLMs for generating verifiable industrial code (e.g., free of both syntactic and logical errors), which is of vital importance in industrial control applications. To address this critical gap, we introduce PLC-Spec-Syn, the first evolutionary framework to generate high-fidelity PLC programming tasks. Each task consists of a detailed specification—a structured, natural language engineering document—and its corresponding verified PLC code. The core idea is to guide LLM-based task generation (specification–code pair) with practical industrial engineering principles through a multi-axis evolutionary process considering six dimensions: functionality, safety, performance, maintenance, interoperability, and contextual complication. To ensure data quality, each generated specification–code pair will undergo rigorous auditing including compilation check and formal verification of semantic consistency between the specification and the code. The whole process yields PLC-Spec-Code, the first large-scale corpus of 11,669 PLC programming tasks with strict quality control. Besides, PLC-Spec-Code has 84.3% syntactic diversity, substantially exceeding that of existing corpora like OSCAT (29.2%). Importantly, fine-tuning multiple (code) LLMs using our corpus improves their performance on verifiable PLC code generation in unseen tasks by an average of 16.4% compared to the previous models, confirming the effectiveness of our task generation approach and the practical usefulness of our corpus. | datasets and benchmarks | https://openreview.net/pdf?id=kbP98SvPGV | 2025-09-19T11:51:31 | 4 | [
{
"id": "K8i5XbChbM",
"forum": "kbP98SvPGV",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15685/Reviewer_kPKr",
"reviewer_name": "Reviewer_kPKr",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This ... | |
APCJj5r6Os | https://openreview.net/forum?id=APCJj5r6Os | A Computationally Efficient Case-Control Sampling Framework for G-Formula with Longitudinal Data | 2.666667 | 3.333333 | [
2,
4,
2
] | [
3,
3,
4
] | 3 | [
"Causal inference",
"time-varying treatment",
"survival analysis",
"rare outcomes",
"case-control sampling"
] | Estimating the causal effect of time-varying treatments on survival outcomes in large observational studies is computationally demanding, particularly when outcomes are rare. The iterative conditional expectation (ICE) estimator within the g-formula framework is effective but becomes computationally burdensome when bootstrapping is used for variance estimation. Additionally, the rarity of outcomes at each time point can create extreme class imbalance, leading to instability and convergence issues in logistic regression and related models. To address these challenges, we propose a novel case-control enhanced g-formula approach, which integrates case-control sampling with ICE estimation. This approach significantly reduces computational burden while maintaining consistency and improving estimation stability. By strategically selecting informative subsets of data and applying appropriate reweighting, the approach mitigates class imbalance, substantially reduces computational cost, and preserves consistency and asymptotic efficiency. We evaluate the method through simulations and validate it using a large-scale EHR cohort study on social and behavioral determinants of health (SBDH) and suicide risk, demonstrating its effectiveness for modeling rare outcomes in longitudinal data. | We propose a case-control enhanced g-formula approach to efficiently estimate causal effects of time-varying treatments on rare survival outcomes. | causal reasoning | https://openreview.net/pdf?id=APCJj5r6Os | 2025-09-20T08:00:10 | 3 | [
{
"id": "IERFkg05Iy",
"forum": "APCJj5r6Os",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission22123/Reviewer_xhkF",
"reviewer_name": "Reviewer_xhkF",
"rating": 2,
"confidence": 3,
"soundness": 3,
"contribution": 1,
"presentation": 1,
"summary": "The a... |
Ard2QzPAUK | https://openreview.net/forum?id=Ard2QzPAUK | BeliefFormer: Belief Attention in Transformer | 3.5 | 3 | [
4,
4,
4,
2
] | [
3,
3,
2,
4
] | 4 | [
"Transformer",
"orthogonal projection",
"BeliefFormer"
] | In this paper, we consider modifying the attention layer in Transformer to improve its generalization performance. Conceptually speaking, the standard attention layer takes the softmax-based weighted summation of V vectors as the residual signal (with a linear mapping for dimensionality alignment) when performing the skip-connection operation. Inspired by distribution optimization, we propose to first perform an orthogonal projection of the softmax-based weighted summation of V vectors with respect to the original V vectors and then take the orthogonal projection instead as the residual signal (with a linear mapping for dimensionality alignment) when performing the skip-connection operation. By doing so, the token vectors are modified relatively more along their tangent directions compared to their magnitudes. Intuitively speaking, the orthogonal projection reflects a belief about the discrepancy between the weighted summation of V vectors and the V vectors themselves. We refer to the newly modified layer and the overall architecture as the belief-attention and the BeliefFormer, respectively. To further improve performance, we also design a variant of belief-attention by incorporating two types of orthogonal projections, referred to as belief-attention$^{\ast}$. Extensive experiments show that the two new variants of attention layer in Transformers lead to better performance than the standard attention for image classification over ImageNet and natural language processing when training nano-GPT2. | incorporating orthogonal projection as residual signals into attention layer in Transformer to improve generalization performance | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=Ard2QzPAUK | 2025-09-20T18:38:56 | 4 | [
{
"id": "EtPBRc0uYd",
"forum": "Ard2QzPAUK",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission25156/Reviewer_XiS7",
"reviewer_name": "Reviewer_XiS7",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This ... |
LnbMSnQpXb | https://openreview.net/forum?id=LnbMSnQpXb | ADDI: A Simplified E2E Autonomous Driving Model with Distinct Experts and Implicit Interactions | 2 | 4.25 | [
2,
2,
2,
2
] | [
4,
4,
5,
4
] | 4 | [
"CV",
"Imitation Learning",
"Applications",
"3D vision"
] | End-to-end autonomous driving has emerged as a promising research trend aimed at achieving autonomy from a human-like driving perspective. Traditional solutions often divide the task into four sub-tasks—tracking-by-detection, online mapping, prediction, and planning—with several interactions to polish planning. However, this modular approach disrupts the cohesion of autonomous driving by decomposing these processes and then linking them through interactions, leading to suboptimal and inefficient practical applications. To address this limitation, we propose ADDI, a simple and efficient end-to-end autonomous driving method. First, ADDI integrates tracking-by-detection and online mapping through a unified detection module paired with distinct expert designs, enabling simultaneous output of detection and mapping elements. Second, ADDI employs a unified motion planning model with distinct experts to jointly predict agent trajectories and ego planning trajectories. With this unified model structure, most interactions required by previous methods are rendered unnecessary. ADDI implements two implicit (resource-free) and two explicit interactions to associate the different components. Experimental results demonstrate that ADDI achieves state-of-the-art performance on both open-loop and closed-loop benchmarks while running significantly faster than prior end-to-end methods. | A simple and efficient end-to-end autonomous driving method. | applications to robotics, autonomy, planning | https://openreview.net/pdf?id=LnbMSnQpXb | 2025-09-05T17:09:05 | 5 | [
{
"id": "49gRGFrckE",
"forum": "LnbMSnQpXb",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2357/Reviewer_s69D",
"reviewer_name": "Reviewer_s69D",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This p... |
0wCTDqkK8I | https://openreview.net/forum?id=0wCTDqkK8I | Quantization with Purpose: Loss-Aware Bit Allocation for Gradient Compression | 4.5 | 3.25 | [
2,
8,
4,
4
] | [
4,
3,
2,
4
] | 4 | [
"Gradient Compression",
"Rate-Distortion Optimization",
"Bit Allocation",
"Quantization"
] | Gradient quantization is a critical technique for reducing communication overhead in large-scale distributed training. However, existing methods often employ fixed bit-width quantization or adaptive quantizers optimized with signal-level distortion metrics such as MSE, which poorly correlate with model performance. In this paper, we propose a novel layer-wise bit allocation framework for gradient quantization, formulated under a rate-distortion optimization (RDO) paradigm. Unlike prior approaches, our method introduces a loss-aware distortion metric that directly quantifies the impact of quantization on training loss, enabling task-aligned solution for bit allocation. A key insight of our work is the linear superposition property of cross-layer loss distortion, which we theoretically justify and empirically validate. This property allows us to decouple the original joint optimization problem and efficiently solve it via a Lagrangian optimization algorithm with linear complexity. Extensive experiments across vision and language tasks—using CNNs, ViTs, LSTMs, and Transformers—demonstrate the effectiveness of our approach. Moreover, our method integrates seamlessly with existing gradient compression techniques, yielding consistent performance gains. | optimization | https://openreview.net/pdf?id=0wCTDqkK8I | 2025-09-19T16:55:30 | 4 | [
{
"id": "7E4NGBn0h8",
"forum": "0wCTDqkK8I",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17101/Reviewer_xQTq",
"reviewer_name": "Reviewer_xQTq",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The a... | |
rpPtgMC5s9 | https://openreview.net/forum?id=rpPtgMC5s9 | Relational Transformer: Toward Zero-Shot Foundation Models for Relational Data | 6 | 3.666667 | [
6,
6,
6
] | [
5,
3,
3
] | 3 | [
"foundation models",
"relational deep learning",
"relational data",
"transformer"
] | Pretrained transformers readily adapt to new sequence modeling tasks via zero-shot prompting, but relational domains still lack architectures that transfer across datasets and tasks.
The core challenge is the diversity of relational data, with varying heterogeneous schemas, graph structures, and functional dependencies.
We propose the _Relational Transformer (RT)_, a cell-level architecture pretrained on diverse relational databases and directly applicable to unseen datasets and tasks, without any need for task- or dataset-specific fine-tuning or retrieval of in-context examples. RT (i) tokenizes cells with table/column metadata, (ii) is pretrained via masked token prediction, and (iii) utilizes a novel _Relational Attention_ mechanism over columns, rows, and primary–foreign key links.
Pretrained on RelBench datasets spanning tasks such as churn and sales forecasting, RT attains strong zero-shot performance; on binary classification it averages 94\% of fully supervised AUROC in a single forward pass, and fine-tuning yields state-of-the-art results with high sample efficiency. Our experiments show that RT’s zero-shot transfer harnesses task-table context,
column and feature attention, and schema semantics. Overall, RT provides a practical path toward foundation models for relational data. | A novel architecture for relational data that shows strong zero-shot abilities on unseen datasets after pre-training. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=rpPtgMC5s9 | 2025-09-18T04:59:54 | 3 | [
{
"id": "UdREXkie1B",
"forum": "rpPtgMC5s9",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9833/Reviewer_fWfy",
"reviewer_name": "Reviewer_fWfy",
"rating": 6,
"confidence": 5,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This p... |
IRrQgf2GAl | https://openreview.net/forum?id=IRrQgf2GAl | Query-Kontext: An Unified Multimodal Model for Image Generation and Editing | 5 | 3.75 | [
4,
4,
6,
6
] | [
4,
4,
4,
3
] | 4 | [
"Diffusion model",
"VLM",
"Image Generation",
"Image Editing"
] | Unified Multimodal Models (UMMs) have demonstrated remarkable performance in text-to-image generation (T2I) and editing (TI2I), whether instantiated as assembled unified frameworks which couple a powerful vision-language model (VLM) with a diffusion-based generator, or as naive Unified Multimodal Models with an early fusion of understanding and generation modalities. We contend that in current unified frameworks, the crucial capability of multimodal generative reasoning, which encompasses instruction understanding, grounding, and image referring for identity preservation and faithful reconstruction, is intrinsically entangled with high-fidelity synthesis. In this work, we introduce Query-Kontext, a novel approach that bridges the VLM and diffusion model via a multimodal “kontext” composed of semantic cues and coarse-grained image conditions encoded from multimodal inputs. This design delegates the complex ability of multimodal generative reasoning to the powerful VLM while reserving the diffusion model’s role for high-quality visual synthesis. To achieve this, we propose a three-stage progressive training strategy. First, we connect the VLM to a lightweight diffusion head via multimodal kontext tokens to unleash the VLM’s generative reasoning ability. Second, we scale this head to a large, pre-trained diffusion model to enhance visual detail
and realism. Finally, we introduce a low-level image encoder to improve image fidelity and perform instruction tuning on downstream tasks. Furthermore, we build a comprehensive data pipeline integrating real, synthetic, and curated open-source datasets, covering diverse multimodal reference-to-image scenarios, including image generation, instruction-driven editing, customized generation, and multi-subject composition. Experiments show that our approach matches strong unified baselines and even outperforms task-specific state-of-the-art methods in several cases. | generative models | https://openreview.net/pdf?id=IRrQgf2GAl | 2025-09-05T18:11:36 | 4 | [
{
"id": "qSTHsjq2LK",
"forum": "IRrQgf2GAl",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2376/Reviewer_Nkxi",
"reviewer_name": "Reviewer_Nkxi",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "1. Wel... | |
KKA59ai0x6 | https://openreview.net/forum?id=KKA59ai0x6 | Beyond Classification Accuracy: Neural-MedBench and the Need for Deeper Reasoning Benchmarks | 6 | 4 | [
4,
6,
8
] | [
3,
4,
5
] | 3 | [
"vision-language models",
"benchmark dataset",
"medical AI evaluation",
"reasoning-intensive tasks"
] | Recent advances in vision-language models (VLMs) have achieved remarkable performance on standard medical benchmarks, yet their true clinical reasoning ability remains unclear. Existing datasets predominantly emphasize classification accuracy, creating an evaluation illusion in which models appear proficient while still failing at high-stakes diagnostic reasoning.
We introduce \texttt{Neural-MedBench}, a compact yet reasoning-intensive benchmark specifically designed to probe the limits of multimodal clinical reasoning in neurology. Neural-MedBench integrates multi-sequence MRI scans, structured electronic health records, and clinical notes, and encompasses three core task families: differential diagnosis, lesion recognition, and rationale generation. To ensure reliable evaluation, we develop a hybrid scoring pipeline that combines LLM-based graders, clinician validation, and semantic similarity metrics.
Through systematic evaluation of state-of-the-art VLMs, including GPT-4o, Claude-4, and MedGemma, we observe a sharp performance drop compared to conventional datasets. Error analysis shows that reasoning failures, rather than perceptual errors, dominate model shortcomings.
Our findings highlight the necessity of a Two-Axis Evaluation Framework: breadth-oriented large datasets for statistical generalization, and depth-oriented, compact benchmarks such as Neural-MedBench for reasoning fidelity. We release Neural-MedBench as an open and extensible diagnostic testbed, which guides the expansion of future benchmarks and enables rigorous yet cost-effective assessment of clinically trustworthy AI. | We introduce Neural-MedBench, a reasoning-intensive benchmark that exposes how state-of-the-art vision-language models fail at clinical diagnosis despite excelling on standard medical AI benchmarks. | datasets and benchmarks | https://openreview.net/pdf?id=KKA59ai0x6 | 2025-09-19T18:43:36 | 4 | [
{
"id": "d6BqcY6Pnf",
"forum": "KKA59ai0x6",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17633/Reviewer_Kijm",
"reviewer_name": "Reviewer_Kijm",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This ... |
jqbYkKrP7k | https://openreview.net/forum?id=jqbYkKrP7k | APEX: One-Step High-Resolution Image Synthesis | 5.5 | 3.5 | [
4,
6,
4,
8
] | [
4,
4,
3,
3
] | 4 | [
"Diffusion",
"T2I"
] | The pursuit of efficient text-to-image synthesis has driven the field toward a few-step generation paradigm, yet this endeavor is hampered by a persistent trilemma: achieving high fidelity, inference efficiency, and training efficiency simultaneously remains elusive.
Current approaches are often forced into a difficult trade-off.
While methods employing external discriminators can produce high-fidelity one-step generations, they suffer from significant drawbacks, including training instability, high GPU memory costs, and slow convergence.
Conversely, alternative paradigms like consistency distillation, though easier to train, often struggle to achieve high quality in one-step generation.
These challenges have restricted the scalability and broader application of one-step generative models.
In this work, we present APEX, a method that resolves this trilemma.
The core innovation is a self-condition-shifting adversarial mechanism that completely obviates the need for an external discriminator.
By eliminating this discriminator bottleneck, APEX achieves exceptional training efficiency and stability.
This design makes it particularly well-suited for both full-parameter and LoRA-based tuning of large-scale generative models, offering a truly end-to-end solution.
Experimentally, APEX demonstrates state-of-the-art (SOTA) performance, delivering high-fidelity synthesis with just a single function evaluation (NFE=1) and yielding a 15.33x speedup over the original Qwen-Image 20B.
Our 0.6B model outperforms substantially larger models, such as FLUX Schnell 12B, in few-step generation.
We further showcase its efficiency by achieving a GenEval score of 0.89 at 1 NFE on the 20B Qwen-Image model (vs. 0.87 for the original at 50 NFE) with LoRA tuning in just 6 hours.
APEX effectively reshapes the trade-off between training cost, inference speed, and generation quality in large text-to-image generative models. | generative models | https://openreview.net/pdf?id=jqbYkKrP7k | 2025-09-01T23:53:20 | 4 | [
{
"id": "fSnhowl2Az",
"forum": "jqbYkKrP7k",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission482/Reviewer_pjrF",
"reviewer_name": "Reviewer_pjrF",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This pa... | |
DQuWpKLNwd | https://openreview.net/forum?id=DQuWpKLNwd | Reasoning on a Spectrum: Aligning LLMs to System 1 and System 2 Thinking | 4.8 | 3.6 | [
6,
4,
2,
6,
6
] | [
2,
4,
4,
3,
5
] | 5 | [
"Alignment",
"System 1 and System 2 thinking",
"Cognitive heuristics",
"LLM",
"NLP"
] | Large Language Models (LLMs) exhibit impressive reasoning abilities, yet their reliance on structured step-by-step processing reveals a critical limitation. In contrast, human cognition fluidly adapts between intuitive, heuristic (System 1) and analytical, deliberative (System 2) reasoning depending on the context. This difference between human cognitive flexibility and LLMs' reliance on a single reasoning style raises a critical question: while human fast heuristic reasoning evolved for its efficiency and adaptability, is a uniform reasoning approach truly optimal for LLMs, or does its inflexibility make them brittle and unreliable when faced with tasks demanding more agile, intuitive responses? To answer these questions, we explicitly align LLMs to these reasoning styles by curating a dataset with valid System 1 and System 2 answers, and evaluate their performance across reasoning benchmarks. Our results reveal an accuracy-efficiency trade-off: System 2-aligned models excel in arithmetic and symbolic reasoning, while System 1-aligned models perform better in commonsense reasoning tasks. To analyze the reasoning spectrum, we interpolated between the two extremes by varying the proportion of alignment data, which resulted in a monotonic change in accuracy. A mechanistic analysis of model responses shows that System 1 models employ more definitive outputs, whereas System 2 models demonstrate greater uncertainty. Building on these findings, we further combine System 1- and System 2-aligned models based on the entropy of their generations, without additional training, and obtain a dynamic model that outperforms across nearly all benchmarks. This work challenges the assumption that step-by-step reasoning is always optimal and highlights the need for adapting reasoning strategies based on task demands. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=DQuWpKLNwd | 2025-09-20T06:07:23 | 5 | [
{
"id": "mPPshzgH6b",
"forum": "DQuWpKLNwd",
"review_number": 5,
"reviewer_id": "ICLR.cc/2026/Conference/Submission21612/Reviewer_6SYb",
"reviewer_name": "Reviewer_6SYb",
"rating": 6,
"confidence": 2,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "This ... | |
rBFDvZu6pb | https://openreview.net/forum?id=rBFDvZu6pb | Towards Spatial Supersensing in Video | 5.5 | 3.75 | [
6,
6,
4,
6
] | [
4,
4,
4,
3
] | 4 | [
"Multimodal Large Langauge Model",
"Super Sensing Model",
"Spatial Understanding",
"Video Understanding",
"Memory"
] | We frame spatial supersensing in video as an overarching goal for multimodal intelligence and argue that progress requires a shift from long-context brute force to predictive sensing. Using a four-level taxonomy: semantic perception, streaming event cognition, implicit 3D spatial cognition, and predictive world modeling, we audit existing benchmarks and show they focus heavily on the first tier, with only partial coverage of streaming and spatial cognition, and almost never test true world modeling. To ground these gaps, we introduce VSI-Super, a two-part benchmark for continual spatial sensing: VSO (long-horizon spatial observation and recall) and VSC (continual counting under changing viewpoints and scenes). These tasks admit arbitrarily long video inputs and are specifically built so that simply scaling tokens or context length is not enough. Within the current paradigm, we push spatial cognition by curating VSI-590K and training a new family of video MLLMs that deliver a 30% absolute gain on VSI-Bench without sacrificing general semantic perception. Yet these models still underperform on VSI-Super, exposing a paradigm gap. We then prototype predictive sensing: a self-supervised next latent-frame predictor whose surprise (prediction error) drives long-horizon memory and event segmentation. On VSI-Super, this approach substantially outperforms leading video MLLMs, evidencing that advancing spatial supersensing requires models that not only see but also anticipate, select, and organize experience. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=rBFDvZu6pb | 2025-09-02T22:01:39 | 4 | [
{
"id": "ARkBhiaNDQ",
"forum": "rBFDvZu6pb",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission875/Reviewer_jh5E",
"reviewer_name": "Reviewer_jh5E",
"rating": 6,
"confidence": 4,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "This wo... | |
3IaP48VUes | https://openreview.net/forum?id=3IaP48VUes | Can Aha Moments Be Fake? Identifying True and Decorative Thinking Steps in Chain-of-Thought | 4.5 | 3.5 | [
4,
4,
4,
6
] | [
3,
4,
3,
4
] | 4 | [
"Faithfulness; Reasoning; LLMs; steering"
] | Recent large language models (LLMs) can generate long Chain-of-Thought (CoT) at test time, enabling them to solve complex tasks. These reasoning traces are often assumed to be a faithful reflection of LLMs’ internal thinking process, and can be used for monitoring LLMs’ unsafe intentions. However, by analyzing the step-wise causal influence of CoT on a model’s prediction using Average Treatment Effect (ATE), we show that LLMs often interleave between (1) true-thinking steps, which are faithfully used to generate the model’s final output, and (2) decorative-thinking steps, which give the appearance of reasoning but have minimal causal impact on the model’s final output. Specifically, we design a True Thinking Score (TTS) and reveal that only a small subset of the total thinking steps have relatively high scores and causally drive the final prediction (e.g., on average 2.3% of steps in a CoT have TTS ≥ 0.7 for a Qwen-2.5 model). Furthermore, we identify a TrueThinking direction in the latent space of LLMs. By steering along this direction, we can force the model to perform or disregard certain CoT steps when computing the result. Finally, we highlight that self-verification steps in CoT can also be decorative, where LLMs do not truly check their solution, while steering along the TrueThinking direction can force internal reasoning over these steps. Overall, our work reveals that LLMs can verbalize reasoning steps without performing them internally, which undermines both the efficiency of LLM reasoning and the trustworthiness of CoT. | interpretability and explainable AI | https://openreview.net/pdf?id=3IaP48VUes | 2025-09-14T01:53:30 | 4 | [
{
"id": "YDcvohyjug",
"forum": "3IaP48VUes",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4893/Reviewer_y5rC",
"reviewer_name": "Reviewer_y5rC",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This p... | |
DDuF4lpcTl | https://openreview.net/forum?id=DDuF4lpcTl | HarmoMoE: Unifying Domain-Specialized Experts into a Mixture-of-Experts Model under Privacy Constraints | 5 | 3.25 | [
4,
4,
6,
6
] | [
4,
3,
3,
3
] | 4 | [
"Mixture of Experts",
"Privacy-Preserving Learning"
] | Mixture-of-Experts (MoE) models offer a powerful way to scale capacity, but existing designs typically assume centralized access to all training data. In many real-world scenarios, however, data is distributed across clients from different domains and cannot be shared due to privacy constraints, making it challenging to build a unified and generalizable MoE. We propose HarmoMoE, a framework that unifies domain-specialized experts into a single MoE without sharing private data. HarmoMoE combines relevance-weighted DPP proxy selection with a context-aware router, ensuring that experts trained on both private and proxy data remain compatible and effectively coordinated. Experiments on CV and NLP show that HarmoMoE consistently outperforms recent methods such as BTX and FlexOlmo. | other topics in machine learning (i.e., none of the above) | https://openreview.net/pdf?id=DDuF4lpcTl | 2025-09-18T11:13:35 | 4 | [
{
"id": "UEuThbBmrD",
"forum": "DDuF4lpcTl",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10257/Reviewer_pz7X",
"reviewer_name": "Reviewer_pz7X",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 4,
"summary": "The a... | |
RzFdZ5Pjy6 | https://openreview.net/forum?id=RzFdZ5Pjy6 | Deception in Dialogue: Evaluating and Mitigating Deceptive Behavior in Large Language Models | 4 | 3.333333 | [
2,
4,
6
] | [
4,
4,
2
] | 3 | [
"Large Language Models (LLMs)",
"Reinforcement Learning",
"AI Safety"
] | Large Language Models (LLMs) now interact with hundreds of millions of people worldwide, powering applications such as customer support, education and healthcare. However, their ability to produce deceptive outputs, whether intentionally or inadvertently, poses significant safety concerns. The unpredictable nature of LLM behavior, combined with insufficient safeguards against hallucination, misinformation, and user manipulation, makes their misuse a serious, real-world risk. In this paper, we systematically investigate the extent to which LLMs engage in deception within dialogue, and propose the belief misalignment metric to measure deception. We evaluate deception across four distinct dialogue scenarios, using five established deception detection metrics and our proposed metric. Our findings reveal this novel deception measure correlates more closely with human judgments than any of the existing metrics we test. Additionally, our benchmarking of 8 state-of-the-art models indicates that LLMs naturally exhibit deceptive behaviors 24.4% of the time, even when prompted with seemingly benign objectives. When prompted to deceive, LLMs are capable of increasing deceptiveness to 43% of turns. We further explore how to use reinforcement learning to fine-tune LLMs to reduce deceptive behaviors, leading to a 15% reduction compared to other fine-tuned models. | Across 8 LLMs, we find deceptive behavior in dialogue in up to 43% of dialogue turns and reduce it by 15% via reinforcement learning with a new deception detection metric. | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=RzFdZ5Pjy6 | 2025-09-20T08:10:25 | 3 | [
{
"id": "lwPb4zcXxK",
"forum": "RzFdZ5Pjy6",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission22167/Reviewer_e2am",
"reviewer_name": "Reviewer_e2am",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The p... |
eAge74DIgk | https://openreview.net/forum?id=eAge74DIgk | LitExplorer: Training-Free Diffusion Guidance with Adaptive Exploration-Filtering Framework | 4 | 3.333333 | [
4,
4,
4
] | [
3,
3,
4
] | 3 | [
"Diffusion Model;Training-free"
] | Diffusion models possess strong general generative capabilities, yet they remain insufficient when aligned with specific target objectives. Fine-tuning methods can enhance alignment but incur high training costs and face the risk of reward hacking. Consequently, training-free guidance mechanisms have emerged, which leverage external signals during inference to steer the generative distribution toward high-reward regions. However, existing training-free approaches encounter two key challenges: first, the guidance process tends to over-bias generation toward the target distribution, at the expense of excessively narrowing the pretrained model’s generative space; second, the guidance signals are mechanically imposed throughout inference, lacking mechanisms to identify and filter out ineffective or redundant signals. To mitigate these limitations, we propose \ourmethod{}. Regarding the first issue, we introduce exploratory guidance signals through \pos{} to prevent generation paths from prematurely converging to a single mode, while dynamically balancing the trade-off between exploration and stable generation based on denoising progress. This alleviates the excessive contraction of the generative space without deviating from the target distribution or the pretrained distribution. Regarding the second issue, to enable precise and efficient guidance, we incorporate an adjudication mechanism that evaluates the validity of guidance signals and adaptively eliminates ineffective or redundant ones. To demonstrate the generality of \ourmethod{}, we conduct extensive evaluations in both single-objective and multi-objective scenarios. Results show that \ourmethod{} achieves significant improvements over existing training-free baselines in terms of generative diversity, target alignment, and inference efficiency. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=eAge74DIgk | 2025-09-01T20:12:48 | 3 | [
{
"id": "CnRO9YAIb5",
"forum": "eAge74DIgk",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission101/Reviewer_6kFr",
"reviewer_name": "Reviewer_6kFr",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "The pap... | |
sL0gkGGhLL | https://openreview.net/forum?id=sL0gkGGhLL | Inferring Attribute Subspaces from Visual Contexts | 3.5 | 3.5 | [
2,
4,
4,
4
] | [
4,
2,
4,
4
] | 4 | [
"visual attributes",
"diffusion",
"attribute subspace"
] | Recent advances in generative vision-language models have demonstrated remarkable capabilities in image synthesis, captioning, and multi-modal reasoning. Among their most intriguing behaviors is in-context learning, the ability to adapt to new tasks from just a few examples. While well-studied in language models, this capability remains underexplored in the visual domain. Motivated by this, we explore how generative vision models can infer and apply visual concepts directly from image sets, without relying on text or labels. We frame this as an attribute subspace inference task: given a small set of related images, the model identifies the shared variation and uses it to guide generation from a query image. During training, we use auxiliary groupings to provide weak structural supervision. At inference time, the model receives only unlabeled inputs and must generalize the visual concept based on example images alone. Our approach enables attribute-consistent image generation and contributes a novel direction for nonverbal concept learning in vision. | We propose Attribute Subspace Inference Tasks and develop a training setup that enables generative models to infer shared semantic attributes from just a few example images without labels, and to generate attribute-consistent images. | generative models | https://openreview.net/pdf?id=sL0gkGGhLL | 2025-09-19T16:40:18 | 4 | [
{
"id": "9sZRDVeFQF",
"forum": "sL0gkGGhLL",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17019/Reviewer_BDP5",
"reviewer_name": "Reviewer_BDP5",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This ... |
GWjvweJDnG | https://openreview.net/forum?id=GWjvweJDnG | Offline Federated Deep Reinforcement Learning with Awareness of Expected Returns and Policy Inconsistency | 4 | 3.25 | [
4,
4,
6,
2
] | [
4,
2,
3,
4
] | 4 | [
"Federated Deep Reinforcement Learning; Offline Deep Reinforcement Learning"
] | Offline Federated Deep Reinforcement Learning (FDRL) methods aggregate multiple client-side offline Deep Reinforcement Learning (DRL) models, each trained locally, to facilitate knowledge sharing while preserving privacy. Existing offline FDRL methods assign client weights during global aggregation using either simple averaging or Q-values, but they neglect the combined consideration of Q-values and policy inconsistency, the latter of which reflects the distributional discrepancy between the learned policy and the policy from offline data. This causes clients with no significant advantages in one aspect but obvious disadvantages in the other to disproportionately affect the global model, thereby degrading its capabilities in that aspect. During local training, clients in existing methods are compelled to fully adopt the global model, which negatively impacts clients when the global model is weak. To address these limitations, we propose a novel federated learning framework that can be seamlessly integrated into current offline FDRL approaches to improve their performance. Our method considers both policy inconsistency and Q-values to determine the weights of client models, with the latter adjusted by a scaling factor to align with the magnitude of the former. The aggregated global model is then distributed to clients, who minimize the discrepancy between their models and the global one. The impact of this discrepancy is reduced if the client’s model ability exceeds that of the global model, mitigating the effect of a weaker global model. Experiments on the Datasets for Deep Data-Driven Reinforcement Learning (D4RL) demonstrate that our method enhances four state-of-the-art (SOTA) offline FDRL methods in terms of return and D4RL score. | This paper proposes an offline federated deep reinforcement learning framework that evaluates the capabilities of client models and the global model by combining policy inconsistency and expected return. | reinforcement learning | https://openreview.net/pdf?id=GWjvweJDnG | 2025-09-16T11:01:07 | 4 | [
{
"id": "8v3TBDfEE6",
"forum": "GWjvweJDnG",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission6666/Reviewer_Sgzo",
"reviewer_name": "Reviewer_Sgzo",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The pa... |
refcXHU1Nh | https://openreview.net/forum?id=refcXHU1Nh | SafeFlowMatcher: Safe and Fast Planning using Flow Matching with Control Barrier Functions | 6 | 3.4 | [
6,
4,
8,
4,
8
] | [
2,
3,
3,
5,
4
] | 5 | [
"Flow matching",
"Safety guarantees",
"Planning and Control"
] | Generative planners based on Flow Matching (FM) produce high-quality paths in a single or a few ODE steps, but their sampling dynamics offer no formal safety guarantees and can yield incomplete paths near constraints. We present \emph{SafeFlowMatcher}, a planning framework that couples FM with control barrier functions (CBFs) to achieve \emph{both} real-time efficiency and certified safety. SafeFlowMatcher uses a two-phase \emph{prediction--correction} (PC) integrator: (i) a prediction phase integrates the learned FM once (or a few steps) to obtain a candidate path without intervention; (ii) a correction phase refines this path with a vanishing time‑scaled vector field and a CBF-based quadratic program that minimally perturbs the vector field. We prove a barrier certificate for the resulting flow system, establishing forward invariance of a robust safe set and finite-time convergence to the safe set. In addition, by enforcing safety only on the executed path---rather than all intermediate latent paths---SafeFlowMatcher avoids distributional drift and mitigates local trap problems. Moreover, SafeFlowMatcher attains faster, smoother, and safer paths than diffusion- and FM-based baselines on maze navigation and locomotion. Extensive ablations corroborate the contributions of the PC integrator and the barrier certificate. | We propose SafeFlowMatcher, a novel method for safe and fast planning that couples flow matching with control barrier functions via a two-phase prediction–correction integrator | applications to robotics, autonomy, planning | https://openreview.net/pdf?id=refcXHU1Nh | 2025-09-20T15:05:16 | 5 | [
{
"id": "wxZe5w39jq",
"forum": "refcXHU1Nh",
"review_number": 5,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24026/Reviewer_Jozp",
"reviewer_name": "Reviewer_Jozp",
"rating": 6,
"confidence": 2,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This ... |
lIIJDDxBrg | https://openreview.net/forum?id=lIIJDDxBrg | Empowering Protein Language Model for Sequence-Structure Co-Generation with Continuous Structure Tokens | 3.5 | 4 | [
4,
4,
4,
2
] | [
4,
4,
3,
5
] | 4 | [
"ai for science",
"protein language model",
"protein sequence-structure co-generation"
] | Proteins inherently possess a consistent sequence-structure duality. The abundance of protein sequence data, which can be readily represented as discrete tokens, has enabled fruitful developments in protein language models (pLMs). A key remaining challenge, however, is how to effectively integrate continuous structural knowledge into pLMs. Current methods often discretize protein structure to accommodate the language modeling framework, which inevitably results in information loss and limits the performance potential of multimodal pLMs. In this paper, we argue that such concerns can be circumvented: a sequence-based pLM can be extended to incorporate the structure modality through continuous tokens, i.e., high-fidelity protein structure latents that avoid vector quantization. Specifically, we propose a hybrid diffusion protein language model, HD-Prot, which embeds a continuous-valued diffusion generation head atop a discrete pLM, enabling seamless operation with both discrete and continuous tokens for sequence-structure joint-modeling in multimodal generative pLMs. The proposed model captures inter-token dependencies across modalities through a unified absorbing diffusion process, and estimates per-token distributions via categorical prediction for sequences and continuous diffusion for structures. Extensive empirical results show that our models achieve highly competitive performance in protein sequence–structure co-generation tasks, performing on par with state-of-the-art multimodal pLMs despite being developed under limited computational resources. It underscores the viability of jointly modeling discrete categorical and continuous arbitrary distributions using shared parameters within a pLM, pointing to an alternative and promising direction of progress for multimodal pLMs. | applications to physical sciences (physics, chemistry, biology, etc.) | https://openreview.net/pdf?id=lIIJDDxBrg | 2025-09-09T21:26:35 | 4 | [
{
"id": "A9JjnJUGT4",
"forum": "lIIJDDxBrg",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3427/Reviewer_NCdP",
"reviewer_name": "Reviewer_NCdP",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This p... | |
nhcz0uni55 | https://openreview.net/forum?id=nhcz0uni55 | QuArch: A Benchmark for Evaluating LLM Reasoning in Computer Architecture | 4 | 3.333333 | [
4,
6,
2
] | [
3,
2,
5
] | 3 | [
"benchmark",
"computer architecture",
"dataset",
"language models",
"question-answering"
] | The field of computer architecture, which bridges high-level software abstractions and low-level hardware implementations, remains absent from current large language model (LLM) evaluations. To this end, we present QuArch (pronounced 'quark'), the first benchmark designed to facilitate the development and evaluation of LLM knowledge and reasoning capabilities specifically in computer architecture. QuArch provides a comprehensive collection of 2,671 expert-validated question-answer (QA) pairs covering various aspects of computer architecture, including processor design, memory systems, and interconnection networks. Our evaluation reveals that while frontier models possess domain-specific knowledge, they struggle with skills that require higher-order thinking in computer architecture. Frontier model accuracies vary widely (from 34% to 72%) on these advanced questions, highlighting persistent gaps in architectural reasoning across analysis, design, and implementation QAs. By holistically assessing fundamental skills, QuArch provides a foundation for building and measuring LLM capabilities that can accelerate innovation in computing systems. | We present QuArch, the first question-answering benchmark for the field of computer architecture, and find state-of-the-art models struggle with skills that require higher-order thinking. | datasets and benchmarks | https://openreview.net/pdf?id=nhcz0uni55 | 2025-09-19T04:43:23 | 3 | [
{
"id": "kJmTQKHnJG",
"forum": "nhcz0uni55",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14083/Reviewer_YovW",
"reviewer_name": "Reviewer_YovW",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This ... |
MG5UFZ2Fa2 | https://openreview.net/forum?id=MG5UFZ2Fa2 | Identifying Outcome-Oriented Root Causes via Cross Regression | 4 | 3.5 | [
6,
4,
4,
2
] | [
3,
4,
3,
4
] | 4 | [
"Root Causes",
"Regression Theory"
] | Root Cause Analysis (RCA) in complex and interconnected systems is of significant importance in fields such as microservice maintenance and supply-chain management. By identifying every intervened variable, existing RCA methods have achieved remarkable progress in localizing and fixing anomalies. However, people may be more interested in those intervened variables that produce effects on a specific outcome, rather than the intervened variables that do not necessarily affect that outcome. This raises concerns about redundant localization and extra effort in fixing the anomalies. To fill this gap, we study a novel and challenging problem, termed Outcome-Oriented Root-Cause Analysis (OORCA), aiming to identify all intervened ancestor variables of the outcome variable. To handle the proposed OORCA problem, we then propose the Cross-Regressing-based Root Cause (CRRC) framework by cross-regressing observational (normal) and interventional (abnormal) data on the outcome variable. Theoretically, our identifiability analysis proves that the proposed CRRC can capture all outcome-oriented root-causes, and our asymptotic analysis offers tractable and informative criteria in the finite-sample regime. Extensive experiments across three benchmarks with 13 competitive baselines highlight the superiority of CRRC in both accuracy and running efficiency. | This paper presents a regression-based approach for identifying all outcome-oriented, necessary root-causes. | causal reasoning | https://openreview.net/pdf?id=MG5UFZ2Fa2 | 2025-09-17T09:14:39 | 4 | [
{
"id": "zS0wvvhsfE",
"forum": "MG5UFZ2Fa2",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8165/Reviewer_9ih8",
"reviewer_name": "Reviewer_9ih8",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The pa... |
ceYA0t4yZ1 | https://openreview.net/forum?id=ceYA0t4yZ1 | Bringing Light to the Threshold: Identification of Multi-Score Regression Discontinuity Effects with Application to LED Manufacturing | 5.333333 | 3 | [
8,
4,
4
] | [
3,
4,
2
] | 3 | [
"Causal Machine Learning",
"Regression Discontinuity Design",
"Treatment Effect Identification",
"Causal Inference Application"
] | The regression discontinuity design (RDD) is a widely used framework for threshold-based causal effect estimation in causal inference. Recent extensions incorporating machine learning (ML) adjustments have made RDD an appealing approach for researchers utilizing causal ML toolkits. However, many real-world applications, such as production systems, involve multiple decision criteria and logically connected thresholds, necessitating more sophisticated identification strategies, which are not clearly addressed in the recent literature. We derive a novel identification result for the complier effect in the multi-score RDD (MRD) setting by extending unit behavior types to multiple dimensions. Further, we show that under mild assumptions, this identification result does not depend on subsets of units with constant response. We apply our findings to simulated and real-world data from opto-electronic semiconductor manufacturing, employing estimators that adjust for covariates through machine learning. Our results offer insights into enhancing current production policies by optimizing the cutoff points, demonstrating the applicability of MRD in a manufacturing context. | We derive new identification results for the Multi-Score RDD framework and apply ML-adjusted RDD estimators to manufacturing data accordingly. | causal reasoning | https://openreview.net/pdf?id=ceYA0t4yZ1 | 2025-09-19T17:58:22 | 3 | [
{
"id": "qd52FzfsSF",
"forum": "ceYA0t4yZ1",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17425/Reviewer_rcgL",
"reviewer_name": "Reviewer_rcgL",
"rating": 8,
"confidence": 3,
"soundness": 4,
"contribution": 4,
"presentation": 4,
"summary": "Reaso... |
K3vdeJ4R0k | https://openreview.net/forum?id=K3vdeJ4R0k | Towards Scalable Distance-Enhanced Graph Neural Network | 3.5 | 4 | [
4,
2,
4,
4
] | [
5,
4,
4,
3
] | 4 | [
"Graph Neural Networks",
"Expressive Power",
"Distance Encoding",
"Scalability"
] | Graph neural networks (GNNs) have demonstrated significant advantages in graph mining tasks, but often suffer from limited expressive power. Among existing expressive GNNs, distance-enhanced GNNs (DE-GNNs) arise as promising ones due to their conceptual simplicity and alignment with the expressive needs of real-world applications. However, scalability remains a key challenge for DE-GNNs, as constructing pairwise distance features requires quadratic complexity. Additionally, while existing work has shown that specialized distance features enable strong expressiveness, the expressive power of simpler distance metrics remains less understood.
In this paper, we propose a new Scalable Distance-Enhanced Graph Neural Network (termed SDE-GNN) to tackle the above issues. SDE-GNN introduces a distance-aware message-passing framework, where message weights are computed by a learnable distance feature mapping. It first linearly projects the adjacency-power-based distance vector to a scalar, then applies a polynomial expansion. To efficiently scale to large graphs, we reformulate the distance features as the product of two asymmetric node encodings and apply Randomized SVD for dimensionality reduction, lowering the computational complexity from quadratic in the number of nodes to linear in the number of edges. Additionally, we leverage the sparsity of the adjacency matrix to directly compute the first-order term of the distance feature mapping, further mitigating distortion from dimensionality reduction. Theoretically, we show that the adopted adjacency-power-based distance outperforms other commonly used distance features. Empirically, we conduct experiments on 17 datasets and verify the effectiveness, efficiency, and scalability of SDE-GNN. | We propose a scalable distance-enhanced graph neural network which is expressive and can scale to large graphs. | learning on graphs and other geometries & topologies | https://openreview.net/pdf?id=K3vdeJ4R0k | 2025-09-18T22:10:18 | 4 | [
{
"id": "rVZo9bXFSp",
"forum": "K3vdeJ4R0k",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11912/Reviewer_LbV8",
"reviewer_name": "Reviewer_LbV8",
"rating": 4,
"confidence": 5,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This ... |
RVmjA3u4PQ | https://openreview.net/forum?id=RVmjA3u4PQ | A Dynamic Multiscale Anti-Aliasing Network for Time Series Forecasting | 4.5 | 4 | [
6,
4,
6,
2
] | [
4,
4,
3,
5
] | 4 | [
"time series forecasting",
"frequency",
"aliasing"
] | Real-world time series inherently exhibit complex temporal patterns. Within chaotic systems, significant mixing and entanglement occur between different time-varying modes. Given that time series exhibit distinctly different patterns at various sampling scales, downsampling to extract multiscale features is a common approach. However, conventional downsampling causes high-frequency components in the original signal, those exceeding the new Nyquist frequency, to undergo spectral folding. This erroneously introduces spurious low-frequency patterns, perceived as low-frequency noise, thereby leading to the **aliasing problem**. To address this problem, we propose a Decomposition-Prevention-Fusion architecture framework called **DMANet**, which introduces the **D**ynamic **M**ultiscale **A**nti-Aliasing **Net**work. Specifically, DMANet comprises two key components: Multiscale Convolutional Downsampling, designed to capture temporal dependencies and inter-channel interactions, and an Anti-Aliasing Operation, which includes Pre-Sampling Anti-Aliasing Filtering and Post-Sampling Interpolation. These designs guarantee the fidelity of multiscale features before and after downsampling. We show that by mitigating the risk of aliasing, our proposed simple convolutional downsampling architecture achieves performance competitive with common baselines and larger Transformer-based models prevalent in existing studies across multiple benchmark datasets. Our codes are available at https://anonymous.4open.science/r/DMANet-ED7A. | We propose a novel model named DMANet, a dynamic multiscale anti-aliasing network for time series forecasting tasks. | learning on time series and dynamical systems | https://openreview.net/pdf?id=RVmjA3u4PQ | 2025-09-18T23:58:44 | 4 | [
{
"id": "8Xq3Yo4Sjx",
"forum": "RVmjA3u4PQ",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12877/Reviewer_pqCd",
"reviewer_name": "Reviewer_pqCd",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This ... |
FShQHrDyEO | https://openreview.net/forum?id=FShQHrDyEO | Route-and-Reason: Scaling Large Language Model Reasoning with Reinforced Model Router | 4 | 3.5 | [
4,
4,
6,
2
] | [
4,
3,
3,
4
] | 4 | [
"Model Router",
"Large Language Model",
"LLM Reasoning",
"Efficient Reasoning",
"Reinforcement Learning"
] | Chain-of-thought has been proven essential for enhancing the complex reasoning abilities of Large Language Models (LLMs), but it also leads to high computational costs. Recent advances have explored routing queries among multiple models and shown it to be a promising approach. However, previous works directly operate at the task level, i.e., assigning user queries to suitable LLMs, which does not allow hybrid LLMs to truly collaborate on finer-grained sub-tasks. Collaboration at the level of intermediate reasoning steps (thoughts) could enable more efficient coordination, but it also poses significant challenges for router scheduling, placing immense demands on the quality of task decomposition and the precision of the router. To address this, we propose **R2-Reasoner**, a novel framework centered around **a Reinforced Model Router** designed to efficiently scale LLM reasoning. This router orchestrates collaboration across 9 heterogeneous models, whose parameter scales range from less than 1B to hundreds of billions, by first breaking down a complex query into subtasks with a decomposer, and then assigning each subtask to the optimal model with a subtask allocator, balancing performance with cost. Training this router involves a two-stage alternating process for the decomposer and the allocator, integrating supervised fine-tuning with reinforcement learning to enable effective self-supervised refinement. Extensive experiments across six challenging reasoning benchmarks demonstrate that R2-Reasoner reduces API costs by 84.46% compared with state-of-the-art baselines while maintaining competitive reasoning accuracy. Our framework paves the way for the development of more scalable and efficient reasoning systems. Our code is open-source at [https://anonymous.4open.science/r/R2_Reasoner](https://anonymous.4open.science/r/R2_Reasoner). 
| applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=FShQHrDyEO | 2025-09-20T16:46:56 | 4 | [
{
"id": "FAST7jcXoK",
"forum": "FShQHrDyEO",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24566/Reviewer_gJ4a",
"reviewer_name": "Reviewer_gJ4a",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This ... | |
vRwuBOxbsJ | https://openreview.net/forum?id=vRwuBOxbsJ | Solving Football by Exploiting Equilibrium Structure of 2p0s Differential Games with One-Sided Information | 5.2 | 2.8 | [
6,
6,
6,
6,
2
] | [
3,
3,
2,
2,
4
] | 5 | [
"Differential Game",
"Incomplete-Information Game",
"Game Theory"
] | For a two-player imperfect-information extensive-form game (IIEFG) with $K$ time steps and a player action space of size $U$, the game tree complexity is $U^{2K}$, causing existing IIEFG solvers to struggle with large or infinite $(U,K)$, e.g., differential games with continuous action spaces. To partially address this scalability challenge, we focus on an important class of 2p0s games where the informed player (P1) knows the payoff while the uninformed player (P2) only has a belief over the set of $I$ possible payoffs. Such games encompass a wide range of scenarios in sports, defense, cybersecurity, and finance.
We prove that under mild conditions, P1's (resp. P2's) equilibrium strategy at any infostate concentrates on at most $I$ (resp. $I+1$) action prototypes. When $I\ll U$, this equilibrium structure causes the game tree complexity to collapse to $I^K$ for P1 when P2 plays pure best responses, and $(I+1)^K$ for P2 in a dual game where P1 plays pure best responses. We then show that exploiting this structure in standard learning modes, i.e., model-free multiagent reinforcement learning and model predictive control, is straightforward, leading to significant improvements in learning accuracy and efficiency from SOTA IIEFG solvers. Our demonstration solves a 22-player football game ($K=10$, $U=\infty$) where the attacking team has to strategically conceal their intention until a critical moment in order to exploit information advantage. Code is available [here](https://anonymous.4open.science/r/iclr_2026). | The paper highlights the limitations of current state-of-the-art when applied to solving one-sided incomplete information differential games with continuous actions, such as Football, and proposes scalable methods to solve the problem. | reinforcement learning | https://openreview.net/pdf?id=vRwuBOxbsJ | 2025-09-19T13:10:14 | 5 | [
{
"id": "rWFBuPMJfL",
"forum": "vRwuBOxbsJ",
"review_number": 5,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16010/Reviewer_52aL",
"reviewer_name": "Reviewer_52aL",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This ... |
3kvV1nfWVq | https://openreview.net/forum?id=3kvV1nfWVq | A$^2$FM: An Adaptive Agent Foundation Model for Tool-Aware Hybrid Reasoning | 5 | 3.5 | [
6,
4,
4,
6
] | [
4,
2,
3,
5
] | 4 | [
"Adaptive LLMs",
"Deep Research",
"Agent Reasoning"
] | Large language models split into two families: reasoning-centric LLMs, which strengthen internal chain-of-thought reasoning but cannot invoke external tools, and agentic LLMs, which learn to interact with environments and leverage tools but often lag in deep reasoning. This divide arises from fundamentally different training objectives, leading to mismatched strengths and inefficiency on simple queries, where both families tend to overthink or over-call tools. In this work, we present Adaptive Agent Foundation Model (A$^2$FM), a unified framework that follows a route-then-align principle: the model first learns task-aware routing and then aligns mode-specific trajectories under a shared backbone. To address the inefficiency gap, we introduce a third mode—instant—that handles simple queries directly, preventing unnecessary reasoning or tool calls while complementing the agentic and reasoning modes. To jointly enhance accuracy and efficiency, we propose Adaptive Policy Optimization (APO), which enforces adaptive sampling across modes and applies a cost-regularized reward. On the 32B scale, A$^2$FM achieves 13.4\% on BrowseComp, 70.4\% on AIME25, and 16.7\% on HLE, setting new SOTA among comparable models and performing competitively with frontier LLMs across agentic, reasoning, and general benchmarks. Notably, the adaptive execution achieves a cost of pass of only \$0.00487 per correct answer—cutting cost by 45.2\% relative to reasoning and 33.5\% relative to agentic, thus delivering substantially higher cost efficiency while maintaining comparable accuracy. | We propose A²FM, a unified 32B model combining agentic, reasoning, and instant modes via adaptive routing and APO, achieving state-of-the-art accuracy with substantially improved cost efficiency. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=3kvV1nfWVq | 2025-09-15T11:16:48 | 4 | [
{
"id": "MqIUkmGYDd",
"forum": "3kvV1nfWVq",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5384/Reviewer_S9XP",
"reviewer_name": "Reviewer_S9XP",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The pa... |
qBAV2DEvAC | https://openreview.net/forum?id=qBAV2DEvAC | Implicit bias produces neural scaling laws in learning curves, from perceptrons to deep networks | 5.5 | 3.25 | [
8,
8,
4,
2
] | [
3,
3,
3,
4
] | 4 | [
"Neural scaling laws",
"Implicit bias",
"Learning curves",
"Spectral complexity norm",
"Perceptron theory"
] | Scaling laws in deep learning -- empirical power-law relationships linking model performance to resource growth -- have emerged as simple yet striking regularities across architectures, datasets, and tasks. These laws are particularly impactful in guiding the design of state-of-the-art models, since they quantify the benefits of increasing data or model size, and hint at the foundations of interpretability in machine learning. However, most studies focus on asymptotic behavior at the end of training. In this work, we describe a richer picture by analyzing the entire training dynamics: we identify two novel \textit{dynamical} scaling laws that govern how performance evolves as a function of different norm-based complexity measures. Combined, our new laws recover the well-known scaling for test error at convergence. Our findings are consistent across CNNs, ResNets, and Vision Transformers trained on MNIST, CIFAR-10 and CIFAR-100. Furthermore, we provide analytical support using a single-layer perceptron trained with logistic loss, where we derive the new dynamical scaling laws, and we explain them through the implicit bias induced by gradient-based training. | We connect neural scaling laws in deep networks with the implicit bias induced by logistic losses through a surprisingly simple perceptron theory. | optimization | https://openreview.net/pdf?id=qBAV2DEvAC | 2025-09-19T22:22:30 | 4 | [
{
"id": "1y3Xchgyqs",
"forum": "qBAV2DEvAC",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18886/Reviewer_kC4U",
"reviewer_name": "Reviewer_kC4U",
"rating": 8,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 4,
"summary": "The a... |
dDHnO3Vhyj | https://openreview.net/forum?id=dDHnO3Vhyj | Closing the Gap Between Text and Speech Understanding in LLMs | 6 | 3.666667 | [
6,
6,
6
] | [
3,
4,
4
] | 3 | [
"Speech language models",
"large language models",
"multimodal language models",
"modality alignment",
"cross-modal alignment",
"cross-modal transfer",
"cross-modal distillation",
"modality gap",
"speech processing"
] | Large Language Models (LLMs) can be adapted to extend their text capabilities to speech inputs. However, these speech-adapted LLMs consistently underperform their text-based counterparts—and even cascaded pipelines—on language understanding tasks. We term this shortfall the text–speech understanding gap: the performance drop observed when a speech-adapted LLM processes spoken inputs relative to when the original text-based LLM processes the equivalent text. Recent approaches to narrowing this gap either rely on large-scale speech synthesis of text corpora, which is costly and heavily dependent on synthetic data, or on large-scale proprietary speech datasets, which are not reproducible. As a result, there remains a need for more data-efficient alternatives for closing the text-speech understanding gap. In this work, we analyze the gap as driven by two factors: (i) forgetting of text capabilities during adaptation, and (ii) cross-modal misalignment between speech and text. Based on this analysis, we introduce SALAD—Sample-efficient Alignment with Learning through Active selection and cross-modal Distillation—which combines cross-modal distillation with targeted synthetic data to improve alignment while mitigating forgetting. Applied to 3B and 7B LLMs, SALAD achieves competitive performance with a strong open-weight model across broad-domain benchmarks in knowledge, language understanding, and reasoning, while training on over an order of magnitude less speech data from publicly available corpora. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=dDHnO3Vhyj | 2025-09-19T00:21:10 | 3 | [
{
"id": "VaU8OclD0v",
"forum": "dDHnO3Vhyj",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12987/Reviewer_nz78",
"reviewer_name": "Reviewer_nz78",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This ... | |
dB6DYLpjw4 | https://openreview.net/forum?id=dB6DYLpjw4 | Neural Mutual Information Estimation in Real Time via Pre-trained Hypernetworks | 5.333333 | 3 | [
4,
6,
6
] | [
4,
3,
2
] | 3 | [
"statistical dependence",
"transformers",
"hypernetwork",
"mutual information"
] | Measuring statistical dependency between high-dimensional random variables is fundamental to data science and machine learning. Neural mutual information (MI) estimators offer a promising avenue, but they typically require costly test-time iterative optimization for each new dataset, making them impractical for real-time applications. We present *FlashMI*, a pretrained, foundation model-like architecture that eliminates this bottleneck by directly inferring MI in a single forward pass. Pretrained on large-scale synthetic data covering diverse distributions and dependency structures, *FlashMI* learns to identify distributional patterns and predict MI directly from the input dataset. Comprehensive experiments demonstrate that *FlashMI* matches state-of-the-art neural estimators in accuracy while achieving 100× speedup, can seamlessly handle varying dimensions and sample sizes through a single unified model, and generalizes zero-shot to real-world tasks, including CLIP embedding analysis and motion trajectory modeling. By reformulating MI estimation from an optimization problem to a direct inference task, *FlashMI* establishes a practical foundation for real-time dependency analysis. | A pre-trained attention-based model for statistical dependence quantification, accurate, fast and differentiable | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=dB6DYLpjw4 | 2025-09-05T15:45:02 | 3 | [
{
"id": "OWPGpQ7okY",
"forum": "dB6DYLpjw4",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2321/Reviewer_GWbp",
"reviewer_name": "Reviewer_GWbp",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This p... |
x8tl75Yznn | https://openreview.net/forum?id=x8tl75Yznn | Optimizing Temporal and Spatial Efficiency for Chain-of-Thought Reasoning in Large Language Models | 4 | 3.25 | [
6,
4,
4,
2
] | [
3,
4,
3,
3
] | 4 | [
"Reasoning Model",
"Model Compression",
"Efficiency"
] | Chain-of-Thought (CoT) reasoning in Large Language Models (LLMs) achieves remarkable performance but suffers from significant computational overhead. CoT reasoning exhibits redundancy across two critical dimensions: temporal redundancy, where reasoning steps may be unnecessary, and spatial redundancy, where computations can be performed at reduced precision. While existing approaches require expensive dataset construction and model fine-tuning to improve reasoning efficiency, we propose Temporal-Spatial Adaptive Reasoning (TSAR), a training-free framework that jointly exploits both redundancy dimensions through coordinated optimization. TSAR segments reasoning based on Dewey's reflective thinking model, employs progressive precision reduction that adapts to both reasoning phases and progress, and coordinates termination decisions through entropy-based confidence estimation. Our adaptive scheduler prevents precision-induced errors while enabling compound efficiency gains. Extensive evaluation on diverse reasoning tasks demonstrates up to 12.4× speedup while maintaining accuracy, establishing coordinated multi-dimensional redundancy exploitation as superior to conventional optimization strategies. | We introduce TSAR, a training-free framework that dramatically accelerates LLM reasoning by adaptively reducing both unnecessary thinking steps and computational precision, achieving massive speedups without sacrificing accuracy. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=x8tl75Yznn | 2025-09-16T10:12:32 | 4 | [
{
"id": "vCihpHKjWK",
"forum": "x8tl75Yznn",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission6555/Reviewer_tC8f",
"reviewer_name": "Reviewer_tC8f",
"rating": 6,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 2,
"summary": "The pa... |
OgiLGrPgw5 | https://openreview.net/forum?id=OgiLGrPgw5 | Structured Visual Landscape: Generating Preferred Representations in Multi-modal Biological and Artificial Neural Networks | 2.5 | 4.25 | [
2,
2,
2,
4
] | [
4,
4,
5,
4
] | 4 | [
"visual representation",
"preferred images",
"fMRI",
"EEG",
"generative models"
] | Understanding how neurons respond to visual stimulus inputs is an important question in both deep learning and neuroscience. It has significant implications for enhancing the interpretability of black-box artificial neural networks and understanding the visual representation in biological neural networks. We propose a structured visual representation landscape and design an activation-score-based prior that allows effective regularization of the landscape with either activations from a brain region or units in neural networks. Our model Vis-Lens integrates a variational auto-encoder and a diffusion model as an image generative model. It allows generation of natural, realistic preferred images by directly modifying the activation-regularized latents, which avoids the tedious optimization procedure. We demonstrate the effectiveness of our framework in both artificial neural networks and biological neural networks with multi-modal response data derived from the human visual cortex, including functional Magnetic Resonance Imaging (fMRI) and electroencephalography (EEG). Our framework outperforms the state-of-the-art method in generating visual representations of those networks. | Develop a structured visual representation landscape constrained by activations to generate preferred representations in biological and artificial neural networks | applications to neuroscience & cognitive science | https://openreview.net/pdf?id=OgiLGrPgw5 | 2025-09-19T11:38:14 | 4 | [
{
"id": "mDktuKfSgl",
"forum": "OgiLGrPgw5",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15607/Reviewer_QtiM",
"reviewer_name": "Reviewer_QtiM",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 3,
"presentation": 2,
"summary": "This ... |
PifVwyqe5L | https://openreview.net/forum?id=PifVwyqe5L | RAG over Tables: Hierarchical Memory Index, Multi-Stage Retrieval, and Benchmarking | 3 | 4 | [
2,
2,
4,
4
] | [
4,
4,
4,
4
] | 4 | [
"Multi-table Question Answering",
"RAG"
] | Retrieval-Augmented Generation (RAG) enhances Large Language Models (LLMs) by integrating them with an external knowledge base to improve answer relevance and accuracy. In real-world scenarios, beyond pure text, a substantial amount of knowledge is stored in tables, and user questions often require retrieving answers that are distributed across multiple tables. Retrieving knowledge from a table corpus (i.e., various individual tables) for a question remains nascent, at least regarding (1) how to understand intra- and inter-table knowledge effectively, (2) how to filter unnecessary tables and how to retrieve the most relevant tables efficiently, (3) how to prompt LLMs to infer over the retrieved results, and (4) how to evaluate the corresponding performance in a realistic setting. Facing the above challenges, in this paper, we first propose a table-corpora-aware RAG framework, named T-RAG, which consists of a hierarchical memory index, multi-stage retrieval, and graph-aware prompting for effective and efficient table knowledge retrieval and inference. Further, we first develop a multi-table question answering benchmark named MultiTableQA, which spans 3 different task types, 57,193 tables, and 23,758 questions in total, all sourced from real-world scenarios. Based on MultiTableQA, we conduct a holistic comparison of table retrieval methods, RAG methods, and table-to-graph representation learning methods, where T-RAG shows leading accuracy, recall, and running-time performance. Also, under T-RAG, we evaluate the inference-ability improvements of different LLMs. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=PifVwyqe5L | 2025-09-17T11:02:10 | 4 | [
{
"id": "dIfFRlbPBY",
"forum": "PifVwyqe5L",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8309/Reviewer_cGTr",
"reviewer_name": "Reviewer_cGTr",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 1,
"presentation": 2,
"summary": "This p... | |
bNanN941dw | https://openreview.net/forum?id=bNanN941dw | Offline Clustering of Linear Bandits: The Power of Clusters under Limited Data | 4 | 3.5 | [
4,
4,
2,
6
] | [
4,
3,
4,
3
] | 4 | [
"Offline Clustering of Bandits",
"Offline Bandits Algorithms",
"Data Insufficiency"
] | Contextual multi-armed bandit is a fundamental learning framework for making a sequence of decisions, e.g., advertising recommendations for a sequence of arriving users. Recent works have shown that clustering these users based on the similarity of their learned preferences can accelerate the learning. However, prior work has primarily focused on the online setting, which requires continually collecting user data, ignoring the offline data widely available in many applications. To tackle these limitations, we study the offline clustering of bandits (Off-ClusBand) problem, which studies how to use the offline dataset to learn cluster properties and improve decision-making. The key challenge in Off-ClusBand arises from data insufficiency for users: unlike the online case where we continually learn from online data, in the offline case, we have a fixed, limited dataset to work from and thus must determine whether we have enough data to confidently cluster users together. To address this challenge, we propose two algorithms: Off-C$^2$LUB, which we show analytically and experimentally outperforms existing methods under limited offline user data, and Off-CLUB, which may incur bias when data is sparse but performs well and nearly matches the lower bound when data is sufficient. We experimentally validate these results on both real and synthetic datasets. | This paper addresses the offline clustering of bandits problem, proposing algorithms and theoretical bounds to handle various amount of data scenarios effectively. | learning theory | https://openreview.net/pdf?id=bNanN941dw | 2025-09-19T10:35:42 | 4 | [
{
"id": "SPfGnb8W7F",
"forum": "bNanN941dw",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15255/Reviewer_R7Pr",
"reviewer_name": "Reviewer_R7Pr",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The p... |
VRBKiWBPKZ | https://openreview.net/forum?id=VRBKiWBPKZ | Learning to Think in Blocks: A Prior-Guided Reinforcement Learning Framework for RAG | 3.5 | 3.5 | [
4,
4,
4,
2
] | [
2,
5,
3,
4
] | 4 | [
"Retrieval-Augmented Generation",
"Reinforcement Learning",
"Prior-Guided Learning",
"Structured Action Space",
"Query Rewriting"
] | Retrieval-Augmented Generation (RAG) systems mitigate factual inaccuracies in large language models (LLMs) by integrating external knowledge, but their effectiveness often hinges on query rewriting techniques. Prompt-based rewriting methods are frequently suboptimal, while existing reinforcement learning (RL) approaches struggle with inefficient, unguided exploration of the vast strategy space. To address these limitations, we propose an end-to-end RL framework that initializes the training process with human-defined prior rewriting strategies, enabling the model to learn from its interactions with the RAG environment and develop its own effective posterior rewriting strategies. Furthermore, we develop a novel RL algorithm, namely Block-wise Geometric Policy Optimization (BGPO), which resolves the granularity mismatch in previous methods by redefining the state-action space as blocks of tokens. This algorithm is enhanced by geometric averaging for balanced importance and a Bellman-equation-inspired credit assignment mechanism to reshape the reward. Extensive experiments on both local corpus retrieval and online search datasets demonstrate that our RL framework consistently surpasses the baselines, validating its superiority for complex RAG tasks. Our project code can be found at this anonymous repository: https://anonymous.4open.science/r/Learning-to-Think-in-Blocks-A-Prior-Guided-Reinforcement-Learning-Framework-for-RAG-0288/ | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=VRBKiWBPKZ | 2025-09-20T16:28:24 | 4 | [
{
"id": "KycdvGSGab",
"forum": "VRBKiWBPKZ",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24440/Reviewer_qJSP",
"reviewer_name": "Reviewer_qJSP",
"rating": 4,
"confidence": 2,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This ... | |
8zbWMaREah | https://openreview.net/forum?id=8zbWMaREah | Neighborhood Sampling Does Not Learn the Same Graph Neural Network | 3 | 3.25 | [
4,
4,
2,
2
] | [
3,
4,
3,
3
] | 4 | [
"graph neural network",
"neighborhood sampling",
"neural tangent kernel",
"Gaussian process posterior inference"
] | Neighborhood sampling is an important ingredient in the training of large-scale graph neural networks. It suppresses the exponential growth of the neighborhood size across network layers and maintains feasible memory consumption and time costs. While it has become a standard implementation in practice, its systemic behaviors are less well understood. We conduct a theoretical analysis by using the tool of neural tangent kernels, which characterize the (analogous) training dynamics of neural networks based on their infinitely wide counterparts---Gaussian processes (GPs). We study several established neighborhood sampling approaches and the corresponding posterior GP. With limited samples, the posteriors are all different, although they converge to the same one as the sample size increases. Moreover, the posterior covariance, which lower-bounds the mean squared prediction error, is incomparable across approaches, aligning with observations that no sampling approach dominates. | We analyze the training dynamics of graph neural networks under neighborhood sampling by using graph neural tangent kernels. | probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.) | https://openreview.net/pdf?id=8zbWMaREah | 2025-09-20T08:00:54 | 4 | [
{
"id": "OI6EsX3lxG",
"forum": "8zbWMaREah",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission22125/Reviewer_8FHy",
"reviewer_name": "Reviewer_8FHy",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The p... |
vNZ420zDte | https://openreview.net/forum?id=vNZ420zDte | DREAM: Decoupled Reinforcement Learning with Reward Measurement for Large Language Model Test-time Training | 4 | 3.75 | [
4,
4,
6,
2
] | [
4,
4,
4,
3
] | 4 | [
"Reinforcement Learning",
"Test-Time Training",
"Large Language Model"
] | This paper studies the problem of large language model (LLM) test-time training, which aims to enhance the reasoning ability of LLMs via unlabeled test data. Recent works usually utilize majority voting to infer the labels of samples to guide the reinforcement learning process, which could be inaccurate and biased with potential error accumulation. Towards this end, we propose a novel approach named Decoupled Reinforcement Learning with Reward Measurement (DREAM) for LLM test-time training. The core of our proposed DREAM is to decouple the reward estimation from reinforcement learning with enhanced calibration. In particular, our DREAM trains an LLM-based calibration model which takes both questions and answers as input, and outputs the calibration scores. To mitigate overconfident results, the judge model is trained by simulating on an independent reference dataset with positive and negative pairs. The reference-based calibration scores are incorporated into voting-based reward estimation to reduce the potential biases, which enhances reliable test-time training. Extensive experiments on benchmark datasets validate the superiority of the proposed DREAM in comparison with competing baselines. | reinforcement learning | https://openreview.net/pdf?id=vNZ420zDte | 2025-09-20T15:39:57 | 4 | [
{
"id": "W6CphpY9do",
"forum": "vNZ420zDte",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24201/Reviewer_eSoi",
"reviewer_name": "Reviewer_eSoi",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The a... | |
VRKKkfE4im | https://openreview.net/forum?id=VRKKkfE4im | HyenaMoE: A Hybrid and Scalable Architecture for Efficient Genomic Modeling | 4 | 4 | [
4,
4,
4,
4
] | [
4,
5,
3,
4
] | 4 | [
"Genomics",
"Hyena",
"Foundation Models",
"Large Language Models",
"Mixture of Experts"
] | DNA sequences serve as the fundamental blueprint of cellular life, encoding critical information for gene regulation, protein synthesis, and a broad spectrum of essential biological processes. Owing to their sequential structure, DNA sequences bear similarities to natural language, motivating the adaptation of large language model architectures and the pretraining–finetuning paradigm in genomics. This has led to the emergence of genomic foundation models that perform well across a wide range of downstream tasks. Nonetheless, current approaches face structural limitations. Transformer-based models possess strong representational capacity for local contexts, making them well-suited for tasks involving short sequences. However, their scalability is limited by the quadratic complexity of attention mechanisms. In contrast, methods based on state space models offer high computational efficiency and can process long-range genomic inputs, but they generally perform less strongly than Transformer counterparts on shorter sequences. To address these limitations, we introduce HyenaMoE, a unified hybrid architecture designed for genomic modeling using 3-mer tokenization. HyenaMoE combines efficient HyenaLite blocks for long-range dependency modeling with attention layers enhanced by Mixture-of-Experts routing, enabling scalable capacity expansion and more efficient allocation of model resources across diverse inputs. This design supports a favorable balance between model expressiveness and computational efficiency. Experiments on three representative benchmarks demonstrate that HyenaMoE achieves state-of-the-art performance across a diverse array of genomic prediction tasks. | HyenaMoE: A Hybrid and Scalable Architecture for Efficient Genomic Modeling | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=VRKKkfE4im | 2025-09-16T00:18:01 | 4 | [
{
"id": "l4EB7SyIPX",
"forum": "VRKKkfE4im",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission6114/Reviewer_6WF8",
"reviewer_name": "Reviewer_6WF8",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The pa... |
D0u0glT060 | https://openreview.net/forum?id=D0u0glT060 | Deconstructing Positional Information: From Attention Logits to Training Biases | 7.2 | 3.2 | [
8,
8,
4,
8,
8
] | [
3,
3,
4,
3,
3
] | 5 | [
"Position Encoding; Toeplitz Matrix; Attention Logit."
] | Positional encodings, a mechanism for incorporating sequential information into the Transformer model, are central to contemporary research on neural architectures. Previous work has largely focused on understanding their function through the principle of distance attenuation, where proximity dictates influence. However, the interaction between positional and semantic information remains insufficiently explored, and the complexity of mainstream corpora hinders systematic, comparative studies of these methods. This paper addresses these challenges through a deconstruction of the attention-logit computation and a structured analysis of all mainstream positional encodings. A key focus is placed on Rotary Positional Embedding (RoPE), whose product-based structure uniquely facilitates a direct interaction between position and content. To probe this characteristic, we designed a novel synthetic task that explicitly demands a strong synthesis of positional and semantic information. As theoretically predicted, RoPE demonstrates a significant performance advantage over other encodings on this specialized task. Concurrently, this targeted evaluation uncovers an implicit training issue: a hidden bias manifesting as a distinct information aggregation phenomenon in the model's shallow layers, which we term the "single-head deposit pattern." Through subsequent ablation studies, we analyze this pattern and identify a method for its mitigation. These findings highlight the need for a deeper investigation into the training dynamics of positional encodings to bridge the gap between their theoretical design and practical implementation. | We propose a unifying perspective on the role of positional encoding and discover that rope training exhibits implicit biases. | interpretability and explainable AI | https://openreview.net/pdf?id=D0u0glT060 | 2025-09-08T23:14:37 | 5 | [
{
"id": "KE0SbyfDt5",
"forum": "D0u0glT060",
"review_number": 5,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3159/Reviewer_hmEf",
"reviewer_name": "Reviewer_hmEf",
"rating": 8,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "The pa... |
VQJFDRLeTK | https://openreview.net/forum?id=VQJFDRLeTK | Efficient Edge Test-Time Adaptation via Latent Feature Coordinate Correction | 3.5 | 4.5 | [
4,
2,
2,
6
] | [
4,
4,
5,
5
] | 4 | [
"Test-time Adaptation",
"Edge Devices",
"Forward-Only",
"Latent Feature"
] | Edge devices face significant challenges due to limited computational resources and distribution shifts, making efficient and adaptable machine learning essential. Existing test-time adaptation (TTA) methods often rely on gradient-based optimization or batch processing, which are inherently unsuitable for resource-constrained edge scenarios due to their reliance on backpropagation and high computational demands. Gradient-free alternatives address these issues but often suffer from limited learning capacity, lack flexibility, or impose architectural constraints. To overcome these limitations, we propose a novel single-instance TTA method tailored for edge devices (TED), which employs forward-only coordinate optimization in the principal subspace of the latent features using the covariance matrix adaptation evolution strategy (CMA-ES). By updating a compact low-dimensional vector, TED not only enhances output confidence but also aligns the latent representation closer to the source latent distribution within the latent principal subspace. This is achieved without backpropagation, keeping the model parameters frozen, and enabling efficient, forgetting-free adaptation with minimal memory and computational overhead. Experiments on image classification and keyword spotting tasks across the ImageNet and Google Speech Commands series datasets demonstrate that TED achieves state-of-the-art performance while $\textit{reducing computational complexity by up to 63 times}$, offering a practical and scalable solution for real-world edge applications. Furthermore, we successfully $\textit{deployed TED on the ZYNQ-7020 platform}$, demonstrating its feasibility and effectiveness for resource-constrained edge devices in real-world deployments. | An efficient single-instance TTA method for edge devices, leveraging forward-only optimization in the latent principal subspace. | transfer learning, meta learning, and lifelong learning | https://openreview.net/pdf?id=VQJFDRLeTK | 2025-09-01T22:58:57 | 4 | [
{
"id": "0voDlOpRwx",
"forum": "VQJFDRLeTK",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission416/Reviewer_8Tmh",
"reviewer_name": "Reviewer_8Tmh",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This pa... |
wNAUAPfceN | https://openreview.net/forum?id=wNAUAPfceN | Guided Star-Shaped Masked Diffusion | 5.5 | 3 | [
4,
6,
6,
6
] | [
3,
3,
3,
3
] | 4 | [
"Discrete Diffusion",
"Text Diffusion Models",
"Masked Diffusion",
"Guided Sampling"
] | The performance of pre-trained masked diffusion models is often constrained by their sampling procedure, which makes decisions irreversible and struggles in low-step generation regimes. We introduce a novel sampling algorithm that works with pre-trained models and, after a lightweight fine-tuning of a single layer, significantly improves sample quality and efficiency. Our method reformulates the generation process using a star-shaped paradigm, which inherently allows for error correction. To make this process effective, we augment it with a learnable re-masking scheduler that intelligently identifies and revises likely errors. This approach yields a substantial quality boost, particularly when using a small number of sampling steps. We extensively ablate key components of our approach and show its usability in different scenarios. In comprehensive experiments on text and code generation, our sampling algorithm outperforms or matches existing methods. | We developed a new sampling algorithm that, with minimal fine-tuning, enables pre-trained diffusion models to self-correct, significantly boosting quality in few-step generation. | generative models | https://openreview.net/pdf?id=wNAUAPfceN | 2025-09-20T19:04:35 | 4 | [
{
"id": "NHDYmW21X8",
"forum": "wNAUAPfceN",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission25294/Reviewer_sA7a",
"reviewer_name": "Reviewer_sA7a",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The p... |
bs890te4so | https://openreview.net/forum?id=bs890te4so | Out-of-distributon Tests Reveal Compositionality in Chess Transformers | 2.666667 | 4.666667 | [
2,
2,
4
] | [
5,
5,
4
] | 3 | [
"Language model",
"transformer",
"chess model",
"out-of-distribution generalization",
"rule extrapolation",
"chess960"
] | Chess is a canonical example of a task that requires rigorous reasoning and long-term planning. Modern decision Transformers - trained similarly to LLMs - are able to learn competent gameplay, but it is unclear to what extent they truly capture the rules of chess.
To investigate this, we train a 270M parameter chess Transformer and test it on out-of-distribution scenarios, designed to reveal failures of systematic generalisation. Our analysis shows that Transformers exhibit compositional generalisation, as evidenced by strong rule extrapolation: they adhere to fundamental ‘syntactic’ rules of the game by consistently choosing valid moves even in situations very different from the training data. Moreover, they also generate high-quality moves for OOD puzzles. In a more challenging test, we evaluate the models on variants including Chess960 (Fischer Random Chess) - a variant of chess where starting positions of pieces are randomised. We found that while the models exhibit basic strategy adaptation, they are inferior to symbolic AI algorithms that perform explicit search, but the gap is smaller when playing against users on Lichess. Moreover, the training dynamics revealed that the model initially learns to move only its own pieces, suggesting an emergent compositional understanding of the game. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=bs890te4so | 2025-09-20T04:53:16 | 3 | [
{
"id": "J1R1Dk7eeO",
"forum": "bs890te4so",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission21232/Reviewer_r9CB",
"reviewer_name": "Reviewer_r9CB",
"rating": 2,
"confidence": 5,
"soundness": 3,
"contribution": 1,
"presentation": 3,
"summary": "The p... | |
ggONlqUsUY | https://openreview.net/forum?id=ggONlqUsUY | Isolation-based Spherical Ensemble Representations for Anomaly Detection | 3.5 | 3.5 | [
4,
4,
2,
4
] | [
4,
3,
4,
3
] | 4 | [
"Anomaly Detection",
"Unsupervised Learning",
"Isolation Forest"
] | Anomaly detection is a critical task in data mining and management with applications spanning fraud detection, network security, and log monitoring. Despite extensive research, existing unsupervised anomaly detection methods still face fundamental challenges including conflicting distributional assumptions, computational inefficiency, and difficulty handling different anomaly types. To address these problems, we propose ISER (Isolation-based Spherical Ensemble Representations) that extends existing isolation-based methods by using hypersphere radii as proxies for local density characteristics while maintaining linear time and constant space complexity. ISER constructs ensemble representations where hypersphere radii encode density information: smaller radii indicate dense regions while larger radii correspond to sparse areas. We introduce a novel similarity-based scoring method that measures pattern consistency by comparing ensemble representations against a theoretical anomaly reference pattern. Additionally, we enhance the performance of Isolation Forest by using ISER and adapting the scoring function to address axis-parallel bias and local anomaly detection limitations. Comprehensive experiments on 22 real-world datasets demonstrate ISER's superior performance over 11 baseline methods. | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=ggONlqUsUY | 2025-09-15T17:09:33 | 4 | [
{
"id": "oQ9kqGERU2",
"forum": "ggONlqUsUY",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5690/Reviewer_vDkB",
"reviewer_name": "Reviewer_vDkB",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This p... | |
5og80LMVxG | https://openreview.net/forum?id=5og80LMVxG | Latent Wavelet Diffusion For Ultra High-Resolution Image Synthesis | 4.5 | 4 | [
4,
6,
4,
4
] | [
4,
4,
4,
4
] | 4 | [
"Generative Models",
"Diffusion Models",
"Wavelet",
"Ultra High-Resolution"
] | High-resolution image synthesis remains a core challenge in generative modeling, particularly in balancing computational efficiency with the preservation of fine-grained visual detail. We present $\textit{Latent Wavelet Diffusion (LWD)}$, a lightweight training framework that significantly improves detail and texture fidelity in ultra-high-resolution (2K-4K) image synthesis. LWD introduces a novel, frequency-aware masking strategy derived from wavelet energy maps, which dynamically focuses the training process on detail-rich regions of the latent space. This is complemented by a scale-consistent VAE objective to ensure high spectral fidelity. The primary advantage of our approach is its efficiency: LWD requires no architectural modifications and adds zero additional cost during inference, making it a practical solution for scaling existing models. Across multiple strong baselines, LWD consistently improves perceptual quality and FID scores, demonstrating the power of signal-driven supervision as a principled and efficient path toward high-resolution generative modeling. | We enhance Ultra High-Resolution image generation by decomposing latent features into wavelet subbands, allowing the model to focus on frequency-specific refinement during diffusion. | generative models | https://openreview.net/pdf?id=5og80LMVxG | 2025-09-19T23:47:57 | 4 | [
{
"id": "m3wUaWedUV",
"forum": "5og80LMVxG",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19510/Reviewer_9QM1",
"reviewer_name": "Reviewer_9QM1",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This ... |
FFnbfI84bP | https://openreview.net/forum?id=FFnbfI84bP | FairMedQA: Benchmarking Bias in Large Language Models for Medicine and Healthcare | 3.333333 | 3.666667 | [
4,
2,
4
] | [
4,
4,
3
] | 3 | [
"LLM Bias",
"Medical QA",
"Adversarial Testing",
"Bias Benchmark"
] | Large language models (LLMs) are reaching expert-level accuracy on medical diagnosis questions, yet their underlying biases pose life-critical risks. Bias linked to race, sex, and socioeconomic status is well documented in clinical settings, but a consistent, automatic testbed and a large-scale empirical study across models and versions remain missing. To fill this gap, we present FairMedQA, a benchmark for evaluating bias in medical QA via adversarial testing. FairMedQA contains 4,806 adversarial descriptions and counterfactual question pairs generated from a multi-agent framework sourced from 801 clinical vignettes of the United States Medical Licensing Examination (USMLE) dataset. Using FairMedQA, we benchmark 12 representative LLMs and observe substantial statistical parity differences (SPD) between the counterfactual pairs across models, ranging from 3 to 19 percentage points. Compared with the existing CPV benchmark, FairMedQA reveals 15\% larger average accuracy gaps between privileged and unprivileged groups. Moreover, our cross-version analysis shows that upgrading from GPT-4.1-Mini to GPT-5-Mini significantly improves accuracy and fairness simultaneously. These results demonstrate that LLMs' performance and fairness in medicine and healthcare are not inherently a zero-sum trade-off, and ``win–win'' outcomes are achievable. | We introduce FairMedQA and use it to benchmark medical bias in LLMs across models and versions | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=FFnbfI84bP | 2025-09-19T02:50:59 | 3 | [
{
"id": "hWGb7PRMUA",
"forum": "FFnbfI84bP",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13715/Reviewer_cWHL",
"reviewer_name": "Reviewer_cWHL",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This ... |
v1p4bDnDAP | https://openreview.net/forum?id=v1p4bDnDAP | Two Birds with One Stone: Neural Tangent Kernel for Efficient and Robust Gradual Domain Adaptation | 4.666667 | 3.333333 | [
6,
6,
2
] | [
3,
3,
4
] | 3 | [
"Gradual Domain Adaptation",
"distribution shift",
"Neural Tangent Kernel",
"Out-of-distribution Generalization"
] | Gradual Domain Adaptation (GDA) bridges large distribution shifts through intermediate domains, yet faces challenges in computational overhead and error accumulation. In view of these problems, we propose GradNTK, a novel framework that employs the Neural Tangent Kernel (NTK) as one stone to "hit" two birds: the efficiency and robustness issues in GDA.
On one hand, by exploiting the short-time dynamics of wide neural networks, GradNTK instantiates an NTK-induced Maximum Mean Discrepancy (MMD) as a differentiable domain-alignment metric that enforces smooth transitions between adjacent domains while maintaining near-linear computational cost.
On the other hand, the same NTK dynamics generate a prospective utility function to weight source/target samples by their shift sensitivity, enabling curriculum-guided gradual adaptation while avoiding error accumulation.
Experiments on Portraits, Rotated MNIST and CIFAR-100-C demonstrate superior performance (e.g., 95.1\% on Rotated MNIST, 99.5\% on Color-Shift MNIST), while reducing training time by 1.8× compared to prior GDA methods. | One kernel, two roles: short-time NTK yields a differentiable NTK-MMD for smooth alignment and a utility score for per-sample weighting, enabling near-linear, single-pass GDA. | transfer learning, meta learning, and lifelong learning | https://openreview.net/pdf?id=v1p4bDnDAP | 2025-09-13T17:43:27 | 3 | [
{
"id": "GE1KUn4tGp",
"forum": "v1p4bDnDAP",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4746/Reviewer_ZvRF",
"reviewer_name": "Reviewer_ZvRF",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This w... |
7ZD9fSljxY | https://openreview.net/forum?id=7ZD9fSljxY | Robust Forecasting of Network Systems Subject to Topology Perturbation | 4 | 2.75 | [
4,
4,
6,
2
] | [
2,
2,
3,
4
] | 4 | [
"Robust Learning",
"Network Forecasting",
"Bayesian Coresets",
"Model Reduction"
] | Many real-world dynamical systems, such as epidemic, traffic, and logistics networks, consist of sparsely interacting components and thus naturally exhibit an underlying graph structure. Forecasting their evolution is computationally challenging due to high dimensionality and is further complicated by measurement noise and uncertainty in the network topology. We address this problem by studying the predictability of graph time series under random topology perturbations, a problem with major implications that has remained largely unexplored. In the limit of large networks, we uncover distinct noise regimes: systems that are predictable with arbitrary accuracy, systems predictable only up to limited accuracy, and systems that become entirely unpredictable. Motivated by this characterization, we propose a time series forecasting framework based on a probabilistic representation of network dynamics, which leverages Bayesian coreset approximations for scalable and robust dimensionality reduction. Numerical experiments on both synthetic and real-world networks demonstrate that our approach achieves competitive accuracy and robustness under topology uncertainty, while significantly reducing computational costs. | We propose a forecasting scheme for network time series that is robust to topology perturbation. | learning on time series and dynamical systems | https://openreview.net/pdf?id=7ZD9fSljxY | 2025-09-19T05:29:37 | 4 | [
{
"id": "sB42mkqi59",
"forum": "7ZD9fSljxY",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14239/Reviewer_BzyQ",
"reviewer_name": "Reviewer_BzyQ",
"rating": 4,
"confidence": 2,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The p... |
uAkexWJ7dW | https://openreview.net/forum?id=uAkexWJ7dW | Robust Selective Activation with Randomized Temporal K-Winner-Take-All in Spiking Neural Networks for Continual Learning | 6 | 4.25 | [
4,
6,
8,
6
] | [
3,
5,
5,
4
] | 4 | [
"Spiking neural networks"
] | The human brain exhibits remarkable efficiency in processing sequential information, a capability deeply rooted in the temporal selectivity and stochastic competition of neuronal activation. Current continual learning in spiking neural networks (SNNs) faces a critical challenge: balancing task-specific selectivity with adaptive resource allocation and enhancing robustness to perturbations to mitigate catastrophic forgetting. Considering the intrinsic temporal dynamics of spiking neurons instead of traditional K-winner-take-all (K-WTA) based on firing rate, we explore how to make networks robust to temporal perturbations in SNNs on lifelong learning tasks.
In this paper, we propose Randomized Temporal K-winner-take-all (RTK-WTA) SNNs for lifelong learning, a biologically grounded approach that integrates trace-dependent neuronal activation with probabilistic top-k selection. By dynamically prioritizing neurons based on their spatiotemporal relevance, RTK-WTA SNNs emulate the brain’s ability to modulate neural resources in spatial and temporal dimensions while introducing controlled randomness to prevent overlapping task representations. We show theoretically that the proposed RTK-WTA SNNs enhance inter-class margins and robustness through expanded feature space utilization. The experimental results show that RTK-WTA surpasses deterministic K-WTA by 3.07–5.0\% accuracy on splitMNIST and splitCIFAR100 with elastic weight consolidation. Controlled stochasticity balances temporal coherence and adaptability, offering a scalable framework for lifelong learning in neuromorphic systems. | applications to neuroscience & cognitive science | https://openreview.net/pdf?id=uAkexWJ7dW | 2025-09-18T21:53:46 | 4 | [
{
"id": "E5Azytfs2Y",
"forum": "uAkexWJ7dW",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11765/Reviewer_X1JP",
"reviewer_name": "Reviewer_X1JP",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This ... | |
mSSoedJ2h5 | https://openreview.net/forum?id=mSSoedJ2h5 | Prover Agent: An Agent-Based Framework for Formal Mathematical Proofs | 4 | 4 | [
8,
4,
2,
2
] | [
3,
4,
5,
4
] | 4 | [
"Agent",
"Formal Theorem Proving",
"Automated Theorem Proving",
"Small Language Model"
] | We present Prover Agent, a novel AI agent for automated theorem proving that integrates large language models (LLMs) with a formal proof assistant, Lean. Prover Agent coordinates an informal reasoning LLM, a formal prover model, and feedback from Lean while also generating auxiliary lemmas to assist in discovering the overall proof strategy. It achieves an 88.1% success rate on the MiniF2F benchmark, establishing a new state-of-the-art among methods using small language models (SLMs) with a much lower sample budget than previous approaches. We also present theoretical analyses and case studies that illustrate how these generated lemmas contribute to solving challenging problems. | We present Prover Agent, an AI agent for automated theorem proving that integrates LLMs with Lean and auxiliary lemma generation, achieving 88.1% on MiniF2F, the new SOTA among methods using small language models. | neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.) | https://openreview.net/pdf?id=mSSoedJ2h5 | 2025-09-19T21:22:57 | 4 | [
{
"id": "qGZAScgChL",
"forum": "mSSoedJ2h5",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18484/Reviewer_rWUY",
"reviewer_name": "Reviewer_rWUY",
"rating": 8,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "The g... |
h7g3HflcxE | https://openreview.net/forum?id=h7g3HflcxE | BAQ: Efficient Bit Allocation Quantization for Large Language Models | 2.5 | 4.5 | [
2,
2,
4,
2
] | [
4,
4,
5,
5
] | 4 | [
"model compression",
"post training quantization",
"weight-only quantization",
"bit allocation"
] | Post-training model quantization is a widely adopted technique for reducing the memory and computational costs of large language models (LLMs). However, most existing methods either fix a uniform bitwidth or rely on binary sensitivity groupings (``sensitive'' vs.\ ``non-sensitive'') that treat all weights within a group identically, ignoring how sensitive each weight actually is and leaving the degree of sensitivity under-exploited. To address this, for the first time in the neural network quantization literature, we introduce an explicit loss--bitwidth relation that links layer-output distortion to the assigned precision, together with a sensitivity-guided bit-allocation quantization (BAQ) framework. Under mild assumptions, this modeling makes the layer-wise loss an explicit function of quantization bitwidth and yields a convex resource-allocation problem with a \emph{closed-form} solution that adapts precision across weights. This choice is theoretically motivated by rate–distortion theory and validated by extensive simulations. Inspecting the solution of the proposed resource-allocation problem provides several insights (such as the equal-loss structure), which are then exploited to design the proposed algorithm. The proposed algorithm achieves a good trade-off between loss minimization and complexity and allows BAQ to be integrated into standard quantization pipelines with minimal overhead. Experimental results show that BAQ consistently outperforms GPTQ, achieving up to 56$\times$ lower perplexity at the same bitwidth on large language models (e.g., OPT, Llama) ranging from 125M to 30B parameters. Leveraging our analytical results derived from solving the optimal bit allocation problem, we also provide a theoretical explanation for the observed gains. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=h7g3HflcxE | 2025-09-19T22:26:29 | 4 | [
{
"id": "2r3Huvp7fn",
"forum": "h7g3HflcxE",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18920/Reviewer_cJWm",
"reviewer_name": "Reviewer_cJWm",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The p... | |
BlbLPArfCD | https://openreview.net/forum?id=BlbLPArfCD | MoST: Mixing Speech and Text with Modality-Aware Mixture of Experts | 4.666667 | 3 | [
6,
4,
4
] | [
3,
3,
3
] | 3 | [
"Mixture of Experts",
"Speech Language Model",
"Multimodal Language Model"
] | We present MoST (Mixture of Speech and Text), a novel multimodal large language model that seamlessly integrates speech and text processing through our proposed Modality-Aware Mixture of Experts (MAMoE) architecture. While current multimodal models typically process diverse modality representations with identical parameters, disregarding their inherent representational differences, we introduce specialized routing pathways that direct tokens to modality-appropriate experts based on input type. MAMoE simultaneously enhances modality-specific learning and cross-modal understanding through two complementary components: modality-specific expert groups that capture domain-specific patterns and shared experts that facilitate information transfer between modalities. Building on this architecture, we develop an efficient transformation pipeline that adapts the pretrained MoE language model through strategic post-training on ASR and TTS datasets, followed by fine-tuning with a carefully curated speech-text instruction dataset. A key feature of this pipeline is that it relies exclusively on fully accessible, open-source datasets to achieve strong performance and data efficiency. Comprehensive evaluations across ASR, TTS, audio language modeling, and spoken question answering benchmarks show that MoST consistently outperforms existing models of comparable parameter counts. Our ablation studies confirm that the modality-specific routing mechanism and shared experts design significantly contribute to performance gains across all tested domains. To our knowledge, MoST represents the first fully open-source speech-text LLM built on a Mixture of Experts architecture. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=BlbLPArfCD | 2025-09-18T19:57:20 | 3 | [
{
"id": "mTjQ60WWdz",
"forum": "BlbLPArfCD",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11337/Reviewer_pG6E",
"reviewer_name": "Reviewer_pG6E",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The p... | |
OSVlYfp9Po | https://openreview.net/forum?id=OSVlYfp9Po | REPRESENTATION LEARNING ON NATIVE CORTICAL SURFACES: FROM GEOMETRY TO INDIVIDUAL TRAITS | 3.333333 | 3.666667 | [
2,
4,
4
] | [
5,
4,
2
] | 3 | [
"cortical surface",
"self-attention",
"transformer"
] | Analyzing the intricate geometry of the cerebral cortex is fundamental to understanding the neuroanatomical basis of individual traits. However, the fundamental conflict between powerful, grid-dependent architectures like Transformers and the irregular cortical mesh has forced a compromise: the distortive practice of spherical projection. This act of simplification discards the geometric subtleties we aim to study.
To resolve this foundational data-architecture mismatch, we propose the Native Cortical Surface Representation Learning Model (NCS-RL), an end-to-end framework that reshapes the data to fit the model, not the other way around. Its first component, the Canonical Surface Generator, creates a shared, regular topological grid across all subjects. Onto this grid, it precisely maps each individual's unique geometric details via diffeomorphic deformation. This single process achieves three critical goals simultaneously: it establishes a principled tokenization for Transformers, resolves inter-subject correspondence, and yields a spectrum of anatomically faithful variations for data augmentation.
With the cortical surface now represented as a structured and geometrically rich sequence of tokens, the second component, the Cortical Transformer, is designed to interpret it. Its dual-pathway architecture is built to leverage this new data structure: one pathway uses our novel Adjacency Self-Attention to learn fine-grained local geometric patterns directly from the native surface priors, while the other captures global context. A gated mechanism then fuses these pathways, forging a holistic representation that understands not just what a cortical region is, but precisely how it is shaped.
Moreover, to ensure geometric fidelity, our model was pre-trained on over $5,000$ subjects from the ABCD, HCP, and ABIDE datasets. Our method demonstrates state-of-the-art performance in experiments and ablation studies, including phenotype prediction and functional map regression. Our implementation is available in the supplementary material and will be released. | applications to neuroscience & cognitive science | https://openreview.net/pdf?id=OSVlYfp9Po | 2025-09-19T16:09:24 | 3 | [
{
"id": "GiVddMgAR8",
"forum": "OSVlYfp9Po",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16848/Reviewer_7orw",
"reviewer_name": "Reviewer_7orw",
"rating": 2,
"confidence": 5,
"soundness": 1,
"contribution": 2,
"presentation": 2,
"summary": "This ... | |
BViZkEr0IA | https://openreview.net/forum?id=BViZkEr0IA | The Mechanistic Emergence of Symbol Grounding in Language Models | 4.8 | 3.4 | [
6,
4,
6,
6,
2
] | [
3,
4,
3,
4,
3
] | 5 | [
"Language Grounding",
"Mechanistic Interpretability",
"Language Models"
] | Symbol grounding (Harnad, 1990) describes how symbols such as words acquire their meanings by connecting to real-world sensorimotor experiences.
Recent work has shown preliminary evidence that grounding may emerge in (vision-)language models trained at scale without using explicit grounding objectives.
Yet, the specific loci of this emergence and the mechanisms that drive it remain largely unexplored.
To address this problem, we introduce a controlled evaluation framework that systematically traces how symbol grounding arises within the internal computations through mechanistic and causal analysis.
Our findings show that grounding concentrates in middle-layer computations and is implemented through the aggregate mechanism, where attention heads aggregate the environmental ground to support the prediction of linguistic forms.
This phenomenon replicates in multimodal dialogue and across architectures (Transformers and state-space models), but not in unidirectional LSTMs.
Our results provide behavioral and mechanistic evidence that symbol grounding can emerge in language models, with practical implications for predicting and potentially controlling the reliability of generation. | We provide behavioral and mechanistic evidence that symbol grounding can emerge in autoregressive language models. | interpretability and explainable AI | https://openreview.net/pdf?id=BViZkEr0IA | 2025-09-03T23:25:43 | 5 | [
{
"id": "cOhLjeRZfB",
"forum": "BViZkEr0IA",
"review_number": 5,
"reviewer_id": "ICLR.cc/2026/Conference/Submission1734/Reviewer_P6pG",
"reviewer_name": "Reviewer_P6pG",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "This p... |
i6YOXKtzqN | https://openreview.net/forum?id=i6YOXKtzqN | Decoupled Diffusion Models for Efficient Spatio-Temporal Graph Forecasting | 4 | 4 | [
4,
4,
4
] | [
4,
5,
3
] | 3 | [
"spatio-temporal graph forecasting",
"probabilistic forecasting",
"diffusion model",
"decoupled graph neural network"
] | Graph-based diffusion models suffer from a critical computational bottleneck, limiting their use in practical applications such as spatio-temporal graph forecasting. We argue that this inefficiency stems from the fusion of information propagation and feature transformation within standard GNNs. In this paper, we introduce a design principle that decouples these two operations, enabling a highly efficient and linear architecture. Instantiating this principle, Decoupled Spatio-Temporal Diffusion Model (DSTD) leverages the principle alongside a dynamic multi-scale aggregation mechanism to achieve remarkable performance. On widely-used spatio-temporal graph forecasting benchmarks, DSTD not only outperforms existing probabilistic methods but also surpasses top-performing deterministic models, while demonstrating a significant reduction in inference time. Our results validate that decoupling is a powerful and effective strategy for building scalable and high-performing generative models for graph-structured data. | learning on graphs and other geometries & topologies | https://openreview.net/pdf?id=i6YOXKtzqN | 2025-09-09T13:25:01 | 3 | [
{
"id": "avOTxGvzW8",
"forum": "i6YOXKtzqN",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3291/Reviewer_2Acj",
"reviewer_name": "Reviewer_2Acj",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The pa... | |
ni3BIqhCzw | https://openreview.net/forum?id=ni3BIqhCzw | On the Generalization of Dynamic GNNs: A Heavy-Tailed Wavelet Perspective | 2.666667 | 2.666667 | [
4,
2,
2
] | [
2,
3,
3
] | 3 | [
"dynamic F"
] | Dynamic graphs exhibit bursty and intermittent dynamics that are poorly captured by standard sequence models. We take a signal–statistical view and show that node-wise temporal signals, once transformed into wavelet space, display Pareto-type heavy tails: a small set of high-magnitude coefficients concentrates a large fraction of the total energy. Building on this observation, we introduce Tail-Aware Masking for Dynamic GNNs (DGNNs): a simple, plug-in mechanism that retains only the top wavelet coefficients (per node) and zeros out the rest before message passing.
On the theory side, under a mild regularly varying tail assumption with index $\alpha>2$, we prove that (i) the retained coefficients capture a constant fraction of energy scaling as $\rho^{1-2/\alpha}$ for retention ratio $\rho$, (ii) masking increases an effective tail index of the features, and (iii) the empirical Rademacher complexity and the generalisation gap of the resulting hypothesis class contract at rate $\mathcal{O}\!\big(\rho^{\frac{1}{2}-\frac{1}{\alpha}}/\sqrt{nT}\big)$. These results formalise why sparse, tail-focused representations improve sample efficiency.
Empirically, on METR-LA we observe clear heavy tails via survival curves and Q–Q plots, validating the modelling prior. Our tail-aware DGNN consistently outperforms its baseline counterpart, yielding substantial reductions in MSE and gains on tail-sensitive metrics, while maintaining training stability through a short warmup. The approach is architecture-agnostic, interpretable (the mask exposes the most informative time–node events), and requires minimal tuning. Together, our findings connect a robust statistical phenomenon of dynamic graph signals to concrete architectural choices and provable generalisation benefits. | learning theory | https://openreview.net/pdf?id=ni3BIqhCzw | 2025-09-19T12:29:54 | 3 | [
{
"id": "s3ax65sgUZ",
"forum": "ni3BIqhCzw",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15857/Reviewer_3vf2",
"reviewer_name": "Reviewer_3vf2",
"rating": 4,
"confidence": 2,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The p... | |
4w9HzBBLRk | https://openreview.net/forum?id=4w9HzBBLRk | Towards Multimodal Understanding, Reasoning, and Tool Usage across Vision, Speech, and Audio in Long Videos | 4 | 4.5 | [
4,
4,
4,
4
] | [
4,
5,
5,
4
] | 4 | [
"multimodal",
"long-form video understanding",
"benchmark",
"agentic pipeline",
"question answering",
"scenario-driven QA"
] | Long-form, multimodal video understanding requires models to integrate vision, speech, and ambient audio while reasoning coherently over extended contexts. However, existing benchmarks often emphasize either long temporal contexts or rich multimodal content, but rarely both. Moreover, they are typically restricted to multiple-choice evaluations and a single accuracy metric, offering limited insight into where models succeed or fail. To address these gaps, we introduce **STARBench**, a diagnostic benchmark designed for long-form, multimodal video understanding. STARBench features open-ended, intent-driven questions that reflect how humans naturally engage with video content. It supports single- and multi-turn dialogues, encompassing multimodal reasoning and agentic tool-use tasks across rich video, audio, and speech contexts. Each question includes a reference answer and a rubric with graded criteria, enabling interpretable and traceable evaluation. Importantly, STARBench is generated via a scalable, human-validated pipeline, ensuring reproducibility and coverage. Complementing the benchmark, we propose **STARAgent**, an agentic system for analyzing long videos using pre-processing, search, and refinement tools. Evaluating state-of-the-art closed- and open-source MLLMs on STARBench reveals substantial limitations: the top-performing Gemini-2.5-Flash reaches only 52.95\%, while open-source models remain below 25\%. STARAgent, leveraging structured reasoning over long videos, achieves 44.66\%, highlighting the challenge of complex, real-world video understanding. By combining breadth, interpretability, and reproducibility, STARBench provides a practical foundation for benchmarking and improving MLLMs on long-form, multimodal video tasks. All code, including the agentic pipeline, and datasets will be released publicly. 
| STARBench is a human-validated benchmark for long-form multimodal video understanding, and STARAgent is an agentic pipeline for multimodal long video understanding, together exposing current state-of-the-art MLLMs’ limits | datasets and benchmarks | https://openreview.net/pdf?id=4w9HzBBLRk | 2025-09-20T19:04:54 | 4 | [
{
"id": "VLoZTY39Fl",
"forum": "4w9HzBBLRk",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission25295/Reviewer_bBJM",
"reviewer_name": "Reviewer_bBJM",
"rating": 4,
"confidence": 4,
"soundness": 1,
"contribution": 2,
"presentation": 3,
"summary": "This ... |
ifesxQxlJp | https://openreview.net/forum?id=ifesxQxlJp | Amplitude-based Input Attribution in Quantum Learning via Integrated Gradients | 3 | 4.25 | [
0,
4,
4,
4
] | [
5,
4,
4,
4
] | 4 | [
"Quantum Computing",
"Quantum Machine Learning",
"Interpretability in QML"
] | Quantum machine learning (QML) algorithms have demonstrated early promise across hardware platforms, but remain difficult to interpret due to the inherent opacity of quantum state evolution. Widely-used classical interpretability methods, such as integrated gradients and surrogate-based sensitivity analysis, are not directly compatible with quantum circuits due to measurement collapse and the exponential complexity of simulating state evolution. In this work, we introduce HATTRIQ, a general-purpose framework to compute amplitude-based input attribution scores in circuit-based QML models. HATTRIQ supports the widely-used input amplitude embedding feature encoding scheme and uses a Hadamard test–based construction to compute input gradients directly on quantum hardware to generate provably faithful attributions. We validate HATTRIQ on classification tasks across several datasets (Bars and Stripes, MNIST, and FashionMNIST). | We introduce HattriQ, a technique for computing the input attribution scores of quantum machine learning models. | other topics in machine learning (i.e., none of the above) | https://openreview.net/pdf?id=ifesxQxlJp | 2025-09-19T07:25:06 | 4 | [
{
"id": "oULR5R3uMm",
"forum": "ifesxQxlJp",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14522/Reviewer_iYGU",
"reviewer_name": "Reviewer_iYGU",
"rating": 0,
"confidence": 5,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The p... |
QTh3aRcbTt | https://openreview.net/forum?id=QTh3aRcbTt | Improved Regret for Decentralized Online Convex Optimization with Compressed Communication | 3 | 3.5 | [
2,
4,
2,
4
] | [
4,
2,
4,
4
] | 4 | [
"Decentralized Online Convex Optimization",
"Compressed Communication"
] | We investigate decentralized online convex optimization with compressed communication, where $n$ learners collaboratively minimize a sequence of global loss functions using only local information and compressed data from their neighbors. Prior work has established regret bounds of $O(\max\\{\omega^{-2}\rho^{-4}n^{1/2},\omega^{-4}\rho^{-8}\\}n\sqrt{T})$ and $O(\max\\{\omega^{-2}\rho^{-4}n^{1/2},\omega^{-4}\rho^{-8}\\}n\ln{T})$ for convex and strongly convex functions, respectively, where $\omega\in(0,1]$ is the compression quality factor ($\omega=1$ means no compression) and $\rho<1$ is the spectral gap of the communication matrix. However, these regret bounds suffer from a quadratic or even quartic dependence on $\omega^{-1}$. Moreover, the super-linear dependence on $n$ is also undesirable. To overcome these limitations, we propose a novel algorithm that achieves improved regret bounds of $\tilde{O}(\omega^{-1/2}\rho^{-1}n\sqrt{T})$ and $\tilde{O}(\omega^{-1}\rho^{-2}n\ln{T})$ for convex and strongly convex functions, respectively. The primary idea is to design a \emph{two-level blocking update framework} incorporating two novel ingredients: an online gossip strategy and an error compensation scheme, which collaborate to achieve a better consensus among local learners. Furthermore, we establish the first lower bounds for this problem, justifying the optimality of our results with respect to both $\omega$ and $T$. Additionally, we consider the bandit feedback scenario, and extend our method with the classic gradient estimators to enhance existing regret bounds. | optimization | https://openreview.net/pdf?id=QTh3aRcbTt | 2025-09-18T21:02:24 | 4 | [
{
"id": "qyCnlHDAla",
"forum": "QTh3aRcbTt",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11508/Reviewer_kFty",
"reviewer_name": "Reviewer_kFty",
"rating": 2,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This ... | |
4zoMnmZzh4 | https://openreview.net/forum?id=4zoMnmZzh4 | VisCoder2: Building Multi-Language Visualization Coding Agents | 5.5 | 3.75 | [
8,
4,
8,
2
] | [
4,
4,
3,
4
] | 4 | [
"Code Models",
"Visualization",
"Fine-tuning"
] | Large language models (LLMs) have recently enabled coding agents capable of generating, executing, and revising visualization code. However, existing models often fail in practical workflows due to limited language coverage, unreliable execution, and lack of iterative correction mechanisms. Progress has been constrained by narrow datasets and benchmarks that emphasize single-round generation and single-language tasks. To address these challenges, we introduce three complementary resources for advancing visualization coding agents. **VisCode-Multi-679K** is a large-scale, supervised dataset containing 679K validated and executable visualization samples with multi-turn correction dialogues across 12 programming languages. **VisPlotBench** is a benchmark for systematic evaluation, featuring executable tasks, rendered outputs, and protocols for both initial generation and multi-round self-debug. Finally, we present **VisCoder2**, a family of multi-language visualization models trained on VisCode-Multi-679K. Experiments show that VisCoder2 significantly outperforms strong open-source baselines and approaches the performance of proprietary models like GPT-4.1, with further gains from iterative self-debug, reaching **82.4%** overall execution pass rate at the 32B scale, particularly in symbolic or compiler-dependent languages. | generative models | https://openreview.net/pdf?id=4zoMnmZzh4 | 2025-09-19T18:53:05 | 4 | [
{
"id": "axI3jKVnz3",
"forum": "4zoMnmZzh4",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17682/Reviewer_R3P7",
"reviewer_name": "Reviewer_R3P7",
"rating": 8,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 1,
"summary": "This ... | |
xQRAo9YUQ3 | https://openreview.net/forum?id=xQRAo9YUQ3 | RAST-MoE-RL: A Regime-Aware Spatio-Temporal MoE Framework for Deep Reinforcement Learning in Ride-Hailing | 5 | 3.75 | [
4,
6,
6,
4
] | [
4,
3,
4,
4
] | 4 | [
"Deep Reinforcement Learning",
"Mixture of Experts",
"Urban Mobility",
"Ride-Hailing"
] | Ride-hailing platforms face the challenge of balancing passenger waiting times with overall system efficiency under highly uncertain supply–demand conditions. Adaptive delayed matching, which decides whether to assign drivers immediately or hold requests for batching, creates a fundamental trade-off between matching delay and pickup delay. Because these outcomes accumulate over long horizons and depend on stochastic, evolving supply–demand states, reinforcement learning (RL) is a natural framework for this problem. Yet existing approaches often oversimplify traffic dynamics, misrepresent congestion effects, or employ RL models with shallow encoders that fail to capture complex spatiotemporal patterns.
We introduce the Regime-Aware Spatio-Temporal Mixture-of-Experts framework (RAST-MoE), which formalizes adaptive delayed matching as a regime-aware MDP and equips RL agents with a self-attention MoE encoder. Instead of relying on a single monolithic network, our design allows different experts to specialize automatically, improving representation capacity while keeping per-sample computation efficient. A physics-informed congestion surrogate preserves realistic density–speed feedback while enabling millions of efficient rollouts. An adaptive reward scheme further guards against pathological strategies by dynamically penalizing service-quality violations.
Despite its modest size of only 12M parameters, our framework consistently outperforms strong baselines. On real-world Uber trajectory data from San Francisco, it improves total reward by over 13%, reduces average matching delay by 15%, and reduces pickup delay by 10%. In addition, it demonstrates strong robustness across unseen demand regimes, stable training without reward hacking, and validated expert specialization. These findings demonstrate the broader potential of MoE-enhanced RL for large-scale decision-making tasks with complex spatiotemporal dynamics and large action spaces. | applications to robotics, autonomy, planning | https://openreview.net/pdf?id=xQRAo9YUQ3 | 2025-09-16T07:58:37 | 4 | [
{
"id": "EwRflc6PT5",
"forum": "xQRAo9YUQ3",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission6403/Reviewer_jLCz",
"reviewer_name": "Reviewer_jLCz",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This p... | |
77VCQs9VSU | https://openreview.net/forum?id=77VCQs9VSU | From Assistants to Companions: Towards the Usefulness of Improving Theory of Mind for Human-AI Symbiosis | 3 | 3 | [
4,
2,
2,
4
] | [
3,
2,
4,
3
] | 4 | [
"Theory of Mind",
"Large Language Model",
"Human-AI Interaction"
] | Theory of Mind (ToM) is crucial for successful human-AI (HAI) interactions. It is a key capability for AI to attribute humans' mental states based on dynamic interactions from a first-person perspective and then improve responses to humans accordingly. However, the existing benchmarks for Large Language Models (LLMs) focus on testing their ToM capability with story-reading from a third-person perspective, leading to a critical gap between benchmark performance and practical competence in HAI collaborative and supportive tasks. To bridge this gap, we introduce a novel evaluation framework within HAI contexts, shifting from static test-taking to dynamic, first-person engagement. Our framework assesses LLM performance across two fundamental types of interaction scenarios derived from cognitive science: goal-oriented tasks (e.g., coding, math) and experience-oriented tasks (e.g., counseling). With the framework, we systematically evaluate LLMs and related techniques to improve their ToM across four synthesized benchmarks and a crowdsourcing user study with 100 participants. Our findings reveal that improvements on static benchmarks do not always translate to better performance in dynamic HAI interactions. This paper offers critical insights into ToM evaluation, highlighting the necessity of interaction-based assessments and providing a roadmap for developing next-generation, socially aware LLMs for HAI symbiosis. | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=77VCQs9VSU | 2025-09-19T12:42:30 | 4 | [
{
"id": "ljYTKgnAfU",
"forum": "77VCQs9VSU",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15908/Reviewer_1ry9",
"reviewer_name": "Reviewer_1ry9",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "It pr... | |
0GaCfBRFnf | https://openreview.net/forum?id=0GaCfBRFnf | A Stitch in Time Saves Nine: Proactive Self-Refinement for Language Models | 6 | 3.25 | [
8,
6,
4,
6
] | [
3,
4,
3,
3
] | 4 | [
"Large language models",
"Self-refine"
] | Recent advances in self-refinement have demonstrated significant potential for improving the outputs of large language models (LLMs) through iterative refinement. However, most existing self-refinement methods rely on a reactive process with a fixed number of iterations, making it difficult to determine the optimal timing and content of refinement based on the evolving generation context. Inspired by the way humans dynamically refine their thoughts during execution, we propose ProActive Self-Refinement (PASR), a novel method that enables LLMs to refine their outputs during the generation process. Unlike methods that regenerate entire responses, PASR proactively decides whether, when, and how to refine based on the model’s internal state and evolving context. We conduct extensive experiments on a diverse set of 10 tasks to evaluate the effectiveness of PASR. Experimental results show that PASR significantly enhances problem-solving performance. In particular, on Qwen3-8B, PASR reduces average token consumption by 41.6% compared to standard generation, while also achieving an 8.2% improvement in accuracy. Our code and all baselines used in the paper are available in the GitHub. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=0GaCfBRFnf | 2025-09-19T19:46:35 | 4 | [
{
"id": "ykGVimnysm",
"forum": "0GaCfBRFnf",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17956/Reviewer_RVyM",
"reviewer_name": "Reviewer_RVyM",
"rating": 8,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The p... | |
1VCu7aFQzk | https://openreview.net/forum?id=1VCu7aFQzk | Scalable Variational Bayesian Fine-Tuning of LLMs via Orthogonalized Low-Rank Adapter | 3.5 | 4.5 | [
2,
2,
6,
4
] | [
5,
5,
3,
5
] | 4 | [
"Uncertainty Quantification",
"Bayesian Neural Network",
"Bayesian last layer",
"Large Language Models",
"Parameter-Efficient Fine-Tuning",
"Orthogonal Parametrization"
] | When deploying large language models (LLMs) to safety-critical applications, uncertainty quantification (UQ) is of utmost importance to self-assess the reliability of the LLM-based decisions. However, such decisions typically suffer from overconfidence, particularly after parameter-efficient fine-tuning (PEFT) for downstream domain-specific tasks with limited data.
To address these limitations, we build on the Bayesian last layer (BLL) model, where the LLM-based ${\it deterministic}$ feature extractor is followed by random LL parameters for uncertainty reasoning.
Since existing low-rank adapters (LoRA) for PEFT have limited expressiveness due to rank collapse, we address this with Polar-decomposed Low-rank Adapter Representation (PoLAR), an orthogonalized parameterization paired with Riemannian optimization to enable more stable and expressive adaptation.
The resulting PoLAR-VBLL is a flexible framework that nicely integrates architecture-enhanced optimization with scalable Bayesian inference to endow LLMs with well-calibrated UQ.
Our empirical results verify the effectiveness of PoLAR-VBLL in terms of generalization and uncertainty estimation on both in-distribution and out-of-distribution data for various common-sense reasoning tasks. | PoLAR-VBLL combines orthogonalized low-rank adapters with variational Bayesian inference on the last layer to achieve scalable, well-calibrated uncertainty quantification for fine-tuned LLMs while maintaining high accuracy. | probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.) | https://openreview.net/pdf?id=1VCu7aFQzk | 2025-09-19T02:37:02 | 4 | [
{
"id": "eguO51Q4I2",
"forum": "1VCu7aFQzk",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13666/Reviewer_XbV5",
"reviewer_name": "Reviewer_XbV5",
"rating": 2,
"confidence": 5,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This ... |
qUwOlwao20 | https://openreview.net/forum?id=qUwOlwao20 | TGT: Text-Grounded Trajectories for Locally Controlled Video Generation | 5 | 3.75 | [
4,
4,
6,
6
] | [
4,
3,
4,
4
] | 4 | [
"Text-to-Video Generation; Motion Control"
] | Text-to-video generation has advanced rapidly in visual fidelity, whereas standard methods still have limited ability to control the subject composition of generated scenes. Prior work shows that adding localized text control signals, such as bounding boxes or segmentation masks, can help. However, these methods struggle in complex scenarios and degrade in multi-object settings, offering limited precision and lacking a clear correspondence between individual trajectories and visual entities as the number of controllable objects increases. We introduce Text-Grounded Trajectories (TGT), a framework that conditions video generation on trajectories paired with localized text descriptions. We propose $\textit{Location-Aware Cross-Attention}$ (LACA) to integrate these signals and adopt a dual-CFG scheme to separately modulate local and global text guidance. In addition, we develop a data processing pipeline that produces trajectories with localized descriptions of tracked entities, and we annotate two million high quality video clips to train TGT. Together, these components enable TGT to use point trajectories as intuitive motion handles, pairing each trajectory with text to control both appearance and motion. Extensive experiments show that TGT achieves higher visual quality, more accurate text alignment, and improved motion controllability compared with prior approaches. Website: https://textgroundedtraj.github.io. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=qUwOlwao20 | 2025-09-12T10:16:33 | 4 | [
{
"id": "FuqGmN2Hvh",
"forum": "qUwOlwao20",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4232/Reviewer_TNvT",
"reviewer_name": "Reviewer_TNvT",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This p... | |
7AlPbFkcs3 | https://openreview.net/forum?id=7AlPbFkcs3 | Semantic Voting: A Self-Evaluation-Free Approach for Efficient LLM Self-Improvement on Unverifiable Open-ended Tasks | 4.5 | 4 | [
6,
2,
6,
4
] | [
4,
4,
4,
4
] | 4 | [
"LLM",
"unsupervised learning",
"self-improvement"
] | The rising cost of acquiring supervised data has driven significant interest in self-improvement for large language models (LLMs). Straightforward unsupervised signals like majority voting have proven effective in generating pseudo-labels for verifiable tasks, while their applicability to unverifiable tasks (e.g., translation) is limited by the open-ended character of responses. As a result, self-evaluation mechanisms (e.g., self-judging and entropy minimization) are predominantly used to derive pseudo-labels. However, self-evaluation relying on LLMs typically incurs high computational overhead and introduces overconfidence issues due to intrinsic biases. To address these challenges, we propose a novel self-evaluation-free approach for unverifiable tasks, designed for lightweight yet effective self-improvement. Inspired by majority voting commonly employed in verifiable tasks, we propose semantic voting as a novel mechanism that relaxes the principle of hard matching (i.e., exact matching) toward soft matching (i.e., semantic similarity). Soft matching is achieved by leveraging a lightweight sentence embedding model to quantify semantic similarity, thereby mitigating excessive computational burden and intrinsic bias-associated limitations of self-evaluation. Comprehensive experiments demonstrate that our method achieves substantial gains in computational efficiency and overall better performance than self-evaluation methods across diverse model architectures and tasks. | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=7AlPbFkcs3 | 2025-09-15T15:19:23 | 4 | [
{
"id": "aprR1WxfnR",
"forum": "7AlPbFkcs3",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5575/Reviewer_nFpD",
"reviewer_name": "Reviewer_nFpD",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The pa... | |
4LiX5ddGcU | https://openreview.net/forum?id=4LiX5ddGcU | Unified Vision–Language Modeling via Concept Space Alignment | 5.5 | 3.25 | [
4,
6,
6,
6
] | [
4,
4,
2,
3
] | 4 | [
"multimodal embedding space",
"multilingual embedding space"
] | We introduce vSONAR, a vision–language embedding space extended from the text-only embedding space SONAR, which supports 200 text languages and 37 speech languages.
To construct vSONAR, we propose a post-hoc alignment pipeline that maps the representations of an existing vision encoder into the SONAR space.
We thoroughly evaluate vSONAR and show that its embeddings achieve competitive performance on text-to-video retrieval.
Equipped with the SONAR text decoder, vSONAR further surpasses state-of-the-art vision–language models on video captioning tasks, including DREAM-1K (BLEU 24.3 vs. 19.6) and VATEX (BLEU 45.0 vs. 41.5).
Leveraging vSONAR, we first demonstrate that the Large Concept Model (LCM) operating in SONAR and trained with English text only, can perform both single- and multi-visual concept understanding in a zero-shot manner.
Finally, we introduce vLCM, which extends the LCM with vision–language instruction tuning. vLCM encodes vision and language inputs into a unified sequence of latent embeddings via vSONAR and SONAR, and it is trained with the same latent diffusion objective for next-embedding prediction as in LCM's text-only pre-training.
Experiments on a large-scale multilingual and multimodal instruction-tuning data mixture highlight the potential of vLCM: vLCM matches state-of-the-art vision-language models on tasks covering image/video captioning and question answering, while significantly outperforming them across 61 rich- to low-resource languages out of all 62 tested languages. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=4LiX5ddGcU | 2025-09-19T00:21:54 | 4 | [
{
"id": "cqGhp46bQk",
"forum": "4LiX5ddGcU",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12990/Reviewer_wU7h",
"reviewer_name": "Reviewer_wU7h",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This ... | |
4kz4586euw | https://openreview.net/forum?id=4kz4586euw | Simulation-Free Structure Learning for Stochastic Dynamics | 4 | 3.5 | [
4,
2,
2,
8
] | [
3,
4,
4,
3
] | 4 | [
"Structure Learning",
"Trajectory Inference",
"Single-cell",
"Flow Matching",
"Schrödinger Bridge"
] | Modeling dynamical systems and unraveling their underlying causal relationships is central to many domains in the natural sciences. Various physical systems, such as those arising in cell biology, are inherently high-dimensional and stochastic in nature, and admit only partial, noisy state measurements. This poses a significant challenge for addressing the problems of modeling the underlying dynamics and inferring the network structure of these systems. Existing methods are typically tailored either for structure learning or modeling dynamics at the population level, but are limited in their ability to address both problems together. In this work, we address both problems simultaneously: we present StructureFlow, a novel and principled simulation-free approach for jointly learning the structure and stochastic population dynamics of physical systems. We showcase the utility of StructureFlow for the tasks of structure learning from interventions and dynamical (trajectory) inference of conditional population dynamics. We empirically evaluate our approach on high-dimensional synthetic systems, a set of biologically plausible simulated systems, and an experimental single-cell dataset. We show that StructureFlow can learn the structure of underlying systems while simultaneously modeling their conditional population dynamics --- a key step toward the mechanistic understanding of systems behavior. | We introduce a principled approach for jointly recovering the underlying network structure and dynamic response of a physical system using flow- and score-matching. | applications to physical sciences (physics, chemistry, biology, etc.) | https://openreview.net/pdf?id=4kz4586euw | 2025-09-19T05:09:06 | 4 | [
{
"id": "wcXI5TLFe9",
"forum": "4kz4586euw",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14176/Reviewer_HiCq",
"reviewer_name": "Reviewer_HiCq",
"rating": 4,
"confidence": 3,
"soundness": 4,
"contribution": 3,
"presentation": 3,
"summary": "This ... |
JkKkquv5lw | https://openreview.net/forum?id=JkKkquv5lw | A Study on PAVE Specification for Learnware | 6 | 1.666667 | [
4,
6,
8
] | [
1,
2,
2
] | 3 | [
"Learnware",
"Model Specification",
"Parameter Vector",
"Learnware Identification",
"Model Capability"
The *Learnware* paradigm aims to help users solve machine learning tasks by leveraging existing well-trained models rather than starting from scratch. A learnware comprises a submitted model paired with a *specification* sketching its capabilities. For an open platform with continuously uploaded models, these specifications are essential to enabling users to identify helpful models, eliminating the requirement for prohibitively costly per-model evaluations. In previous research, specifications based on privacy-preserving reduced sets succeed in enabling learnware identification through distribution matching, but suffer from high sample complexity for learnwares from high-dimensional, unstructured data like images or text. In this paper, we formalize **Pa**rameter **Ve**ctor (PAVE) specification for learnware identification, which utilizes the changes in pre-trained model parameters to inherently encode the model capability and task requirements, offering an effective solution for these learnwares. Theoretically, from the neural tangent kernel perspective, we establish a tight connection between PAVE and prior specifications, providing a theoretical explanation for their shared underlying principles. We further approximate the parameter vector in a low-rank space and analyze the approximation error bound, greatly reducing the computational and storage overhead. Extensive empirical studies demonstrate that PAVE specification excels at identifying CV and NLP learnwares for reuse on given user tasks, and succeeds in identifying helpful learnwares from an open learnware repository with corrupted model quality for the first time. Reusing identified learnwares to solve user tasks can even outperform user-fine-tuned pre-trained models in data-limited scenarios.
| We formalize the Parameter Vector (PAVE) specification, which encodes model capabilities for efficient learnware identification, eliminating costly per-model evaluations and outperforming fine-tuned pre-trained models in limited-data scenarios. | other topics in machine learning (i.e., none of the above) | https://openreview.net/pdf?id=JkKkquv5lw | 2025-09-18T20:28:29 | 3 | [
{
"id": "9oer5wRYaF",
"forum": "JkKkquv5lw",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11415/Reviewer_KAPE",
"reviewer_name": "Reviewer_KAPE",
"rating": 4,
"confidence": 1,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The p... |
qEzgXEBLIH | https://openreview.net/forum?id=qEzgXEBLIH | Let Physics Guide Your Protein Flows: Topology-aware Unfolding and Generation | 2 | 4.5 | [
2,
2,
2,
2
] | [
5,
4,
5,
4
] | 4 | [
"Protein Structure Generative Models",
"Structure prediction",
"Physics-informed generative model",
"Flow Matching"
] | Protein structure prediction and folding are fundamental to understanding biology, with recent deep learning advances reshaping the field. Diffusion-based generative models have revolutionized protein design, enabling the creation of novel proteins. However, these methods often neglect the intrinsic physical realism of proteins, driven by noising dynamics that lack grounding in physical principles. To address this, we first introduce a physically motivated non-linear noising process, grounded in classical physics, that unfolds proteins into secondary structures (e.g., $\alpha$-helices, linear $\beta$-sheets) while preserving structural integrity—maintaining bonds and preventing collisions. We then integrate this process with the flow-matching paradigm on $\mathrm{SE(3)}$ to model the invariant distribution of protein backbones with high fidelity, incorporating sequence information to enable sequence-conditioned folding and expand the generative capabilities of our model. Experimental results demonstrate state-of-the-art performance in unconditional protein generation, producing more designable and novel protein structures while accurately folding monomer sequences into precise protein conformations. | We propose a novel physics-informed generative model for protein backbone structure generation using flow matching | applications to physical sciences (physics, chemistry, biology, etc.) | https://openreview.net/pdf?id=qEzgXEBLIH | 2025-09-07T04:17:17 | 4 | [
{
"id": "z5YBR0n0WR",
"forum": "qEzgXEBLIH",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2670/Reviewer_8kS8",
"reviewer_name": "Reviewer_8kS8",
"rating": 2,
"confidence": 5,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "PhysFl... |
rkfbUc3kO4 | https://openreview.net/forum?id=rkfbUc3kO4 | TraceFlow: Dynamic 3D Reconstruction of Specular Scenes Driven by Ray Tracing | 5 | 4.25 | [
4,
2,
6,
8
] | [
4,
5,
4,
4
] | 4 | [
"4D Reconstruction; Ray Tracing; Specular"
] | We present TraceFlow, a novel framework for high-fidelity rendering of dynamic specular scenes by addressing two key challenges: precise reflection direction estimation and physically accurate reflection modeling. To achieve this, we propose a Residual Material-Augmented 2D Gaussian Splatting representation that models dynamic geometry and material properties, allowing accurate reflection ray computation. Furthermore, we introduce a Dynamic Environment Gaussian representation and a hybrid rendering pipeline that decomposes rendering into diffuse and specular components, enabling physically grounded specular synthesis via rasterization and ray tracing. Finally, we devise a coarse-to-fine training strategy to improve optimization stability and promote physically meaningful decomposition. Extensive experiments on dynamic scene benchmarks demonstrate that TraceFlow outperforms prior methods both quantitatively and qualitatively, producing sharper and more realistic specular reflections in complex dynamic environments. | We present TraceFlow, a novel framework for high-fidelity rendering of dynamic specular scenes. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=rkfbUc3kO4 | 2025-09-19T23:40:32 | 4 | [
{
"id": "Mq2RsJrdlI",
"forum": "rkfbUc3kO4",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19465/Reviewer_dcZt",
"reviewer_name": "Reviewer_dcZt",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The p... |
yHxSKM9kdr | https://openreview.net/forum?id=yHxSKM9kdr | IceCache: Memory-Efficient KV-cache Management for Long-Sequence LLMs | 4.666667 | 3.666667 | [
4,
4,
6
] | [
3,
4,
4
] | 3 | [
"LLM Inference; KV-cache Optimization; Sparse Attention"
] | Key-Value (KV) cache plays a pivotal role in accelerating inference in large language models (LLMs) by storing intermediate attention outputs, thereby avoiding redundant computation during auto-regressive generation. However, the cache's memory footprint scales linearly with sequence length, often resulting in memory bottlenecks on constrained hardware. While prior work has explored offloading KV-cache to the CPU and maintaining a reduced subset on the GPU, these approaches frequently suffer from imprecise token prioritization and degraded performance in long-generation tasks such as multi-turn dialogues and chain-of-thought reasoning. In this paper, we propose a novel KV-cache management strategy called IceCache, that integrates semantic token clustering with PagedAttention, a memory-efficient paging mechanism. By clustering semantically related tokens and organizing them into a hierarchical, dynamically updateable structure, our method improves cache hit rates and memory bandwidth utilization during CPU-GPU transfers. Experimental results show that IceCache achieves over 99\% accuracy with a 256-token budget and still maintains 97\% accuracy with only a 64-token budget, compared to the full KV-cache model. It outperforms existing baselines even while using just 25\% of the KV-cache token budget, demonstrating its superior accuracy in long-sequence scenarios. | generative models | https://openreview.net/pdf?id=yHxSKM9kdr | 2025-09-02T00:46:37 | 3 | [
{
"id": "dfOJQiNstJ",
"forum": "yHxSKM9kdr",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission541/Reviewer_SWPZ",
"reviewer_name": "Reviewer_SWPZ",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This pa... | |
8tDIzHFOx6 | https://openreview.net/forum?id=8tDIzHFOx6 | SPR$^2$Q: Static Priority-based Rectifier Routing Quantization for Image Super-Resolution | 5 | 4.5 | [
4,
6,
6,
4
] | [
4,
4,
5,
5
] | 4 | [
"Image Super-Resolution",
"model quantization",
"adapter routing"
] | Low-bit quantization has achieved significant progress in image super-resolution. However, existing quantization methods show evident limitations in handling the heterogeneity of different components. Particularly under extreme low-bit compression, the issue of information loss becomes especially pronounced. In this work, we present a novel low-bit post-training quantization method, namely static priority-based rectifier routing quantization (SPR$^2$Q). The starting point of this work is to inject rich and comprehensive compensation information into the model before quantization, thereby enhancing the model's inference performance after quantization. First, we construct a low-rank rectifier group and embed it into the model's fine-tuning process. By integrating weight increments learned from each rectifier, the model enhances the backbone network while minimizing information loss during the lightweighting process. Furthermore, we introduce a static rectifier priority routing mechanism that evaluates the offline capability of each rectifier and generates a fixed routing table. During quantization, it updates weights based on each rectifier's priority, enhancing the model's capacity and representational power without introducing additional overhead during inference. Extensive experiments demonstrate that the proposed SPR$^2$Q significantly outperforms the state of the art on five benchmark datasets, achieving PSNR improvements of 0.55 and 1.31 dB on the Set5($\times 2$) dataset under 4-bit and 2-bit settings, respectively. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=8tDIzHFOx6 | 2025-09-19T12:12:15 | 4 | [
{
"id": "1RmJDO6ly6",
"forum": "8tDIzHFOx6",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15779/Reviewer_jipM",
"reviewer_name": "Reviewer_jipM",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "his p... | |
hGoDq7MIK5 | https://openreview.net/forum?id=hGoDq7MIK5 | On the Effect of Positional Encoding for In-context Learning in Transformers | 4.4 | 3.4 | [
4,
6,
2,
4,
6
] | [
3,
4,
5,
3,
2
] | 5 | [
"Transformer Theory",
"In-context Learning",
"Positional Encoding"
] | Transformer models have demonstrated a remarkable ability to perform a wide range of tasks through in-context learning (ICL), where the model infers patterns from a small number of example prompts provided during inference. However, empirical studies have shown that the effectiveness of ICL can be significantly influenced by the order in which these prompts are presented. Despite its significance, this phenomenon has been largely unexplored from a theoretical perspective. In this paper, we theoretically investigate how positional encoding (PE) affects the ICL capabilities of Transformer models, particularly in tasks where prompt order plays a crucial role. We examine two distinct cases: linear regression, which represents an order-equivariant task, and dynamic systems, a classic time-series task that is inherently sensitive to the order of input prompts. Theoretically, we evaluated the change in the model output when positional encoding (PE) is incorporated and the prompt order is altered. We proved that the magnitude of this change follows a convergence rate of $\mathcal{O}(k/N)$, where $k$ is the degree of permutation to the original prompt and $N$ is the number of in-context examples. Furthermore, for dynamical systems, we demonstrated that PE enables the Transformer to perform approximate gradient descent (GD) on permuted prompts, thereby ensuring robustness to changes in prompt order. These theoretical findings are experimentally validated. | interpretability and explainable AI | https://openreview.net/pdf?id=hGoDq7MIK5 | 2025-09-19T23:32:30 | 5 | [
{
"id": "Su8le5juia",
"forum": "hGoDq7MIK5",
"review_number": 5,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19405/Reviewer_DDwu",
"reviewer_name": "Reviewer_DDwu",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This ... | |
dHz2LBCyTh | https://openreview.net/forum?id=dHz2LBCyTh | Batch and Sequential Unlearning for Neural Networks | 6 | 3.5 | [
6,
8,
4,
6
] | [
3,
4,
4,
3
] | 4 | [
"machine unlearning",
"second-order unlearning"
] | With the increasing deployment of machine learning models trained on personal data, machine unlearning has become crucial for data owners to exercise their "right to be forgotten" and protect their privacy. While model owners can retrain the models without the erased data to achieve this goal, this process is often prohibitively expensive. Previous works have shown that Newton's method can be applied to linear models to unlearn multiple data points in batch (batch unlearning) with minimal iterations. However, adapting this method to non-linear models, such as neural networks, poses significant challenges due to the presence of degenerate Hessians. This problem becomes more pronounced when unlearning is performed sequentially (sequential unlearning). Existing techniques that tried to tackle this degeneracy often 1) incur unlearning updates with excessively large norm that yield unsatisfactory unlearning performance and 2) may require manual tuning of regularization hyperparameters. In this work, we propose new unlearning algorithms that leverage cubic regularization for Newton's method to address both challenges. We discuss the theoretical benefits of our method and empirically show that our algorithms can efficiently achieve competitive performance in both batch and sequential unlearning on real-world datasets. | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=dHz2LBCyTh | 2025-09-19T13:35:38 | 4 | [
{
"id": "1cBLUxi6YP",
"forum": "dHz2LBCyTh",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16110/Reviewer_mMKS",
"reviewer_name": "Reviewer_mMKS",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This ... | |
k7ifkwmsXn | https://openreview.net/forum?id=k7ifkwmsXn | SonicMaster: Towards Controllable All-in-One Music Restoration and Mastering | 5 | 4.5 | [
4,
8,
6,
2
] | [
5,
4,
4,
5
] | 4 | [
"Music Restoration",
"Music Mastering",
"Music Generation",
"Audio Generation"
] | Music recordings often suffer from audio quality issues such as excessive reverberation, distortion, clipping, tonal imbalances, and a narrowed stereo image, especially when created in non-professional settings without specialized equipment or expertise. These problems are typically corrected using separate specialized tools and manual adjustments. In this paper, we introduce SonicMaster, the first unified generative model for music restoration and mastering that addresses a broad spectrum of audio artifacts with text-based control. SonicMaster is conditioned on natural language instructions to apply targeted enhancements, or can operate in an automatic mode for general restoration. To train this model, we construct the SonicMaster dataset, a large dataset of paired degraded and high-quality tracks, by simulating common degradation types with nineteen degradation functions belonging to five enhancement groups: equalization, dynamics, reverb, amplitude, and stereo. Our approach leverages a flow-matching generative training paradigm to learn an audio transformation that maps degraded inputs to their cleaned, mastered versions guided by text prompts. Objective audio quality metrics demonstrate that SonicMaster significantly improves sound quality across all artifact categories. Furthermore, subjective listening tests confirm that listeners prefer SonicMaster’s enhanced outputs over other baselines. Demo samples are available through https://msonic793.github.io/SonicMaster/ | We present SonicMaster, an all-in-one music restoration and mastering model controllable by text prompts, which is a first of its kind and defines a new task in the field. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=k7ifkwmsXn | 2025-09-18T23:39:50 | 4 | [
{
"id": "3ZYuoYPTEr",
"forum": "k7ifkwmsXn",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12745/Reviewer_BVLP",
"reviewer_name": "Reviewer_BVLP",
"rating": 4,
"confidence": 5,
"soundness": 2,
"contribution": 2,
"presentation": 4,
"summary": "The p... |
8HvWBamUkS | https://openreview.net/forum?id=8HvWBamUkS | Adaptive Curriculum Learning for RLHF with Influence-Based Cluster Bandits | 5 | 3.75 | [
4,
6,
4,
6
] | [
3,
4,
4,
4
] | 4 | [
"RLHF",
"Curriculum Learning",
"GRPO"
] | Reinforcement learning (RL) plays a central role in post-training large language models (LLMs). Yet, existing RLHF pipelines typically rely on fixed or uniform sampling strategies, which fail to adapt to the model’s evolving learning state. This mismatch leads to wasted computation on less informative samples while neglecting instances with higher training impact, ultimately limiting efficiency, generalization, and performance gains.
We introduce an adaptive curriculum learning framework that integrates influence-based clustering with a multi-armed bandit (MAB) scheduler. Training data are partitioned into clusters defined by semantic and difficulty-related features, each treated as an arm in the MAB formulation. A Cluster Score (CS), updated via sliding-window influence functions, quantifies the dynamic importance of each cluster as the model evolves. This adaptive scoring drives the scheduler to balance exploitation of high-impact clusters with exploration of underrepresented regions, ensuring efficient learning while maintaining diversity. Unlike prior approaches that overfit to narrow high-reward subsets, our cluster-level sampling prevents redundancy and broadens representational coverage.
Experiments with Group Relative Policy Optimization across mathematical reasoning benchmarks show that our method consistently accelerates convergence and improves generalization. These results highlight the value of distribution-level adaptive curricula in advancing RLHF for LLM training. | reinforcement learning | https://openreview.net/pdf?id=8HvWBamUkS | 2025-09-18T21:56:25 | 4 | [
{
"id": "st94h7Dsx7",
"forum": "8HvWBamUkS",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11782/Reviewer_2YAE",
"reviewer_name": "Reviewer_2YAE",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The p... | |
hAHbo4EGQY | https://openreview.net/forum?id=hAHbo4EGQY | From Masks to Worlds: A Hitchhiker’s Guide to World Models | 3.5 | 3.5 | [
6,
0,
4,
4
] | [
3,
3,
4,
4
] | 4 | [
"World Models",
"Position Paper"
] | This is not a typical survey of world models; it is a guide for those who want to build worlds. We do not aim to catalog every paper that has ever mentioned a "world model". Instead, we follow one clear road: from early masked models that unified representation learning across modalities, to unified architectures that share a single paradigm, then to interactive generative models that close the action-perception loop, and finally to memory-augmented systems that sustain consistent worlds over time. We bypass noisy branches to focus on the core: the generative heart, the interactive loop, and the memory system. We show that this is the most promising path towards world models. | other topics in machine learning (i.e., none of the above) | https://openreview.net/pdf?id=hAHbo4EGQY | 2025-09-08T14:55:55 | 5 | [
{
"id": "mGcAz8i7WC",
"forum": "hAHbo4EGQY",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2999/Reviewer_TWe8",
"reviewer_name": "Reviewer_TWe8",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This i... | |
mYzlRNMAxS | https://openreview.net/forum?id=mYzlRNMAxS | Why Attention Fails: The Degeneration of Transformers into MLPs in Time Series Forecasting | 4.5 | 3.75 | [
6,
6,
4,
2
] | [
4,
4,
3,
4
] | 4 | [
"Deep Learning",
"Time Series",
"Transformer",
"Degeneration"
] | Transformer-based architectures have achieved high performance in natural language processing and computer vision, yet many studies have shown that they do not demonstrate a clear advantage in time series forecasting and even underperform simple linear baselines in some cases. However, most of these studies have not thoroughly explored the reasons behind the failure of transformers. To better understand time-series transformers (TST), we designed a series of experiments, progressively modifying transformers into MLPs to investigate the impact of the attention mechanism. Surprisingly, transformer blocks often degenerate into simple MLPs in existing time-series transformers. We designed an interpretable dataset to investigate the reasons behind the failure of the attention mechanism and revealed that the attention mechanism does not work in the expected way. We theoretically analyzed the reasons behind this phenomenon, demonstrating that current embedding methods fail to allow transformers to function in a well-structured latent space, and further analyzed the deeper underlying causes of the failure of embedding. | learning on time series and dynamical systems | https://openreview.net/pdf?id=mYzlRNMAxS | 2025-09-20T09:21:25 | 4 | [
{
"id": "GxeG5thR0A",
"forum": "mYzlRNMAxS",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission22455/Reviewer_urWk",
"reviewer_name": "Reviewer_urWk",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The p... | |
xn9fS09Yir | https://openreview.net/forum?id=xn9fS09Yir | Can Large Language Models Truly Stay Helpful Harmless and Honest? | 4.5 | 2.75 | [
4,
2,
6,
6
] | [
2,
4,
2,
3
] | 4 | [
"LLM",
"Alignment",
"NLP"
] | Alignment of Large Language Models (LLMs) along multiple objectives—helpfulness, harmlessness, and honesty (HHH)—is critical for safe and reliable deployment. Prior work has used steering vectors—small control signals injected into hidden states—to guide LLM outputs, typically via one-to-one (1-to-1) Transformer decoders. In this setting, optimizing a single alignment objective can inadvertently overwrite representations learned for other objectives, leading to catastrophic forgetting. More recent approaches extend steering vectors via one-to-many (1-to-N) Transformer decoders. While this alleviates catastrophic forgetting, naïve multi-branch designs optimize each objective independently, which can cause inference fragmentation—outputs across HHH objectives may become inconsistent. We propose Adaptive Multi-Branch Steering (AMBS), a two-stage 1-to-N framework for unified and efficient multi-objective alignment. In Stage I, post-attention hidden states of the Transformer layer are computed once to form a shared representation. In Stage II, this representation is cloned into parallel branches and steered via a policy–reference mechanism, enabling objective-specific control while maintaining cross-objective consistency. Empirical evaluations on Alpaca, BeaverTails, and TruthfulQA show that AMBS consistently improves HHH alignment across multiple 7B LLM backbones. For example, on DeepSeek-7B, AMBS improves average alignment scores by +32.4% and reduces unsafe outputs by 11.0% compared to a naïve 1-to-N baseline, while remaining competitive with state-of-the-art methods. | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=xn9fS09Yir | 2025-09-20T14:48:22 | 4 | [
{
"id": "tnxKOi1HIA",
"forum": "xn9fS09Yir",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission23963/Reviewer_Tgs7",
"reviewer_name": "Reviewer_Tgs7",
"rating": 4,
"confidence": 2,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "The p... | |
6zNODYRJvI | https://openreview.net/forum?id=6zNODYRJvI | Reshape-then-Factorize: Communication-Efficient FL via Model-Agnostic Projection Optimization | 4 | 4 | [
4,
2,
4,
6
] | [
4,
5,
3,
4
] | 4 | [
"Federated Learning",
"Low-Rank Adaptation",
"Communication Efficiency",
"Subspace Optimization"
] | Federated learning (FL) enables collaborative model training across distributed clients without sharing sensitive data. However, communication overhead remains a significant bottleneck, particularly for large-scale models. Low-rank decomposition techniques address this by approximating each layer’s weights or gradients with a product of low-rank matrices, thereby reducing the communication cost in FL. While effective, these methods are constrained by the layer's architecture and shapes, limiting their flexibility and performance.
We propose *Model-Agnostic Projection Optimization* (MAPO), a novel method that reshapes and factorizes the full model gradient into a *fixed reconstruction matrix* and a *trainable projection vector*, avoiding layer-wise decomposition and architecture constraints. MAPO directly optimizes the projection in a randomly sampled subspace, with all clients generating the reconstruction matrix via a shared random seed, incurring no additional communication overhead for synchronization.
By decoupling the gradient from architectural constraints through reshaping and enabling communication-free exploration of dynamic subspaces via seed sharing, MAPO provides a more flexible and efficient low-rank representation.
Empirical results demonstrate the effectiveness of MAPO in various FL settings. | optimization | https://openreview.net/pdf?id=6zNODYRJvI | 2025-09-19T04:10:01 | 4 | [
{
"id": "1ISa3vX9rA",
"forum": "6zNODYRJvI",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13968/Reviewer_zkJ3",
"reviewer_name": "Reviewer_zkJ3",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The p... | |
GfPwZwZ9xZ | https://openreview.net/forum?id=GfPwZwZ9xZ | VLASim: World Modelling via VLM-Directed Abstraction and Simulation from a Single Image | 4.5 | 3.5 | [
4,
6,
6,
2
] | [
4,
3,
3,
4
] | 4 | [
"world models",
"video models",
"physical simulation",
"code generation"
] | Generative video models, a leading approach to world modeling, face fundamental limitations. They often violate physical and logical rules, lack interactivity, and operate as opaque black boxes ill-suited for building structured, queryable worlds. To overcome these challenges, we propose a new paradigm focused on distilling a single image into a tractable, abstract representation optimized for simulation. We introduce VLASim, a framework where a Vision-Language Model (VLM) acts as an intelligent agent to orchestrate this process. The VLM autonomously constructs a grounded (2D or 3D) scene representation by selecting from a suite of vision tools, and co-dependently chooses a compatible physics simulator (e.g., rigid body, fluid) to act upon it. Furthermore, VLASim can infer latent dynamics from the static scene to predict plausible future states. Our experiments show that this combination of intelligent abstraction and adaptive simulation results in a versatile world model capable of producing higher-quality simulations across a wider range of dynamic scenarios than prior approaches. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=GfPwZwZ9xZ | 2025-09-18T23:57:37 | 4 | [
{
"id": "dVYk98ztUR",
"forum": "GfPwZwZ9xZ",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12870/Reviewer_nHoT",
"reviewer_name": "Reviewer_nHoT",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The p... | |
5EKDKjNP6P | https://openreview.net/forum?id=5EKDKjNP6P | Beyond Taylor Expansion: Intermediate Activation Perspectives in Structured Pruning | 4 | 4 | [
4,
4,
4
] | [
4,
4,
4
] | 3 | [
"Large Language Model Pruning",
"Large Language Model Compression"
] | Extensive prior work on importance-based pruning relies on first- or second-order Taylor expansions of the loss to score parameters by the estimated loss increase upon removal. However, in large language models with massive parameters and multi-layered nonlinear mappings, such approximations inevitably lead to errors. When applied to structured pruning, Taylor-based criteria are typically extended from individual weights to entire neurons or channels by aggregating their sensitivities. While this enables parameter reduction at the structural level, Taylor expansion is constrained to low-order approximations, owing to the computational intractability of higher-order terms in large-scale models, which results in inaccurate estimates of loss change. Moreover, it neglects the hierarchical dependencies of deep models, failing to account for how parameters influence subsequent layers through forward propagation. In particular, the intermediate activations within the feed-forward network (FFN) layer provide a direct characterization of how the pre-activation projections transmit information forward, thereby offering a more faithful account of their influence on the model’s representations. Therefore, we propose $\textbf{ActTaylor}$, an intermediate $\textbf{act}$ivation enhanced $\textbf{Taylor}$ criterion for structured pruning, which integrates loss sensitivity with the hierarchical influence of parameters captured through intermediate activations. ActTaylor scores each hidden unit in the FFN by modulating its Taylor-based sensitivity with the activation statistics for one-shot pruning without any retraining. At pruning ratios of 20\% and 30\%, our method consistently outperforms state-of-the-art structured pruning baselines across seven commonsense benchmarks and one multi-task knowledge benchmark, improving the average accuracy on LLaMA-2 7B by $7.8\%$ and $12.9\%$, and on LLaMA-2 13B by $12.5\%$ and $14.0\%$, respectively.
| foundation or frontier models, including LLMs | https://openreview.net/pdf?id=5EKDKjNP6P | 2025-09-18T15:17:05 | 3 | [
{
"id": "LzWSBTZPtp",
"forum": "5EKDKjNP6P",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10696/Reviewer_QEZ6",
"reviewer_name": "Reviewer_QEZ6",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The p... | |
OUQ8kLRK3m | https://openreview.net/forum?id=OUQ8kLRK3m | Truly Assessing Fluid Intelligence of Large Language Models through Dynamic Reasoning Evaluation | 4 | 3.75 | [
6,
2,
2,
6
] | [
3,
5,
4,
3
] | 4 | [
"Dynamic Reasoning Evaluation",
"Fluid Intelligence",
"Cognition-Inspired Level",
"Various Complexity"
] | Recent advances in large language models (LLMs) have demonstrated impressive reasoning capacities that mirror human-like thinking. However, whether LLMs possess genuine fluid intelligence (i.e., the ability to reason abstractly and generalize rules in novel situations) remains an open question. Existing reasoning benchmarks either focus on domain-specific knowledge (crystallized intelligence) or lack interpretability. To address these limitations, we propose DRE-Bench, a dynamic reasoning evaluation benchmark grounded in a hierarchical cognitive framework. DRE-Bench consists of 36 abstract reasoning tasks organized across four cognitive levels, with each task featuring multiple dynamic variants that test the same underlying latent rule. This design enables fine-grained, interpretable, and reliable assessments of fluid intelligence. We evaluate a range of state-of-the-art LLMs, including both general LLMs (GPT-4o, Claude 3.7) and reasoning LLMs (o1, DeepSeek-R1, QwQ, Skywork-OR1). Experimental results reveal that although most LLMs achieve competent and robust performance in low-level cognition, they struggle with high-level cognition and exhibit limited generalization as task complexity grows. Our findings highlight the gap between current LLMs and true human-like fluid intelligence and offer a new path for systematically tracking reasoning progress in LLMs. | datasets and benchmarks | https://openreview.net/pdf?id=OUQ8kLRK3m | 2025-09-15T15:35:09 | 4 | [
{
"id": "whNtuT6G1Z",
"forum": "OUQ8kLRK3m",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5591/Reviewer_cJQv",
"reviewer_name": "Reviewer_cJQv",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This w... | |
SX9A72RPU3 | https://openreview.net/forum?id=SX9A72RPU3 | Structured covariance estimation via tensor-train decomposition | 4.5 | 3.25 | [
4,
4,
8,
2
] | [
4,
3,
3,
3
] | 4 | [
"covariance estimation",
"tensor train",
"dimension-free bounds",
"concentration",
"Kronecker product",
"CANDECOMP/PARAFAC"
] | We consider a problem of covariance estimation from a sample of i.i.d. high-dimensional random vectors. To avoid the curse of dimensionality, we impose an additional assumption on the structure of the covariance matrix $\Sigma$. To be more precise, we study the case when $\Sigma$ can be approximated by a sum of double Kronecker products of smaller matrices in a tensor train (TT) format. Our setup naturally extends widely known Kronecker sum and CANDECOMP/PARAFAC models but admits richer interaction across modes. We suggest an iterative polynomial time algorithm based on TT-SVD and higher-order orthogonal iteration (HOOI) adapted to Tucker‑2 hybrid structure. We derive non-asymptotic dimension-free bounds on the accuracy of covariance estimation taking into account hidden Kronecker product and tensor train structures. The efficiency of our approach is illustrated with numerical experiments. | Structured covariance estimation with dimension-free concentration bounds | learning theory | https://openreview.net/pdf?id=SX9A72RPU3 | 2025-09-19T04:18:07 | 4 | [
{
"id": "y5Bgnym4Uw",
"forum": "SX9A72RPU3",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14001/Reviewer_43Kt",
"reviewer_name": "Reviewer_43Kt",
"rating": 4,
"confidence": 4,
"soundness": 4,
"contribution": 2,
"presentation": 3,
"summary": "The p... |
IAFwK6NyrP | https://openreview.net/forum?id=IAFwK6NyrP | The Counting Power of Transformers | 6.4 | 3.8 | [
8,
4,
8,
4,
8
] | [
4,
4,
4,
3,
4
] | 5 | [
"FLaNN",
"expressiveness",
"attention",
"formal languages"
] | Counting properties (e.g. determining whether certain tokens occur more than other tokens in a given input text) have played a significant role in the study of the expressiveness of transformers. In this paper, we provide a formal framework for investigating the counting power of transformers. We argue that all existing results demonstrate transformers' expressivity only for (semi-)linear counting properties, i.e., those expressible as a boolean combination of linear inequalities. Our main result is that transformers can express counting properties that are highly nonlinear. More precisely, we prove that transformers can capture all semialgebraic counting properties, i.e., those expressible as a boolean combination of arbitrary multivariate polynomials (of any degree). Among others, these generalize the counting properties that can be captured by support vector machines via the polynomial kernel in the vector space model. To complement this result, we exhibit a natural subclass of (softmax) transformers that completely characterizes semialgebraic counting properties. Through connections with Hilbert's tenth problem, this expressivity of transformers also yields a new undecidability result for analyzing an extremely simple transformer model --- surprisingly with neither positional encodings (i.e. NoPE-transformers) nor masking. We also experimentally validate the trainability of such counting properties. | Transformers can express highly nonlinear counting properties | other topics in machine learning (i.e., none of the above) | https://openreview.net/pdf?id=IAFwK6NyrP | 2025-09-20T02:25:03 | 5 | [
{
"id": "Yf1xZImhqm",
"forum": "IAFwK6NyrP",
"review_number": 5,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20452/Reviewer_sqAV",
"reviewer_name": "Reviewer_sqAV",
"rating": 8,
"confidence": 4,
"soundness": 3,
"contribution": 4,
"presentation": 4,
"summary": "This ... |
GLELajHnCo | https://openreview.net/forum?id=GLELajHnCo | GAPrune: Gradient-Alignment Pruning for Domain-Aware Embeddings | 4 | 3.333333 | [
4,
2,
6
] | [
3,
4,
3
] | 3 | [
"Embedding Model; Domain Adaptation; Domain Pruning"
] | Domain-specific embedding models have shown promise for applications that require specialized semantic understanding, such as coding agents and financial retrieval systems, often achieving larger performance gains than general models. However, state-of-the-art embedding models are typically based on LLMs, which contain billions of parameters, making deployment challenging in resource-constrained environments. Model compression through pruning offers a promising solution, but existing pruning methods treat all parameters uniformly, failing to distinguish between general semantic representations and domain-specific patterns, leading to suboptimal pruning decisions. Thus, we propose GAPrune, a pruning framework that addresses this challenge by considering domain importance while preserving the general linguistic foundation. Our method uses Fisher Information to measure importance and general-domain gradient alignment to assess parameter behavior, then combines these signals using our Domain Alignment Importance (DAI) scoring. Lower DAI scores indicate that the parameter is either less important for the domain task or creates conflicts between domain and general objectives. Experiments on two domain benchmarks, FinMTEB and ChemTEB, show that GAPrune maintains performance within 2.5\% of dense models in one-shot pruning at 50\% sparsity, while outperforming all baselines. With retraining in 100 steps, GAPrune achieves +4.51\% improvement on FinMTEB and +1.73\% on ChemTEB, demonstrating that our pruning strategy not only preserves but enhances domain-specific capabilities. Our findings demonstrate that principled pruning strategies can achieve model compression and enhanced domain specialization, providing the research community with a new approach for development. | GAPrune prunes embedding models using domain-general gradient alignment, achieving 50% sparsity while enhancing domain performance.
| other topics in machine learning (i.e., none of the above) | https://openreview.net/pdf?id=GLELajHnCo | 2025-09-13T16:00:31 | 3 | [
{
"id": "LvM5UZeuQc",
"forum": "GLELajHnCo",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4705/Reviewer_nY2B",
"reviewer_name": "Reviewer_nY2B",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The au... |
BwjNHzwAOq | https://openreview.net/forum?id=BwjNHzwAOq | Introducing Multimodal Paradigm for Learning Sleep Staging PSG via General-purpose Model | 5.2 | 4 | [
6,
2,
4,
8,
6
] | [
4,
4,
4,
4,
4
] | 5 | [
"Physiological Signal Processing",
"Sleep Staging",
"Brain Computer Interfaces",
"Interpretable AI"
] | Sleep staging is essential for diagnosing sleep disorders and assessing neurological health. Existing automatic methods typically extract features from complex polysomnography (PSG) signals and train domain-specific models, which often lack intuitiveness and require large, specialized datasets. To overcome these limitations, we introduce a new paradigm for sleep staging that leverages large multimodal general-purpose models to emulate clinical diagnostic practices. Specifically, we convert raw one-dimensional PSG time-series into intuitive two-dimensional waveform images and then fine-tune a multimodal large model to learn from these representations. Experiments on three public datasets (ISRUC, MASS, SHHS) demonstrate that our approach enables general-purpose models, without prior exposure to sleep data, to acquire robust staging capabilities. Moreover, explanation analysis reveals that our model learned to mimic the visual diagnostic workflow of human experts for sleep staging from PSG images. The proposed method consistently outperforms state-of-the-art baselines in accuracy and robustness, highlighting its efficiency and practical value for medical applications. The code for the signal-to-image pipeline and the PSG image dataset will be released. | applications to neuroscience & cognitive science | https://openreview.net/pdf?id=BwjNHzwAOq | 2025-09-15T15:15:43 | 5 | [
{
"id": "DJp584LumL",
"forum": "BwjNHzwAOq",
"review_number": 5,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5571/Reviewer_xaDv",
"reviewer_name": "Reviewer_xaDv",
"rating": 6,
"confidence": 4,
"soundness": 4,
"contribution": 3,
"presentation": 3,
"summary": "This p... | |
R0JM3BWP7W | https://openreview.net/forum?id=R0JM3BWP7W | Tricks or Traps? A Deep Dive into RL for LLM Reasoning | 6 | 3.5 | [
8,
4,
6,
6
] | [
4,
4,
3,
3
] | 4 | [
"Large Language Models Reasoning; Reinforcement Learning; Reasoning"
] | Reinforcement learning (RL) for LLM reasoning has rapidly emerged as a prominent research area, marked by a significant surge in related studies on both algorithmic innovations and practical applications. Despite this progress, several critical challenges remain, including the absence of standardized guidelines for applying RL techniques and a fragmented understanding of their underlying mechanisms. In addition, inconsistent experimental settings, variations in training data, and differences in model initialization have led to conflicting conclusions, obscuring the key characteristics of these techniques and creating confusion among practitioners when selecting appropriate techniques. This paper systematically reviews widely adopted RL techniques through rigorous reproductions and isolated evaluations within a unified open-source framework. We analyze the internal mechanisms, applicable scenarios, and core principles of each technique through fine-grained experiments, including datasets of varying difficulty, model sizes, and architectures. Based on these insights, we present clear guidelines for selecting RL techniques tailored to specific setups and provide a reliable roadmap for practitioners navigating the RL for the LLM domain. Finally, we show that a minimalist combination of two techniques can unlock the learning capability of critic-free policies with a vanilla PPO loss. The results demonstrate that our simple combination consistently improves performance, surpassing strategies such as GRPO and DAPO. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=R0JM3BWP7W | 2025-09-19T16:57:07 | 4 | [
{
"id": "rUBWurzab4",
"forum": "R0JM3BWP7W",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17104/Reviewer_rKt2",
"reviewer_name": "Reviewer_rKt2",
"rating": 8,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This ... | |
KdYKSOY9MP | https://openreview.net/forum?id=KdYKSOY9MP | Cosmos-Eval: Towards Explainable Evaluation of Physics and Semantics in Text-to-Video Models | 4.5 | 3.25 | [
4,
4,
4,
6
] | [
4,
3,
3,
3
] | 4 | [
"Cosmos-Eval",
"Explainable Evaluation"
] | Recent text-to-video (T2V) models have achieved impressive visual fidelity, yet they remain prone to failures in two critical dimensions: adhering to prompt semantics and respecting physical commonsense. Existing benchmarks, including VideoPhy and VideoPhy-2, formalize these axes but provide only scalar scores, leaving model errors unexplained and hindering reliable evaluation. To address this, we present Cosmos-Eval, an explainable evaluation framework that jointly assesses semantic adherence and physical consistency. Cosmos-Eval produces fine-grained 5-point scores with natural-language rationales, leveraging the physically grounded ontology of Cosmos-Reason1 and an LLM-based rationale refinement pipeline. This enables precise identification of semantic mismatches and violations of physical laws, such as floating objects or momentum inconsistencies. Experiments on VideoPhy-2 show that Cosmos-Eval matches state-of-the-art auto-evaluators in score alignment (Pearson 0.46 vs. 0.43 for semantics; Q-Kappa 0.33 vs. 0.33 for physics) while also delivering state-of-the-art rationale quality (e.g., best BERTScore F1 and BLEU-4 on both SA and PC). Beyond this benchmark, our framework generalizes to other evaluation suites, establishing a unified paradigm for explainable physics-and-semantics reasoning in T2V evaluation and enabling safer, more reliable model development. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=KdYKSOY9MP | 2025-09-19T14:47:13 | 4 | [
{
"id": "LkJozPzECV",
"forum": "KdYKSOY9MP",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16433/Reviewer_JK2k",
"reviewer_name": "Reviewer_JK2k",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "Summa... | |
hABW989AOr | https://openreview.net/forum?id=hABW989AOr | Tensor Power Methods: Faster and Robust for Arbitrary Order | 4.5 | 3.75 | [
6,
4,
4,
4
] | [
5,
3,
4,
3
] | 4 | [
"tensor power method",
"arbitrary order",
"Canonical/Polyadic decomposition"
] | Tensor decomposition is a fundamental method used in various areas to deal with high-dimensional data. Among the widely recognized techniques for tensor decomposition is the Canonical/Polyadic (CP) decomposition, which breaks down a tensor into a combination of rank-1 components. In this paper, we specifically focus on CP decomposition and present a novel faster robust tensor power method (TPM) for decomposing arbitrary order tensors. Our approach overcomes the limitations of existing methods that are often restricted to lower-order ($\leq 3$) tensors or require strong assumptions about the underlying data structure. By applying the sketching method, we achieve a running time of $\widetilde{O}(n^{p-1})$ per iteration of TPM on a tensor of order $p$ and dimension $n$. Furthermore, we provide a detailed analysis applicable to any $p$-th order tensor, addressing a gap in previous works. Our proposed method offers robustness and efficiency, expanding the applicability of CP decomposition to a broader class of high-dimensional data problems. | optimization | https://openreview.net/pdf?id=hABW989AOr | 2025-09-20T06:37:39 | 4 | [
{
"id": "WdvfbzIgA5",
"forum": "hABW989AOr",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission21763/Reviewer_c8NC",
"reviewer_name": "Reviewer_c8NC",
"rating": 6,
"confidence": 5,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "The p... | |
EPTVoeaz7Y | https://openreview.net/forum?id=EPTVoeaz7Y | RANGER: Repository-Level Agent for Graph Enhanced Retrieval | 2.5 | 4.25 | [
2,
2,
2,
4
] | [
4,
5,
4,
4
] | 4 | [
"GraphRAG",
"Monte Carlo Tree Search",
"Repository-level",
"Retrieval Agent",
"Code Retrieval",
"Retrieval-Augmented Generation",
"Software Engineering",
"Graph Traversal",
"Multi-hop Reasoning",
"Code Search"
] | General-purpose automated software engineering (ASE) includes tasks such as code completion, retrieval, repair, QA, and summarization. These tasks require a code retrieval system that can handle specific queries about code entities, or code entity queries (for example, locating a specific class or retrieving the dependencies of a function), as well as general queries without explicit code entities, or natural language queries (for example, describing a task and retrieving the corresponding code). We present RANGER, a repository-level code retrieval agent designed to address both query types, filling a gap in recent works that have focused primarily on code-entity queries. We first present a tool that constructs a comprehensive knowledge graph of the entire repository, capturing hierarchical and cross-file dependencies down to the variable level, and augments graph nodes with textual descriptions and embeddings to bridge the gap between code and natural language. RANGER then operates on this graph through a dual-stage retrieval pipeline. Entity-based queries are answered through fast Cypher lookups, while natural language queries are handled by MCTS-guided graph exploration. We evaluate RANGER across four diverse benchmarks that represent core ASE tasks, including code search, question answering, cross-file dependency retrieval, and repository-level code completion. On CodeSearchNet and RepoQA, it outperforms retrieval baselines that use embeddings from strong models such as Qwen3-8B. On RepoBench, it achieves superior cross-file dependency retrieval over baselines, and on CrossCodeEval, pairing RANGER with BM25 delivers the highest exact match rate in code completion compared to other RAG methods. | other topics in machine learning (i.e., none of the above) | https://openreview.net/pdf?id=EPTVoeaz7Y | 2025-09-20T00:37:36 | 4 | [
{
"id": "2NhWe9GwGm",
"forum": "EPTVoeaz7Y",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19837/Reviewer_eG5f",
"reviewer_name": "Reviewer_eG5f",
"rating": 2,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This ... | |
VjtMhU3zWn | https://openreview.net/forum?id=VjtMhU3zWn | SchemaRAG: Enhancing Knowledge-Intensive Reasoning of LLMs via Inference-Time Adaptive Schema | 4.5 | 3.75 | [
6,
4,
4,
4
] | [
4,
4,
4,
3
] | 4 | [
"Knowledge Intensive Reasoning",
"RAG",
"LLM"
] | Retrieval-Augmented Generation (RAG) often struggles with integrating fragmented knowledge for complex reasoning tasks. Recent efforts introduce structural templates—such as graphs or knowledge-based organizations—to improve multi-document reasoning. However, they are constrained by their rigidity, failing to adapt to diverse, task-specific information structures and often omitting critical dependencies. To address this, we propose SchemaRAG: an adaptive schema-guided RAG framework. Instead of predefined formats like graphs, tables, and chunks, SchemaRAG adaptively organizes the factual information across documents based on query-specific requirements. Given the input query and documents, it first parses the query into sub-problems and generates strategies for schema construction, then utilizes the organized knowledge to generate the final answer. Extensive experiments on real-world benchmarks demonstrate that SchemaRAG consistently outperforms state-of-the-art baselines in knowledge-intensive reasoning and generation quality. Our work highlights the importance of adaptive schema-guided strategies for advancing the capabilities of RAG systems in complex, domain-specific tasks. | generative models | https://openreview.net/pdf?id=VjtMhU3zWn | 2025-09-20T16:29:27 | 4 | [
{
"id": "CBhF0GEgRV",
"forum": "VjtMhU3zWn",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24445/Reviewer_Vc2h",
"reviewer_name": "Reviewer_Vc2h",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "The p... |