| id | url | title | average_rating | average_confidence | ratings | confidences | reviewers_num | keywords | abstract | tldr | primary_area | pdf_url | submission_date | total_reviews | reviews |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
vxkzW4ljeX
|
https://openreview.net/forum?id=vxkzW4ljeX
|
A universal compression theory: Lottery ticket hypothesis and superpolynomial scaling laws
| 5.5
| 3
|
[
4,
6,
8,
4
] |
[
3,
3,
2,
4
] | 4
|
[
"Neural scaling law",
"model compression",
"lottery ticket hypothesis",
"deep learning theory"
] |
When training large-scale models, the performance typically scales with the number of parameters and the dataset size according to a slow power law. A fundamental theoretical and practical question is whether comparable performance can be achieved with significantly smaller models and substantially less data. In this work, we provide a positive and constructive answer. We prove that a generic permutation-invariant function of $d$ objects can be asymptotically compressed into a function of $\operatorname{polylog} d$ objects with vanishing error. This theorem yields two key implications: (Ia) a large neural network can be compressed to polylogarithmic width while preserving its learning dynamics; (Ib) a large dataset can be compressed to polylogarithmic size while leaving the loss landscape of the corresponding model unchanged. (Ia) directly establishes a proof of the \textit{dynamical} lottery ticket hypothesis, which states that any ordinary network can be strongly compressed such that the learning dynamics and result remain unchanged. (Ib) shows that a neural scaling law of the form $L\sim d^{-\alpha}$ can be boosted to an arbitrarily fast power law decay, and ultimately to $\exp(-\alpha' \sqrt[m]{d})$.
|
We prove that permutation symmetry enables polylogarithmic compression of neural networks and datasets, thus establishing the dynamical lottery ticket hypothesis and boosting neural scaling laws
|
unsupervised, self-supervised, semi-supervised, and supervised representation learning
|
https://openreview.net/pdf?id=vxkzW4ljeX
| 2025-09-19T05:07:02
| 4
|
[
{
"id": "vvIZ8RIzRX",
"forum": "vxkzW4ljeX",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14172/Reviewer_YxjE",
"reviewer_name": "Reviewer_YxjE",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This work addresses dataset and neural network compression from a moment-matching perspective. Under certain assumptions, this approach establishes novel compression rates and power laws for these tasks. It also enables the boosting of neural power laws, which describe performance versus dataset size dynamics. A number of low-dimensional experiments are conducted to support the claims.",
"strengths": "The work is mathematically sound and easy to follow. The text is clear, supported by a decent and concise background overview. The authors provide both rigorous derivations and intuitive explanations for their theoretical results, and the experiments support their claims across a number of settings.",
"weaknesses": "My main criticism revolves around the **curse of dimensionality**, which the authors underaddress several times throughout the paper.\n\n1. Both (9) and (10) have dimensionality-dependent exponents, which explode when $m \\to \\infty$ given that other constants are fixed. This is later combated by selecting $k > (1 = \\sigma^{-1}) m - 1$, which, in turn, explodes $\\binom{m+k}{k}$. Through some trickery in Theorem 7 (unfortunately, due to time constraints, I was not able to fully verify the math), the authors miraculously balance these issues by attaining a poly-log compression rate.\n\n That said, one might expect that substituting $d'$ from (45) into (44) should yield errors which are (asymptotically) under some fixed $\\omega$. However, when done numerically for $m=10$, $\\rho=0.1$, $\\omega=0.1$, and any multiplicative constant in (45), I always get an exploding upper bound on the compression error. Reasonable variations of $\\rho$ and $\\omega$ do not alleviate the issue, which only worsens as $m$ grows.\n\n2. Since $k$ in Theorem 7 grows with increasing $d$, $f$ is required to be increasingly smooth. While most contemporary NNs are $\\infty$-smooth almost everywhere, their numerical smoothness degrades with increasing dimensionality or a decreasing learning rate [1]. In practice, this will take a toll on the derived bounds in terms of asymptotic constants or other parameters (e.g., $\\rho$ in (44)). This problem remains unaddressed in the main text.\n\n3. The experimental setups are toy, with the dimensionality being $4-12$ orders of magnitude lower than in real-world tasks. In my opinion, this might lead to the following problems:\n - While showing decent performance in low-dimensional regimes, the proposed compression method might entail overfitting in high-dimensional setups. Stochastic gradient descent (SGD) is known to apply implicit regularization during training [2], thus selecting less overfitting solutions. Your method, however, might \"overcompress\" a NN/dataset: among all solutions, a non-generalizable one is selected (train error or even dynamics are the same, but test error is not).\n - It is known that some problems in ML have exponential (in dimensionality) sample complexity (e.g., density estimation). Your result, however, suggests that these problems are also log-exponential in dimensionality (Theorem 7 applied to dataset compression) given the train error is preserved. The only logical conclusion I can arrive at is that such compression almost always entails overfitting when considering complex problems.\n\n4. While the authors briefly mention the manifold hypothesis in Section 7, it is not clear how one can use it to improve the method. Moment matching is agnostic to manifolds: i.e., it generally cannot capture such intricate structures. Therefore, another manifold learning strategy must be employed beforehand to decrease the dimensionality. Such a strategy typically requires the full dataset, as manifold learning is usually of exponential sample complexity.\n\n[1] Cohen et al. \"Gradient Descent on Neural Networks Typically Occurs at the Edge of Stability\". Proc. of ICLR 2021.\n\n[2] Smith et al. \"On the Origin of Implicit Regularization in Stochastic Gradient Descent\". Proc. of ICLR 2021.\n\n**Minor issues:**\n\n1. Broken reference in line 190: \"Appendix ??\"",
"questions": "1. Can you, please, provide additional experiments (e.g., for high dataset dimensionality or low sampling sizes) proving that your method avoids overfitting?\n2. I kindly ask to address my concerns in Weakness 1. In particular, I am interested in the numerical verification of the bounds provided.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T18:58:28",
"modification_date": "2025-11-12T13:17:10",
"review_url": "https://openreview.net/forum?id=vxkzW4ljeX¬eId=vvIZ8RIzRX",
"license": "CC BY 4.0"
},
{
"id": "iCi3cG9IVu",
"forum": "vxkzW4ljeX",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14172/Reviewer_1dtD",
"reviewer_name": "Reviewer_1dtD",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 4,
"presentation": 4,
"summary": "The paper proves a universal compression theorem, showing that almost any symmetric function of $d$ elements can be compressed to a function with $O({\\rm polylog}$ (d)) elements losslessly. The theory leads to two key applications. First is the dynamical lottery ticket hypothesis, proving that large networks can be compressed to polylogarithmic width while preserving their training dynamics. Second is dataset compression, demonstrating that neural scaling laws can be theoretically improved from power-law to stretched-exponential decay.",
"strengths": "- The paper delivers a rigorous theoretical result that proves the dynamical lottery ticket hypothesis by showing that large networks can be compressed while preserving their original training dynamics.\n- Provides a generalized compression theory with broad applicability across diverse domains (e.g., dataset and model compression), demonstrating strong theoretical versatility and significant potential for cross-domain impact.\n- Establishes clear practical advantages, such as improved scaling laws and model compression, that are well grounded in the proposed theoretical framework.",
"weaknesses": "- The paper lacks a thorough discussion on the applicability of the proposed theory to complex neural architectures such as Transformer blocks, which integrate linear projections, attention mechanisms, and normalization layers.\n- There seems to be a missing reference link to the Appendix at line 190 on page 4 (“Appendix ??”).",
"questions": "- The model assumes neuron permutation symmetry. Does the assumption is applicable to complex modules in neural networks, such as Transformer block?\n- In experiments such as Figure 3 or 4, how much real computation time does the proposed compression take?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T17:45:33",
"modification_date": "2025-11-12T13:17:10",
"review_url": "https://openreview.net/forum?id=vxkzW4ljeX¬eId=iCi3cG9IVu",
"license": "CC BY 4.0"
},
{
"id": "BvtKM40N8v",
"forum": "vxkzW4ljeX",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14172/Reviewer_ui4S",
"reviewer_name": "Reviewer_ui4S",
"rating": 8,
"confidence": 2,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "The paper introduces the universal compression theorem as a step towards the dynamical lottery ticket hypothesis (LTH), which claims that in a dense network there exists a subnetwork, which when trained in isolation exhibits the same training dynamics as the original one. The theorem states (informally) that a permutation-invariant function of $d$ variables each of dimensionality $m$ can be asymptotically compressed to a function of $O(\\text{polylog } d)$ variables. The authors argue that, because many model / dataset objects are symmetric in parameters / datapoints, these results imply polylog-rate network and dataset compression under the assumptions of the theorem. Another implication of polylog compression is the scaling law $L \\approx L_0 + C d ^{-\\alpha}$ changing from power law form to stretched-exponential form $L \\approx L_0 + \\exp (- \\alpha’ \\sqrt[m]{d})$, both for model and dataset size.",
"strengths": "1. The paper provides theoretical grantees on asymptotic polylogarithmic compression for symmetrical functions. The authors provide Algorithm 1 for compression of symmetric functions using moment-matching and validate it numerically.\n2. An important feature is the universality of the result: the implications of the theorem include both neural networks and datasets.\n3. A major practical consequence of the work is the potential speed up guarantees on the power-law scaling laws, which are known to \"be slow\", i.e. have small power exponentials.\n4. Although the main result is theoretical, the authors back each claim with numerical experiments: they show on a synthetic function that compression error drops with in agreement with the theoretical bound (Fig. 2); that training dynamics on a compressed dataset follows training on the full dataset (Fig. 3); training performances of full and compressed models are identical to support dynamical LTH (Fig. 4); and compressing a network or dataset leads to a larger scaling law exponent (Fig. 5). These comprehensive validations neatly complement the theoretical backbone of the paper.",
"weaknesses": "1. Further empirical evaluation would strengthen this work, as the authors note.\n2. The proposed moment-matching algorithm scales poorly with moment order $k$ and dimension $m$ (via $\\binom{m+k}{k}$), which limits immediate practical effects despite the asymptotic guarantees.\n3. The theoretical claim of polylogarithmic compression yielding a stretched-exponential scaling $\\text{exp} (- \\sqrt[m]{d})$ is not supported with evidence. The numerical experiments in Section 6 demonstrate how the scaling laws can be improved only for quadratic compression.",
"questions": "1. Can you show an example with the scaling laws of a form $L \\approx L_0 + c \\text{exp} (- \\alpha’ \\sqrt[m]{d})$ to illustrate the stretched-exponential regime?\n2. In numerical experiments in Section 6 the exponent should have improved by a factor of 2: $C d^{-\\alpha} = C (\\frac{d’}{16})^{-2 \\alpha} =C’ (d’)^{-2\\alpha} $. The reported values are close but lower, 1.271 vs $2\\alpha = 1.366$ and 0.608 vs $2 \\alpha=0.616$. Why does this difference appear? And why is it larger for dataset compression? \n3. Many elements of modern neural networks do not fall under the smoothness assumptions, like ReLU, top-k selections, sparse \\ quantized representations. How do you imagine expanding your work around those limitations and how would compression rates be affected?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T03:02:05",
"modification_date": "2025-11-12T13:17:11",
"review_url": "https://openreview.net/forum?id=vxkzW4ljeX¬eId=BvtKM40N8v",
"license": "CC BY 4.0"
},
{
"id": "oaCo1YCmRM",
"forum": "vxkzW4ljeX",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14172/Reviewer_LiWj",
"reviewer_name": "Reviewer_LiWj",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The paper studies how neural networks and datasets can be compressed by exploiting permutation symmetries. The authors show that symmetric functions can be represented using fewer variables, which implies that both the model and the data can be reduced to polylogarithmic size without significantly changing the loss. This leads to what the authors call a dynamical lottery ticket hypothesis and stronger scaling laws.",
"strengths": "The paper presents an interesting idea: using permutation symmetry to achieve strong compression of both networks and datasets.\nThe theoretical argument (that symmetric functions can be represented with fewer variables), is promising. The results aim to connect model compression, scaling laws, and the lottery ticket hypothesis in a unified framework.",
"weaknesses": "The paper proposes a theoretical link between symmetry, compression, and scaling laws. However, the lack of clear algorithmic formulation and the absence of fair experimental baselines limit its current practical relevance.\n\nThe main limitation of the paper is the lack of rigor and clarity. The compression process is described only at a high level. It is not clear how one would actually construct the compressed network or dataset in practice. The paper does not include pseudocode or complexity estimates, making it hard to evaluate the tractability of the proposed methods. \n\nThe experimental comparison is incomplete. The proposed compressed network is compared with both the original network and a random sparse network. However, it is already known that random sparse networks perform poorly, while sparse networks obtained with *Iterative Magnitude Pruning* (IMP, Frankle & Carbin 2019) can match the performance of dense ones. A fair comparison should therefore include IMP or other modern sparse training methods.\n\nThe compression-error trade-off is not clearly quantified. The claim that a network with (d) parameters can be reduced to polylogarithmic size should be expressed as a function of the error, and possibly compared to existing theoretical bounds.\nFinally, some parts of the theoretical presentation are unclear. The meaning of the function $f$ in Theorem 5 is not explained, and the notation $|f' - f| = \\omega(d)$ is confusing, since $ \\omega(d)$ can mean any function that grows faster than $d$, but such a bound would be vacuous.",
"questions": "- Could you provide a concrete description of the compression algorithm? How are the compressed parameters and datasets obtained from the original ones?\n- How does your method compare, both in compression ratio and performance, with Iterative Magnitude Pruning or other sparse training techniques?\n- Can you explicitly state the trade-off between compression and approximation error, and how it compares with previous results (e.g. to those for the Strong LTH such as Pensia et al., 2020)?\n- What exactly does the function $f$ represent in Theorem 5? You didn't define it. Can you clarify the notation?\n- Can you argue that the bound $|f' - f| = \\omega(d)$ is not vacuous?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-27T18:32:31",
"modification_date": "2025-11-12T13:17:11",
"review_url": "https://openreview.net/forum?id=vxkzW4ljeX¬eId=oaCo1YCmRM",
"license": "CC BY 4.0"
}
] |
fwCoRzh0Dw
|
https://openreview.net/forum?id=fwCoRzh0Dw
|
InfiniteHiP: Extending Language Model Context Up to 3 Million Tokens on a Single GPU
| 4
| 3
|
[
6,
4,
2
] |
[
2,
3,
4
] | 3
|
[
"Sparse Attention",
"Efficient Attention",
"Context Extrapolation",
"KV Cache Offloading"
] |
In modern large language models (LLMs), handling very long context lengths presents significant challenges as it causes slower inference speeds and increased memory costs. Additionally, most existing pre-trained LLMs fail to generalize beyond their original training sequence lengths. To enable efficient and practical long-context utilization, we introduce \textit{InfiniteHiP}, a novel and practical LLM inference framework that accelerates processing by dynamically eliminating irrelevant context tokens through a modular hierarchical token pruning algorithm. Our method also allows generalization to longer sequences by selectively applying various RoPE adjustment methods according to the internal attention patterns within LLMs. Furthermore, we offload the key-value cache to host memory during inference, significantly reducing GPU memory pressure. As a result, InfiniteHiP enables the processing of up to 3 million tokens on a single L40s 48GB GPU -- 3x larger -- without any permanent loss of context information. Our framework achieves an 18.95x speedup in attention decoding for a 1 million token context without requiring additional training. We implement our method in the SGLang framework and demonstrate its effectiveness and practicality through extensive evaluations.
|
InfiniteHiP extends the servable model context length beyond VRAM and pretrained model context limitation.
|
infrastructure, software libraries, hardware, systems, etc.
|
https://openreview.net/pdf?id=fwCoRzh0Dw
| 2025-09-17T09:29:23
| 3
|
[
{
"id": "1VQ0xZHvLL",
"forum": "fwCoRzh0Dw",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8178/Reviewer_SD7R",
"reviewer_name": "Reviewer_SD7R",
"rating": 6,
"confidence": 2,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "InfiniteHiP is a training-free long-context inference framework designed to address three key bottlenecks of LLMs when processing long sequences: computational efficiency, memory consumption, and generalization beyond the pretraining window.\nBuilding upon the original HiP, InfiniteHiP introduces a series of system-level improvements that make long-context inference feasible on a single GPU. The framework consists of three major components: Hierarchical Multi-Stage Pruning; Dynamic RoPE Adjustment, which adapts positional encoding strategies dynamically to enable out-of-length generalization for short-context pretrained models; and Hierarchical KV Offloading with LRU Policy, which manages multi-stage cache refreshing and memory transfer between GPU and host to minimize VRAM pressure. Through the synergy of these mechanisms, InfiniteHiP achieves significant performance improvements within the SGLang inference framework, specifically, a 7.24× end-to-end decoding speedup and an 18.95× acceleration in attention computation on million-token contexts, all without requiring any retraining.",
"strengths": "1. The work demonstrates strong practicality and engineering significance. InfiniteHiP can be directly integrated with a variety of existing models, such as LLaMA, Qwen, Gemma, and EXAONE, providing a general and deployment-ready solution for long-context inference on commodity GPUs.\n2. Another notable strength lies in its unified and system-oriented design perspective. Instead of focusing on a single optimization aspect, the framework simultaneously tackles the three major challenges of long-context modeling: computation, generalization, and memory through a coherent modular architecture.",
"weaknesses": "1. Despite its strong engineering impact, the scope of related work is relatively limited, covering only four prior studies, which may not sufficiently position InfiniteHiP within the broader literature of efficient attention and memory optimization.\n2. The main innovations reside at the system level, and the algorithmic novelty is incremental rather than conceptual. Each of the three modules, pruning, RoPE adjustment, and KV management, builds upon previously established ideas, leading to the impression of being “incremental but practical.”\n3. Although several ablation experiments are presented, the paper lacks a systematic quantitative analysis that isolates and justifies the independent contribution of each module. Strengthening the analytical rigor and theoretical interpretation of these components would significantly enhance the paper’s scientific depth and persuasive power.",
"questions": "see weakness",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-26T02:13:07",
"modification_date": "2025-11-12T12:02:17",
"review_url": "https://openreview.net/forum?id=fwCoRzh0Dw¬eId=1VQ0xZHvLL",
"license": "CC BY 4.0"
},
{
"id": "R6abusy28e",
"forum": "fwCoRzh0Dw",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8178/Reviewer_G1Ti",
"reviewer_name": "Reviewer_G1Ti",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 2,
"summary": "This paper introduces InfiniteHiP, a training-free inference framework designed to address the challenges of processing extremely long contexts in Large Language Models (LLMs). The work tackles three main issues: the high computational and memory costs of the attention mechanism, the failure of pre-trained models to generalize beyond their training length, and the significant GPU memory pressure from the Key-Value (KV) cache. The core contributions are: 1) A modular multi-stage hierarchical token pruning algorithm that dynamically eliminates irrelevant context to accelerate attention. 2) A dynamic RoPE adjustment method that enables out-of-length generalization without fine-tuning. 3) An efficient KV cache offloading system that uses host memory and an LRU policy to manage the cache on a single GPU. The authors demonstrate that InfiniteHiP can process up to 3 million tokens on a single 48GB GPU, achieving significant speedups and strong performance on long-context benchmarks.",
"strengths": "- The paper is evaluated on a comprehensive set of benchmarks, including LongBench, RULER, and ∞Bench.\n\n- The work is substantial, integrating multiple techniques (sparse attention, OOL generalization, and KV cache offloading) into a single, practical framework. The implementation within the SGLang framework and detailed performance analysis show a significant engineering effort.\n\n- The proposed method achieves strong performance.",
"weaknesses": "- Crucial details of the proposed method, particularly the complete algorithms for context pruning (Algorithms 1-3), are deferred to the appendix. While this may be due to space constraints, it makes it challenging for the reader to fully grasp the core mechanism without frequently referencing the appendix.\n\n- The heuristic used in the `SelectRep` algorithm is a primary concern. The paper states that when a chunk is divided into two branches, the **first token** of each branch is used as a proxy to decide which branch to discard . This choice seems counter-intuitive. Considering the nature of the causal attention mask, the **last token** of a branch would likely be a more representative summary of the information within that branch. However, even so, the assumption that a single, fixed-position token can reliably represent an entire chunk is not convincingly justified and lacks strong empirical support in the paper.\n\n- The paper could be strengthened by discussing and comparing its KV cache offloading mechanism with other recent works[1,2,3]. \n\nI am willing to raise my score if my concerns are adequately addressed.\n\n[1] InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management\n\n[2] Arkvale: Efficient generative llm inference with recallable key-value eviction\n\n[3] OmniKV: Dynamic context selection for efficient long-context LLMs",
"questions": "1. A significant contribution of this work is the sophisticated KV cache management system. Given its practicality, do the authors plan to open-source the code to facilitate reproducibility and encourage further research in this area?\n\n2. Could the author share insights on why the first token was chosen as the representative token?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-22T02:46:22",
"modification_date": "2025-11-12T12:02:17",
"review_url": "https://openreview.net/forum?id=fwCoRzh0Dw¬eId=R6abusy28e",
"license": "CC BY 4.0"
},
{
"id": "W15YsjD4uF",
"forum": "fwCoRzh0Dw",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8178/Reviewer_ZK9U",
"reviewer_name": "Reviewer_ZK9U",
"rating": 2,
"confidence": 4,
"soundness": 3,
"contribution": 1,
"presentation": 3,
"summary": "InfiniteHiP improves the KV cache offloading mechanism of HiP Attention (ICLR 2025) by enhancing its cache management policy. The core idea remains the same to manage the KV cache on the unified memory space while keeping a smaller key bank on the GPU memory, which acts as a cache. The use of the Least Recently Used (LRU) policy as the eviction mechanism is incremental.\n\n\nAfter reviewing section 3, FROM HIP TO INFINITEHIP, we are certain that this work is incremental. The token pruning is borrowed from H2O; the dynamic RoPE adjustment is a trick; and Least Recently Used (LRU) is incremental. This is an engineering-heavy paper with incremental improvements over existing work, overstated claims, and limited novel insights. To maintain the high standard of the ICLR conference, we tend to reject this paper.",
"strengths": "The work integrates sparse attention, offloading, and OOL generalization into one unified system. The training-free design and work integration can lead to better performance.\n\nWe believe training-free inference is essential for effective inference, and this paper demonstrates it.\n\nGPU kernels for InfiniteHIP are a good implementation.",
"weaknesses": "The experimental benchmark selection is LongBench and ∞Bench to prove the effectiveness of InfiniteHiP. However, the context length of LongBench (32K) and ∞Benc (100k) is much lower than its claim of supporting 3 MILLION TOKENS on a single GPU. That means the extended context length has not been proven effective for extremely long context tasks. We suggest that the authors conduct experiments on LongBench v2 with a longer context length.\n\nIn Table 5, the RULER Performance of InfiniteHiP starts to be lower than full attention at 128k (74.99 vs. 76.89). Will this tend to continue to go down for a longer context > 128k? This trend can make the title up to 3 million tokens on a single GPU an overstated claim if the InfiniteHiP can not maintain accuracy for long context.\n\nThe RoPE Strategy of sing chunk-indexed RoPE for layers 1-3 and relative RoPE for layers 4-32 is based on observing \"sliding window patterns in early layers\" (Appendix D). Why exactly layers 1-3? What about layers 1-8 or other setting? An ablation study in other settings would help a lot.\n\nThe baseline is also out of date, which compares FA2 instead of FA3 [1] or flashinfer [2]. Other lossy baselines include H2O, StreamingLLM, and InfLLM, from 2023-2024. We recommend a state-of-the-art baseline like [3] or [4]\n\n[1] Ye Z, Chen L, Lai R, et al. Flashinfer: Efficient and customizable attention engine for llm inference serving[J]. arXiv preprint arXiv:2501.01005, 2025.\n\n[2] Shah J, Bikshandi G, Zhang Y, et al. Flashattention-3: Fast and accurate attention with asynchrony and low-precision[J]. Advances in Neural Information Processing Systems, 2024, 37: 68658-68685.\n\n[3] Song W, Jayanthi S M, Ronanki S, et al. Compress, Gather, and Recompute: REFORMing Long-Context Processing in Transformers[J]. arXiv preprint arXiv:2506.01215, 2025.\n\n[4] Deng W, Yang Y, Du P, et al. HGCA: Hybrid GPU-CPU Attention for Long Context LLM Inference[J]. arXiv preprint arXiv:2507.03153, 2025.",
"questions": "Analysis of the impact of InfiniteHIP on network reasoning capabilities?\n\nHow would chunk size affect the InfiniteHiP performance?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-17T07:25:18",
"modification_date": "2025-11-12T12:02:17",
"review_url": "https://openreview.net/forum?id=fwCoRzh0Dw¬eId=W15YsjD4uF",
"license": "CC BY 4.0"
}
] |
5rjSeZCM6l
|
https://openreview.net/forum?id=5rjSeZCM6l
|
FedSumUp:Secure Federated Learning Without Client-Side Training for Resource-Constrained Edge Devices
| 3.5
| 3.25
|
[
4,
2,
4,
4
] |
[
3,
3,
3,
4
] | 4
|
[
"Federated Learning",
"Data Condensation",
"Server-Side Optimization",
"Privacy-Preserving",
"Edge Devices",
"Variational Autoencoder"
] |
Horizontal Federated Learning (HFL) enables multiple clients with private data to collaboratively train a global model without sharing their local data. As a research branch of HFL, Federated Data Condensation with Distribution Matching (FDCDM) introduces a novel collaborative paradigm where clients upload small synthetic datasets instead of gradients and parameters. FDCDM faces two key challenges: privacy leakage risk, where synthetic data may leak the privacy of real data; and high computational cost on the client side, which limits the deployment capability of FDCDM on resource-constrained devices. To address these challenges, we propose FedSumUp, an improved FDCDM method. The core designs of FedSumUp include: generating initial data templates based on a Variational Autoencoder (VAE); and migrating the entire synthetic data optimization process to the server side, requiring clients only to upload distilled synthetic data and the mean of raw data features without exposing the original data itself. Experimental results on multiple real-world datasets demonstrate that FedSumUp achieves notable advantages in the following aspects: drastically reducing the visual similarity between synthetic and real data, and effectively resisting membership inference attacks; significantly lowering client-side computational overhead, making it deployable on edge devices. FedSumUp is the first work to systematically analyze privacy risks in FDCDM from the perspective of data similarity, providing a new direction for building efficient and privacy-preserving federated learning frameworks.
|
|
alignment, fairness, safety, privacy, and societal considerations
|
https://openreview.net/pdf?id=5rjSeZCM6l
| 2025-09-20T12:40:47
| 4
|
[
{
"id": "GcXZTsH254",
"forum": "5rjSeZCM6l",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission23402/Reviewer_VTkQ",
"reviewer_name": "Reviewer_VTkQ",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper tackles two core weaknesses in Federated Data Condensation with Distribution Matching (FDCDM), a branch of horizontal federated learning:\n\n1. Synthetic datasets can still resemble real client data.\n2. Existing FDCDM algorithms need heavy local optimization.\n\nAuthors present the first systematic privacy-risk analysis in FDCDM from a data-similarity viewpoint.They propose FedSumUp, where a pre-trained VAE is used to create initial synthetic templates, all expensive optimizations are shifted to the server, leaving only data summarization task on clients.\n\nExperimental Results show the following:\nFar less visual similarity between synthetic and real data → improved privacy.\nHigh reduction in client computation, making it very suitable for edge devices.",
"strengths": "+ Systematically reduction of visual similarity between synthetic and real data\n+ No direct data, gradients, or parameters ever leave the client. \n+ No Client-Side Training\n+ VAE based summarization provides a standardized, privacy-safe feature extraction pipeline\n+ Centralizing the condensation and MMD-based alignment process ensures consistent optimization quality and reduces heterogeneity issues caused by varying client compute capacities.\n+ Balanced Utility and Privacy",
"weaknesses": "- Performance heavily relies on the representational strength and generalization of the pre-trained VAE. If the VAE does not capture key features relevant to a domain (e.g., medical images), synthetic data quality may degrade.\n- Migrating all optimization to the server increases centralized computational load, which can become difficult with large datasets and large number of clients. \n- Since the method removes personalized patterns to prevent MIAs, models may lose subtle but useful client-specific features, affecting tasks that rely on personalization\n- Since VAE is available to the server, can't the data be recovered through gradient inversion?\n- Centralizing all optimization increases the risk of server compromise; a malicious server could still attempt to reverse-engineer latent features. \n- It hasn't been tested on real-world images yet. The datasets used are very basic ones and less challenging. \n- More exhaustive experiments and real-world setups of FL should be explored as done in the following paper (inspired by Office 31):\n\"Federated Learning for Commercial Image Sources\", WACV 2023.\nDataset link: https://drive.google.com/file/d/1qgpj1TsGT4lnhhOmwR4gqVRigoHnMRnX",
"questions": "Although raw data is not shared, mean feature embeddings might still reveal distributional hints of private data. How to handle that?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-04T02:46:16",
"modification_date": "2025-11-12T18:17:25",
"review_url": "https://openreview.net/forum?id=5rjSeZCM6l¬eId=GcXZTsH254",
"license": "CC BY 4.0"
},
{
"id": "P0EfJoz3jo",
"forum": "5rjSeZCM6l",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission23402/Reviewer_LYj3",
"reviewer_name": "Reviewer_LYj3",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "This paper extends Federated Data Condensation with Distribution Matching (FDCDM) by addressing privacy limitations and computational constraints on edge devices. The authors incorporate Variational Autoencoders (VAE) to extract latent representations of client-side data, which are then transmitted to the server for synthetic data generation. This approach requires only serverside model training, thereby reducing the computational burden on clients and mitigating sample-level privacy leakage by transferring latent representations instead of ”initial templates” that could result in visual information leakage.",
"strengths": "The paper presents a complete framework with improvements in accuracy and computational efficiency compared to baseline methods.",
"weaknesses": "1. The paper suffers from significant organizational issues that impede comprehension. While the work builds upon Heterogeneous Federated Learning (HFL), this foundational design choice is not clearly articulated. The framework is only briefly mentioned at the beginning of the Introduction, using vague terminology to describe the challenges and background of HFL before transitioning to FDCDM and data security concerns. This fragmented presentation makes it difficult for readers to understand the core methodology and its relationship to existing work.\n\n2. Additionally, the paper’s contributions are presented in an unprofessional manner, lacking sufficient evidence to support claimed security benefits. The computational advantage is also inadequately substantiated, with only a vague claim of ”reducing client side computational overhead by over 90% compared to methods like FedSD2C,” which lacks rigor and clarity.\n\n3. The rationale for incorporating VAE to prevent sample leakage is inadequately explained. While the paper dedicates considerable space to discussing how the initial template used for synthetic data generation poses a risk of privacy leakage through potential attacks, it fails to provide a clear explanation of why and how VAE addresses this vulnerability. The connection between the VAE-based latent representation and enhanced privacy protection remains unclear.\n\n4. Despite claiming ”theoretical innovation,” the paper provides no theoretical analysis or formal results. The absence of theoretical foundations significantly weakens the paper’s contributions and makes it difficult to assess the principled nature of the proposed approach.\n\n5. The experimental evaluation is inadequate to support the paper’s claims. The experiments are limited to simple datasets and moderately non-IID settings, which is insufficient to demonstrate the method’s effectiveness. The performance evaluation does not adequately explore more challenging tasks or varied heterogeneous settings, making it less convinced regarding the improvement.",
"questions": "See weakness.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T11:39:31",
"modification_date": "2025-11-12T18:17:26",
"review_url": "https://openreview.net/forum?id=5rjSeZCM6l¬eId=P0EfJoz3jo",
"license": "CC BY 4.0"
},
{
"id": "x5rrNe6sbA",
"forum": "5rjSeZCM6l",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission23402/Reviewer_A8Df",
"reviewer_name": "Reviewer_A8Df",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper identifies two critical challenges in the existing Federated Data Condensation with Distribution Matching (FDCDM) paradigm: 1) significant privacy risks when using real data as templates for synthetic data generation, and 2) high computational costs on the resource-constrained edge device client side. To address these issues, the paper proposes FedSumUp, in which each client sends (per-class) VAE latent codes and mean feature vectors; the server performs a two-phase optimization (latent → pixel) to synthesize a global, small dataset used to train the global model.",
"strengths": "1. By offloading all complex optimization to the server, the client-side burden is reduced to a simple one-pass encoding and feature extraction.\n\n2. By using a VAE to generate abstract latent codes, FedSumUp avoids using templates that are either too realistic (leaking privacy) or too noisy (hurting utility).",
"weaknesses": "1. The server must now perform a two-phase optimization (latent code and pixel-level) for each participating client in every round. This cost could be substantial and scales with the number of clients, yet it is not reported, which makes the \"efficiency\" claim one-sided.\n\n2. The paper assumes a semi-honest server adversary and evaluates privacy against server-side MIA, yet the server receives per-class latent codes and mean feature vectors every round. The server with the public VAE decoder may decode latents to image-like content. No further analysis is provided on how much information these latents/means reveal.",
"questions": "1. What is the total computational overhead on the server, and how does this cost scale as the number of clients increases?\n\n2. The entire method relies on a general-purpose VAE pre-trained on a public dataset. How would the method's performance be affected if the clients' private data comes from a highly specialized domain that is significantly \"out-of-distribution\" for the VAE's pre-training data?\n\n3. You opted for a weaker optimization objective on the server to match the mean of the real and synthetic features, rather than a stronger distributional loss. Was this choice primarily for efficiency?\n\n4. What is the reconstruction fidelity when directly decoding transmitted latents with the provided/public decoder?\n\n5. If the server actively optimizes latents to probe the client distribution, how robust is FedSumUp to targeted reconstruction/inversion?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T01:06:45",
"modification_date": "2025-11-12T18:17:26",
"review_url": "https://openreview.net/forum?id=5rjSeZCM6l¬eId=x5rrNe6sbA",
"license": "CC BY 4.0"
},
{
"id": "B7THUcdFRl",
"forum": "5rjSeZCM6l",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission23402/Reviewer_FmzS",
"reviewer_name": "Reviewer_FmzS",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This paper introduces FedSumUp, a federated data condensation framework designed specifically to address privacy leakage and client-side computational overhead issues in horizontal federated learning. Instead of requiring clients to synthesize and optimize data locally, FedSumUp shifts all expensive data optimization to the server and has clients only upload VAE-encoded latent codes and mean data features, thereby sharply reducing computational load and limiting privacy exposure. The paper systematically critiques current FDCDM paradigms, exposing visual privacy leakage in real-data initialization and utility degradation under random-noise initialization, and presents extensive experiments to demonstrate improved privacy, efficiency, and performance under various non-IID settings compared to several strong baselines.",
"strengths": "1) The paper is the first to rigorously expose and analyze visual privacy leakage in FDCDM schemes, especially under real-data initialization. This is well-illustrated with Table 1 and the corresponding discussion on Page 4, and visually connected to MIA vulnerabilities.\n2) By offloading all synthetic data optimization to the server, the proposed method massively reduces resource requirements on clients (as substantiated by Tables 3, 5, 6, and 7). Table 3 highlights that client runtime is reduced by over 10–15x compared to other methods.\n3) The paper proposes a clever privacy-preserving mechanism. It uses a general-purpose, pre-trained VAE as a privacy filter, where clients upload highly abstract \"latent codes\" rather than raw images. The server first optimizes these codes in the latent space before decoding them. This process tends to filter out personalized features while retaining the common class features beneficial for model training , thereby mechanistically helping to resist Membership Inference Attacks (MIA).",
"weaknesses": "1. While the paper exposes practical visual privacy leakages in prior FDCDM methods, its claimed privacy enhancements in FedSumUp are predominantly empirical (via MIA ACC, Table 3 and Table 5). There is no formal privacy analysis or theoretical bound (e.g., differential privacy guarantees, or information-theoretic leakage quantification). \n2. While Appendix A.6 claims that the VAE is universal and not fine-tuned per client or task, the actual privacy and generalization performance of this VAE is not deeply interrogated. What happens if the VAE is insufficiently expressive for specific domains or tasks? Could the VAE itself encode subtle privacy leakages if, for example, the upstream training dataset for the VAE includes client-resembling data? No empirical test of VAE generality or security robustness is attempted.\n3. The protocol seems to assume honest-but-curious clients, but if clients upload malicious codes or manipulated means, what prevents poisoning or information leakage back to the server or other clients? There is no discussion of potential mechanisms.\n4. While MNIST, FashionMNIST, and CIFAR10 are standard, they are relatively small and may not sufficiently represent real-world high-heterogeneity, high-dimensional, or non-vision FL tasks. It remains unclear if the claimed privacy/utility gains hold for domains outside canonical image datasets.",
"questions": "See Weakness.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T19:46:01",
"modification_date": "2025-11-12T18:17:26",
"review_url": "https://openreview.net/forum?id=5rjSeZCM6l¬eId=B7THUcdFRl",
"license": "CC BY 4.0"
}
] |
qN0Il4dtGg
|
https://openreview.net/forum?id=qN0Il4dtGg
|
HARMAP: Hierarchical Atomic Representation for Materials Property Prediction
| 3.5
| 3
|
[
2,
2,
4,
6
] |
[
4,
3,
3,
2
] | 4
|
[
"AI for Materials",
"Atomic Representation",
"Material Property Prediction"
] |
Accurate prediction of material properties is a key step toward rapid materials discovery and cost-effective exploration of vast chemical spaces. Recent advances in machine learning (ML) offer a data-driven alternative that enables fast and scalable property estimation. However, prevailing graph-based pipelines use one-hot or shallow element embeddings and simple distance-based edges, which under-encode element-specific characteristics and cannot faithfully capture bond relations. Thus, we develop HARMAP, a Hierarchical Atomic Representation for Materials Property prediction. First, we build a chemistry-informed Hierarchical Element Knowledge Tree (HEK-Tree) that classifies elements from coarse to fine (e.g., metal vs. non-metal, subgroupings), producing atomic embeddings that preserve unique identities and inter-atomic relations. Second, we map these features into hyperbolic spaces that preserve hierarchical structure, enabling compact separation of levels and smooth similarity across related elements. Finally, we construct a compound graph whose nodes use the learned atomic embeddings and whose edges combine geometric proximity, providing bond-aware connectivity. Across three large public datasets, HARMAP consistently improves over formula-only, structure-only, and standard graph baselines, indicating the effectiveness of HARMAP's unique atomic and bond representations.
|
A Hierarchical Atomic Representation for Materials Property prediction.
|
applications to physical sciences (physics, chemistry, biology, etc.)
|
https://openreview.net/pdf?id=qN0Il4dtGg
| 2025-09-10T21:25:01
| 4
|
[
{
"id": "Kr0LTtqs14",
"forum": "qN0Il4dtGg",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3745/Reviewer_CDzq",
"reviewer_name": "Reviewer_CDzq",
"rating": 2,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper introduces HARMAP, a hierarchical hyperbolic representation framework for materials property prediction.\nThe method builds a Hierarchical Element Knowledge Tree (HEK-Tree) that encodes chemical taxonomy (metals, non-metals, families, elements) into hyperbolic embeddings, preserving periodic-table hierarchies. \nA Bond-aware Connectivity (BondNeC) mechanism then computes chemically meaningful edge features from hyperbolic distances, and a Hyperbolic Transformer (Hypformer) processes the resulting compound graph for property regression.\nExperiments show its improvements over baselines. There are also ablations showing the contribution of hierarchical encoding and bond features.",
"strengths": "- The idea of embedding periodic-table hierarchies in hyperbolic space is original and well motivated by chemistry’s tree-like structure.\n- Benchmarks evaluation shows consistent improvement, improving upon recent strong baselines such as CrystalFramer and eComFormer.\n- Thorough ablation studies demonstrate clear incremental improvements from each module (HEK-Tree depth, BondNeC, learnable nodes).\n- The paper is clearly structured, with motivating figures and detailed derivations of hyperbolic operations.\nAppendices contain implementation and theoretical clarifications, increasing reproducibility.",
"weaknesses": "- The HEK-Tree and much of the architecture (Hypformer) is adapted from prior word hierarchy [1, 2, 3] and hyperbolic Transformer work [1, 2]; the new contribution mostly lies in its application domain.\n- The model is closer to an engineering combination of existing components than to a new fundamental architecture.\n- Hyperbolic operations and dual-stage encoding are more expensive than Euclidean counterparts.\n\n[1] Tifrea, Alexandru, Gary Bécigneul, and Octavian-Eugen Ganea. \"Poincar\\'e glove: Hyperbolic word embeddings.\" arXiv preprint arXiv:1810.06546 (2018).\n\n[2] Sonthalia, Rishi, and Anna Gilbert. \"Tree! i am no tree! i am a low dimensional hyperbolic embedding.\" Advances in Neural Information Processing Systems 33 (2020): 845-856.\n\n[3] Zhang, Delvin Ce, Rex Ying, and Hady W. Lauw. \"Hyperbolic graph topic modeling network with continuously updated topic tree.\" Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2023.\n\n[4] Yang, Menglin, et al. \"Hypformer: Exploring efficient transformer fully in hyperbolic space.\" Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2024.\n\n[5] Yang, Xin, et al. \"Hgformer: Hyperbolic Graph Transformer for Recommendation.\" arXiv preprint arXiv:2502.15693 (2024).",
"questions": "- The paper does not report runtime, model size, or training efficiency relative to Transformer/GNN baselines.\n- All benchmarks are standard formation-energy/bandgap tasks; results on smaller or experimental datasets (e.g., magnetism, phonon, or thermoelectric properties) would better test generalization. Also MatBench and MatBench-discovery can be taken into account.\n- While component-wise ablations are given, cross-validation or statistical significance of MAE differences is not reported.\n- No uncertainty estimates are provided, which are important in materials modeling contexts.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-03T04:01:24",
"modification_date": "2025-11-12T11:09:19",
"review_url": "https://openreview.net/forum?id=qN0Il4dtGg¬eId=Kr0LTtqs14",
"license": "CC BY 4.0"
},
{
"id": "nFAzzlciSO",
"forum": "qN0Il4dtGg",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3745/Reviewer_3Qd5",
"reviewer_name": "Reviewer_3Qd5",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The authors present HARMAP, which consists of the steps of building KEK-Tree, mapping features into hyperbolic spaces to preserve hierarchical structures of the KEK-tree, and constructing compound graphs to learn atom embedding taking into account bond-aware connectivity.\n\nThe performance evaluation of HARMAP was performed with three public datasets and its effectiveness was shown. \n\nWhile HARMAP might include technical novelties, their empirical evaluation is weak and shallow.\nFor example, I am unsure of the significance of the improvement achieved by HARMAP in the tables -- it looks very small improvements. Also, no standard deviations are shown in the tables.\n\nBesides, what empirically happens with HARMAP from a viewpoint of crystal structures is missing (e.g., which substructures have contributed to improve the MAE score and *why* do such contributions happen?). This makes me feel that their analysis quite shallow and they merely show numbers without deeper understanding to the model behaviors translated into generic interpretations and trends to materials in the datasets.",
"strengths": "Algorithmic novelty of HARMAP",
"weaknesses": "Performance evaluation which is shallow and not convincing",
"questions": "I do not have specific questions but would like to see the authors' response based on the comments above",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T15:26:17",
"modification_date": "2025-11-12T11:09:20",
"review_url": "https://openreview.net/forum?id=qN0Il4dtGg¬eId=nFAzzlciSO",
"license": "CC BY 4.0"
},
{
"id": "v4uA8CIxzm",
"forum": "qN0Il4dtGg",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3745/Reviewer_Py87",
"reviewer_name": "Reviewer_Py87",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces HARMAP, a novel machine learning framework designed to improve the accuracy of materials property prediction. The authors identify key limitations in existing graph-based models, which often rely on oversimplified atomic representations (like one-hot encodings) and geometric-only edges, failing to capture the rich hierarchical relationships between elements and chemically meaningful bonds. HARMAP addresses these shortcomings by creating a more sophisticated and chemistry-aware representation of crystalline materials.\n\nThe main contributions of the work are threefold. First, the authors construct a Hierarchical Element Knowledge Tree (HEK-Tree), a taxonomy that organizes elements from broad categories (e.g., metal/nonmetal) down to specific chemical families and individual elements. Second, this tree is embedded into hyperbolic space, a geometric domain naturally suited for representing hierarchical data with low distortion, which allows the model to preserve chemical relationships effectively. Finally, the framework introduces Bond-aware Connectivity (Bondnec), a method to enrich the edges in the crystal graph by combining standard interatomic distances with a chemical similarity score derived from the hierarchical embeddings, leading to a more accurate representation of bonding.",
"strengths": "1. Holistic Integration of Chemistry: The model moves beyond simple geometry by incorporating deep chemical knowledge. The HEK-Tree encodes established periodic trends, and the Bondnec module infuses chemical similarity into bond representations. This allows the model to reason about atomic interactions in a way that is more aligned with a chemist's intuition.\n\n2. Strong Empirical Performance: The paper provides compelling evidence of its effectiveness. HARMAP achieves state-of-the-art results across three major, diverse benchmarks (Materials Project, JARVIS, OQMD) and on multiple key properties (formation energy, bandgap, elastic moduli). The consistent and significant improvements over strong baselines are a major strength.\n\n3. Comprehensive Ablation Studies: The authors thoroughly validate their design choices through extensive ablations. They demonstrate the individual contribution of the HEK-Tree, the hyperbolic backbone (Hypformer), and the Bondnec edges, proving that each component is essential for the final performance. The study on the HEK-Tree depth also provides valuable insights into the importance of hierarchy.",
"weaknesses": "1. Potential Rigidity of the HEK-Tree: The HEK-Tree is constructed based on fixed, pre-defined chemical knowledge. While this provides a strong inductive bias, it might be less flexible than a fully learned hierarchy. It may not easily adapt to discover novel, non-intuitive element relationships that are not already captured by the standard periodic table grouping.\n\n2. Limited Interpretability of Learned Embeddings: Although the HEK-Tree structure itself is interpretable, the actual node embeddings learned in hyperbolic space are high-dimensional and abstract. While the paper shows the model works, it may be difficult to directly translate these learned representations back to concrete, new chemical insights without further analysis.",
"questions": "1. To what extent is the hierarchy of the HEK-Tree itself learnable, and have you experimented with allowing the tree structure or hierarchical paths to be optimized during training, rather than being fixed based on pre-defined chemical knowledge?\n\n2. Could you provide a comparative analysis of HARMAP's computational cost (e.g., FLOPs, memory usage, or training time) against key baselines to clarify the performance-to-cost ratio and practical scalability?\n\n3. Can you provide any qualitative analysis or case studies demonstrating that the learned Bondnec similarity scores S(i,j) align with known chemical bonding preferences, to validate the claim of capturing chemically meaningful connections?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T01:38:13",
"modification_date": "2025-11-12T11:09:22",
"review_url": "https://openreview.net/forum?id=qN0Il4dtGg¬eId=v4uA8CIxzm",
"license": "CC BY 4.0"
},
{
"id": "78nlETkm8m",
"forum": "qN0Il4dtGg",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3745/Reviewer_t2RJ",
"reviewer_name": "Reviewer_t2RJ",
"rating": 6,
"confidence": 2,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper is concerned about crystal property prediction and proposes a hierarchical atomic representation for materials property prediction (HARMAP). The main characteristics of HARMAP are (i) a hierarchical element knowledge tree (HEK-Tree), which encodes domain knowledge (the periodic table) as a hierarchical tree representation, allowing us to embed each atom in a hyperbolic space by considering its relationship to other similar atoms (in a learnable way) and (ii) a bond-aware connectivity (Bondnec), which constructs a graph from a crystal structure by considering not only atomic distances between pairs of atoms but their distances in the hyperbolic space.\n\nThe empirical studies show that the proposed method achieves the best predictive performance among others in a standard suite of benchmark tasks and that the proposed architecture is reasonable by ablation studies.",
"strengths": "It is reasonable to incorporate a taxonomy chemical elements into embeddings of atoms and bonds, instead of using one-hot vectors, for performance improvement. The resultant architecture to implement the idea is also sound to me. The empirical results are compelling to demonstrate the benefit of the proposed idea.",
"weaknesses": "Most of the details on how the authors run the experiments are not in the main part of the paper and are sent to the supplementary material. Since such information is important to understand whether the experiments were conducted in a fair way, I would like to see it in the main body.\n\nI'm mostly curious about how the hyperparameters are determined, specifically embedding dimensions and the numbers of Transformer blocks. In Appendix B.3, the authors provided these numbers used in the experiments, but as far as I am aware of, have not provided the information regarding how these numbers are determined.",
"questions": "I would like to ask the authors to clarify how the hyperparameters are determined in the experiment.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T16:34:31",
"modification_date": "2025-11-12T11:09:22",
"review_url": "https://openreview.net/forum?id=qN0Il4dtGg¬eId=78nlETkm8m",
"license": "CC BY 4.0"
}
] |
0hLuQAT3fV
|
https://openreview.net/forum?id=0hLuQAT3fV
|
Universal Image Immunization against Diffusion-based Image Editing via Semantic Injection
| 5
| 3.5
|
[
4,
4,
4,
8
] |
[
3,
4,
4,
3
] | 4
|
[
"Diffusion Model",
"AI Safety",
"Image Immunization",
"Adversarial Attack",
"Image Editing"
] |
Recent advances in diffusion models have enabled powerful image editing capabilities guided by natural language prompts, unlocking new creative possibilities. However, they introduce significant ethical and legal risks, such as deepfakes and unauthorized use of copyrighted visual content. To address these risks, image immunization has emerged as a promising defense against AI-driven semantic manipulation. Yet, most existing approaches rely on image-specific adversarial perturbations that require individual optimization for each image, thereby limiting scalability and practicality. In this paper, we propose the first universal image immunization framework that generates a single, broadly applicable adversarial perturbation specifically designed for diffusion-based editing pipelines. Inspired by universal adversarial perturbation (UAP) techniques used in targeted attacks, our method generates a UAP that embeds a semantic target into images to be protected. Simultaneously, it suppresses original content to effectively misdirect the model’s attention during editing. As a result, our approach effectively blocks malicious editing attempts by overwriting the original semantic content in the image via the UAP. Moreover, our method operates effectively even in data-free settings without requiring access to training data or domain knowledge, further enhancing its practicality and broad applicability in real-world scenarios. Extensive experiments show that our method, as the first universal immunization approach, significantly outperforms several baselines in the UAP setting. In addition, despite the inherent difficulty of universal perturbations, our method also achieves performance on par with image-specific methods under a more restricted perturbation budget, while also exhibiting strong black-box transferability across different diffusion models.
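As a rough illustration of the universal-perturbation setup described above, the sketch below optimizes a single perturbation, clipped to an L-infinity budget, to pull an image's features toward a target image (semantic injection) and away from its own original features (semantic suppression). It is a hypothetical toy, not the paper's method: the feature extractor `feat`, the cosine-similarity losses, and all hyperparameters are assumptions standing in for the cross-attention-level objectives the paper actually uses.

```python
import torch
import torch.nn.functional as F

def train_universal_perturbation(images, target_image, feat,
                                 eps=8 / 255, steps=1000, lr=1e-2, lam=1.0):
    """Toy universal perturbation with 'injection' and 'suppression' terms.

    Assumptions: `images` is a tensor of shape (N, C, H, W), `target_image` is
    (C, H, W), and `feat` is any differentiable encoder mapping image batches
    to feature vectors (a stand-in for the paper's cross-attention features).
    """
    delta = torch.zeros_like(images[0], requires_grad=True)   # single shared perturbation
    opt = torch.optim.Adam([delta], lr=lr)
    with torch.no_grad():
        z_target = feat(target_image.unsqueeze(0))            # semantic target embedding
    for _ in range(steps):
        i = torch.randint(len(images), ()).item()             # sample one training image
        x = images[i].unsqueeze(0)
        z_adv = feat(x + delta)
        z_src = feat(x).detach()
        inj = 1 - F.cosine_similarity(z_adv, z_target).mean() # pull toward target semantics
        sup = F.cosine_similarity(z_adv, z_src).mean()        # push away from source semantics
        loss = inj + lam * sup
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)                           # keep delta inside the budget
    return delta.detach()
```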
|
alignment, fairness, safety, privacy, and societal considerations
|
https://openreview.net/pdf?id=0hLuQAT3fV
| 2025-09-12T19:50:27
| 4
|
[
{
"id": "Cp6SNqZd08",
"forum": "0hLuQAT3fV",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4421/Reviewer_nGCo",
"reviewer_name": "Reviewer_nGCo",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper proposes a universal image immunization framework that protects images from malicious diffusion-based editing by applying a single, broadly effective adversarial perturbation. Unlike image-specific defenses, the method generates a universal adversarial perturbation (UAP) that embeds a semantic target and suppresses original content, thereby misdirecting the diffusion model’s attention and preventing faithful or unauthorized semantic modifications.",
"strengths": "- Research on anti-editing is meaningful and promising.\n- The proposed universal adversarial perturbation (UAP) demonstrates greater effectiveness compared to prior per-image optimization approaches.\n- Experimental results show that the proposed method achieves improved performance in several cases.",
"weaknesses": "- During the editing phase, does the proposed method need to append the target prompt (e.g., “Ronaldo”) to the editing prompt? If so, how can it guarantee that a malicious user would use that specific prompt? If not, how does the UAP maintain robustness across different editing prompts, given that it appears to be trained with a fixed target prompt?\n\n- How well does the UAP generalize to complex or lengthy editing prompts? Does its effectiveness degrade under more complicated prompt conditions?\n\n- The UAP is trained on 10,000 random image–prompt pairs. How does the size of this training set influence the robustness and generalization of the learned perturbation?\n\n- Since the primary goal is to defend against editing rather than to generate a target pattern, why is the target semantic injection loss necessary? Would using only the source semantic suppression loss suffice, and how would that affect performance?",
"questions": "Please refer to the weakness part above.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-25T15:59:05",
"modification_date": "2025-11-12T11:15:53",
"review_url": "https://openreview.net/forum?id=0hLuQAT3fV¬eId=Cp6SNqZd08",
"license": "CC BY 4.0"
},
{
"id": "vydGJ4hJSZ",
"forum": "0hLuQAT3fV",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4421/Reviewer_8CoU",
"reviewer_name": "Reviewer_8CoU",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The paper empirically proposes a universal image immunization method against diffusion-based editing by jointly optimizing semantic injection and semantic suppression losses. A single universal perturbation is trained to mislead diffusion models semantically while preserving visual quality. Extensive experiments demonstrate strong white-box and black-box defense across multiple diffusion models.",
"strengths": "- The paper proposes a universal, data-free image immunization framework that generalizes across diffusion models.\n- The method introduces a simple yet effective dual-loss design to achieve semantic-level defense.\n- The approach demonstrates strong transferability and robustness under both white-box and black-box settings.\n- The experiments cover multiple diffusion models and editing scenarios, showing consistent performance.",
"weaknesses": "- The paper presents an empirical approach with limited theoretical justification.\n- The authors do not provide a thorough discussion on why $\\mathcal{L}_\\text{inj}$ is effective in the cross-attention feature space or its theoretical justification, relying instead primarily on empirical validation.\n- The evaluation relies heavily on pixel and perceptual similarity metrics, despite the method's core focus on semantic injection and suppression; adding CLIPScore or Grounding DINO detection would better assess semantic alignment.\n- The study lacks visualization of training dynamics; plotting the evolution of semantic injection and suppression losses would help verify optimization stability and convergence.\n- The paper misses key related works on image immunization, such as attention-based EditShield [1] and diffusion latent attack [2].\n\n[1] Chen et. al. EditShield: Protecting Unauthorized Image Editing by Instruction-guided Diffusion Models, ECCV 2024 \n[2] Shih et. al. Pixel Is Not a Barrier: An Effective Evasion Attack for Pixel-Domain Diffusion Models, AAAI 2025",
"questions": "- Could the authors include CLIPScore or Grounding DINO detection in the main results to evaluate semantic alignment, and provide additional metrics in the revision for completeness?\n- When converting tensors back to images, clipping and quantization are applied. Could these operations break the attack by altering $\\delta$ effective direction or strength and thus reduce the semantic injection/suppression effect?\n- Could the two losses interfere or cancel each other out during optimization, given their opposite semantic objectives?\n- Could the authors provide training curves of total, injection, and suppression losses to illustrate optimization stability and convergence?\n\nI encourage the authors to strengthen the paper by addressing the weakness and questions in the rebuttal.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-19T12:09:37",
"modification_date": "2025-11-12T11:15:54",
"review_url": "https://openreview.net/forum?id=0hLuQAT3fV¬eId=vydGJ4hJSZ",
"license": "CC BY 4.0"
},
{
"id": "iarZaNpvvo",
"forum": "0hLuQAT3fV",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4421/Reviewer_S8F9",
"reviewer_name": "Reviewer_S8F9",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "The paper presents a framework that learns a single universal adversarial perturbation (UAP) to safeguard images from unauthorized text‑guided diffusion model editing. Unlike prior approaches that rely on image‑specific perturbations—limiting scalability and practicality—the proposed method employs one universal perturbation applicable to any image. By overwriting the original semantic content with a target semantic at the cross‑attention level, the approach effectively alters the resulting edits. Experimental results demonstrate that the proposed UAP not only outperforms existing baselines in universal settings but also achieves performance comparable to image‑specific perturbations.",
"strengths": "1. The proposed method enables universal protection using a single perturbation, making it significantly more practical and scalable compared to image-specific perturbations.\n2. The paper is clearly written and easy to follow, with well-structured methodology and presentation.\n3. The approach demonstrates broad applicability across diverse editing models, including Stable Diffusion v1.4 and v2.0, InstructPix2Pix, DiT, and inpainting pipelines.",
"weaknesses": "1. While the method aims to inject target semantics, it is unclear whether the perturbation truly captures the intended concept. For instance, in the *cow* example of Figure 3 (Ours), the generated image still depicts a cow. Also, the perturbation appears to preserve only the **structure** of the *Ronaldo* target image, rather than semantic attributes like gender or identity.\n2. The authors claim that text semantics are naturally fused into visual features at the cross-attention output level. However, textual semantics are also embedded within **attention map**—as used in prior works such as Lo et al. [1]—since the key vectors in the cross-attention mechanism are derived from the textual prompt. The novelty of operating on cross-attention outputs rather than attention maps may be overstated.\n3. The data-dependent UAP is trained on 10,000 randomly sampled LAION-2B-en image–text pairs and evaluated on 500 images spanning 10 object classes. It remains unclear whether the semantic suppression generalizes beyond the training distribution. In particular, if a new image contains semantics absent from the 10,000 training pairs, the UAP may exhibit reduced effectiveness.\n4. The proposed UAP is added to *generated* images before passing them through the diffusion model. However, this is not representative of typical editing pipelines, which usually operate on *real* images. The mismatch between training/deployment assumptions and real usage scenarios raises concerns about practical robustness.\n\n[1] Lo et al., Distraction is all you need: Memory-efficient image immunization against diffusion-based image editing, CVPR 2024.\n\nNote: Weaknesses 1-4 correspond directly to Questions 1-4.",
"questions": "1. In Section 5.2, the authors claim that the generated results reflect the injected *Ronaldo* semantics. If the target image were replaced with a different individual—such as a woman or someone with distinct facial attributes—would the resulting edits reflect high-level semantic changes (e.g., gender, identity) rather than merely structural features? An expanded ablation on target identity would help assess the generality and depth of the proposed semantic injection.\n2. The “Attention Map Attack” baseline generates perturbations by minimizing the alignment between the attention map and the original image semantics (Section 7.4). If this baseline were re-implemented using the same loss functions (Eq. 4 and 5), but applied to attention maps rather than cross-attention outputs, would it achieve comparable effectiveness to the proposed method? A direct comparison would clarify whether operating on cross-attention outputs offers a meaningful advantage over using attention maps.\n3. How does the proposed UAP perform on test images that contain semantics not seen during training? Additional experiments would help validate the generalization of semantic suppression beyond the 10,000 training pairs.\n4. All evaluations appear to apply the UAP to *generated* images. Has the method been tested on *real* images as inputs to the editing pipeline? Since most practical use cases involve real images, results on this setting would be valuable.\n5. In Figure 2(b), the attention map for the *Ronaldo* prompt appears to be as responsive as that of *people*, despite the claim in the Figure 2 caption that attention should be suppressed for the target prompt. Can the authors clarify this observation?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-17T13:22:40",
"modification_date": "2025-11-12T11:15:54",
"review_url": "https://openreview.net/forum?id=0hLuQAT3fV¬eId=iarZaNpvvo",
"license": "CC BY 4.0"
},
{
"id": "F0Pmzrxobs",
"forum": "0hLuQAT3fV",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4421/Reviewer_Hbgg",
"reviewer_name": "Reviewer_Hbgg",
"rating": 8,
"confidence": 3,
"soundness": 4,
"contribution": 3,
"presentation": 3,
"summary": "The paper introduces a universal adversarial perturbation approach for protecting images from diffusion-based editing. Instead of optimizing perturbations per image, it learns a single perturbation that can be applied universally. The method combines a semantic injection loss that aligns perturbed images with a target concept and a suppression loss that reduces the influence of original semantics, effectively disrupting unauthorized edits. Experiments show strong protection, cross-model generalization, and competitive performance even in data-free settings.",
"strengths": "1. The paper is clearly written and well organized, with intuitive figures and a logical presentation of ideas.\n\n2. The proposed framework is well motivated, and the introduction of the source semantic suppression loss is a novel and insightful component that strengthens the overall approach.\n\n3. The experiments are thorough and convincing, showing strong and consistent results across models and settings, including data-free and black-box scenarios.",
"weaknesses": "The comparison with *Semantic Attack* may not be fully fair, as the original method is not designed under any $ \\ell_2 $ or $ \\ell_\\infty $ perturbation constraint. Imposing such a bound changes its optimization behavior and could disadvantage it in this setting.",
"questions": "1. Have the authors explored how the visual structure of the chosen target (for example, purely geometric or black-and-white grid patterns instead of semantic objects like “Ronaldo” or “Tiger”) affects the resulting perturbation? Such structured patterns might yield more uniform attention disruption and stronger transferability.\n\n2. The method achieves strong performance under the universal constraint. If this constraint were relaxed—allowing limited image-specific adaptation—how might the performance change, and what strategies could further strengthen the performance in that less restricted setting?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-17T13:02:27",
"modification_date": "2025-11-12T11:15:55",
"review_url": "https://openreview.net/forum?id=0hLuQAT3fV¬eId=F0Pmzrxobs",
"license": "CC BY 4.0"
}
] |
|
3sJ4zKToW6
|
https://openreview.net/forum?id=3sJ4zKToW6
|
Consistent Low-Rank Approximation
| 6.666667
| 3.333333
|
[
4,
8,
8
] |
[
3,
2,
5
] | 3
|
[
"low-rank approximation",
"online algorithms",
"consistency",
"recourse"
] |
We introduce and study the problem of consistent low-rank approximation, in which rows of an input matrix $\mathbf{A}\in\mathbb{R}^{n\times d}$ arrive sequentially and the goal is to provide a sequence of subspaces that well-approximate the optimal rank-$k$ approximation to the submatrix $\mathbf{A}^{(t)}$ that has arrived at each time $t$, while minimizing the recourse, i.e., the overall change in the sequence of solutions. We first show that when the goal is to achieve a low-rank cost within an additive $\varepsilon\cdot||\mathbf{A}^{(t)}||_F^2$ factor of the optimal cost, roughly $\mathcal{O}\left(\frac{k}{\varepsilon}\log(nd)\right)$ recourse is feasible. For the more challenging goal of achieving a relative $(1+\varepsilon)$-multiplicative approximation of the optimal rank-$k$ cost, we show that a simple upper bound in this setting is $\frac{k^2}{\varepsilon^2}\cdot\text{poly}\log(nd)$ recourse, which we further improve to $\frac{k^{3/2}}{\varepsilon^2}\cdot\text{poly}\log(nd)$ for integer-bounded matrices and $\frac{k}{\varepsilon^2}\cdot\text{poly}\log(nd)$ for data streams with polynomial online condition number. We also show that $\Omega\left(\frac{k}{\varepsilon}\log\frac{n}{k}\right)$ recourse is necessary for any algorithm that maintains a multiplicative $(1+\varepsilon)$-approximation to the optimal low-rank cost, even if the full input is known in advance. Finally, we perform a number of empirical evaluations to complement our theoretical guarantees, demonstrating the efficacy of our algorithms in practice.
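The recourse objective defined above (the overall change in the sequence of subspaces, written in the reviews below as the sum over t of the squared Frobenius distance between consecutive projection matrices) can be illustrated with a small numpy sketch. This is not the paper's algorithm; it only shows the trivial recompute-from-scratch baseline that the reviewers contrast against, and the matrix sizes and random seed are arbitrary choices for illustration.

```python
import numpy as np

def top_k_projection(A, k):
    """Orthogonal projector onto the top-k right singular subspace of A."""
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    V = Vt[:k]                       # (k, d) top right singular vectors
    return V.T @ V                   # (d, d) projection matrix

def streaming_recourse(A, k):
    """Total recourse sum_t ||P_t - P_{t-1}||_F^2 when the best rank-k
    subspace is recomputed from scratch after every arriving row."""
    n, d = A.shape
    P_prev = np.zeros((d, d))
    total = 0.0
    for t in range(1, n + 1):
        P_t = top_k_projection(A[:t], k)
        total += np.linalg.norm(P_t - P_prev, "fro") ** 2
        P_prev = P_t
    return total

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
# The naive recompute-every-step strategy can incur recourse on the order of n*k,
# which is exactly what the paper's low-recourse algorithms aim to avoid.
print(streaming_recourse(A, k=3))
```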
|
optimization
|
https://openreview.net/pdf?id=3sJ4zKToW6
| 2025-09-19T05:52:21
| 3
|
[
{
"id": "G9M6d2dYmo",
"forum": "3sJ4zKToW6",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14297/Reviewer_ex4U",
"reviewer_name": "Reviewer_ex4U",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper studies the problem of low-rank approximation (LRA). Specifically given a matrix $A$, this work studies the problem of approximating $A$ with a matrix $AV^TV$, such that $||A-AV^TV||_F^2 \\leq (1+\\epsilon)||A-A_k||_F^2$ where $A_k$ is the best rank-$k$ approximation of $A$, and rows of $A$ arrive sequentially in time. This is a very widely studied problem. A primary contribution of the paper is the following problem: given rows of $A$ arrive sequentially over time, define measure called recourse computed as $||P_t - P_t-1||_F^2$ where $P_t$ is the orthogonal projection matrix corresponding to $V^TV$ at time $t$. This work studies LRA through the lens of recourse and demonstrates that -- 1) when the goal is to approximate $A$ with $\\epsilon$ additive error, an $O(k\\log(nd)/\\epsilon)$ recourse is feasible, 2) when the goal is to approximate $A$ with $1+\\epsilon$ multiplicative error, an $O(k^2\\text{poly}\\log(nd)/\\epsilon^2)$ recourse is feasible. This is further improved to $k^{3/2}\\text{poly}\\log(nd)/\\epsilon^2$ for matrices with integer entries that are bounded, and $k^{2}\\text{poly}\\log(nd)/\\epsilon^2$ when condition number is bounded. A lower bound of $\\Omega(k\\log(n/k)/\\epsilon)$ is also shown for $1+\\epsilon$ multiplicative approximation algorithms.",
"strengths": "- The problem setting is interesting, i.e., studying of the subspace corresponding to streaming updates and understand how subspace can differ for different algorithms is an interesting idea. Mostly because one can imagine having to reconstruct the approximation matrix again and again if the subspace is changing significantly (e.g., as stated for the Frequent directions method). \n\n- I have only glossed over the proofs, which are pretty simple, and believe they are correct. Given the authors present a lower bound to the problem, it helps us ground the theoretical upper bounds presented in this work. \n\n- I really appreciate the simple algorithms which helps maintain the approximation at time $t$. The algorithm basically checks importance of an incoming row by first identifying the bottom $\\sqrt{k}$ singular values among the top $k$ singular vectors. If these vectors have very low spectral contribution, they are \"disposable\" and so can be replaced by any incoming vector. \n\n- Good empirical evaluations help us understand how the algorithms presented here work in practice.",
"weaknesses": "- There is a significant body of work on rank-$k$ approximation algorithms. However, only frequent directions has been empirically compared against. I am surprised as why this is the case.\n\n- Most of the theoretical contributions are really derivative of prior work. While I really appreciate the problem setting, the contributions are really understanding how the subspace are drifting with time given the subspace approximation algorithm. \n\n- Algorithm 2 requires computing SVD at each round in the worst case. So while one may be easily able to reduce recourse, the run time of the algorithms grows with $tk^3$, which seems expensive!\n\n- For distributional shifts, just checking the bottom $\\sqrt{k}$ may not be enough, e.g., for windowed algorithms due to Braverman et. al. 2020, or the works of Musco-Musco, or Cohen et. al. on online leverage score sampling, one might need to re-evaluate samples which was heavy at some point and might become of low importance in future. What do we do then?",
"questions": "Please check the weaknesses section.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T04:49:50",
"modification_date": "2025-11-12T13:19:00",
"review_url": "https://openreview.net/forum?id=3sJ4zKToW6¬eId=G9M6d2dYmo",
"license": "CC BY 4.0"
},
{
"id": "Yx0pNuAbRL",
"forum": "3sJ4zKToW6",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14297/Reviewer_odAo",
"reviewer_name": "Reviewer_odAo",
"rating": 8,
"confidence": 2,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper studies the online low rank approximation problem. In this problem one is given a matrix $A\\in \\mathbb{R}^{n\\times d}$ with integer entries bounded by $M$ and whose rows $a_1,\\ldots, a_n$ arrive one by one. Let $A^{t}$ denote the matrix of the first $t$ rows at time $t\\in [n]$, the goal is to output a matrix $V^{t}\\in \\mathbb{R}^{k\\times n}$ such that $A^{t}(V^{t})^T V^{t}$ is a $1+\\epsilon$ approximation rank $k$ approximation to $A^{t}$ at every time $t\\in [n]$. In particular the paper studies \\emph{consistent} algorithms for online low rank approximation. More precisely the goal is to minimize the recourse of the algorithm measured as \n\n$$\\sum_{t=1}^n \\|P_A-P_B\\|_F^2$$\n\nfor $A = V_t$ and $B = V_{t-1}$ where $P_{V}$ is the orthogonal projection matrix of the subspace spanned by $V$. Thus a low recourse algorithm ensures that the subspace of low rank approximation does not change drastically on average over the stream. Note that recourse of $nk$ can be achieved trivially by computing the best rank $k$ approximation at each step from scratch.\n\nThe first result shown in the paper is an algorithm that achieves a recourse of $O((k/\\epsilon)\\log(ndM))$ but incurs an additional additive error $\\epsilon \\|A^{t}\\|_F^2$ at each step $t\\in [n]$. Furthermore they show that a recourse of $O((k/\\epsilon^2)\\log^2 n)$ assuming an online condition number of poly$(n)$ and no additive error. Finally for matrices with integer entries they also obtain improved bounds. On the negative side they prove a lower bound on the recourse of $\\Omega(n/\\epsilon \\log(n/k))$ for obtaining a $1+\\epsilon$ approximation at every time step by constructing a hard sequence of rows.",
"strengths": "The paper introduces a novel model for studying low rank approximation of consistency. Consistent and low recourse algorithms have been studied for other problems in data science thus making low rank approximation a natural problem to study from a theoretical perspective. Moreover the authors show good upper and lower bounds for low rank approximation in this model.",
"weaknesses": "The paper does not have many weaknesses but one is that although the problem has a natural theoretical motivation, it would be interesting for the authors to discuss more concrete practical motivations for studying low recourse algorithms for low rank approximation.",
"questions": "Although the authors very briefly discuss potential applications in feature engineering, it would be interesting to see if there are any concrete applications of low recourse algorithms for low rank approximation.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-26T22:44:45",
"modification_date": "2025-11-12T13:19:00",
"review_url": "https://openreview.net/forum?id=3sJ4zKToW6¬eId=Yx0pNuAbRL",
"license": "CC BY 4.0"
},
{
"id": "SiSxXPBsD0",
"forum": "3sJ4zKToW6",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14297/Reviewer_vaVm",
"reviewer_name": "Reviewer_vaVm",
"rating": 8,
"confidence": 5,
"soundness": 4,
"contribution": 3,
"presentation": 4,
"summary": "This paper studies low rank approximation in a streaming model, where in addition to standard goals of small space and update time, they also do not want the provided solution to change too much across the lifetime of the stream. This is modeled as \"recourse\" which is the sum of squared distances between subspaces at each step.",
"strengths": "Standard streaming subspace approximation algorithms like FrequentDirections and Ridge Leverage Score Sampling can have very large recourse as shown theoretically, and empirically on real data. That means they can bounce between solutions. \n\nThe algorithm is subtle yet simple. It is careful about when to update the estimate with extra care to not to change the subspace too much if it does not have to. It reminds me of distributed streaming algorithms (e.g., https://arxiv.org/abs/1404.7571) that try to minimize total communication of updates, but with focus on ensuring a stable answer. \n\nI think the recourse setting is natural and useful. It is a nice way to quantify stability of the sketch. \n\nA strength is that feels like a complete paper on this topic. It has a variety of upper bounds for additive and relative error, and shows lower bounds on recourse that asymptotically match the upper bounds. There are basic experiments that show that the algorithm is not just theoretical, but works in practice -- whereas baselines like FrequentDirections does not.",
"weaknesses": "Nothing to note.",
"questions": "None.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-26T04:24:26",
"modification_date": "2025-11-12T13:19:01",
"review_url": "https://openreview.net/forum?id=3sJ4zKToW6¬eId=SiSxXPBsD0",
"license": "CC BY 4.0"
}
] |
|
OyIJvyyB3R
|
https://openreview.net/forum?id=OyIJvyyB3R
|
LLM2Fx-Tools: Tool Calling for Music Post-Production
| 5.5
| 3.5
|
[
4,
8,
6,
4
] |
[
3,
3,
4,
4
] | 4
|
[
"Music Post Production",
"Fx Chain Generation",
"Tool Calling"
] |
This paper introduces LLM2Fx-Tools, a multimodal tool-calling framework that generates executable sequences of audio effects (Fx-chain) for music post-production. LLM2Fx-Tools uses a large language model (LLM) to understand audio inputs, select audio effects types, determine their order, and estimate parameters, guided by chain-of-thought (CoT) planning. We also present LP-Fx, a new instruction-following dataset with structured CoT annotations and tool calls for audio effects modules. Experiments show that LLM2Fx-Tools can infer an Fx-chain and its parameters from pairs of unprocessed and processed audio, enabled by autoregressive sequence modeling, tool calling, and CoT reasoning. We further validate the system in a style transfer setting, where audio effects information is transferred from a reference source and applied to new content. Finally, LLM-as-a-judge evaluation demonstrates that our approach generates appropriate CoT reasoning and responses for music production queries. To our knowledge, this is the first work to apply LLM-based tool calling to audio effects modules, enabling interpretable and controllable music production where users can incorporate their own audio plugins.
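To give a concrete sense of what an "executable Fx-chain" produced by tool calls could look like, here is a toy Python sketch. The effect names, parameter schema, and implementations below are simplified placeholders invented for illustration; they are not the LP-Fx format or the audio effects modules used in the paper.

```python
import numpy as np

# Toy, simplified effects standing in for real audio plugins (placeholders only).
def gain(x, db=0.0):
    return x * (10.0 ** (db / 20.0))

def soft_clip(x, drive=1.0):
    return np.tanh(drive * x) / np.tanh(drive)

def lowpass(x, alpha=0.2):
    y = np.empty_like(x)
    y[0] = x[0]
    for i in range(1, len(x)):
        y[i] = alpha * x[i] + (1 - alpha) * y[i - 1]   # one-pole smoothing filter
    return y

EFFECTS = {"gain": gain, "soft_clip": soft_clip, "lowpass": lowpass}

def apply_fx_chain(audio, fx_chain):
    """Apply a sequence of tool calls [{'effect': name, 'params': {...}}, ...] in order."""
    for call in fx_chain:
        audio = EFFECTS[call["effect"]](audio, **call.get("params", {}))
    return audio

# Hypothetical structured output of the kind a tool-calling LLM could emit.
chain = [
    {"effect": "gain", "params": {"db": -3.0}},
    {"effect": "lowpass", "params": {"alpha": 0.3}},
    {"effect": "soft_clip", "params": {"drive": 2.0}},
]
audio = np.sin(2 * np.pi * 220 * np.arange(0, 1, 1 / 16000))
processed = apply_fx_chain(audio, chain)
```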
|
LLM2Fx-Tools is a framework that uses a multimodal LLM to automatically generate executable audio effect chains (as tools), chain-of-thought reasoning, and natural language responses.
|
applications to computer vision, audio, language, and other modalities
|
https://openreview.net/pdf?id=OyIJvyyB3R
| 2025-09-19T13:42:11
| 4
|
[
{
"id": "B7fQjc5nan",
"forum": "OyIJvyyB3R",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16141/Reviewer_Rbd9",
"reviewer_name": "Reviewer_Rbd9",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper applies existing LLM tool calling techniques to audio effects chain generation. The system uses chain-of-thought to predict effect sequences from audio. The authors create a 101K synthetic dataset LP-Fx generated by Gemini 2.5. In my opinion, the work is mostly an application of existing techniques to a new domain without significant technical innovation.",
"strengths": "1. First work applying structured tool calling to audio effects chains\n2. Comprehensive evaluation across multiple metrics",
"weaknesses": "1. The paper misuses terminology. \"Audio style transfer\" has established meaning in audio processing literature (timbre/texture transformation). This work only does audio effects parameter transfer, which is much narrower. This creates confusion with existing work and is misleading.\n2. Limited technical novelty. The method is standard multimodal LLM fine-tuning: audio encoder -> adapter -> LLM with LoRA. This is direct application of existing techniques without methodological contribution.\n3. No human evaluation despite claims about \"interpretability\" and \"controllable music production\". All evaluation is automatic metrics or LLM-as-a-judge, which has known reliability issues.\n4. Missing details: How does the model handle effects outside the 9 trained types? The paper claims \"users can incorporate their own audio plugins\" but provides no evidence.\n5. The work is mostly experimental validation that LLM tool calling works for this task. The technical contribution is limited.",
"questions": "Recommand using the term \"audio effects (parameter) transfer\" instead of \"audio style transfer\"\n\nGemini 2.5 Flash gets MAE 0.32 vs your 0.23, but it achieves 78% accuracy vs your 80%. Why does such a large model fail so badly on parameters? Does it indicate setup issue?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-04T11:53:54",
"modification_date": "2025-11-12T13:45:07",
"review_url": "https://openreview.net/forum?id=OyIJvyyB3R¬eId=B7fQjc5nan",
"license": "CC BY 4.0"
},
{
"id": "DNjRsVuCU3",
"forum": "OyIJvyyB3R",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16141/Reviewer_CSNR",
"reviewer_name": "Reviewer_CSNR",
"rating": 8,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 4,
"summary": "The paper presents LLM2Fx-Tools, a novel tool-calling framework which for a given set of audio inputs, provides executable audio effects sequences (Fx-chain), with appropriate CoT reasoning and responses . The paper also introduces LP-FX, a new instruction following dataset with CoT annotations and tools calls for audio effects. The authors provide experimental validation for their approach along with a demo page for subjective verification.",
"strengths": "Originality: The paper's key novelty lies in formulating Fx-chain estimation as a LLM-based tool call problem. The autoregressive modeling for LLMs is able to learn the sequential order of audio effect calls as opposed to systems only based on audio features. \n\nQuality: The paper has detailed experiments around the three evaluation tasks, reverse engineering to show the model can predict tool-chain for paired audios, blind style transfer to show the generalization capability to unseen audios, and natural language language generation to showcase interpretability. Across all the tasks, LLM2FX-Tools results are strong as compared to the baselines. The authors conduct ablations to show the importance of optimization decisions (CoT, NTL, MST). \n\nClarity: The paper is well written with clear notation and figures, with appendix covering all necessary details for dataset generation, evaluation and LLM prompting.\n\nSignificance: LLM2FX-Tools framework treats the audio effect modules as external non-differentiable tools, which makes the framework flexible to diverse real world scenarios.The authors also present a LP-FX dataset with CoT annotations and tool calls, which is beneficial for future research",
"weaknesses": "For the reverse engineering task, the strongest baseline is Multi-task regression, which comes close even without relying on the ordering of Fx-chain, while the LLM is learning that information. The authors can consider adding a pairwise-ordering loss for the 9 audio effects for the multi-task baselines.\n\nFor the style transfer task, the style of the output appears to be mixed between the input and reference audio while listening subjectively to the demo examples. A comparison with differential audio effects style transfer baseline would be quite important to see (https://arxiv.org/pdf/2207.08759). The objective evaluation can benefit on a larger set than 100 test samples\n\nAs covered in the limitation section, the paper relies on Fx-normalization and Fx-removal preprocessing, while ideally, they should be modeling as part of the tool-calling framework. Experimental validation is limited to single instruments, datasets are relatively smaller in size with ~2k tracks.",
"questions": "For equation 4, $N$ corresponds to the sequence length or training examples? Previous equation 3 has $t$, which is over the sequence, while for $t$ in equation 4 represents the upper range of the number token. It will be helpful if we can improve the notation a little here.\n\nPlease consider having a subjective evaluation and a stronger baseline like (https://arxiv.org/pdf/2207.08759) for the style transfer task. \n\nGemini 2.5 Pro is used both for dataset generation judge in 3.2, and natural language evaluation judge for table 4. Could we use a different judge to remove the bias for this case?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T08:17:05",
"modification_date": "2025-11-12T13:45:07",
"review_url": "https://openreview.net/forum?id=OyIJvyyB3R¬eId=DNjRsVuCU3",
"license": "CC BY 4.0"
},
{
"id": "nsPS8cJzfi",
"forum": "OyIJvyyB3R",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16141/Reviewer_A8mk",
"reviewer_name": "Reviewer_A8mk",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper presents LLM2Fx-Tools, a tool-calling framework that generates sequences of audio effects (Fx-chains) for music post-production. The authors also introduce a new dataset, LP-Fx, to support this task. The topic is novel and of clear interest to the audio and music research community. The proposed system builds upon Fx-Encoder++ and fine-tunes Qwen-4B to achieve the goal of automatic Fx-chain generation.",
"strengths": "- The proposed approach to Fx-chain estimation is novel. The integration of Chain-of-Thought (CoT) reasoning into the training framework is also interesting.\n\n- The problem is clearly defined and well motivated.\n\n- The methodology for dataset creation is clearly described and systematically organized.",
"weaknesses": "- In Figure 1, the meaning of FxNorm is unclear.\n\n- In Figure 2, why does e_{SEP} consist of two tokens?\n\n- Below Equation (1), what is N? What is param_n?\n\n- In Section 2.1, second paragraph, the authors mention “handle both tasks.” What exactly are the two tasks?\n\n- In Section 2.1, the term “secondary task” is introduced but not clearly defined.\n\n- In Section 2.2 (Audio Encoder), why was Fx-Encoder++ chosen over other possible encoders? How might different audio encoders influence system performance?\n\n- The writing in Section 2.3 (Number Token Loss) and Equation (4) needs improvement for clarity. The statement “a key problem with Cross Entropy is that it treats all incorrect predictions equally” is vague—please elaborate on how this issue is addressed in your proposed loss. Overall, Subsection 2.3 and Equation (4) are difficult to follow.",
"questions": "- Will the training dataset and training code be released for reproducibility in future work?\n\n- Will the evaluation dataset and evaluation code also be made publicly available?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T22:01:42",
"modification_date": "2025-11-12T13:45:07",
"review_url": "https://openreview.net/forum?id=OyIJvyyB3R¬eId=nsPS8cJzfi",
"license": "CC BY 4.0"
},
{
"id": "AhiFImc0KX",
"forum": "OyIJvyyB3R",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16141/Reviewer_Cr7x",
"reviewer_name": "Reviewer_Cr7x",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper proposes a new framework (including a model, dataset, and overall methodology) for music post-production based on a multimodal LLM. This is evaluated on inferring an Fx-chain, but also on style transfer and using the LLM as a judge.",
"strengths": "* This is a relatively unexplored application domain in terms of using multimodal LLMs for music post-production tasks.\n* The overall choice of models, the task definition, training process, dataset creation, and evaluation methodology are all appropriate and technically sound.",
"weaknesses": "* My main source of criticism for this paper is that this work overall uses established AI methods for a new application. There is little to no AI innovation taking place, and I feel that this work would be more suitable for a venue specializing in audio production or audio engineering (e.g. AES conferences or conventions, ICASSP, or DAFx). I do not see any compelling evidence for inclusion in ICLR.",
"questions": "As stated above, I fully agree with the design choices made by the authors in terms of methodology, problem setting, evaluation, and the new dataset based on MedleyDB. The paper is also well structured and well written, and I also appreciate the inclusion of a section on Limitations, which is not something always present in ICLR submissions. \n\nMy only comment is the one stated above, on whether is ICLR the most suitable venue for a work which does not offer any innovation in AI, but rather uses established AI methods with some minor modifications as to support a research question and problem directly situated in the field of audio engineering. As such I might recommend this paper on being marginally out of scope of ICLR.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T00:08:45",
"modification_date": "2025-11-12T13:45:08",
"review_url": "https://openreview.net/forum?id=OyIJvyyB3R¬eId=AhiFImc0KX",
"license": "CC BY 4.0"
}
] |
rcsZNV9A5j
|
https://openreview.net/forum?id=rcsZNV9A5j
|
Flash Multi-Head Feed-Forward Network
| 5
| 3.75
|
[
6,
4,
4,
6
] |
[
3,
4,
4,
4
] | 4
|
[
"Machine Learning Systems",
"Machine Learning",
"Software-Hardware Codesign",
"Natural Language Processing",
"Transformer",
"Deep Learning",
"Model Architecture"
] |
We explore Multi-Head FFN (MH-FFN) as a replacement of FFN in the Transformer architecture, motivated by the structural similarity between single-head attention and FFN. While multi-head mechanisms enhance expressivity in attention, naively applying them to FFNs faces two challenges: memory consumption scaling with the head count, and an imbalanced ratio between the growing intermediate size and the fixed head dimension as models scale, which degrades scalability and expressive power. To address these challenges, we propose Flash Multi-Head FFN (FlashMHF), with two key innovations: an I/O-aware fused kernel computing outputs online in SRAM akin to FlashAttention, and a design using dynamically weighted parallel sub-networks to maintain a balanced ratio between intermediate and head dimensions. Validated on models from 128M to 1.3B parameters, FlashMHF consistently improves perplexity and downstream task accuracy over SwiGLU FFNs, while reducing peak memory usage by 3-5x and accelerating inference by up to 1.08x. Our work establishes the multi-head design as a superior architectural principle for FFNs, presenting FlashMHF as a powerful, efficient, and scalable alternative to FFNs in Transformers.
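For readers following the discussion of memory costs in the reviews below, this is a minimal sketch of a naive multi-head FFN: the feature dimension is split into heads, each head runs its own small FFN, and the outputs are concatenated, which materializes a (B, L, H, d_ff) intermediate tensor. It is not FlashMHF's fused kernel or its weighted sub-network design; the dimensions, initialization, and ReLU activation are assumptions made only for illustration.

```python
import torch
import torch.nn as nn

class NaiveMultiHeadFFN(nn.Module):
    """Headwise-split FFN: d_model is divided into H heads of size d_h,
    each with its own up/down projection. Illustrative only."""
    def __init__(self, d_model=512, n_heads=8, d_ff=256):
        super().__init__()
        assert d_model % n_heads == 0
        self.h, self.d_h = n_heads, d_model // n_heads
        self.w_up = nn.Parameter(torch.randn(n_heads, self.d_h, d_ff) / self.d_h ** 0.5)
        self.w_down = nn.Parameter(torch.randn(n_heads, d_ff, self.d_h) / d_ff ** 0.5)

    def forward(self, x):                        # x: (B, L, d_model)
        B, L, _ = x.shape
        xh = x.view(B, L, self.h, self.d_h)      # headwise split
        # (B, L, H, d_ff): this intermediate tensor is what an I/O-aware,
        # tile-wise kernel would avoid writing out to global memory.
        hidden = torch.relu(torch.einsum("blhd,hdf->blhf", xh, self.w_up))
        out = torch.einsum("blhf,hfd->blhd", hidden, self.w_down)
        return out.reshape(B, L, self.h * self.d_h)   # headwise concat

x = torch.randn(2, 16, 512)
print(NaiveMultiHeadFFN()(x).shape)              # torch.Size([2, 16, 512])
```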
|
We propose a novel multi-head FFN that achieves better transformer model performance while using 3-5x less memory and running 1.00-1.08x faster than standard SwiGLU FFNs.
|
unsupervised, self-supervised, semi-supervised, and supervised representation learning
|
https://openreview.net/pdf?id=rcsZNV9A5j
| 2025-09-16T16:13:44
| 4
|
[
{
"id": "TygVX9zSRX",
"forum": "rcsZNV9A5j",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7175/Reviewer_i2pJ",
"reviewer_name": "Reviewer_i2pJ",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This paper proposes FlashMHF, which is a multi-head feed-forward networks (FFNs) for Transformers. Motivated by the structural similarity between single-head attention and FFNs, the authors identify two key challenges in current MHF: memory explosion and scaling imbalance. FlashMHF solves these problems by pairing a scaled-balanced parallel FFN subnetworks designed with a high-efficiency, IO-aware kernel. Experiments on models from 128M to 1.3B parameters show improvements in perplexity and downstream tasks, with 3-5x memory reduction and up to 1.08x inference speedup.",
"strengths": "The motivation of the paper is well justified with two problems in naive multi-head attention. There are proper ablations such as head dimensions and model scales, and downstream task evaluations are standard. The idea is straightforward by using sub-networks to group different heads to solve the problems, yet results are pretty impressive.",
"weaknesses": "1. In section 3.2.1 the authors say their FlashMHF functions Luke a dense MoE, however, there is no direct comparison against dense MoE architecture. \n2. There is no ablations for “Flash”, so it’s hard to isolate memory savings from the architectural change and the kernel optimization. \n3. Lack of large scale experiments to verify the scaling effect - largest model size is 1.3B.\n4. About presentation, Figure 3a doesn’t show multihead which is confusing. Also, the biggest innovation of it seems to come from MoE, while the title is a bit misleading, “mixture of dense multi-head FFN experts” might be better cover what the core idea is.",
"questions": "Multi-head needs to be concat so we do need to materialize the full tensor. In section 3.2.2 it says “The key to solving the memory explosion lies in the multi-head design itself” seems wrong, shouldn’t it be in the expert design, because we can do the weighted average accumulation?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T17:26:11",
"modification_date": "2025-11-12T11:48:53",
"review_url": "https://openreview.net/forum?id=rcsZNV9A5j¬eId=TygVX9zSRX",
"license": "CC BY 4.0"
},
{
"id": "idLqumvNwL",
"forum": "rcsZNV9A5j",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7175/Reviewer_B4B2",
"reviewer_name": "Reviewer_B4B2",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "This paper proposes FlashMHF, which introduces the multi-head mechanism (in attention) into the Feed-Forward Network (FFN) module while balancing performance scalability and implementation efficiency. The proposed design addresses two key issues in naïve multi-head FFNs (i.e., scaling imbalance and memory explosion), by decomposing parallel FFN subnetworks and implementing an I/O-aware flash kernel. Experimental results on small- and medium-scale models demonstrate that FlashMHF outperforms the de facto SwiGLU baseline in language modeling tasks, while significantly reducing memory usage.",
"strengths": "1. The paper is well-motivated and clearly written. It identifies two key challenges of multi-head FFNs and proposes corresponding solutions, which are empirically validated.\n2. FlashMHF achieves lower PPL and better downstream performance than SwiGLU and other baselines. The architectural design choices are well-supported by effective ablation studies, including the multi-head mechanism, SwiGLU component, and subnetwork structure.\n3. lt is implemented with a kernel design analogous to FlashAttention, ensuring the feasibility of training large-scale language models efficiently.",
"weaknesses": "1. **Source of subnetwork advantages.**\nThe authors claim that the benefit of the subnetwork design mainly arises from a more balanced expansion ratio. However, for a given head, the parallel subnetwork computation essentially differs from a dense FFN only by an additional **blockwise gating** applied to intermediate activations. When concatenated, this does not effectively control the expansion ratio and finally increases by $d_{model}/d_h$ compared to a standard SwiGLU. I suspect the improvement stems from added nonlinearity (gating with normalization) rather than from the parallel sub-net. In other words, applying a similar gating mechanism to a standard SwiGLU might also yield certain loss improvement (as the experiments show, the standalone multi-head design brings no clear advantage at larger scales).\n\n2. **Fairness of speed evaluation.**\nThe speed comparison appears somewhat unfair. To match parameter counts, the authors add four extra layers (1/5 of total) for baseline; but deeper networks are inherently slower due to layer-wise serialization, whereas **increasing width** would be a fairer adjustment. Moreover, the attention computation also scales with depth, thus latency improvements only become apparent at longer sequence lengths (as shown in Fig. 7b).\n\n3. **Memory evaluation setup.**\nThe memory comparison setup should be clarified. SwiGLU can also be easily adapted to a flash kernel, and many frameworks **fuse activation functions** to reduce memory overhead. It is unclear whether the authors’ implementation accounts for these optimizations. Considering that modern LLM training almost universally employs **gradient checkpointing**, FFN intermediate activations are typically recomputed rather than stored, which should be reflected in a more realistic baseline comparison.",
"questions": "1. What is the specific implementation of SwiGLU used in the efficiency evaluation? Is activation recomputation (gradient checkpointing) applied during measurement?\n2. In the GLU formulation, you define $\\mathbf{Q}=\\mathbf{X}$ (Eq. 3). However, in the multi-head FFN definition, a separate projection $\\mathbf{W}_{in}$ is introduced to obtain the query (Eq.10). Is this design choice be empirically validated as necessary?\n3. Compared to PKV, the activation function used in PAttention [1] might serve as a more solid baseline for comparison.\n4. Given that most sota Transformer architectures now adopt MoE designs, how do the authors view the compatibility and potential integration of FlashMHF with MoE architectures?\n\n[1] TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters. 2024.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T04:03:09",
"modification_date": "2025-11-12T11:48:53",
"review_url": "https://openreview.net/forum?id=rcsZNV9A5j¬eId=idLqumvNwL",
"license": "CC BY 4.0"
},
{
"id": "DiDcnXvEMd",
"forum": "rcsZNV9A5j",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7175/Reviewer_p8ui",
"reviewer_name": "Reviewer_p8ui",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The paper proposes FlashMHF, a replacement for standard FFNs in Transformer architectures. The core idea is to mirror the Multi-Head design of Attention also in the FFNs implementation. The paper however warns that a naive adaptation incurs scaling issues, both in terms of increasing memory consumption and expressive power degradation. The authors address both of these issues by carefully prescribing how the intermediate activations dimension should scale with model size, and by implementing a fused kernel for FlashMHF which avoids materialising intermediate tensors. Results show how substituting the FlashMHF component with FFNs can boost performance (both on PPL, and downstream tasks evaluations taken from lm-eval-harness), while simultaneously reducing peak memory utilisation, and slightly improving latency.",
"strengths": "- The main motivations behind the choice of architecture modifications are justified reasonably well\n- The analysis is convincing, and the experiments conducted overall complete (although some results could be presented better)",
"weaknesses": "- Novelty is limited: both core methodologies (mirroring MH Attention and improving kernel application via tiling) have already been proposed",
"questions": "__On Novelty__:\nAs I mentioned above, I find the novelty aspect of the paper rather limited. As you yourselves correctly point out, the structural symmetry between sequence-wise Attention and feature-wise FFN (which acts as main justification behind your work) has already been illustrated; the proposal to split FFNs in a multi-head fashion was already (granted, partly) investigated in MH-MoE; the tile-wise implementation of your fused kernel is directly inspired by FlashAttention, and is at the core of the design of efficient parallelisation of MMMs. The most relevant novelty is then given by the proposed re-scaling of the size of the internal components of the MH FFN. I still appreciate the overall execution, but I find this limits the contribution of the paper.\n\n__On Fig7__:\nThe presentation of the results in Sec4.3, and more specifically in Fig7, should be heavily revised, for a number of reasons:\n- From what you write in L415, your “comparison uses a 20-layer FlashMHF and MH-FFN against a 24-layer SwiGLU baseline”, so I’m understanding you’re considering memory consumption and latency in a *forward pass through the whole architecture*, including both FFN/FlashMHF and Attention layers? I believe at this stage it would be more relevant instead to have a direct comparison between the *single* FlashMHF / FFN layer, so to properly identify the improvements introduced by your proposed modification (as the Attention layer is the same in both cases, I take it). To be clear: I do appreciate the result you report (ultimately, the “weight” of the overall architecture is what practitioners mostly care about), but the presence of Attention does dirty the relevant metric. Notice this should play in your favour, too, in that the memory / speedup gains should be more marked. If instead I misunderstood, and you’re considering just FFN/FlashMHF layers, please clarify this in the text.\n- What is the deal with the sequence lengths picked? I was expecting orderly powers of 2, which would make identifying the O(L) trend straightforward at glance. Also, please use a ylog scale, for the same reason\n- Moreover, why picking sequence lengths in the first place? Since you’re focusing on the FFN layer (which applies a perfectly sequence-parallel operation, and acts purely along the feature dimensions), then a scaling trend with respect to feature dimension would be much more relevant, in my opinion. What you’re effectively reporting here is the scaling trend of Attention. Again, it’s not like this result is not useful per se, but the way it’s presented makes it harder to isolate the contribution of your own component, which should be the focus of this section.\n- Finally, and perhaps most importantly, the comparison is not entirely fair: if I understood correctly, you’re using an unoptimised version of SwiGLU (which unnecessarily materialises intermediate tensors) to compare against your own fused kernel for FlashMHF. How much of the gains you’re seeing are due to the tiled implementation of the kernel? Because that same solution could be easily applied to SwiGLU as well, I reckon.\n\n__On Gating__:\nIn L200-218 you describe your chosen per-head expert aggregation mechanism. There is a number of different ways one could go on about aggregating both within and across heads: have you experimented with different methods? Can you expand on the reasoning behind this specific choice? 
Compared to the remainder of the paper, this section is lacking some justifications.\n\n\n__Minor__:\n- In L247 you write: “we synchronize the hyper-parameter settings for the optimizer across all models”. What does this mean? I’m expecting, say, optimal LR’s to vary across architectures, at least in principle. Are you not performing any hyper-parameter sweep whatsoever? And if you’re doing it, are you picking the best for *which* architecture exactly?\n- You’re going down the route of making the FFN more akin to Attention; but there is also the “dual” approach of making attention more akin to FFNs, as explored in “MLP-Mixer: An all-MLP Architecture for Vision”. I don’t think it makes sense to explicitly add a comparison with this architecture, but I would at least mention it, as I believe it’s relevant. Moreover, I was quite surprised to see that there isn’t much work which just goes all the way and substitutes FFN with component-wise attention. Apart from MH-MoE, I could only find “DaViT: Dual Attention Vision Transformers” (again, only applied to vision).\n\n\n__Grammar / Rewording / Formatting__:\n- L51 “we analyse the …, a straightforward … and identify” -> the clause is breaking the flow. Maybe “we analyse the … (a straightforward … ), and identify…”\n- L62 analogous -> analogousLY\n- L89 the equation is hanging: consider prepending something like “We consider the parameters: ”\n- L107 remains -> reTains\n- Eq(1,2,3,…) I think you’re misusing the equivalent-by-definition / delta-equivalent (\\triangleq) symbol. The defined-as symbol (\\eqqcolon) would be much better indicated here, imho\n- L115 define headwise split -> define THE headwise split? Define headwise split AS? (Similarly for headwise concatenation in L121)\n- L119 this operation split -> splitS\n- L120 into $d_h\\times H$…sub tensors? parts? blocks?\n- L131 to overcome these challenges -> which challenges? I reckon it refers to the “practical limitations” above, but it’s rather vague\n- L473: write -> store? Write … to memory?\n- L474: incorporates -> incorporate",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T22:00:24",
"modification_date": "2025-11-12T11:48:54",
"review_url": "https://openreview.net/forum?id=rcsZNV9A5j¬eId=DiDcnXvEMd",
"license": "CC BY 4.0"
},
{
"id": "CtOi7cbV2E",
"forum": "rcsZNV9A5j",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7175/Reviewer_kH78",
"reviewer_name": "Reviewer_kH78",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 4,
"summary": "Motivated by the structural similarity between single-ahead attention and a feed-forward network (FFN), the paper explores multi-head FFNs. To account for the increased memory consumption and other issues, the authors propose a novel architecture, FlashMHF, inspired by FlashAttention which also dynamically weights parallel sub-networks. They find for small models that their design improves perplexity and task accuracy whilst reducing peak memory usage and inference time vs a SwiGLU FFN. This suggests that FlashMHF might be a powerful new architectural component that could replace the FFN in existing transformer architectures.",
"strengths": "1. Good empirical results on the models and tasks tested compared to a standard baseline, clear improvements across a range of downstream tasks\n2. Mathematical exposition of the preliminaries and method is clear and well-written\n3. The method is a satisfying synthesis of existing ideas and innovations in other aspects of the transformer architecture to improve the FFN block\n4. The GPU memory scaling of the proposed FFN architecture is smaller than that of a typical FFN",
"weaknesses": "1. The paper proposes a core architectural innovation for LLMs, but only tests on very small models (<=1.3B parameters).\n2. Three model sizes are tested but they are not plotted together / compared directly so it's not clear how improvements scale with size. There's limited empirical evidence that we should expect the accuracy / perplexity advantages of this architecture to improve with scale, rather than diminish.\n3. It's not clear why scaling imbalance, $d_{ff}/d_h$, is an issue, as stated on line 056. The reference given on line 057 does not address this since neural scaling laws assume normal FFNs, and does not investigate multi-head FFNs. The discussion and evidence given in 3.1 and 4.1 seemingly only addresses one way of scaling the model. There are unstated assumptions in the paper relative to prior work about what ratios are important, and how one would scale a model with multi-head FFNs. To make a claim about these ratios scaling poorly and causing issues, one needs to provide evidence that any way of scaling them would lead to performance degradation relative to using single-head FFNs. Otherwise it can just be argued that scaling them in a different way might resolve this problem naturally, without need for more complex architectures. More explicitly put, for the Naive multi-head FFN baseline you make assumptions such as \"the per-head width is typically kept fixed\" (line 180), and then show this is bad. Why should one keep this fixed then? Why not just scale things in a different way? Additionally, why is it correct to equate the ratio $d_{ff}/d_{model}$ in normal SwiGLU designs with the ratio $d_{ff}/d_h$? Saying this latter ratio is outside of the optimal values found for the former ratio in prior work tells us nothing without additional evidence or reasoning backing up the validity of this comparison.\n4. Framing something that scales linearly as you scale up the model as an \"explosion\" is disingenuous. Typically things are framed as explosions when they scale exponentially. It is not convincing that the memory requirements of the naive multi-head FFN or standard FFN are a critical issue.\n5. You do not conduct enough ablations for the claims on lines 338-343 to be valid.\n6. It's not clear that inference latency reduction results are statistically significant\n7. All plots are given with training steps on the x-axis, not wall clock time. It's unclear how the proposed architecture affects training time.",
"questions": "1. Why does an imbalanced ratio between intermediate FFN size and FFN head dimension degrade scalability and expressive power? (See weakness 3 for related critique and questions)\n2. Why should multiple FFN heads split up the model dimension, and not each use the whole thing, as would be analogous to multi-head attention? Obviously this would lead to greater computation requirements, but perhaps also better performance? It would be nice to see this investigated, though this is a very minor point.\n3. How does the parameter count of FlashMHF scale and compare to a standard FFN?\n4. Does the proposed architecture increase training time for a fixed number of training steps?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-27T23:46:54",
"modification_date": "2025-11-12T11:48:56",
"review_url": "https://openreview.net/forum?id=rcsZNV9A5j¬eId=CtOi7cbV2E",
"license": "CC BY 4.0"
}
] |
eS4MAmmCHy
|
https://openreview.net/forum?id=eS4MAmmCHy
|
PEL-NAS: Search Space Partitioned Architecture Prompt Co-evolutionary LLM-driven Hardware-Aware Neural Architecture Search
| 3.5
| 4
|
[
4,
4,
4,
2
] |
[
4,
4,
4,
4
] | 4
|
[
"Large Language Model",
"Hardware-aware",
"Neural Architecture Search"
] |
Hardware-Aware Neural Architecture Search (HW-NAS) requires joint optimization of accuracy and latency under device constraints.
Traditional supernet-based methods require multiple GPU days per dataset. Large Language Model (LLM)-driven approaches avoid training a large supernet and can provide quick feedback, but we observe an exploration bias: the LLM repeatedly proposes neural network designs within a limited search space and fails to discover architectures across different latency ranges in the whole search space. To address this issue, we propose PEL-NAS: a search space Partitioned, architecture prompt co-Evolutionary and LLM-driven Neural Architecture Search that can generate neural networks with high accuracy and low latency at reduced search cost. Our proposed PEL-NAS has three key components: 1) a complexity-driven partitioning engine that divides the search space by complexity to enforce diversity and mitigate exploration bias; 2) an LLM-powered architecture prompt co-evolution operator, in which the LLM first updates a knowledge base of design heuristics based on results from the previous round, then performs a guided evolution algorithm on architectures with prompts that incorporate this knowledge base. Prompts and designs improve together across rounds, which avoids random guesswork and improves efficiency; 3) a zero-cost predictor to avoid training a large number of candidates from scratch. Experimental results show that on HW-NAS-Bench, PEL-NAS can achieve overall higher HV, lower IGD, and up to 54% lower latency than baselines at similar accuracy. Meanwhile, the search cost drops from days to minutes compared with traditional supernet baselines.
|
infrastructure, software libraries, hardware, systems, etc.
|
https://openreview.net/pdf?id=eS4MAmmCHy
| 2025-09-18T03:16:21
| 4
|
[
{
"id": "r5WN4tP0vh",
"forum": "eS4MAmmCHy",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9721/Reviewer_ygWA",
"reviewer_name": "Reviewer_ygWA",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper introduces a novel framework for HW-NAS by partitioning the search space into complexity-based niches and performs an LLM as an evolutionary operator (crossover + mutation) whose prompts and design heuristics co-evolve from round-to-round; candidates are scored training-free using zero-cost proxies to target accuracy–latency Pareto fronts. The experiments are performed on standard benchmarks, demonstrating the effectiveness of PEL-NAS.",
"strengths": "- A novel framework for NAS which used LLM to guide the search process.\n- Strong results on HW-NAS-Bench and ViT search spaces.",
"weaknesses": "- Lack of theoretical support for the LLM-based section. How and why is an LLM-based approach superior to traditional evolutionary search methods? Compared to evolutionary search, LLM-based search is expensive since it requires a pretrained language model.\n- The idea of partitioning the search space into multiple subspaces is not novel, as it has been proposed in prior studies [1]\n- Lack of ablation studies using other LLM models such as DeepSeek or Gemini.\n- Lack of providing the performance curve as the search progresses.\n- The reviewer believes that the framework is general. Instead of focusing on multiple objectives (i.e. accuracy and latency), how about benchmarking this method using accuracy as the sole objective? The authors should compare its performance against Random Search, Reinforcement Learning, and Evolutionary Search, using both single-proxy and ensemble-proxy settings, with and without search space partitioning, to validate the effectiveness of the LLM-based algorithm.\n\n\n\n[1] Few-Shot Neural Architecture Search, Yiyang Zhao et al.",
"questions": "See weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-03T12:38:09",
"modification_date": "2025-11-12T12:20:18",
"review_url": "https://openreview.net/forum?id=eS4MAmmCHy¬eId=r5WN4tP0vh",
"license": "CC BY 4.0"
},
{
"id": "1COuuMvFR9",
"forum": "eS4MAmmCHy",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9721/Reviewer_xvWB",
"reviewer_name": "Reviewer_xvWB",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper proposes PEL-NAS, a search-space partitioned, architecture prompt co-Evolutionary and LLM-driven NAS. The approach relies on complexity aware partitioning of the search space, an LLM-powered Evolutionary operating guided by an evolving knowledge base, and a training-free evaluation protocol. Experiments on HW-NAS-Bench shows good coverage of the Pareto front and good computational search performance compared to baselines.",
"strengths": "1. PEL-NAS is a sensible NAS approach that demonstrates strong empirical performance and efficiency.",
"weaknesses": "1. The partitioning method is manual, i.e., heavily reliant on the architecture search space. For example, the authors choose conv 3x3 following careful analysis of HW-NAS-Bench which of course doesn't apply to ViTs, so choose Embed Dim and Depth Num for ViT. This is a critical limitation of the method. \n2. Related to the previous point, partitioning is critical to PEL-NAS (Table 5) but it is not clear how sensitive the choice of partitioning (partitioning criteria) is on performance. \n3. The training-free objective evaluation, which leads to the efficient search cost, is not novel and is tied to the choice of benchmark,",
"questions": "1. What is the impact of using a different LLM or variants of the prompt on the performance?\n2. What is an example actual knowledge base produced?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T11:28:08",
"modification_date": "2025-11-12T12:20:18",
"review_url": "https://openreview.net/forum?id=eS4MAmmCHy¬eId=1COuuMvFR9",
"license": "CC BY 4.0"
},
{
"id": "ivDvQH3yMC",
"forum": "eS4MAmmCHy",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9721/Reviewer_w62C",
"reviewer_name": "Reviewer_w62C",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The paper proposes PEL-NAS, a hardware-aware NAS framework that (i) partitions the search space by simple architectural complexity (e.g., counts of 3×3 convs) to ensure diverse exploration, and (ii) uses an LLM with a persistent knowledge base to co-evolve design rules and prompts for generating candidates. \n\nCandidates are scored without training: accuracy comes from an offline-trained surrogate built on zero-cost proxies, and latency from HW-NAS-Bench; multi-objective selection advances a Pareto set (HV/IGD). On CIFAR-10/100 and ImageNet-16-120 across six devices, PEL-NAS reports higher HV/lower IGD than prior baselines with minutes-level search; a ViT study adapts partitioning to depth/embedding size and uses Auto-Proxy plus measured latency. \n\nAblations suggest partitioning and the surrogate contribute the largest gains, with smaller incremental benefit from the LLM co-evolution.",
"strengths": "- **Originality.** The paper identifies a clear and highly relevant problem: the inherent exploration bias (or mode collapse) of LLMs when applied to the vast NAS search space . The proposed solution, complexity-driven partitioning , is a novel and direct structural intervention to mitigate this specific, LLM-centric bias. This is combined with an LLM that maintains a persistent knowledge base to steer evolution, forming a targeted and well-motivated framework.\n- **Quality.** Consistent Pareto gains (HV↑/IGD↓) across multiple devices and datasets; ablations clearly identify partitioning and the ZC-ensemble surrogate as principal contributors, with LLM co-evolution adding a smaller but positive increment.\n- **Clarity.** The paper is well written and effectively uses clear figures (e.g., Figure 2 , Figure 4 ) and well-organized tables (e.g., Table 1 , Table 5 ) to present its methodology and results.",
"weaknesses": "**Positioning and novelty.**\nThe paper's contributions center on two components, but the ablation study (Table 5) clearly shows that the search space partitioning is the paper's single most critical component; its removal causes the most significant performance degradation . In contrast, the LLM-KB (a form of persistent memory) has precedents in prior work (e.g., LLMatic’s [1] two archives include a prompt archive that stores/updates prompts over the search; RZ-NAS [2] adds explicit reflection modules), making its novelty more incremental.\nWhile partitioning proves essential, a deeper analysis of its generality and robustness would strengthen the paper. Specifically:\n- Lack of Analysis for Generality and Robustness: \n - The partitioning rules appear to be manually defined and may depend on domain-specific heuristics. For instance, the CNN search space is partitioned by the count of nor_conv_3x3 operators, whereas the ViT variant uses entirely different manually designed criteria, Embed Dim and Depth Num. The paper assumes that the number of 3×3 convolutions reliably reflects latency, but this “parameter ≠ latency” paradox is a well-known issue. This raises an open question about whether the current partitioning rule would remain effective if the optimization target were memory or another hardware metric. In such cases, should the partitioning instead be based on memory-bound operators?\n - While the proposed rule works well under the evaluated settings, its stability and generality across architectures remain uncertain. A targeted sensitivity study that varies both the partitioning axis (e.g., FLOPs, latency, parameter count, or memory usage) and the number of partitions, especially under different search-space scales, would help clarify how robust the framework truly is. In particular, comparing the proposed complexity-based partitioning against a random or uniform partition baseline would help confirm whether the performance gains stem from the partitioning principle itself or merely from enforcing any form of structured diversification. More broadly, developing an automated way to infer salient complexity dimensions for new search spaces would make this approach more generalizable and less dependent on manual expert analysis.\n\nThis suggests that the partitioning strategy is an orthogonal contribution, largely independent of the specific LLM-KB searcher. Demonstrating such generality, for example, by applying the partitioning method to other LLM-based searchers, would substantially strengthen the contribution and position it as a general tool for mitigating LLM exploration bias rather than just one component of a single method.\n\n**Clarification of \"Training-Free\" Terminology.**\nThe \"training-free\" terminology warrants clarification. The method is only training-free during the search phase. Its accuracy estimation relies entirely on a pre-trained accuracy surrogate model that must be trained offline. This is explicitly an XGBoost model fit on ZC proxies for CNNs and a pre-existing \"Auto-Proxy\" predictor for ViTs. This approach is distinct from fully training-free methods that use raw ZC proxies directly for ranking without fitting a surrogate, and this distinction should be made clearer.\n\n**Conflated Comparison of Search vs. Estimation Strategies.**\nThe main experimental comparisons (Tables 2 & 3) conflate the paper's search strategy (partitioned LLM) with the performance estimation strategy (surrogate model). 
PEL-NAS is benchmarked against methods using fundamentally different estimators, like supernets (FairNAS , PRP-NAS , and DARTS) or full-training (LLMatic). The resulting cost differences (Table 4) are largely dominated by the choice of estimator (surrogate vs. supernet), which reflects differences in methodological setup rather than isolating the contribution of the proposed search strategy.\n\nSince the proposed LLM-based searcher with search-space partition could, in principle, operate on top of any performance estimator, whether a pre-trained supernet, a learned surrogate (as used in this paper), or even a raw zero-cost proxy score, contrasting it directly against supernet-based pipelines obscures what the LLM searcher itself contributes. A more precise evaluation would hold the performance estimator constant (e.g., using the same trained supernet or surrogate predictor) and compare searchers head-to-head, including Random Search, standard Evolutionary Algorithms, and prior LLM-based approaches such as LLMatic. This would more clearly isolate the effectiveness and efficiency of the proposed partitioned LLM search strategy itself.\n\n[1] Muhammad U. Nasir, Sam Earle, Christopher Cleghorn, Steven James, Julian Togelius. LLMatic: Neural Architecture Search Via Large Language Models And Quality Diversity Optimization. GECCO 2024.\n\n[2] Zipeng_Ji, Guanghui Zhu, Chunfeng Yuan, Yihua Huang. RZ-NAS: Enhancing LLM-guided Neural Architecture Search via Reflective Zero-Cost Strategy. ICML 2025.",
"questions": "**On Sensitivity to the Partitioning Axis and Granularity.**\n- The paper's CNN partitioning strategy is based on the nor_conv_3x3 count , which is identified as the most parameter-heavy operator. Is the general principle simply to partition by the most computationally expensive operator? How would the results change if a different operator, such as nor_conv_1x1, were used as the partitioning axis instead?\n- The paper uses a fixed number of six niches for the HW-NAS-Bench space. How sensitive is the method's final performance (e.g., HV and IGD) to this choice? What would be the impact of using significantly fewer (e.g., 3) or more (e.g., 10) niches? Furthermore, should the optimal number of niches be relative to the overall size and complexity of the search space?\n- As a diagnostic baseline, how does the proposed complexity-based partitioning compare against a random or uniform partition of the same size? Such a comparison could help determine whether the gains arise from the complexity metric itself or from structured diversification in general.\n\n**On Fair Comparisons and the Generality of Partitioning.** To properly isolate the value of the proposed search strategy from the surrogate-based estimator:\n- Could the authors provide results for baselines, including simple ones (e.g., Random Search, standard Evolutionary Algorithm) and, especially, searchers from prior LLM works (like LLMatic or RZ-NAS), that all use the exact same pre-trained ZC-proxy surrogate for evaluation?\n- More importantly, since the partitioning strategy appears to be an orthogonal contribution , could the authors apply this partitioning scheme to other prior LLM searchers (like LLMatic's) to demonstrate that it provides a general and consistent boost to their performance?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-25T16:35:17",
"modification_date": "2025-11-12T12:20:18",
"review_url": "https://openreview.net/forum?id=eS4MAmmCHy¬eId=ivDvQH3yMC",
"license": "CC BY 4.0"
},
{
"id": "oSrIUUnOOZ",
"forum": "eS4MAmmCHy",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9721/Reviewer_MXNW",
"reviewer_name": "Reviewer_MXNW",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "In this paper, authors propose PEL-NAS, a training-free framework for Hardware-Aware Neural Architecture Search (HW-NAS) that addresses the exploration bias inherent in Large Language Model (LLM)-driven approaches. The authors observe that LLMs tend to generate architectures within limited regions of the search space, exhibiting a form of mode collapse. To address this issue, they propose a framework which contains three components: 1) a complexity-driven partitioning strategy that divides the search space into disjoint niches based on architectural complexity; 2) an LLM-based co-evolution mechanism that maintains and updates a knowledge base while generating new architectures; 3) a zero-cost ensemble predictor for rapid evaluation. Empirical results on HW-NAS-Bench demonstrate superior Pareto front with dramatically reduced search costs.",
"strengths": "1. Authors identified a genuine problem in LLM-based NAS, i.e., exploration bias leading to incomplete Pareto fronts, as well as mode collapse. The complexity-driven partitioning strategy is empirically justified and provides an elegant solution to enforce diversity. Moreover, the complexity-driven partitioning is a kind of targeted structural intervention rather than just mere prompt engineering, demonstrates clear connection to hardware-related model complexity.\n\n2. The dramatic reduction in search cost from GPU days to minutes while achieving promising results addresses a practical limitation in HW-NAS deployment. Besides, the successful extension to ViT search spaces demonstrates the framework’s adaptability beyond the original CNN-focused HW-NAS-Bench experiments.\n\n3. The dual-stage prompt engineering that the LLM alternates between updating a knowledge base and generating architectures, represents a interesting and effective approach to leverage LLMs’ reasoning capabilities while maintain the search memory.",
"weaknesses": "1. One of major concerns regarding this paper (as well as other similar LLM-driven NAS works) is the data contamination problem. For instance, HW-NAS-Bench contains only 15625 architectures and publicly available since 2021 (well before GPT-4’s training cutoff), while GPT-4 and other LLMs are trained on vast web corpora that likely include published NAS papers and their architectures. There is a substantial risk that the LLM might be essentially performing 'retrieval' rather than genuine 'search' or 'discovery'. Although the co-evolution mechanism and knowledge base updates might push the LLM slightly beyond memorisation, it would be better that authors can somehow verify the generated architectures are novel or outside the LLM’s training distribution. \n\n2. Another concern is regarding the search space scalability, the manual identification of complexity-driving operators, e.g., nor_conv_3x3 for CNNs, Embed Dim and Depth Num for ViTs, raises questions about the scalability to novel search spaces. An automated/heuristic partitioning strategy would be more valuable.\n\n3. While the authors identify exploration bias, they did not deeply analyse why LLMs exhibit this behaviour or explore prompt engineering alternatives that might directly address this bias without requiring partitioning. The incomplete analysis of LLM behaviour weakens their claimed corresponding contribution.\n\n4. Some of compared baselines are outdated (e.g., DARTS is from 7 years ago), the proposed framework could benefit from comparisons with more recent training-free NAS methods, e.g., MeCo [1], SWAP [2] or L-SWAG [3], and other diversity-prompting techniques in evolutionary algorithms beyond the mentioned baselines. \n\n \n \n\n[1] Jiang et al., MeCo: Zero-Shot NAS with One Data and Single Forward Pass via Minimum Eigenvalue of Correlation. NeurIPS 2023.\n\n[2] Peng et al., SWAP-NAS: Sample-Wise Activation Patterns for Ultra-fast NAS. ICLR 2024.\n\n[3] Casarin et al., L-SWAG: Layer-Sample Wise Activation with Gradients Information for Zero-Shot NAS on Vision Transformers. CVPR 2025.",
"questions": "1. How sensitive is the proposed method to the number of niches? Have authors experimented with different partitioning granularities?\n\n2. Can the partitioning strategy be automatically/heuristically learned rather than manually defined based on the search space analysis?\n\n3. Can authors perform some memorisation tests? For example, prompt the LLM (e.g., GPT-4) to directly generate architectures from HW-NAS-Bench search space, and see whether it’s feasible, as well as the performance of produced architectures.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-22T16:34:55",
"modification_date": "2025-11-12T12:20:19",
"review_url": "https://openreview.net/forum?id=eS4MAmmCHy¬eId=oSrIUUnOOZ",
"license": "CC BY 4.0"
}
] |
|
MgVNhx5uaa
|
https://openreview.net/forum?id=MgVNhx5uaa
|
ATOM-Bench: From Atoms to Conclusions in Objective Evaluation of Large Multimodal Models Reasoning
| 3
| 3.75
|
[
2,
4,
4,
2
] |
[
4,
4,
3,
4
] | 4
|
[
"multimodal Large Language Models",
"benchmark",
"chain of thought"
] |
Chain-of-Thought (CoT) reasoning has significantly enhanced the ability of Large Multimodal Models (LMMs) to tackle complex image–text tasks, establishing itself as a cornerstone of multimodal learning. Despite significant progress, the impact of CoT on LMMs still lacks objective evaluation and in-depth research. Current CoT evaluation paradigms rely on powerful LLMs as judges of free-form text, but this introduces bias and hallucination from the evaluator itself. Moreover, it may penalize models for stylistic variations rather than genuine reasoning failures, thereby undermining the fairness and reliability of the assessment. To address this gap, we introduce ATOM-Bench, a CoT evaluation framework built on objective atomic questions. ATOM-Bench decomposes complex reasoning tasks into a series of atomic nodes, covering 570 high-resolution real-world images and 2,920 questions across 4 cognitive dimensions, and 12 domains, including architecture, text, transportation, culture, climate, and geology. Our benchmark introduces three novel quantitative metrics to objectively analyze reasoning faithfulness, consistency, and robustness. Extensive experiments with 22 LMMs validate the effectiveness of our framework. The results reveal that even the strongest models often exhibit a mismatch between surface-level correctness of final answers and their underlying evidence comprehension, while also exposing cognitive rigidity when faced with objective facts. We believe that ATOM-Bench, as a more objective and diagnostic tool, will advance LMMs toward more reliable and faithful reasoning.
|
We introduce ATOM-Bench, a diagnostic benchmark for evaluating Chain-of-Thought reasoning in Large Multimodal Models via objective atomic questions, spanning 2,920 QAs over 570 real-world images, to address challenges of reasoning reliability.
|
datasets and benchmarks
|
https://openreview.net/pdf?id=MgVNhx5uaa
| 2025-09-18T21:58:39
| 4
|
[
{
"id": "qyea8A8FPG",
"forum": "MgVNhx5uaa",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11801/Reviewer_sAqG",
"reviewer_name": "Reviewer_sAqG",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The paper presents ATOM-Bench, a benchmark for evaluating reasoning processes in large multimodal models. It reformulates complex reasoning into atomic multiple-choice questions with ground-truth answers to enable objective measurement. The dataset contains 570 real-world images and 2,920 questions covering four cognitive dimensions and twelve subdomains. The authors introduce three metrics to assess reasoning consistency, hallucination rate, and robustness when models are given corrected evidence. Experiments on 22 multimodal models are conducted to analyze their reasoning behavior and the relationship between answer accuracy and reasoning consistency.",
"strengths": "1. Atomic multiple-choice questions provides objective, interpretable, and reproducible evaluation results.\n2. The analysis highlights a clear gap between correctness and reasoning quality, offering concrete empirical observations.",
"weaknesses": "1. The dataset is relatively small and limited in diversity, containing only 570 real-world images, which restricts coverage of varied visual scenes and reduces the stability of cross-model comparisons.\n2. The task scope is overly narrow, as the framework is primarily validated on single-image geo-localization, limiting its generalizability to other multimodal reasoning tasks.\n3. Error analysis remains anecdotal and lacks systematic statistics on error types or cross-model differences, making it difficult to derive actionable insights for model improvement.",
"questions": "1. Could it be extended to cover more complex or diverse multimodal reasoning tasks beyond single-image geolocation?\n2. Could the authors provide a more detailed error analysis with statistics and a deeper look at failure patterns?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T16:18:37",
"modification_date": "2025-11-12T12:47:01",
"review_url": "https://openreview.net/forum?id=MgVNhx5uaa¬eId=qyea8A8FPG",
"license": "CC BY 4.0"
},
{
"id": "X8f3AtT1WA",
"forum": "MgVNhx5uaa",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11801/Reviewer_sR7U",
"reviewer_name": "Reviewer_sR7U",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "- The author proposes a novel atomic-question-based CoT evaluation framework, comparing to the previous benchmark which reasons over the CoT process using traditional \"LLM as a Judge\", the evaluation framework focuses on objective and fairness, including three new evaluation metrics, RCS(reasoning-conclusion support), HI(Hallucinated Inference), RRS(Reasoning Revision Score).\n- Besides the evaluation framework, the authors introduce ATOM-Bench, the benchmark includes 2,920 multi-choice questions across 4 cognitive dimensions and 12 subtasks.\n- Authors also evaluates 22 leading models and provide insights including even the state-of-the-art models like Gemini-2.5-Pro and GPT-5 can show post-hoc fallacies, models often fail to revise errors when confronted with indisputable ground-truth evidence.",
"strengths": "- The originality of the paper is good, the paper focuses on the fair and objective evaluation without llm-as-the-judge process.\n- The dimension of the ATOM-Bench is good, it includes 14 different atomic skills, including spatial reasoning.\n- The evaluation models are sufficient, including 22 leading models with both open-sourced and close-sourced ones.\n- The data curation process is very clear to the readers:\nEach step clearly specifies which data sources were used, what criteria were applied, and how the results were verified.\nHuman verification and inter-annotator agreement evaluation were introduced to ensure annotation quality.\nThe logic behind the question categorization is clear and task-oriented.\n- The structure of the paper is easy to read.",
"weaknesses": "- Overall, I appreciate the readers for the presentation of this paper, however, **examples** are significantly insufficiant for both methodology part and evaluation part. And I think this is one of the biggest weakness of this paper. I search very carefully for more detailed examples in the appendix and only find failure analysis and a few failure examples.\n\n More specifically, authors should provide more examples regarding:\n1. full multimodal reasoning process of a model regarding the answer, how to evaluate based on that example\n2. examples of how atomic tasks compose into complex reasoning\n3. lacks visual illustrations of multimodal input and error analysis\n\n- The CoT evaluation framework and benchmark samples are only applied on geolocation, which according to my knowledge, this pipeline and benchmark could also be used to a broader domain for evaluating reasoning process, e.g. Mathematical Reasoning, Science Reasoning, which also needs step-by-step objective reasoning process in order to successfully answer a question.\n\n- About the evaluation metric, the RCS (Reasoning Consistency Score) and HI (Hallucination Index) is overlapping with each other, e.g. high reasoning consistency scores indicates low hullucination index. The evaluation dimension is not diverse enough. Have you considered other evaluation metrics, for example, evaluate perception error and reasoning error separately. \n\n- Although the benchmark fucos on objective evaluation, the structure of reasoning, the completeness and the soundness of the answer, however, are the keys to evaluate the correctness of the answer, but the paper entirely ignores them.",
"questions": "I provide the following questions for authors:\n- The motivation of the paper is to reduce the biased evaluation of \"llm-as-the-judge\", however, in the evaluation metrics, there is some human threshold. e.g. In RCS, the τ=0.75 is set by human. This step also introduces human bias. Why is τ=0.75? Do you have explanations on it?\n- The author claims that the proposed evaluation metric is more objective and fair than traditional evaluation method. Do you have quantitative results to prove that? For example, sample a subset of Atom-Bench and using standard \"llm-as-the-judge\" process to evaluate and compare it against the proposed evaluation metrics.\n- Can the proposed evaluation framework generalize to other domains except for the geographical reasoning task, e.g. for a broader domain, e.g. in Mathematical Reasoning? If so, could you provide examples regarding how to apply this framework into a broder domain, e.g. Mathematical Reasoning using a standard Benchmark like Mathvista?\n- **Especially**, you should provide more examples regarding: (as listed in weakness)\n1. full multimodal reasoning process of a model regarding the answer, how to evaluate based on that example\n2. examples of how atomic tasks compose into complex reasoning\n3. visual illustrations of multimodal input and error analysis\n\nI will consider raise my score if you can address my concerns listed above.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T06:20:46",
"modification_date": "2025-11-12T12:47:02",
"review_url": "https://openreview.net/forum?id=MgVNhx5uaa¬eId=X8f3AtT1WA",
"license": "CC BY 4.0"
},
{
"id": "Q3TyhMYSY4",
"forum": "MgVNhx5uaa",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11801/Reviewer_db8N",
"reviewer_name": "Reviewer_db8N",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper notes that while Chain-of-Thought (CoT) reasoning improves Large Multimodal Models (LMMs) in complex image-text tasks, current CoT evaluation—relying on LLMs as judges—suffers from bias, hallucination, and mispenalizing stylistic variations over real reasoning failures.\nTo address this, it proposes ATOM-Bench: a framework that decomposes complex tasks into atomic questions (covering 570 high-res images, 2,920 questions across 4 cognitive dimensions and 12 domains) and introduces three quantitative metrics (RCS, HI, RRS) to turn subjective evaluation into evidence-based diagnostics, solving the \"black-box evaluating a black-box\" issue.\nExperiments on 22 LMMs show even state-of-the-art models mismatch final answer correctness with evidence comprehension and have cognitive rigidity. The paper contributes an objective, reproducible CoT evaluation framework, the first high-res process-oriented CoT benchmark, and insights into LMMs’ gaps in reasoning faithfulness and flexibility to advance reliable LMM research.",
"strengths": "1. Instead of relying on Large Language Models (LLMs) as judges in traditional paradigms, ATOM-Bench adopts \"atomic questions\" to eliminate the \"black-box evaluating black-box\" dilemma. \n2. The benchmark is built on 570 high-resolution real-world images, validated through human-machine collaboration (including expert cross-reviews of clue authenticity and distractor rationality) to guarantee data quality. \n3. It decomposes complex reasoning tasks into clue-level (CLQ) and conclusion-level (CoLQ) atomic nodes, covering 4 cognitive dimensions and 12 real-world domains. T",
"weaknesses": "1. The benchmark only centers on single-image geolocation, failing to cover complex scenarios like video temporal reasoning or cross-modal generation. It cannot fully measure LMMs’ performance across diverse CoT tasks.\n2. All questions are multiple-choice, with no assessment of free-text reasoning chain generation. It cannot evaluate models’ ability to express logical steps in open text for real-world applications.\n3. Complex reasoning is decomposed into pre-defined \"standard chains,\" ignoring the diverse reasoning paths models may actually take. It fails to reflect models’ real logical decision-making processes.",
"questions": "1. Its atomic decomposition relies on preset logic. Is this decomposition consistent with humans’ actual reasoning paths? \n2. ATOM-Bench lacks samples from low-resource regions (e.g., small countries). Does it plan to supplement such data to improve evaluation comprehensiveness? \n3. It doesn’t specify the weight of image vs. text clues. When clues conflict, can current metrics fairly measure models’ decision rationality?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T17:19:47",
"modification_date": "2025-11-12T12:47:02",
"review_url": "https://openreview.net/forum?id=MgVNhx5uaa¬eId=Q3TyhMYSY4",
"license": "CC BY 4.0"
},
{
"id": "IWuQrflMwV",
"forum": "MgVNhx5uaa",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11801/Reviewer_WwKZ",
"reviewer_name": "Reviewer_WwKZ",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper is about the evaluation of large multimodal model reasoning. The authors claim that they introduce a CoT evaluation framework built on objective atomic questions, covering 570 high-resolution real-world images and 2,920 questions across 4 cognitive dimensions, and 12 domains, including architecture, text, transportation, culture, climate, and geology. They have tested a number of large multimodal models with the proposed evaluation framework.",
"strengths": "1. The authors have evaluated a number of multimodal large language models with the proposed evaluation framework.\n2. The proposed benchmark covers a wide range of domains.",
"weaknesses": "1. The authors claim that \"Current CoT evaluation paradigms rely on powerful LLMs as judges of free-form text, but this introduces bias and hallucination from the evaluator itself.\" However, the proposed evaluation framework also relies on LLMs and shares the weakness of prior works.\n2. The proposed evaluation framework relies on \"rigorous human review and refinement\", as the authors claim. However, it is not clear how the authors ensure rigor in this process. Are there any human errors in this process? How to ensure the human experts have checked the dataset with care?\n3. The evaluation framework relies heavily on human efforts in checking many details in the evaluation, which is not practical and hard to scale with paid annotators. Additionally, the proposed benchmark is relatively small. Although the authors claim it covers a wide range of fields and cognitive dimensions, I wonder whether these fields or dimensions are properly represented with limited data.\n4. Given the fact that the dataset is small and the method is not practical, I doubt whether this paper has made enough contribution.",
"questions": "Please refer to the weakness section.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-25T20:15:06",
"modification_date": "2025-11-12T12:47:03",
"review_url": "https://openreview.net/forum?id=MgVNhx5uaa¬eId=IWuQrflMwV",
"license": "CC BY 4.0"
}
] |
wztR0XcNW9
|
https://openreview.net/forum?id=wztR0XcNW9
|
TopoCore: Unifying Topology Manifolds and Persistent Homology for Data Pruning
| 4
| 3
|
[
4,
2,
6
] |
[
3,
3,
3
] | 3
|
[
"Coreset Selection",
"Topological Data Analysis",
"Persistent Homology",
"Architectural Transferability",
"Data-Efficient Learning",
"Manifold Learning",
"Pretrained Models"
] |
Geometric coreset selection methods, while practical for leveraging pretrained models, are fundamentally unstable. Their reliance on extrinsic geometric metrics makes them highly sensitive to variations in feature embeddings, leading to poor performance when transferring across different network architectures or when dealing with noisy features. We introduce TopoCore, a novel framework that resolves this challenge by leveraging the principles of topology to capture the intrinsic, stable structure of data. TopoCore operates in two stages, (1) utilizing a _topology-aware manifold approximation_ to establish a global low-dimensional embedding of the dataset. Subsequently, (2) it employs _differentiable persistent homology_ to perform a local topological optimization on the manifold embeddings, scoring samples based on their structural complexity. We show that at high pruning rates (e.g., 90\%), our _dual-scale topological approach_ yields a coreset selection method that boosts accuracy with up to 4$\times$ better precision than existing methods. Furthermore, through the inherent stability properties of topology, TopoCore is (a) exceptionally robust to noise perturbations of the feature embeddings and (b) demonstrates superior architecture transferability, improving both accuracy and stability across diverse network architectures. This study demonstrates a promising avenue towards stable and principled topology-based frameworks for robust data-efficient learning.
|
learning on graphs and other geometries & topologies
|
https://openreview.net/pdf?id=wztR0XcNW9
| 2025-09-18T02:54:05
| 3
|
[
{
"id": "p1cclI53pH",
"forum": "wztR0XcNW9",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9698/Reviewer_Sq9q",
"reviewer_name": "Reviewer_Sq9q",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The paper addresses the problem of coreset selection, i.e. a small representative subset of a large dataset that minimizes the degradation in model performance and allows for faster training and reduced storage. Although existing geometry-based methods do not require an expensive training, they rely on extrinsic metrics that make them sensitive to variations in feature embeddings. The authors propose TopoCore, a two-stage method for coreset selection that utilizes topology to accurately approximate the underlying manifold of the data. To preserve the global structure, during the first stage, feature embeddings of deep neural network are projected onto a low-dimensional manifold with UMAP. To preserve the local structure, during the second stage topological persistence of points is maximized independently for each class. The coreset selection is based on the TopologyScore that combines Density Score, reflecting global representativeness, and Persistence Score, reflecting local topological complexity. The empirical evaluation includes comparison with several baseline methods in both training-based and training-free scenarios, analysis of method’s performance when feature embedding model is varied. The authors also analyze the TopoCore robustness to the noise injected into feature embeddings.",
"strengths": "- The paper proposes a novel topology-based view on the problem of coreset selection that leverages the geometric methods.\n- Experimental results demonstrate that TopoCore outperforms benchmark methods, especially at high pruning rate and on more complex datasets.\n- TopoCore is more robust to noise in the feature space, especially at the higher pruning rates (70-90%). \n- TopoCore provides better results across a wide range of embedding model choice.",
"weaknesses": "- Although the paper provides some evidence for the choice of UMAP, a more recent works [1][2][3], which were shown to outperform UMAP with better preservation of data topology, are not considered for comparison and/or improvement of TopoCore.\n- Experimental part is limited. As far as I understand, the experiments focus on the test accuracy of the ResNet-family models (ResNet-18, ResNet-50) for different pruning rates and embedding models. The evaluation lacks results for more recent architectures, for example, transformers, and estimation of other properties such as quality of transfer learning / domain adaptation.\n- The paper does not provide any estimate on the computational cost of the proposed procedure. Is TopoCore more computationally intensive than the benchmark methods?\n\nMinor: The notion of prototype is often used in the main text but formal definition is given only in the appendix.\n\n[1] M. Moor et al. Topological autoencoders. ICLR, 2020.\n[2] I. Trofimov et al. Learning topology-preserving data representations. ICLR, 2023.\n[3] E. Tulchinskii et al. RTD-Lite: scalable topological analysis for comparing weighted graphs in learning tasks. AISTATS, 2025.",
"questions": "Please, see weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T03:21:38",
"modification_date": "2025-11-12T12:20:06",
"review_url": "https://openreview.net/forum?id=wztR0XcNW9¬eId=p1cclI53pH",
"license": "CC BY 4.0"
},
{
"id": "xDDQjbq6df",
"forum": "wztR0XcNW9",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9698/Reviewer_y5KN",
"reviewer_name": "Reviewer_y5KN",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The paper introduces TopoCore, a method for coreset selection. It is a combination of dimensionality reduction, non-parametric density estimation and persistent homology. Then, coresets are used for further training of ResNet-18. \nExperimental results show that the proposed method slightly outperforms baseline.",
"strengths": "The paper proposes a new approach to coreset selection. \nThis is one of the few applications of multipersistence to deep learning.",
"weaknesses": "1) The paper is hard to understand. Some notions like \"Hilbert decomposition signed measure\" are not defined.\n2) Some details of the method are missing (see Questions)\n3) The difference is no statistically significant w.r.t. baselines in many cases (Table 5 in Appendix).\nPlease include statistical tests to validate significance.\n4) Improvements over Random selection is quite small. I doubt that the method is of practical importance.\n5) A relevant publications is missing:\n\nTrofimov, I., Cherniavskii, D., Tulchinskii, E., Balabin, N., Burnaev, E., & Barannikov, S. (2023). Learning topology-preserving data representations. arXiv preprint arXiv:2302.00136.",
"questions": "1) As far as I understood, persistence scores are calculated for every class separately. Are they summed next? \n2) The optimization of L_{pers} can naturally lead to a degenerate solution, like points very far from each other, which maximizes persistence. How do you handle it?\n3) Is TopologyScore maximized or minimized or minimized?\n4) In Table 1, why TopoCore exhibits different metrics in \"no training dynamics\" and \"with training dynamics\" blocks?\nI assume that the difference must be only in baselines.\n5) Some important details are hard to understand from the paper. How L_{proj} is optimized? Together with TopologyScore or not?\nHow the coreset is selected? Is should be a subset of a dataset, but I can't find details.\nWhere are similarities p_{ij} are taken from? etc.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-23T21:47:27",
"modification_date": "2025-11-12T12:20:07",
"review_url": "https://openreview.net/forum?id=wztR0XcNW9¬eId=xDDQjbq6df",
"license": "CC BY 4.0"
},
{
"id": "pYAdWkt3a1",
"forum": "wztR0XcNW9",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9698/Reviewer_hXms",
"reviewer_name": "Reviewer_hXms",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "The paper addresses the coreset selection problem: choosing a small subset of training data that maintains nearly the same model performance as the full dataset. It introduces a training-free approach that operates on frozen embeddings, viewing the dataset as a point cloud and using both global manifold density and local topological persistence to identify samples essential to the intrinsic structure. On benchmark image datasets, the method consistently achieves higher retention and lower variance than geometric baselines, especially under high pruning, showing stability and robustness across architectures, though its experiments are limited to vision tasks and lack compute analysis.",
"strengths": "1. **Originality and clarity.** A training free coreset that combines manifold density with local topological persistence is a fresh, well motivated idea and the method is clearly described for replication.\n\n2. **Strong empirical results.** Consistently high retention and lower variance than geometric baselines, especially at high pruning, plus sensible ablations on mixing weights and optimization depth.\n\n3. **Practical robustness.** Works across multiple backbones with good transfer and noise robustness, indicating the selection signal is less model dependent than distance based methods.",
"weaknesses": "1. **Limited scope.** Evaluation is restricted to vision benchmarks; no NLP or other modalities are tested, which weakens generality claims.\n\n2. **Dependence on embeddings and projection**. Results hinge on the quality of frozen features and the chosen manifold projector, with limited guidance on hyperparameters or stability across settings.\n\n3. **Unclear computational costs.** No clear wall clock, memory, or scaling analysis for kNN construction and persistence steps, so the cost–accuracy tradeoff is unclear.\n\n4. **Quantitative evidence is incomplete.** The paper’s claims, like *“up to 4× better precision” and improved proxy-to-target transfer*, are not consistently backed by tables, and the sensitivity of results to k-NN and manifold-projection settings remains largely unexplored.",
"questions": "1. How sensitive are results to the manifold projection choice and its hyperparameters?\n\n2. Please provide runtime and memory comparisons vs baselines on CIFAR and ImageNet to clarify scalability.\n\n3. Do you have any non-vision results to support generality, for example ANLI or IMDB with a frozen RoBERTa encoder (D2 paper).\n\nI like this paper and find the direction promising. I am at marginal accept and am willing to increase my score if the authors address my concerns in the rebuttal with concrete evidence.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-18T05:37:25",
"modification_date": "2025-11-12T12:20:07",
"review_url": "https://openreview.net/forum?id=wztR0XcNW9¬eId=pYAdWkt3a1",
"license": "CC BY 4.0"
}
] |
|
WnRzN4U8Y8
|
https://openreview.net/forum?id=WnRzN4U8Y8
|
WIMFRIS: WIndow Mamba Fusion and Parameter Efficient Tuning for Referring Image Segmentation
| 5
| 4.5
|
[
4,
6,
4,
6
] |
[
5,
5,
3,
5
] | 4
|
[
"Referring image segmentation",
"parameter efficient tuning",
"computer vision"
] |
Existing Parameter-Efficient Tuning (PET) methods for Referring Image Segmentation (RIS) primarily focus on layer-wise feature alignment, often neglecting the crucial role of a neck module for the intermediate fusion of aggregated multi-scale features, which creates a significant performance bottleneck. To address this limitation, we introduce WIMFRIS, a novel framework that establishes a powerful neck architecture alongside a simple yet effective PET strategy. At its core is our proposed HMF block, which first aggregates multi-scale features and then employs a novel WMF module to perform effective intermediate fusion. This WMF module leverages non-overlapping window partitioning to mitigate the information decay problem inherent in SSMs while ensuring rich local-global context interaction. Furthermore, our PET strategy enhances primary alignment with a MTA for robust textual priors, a MSA for precise vision-language fusion, and learnable emphasis parameters for adaptive stage-wise feature weighting. Extensive experiments demonstrate that WIMFRIS achieves new state-of-the-art performance across all public RIS benchmarks.
|
This paper introduces WIMFRIS, a framework that achieves state-of-the-art performance in referring image segmentation by proposing a novel HMF neck module to efficiently fuse text with visual features, overcoming a key performance bottleneck in prior methods.
|
applications to computer vision, audio, language, and other modalities
|
https://openreview.net/pdf?id=WnRzN4U8Y8
| 2025-09-20T14:00:25
| 4
|
[
{
"id": "l3NeqmvthW",
"forum": "WnRzN4U8Y8",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission23757/Reviewer_N61Y",
"reviewer_name": "Reviewer_N61Y",
"rating": 4,
"confidence": 5,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The paper presents a parameter-efficient framework that integrates a window-based intermediate fusion neck (HMF) and lightweight adapters (MTA, MSA, and emphasis parameters) to enhance vision–language alignment for referring image segmentation.",
"strengths": "- The paper introduces a Hierarchical Mamba Fusion (HMF) block, which performs intermediate vision–language fusion by aggregating multi-scale features and applying a window-based Mamba module (WMF).\n- A parameter-efficient tuning (PET) strategy is presented, consisting of a Mamba Text Adapter (MTA) for modeling textual priors, a Multi-Scale Aligner (MSA) with RFMixer and cross-attention for visual–text alignment, and learnable emphasis parameters for adaptive layer weighting.\n- The overall framework, WIMFRIS, integrates these components and is experimentally compared against existing PET-based and full fine-tuning methods on multiple RIS benchmarks.",
"weaknesses": "* Lack of Novelty\n\nThe paper shows limited novelty. The **PET part** closely follows DETRIS, essentially extending its parameter-efficient tuning framework with minor Mamba-based modifications. The **neck design** heavily overlaps with the fusion architecture in fixation phase in SaFiRe, both adopting window-based Mamba fusion for intermediate vision-language alignment. Overall, the work mainly integrates these existing ideas rather than introducing a substantively new contribution.\n\n\n* Incomplete Manuscript\n\nThe paper appears **incomplete**. Section 3.2 is unfinished, and the crucial description of the **task decoder** is missing. This omission disrupts the continuity between Sections 2.3 and 2.4. The authors should carefully verify whether the submitted version is the complete manuscript.\n\n\n* Unfair and Limited Comparison\n\nFor Table 1\n\n\n1. **Unfair Comparison :**\nTo ensure fairness, (1) the parameters of PET-based methods should be adjusted to achieve **comparable model sizes**, and (2) the **backbones of all compared methods** should be unified.\n\n2. **Limited Comparison with State-of-the-Arts:**\nMore PET-based approaches should be included, as previous works (e.g., ETRIS, DETRIS, RISCLIP) have done, especially those involving **backbone-side modality fusion** in RIS, such as **PWAM in LAVT**, **SDF in VLT**, and **CFE in RISCLIP**, as well as classical parameter-efficient tuning methods like **LoRA** and **Adapter**.\n\n3. **Marginal Improvement of the WMF Neck:**\nCompared with **DETRIS**, the improvements achieved by the proposed **WMF Neck** are quite marginal.\n\n4. **Insufficient Comparison :**\nA more comprehensive comparison is needed to substantiate the claimed advantages of the proposed neck method, including detailed analyses of **parameter counts**, **computational cost (GFLOPs)**, and **inference speed**, particularly in comparisons with **ETRIS/DETRIS necks**.\n\nFor Table 2\n\n1. **Inconsistent Metrics:**\n Table 2 mixes **mIoU** and **oIoU** without clarification. While RISCLIP, DETRIS, and WIMFRIS use **mIoU**, most other methods report **oIoU**. In particular, for works like **CGFormer** and **Polyformer**, which provide both metrics, the authors still report their **oIoU** values. Since **mIoU** is generally higher than **oIoU** on the RefCOCO family datasets, this inconsistency makes the performance comparison **unreliable**.\n2. **RISCLIP Issue:**\n According to the authors’ own definition (line 44, “…keeping the vast majority of the backbone parameters frozen”), RISCLIP also freezes its CLIP backbone and should be considered a parameter-efficient tuning method. Moreover, the results of **RISCLIP-L** are missing, which appear **significantly higher** than those of the proposed “Ours-L” model (trained on RefCOCO+, mIoU: **RISCLIP-L** 74.38 / 78.77 / 66.84 vs. 
**Ours-L** 71.9 / 76.2 / 67.2).\n\n\n* Efficiency Analysis\n\nAlthough this work emphasizes the **PET framework** and uses the **efficient Mamba architecture**, more detailed **efficiency analyses** should be provided—specifically **GFLOPs**, **inference speed**, and preferably **FPS**.\n\n\n* Minor Issues\n\nIn **Table 3(a)**, the content does not match the caption: *4×4* is **not** the smallest window size.\n\n\n\n***I would be happy to revise my score if the author addresses these points.***\n\n\n\n---\n\n**References:**\n\nDETRIS: Densely Connected Parameter-Efficient Tuning for Referring Image Segmentation AAAI2025\n\nSaFiRe: SaFiRe: Saccade-Fixation Reiteration with Mamba for Referring Image Segmentation NeurIPS 2025\n\nLAVT: Language-Aware Vision Transformer for Referring Image Segmentation CVPR2022\n\nVLT: Vision-Language Transformer and Query Generation for Referring Segmentation TPAMI2023\n\nRISCLIP:Extending CLIP’s Image-Text Alignment to Referring Image Segmentation NAACL2024\n\nLoRA: Low-Rank Adaptation of Large Language Models. ICLR2022\n\nParameter-Efficient Transfer Learning for NLP. ICML2019\n\nCGFormer: Contrastive Grouping with Transformer for Referring Image Segmentation CVPR2023\n\nPolyFormer: Referring Image Segmentation as Sequential Polygon Generation CVPR2023",
"questions": "* Could you clarify the **task decoder design**?\n\n* In Table 1, which IoU metric is used—**mIoU** or **oIoU**? ETRIS reports oIoU from the original paper, but DETRIS uses mIoU.\n\n* In Table 2, please clarify metric issue and the RISCLIP issue mentioned in W1-B.\n\n* What are the **inference speed** and **GFLOPs** of the proposed model?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T19:11:04",
"modification_date": "2025-11-12T18:19:52",
"review_url": "https://openreview.net/forum?id=WnRzN4U8Y8¬eId=l3NeqmvthW",
"license": "CC BY 4.0"
},
{
"id": "F6qig8fkUO",
"forum": "WnRzN4U8Y8",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission23757/Reviewer_WGQk",
"reviewer_name": "Reviewer_WGQk",
"rating": 6,
"confidence": 5,
"soundness": 3,
"contribution": 4,
"presentation": 3,
"summary": "This paper introduces WIMFRIS, a framework for Referring Image Segmentationthat focuses on both a novel intermediate fusion neck architecture (the Hierarchical Mamba Fusion, or HMF, block) and a parameter-efficient tuning strategy. The HMF block leverages a Window Mamba Fuser module to effectively aggregate and fuse multi-scale vision and language features, using window partitioning to tackle the exponential decay in information typical of state-space models. The PET strategy employs adapters to efficiently align textual and visual representations and a learnable stage-wise emphasis mechanism. Extensive experiments are conducted on major RIS benchmarks, demonstrating state-of-the-art results for WIMFRIS compared to both PET-based and full fine-tuning methods.",
"strengths": "- WIMFRIS achieves state-of-the-art or highly competitive performance across all standard RIS benchmarks (RefCOCO, RefCOCO+, G-Ref), outperforming previous parameter-efficient and full-tuning baselines. Table 2 clearly demonstrates these gains, including mixed-data setups.\n- Multiple ablation tables systematically dissect the contributions of each module and architectural choice.\n- The schematic diagrams provide clear breakdowns of the model pipeline, supporting the text’s descriptions of modular design and the flow of visual and textual feature processing. The visualizations offer compelling qualitative evidence for improved segmentation, especially in challenging situations (e.g., clutter, occlusion).\n- The paper carefully characterizes the underlying exponential decay issue in SSM-based fusion, and the model’s windowed approach is well justified both mathematically and empirically.\n- WIMFRIS demonstrates competitive results while tuning a very small fraction of backbone parameters, highlighting the value for practical deployment.\n- The explicit, detailed description of contrastive, dice, and alignment losses (and their weighting) makes reproduction feasible and testable.",
"weaknesses": "- While MSA adapters and MTA are described and visualized in Figure 2, the specific methodology for choosing insertion layers for adapters in different backbones is only loosely justified. There is a missed opportunity for a principled, possibly automated or analytical policy for placement, and no ablation on layer choice is provided.\n- Although Table 3 (a) explores performance trade-offs for window size, the choice of optimal $4 \\times 4$ is only empirically justified. There is little theoretical or dataset-specific reasoning for why this size generalizes, and exploring task- or scale-adaptive policies would strengthen claims of robustness.\n- There are several grammatical errors and awkward phrasings, as well as the use of slightly non-standard abbreviations in the tables (e.g., \"vol\", \"m/s/6\", \"m/sfI\" in Table 1), which may disrupt readability and hinder quick assimilation for a broad audience.",
"questions": "- Can the authors provide a rationale for the placement of PET adapters (MSA, MTA) at specific depths in the vision/text backbone? Have they considered or tested more adaptive/learned strategies for insertion, and can they provide ablations or guidelines for optimal selection?\n\n- How is the concatenation between text class tokens and visual patch windows actually handled in practice (e.g., with respect to normalization, possible channel mismatch, and possible overfitting due to repetitive text tokens)? Would normalization before SSM scans improve performance or stability?\n\n- Have the authors empirically measured the actual decay rate of long-range dependencies for varying window sizes in SSM, and if so, can those be reported? Is the optimal window size truly dataset/task dependent?\n\n- Are there notable scenarios where the windowed approach harms segmentation accuracy, e.g., in very small or oddly-shaped object instances, or when referring expressions are ambiguous or highly context-dependent?\n\n- Will the complete code (including all adapter implementations and ablation regimes) be released for reproducibility, and if so, under what license and conditions?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T14:48:49",
"modification_date": "2025-11-12T18:19:52",
"review_url": "https://openreview.net/forum?id=WnRzN4U8Y8¬eId=F6qig8fkUO",
"license": "CC BY 4.0"
},
{
"id": "MinqdN4erx",
"forum": "WnRzN4U8Y8",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission23757/Reviewer_CZEK",
"reviewer_name": "Reviewer_CZEK",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "This paper proposes a novel parameter-efficient tuning (PET) method named WIMFRIS for referring image segmentation. In contrast to existing PET methods that primarily focus on layer-wise feature alignment and are struggle to aggregate multi-scale features, the proposed approach introduces a simple yet effective neck architecture based on the Mamba module. WIMFRIS achieves state-of-the-art performance on standard RIS benchmarks, demonstrating both efficiency and strong segmentation capability.",
"strengths": "- The paper proposes a new efficient parameter-efficient tuning (PET)–based referring image segmentation (RIS) approach named WIMFRIS.\n- The proposed algorithm enhances efficiency by replacing conventional blocks with an HMF block that actively leverages the Mamba architecture. In addition, it introduces several novel components—an SSM-based MTA, an MSA robust to multiple receptive fields, and an RFMixer—which together contribute to more precise vision-language fusion.\n- The method achieves state-of-the-art performance on popular RIS benchmarks, demonstrating both effectiveness and robustness.",
"weaknesses": "- Structural Issues in Writing\n - In the Abstract, abbreviations such as HMF and WMF appear without their full names or descriptions, making it difficult for readers to understand them.\n - Figure 1 lacks an explanation of the HMF module, requiring readers to infer that WMF is a sub-module of HMF only from context.\n- #Params of PET and Performance Comparison\n - When comparing with existing PET methods, it would be fair to keep the number of PET parameters (#params) consistent across models. According to Table 1, when DINOv2-B/14 is used as the vision encoder, the proposed method shows only a slight improvement in performance compared to DETRIS, even though it uses more parameters. This raises concerns that the effectiveness of WIMFRIS may not be scalable.\n- Limited Novelty\n - The paper proposes several modules (e.g., WMF, HMF, MSA, MTA), but the architectural novelty of each component seems limited. For instance, the HMF module appears to replace multiple cross-attention layers with a more efficient Mamba-based structure, but the use of Mamba itself is not novel. Similarly, the MSA and RFMixer are designed to handle multiple receptive fields, but this concept is not entirely new.\n - The paper would benefit from additional discussion or evidence to substantiate the novelty of these architectural contributions.\n- Lack of Ablation Studies\n - As mentioned above, the paper lacks experiments that demonstrate the effectiveness and novelty of the proposed modules. For example, it would strengthen the work to include comparisons between MSA/RFMixer and baseline or vanilla methods for handling multiple receptive fields.\n - Table 3-(a) appears more like an engineering-oriented study rather than one providing clear scientific insight.",
"questions": "Please provide your responses with reference to the weaknesses mentioned above.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T00:16:57",
"modification_date": "2025-11-12T18:19:52",
"review_url": "https://openreview.net/forum?id=WnRzN4U8Y8¬eId=MinqdN4erx",
"license": "CC BY 4.0"
},
{
"id": "i5RjdUj9Fg",
"forum": "WnRzN4U8Y8",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission23757/Reviewer_UdVq",
"reviewer_name": "Reviewer_UdVq",
"rating": 6,
"confidence": 5,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "WIMFRIS introduces a neck-heavy, parameter-efficient RIS framework that aggregates multi-scale DINOv2 features, fuses them with CLIP text via a windowed Mamba block, and adaptively re-weights each stage, setting new SOTA mIoU on RefCOCO/+/g with < 3 % trainable params.",
"strengths": "1. First to plug a windowed SSM neck (WMF) into RIS; mitigates exponential decay of vanilla Mamba.\n2. Learnable emphasis per stage is simple yet novel for PET.\n3. Exhaustive ablations: window size, kernel configs, PET modules all explored.\n4. Plug-in HMF boosts ETRIS & DETRIS (Table 1), proving generic utility.",
"weaknesses": "1. All results are fine-tuned; real-world deployment often lacks target-domain labels.\n2. WMF prepends text to windows, but vision never feeds back to text; may miss visual disambiguation cues.\n3. Parameter efficiency ≠ inference speed; window partitioning + SSM may hurt parallelism.",
"questions": "See weakness",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T11:05:37",
"modification_date": "2025-11-12T18:19:53",
"review_url": "https://openreview.net/forum?id=WnRzN4U8Y8¬eId=i5RjdUj9Fg",
"license": "CC BY 4.0"
}
]