id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
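Rows in this table can be consumed programmatically; a minimal sketch, assuming the metadata is exported as JSON Lines with the field names from the schema header (the file path and export format are assumptions, not stated by this page):

```python
import json

def load_records(path):
    """Parse a JSON-lines export of the arXiv metadata table.

    Keeps only a few fields of interest; `categories` is a
    space-separated string in the raw data, split into a list here.
    """
    records = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            rec = json.loads(line)
            records.append({
                "id": rec.get("id"),
                "title": rec.get("title"),
                "categories": (rec.get("categories") or "").split(),
                "abstract": rec.get("abstract"),
            })
    return records
```

The same loop extends naturally to the `versions` and `authors_parsed` columns, which arrive as nested JSON rather than flat strings.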
2504.06962 | Thomas Kerdreux | Thomas Kerdreux and Alexandre Tuel and Quentin Febvre and Alexis
Mouche and Bertrand Chapron | Efficient Self-Supervised Learning for Earth Observation via Dynamic
Dataset Curation | Accepted at CVPR Workshop: The First Workshop on Foundation and
Large Vision Models in Remote Sensing | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Self-supervised learning (SSL) has enabled the development of vision
foundation models for Earth Observation (EO), demonstrating strong
transferability across diverse remote sensing tasks. While prior work has
focused on network architectures and training strategies, the role of dataset
curation, especially in balancing and diversifying pre-training datasets,
remains underexplored. In EO, this challenge is amplified by the redundancy and
heavy-tailed distributions common in satellite imagery, which can lead to
biased representations and inefficient training.
In this work, we propose a dynamic dataset pruning strategy designed to
improve SSL pre-training by maximizing dataset diversity and balance. Our
method iteratively refines the training set without requiring a pre-existing
feature extractor, making it well-suited for domains where curated datasets are
limited or unavailable. We demonstrate our approach on the Sentinel-1 Wave Mode
(WV) Synthetic Aperture Radar (SAR) archive, a challenging dataset dominated by
ocean observations. We train models from scratch on the entire Sentinel-1 WV
archive spanning 10 years. Across three downstream tasks, our results show that
dynamic pruning improves both computational efficiency and representation
quality, leading to stronger transferability.
We also release the weights of Nereus-SAR-1, the first model in the Nereus
family, a series of foundation models for ocean observation and analysis using
SAR imagery, at github.com/galeio-research/nereus-sar-models/.
| [
{
"version": "v1",
"created": "Wed, 9 Apr 2025 15:13:26 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Kerdreux",
"Thomas",
""
],
[
"Tuel",
"Alexandre",
""
],
[
"Febvre",
"Quentin",
""
],
[
"Mouche",
"Alexis",
""
],
[
"Chapron",
"Bertrand",
""
]
] | TITLE: Efficient Self-Supervised Learning for Earth Observation via Dynamic
Dataset Curation
ABSTRACT: Self-supervised learning (SSL) has enabled the development of vision
foundation models for Earth Observation (EO), demonstrating strong
transferability across diverse remote sensing tasks. While prior work has
focused on network architectures and training strategies, the role of dataset
curation, especially in balancing and diversifying pre-training datasets,
remains underexplored. In EO, this challenge is amplified by the redundancy and
heavy-tailed distributions common in satellite imagery, which can lead to
biased representations and inefficient training.
In this work, we propose a dynamic dataset pruning strategy designed to
improve SSL pre-training by maximizing dataset diversity and balance. Our
method iteratively refines the training set without requiring a pre-existing
feature extractor, making it well-suited for domains where curated datasets are
limited or unavailable. We demonstrate our approach on the Sentinel-1 Wave Mode
(WV) Synthetic Aperture Radar (SAR) archive, a challenging dataset dominated by
ocean observations. We train models from scratch on the entire Sentinel-1 WV
archive spanning 10 years. Across three downstream tasks, our results show that
dynamic pruning improves both computational efficiency and representation
quality, leading to stronger transferability.
We also release the weights of Nereus-SAR-1, the first model in the Nereus
family, a series of foundation models for ocean observation and analysis using
SAR imagery, at github.com/galeio-research/nereus-sar-models/.
|
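The diversity-and-balance pruning described in the abstract above can be caricatured in a few lines: quantize sample embeddings into coarse cells and cap how many samples any one cell may contribute. This is a toy stand-in under assumed inputs (it takes embeddings as given, whereas the paper specifically avoids a pre-existing feature extractor by refining the set iteratively during training); the grid size, cap, and function name are all illustrative:

```python
import numpy as np

def prune_for_balance(embeddings, cap_per_cell, grid=0.5, seed=0):
    """Keep at most `cap_per_cell` samples per coarse embedding cell.

    Cells are formed by quantizing each embedding dimension; over-full
    cells are randomly subsampled, thinning redundant regions (e.g. the
    ocean-dominated bulk of a SAR archive). Returns kept indices.
    """
    rng = np.random.default_rng(seed)
    cells = {}
    for idx, e in enumerate(np.asarray(embeddings)):
        key = tuple(np.floor(e / grid).astype(int))
        cells.setdefault(key, []).append(idx)
    keep = []
    for members in cells.values():
        if len(members) > cap_per_cell:
            members = rng.choice(members, size=cap_per_cell, replace=False).tolist()
        keep.extend(members)
    return sorted(keep)
```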
2504.06963 | Vladimir Bataev | Vladimir Bataev | RNN-Transducer-based Losses for Speech Recognition on Noisy Targets | Final Project Report, Bachelor's Degree in Computer Science,
University of London, March 2024 | null | null | null | eess.AS cs.AI cs.CL cs.LG cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Training speech recognition systems on noisy transcripts is a significant
challenge in industrial pipelines, where datasets are enormous and ensuring
accurate transcription for every instance is difficult. In this work, we
introduce novel loss functions to mitigate the impact of transcription errors
in RNN-Transducer models. Our Star-Transducer loss addresses deletion errors by
incorporating "skip frame" transitions in the loss lattice, restoring over 90%
of the system's performance compared to models trained with accurate
transcripts. The Bypass-Transducer loss uses "skip token" transitions to tackle
insertion errors, recovering more than 60% of the quality. Finally, the
Target-Robust Transducer loss merges these approaches, offering robust
performance against arbitrary errors. Experimental results demonstrate that the
Target-Robust Transducer loss significantly improves RNN-T performance on noisy
data by restoring over 70% of the quality compared to well-transcribed data.
| [
{
"version": "v1",
"created": "Wed, 9 Apr 2025 15:18:29 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Bataev",
"Vladimir",
""
]
] | TITLE: RNN-Transducer-based Losses for Speech Recognition on Noisy Targets
ABSTRACT: Training speech recognition systems on noisy transcripts is a significant
challenge in industrial pipelines, where datasets are enormous and ensuring
accurate transcription for every instance is difficult. In this work, we
introduce novel loss functions to mitigate the impact of transcription errors
in RNN-Transducer models. Our Star-Transducer loss addresses deletion errors by
incorporating "skip frame" transitions in the loss lattice, restoring over 90%
of the system's performance compared to models trained with accurate
transcripts. The Bypass-Transducer loss uses "skip token" transitions to tackle
insertion errors, recovering more than 60% of the quality. Finally, the
Target-Robust Transducer loss merges these approaches, offering robust
performance against arbitrary errors. Experimental results demonstrate that the
Target-Robust Transducer loss significantly improves RNN-T performance on noisy
data by restoring over 70% of the quality compared to well-transcribed data.
|
2504.06965 | Qingsong Yan | Teng Xiao, Qi Hu, Qingsong Yan, Wei Liu, Zhiwei Ye, Fei Deng | A Deep Single Image Rectification Approach for Pan-Tilt-Zoom Cameras | Accepted to ICME 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Pan-Tilt-Zoom (PTZ) cameras with wide-angle lenses are widely used in
surveillance but often require image rectification due to their inherent
nonlinear distortions. Current deep learning approaches typically struggle to
maintain fine-grained geometric details, resulting in inaccurate rectification.
This paper presents a Forward Distortion and Backward Warping Network
(FDBW-Net), a novel framework for wide-angle image rectification. It begins by
using a forward distortion model to synthesize barrel-distorted images,
reducing pixel redundancy and preventing blur. The network employs a pyramid
context encoder with attention mechanisms to generate backward warping flows
containing geometric details. Then, a multi-scale decoder is used to restore
distorted features and output rectified images. FDBW-Net's performance is
validated on diverse datasets: public benchmarks, AirSim-rendered PTZ camera
imagery, and real-scene PTZ camera datasets. It demonstrates that FDBW-Net
achieves SOTA performance in distortion rectification, boosting the
adaptability of PTZ cameras for practical visual applications.
| [
{
"version": "v1",
"created": "Wed, 9 Apr 2025 15:19:38 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Xiao",
"Teng",
""
],
[
"Hu",
"Qi",
""
],
[
"Yan",
"Qingsong",
""
],
[
"Liu",
"Wei",
""
],
[
"Ye",
"Zhiwei",
""
],
[
"Deng",
"Fei",
""
]
] | TITLE: A Deep Single Image Rectification Approach for Pan-Tilt-Zoom Cameras
ABSTRACT: Pan-Tilt-Zoom (PTZ) cameras with wide-angle lenses are widely used in
surveillance but often require image rectification due to their inherent
nonlinear distortions. Current deep learning approaches typically struggle to
maintain fine-grained geometric details, resulting in inaccurate rectification.
This paper presents a Forward Distortion and Backward Warping Network
(FDBW-Net), a novel framework for wide-angle image rectification. It begins by
using a forward distortion model to synthesize barrel-distorted images,
reducing pixel redundancy and preventing blur. The network employs a pyramid
context encoder with attention mechanisms to generate backward warping flows
containing geometric details. Then, a multi-scale decoder is used to restore
distorted features and output rectified images. FDBW-Net's performance is
validated on diverse datasets: public benchmarks, AirSim-rendered PTZ camera
imagery, and real-scene PTZ camera datasets. It demonstrates that FDBW-Net
achieves SOTA performance in distortion rectification, boosting the
adaptability of PTZ cameras for practical visual applications.
|
2504.06969 | Lilian Ngweta | Lilian Ngweta, Kiran Kate, Jason Tsay, Yara Rizk | Towards LLMs Robustness to Changes in Prompt Format Styles | NAACL Student Research Workshop (SRW) 2025 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) have gained popularity in recent years for their
utility in various applications. However, they are sensitive to non-semantic
changes in prompt formats, where small changes in the prompt format can lead to
significant performance fluctuations. In the literature, this problem is
commonly referred to as prompt brittleness. Previous research on prompt
engineering has focused mainly on developing techniques for identifying the
optimal prompt for specific tasks. Some studies have also explored the issue of
prompt brittleness and proposed methods to quantify performance variations;
however, no simple solution has been found to address this challenge. We
propose Mixture of Formats (MOF), a simple and efficient technique for
addressing prompt brittleness in LLMs by diversifying the styles used in the
prompt few-shot examples. MOF was inspired by computer vision techniques that
utilize diverse style datasets to prevent models from associating specific
styles with the target variable. Empirical results show that our proposed
technique reduces style-induced prompt brittleness in various LLMs while also
enhancing overall performance across prompt variations and different datasets.
| [
{
"version": "v1",
"created": "Wed, 9 Apr 2025 15:26:00 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Ngweta",
"Lilian",
""
],
[
"Kate",
"Kiran",
""
],
[
"Tsay",
"Jason",
""
],
[
"Rizk",
"Yara",
""
]
] | TITLE: Towards LLMs Robustness to Changes in Prompt Format Styles
ABSTRACT: Large language models (LLMs) have gained popularity in recent years for their
utility in various applications. However, they are sensitive to non-semantic
changes in prompt formats, where small changes in the prompt format can lead to
significant performance fluctuations. In the literature, this problem is
commonly referred to as prompt brittleness. Previous research on prompt
engineering has focused mainly on developing techniques for identifying the
optimal prompt for specific tasks. Some studies have also explored the issue of
prompt brittleness and proposed methods to quantify performance variations;
however, no simple solution has been found to address this challenge. We
propose Mixture of Formats (MOF), a simple and efficient technique for
addressing prompt brittleness in LLMs by diversifying the styles used in the
prompt few-shot examples. MOF was inspired by computer vision techniques that
utilize diverse style datasets to prevent models from associating specific
styles with the target variable. Empirical results show that our proposed
technique reduces style-induced prompt brittleness in various LLMs while also
enhancing overall performance across prompt variations and different datasets.
|
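The Mixture of Formats idea from the abstract above reduces to a one-liner in spirit: render each few-shot example in a randomly chosen surface style so the model cannot latch onto any single format. A loose sketch; the three styles below are invented for illustration and are not taken from the paper:

```python
import random

# Each style renders one (question, answer) pair in a different surface format.
STYLES = [
    lambda q, a: f"Q: {q}\nA: {a}",
    lambda q, a: f"Question: {q}\nAnswer: {a}",
    lambda q, a: f"INPUT: {q} || OUTPUT: {a}",
]

def mixture_of_formats(examples, seed=0):
    """Join few-shot examples, each rendered in a randomly drawn style."""
    rng = random.Random(seed)
    return "\n\n".join(rng.choice(STYLES)(q, a) for q, a in examples)
```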
2504.06982 | Yuhang Yang | Yuhang Yang, Fengqi Liu, Yixing Lu, Qin Zhao, Pingyu Wu, Wei Zhai, Ran
Yi, Yang Cao, Lizhuang Ma, Zheng-Jun Zha, Junting Dong | SIGMAN:Scaling 3D Human Gaussian Generation with Millions of Assets | project page:https://yyvhang.github.io/SIGMAN_3D/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D human digitization has long been a highly pursued yet challenging task.
Existing methods aim to generate high-quality 3D digital humans from single or
multiple views, but remain primarily constrained by current paradigms and the
scarcity of 3D human assets. Specifically, recent approaches fall into several
paradigms: optimization-based and feed-forward (both single-view regression and
multi-view generation with reconstruction). However, they are limited by slow
speed, low quality, cascade reasoning, and ambiguity in mapping low-dimensional
planes to high-dimensional space due to occlusion and invisibility,
respectively. Furthermore, existing 3D human assets remain small-scale,
insufficient for large-scale training. To address these challenges, we propose
a latent space generation paradigm for 3D human digitization, which involves
compressing multi-view images into Gaussians via a UV-structured VAE, along
with DiT-based conditional generation. We transform the ill-posed
low-to-high-dimensional mapping problem into a learnable distribution shift,
which also supports end-to-end inference. In addition, we employ the multi-view
optimization approach combined with synthetic data to construct the HGS-1M
dataset, which contains $1$ million 3D Gaussian assets to support the
large-scale training. Experimental results demonstrate that our paradigm,
powered by large-scale training, produces high-quality 3D human Gaussians with
intricate textures, facial details, and loose clothing deformation.
| [
{
"version": "v1",
"created": "Wed, 9 Apr 2025 15:38:18 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Yang",
"Yuhang",
""
],
[
"Liu",
"Fengqi",
""
],
[
"Lu",
"Yixing",
""
],
[
"Zhao",
"Qin",
""
],
[
"Wu",
"Pingyu",
""
],
[
"Zhai",
"Wei",
""
],
[
"Yi",
"Ran",
""
],
[
"Cao",
"Yang",
""
],
[
"Ma",
"Lizhuang",
""
],
[
"Zha",
"Zheng-Jun",
""
],
[
"Dong",
"Junting",
""
]
] | TITLE: SIGMAN:Scaling 3D Human Gaussian Generation with Millions of Assets
ABSTRACT: 3D human digitization has long been a highly pursued yet challenging task.
Existing methods aim to generate high-quality 3D digital humans from single or
multiple views, but remain primarily constrained by current paradigms and the
scarcity of 3D human assets. Specifically, recent approaches fall into several
paradigms: optimization-based and feed-forward (both single-view regression and
multi-view generation with reconstruction). However, they are limited by slow
speed, low quality, cascade reasoning, and ambiguity in mapping low-dimensional
planes to high-dimensional space due to occlusion and invisibility,
respectively. Furthermore, existing 3D human assets remain small-scale,
insufficient for large-scale training. To address these challenges, we propose
a latent space generation paradigm for 3D human digitization, which involves
compressing multi-view images into Gaussians via a UV-structured VAE, along
with DiT-based conditional generation. We transform the ill-posed
low-to-high-dimensional mapping problem into a learnable distribution shift,
which also supports end-to-end inference. In addition, we employ the multi-view
optimization approach combined with synthetic data to construct the HGS-1M
dataset, which contains $1$ million 3D Gaussian assets to support the
large-scale training. Experimental results demonstrate that our paradigm,
powered by large-scale training, produces high-quality 3D human Gaussians with
intricate textures, facial details, and loose clothing deformation.
|
2504.06991 | Ghurumuruhan Ganesan | Ghurumuruhan Ganesan | Dissimilar Batch Decompositions of Random Datasets | Accepted for publication in Sankhya A | null | null | null | cs.LG math.PR stat.ML | http://creativecommons.org/licenses/by/4.0/ | For better learning, large datasets are often split into small batches and
fed sequentially to the predictive model. In this paper, we study such batch
decompositions from a probabilistic perspective. We assume that data points
(possibly corrupted) are drawn independently from a given space and define a
concept of similarity between two data points. We then consider decompositions
that restrict the amount of similarity within each batch and obtain high
probability bounds for the minimum size. We demonstrate an inherent tradeoff
between relaxing the similarity constraint and the overall size and also use
martingale methods to obtain bounds for the maximum size of data subsets with a
given similarity.
| [
{
"version": "v1",
"created": "Wed, 9 Apr 2025 15:58:06 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Ganesan",
"Ghurumuruhan",
""
]
] | TITLE: Dissimilar Batch Decompositions of Random Datasets
ABSTRACT: For better learning, large datasets are often split into small batches and
fed sequentially to the predictive model. In this paper, we study such batch
decompositions from a probabilistic perspective. We assume that data points
(possibly corrupted) are drawn independently from a given space and define a
concept of similarity between two data points. We then consider decompositions
that restrict the amount of similarity within each batch and obtain high
probability bounds for the minimum size. We demonstrate an inherent tradeoff
between relaxing the similarity constraint and the overall size and also use
martingale methods to obtain bounds for the maximum size of data subsets with a
given similarity.
|
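The similarity-restricted decompositions studied above admit a simple greedy analogue: place each point into the first batch whose within-batch similarity budget it does not blow, opening a new batch otherwise. A toy sketch under assumed inputs (the pairwise `similar` predicate and the per-batch budget are illustrative, not the paper's probabilistic construction):

```python
def greedy_dissimilar_batches(points, similar, max_similar_per_batch):
    """Greedy first-fit decomposition with a within-batch similarity cap.

    A point joins the first batch in which it is `similar` to at most
    `max_similar_per_batch` existing members; otherwise a new batch opens.
    """
    batches = []
    for p in points:
        for batch in batches:
            if sum(similar(p, q) for q in batch) <= max_similar_per_batch:
                batch.append(p)
                break
        else:
            batches.append([p])
    return batches
```

With a budget of 0 this degenerates to batches of pairwise-dissimilar points, mirroring the intuition that tighter similarity constraints force more (and smaller) batches.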
2504.06997 | Mingliang Pan | Mingliang Pan, Chenxu Li, Yuanzhe Zhang, Alan Mollins, Quan Wang,
Ahmet T. Erdogan, Yuanyuan Hua, Zhenya Zang, Neil Finlayson, Robert K.
Henderson, David Day-Uei Li | Cerebral blood flow monitoring using a deep learning implementation of
the two-layer DCS analytical model with a 512×512 SPAD array | 23 pages, 11 figures | null | null | null | physics.med-ph physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diffuse correlation spectroscopy (DCS) analyzes the autocorrelation function
of photons scattered by red blood cells, enabling non-invasive, continuous
measurement of deep tissue blood flow at the bedside. Multi-layer DCS models
(two- and three-layer) enhance cerebral blood flow index (CBFi) sensitivity and
mitigate interference from extracerebral tissues. However, these models require
multiple predefined parameters and are computationally intensive, making them
impractical for real-time bedside monitoring. To address this challenge, we
integrate a single-photon avalanche diode (SPAD) array with a deep learning
(DL)-based approach trained on data generated by the two-layer analytical
model. This method bypasses traditional model fitting, enabling real-time CBFi
monitoring while minimizing superficial tissue contamination. We first validate
our approach using Monte Carlo-simulated test datasets, demonstrating superior
accuracy in relative CBFi estimation (5.8% error vs. 19.1% for conventional
fitting) and enhanced CBFi sensitivity (87.1% vs. 55.4%). Additionally, our
method effectively isolates shallow blood flow changes and is 750-fold faster than
single-exponential fitting in a realistic scenario. We further evaluate the
system in a healthy adult, achieving real-time CBFi monitoring and pulsatile
waveform recovery during a brain activity test using a 512×512 SPAD array
sensor. These results highlight the potential of our approach for real-time
brain activity monitoring.
| [
{
"version": "v1",
"created": "Wed, 9 Apr 2025 16:09:34 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Pan",
"Mingliang",
""
],
[
"Li",
"Chenxu",
""
],
[
"Zhang",
"Yuanzhe",
""
],
[
"Mollins",
"Alan",
""
],
[
"Wang",
"Quan",
""
],
[
"Erdogan",
"Ahmet T.",
""
],
[
"Hua",
"Yuanyuan",
""
],
[
"Zang",
"Zhenya",
""
],
[
"Finlayson",
"Neil",
""
],
[
"Henderson",
"Robert K.",
""
],
[
"Li",
"David Day-Uei",
""
]
] | TITLE: Cerebral blood flow monitoring using a deep learning implementation of
the two-layer DCS analytical model with a 512×512 SPAD array
ABSTRACT: Diffuse correlation spectroscopy (DCS) analyzes the autocorrelation function
of photons scattered by red blood cells, enabling non-invasive, continuous
measurement of deep tissue blood flow at the bedside. Multi-layer DCS models
(two- and three-layer) enhance cerebral blood flow index (CBFi) sensitivity and
mitigate interference from extracerebral tissues. However, these models require
multiple predefined parameters and are computationally intensive, making them
impractical for real-time bedside monitoring. To address this challenge, we
integrate a single-photon avalanche diode (SPAD) array with a deep learning
(DL)-based approach trained on data generated by the two-layer analytical
model. This method bypasses traditional model fitting, enabling real-time CBFi
monitoring while minimizing superficial tissue contamination. We first validate
our approach using Monte Carlo-simulated test datasets, demonstrating superior
accuracy in relative CBFi estimation (5.8% error vs. 19.1% for conventional
fitting) and enhanced CBFi sensitivity (87.1% vs. 55.4%). Additionally, our
method effectively isolates shallow blood flow changes and is 750-fold faster than
single-exponential fitting in a realistic scenario. We further evaluate the
system in a healthy adult, achieving real-time CBFi monitoring and pulsatile
waveform recovery during a brain activity test using a 512×512 SPAD array
sensor. These results highlight the potential of our approach for real-time
brain activity monitoring.
|
2504.07002 | Yuan Xiao | Yuan Xiao, Yuchen Chen, Shiqing Ma, Haocheng Huang, Chunrong Fang,
Yanwei Chen, Weisong Sun, Yunfeng Zhu, Xiaofang Zhang, Zhenyu Chen | DeCoMa: Detecting and Purifying Code Dataset Watermarks through Dual
Channel Code Abstraction | Accepted to ISSTA 2025. Code is available at
https://github.com/xiaoyuanpigo/DeCoMa | null | 10.1145/3728952 | null | cs.CR cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Watermarking is a technique to help identify the source of data points, which
can be used to help prevent the misuse of protected datasets. Existing methods
on code watermarking, leveraging the idea from the backdoor research, embed
stealthy triggers as watermarks. Despite their high resilience against dilution
attacks and backdoor detections, the robustness has not been fully evaluated.
To fill this gap, we propose DeCoMa, a dual-channel approach to Detect and
purify Code dataset waterMarks. To overcome the high barrier created by the
stealthy and hidden nature of code watermarks, DeCoMa leverages dual-channel
constraints on code to generalize and map code samples into standardized
templates. Subsequently, DeCoMa extracts hidden watermarks by identifying
outlier associations between paired elements within the standardized templates.
Finally, DeCoMa purifies the watermarked dataset by removing all samples
containing the detected watermark, enabling the silent appropriation of
protected code. We conduct extensive experiments to evaluate the effectiveness
and efficiency of DeCoMa, covering 14 types of code watermarks and 3
representative intelligent code tasks (a total of 14 scenarios). Experimental
results demonstrate that DeCoMa achieves a stable recall of 100% in 14 code
watermark detection scenarios, significantly outperforming the baselines.
Additionally, DeCoMa effectively attacks code watermarks with embedding rates
as low as 0.1%, while maintaining comparable model performance after training
on the purified dataset. Furthermore, as DeCoMa requires no model training for
detection, it achieves substantially higher efficiency than all baselines, with
a speedup ranging from 31.5 to 130.9X. The results call for more advanced
watermarking techniques for code models, while DeCoMa can serve as a baseline
for future evaluation.
| [
{
"version": "v1",
"created": "Wed, 9 Apr 2025 16:19:11 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Xiao",
"Yuan",
""
],
[
"Chen",
"Yuchen",
""
],
[
"Ma",
"Shiqing",
""
],
[
"Huang",
"Haocheng",
""
],
[
"Fang",
"Chunrong",
""
],
[
"Chen",
"Yanwei",
""
],
[
"Sun",
"Weisong",
""
],
[
"Zhu",
"Yunfeng",
""
],
[
"Zhang",
"Xiaofang",
""
],
[
"Chen",
"Zhenyu",
""
]
] | TITLE: DeCoMa: Detecting and Purifying Code Dataset Watermarks through Dual
Channel Code Abstraction
ABSTRACT: Watermarking is a technique to help identify the source of data points, which
can be used to help prevent the misuse of protected datasets. Existing methods
on code watermarking, leveraging the idea from the backdoor research, embed
stealthy triggers as watermarks. Despite their high resilience against dilution
attacks and backdoor detections, the robustness has not been fully evaluated.
To fill this gap, we propose DeCoMa, a dual-channel approach to Detect and
purify Code dataset waterMarks. To overcome the high barrier created by the
stealthy and hidden nature of code watermarks, DeCoMa leverages dual-channel
constraints on code to generalize and map code samples into standardized
templates. Subsequently, DeCoMa extracts hidden watermarks by identifying
outlier associations between paired elements within the standardized templates.
Finally, DeCoMa purifies the watermarked dataset by removing all samples
containing the detected watermark, enabling the silent appropriation of
protected code. We conduct extensive experiments to evaluate the effectiveness
and efficiency of DeCoMa, covering 14 types of code watermarks and 3
representative intelligent code tasks (a total of 14 scenarios). Experimental
results demonstrate that DeCoMa achieves a stable recall of 100% in 14 code
watermark detection scenarios, significantly outperforming the baselines.
Additionally, DeCoMa effectively attacks code watermarks with embedding rates
as low as 0.1%, while maintaining comparable model performance after training
on the purified dataset. Furthermore, as DeCoMa requires no model training for
detection, it achieves substantially higher efficiency than all baselines, with
a speedup ranging from 31.5 to 130.9X. The results call for more advanced
watermarking techniques for code models, while DeCoMa can serve as a baseline
for future evaluation.
|
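The "standardized templates" step in the DeCoMa abstract above, mapping code samples to a form where only structure survives, can be loosely imitated by collapsing identifiers and literals into placeholder tokens. A crude regex sketch, not DeCoMa's actual dual-channel abstraction (its keyword list and placeholders are assumptions):

```python
import re

KEYWORDS = r"if|for|while|return|def"

def normalize_code(snippet):
    """Collapse string literals, numbers, and identifiers into STR/NUM/ID
    placeholders, leaving only the code's structural skeleton. Outlier
    token associations in such skeletons are what watermark triggers
    would surface as."""
    out = re.sub(r'"[^"]*"|\'[^\']*\'', "STR", snippet)          # string literals
    out = re.sub(r"\b\d+(\.\d+)?\b", "NUM", out)                  # numeric literals
    out = re.sub(rf"\b(?!NUM\b|STR\b|{KEYWORDS}\b)[A-Za-z_]\w*\b", "ID", out)  # identifiers
    return out
```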
2504.07017 | Yusuf Guven | Yusuf Guven, Tufan Kumbasar | Adapting GT2-FLS for Uncertainty Quantification: A Blueprint Calibration
Strategy | in IEEE International Conference on Fuzzy Systems, 2025 | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Uncertainty Quantification (UQ) is crucial for deploying reliable Deep
Learning (DL) models in high-stakes applications. Recently, General Type-2
Fuzzy Logic Systems (GT2-FLSs) have been proven to be effective for UQ,
offering Prediction Intervals (PIs) to capture uncertainty. However, existing
methods often struggle with computational efficiency and adaptability, as
generating PIs for new coverage levels $(\phi_d)$ typically requires retraining
the model. Moreover, methods that directly estimate the entire conditional
distribution for UQ are computationally expensive, limiting their scalability
in real-world scenarios. This study addresses these challenges by proposing a
blueprint calibration strategy for GT2-FLSs, enabling efficient adaptation to
any desired $\phi_d$ without retraining. By exploring the relationship between
$\alpha$-plane type reduced sets and uncertainty coverage, we develop two
calibration methods: a lookup table-based approach and a derivative-free
optimization algorithm. These methods allow GT2-FLSs to produce accurate and
reliable PIs while significantly reducing computational overhead. Experimental
results on high-dimensional datasets demonstrate that the calibrated GT2-FLS
achieves superior performance in UQ, highlighting its potential for scalable
and practical applications.
| [
{
"version": "v1",
"created": "Wed, 9 Apr 2025 16:32:43 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Guven",
"Yusuf",
""
],
[
"Kumbasar",
"Tufan",
""
]
] | TITLE: Adapting GT2-FLS for Uncertainty Quantification: A Blueprint Calibration
Strategy
ABSTRACT: Uncertainty Quantification (UQ) is crucial for deploying reliable Deep
Learning (DL) models in high-stakes applications. Recently, General Type-2
Fuzzy Logic Systems (GT2-FLSs) have been proven to be effective for UQ,
offering Prediction Intervals (PIs) to capture uncertainty. However, existing
methods often struggle with computational efficiency and adaptability, as
generating PIs for new coverage levels $(\phi_d)$ typically requires retraining
the model. Moreover, methods that directly estimate the entire conditional
distribution for UQ are computationally expensive, limiting their scalability
in real-world scenarios. This study addresses these challenges by proposing a
blueprint calibration strategy for GT2-FLSs, enabling efficient adaptation to
any desired $\phi_d$ without retraining. By exploring the relationship between
$\alpha$-plane type reduced sets and uncertainty coverage, we develop two
calibration methods: a lookup table-based approach and a derivative-free
optimization algorithm. These methods allow GT2-FLSs to produce accurate and
reliable PIs while significantly reducing computational overhead. Experimental
results on high-dimensional datasets demonstrate that the calibrated GT2-FLS
achieves superior performance in UQ, highlighting its potential for scalable
and practical applications.
|
2504.07025 | Bojian Wu | Bojian Wu, Yifan Peng, Ruizhen Hu, Xiaowei Zhou | Glossy Object Reconstruction with Cost-effective Polarized Acquisition | Accepted to CVPR 2025 as highlight | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The challenge of image-based 3D reconstruction for glossy objects lies in
separating diffuse and specular components on glossy surfaces from captured
images, a task complicated by the ambiguity in discerning lighting conditions
and material properties using RGB data alone. While state-of-the-art methods
rely on tailored and/or high-end equipment for data acquisition, which can be
cumbersome and time-consuming, this work introduces a scalable
polarization-aided approach that employs cost-effective acquisition tools. By
attaching a linear polarizer to readily available RGB cameras, multi-view
polarization images can be captured without the need for advance calibration or
precise measurements of the polarizer angle, substantially reducing system
construction costs. The proposed approach represents polarimetric BRDF, Stokes
vectors, and polarization states of object surfaces as neural implicit fields.
These fields, combined with the polarizer angle, are retrieved by optimizing
the rendering loss of input polarized images. By leveraging fundamental
physical principles for the implicit representation of polarization rendering,
our method demonstrates superiority over existing techniques through
experiments in public datasets and real captured images on both reconstruction
and novel view synthesis.
| [
{
"version": "v1",
"created": "Wed, 9 Apr 2025 16:38:51 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Wu",
"Bojian",
""
],
[
"Peng",
"Yifan",
""
],
[
"Hu",
"Ruizhen",
""
],
[
"Zhou",
"Xiaowei",
""
]
] | TITLE: Glossy Object Reconstruction with Cost-effective Polarized Acquisition
ABSTRACT: The challenge of image-based 3D reconstruction for glossy objects lies in
separating diffuse and specular components on glossy surfaces from captured
images, a task complicated by the ambiguity in discerning lighting conditions
and material properties using RGB data alone. While state-of-the-art methods
rely on tailored and/or high-end equipment for data acquisition, which can be
cumbersome and time-consuming, this work introduces a scalable
polarization-aided approach that employs cost-effective acquisition tools. By
attaching a linear polarizer to readily available RGB cameras, multi-view
polarization images can be captured without the need for advance calibration or
precise measurements of the polarizer angle, substantially reducing system
construction costs. The proposed approach represents polarimetric BRDF, Stokes
vectors, and polarization states of object surfaces as neural implicit fields.
These fields, combined with the polarizer angle, are retrieved by optimizing
the rendering loss of input polarized images. By leveraging fundamental
physical principles for the implicit representation of polarization rendering,
our method demonstrates superiority over existing techniques through
experiments on public datasets and real captured images for both reconstruction
and novel view synthesis.
|
2504.07031 | Pawel Pukowski | Pawel Pukowski and Venet Osmani | Identifying Key Challenges of Hardness-Based Resampling | Submitted to IEEE TPAMI | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | The performance gap across classes remains a persistent challenge in machine
learning, often attributed to variations in class hardness. One way to quantify
class hardness is through sample complexity - the minimum number of samples
required to effectively learn a given class. Sample complexity theory suggests
that class hardness is driven by differences in the amount of data required for
generalization. That is, harder classes need substantially more samples to
achieve generalization. Therefore, hardness-based resampling is a promising
approach to mitigate these performance disparities. While resampling has been
studied extensively in data-imbalanced settings, its impact on balanced
datasets remains unexplored.
This raises the fundamental question of whether resampling is effective because
it addresses data imbalance or hardness imbalance. We begin addressing this
question by introducing class imbalance into balanced datasets and evaluating its
effect on performance disparities. We oversample hard classes and undersample
easy classes to bring hard classes closer to their sample complexity
requirements while maintaining a constant dataset size for fairness. We
estimate class-level hardness using the Area Under the Margin (AUM) hardness
estimator and leverage it to compute resampling ratios. Using these ratios, we
perform hardness-based resampling on the well-known CIFAR-10 and CIFAR-100
datasets.
Contrary to theoretical expectations, our results show that hardness-based
resampling does not meaningfully affect class-wise performance disparities. To
explain this discrepancy, we conduct detailed analyses to identify key
challenges unique to hardness-based imbalance, distinguishing it from
traditional data-based imbalance. Our insights help explain why theoretical
sample complexity expectations fail to translate into practical performance
gains and we provide guidelines for future research.
| [
{
"version": "v1",
"created": "Wed, 9 Apr 2025 16:45:57 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Pukowski",
"Pawel",
""
],
[
"Osmani",
"Venet",
""
]
] | TITLE: Identifying Key Challenges of Hardness-Based Resampling
ABSTRACT: The performance gap across classes remains a persistent challenge in machine
learning, often attributed to variations in class hardness. One way to quantify
class hardness is through sample complexity - the minimum number of samples
required to effectively learn a given class. Sample complexity theory suggests
that class hardness is driven by differences in the amount of data required for
generalization. That is, harder classes need substantially more samples to
achieve generalization. Therefore, hardness-based resampling is a promising
approach to mitigate these performance disparities. While resampling has been
studied extensively in data-imbalanced settings, its impact on balanced
datasets remains unexplored.
This raises the fundamental question of whether resampling is effective because
it addresses data imbalance or hardness imbalance. We begin addressing this
question by introducing class imbalance into balanced datasets and evaluating its
effect on performance disparities. We oversample hard classes and undersample
easy classes to bring hard classes closer to their sample complexity
requirements while maintaining a constant dataset size for fairness. We
estimate class-level hardness using the Area Under the Margin (AUM) hardness
estimator and leverage it to compute resampling ratios. Using these ratios, we
perform hardness-based resampling on the well-known CIFAR-10 and CIFAR-100
datasets.
Contrary to theoretical expectations, our results show that hardness-based
resampling does not meaningfully affect class-wise performance disparities. To
explain this discrepancy, we conduct detailed analyses to identify key
challenges unique to hardness-based imbalance, distinguishing it from
traditional data-based imbalance. Our insights help explain why theoretical
sample complexity expectations fail to translate into practical performance
gains and we provide guidelines for future research.
|
2504.07061 | Shi Pan | Shi Pan and Jianan Chen and Maria Secrier | Teaching pathology foundation models to accurately predict gene
expression with parameter efficient knowledge transfer | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gene expression profiling provides critical insights into cellular
heterogeneity, biological processes and disease mechanisms. There has been an
increasing interest in computational approaches that can predict gene
expression directly from digitalized histopathology images. While image
foundation models have shown promise in a variety of downstream pathology
analyses, their performance on gene-expression prediction is still limited.
Explicitly incorporating information from the transcriptomic models can help
image models to address domain shift, yet the fine-tuning and alignment of
foundation models can be expensive. In this work, we propose Parameter Efficient
Knowledge trAnsfer (PEKA), a novel framework that leverages Block-Affine
Adaptation and integrates knowledge distillation and structure alignment losses
for cross-modal knowledge transfer. We evaluated PEKA for gene expression
prediction using multiple spatial transcriptomics datasets (comprising 206,123
image tiles with matched gene expression profiles) that encompassed various
types of tissue. PEKA achieved at least a 5% performance improvement over
baseline foundation models while also outperforming alternative
parameter-efficient fine-tuning strategies. We will release the code, datasets
and aligned models after peer-review to facilitate broader adoption and further
development for parameter efficient model alignment.
| [
{
"version": "v1",
"created": "Wed, 9 Apr 2025 17:24:41 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Pan",
"Shi",
""
],
[
"Chen",
"Jianan",
""
],
[
"Secrier",
"Maria",
""
]
] | TITLE: Teaching pathology foundation models to accurately predict gene
expression with parameter efficient knowledge transfer
ABSTRACT: Gene expression profiling provides critical insights into cellular
heterogeneity, biological processes and disease mechanisms. There has been an
increasing interest in computational approaches that can predict gene
expression directly from digitalized histopathology images. While image
foundation models have shown promise in a variety of downstream pathology
analyses, their performance on gene-expression prediction is still limited.
Explicitly incorporating information from the transcriptomic models can help
image models to address domain shift, yet the fine-tuning and alignment of
foundation models can be expensive. In this work, we propose Parameter Efficient
Knowledge trAnsfer (PEKA), a novel framework that leverages Block-Affine
Adaptation and integrates knowledge distillation and structure alignment losses
for cross-modal knowledge transfer. We evaluated PEKA for gene expression
prediction using multiple spatial transcriptomics datasets (comprising 206,123
image tiles with matched gene expression profiles) that encompassed various
types of tissue. PEKA achieved at least a 5% performance improvement over
baseline foundation models while also outperforming alternative
parameter-efficient fine-tuning strategies. We will release the code, datasets
and aligned models after peer-review to facilitate broader adoption and further
development for parameter efficient model alignment.
|
2504.07065 | William Simon | Riselda Kodra, Hadjer Benmeziane, Irem Boybat, William Andrew Simon | Enhancing Downstream Analysis in Genome Sequencing: Species
Classification While Basecalling | Accepted as Tiny Paper at MLGenX workshop, ICLR, 2025 | null | null | null | q-bio.GN cs.LG | http://creativecommons.org/licenses/by/4.0/ | The ability to quickly and accurately identify microbial species in a sample,
known as metagenomic profiling, is critical across various fields, from
healthcare to environmental science. This paper introduces a novel method to
profile signals coming from sequencing devices in parallel with determining
their nucleotide sequences, a process known as basecalling, via a
multi-objective deep neural network for simultaneous basecalling and
multi-class genome classification. We introduce a new loss strategy where
losses for basecalling and classification are back-propagated separately, with
model weights combined for the shared layers, and a pre-configured ranking
strategy allowing top-K species accuracy, giving users flexibility to choose
between higher accuracy or higher speed at identifying the species. We achieve
state-of-the-art basecalling accuracies, while classification accuracies meet
and exceed the results of state-of-the-art binary classifiers, attaining an
average of 92.5%/98.9% accuracy at identifying the top-1/3 species among a
total of 17 genomes in the Wick bacterial dataset. The work presented here has
implications for future studies in metagenomic profiling by accelerating the
bottleneck step of matching the DNA sequence to the correct genome.
| [
{
"version": "v1",
"created": "Wed, 9 Apr 2025 17:30:43 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Kodra",
"Riselda",
""
],
[
"Benmeziane",
"Hadjer",
""
],
[
"Boybat",
"Irem",
""
],
[
"Simon",
"William Andrew",
""
]
] | TITLE: Enhancing Downstream Analysis in Genome Sequencing: Species
Classification While Basecalling
ABSTRACT: The ability to quickly and accurately identify microbial species in a sample,
known as metagenomic profiling, is critical across various fields, from
healthcare to environmental science. This paper introduces a novel method to
profile signals coming from sequencing devices in parallel with determining
their nucleotide sequences, a process known as basecalling, via a
multi-objective deep neural network for simultaneous basecalling and
multi-class genome classification. We introduce a new loss strategy where
losses for basecalling and classification are back-propagated separately, with
model weights combined for the shared layers, and a pre-configured ranking
strategy allowing top-K species accuracy, giving users flexibility to choose
between higher accuracy or higher speed at identifying the species. We achieve
state-of-the-art basecalling accuracies, while classification accuracies meet
and exceed the results of state-of-the-art binary classifiers, attaining an
average of 92.5%/98.9% accuracy at identifying the top-1/3 species among a
total of 17 genomes in the Wick bacterial dataset. The work presented here has
implications for future studies in metagenomic profiling by accelerating the
bottleneck step of matching the DNA sequence to the correct genome.
|
2504.07069 | Bibek Paudel | Bibek Paudel, Alexander Lyzhov, Preetam Joshi, Puneet Anand | HalluciNot: Hallucination Detection Through Context and Common Knowledge
Verification | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | This paper introduces a comprehensive system for detecting hallucinations in
large language model (LLM) outputs in enterprise settings. We present a novel
taxonomy of LLM responses specific to hallucination in enterprise applications,
categorizing them into context-based, common knowledge, enterprise-specific,
and innocuous statements. Our hallucination detection model HDM-2 validates LLM
responses with respect to both context and generally known facts (common
knowledge). It provides both hallucination scores and word-level annotations,
enabling precise identification of problematic content. To evaluate it on
context-based and common-knowledge hallucinations, we introduce a new dataset
HDMBench. Experimental results demonstrate that HDM-2 outperforms existing
approaches across RagTruth, TruthfulQA, and HDMBench datasets. This work
addresses the specific challenges of enterprise deployment, including
computational efficiency, domain specialization, and fine-grained error
identification. Our evaluation dataset, model weights, and inference code are
publicly available.
| [
{
"version": "v1",
"created": "Wed, 9 Apr 2025 17:39:41 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Paudel",
"Bibek",
""
],
[
"Lyzhov",
"Alexander",
""
],
[
"Joshi",
"Preetam",
""
],
[
"Anand",
"Puneet",
""
]
] | TITLE: HalluciNot: Hallucination Detection Through Context and Common Knowledge
Verification
ABSTRACT: This paper introduces a comprehensive system for detecting hallucinations in
large language model (LLM) outputs in enterprise settings. We present a novel
taxonomy of LLM responses specific to hallucination in enterprise applications,
categorizing them into context-based, common knowledge, enterprise-specific,
and innocuous statements. Our hallucination detection model HDM-2 validates LLM
responses with respect to both context and generally known facts (common
knowledge). It provides both hallucination scores and word-level annotations,
enabling precise identification of problematic content. To evaluate it on
context-based and common-knowledge hallucinations, we introduce a new dataset
HDMBench. Experimental results demonstrate that HDM-2 outperforms existing
approaches across RagTruth, TruthfulQA, and HDMBench datasets. This work
addresses the specific challenges of enterprise deployment, including
computational efficiency, domain specialization, and fine-grained error
identification. Our evaluation dataset, model weights, and inference code are
publicly available.
|
2504.07072 | Desmond Elliott | Israfel Salazar, Manuel Fern\'andez Burda, Shayekh Bin Islam, Arshia
Soltani Moakhar, Shivalika Singh, Fabian Farestam, Angelika Romanou, Danylo
Boiko, Dipika Khullar, Mike Zhang, Dominik Krzemi\'nski, Jekaterina Novikova,
Lu\'isa Shimabucoro, Joseph Marvin Imperial, Rishabh Maheshwary, Sharad
Duwal, Alfonso Amayuelas, Swati Rajwal, Jebish Purbey, Ahmed Ruby, Nicholas
Popovi\v{c}, Marek Suppa, Azmine Toushik Wasi, Ram Mohan Rao Kadiyala, Olga
Tsymboi, Maksim Kostritsya, Bardia Soltani Moakhar, Gabriel da Costa Merlin,
Ot\'avio Ferracioli Coletti, Maral Jabbari Shiviari, MohammadAmin farahani
fard, Silvia Fernandez, Mar\'ia Grandury, Dmitry Abulkhanov, Drishti Sharma,
Andre Guarnier De Mitri, Leticia Bossatto Marchezi, Johan Obando-Ceron, Nazar
Kohut, Beyza Ermis, Desmond Elliott, Enzo Ferrante, Sara Hooker, Marzieh
Fadaee | Kaleidoscope: In-language Exams for Massively Multilingual Vision
Evaluation | null | null | null | null | cs.CL cs.CV | http://creativecommons.org/licenses/by/4.0/ | The evaluation of vision-language models (VLMs) has mainly relied on
English-language benchmarks, leaving significant gaps in both multilingual and
multicultural coverage. While multilingual benchmarks have expanded, both in
size and languages, many rely on translations of English datasets, failing to
capture cultural nuances. In this work, we propose Kaleidoscope, the most
comprehensive exam benchmark to date for the multilingual evaluation of
vision-language models. Kaleidoscope is a large-scale, in-language multimodal
benchmark designed to evaluate VLMs across diverse languages and visual inputs.
Kaleidoscope covers 18 languages and 14 different subjects, amounting to a
total of 20,911 multiple-choice questions. Built through an open science
collaboration with a diverse group of researchers worldwide, Kaleidoscope
ensures linguistic and cultural authenticity. We evaluate top-performing
multilingual vision-language models and find that they perform poorly on
low-resource languages and in complex multimodal scenarios. Our results
highlight the need for progress on culturally inclusive multimodal evaluation
frameworks.
| [
{
"version": "v1",
"created": "Wed, 9 Apr 2025 17:43:16 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Salazar",
"Israfel",
""
],
[
"Burda",
"Manuel Fernández",
""
],
[
"Islam",
"Shayekh Bin",
""
],
[
"Moakhar",
"Arshia Soltani",
""
],
[
"Singh",
"Shivalika",
""
],
[
"Farestam",
"Fabian",
""
],
[
"Romanou",
"Angelika",
""
],
[
"Boiko",
"Danylo",
""
],
[
"Khullar",
"Dipika",
""
],
[
"Zhang",
"Mike",
""
],
[
"Krzemiński",
"Dominik",
""
],
[
"Novikova",
"Jekaterina",
""
],
[
"Shimabucoro",
"Luísa",
""
],
[
"Imperial",
"Joseph Marvin",
""
],
[
"Maheshwary",
"Rishabh",
""
],
[
"Duwal",
"Sharad",
""
],
[
"Amayuelas",
"Alfonso",
""
],
[
"Rajwal",
"Swati",
""
],
[
"Purbey",
"Jebish",
""
],
[
"Ruby",
"Ahmed",
""
],
[
"Popovič",
"Nicholas",
""
],
[
"Suppa",
"Marek",
""
],
[
"Wasi",
"Azmine Toushik",
""
],
[
"Kadiyala",
"Ram Mohan Rao",
""
],
[
"Tsymboi",
"Olga",
""
],
[
"Kostritsya",
"Maksim",
""
],
[
"Moakhar",
"Bardia Soltani",
""
],
[
"Merlin",
"Gabriel da Costa",
""
],
[
"Coletti",
"Otávio Ferracioli",
""
],
[
"Shiviari",
"Maral Jabbari",
""
],
[
"fard",
"MohammadAmin farahani",
""
],
[
"Fernandez",
"Silvia",
""
],
[
"Grandury",
"María",
""
],
[
"Abulkhanov",
"Dmitry",
""
],
[
"Sharma",
"Drishti",
""
],
[
"De Mitri",
"Andre Guarnier",
""
],
[
"Marchezi",
"Leticia Bossatto",
""
],
[
"Obando-Ceron",
"Johan",
""
],
[
"Kohut",
"Nazar",
""
],
[
"Ermis",
"Beyza",
""
],
[
"Elliott",
"Desmond",
""
],
[
"Ferrante",
"Enzo",
""
],
[
"Hooker",
"Sara",
""
],
[
"Fadaee",
"Marzieh",
""
]
] | TITLE: Kaleidoscope: In-language Exams for Massively Multilingual Vision
Evaluation
ABSTRACT: The evaluation of vision-language models (VLMs) has mainly relied on
English-language benchmarks, leaving significant gaps in both multilingual and
multicultural coverage. While multilingual benchmarks have expanded, both in
size and languages, many rely on translations of English datasets, failing to
capture cultural nuances. In this work, we propose Kaleidoscope, the most
comprehensive exam benchmark to date for the multilingual evaluation of
vision-language models. Kaleidoscope is a large-scale, in-language multimodal
benchmark designed to evaluate VLMs across diverse languages and visual inputs.
Kaleidoscope covers 18 languages and 14 different subjects, amounting to a
total of 20,911 multiple-choice questions. Built through an open science
collaboration with a diverse group of researchers worldwide, Kaleidoscope
ensures linguistic and cultural authenticity. We evaluate top-performing
multilingual vision-language models and find that they perform poorly on
low-resource languages and in complex multimodal scenarios. Our results
highlight the need for progress on culturally inclusive multimodal evaluation
frameworks.
|
2504.07080 | Atharva Pandey | Atharva Pandey, Kshitij Dubey, Rahul Sharma, Amit Sharma | DeduCE: Deductive Consistency as a Framework to Evaluate LLM Reasoning | null | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite great performance on Olympiad-level reasoning problems, frontier
large language models can still struggle on high school math when presented
with novel problems outside standard benchmarks. Going beyond final accuracy,
we propose a deductive consistency metric to analyze chain-of-thought output
from language models (LMs). Formally, deductive reasoning involves two subtasks:
understanding a set of input premises and inferring the conclusions that follow
from them. The proposed metric studies LMs' performance on these subtasks, with
the goal of explaining LMs' reasoning errors on novel problems: how well do LMs
understand input premises with increasing context lengths, and how well can
they infer conclusions over multiple reasoning hops? Since existing benchmarks
may be memorized, we develop a pipeline to evaluate LMs' deductive consistency
on novel, perturbed versions of benchmark problems. On novel grade school math
problems (GSM-8k), we find that LMs are fairly robust to an increasing number of
input premises, but suffer significant accuracy decay as the number of
reasoning hops is increased. Interestingly, these errors are masked in the
original benchmark as all models achieve near 100% accuracy. As we increase the
number of solution steps using a synthetic dataset, prediction over multiple
hops still remains the major source of error compared to understanding input
premises. Other factors, such as shifts in language style or natural
propagation of early errors do not explain the trends. Our analysis provides a
new view to characterize LM reasoning -- as computations over a window of input
premises and reasoning hops -- that can provide unified evaluation across
problem domains.
| [
{
"version": "v1",
"created": "Wed, 9 Apr 2025 17:53:55 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Pandey",
"Atharva",
""
],
[
"Dubey",
"Kshitij",
""
],
[
"Sharma",
"Rahul",
""
],
[
"Sharma",
"Amit",
""
]
] | TITLE: DeduCE: Deductive Consistency as a Framework to Evaluate LLM Reasoning
ABSTRACT: Despite great performance on Olympiad-level reasoning problems, frontier
large language models can still struggle on high school math when presented
with novel problems outside standard benchmarks. Going beyond final accuracy,
we propose a deductive consistency metric to analyze chain-of-thought output
from language models (LMs). Formally, deductive reasoning involves two subtasks:
understanding a set of input premises and inferring the conclusions that follow
from them. The proposed metric studies LMs' performance on these subtasks, with
the goal of explaining LMs' reasoning errors on novel problems: how well do LMs
understand input premises with increasing context lengths, and how well can
they infer conclusions over multiple reasoning hops? Since existing benchmarks
may be memorized, we develop a pipeline to evaluate LMs' deductive consistency
on novel, perturbed versions of benchmark problems. On novel grade school math
problems (GSM-8k), we find that LMs are fairly robust to an increasing number of
input premises, but suffer significant accuracy decay as the number of
reasoning hops is increased. Interestingly, these errors are masked in the
original benchmark as all models achieve near 100% accuracy. As we increase the
number of solution steps using a synthetic dataset, prediction over multiple
hops still remains the major source of error compared to understanding input
premises. Other factors, such as shifts in language style or natural
propagation of early errors do not explain the trends. Our analysis provides a
new view to characterize LM reasoning -- as computations over a window of input
premises and reasoning hops -- that can provide unified evaluation across
problem domains.
|
2504.07093 | Gene Chou | Gene Chou, Wenqi Xian, Guandao Yang, Mohamed Abdelfattah, Bharath
Hariharan, Noah Snavely, Ning Yu, Paul Debevec | FlashDepth: Real-time Streaming Video Depth Estimation at 2K Resolution | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A versatile video depth estimation model should (1) be accurate and
consistent across frames, (2) produce high-resolution depth maps, and (3)
support real-time streaming. We propose FlashDepth, a method that satisfies all
three requirements, performing depth estimation on a 2044x1148 streaming video
at 24 FPS. We show that, with careful modifications to pretrained single-image
depth models, these capabilities are enabled with relatively little data and
training. We evaluate our approach across multiple unseen datasets against
state-of-the-art depth models, and find that ours outperforms them in terms of
boundary sharpness and speed by a significant margin, while maintaining
competitive accuracy. We hope our model will enable various applications that
require high-resolution depth, such as video editing, and online
decision-making, such as robotics.
| [
{
"version": "v1",
"created": "Wed, 9 Apr 2025 17:59:31 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Chou",
"Gene",
""
],
[
"Xian",
"Wenqi",
""
],
[
"Yang",
"Guandao",
""
],
[
"Abdelfattah",
"Mohamed",
""
],
[
"Hariharan",
"Bharath",
""
],
[
"Snavely",
"Noah",
""
],
[
"Yu",
"Ning",
""
],
[
"Debevec",
"Paul",
""
]
] | TITLE: FlashDepth: Real-time Streaming Video Depth Estimation at 2K Resolution
ABSTRACT: A versatile video depth estimation model should (1) be accurate and
consistent across frames, (2) produce high-resolution depth maps, and (3)
support real-time streaming. We propose FlashDepth, a method that satisfies all
three requirements, performing depth estimation on a 2044x1148 streaming video
at 24 FPS. We show that, with careful modifications to pretrained single-image
depth models, these capabilities are enabled with relatively little data and
training. We evaluate our approach across multiple unseen datasets against
state-of-the-art depth models, and find that ours outperforms them in terms of
boundary sharpness and speed by a significant margin, while maintaining
competitive accuracy. We hope our model will enable various applications that
require high-resolution depth, such as video editing, and online
decision-making, such as robotics.
|
2110.03427 | Atanu Mandal | Atanu Mandal, Santanu Pal, Indranil Dutta, Mahidas Bhattacharya, Sudip
Kumar Naskar | Is Attention always needed? A Case Study on Language Identification from
Speech | Accepted for publication in Natural Language Engineering | Nat. lang. process. 31 (2025) 250-276 | 10.1017/nlp.2024.22 | null | cs.LG cs.CL cs.SD eess.AS eess.SP | http://creativecommons.org/licenses/by/4.0/ | Language Identification (LID) is a crucial preliminary process in the field
of Automatic Speech Recognition (ASR) that involves the identification of a
spoken language from audio samples. Contemporary systems that can process
speech in multiple languages require users to expressly designate one or more
languages prior to utilization. The LID task assumes a significant role in
scenarios where ASR systems are unable to comprehend the spoken language in
multilingual settings, leading to unsuccessful speech recognition outcomes. The
present study introduces a convolutional recurrent neural network (CRNN)-based
LID system, designed to operate on the Mel-frequency Cepstral Coefficient (MFCC)
characteristics of audio samples. Furthermore, we replicate certain
state-of-the-art methodologies, specifically the Convolutional Neural Network
(CNN) and Attention-based Convolutional Recurrent Neural Network (CRNN with
attention), and conduct a comparative analysis with our CRNN-based approach. We
conducted comprehensive evaluations on thirteen distinct Indian languages, and
our model achieved over 98% classification accuracy. The LID model exhibits
high performance, ranging from 97% to 100%, for languages that are
linguistically similar. The proposed LID model exhibits a high degree of
extensibility to additional languages and demonstrates a strong resistance to
noise, achieving 91.2% accuracy in a noisy setting when applied to a European
Language (EU) dataset.
| [
{
"version": "v1",
"created": "Tue, 5 Oct 2021 16:38:57 GMT"
},
{
"version": "v2",
"created": "Sun, 10 Jul 2022 03:47:05 GMT"
},
{
"version": "v3",
"created": "Wed, 25 Oct 2023 15:21:08 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Mandal",
"Atanu",
""
],
[
"Pal",
"Santanu",
""
],
[
"Dutta",
"Indranil",
""
],
[
"Bhattacharya",
"Mahidas",
""
],
[
"Naskar",
"Sudip Kumar",
""
]
] | TITLE: Is Attention always needed? A Case Study on Language Identification from
Speech
ABSTRACT: Language Identification (LID) is a crucial preliminary process in the field
of Automatic Speech Recognition (ASR) that involves the identification of a
spoken language from audio samples. Contemporary systems that can process
speech in multiple languages require users to expressly designate one or more
languages prior to utilization. The LID task assumes a significant role in
scenarios where ASR systems are unable to comprehend the spoken language in
multilingual settings, leading to unsuccessful speech recognition outcomes. The
present study introduces a convolutional recurrent neural network (CRNN)-based
LID system, designed to operate on the Mel-frequency Cepstral Coefficient (MFCC)
characteristics of audio samples. Furthermore, we replicate certain
state-of-the-art methodologies, specifically the Convolutional Neural Network
(CNN) and Attention-based Convolutional Recurrent Neural Network (CRNN with
attention), and conduct a comparative analysis with our CRNN-based approach. We
conducted comprehensive evaluations on thirteen distinct Indian languages, and
our model achieved over 98% classification accuracy. The LID model exhibits
high performance, ranging from 97% to 100%, for languages that are
linguistically similar. The proposed LID model exhibits a high degree of
extensibility to additional languages and demonstrates a strong resistance to
noise, achieving 91.2% accuracy in a noisy setting when applied to a European
Language (EU) dataset.
|
2111.13463 | Ivica Kostric | Ivica Kostric and Krisztian Balog and Filip Radlinski | Generating Usage-related Questions for Preference Elicitation in
Conversational Recommender Systems | Journal extension of our RecSys '21 paper titled "Soliciting User
Preferences in Conversational Recommender Systems via Usage-related
Questions." This version appears in ACM Transactions on Recommender Systems
(ToRS), 2(2), Article 12, April 2024, with expanded experiments and new
analysis | ACM Transactions on Recommender Systems (ToRS), Volume 2, Issue 2,
Article 12 (April 2024) | 10.1145/3629981 | null | cs.IR cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A key distinguishing feature of conversational recommender systems over
traditional recommender systems is their ability to elicit user preferences
using natural language. Currently, the predominant approach to preference
elicitation is to ask questions directly about items or item attributes. Users
searching for recommendations may not have deep knowledge of the available
options in a given domain. As such, they might not be aware of key attributes
or desirable values for them. However, in many settings, talking about the
planned use of items does not present any difficulties, even for those that are
new to a domain. In this paper, we propose a novel approach to preference
elicitation by asking implicit questions based on item usage. As one of the
main contributions of this work, we develop a multi-stage data annotation
protocol using crowdsourcing, to create a high-quality labeled training
dataset. Another main contribution is the development of four models for the
question generation task: two template-based baseline models and two neural
text-to-text models. The template-based models use heuristically extracted
common patterns found in the training data, while the neural models use the
training data to learn to generate questions automatically. Using common
metrics from machine translation for automatic evaluation, we show that our
approaches are effective in generating elicitation questions, even with limited
training data. We further employ human evaluation for comparing the generated
questions using both pointwise and pairwise evaluation designs. We find that
the human evaluation results are consistent with the automatic ones, allowing
us to draw conclusions about the quality of the generated questions with
certainty. Finally, we provide a detailed analysis of cases where the models
show their limitations.
| [
{
"version": "v1",
"created": "Fri, 26 Nov 2021 12:23:14 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 13:25:51 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Kostric",
"Ivica",
""
],
[
"Balog",
"Krisztian",
""
],
[
"Radlinski",
"Filip",
""
]
] | TITLE: Generating Usage-related Questions for Preference Elicitation in
Conversational Recommender Systems
ABSTRACT: A key distinguishing feature of conversational recommender systems over
traditional recommender systems is their ability to elicit user preferences
using natural language. Currently, the predominant approach to preference
elicitation is to ask questions directly about items or item attributes. Users
searching for recommendations may not have deep knowledge of the available
options in a given domain. As such, they might not be aware of key attributes
or desirable values for them. However, in many settings, talking about the
planned use of items does not present any difficulties, even for those that are
new to a domain. In this paper, we propose a novel approach to preference
elicitation by asking implicit questions based on item usage. As one of the
main contributions of this work, we develop a multi-stage data annotation
protocol using crowdsourcing, to create a high-quality labeled training
dataset. Another main contribution is the development of four models for the
question generation task: two template-based baseline models and two neural
text-to-text models. The template-based models use heuristically extracted
common patterns found in the training data, while the neural models use the
training data to learn to generate questions automatically. Using common
metrics from machine translation for automatic evaluation, we show that our
approaches are effective in generating elicitation questions, even with limited
training data. We further employ human evaluation for comparing the generated
questions using both pointwise and pairwise evaluation designs. We find that
the human evaluation results are consistent with the automatic ones, allowing
us to draw conclusions about the quality of the generated questions with
certainty. Finally, we provide a detailed analysis of cases where the models
show their limitations.
|
2210.15527 | Yun-Hin Chan | Yun-Hin Chan, Edith C.-H. Ngai | Exploiting Features and Logits in Heterogeneous Federated Learning | Accepted by Computer Networks | null | 10.1016/j.comnet.2025.111271 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Due to the rapid growth of IoT and artificial intelligence, deploying neural
networks on IoT devices is becoming increasingly crucial for edge intelligence.
Federated learning (FL) facilitates the management of edge devices to
collaboratively train a shared model while keeping training data local and
private. However, a general assumption in FL is that all edge devices are
trained on the same machine learning model, which may be impractical
considering diverse device capabilities. For instance, less capable devices may
slow down the updating process because they struggle to handle large models
appropriate for ordinary devices. In this paper, we propose a novel data-free
FL method that supports heterogeneous client models by managing features and
logits, called Felo, and its extension with a conditional VAE deployed on the
server, called Velo. Felo averages the mid-level features and logits from the
clients at the server based on their class labels to provide the average
features and logits, which are utilized for further training the client models.
Unlike Felo, the server has a conditional VAE in Velo, which is used for
training mid-level features and generating synthetic features according to the
labels. The clients optimize their models based on the synthetic features and
the average logits. We conduct experiments on two datasets and show that our
methods achieve satisfactory performance compared with state-of-the-art
methods.
| [
{
"version": "v1",
"created": "Thu, 27 Oct 2022 15:11:46 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 09:54:58 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Chan",
"Yun-Hin",
""
],
[
"Ngai",
"Edith C. -H.",
""
]
] | TITLE: Exploiting Features and Logits in Heterogeneous Federated Learning
ABSTRACT: Due to the rapid growth of IoT and artificial intelligence, deploying neural
networks on IoT devices is becoming increasingly crucial for edge intelligence.
Federated learning (FL) facilitates the management of edge devices to
collaboratively train a shared model while keeping training data local and
private. However, a general assumption in FL is that all edge devices are
trained on the same machine learning model, which may be impractical
considering diverse device capabilities. For instance, less capable devices may
slow down the updating process because they struggle to handle large models
appropriate for ordinary devices. In this paper, we propose a novel data-free
FL method that supports heterogeneous client models by managing features and
logits, called Felo, and its extension with a conditional VAE deployed on the
server, called Velo. Felo averages the mid-level features and logits from the
clients at the server based on their class labels to provide the average
features and logits, which are utilized for further training the client models.
Unlike Felo, the server has a conditional VAE in Velo, which is used for
training mid-level features and generating synthetic features according to the
labels. The clients optimize their models based on the synthetic features and
the average logits. We conduct experiments on two datasets and show that our
methods achieve satisfactory performance compared with state-of-the-art
methods.
|
2301.00539 | Sudhansu Bala Das | Sudhansu Bala Das, Divyajoti Panda, Tapas Kumar Mishra, Bidyut Kr.
Patra | Statistical Machine Translation for Indic Languages | 32 pages, 1 figure, 4 tables | Nat. lang. process. 31 (2025) 328-345 | 10.1017/nlp.2024.26 | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | A Machine Translation (MT) system generally aims to automatically render a
source language into a target language while retaining the original context,
using various Natural Language Processing (NLP) techniques. Among various NLP
methods, Statistical Machine Translation (SMT) uses probabilistic and
statistical techniques to analyze and convert information. This paper covers
the development of bilingual SMT models for translating English to fifteen
low-resource Indian Languages (ILs) and vice versa. At the outset, all 15
languages are briefly described with respect to our experimental needs.
Further, a detailed analysis of the Samanantar and OPUS datasets for model
building, along with the standard benchmark dataset (Flores-200) for
fine-tuning and testing, is carried out as part of our experiment. Different
preprocessing approaches are proposed in this paper to handle the noise in the
dataset. To create the system, the MOSES open-source SMT toolkit is explored.
Distance reordering is utilized with the aim of understanding the rules of
grammar and context-dependent adjustments through a phrase reordering
categorization framework. In our experiments, the quality of the translation is
evaluated using standard metrics such as BLEU, METEOR, and RIBES.
| [
{
"version": "v1",
"created": "Mon, 2 Jan 2023 06:23:12 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Das",
"Sudhansu Bala",
""
],
[
"Panda",
"Divyajoti",
""
],
[
"Mishra",
"Tapas Kumar",
""
],
[
"Patra",
"Bidyut Kr.",
""
]
] | TITLE: Statistical Machine Translation for Indic Languages
ABSTRACT: A Machine Translation (MT) system generally aims to automatically
render a source language into a target language while retaining the original
context, using various Natural Language Processing (NLP) techniques. Among
various NLP methods, Statistical Machine Translation (SMT) uses probabilistic
and statistical techniques to analyze and convert information. This paper
covers the development of bilingual SMT models for translating English to
fifteen low-resource Indian Languages (ILs) and vice versa. At the outset, all
15 languages are briefly described with respect to our experimental needs.
Further, a detailed analysis of the Samanantar and OPUS datasets for model
building, along with the standard benchmark dataset (Flores-200) for
fine-tuning and testing, is carried out as part of our experiment. Different
preprocessing approaches are proposed in this paper to handle the noise in the
dataset. To create the system, the MOSES open-source SMT toolkit is explored.
Distance reordering is utilized with the aim of understanding the rules of
grammar and context-dependent adjustments through a phrase reordering
categorization framework. In our experiments, the quality of the translation is
evaluated using standard metrics such as BLEU, METEOR, and RIBES.
|
2301.06650 | Lijun Sun Dr. | Vincent Zhihao Zheng, Seongjin Choi, Lijun Sun | Probabilistic Traffic Forecasting with Dynamic Regression | null | Probabilistic Traffic Forecasting with Dynamic Regression.
Transportation Science (2025) | 10.1287/trsc.2024.0560 | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a dynamic regression (DR) framework that enhances
existing deep spatiotemporal models by incorporating structured learning for
the error process in traffic forecasting. The framework relaxes the assumption
of time independence by modeling the error series of the base model (i.e., a
well-established traffic forecasting model) using a matrix-variate
autoregressive (AR) model. The AR model is integrated into training by
redesigning the loss function. The newly designed loss function is based on the
likelihood of a non-isotropic error term, enabling the model to generate
probabilistic forecasts while preserving the original outputs of the base
model. Importantly, the additional parameters introduced by the DR framework
can be jointly optimized alongside the base model. Evaluation on
state-of-the-art (SOTA) traffic forecasting models using speed and flow
datasets demonstrates improved performance, with interpretable AR coefficients
and spatiotemporal covariance matrices enhancing the understanding of the
model.
| [
{
"version": "v1",
"created": "Tue, 17 Jan 2023 01:12:44 GMT"
},
{
"version": "v2",
"created": "Fri, 31 May 2024 15:05:40 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Apr 2025 14:26:10 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Zheng",
"Vincent Zhihao",
""
],
[
"Choi",
"Seongjin",
""
],
[
"Sun",
"Lijun",
""
]
] | TITLE: Probabilistic Traffic Forecasting with Dynamic Regression
ABSTRACT: This paper proposes a dynamic regression (DR) framework that enhances
existing deep spatiotemporal models by incorporating structured learning for
the error process in traffic forecasting. The framework relaxes the assumption
of time independence by modeling the error series of the base model (i.e., a
well-established traffic forecasting model) using a matrix-variate
autoregressive (AR) model. The AR model is integrated into training by
redesigning the loss function. The newly designed loss function is based on the
likelihood of a non-isotropic error term, enabling the model to generate
probabilistic forecasts while preserving the original outputs of the base
model. Importantly, the additional parameters introduced by the DR framework
can be jointly optimized alongside the base model. Evaluation on
state-of-the-art (SOTA) traffic forecasting models using speed and flow
datasets demonstrates improved performance, with interpretable AR coefficients
and spatiotemporal covariance matrices enhancing the understanding of the
model.
|
2305.15203 | Lorenzo Basile | Lorenzo Basile, Nikos Karantzas, Alberto d'Onofrio, Luca Manzoni, Luca
Bortolussi, Alex Rodriguez, Fabio Anselmi | Frequency maps reveal the correlation between Adversarial Attacks and
Implicit Bias | Accepted at IJCNN 2025 | null | null | null | cs.LG cs.AI cs.CR stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite their impressive performance in classification tasks, neural networks
are known to be vulnerable to adversarial attacks, subtle perturbations of the
input data designed to deceive the model. In this work, we investigate the
correlation between these perturbations and the implicit bias of neural
networks trained with gradient-based algorithms. To this end, we analyse a
representation of the network's implicit bias through the lens of the Fourier
transform. Specifically, we identify unique fingerprints of implicit bias and
adversarial attacks by calculating the minimal, essential frequencies needed
for accurate classification of each image, as well as the frequencies that
drive misclassification in its adversarially perturbed counterpart. This
approach enables us to uncover and analyse the correlation between these
essential frequencies, providing a precise map of how the network's biases
align or contrast with the frequency components exploited by adversarial
attacks. To this end, among other methods, we use a newly introduced technique
capable of detecting nonlinear correlations between high-dimensional datasets.
Our results provide empirical evidence that the network bias in Fourier space
and the target frequencies of adversarial attacks are highly correlated and
suggest new potential strategies for adversarial defence.
| [
{
"version": "v1",
"created": "Wed, 24 May 2023 14:40:23 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Jul 2024 16:34:48 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Apr 2025 14:29:39 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Basile",
"Lorenzo",
""
],
[
"Karantzas",
"Nikos",
""
],
[
"d'Onofrio",
"Alberto",
""
],
[
"Manzoni",
"Luca",
""
],
[
"Bortolussi",
"Luca",
""
],
[
"Rodriguez",
"Alex",
""
],
[
"Anselmi",
"Fabio",
""
]
] | TITLE: Frequency maps reveal the correlation between Adversarial Attacks and
Implicit Bias
ABSTRACT: Despite their impressive performance in classification tasks, neural networks
are known to be vulnerable to adversarial attacks, subtle perturbations of the
input data designed to deceive the model. In this work, we investigate the
correlation between these perturbations and the implicit bias of neural
networks trained with gradient-based algorithms. To this end, we analyse a
representation of the network's implicit bias through the lens of the Fourier
transform. Specifically, we identify unique fingerprints of implicit bias and
adversarial attacks by calculating the minimal, essential frequencies needed
for accurate classification of each image, as well as the frequencies that
drive misclassification in its adversarially perturbed counterpart. This
approach enables us to uncover and analyse the correlation between these
essential frequencies, providing a precise map of how the network's biases
align or contrast with the frequency components exploited by adversarial
attacks. To this end, among other methods, we use a newly introduced technique
capable of detecting nonlinear correlations between high-dimensional datasets.
Our results provide empirical evidence that the network bias in Fourier space
and the target frequencies of adversarial attacks are highly correlated and
suggest new potential strategies for adversarial defence.
|
2310.16810 | Yongxin Zhou | Yongxin Zhou, Fabien Ringeval, Fran\c{c}ois Portet | Can GPT models Follow Human Summarization Guidelines? A Study for
Targeted Communication Goals | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This study investigates the ability of GPT models (ChatGPT, GPT-4 and GPT-4o)
to generate dialogue summaries that adhere to human guidelines. Our evaluation
involved experimenting with various prompts to guide the models in complying
with guidelines on two datasets: DialogSum (English social conversations) and
DECODA (French call center interactions). Human evaluation, based on
summarization guidelines, served as the primary assessment method, complemented
by extensive quantitative and qualitative analyses. Our findings reveal a
preference for GPT-generated summaries over those from task-specific
pre-trained models and reference summaries, highlighting GPT models' ability to
follow human guidelines despite occasionally producing longer outputs and
exhibiting divergent lexical and structural alignment with references. The
discrepancy between ROUGE, BERTScore, and human evaluation underscores the need
for more reliable automatic evaluation metrics.
| [
{
"version": "v1",
"created": "Wed, 25 Oct 2023 17:39:07 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 21:42:15 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Zhou",
"Yongxin",
""
],
[
"Ringeval",
"Fabien",
""
],
[
"Portet",
"François",
""
]
] | TITLE: Can GPT models Follow Human Summarization Guidelines? A Study for
Targeted Communication Goals
ABSTRACT: This study investigates the ability of GPT models (ChatGPT, GPT-4 and GPT-4o)
to generate dialogue summaries that adhere to human guidelines. Our evaluation
involved experimenting with various prompts to guide the models in complying
with guidelines on two datasets: DialogSum (English social conversations) and
DECODA (French call center interactions). Human evaluation, based on
summarization guidelines, served as the primary assessment method, complemented
by extensive quantitative and qualitative analyses. Our findings reveal a
preference for GPT-generated summaries over those from task-specific
pre-trained models and reference summaries, highlighting GPT models' ability to
follow human guidelines despite occasionally producing longer outputs and
exhibiting divergent lexical and structural alignment with references. The
discrepancy between ROUGE, BERTScore, and human evaluation underscores the need
for more reliable automatic evaluation metrics.
|
2311.01759 | Jianlei Yang | Jianlei Yang, Jiacheng Liao, Fanding Lei, Meichen Liu, Junyi Chen,
Lingkun Long, Han Wan, Bei Yu, Weisheng Zhao | TinyFormer: Efficient Transformer Design and Deployment on Tiny Devices | This work has been submitted to the IEEE for possible publication | null | null | null | cs.LG cs.AR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Developing deep learning models on tiny devices (e.g. Microcontroller units,
MCUs) has attracted much attention in various embedded IoT applications.
However, it is challenging to efficiently design and deploy recent advanced
models (e.g. transformers) on tiny devices due to their severe hardware
resource constraints. In this work, we propose TinyFormer, a framework
specifically designed to develop and deploy resource-efficient transformers on
MCUs. TinyFormer mainly consists of SuperNAS, SparseNAS and SparseEngine.
Separately, SuperNAS aims to search for an appropriate supernet from a vast
search space. SparseNAS evaluates the best sparse single-path model including
transformer architecture from the identified supernet. Finally, SparseEngine
efficiently deploys the searched sparse models onto MCUs. To the best of our
knowledge, SparseEngine is the first deployment framework capable of performing
inference of sparse models with transformer on MCUs. Evaluation results on the
CIFAR-10 dataset demonstrate that TinyFormer can develop efficient transformers
with an accuracy of 96.1% while adhering to hardware constraints of 1MB storage
and 320KB memory. Additionally, TinyFormer achieves significant speedups in
sparse inference, up to 12.2x, when compared to the CMSIS-NN library.
TinyFormer is believed to bring powerful transformers into TinyML scenarios and
greatly expand the scope of deep learning applications.
| [
{
"version": "v1",
"created": "Fri, 3 Nov 2023 07:34:47 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 11:42:15 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Yang",
"Jianlei",
""
],
[
"Liao",
"Jiacheng",
""
],
[
"Lei",
"Fanding",
""
],
[
"Liu",
"Meichen",
""
],
[
"Chen",
"Junyi",
""
],
[
"Long",
"Lingkun",
""
],
[
"Wan",
"Han",
""
],
[
"Yu",
"Bei",
""
],
[
"Zhao",
"Weisheng",
""
]
] | TITLE: TinyFormer: Efficient Transformer Design and Deployment on Tiny Devices
ABSTRACT: Developing deep learning models on tiny devices (e.g. Microcontroller units,
MCUs) has attracted much attention in various embedded IoT applications.
However, it is challenging to efficiently design and deploy recent advanced
models (e.g. transformers) on tiny devices due to their severe hardware
resource constraints. In this work, we propose TinyFormer, a framework
specifically designed to develop and deploy resource-efficient transformers on
MCUs. TinyFormer mainly consists of SuperNAS, SparseNAS and SparseEngine.
Separately, SuperNAS aims to search for an appropriate supernet from a vast
search space. SparseNAS evaluates the best sparse single-path model including
transformer architecture from the identified supernet. Finally, SparseEngine
efficiently deploys the searched sparse models onto MCUs. To the best of our
knowledge, SparseEngine is the first deployment framework capable of performing
inference of sparse models with transformer on MCUs. Evaluation results on the
CIFAR-10 dataset demonstrate that TinyFormer can develop efficient transformers
with an accuracy of 96.1% while adhering to hardware constraints of 1MB storage
and 320KB memory. Additionally, TinyFormer achieves significant speedups in
sparse inference, up to 12.2x, when compared to the CMSIS-NN library.
TinyFormer is believed to bring powerful transformers into TinyML scenarios and
greatly expand the scope of deep learning applications.
|
2311.18681 | Chantal Pellegrini | Chantal Pellegrini, Ege \"Ozsoy, Benjamin Busam, Nassir Navab,
Matthias Keicher | RaDialog: A Large Vision-Language Model for Radiology Report Generation
and Conversational Assistance | improved version accepted at MIDL 2025:
https://openreview.net/pdf?id=trUvr1gSNI | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conversational AI tools that can generate and discuss clinically correct
radiology reports for a given medical image have the potential to transform
radiology. Such a human-in-the-loop radiology assistant could facilitate a
collaborative diagnostic process, thus saving time and improving the quality of
reports. Towards this goal, we introduce RaDialog, the first thoroughly
evaluated and publicly available large vision-language model for radiology
report generation and interactive dialog. RaDialog effectively integrates
visual image features and structured pathology findings with a large language
model (LLM) while simultaneously adapting it to a specialized domain using
parameter-efficient fine-tuning. To keep the conversational abilities of the
underlying LLM, we propose a comprehensive, semi-automatically labeled,
image-grounded instruct dataset for chest X-ray radiology tasks. By training
with this dataset, our method achieves state-of-the-art clinical correctness in
report generation and shows impressive abilities in interactive tasks such as
correcting reports and answering questions, serving as a foundational step
toward clinical dialog systems. Our code is available on GitHub:
https://github.com/ChantalMP/RaDialog.
| [
{
"version": "v1",
"created": "Thu, 30 Nov 2023 16:28:40 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 07:32:34 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Pellegrini",
"Chantal",
""
],
[
"Özsoy",
"Ege",
""
],
[
"Busam",
"Benjamin",
""
],
[
"Navab",
"Nassir",
""
],
[
"Keicher",
"Matthias",
""
]
] | TITLE: RaDialog: A Large Vision-Language Model for Radiology Report Generation
and Conversational Assistance
ABSTRACT: Conversational AI tools that can generate and discuss clinically correct
radiology reports for a given medical image have the potential to transform
radiology. Such a human-in-the-loop radiology assistant could facilitate a
collaborative diagnostic process, thus saving time and improving the quality of
reports. Towards this goal, we introduce RaDialog, the first thoroughly
evaluated and publicly available large vision-language model for radiology
report generation and interactive dialog. RaDialog effectively integrates
visual image features and structured pathology findings with a large language
model (LLM) while simultaneously adapting it to a specialized domain using
parameter-efficient fine-tuning. To keep the conversational abilities of the
underlying LLM, we propose a comprehensive, semi-automatically labeled,
image-grounded instruct dataset for chest X-ray radiology tasks. By training
with this dataset, our method achieves state-of-the-art clinical correctness in
report generation and shows impressive abilities in interactive tasks such as
correcting reports and answering questions, serving as a foundational step
toward clinical dialog systems. Our code is available on GitHub:
https://github.com/ChantalMP/RaDialog.
|
2312.16379 | Alexey Melnikov | Asel Sagingalieva, Stefan Komornyik, Ayush Joshi, Christopher Mansell,
Karan Pinto, Markus Pflitsch, and Alexey Melnikov | Photovoltaic power forecasting using quantum machine learning | 12 pages, 4 figures, 1 table | null | null | null | cs.LG cs.ET quant-ph | http://creativecommons.org/licenses/by/4.0/ | Predicting solar panel power output is crucial for advancing the transition
to renewable energy but is complicated by the variable and non-linear nature of
solar energy. This is influenced by numerous meteorological factors,
geographical positioning, and photovoltaic cell properties, posing significant
challenges to forecasting accuracy and grid stability. Our study introduces a
suite of solutions centered around hybrid quantum neural networks designed to
tackle these complexities. The first proposed model, the Hybrid Quantum Long
Short-Term Memory, surpasses all tested models by achieving mean absolute
errors and mean squared errors that are more than 40% lower. The second
proposed model, the Hybrid Quantum Sequence-to-Sequence neural network, once
trained, predicts photovoltaic power with 16% lower mean absolute error for
arbitrary time intervals without the need for prior meteorological data,
highlighting its versatility. Moreover, our hybrid models perform better even
when trained on limited datasets, underlining their potential utility in
data-scarce scenarios. These findings represent progress towards resolving time
series prediction challenges in energy forecasting through hybrid quantum
models, showcasing the transformative potential of quantum machine learning in
catalyzing the renewable energy transition.
| [
{
"version": "v1",
"created": "Wed, 27 Dec 2023 02:37:46 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 22:55:21 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Sagingalieva",
"Asel",
""
],
[
"Komornyik",
"Stefan",
""
],
[
"Joshi",
"Ayush",
""
],
[
"Mansell",
"Christopher",
""
],
[
"Pinto",
"Karan",
""
],
[
"Pflitsch",
"Markus",
""
],
[
"Melnikov",
"Alexey",
""
]
] | TITLE: Photovoltaic power forecasting using quantum machine learning
ABSTRACT: Predicting solar panel power output is crucial for advancing the transition
to renewable energy but is complicated by the variable and non-linear nature of
solar energy. This is influenced by numerous meteorological factors,
geographical positioning, and photovoltaic cell properties, posing significant
challenges to forecasting accuracy and grid stability. Our study introduces a
suite of solutions centered around hybrid quantum neural networks designed to
tackle these complexities. The first proposed model, the Hybrid Quantum Long
Short-Term Memory, surpasses all tested models by achieving mean absolute
errors and mean squared errors that are more than 40% lower. The second
proposed model, the Hybrid Quantum Sequence-to-Sequence neural network, once
trained, predicts photovoltaic power with 16% lower mean absolute error for
arbitrary time intervals without the need for prior meteorological data,
highlighting its versatility. Moreover, our hybrid models perform better even
when trained on limited datasets, underlining their potential utility in
data-scarce scenarios. These findings represent progress towards resolving time
series prediction challenges in energy forecasting through hybrid quantum
models, showcasing the transformative potential of quantum machine learning in
catalyzing the renewable energy transition.
|
2402.04051 | Akira Ito | Akira Ito, Masanori Yamada, Atsutoshi Kumagai | Analysis of Linear Mode Connectivity via Permutation-Based Weight
Matching: With Insights into Other Permutation Search Methods | In Proceedings of the Thirteenth International Conference on Learning
Representations (ICLR 2025) | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, Ainsworth et al. showed that using weight matching (WM) to minimize
the $L^2$ distance in a permutation search of model parameters effectively
identifies permutations that satisfy linear mode connectivity (LMC), where the
loss along a linear path between two independently trained models with
different seeds remains nearly constant. This paper analyzes LMC using WM,
which is useful for understanding stochastic gradient descent's effectiveness
and its application in areas like model merging. We first empirically show that
permutations found by WM do not significantly reduce the $L^2$ distance between
two models, and the occurrence of LMC is not merely due to distance reduction
by WM itself. We then demonstrate that permutations can change the directions
of the singular vectors, but not the singular values, of the weight matrices in
each layer. This finding shows that permutations found by WM primarily align
the directions of singular vectors associated with large singular values across
models. This alignment brings the singular vectors with large singular values,
which determine the model's functionality, closer between the original and
merged models, allowing the merged model to retain functionality similar to the
original models, thereby satisfying LMC. This paper also analyzes activation
matching (AM) in terms of singular vectors and finds that the principle of AM
is likely the same as that of WM. Finally, we analyze the difference between WM
and the straight-through estimator (STE), a dataset-dependent permutation
search method, and show that WM can be more advantageous than STE in achieving
LMC among three or more models.
| [
{
"version": "v1",
"created": "Tue, 6 Feb 2024 14:53:28 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Feb 2024 10:36:25 GMT"
},
{
"version": "v3",
"created": "Mon, 15 Apr 2024 05:57:26 GMT"
},
{
"version": "v4",
"created": "Thu, 3 Oct 2024 11:36:28 GMT"
},
{
"version": "v5",
"created": "Tue, 8 Apr 2025 02:23:05 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Ito",
"Akira",
""
],
[
"Yamada",
"Masanori",
""
],
[
"Kumagai",
"Atsutoshi",
""
]
] | TITLE: Analysis of Linear Mode Connectivity via Permutation-Based Weight
Matching: With Insights into Other Permutation Search Methods
ABSTRACT: Recently, Ainsworth et al. showed that using weight matching (WM) to minimize
the $L^2$ distance in a permutation search of model parameters effectively
identifies permutations that satisfy linear mode connectivity (LMC), where the
loss along a linear path between two independently trained models with
different seeds remains nearly constant. This paper analyzes LMC using WM,
which is useful for understanding stochastic gradient descent's effectiveness
and its application in areas like model merging. We first empirically show that
permutations found by WM do not significantly reduce the $L^2$ distance between
two models, and the occurrence of LMC is not merely due to distance reduction
by WM itself. We then demonstrate that permutations can change the directions
of the singular vectors, but not the singular values, of the weight matrices in
each layer. This finding shows that permutations found by WM primarily align
the directions of singular vectors associated with large singular values across
models. This alignment brings the singular vectors with large singular values,
which determine the model's functionality, closer between the original and
merged models, allowing the merged model to retain functionality similar to the
original models, thereby satisfying LMC. This paper also analyzes activation
matching (AM) in terms of singular vectors and finds that the principle of AM
is likely the same as that of WM. Finally, we analyze the difference between WM
and the straight-through estimator (STE), a dataset-dependent permutation
search method, and show that WM can be more advantageous than STE in achieving
LMC among three or more models.
|
2403.02437 | Hyejun Jeong | Hyejun Jeong, Shiqing Ma, Amir Houmansadr | A Survey on Federated Unlearning: Challenges and Opportunities | null | null | null | null | cs.LG cs.AI cs.DC | http://creativecommons.org/licenses/by/4.0/ | Federated learning (FL), introduced in 2017, facilitates collaborative
learning between non-trusting parties with no need for the parties to
explicitly share their data among themselves. This allows training models on
user data while respecting privacy regulations such as GDPR and CPRA. However,
emerging privacy requirements may mandate model owners to be able to
\emph{forget} some learned data, e.g., when requested by data owners or law
enforcement. This has given birth to an active field of research called
\emph{machine unlearning}. In the context of FL, many techniques developed for
unlearning in centralized settings are not trivially applicable. This is due to
the unique differences between centralized and distributed learning, in
particular, interactivity, stochasticity, heterogeneity, and limited
accessibility in FL. In response, a recent line of work has focused on
developing unlearning mechanisms tailored to FL.
This SoK paper aims to take a deep look at the \emph{federated unlearning}
literature, with the goal of identifying research trends and challenges in this
emerging field. By carefully categorizing papers published on FL unlearning
(since 2020), we aim to pinpoint the unique complexities of federated
unlearning, highlighting limitations on directly applying centralized
unlearning methods. We compare existing federated unlearning methods regarding
influence removal and performance recovery, compare their threat models and
assumptions, and discuss their implications and limitations. For instance, we
analyze the experimental setup of FL unlearning studies from various
perspectives, including data heterogeneity and its simulation, the datasets
used for demonstration, and evaluation metrics. Our work aims to offer insights
and suggestions for future research on federated unlearning.
| [
{
"version": "v1",
"created": "Mon, 4 Mar 2024 19:35:08 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Jun 2024 19:00:03 GMT"
},
{
"version": "v3",
"created": "Mon, 7 Apr 2025 19:55:57 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Jeong",
"Hyejun",
""
],
[
"Ma",
"Shiqing",
""
],
[
"Houmansadr",
"Amir",
""
]
] | TITLE: A Survey on Federated Unlearning: Challenges and Opportunities
ABSTRACT: Federated learning (FL), introduced in 2017, facilitates collaborative
learning between non-trusting parties with no need for the parties to
explicitly share their data among themselves. This allows training models on
user data while respecting privacy regulations such as GDPR and CPRA. However,
emerging privacy requirements may mandate model owners to be able to
\emph{forget} some learned data, e.g., when requested by data owners or law
enforcement. This has given birth to an active field of research called
\emph{machine unlearning}. In the context of FL, many techniques developed for
unlearning in centralized settings are not trivially applicable. This is due to
the unique differences between centralized and distributed learning, in
particular, interactivity, stochasticity, heterogeneity, and limited
accessibility in FL. In response, a recent line of work has focused on
developing unlearning mechanisms tailored to FL.
This SoK paper aims to take a deep look at the \emph{federated unlearning}
literature, with the goal of identifying research trends and challenges in this
emerging field. By carefully categorizing papers published on FL unlearning
(since 2020), we aim to pinpoint the unique complexities of federated
unlearning, highlighting limitations on directly applying centralized
unlearning methods. We compare existing federated unlearning methods regarding
influence removal and performance recovery, compare their threat models and
assumptions, and discuss their implications and limitations. For instance, we
analyze the experimental setup of FL unlearning studies from various
perspectives, including data heterogeneity and its simulation, the datasets
used for demonstration, and evaluation metrics. Our work aims to offer insights
and suggestions for future research on federated unlearning.
|
2404.03543 | JiaWei Guo | Jiawei Guo, Ziming Li, Xueling Liu, Kaijing Ma, Tianyu Zheng,
Zhouliang Yu, Ding Pan, Yizhi LI, Ruibo Liu, Yue Wang, Shuyue Guo, Xingwei
Qu, Xiang Yue, Ge Zhang, Wenhu Chen, Jie Fu | CodeEditorBench: Evaluating Code Editing Capability of Large Language
Models | null | null | null | null | cs.SE cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) for code are rapidly evolving, with code editing
emerging as a critical capability. We introduce CodeEditorBench, an evaluation
framework designed to rigorously assess the performance of LLMs in code editing
tasks, including debugging, translating, polishing, and requirement switching.
Unlike existing benchmarks focusing solely on code generation, CodeEditorBench
emphasizes real-world scenarios and practical aspects of software development.
We curate diverse coding challenges and scenarios from five sources, covering
various programming languages, complexity levels, and editing tasks. Evaluation
of 19 LLMs reveals that closed-source models (particularly Gemini-Ultra and
GPT-4) outperform open-source models in CodeEditorBench, highlighting
differences in model performance based on problem types and prompt
sensitivities. CodeEditorBench aims to catalyze advancements in LLMs by
providing a robust platform for assessing code editing capabilities. We will
release all prompts and datasets to enable the community to expand the dataset
and benchmark emerging LLMs. By introducing CodeEditorBench, we contribute to
the advancement of LLMs in code editing and provide a valuable resource for
researchers and practitioners.
| [
{
"version": "v1",
"created": "Thu, 4 Apr 2024 15:49:49 GMT"
},
{
"version": "v2",
"created": "Sat, 6 Apr 2024 04:29:25 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Apr 2025 09:39:25 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Guo",
"Jiawei",
""
],
[
"Li",
"Ziming",
""
],
[
"Liu",
"Xueling",
""
],
[
"Ma",
"Kaijing",
""
],
[
"Zheng",
"Tianyu",
""
],
[
"Yu",
"Zhouliang",
""
],
[
"Pan",
"Ding",
""
],
[
"LI",
"Yizhi",
""
],
[
"Liu",
"Ruibo",
""
],
[
"Wang",
"Yue",
""
],
[
"Guo",
"Shuyue",
""
],
[
"Qu",
"Xingwei",
""
],
[
"Yue",
"Xiang",
""
],
[
"Zhang",
"Ge",
""
],
[
"Chen",
"Wenhu",
""
],
[
"Fu",
"Jie",
""
]
] | TITLE: CodeEditorBench: Evaluating Code Editing Capability of Large Language
Models
ABSTRACT: Large Language Models (LLMs) for code are rapidly evolving, with code editing
emerging as a critical capability. We introduce CodeEditorBench, an evaluation
framework designed to rigorously assess the performance of LLMs in code editing
tasks, including debugging, translating, polishing, and requirement switching.
Unlike existing benchmarks focusing solely on code generation, CodeEditorBench
emphasizes real-world scenarios and practical aspects of software development.
We curate diverse coding challenges and scenarios from five sources, covering
various programming languages, complexity levels, and editing tasks. Evaluation
of 19 LLMs reveals that closed-source models (particularly Gemini-Ultra and
GPT-4) outperform open-source models in CodeEditorBench, highlighting
differences in model performance based on problem types and prompt
sensitivities. CodeEditorBench aims to catalyze advancements in LLMs by
providing a robust platform for assessing code editing capabilities. We will
release all prompts and datasets to enable the community to expand the dataset
and benchmark emerging LLMs. By introducing CodeEditorBench, we contribute to
the advancement of LLMs in code editing and provide a valuable resource for
researchers and practitioners.
|
2405.10577 | Yizhe Zhao | Zhe Huang, Yizhe Zhao, Hao Xiao, Chenyan Wu, Lingting Ge | DuoSpaceNet: Leveraging Both Bird's-Eye-View and Perspective View
Representations for 3D Object Detection | CVPR 2025 Workshop on Autonomous Driving (WAD) | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-view camera-only 3D object detection largely follows two primary
paradigms: exploiting bird's-eye-view (BEV) representations or focusing on
perspective-view (PV) features, each with distinct advantages. Although several
recent approaches explore combining BEV and PV, many rely on partial fusion or
maintain separate detection heads. In this paper, we propose DuoSpaceNet, a
novel framework that fully unifies BEV and PV feature spaces within a single
detection pipeline for comprehensive 3D perception. Our design includes a
decoder to integrate BEV and PV features into unified detection queries, as
well as a feature enhancement strategy that enriches different feature
representations. In addition, DuoSpaceNet can be extended to handle multi-frame
inputs, enabling more robust temporal analysis. Extensive experiments on
the nuScenes dataset show that DuoSpaceNet surpasses both BEV-based baselines
(e.g., BEVFormer) and PV-based baselines (e.g., Sparse4D) in 3D object
detection and BEV map segmentation, verifying the effectiveness of our proposed
design.
| [
{
"version": "v1",
"created": "Fri, 17 May 2024 07:04:29 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Aug 2024 02:09:11 GMT"
},
{
"version": "v3",
"created": "Mon, 7 Apr 2025 18:00:17 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Huang",
"Zhe",
""
],
[
"Zhao",
"Yizhe",
""
],
[
"Xiao",
"Hao",
""
],
[
"Wu",
"Chenyan",
""
],
[
"Ge",
"Lingting",
""
]
] | TITLE: DuoSpaceNet: Leveraging Both Bird's-Eye-View and Perspective View
Representations for 3D Object Detection
ABSTRACT: Multi-view camera-only 3D object detection largely follows two primary
paradigms: exploiting bird's-eye-view (BEV) representations or focusing on
perspective-view (PV) features, each with distinct advantages. Although several
recent approaches explore combining BEV and PV, many rely on partial fusion or
maintain separate detection heads. In this paper, we propose DuoSpaceNet, a
novel framework that fully unifies BEV and PV feature spaces within a single
detection pipeline for comprehensive 3D perception. Our design includes a
decoder to integrate BEV and PV features into unified detection queries, as
well as a feature enhancement strategy that enriches different feature
representations. In addition, DuoSpaceNet can be extended to handle multi-frame
inputs, enabling more robust temporal analysis. Extensive experiments on
the nuScenes dataset show that DuoSpaceNet surpasses both BEV-based baselines
(e.g., BEVFormer) and PV-based baselines (e.g., Sparse4D) in 3D object
detection and BEV map segmentation, verifying the effectiveness of our proposed
design.
|
2405.13955 | Xiaoshan Zhou | Xiaoshan Zhou, Carol C. Menassa, and Vineet R. Kamat | Decoding Brain Dynamics in Motor Planning Based on EEG Microstates for
Predicting Pedestrian Road-Crossing in Vehicle-to-Everything Architectures | 38 pages, 11 figures | null | null | null | cs.HC cs.ET | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pedestrians who cross roads often emerge from occlusion or abruptly begin
crossing from a standstill, frequently leading to unintended collisions with
vehicular traffic that result in accidents and interruptions. Existing studies
have predominantly relied on external network sensing and observational data to
anticipate pedestrian motion. However, these methods are post hoc, reducing the
vehicles' ability to respond in a timely manner. This study addresses these
gaps by introducing a novel data stream and analytical framework derived from
pedestrians' wearable electroencephalogram (EEG) signals to predict motor
planning in road crossings. Experiments were conducted where participants were
embodied in a visual avatar as pedestrians and interacted with varying traffic
volumes, marked crosswalks, and traffic signals. To understand how human
cognitive modules flexibly interplay with hemispheric asymmetries in functional
specialization, we analyzed time-frequency representation and functional
connectivity using collected EEG signals and constructed a Gaussian Hidden
Markov Model to decompose EEG sequences into cognitive microstate transitions
based on posterior probabilistic reasoning. Subsequently, datasets were
constructed using a sliding window approach, and motor readiness was predicted
using the K-nearest Neighbors algorithm combined with Dynamic Time Warping.
Results showed that high-beta oscillations in the frontocentral cortex achieved
an Area Under the Curve of 0.91 with approximately a 1-second anticipatory lead
window before physical road crossing movement occurred. These preliminary
results signify a transformative shift towards pedestrians proactively
signaling their motor intentions to autonomous vehicles within intelligent V2X
systems. The proposed framework is also adaptable to various human-robot
interactions, enabling seamless collaboration in dynamic mobile environments.
| [
{
"version": "v1",
"created": "Wed, 22 May 2024 19:40:37 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 19:58:30 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Zhou",
"Xiaoshan",
""
],
[
"Menassa",
"Carol C.",
""
],
[
"Kamat",
"Vineet R.",
""
]
] | TITLE: Decoding Brain Dynamics in Motor Planning Based on EEG Microstates for
Predicting Pedestrian Road-Crossing in Vehicle-to-Everything Architectures
ABSTRACT: Pedestrians who cross roads often emerge from occlusion or abruptly begin
crossing from a standstill, frequently leading to unintended collisions with
vehicular traffic that result in accidents and interruptions. Existing studies
have predominantly relied on external network sensing and observational data to
anticipate pedestrian motion. However, these methods are post hoc, reducing the
vehicles' ability to respond in a timely manner. This study addresses these
gaps by introducing a novel data stream and analytical framework derived from
pedestrians' wearable electroencephalogram (EEG) signals to predict motor
planning in road crossings. Experiments were conducted where participants were
embodied in a visual avatar as pedestrians and interacted with varying traffic
volumes, marked crosswalks, and traffic signals. To understand how human
cognitive modules flexibly interplay with hemispheric asymmetries in functional
specialization, we analyzed time-frequency representation and functional
connectivity using collected EEG signals and constructed a Gaussian Hidden
Markov Model to decompose EEG sequences into cognitive microstate transitions
based on posterior probabilistic reasoning. Subsequently, datasets were
constructed using a sliding window approach, and motor readiness was predicted
using the K-nearest Neighbors algorithm combined with Dynamic Time Warping.
Results showed that high-beta oscillations in the frontocentral cortex achieved
an Area Under the Curve of 0.91 with approximately a 1-second anticipatory lead
window before physical road crossing movement occurred. These preliminary
results signify a transformative shift towards pedestrians proactively
signaling their motor intentions to autonomous vehicles within intelligent V2X
systems. The proposed framework is also adaptable to various human-robot
interactions, enabling seamless collaboration in dynamic mobile environments.
|
2405.13983 | Anton Morgunov | Yu Shee, Anton Morgunov, Haote Li, Victor S. Batista | DirectMultiStep: Direct Route Generation for Multistep Retrosynthesis | null | null | 10.1021/acs.jcim.4c01982 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Traditional computer-aided synthesis planning (CASP) methods rely on
iterative single-step predictions, leading to exponential search space growth
that limits efficiency and scalability. We introduce a series of
transformer-based models that leverage a mixture-of-experts approach to
directly generate multistep synthetic routes as a single string, conditionally
predicting each transformation based on all preceding ones. Our DMS Explorer XL
model, which requires only target compounds as input, outperforms
state-of-the-art methods on the PaRoutes dataset with 1.9x and 3.1x
improvements in Top-1 accuracy on the n$_1$ and n$_5$ test sets, respectively.
Providing additional information, such as the desired number of steps and
starting materials, enables both a reduction in model size and an increase in
accuracy, highlighting the benefits of incorporating more constraints into the
prediction process. The top-performing DMS-Flex (Duo) model scores 25-50%
higher on Top-1 and Top-10 accuracies for both n$_1$ and n$_5$ sets.
Additionally, our models successfully predict routes for FDA-approved drugs not
included in the training data, demonstrating strong generalization
capabilities. While the limited diversity of the training set may affect
performance on less common reaction types, our multistep-first approach
presents a promising direction towards fully automated retrosynthetic planning.
| [
{
"version": "v1",
"created": "Wed, 22 May 2024 20:39:05 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Jan 2025 17:37:07 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Mar 2025 01:58:12 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Shee",
"Yu",
""
],
[
"Morgunov",
"Anton",
""
],
[
"Li",
"Haote",
""
],
[
"Batista",
"Victor S.",
""
]
] | TITLE: DirectMultiStep: Direct Route Generation for Multistep Retrosynthesis
ABSTRACT: Traditional computer-aided synthesis planning (CASP) methods rely on
iterative single-step predictions, leading to exponential search space growth
that limits efficiency and scalability. We introduce a series of
transformer-based models that leverage a mixture-of-experts approach to
directly generate multistep synthetic routes as a single string, conditionally
predicting each transformation based on all preceding ones. Our DMS Explorer XL
model, which requires only target compounds as input, outperforms
state-of-the-art methods on the PaRoutes dataset with 1.9x and 3.1x
improvements in Top-1 accuracy on the n$_1$ and n$_5$ test sets, respectively.
Providing additional information, such as the desired number of steps and
starting materials, enables both a reduction in model size and an increase in
accuracy, highlighting the benefits of incorporating more constraints into the
prediction process. The top-performing DMS-Flex (Duo) model scores 25-50%
higher on Top-1 and Top-10 accuracies for both n$_1$ and n$_5$ sets.
Additionally, our models successfully predict routes for FDA-approved drugs not
included in the training data, demonstrating strong generalization
capabilities. While the limited diversity of the training set may affect
performance on less common reaction types, our multistep-first approach
presents a promising direction towards fully automated retrosynthetic planning.
|
2405.20445 | Jianan Zhao | Jianan Zhao, Zhaocheng Zhu, Mikhail Galkin, Hesham Mostafa, Michael
Bronstein, Jian Tang | Fully-inductive Node Classification on Arbitrary Graphs | ICLR2025 | null | null | null | cs.LG cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One fundamental challenge in graph machine learning is generalizing to new
graphs. Many existing methods following the inductive setup can generalize to
test graphs with new structures, but assume that the feature and label spaces
remain the same as the training ones. This paper introduces a fully-inductive
setup, where models should perform inference on arbitrary test graphs with new
structures, feature and label spaces. We propose GraphAny as the first attempt
at this challenging setup. GraphAny models inference on a new graph as an
analytical solution to a LinearGNN, which can be naturally applied to graphs
with any feature and label spaces. To further build a stronger model with
learning capacity, we fuse multiple LinearGNN predictions with learned
inductive attention scores. Specifically, the attention module is carefully
parameterized as a function of the entropy-normalized distance features between
pairs of LinearGNN predictions to ensure generalization to new graphs.
Empirically, GraphAny trained on a single Wisconsin dataset with only 120
labeled nodes can generalize to 30 new graphs with an average accuracy of
67.26%, surpassing not only all inductive baselines, but also strong
transductive methods trained separately on each of the 30 test graphs.
| [
{
"version": "v1",
"created": "Thu, 30 May 2024 19:43:29 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Jun 2024 02:08:54 GMT"
},
{
"version": "v3",
"created": "Sun, 9 Feb 2025 03:14:20 GMT"
},
{
"version": "v4",
"created": "Fri, 28 Feb 2025 00:56:45 GMT"
},
{
"version": "v5",
"created": "Tue, 8 Apr 2025 00:15:02 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Zhao",
"Jianan",
""
],
[
"Zhu",
"Zhaocheng",
""
],
[
"Galkin",
"Mikhail",
""
],
[
"Mostafa",
"Hesham",
""
],
[
"Bronstein",
"Michael",
""
],
[
"Tang",
"Jian",
""
]
] | TITLE: Fully-inductive Node Classification on Arbitrary Graphs
ABSTRACT: One fundamental challenge in graph machine learning is generalizing to new
graphs. Many existing methods following the inductive setup can generalize to
test graphs with new structures, but assume that the feature and label spaces
remain the same as the training ones. This paper introduces a fully-inductive
setup, where models should perform inference on arbitrary test graphs with new
structures, feature and label spaces. We propose GraphAny as the first attempt
at this challenging setup. GraphAny models inference on a new graph as an
analytical solution to a LinearGNN, which can be naturally applied to graphs
with any feature and label spaces. To further build a stronger model with
learning capacity, we fuse multiple LinearGNN predictions with learned
inductive attention scores. Specifically, the attention module is carefully
parameterized as a function of the entropy-normalized distance features between
pairs of LinearGNN predictions to ensure generalization to new graphs.
Empirically, GraphAny trained on a single Wisconsin dataset with only 120
labeled nodes can generalize to 30 new graphs with an average accuracy of
67.26%, surpassing not only all inductive baselines, but also strong
transductive methods trained separately on each of the 30 test graphs.
|
2405.20769 | Matthew Regehr | Christian Janos Lebeda, Matthew Regehr, Gautam Kamath, Thomas Steinke | Avoiding Pitfalls for Privacy Accounting of Subsampled Mechanisms under
Composition | null | null | null | null | cs.CR cs.DS cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | We consider the problem of computing tight privacy guarantees for the
composition of subsampled differentially private mechanisms. Recent algorithms
can numerically compute the privacy parameters to arbitrary precision but must
be carefully applied.
Our main contribution is to address two common points of confusion. First,
some privacy accountants assume that the privacy guarantees for the composition
of a subsampled mechanism are determined by self-composing the worst-case
datasets for the uncomposed mechanism. We show that this is not true in
general. Second, Poisson subsampling is sometimes assumed to have similar
privacy guarantees compared to sampling without replacement. We show that the
privacy guarantees may in fact differ significantly between the two sampling
schemes. In particular, we give an example of hyperparameters that result in
$\varepsilon \approx 1$ for Poisson subsampling and $\varepsilon > 10$ for
sampling without replacement. This occurs for some parameters that could
realistically be chosen for DP-SGD.
| [
{
"version": "v1",
"created": "Mon, 27 May 2024 20:30:12 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 15:21:03 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Lebeda",
"Christian Janos",
""
],
[
"Regehr",
"Matthew",
""
],
[
"Kamath",
"Gautam",
""
],
[
"Steinke",
"Thomas",
""
]
] | TITLE: Avoiding Pitfalls for Privacy Accounting of Subsampled Mechanisms under
Composition
ABSTRACT: We consider the problem of computing tight privacy guarantees for the
composition of subsampled differentially private mechanisms. Recent algorithms
can numerically compute the privacy parameters to arbitrary precision but must
be carefully applied.
Our main contribution is to address two common points of confusion. First,
some privacy accountants assume that the privacy guarantees for the composition
of a subsampled mechanism are determined by self-composing the worst-case
datasets for the uncomposed mechanism. We show that this is not true in
general. Second, Poisson subsampling is sometimes assumed to have similar
privacy guarantees compared to sampling without replacement. We show that the
privacy guarantees may in fact differ significantly between the two sampling
schemes. In particular, we give an example of hyperparameters that result in
$\varepsilon \approx 1$ for Poisson subsampling and $\varepsilon > 10$ for
sampling without replacement. This occurs for some parameters that could
realistically be chosen for DP-SGD.
|
2406.00984 | Hiroaki Yamagiwa | Hiroaki Yamagiwa, Ryoma Hashimoto, Kiwamu Arakane, Ken Murakami, Shou
Soeda, Momose Oyama, Yihua Zhu, Mariko Okada, Hidetoshi Shimodaira | Predicting Drug-Gene Relations via Analogy Tasks with Word Embeddings | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Natural language processing (NLP) is utilized in a wide range of fields,
where words in text are typically transformed into feature vectors called
embeddings. BioConceptVec is a specific example of embeddings tailored for
biology, trained on approximately 30 million PubMed abstracts using models such
as skip-gram. Generally, word embeddings are known to solve analogy tasks
through simple vector arithmetic. For instance, $\mathrm{\textit{king}} -
\mathrm{\textit{man}} + \mathrm{\textit{woman}}$ predicts
$\mathrm{\textit{queen}}$. In this study, we demonstrate that BioConceptVec
embeddings, along with our own embeddings trained on PubMed abstracts, contain
information about drug-gene relations and can predict target genes from a given
drug through analogy computations. We also show that categorizing drugs and
genes using biological pathways improves performance. Furthermore, we
illustrate that vectors derived from known relations in the past can predict
unknown future relations in datasets divided by year. Despite the simplicity of
implementing analogy tasks as vector additions, our approach demonstrated
performance comparable to that of large language models such as GPT-4 in
predicting drug-gene relations.
| [
{
"version": "v1",
"created": "Mon, 3 Jun 2024 04:36:38 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Sep 2024 20:22:41 GMT"
},
{
"version": "v3",
"created": "Sun, 8 Dec 2024 09:03:03 GMT"
},
{
"version": "v4",
"created": "Tue, 8 Apr 2025 17:50:27 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Yamagiwa",
"Hiroaki",
""
],
[
"Hashimoto",
"Ryoma",
""
],
[
"Arakane",
"Kiwamu",
""
],
[
"Murakami",
"Ken",
""
],
[
"Soeda",
"Shou",
""
],
[
"Oyama",
"Momose",
""
],
[
"Zhu",
"Yihua",
""
],
[
"Okada",
"Mariko",
""
],
[
"Shimodaira",
"Hidetoshi",
""
]
] | TITLE: Predicting Drug-Gene Relations via Analogy Tasks with Word Embeddings
ABSTRACT: Natural language processing (NLP) is utilized in a wide range of fields,
where words in text are typically transformed into feature vectors called
embeddings. BioConceptVec is a specific example of embeddings tailored for
biology, trained on approximately 30 million PubMed abstracts using models such
as skip-gram. Generally, word embeddings are known to solve analogy tasks
through simple vector arithmetic. For instance, $\mathrm{\textit{king}} -
\mathrm{\textit{man}} + \mathrm{\textit{woman}}$ predicts
$\mathrm{\textit{queen}}$. In this study, we demonstrate that BioConceptVec
embeddings, along with our own embeddings trained on PubMed abstracts, contain
information about drug-gene relations and can predict target genes from a given
drug through analogy computations. We also show that categorizing drugs and
genes using biological pathways improves performance. Furthermore, we
illustrate that vectors derived from known relations in the past can predict
unknown future relations in datasets divided by year. Despite the simplicity of
implementing analogy tasks as vector additions, our approach demonstrated
performance comparable to that of large language models such as GPT-4 in
predicting drug-gene relations.
|
2406.07467 | Fatemeh Hadadi | Fatemeh Hadadi, Qinghua Xu, Domenico Bianculli, Lionel Briand | LLM meets ML: Data-efficient Anomaly Detection on Unseen Unstable Logs | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most log-based anomaly detectors assume logs are stable, though logs are
often unstable due to software or environmental changes. Anomaly detection on
unstable logs (ULAD) is therefore a more realistic, yet under-investigated
challenge. Current approaches predominantly employ machine learning (ML)
models, which often require extensive labeled data for training. To mitigate
data insufficiency, we propose FlexLog, a novel hybrid approach for ULAD that
combines ML models -- decision tree, k-nearest neighbors, and a feedforward
neural network -- with a Large Language Model (Mistral) through ensemble
learning. FlexLog also incorporates a cache and retrieval-augmented generation
(RAG) to further enhance efficiency and effectiveness. To evaluate FlexLog, we
configured four datasets for ULAD, namely ADFA-U, LOGEVOL-U, SynHDFS-U, and
SYNEVOL-U. FlexLog outperforms all baselines by at least 1.2 percentage points
in F1 score while using 62.87 percentage points less labeled data. When trained
on the same amount of data as the baselines, FlexLog achieves up to a 13
percentage points increase in F1 score on ADFA-U across varying training
dataset sizes. Additionally, FlexLog maintains inference time under one second
per log sequence, making it suitable for most applications except
latency-sensitive systems. Further analysis reveals the positive impact of
FlexLog's key components: cache, RAG and ensemble learning.
| [
{
"version": "v1",
"created": "Tue, 11 Jun 2024 17:13:18 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 20:52:04 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Hadadi",
"Fatemeh",
""
],
[
"Xu",
"Qinghua",
""
],
[
"Bianculli",
"Domenico",
""
],
[
"Briand",
"Lionel",
""
]
] | TITLE: LLM meets ML: Data-efficient Anomaly Detection on Unseen Unstable Logs
ABSTRACT: Most log-based anomaly detectors assume logs are stable, though logs are
often unstable due to software or environmental changes. Anomaly detection on
unstable logs (ULAD) is therefore a more realistic, yet under-investigated
challenge. Current approaches predominantly employ machine learning (ML)
models, which often require extensive labeled data for training. To mitigate
data insufficiency, we propose FlexLog, a novel hybrid approach for ULAD that
combines ML models -- decision tree, k-nearest neighbors, and a feedforward
neural network -- with a Large Language Model (Mistral) through ensemble
learning. FlexLog also incorporates a cache and retrieval-augmented generation
(RAG) to further enhance efficiency and effectiveness. To evaluate FlexLog, we
configured four datasets for ULAD, namely ADFA-U, LOGEVOL-U, SynHDFS-U, and
SYNEVOL-U. FlexLog outperforms all baselines by at least 1.2 percentage points
in F1 score while using 62.87 percentage points less labeled data. When trained
on the same amount of data as the baselines, FlexLog achieves up to a 13
percentage-point increase in F1 score on ADFA-U across varying training
dataset sizes. Additionally, FlexLog maintains inference time under one second
per log sequence, making it suitable for most applications except
latency-sensitive systems. Further analysis reveals the positive impact of
FlexLog's key components: cache, RAG and ensemble learning.
|
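The ensemble learning that the FlexLog abstract above describes (combining a decision tree, k-NN, a feedforward network, and an LLM) can be sketched, purely illustratively, as hard voting over per-sequence verdicts. This is not FlexLog's actual implementation (which also involves a cache and RAG); the component outputs below are hypothetical.

```python
from collections import Counter

def majority_vote(predictions):
    """Return the label most components agree on (ties -> first seen)."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical verdicts for one log sequence from four components:
# decision tree, k-NN, feedforward NN, and an LLM judge.
component_outputs = ["anomaly", "normal", "anomaly", "anomaly"]
print(majority_vote(component_outputs))  # -> anomaly
```

Hard voting is only one of several ensemble strategies; weighted or confidence-based combination would follow the same outline.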
2406.08092 | Zhi Qu | Zhi Qu, Chenchen Ding, Taro Watanabe | Languages Transferred Within the Encoder: On Representation Transfer in
Zero-Shot Multilingual Translation | Accepted by MT Summit 2025 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding representation transfer in multilingual neural machine
translation (MNMT) can reveal the reason for the zero-shot translation
deficiency. In this work, we systematically analyze the representational issue
of MNMT models. We first introduce the identity pair, translating a sentence to
itself, to address the lack of a base measure in multilingual investigations,
as the identity pair can reflect the representation of a language within the
model. Then, we demonstrate that the encoder transfers the source language to
the representational subspace of the target language instead of the
language-agnostic state. Thus, the zero-shot translation deficiency arises
because the representation of a translation is entangled with other languages
and not transferred to the target language effectively. Based on our findings,
we propose two methods: 1) low-rank language-specific embedding at the encoder,
and 2) language-specific contrastive learning of the representation at the
decoder. The experimental results on Europarl-15, TED-19, and OPUS-100 datasets
show that our methods substantially enhance the performance of zero-shot
translations without sacrifices in supervised directions by improving language
transfer capacity, thereby providing practical evidence to support our
conclusions. Codes are available at https://github.com/zhiqu22/ZeroTrans.
| [
{
"version": "v1",
"created": "Wed, 12 Jun 2024 11:16:30 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 03:39:51 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Qu",
"Zhi",
""
],
[
"Ding",
"Chenchen",
""
],
[
"Watanabe",
"Taro",
""
]
] | TITLE: Languages Transferred Within the Encoder: On Representation Transfer in
Zero-Shot Multilingual Translation
ABSTRACT: Understanding representation transfer in multilingual neural machine
translation (MNMT) can reveal the reason for the zero-shot translation
deficiency. In this work, we systematically analyze the representational issue
of MNMT models. We first introduce the identity pair, translating a sentence to
itself, to address the lack of a base measure in multilingual investigations,
as the identity pair can reflect the representation of a language within the
model. Then, we demonstrate that the encoder transfers the source language to
the representational subspace of the target language instead of the
language-agnostic state. Thus, the zero-shot translation deficiency arises
because the representation of a translation is entangled with other languages
and not transferred to the target language effectively. Based on our findings,
we propose two methods: 1) low-rank language-specific embedding at the encoder,
and 2) language-specific contrastive learning of the representation at the
decoder. The experimental results on Europarl-15, TED-19, and OPUS-100 datasets
show that our methods substantially enhance the performance of zero-shot
translations without sacrifices in supervised directions by improving language
transfer capacity, thereby providing practical evidence to support our
conclusions. Codes are available at https://github.com/zhiqu22/ZeroTrans.
|
2406.11917 | Chao He | Chao He and Hongmei Shi and Ruixin Li and Jianbo Li and ZuJun Yu | Modulated Differentiable STFT and Balanced Spectrum Metric for Freight
Train Wheelset Bearing Cross-machine Transfer Fault Diagnosis under Speed
Fluctuations | null | Advanced Engineering Informatics 62 (2024) 102568 | 10.1016/j.aei.2024.102568 | null | cs.LG eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The service condition of wheelset bearings, as key components, has a direct
impact on the safe operation of railway heavy-haul freight trains. However,
speed fluctuation of the trains and scarce fault samples are the two main
problems that restrict the accuracy of bearing fault diagnosis. Therefore, a
cross-machine transfer diagnosis (pyDSN) network, coupled with an interpretable
modulated differentiable short-time Fourier transform (STFT) and a
physics-informed balanced spectrum quality metric, is proposed to learn
domain-invariant and discriminative features under time-varying speeds.
Firstly, since fixed windows are insufficient for extracting the frequency
components of time-varying speed signals, a modulated differentiable STFT
(MDSTFT), interpretable through its STFT-informed theoretical support, is
proposed to extract a robust time-frequency spectrum (TFS). During training,
multiple windows with different lengths change dynamically. Also, in addition
to the classification metric and the domain discrepancy metric, we introduce a
third kind of metric, referred to as the physics-informed metric, to enhance
the transferable TFS. A physics-informed balanced spectrum quality (BSQ)
regularization loss is devised to guide the optimization of the MDSTFT and the
model. With it, the model not only acquires a high-quality TFS but also becomes
a physics-restricted domain adaptation network that learns real-world physics
knowledge, ultimately diminishing the domain discrepancy across different
datasets. Experiments conducted in the scenario of transferring from laboratory
datasets to a freight train dataset indicate that the hybrid-driven pyDSN
outperforms existing methods and has practical value.
| [
{
"version": "v1",
"created": "Mon, 17 Jun 2024 02:43:24 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 04:01:43 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"He",
"Chao",
""
],
[
"Shi",
"Hongmei",
""
],
[
"Li",
"Ruixin",
""
],
[
"Li",
"Jianbo",
""
],
[
"Yu",
"ZuJun",
""
]
] | TITLE: Modulated Differentiable STFT and Balanced Spectrum Metric for Freight
Train Wheelset Bearing Cross-machine Transfer Fault Diagnosis under Speed
Fluctuations
ABSTRACT: The service condition of wheelset bearings, as key components, has a
direct impact on the safe operation of railway heavy-haul freight trains.
However, speed fluctuation of the trains and scarce fault samples are the two
main problems that restrict the accuracy of bearing fault diagnosis. Therefore,
a cross-machine transfer diagnosis (pyDSN) network, coupled with an
interpretable modulated differentiable short-time Fourier transform (STFT) and
a physics-informed balanced spectrum quality metric, is proposed to learn
domain-invariant and discriminative features under time-varying speeds.
Firstly, since fixed windows are insufficient for extracting the frequency
components of time-varying speed signals, a modulated differentiable STFT
(MDSTFT), interpretable through its STFT-informed theoretical support, is
proposed to extract a robust time-frequency spectrum (TFS). During training,
multiple windows with different lengths change dynamically. Also, in addition
to the classification metric and the domain discrepancy metric, we introduce a
third kind of metric, referred to as the physics-informed metric, to enhance
the transferable TFS. A physics-informed balanced spectrum quality (BSQ)
regularization loss is devised to guide the optimization of the MDSTFT and the
model. With it, the model not only acquires a high-quality TFS but also becomes
a physics-restricted domain adaptation network that learns real-world physics
knowledge, ultimately diminishing the domain discrepancy across different
datasets. Experiments conducted in the scenario of transferring from laboratory
datasets to a freight train dataset indicate that the hybrid-driven pyDSN
outperforms existing methods and has practical value.
|
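The MDSTFT idea in the abstract above rests on the observation that a single fixed window length cannot resolve a speed-varying signal well. As a minimal sketch (not the paper's learnable, modulated transform), the following computes plain magnitude STFTs of a chirp-like signal with several candidate window lengths; the signal parameters are assumptions for illustration.

```python
import numpy as np

def stft_mag(x, win_len, hop):
    """Magnitude STFT with a Hann window (minimal, no padding)."""
    win = np.hanning(win_len)
    frames = [x[i:i + win_len] * win
              for i in range(0, len(x) - win_len + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1))  # (frames, bins)

fs = 1000  # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * (50 + 100 * t) * t)  # frequency rises with time

for win_len in (64, 128, 256):  # candidate window lengths
    S = stft_mag(x, win_len, hop=win_len // 2)
    print(win_len, S.shape)  # longer window -> more frequency bins, fewer frames
```

The time/frequency resolution trade-off visible in the shapes is what motivates letting multiple window lengths change dynamically during training.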
2406.15341 | Haoyang Liu | Haoyang Liu, Shuyu Chen, Ye Zhang, Haohan Wang | GenoTEX: An LLM Agent Benchmark for Automated Gene Expression Data
Analysis | 31 pages, 4 figures | null | null | null | cs.LG cs.AI q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in machine learning have significantly improved the
identification of disease-associated genes from gene expression datasets.
However, these processes often require extensive expertise and manual effort,
limiting their scalability. Large Language Model (LLM)-based agents have shown
promise in automating these tasks due to their increasing problem-solving
abilities. To support the evaluation and development of such methods, we
introduce GenoTEX, a benchmark dataset for the automated analysis of gene
expression data. GenoTEX provides analysis code and results for solving a wide
range of gene-trait association problems, encompassing dataset selection,
preprocessing, and statistical analysis, in a pipeline that follows
computational genomics standards. The benchmark includes expert-curated
annotations from bioinformaticians to ensure accuracy and reliability. To
provide baselines for these tasks, we present GenoAgent, a team of LLM-based
agents that adopt a multi-step programming workflow with flexible
self-correction, to collaboratively analyze gene expression datasets. Our
experiments demonstrate the potential of LLM-based methods in analyzing genomic
data, while error analysis highlights the challenges and areas for future
improvement. We propose GenoTEX as a promising resource for benchmarking and
enhancing automated methods for gene expression data analysis. The benchmark is
available at https://github.com/Liu-Hy/GenoTEX.
| [
{
"version": "v1",
"created": "Fri, 21 Jun 2024 17:55:24 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 17:59:22 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Apr 2025 17:09:04 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Liu",
"Haoyang",
""
],
[
"Chen",
"Shuyu",
""
],
[
"Zhang",
"Ye",
""
],
[
"Wang",
"Haohan",
""
]
] | TITLE: GenoTEX: An LLM Agent Benchmark for Automated Gene Expression Data
Analysis
ABSTRACT: Recent advancements in machine learning have significantly improved the
identification of disease-associated genes from gene expression datasets.
However, these processes often require extensive expertise and manual effort,
limiting their scalability. Large Language Model (LLM)-based agents have shown
promise in automating these tasks due to their increasing problem-solving
abilities. To support the evaluation and development of such methods, we
introduce GenoTEX, a benchmark dataset for the automated analysis of gene
expression data. GenoTEX provides analysis code and results for solving a wide
range of gene-trait association problems, encompassing dataset selection,
preprocessing, and statistical analysis, in a pipeline that follows
computational genomics standards. The benchmark includes expert-curated
annotations from bioinformaticians to ensure accuracy and reliability. To
provide baselines for these tasks, we present GenoAgent, a team of LLM-based
agents that adopt a multi-step programming workflow with flexible
self-correction, to collaboratively analyze gene expression datasets. Our
experiments demonstrate the potential of LLM-based methods in analyzing genomic
data, while error analysis highlights the challenges and areas for future
improvement. We propose GenoTEX as a promising resource for benchmarking and
enhancing automated methods for gene expression data analysis. The benchmark is
available at https://github.com/Liu-Hy/GenoTEX.
|
2407.21077 | Vahid Noroozi | Somshubra Majumdar, Vahid Noroozi, Mehrzad Samadi, Sean Narenthiran,
Aleksander Ficek, Wasi Uddin Ahmad, Jocelyn Huang, Jagadeesh Balam, Boris
Ginsburg | Genetic Instruct: Scaling up Synthetic Generation of Coding Instructions
for Large Language Models | null | null | null | null | cs.CL cs.LG cs.NE | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) require high quality instruction data for
effective alignment, particularly in code generation tasks where expert curated
datasets are expensive to produce. We present Genetic-Instruct, a scalable
algorithm for synthesizing large-scale, high quality coding instructions using
evolutionary principles. Starting from a small set of seed instructions,
Genetic-Instruct generates diverse and challenging instruction-code pairs by
leveraging an Instructor-LLM for generation, a Coder-LLM for code synthesis,
and a Judge-LLM for automatic quality evaluation. Our proposed approach is
highly parallelizable and effective even with small seed data and weaker
generator models. We generated more than 7.5 million coding instructions with
the proposed approach. Then we evaluated it by fine-tuning LLMs with the
synthetic samples and demonstrated a significant improvement in their code
generation capability compared to the other synthetic generation approaches and
publicly available datasets. Our results highlight the efficiency, scalability,
and generalizability of the Genetic-Instruct framework.
| [
{
"version": "v1",
"created": "Mon, 29 Jul 2024 20:42:59 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 23:35:11 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Majumdar",
"Somshubra",
""
],
[
"Noroozi",
"Vahid",
""
],
[
"Samadi",
"Mehrzad",
""
],
[
"Narenthiran",
"Sean",
""
],
[
"Ficek",
"Aleksander",
""
],
[
"Ahmad",
"Wasi Uddin",
""
],
[
"Huang",
"Jocelyn",
""
],
[
"Balam",
"Jagadeesh",
""
],
[
"Ginsburg",
"Boris",
""
]
] | TITLE: Genetic Instruct: Scaling up Synthetic Generation of Coding Instructions
for Large Language Models
ABSTRACT: Large Language Models (LLMs) require high quality instruction data for
effective alignment, particularly in code generation tasks where expert curated
datasets are expensive to produce. We present Genetic-Instruct, a scalable
algorithm for synthesizing large-scale, high quality coding instructions using
evolutionary principles. Starting from a small set of seed instructions,
Genetic-Instruct generates diverse and challenging instruction-code pairs by
leveraging an Instructor-LLM for generation, a Coder-LLM for code synthesis,
and a Judge-LLM for automatic quality evaluation. Our proposed approach is
highly parallelizable and effective even with small seed data and weaker
generator models. We generated more than 7.5 million coding instructions with
the proposed approach. Then we evaluated it by fine-tuning LLMs with the
synthetic samples and demonstrated a significant improvement in their code
generation capability compared to the other synthetic generation approaches and
publicly available datasets. Our results highlight the efficiency, scalability,
and generalizability of the Genetic-Instruct framework.
|
2408.04290 | Amirreza Fateh | Alireza Saber, Pouria Parhami, Alimohammad Siahkarzadeh, Mansoor
Fateh, Amirreza Fateh | Efficient and Accurate Pneumonia Detection Using a Novel Multi-Scale
Transformer Approach | null | null | null | null | eess.IV cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Pneumonia, a prevalent respiratory infection, remains a leading cause of
morbidity and mortality worldwide, particularly among vulnerable populations.
Chest X-rays serve as a primary tool for pneumonia detection; however,
variations in imaging conditions and subtle visual indicators complicate
consistent interpretation. Automated tools can enhance traditional methods by
improving diagnostic reliability and supporting clinical decision-making. In
this study, we propose a novel multi-scale transformer approach for pneumonia
detection that integrates lung segmentation and classification into a unified
framework. Our method introduces a lightweight transformer-enhanced TransUNet
for precise lung segmentation, achieving a Dice score of 95.68% on the "Chest
X-ray Masks and Labels" dataset with fewer parameters than traditional
transformers. For classification, we employ pre-trained ResNet models
(ResNet-50 and ResNet-101) to extract multi-scale feature maps, which are then
processed through a modified transformer module to enhance pneumonia detection.
This integration of multi-scale feature extraction and lightweight transformer
modules ensures robust performance, making our method suitable for
resource-constrained clinical environments. Our approach achieves 93.75%
accuracy on the "Kermany" dataset and 96.04% accuracy on the "Cohen" dataset,
outperforming existing methods while maintaining computational efficiency. This
work demonstrates the potential of multi-scale transformer architectures to
improve pneumonia diagnosis, offering a scalable and accurate solution to
global healthcare challenges. Code: https://github.com/amirrezafateh/Multi-Scale-Transformer-Pneumonia
| [
{
"version": "v1",
"created": "Thu, 8 Aug 2024 08:06:42 GMT"
},
{
"version": "v2",
"created": "Sun, 3 Nov 2024 11:51:50 GMT"
},
{
"version": "v3",
"created": "Sun, 26 Jan 2025 17:04:30 GMT"
},
{
"version": "v4",
"created": "Tue, 8 Apr 2025 07:00:02 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Saber",
"Alireza",
""
],
[
"Parhami",
"Pouria",
""
],
[
"Siahkarzadeh",
"Alimohammad",
""
],
[
"Fateh",
"Mansoor",
""
],
[
"Fateh",
"Amirreza",
""
]
] | TITLE: Efficient and Accurate Pneumonia Detection Using a Novel Multi-Scale
Transformer Approach
ABSTRACT: Pneumonia, a prevalent respiratory infection, remains a leading cause of
morbidity and mortality worldwide, particularly among vulnerable populations.
Chest X-rays serve as a primary tool for pneumonia detection; however,
variations in imaging conditions and subtle visual indicators complicate
consistent interpretation. Automated tools can enhance traditional methods by
improving diagnostic reliability and supporting clinical decision-making. In
this study, we propose a novel multi-scale transformer approach for pneumonia
detection that integrates lung segmentation and classification into a unified
framework. Our method introduces a lightweight transformer-enhanced TransUNet
for precise lung segmentation, achieving a Dice score of 95.68% on the "Chest
X-ray Masks and Labels" dataset with fewer parameters than traditional
transformers. For classification, we employ pre-trained ResNet models
(ResNet-50 and ResNet-101) to extract multi-scale feature maps, which are then
processed through a modified transformer module to enhance pneumonia detection.
This integration of multi-scale feature extraction and lightweight transformer
modules ensures robust performance, making our method suitable for
resource-constrained clinical environments. Our approach achieves 93.75%
accuracy on the "Kermany" dataset and 96.04% accuracy on the "Cohen" dataset,
outperforming existing methods while maintaining computational efficiency. This
work demonstrates the potential of multi-scale transformer architectures to
improve pneumonia diagnosis, offering a scalable and accurate solution to
global healthcare challenges. Code: https://github.com/amirrezafateh/Multi-Scale-Transformer-Pneumonia
|
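The Dice score reported for lung segmentation in the abstract above is the standard overlap measure 2|A∩B| / (|A| + |B|) between predicted and ground-truth masks. A minimal sketch with toy binary masks (not the paper's data):

```python
def dice(pred, target):
    """Dice coefficient for flat binary masks (1 = lung pixel)."""
    inter = sum(p and t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2.0 * inter / total if total else 1.0

pred   = [1, 1, 0, 1, 0, 0]  # hypothetical predicted mask
target = [1, 0, 0, 1, 1, 0]  # hypothetical ground truth
print(dice(pred, target))  # 2*2 / (3+3) = 0.666...
```

A score of 95.68% therefore means the predicted and annotated lung regions overlap almost completely.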
2408.06828 | Jingzhi Bao | Jingzhi Bao, Guanying Chen, Shuguang Cui | PIR: Photometric Inverse Rendering with Shading Cues Modeling and
Surface Reflectance Regularization | Accepted to 3DV 2025. Project page:
https://jzbao03.site/projects/PIR/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the problem of inverse rendering from photometric
images. Existing approaches for this problem suffer from the effects of
self-shadows, inter-reflections, and lack of constraints on the surface
reflectance, leading to inaccurate decomposition of reflectance and
illumination due to the ill-posed nature of inverse rendering. In this work, we
propose a new method for neural inverse rendering. Our method jointly optimizes
the light source position to account for the self-shadows in images, and
computes indirect illumination using a differentiable rendering layer and an
importance sampling strategy. To enhance surface reflectance decomposition, we
introduce a new regularization by distilling DINO features to foster accurate
and consistent material decomposition. Extensive experiments on synthetic and
real datasets demonstrate that our method outperforms the state-of-the-art
methods in reflectance decomposition.
| [
{
"version": "v1",
"created": "Tue, 13 Aug 2024 11:39:14 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Jan 2025 17:18:18 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Apr 2025 03:08:44 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Bao",
"Jingzhi",
""
],
[
"Chen",
"Guanying",
""
],
[
"Cui",
"Shuguang",
""
]
] | TITLE: PIR: Photometric Inverse Rendering with Shading Cues Modeling and
Surface Reflectance Regularization
ABSTRACT: This paper addresses the problem of inverse rendering from photometric
images. Existing approaches for this problem suffer from the effects of
self-shadows, inter-reflections, and lack of constraints on the surface
reflectance, leading to inaccurate decomposition of reflectance and
illumination due to the ill-posed nature of inverse rendering. In this work, we
propose a new method for neural inverse rendering. Our method jointly optimizes
the light source position to account for the self-shadows in images, and
computes indirect illumination using a differentiable rendering layer and an
importance sampling strategy. To enhance surface reflectance decomposition, we
introduce a new regularization by distilling DINO features to foster accurate
and consistent material decomposition. Extensive experiments on synthetic and
real datasets demonstrate that our method outperforms the state-of-the-art
methods in reflectance decomposition.
|
2408.12598 | Ziyu Tang | Ziyu Tang, Weicai Ye, Yifan Wang, Di Huang, Hujun Bao, Tong He,
Guofeng Zhang | ND-SDF: Learning Normal Deflection Fields for High-Fidelity Indoor
Reconstruction | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural implicit reconstruction via volume rendering has demonstrated its
effectiveness in recovering dense 3D surfaces. However, it is non-trivial to
simultaneously recover meticulous geometry and preserve smoothness across
regions with differing characteristics. To address this issue, previous methods
typically employ geometric priors, which are often constrained by the
performance of the prior models. In this paper, we propose ND-SDF, which learns
a Normal Deflection field to represent the angular deviation between the scene
normal and the prior normal. Unlike previous methods that uniformly apply
geometric priors on all samples, introducing significant bias in accuracy, our
proposed normal deflection field dynamically learns and adapts the utilization
of samples based on their specific characteristics, thereby improving both the
accuracy and effectiveness of the model. Our method not only obtains smooth
weakly textured regions such as walls and floors but also preserves the
geometric details of complex structures. In addition, we introduce a novel ray
sampling strategy based on the deflection angle to facilitate the unbiased
rendering process, which significantly improves the quality and accuracy of
intricate surfaces, especially on thin structures. Consistent improvements on
various challenging datasets demonstrate the superiority of our method.
| [
{
"version": "v1",
"created": "Thu, 22 Aug 2024 17:59:01 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Sep 2024 06:31:25 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Apr 2025 15:24:36 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Tang",
"Ziyu",
""
],
[
"Ye",
"Weicai",
""
],
[
"Wang",
"Yifan",
""
],
[
"Huang",
"Di",
""
],
[
"Bao",
"Hujun",
""
],
[
"He",
"Tong",
""
],
[
"Zhang",
"Guofeng",
""
]
] | TITLE: ND-SDF: Learning Normal Deflection Fields for High-Fidelity Indoor
Reconstruction
ABSTRACT: Neural implicit reconstruction via volume rendering has demonstrated its
effectiveness in recovering dense 3D surfaces. However, it is non-trivial to
simultaneously recover meticulous geometry and preserve smoothness across
regions with differing characteristics. To address this issue, previous methods
typically employ geometric priors, which are often constrained by the
performance of the prior models. In this paper, we propose ND-SDF, which learns
a Normal Deflection field to represent the angular deviation between the scene
normal and the prior normal. Unlike previous methods that uniformly apply
geometric priors on all samples, introducing significant bias in accuracy, our
proposed normal deflection field dynamically learns and adapts the utilization
of samples based on their specific characteristics, thereby improving both the
accuracy and effectiveness of the model. Our method not only obtains smooth
weakly textured regions such as walls and floors but also preserves the
geometric details of complex structures. In addition, we introduce a novel ray
sampling strategy based on the deflection angle to facilitate the unbiased
rendering process, which significantly improves the quality and accuracy of
intricate surfaces, especially on thin structures. Consistent improvements on
various challenging datasets demonstrate the superiority of our method.
|
2408.13378 | Yoshitaka Inoue | Yoshitaka Inoue, Tianci Song, Xinling Wang, Augustin Luna, Tianfan Fu | DrugAgent: Multi-Agent Large Language Model-Based Reasoning for
Drug-Target Interaction Prediction | 15 pages, 1 figure | null | null | null | cs.AI cs.CL cs.IR cs.LG q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Advancements in large language models (LLMs) allow them to address diverse
questions using human-like interfaces. Still, limitations in their training
prevent them from answering accurately in scenarios that could benefit from
multiple perspectives. Multi-agent systems allow the resolution of questions to
enhance result consistency and reliability. While drug-target interaction (DTI)
prediction is important for drug discovery, existing approaches face challenges
due to complex biological systems and the lack of interpretability needed for
clinical applications. DrugAgent is a multi-agent LLM system for DTI prediction
that combines multiple specialized perspectives with transparent reasoning. Our
system adapts and extends existing multi-agent frameworks by (1) applying
coordinator-based architecture to the DTI domain, (2) integrating
domain-specific data sources, including ML predictions, knowledge graphs, and
literature evidence, and (3) incorporating Chain-of-Thought (CoT) and ReAct
(Reason+Act) frameworks for transparent DTI reasoning. We conducted
comprehensive experiments using a kinase inhibitor dataset, where our
multi-agent LLM method outperformed the non-reasoning multi-agent model (GPT-4o
mini) by 45% in F1 score (0.514 vs 0.355). Through ablation studies, we
demonstrated the contributions of each agent, with the AI agent being the most
impactful, followed by the KG agent and search agent. Most importantly, our
approach provides detailed, human-interpretable reasoning for each prediction
by combining evidence from multiple sources - a critical feature for biomedical
applications where understanding the rationale behind predictions is essential
for clinical decision-making and regulatory compliance. Code is available at
https://anonymous.4open.science/r/DrugAgent-B2EA.
| [
{
"version": "v1",
"created": "Fri, 23 Aug 2024 21:24:59 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Sep 2024 16:06:37 GMT"
},
{
"version": "v3",
"created": "Mon, 16 Sep 2024 22:13:30 GMT"
},
{
"version": "v4",
"created": "Mon, 7 Apr 2025 19:32:55 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Inoue",
"Yoshitaka",
""
],
[
"Song",
"Tianci",
""
],
[
"Wang",
"Xinling",
""
],
[
"Luna",
"Augustin",
""
],
[
"Fu",
"Tianfan",
""
]
] | TITLE: DrugAgent: Multi-Agent Large Language Model-Based Reasoning for
Drug-Target Interaction Prediction
ABSTRACT: Advancements in large language models (LLMs) allow them to address diverse
questions using human-like interfaces. Still, limitations in their training
prevent them from answering accurately in scenarios that could benefit from
multiple perspectives. Multi-agent systems allow the resolution of questions to
enhance result consistency and reliability. While drug-target interaction (DTI)
prediction is important for drug discovery, existing approaches face challenges
due to complex biological systems and the lack of interpretability needed for
clinical applications. DrugAgent is a multi-agent LLM system for DTI prediction
that combines multiple specialized perspectives with transparent reasoning. Our
system adapts and extends existing multi-agent frameworks by (1) applying
coordinator-based architecture to the DTI domain, (2) integrating
domain-specific data sources, including ML predictions, knowledge graphs, and
literature evidence, and (3) incorporating Chain-of-Thought (CoT) and ReAct
(Reason+Act) frameworks for transparent DTI reasoning. We conducted
comprehensive experiments using a kinase inhibitor dataset, where our
multi-agent LLM method outperformed the non-reasoning multi-agent model (GPT-4o
mini) by 45% in F1 score (0.514 vs 0.355). Through ablation studies, we
demonstrated the contributions of each agent, with the AI agent being the most
impactful, followed by the KG agent and search agent. Most importantly, our
approach provides detailed, human-interpretable reasoning for each prediction
by combining evidence from multiple sources - a critical feature for biomedical
applications where understanding the rationale behind predictions is essential
for clinical decision-making and regulatory compliance. Code is available at
https://anonymous.4open.science/r/DrugAgent-B2EA.
|
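The "45%" in the DrugAgent abstract above is a relative improvement computed from the two absolute F1 scores it reports, which can be checked directly:

```python
# F1 scores as reported in the abstract: DrugAgent vs. non-reasoning baseline.
f1_ours, f1_base = 0.514, 0.355
rel_gain = (f1_ours - f1_base) / f1_base  # relative improvement over baseline
print(f"{rel_gain:.1%}")  # ~44.8%, i.e. the reported ~45%
```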
2409.00134 | Alexey Skrynnik | Anton Andreychuk, Konstantin Yakovlev, Aleksandr Panov, Alexey
Skrynnik | MAPF-GPT: Imitation Learning for Multi-Agent Pathfinding at Scale | null | null | null | null | cs.MA cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Multi-agent pathfinding (MAPF) is a problem that generally requires finding
collision-free paths for multiple agents in a shared environment. Solving MAPF
optimally, even under restrictive assumptions, is NP-hard, yet efficient
solutions for this problem are critical for numerous applications, such as
automated warehouses and transportation systems. Recently, learning-based
approaches to MAPF have gained attention, particularly those leveraging deep
reinforcement learning. Typically, such learning-based MAPF solvers are
augmented with additional components like single-agent planning or
communication. Orthogonally, in this work we rely solely on imitation learning
that leverages a large dataset of expert MAPF solutions and a transformer-based
neural network to create a foundation model for MAPF called MAPF-GPT. The
latter is capable of generating actions without additional heuristics or
communication. MAPF-GPT demonstrates zero-shot learning abilities when solving
the MAPF problems that are not present in the training dataset. We show that
MAPF-GPT notably outperforms the current best-performing learnable MAPF solvers
on a diverse range of problem instances and is computationally efficient during
inference.
| [
{
"version": "v1",
"created": "Thu, 29 Aug 2024 12:55:10 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Sep 2024 13:49:00 GMT"
},
{
"version": "v3",
"created": "Wed, 25 Sep 2024 13:09:35 GMT"
},
{
"version": "v4",
"created": "Tue, 11 Feb 2025 12:28:36 GMT"
},
{
"version": "v5",
"created": "Tue, 8 Apr 2025 07:32:56 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Andreychuk",
"Anton",
""
],
[
"Yakovlev",
"Konstantin",
""
],
[
"Panov",
"Aleksandr",
""
],
[
"Skrynnik",
"Alexey",
""
]
] | TITLE: MAPF-GPT: Imitation Learning for Multi-Agent Pathfinding at Scale
ABSTRACT: Multi-agent pathfinding (MAPF) is a problem that generally requires finding
collision-free paths for multiple agents in a shared environment. Solving MAPF
optimally, even under restrictive assumptions, is NP-hard, yet efficient
solutions for this problem are critical for numerous applications, such as
automated warehouses and transportation systems. Recently, learning-based
approaches to MAPF have gained attention, particularly those leveraging deep
reinforcement learning. Typically, such learning-based MAPF solvers are
augmented with additional components like single-agent planning or
communication. Orthogonally, in this work we rely solely on imitation learning
that leverages a large dataset of expert MAPF solutions and a transformer-based
neural network to create a foundation model for MAPF called MAPF-GPT. The
latter is capable of generating actions without additional heuristics or
communication. MAPF-GPT demonstrates zero-shot learning abilities when solving
MAPF problems that are not present in the training dataset. We show that
MAPF-GPT notably outperforms the current best-performing learnable MAPF solvers
on a diverse range of problem instances and is computationally efficient during
inference.
|
2409.13717 | Yiheng Wu | Yiheng Wu, Roman Yangarber, Xian Mao | DiVA-DocRE: A Discriminative and Voice-Aware Paradigm for Document-Level
Relation Extraction | After internal discussions among the co-authors, we have decided to
withdraw the manuscript due to a change in research direction and a lack of
unanimous agreement to proceed with publication at this time | null | null | null | cs.CL cs.AI cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The remarkable capabilities of Large Language Models (LLMs) in text
comprehension and generation have revolutionized Information Extraction (IE).
One such advancement is in Document-level Relation Triplet Extraction (DocRTE),
a critical task in information systems that aims to extract entities and their
semantic relationships from documents. However, existing methods are primarily
designed for Sentence level Relation Triplet Extraction (SentRTE), which
typically handles a limited set of relations and triplet facts within a single
sentence. Additionally, some approaches treat relations as candidate choices
integrated into prompt templates, resulting in inefficient processing and
suboptimal performance when determining the relation elements in triplets. To
address these limitations, we introduce a Discriminative and Voice-Aware
Paradigm (DiVA). DiVA involves only two steps: performing document-level relation
extraction (DocRE) and then identifying the subject and object entities based on
the relation. No additional processing is required: simply input the document to
directly obtain the triplets. This streamlined process more accurately reflects
real-world scenarios for triplet extraction. Our innovation lies in
transforming DocRE into a discriminative task, where the model pays attention
to each relation and to the often overlooked issue of active vs. passive voice
within the triplet. Our experiments on the Re-DocRED and DocRED datasets
demonstrate state-of-the-art results for the DocRTE task.
| [
{
"version": "v1",
"created": "Sat, 7 Sep 2024 18:47:38 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 10:43:00 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Wu",
"Yiheng",
""
],
[
"Yangarber",
"Roman",
""
],
[
"Mao",
"Xian",
""
]
] | TITLE: DiVA-DocRE: A Discriminative and Voice-Aware Paradigm for Document-Level
Relation Extraction
ABSTRACT: The remarkable capabilities of Large Language Models (LLMs) in text
comprehension and generation have revolutionized Information Extraction (IE).
One such advancement is in Document-level Relation Triplet Extraction (DocRTE),
a critical task in information systems that aims to extract entities and their
semantic relationships from documents. However, existing methods are primarily
designed for Sentence level Relation Triplet Extraction (SentRTE), which
typically handles a limited set of relations and triplet facts within a single
sentence. Additionally, some approaches treat relations as candidate choices
integrated into prompt templates, resulting in inefficient processing and
suboptimal performance when determining the relation elements in triplets. To
address these limitations, we introduce a Discriminative and Voice-Aware
Paradigm (DiVA). DiVA involves only two steps: performing document-level relation
extraction (DocRE) and then identifying the subject and object entities based on
the relation. No additional processing is required: simply input the document to
directly obtain the triplets. This streamlined process more accurately reflects
real-world scenarios for triplet extraction. Our innovation lies in
transforming DocRE into a discriminative task, where the model pays attention
to each relation and to the often overlooked issue of active vs. passive voice
within the triplet. Our experiments on the Re-DocRED and DocRED datasets
demonstrate state-of-the-art results for the DocRTE task.
|
2409.16681 | Kun Zhou | Kun Zhou, You Zhang, Shengkui Zhao, Hao Wang, Zexu Pan, Dianwen Ng,
Chong Zhang, Chongjia Ni, Yukun Ma, Trung Hieu Nguyen, Jia Qi Yip, Bin Ma | Emotional Dimension Control in Language Model-Based Text-to-Speech:
Spanning a Broad Spectrum of Human Emotions | null | null | null | null | eess.AS cs.CL cs.SD | http://creativecommons.org/licenses/by/4.0/ | Current emotional text-to-speech systems face challenges in conveying the
full spectrum of human emotions, largely due to the inherent complexity of
human emotions and the limited range of emotional labels in existing speech
datasets. To address these limitations, this paper introduces a TTS framework
that provides flexible user control over three emotional dimensions - pleasure,
arousal, and dominance - enabling the synthesis of a diverse array of emotional
styles. The framework leverages an emotional dimension predictor, trained solely
on categorical labels from speech data and grounded in earlier psychological
research, which is seamlessly integrated into a language model-based TTS
system. Experimental results demonstrate that the proposed framework
effectively learns emotional styles from expressive speech, eliminating the
need for explicit emotion labels during TTS training, while enhancing the
naturalness and diversity of synthesized emotional speech.
| [
{
"version": "v1",
"created": "Wed, 25 Sep 2024 07:16:16 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 08:08:08 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Zhou",
"Kun",
""
],
[
"Zhang",
"You",
""
],
[
"Zhao",
"Shengkui",
""
],
[
"Wang",
"Hao",
""
],
[
"Pan",
"Zexu",
""
],
[
"Ng",
"Dianwen",
""
],
[
"Zhang",
"Chong",
""
],
[
"Ni",
"Chongjia",
""
],
[
"Ma",
"Yukun",
""
],
[
"Nguyen",
"Trung Hieu",
""
],
[
"Yip",
"Jia Qi",
""
],
[
"Ma",
"Bin",
""
]
] | TITLE: Emotional Dimension Control in Language Model-Based Text-to-Speech:
Spanning a Broad Spectrum of Human Emotions
ABSTRACT: Current emotional text-to-speech systems face challenges in conveying the
full spectrum of human emotions, largely due to the inherent complexity of
human emotions and the limited range of emotional labels in existing speech
datasets. To address these limitations, this paper introduces a TTS framework
that provides flexible user control over three emotional dimensions - pleasure,
arousal, and dominance - enabling the synthesis of a diverse array of emotional
styles. The framework leverages an emotional dimension predictor, trained solely
on categorical labels from speech data and grounded in earlier psychological
research, which is seamlessly integrated into a language model-based TTS
system. Experimental results demonstrate that the proposed framework
effectively learns emotional styles from expressive speech, eliminating the
need for explicit emotion labels during TTS training, while enhancing the
naturalness and diversity of synthesized emotional speech.
|
2410.05454 | Ayesha Vermani | Ayesha Vermani, Josue Nassar, Hyungju Jeon, Matthew Dowling, Il
Memming Park | Meta-Dynamical State Space Models for Integrative Neural Data Analysis | null | null | null | null | stat.ML cs.LG q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning shared structure across environments facilitates rapid learning and
adaptive behavior in neural systems. This has been widely demonstrated and
applied in machine learning to train models that are capable of generalizing to
novel settings. However, there has been limited work exploiting the shared
structure in neural activity during similar tasks for learning latent dynamics
from neural recordings. Existing approaches are designed to infer dynamics from
a single dataset and cannot be readily adapted to account for statistical
heterogeneities across recordings. In this work, we hypothesize that similar
tasks admit a corresponding family of related solutions and propose a novel
approach for meta-learning this solution space from task-related neural
activity of trained animals. Specifically, we capture the variabilities across
recordings on a low-dimensional manifold which concisely parametrizes this
family of dynamics, thereby facilitating rapid learning of latent dynamics
given new recordings. We demonstrate the efficacy of our approach on few-shot
reconstruction and forecasting of synthetic dynamical systems, and neural
recordings from the motor cortex during different arm reaching tasks.
| [
{
"version": "v1",
"created": "Mon, 7 Oct 2024 19:35:49 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 21:44:06 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Vermani",
"Ayesha",
""
],
[
"Nassar",
"Josue",
""
],
[
"Jeon",
"Hyungju",
""
],
[
"Dowling",
"Matthew",
""
],
[
"Park",
"Il Memming",
""
]
] | TITLE: Meta-Dynamical State Space Models for Integrative Neural Data Analysis
ABSTRACT: Learning shared structure across environments facilitates rapid learning and
adaptive behavior in neural systems. This has been widely demonstrated and
applied in machine learning to train models that are capable of generalizing to
novel settings. However, there has been limited work exploiting the shared
structure in neural activity during similar tasks for learning latent dynamics
from neural recordings. Existing approaches are designed to infer dynamics from
a single dataset and cannot be readily adapted to account for statistical
heterogeneities across recordings. In this work, we hypothesize that similar
tasks admit a corresponding family of related solutions and propose a novel
approach for meta-learning this solution space from task-related neural
activity of trained animals. Specifically, we capture the variabilities across
recordings on a low-dimensional manifold which concisely parametrizes this
family of dynamics, thereby facilitating rapid learning of latent dynamics
given new recordings. We demonstrate the efficacy of our approach on few-shot
reconstruction and forecasting of synthetic dynamical systems, and neural
recordings from the motor cortex during different arm reaching tasks.
|
2410.08527 | Yangyi Chen | Yangyi Chen, Binxuan Huang, Yifan Gao, Zhengyang Wang, Jingfeng Yang,
Heng Ji | Scaling Laws for Predicting Downstream Performance in LLMs | Accepted to TMLR | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Precise estimation of downstream performance in large language models (LLMs)
prior to training is essential for guiding their development process. Scaling
laws analysis utilizes the statistics of a series of significantly smaller
sampling language models (LMs) to predict the performance of the target LLM.
For downstream performance prediction, the critical challenge lies in the
emergent abilities in LLMs that occur beyond task-specific computational
thresholds. In this work, we focus on the pre-training loss as a more
computation-efficient metric for performance estimation. Our two-stage approach
FLP consists of first estimating a function that maps computational resources
(e.g., FLOPs) to the pre-training Loss using a series of fully-converged
sampling models, followed by mapping the pre-training loss to downstream task
Performance using the intermediate models with emerged performance. In our
experiments, this FLP solution accurately predicts the performance of LLMs with
7B and 13B parameters using a series of sampling LMs up to 3B, achieving error
margins of 5% and 10%, respectively, and significantly outperforming the
FLOPs-to-Performance approach. Further, we present FLP-M, a fundamental
approach for performance prediction that addresses the practical need to
integrate datasets from multiple sources during pre-training. FLP-M extends the
power law analytical function to predict domain-specific pre-training loss
based on FLOPs across data sources, and employs a two-layer neural network to
model the non-linear relationship between multiple domain-specific loss and
downstream performance. By utilizing a 3B LLM trained on a specific ratio and a
series of smaller sampling LMs, FLP-M can effectively forecast the performance
of 3B and 7B LLMs across various data mixtures for most benchmarks within 10%
error margins.
| [
{
"version": "v1",
"created": "Fri, 11 Oct 2024 04:57:48 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 21:47:09 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Chen",
"Yangyi",
""
],
[
"Huang",
"Binxuan",
""
],
[
"Gao",
"Yifan",
""
],
[
"Wang",
"Zhengyang",
""
],
[
"Yang",
"Jingfeng",
""
],
[
"Ji",
"Heng",
""
]
] | TITLE: Scaling Laws for Predicting Downstream Performance in LLMs
ABSTRACT: Precise estimation of downstream performance in large language models (LLMs)
prior to training is essential for guiding their development process. Scaling
laws analysis utilizes the statistics of a series of significantly smaller
sampling language models (LMs) to predict the performance of the target LLM.
For downstream performance prediction, the critical challenge lies in the
emergent abilities in LLMs that occur beyond task-specific computational
thresholds. In this work, we focus on the pre-training loss as a more
computation-efficient metric for performance estimation. Our two-stage approach
FLP consists of first estimating a function that maps computational resources
(e.g., FLOPs) to the pre-training Loss using a series of fully-converged
sampling models, followed by mapping the pre-training loss to downstream task
Performance using the intermediate models with emerged performance. In our
experiments, this FLP solution accurately predicts the performance of LLMs with
7B and 13B parameters using a series of sampling LMs up to 3B, achieving error
margins of 5% and 10%, respectively, and significantly outperforming the
FLOPs-to-Performance approach. Further, we present FLP-M, a fundamental
approach for performance prediction that addresses the practical need to
integrate datasets from multiple sources during pre-training. FLP-M extends the
power law analytical function to predict domain-specific pre-training loss
based on FLOPs across data sources, and employs a two-layer neural network to
model the non-linear relationship between multiple domain-specific loss and
downstream performance. By utilizing a 3B LLM trained on a specific ratio and a
series of smaller sampling LMs, FLP-M can effectively forecast the performance
of 3B and 7B LLMs across various data mixtures for most benchmarks within 10%
error margins.
|
2410.12779 | Xingzhi Sun | Xingzhi Sun, Danqi Liao, Kincaid MacDonald, Yanlei Zhang, Chen Liu,
Guillaume Huguet, Guy Wolf, Ian Adelstein, Tim G. J. Rudner, Smita
Krishnaswamy | Geometry-Aware Generative Autoencoders for Warped Riemannian Metric
Learning and Generative Modeling on Data Manifolds | Published in Proceedings of the 28th International Conference on
Artificial Intelligence and Statistics (AISTATS 2025) | null | null | null | cs.LG math.DG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rapid growth of high-dimensional datasets in fields such as single-cell RNA
sequencing and spatial genomics has led to unprecedented opportunities for
scientific discovery, but it also presents unique computational and statistical
challenges. Traditional methods struggle with geometry-aware data generation,
interpolation along meaningful trajectories, and transporting populations via
feasible paths. To address these issues, we introduce Geometry-Aware Generative
Autoencoder (GAGA), a novel framework that combines extensible manifold
learning with generative modeling. GAGA constructs a neural network embedding
space that respects the intrinsic geometries discovered by manifold learning
and learns a novel warped Riemannian metric on the data space. This warped
metric is derived from both the points on the data manifold and negative
samples off the manifold, allowing it to characterize a meaningful geometry
across the entire latent space. Using this metric, GAGA can uniformly sample
points on the manifold, generate points along geodesics, and interpolate
between populations across the learned manifold using geodesic-guided flows.
GAGA shows competitive performance in simulated and real-world datasets,
including a 30% improvement over the state-of-the-art methods in single-cell
population-level trajectory inference.
| [
{
"version": "v1",
"created": "Wed, 16 Oct 2024 17:53:26 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Oct 2024 18:27:10 GMT"
},
{
"version": "v3",
"created": "Sat, 25 Jan 2025 16:39:26 GMT"
},
{
"version": "v4",
"created": "Mon, 7 Apr 2025 19:30:58 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Sun",
"Xingzhi",
""
],
[
"Liao",
"Danqi",
""
],
[
"MacDonald",
"Kincaid",
""
],
[
"Zhang",
"Yanlei",
""
],
[
"Liu",
"Chen",
""
],
[
"Huguet",
"Guillaume",
""
],
[
"Wolf",
"Guy",
""
],
[
"Adelstein",
"Ian",
""
],
[
"Rudner",
"Tim G. J.",
""
],
[
"Krishnaswamy",
"Smita",
""
]
] | TITLE: Geometry-Aware Generative Autoencoders for Warped Riemannian Metric
Learning and Generative Modeling on Data Manifolds
ABSTRACT: Rapid growth of high-dimensional datasets in fields such as single-cell RNA
sequencing and spatial genomics has led to unprecedented opportunities for
scientific discovery, but it also presents unique computational and statistical
challenges. Traditional methods struggle with geometry-aware data generation,
interpolation along meaningful trajectories, and transporting populations via
feasible paths. To address these issues, we introduce Geometry-Aware Generative
Autoencoder (GAGA), a novel framework that combines extensible manifold
learning with generative modeling. GAGA constructs a neural network embedding
space that respects the intrinsic geometries discovered by manifold learning
and learns a novel warped Riemannian metric on the data space. This warped
metric is derived from both the points on the data manifold and negative
samples off the manifold, allowing it to characterize a meaningful geometry
across the entire latent space. Using this metric, GAGA can uniformly sample
points on the manifold, generate points along geodesics, and interpolate
between populations across the learned manifold using geodesic-guided flows.
GAGA shows competitive performance in simulated and real-world datasets,
including a 30% improvement over the state-of-the-art methods in single-cell
population-level trajectory inference.
|
2410.16520 | Naba Rizvi | Naba Rizvi, Harper Strickland, Daniel Gitelman, Tristan Cooper, Alexis
Morales-Flores, Michael Golden, Aekta Kallepalli, Akshat Alurkar, Haaset
Owens, Saleha Ahmedi, Isha Khirwadkar, Imani Munyaka, Nedjma Ousidhoum | AUTALIC: A Dataset for Anti-AUTistic Ableist Language In Context | 9 pages, 5 figures, 7 tables | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | As our understanding of autism and ableism continues to increase, so does our
understanding of ableist language towards autistic people. Such language poses
a significant challenge in NLP research due to its subtle and context-dependent
nature. Yet, detecting anti-autistic ableist language remains underexplored,
with existing NLP tools often failing to capture its nuanced expressions. We
present AUTALIC, the first benchmark dataset dedicated to the detection of
anti-autistic ableist language in context, addressing a significant gap in the
field. The dataset comprises 2,400 autism-related sentences collected from
Reddit, accompanied by surrounding context, and is annotated by trained experts
with backgrounds in neurodiversity. Our comprehensive evaluation reveals that
current language models, including state-of-the-art LLMs, struggle to reliably
identify anti-autistic ableism and align with human judgments, underscoring
their limitations in this domain. We publicly release AUTALIC along with the
individual annotations, which serve as a valuable resource for researchers
working on ableism and neurodiversity, as well as those studying disagreements
in annotation tasks. This dataset serves as a crucial step towards developing more
inclusive and context-aware NLP systems that better reflect diverse
perspectives.
| [
{
"version": "v1",
"created": "Mon, 21 Oct 2024 21:21:29 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Nov 2024 16:43:06 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Apr 2025 17:08:26 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Rizvi",
"Naba",
""
],
[
"Strickland",
"Harper",
""
],
[
"Gitelman",
"Daniel",
""
],
[
"Cooper",
"Tristan",
""
],
[
"Morales-Flores",
"Alexis",
""
],
[
"Golden",
"Michael",
""
],
[
"Kallepalli",
"Aekta",
""
],
[
"Alurkar",
"Akshat",
""
],
[
"Owens",
"Haaset",
""
],
[
"Ahmedi",
"Saleha",
""
],
[
"Khirwadkar",
"Isha",
""
],
[
"Munyaka",
"Imani",
""
],
[
"Ousidhoum",
"Nedjma",
""
]
] | TITLE: AUTALIC: A Dataset for Anti-AUTistic Ableist Language In Context
ABSTRACT: As our understanding of autism and ableism continues to increase, so does our
understanding of ableist language towards autistic people. Such language poses
a significant challenge in NLP research due to its subtle and context-dependent
nature. Yet, detecting anti-autistic ableist language remains underexplored,
with existing NLP tools often failing to capture its nuanced expressions. We
present AUTALIC, the first benchmark dataset dedicated to the detection of
anti-autistic ableist language in context, addressing a significant gap in the
field. The dataset comprises 2,400 autism-related sentences collected from
Reddit, accompanied by surrounding context, and is annotated by trained experts
with backgrounds in neurodiversity. Our comprehensive evaluation reveals that
current language models, including state-of-the-art LLMs, struggle to reliably
identify anti-autistic ableism and align with human judgments, underscoring
their limitations in this domain. We publicly release AUTALIC along with the
individual annotations, which serve as a valuable resource for researchers
working on ableism and neurodiversity, as well as those studying disagreements
in annotation tasks. This dataset serves as a crucial step towards developing more
inclusive and context-aware NLP systems that better reflect diverse
perspectives.
|
2410.17875 | Guangyuan Shi | Guangyuan Shi, Zexin Lu, Xiaoyu Dong, Wenlong Zhang, Xuanyu Zhang,
Yujie Feng, Xiao-Ming Wu | Understanding Layer Significance in LLM Alignment | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Aligning large language models (LLMs) through supervised fine-tuning is
essential for tailoring them to specific applications. Recent studies suggest
that alignment primarily adjusts a model's presentation style rather than its
foundational knowledge, indicating that only certain components of the model
are significantly impacted. To uncover how alignment affects model behavior at
a granular level, we propose identifying which layers within LLMs are most
critical to the alignment process. Our approach, named ILA, involves learning a
binary mask for the parameter changes in each layer during alignment, as an
indicator of layer significance. Experimental results reveal that, despite
substantial differences in alignment datasets, the important layers of a model
identified by ILA exhibit nearly 90\% overlap, highlighting fundamental
patterns in LLM alignment. The results also indicate that freezing
non-essential layers improves overall model performance, while selectively
tuning the most critical layers significantly enhances fine-tuning efficiency
with minimal performance loss. Finally, we discuss how these findings extend
from LLM alignment to reasoning.
| [
{
"version": "v1",
"created": "Wed, 23 Oct 2024 13:47:05 GMT"
},
{
"version": "v2",
"created": "Fri, 20 Dec 2024 19:24:24 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Apr 2025 09:44:28 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Shi",
"Guangyuan",
""
],
[
"Lu",
"Zexin",
""
],
[
"Dong",
"Xiaoyu",
""
],
[
"Zhang",
"Wenlong",
""
],
[
"Zhang",
"Xuanyu",
""
],
[
"Feng",
"Yujie",
""
],
[
"Wu",
"Xiao-Ming",
""
]
] | TITLE: Understanding Layer Significance in LLM Alignment
ABSTRACT: Aligning large language models (LLMs) through supervised fine-tuning is
essential for tailoring them to specific applications. Recent studies suggest
that alignment primarily adjusts a model's presentation style rather than its
foundational knowledge, indicating that only certain components of the model
are significantly impacted. To uncover how alignment affects model behavior at
a granular level, we propose identifying which layers within LLMs are most
critical to the alignment process. Our approach, named ILA, involves learning a
binary mask for the parameter changes in each layer during alignment, as an
indicator of layer significance. Experimental results reveal that, despite
substantial differences in alignment datasets, the important layers of a model
identified by ILA exhibit nearly 90\% overlap, highlighting fundamental
patterns in LLM alignment. The results also indicate that freezing
non-essential layers improves overall model performance, while selectively
tuning the most critical layers significantly enhances fine-tuning efficiency
with minimal performance loss. Finally, we discuss how these findings extend
from LLM alignment to reasoning.
|
2410.18358 | Henrik Ebel | Henrik Ebel, Jan van Delden, Timo L\"uddecke, Aditya Borse, Rutwik
Gulakala, Marcus Stoffel, Manish Yadav, Merten Stender, Leon Schindler,
Kristin Miriam de Payrebrune, Maximilian Raff, C. David Remy, Benedict
R\"oder, Rohit Raj, Tobias Rentschler, Alexander Tismer, Stefan Riedelbauch,
Peter Eberhard | Data Publishing in Mechanics and Dynamics: Challenges, Guidelines, and
Examples from Engineering Design | 25 pages, 10 figures | DCE 6 (2025) e23 | 10.1017/dce.2025.13 | null | cs.CY cs.AI cs.CE cs.ET cs.SY eess.SY | http://creativecommons.org/licenses/by/4.0/ | Data-based methods have gained increasing importance in engineering,
especially but not only driven by successes with deep artificial neural
networks. Success stories are prevalent, e.g., in areas such as data-driven
modeling, control and automation, as well as surrogate modeling for accelerated
simulation. Beyond engineering, generative and large-language models are
increasingly helping with tasks that, previously, were solely associated with
creative human processes. Thus, it seems timely to seek
artificial-intelligence support for engineering design tasks to automate, help
with, or accelerate purpose-built designs of engineering systems, e.g., in
mechanics and dynamics, where design so far requires a lot of specialized
knowledge. However, research-wise, compared to established, predominantly
first-principles-based methods, the datasets used for training, validation, and
test become an almost inherent part of the overall methodology. Thus, data
publishing becomes just as important in (data-driven) engineering science as
appropriate descriptions of conventional methodology in publications in the
past. This article analyzes the value and challenges of data publishing in
mechanics and dynamics, in particular regarding engineering design tasks,
showing that the latter also raise challenges and considerations not typical in
fields where data-driven methods have been booming originally. Possible ways to
deal with these challenges are discussed and a set of examples from across
different design problems shows how data publishing can be put into practice.
The analysis, discussions, and examples are based on research experience
gained in a priority program of the German Research Foundation focusing on
research on artificially intelligent design assistants in mechanics and
dynamics.
| [
{
"version": "v1",
"created": "Mon, 7 Oct 2024 18:26:05 GMT"
},
{
"version": "v2",
"created": "Fri, 20 Dec 2024 12:58:09 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Ebel",
"Henrik",
""
],
[
"van Delden",
"Jan",
""
],
[
"Lüddecke",
"Timo",
""
],
[
"Borse",
"Aditya",
""
],
[
"Gulakala",
"Rutwik",
""
],
[
"Stoffel",
"Marcus",
""
],
[
"Yadav",
"Manish",
""
],
[
"Stender",
"Merten",
""
],
[
"Schindler",
"Leon",
""
],
[
"de Payrebrune",
"Kristin Miriam",
""
],
[
"Raff",
"Maximilian",
""
],
[
"Remy",
"C. David",
""
],
[
"Röder",
"Benedict",
""
],
[
"Raj",
"Rohit",
""
],
[
"Rentschler",
"Tobias",
""
],
[
"Tismer",
"Alexander",
""
],
[
"Riedelbauch",
"Stefan",
""
],
[
"Eberhard",
"Peter",
""
]
] | TITLE: Data Publishing in Mechanics and Dynamics: Challenges, Guidelines, and
Examples from Engineering Design
ABSTRACT: Data-based methods have gained increasing importance in engineering,
especially but not only driven by successes with deep artificial neural
networks. Success stories are prevalent, e.g., in areas such as data-driven
modeling, control and automation, as well as surrogate modeling for accelerated
simulation. Beyond engineering, generative and large-language models are
increasingly helping with tasks that, previously, were solely associated with
creative human processes. Thus, it seems timely to seek
artificial-intelligence support for engineering design tasks to automate, help
with, or accelerate purpose-built designs of engineering systems, e.g., in
mechanics and dynamics, where design so far requires a lot of specialized
knowledge. However, research-wise, compared to established, predominantly
first-principles-based methods, the datasets used for training, validation, and
test become an almost inherent part of the overall methodology. Thus, data
publishing becomes just as important in (data-driven) engineering science as
appropriate descriptions of conventional methodology in publications in the
past. This article analyzes the value and challenges of data publishing in
mechanics and dynamics, in particular regarding engineering design tasks,
showing that the latter also raise challenges and considerations not typical in
fields where data-driven methods have been booming originally. Possible ways to
deal with these challenges are discussed and a set of examples from across
different design problems shows how data publishing can be put into practice.
The analysis, discussions, and examples are based on research experience
gained in a priority program of the German Research Foundation focusing on
research on artificially intelligent design assistants in mechanics and
dynamics.
|
2410.19426 | Daniel Galperin | Daniel Galperin, Ullrich K\"othe | Analyzing Generative Models by Manifold Entropic Metrics | Camera-ready version: accepted at AISTATS 2025 | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Good generative models should not only synthesize high quality data, but also
utilize interpretable representations that aid human understanding of their
behavior. However, it is difficult to measure objectively if and to what degree
desirable properties of disentangled representations have been achieved.
Inspired by the principle of independent mechanisms, we address this difficulty
by introducing a novel set of tractable information-theoretic evaluation
metrics. We demonstrate the usefulness of our metrics on illustrative toy
examples and conduct an in-depth comparison of various normalizing flow
architectures and $\beta$-VAEs on the EMNIST dataset. Our method allows us to
sort latent features by importance and assess the amount of residual correlation in
the resulting concepts. The most interesting finding of our experiments is a
ranking of model architectures and training procedures in terms of their
inductive bias to converge to aligned and disentangled representations during
training.
| [
{
"version": "v1",
"created": "Fri, 25 Oct 2024 09:35:00 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 15:47:53 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Galperin",
"Daniel",
""
],
[
"Köthe",
"Ullrich",
""
]
] | TITLE: Analyzing Generative Models by Manifold Entropic Metrics
ABSTRACT: Good generative models should not only synthesize high quality data, but also
utilize interpretable representations that aid human understanding of their
behavior. However, it is difficult to measure objectively if and to what degree
desirable properties of disentangled representations have been achieved.
Inspired by the principle of independent mechanisms, we address this difficulty
by introducing a novel set of tractable information-theoretic evaluation
metrics. We demonstrate the usefulness of our metrics on illustrative toy
examples and conduct an in-depth comparison of various normalizing flow
architectures and $\beta$-VAEs on the EMNIST dataset. Our method allows us to
sort latent features by importance and assess the amount of residual correlation in
the resulting concepts. The most interesting finding of our experiments is a
ranking of model architectures and training procedures in terms of their
inductive bias to converge to aligned and disentangled representations during
training.
|
2411.02540 | Mateusz Cedro | Mateusz Cedro, David Martens | GraphXAIN: Narratives to Explain Graph Neural Networks | 19 pages, 9 figures, 2 tables | World Conference on Explainable Artificial Intelligence 2025 | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph Neural Networks (GNNs) are a powerful technique for machine learning on
graph-structured data, yet they pose challenges in interpretability. Existing
GNN explanation methods usually yield technical outputs, such as subgraphs and
feature importance scores, that are difficult for non-data scientists to
understand and thereby violate the purpose of explanations. Motivated by recent
Explainable AI (XAI) research, we propose GraphXAIN, a method that generates
natural language narratives explaining GNN predictions. GraphXAIN is a model-
and explainer-agnostic method that uses Large Language Models (LLMs) to
translate explanatory subgraphs and feature importance scores into coherent,
story-like explanations of GNN decision-making processes. Evaluations on
real-world datasets demonstrate GraphXAIN's ability to improve graph
explanations. A survey of machine learning researchers and practitioners
reveals that GraphXAIN enhances four explainability dimensions:
understandability, satisfaction, convincingness, and suitability for
communicating model predictions. When combined with another graph explainer
method, GraphXAIN further improves trustworthiness, insightfulness, confidence,
and usability. Notably, 95% of participants found GraphXAIN to be a valuable
addition to the GNN explanation method. By incorporating natural language
narratives, our approach serves both graph practitioners and non-expert users
by providing clearer and more effective explanations.
| [
{
"version": "v1",
"created": "Mon, 4 Nov 2024 19:21:06 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Nov 2024 08:29:10 GMT"
},
{
"version": "v3",
"created": "Wed, 12 Feb 2025 15:14:01 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Cedro",
"Mateusz",
""
],
[
"Martens",
"David",
""
]
] | TITLE: GraphXAIN: Narratives to Explain Graph Neural Networks
ABSTRACT: Graph Neural Networks (GNNs) are a powerful technique for machine learning on
graph-structured data, yet they pose challenges in interpretability. Existing
GNN explanation methods usually yield technical outputs, such as subgraphs and
feature importance scores, that are difficult for non-data scientists to
understand and thereby violate the purpose of explanations. Motivated by recent
Explainable AI (XAI) research, we propose GraphXAIN, a method that generates
natural language narratives explaining GNN predictions. GraphXAIN is a model-
and explainer-agnostic method that uses Large Language Models (LLMs) to
translate explanatory subgraphs and feature importance scores into coherent,
story-like explanations of GNN decision-making processes. Evaluations on
real-world datasets demonstrate GraphXAIN's ability to improve graph
explanations. A survey of machine learning researchers and practitioners
reveals that GraphXAIN enhances four explainability dimensions:
understandability, satisfaction, convincingness, and suitability for
communicating model predictions. When combined with another graph explainer
method, GraphXAIN further improves trustworthiness, insightfulness, confidence,
and usability. Notably, 95% of participants found GraphXAIN to be a valuable
addition to the GNN explanation method. By incorporating natural language
narratives, our approach serves both graph practitioners and non-expert users
by providing clearer and more effective explanations.
|
2411.04794 | Yuxin Zuo | Yuxin Zuo, Wenxuan Jiang, Wenxuan Liu, Zixuan Li, Long Bai, Hanbin
Wang, Yutao Zeng, Xiaolong Jin, Jiafeng Guo, Xueqi Cheng | KnowCoder-X: Boosting Multilingual Information Extraction via Code | 26 pages, 3 figures | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Empirical evidence indicates that LLMs exhibit spontaneous cross-lingual
alignment. However, although LLMs show promising cross-lingual alignment in IE,
a significant imbalance across languages persists, highlighting an underlying
deficiency. To address this, we propose KnowCoder-X, a powerful code LLM with
advanced cross-lingual and multilingual capabilities for universal information
extraction. Firstly, it standardizes the representation of multilingual schemas
using Python classes, ensuring a consistent ontology across different
languages. Then, IE across languages is formulated as a unified code generation
task. Secondly, we enhance the model's cross-lingual transferability through IE
cross-lingual alignment instruction tuning on a translated instance prediction
task we proposed. During this phase, we also construct a high-quality and
diverse bilingual IE parallel dataset with 257k samples, called ParallelNER,
synthesized by our proposed robust three-stage pipeline, with manual annotation
to ensure quality. Even without training on 29 unseen languages,
KnowCoder-X surpasses ChatGPT by $30.17\%$ and SoTA by $20.03\%$, thereby
demonstrating superior cross-lingual IE capabilities. Comprehensive evaluations
on 64 IE benchmarks in Chinese and English under various settings demonstrate
that KnowCoder-X significantly enhances cross-lingual IE transfer through
boosting the IE alignment. Our code and dataset are available at:
https://github.com/ICT-GoKnow/KnowCoder
| [
{
"version": "v1",
"created": "Thu, 7 Nov 2024 15:36:05 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 16:16:30 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Zuo",
"Yuxin",
""
],
[
"Jiang",
"Wenxuan",
""
],
[
"Liu",
"Wenxuan",
""
],
[
"Li",
"Zixuan",
""
],
[
"Bai",
"Long",
""
],
[
"Wang",
"Hanbin",
""
],
[
"Zeng",
"Yutao",
""
],
[
"Jin",
"Xiaolong",
""
],
[
"Guo",
"Jiafeng",
""
],
[
"Cheng",
"Xueqi",
""
]
] | TITLE: KnowCoder-X: Boosting Multilingual Information Extraction via Code
ABSTRACT: Empirical evidence indicates that LLMs exhibit spontaneous cross-lingual
alignment. However, although LLMs show promising cross-lingual alignment in IE,
a significant imbalance across languages persists, highlighting an underlying
deficiency. To address this, we propose KnowCoder-X, a powerful code LLM with
advanced cross-lingual and multilingual capabilities for universal information
extraction. Firstly, it standardizes the representation of multilingual schemas
using Python classes, ensuring a consistent ontology across different
languages. Then, IE across languages is formulated as a unified code generation
task. Secondly, we enhance the model's cross-lingual transferability through IE
cross-lingual alignment instruction tuning on a translated instance prediction
task we proposed. During this phase, we also construct a high-quality and
diverse bilingual IE parallel dataset with 257k samples, called ParallelNER,
synthesized by our proposed robust three-stage pipeline, with manual annotation
to ensure quality. Even without training on 29 unseen languages,
KnowCoder-X surpasses ChatGPT by $30.17\%$ and SoTA by $20.03\%$, thereby
demonstrating superior cross-lingual IE capabilities. Comprehensive evaluations
on 64 IE benchmarks in Chinese and English under various settings demonstrate
that KnowCoder-X significantly enhances cross-lingual IE transfer through
boosting the IE alignment. Our code and dataset are available at:
https://github.com/ICT-GoKnow/KnowCoder
|
2411.08872 | Sadjad Alikhani | Sadjad Alikhani, Gouranga Charan, and Ahmed Alkhateeb | Large Wireless Model (LWM): A Foundation Model for Wireless Channels | The LWM model and relevant scripts are available on the LWM website:
https://lwm-wireless.net/ | null | null | null | cs.IT eess.SP math.IT | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This paper presents Large Wireless Model (LWM) -- the world's first
foundation model for wireless channels. Designed as a task-agnostic model, LWM
generates universal, rich, contextualized channel embeddings (features) that
potentially enhance performance across a wide range of downstream tasks in
wireless communication and sensing systems. Towards this objective, LWM, which
has a transformer-based architecture, was pre-trained in a self-supervised
manner on large-scale wireless channel datasets. Our results show consistent
improvements in downstream tasks when using the LWM embeddings compared to raw
channel representations, especially in scenarios with high-complexity machine
learning tasks and limited training datasets. This LWM's ability to learn from
large-scale wireless data opens a promising direction for intelligent systems
that can efficiently adapt to diverse tasks with limited data, paving the way
for addressing key challenges in wireless communication and sensing systems.
| [
{
"version": "v1",
"created": "Wed, 13 Nov 2024 18:51:10 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 19:49:37 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Alikhani",
"Sadjad",
""
],
[
"Charan",
"Gouranga",
""
],
[
"Alkhateeb",
"Ahmed",
""
]
] | TITLE: Large Wireless Model (LWM): A Foundation Model for Wireless Channels
ABSTRACT: This paper presents Large Wireless Model (LWM) -- the world's first
foundation model for wireless channels. Designed as a task-agnostic model, LWM
generates universal, rich, contextualized channel embeddings (features) that
potentially enhance performance across a wide range of downstream tasks in
wireless communication and sensing systems. Towards this objective, LWM, which
has a transformer-based architecture, was pre-trained in a self-supervised
manner on large-scale wireless channel datasets. Our results show consistent
improvements in downstream tasks when using the LWM embeddings compared to raw
channel representations, especially in scenarios with high-complexity machine
learning tasks and limited training datasets. This LWM's ability to learn from
large-scale wireless data opens a promising direction for intelligent systems
that can efficiently adapt to diverse tasks with limited data, paving the way
for addressing key challenges in wireless communication and sensing systems.
|
2411.13951 | Lucas Correia | Lucas Correia, Jan-Christoph Goos, Thomas B\"ack, Anna V. Kononova | PATH: A Discrete-sequence Dataset for Evaluating Online Unsupervised
Anomaly Detection Approaches for Multivariate Time Series | Submitted to the Big Data Research journal | null | null | null | cs.LG cs.AI cs.CE cs.SY eess.SY | http://creativecommons.org/licenses/by/4.0/ | Benchmarking anomaly detection approaches for multivariate time series is a
challenging task due to a lack of high-quality datasets. Current publicly
available datasets are too small, insufficiently diverse, and feature trivial anomalies,
which hinders measurable progress in this research area. We propose a solution:
a diverse, extensive, and non-trivial dataset generated via state-of-the-art
simulation tools that reflects realistic behaviour of an automotive powertrain,
including its multivariate, dynamic and variable-state properties.
Additionally, our dataset represents a discrete-sequence problem, which remains
unaddressed by previously proposed solutions in the literature. To cater for both
unsupervised and semi-supervised anomaly detection settings, as well as time
series generation and forecasting, we make different versions of the dataset
available, where training and test subsets are offered in contaminated and
clean versions, depending on the task. We also provide baseline results from a
selection of approaches based on deterministic and variational autoencoders, as
well as a non-parametric approach. As expected, the baseline experimentation
shows that the approaches trained on the semi-supervised version of the dataset
outperform their unsupervised counterparts, highlighting a need for approaches
more robust to contaminated training data. Furthermore, results show that the
threshold used can have a large influence on detection performance, hence more
work needs to be invested in methods to find a suitable threshold without the
need for labelled data.
| [
{
"version": "v1",
"created": "Thu, 21 Nov 2024 09:03:12 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Nov 2024 14:24:57 GMT"
},
{
"version": "v3",
"created": "Wed, 15 Jan 2025 17:16:22 GMT"
},
{
"version": "v4",
"created": "Tue, 8 Apr 2025 15:26:49 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Correia",
"Lucas",
""
],
[
"Goos",
"Jan-Christoph",
""
],
[
"Bäck",
"Thomas",
""
],
[
"Kononova",
"Anna V.",
""
]
] | TITLE: PATH: A Discrete-sequence Dataset for Evaluating Online Unsupervised
Anomaly Detection Approaches for Multivariate Time Series
ABSTRACT: Benchmarking anomaly detection approaches for multivariate time series is a
challenging task due to a lack of high-quality datasets. Current publicly
available datasets are too small, insufficiently diverse, and feature trivial anomalies,
which hinders measurable progress in this research area. We propose a solution:
a diverse, extensive, and non-trivial dataset generated via state-of-the-art
simulation tools that reflects realistic behaviour of an automotive powertrain,
including its multivariate, dynamic and variable-state properties.
Additionally, our dataset represents a discrete-sequence problem, which remains
unaddressed by previously proposed solutions in the literature. To cater for both
unsupervised and semi-supervised anomaly detection settings, as well as time
series generation and forecasting, we make different versions of the dataset
available, where training and test subsets are offered in contaminated and
clean versions, depending on the task. We also provide baseline results from a
selection of approaches based on deterministic and variational autoencoders, as
well as a non-parametric approach. As expected, the baseline experimentation
shows that the approaches trained on the semi-supervised version of the dataset
outperform their unsupervised counterparts, highlighting a need for approaches
more robust to contaminated training data. Furthermore, results show that the
threshold used can have a large influence on detection performance, hence more
work needs to be invested in methods to find a suitable threshold without the
need for labelled data.
|
2411.16199 | Haojie Zheng | Shuchen Weng, Haojie Zheng, Peixuan Zhang, Yuchen Hong, Han Jiang, Si
Li, Boxin Shi | VIRES: Video Instance Repainting via Sketch and Text Guided Generation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce VIRES, a video instance repainting method with sketch and text
guidance, enabling video instance repainting, replacement, generation, and
removal. Existing approaches struggle with temporal consistency and accurate
alignment with the provided sketch sequence. VIRES leverages the generative
priors of text-to-video models to maintain temporal consistency and produce
visually pleasing results. We propose the Sequential ControlNet with the
standardized self-scaling, which effectively extracts structure layouts and
adaptively captures high-contrast sketch details. We further augment the
diffusion transformer backbone with the sketch attention to interpret and
inject fine-grained sketch semantics. A sketch-aware encoder ensures that
repainted results are aligned with the provided sketch sequence. Additionally,
we contribute the VireSet, a dataset with detailed annotations tailored for
training and evaluating video instance editing methods. Experimental results
demonstrate the effectiveness of VIRES, which outperforms state-of-the-art
methods in visual quality, temporal consistency, condition alignment, and human
ratings. Project page: https://hjzheng.net/projects/VIRES/
| [
{
"version": "v1",
"created": "Mon, 25 Nov 2024 08:55:41 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Nov 2024 11:43:01 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Feb 2025 08:57:48 GMT"
},
{
"version": "v4",
"created": "Tue, 4 Mar 2025 05:28:29 GMT"
},
{
"version": "v5",
"created": "Thu, 27 Mar 2025 10:17:44 GMT"
},
{
"version": "v6",
"created": "Tue, 8 Apr 2025 14:47:07 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Weng",
"Shuchen",
""
],
[
"Zheng",
"Haojie",
""
],
[
"Zhang",
"Peixuan",
""
],
[
"Hong",
"Yuchen",
""
],
[
"Jiang",
"Han",
""
],
[
"Li",
"Si",
""
],
[
"Shi",
"Boxin",
""
]
] | TITLE: VIRES: Video Instance Repainting via Sketch and Text Guided Generation
ABSTRACT: We introduce VIRES, a video instance repainting method with sketch and text
guidance, enabling video instance repainting, replacement, generation, and
removal. Existing approaches struggle with temporal consistency and accurate
alignment with the provided sketch sequence. VIRES leverages the generative
priors of text-to-video models to maintain temporal consistency and produce
visually pleasing results. We propose the Sequential ControlNet with the
standardized self-scaling, which effectively extracts structure layouts and
adaptively captures high-contrast sketch details. We further augment the
diffusion transformer backbone with the sketch attention to interpret and
inject fine-grained sketch semantics. A sketch-aware encoder ensures that
repainted results are aligned with the provided sketch sequence. Additionally,
we contribute the VireSet, a dataset with detailed annotations tailored for
training and evaluating video instance editing methods. Experimental results
demonstrate the effectiveness of VIRES, which outperforms state-of-the-art
methods in visual quality, temporal consistency, condition alignment, and human
ratings. Project page: https://hjzheng.net/projects/VIRES/
|
2411.16260 | Fu-Chieh Chang | Fu-Chieh Chang, You-Chen Lin, Pei-Yuan Wu | Unraveling Arithmetic in Large Language Models: The Role of Algebraic
Structures | null | ICLR 2025 Workshop on Reasoning and Planning for Large Language
Models | null | null | cs.LG cs.CL | http://creativecommons.org/licenses/by/4.0/ | The reasoning abilities of large language models (LLMs) have improved with
chain-of-thought (CoT) prompting, allowing models to solve complex tasks
stepwise. However, training CoT capabilities requires detailed reasoning data,
which is often scarce. The self-taught reasoner (STaR) framework addresses this
by using reinforcement learning to automatically generate reasoning steps,
reducing reliance on human-labeled data. Although STaR and its variants have
demonstrated empirical success, a theoretical foundation explaining these
improvements is lacking. Large language models (LLMs) have demonstrated
remarkable mathematical capabilities, largely driven by chain-of-thought (CoT)
prompting, which decomposes complex reasoning into step-by-step solutions.
However, the mechanisms underlying LLMs' ability to perform arithmetic in a
single step of CoT remain poorly understood. In this work, we propose that LLMs
learn arithmetic by capturing algebraic structures, such as commutativity and
identity properties. Since these structures are observable through input-output
relationships, they can generalize to unseen data. We empirically demonstrate
that LLMs can learn algebraic structures using a custom dataset of arithmetic
problems, as well as providing theoretical evidence showing that, under
specific configurations of weights and biases, the transformer-based LLMs can
generate embeddings that remain invariant to both permutations of input tokens
and the presence of identity elements. Our findings indicate that leveraging
algebraic structures can enhance the LLMs' arithmetic capabilities, offering
insights into improving their arithmetic performance.
| [
{
"version": "v1",
"created": "Mon, 25 Nov 2024 10:23:11 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 15:19:23 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Chang",
"Fu-Chieh",
""
],
[
"Lin",
"You-Chen",
""
],
[
"Wu",
"Pei-Yuan",
""
]
] | TITLE: Unraveling Arithmetic in Large Language Models: The Role of Algebraic
Structures
ABSTRACT: The reasoning abilities of large language models (LLMs) have improved with
chain-of-thought (CoT) prompting, allowing models to solve complex tasks
stepwise. However, training CoT capabilities requires detailed reasoning data,
which is often scarce. The self-taught reasoner (STaR) framework addresses this
by using reinforcement learning to automatically generate reasoning steps,
reducing reliance on human-labeled data. Although STaR and its variants have
demonstrated empirical success, a theoretical foundation explaining these
improvements is lacking. Large language models (LLMs) have demonstrated
remarkable mathematical capabilities, largely driven by chain-of-thought (CoT)
prompting, which decomposes complex reasoning into step-by-step solutions.
However, the mechanisms underlying LLMs' ability to perform arithmetic in a
single step of CoT remain poorly understood. In this work, we propose that LLMs
learn arithmetic by capturing algebraic structures, such as commutativity and
identity properties. Since these structures are observable through input-output
relationships, they can generalize to unseen data. We empirically demonstrate
that LLMs can learn algebraic structures using a custom dataset of arithmetic
problems, as well as providing theoretical evidence showing that, under
specific configurations of weights and biases, the transformer-based LLMs can
generate embeddings that remain invariant to both permutations of input tokens
and the presence of identity elements. Our findings indicate that leveraging
algebraic structures can enhance the LLMs' arithmetic capabilities, offering
insights into improving their arithmetic performance.
|
2411.16310 | Jaime Corsetti | Jaime Corsetti, Francesco Giuliari, Alice Fasoli, Davide Boscaini,
Fabio Poiesi | Functionality understanding and segmentation in 3D scenes | CVPR 2025 Highlight. Camera ready version. 20 pages, 12 figures, 7
tables | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Understanding functionalities in 3D scenes involves interpreting natural
language descriptions to locate functional interactive objects, such as handles
and buttons, in a 3D environment. Functionality understanding is highly
challenging, as it requires both world knowledge to interpret language and
spatial perception to identify fine-grained objects. For example, given a task
like 'turn on the ceiling light', an embodied AI agent must infer that it needs
to locate the light switch, even though the switch is not explicitly mentioned
in the task description. To date, no dedicated methods have been developed for
this problem. In this paper, we introduce Fun3DU, the first approach designed
for functionality understanding in 3D scenes. Fun3DU uses a language model to
parse the task description through Chain-of-Thought reasoning in order to
identify the object of interest. The identified object is segmented across
multiple views of the captured scene by using a vision and language model. The
segmentation results from each view are lifted in 3D and aggregated into the
point cloud using geometric information. Fun3DU is training-free, relying
entirely on pre-trained models. We evaluate Fun3DU on SceneFun3D, the most
recent and only dataset to benchmark this task, which comprises over 3000 task
descriptions on 230 scenes. Our method significantly outperforms
state-of-the-art open-vocabulary 3D segmentation approaches. Project page:
https://tev-fbk.github.io/fun3du/
| [
{
"version": "v1",
"created": "Mon, 25 Nov 2024 11:57:48 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Nov 2024 16:45:22 GMT"
},
{
"version": "v3",
"created": "Wed, 4 Dec 2024 15:12:06 GMT"
},
{
"version": "v4",
"created": "Tue, 8 Apr 2025 08:30:11 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Corsetti",
"Jaime",
""
],
[
"Giuliari",
"Francesco",
""
],
[
"Fasoli",
"Alice",
""
],
[
"Boscaini",
"Davide",
""
],
[
"Poiesi",
"Fabio",
""
]
] | TITLE: Functionality understanding and segmentation in 3D scenes
ABSTRACT: Understanding functionalities in 3D scenes involves interpreting natural
language descriptions to locate functional interactive objects, such as handles
and buttons, in a 3D environment. Functionality understanding is highly
challenging, as it requires both world knowledge to interpret language and
spatial perception to identify fine-grained objects. For example, given a task
like 'turn on the ceiling light', an embodied AI agent must infer that it needs
to locate the light switch, even though the switch is not explicitly mentioned
in the task description. To date, no dedicated methods have been developed for
this problem. In this paper, we introduce Fun3DU, the first approach designed
for functionality understanding in 3D scenes. Fun3DU uses a language model to
parse the task description through Chain-of-Thought reasoning in order to
identify the object of interest. The identified object is segmented across
multiple views of the captured scene by using a vision and language model. The
segmentation results from each view are lifted in 3D and aggregated into the
point cloud using geometric information. Fun3DU is training-free, relying
entirely on pre-trained models. We evaluate Fun3DU on SceneFun3D, the most
recent and only dataset to benchmark this task, which comprises over 3000 task
descriptions on 230 scenes. Our method significantly outperforms
state-of-the-art open-vocabulary 3D segmentation approaches. Project page:
https://tev-fbk.github.io/fun3du/
|
2411.17191 | Naoki Matsumura | Naoki Matsumura, Yuta Yoshimoto, Tamio Yamazaki, Tomohito Amano,
Tomoyuki Noda, Naoki Ebata, Takatoshi Kasano and Yasufumi Sakai | Generator of Neural Network Potential for Molecular Dynamics:
Constructing Robust and Accurate Potentials with Active Learning for
Nanosecond-scale Simulations | null | null | 10.1021/acs.jctc.4c01613 | null | cond-mat.mtrl-sci physics.comp-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural network potentials (NNPs) enable large-scale molecular dynamics (MD)
simulations of systems containing >10,000 atoms with accuracy comparable to
ab initio methods and play a crucial role in material studies. Although NNPs
are valuable for short-duration MD simulations, maintaining the stability of
long-duration MD simulations remains challenging due to the uncharted regions
of the potential energy surface (PES). Currently, there is no effective
methodology to address this issue. To overcome this challenge, we developed an
automatic generator of robust and accurate NNPs based on an active learning
(AL) framework. This generator provides a fully integrated solution
encompassing initial dataset creation, NNP training, evaluation, sampling of
additional structures, screening, and labeling. Crucially, our approach uses a
sampling strategy that focuses on generating unstable structures with short
interatomic distances, combined with a screening strategy that efficiently
samples these configurations based on interatomic distances and structural
features. This approach greatly enhances the MD simulation stability, enabling
nanosecond-scale simulations. We evaluated the performance of our NNP generator
in terms of its MD simulation stability and physical properties by applying it
to liquid propylene glycol (PG) and polyethylene glycol (PEG). The generated
NNPs enable stable MD simulations of systems with >10,000 atoms for 20 ns. The
predicted physical properties, such as the density and self-diffusion
coefficient, show excellent agreement with the experimental values. This work
represents a remarkable advance in the generation of robust and accurate NNPs
for organic materials, paving the way for long-duration MD simulations of
complex systems.
| [
{
"version": "v1",
"created": "Tue, 26 Nov 2024 08:03:13 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 07:20:57 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Apr 2025 07:53:26 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Matsumura",
"Naoki",
""
],
[
"Yoshimoto",
"Yuta",
""
],
[
"Yamazaki",
"Tamio",
""
],
[
"Amano",
"Tomohito",
""
],
[
"Noda",
"Tomoyuki",
""
],
[
"Ebata",
"Naoki",
""
],
[
"Kasano",
"Takatoshi",
""
],
[
"Sakai",
"Yasufumi",
""
]
] | TITLE: Generator of Neural Network Potential for Molecular Dynamics:
Constructing Robust and Accurate Potentials with Active Learning for
Nanosecond-scale Simulations
ABSTRACT: Neural network potentials (NNPs) enable large-scale molecular dynamics (MD)
simulations of systems containing >10,000 atoms with accuracy comparable to
ab initio methods and play a crucial role in material studies. Although NNPs
are valuable for short-duration MD simulations, maintaining the stability of
long-duration MD simulations remains challenging due to the uncharted regions
of the potential energy surface (PES). Currently, there is no effective
methodology to address this issue. To overcome this challenge, we developed an
automatic generator of robust and accurate NNPs based on an active learning
(AL) framework. This generator provides a fully integrated solution
encompassing initial dataset creation, NNP training, evaluation, sampling of
additional structures, screening, and labeling. Crucially, our approach uses a
sampling strategy that focuses on generating unstable structures with short
interatomic distances, combined with a screening strategy that efficiently
samples these configurations based on interatomic distances and structural
features. This approach greatly enhances the MD simulation stability, enabling
nanosecond-scale simulations. We evaluated the performance of our NNP generator
in terms of its MD simulation stability and physical properties by applying it
to liquid propylene glycol (PG) and polyethylene glycol (PEG). The generated
NNPs enable stable MD simulations of systems with >10,000 atoms for 20 ns. The
predicted physical properties, such as the density and self-diffusion
coefficient, show excellent agreement with the experimental values. This work
represents a remarkable advance in the generation of robust and accurate NNPs
for organic materials, paving the way for long-duration MD simulations of
complex systems.
|
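The distance-based screening step described in the abstract above can be sketched in a few lines. This is a hypothetical illustration, not the paper's code: candidate structures whose minimum interatomic distance falls below a cutoff are treated as the "unstable" geometries worth labeling, since short contacts are what most often destabilize long MD runs. All function names and the cutoff value are invented for the sketch.

```python
import numpy as np

def min_interatomic_distance(positions: np.ndarray) -> float:
    """Smallest pairwise distance in an (N, 3) array of atomic positions."""
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dists, np.inf)  # ignore self-distances
    return float(dists.min())

def screen_unstable(structures, cutoff=1.0):
    """Select structures containing at least one short interatomic contact."""
    return [s for s in structures if min_interatomic_distance(s) < cutoff]

# Two toy "structures": one with a close 0.5-unit contact, one well separated.
close = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [3.0, 0.0, 0.0]])
spread = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [4.0, 0.0, 0.0]])
selected = screen_unstable([close, spread], cutoff=1.0)
print(len(selected))  # only the structure with the close contact survives
```

In the actual AL framework this selection would feed the labeling stage; here it only demonstrates the geometric criterion.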
2412.06206 | Nan Zhang | Nan Zhang, Prafulla Kumar Choubey, Alexander Fabbri, Gabriel
Bernadett-Shapiro, Rui Zhang, Prasenjit Mitra, Caiming Xiong, Chien-Sheng Wu | SiReRAG: Indexing Similar and Related Information for Multihop Reasoning | ICLR 2025 | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Indexing is an important step towards strong performance in
retrieval-augmented generation (RAG) systems. However, existing methods
organize data based on either semantic similarity (similarity) or related
information (relatedness), but do not cover both perspectives comprehensively.
Our analysis reveals that modeling only one perspective results in insufficient
knowledge synthesis, leading to suboptimal performance on complex tasks
requiring multihop reasoning. In this paper, we propose SiReRAG, a novel RAG
indexing approach that explicitly considers both similar and related
information. On the similarity side, we follow existing work and explore some
variances to construct a similarity tree based on recursive summarization. On
the relatedness side, SiReRAG extracts propositions and entities from texts,
groups propositions via shared entities, and generates recursive summaries to
construct a relatedness tree. We index and flatten both similarity and
relatedness trees into a unified retrieval pool. Our experiments demonstrate
that SiReRAG consistently outperforms state-of-the-art indexing methods on
three multihop datasets (MuSiQue, 2WikiMultiHopQA, and HotpotQA), with an
average 1.9% improvement in F1 scores. As a reasonably efficient solution,
SiReRAG enhances existing reranking methods significantly, with up to 7.8%
improvement in average F1 scores. Our code is available at
https://github.com/SalesforceAIResearch/SiReRAG .
| [
{
"version": "v1",
"created": "Mon, 9 Dec 2024 04:56:43 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 19:47:16 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Zhang",
"Nan",
""
],
[
"Choubey",
"Prafulla Kumar",
""
],
[
"Fabbri",
"Alexander",
""
],
[
"Bernadett-Shapiro",
"Gabriel",
""
],
[
"Zhang",
"Rui",
""
],
[
"Mitra",
"Prasenjit",
""
],
[
"Xiong",
"Caiming",
""
],
[
"Wu",
"Chien-Sheng",
""
]
] | TITLE: SiReRAG: Indexing Similar and Related Information for Multihop Reasoning
ABSTRACT: Indexing is an important step towards strong performance in
retrieval-augmented generation (RAG) systems. However, existing methods
organize data based on either semantic similarity (similarity) or related
information (relatedness), but do not cover both perspectives comprehensively.
Our analysis reveals that modeling only one perspective results in insufficient
knowledge synthesis, leading to suboptimal performance on complex tasks
requiring multihop reasoning. In this paper, we propose SiReRAG, a novel RAG
indexing approach that explicitly considers both similar and related
information. On the similarity side, we follow existing work and explore some
variances to construct a similarity tree based on recursive summarization. On
the relatedness side, SiReRAG extracts propositions and entities from texts,
groups propositions via shared entities, and generates recursive summaries to
construct a relatedness tree. We index and flatten both similarity and
relatedness trees into a unified retrieval pool. Our experiments demonstrate
that SiReRAG consistently outperforms state-of-the-art indexing methods on
three multihop datasets (MuSiQue, 2WikiMultiHopQA, and HotpotQA), with an
average 1.9% improvement in F1 scores. As a reasonably efficient solution,
SiReRAG enhances existing reranking methods significantly, with up to 7.8%
improvement in average F1 scores. Our code is available at
https://github.com/SalesforceAIResearch/SiReRAG .
|
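The relatedness-side grouping in the SiReRAG abstract (propositions grouped via shared entities, then flattened with the similarity side into one retrieval pool) can be sketched minimally. This is a toy stand-in: real SiReRAG uses LLM-extracted propositions and recursive summaries, whereas here the "summary" is a plain concatenation and the data is invented.

```python
from collections import defaultdict

propositions = [
    ("p1", "Alice founded Acme.", {"Alice", "Acme"}),
    ("p2", "Acme is based in Berlin.", {"Acme", "Berlin"}),
    ("p3", "Bob teaches chemistry.", {"Bob"}),
]

# Group proposition ids by shared entity.
entity_groups = defaultdict(list)
for pid, text, entities in propositions:
    for ent in entities:
        entity_groups[ent].append(pid)

# Flatten: original propositions plus one pseudo-summary per multi-member group.
pool = [text for _, text, _ in propositions]
for ent, pids in sorted(entity_groups.items()):
    if len(pids) > 1:
        members = [t for pid, t, _ in propositions if pid in pids]
        pool.append(f"[{ent}] " + " ".join(members))

print(len(pool))  # 3 propositions + 1 group summary for the shared entity "Acme"
```

A retriever would then search this unified pool, so a multihop query about Acme can hit the grouped entry even when no single proposition answers it.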
2412.06717 | Sahil Sethi | Sahil Sethi, Sai Reddy, Mansi Sakarvadia, Jordan Serotte, Darlington
Nwaudo, Nicholas Maassen, Lewis Shi | Toward Non-Invasive Diagnosis of Bankart Lesions with Deep Learning | Accepted for presentation at SPIE Medical Imaging 2025:
Computer-Aided Diagnosis. The manuscript is expected to appear in the
conference proceedings | null | 10.1117/12.3046251 | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bankart lesions, or anterior-inferior glenoid labral tears, are
diagnostically challenging on standard MRIs due to their subtle imaging
features, often necessitating invasive MRI arthrograms (MRAs). This study
develops deep learning (DL) models to detect Bankart lesions on both standard
MRIs and MRAs, aiming to improve diagnostic accuracy and reduce reliance on
MRAs. We curated a dataset of 586 shoulder MRIs (335 standard, 251 MRAs) from
558 patients who underwent arthroscopy. Ground truth labels were derived from
intraoperative findings, the gold standard for Bankart lesion diagnosis.
Separate DL models for MRAs and standard MRIs were trained using the Swin
Transformer architecture, pre-trained on a public knee MRI dataset. Predictions
from sagittal, axial, and coronal views were ensembled to optimize performance.
The models were evaluated on a 20% hold-out test set (117 MRIs: 46 MRAs, 71
standard MRIs). Bankart lesions were identified in 31.9% of MRAs and 8.6% of
standard MRIs. The models achieved AUCs of 0.87 (86% accuracy, 83% sensitivity,
86% specificity) and 0.90 (85% accuracy, 82% sensitivity, 86% specificity) on
standard MRIs and MRAs, respectively. These results match or surpass
radiologist performance on our dataset and reported literature metrics.
Notably, our model's performance on non-invasive standard MRIs matched or
surpassed the radiologists interpreting MRAs. This study demonstrates the
feasibility of using DL to address the diagnostic challenges posed by subtle
pathologies like Bankart lesions. Our models demonstrate potential to improve
diagnostic confidence, reduce reliance on invasive imaging, and enhance
accessibility to care.
| [
{
"version": "v1",
"created": "Mon, 9 Dec 2024 18:04:27 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Sethi",
"Sahil",
""
],
[
"Reddy",
"Sai",
""
],
[
"Sakarvadia",
"Mansi",
""
],
[
"Serotte",
"Jordan",
""
],
[
"Nwaudo",
"Darlington",
""
],
[
"Maassen",
"Nicholas",
""
],
[
"Shi",
"Lewis",
""
]
] | TITLE: Toward Non-Invasive Diagnosis of Bankart Lesions with Deep Learning
ABSTRACT: Bankart lesions, or anterior-inferior glenoid labral tears, are
diagnostically challenging on standard MRIs due to their subtle imaging
features, often necessitating invasive MRI arthrograms (MRAs). This study
develops deep learning (DL) models to detect Bankart lesions on both standard
MRIs and MRAs, aiming to improve diagnostic accuracy and reduce reliance on
MRAs. We curated a dataset of 586 shoulder MRIs (335 standard, 251 MRAs) from
558 patients who underwent arthroscopy. Ground truth labels were derived from
intraoperative findings, the gold standard for Bankart lesion diagnosis.
Separate DL models for MRAs and standard MRIs were trained using the Swin
Transformer architecture, pre-trained on a public knee MRI dataset. Predictions
from sagittal, axial, and coronal views were ensembled to optimize performance.
The models were evaluated on a 20% hold-out test set (117 MRIs: 46 MRAs, 71
standard MRIs). Bankart lesions were identified in 31.9% of MRAs and 8.6% of
standard MRIs. The models achieved AUCs of 0.87 (86% accuracy, 83% sensitivity,
86% specificity) and 0.90 (85% accuracy, 82% sensitivity, 86% specificity) on
standard MRIs and MRAs, respectively. These results match or surpass
radiologist performance on our dataset and reported literature metrics.
Notably, our model's performance on non-invasive standard MRIs matched or
surpassed the radiologists interpreting MRAs. This study demonstrates the
feasibility of using DL to address the diagnostic challenges posed by subtle
pathologies like Bankart lesions. Our models demonstrate potential to improve
diagnostic confidence, reduce reliance on invasive imaging, and enhance
accessibility to care.
|
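The view-ensembling step mentioned in the abstract (combining sagittal, axial, and coronal predictions) can be sketched as a simple probability average with a decision threshold. The probabilities, threshold, and function name here are illustrative assumptions; the paper's exact ensembling rule may differ.

```python
import numpy as np

def ensemble_views(view_probs, threshold=0.5):
    """Average lesion probabilities across views; positive if the mean exceeds the threshold."""
    mean_prob = float(np.mean(view_probs))
    return mean_prob, mean_prob > threshold

# Hypothetical per-view lesion probabilities for one study.
probs = {"sagittal": 0.72, "axial": 0.55, "coronal": 0.61}
mean_p, positive = ensemble_views(list(probs.values()))
print(round(mean_p, 3), positive)  # mean of the three views, flagged positive
```

Averaging is the simplest fusion rule; weighted or learned fusion across views is a natural extension.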
2412.06947 | Bardia Nadimi | Bardia Nadimi and Ghali Omar Boutaib and Hao Zheng | PyraNet: A Multi-Layered Hierarchical Dataset for Verilog | null | null | null | null | cs.AR cs.AI cs.LG cs.PL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recently, there has been a growing interest in leveraging Large Language
Models for Verilog code generation. However, the current quality of the
generated Verilog code remains suboptimal. This is largely due to the absence
of well-defined, well-organized datasets with high-quality samples, as well as
a lack of innovative fine-tuning methods and models specifically trained on
Verilog. In this paper, we introduce a novel open-source dataset and a
corresponding fine-tuning technique, which utilizes a multi-layered structure
that we refer to as PyraNet. Our experiments demonstrate that employing the
proposed dataset and fine-tuning approach leads to a more accurate fine-tuned
model, producing syntactically and functionally correct Verilog code. The
evaluation results show improvements of up to $32.6\%$ in comparison to the
CodeLlama-7B baseline model and up to $16.7\%$ in comparison to the
state-of-the-art models using the VerilogEval evaluation platform.
| [
{
"version": "v1",
"created": "Mon, 9 Dec 2024 19:45:54 GMT"
},
{
"version": "v2",
"created": "Fri, 27 Dec 2024 01:07:02 GMT"
},
{
"version": "v3",
"created": "Mon, 7 Apr 2025 21:58:26 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Nadimi",
"Bardia",
""
],
[
"Boutaib",
"Ghali Omar",
""
],
[
"Zheng",
"Hao",
""
]
] | TITLE: PyraNet: A Multi-Layered Hierarchical Dataset for Verilog
ABSTRACT: Recently, there has been a growing interest in leveraging Large Language
Models for Verilog code generation. However, the current quality of the
generated Verilog code remains suboptimal. This is largely due to the absence
of well-defined, well-organized datasets with high-quality samples, as well as
a lack of innovative fine-tuning methods and models specifically trained on
Verilog. In this paper, we introduce a novel open-source dataset and a
corresponding fine-tuning technique, which utilizes a multi-layered structure
that we refer to as PyraNet. Our experiments demonstrate that employing the
proposed dataset and fine-tuning approach leads to a more accurate fine-tuned
model, producing syntactically and functionally correct Verilog code. The
evaluation results show improvements of up to $32.6\%$ in comparison to the
CodeLlama-7B baseline model and up to $16.7\%$ in comparison to the
state-of-the-art models using the VerilogEval evaluation platform.
|
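One way to read the "multi-layered structure" in the PyraNet abstract is as quality tiers sampled with different weights during fine-tuning. The sketch below is purely hypothetical (tier names, weights, and samples are invented) and only illustrates weighted sampling across layers of a hierarchical dataset.

```python
import random

# Toy stand-in for a layered dataset: layer -> (samples, sampling weight).
tiers = {
    "top":    (["s1", "s2"], 3.0),
    "middle": (["s3", "s4", "s5"], 2.0),
    "base":   (["s6", "s7", "s8", "s9"], 1.0),
}

def sample_batch(rng, k=4):
    """Draw k samples, preferring higher-quality layers via their weights."""
    names = list(tiers)
    weights = [tiers[n][1] for n in names]
    batch = []
    for _ in range(k):
        layer = rng.choices(names, weights=weights, k=1)[0]
        batch.append(rng.choice(tiers[layer][0]))
    return batch

rng = random.Random(0)
print(sample_batch(rng))
```

Higher-tier samples are drawn more often, which is one plausible way a layered dataset could bias fine-tuning toward its best examples.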
2412.07456 | Ben Steinfurth | Jonas Schulte-Sasse, Ben Steinfurth and Julien Weiss | Automatic extraction of wall streamlines from oil-flow visualizations
using a convolutional neural network | null | Exp. Fluids 66 (2025) | 10.1007/s00348-025-04016-x | null | physics.flu-dyn | http://creativecommons.org/licenses/by/4.0/ | Oil-flow visualizations represent a simple means to reveal time-averaged wall
streamline patterns. Yet, the evaluation of such images can be a time-consuming
process and is subjective to human perception. In this study, we present a fast
and robust method to obtain quantitative insight based on qualitative oil-flow
visualizations. Using a convolutional neural network, the local flow direction
is predicted based on the oil-flow texture. This was achieved with supervised
training based on an extensive dataset involving approximately one million
image patches that cover variations of the flow direction, the wall
shear-stress magnitude and the oil-flow mixture. For a test dataset that is
distinct from the training data, the mean prediction error of the flow
direction is as low as three degrees. A reliable performance is also noted when
the model is applied to oil-flow visualizations from the literature,
demonstrating the generalizability required for an application in diverse flow
configurations.
| [
{
"version": "v1",
"created": "Tue, 10 Dec 2024 12:21:44 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Schulte-Sasse",
"Jonas",
""
],
[
"Steinfurth",
"Ben",
""
],
[
"Weiss",
"Julien",
""
]
] | TITLE: Automatic extraction of wall streamlines from oil-flow visualizations
using a convolutional neural network
ABSTRACT: Oil-flow visualizations represent a simple means to reveal time-averaged wall
streamline patterns. Yet, the evaluation of such images can be a time-consuming
process and is subjective to human perception. In this study, we present a fast
and robust method to obtain quantitative insight based on qualitative oil-flow
visualizations. Using a convolutional neural network, the local flow direction
is predicted based on the oil-flow texture. This was achieved with supervised
training based on an extensive dataset involving approximately one million
image patches that cover variations of the flow direction, the wall
shear-stress magnitude and the oil-flow mixture. For a test dataset that is
distinct from the training data, the mean prediction error of the flow
direction is as low as three degrees. A reliable performance is also noted when
the model is applied to oil-flow visualizations from the literature,
demonstrating the generalizability required for an application in diverse flow
configurations.
|
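The headline metric in the abstract above, a mean flow-direction error of about three degrees, requires circular angle arithmetic: naively subtracting 355° from 2° gives 353° instead of 7°. A small sketch of the wraparound-safe error computation (the predictions below are invented):

```python
import numpy as np

def angular_error_deg(pred, true):
    """Smallest absolute difference between two angles, in degrees."""
    diff = (np.asarray(pred) - np.asarray(true) + 180.0) % 360.0 - 180.0
    return np.abs(diff)

pred = np.array([10.0, 355.0, 182.0])
true = np.array([12.0,   2.0, 178.0])
print(angular_error_deg(pred, true).mean())  # errors 2, 7, 4 degrees
```

Depending on whether the oil-flow texture distinguishes upstream from downstream, a 180°-periodic variant of the same formula may be the more appropriate metric.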
2412.08307 | Shijian Wang | Shijian Wang, Linxin Song, Jieyu Zhang, Ryotaro Shimizu, Jiarui Jin,
Ao Luo, Yuan Lu, Li Yao, Cunjian Chen, Julian McAuley, Wentao Zhang, Hanqian
Wu | Investigating the Scaling Effect of Instruction Templates for Training
Multimodal Language Model | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current multimodal language model (MLM) training approaches overlook the
influence of instruction templates. Previous research deals with this problem
by leveraging hand-crafted or model-generated templates, failing to investigate
the scaling effect of instruction templates on MLM training. In this work, we
propose a programmatic instruction template generator capable of producing over
15K unique instruction templates by filling randomly sampled positional
synonyms into weighted sampled meta templates, enabling us to comprehensively
explore MLM's performance across various template scales in the training
process. Our investigation into scaling instruction templates for MLM training
demonstrates that MLM capabilities do not consistently improve with increasing
template scale. Instead, optimal performance is achieved at a medium template
scale. Models trained with data augmented at the optimal template scale achieve
performance gains of up to 10% over those trained on the original data and
achieve the best overall performance compared with the similar-scale MLMs tuned
on at most 75 times the scale of our augmented dataset. The code will be
publicly available at https://github.com/shijian2001/TemplateScaling.
| [
{
"version": "v1",
"created": "Wed, 11 Dec 2024 11:39:42 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Apr 2025 14:45:49 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Apr 2025 08:30:30 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Wang",
"Shijian",
""
],
[
"Song",
"Linxin",
""
],
[
"Zhang",
"Jieyu",
""
],
[
"Shimizu",
"Ryotaro",
""
],
[
"Jin",
"Jiarui",
""
],
[
"Luo",
"Ao",
""
],
[
"Lu",
"Yuan",
""
],
[
"Yao",
"Li",
""
],
[
"Chen",
"Cunjian",
""
],
[
"McAuley",
"Julian",
""
],
[
"Zhang",
"Wentao",
""
],
[
"Wu",
"Hanqian",
""
]
] | TITLE: Investigating the Scaling Effect of Instruction Templates for Training
Multimodal Language Model
ABSTRACT: Current multimodal language model (MLM) training approaches overlook the
influence of instruction templates. Previous research deals with this problem
by leveraging hand-crafted or model-generated templates, failing to investigate
the scaling effect of instruction templates on MLM training. In this work, we
propose a programmatic instruction template generator capable of producing over
15K unique instruction templates by filling randomly sampled positional
synonyms into weighted sampled meta templates, enabling us to comprehensively
explore MLM's performance across various template scales in the training
process. Our investigation into scaling instruction templates for MLM training
demonstrates that MLM capabilities do not consistently improve with increasing
template scale. Instead, optimal performance is achieved at a medium template
scale. Models trained with data augmented at the optimal template scale achieve
performance gains of up to 10% over those trained on the original data and
achieve the best overall performance compared with the similar-scale MLMs tuned
on at most 75 times the scale of our augmented dataset. The code will be
publicly available at https://github.com/shijian2001/TemplateScaling.
|
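The programmatic template generator described in the abstract (positional synonyms filled into weighted-sampled meta templates) can be sketched as follows. The meta templates, slot names, and weights are invented for illustration; the paper's generator covers far more combinations (over 15K unique templates).

```python
import random

# Weighted meta templates with positional slots.
meta_templates = [
    ("{ask} {object} in the image.", 0.7),
    ("{ask} what {object} is shown.", 0.3),
]
# Synonym pool per slot.
synonyms = {
    "ask": ["Describe", "Identify", "Point out"],
    "object": ["the main object", "the salient item"],
}

def sample_template(rng):
    """Weighted-sample a meta template, then fill each slot with a random synonym."""
    templates, weights = zip(*meta_templates)
    out = rng.choices(templates, weights=weights, k=1)[0]
    for slot, options in synonyms.items():
        out = out.replace("{" + slot + "}", rng.choice(options))
    return out

print(sample_template(random.Random(0)))
```

With 2 meta templates, 3 "ask" synonyms, and 2 "object" synonyms, this toy pool yields at most 12 unique templates; scaling the pools is what produces the template-scale axis studied in the paper.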
2412.08755 | Kyle Stein | Kyle Stein, Andrew Arash Mahyari, Guillermo Francia, Eman El-Sheikh | Proactive Adversarial Defense: Harnessing Prompt Tuning in
Vision-Language Models to Detect Unseen Backdoored Images | null | null | null | null | cs.CV cs.AI cs.CR cs.LG | http://creativecommons.org/licenses/by/4.0/ | Backdoor attacks pose a critical threat by embedding hidden triggers into
inputs, causing models to misclassify them into target labels. While extensive
research has focused on mitigating these attacks in object recognition models
through weight fine-tuning, much less attention has been given to detecting
backdoored samples directly. Given the vast datasets used in training, manual
inspection for backdoor triggers is impractical, and even state-of-the-art
defense mechanisms fail to fully neutralize their impact. To address this gap,
we introduce a groundbreaking method to detect unseen backdoored images during
both training and inference. Leveraging the transformative success of prompt
tuning in Vision Language Models (VLMs), our approach trains learnable text
prompts to differentiate clean images from those with hidden backdoor triggers.
Experiments demonstrate the exceptional efficacy of this method, achieving an
impressive average accuracy of 86% across two renowned datasets for detecting
unseen backdoor triggers, establishing a new standard in backdoor defense.
| [
{
"version": "v1",
"created": "Wed, 11 Dec 2024 19:54:14 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Jan 2025 19:15:20 GMT"
},
{
"version": "v3",
"created": "Fri, 14 Mar 2025 19:24:34 GMT"
},
{
"version": "v4",
"created": "Mon, 7 Apr 2025 18:01:26 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Stein",
"Kyle",
""
],
[
"Mahyari",
"Andrew Arash",
""
],
[
"Francia",
"Guillermo",
""
],
[
"El-Sheikh",
"Eman",
""
]
] | TITLE: Proactive Adversarial Defense: Harnessing Prompt Tuning in
Vision-Language Models to Detect Unseen Backdoored Images
ABSTRACT: Backdoor attacks pose a critical threat by embedding hidden triggers into
inputs, causing models to misclassify them into target labels. While extensive
research has focused on mitigating these attacks in object recognition models
through weight fine-tuning, much less attention has been given to detecting
backdoored samples directly. Given the vast datasets used in training, manual
inspection for backdoor triggers is impractical, and even state-of-the-art
defense mechanisms fail to fully neutralize their impact. To address this gap,
we introduce a groundbreaking method to detect unseen backdoored images during
both training and inference. Leveraging the transformative success of prompt
tuning in Vision Language Models (VLMs), our approach trains learnable text
prompts to differentiate clean images from those with hidden backdoor triggers.
Experiments demonstrate the exceptional efficacy of this method, achieving an
impressive average accuracy of 86% across two renowned datasets for detecting
unseen backdoor triggers, establishing a new standard in backdoor defense.
|
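At inference time, the prompt-tuning defense sketched in the abstract reduces to comparing an image embedding against two learned text-prompt embeddings. The toy vectors and decision rule below are invented stand-ins; the actual method learns the prompts end-to-end inside a VLM.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def detect(img_emb, clean_prompt, trigger_prompt):
    """Flag an embedding as backdoored if it aligns more with the trigger prompt."""
    if cosine(img_emb, trigger_prompt) > cosine(img_emb, clean_prompt):
        return "backdoored"
    return "clean"

clean_prompt = np.array([1.0, 0.0, 0.0])    # stand-in for a learned "clean" prompt
trigger_prompt = np.array([0.0, 1.0, 0.0])  # stand-in for a learned "trigger" prompt
print(detect(np.array([0.9, 0.1, 0.0]), clean_prompt, trigger_prompt))
print(detect(np.array([0.2, 0.8, 0.1]), clean_prompt, trigger_prompt))
```

The interesting part of the paper is that the learned prompts generalize to *unseen* trigger types; the nearest-prompt rule itself is this simple.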
2412.11530 | Junda Cheng | Junda Cheng, Zhipeng Cai, Zhaoxing Zhang, Wei Yin, Matthias Muller,
Michael Paulitsch, Xin Yang | RoMeO: Robust Metric Visual Odometry | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual odometry (VO) aims to estimate camera poses from visual inputs -- a
fundamental building block for many applications such as VR/AR and robotics.
This work focuses on monocular RGB VO where the input is a monocular RGB video
without IMU or 3D sensors. Existing approaches lack robustness under this
challenging scenario and fail to generalize to unseen data (especially
outdoors); they also cannot recover metric-scale poses. We propose Robust
Metric Visual Odometry (RoMeO), a novel method that resolves these issues
leveraging priors from pre-trained depth models. RoMeO incorporates both
monocular metric depth and multi-view stereo (MVS) models to recover
metric-scale, simplify correspondence search, provide better initialization and
regularize optimization. Effective strategies are proposed to inject noise
during training and adaptively filter noisy depth priors, which ensure the
robustness of RoMeO on in-the-wild data. As shown in Fig.1, RoMeO advances the
state-of-the-art (SOTA) by a large margin across 6 diverse datasets covering
both indoor and outdoor scenes. Compared to the current SOTA DPVO, RoMeO
reduces the relative (align the trajectory scale with GT) and absolute
trajectory errors both by >50%. The performance gain also transfers to the full
SLAM pipeline (with global BA & loop closure). Code will be released upon
acceptance.
| [
{
"version": "v1",
"created": "Mon, 16 Dec 2024 08:08:35 GMT"
},
{
"version": "v2",
"created": "Thu, 19 Dec 2024 06:32:22 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Apr 2025 13:16:35 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Cheng",
"Junda",
""
],
[
"Cai",
"Zhipeng",
""
],
[
"Zhang",
"Zhaoxing",
""
],
[
"Yin",
"Wei",
""
],
[
"Muller",
"Matthias",
""
],
[
"Paulitsch",
"Michael",
""
],
[
"Yang",
"Xin",
""
]
] | TITLE: RoMeO: Robust Metric Visual Odometry
ABSTRACT: Visual odometry (VO) aims to estimate camera poses from visual inputs -- a
fundamental building block for many applications such as VR/AR and robotics.
This work focuses on monocular RGB VO where the input is a monocular RGB video
without IMU or 3D sensors. Existing approaches lack robustness under this
challenging scenario and fail to generalize to unseen data (especially
outdoors); they also cannot recover metric-scale poses. We propose Robust
Metric Visual Odometry (RoMeO), a novel method that resolves these issues
leveraging priors from pre-trained depth models. RoMeO incorporates both
monocular metric depth and multi-view stereo (MVS) models to recover
metric-scale, simplify correspondence search, provide better initialization and
regularize optimization. Effective strategies are proposed to inject noise
during training and adaptively filter noisy depth priors, which ensure the
robustness of RoMeO on in-the-wild data. As shown in Fig.1, RoMeO advances the
state-of-the-art (SOTA) by a large margin across 6 diverse datasets covering
both indoor and outdoor scenes. Compared to the current SOTA DPVO, RoMeO
reduces the relative (align the trajectory scale with GT) and absolute
trajectory errors both by >50%. The performance gain also transfers to the full
SLAM pipeline (with global BA & loop closure). Code will be released upon
acceptance.
|
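A core sub-step behind depth-prior methods like RoMeO is aligning an affine-invariant relative depth map to metric scale. A minimal least-squares sketch of that scale-and-shift fit (the synthetic data and function name are assumptions, not the paper's pipeline, which also handles noisy priors and MVS depth):

```python
import numpy as np

def fit_scale_shift(relative_depth, metric_depth):
    """Least-squares scale a and shift b minimizing ||a*d_rel + b - d_metric||^2."""
    A = np.stack([relative_depth, np.ones_like(relative_depth)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, metric_depth, rcond=None)
    return a, b

rng = np.random.default_rng(0)
d_rel = rng.uniform(0.1, 1.0, size=100)
d_metric = 3.0 * d_rel + 0.5  # noiseless ground-truth affine relation
a, b = fit_scale_shift(d_rel, d_metric)
print(round(float(a), 3), round(float(b), 3))  # recovers a ~ 3.0, b ~ 0.5
```

With noisy real-world priors a robust variant (e.g. RANSAC over pixel pairs) replaces the plain least-squares fit.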
2412.17867 | Ziming Guo | Ziming Guo, Chao Ma, Yinggang Sun, Tiancheng Zhao, Guangyao Wang, Hai
Huang | Evaluating and Enhancing LLMs for Multi-turn Text-to-SQL with Multiple
Question Types | International Joint Conference on Neural Networks 2025 (IJCNN 2025) | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in large language models (LLMs) have significantly
advanced text-to-SQL systems. However, most LLM-based methods often narrowly
focus on SQL generation, neglecting the complexities of real-world
conversational queries. This oversight can lead to unreliable responses,
particularly for ambiguous questions that cannot be directly addressed with
SQL. To bridge this gap, we propose MMSQL, a comprehensive test suite designed
to evaluate the question classification and SQL generation capabilities of LLMs
by simulating real-world scenarios with diverse question types and multi-turn
Q&A interactions. Using MMSQL, we assessed the performance of popular LLMs,
including both open-source and closed-source models, and identified key factors
impacting their performance in such scenarios. Moreover, we introduce an
LLM-based multi-agent framework that employs specialized agents to identify
question types and determine appropriate answering strategies. Our experiments
demonstrate that this approach significantly enhances the model's ability to
navigate the complexities of conversational dynamics, effectively handling the
diverse and complex nature of user queries. Our dataset and code are publicly
available at https://mcxiaoxiao.github.io/MMSQL.
| [
{
"version": "v1",
"created": "Sat, 21 Dec 2024 10:13:45 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 07:13:30 GMT"
},
{
"version": "v3",
"created": "Sun, 6 Apr 2025 09:47:45 GMT"
},
{
"version": "v4",
"created": "Tue, 8 Apr 2025 02:23:17 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Guo",
"Ziming",
""
],
[
"Ma",
"Chao",
""
],
[
"Sun",
"Yinggang",
""
],
[
"Zhao",
"Tiancheng",
""
],
[
"Wang",
"Guangyao",
""
],
[
"Huang",
"Hai",
""
]
] | TITLE: Evaluating and Enhancing LLMs for Multi-turn Text-to-SQL with Multiple
Question Types
ABSTRACT: Recent advancements in large language models (LLMs) have significantly
advanced text-to-SQL systems. However, most LLM-based methods often narrowly
focus on SQL generation, neglecting the complexities of real-world
conversational queries. This oversight can lead to unreliable responses,
particularly for ambiguous questions that cannot be directly addressed with
SQL. To bridge this gap, we propose MMSQL, a comprehensive test suite designed
to evaluate the question classification and SQL generation capabilities of LLMs
by simulating real-world scenarios with diverse question types and multi-turn
Q&A interactions. Using MMSQL, we assessed the performance of popular LLMs,
including both open-source and closed-source models, and identified key factors
impacting their performance in such scenarios. Moreover, we introduce an
LLM-based multi-agent framework that employs specialized agents to identify
question types and determine appropriate answering strategies. Our experiments
demonstrate that this approach significantly enhances the model's ability to
navigate the complexities of conversational dynamics, effectively handling the
diverse and complex nature of user queries. Our dataset and code are publicly
available at https://mcxiaoxiao.github.io/MMSQL.
|
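The multi-agent routing idea in the MMSQL abstract, where a classifier agent labels each turn's question type and a router picks an answering strategy, can be sketched with a toy rule-based classifier. The type names and keyword rules here are invented for illustration; MMSQL uses LLM agents, not keyword matching.

```python
def classify_question(q: str) -> str:
    """Toy stand-in for the question-type classifier agent."""
    q = q.lower()
    if "something" in q or "stuff" in q:
        return "ambiguous"
    if any(k in q for k in ("how many", "list", "which", "what")):
        return "answerable"
    return "unanswerable"

def route(q: str) -> str:
    """Dispatch each question type to a matching answering strategy."""
    strategy = {
        "answerable": "generate_sql",
        "ambiguous": "ask_clarification",
        "unanswerable": "explain_limitation",
    }
    return strategy[classify_question(q)]

print(route("How many orders were placed in 2023?"))  # generate_sql
print(route("Show me something about the data."))     # ask_clarification
```

Separating classification from strategy selection is what lets the system refuse or clarify instead of emitting SQL for every turn.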
2501.00952 | Maxim Ziatdinov | Sarah I. Allec, Maxim Ziatdinov | Active and transfer learning with partially Bayesian neural networks for
materials and chemicals | Minor revisions | null | null | null | cond-mat.dis-nn cond-mat.mtrl-sci physics.data-an | http://creativecommons.org/licenses/by/4.0/ | Active learning, an iterative process of selecting the most informative data
points for exploration, is crucial for efficient characterization of materials
and chemicals property space. Neural networks excel at predicting these
properties but lack the uncertainty quantification needed for active
learning-driven exploration. Fully Bayesian neural networks, in which weights
are treated as probability distributions inferred via advanced Markov Chain
Monte Carlo methods, offer robust uncertainty quantification but at high
computational cost. Here, we show that partially Bayesian neural networks
(PBNNs), where only selected layers have probabilistic weights while others
remain deterministic, can achieve accuracy and uncertainty estimates on active
learning tasks comparable to fully Bayesian networks at lower computational
cost. Furthermore, by initializing prior distributions with weights pre-trained
on theoretical calculations, we demonstrate that PBNNs can effectively leverage
computational predictions to accelerate active learning of experimental data.
We validate these approaches on both molecular property prediction and
materials science tasks, establishing PBNNs as a practical tool for active
learning with limited, complex datasets.
| [
{
"version": "v1",
"created": "Wed, 1 Jan 2025 20:48:26 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 20:33:33 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Allec",
"Sarah I.",
""
],
[
"Ziatdinov",
"Maxim",
""
]
] | TITLE: Active and transfer learning with partially Bayesian neural networks for
materials and chemicals
ABSTRACT: Active learning, an iterative process of selecting the most informative data
points for exploration, is crucial for efficient characterization of materials
and chemicals property space. Neural networks excel at predicting these
properties but lack the uncertainty quantification needed for active
learning-driven exploration. Fully Bayesian neural networks, in which weights
are treated as probability distributions inferred via advanced Markov Chain
Monte Carlo methods, offer robust uncertainty quantification but at high
computational cost. Here, we show that partially Bayesian neural networks
(PBNNs), where only selected layers have probabilistic weights while others
remain deterministic, can achieve accuracy and uncertainty estimates on active
learning tasks comparable to fully Bayesian networks at lower computational
cost. Furthermore, by initializing prior distributions with weights pre-trained
on theoretical calculations, we demonstrate that PBNNs can effectively leverage
computational predictions to accelerate active learning of experimental data.
We validate these approaches on both molecular property prediction and
materials science tasks, establishing PBNNs as a practical tool for active
learning with limited, complex datasets.
|
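The PBNN idea, deterministic layers feeding a probabilistic head, can be illustrated with a closed-form stand-in: fixed random ReLU features (the non-Bayesian "body") followed by conjugate Bayesian linear regression over the last-layer weights (the probabilistic "head", solved in closed form here instead of MCMC). Everything below is a toy sketch, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(1, 50))  # frozen "body" weights

def features(x):
    """Deterministic random-ReLU feature map standing in for the frozen layers."""
    return np.maximum(x[:, None] * W, 0.0)

def posterior(Phi, y, noise=0.1, prior=1.0):
    """Closed-form Gaussian posterior over the probabilistic last-layer weights."""
    A = Phi.T @ Phi / noise**2 + np.eye(Phi.shape[1]) / prior**2
    cov = np.linalg.inv(A)
    mean = cov @ Phi.T @ y / noise**2
    return mean, cov

def predict(x, mean, cov, noise=0.1):
    """Predictive mean and variance, the uncertainty an AL loop would use."""
    Phi = features(x)
    mu = Phi @ mean
    var = np.einsum("ij,jk,ik->i", Phi, cov, Phi) + noise**2
    return mu, var

x_train = np.linspace(-1, 1, 40)
y_train = np.sin(3 * x_train) + rng.normal(scale=0.1, size=40)
mean, cov = posterior(features(x_train), y_train)
mu_in, var_in = predict(np.array([0.0]), mean, cov)    # inside the data range
mu_out, var_out = predict(np.array([5.0]), mean, cov)  # far outside it
print(var_out[0] > var_in[0])  # predictive uncertainty grows away from the data
```

That growth of predictive variance away from the data is exactly the signal an active-learning acquisition function exploits, which is why calibrated uncertainty matters more than raw accuracy here.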
2501.04671 | Charles Corbi\`ere | Charles Corbi\`ere, Simon Roburin, Syrielle Montariol, Antoine
Bosselut and Alexandre Alahi | Retrieval-Based Interleaved Visual Chain-of-Thought in Real-World
Driving Scenarios | Project page: https://vita-epfl.github.io/DrivingVQA | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | While chain-of-thought (CoT) prompting improves reasoning in large language
models, its effectiveness in vision-language models (VLMs) remains limited due
to over-reliance on textual cues and memorized knowledge. To investigate the
visual reasoning capabilities of VLMs in complex real-world scenarios, we
introduce DrivingVQA, a visual question answering dataset derived from driving
theory exams, which contains 3,931 multiple-choice problems with expert-written
explanations and grounded entities relevant to the reasoning process.
Leveraging this dataset, we propose RIV-CoT, a Retrieval-Based Interleaved
Visual Chain-of-Thought method that enables VLMs to reason using visual crops
corresponding to these relevant entities. Our experiments demonstrate that
RIV-CoT improves answer accuracy by 3.1% and reasoning accuracy by 4.6% over
vanilla CoT prompting. Furthermore, we demonstrate that our method effectively
scales to the larger A-OKVQA reasoning dataset by leveraging automatically
generated pseudo-labels, outperforming CoT prompting.
| [
{
"version": "v1",
"created": "Wed, 8 Jan 2025 18:31:16 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 17:09:59 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Corbière",
"Charles",
""
],
[
"Roburin",
"Simon",
""
],
[
"Montariol",
"Syrielle",
""
],
[
"Bosselut",
"Antoine",
""
],
[
"Alahi",
"Alexandre",
""
]
] | TITLE: Retrieval-Based Interleaved Visual Chain-of-Thought in Real-World
Driving Scenarios
ABSTRACT: While chain-of-thought (CoT) prompting improves reasoning in large language
models, its effectiveness in vision-language models (VLMs) remains limited due
to over-reliance on textual cues and memorized knowledge. To investigate the
visual reasoning capabilities of VLMs in complex real-world scenarios, we
introduce DrivingVQA, a visual question answering dataset derived from driving
theory exams, which contains 3,931 multiple-choice problems with expert-written
explanations and grounded entities relevant to the reasoning process.
Leveraging this dataset, we propose RIV-CoT, a Retrieval-Based Interleaved
Visual Chain-of-Thought method that enables VLMs to reason using visual crops
corresponding to these relevant entities. Our experiments demonstrate that
RIV-CoT improves answer accuracy by 3.1% and reasoning accuracy by 4.6% over
vanilla CoT prompting. Furthermore, we demonstrate that our method effectively
scales to the larger A-OKVQA reasoning dataset by leveraging automatically
generated pseudo-labels, outperforming CoT prompting.
|
2501.05446 | Yifan Yu | Yifan Yu, Shaohui Liu, R\'emi Pautrat, Marc Pollefeys, Viktor Larsson | Relative Pose Estimation through Affine Corrections of Monocular Depth
Priors | CVPR 2025 (Highlight) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Monocular depth estimation (MDE) models have undergone significant
advancements over recent years. Many MDE models aim to predict affine-invariant
relative depth from monocular images, while recent developments in large-scale
training and vision foundation models enable reasonable estimation of metric
(absolute) depth. However, effectively leveraging these predictions for
geometric vision tasks, in particular relative pose estimation, remains
relatively underexplored. While depths provide rich constraints for cross-view
image alignment, the intrinsic noise and ambiguity from the monocular depth
priors present practical challenges to improving upon classic keypoint-based
solutions. In this paper, we develop three solvers for relative pose estimation
that explicitly account for independent affine (scale and shift) ambiguities,
covering both calibrated and uncalibrated conditions. We further propose a
hybrid estimation pipeline that combines our proposed solvers with classic
point-based solvers and epipolar constraints. We find that the affine
correction modeling is beneficial to not only the relative depth priors but
also, surprisingly, the "metric" ones. Results across multiple datasets
demonstrate large improvements of our approach over classic keypoint-based
baselines and PnP-based solutions, under both calibrated and uncalibrated
setups. We also show that our method improves consistently with different
feature matchers and MDE models, and can further benefit from very recent
advances on both modules. Code is available at
https://github.com/MarkYu98/madpose.
| [
{
"version": "v1",
"created": "Thu, 9 Jan 2025 18:58:30 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Mar 2025 17:14:43 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Apr 2025 03:59:21 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Yu",
"Yifan",
""
],
[
"Liu",
"Shaohui",
""
],
[
"Pautrat",
"Rémi",
""
],
[
"Pollefeys",
"Marc",
""
],
[
"Larsson",
"Viktor",
""
]
] | TITLE: Relative Pose Estimation through Affine Corrections of Monocular Depth
Priors
ABSTRACT: Monocular depth estimation (MDE) models have undergone significant
advancements over recent years. Many MDE models aim to predict affine-invariant
relative depth from monocular images, while recent developments in large-scale
training and vision foundation models enable reasonable estimation of metric
(absolute) depth. However, effectively leveraging these predictions for
geometric vision tasks, in particular relative pose estimation, remains
relatively underexplored. While depths provide rich constraints for cross-view
image alignment, the intrinsic noise and ambiguity from the monocular depth
priors present practical challenges to improving upon classic keypoint-based
solutions. In this paper, we develop three solvers for relative pose estimation
that explicitly account for independent affine (scale and shift) ambiguities,
covering both calibrated and uncalibrated conditions. We further propose a
hybrid estimation pipeline that combines our proposed solvers with classic
point-based solvers and epipolar constraints. We find that the affine
correction modeling is beneficial to not only the relative depth priors but
also, surprisingly, the "metric" ones. Results across multiple datasets
demonstrate large improvements of our approach over classic keypoint-based
baselines and PnP-based solutions, under both calibrated and uncalibrated
setups. We also show that our method improves consistently with different
feature matchers and MDE models, and can further benefit from very recent
advances on both modules. Code is available at
https://github.com/MarkYu98/madpose.
|
2501.09333 | Wei-Lun Chao | Arpita Chowdhury, Dipanjyoti Paul, Zheda Mai, Jianyang Gu, Ziheng
Zhang, Kazi Sajeed Mehrab, Elizabeth G. Campolongo, Daniel Rubenstein,
Charles V. Stewart, Anuj Karpatne, Tanya Berger-Wolf, Yu Su, Wei-Lun Chao | Prompt-CAM: Making Vision Transformers Interpretable for Fine-Grained
Analysis | Accepted by CVPR 2025 Main Conference | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We present a simple approach to make pre-trained Vision Transformers (ViTs)
interpretable for fine-grained analysis, aiming to identify and localize the
traits that distinguish visually similar categories, such as bird species.
Pre-trained ViTs, such as DINO, have demonstrated remarkable capabilities in
extracting localized, discriminative features. However, saliency maps like
Grad-CAM often fail to identify these traits, producing blurred, coarse
heatmaps that highlight entire objects instead. We propose a novel approach,
Prompt Class Attention Map (Prompt-CAM), to address this limitation. Prompt-CAM
learns class-specific prompts for a pre-trained ViT and uses the corresponding
outputs for classification. To correctly classify an image, the true-class
prompt must attend to unique image patches not present in other classes' images
(i.e., traits). As a result, the true class's multi-head attention maps reveal
traits and their locations. Implementation-wise, Prompt-CAM is almost a ``free
lunch,'' requiring only a modification to the prediction head of Visual Prompt
Tuning (VPT). This makes Prompt-CAM easy to train and apply, in stark contrast
to other interpretable methods that require designing specific models and
training processes. Extensive empirical studies on a dozen datasets from
various domains (e.g., birds, fishes, insects, fungi, flowers, food, and cars)
validate the superior interpretation capability of Prompt-CAM. The source code
and demo are available at https://github.com/Imageomics/Prompt_CAM.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2025 07:07:41 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 18:03:40 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Chowdhury",
"Arpita",
""
],
[
"Paul",
"Dipanjyoti",
""
],
[
"Mai",
"Zheda",
""
],
[
"Gu",
"Jianyang",
""
],
[
"Zhang",
"Ziheng",
""
],
[
"Mehrab",
"Kazi Sajeed",
""
],
[
"Campolongo",
"Elizabeth G.",
""
],
[
"Rubenstein",
"Daniel",
""
],
[
"Stewart",
"Charles V.",
""
],
[
"Karpatne",
"Anuj",
""
],
[
"Berger-Wolf",
"Tanya",
""
],
[
"Su",
"Yu",
""
],
[
"Chao",
"Wei-Lun",
""
]
] | TITLE: Prompt-CAM: Making Vision Transformers Interpretable for Fine-Grained
Analysis
ABSTRACT: We present a simple approach to make pre-trained Vision Transformers (ViTs)
interpretable for fine-grained analysis, aiming to identify and localize the
traits that distinguish visually similar categories, such as bird species.
Pre-trained ViTs, such as DINO, have demonstrated remarkable capabilities in
extracting localized, discriminative features. However, saliency maps like
Grad-CAM often fail to identify these traits, producing blurred, coarse
heatmaps that highlight entire objects instead. We propose a novel approach,
Prompt Class Attention Map (Prompt-CAM), to address this limitation. Prompt-CAM
learns class-specific prompts for a pre-trained ViT and uses the corresponding
outputs for classification. To correctly classify an image, the true-class
prompt must attend to unique image patches not present in other classes' images
(i.e., traits). As a result, the true class's multi-head attention maps reveal
traits and their locations. Implementation-wise, Prompt-CAM is almost a ``free
lunch,'' requiring only a modification to the prediction head of Visual Prompt
Tuning (VPT). This makes Prompt-CAM easy to train and apply, in stark contrast
to other interpretable methods that require designing specific models and
training processes. Extensive empirical studies on a dozen datasets from
various domains (e.g., birds, fishes, insects, fungi, flowers, food, and cars)
validate the superior interpretation capability of Prompt-CAM. The source code
and demo are available at https://github.com/Imageomics/Prompt_CAM.
|
2501.11014 | Ken Enda | Ken Enda, Yoshitaka Oda, Zen-ichi Tanei, Kenichi Satoh, Hiroaki
Motegi, Terasaka Shunsuke, Shigeru Yamaguchi, Takahiro Ogawa, Wang Lei,
Masumi Tsuda and Shinya Tanaka | Transfer Learning Strategies for Pathological Foundation Models: A
Systematic Evaluation in Brain Tumor Classification | 25 pages, 7 figures | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Foundation models pretrained on large-scale pathology datasets have shown
promising results across various diagnostic tasks. Here, we present a
systematic evaluation of transfer learning strategies for brain tumor
classification using these models. We analyzed 254 cases comprising five major
tumor types: glioblastoma, astrocytoma, oligodendroglioma, primary central
nervous system lymphoma, and metastatic tumors. Comparing state-of-the-art
foundation models with conventional approaches, we found that foundation models
demonstrated robust classification performance with as few as 10 patches per
case, despite the traditional assumption that extensive per-case image sampling
is necessary. Furthermore, our evaluation revealed that simple transfer
learning strategies like linear probing were sufficient, while fine-tuning
often degraded model performance. These findings suggest a paradigm shift from
"training encoders on extensive pathological data" to "querying pre-trained
encoders with labeled datasets", providing practical implications for
implementing AI-assisted diagnosis in clinical pathology.
| [
{
"version": "v1",
"created": "Sun, 19 Jan 2025 11:18:34 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 01:49:45 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Enda",
"Ken",
""
],
[
"Oda",
"Yoshitaka",
""
],
[
"Tanei",
"Zen-ichi",
""
],
[
"Satoh",
"Kenichi",
""
],
[
"Motegi",
"Hiroaki",
""
],
[
"Shunsuke",
"Terasaka",
""
],
[
"Yamaguchi",
"Shigeru",
""
],
[
"Ogawa",
"Takahiro",
""
],
[
"Lei",
"Wang",
""
],
[
"Tsuda",
"Masumi",
""
],
[
"Tanaka",
"Shinya",
""
]
] | TITLE: Transfer Learning Strategies for Pathological Foundation Models: A
Systematic Evaluation in Brain Tumor Classification
ABSTRACT: Foundation models pretrained on large-scale pathology datasets have shown
promising results across various diagnostic tasks. Here, we present a
systematic evaluation of transfer learning strategies for brain tumor
classification using these models. We analyzed 254 cases comprising five major
tumor types: glioblastoma, astrocytoma, oligodendroglioma, primary central
nervous system lymphoma, and metastatic tumors. Comparing state-of-the-art
foundation models with conventional approaches, we found that foundation models
demonstrated robust classification performance with as few as 10 patches per
case, despite the traditional assumption that extensive per-case image sampling
is necessary. Furthermore, our evaluation revealed that simple transfer
learning strategies like linear probing were sufficient, while fine-tuning
often degraded model performance. These findings suggest a paradigm shift from
"training encoders on extensive pathological data" to "querying pre-trained
encoders with labeled datasets", providing practical implications for
implementing AI-assisted diagnosis in clinical pathology.
|
2501.17848 | Fabricio Olivetti de Franca | Fabricio Olivetti de Franca and Gabriel Kronberger | Improving Genetic Programming for Symbolic Regression with Equality
Graphs | 10 pages, 5 figures, 4 tables. In Genetic and Evolutionary
Computation Conference (GECCO 25) | null | 10.1145/3712256.3726383 | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | The search for symbolic regression models with genetic programming (GP) has a
tendency to revisit expressions in their original or equivalent forms.
Repeatedly evaluating equivalent expressions is inefficient, as it does not
immediately lead to better solutions. However, evolutionary algorithms require
diversity and should allow the accumulation of inactive building blocks that
can play an important role at a later point. The equality graph is a data
structure capable of compactly storing expressions and their equivalent forms,
allowing efficient verification of whether an expression has been visited in
any of its stored equivalent forms. We exploit the e-graph to adapt the
subtree operators to reduce the chances of revisiting expressions. Our
adaptation, called eggp, stores every visited expression in the e-graph,
allowing us to filter out from the available selection of subtrees all the
combinations that would create already visited expressions. Results show that,
for small expressions, this approach improves the performance of a simple GP
algorithm to compete with PySR and Operon without increasing computational
cost. As a highlight, eggp was capable of reliably delivering short and at the
same time accurate models for a selected set of benchmarks from SRBench and a
set of real-world datasets.
| [
{
"version": "v1",
"created": "Wed, 29 Jan 2025 18:49:34 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 16:48:10 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"de Franca",
"Fabricio Olivetti",
""
],
[
"Kronberger",
"Gabriel",
""
]
] | TITLE: Improving Genetic Programming for Symbolic Regression with Equality
Graphs
ABSTRACT: The search for symbolic regression models with genetic programming (GP) has a
tendency to revisit expressions in their original or equivalent forms.
Repeatedly evaluating equivalent expressions is inefficient, as it does not
immediately lead to better solutions. However, evolutionary algorithms require
diversity and should allow the accumulation of inactive building blocks that
can play an important role at a later point. The equality graph is a data
structure capable of compactly storing expressions and their equivalent forms,
allowing efficient verification of whether an expression has been visited in
any of its stored equivalent forms. We exploit the e-graph to adapt the
subtree operators to reduce the chances of revisiting expressions. Our
adaptation, called eggp, stores every visited expression in the e-graph,
allowing us to filter out from the available selection of subtrees all the
combinations that would create already visited expressions. Results show that,
for small expressions, this approach improves the performance of a simple GP
algorithm to compete with PySR and Operon without increasing computational
cost. As a highlight, eggp was capable of reliably delivering short and at the
same time accurate models for a selected set of benchmarks from SRBench and a
set of real-world datasets.
|
2502.03251 | Li Sun | Li Sun, Zhenhao Huang, Suyang Zhou, Qiqi Wan, Hao Peng, Philip Yu | RiemannGFM: Learning a Graph Foundation Model from Riemannian Geometry | Accepted by WWW 2025 (Oral) | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The foundation model has heralded a new era in artificial intelligence,
pretraining a single model to offer cross-domain transferability on different
datasets. Graph neural networks excel at learning graph data, the omnipresent
non-Euclidean structure, but often lack the generalization capacity. Hence,
graph foundation models are drawing increasing attention, and recent efforts have
been made to leverage Large Language Models. On the one hand, existing studies
primarily focus on text-attributed graphs, while a wider range of real graphs
do not contain rich textual attributes. On the other hand, the sequential
graph description tailored for the Large Language Model neglects the structural
complexity, which is a predominant characteristic of the graph. Such
limitations motivate an important question: Can we go beyond Large Language
Models, and pretrain a universal model to learn the structural knowledge for
any graph? The answer in the language or vision domain is a shared vocabulary.
We observe that shared substructures also exist in the graph domain, thereby
opening a new opportunity for a graph foundation model with a structural
vocabulary. The key innovation is the discovery of a simple yet
effective structural vocabulary of trees and cycles, and we explore its
inherent connection to Riemannian geometry. Herein, we present a universal
pretraining model, RiemannGFM. Concretely, we first construct a novel product
bundle to incorporate the diverse geometries of the vocabulary. Then, on this
constructed space, we stack Riemannian layers where the structural vocabulary,
regardless of the specific graph, is learned in a Riemannian manifold, offering
cross-domain transferability. Extensive experiments show the effectiveness of
RiemannGFM on a diversity of real graphs.
| [
{
"version": "v1",
"created": "Wed, 5 Feb 2025 15:06:09 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 07:04:29 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Sun",
"Li",
""
],
[
"Huang",
"Zhenhao",
""
],
[
"Zhou",
"Suyang",
""
],
[
"Wan",
"Qiqi",
""
],
[
"Peng",
"Hao",
""
],
[
"Yu",
"Philip",
""
]
] | TITLE: RiemannGFM: Learning a Graph Foundation Model from Riemannian Geometry
ABSTRACT: The foundation model has heralded a new era in artificial intelligence,
pretraining a single model to offer cross-domain transferability on different
datasets. Graph neural networks excel at learning graph data, the omnipresent
non-Euclidean structure, but often lack the generalization capacity. Hence,
graph foundation models are drawing increasing attention, and recent efforts have
been made to leverage Large Language Models. On the one hand, existing studies
primarily focus on text-attributed graphs, while a wider range of real graphs
do not contain rich textual attributes. On the other hand, the sequential
graph description tailored for the Large Language Model neglects the structural
complexity, which is a predominant characteristic of the graph. Such
limitations motivate an important question: Can we go beyond Large Language
Models, and pretrain a universal model to learn the structural knowledge for
any graph? The answer in the language or vision domain is a shared vocabulary.
We observe that shared substructures also exist in the graph domain, thereby
opening a new opportunity for a graph foundation model with a structural
vocabulary. The key innovation is the discovery of a simple yet
effective structural vocabulary of trees and cycles, and we explore its
inherent connection to Riemannian geometry. Herein, we present a universal
pretraining model, RiemannGFM. Concretely, we first construct a novel product
bundle to incorporate the diverse geometries of the vocabulary. Then, on this
constructed space, we stack Riemannian layers where the structural vocabulary,
regardless of the specific graph, is learned in a Riemannian manifold, offering
cross-domain transferability. Extensive experiments show the effectiveness of
RiemannGFM on a diversity of real graphs.
|
2502.04760 | Rui Wang | Rui Wang | Graph Federated Learning Based Proactive Content Caching in Edge
Computing | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid growth of mobile data traffic and the increasing prevalence of
video streaming, proactive content caching in edge computing has become crucial
for reducing latency and alleviating network congestion. However, traditional
caching strategies such as FIFO, LRU, and LFU fail to effectively predict
future content popularity, while existing proactive caching approaches often
require users to upload data to a central server, raising concerns regarding
privacy and scalability. To address these challenges, this paper proposes a
Graph Federated Learning-based Proactive Content Caching (GFPCC) scheme that
enhances caching efficiency while preserving user privacy. The proposed
approach integrates federated learning and graph neural networks, enabling
users to locally train Light Graph Convolutional Networks (LightGCN) to capture
user-item relationships and predict content popularity. Instead of sharing raw
data, only the trained model parameters are transmitted to the central server,
where a federated averaging algorithm aggregates updates, refines the global
model, and selects the most popular files for proactive caching. Experimental
evaluations on real-world datasets, such as MovieLens, demonstrate that GFPCC
outperforms baseline caching algorithms by achieving higher cache efficiency
through more accurate content popularity predictions. Moreover, the federated
learning framework strengthens privacy protection while maintaining efficient
model training; however, scalability remains a challenge in large-scale
networks with dynamic user preferences.
| [
{
"version": "v1",
"created": "Fri, 7 Feb 2025 08:48:06 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 12:46:45 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Wang",
"Rui",
""
]
] | TITLE: Graph Federated Learning Based Proactive Content Caching in Edge
Computing
ABSTRACT: With the rapid growth of mobile data traffic and the increasing prevalence of
video streaming, proactive content caching in edge computing has become crucial
for reducing latency and alleviating network congestion. However, traditional
caching strategies such as FIFO, LRU, and LFU fail to effectively predict
future content popularity, while existing proactive caching approaches often
require users to upload data to a central server, raising concerns regarding
privacy and scalability. To address these challenges, this paper proposes a
Graph Federated Learning-based Proactive Content Caching (GFPCC) scheme that
enhances caching efficiency while preserving user privacy. The proposed
approach integrates federated learning and graph neural networks, enabling
users to locally train Light Graph Convolutional Networks (LightGCN) to capture
user-item relationships and predict content popularity. Instead of sharing raw
data, only the trained model parameters are transmitted to the central server,
where a federated averaging algorithm aggregates updates, refines the global
model, and selects the most popular files for proactive caching. Experimental
evaluations on real-world datasets, such as MovieLens, demonstrate that GFPCC
outperforms baseline caching algorithms by achieving higher cache efficiency
through more accurate content popularity predictions. Moreover, the federated
learning framework strengthens privacy protection while maintaining efficient
model training; however, scalability remains a challenge in large-scale
networks with dynamic user preferences.
|
2502.07847 | Behraj Khan | Behraj Khan, Rizwan Qureshi, Nouman Muhammad Durrani, Tahir Syed | Confidence-calibrated covariate shift correction for few-shot
classification in Vision-Language Models | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Since the establishment of vision-language foundation models as the new
mainstay in low-shot vision classification tasks, the question of domain
generalization arising from insufficient target data is assuming more
importance. This scarcity challenge induces sampling bias and amplifies model
sensitivity to variations and shifts in data distributions. While fine-tuning
on multiple domains could mitigate such domain generalization issues, it is
resource-intensive and demands diverse data sources.
In this work, we systematically analyze two critical challenges: (1)
covariate shift between the pre-training distribution and the underspecified
target distribution, and (2) confidence misalignment, where predictions on
novel data are overconfident.
To address both challenges simultaneously, we introduce
\textbf{Confidence-Calibrated Covariate Shift Correction (CalShift)} -- a
unified approach that combines a Fisher information penalty to mitigate
covariate shift and a Confidence Misalignment Penalty (CMP) to reduce
overconfidence in misclassified examples.
Experimental evaluations across various vision and covariate shift benchmarks
demonstrate that CalShift significantly improves model calibration, achieving
up to a 5.82\% reduction in Expected Calibration Error (ECE). Furthermore,
CalShift enhances robustness, improving accuracy by 3.5\% on challenging
datasets impacted by covariate shifts.
Our results highlight CalShift as a promising strategy for building robust
and reliable low-shot vision-language systems for real-world applications.
| [
{
"version": "v1",
"created": "Tue, 11 Feb 2025 10:10:15 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 07:54:30 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Khan",
"Behraj",
""
],
[
"Qureshi",
"Rizwan",
""
],
[
"Durrani",
"Nouman Muhammad",
""
],
[
"Syed",
"Tahir",
""
]
] | TITLE: Confidence-calibrated covariate shift correction for few-shot
classification in Vision-Language Models
ABSTRACT: Since the establishment of vision-language foundation models as the new
mainstay in low-shot vision classification tasks, the question of domain
generalization arising from insufficient target data is assuming more
importance. This scarcity challenge induces sampling bias and amplifies model
sensitivity to variations and shifts in data distributions. While fine-tuning
on multiple domains could mitigate such domain generalization issues, it is
resource-intensive and demands diverse data sources.
In this work, we systematically analyze two critical challenges: (1)
covariate shift between the pre-training distribution and the underspecified
target distribution, and (2) confidence misalignment, where predictions on
novel data are overconfident.
To address both challenges simultaneously, we introduce
\textbf{Confidence-Calibrated Covariate Shift Correction (CalShift)} -- a
unified approach that combines a Fisher information penalty to mitigate
covariate shift and a Confidence Misalignment Penalty (CMP) to reduce
overconfidence in misclassified examples.
Experimental evaluations across various vision and covariate shift benchmarks
demonstrate that CalShift significantly improves model calibration, achieving
up to a 5.82\% reduction in Expected Calibration Error (ECE). Furthermore,
CalShift enhances robustness, improving accuracy by 3.5\% on challenging
datasets impacted by covariate shifts.
Our results highlight CalShift as a promising strategy for building robust
and reliable low-shot vision-language systems for real-world applications.
|
2502.11007 | Liangqi Yuan | Liangqi Yuan and Dong-Jun Han and Shiqiang Wang and Christopher G.
Brinton | Local-Cloud Inference Offloading for LLMs in Multi-Modal, Multi-Task,
Multi-Dialogue Settings | null | null | null | null | cs.LG cs.DC | http://creativecommons.org/licenses/by/4.0/ | Compared to traditional machine learning models, recent large language models
(LLMs) can exhibit multi-task-solving capabilities through multiple dialogues
and multi-modal data sources. These unique characteristics of LLMs, together
with their large model size, make their deployment more challenging.
Specifically, (i) deploying LLMs on local devices faces computational, memory,
and energy resource issues, while (ii) deploying them in the cloud cannot
guarantee real-time service and incurs communication/usage costs. In this
paper, we design TMO, a local-cloud LLM inference system with Three-M
Offloading: Multi-modal, Multi-task, and Multi-dialogue. TMO incorporates (i) a
lightweight local LLM that can process simple tasks at high speed and (ii) a
large-scale cloud LLM that can handle multi-modal data sources. We develop a
resource-constrained reinforcement learning (RCRL) strategy for TMO that
optimizes the inference location (i.e., local vs. cloud) and multi-modal data
sources to use for each task/dialogue, aiming to maximize the long-term reward
(response quality, latency, and usage cost) while adhering to resource
constraints. We also contribute M4A1, a new dataset we curated that contains
reward and cost metrics across multiple modality, task, dialogue, and LLM
configurations, enabling evaluation of offloading decisions. We demonstrate the
effectiveness of TMO compared to several exploration-decision and LLM-as-Agent
baselines, showing significant improvements in latency, cost, and response
quality.
| [
{
"version": "v1",
"created": "Sun, 16 Feb 2025 06:18:28 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 18:49:28 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Yuan",
"Liangqi",
""
],
[
"Han",
"Dong-Jun",
""
],
[
"Wang",
"Shiqiang",
""
],
[
"Brinton",
"Christopher G.",
""
]
] | TITLE: Local-Cloud Inference Offloading for LLMs in Multi-Modal, Multi-Task,
Multi-Dialogue Settings
ABSTRACT: Compared to traditional machine learning models, recent large language models
(LLMs) can exhibit multi-task-solving capabilities through multiple dialogues
and multi-modal data sources. These unique characteristics of LLMs, together
with their large model size, make their deployment more challenging.
Specifically, (i) deploying LLMs on local devices faces computational, memory,
and energy resource issues, while (ii) deploying them in the cloud cannot
guarantee real-time service and incurs communication/usage costs. In this
paper, we design TMO, a local-cloud LLM inference system with Three-M
Offloading: Multi-modal, Multi-task, and Multi-dialogue. TMO incorporates (i) a
lightweight local LLM that can process simple tasks at high speed and (ii) a
large-scale cloud LLM that can handle multi-modal data sources. We develop a
resource-constrained reinforcement learning (RCRL) strategy for TMO that
optimizes the inference location (i.e., local vs. cloud) and multi-modal data
sources to use for each task/dialogue, aiming to maximize the long-term reward
(response quality, latency, and usage cost) while adhering to resource
constraints. We also contribute M4A1, a new dataset we curated that contains
reward and cost metrics across multiple modality, task, dialogue, and LLM
configurations, enabling evaluation of offloading decisions. We demonstrate the
effectiveness of TMO compared to several exploration-decision and LLM-as-Agent
baselines, showing significant improvements in latency, cost, and response
quality.
|
2502.14270 | Rajeshwari Mistri | Nachiket Kapure, Harsh Joshi, Rajeshwari Mistri, Parul Kumari, Manasi
Mali, Seema Purohit, Neha Sharma, Mrityunjoy Panday, Chittaranjan S. Yajnik | Predicting Fetal Birthweight from High Dimensional Data using Advanced
Machine Learning | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Birth weight serves as a fundamental indicator of neonatal health, closely
linked to both early medical interventions and long-term developmental risks.
Traditional predictive models, often constrained by limited feature selection
and incomplete datasets, struggle to capture the complex maternal and
fetal interactions in diverse clinical settings. This research explores machine
learning to address these limitations, utilizing a structured methodology that
integrates advanced imputation strategies, supervised feature selection
techniques, and predictive modeling. Given the constraints of the dataset, the
research strengthens the role of data preprocessing in improving the model
performance. Among the various methodologies explored, tree-based feature
selection methods demonstrated superior capability in identifying the most
relevant predictors, while ensemble-based regression models proved highly
effective in capturing non-linear relationships and complex maternal-fetal
interactions within the data. Beyond model performance, the study highlights
the clinical significance of key physiological determinants, offering insights
into maternal and fetal health factors that influence birth weight, insights
that extend beyond statistical modeling. By bridging computational
intelligence with perinatal research, this work underscores the transformative
role of machine learning in enhancing predictive accuracy, refining risk
assessment and informing data-driven decision-making in maternal and neonatal
care. Keywords: Birth weight prediction, maternal-fetal health, MICE, BART,
Gradient Boosting, neonatal outcomes, Clinipredictive.
| [
{
"version": "v1",
"created": "Thu, 20 Feb 2025 05:17:39 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 07:54:17 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Kapure",
"Nachiket",
""
],
[
"Joshi",
"Harsh",
""
],
[
"Mistri",
"Rajeshwari",
""
],
[
"Kumari",
"Parul",
""
],
[
"Mali",
"Manasi",
""
],
[
"Purohit",
"Seema",
""
],
[
"Sharma",
"Neha",
""
],
[
"Panday",
"Mrityunjoy",
""
],
[
"Yajnik",
"Chittaranjan S.",
""
]
] | TITLE: Predicting Fetal Birthweight from High Dimensional Data using Advanced
Machine Learning
ABSTRACT: Birth weight serves as a fundamental indicator of neonatal health, closely
linked to both early medical interventions and long-term developmental risks.
Traditional predictive models, often constrained by limited feature selection
and incomplete datasets, struggle to achieve accurate predictions, overlooking
complex maternal and fetal interactions in diverse clinical settings. This
research explores machine
learning to address these limitations, utilizing a structured methodology that
integrates advanced imputation strategies, supervised feature selection
techniques, and predictive modeling. Given the constraints of the dataset, the
research underscores the role of data preprocessing in improving model
performance. Among the various methodologies explored, tree-based feature
selection methods demonstrated superior capability in identifying the most
relevant predictors, while ensemble-based regression models proved highly
effective in capturing non-linear relationships and complex maternal-fetal
interactions within the data. Beyond model performance, the study highlights
the clinical significance of key physiological determinants, offering insights
into maternal and fetal health factors that influence birth weight, insights
that extend beyond statistical modeling. By bridging computational
intelligence with perinatal research, this work underscores the transformative
role of machine learning in enhancing predictive accuracy, refining risk
assessment and informing data-driven decision-making in maternal and neonatal
care. Keywords: Birth weight prediction, maternal-fetal health, MICE, BART,
Gradient Boosting, neonatal outcomes, Clinipredictive.
|
2502.19363 | Ru Peng | Ru Peng, Kexin Yang, Yawen Zeng, Junyang Lin, Dayiheng Liu, Junbo Zhao | DataMan: Data Manager for Pre-training Large Language Models | ICLR2025 paper | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The performance emergence of large language models (LLMs) driven by data
scaling laws makes the selection of pre-training data increasingly important.
However, existing methods rely on limited heuristics and human intuition,
lacking comprehensive and clear guidelines. To address this, we are inspired by
``reverse thinking'' -- prompting LLMs to self-identify which criteria benefit
their performance. As their pre-training capabilities are related to perplexity
(PPL), we derive 14 quality criteria from the causes of text perplexity
anomalies and introduce 15 common application domains to support domain mixing.
In this paper, we train a Data Manager (DataMan) to learn quality ratings and
domain recognition from pointwise rating, and use it to annotate a 447B token
pre-training corpus with 14 quality ratings and domain type. Our experiments
validate our approach, using DataMan to select 30B tokens to train a
1.3B-parameter language model, demonstrating significant improvements in
in-context learning (ICL), perplexity, and instruction-following ability over
the state-of-the-art baseline. The best-performing model, based on the Overall
Score l=5, surpasses a model trained with 50% more data using uniform sampling.
We continue pre-training with high-rated, domain-specific data annotated by
DataMan to enhance domain-specific ICL performance and thus verify DataMan's
domain mixing ability. Our findings emphasize the importance of quality
ranking, the complementary nature of quality criteria, and their low
correlation with perplexity, analyzing misalignment between PPL and ICL
performance. We also thoroughly analyzed our pre-training dataset, examining
its composition, the distribution of quality ratings, and the original document
sources.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 18:01:19 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 15:42:07 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Apr 2025 03:21:10 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Peng",
"Ru",
""
],
[
"Yang",
"Kexin",
""
],
[
"Zeng",
"Yawen",
""
],
[
"Lin",
"Junyang",
""
],
[
"Liu",
"Dayiheng",
""
],
[
"Zhao",
"Junbo",
""
]
] | TITLE: DataMan: Data Manager for Pre-training Large Language Models
ABSTRACT: The performance emergence of large language models (LLMs) driven by data
scaling laws makes the selection of pre-training data increasingly important.
However, existing methods rely on limited heuristics and human intuition,
lacking comprehensive and clear guidelines. To address this, we are inspired by
``reverse thinking'' -- prompting LLMs to self-identify which criteria benefit
their performance. As their pre-training capabilities are related to perplexity
(PPL), we derive 14 quality criteria from the causes of text perplexity
anomalies and introduce 15 common application domains to support domain mixing.
In this paper, we train a Data Manager (DataMan) to learn quality ratings and
domain recognition from pointwise rating, and use it to annotate a 447B token
pre-training corpus with 14 quality ratings and domain type. Our experiments
validate our approach, using DataMan to select 30B tokens to train a
1.3B-parameter language model, demonstrating significant improvements in
in-context learning (ICL), perplexity, and instruction-following ability over
the state-of-the-art baseline. The best-performing model, based on the Overall
Score l=5, surpasses a model trained with 50% more data using uniform sampling.
We continue pre-training with high-rated, domain-specific data annotated by
DataMan to enhance domain-specific ICL performance and thus verify DataMan's
domain mixing ability. Our findings emphasize the importance of quality
ranking, the complementary nature of quality criteria, and their low
correlation with perplexity, analyzing misalignment between PPL and ICL
performance. We also thoroughly analyzed our pre-training dataset, examining
its composition, the distribution of quality ratings, and the original document
sources.
|
2502.19679 | Linzhuo Li | Linzhuo li | Old Experience Helps: Leveraging Survey Methodology to Improve AI Text
Annotation Reliability in Social Sciences | 7 figures | null | null | null | cs.DL cs.HC | http://creativecommons.org/licenses/by/4.0/ | This paper introduces a framework for assessing the reliability of Large
Language Model (LLM) text annotations in social science research by adapting
established survey methodology principles. Drawing parallels between survey
respondent behavior and LLM outputs, the study implements three key
interventions: option randomization, position randomization, and reverse
validation. While traditional accuracy metrics may mask model instabilities,
particularly in edge cases, the framework provides a more comprehensive
reliability assessment. Using the F1000 dataset in biomedical science and three
sizes of Llama models (8B, 70B, and 405B parameters), the paper demonstrates
that these survey-inspired interventions can effectively identify unreliable
annotations that might otherwise go undetected through accuracy metrics alone.
The results show that 5-25% of LLM annotations change under these
interventions, with larger models exhibiting greater stability. Notably, for
rare categories approximately 50% of "correct" annotations demonstrate low
reliability when subjected to this framework. The paper then introduces an
information-theoretic reliability score (R-score) based on Kullback-Leibler
divergence that quantifies annotation confidence and distinguishes between
random guessing and meaningful annotations at the case level. This approach
complements existing expert validation methods by providing a scalable way to
assess internal annotation reliability and offers practical guidance for prompt
design and downstream analysis.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 01:42:10 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 03:06:47 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Apr 2025 06:48:04 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"li",
"Linzhuo",
""
]
] | TITLE: Old Experience Helps: Leveraging Survey Methodology to Improve AI Text
Annotation Reliability in Social Sciences
ABSTRACT: This paper introduces a framework for assessing the reliability of Large
Language Model (LLM) text annotations in social science research by adapting
established survey methodology principles. Drawing parallels between survey
respondent behavior and LLM outputs, the study implements three key
interventions: option randomization, position randomization, and reverse
validation. While traditional accuracy metrics may mask model instabilities,
particularly in edge cases, the framework provides a more comprehensive
reliability assessment. Using the F1000 dataset in biomedical science and three
sizes of Llama models (8B, 70B, and 405B parameters), the paper demonstrates
that these survey-inspired interventions can effectively identify unreliable
annotations that might otherwise go undetected through accuracy metrics alone.
The results show that 5-25% of LLM annotations change under these
interventions, with larger models exhibiting greater stability. Notably, for
rare categories approximately 50% of "correct" annotations demonstrate low
reliability when subjected to this framework. The paper then introduces an
information-theoretic reliability score (R-score) based on Kullback-Leibler
divergence that quantifies annotation confidence and distinguishes between
random guessing and meaningful annotations at the case level. This approach
complements existing expert validation methods by providing a scalable way to
assess internal annotation reliability and offers practical guidance for prompt
design and downstream analysis.
|
2502.21024 | Abdelrahman E.M. Abdallah | Abdelrahman Abdallah, Bhawna Piryani, Jonas Wallat, Avishek Anand,
Adam Jatowt | TempRetriever: Fusion-based Temporal Dense Passage Retrieval for
Time-Sensitive Questions | null | null | null | null | cs.IR cs.CL | http://creativecommons.org/licenses/by/4.0/ | Temporal awareness is crucial in many information retrieval tasks,
particularly in scenarios where the relevance of documents depends on their
alignment with the query's temporal context. Traditional approaches such as
BM25 and Dense Passage Retrieval (DPR) focus on lexical or semantic similarity
but tend to neglect the temporal alignment between queries and documents, which
is essential for time-sensitive tasks like temporal question answering (TQA).
We propose TempRetriever, a novel extension of DPR that explicitly incorporates
temporal information by embedding both the query date and document timestamp
into the retrieval process. This allows retrieving passages that are not only
contextually relevant but also aligned with the temporal intent of queries. We
evaluate TempRetriever on two large-scale datasets, ArchivalQA and
ChroniclingAmericaQA, demonstrating its superiority over baseline retrieval
models across multiple metrics. TempRetriever achieves a 6.63\% improvement in
Top-1 retrieval accuracy and a 3.79\% improvement in NDCG@10 compared to the
standard DPR on ArchivalQA. Similarly, for ChroniclingAmericaQA, TempRetriever
exhibits a 9.56\% improvement in Top-1 retrieval accuracy and a 4.68\%
improvement in NDCG@10. We also propose a novel, time-based negative sampling
strategy which further enhances retrieval performance by addressing temporal
misalignment during training. Our results underline the importance of temporal
aspects in dense retrieval systems and establish a new benchmark for time-aware
passage retrieval.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 13:06:25 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 13:11:58 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Abdallah",
"Abdelrahman",
""
],
[
"Piryani",
"Bhawna",
""
],
[
"Wallat",
"Jonas",
""
],
[
"Anand",
"Avishek",
""
],
[
"Jatowt",
"Adam",
""
]
] | TITLE: TempRetriever: Fusion-based Temporal Dense Passage Retrieval for
Time-Sensitive Questions
ABSTRACT: Temporal awareness is crucial in many information retrieval tasks,
particularly in scenarios where the relevance of documents depends on their
alignment with the query's temporal context. Traditional approaches such as
BM25 and Dense Passage Retrieval (DPR) focus on lexical or semantic similarity
but tend to neglect the temporal alignment between queries and documents, which
is essential for time-sensitive tasks like temporal question answering (TQA).
We propose TempRetriever, a novel extension of DPR that explicitly incorporates
temporal information by embedding both the query date and document timestamp
into the retrieval process. This allows retrieving passages that are not only
contextually relevant but also aligned with the temporal intent of queries. We
evaluate TempRetriever on two large-scale datasets, ArchivalQA and
ChroniclingAmericaQA, demonstrating its superiority over baseline retrieval
models across multiple metrics. TempRetriever achieves a 6.63\% improvement in
Top-1 retrieval accuracy and a 3.79\% improvement in NDCG@10 compared to the
standard DPR on ArchivalQA. Similarly, for ChroniclingAmericaQA, TempRetriever
exhibits a 9.56\% improvement in Top-1 retrieval accuracy and a 4.68\%
improvement in NDCG@10. We also propose a novel, time-based negative sampling
strategy which further enhances retrieval performance by addressing temporal
misalignment during training. Our results underline the importance of temporal
aspects in dense retrieval systems and establish a new benchmark for time-aware
passage retrieval.
|
2503.05050 | Melkamu Mersha | Melkamu Abay Mersha, Mesay Gemeda Yigezu, Hassan Shakil, Ali K.
AlShami, Sanghyun Byun, Jugal Kalita | A Unified Framework with Novel Metrics for Evaluating the Effectiveness
of XAI Techniques in LLMs | arXiv admin note: substantial text overlap with arXiv:2501.15374 | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The increasing complexity of LLMs presents significant challenges to their
transparency and interpretability, necessitating the use of eXplainable AI
(XAI) techniques to enhance trustworthiness and usability. This study
introduces a comprehensive evaluation framework with four novel metrics for
assessing the effectiveness of five XAI techniques across five LLMs and two
downstream tasks. We apply this framework to evaluate several XAI techniques:
LIME, SHAP, Integrated Gradients, Layer-wise Relevance Propagation (LRP), and
Attention Mechanism Visualization (AMV) using the IMDB Movie Reviews and Tweet
Sentiment Extraction datasets. The evaluation focuses on four key metrics:
Human-reasoning Agreement (HA), Robustness, Consistency, and Contrastivity. Our
results show that LIME consistently achieves high scores across multiple LLMs
and evaluation metrics, while AMV demonstrates superior Robustness and
near-perfect Consistency. LRP excels in Contrastivity, particularly with more
complex models. Our findings provide valuable insights into the strengths and
limitations of different XAI methods, offering guidance for developing and
selecting appropriate XAI techniques for LLMs.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 23:59:50 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 20:37:11 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Mersha",
"Melkamu Abay",
""
],
[
"Yigezu",
"Mesay Gemeda",
""
],
[
"Shakil",
"Hassan",
""
],
[
"AlShami",
"Ali K.",
""
],
[
"Byun",
"Sanghyun",
""
],
[
"Kalita",
"Jugal",
""
]
] | TITLE: A Unified Framework with Novel Metrics for Evaluating the Effectiveness
of XAI Techniques in LLMs
ABSTRACT: The increasing complexity of LLMs presents significant challenges to their
transparency and interpretability, necessitating the use of eXplainable AI
(XAI) techniques to enhance trustworthiness and usability. This study
introduces a comprehensive evaluation framework with four novel metrics for
assessing the effectiveness of five XAI techniques across five LLMs and two
downstream tasks. We apply this framework to evaluate several XAI techniques:
LIME, SHAP, Integrated Gradients, Layer-wise Relevance Propagation (LRP), and
Attention Mechanism Visualization (AMV) using the IMDB Movie Reviews and Tweet
Sentiment Extraction datasets. The evaluation focuses on four key metrics:
Human-reasoning Agreement (HA), Robustness, Consistency, and Contrastivity. Our
results show that LIME consistently achieves high scores across multiple LLMs
and evaluation metrics, while AMV demonstrates superior Robustness and
near-perfect Consistency. LRP excels in Contrastivity, particularly with more
complex models. Our findings provide valuable insights into the strengths and
limitations of different XAI methods, offering guidance for developing and
selecting appropriate XAI techniques for LLMs.
|
2503.05725 | Kim Duc Tran | T.Q.D. Pham, K.D. Tran, Khanh T. P. Nguyen, X.V. Tran, L. K\"oehl, and
K.P. Tran | A new framework for prognostics in decentralized industries: Enhancing
fairness, security, and transparency through Blockchain and Federated
Learning | null | null | null | null | cs.CY cs.AI | http://creativecommons.org/licenses/by/4.0/ | As global industries transition towards Industry 5.0, predictive maintenance
(PM) remains crucial for cost-effective operations, resilience, and minimizing
downtime in increasingly smart manufacturing environments. In this chapter, we
explore how the integration of Federated Learning (FL) and blockchain (BC)
technologies enhances the prediction of machinery's Remaining Useful Life (RUL)
within decentralized and human-centric industrial ecosystems. Traditional
centralized data approaches raise concerns over privacy, security, and
scalability, especially as Artificial Intelligence (AI)-driven smart
manufacturing becomes more prevalent. This chapter leverages FL to enable
localized model training across multiple sites while utilizing BC to ensure
trust, transparency, and data integrity across the network. This BC-integrated
FL framework optimizes RUL predictions, enhances data privacy and security,
establishes transparency, and promotes collaboration in decentralized
manufacturing. It addresses key challenges such as maintaining privacy and
security, ensuring transparency and fairness, and incentivizing participation
in decentralized networks. Experimental validation using the NASA CMAPSS
dataset demonstrates the model's effectiveness in real-world scenarios, and we
extend our findings to the broader research community through open-source code
on GitHub, inviting collaborative development to drive innovation in Industry
5.0.
| [
{
"version": "v1",
"created": "Mon, 17 Feb 2025 20:28:40 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 16:53:33 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Pham",
"T. Q. D.",
""
],
[
"Tran",
"K. D.",
""
],
[
"Nguyen",
"Khanh T. P.",
""
],
[
"Tran",
"X. V.",
""
],
[
"Köehl",
"L.",
""
],
[
"Tran",
"K. P.",
""
]
] | TITLE: A new framework for prognostics in decentralized industries: Enhancing
fairness, security, and transparency through Blockchain and Federated
Learning
ABSTRACT: As global industries transition towards Industry 5.0, predictive
maintenance (PM) remains crucial for cost-effective operations, resilience, and
minimizing downtime in increasingly smart manufacturing environments. In this
chapter, we explore how the integration of Federated Learning (FL) and
blockchain (BC) technologies enhances the prediction of machinery's Remaining
Useful Life (RUL) within decentralized and human-centric industrial ecosystems.
Traditional centralized data approaches raise concerns over privacy, security,
and scalability, especially as Artificial Intelligence (AI)-driven smart
manufacturing becomes more prevalent. This chapter leverages FL to enable
localized model training across multiple sites while utilizing BC to ensure
trust, transparency, and data integrity across the network. This BC-integrated
FL framework optimizes RUL predictions, enhances data privacy and security,
establishes transparency, and promotes collaboration in decentralized
manufacturing. It addresses key challenges such as maintaining privacy and
security, ensuring transparency and fairness, and incentivizing participation
in decentralized networks. Experimental validation using the NASA CMAPSS
dataset demonstrates the model's effectiveness in real-world scenarios, and we
extend our findings to the broader research community through open-source code
on GitHub, inviting collaborative development to drive innovation in Industry
5.0.
|
2503.07378 | Yusuke Hashimoto | Yusuke Hashimoto, Xue Jia, Hao Li, Takaaki Tomai | A Materials Map Integrating Experimental and Computational Data via
Graph-Based Machine Learning for Enhanced Materials Discovery | null | null | null | null | cond-mat.mtrl-sci cs.LG | http://creativecommons.org/licenses/by/4.0/ | Materials informatics (MI), emerging from the integration of materials
science and data science, is expected to significantly accelerate material
development and discovery. The data used in MI are derived from both
computational and experimental studies; however, their integration remains
challenging. In our previous study, we reported the integration of these
datasets by applying a machine learning model that is trained on the
experimental dataset to the compositional data stored in the computational
database. In this study, we use the obtained datasets to construct materials
maps, which visualize the relationships between material properties and
structural features, aiming to support experimental researchers. The materials
map is constructed using the MatDeepLearn (MDL) framework, which implements
materials property prediction using graph-based representations of material
structure and deep learning modeling. Through statistical analysis, we find
that the MDL framework using the message passing neural network (MPNN)
architecture efficiently extracts features reflecting the structural complexity
of materials. Moreover, we find that this advantage does not necessarily
translate into improved accuracy in the prediction of material properties. We
attribute this unexpected outcome to the high learning performance inherent in
MPNN, which can contribute to the structuring of data points within the
materials map.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 14:31:34 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 06:31:52 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Mar 2025 10:04:14 GMT"
},
{
"version": "v4",
"created": "Tue, 18 Mar 2025 04:43:10 GMT"
},
{
"version": "v5",
"created": "Tue, 8 Apr 2025 11:19:16 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Hashimoto",
"Yusuke",
""
],
[
"Jia",
"Xue",
""
],
[
"Li",
"Hao",
""
],
[
"Tomai",
"Takaaki",
""
]
] | TITLE: A Materials Map Integrating Experimental and Computational Data via
Graph-Based Machine Learning for Enhanced Materials Discovery
ABSTRACT: Materials informatics (MI), emerging from the integration of materials
science and data science, is expected to significantly accelerate material
development and discovery. The data used in MI are derived from both
computational and experimental studies; however, their integration remains
challenging. In our previous study, we reported the integration of these
datasets by applying a machine learning model that is trained on the
experimental dataset to the compositional data stored in the computational
database. In this study, we use the obtained datasets to construct materials
maps, which visualize the relationships between material properties and
structural features, aiming to support experimental researchers. The materials
map is constructed using the MatDeepLearn (MDL) framework, which implements
materials property prediction using graph-based representations of material
structure and deep learning modeling. Through statistical analysis, we find
that the MDL framework using the message passing neural network (MPNN)
architecture efficiently extracts features reflecting the structural complexity
of materials. Moreover, we find that this advantage does not necessarily
translate into improved accuracy in the prediction of material properties. We
attribute this unexpected outcome to the high learning performance inherent in
MPNN, which can contribute to the structuring of data points within the
materials map.
|
2503.08111 | Jianhui Wang | Jianhui Wang, Zhifei Yang, Yangfan He, Huixiong Zhang, Yuxuan Chen,
Jingwei Huang | MaRI: Material Retrieval Integration across Domains | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate material retrieval is critical for creating realistic 3D assets.
Existing methods rely on datasets that capture shape-invariant and
lighting-varied representations of materials, which are scarce and face
challenges due to limited diversity and inadequate real-world generalization.
Most current approaches adopt traditional image search techniques. They fall
short in capturing the unique properties of material spaces, leading to
suboptimal performance in retrieval tasks. Addressing these challenges, we
introduce MaRI, a framework designed to bridge the feature space gap between
synthetic and real-world materials. MaRI constructs a shared embedding space
that harmonizes visual and material attributes through a contrastive learning
strategy by jointly training an image and a material encoder, bringing similar
materials and images closer while separating dissimilar pairs within the
feature space. To support this, we construct a comprehensive dataset comprising
high-quality synthetic materials rendered with controlled shape variations and
diverse lighting conditions, along with real-world materials processed and
standardized using material transfer techniques. Extensive experiments
demonstrate the superior performance, accuracy, and generalization capabilities
of MaRI across diverse and complex material retrieval tasks, outperforming
existing methods.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 07:23:11 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 07:30:21 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Apr 2025 08:53:57 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Wang",
"Jianhui",
""
],
[
"Yang",
"Zhifei",
""
],
[
"He",
"Yangfan",
""
],
[
"Zhang",
"Huixiong",
""
],
[
"Chen",
"Yuxuan",
""
],
[
"Huang",
"Jingwei",
""
]
] | TITLE: MaRI: Material Retrieval Integration across Domains
ABSTRACT: Accurate material retrieval is critical for creating realistic 3D assets.
Existing methods rely on datasets that capture shape-invariant and
lighting-varied representations of materials, which are scarce and face
challenges due to limited diversity and inadequate real-world generalization.
Most current approaches adopt traditional image search techniques. They fall
short in capturing the unique properties of material spaces, leading to
suboptimal performance in retrieval tasks. Addressing these challenges, we
introduce MaRI, a framework designed to bridge the feature space gap between
synthetic and real-world materials. MaRI constructs a shared embedding space
that harmonizes visual and material attributes through a contrastive learning
strategy by jointly training an image and a material encoder, bringing similar
materials and images closer while separating dissimilar pairs within the
feature space. To support this, we construct a comprehensive dataset comprising
high-quality synthetic materials rendered with controlled shape variations and
diverse lighting conditions, along with real-world materials processed and
standardized using material transfer techniques. Extensive experiments
demonstrate the superior performance, accuracy, and generalization capabilities
of MaRI across diverse and complex material retrieval tasks, outperforming
existing methods.
|
2503.09516 | Bowen Jin | Bowen Jin, Hansi Zeng, Zhenrui Yue, Jinsung Yoon, Sercan Arik, Dong
Wang, Hamed Zamani, Jiawei Han | Search-R1: Training LLMs to Reason and Leverage Search Engines with
Reinforcement Learning | 31 pages | null | null | null | cs.CL cs.AI cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Efficiently acquiring external knowledge and up-to-date information is
essential for effective reasoning and text generation in large language models
(LLMs). Prompting advanced LLMs with reasoning capabilities to use search
engines during inference is often suboptimal, as the LLM might not fully
possess the capability to interact optimally with the search engine.
This paper introduces Search-R1, an extension of reinforcement learning (RL)
for reasoning frameworks where the LLM learns to autonomously generate
(multiple) search queries during step-by-step reasoning with real-time
retrieval. Search-R1 optimizes LLM reasoning trajectories with multi-turn
search interactions, leveraging retrieved token masking for stable RL training
and a simple outcome-based reward function. Experiments on seven
question-answering datasets show that Search-R1 improves performance by 41%
(Qwen2.5-7B) and 20% (Qwen2.5-3B) over various RAG baselines under the same
setting. This paper further provides empirical insights into RL optimization
methods, LLM choices, and response length dynamics in retrieval-augmented
reasoning. The code and model checkpoints are available at
https://github.com/PeterGriffinJin/Search-R1.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 16:26:39 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 21:40:12 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Apr 2025 14:03:26 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Jin",
"Bowen",
""
],
[
"Zeng",
"Hansi",
""
],
[
"Yue",
"Zhenrui",
""
],
[
"Yoon",
"Jinsung",
""
],
[
"Arik",
"Sercan",
""
],
[
"Wang",
"Dong",
""
],
[
"Zamani",
"Hamed",
""
],
[
"Han",
"Jiawei",
""
]
] | TITLE: Search-R1: Training LLMs to Reason and Leverage Search Engines with
Reinforcement Learning
ABSTRACT: Efficiently acquiring external knowledge and up-to-date information is
essential for effective reasoning and text generation in large language models
(LLMs). Prompting advanced LLMs with reasoning capabilities to use search
engines during inference is often suboptimal, as the LLM might not fully
possess the capability to interact optimally with the search engine.
This paper introduces Search-R1, an extension of reinforcement learning (RL)
for reasoning frameworks where the LLM learns to autonomously generate
(multiple) search queries during step-by-step reasoning with real-time
retrieval. Search-R1 optimizes LLM reasoning trajectories with multi-turn
search interactions, leveraging retrieved token masking for stable RL training
and a simple outcome-based reward function. Experiments on seven
question-answering datasets show that Search-R1 improves performance by 41%
(Qwen2.5-7B) and 20% (Qwen2.5-3B) over various RAG baselines under the same
setting. This paper further provides empirical insights into RL optimization
methods, LLM choices, and response length dynamics in retrieval-augmented
reasoning. The code and model checkpoints are available at
https://github.com/PeterGriffinJin/Search-R1.
|
2503.12763 | Kewei Sui | Kewei Sui, Anindita Ghosh, Inwoo Hwang, Bing Zhou, Jian Wang, Chuan
Guo | A Survey on Human Interaction Motion Generation | The repository listing relevant papers is accessible at:
https://github.com/soraproducer/Awesome-Human-Interaction-Motion-Generation | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Humans inhabit a world defined by interactions -- with other humans, objects,
and environments. These interactive movements not only convey our relationships
with our surroundings but also demonstrate how we perceive and communicate with
the real world. Therefore, replicating these interaction behaviors in digital
systems has emerged as an important topic for applications in robotics, virtual
reality, and animation. While recent advances in deep generative models and new
datasets have accelerated progress in this field, significant challenges remain
in modeling the intricate human dynamics and their interactions with entities
in the external world. In this survey, we present, for the first time, a
comprehensive overview of the literature in human interaction motion
generation. We begin by establishing foundational concepts essential for
understanding the research background. We then systematically review existing
solutions and datasets across three primary interaction tasks -- human-human,
human-object, and human-scene interactions -- followed by evaluation metrics.
Finally, we discuss open research directions and future opportunities.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 02:55:10 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 23:38:41 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Sui",
"Kewei",
""
],
[
"Ghosh",
"Anindita",
""
],
[
"Hwang",
"Inwoo",
""
],
[
"Zhou",
"Bing",
""
],
[
"Wang",
"Jian",
""
],
[
"Guo",
"Chuan",
""
]
] | TITLE: A Survey on Human Interaction Motion Generation
ABSTRACT: Humans inhabit a world defined by interactions -- with other humans, objects,
and environments. These interactive movements not only convey our relationships
with our surroundings but also demonstrate how we perceive and communicate with
the real world. Therefore, replicating these interaction behaviors in digital
systems has emerged as an important topic for applications in robotics, virtual
reality, and animation. While recent advances in deep generative models and new
datasets have accelerated progress in this field, significant challenges remain
in modeling the intricate human dynamics and their interactions with entities
in the external world. In this survey, we present, for the first time, a
comprehensive overview of the literature in human interaction motion
generation. We begin by establishing foundational concepts essential for
understanding the research background. We then systematically review existing
solutions and datasets across three primary interaction tasks -- human-human,
human-object, and human-scene interactions -- followed by evaluation metrics.
Finally, we discuss open research directions and future opportunities.
|
2503.17486 | Zhengqing Gao | Zhengqing Gao, Dongting Hu, Jia-Wang Bian, Huan Fu, Yan Li, Tongliang
Liu, Mingming Gong, Kun Zhang | ProtoGS: Efficient and High-Quality Rendering with 3D Gaussian
Prototypes | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | 3D Gaussian Splatting (3DGS) has made significant strides in novel view
synthesis but is limited by the substantial number of Gaussian primitives
required, posing challenges for deployment on lightweight devices. Recent
methods address this issue by compressing the storage size of densified
Gaussians, yet fail to preserve rendering quality and efficiency. To overcome
these limitations, we propose ProtoGS to learn Gaussian prototypes to represent
Gaussian primitives, significantly reducing the total Gaussian amount without
sacrificing visual quality. Our method directly uses Gaussian prototypes to
enable efficient rendering and leverages the resulting reconstruction loss to
guide prototype learning. To further optimize memory efficiency during
training, we incorporate structure-from-motion (SfM) points as anchor points to
group Gaussian primitives. Gaussian prototypes are derived within each group by
K-means clustering, and both the anchor points and the prototypes are
optimized jointly. Our experiments on real-world and synthetic datasets prove
that we outperform existing methods, achieving a substantial reduction in the
number of Gaussians, and enabling high rendering speed while maintaining or
even enhancing rendering fidelity.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 18:55:14 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 13:03:48 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Apr 2025 12:19:01 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Gao",
"Zhengqing",
""
],
[
"Hu",
"Dongting",
""
],
[
"Bian",
"Jia-Wang",
""
],
[
"Fu",
"Huan",
""
],
[
"Li",
"Yan",
""
],
[
"Liu",
"Tongliang",
""
],
[
"Gong",
"Mingming",
""
],
[
"Zhang",
"Kun",
""
]
] | TITLE: ProtoGS: Efficient and High-Quality Rendering with 3D Gaussian
Prototypes
ABSTRACT: 3D Gaussian Splatting (3DGS) has made significant strides in novel view
synthesis but is limited by the substantial number of Gaussian primitives
required, posing challenges for deployment on lightweight devices. Recent
methods address this issue by compressing the storage size of densified
Gaussians, yet fail to preserve rendering quality and efficiency. To overcome
these limitations, we propose ProtoGS to learn Gaussian prototypes to represent
Gaussian primitives, significantly reducing the total Gaussian amount without
sacrificing visual quality. Our method directly uses Gaussian prototypes to
enable efficient rendering and leverages the resulting reconstruction loss to
guide prototype learning. To further optimize memory efficiency during
training, we incorporate structure-from-motion (SfM) points as anchor points to
group Gaussian primitives. Gaussian prototypes are derived within each group by
K-means clustering, and both the anchor points and the prototypes are
optimized jointly. Our experiments on real-world and synthetic datasets prove
that we outperform existing methods, achieving a substantial reduction in the
number of Gaussians, and enabling high rendering speed while maintaining or
even enhancing rendering fidelity.
|
2503.22926 | Zikang Yuan | Zikang Yuan, Ruiye Ming, Chengwei Zhao, Yonghao Tan, Pingcheng Dong,
Hongcheng Luo, Yuzhong Jiao, Xin Yang and Kwang-Ting Cheng | SR-LIO++: Efficient LiDAR-Inertial Odometry and Quantized Mapping with
Sweep Reconstruction | 10 pages, 12 figures | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Addressing the inherent low acquisition frequency limitation of 3D LiDAR to
achieve high-frequency output has become a critical research focus in the
LiDAR-Inertial Odometry (LIO) domain. To ensure real-time performance,
frequency-enhanced LIO systems must process each sweep within a significantly
reduced timeframe, which presents substantial challenges for deployment on
low-computational-power platforms. To address these limitations, we introduce
SR-LIO++, an innovative LIO system capable of achieving doubled output
frequency relative to input frequency on resource-constrained hardware
platforms, including the Raspberry Pi 4B. Our system employs a sweep
reconstruction methodology to enhance LiDAR sweep frequency, generating
high-frequency reconstructed sweeps. Building upon this foundation, we propose
a caching mechanism for intermediate results (i.e., surface parameters) of the
most recent segments, effectively minimizing redundant processing of common
segments in adjacent reconstructed sweeps. This method decouples processing
time from the traditionally linear dependence on reconstructed sweep frequency.
Furthermore, we present a quantized map point management scheme based on index
table
mapping, significantly reducing memory usage by converting global 3D point
storage from 64-bit double precision to 8-bit char representation. This method
also converts the computationally intensive Euclidean distance calculations in
nearest neighbor searches from 64-bit double precision to 16-bit short and
32-bit integer formats, significantly reducing both memory and computational
cost. Extensive experimental evaluations across three distinct computing
platforms and four public datasets demonstrate that SR-LIO++ maintains
state-of-the-art accuracy while substantially enhancing efficiency. Notably,
our system successfully achieves 20Hz state output on Raspberry Pi 4B hardware.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 01:06:54 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 05:27:15 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Yuan",
"Zikang",
""
],
[
"Ming",
"Ruiye",
""
],
[
"Zhao",
"Chengwei",
""
],
[
"Tan",
"Yonghao",
""
],
[
"Dong",
"Pingcheng",
""
],
[
"Luo",
"Hongcheng",
""
],
[
"Jiao",
"Yuzhong",
""
],
[
"Yang",
"Xin",
""
],
[
"Cheng",
"Kwang-Ting",
""
]
] | TITLE: SR-LIO++: Efficient LiDAR-Inertial Odometry and Quantized Mapping with
Sweep Reconstruction
ABSTRACT: Addressing the inherent low acquisition frequency limitation of 3D LiDAR to
achieve high-frequency output has become a critical research focus in the
LiDAR-Inertial Odometry (LIO) domain. To ensure real-time performance,
frequency-enhanced LIO systems must process each sweep within a significantly
reduced timeframe, which presents substantial challenges for deployment on
low-computational-power platforms. To address these limitations, we introduce
SR-LIO++, an innovative LIO system capable of achieving doubled output
frequency relative to input frequency on resource-constrained hardware
platforms, including the Raspberry Pi 4B. Our system employs a sweep
reconstruction methodology to enhance LiDAR sweep frequency, generating
high-frequency reconstructed sweeps. Building upon this foundation, we propose
a caching mechanism for intermediate results (i.e., surface parameters) of the
most recent segments, effectively minimizing redundant processing of common
segments in adjacent reconstructed sweeps. This method decouples processing
time from the traditionally linear dependence on reconstructed sweep frequency.
Furthermore, we present a quantized map point management scheme based on index
table
mapping, significantly reducing memory usage by converting global 3D point
storage from 64-bit double precision to 8-bit char representation. This method
also converts the computationally intensive Euclidean distance calculations in
nearest neighbor searches from 64-bit double precision to 16-bit short and
32-bit integer formats, significantly reducing both memory and computational
cost. Extensive experimental evaluations across three distinct computing
platforms and four public datasets demonstrate that SR-LIO++ maintains
state-of-the-art accuracy while substantially enhancing efficiency. Notably,
our system successfully achieves 20Hz state output on Raspberry Pi 4B hardware.
|
2504.00597 | Jirui Qi | Jirui Qi, Raquel Fern\'andez, Arianna Bisazza | On the Consistency of Multilingual Context Utilization in
Retrieval-Augmented Generation | Under review at COLM2025. All codes and data are released at
https://github.com/Betswish/mRAG-Context-Consistency | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Retrieval-augmented generation (RAG) with large language models (LLMs) has
demonstrated strong performance in multilingual question-answering (QA) tasks
by leveraging relevant passages retrieved from corpora. In multilingual RAG
(mRAG), the retrieved passages can be written in languages other than that of
the query entered by the user, making it challenging for LLMs to effectively
utilize the provided information. Recent research suggests that retrieving
passages from multilingual corpora can improve RAG performance, particularly
for low-resource languages. However, the extent to which LLMs can leverage
different kinds of multilingual contexts to generate accurate answers,
*independently from retrieval quality*, remains understudied. In this paper, we
conduct an extensive assessment of LLMs' ability to (i) make consistent use of
a relevant passage regardless of its language, (ii) respond in the expected
language, and (iii) focus on the relevant passage even when multiple
`distracting' passages in different languages are provided in the context. Our
experiments with four LLMs across three QA datasets covering a total of 48
languages reveal a surprising ability of LLMs to extract the relevant
information from out-language passages, but a much weaker ability to formulate
a full answer in the correct language. Our analysis, based on both accuracy and
feature attribution techniques, further shows that distracting passages
negatively impact answer quality regardless of their language. However,
distractors in the query language exert a slightly stronger influence. Taken
together, our findings deepen the understanding of how LLMs utilize context in
mRAG systems, providing directions for future improvements.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 09:55:23 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 12:40:23 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Qi",
"Jirui",
""
],
[
"Fernández",
"Raquel",
""
],
[
"Bisazza",
"Arianna",
""
]
] | TITLE: On the Consistency of Multilingual Context Utilization in
Retrieval-Augmented Generation
ABSTRACT: Retrieval-augmented generation (RAG) with large language models (LLMs) has
demonstrated strong performance in multilingual question-answering (QA) tasks
by leveraging relevant passages retrieved from corpora. In multilingual RAG
(mRAG), the retrieved passages can be written in languages other than that of
the query entered by the user, making it challenging for LLMs to effectively
utilize the provided information. Recent research suggests that retrieving
passages from multilingual corpora can improve RAG performance, particularly
for low-resource languages. However, the extent to which LLMs can leverage
different kinds of multilingual contexts to generate accurate answers,
*independently from retrieval quality*, remains understudied. In this paper, we
conduct an extensive assessment of LLMs' ability to (i) make consistent use of
a relevant passage regardless of its language, (ii) respond in the expected
language, and (iii) focus on the relevant passage even when multiple
`distracting' passages in different languages are provided in the context. Our
experiments with four LLMs across three QA datasets covering a total of 48
languages reveal a surprising ability of LLMs to extract the relevant
information from out-language passages, but a much weaker ability to formulate
a full answer in the correct language. Our analysis, based on both accuracy and
feature attribution techniques, further shows that distracting passages
negatively impact answer quality regardless of their language. However,
distractors in the query language exert a slightly stronger influence. Taken
together, our findings deepen the understanding of how LLMs utilize context in
mRAG systems, providing directions for future improvements.
|
2504.01698 | Yilong Lu | Yi-Long Lu, Chunhui Zhang, Jiajun Song, Lifeng Fan, Wei Wang | ToM-RL: Reinforcement Learning Unlocks Theory of Mind in Small LLMs | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent advancements in rule-based reinforcement learning (RL), applied during
the post-training phase of large language models (LLMs), have significantly
enhanced their capabilities in structured reasoning tasks such as mathematics
and logical inference. However, the effectiveness of RL in social reasoning,
particularly in Theory of Mind (ToM), the ability to infer others' mental
states, remains largely unexplored. In this study, we demonstrate that RL
methods effectively unlock ToM reasoning capabilities even in small-scale LLMs
(0.5B to 7B parameters). Using a modest dataset comprising 3200 questions
across diverse scenarios, our RL-trained 7B model achieves 84.50\% accuracy on
the Hi-ToM benchmark, surpassing models like GPT-4o and DeepSeek-v3 despite
significantly fewer parameters. While smaller models ($\leq$3B parameters)
suffer from reasoning collapse, larger models (7B parameters) maintain stable
performance through consistent belief tracking. Additionally, our RL-based
models demonstrate robust generalization to higher-order, out-of-distribution
ToM problems, novel textual presentations, and previously unseen datasets.
These findings highlight RL's potential to enhance social cognitive reasoning,
bridging the gap between structured problem-solving and nuanced social
inference in LLMs.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 12:58:42 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 03:58:20 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Lu",
"Yi-Long",
""
],
[
"Zhang",
"Chunhui",
""
],
[
"Song",
"Jiajun",
""
],
[
"Fan",
"Lifeng",
""
],
[
"Wang",
"Wei",
""
]
] | TITLE: ToM-RL: Reinforcement Learning Unlocks Theory of Mind in Small LLMs
ABSTRACT: Recent advancements in rule-based reinforcement learning (RL), applied during
the post-training phase of large language models (LLMs), have significantly
enhanced their capabilities in structured reasoning tasks such as mathematics
and logical inference. However, the effectiveness of RL in social reasoning,
particularly in Theory of Mind (ToM), the ability to infer others' mental
states, remains largely unexplored. In this study, we demonstrate that RL
methods effectively unlock ToM reasoning capabilities even in small-scale LLMs
(0.5B to 7B parameters). Using a modest dataset comprising 3200 questions
across diverse scenarios, our RL-trained 7B model achieves 84.50\% accuracy on
the Hi-ToM benchmark, surpassing models like GPT-4o and DeepSeek-v3 despite
significantly fewer parameters. While smaller models ($\leq$3B parameters)
suffer from reasoning collapse, larger models (7B parameters) maintain stable
performance through consistent belief tracking. Additionally, our RL-based
models demonstrate robust generalization to higher-order, out-of-distribution
ToM problems, novel textual presentations, and previously unseen datasets.
These findings highlight RL's potential to enhance social cognitive reasoning,
bridging the gap between structured problem-solving and nuanced social
inference in LLMs.
|
2504.02010 | Nan Zhang | Nan Zhang, Yusen Zhang, Prasenjit Mitra, Rui Zhang | When Reasoning Meets Compression: Benchmarking Compressed Large
Reasoning Models on Complex Reasoning Tasks | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent open-source large reasoning models (LRMs) exhibit strong performance
on complex reasoning tasks, but their large parameter count makes them
prohibitively expensive for individuals. The compression of large language
models (LLMs) offers an effective solution to reduce the cost of computational
resources. However, systematic studies on the performance of compressed LLMs in
complex reasoning tasks, especially for LRMs, are lacking. Most works on
quantization and pruning focus on preserving language modeling performance,
while existing distillation works do not comprehensively benchmark student
models based on reasoning difficulty or compression impact on knowledge and
reasoning. In this paper, we benchmark compressed DeepSeek-R1 models on four
different reasoning datasets (AIME 2024, FOLIO, Temporal Sequences of BIG-Bench
Hard, and MuSiQue), ranging from mathematical to multihop reasoning, using
quantization, distillation, and pruning methods. We benchmark 2.51-, 1.73-, and
1.58-bit R1 models that adopt dynamic quantization. We also benchmark distilled
R1 models that are based on LLaMA or Qwen and run SparseGPT on them to obtain
various sparsity levels. Studying the performance and behavior of compressed
LRMs, we report their performance scores and test-time compute (number of
tokens spent on each question). Notably, using MuSiQue, we find that parameter
count has a much greater impact on LRMs' knowledge memorization than on their
reasoning capability, which can inform the choice of compression techniques.
Through our empirical analysis of test-time compute, we find that shorter model
outputs generally achieve better performance than longer ones across several
benchmarks for both R1 and its compressed variants, highlighting the need for
more concise reasoning chains.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 05:17:46 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Zhang",
"Nan",
""
],
[
"Zhang",
"Yusen",
""
],
[
"Mitra",
"Prasenjit",
""
],
[
"Zhang",
"Rui",
""
]
] | TITLE: When Reasoning Meets Compression: Benchmarking Compressed Large
Reasoning Models on Complex Reasoning Tasks
ABSTRACT: Recent open-source large reasoning models (LRMs) exhibit strong performance
on complex reasoning tasks, but their large parameter count makes them
prohibitively expensive for individuals. The compression of large language
models (LLMs) offers an effective solution to reduce the cost of computational
resources. However, systematic studies on the performance of compressed LLMs in
complex reasoning tasks, especially for LRMs, are lacking. Most works on
quantization and pruning focus on preserving language modeling performance,
while existing distillation works do not comprehensively benchmark student
models based on reasoning difficulty or compression impact on knowledge and
reasoning. In this paper, we benchmark compressed DeepSeek-R1 models on four
different reasoning datasets (AIME 2024, FOLIO, Temporal Sequences of BIG-Bench
Hard, and MuSiQue), ranging from mathematical to multihop reasoning, using
quantization, distillation, and pruning methods. We benchmark 2.51-, 1.73-, and
1.58-bit R1 models that adopt dynamic quantization. We also benchmark distilled
R1 models that are based on LLaMA or Qwen and run SparseGPT on them to obtain
various sparsity levels. Studying the performance and behavior of compressed
LRMs, we report their performance scores and test-time compute (number of
tokens spent on each question). Notably, using MuSiQue, we find that parameter
count has a much greater impact on LRMs' knowledge memorization than on their
reasoning capability, which can inform the choice of compression techniques.
Through our empirical analysis of test-time compute, we find that shorter model
outputs generally achieve better performance than longer ones across several
benchmarks for both R1 and its compressed variants, highlighting the need for
more concise reasoning chains.
|
2504.02329 | Seif Mzoughi Msc | Seif Mzoughi, Ahmed Haj yahmed, Mohamed Elshafei, Foutse Khomh, Diego
Elias Costa | Towards Assessing Deep Learning Test Input Generators | Accepted to EASE 2025 | null | null | null | cs.LG cs.CV cs.SE | http://creativecommons.org/licenses/by/4.0/ | Deep Learning (DL) systems are increasingly deployed in safety-critical
applications, yet they remain vulnerable to robustness issues that can lead to
significant failures. While numerous Test Input Generators (TIGs) have been
developed to evaluate DL robustness, a comprehensive assessment of their
effectiveness across different dimensions is still lacking. This paper presents
a comprehensive assessment of four state-of-the-art TIGs--DeepHunter,
DeepFault, AdvGAN, and SinVAD--across multiple critical aspects:
fault-revealing capability, naturalness, diversity, and efficiency. Our
empirical study leverages three pre-trained models (LeNet-5, VGG16, and
EfficientNetB3) on datasets of varying complexity (MNIST, CIFAR-10, and
ImageNet-1K) to evaluate TIG performance. Our findings reveal important
trade-offs in robustness-revealing capability, variation in test case
generation, and computational efficiency across TIGs. The results also show
that TIG performance varies significantly with dataset complexity, as tools
that perform well on simpler datasets may struggle with more complex ones. In
contrast, others maintain steadier performance or better scalability. This
paper offers practical guidance for selecting appropriate TIGs aligned with
specific objectives and dataset characteristics. Nonetheless, more work is
needed to address TIG limitations and advance TIGs for real-world,
safety-critical systems.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 07:06:55 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 18:35:13 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Mzoughi",
"Seif",
""
],
[
"yahmed",
"Ahmed Haj",
""
],
[
"Elshafei",
"Mohamed",
""
],
[
"Khomh",
"Foutse",
""
],
[
"Costa",
"Diego Elias",
""
]
] | TITLE: Towards Assessing Deep Learning Test Input Generators
ABSTRACT: Deep Learning (DL) systems are increasingly deployed in safety-critical
applications, yet they remain vulnerable to robustness issues that can lead to
significant failures. While numerous Test Input Generators (TIGs) have been
developed to evaluate DL robustness, a comprehensive assessment of their
effectiveness across different dimensions is still lacking. This paper presents
a comprehensive assessment of four state-of-the-art TIGs--DeepHunter,
DeepFault, AdvGAN, and SinVAD--across multiple critical aspects:
fault-revealing capability, naturalness, diversity, and efficiency. Our
empirical study leverages three pre-trained models (LeNet-5, VGG16, and
EfficientNetB3) on datasets of varying complexity (MNIST, CIFAR-10, and
ImageNet-1K) to evaluate TIG performance. Our findings reveal important
trade-offs in robustness-revealing capability, variation in test case
generation, and computational efficiency across TIGs. The results also show
that TIG performance varies significantly with dataset complexity, as tools
that perform well on simpler datasets may struggle with more complex ones. In
contrast, others maintain steadier performance or better scalability. This
paper offers practical guidance for selecting appropriate TIGs aligned with
specific objectives and dataset characteristics. Nonetheless, more work is
needed to address TIG limitations and advance TIGs for real-world,
safety-critical systems.
|
2504.02971 | Shaoyuan Xu Ph.D. | Binh M. Le, Shaoyuan Xu, Jinmiao Fu, Zhishen Huang, Moyan Li, Yanhui
Guo, Hongdong Li, Sameera Ramasinghe, Bryan Wang | QID: Efficient Query-Informed ViTs in Data-Scarce Regimes for OCR-free
Visual Document Understanding | 8 pages, accepted by CVPR 2025 MULA | null | null | null | cs.CV cs.CL | http://creativecommons.org/licenses/by/4.0/ | In Visual Document Understanding (VDU) tasks, fine-tuning a pre-trained
Vision-Language Model (VLM) with new datasets often falls short in optimizing
the vision encoder to identify query-specific regions in text-rich document
images. Existing methods that directly inject queries into model layers by
modifying the network architecture often struggle to adapt to new datasets with
limited annotations. To address this, we introduce QID, a novel, streamlined,
architecture-preserving approach that integrates query embeddings into the
vision encoder, leading to notable performance gains, particularly in
data-scarce fine-tuning scenarios. Specifically, our approach introduces a
dual-module framework: a query-aware module that generates a unique query
vector to precisely guide the model's focus, as well as a query-agnostic module
that captures the positional relationships among tokens, ensuring robust
spatial understanding. Notably, both modules operate independently of the
vision attention blocks, facilitating targeted learning of query embeddings and
enhancing visual semantic identification. Experiments with OCR-free VLMs across
multiple datasets demonstrate significant performance improvements using our
method, especially in handling text-rich documents in data-scarce environments.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 18:47:16 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 17:58:44 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Le",
"Binh M.",
""
],
[
"Xu",
"Shaoyuan",
""
],
[
"Fu",
"Jinmiao",
""
],
[
"Huang",
"Zhishen",
""
],
[
"Li",
"Moyan",
""
],
[
"Guo",
"Yanhui",
""
],
[
"Li",
"Hongdong",
""
],
[
"Ramasinghe",
"Sameera",
""
],
[
"Wang",
"Bryan",
""
]
] | TITLE: QID: Efficient Query-Informed ViTs in Data-Scarce Regimes for OCR-free
Visual Document Understanding
ABSTRACT: In Visual Document Understanding (VDU) tasks, fine-tuning a pre-trained
Vision-Language Model (VLM) with new datasets often falls short in optimizing
the vision encoder to identify query-specific regions in text-rich document
images. Existing methods that directly inject queries into model layers by
modifying the network architecture often struggle to adapt to new datasets with
limited annotations. To address this, we introduce QID, a novel, streamlined,
architecture-preserving approach that integrates query embeddings into the
vision encoder, leading to notable performance gains, particularly in
data-scarce fine-tuning scenarios. Specifically, our approach introduces a
dual-module framework: a query-aware module that generates a unique query
vector to precisely guide the model's focus, as well as a query-agnostic module
that captures the positional relationships among tokens, ensuring robust
spatial understanding. Notably, both modules operate independently of the
vision attention blocks, facilitating targeted learning of query embeddings and
enhancing visual semantic identification. Experiments with OCR-free VLMs across
multiple datasets demonstrate significant performance improvements using our
method, especially in handling text-rich documents in data-scarce environments.
|
2504.03809 | Niclas Boehmer | Stanis{\l}aw Szufa, Niclas Boehmer, Robert Bredereck, Piotr
Faliszewski, Rolf Niedermeier, Piotr Skowron, Arkadii Slinko, Nimrod Talmon | Drawing a Map of Elections | Journal article merging results from arxiv:2105.07815,
arXiv:2407.11889 and Szufa et al., "Drawing a Map of Elections in the Space
of Statistical Cultures", AAMAS '20 | null | 10.1016/j.artint.2025.104332 | null | cs.MA cs.AI cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Our main contribution is the introduction of the map of elections framework.
A map of elections consists of three main elements: (1) a dataset of elections
(i.e., collections of ordinal votes over given sets of candidates), (2) a way
of measuring similarities between these elections, and (3) a representation of
the elections in the 2D Euclidean space as points, so that the more similar two
elections are, the closer are their points. In our maps, we mostly focus on
datasets of synthetic elections, but we also show an example of a map over
real-life ones. To measure similarities, we would have preferred to use, e.g.,
the isomorphic swap distance, but this is infeasible due to its high
computational complexity. Hence, we propose the polynomial-time computable
positionwise distance and use it instead. Regarding the representations in 2D
Euclidean space, we mostly use the Kamada-Kawai algorithm, but we also show two
alternatives. We develop the necessary theoretical results to form our maps and
argue experimentally that they are accurate and credible. Further, we show how
coloring the elections in a map according to various criteria helps in
analyzing results of a number of experiments. In particular, we show colorings
according to the scores of winning candidates or committees, running times of
ILP-based winner determination algorithms, and approximation ratios achieved by
particular algorithms.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 11:44:56 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 10:52:54 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Szufa",
"Stanisław",
""
],
[
"Boehmer",
"Niclas",
""
],
[
"Bredereck",
"Robert",
""
],
[
"Faliszewski",
"Piotr",
""
],
[
"Niedermeier",
"Rolf",
""
],
[
"Skowron",
"Piotr",
""
],
[
"Slinko",
"Arkadii",
""
],
[
"Talmon",
"Nimrod",
""
]
] | TITLE: Drawing a Map of Elections
ABSTRACT: Our main contribution is the introduction of the map of elections framework.
A map of elections consists of three main elements: (1) a dataset of elections
(i.e., collections of ordinal votes over given sets of candidates), (2) a way
of measuring similarities between these elections, and (3) a representation of
the elections in the 2D Euclidean space as points, so that the more similar two
elections are, the closer their points are. In our maps, we mostly focus on
datasets of synthetic elections, but we also show an example of a map over
real-life ones. To measure similarities, we would have preferred to use, e.g.,
the isomorphic swap distance, but this is infeasible due to its high
computational complexity. Hence, we propose the polynomial-time computable
positionwise distance and use it instead. Regarding the representations in 2D
Euclidean space, we mostly use the Kamada-Kawai algorithm, but we also show two
alternatives. We develop the necessary theoretical results to form our maps and
argue experimentally that they are accurate and credible. Further, we show how
coloring the elections in a map according to various criteria helps in
analyzing results of a number of experiments. In particular, we show colorings
according to the scores of winning candidates or committees, running times of
ILP-based winner determination algorithms, and approximation ratios achieved by
particular algorithms.
|
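The positionwise distance described in the record above lends itself to a short sketch: represent each election by its candidate-by-position frequency matrix, then solve a minimum-cost matching over candidates. This is a minimal illustration under assumed conventions (votes as lists of candidate indices; helper names are hypothetical), not the authors' implementation.

```python
# Hypothetical sketch of a positionwise distance between two elections.
# A vote is a ranking given as a list of candidate indices; m = #candidates.
import numpy as np
from scipy.optimize import linear_sum_assignment

def position_matrix(votes, m):
    """Row c = frequency with which candidate c appears at each position."""
    P = np.zeros((m, m))
    for vote in votes:
        for pos, cand in enumerate(vote):
            P[cand, pos] += 1
    return P / len(votes)

def positionwise_distance(votes_a, votes_b, m):
    """Min-cost candidate matching over l1 distances of position vectors."""
    A, B = position_matrix(votes_a, m), position_matrix(votes_b, m)
    # cost[i, j] = l1 distance between candidate i's and j's position vectors
    cost = np.abs(A[:, None, :] - B[None, :, :]).sum(axis=2)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum()

# Two identical elections are at distance 0.
votes = [[0, 1, 2], [2, 1, 0]]
assert positionwise_distance(votes, votes, 3) == 0.0
```

The matching step is what makes the distance isomorphic with respect to candidate renaming; it runs in polynomial time via the Hungarian algorithm, in contrast to the swap distance mentioned in the abstract.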
2504.03814 | Grgur Kova\v{c} | Grgur Kova\v{c}, J\'er\'emy Perez, R\'emy Portelas, Peter Ford
Dominey, Pierre-Yves Oudeyer | Recursive Training Loops in LLMs: How training data properties modulate
distribution shift in generated data? | null | null | null | null | cs.LG cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) are increasingly contributing to the creation of
content on the Internet. This creates a feedback loop as subsequent generations
of models will be trained on this generated, synthetic data. This phenomenon is
receiving increasing interest, in particular because previous studies have
shown that it may lead to distribution shift - models misrepresent and forget
the true underlying distributions of human data they are expected to
approximate (e.g. resulting in a drastic loss of quality). In this study, we
examine the impact of human data properties on distribution shift dynamics in
iterated training loops. We first confirm that the distribution shift dynamics
greatly vary depending on the human data by comparing four datasets (two based
on Twitter and two on Reddit). We then test whether data quality may influence
the rate of this shift. We find that it does on the Twitter, but not on the
Reddit datasets. We then focus on a Reddit dataset and conduct a more
exhaustive evaluation of a large set of dataset properties. This experiment
associated lexical diversity with larger, and semantic diversity with smaller
detrimental shifts, suggesting that incorporating text with high lexical (but
limited semantic) diversity could exacerbate the degradation of generated text.
We then focus on the evolution of political bias, and find that the type of
shift observed (bias reduction, amplification or inversion) depends on the
political lean of the human (true) distribution. Overall, our work extends the
existing literature on the consequences of recursive fine-tuning by showing
that this phenomenon is highly dependent on features of the human data on which
training occurs. This suggests that different parts of the internet (e.g. GitHub,
Reddit) may undergo different types of shift depending on their properties.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 14:41:41 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 08:45:26 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Kovač",
"Grgur",
""
],
[
"Perez",
"Jérémy",
""
],
[
"Portelas",
"Rémy",
""
],
[
"Dominey",
"Peter Ford",
""
],
[
"Oudeyer",
"Pierre-Yves",
""
]
] | TITLE: Recursive Training Loops in LLMs: How training data properties modulate
distribution shift in generated data?
ABSTRACT: Large language models (LLMs) are increasingly contributing to the creation of
content on the Internet. This creates a feedback loop as subsequent generations
of models will be trained on this generated, synthetic data. This phenomenon is
receiving increasing interest, in particular because previous studies have
shown that it may lead to distribution shift - models misrepresent and forget
the true underlying distributions of human data they are expected to
approximate (e.g. resulting in a drastic loss of quality). In this study, we
examine the impact of human data properties on distribution shift dynamics in
iterated training loops. We first confirm that the distribution shift dynamics
greatly vary depending on the human data by comparing four datasets (two based
on Twitter and two on Reddit). We then test whether data quality may influence
the rate of this shift. We find that it does on the Twitter, but not on the
Reddit datasets. We then focus on a Reddit dataset and conduct a more
exhaustive evaluation of a large set of dataset properties. This experiment
associated lexical diversity with larger, and semantic diversity with smaller
detrimental shifts, suggesting that incorporating text with high lexical (but
limited semantic) diversity could exacerbate the degradation of generated text.
We then focus on the evolution of political bias, and find that the type of
shift observed (bias reduction, amplification or inversion) depends on the
political lean of the human (true) distribution. Overall, our work extends the
existing literature on the consequences of recursive fine-tuning by showing
that this phenomenon is highly dependent on features of the human data on which
training occurs. This suggests that different parts of the internet (e.g. GitHub,
Reddit) may undergo different types of shift depending on their properties.
|
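The iterated training loop studied in the record above can be illustrated with a deliberately simple stand-in: a Gaussian fit takes the place of an LLM (an assumption for illustration only, not the paper's setup). Each generation is fit to the previous generation's synthetic samples, so estimation noise compounds across generations.

```python
# Toy recursive training loop: fit a Gaussian to data, sample from the
# fit, refit on the samples, and repeat. The Gaussian stands in for an
# LLM purely for illustration.
import random
import statistics

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(500)]  # "human" data

stds = []
for generation in range(20):
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    stds.append(sigma)
    # Next generation trains only on synthetic samples from the fit.
    data = [random.gauss(mu, sigma) for _ in range(500)]

# The fitted spread evolves as a multiplicative random walk: each
# generation inherits the previous generation's sampling noise.
print(stds[0], stds[-1])
```

How far the fitted distribution drifts from the original depends on the data it starts from, which mirrors the paper's point that distribution shift dynamics are strongly modulated by properties of the human data.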