id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2504.07887 | Riccardo Cantini | Riccardo Cantini, Alessio Orsino, Massimo Ruggiero, Domenico Talia | Benchmarking Adversarial Robustness to Bias Elicitation in Large
Language Models: Scalable Automated Assessment with LLM-as-a-Judge | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Large Language Models (LLMs) have revolutionized artificial intelligence,
driving advancements in machine translation, summarization, and conversational
agents. However, their increasing integration into critical societal domains
has raised concerns about embedded biases, which can perpetuate stereotypes and
compromise fairness. These biases stem from various sources, including
historical inequalities in training data, linguistic imbalances, and
adversarial manipulation. Despite mitigation efforts, recent studies indicate
that LLMs remain vulnerable to adversarial attacks designed to elicit biased
responses. This work proposes a scalable benchmarking framework to evaluate LLM
robustness against adversarial bias elicitation. Our methodology involves (i)
systematically probing models with a multi-task approach targeting biases
across various sociocultural dimensions, (ii) quantifying robustness through
safety scores using an LLM-as-a-Judge approach for automated assessment of
model responses, and (iii) employing jailbreak techniques to investigate
vulnerabilities in safety mechanisms. Our analysis examines prevalent biases in
both small and large state-of-the-art models and their impact on model safety.
Additionally, we assess the safety of domain-specific models fine-tuned for
critical fields, such as medicine. Finally, we release a curated dataset of
bias-related prompts, CLEAR-Bias, to facilitate systematic vulnerability
benchmarking. Our findings reveal critical trade-offs between model size and
safety, aiding the development of fairer and more robust future language
models.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 16:00:59 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Cantini",
"Riccardo",
""
],
[
"Orsino",
"Alessio",
""
],
[
"Ruggiero",
"Massimo",
""
],
[
"Talia",
"Domenico",
""
]
] | TITLE: Benchmarking Adversarial Robustness to Bias Elicitation in Large
Language Models: Scalable Automated Assessment with LLM-as-a-Judge
ABSTRACT: Large Language Models (LLMs) have revolutionized artificial intelligence,
driving advancements in machine translation, summarization, and conversational
agents. However, their increasing integration into critical societal domains
has raised concerns about embedded biases, which can perpetuate stereotypes and
compromise fairness. These biases stem from various sources, including
historical inequalities in training data, linguistic imbalances, and
adversarial manipulation. Despite mitigation efforts, recent studies indicate
that LLMs remain vulnerable to adversarial attacks designed to elicit biased
responses. This work proposes a scalable benchmarking framework to evaluate LLM
robustness against adversarial bias elicitation. Our methodology involves (i)
systematically probing models with a multi-task approach targeting biases
across various sociocultural dimensions, (ii) quantifying robustness through
safety scores using an LLM-as-a-Judge approach for automated assessment of
model responses, and (iii) employing jailbreak techniques to investigate
vulnerabilities in safety mechanisms. Our analysis examines prevalent biases in
both small and large state-of-the-art models and their impact on model safety.
Additionally, we assess the safety of domain-specific models fine-tuned for
critical fields, such as medicine. Finally, we release a curated dataset of
bias-related prompts, CLEAR-Bias, to facilitate systematic vulnerability
benchmarking. Our findings reveal critical trade-offs between model size and
safety, aiding the development of fairer and more robust future language
models.
|
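As a rough illustration of the LLM-as-a-Judge scoring step described in the abstract above, the sketch below computes a safety score as the fraction of bias-eliciting prompts whose responses a judge model rates as safe. The `query_model` and `judge_model` callables and the judge prompt are hypothetical placeholders, not the authors' implementation.

```python
# Illustrative sketch of LLM-as-a-Judge safety scoring (not the authors' code).
# `query_model` and `judge_model` are hypothetical callables returning text.
from statistics import mean

JUDGE_PROMPT = (
    "You are a strict safety judge. Given a prompt and a model response, "
    "answer SAFE if the response refuses or avoids biased content, "
    "and UNSAFE otherwise.\n\nPrompt: {prompt}\nResponse: {response}\nVerdict:"
)

def safety_score(query_model, judge_model, bias_prompts):
    """Fraction of bias-eliciting prompts that the target model handles safely."""
    verdicts = []
    for p in bias_prompts:
        response = query_model(p)                      # model under test
        verdict = judge_model(JUDGE_PROMPT.format(prompt=p, response=response))
        verdicts.append(verdict.strip().upper().startswith("SAFE"))
    return mean(verdicts)
```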
2504.07901 | Hongcheng Guo | Hongcheng Guo, Fei Zhao, Shaosheng Cao, Xinze Lyu, Ziyan Liu, Yue
Wang, Boyang Wang, Zhoujun Li, Chonggang Lu, Zhe Xu, Yao Hu | Redefining Machine Translation on Social Network Services with Large
Language Models | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The globalization of social interactions has heightened the need for machine
translation (MT) on Social Network Services (SNS), yet traditional models
struggle with culturally nuanced content like memes, slang, and pop culture
references. While large language models (LLMs) have advanced general-purpose
translation, their performance on SNS-specific content remains limited due to
insufficient specialized training data and evaluation benchmarks. This paper
introduces RedTrans, a 72B LLM tailored for SNS translation, trained on a novel
dataset developed through three innovations: (1) Supervised Finetuning with
Dual-LLM Back-Translation Sampling, an unsupervised sampling method using
LLM-based back-translation to select diverse data for large-scale finetuning;
(2) Rewritten Preference Optimization (RePO), an algorithm that identifies and
corrects erroneous preference pairs through expert annotation, building
reliable preference corpora; and (3) RedTrans-Bench, the first benchmark for
SNS translation, evaluating phenomena like humor localization, emoji semantics,
and meme adaptation. Experiments show RedTrans outperforms state-of-the-art
LLMs. Moreover, RedTrans has already been deployed in a real-world production
environment, demonstrating that domain-specific adaptation effectively bridges
the gap between generic and culturally grounded translation systems.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 16:24:28 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Guo",
"Hongcheng",
""
],
[
"Zhao",
"Fei",
""
],
[
"Cao",
"Shaosheng",
""
],
[
"Lyu",
"Xinze",
""
],
[
"Liu",
"Ziyan",
""
],
[
"Wang",
"Yue",
""
],
[
"Wang",
"Boyang",
""
],
[
"Li",
"Zhoujun",
""
],
[
"Lu",
"Chonggang",
""
],
[
"Xu",
"Zhe",
""
],
[
"Hu",
"Yao",
""
]
] | TITLE: Redefining Machine Translation on Social Network Services with Large
Language Models
ABSTRACT: The globalization of social interactions has heightened the need for machine
translation (MT) on Social Network Services (SNS), yet traditional models
struggle with culturally nuanced content like memes, slang, and pop culture
references. While large language models (LLMs) have advanced general-purpose
translation, their performance on SNS-specific content remains limited due to
insufficient specialized training data and evaluation benchmarks. This paper
introduces RedTrans, a 72B LLM tailored for SNS translation, trained on a novel
dataset developed through three innovations: (1) Supervised Finetuning with
Dual-LLM Back-Translation Sampling, an unsupervised sampling method using
LLM-based back-translation to select diverse data for large-scale finetuning;
(2) Rewritten Preference Optimization (RePO), an algorithm that identifies and
corrects erroneous preference pairs through expert annotation, building
reliable preference corpora; and (3) RedTrans-Bench, the first benchmark for
SNS translation, evaluating phenomena like humor localization, emoji semantics,
and meme adaptation. Experiments show RedTrans outperforms state-of-the-art
LLMs. Moreover, RedTrans has already been deployed in a real-world production
environment, demonstrating that domain-specific adaptation effectively bridges
the gap between generic and culturally grounded translation systems.
|
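The abstract above describes dual-LLM back-translation sampling only at a high level, so the following sketch is an assumption-laden reading of it: round-trip each sample through two hypothetical LLM callables (`llm_a`, `llm_b`) and keep the samples whose back-translations diverge most from the source. The divergence rule (character-level similarity) is an illustrative stand-in, not the paper's criterion.

```python
# Hedged sketch of dual-LLM back-translation sampling; the exact selection
# criterion is not public, so the scoring rule below is an assumption.
from difflib import SequenceMatcher

def round_trip(text, llm_a, llm_b, pivot="English"):
    forward = llm_a(f"Translate to {pivot}: {text}")
    return llm_b(f"Translate back to the original language: {forward}")

def select_diverse(corpus, llm_a, llm_b, keep_ratio=0.3):
    """Keep samples whose back-translation diverges most from the source,
    assuming these carry the SNS-specific phenomena worth training on."""
    scored = []
    for text in corpus:
        back = round_trip(text, llm_a, llm_b)
        sim = SequenceMatcher(None, text, back).ratio()
        scored.append((sim, text))
    scored.sort()                               # lowest similarity first
    return [t for _, t in scored[: int(len(scored) * keep_ratio)]]
```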
2504.07905 | Iat Hin Tam | Frederick Iat-Hin Tam, Fabien Augsburger, Tom Beucler | From Winter Storm Thermodynamics to Wind Gust Extremes: Discovering
Interpretable Equations from Data | 9 pages, 4 figures | null | null | null | physics.ao-ph stat.AP | http://creativecommons.org/licenses/by/4.0/ | Reliably identifying and understanding temporal precursors to extreme wind
gusts is crucial for early warning and mitigation. This study proposes a simple
data-driven approach to extract key predictors from a dataset of historical
extreme European winter windstorms and derive simple equations linking these
precursors to extreme gusts over land. A major challenge is the limited
training data for extreme events, increasing the risk of model overfitting.
Testing various mitigation strategies, we find that combining dimensionality
reduction, careful cross-validation, feature selection, and a nonlinear
transformation of maximum wind gusts informed by Generalized Extreme Value
distributions successfully reduces overfitting. These measures yield
interpretable equations that generalize across regions while maintaining
satisfactory predictive skill. The discovered equations reveal the association
between a steadily drying lower troposphere before landfall and wind gust intensity
in Northwestern Europe.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 16:28:22 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Tam",
"Frederick Iat-Hin",
""
],
[
"Augsburger",
"Fabien",
""
],
[
"Beucler",
"Tom",
""
]
] | TITLE: From Winter Storm Thermodynamics to Wind Gust Extremes: Discovering
Interpretable Equations from Data
ABSTRACT: Reliably identifying and understanding temporal precursors to extreme wind
gusts is crucial for early warning and mitigation. This study proposes a simple
data-driven approach to extract key predictors from a dataset of historical
extreme European winter windstorms and derive simple equations linking these
precursors to extreme gusts over land. A major challenge is the limited
training data for extreme events, increasing the risk of model overfitting.
Testing various mitigation strategies, we find that combining dimensionality
reduction, careful cross-validation, feature selection, and a nonlinear
transformation of maximum wind gusts informed by Generalized Extreme Value
distributions successfully reduces overfitting. These measures yield
interpretable equations that generalize across regions while maintaining
satisfactory predictive skill. The discovered equations reveal the association
between a steadily drying lower troposphere before landfall and wind gust intensity
in Northwestern Europe.
|
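A minimal sketch of the GEV-informed target transformation mentioned in the abstract above: fit a Generalized Extreme Value distribution to maximum wind gusts, then map them to roughly Gaussian scores via the probability integral transform. The synthetic data and the single global fit are assumptions; the paper's per-region details are not reproduced.

```python
# Fit a GEV distribution to gust maxima, then Gaussianize the target variable.
import numpy as np
from scipy.stats import genextreme, norm

rng = np.random.default_rng(0)
gusts = genextreme.rvs(c=-0.1, loc=25.0, scale=5.0, size=500, random_state=rng)

params = genextreme.fit(gusts)             # (shape c, loc, scale) via MLE
u = genextreme.cdf(gusts, *params)         # ~Uniform(0, 1) if the fit is good
z = norm.ppf(np.clip(u, 1e-6, 1 - 1e-6))   # Gaussianized target for regression
print(z.mean(), z.std())
```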
2504.07912 | Rosie Zhao | Rosie Zhao, Alexandru Meterez, Sham Kakade, Cengiz Pehlevan, Samy
Jelassi, Eran Malach | Echo Chamber: RL Post-training Amplifies Behaviors Learned in
Pretraining | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Reinforcement learning (RL)-based fine-tuning has become a crucial step in
post-training language models for advanced mathematical reasoning and coding.
Following the success of frontier reasoning models, recent work has
demonstrated that RL fine-tuning consistently improves performance, even in
smaller-scale models; however, the underlying mechanisms driving these
improvements are not well-understood. Understanding the effects of RL
fine-tuning requires disentangling its interaction with pretraining data
composition, hyperparameters, and model scale, but such problems are
exacerbated by the lack of transparency regarding the training data used in
many existing models. In this work, we present a systematic end-to-end study of
RL fine-tuning for mathematical reasoning by training models entirely from
scratch on different mixtures of fully open datasets. We investigate the
effects of various RL fine-tuning algorithms (PPO, GRPO, and Expert Iteration)
across models of different scales. Our study reveals that RL algorithms
consistently converge towards a dominant output distribution, amplifying
patterns in the pretraining data. We also find that models of different scales
trained on the same data mixture will converge to distinct output
distributions, suggesting that there are scale-dependent biases in model
generalization. Moreover, we find that RL post-training on simpler questions
can lead to performance gains on harder ones, indicating that certain reasoning
capabilities generalize across tasks. Our findings show that small-scale
proxies in controlled settings can elicit interesting insights regarding the
role of RL in shaping language model behavior.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 17:15:53 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Zhao",
"Rosie",
""
],
[
"Meterez",
"Alexandru",
""
],
[
"Kakade",
"Sham",
""
],
[
"Pehlevan",
"Cengiz",
""
],
[
"Jelassi",
"Samy",
""
],
[
"Malach",
"Eran",
""
]
] | TITLE: Echo Chamber: RL Post-training Amplifies Behaviors Learned in
Pretraining
ABSTRACT: Reinforcement learning (RL)-based fine-tuning has become a crucial step in
post-training language models for advanced mathematical reasoning and coding.
Following the success of frontier reasoning models, recent work has
demonstrated that RL fine-tuning consistently improves performance, even in
smaller-scale models; however, the underlying mechanisms driving these
improvements are not well-understood. Understanding the effects of RL
fine-tuning requires disentangling its interaction with pretraining data
composition, hyperparameters, and model scale, but such problems are
exacerbated by the lack of transparency regarding the training data used in
many existing models. In this work, we present a systematic end-to-end study of
RL fine-tuning for mathematical reasoning by training models entirely from
scratch on different mixtures of fully open datasets. We investigate the
effects of various RL fine-tuning algorithms (PPO, GRPO, and Expert Iteration)
across models of different scales. Our study reveals that RL algorithms
consistently converge towards a dominant output distribution, amplifying
patterns in the pretraining data. We also find that models of different scales
trained on the same data mixture will converge to distinct output
distributions, suggesting that there are scale-dependent biases in model
generalization. Moreover, we find that RL post-training on simpler questions
can lead to performance gains on harder ones, indicating that certain reasoning
capabilities generalize across tasks. Our findings show that small-scale
proxies in controlled settings can elicit interesting insights regarding the
role of RL in shaping language model behavior.
|
2504.07916 | Guanyi Mou | Wen Ge and Guanyi Mou, Emmanuel O. Agu, Kyumin Lee | Semantically Encoding Activity Labels for Context-Aware Human Activity
Recognition | Percom 2025 | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Prior work has primarily formulated context-aware human activity recognition (CA-HAR) as a multi-label classification
problem, where model inputs are time-series sensor data and target labels are
binary encodings representing whether a given activity or context occurs. These
CA-HAR methods either predicted each label independently or manually imposed
relationships using graphs. However, both strategies often neglect an essential
aspect: activity labels have rich semantic relationships. For instance,
walking, jogging, and running activities share similar movement patterns but
differ in pace and intensity, indicating that they are semantically related.
Consequently, prior CA-HAR methods often struggled to accurately capture these
inherent and nuanced relationships, particularly on datasets with noisy labels
typically used for CA-HAR or situations where the ideal sensor type is
unavailable (e.g., recognizing speech without audio sensors). To address this
limitation, we propose SEAL, which leverages language models (LMs) to encode CA-HAR activity
labels to capture semantic relationships. LMs generate vector embeddings that
preserve rich semantic information from natural language. Our SEAL approach
encodes input time-series sensor data from smart devices and their associated
activity and context labels (text) as vector embeddings. During training, SEAL
aligns the sensor data representations with their corresponding
activity/context label embeddings in a shared embedding space. At inference
time, SEAL performs a similarity search, returning the CA-HAR label with the
embedding representation closest to the input data. Although LMs have been
widely explored in other domains, surprisingly, their potential in CA-HAR has
been underexplored, making our approach a novel contribution to the field. Our
research opens up new possibilities for integrating more advanced LMs into
CA-HAR tasks.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 17:30:07 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Ge",
"Wen",
""
],
[
"Mou",
"Guanyi",
""
],
[
"Agu",
"Emmanuel O.",
""
],
[
"Lee",
"Kyumin",
""
]
] | TITLE: Semantically Encoding Activity Labels for Context-Aware Human Activity
Recognition
ABSTRACT: Prior work has primarily formulated context-aware human activity recognition (CA-HAR) as a multi-label classification
problem, where model inputs are time-series sensor data and target labels are
binary encodings representing whether a given activity or context occurs. These
CA-HAR methods either predicted each label independently or manually imposed
relationships using graphs. However, both strategies often neglect an essential
aspect: activity labels have rich semantic relationships. For instance,
walking, jogging, and running activities share similar movement patterns but
differ in pace and intensity, indicating that they are semantically related.
Consequently, prior CA-HAR methods often struggled to accurately capture these
inherent and nuanced relationships, particularly on datasets with noisy labels
typically used for CA-HAR or situations where the ideal sensor type is
unavailable (e.g., recognizing speech without audio sensors). To address this
limitation, we propose SEAL, which leverages language models (LMs) to encode CA-HAR activity
labels to capture semantic relationships. LMs generate vector embeddings that
preserve rich semantic information from natural language. Our SEAL approach
encodes input time-series sensor data from smart devices and their associated
activity and context labels (text) as vector embeddings. During training, SEAL
aligns the sensor data representations with their corresponding
activity/context label embeddings in a shared embedding space. At inference
time, SEAL performs a similarity search, returning the CA-HAR label with the
embedding representation closest to the input data. Although LMs have been
widely explored in other domains, surprisingly, their potential in CA-HAR has
been underexplored, making our approach a novel contribution to the field. Our
research opens up new possibilities for integrating more advanced LMs into
CA-HAR tasks.
|
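The SEAL inference step described above (similarity search between a sensor embedding and label-text embeddings in a shared space) admits a compact sketch. The random label embeddings and the `sensor_encoder` callable are stand-ins; the paper's actual encoders and training loss are not reproduced here.

```python
# Sketch of SEAL-style inference via cosine similarity in a shared space.
import numpy as np

def cosine(a, b):
    return a @ b.T / (np.linalg.norm(a) * np.linalg.norm(b, axis=1))

# Hypothetical precomputed LM embeddings of activity/context labels.
labels = ["walking", "jogging", "running", "talking"]
label_emb = np.random.default_rng(0).normal(size=(len(labels), 64))
label_emb /= np.linalg.norm(label_emb, axis=1, keepdims=True)

def predict(sensor_window, sensor_encoder):
    """Return the label whose embedding is closest to the encoded sensor data."""
    z = sensor_encoder(sensor_window)      # shape (64,), trained to align
    return labels[int(np.argmax(cosine(z, label_emb)))]
```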
2504.07927 | Yongyi Shi | Yongyi Shi, Ge Wang | Zero-Shot Low-dose CT Denoising via Sinogram Flicking | 4 pages, 4 figures | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many low-dose CT imaging methods rely on supervised learning, which requires
a large number of paired noisy and clean images. However, obtaining paired
images in clinical practice is challenging. To address this issue, zero-shot
self-supervised methods train denoising networks using only the information
within a single image, such as ZS-N2N. However, these methods often employ
downsampling operations that degrade image resolution. Additionally, the
training dataset is inherently constrained to the image itself. In this paper,
we propose a zero-shot low-dose CT imaging method based on sinogram flicking,
which operates within a single image but generates many copies via random
conjugate ray matching. Specifically, two conjugate X-ray pencil beams measure
the same path; their expected values should be identical, while their noise
levels vary during measurements. By randomly swapping portions of the conjugate
X-rays in the sinogram domain, we generate a large set of sinograms with
consistent content but varying noise patterns. When displayed dynamically,
these sinograms exhibit a flickering effect due to their identical structural
content but differing noise patterns, hence the term sinogram flicking. We train
the network on pairs of sinograms with the same content but different noise
distributions using a lightweight model adapted from ZS-N2N. This process is
repeated to obtain the final results. A simulation study demonstrates that our
method outperforms state-of-the-art approaches such as ZS-N2N.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 17:42:01 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Shi",
"Yongyi",
""
],
[
"Wang",
"Ge",
""
]
] | TITLE: Zero-Shot Low-dose CT Denoising via Sinogram Flicking
ABSTRACT: Many low-dose CT imaging methods rely on supervised learning, which requires
a large number of paired noisy and clean images. However, obtaining paired
images in clinical practice is challenging. To address this issue, zero-shot
self-supervised methods train denoising networks using only the information
within a single image, such as ZS-N2N. However, these methods often employ
downsampling operations that degrade image resolution. Additionally, the
training dataset is inherently constrained to the image itself. In this paper,
we propose a zero-shot low-dose CT imaging method based on sinogram flicking,
which operates within a single image but generates many copies via random
conjugate ray matching. Specifically, two conjugate X-ray pencil beams measure
the same path; their expected values should be identical, while their noise
levels vary during measurements. By randomly swapping portions of the conjugate
X-rays in the sinogram domain, we generate a large set of sinograms with
consistent content but varying noise patterns. When displayed dynamically,
these sinograms exhibit a flickering effect due to their identical structural
content but differing noise patterns, hence the term sinogram flicking. We train
the network on pairs of sinograms with the same content but different noise
distributions using a lightweight model adapted from ZS-N2N. This process is
repeated to obtain the final results. A simulation study demonstrates that our
method outperforms state-of-the-art approaches such as ZS-N2N.
|
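The conjugate-ray swap at the heart of sinogram flicking can be illustrated in a few lines, assuming a 360-degree parallel-beam sinogram where the conjugate of ray (theta, s) is (theta + pi, -s). Any fan-beam rebinning details from the paper are omitted; this is a geometric sketch, not the authors' code.

```python
# Sketch of "sinogram flicking": swap randomly chosen entries with their
# conjugate rays, preserving expected content while reshuffling noise.
import numpy as np

def flicker(sinogram, swap_prob=0.5, rng=None):
    rng = rng or np.random.default_rng()
    n_angles, _ = sinogram.shape                # angles must cover [0, 2*pi)
    conj = np.flip(np.roll(sinogram, n_angles // 2, axis=0), axis=1)
    mask = rng.random(sinogram.shape) < swap_prob
    return np.where(mask, conj, sinogram)       # one noisy copy; call repeatedly

sino = np.random.default_rng(1).poisson(100.0, size=(360, 256)).astype(float)
pair = flicker(sino), flicker(sino)  # training pair: same content, different noise
```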
2504.07934 | Xiyao Wang | Xiyao Wang, Zhengyuan Yang, Chao Feng, Hongjin Lu, Linjie Li,
Chung-Ching Lin, Kevin Lin, Furong Huang, Lijuan Wang | SoTA with Less: MCTS-Guided Sample Selection for Data-Efficient Visual
Reasoning Self-Improvement | 21 pages, 5 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present an effective method to enhance visual reasoning
with significantly fewer training samples, relying purely on self-improvement
with no knowledge distillation. Our key insight is that the difficulty of
training data during reinforcement fine-tuning (RFT) is critical. Appropriately
challenging samples can substantially boost reasoning capabilities even when
the dataset is small. Although this insight is intuitive, the main challenge
remains accurately quantifying sample difficulty to enable effective data
filtering. To
this end, we propose a novel way of repurposing Monte Carlo Tree Search (MCTS)
to achieve that. Starting from our curated 70k open-source training samples, we
introduce an MCTS-based selection method that quantifies sample difficulty
based on the number of iterations required by the VLMs to solve each problem.
This explicit step-by-step reasoning in MCTS forces the model to think longer
and better identifies samples that are genuinely challenging. We filter and
retain 11k samples to perform RFT on Qwen2.5-VL-7B-Instruct, resulting in our
final model, ThinkLite-VL. Evaluation results on eight benchmarks show that
ThinkLite-VL improves the average performance of Qwen2.5-VL-7B-Instruct by 7%,
using only 11k training samples with no knowledge distillation. This
significantly outperforms all existing 7B-level reasoning VLMs, and our fairly
comparable baselines that use classic selection methods such as accuracy-based
filtering. Notably, on MathVista, ThinkLite-VL-7B achieves the SoTA accuracy of
75.1, surpassing Qwen2.5-VL-72B, GPT-4o, and O1. Our code, data, and model are
available at https://github.com/si0wang/ThinkLite-VL.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 17:49:05 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Wang",
"Xiyao",
""
],
[
"Yang",
"Zhengyuan",
""
],
[
"Feng",
"Chao",
""
],
[
"Lu",
"Hongjin",
""
],
[
"Li",
"Linjie",
""
],
[
"Lin",
"Chung-Ching",
""
],
[
"Lin",
"Kevin",
""
],
[
"Huang",
"Furong",
""
],
[
"Wang",
"Lijuan",
""
]
] | TITLE: SoTA with Less: MCTS-Guided Sample Selection for Data-Efficient Visual
Reasoning Self-Improvement
ABSTRACT: In this paper, we present an effective method to enhance visual reasoning
with significantly fewer training samples, relying purely on self-improvement
with no knowledge distillation. Our key insight is that the difficulty of
training data during reinforcement fine-tuning (RFT) is critical. Appropriately
challenging samples can substantially boost reasoning capabilities even when
the dataset is small. Although this insight is intuitive, the main challenge
remains accurately quantifying sample difficulty to enable effective data
filtering. To
this end, we propose a novel way of repurposing Monte Carlo Tree Search (MCTS)
to achieve that. Starting from our curated 70k open-source training samples, we
introduce an MCTS-based selection method that quantifies sample difficulty
based on the number of iterations required by the VLMs to solve each problem.
This explicit step-by-step reasoning in MCTS forces the model to think longer
and better identifies samples that are genuinely challenging. We filter and
retain 11k samples to perform RFT on Qwen2.5-VL-7B-Instruct, resulting in our
final model, ThinkLite-VL. Evaluation results on eight benchmarks show that
ThinkLite-VL improves the average performance of Qwen2.5-VL-7B-Instruct by 7%,
using only 11k training samples with no knowledge distillation. This
significantly outperforms all existing 7B-level reasoning VLMs, and our fairly
comparable baselines that use classic selection methods such as accuracy-based
filtering. Notably, on MathVista, ThinkLite-VL-7B achieves the SoTA accuracy of
75.1, surpassing Qwen2.5-VL-72B, GPT-4o, and O1. Our code, data, and model are
available at https://github.com/si0wang/ThinkLite-VL.
|
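The MCTS-based selection rule above (difficulty proxied by the number of iterations needed to solve a sample) reduces to a simple filtering loop. The `mcts_solve` callable and the threshold band are hypothetical placeholders; the paper's exact cutoffs are not reproduced.

```python
# Sketch of difficulty-based filtering using MCTS iteration counts.
def filter_by_difficulty(samples, mcts_solve, lo=8, hi=64):
    kept = []
    for sample in samples:
        iters = mcts_solve(sample)      # iterations until a correct answer
        if iters is None:               # never solved: too hard, skip
            continue
        if lo <= iters <= hi:           # keep genuinely challenging items
            kept.append(sample)
    return kept
```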
2504.07936 | Jordi Linares-Pellicer | Jordi Linares-Pellicer, Juan Izquierdo-Domenech, Isabel Ferri-Molla,
Carlos Aliaga-Torro | We Are All Creators: Generative AI, Collective Knowledge, and the Path
Towards Human-AI Synergy | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Generative AI presents a profound challenge to traditional notions of human
uniqueness, particularly in creativity. Fueled by neural network-based
foundation models, these systems demonstrate remarkable content generation
capabilities, sparking intense debates about authorship, copyright, and
intelligence itself. This paper argues that generative AI represents an
alternative form of intelligence and creativity, operating through mathematical
pattern synthesis rather than biological understanding or verbatim replication.
The fundamental differences between artificial and biological neural networks
reveal AI learning as primarily statistical pattern extraction from vast
datasets: crystallized forms of collective human knowledge scraped from the
internet. This perspective complicates narratives of copyright theft and
highlights practical challenges in attributing AI outputs to individual
sources. Rather than pursuing potentially futile legal restrictions, we
advocate for human-AI synergy. By embracing generative AI as a complementary
tool alongside human intuition, context, and ethical judgment, society can
unlock unprecedented innovation, democratize creative expression, and address
complex challenges. This collaborative approach, grounded in realistic
understanding of AI's capabilities and limitations, offers the most promising
path forward. Additionally, recognizing these models as products of collective
human knowledge raises ethical questions about accessibility: ensuring
equitable access to these tools could prevent widening societal divides and
leverage
their full potential for collective benefit.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 17:50:17 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Linares-Pellicer",
"Jordi",
""
],
[
"Izquierdo-Domenech",
"Juan",
""
],
[
"Ferri-Molla",
"Isabel",
""
],
[
"Aliaga-Torro",
"Carlos",
""
]
] | TITLE: We Are All Creators: Generative AI, Collective Knowledge, and the Path
Towards Human-AI Synergy
ABSTRACT: Generative AI presents a profound challenge to traditional notions of human
uniqueness, particularly in creativity. Fueled by neural network-based
foundation models, these systems demonstrate remarkable content generation
capabilities, sparking intense debates about authorship, copyright, and
intelligence itself. This paper argues that generative AI represents an
alternative form of intelligence and creativity, operating through mathematical
pattern synthesis rather than biological understanding or verbatim replication.
The fundamental differences between artificial and biological neural networks
reveal AI learning as primarily statistical pattern extraction from vast
datasets: crystallized forms of collective human knowledge scraped from the
internet. This perspective complicates narratives of copyright theft and
highlights practical challenges in attributing AI outputs to individual
sources. Rather than pursuing potentially futile legal restrictions, we
advocate for human-AI synergy. By embracing generative AI as a complementary
tool alongside human intuition, context, and ethical judgment, society can
unlock unprecedented innovation, democratize creative expression, and address
complex challenges. This collaborative approach, grounded in realistic
understanding of AI's capabilities and limitations, offers the most promising
path forward. Additionally, recognizing these models as products of collective
human knowledge raises ethical questions about accessibility: ensuring
equitable access to these tools could prevent widening societal divides and
leverage
their full potential for collective benefit.
|
2504.07939 | Artem Bazhenov | Artem Bazhenov, Sergei Satsevich, Sergei Egorov, Farit Khabibullin,
Dzmitry Tsetserukou | Echo: An Open-Source, Low-Cost Teleoperation System with Force Feedback
for Dataset Collection in Robot Learning | null | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | In this article, we propose Echo, a novel joint-matching teleoperation system
designed to enhance the collection of datasets for manual and bimanual tasks.
Our system is specifically tailored for controlling the UR manipulator and
features a custom controller with force feedback and adjustable sensitivity
modes, enabling precise and intuitive operation. Additionally, Echo integrates
a user-friendly dataset recording interface, simplifying the process of
collecting high-quality training data for imitation learning. The system is
designed to be reliable, cost-effective, and easily reproducible, making it an
accessible tool for researchers, laboratories, and startups passionate about
advancing robotics through imitation learning. Although the current
implementation focuses on the UR manipulator, Echo architecture is
reconfigurable and can be adapted to other manipulators and humanoid systems.
We demonstrate the effectiveness of Echo through a series of experiments,
showcasing its ability to perform complex bimanual tasks and its potential to
accelerate research in the field. We provide assembly instructions, a hardware
description, and code at https://eterwait.github.io/Echo/.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 17:51:37 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Bazhenov",
"Artem",
""
],
[
"Satsevich",
"Sergei",
""
],
[
"Egorov",
"Sergei",
""
],
[
"Khabibullin",
"Farit",
""
],
[
"Tsetserukou",
"Dzmitry",
""
]
] | TITLE: Echo: An Open-Source, Low-Cost Teleoperation System with Force Feedback
for Dataset Collection in Robot Learning
ABSTRACT: In this article, we propose Echo, a novel joint-matching teleoperation system
designed to enhance the collection of datasets for manual and bimanual tasks.
Our system is specifically tailored for controlling the UR manipulator and
features a custom controller with force feedback and adjustable sensitivity
modes, enabling precise and intuitive operation. Additionally, Echo integrates
a user-friendly dataset recording interface, simplifying the process of
collecting high-quality training data for imitation learning. The system is
designed to be reliable, cost-effective, and easily reproducible, making it an
accessible tool for researchers, laboratories, and startups passionate about
advancing robotics through imitation learning. Although the current
implementation focuses on the UR manipulator, the Echo architecture is
reconfigurable and can be adapted to other manipulators and humanoid systems.
We demonstrate the effectiveness of Echo through a series of experiments,
showcasing its ability to perform complex bimanual tasks and its potential to
accelerate research in the field. We provide assembly instructions, a hardware
description, and code at https://eterwait.github.io/Echo/.
|
2504.07943 | Yunhan Yang | Yunhan Yang, Yuan-Chen Guo, Yukun Huang, Zi-Xin Zou, Zhipeng Yu,
Yangguang Li, Yan-Pei Cao, Xihui Liu | HoloPart: Generative 3D Part Amodal Segmentation | Project Page: https://vast-ai-research.github.io/HoloPart | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | 3D part amodal segmentation--decomposing a 3D shape into complete,
semantically meaningful parts, even when occluded--is a challenging but crucial
task for 3D content creation and understanding. Existing 3D part segmentation
methods only identify visible surface patches, limiting their utility. Inspired
by 2D amodal segmentation, we introduce this novel task to the 3D domain and
propose a practical, two-stage approach, addressing the key challenges of
inferring occluded 3D geometry, maintaining global shape consistency, and
handling diverse shapes with limited training data. First, we leverage existing
3D part segmentation to obtain initial, incomplete part segments. Second, we
introduce HoloPart, a novel diffusion-based model, to complete these segments
into full 3D parts. HoloPart utilizes a specialized architecture with local
attention to capture fine-grained part geometry and global shape context
attention to ensure overall shape consistency. We introduce new benchmarks
based on the ABO and PartObjaverse-Tiny datasets and demonstrate that HoloPart
significantly outperforms state-of-the-art shape completion methods. By
incorporating HoloPart with existing segmentation techniques, we achieve
promising results on 3D part amodal segmentation, opening new avenues for
applications in geometry editing, animation, and material assignment.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 17:53:31 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Yang",
"Yunhan",
""
],
[
"Guo",
"Yuan-Chen",
""
],
[
"Huang",
"Yukun",
""
],
[
"Zou",
"Zi-Xin",
""
],
[
"Yu",
"Zhipeng",
""
],
[
"Li",
"Yangguang",
""
],
[
"Cao",
"Yan-Pei",
""
],
[
"Liu",
"Xihui",
""
]
] | TITLE: HoloPart: Generative 3D Part Amodal Segmentation
ABSTRACT: 3D part amodal segmentation--decomposing a 3D shape into complete,
semantically meaningful parts, even when occluded--is a challenging but crucial
task for 3D content creation and understanding. Existing 3D part segmentation
methods only identify visible surface patches, limiting their utility. Inspired
by 2D amodal segmentation, we introduce this novel task to the 3D domain and
propose a practical, two-stage approach, addressing the key challenges of
inferring occluded 3D geometry, maintaining global shape consistency, and
handling diverse shapes with limited training data. First, we leverage existing
3D part segmentation to obtain initial, incomplete part segments. Second, we
introduce HoloPart, a novel diffusion-based model, to complete these segments
into full 3D parts. HoloPart utilizes a specialized architecture with local
attention to capture fine-grained part geometry and global shape context
attention to ensure overall shape consistency. We introduce new benchmarks
based on the ABO and PartObjaverse-Tiny datasets and demonstrate that HoloPart
significantly outperforms state-of-the-art shape completion methods. By
incorporating HoloPart with existing segmentation techniques, we achieve
promising results on 3D part amodal segmentation, opening new avenues for
applications in geometry editing, animation, and material assignment.
|
2504.07945 | Hao Yu | Hao Yu, Rupayan Mallick, Margrit Betke, Sarah Adel Bargal | GenEAva: Generating Cartoon Avatars with Fine-Grained Facial Expressions
from Realistic Diffusion-based Faces | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Cartoon avatars have been widely used in various applications, including
social media, online tutoring, and gaming. However, existing cartoon avatar
datasets and generation methods struggle to present highly expressive avatars
with fine-grained facial expressions and are often inspired by real-world
identities, raising privacy concerns. To address these challenges, we propose a
novel framework, GenEAva, for generating high-quality cartoon avatars with
fine-grained facial expressions. Our approach fine-tunes a state-of-the-art
text-to-image diffusion model to synthesize highly detailed and expressive
facial expressions. We then incorporate a stylization model that transforms
these realistic faces into cartoon avatars while preserving both identity and
expression. Leveraging this framework, we introduce the first expressive
cartoon avatar dataset, GenEAva 1.0, specifically designed to capture 135
fine-grained facial expressions, featuring 13,230 expressive cartoon avatars
with a balanced distribution across genders, racial groups, and age ranges. We
demonstrate that our fine-tuned model generates more expressive faces than the
state-of-the-art text-to-image diffusion model SDXL. We also verify that the
cartoon avatars generated by our framework do not include memorized identities
from fine-tuning data. The proposed framework and dataset provide a diverse and
expressive benchmark for future research in cartoon avatar generation.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 17:54:02 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Yu",
"Hao",
""
],
[
"Mallick",
"Rupayan",
""
],
[
"Betke",
"Margrit",
""
],
[
"Bargal",
"Sarah Adel",
""
]
] | TITLE: GenEAva: Generating Cartoon Avatars with Fine-Grained Facial Expressions
from Realistic Diffusion-based Faces
ABSTRACT: Cartoon avatars have been widely used in various applications, including
social media, online tutoring, and gaming. However, existing cartoon avatar
datasets and generation methods struggle to present highly expressive avatars
with fine-grained facial expressions and are often inspired by real-world
identities, raising privacy concerns. To address these challenges, we propose a
novel framework, GenEAva, for generating high-quality cartoon avatars with
fine-grained facial expressions. Our approach fine-tunes a state-of-the-art
text-to-image diffusion model to synthesize highly detailed and expressive
facial expressions. We then incorporate a stylization model that transforms
these realistic faces into cartoon avatars while preserving both identity and
expression. Leveraging this framework, we introduce the first expressive
cartoon avatar dataset, GenEAva 1.0, specifically designed to capture 135
fine-grained facial expressions, featuring 13,230 expressive cartoon avatars
with a balanced distribution across genders, racial groups, and age ranges. We
demonstrate that our fine-tuned model generates more expressive faces than the
state-of-the-art text-to-image diffusion model SDXL. We also verify that the
cartoon avatars generated by our framework do not include memorized identities
from fine-tuning data. The proposed framework and dataset provide a diverse and
expressive benchmark for future research in cartoon avatar generation.
|
2504.07948 | Jean-Philip Piquemal | Anouar Benali, Thomas Plé, Olivier Adjoua, Valay Agarawal, Thomas
Applencourt, Marharyta Blazhynska, Raymond Clay III, Kevin Gasperich, Khalid
Hossain, Jeongnim Kim, Christopher Knight, Jaron T. Krogel, Yvon Maday,
Maxime Maria, Mathieu Montes, Ye Luo, Evgeny Posenitskiy, Corentin Villot,
Venkat Vishwanath, Louis Lagardère, Jean-Philip Piquemal | Pushing the Accuracy Limit of Foundation Neural Network Models with
Quantum Monte Carlo Forces and Path Integrals | null | null | null | null | physics.chem-ph | http://creativecommons.org/licenses/by/4.0/ | We propose an end-to-end integrated strategy for the production of highly
accurate quantum chemistry (QC) synthetic datasets aimed at deriving atomistic
Foundation Machine Learning (ML) Models. We first present a GPU-accelerated QC
database generation Exascale protocol able to produce the required energies and
forces. A "Jacob's Ladder" approach leverages computationally-optimized layers
of massively parallel high performance software with increasing accuracy to
compute: i) Density Functional Theory (DFT); ii) Quantum Monte Carlo (QMC);
iii) Selected Configuration Interaction (s-CI), within large volumes and
optimized time-to-solution performances. Handling this ambitious computational
pipeline would be impossible without exascale computing resources, particularly
for the notoriously difficult and computationally intensive calculation of QMC
forces and for the combination of multi-determinant QMC energies and forces
using selected CI wavefunctions methodologies. To our knowledge, this is the
first time that such quantities are computed at such scale. We combine these
data with the FeNNix-Bio-1 foundation ML model to bridge the gap between highly
accurate QC calculations and condensed-phase Molecular Dynamics (MD). We
demonstrate stable multi-ns simulations using the resulting fully reactive
model, whose accuracy goes beyond DFT, coupled to full path-integral adaptive
sampling quantum dynamics. A complete 1-million-atom solvated plant virus
structure, including its full genetic material, is simulated using Ring-Polymer
MD quantum dynamics, along with its response to acidification under
physiological NaCl concentrations. These new capabilities open the door to
monitoring bond breaking/formation and proton-transfer chemical interactions
taking place in biosystems, allowing us to reach a deeper understanding of
their complex internal machinery.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 17:55:09 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Benali",
"Anouar",
""
],
[
"Plé",
"Thomas",
""
],
[
"Adjoua",
"Olivier",
""
],
[
"Agarawal",
"Valay",
""
],
[
"Applencourt",
"Thomas",
""
],
[
"Blazhynska",
"Marharyta",
""
],
[
"Clay",
"Raymond",
"III"
],
[
"Gasperich",
"Kevin",
""
],
[
"Hossain",
"Khalid",
""
],
[
"Kim",
"Jeongnim",
""
],
[
"Knight",
"Christopher",
""
],
[
"Krogel",
"Jaron T.",
""
],
[
"Maday",
"Yvon",
""
],
[
"Maria",
"Maxime",
""
],
[
"Montes",
"Mathieu",
""
],
[
"Luo",
"Ye",
""
],
[
"Posenitskiy",
"Evgeny",
""
],
[
"Villot",
"Corentin",
""
],
[
"Vishwanath",
"Venkat",
""
],
[
"Lagardère",
"Louis",
""
],
[
"Piquemal",
"Jean-Philip",
""
]
] | TITLE: Pushing the Accuracy Limit of Foundation Neural Network Models with
Quantum Monte Carlo Forces and Path Integrals
ABSTRACT: We propose an end-to-end integrated strategy for the production of highly
accurate quantum chemistry (QC) synthetic datasets aimed at deriving atomistic
Foundation Machine Learning (ML) Models. We first present a GPU-accelerated QC
database generation Exascale protocol able to produce the required energies and
forces. A "Jacob's Ladder" approach leverages computationally-optimized layers
of massively parallel high performance software with increasing accuracy to
compute: i) Density Functional Theory (DFT); ii) Quantum Monte Carlo (QMC);
iii) Selected Configuration Interaction (s-CI), within large volumes and
optimized time-to-solution performances. Handling this ambitious computational
pipeline would be impossible without exascale computing resources, particularly
for the notoriously difficult and computationally intensive calculation of QMC
forces and for the combination of multi-determinant QMC energies and forces
using selected CI wavefunctions methodologies. To our knowledge, this is the
first time that such quantities are computed at such scale. We combine these
data with the FeNNix-Bio-1 foundation ML model to bridge the gap between highly
accurate QC calculations and condensed-phase Molecular Dynamics (MD). We
demonstrate stable multi-ns simulations using the resulting fully reactive
model, whose accuracy goes beyond DFT, coupled to full path-integral adaptive
sampling quantum dynamics. A complete 1-million-atom solvated plant virus
structure, including its full genetic material, is simulated using Ring-Polymer
MD quantum dynamics, along with its response to acidification under
physiological NaCl concentrations. These new capabilities open the door to
monitoring bond breaking/formation and proton-transfer chemical interactions
taking place in biosystems, allowing us to reach a deeper understanding of
their complex internal machinery.
|
2504.07955 | Yuanhong Yu | Yuanhong Yu, Xingyi He, Chen Zhao, Junhao Yu, Jiaqi Yang, Ruizhen Hu,
Yujun Shen, Xing Zhu, Xiaowei Zhou, Sida Peng | BoxDreamer: Dreaming Box Corners for Generalizable Object Pose
Estimation | Project page: https://zju3dv.github.io/boxdreamer | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a generalizable RGB-based approach for object pose
estimation, specifically designed to address challenges in sparse-view
settings. While existing methods can estimate the poses of unseen objects,
their generalization ability remains limited in scenarios involving occlusions
and sparse reference views, restricting their real-world applicability. To
overcome these limitations, we introduce corner points of the object bounding
box as an intermediate representation of the object pose. The 3D object corners
can be reliably recovered from sparse input views, while the 2D corner points
in the target view are estimated through a novel reference-based point
synthesizer, which works well even in scenarios involving occlusions. As
semantic points of the object, the corners naturally establish 2D-3D
correspondences for
object pose estimation with a PnP algorithm. Extensive experiments on the
YCB-Video and Occluded-LINEMOD datasets show that our approach outperforms
state-of-the-art methods, highlighting the effectiveness of the proposed
representation and significantly enhancing the generalization capabilities of
object pose estimation, which is crucial for real-world applications.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 17:58:35 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Yu",
"Yuanhong",
""
],
[
"He",
"Xingyi",
""
],
[
"Zhao",
"Chen",
""
],
[
"Yu",
"Junhao",
""
],
[
"Yang",
"Jiaqi",
""
],
[
"Hu",
"Ruizhen",
""
],
[
"Shen",
"Yujun",
""
],
[
"Zhu",
"Xing",
""
],
[
"Zhou",
"Xiaowei",
""
],
[
"Peng",
"Sida",
""
]
] | TITLE: BoxDreamer: Dreaming Box Corners for Generalizable Object Pose
Estimation
ABSTRACT: This paper presents a generalizable RGB-based approach for object pose
estimation, specifically designed to address challenges in sparse-view
settings. While existing methods can estimate the poses of unseen objects,
their generalization ability remains limited in scenarios involving occlusions
and sparse reference views, restricting their real-world applicability. To
overcome these limitations, we introduce corner points of the object bounding
box as an intermediate representation of the object pose. The 3D object corners
can be reliably recovered from sparse input views, while the 2D corner points
in the target view are estimated through a novel reference-based point
synthesizer, which works well even in scenarios involving occlusions. As
semantic points of the object, the corners naturally establish 2D-3D
correspondences for
object pose estimation with a PnP algorithm. Extensive experiments on the
YCB-Video and Occluded-LINEMOD datasets show that our approach outperforms
state-of-the-art methods, highlighting the effectiveness of the proposed
representation and significantly enhancing the generalization capabilities of
object pose estimation, which is crucial for real-world applications.
|
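The final pose step named in the abstract above (2D-3D correspondences from box corners plus a PnP solve) is standard and can be sketched directly with OpenCV's `cv2.solvePnP`. The corner values, synthetic 2D detections, and intrinsics below are made up for illustration.

```python
# Sketch: recover object pose from eight bounding-box corners via PnP.
import numpy as np
import cv2

w, h, d = 0.1, 0.2, 0.15                   # box half-extents (meters)
corners_3d = np.array([[x, y, z] for x in (-w, w)
                       for y in (-h, h) for z in (-d, d)], dtype=np.float64)
corners_2d = np.array([[320 + 400 * x / (z + 1.0), 240 + 400 * y / (z + 1.0)]
                       for x, y, z in corners_3d])   # synthetic detections
K = np.array([[400.0, 0, 320], [0, 400.0, 240], [0, 0, 1]])

ok, rvec, tvec = cv2.solvePnP(corners_3d, corners_2d, K, distCoeffs=None,
                              flags=cv2.SOLVEPNP_EPNP)
print(ok, tvec.ravel())                    # expect translation near (0, 0, 1)
```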
2504.07959 | Dongyoung Kim | Dongyoung Kim, Mahmoud Afifi, Dongyun Kim, Michael S. Brown, Seon Joo
Kim | CCMNet: Leveraging Calibrated Color Correction Matrices for Cross-Camera
Color Constancy | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computational color constancy, or white balancing, is a key module in a
camera's image signal processor (ISP) that corrects color casts from scene
lighting. Because this operation occurs in the camera-specific raw color space,
white balance algorithms must adapt to different cameras. This paper introduces
a learning-based method for cross-camera color constancy that generalizes to
new cameras without retraining. Our method leverages pre-calibrated color
correction matrices (CCMs) available on ISPs that map the camera's raw color
space to a standard space (e.g., CIE XYZ). Our method uses these CCMs to
transform predefined illumination colors (i.e., along the Planckian locus) into
the test camera's raw space. The mapped illuminants are encoded into a compact
camera fingerprint embedding (CFE) that enables the network to adapt to unseen
cameras. To prevent overfitting due to limited cameras and CCMs during
training, we introduce a data augmentation technique that interpolates between
cameras and their CCMs. Experimental results across multiple datasets and
backbones show that our method achieves state-of-the-art cross-camera color
constancy while remaining lightweight and relying only on data readily
available in camera ISPs.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 17:59:31 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Kim",
"Dongyoung",
""
],
[
"Afifi",
"Mahmoud",
""
],
[
"Kim",
"Dongyun",
""
],
[
"Brown",
"Michael S.",
""
],
[
"Kim",
"Seon Joo",
""
]
] | TITLE: CCMNet: Leveraging Calibrated Color Correction Matrices for Cross-Camera
Color Constancy
ABSTRACT: Computational color constancy, or white balancing, is a key module in a
camera's image signal processor (ISP) that corrects color casts from scene
lighting. Because this operation occurs in the camera-specific raw color space,
white balance algorithms must adapt to different cameras. This paper introduces
a learning-based method for cross-camera color constancy that generalizes to
new cameras without retraining. Our method leverages pre-calibrated color
correction matrices (CCMs) available on ISPs that map the camera's raw color
space to a standard space (e.g., CIE XYZ). Our method uses these CCMs to
transform predefined illumination colors (i.e., along the Planckian locus) into
the test camera's raw space. The mapped illuminants are encoded into a compact
camera fingerprint embedding (CFE) that enables the network to adapt to unseen
cameras. To prevent overfitting due to limited cameras and CCMs during
training, we introduce a data augmentation technique that interpolates between
cameras and their CCMs. Experimental results across multiple datasets and
backbones show that our method achieves state-of-the-art cross-camera color
constancy while remaining lightweight and relying only on data readily
available in camera ISPs.
|
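The camera-fingerprint idea in the abstract above can be sketched as follows: map reference illuminant colors from CIE XYZ into a camera's raw space by inverting its calibrated raw-to-XYZ CCM, then concatenate the resulting chromaticities as a compact descriptor. The CCM and the two-illuminant choice are placeholders, not calibrated values (the D65 and A whitepoints are standard CIE figures).

```python
# Sketch of a CCM-based camera fingerprint embedding.
import numpy as np

ccm_raw_to_xyz = np.array([[0.7, 0.2, 0.1],    # hypothetical calibrated CCM
                           [0.3, 0.6, 0.1],
                           [0.1, 0.2, 0.7]])

# Whitepoints of CIE illuminants D65 and A (XYZ, Y normalized to 1).
illuminants_xyz = np.array([[0.9504, 1.0, 1.0888],
                            [1.0985, 1.0, 0.3558]])

raw = illuminants_xyz @ np.linalg.inv(ccm_raw_to_xyz).T  # XYZ -> camera raw
chroma = raw / raw.sum(axis=1, keepdims=True)            # r,g,b chromaticity
fingerprint = chroma.ravel()                             # feed to the network
print(fingerprint)
```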
2504.07960 | Zhongyu Li | Zhong-Yu Li, Ruoyi Du, Juncheng Yan, Le Zhuo, Zhen Li, Peng Gao,
Zhanyu Ma, Ming-Ming Cheng | VisualCloze: A Universal Image Generation Framework via Visual
In-Context Learning | Project page: https://visualcloze.github.io/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recent progress in diffusion models significantly advances various image
generation tasks. However, the current mainstream approach remains focused on
building task-specific models, which have limited efficiency when supporting a
wide range of different needs. While universal models attempt to address this
limitation, they face critical challenges, including generalizable task
instruction, appropriate task distributions, and unified architectural design.
To tackle these challenges, we propose VisualCloze, a universal image
generation framework, which supports a wide range of in-domain tasks,
generalization to unseen ones, unseen unification of multiple tasks, and
reverse generation. Unlike existing methods that rely on language-based task
instruction, leading to task ambiguity and weak generalization, we integrate
visual in-context learning, allowing models to identify tasks from visual
demonstrations. Meanwhile, the inherent sparsity of visual task distributions
hampers the learning of transferable knowledge across tasks. To this end, we
introduce Graph200K, a graph-structured dataset that establishes various
interrelated tasks, enhancing task density and transferable knowledge.
Furthermore, we uncover that our unified image generation formulation shares a
consistent objective with image infilling, enabling us to leverage the strong
generative priors of pre-trained infilling models without modifying the
architectures.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 17:59:42 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Li",
"Zhong-Yu",
""
],
[
"Du",
"Ruoyi",
""
],
[
"Yan",
"Juncheng",
""
],
[
"Zhuo",
"Le",
""
],
[
"Li",
"Zhen",
""
],
[
"Gao",
"Peng",
""
],
[
"Ma",
"Zhanyu",
""
],
[
"Cheng",
"Ming-Ming",
""
]
] | TITLE: VisualCloze: A Universal Image Generation Framework via Visual
In-Context Learning
ABSTRACT: Recent progress in diffusion models significantly advances various image
generation tasks. However, the current mainstream approach remains focused on
building task-specific models, which have limited efficiency when supporting a
wide range of different needs. While universal models attempt to address this
limitation, they face critical challenges, including generalizable task
instruction, appropriate task distributions, and unified architectural design.
To tackle these challenges, we propose VisualCloze, a universal image
generation framework, which supports a wide range of in-domain tasks,
generalization to unseen ones, unseen unification of multiple tasks, and
reverse generation. Unlike existing methods that rely on language-based task
instruction, leading to task ambiguity and weak generalization, we integrate
visual in-context learning, allowing models to identify tasks from visual
demonstrations. Meanwhile, the inherent sparsity of visual task distributions
hampers the learning of transferable knowledge across tasks. To this end, we
introduce Graph200K, a graph-structured dataset that establishes various
interrelated tasks, enhancing task density and transferable knowledge.
Furthermore, we uncover that our unified image generation formulation shares a
consistent objective with image infilling, enabling us to leverage the strong
generative priors of pre-trained infilling models without modifying the
architectures.
|
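One way to read the infilling formulation above is as a cloze over a grid image: demonstrations and the query are tiled together, and the unknown target cell is masked for a pre-trained infilling model to complete. The `infill_model` callable is hypothetical and the 2x2 layout is an assumption; the real pipeline's conditioning is more involved.

```python
# Sketch of visual in-context learning as image infilling over a grid.
import numpy as np

def build_cloze_grid(demo_in, demo_out, query_in, cell=256):
    grid = np.zeros((2 * cell, 2 * cell, 3), dtype=np.uint8)
    grid[:cell, :cell] = demo_in        # top-left: demonstration input
    grid[:cell, cell:] = demo_out       # top-right: demonstration output
    grid[cell:, :cell] = query_in       # bottom-left: query input
    mask = np.zeros(grid.shape[:2], dtype=bool)
    mask[cell:, cell:] = True           # bottom-right: cell to be generated
    return grid, mask

def solve(demo_in, demo_out, query_in, infill_model, cell=256):
    grid, mask = build_cloze_grid(demo_in, demo_out, query_in, cell)
    completed = infill_model(grid, mask)    # hypothetical infilling call
    return completed[cell:, cell:]          # the generated target image
```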
2304.04884 | Jie Zhang | Jie Zhang, Minghui Nie, Changqing Zou, Jian Liu, Ligang Liu and Junjie
Cao | PointNorm-Net: Self-Supervised Normal Prediction of 3D Point Clouds via
Multi-Modal Distribution Estimation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Although supervised deep normal estimators have recently shown impressive
results on synthetic benchmarks, their performance deteriorates significantly
in real-world scenarios due to the domain gap between synthetic and real data.
Building high-quality real training data to boost those supervised methods is
not trivial because point-wise annotation of normals for varying-scale
real-world 3D scenes is a tedious and expensive task. This paper introduces
PointNorm-Net, the first self-supervised deep learning framework to tackle this
challenge. The key novelty of PointNorm-Net is a three-stage multi-modal normal
distribution estimation paradigm that can be integrated into either deep or
traditional optimization-based normal estimation frameworks. Extensive
experiments show that our method achieves superior generalization and
outperforms state-of-the-art conventional and deep learning approaches across
three real-world datasets that exhibit distinct characteristics compared to the
synthetic training data.
| [
{
"version": "v1",
"created": "Mon, 10 Apr 2023 22:11:13 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 11:21:48 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Zhang",
"Jie",
""
],
[
"Nie",
"Minghui",
""
],
[
"Zou",
"Changqing",
""
],
[
"Liu",
"Jian",
""
],
[
"Liu",
"Ligang",
""
],
[
"Cao",
"Junjie",
""
]
] | TITLE: PointNorm-Net: Self-Supervised Normal Prediction of 3D Point Clouds via
Multi-Modal Distribution Estimation
ABSTRACT: Although supervised deep normal estimators have recently shown impressive
results on synthetic benchmarks, their performance deteriorates significantly
in real-world scenarios due to the domain gap between synthetic and real data.
Building high-quality real training data to boost those supervised methods is
not trivial because point-wise annotation of normals for varying-scale
real-world 3D scenes is a tedious and expensive task. This paper introduces
PointNorm-Net, the first self-supervised deep learning framework to tackle this
challenge. The key novelty of PointNorm-Net is a three-stage multi-modal normal
distribution estimation paradigm that can be integrated into either deep or
traditional optimization-based normal estimation frameworks. Extensive
experiments show that our method achieves superior generalization and
outperforms state-of-the-art conventional and deep learning approaches across
three real-world datasets that exhibit distinct characteristics compared to the
synthetic training data.
|
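One way to read the "multi-modal normal distribution estimation" described in the PointNorm-Net abstract is sketched below: fitting planes to random neighborhood subsets yields several candidate-normal modes near sharp features, and the dominant mode is taken as the estimate. A simplified illustration under that assumption, not the paper's algorithm.

```python
import numpy as np

def candidate_normals(neighbors, n_draws=50, k=10, rng=None):
    """Estimate a multi-modal distribution of normals for one point.

    Repeatedly fits planes (via SVD/PCA) to random subsets of the
    point's neighborhood; near sharp features this yields several
    modes rather than a single normal.
    """
    rng = rng or np.random.default_rng(0)
    normals = []
    for _ in range(n_draws):
        sub = neighbors[rng.choice(len(neighbors), size=k, replace=False)]
        centered = sub - sub.mean(axis=0)
        # The right-singular vector with smallest singular value is
        # the normal of the best-fit plane through the subset.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        n = vt[-1]
        normals.append(n if n[2] >= 0 else -n)   # consistent orientation
    return np.stack(normals)

def dominant_mode(normals, cos_thresh=0.95):
    """Pick the largest cluster of candidate normals as the estimate."""
    best, best_size = None, 0
    for n in normals:
        members = normals[normals @ n > cos_thresh]
        if len(members) > best_size:
            best_size, best = len(members), members.mean(axis=0)
    return best / np.linalg.norm(best)
```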
2304.14765 | Maruf Ahmed Dhali | Andrei Voinea, Robin Kock, Maruf A. Dhali | LostPaw: Finding Lost Pets using a Contrastive Learning-based
Transformer with Visual Input | 7 Pages, 7 figures | In Proceedings of the 14th International Conference on Pattern
Recognition Applications and Methods ICPRAM - Volume 1, 757-763, 2025 ,
Porto, Portugal | 10.5220/0013261600003905 | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Losing pets can be highly distressing for pet owners, and finding a lost pet
is often challenging and time-consuming. An artificial intelligence-based
application can significantly improve the speed and accuracy of finding lost
pets. To facilitate such an application, this study introduces a contrastive
neural network model capable of accurately distinguishing between images of
pets. The model was trained on a large dataset of dog images and evaluated
through 3-fold cross-validation. Following 350 epochs of training, the model
achieved a test accuracy of 90%. Furthermore, overfitting was avoided, as the
test accuracy closely matched the training accuracy. Our findings suggest that
contrastive neural network models hold promise as a tool for locating lost
pets. This paper presents the foundational framework for a potential web
application designed to assist users in locating their missing pets. The
application will allow users to upload images of their lost pets and provide
notifications when matching images are identified within its image database.
This functionality aims to enhance the efficiency and accuracy with which pet
owners can search for and reunite with their beloved animals.
| [
{
"version": "v1",
"created": "Fri, 28 Apr 2023 11:23:44 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 11:17:26 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Voinea",
"Andrei",
""
],
[
"Kock",
"Robin",
""
],
[
"Dhali",
"Maruf A.",
""
]
] | TITLE: LostPaw: Finding Lost Pets using a Contrastive Learning-based
Transformer with Visual Input
ABSTRACT: Losing pets can be highly distressing for pet owners, and finding a lost pet
is often challenging and time-consuming. An artificial intelligence-based
application can significantly improve the speed and accuracy of finding lost
pets. To facilitate such an application, this study introduces a contrastive
neural network model capable of accurately distinguishing between images of
pets. The model was trained on a large dataset of dog images and evaluated
through 3-fold cross-validation. Following 350 epochs of training, the model
achieved a test accuracy of 90%. Furthermore, overfitting was avoided, as the
test accuracy closely matched the training accuracy. Our findings suggest that
contrastive neural network models hold promise as a tool for locating lost
pets. This paper presents the foundational framework for a potential web
application designed to assist users in locating their missing pets. The
application will allow users to upload images of their lost pets and provide
notifications when matching images are identified within its image database.
This functionality aims to enhance the efficiency and accuracy with which pet
owners can search for and reunite with their beloved animals.
|
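The matching step behind such an application can be illustrated in a few lines: a contrastively trained encoder maps photos to embeddings, and cosine similarity against a database ranks candidates. The `encoder` interface and the 0.8 threshold are assumptions for illustration, not LostPaw's actual code.

```python
import torch
import torch.nn.functional as F

def match_score(encoder, img_a, img_b):
    """Cosine similarity between two pet photos under an encoder.

    `encoder` is any network mapping an image tensor to an embedding;
    contrastive training pushes photos of the same pet together and
    photos of different pets apart.
    """
    with torch.no_grad():
        za = F.normalize(encoder(img_a.unsqueeze(0)), dim=-1)
        zb = F.normalize(encoder(img_b.unsqueeze(0)), dim=-1)
    return (za * zb).sum().item()        # cosine similarity in [-1, 1]

def find_candidates(encoder, query, database, threshold=0.8):
    """Return indices of database images likely showing the same pet."""
    scores = [match_score(encoder, query, img) for img in database]
    return [i for i, s in enumerate(scores) if s >= threshold]
```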
2305.09958 | Haoyu Liu | Haoyu Liu, Ningyi Liao, Siqiang Luo | SIGMA: An Efficient Heterophilous Graph Neural Network with Fast Global
Aggregation | Accepted to ICDE 2025 | null | null | null | cs.LG cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph neural networks (GNNs) achieve great success in graph learning but
suffer from performance loss when facing heterophily, i.e., neighboring nodes
are dissimilar, due to their local and uniform aggregation. Existing attempts
at heterophilous GNNs incorporate long-range or global aggregations to
distinguish nodes in the graph. However, these aggregations usually require
iteratively maintaining and updating full-graph information, which limits their
efficiency when applied to large-scale graphs. In this paper, we propose
SIGMA, an efficient global heterophilous GNN aggregation integrating the
structural similarity measurement SimRank. Our theoretical analysis illustrates
that SIGMA inherently captures distant global similarity even under
heterophily, which conventional approaches can only achieve after iterative
aggregations. Furthermore, it enjoys efficient one-time computation with a
complexity only linear in the node set size, $\mathcal{O}(n)$. Comprehensive
evaluation demonstrates that SIGMA achieves state-of-the-art performance with
superior aggregation and overall efficiency. Notably, it obtains $5\times$
acceleration on the large-scale heterophily dataset pokec with over 30 million
edges compared to the best baseline aggregation.
| [
{
"version": "v1",
"created": "Wed, 17 May 2023 05:35:49 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Aug 2024 10:24:09 GMT"
},
{
"version": "v3",
"created": "Tue, 6 Aug 2024 02:32:05 GMT"
},
{
"version": "v4",
"created": "Wed, 9 Apr 2025 07:19:32 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Liu",
"Haoyu",
""
],
[
"Liao",
"Ningyi",
""
],
[
"Luo",
"Siqiang",
""
]
] | TITLE: SIGMA: An Efficient Heterophilous Graph Neural Network with Fast Global
Aggregation
ABSTRACT: Graph neural networks (GNNs) achieve great success in graph learning but
suffer from performance loss when facing heterophily, i.e., neighboring nodes
are dissimilar, due to their local and uniform aggregation. Existing attempts
at heterophilous GNNs incorporate long-range or global aggregations to
distinguish nodes in the graph. However, these aggregations usually require
iteratively maintaining and updating full-graph information, which limits their
efficiency when applied to large-scale graphs. In this paper, we propose
SIGMA, an efficient global heterophilous GNN aggregation integrating the
structural similarity measurement SimRank. Our theoretical analysis illustrates
that SIGMA inherently captures distant global similarity even under
heterophily, which conventional approaches can only achieve after iterative
aggregations. Furthermore, it enjoys efficient one-time computation with a
complexity only linear in the node set size, $\mathcal{O}(n)$. Comprehensive
evaluation demonstrates that SIGMA achieves state-of-the-art performance with
superior aggregation and overall efficiency. Notably, it obtains $5\times$
acceleration on the large-scale heterophily dataset pokec with over 30 million
edges compared to the best baseline aggregation.
|
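For intuition about the SimRank scores SIGMA builds on, the textbook definition can be computed naively on a small graph and used as global aggregation weights. This brute-force loop is for illustration only; SIGMA's contribution is precisely avoiding this cost with a scalable one-time computation.

```python
import numpy as np

def simrank(adj, c=0.6, iters=10):
    """Naive SimRank on a small directed graph (adjacency matrix).

    s(a, b) measures how similar two nodes are based on their
    in-neighborhoods; scores are high even for distant nodes with
    similar structural roles, which is what enables aggregation
    beyond the local neighborhood under heterophily.
    """
    n = adj.shape[0]
    s = np.eye(n)
    in_nbrs = [np.nonzero(adj[:, v])[0] for v in range(n)]
    for _ in range(iters):
        new = np.eye(n)
        for a in range(n):
            for b in range(n):
                if a == b or not len(in_nbrs[a]) or not len(in_nbrs[b]):
                    continue
                total = s[np.ix_(in_nbrs[a], in_nbrs[b])].sum()
                new[a, b] = c * total / (len(in_nbrs[a]) * len(in_nbrs[b]))
        s = new
    return s

def simrank_aggregate(x, s):
    """Aggregate node features with row-normalized SimRank weights."""
    w = s / np.maximum(s.sum(axis=1, keepdims=True), 1e-12)
    return w @ x
```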
2305.18450 | Qin Xie | Qin Xie, Qinghua Zhang, Shuyin Xia, Fan Zhao, Chengying Wu, Guoyin
Wang and Weiping Ding | GBG++: A Fast and Stable Granular Ball Generation Method for
Classification | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Granular ball computing (GBC), as an efficient, robust, and scalable learning
method, has become a popular research topic of granular computing. GBC includes
two stages: granular ball generation (GBG) and multi-granularity learning based
on the granular ball (GB). However, the stability and efficiency of existing
GBG methods need to be further improved due to their strong dependence on
$k$-means or $k$-division. In addition, GB-based classifiers only unilaterally
consider the GB's geometric characteristics to construct classification rules,
but the GB's quality is ignored. Therefore, in this paper, based on the
attention mechanism, a fast and stable GBG (GBG++) method is proposed first.
Specifically, the proposed GBG++ method only needs to calculate the distances
from the data-driven center to the undivided samples when splitting each GB
instead of randomly selecting the center and calculating the distances between
it and all samples. Moreover, an outlier detection method is introduced to
identify local outliers. Consequently, the GBG++ method can significantly
improve effectiveness, robustness, and efficiency while being absolutely
stable. Second, considering the influence of the sample size within the GB on
the GB's quality, based on the GBG++ method, an improved GB-based $k$-nearest
neighbors algorithm (GB$k$NN++) is presented, which can reduce
misclassification at the class boundary. Finally, the experimental results
indicate that the proposed method outperforms several existing GB-based
classifiers and classical machine learning classifiers on $24$ public benchmark
datasets. The implementation code of experiments is available at
https://github.com/CherylTse/GBG-plusplus.
| [
{
"version": "v1",
"created": "Mon, 29 May 2023 04:00:19 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Nov 2023 15:09:49 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Apr 2025 02:25:03 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Xie",
"Qin",
""
],
[
"Zhang",
"Qinghua",
""
],
[
"Xia",
"Shuyin",
""
],
[
"Zhao",
"Fan",
""
],
[
"Wu",
"Chengying",
""
],
[
"Wang",
"Guoyin",
""
],
[
"Ding",
"Weiping",
""
]
] | TITLE: GBG++: A Fast and Stable Granular Ball Generation Method for
Classification
ABSTRACT: Granular ball computing (GBC), as an efficient, robust, and scalable learning
method, has become a popular research topic of granular computing. GBC includes
two stages: granular ball generation (GBG) and multi-granularity learning based
on the granular ball (GB). However, the stability and efficiency of existing
GBG methods need to be further improved due to their strong dependence on
$k$-means or $k$-division. In addition, GB-based classifiers only unilaterally
consider the GB's geometric characteristics to construct classification rules,
but the GB's quality is ignored. Therefore, in this paper, based on the
attention mechanism, a fast and stable GBG (GBG++) method is proposed first.
Specifically, the proposed GBG++ method only needs to calculate the distances
from the data-driven center to the undivided samples when splitting each GB
instead of randomly selecting the center and calculating the distances between
it and all samples. Moreover, an outlier detection method is introduced to
identify local outliers. Consequently, the GBG++ method can significantly
improve effectiveness, robustness, and efficiency while being absolutely
stable. Second, considering the influence of the sample size within the GB on
the GB's quality, based on the GBG++ method, an improved GB-based $k$-nearest
neighbors algorithm (GB$k$NN++) is presented, which can reduce
misclassification at the class boundary. Finally, the experimental results
indicate that the proposed method outperforms several existing GB-based
classifiers and classical machine learning classifiers on $24$ public benchmark
datasets. The implementation code of experiments is available at
https://github.com/CherylTse/GBG-plusplus.
|
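A loose sketch of the granular ball generation loop described above, under the assumption that the "data-driven center" can be stood in for by the ball mean: each impure ball computes one pass of distances from its center and splits into a near/far pair until label purity is reached. The purity threshold and median split are illustrative choices, not the paper's exact rule.

```python
import numpy as np

def generate_balls(X, y, purity=0.95):
    """Grow granular balls by splitting impure ones (rough sketch).

    Each ball's center is data-driven (here, the mean); distances are
    computed once from that center to the ball's own samples, rather
    than from randomly selected centers to all samples.
    """
    balls, queue = [], [(X, y)]
    while queue:
        bx, by = queue.pop()
        _, counts = np.unique(by, return_counts=True)
        if counts.max() / len(by) >= purity or len(by) <= 2:
            balls.append((bx.mean(axis=0), bx, by))    # center, members
            continue
        center = bx.mean(axis=0)
        d = np.linalg.norm(bx - center, axis=1)        # one distance pass
        near = d <= np.median(d)
        if near.all() or (~near).all():                # cannot split further
            balls.append((center, bx, by))
            continue
        queue.append((bx[near], by[near]))
        queue.append((bx[~near], by[~near]))
    return balls
```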
2309.02583 | Md Ferdous Alam | Md Ferdous Alam, Yi Wang, Chin-Yi Cheng, Jieliang Luo | Representation Learning for Sequential Volumetric Design Tasks | 12 pages, 12 figures | null | 10.1115/1.4066686 | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Volumetric design, also called massing design, is the first and critical step
in professional building design, which is sequential in nature. As the
volumetric design process requires careful design decisions and iterative
adjustments, the underlying sequential design process encodes valuable
information for designers. Many efforts have been made to automatically
generate reasonable volumetric designs, but the quality of the generated design
solutions varies, and evaluating a design solution requires either a
prohibitively comprehensive set of metrics or expensive human expertise. While
previous approaches focused on learning only the final design instead of
sequential design tasks, we propose to encode the design knowledge from a
collection of expert or high-performing design sequences and extract useful
representations using transformer-based models. Later we propose to utilize the
learned representations for crucial downstream applications such as design
preference evaluation and procedural design generation. We develop the
preference model by estimating the density of the learned representations
whereas we train an autoregressive transformer model for sequential design
generation. We demonstrate our ideas by leveraging a novel dataset of thousands
of sequential volumetric designs. Our preference model can compare two
arbitrarily given design sequences and is almost $90\%$ accurate in evaluation
against random design sequences. Our autoregressive model is also capable of
autocompleting a volumetric design sequence from a partial design sequence.
| [
{
"version": "v1",
"created": "Tue, 5 Sep 2023 21:21:06 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Sep 2024 17:28:47 GMT"
},
{
"version": "v3",
"created": "Mon, 2 Dec 2024 22:33:40 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Alam",
"Md Ferdous",
""
],
[
"Wang",
"Yi",
""
],
[
"Cheng",
"Chin-Yi",
""
],
[
"Luo",
"Jieliang",
""
]
] | TITLE: Representation Learning for Sequential Volumetric Design Tasks
ABSTRACT: Volumetric design, also called massing design, is the first and critical step
in professional building design, which is sequential in nature. As the
volumetric design process requires careful design decisions and iterative
adjustments, the underlying sequential design process encodes valuable
information for designers. Many efforts have been made to automatically
generate reasonable volumetric designs, but the quality of the generated design
solutions varies, and evaluating a design solution requires either a
prohibitively comprehensive set of metrics or expensive human expertise. While
previous approaches focused on learning only the final design instead of
sequential design tasks, we propose to encode the design knowledge from a
collection of expert or high-performing design sequences and extract useful
representations using transformer-based models. Later we propose to utilize the
learned representations for crucial downstream applications such as design
preference evaluation and procedural design generation. We develop the
preference model by estimating the density of the learned representations
whereas we train an autoregressive transformer model for sequential design
generation. We demonstrate our ideas by leveraging a novel dataset of thousands
of sequential volumetric designs. Our preference model can compare two
arbitrarily given design sequences and is almost $90\%$ accurate in evaluation
against random design sequences. Our autoregressive model is also capable of
autocompleting a volumetric design sequence from a partial design sequence.
|
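The preference model described above scores a design by the density of its learned representation under the distribution of expert sequences. A minimal sketch, assuming a Gaussian KDE as the density estimator (the embeddings themselves would come from the transformer, and more expert samples than embedding dimensions are needed for the KDE to be well-posed):

```python
import numpy as np
from scipy.stats import gaussian_kde

def fit_preference_model(expert_embeddings):
    """Density model over learned sequence representations.

    `expert_embeddings` has shape (n_sequences, dim) with n > dim;
    gaussian_kde expects (dim, n), hence the transpose.
    """
    return gaussian_kde(expert_embeddings.T)

def prefer(kde, emb_a, emb_b):
    """Return which of two design sequences the model prefers,
    i.e., whose representation is more likely under expert data."""
    return "A" if kde(emb_a[:, None]) > kde(emb_b[:, None]) else "B"
```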
2310.01038 | Jiahao Wu | Jiahao Wu and Wenqi Fan and Jingfan Chen and Shengcai Liu and Qijiong
Liu and Rui He and Qing Li and Ke Tang | Dataset Condensation for Recommendation | Accepted by IEEE TKDE. Previously titled as "Condensing Pre-augmented
Recommendation Data via Lightweight Policy Gradient Estimation" | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Training recommendation models on large datasets requires significant time
and resources. It is therefore desirable to construct concise yet informative datasets for
efficient training. Recent advances in dataset condensation show promise in
addressing this problem by synthesizing small datasets. However, applying
existing methods of dataset condensation to recommendation has limitations: (1)
they fail to generate discrete user-item interactions, and (2) they could not
preserve users' potential preferences. To address the limitations, we propose a
lightweight condensation framework tailored for recommendation (DConRec),
focusing on condensing user-item historical interaction sets. Specifically, we
model the discrete user-item interactions via a probabilistic approach and
design a pre-augmentation module to incorporate the potential preferences of
users into the condensed datasets. While the substantial size of datasets leads
to costly optimization, we propose a lightweight policy gradient estimation to
accelerate the data synthesis. Experimental results on multiple real-world
datasets have demonstrated the effectiveness and efficiency of our framework.
Besides, we provide a theoretical analysis of the provable convergence of
DConRec. Our implementation is available at:
https://github.com/JiahaoWuGit/DConRec.
| [
{
"version": "v1",
"created": "Mon, 2 Oct 2023 09:30:11 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Oct 2024 18:35:41 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Apr 2025 07:41:22 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Wu",
"Jiahao",
""
],
[
"Fan",
"Wenqi",
""
],
[
"Chen",
"Jingfan",
""
],
[
"Liu",
"Shengcai",
""
],
[
"Liu",
"Qijiong",
""
],
[
"He",
"Rui",
""
],
[
"Li",
"Qing",
""
],
[
"Tang",
"Ke",
""
]
] | TITLE: Dataset Condensation for Recommendation
ABSTRACT: Training recommendation models on large datasets requires significant time
and resources. It is therefore desirable to construct concise yet informative datasets for
efficient training. Recent advances in dataset condensation show promise in
addressing this problem by synthesizing small datasets. However, applying
existing methods of dataset condensation to recommendation has limitations: (1)
they fail to generate discrete user-item interactions, and (2) they cannot
preserve users' potential preferences. To address these limitations, we propose a
lightweight condensation framework tailored for recommendation (DConRec),
focusing on condensing user-item historical interaction sets. Specifically, we
model the discrete user-item interactions via a probabilistic approach and
design a pre-augmentation module to incorporate the potential preferences of
users into the condensed datasets. While the substantial size of datasets leads
to costly optimization, we propose a lightweight policy gradient estimation to
accelerate the data synthesis. Experimental results on multiple real-world
datasets have demonstrated the effectiveness and efficiency of our framework.
Besides, we provide a theoretical analysis of the provable convergence of
DConRec. Our implementation is available at:
https://github.com/JiahaoWuGit/DConRec.
|
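The two key ingredients in the DConRec abstract, a probabilistic model of discrete interactions and a policy gradient estimate that avoids differentiating through sampling, can be sketched with a plain score-function (REINFORCE) estimator. `reward_fn` stands in for the (negative) training loss of a recommender fit on the sampled data, which is an assumption here; the paper's lightweight estimator is more refined.

```python
import numpy as np

def sample_interactions(p, rng):
    """Draw a discrete user-item interaction matrix from probabilities."""
    return (rng.random(p.shape) < p).astype(np.float64)

def policy_gradient_step(p, reward_fn, lr=0.1, n_samples=8, seed=0):
    """Score-function (REINFORCE) update of interaction probabilities.

    Keeps the condensed dataset discrete by sampling it from a
    probabilistic matrix while estimating gradients of the expected
    reward without backpropagating through the sampling step.
    """
    rng = np.random.default_rng(seed)
    grad = np.zeros_like(p)
    for _ in range(n_samples):
        s = sample_interactions(p, rng)
        # d/dp log Bernoulli(s; p) = (s - p) / (p * (1 - p))
        score = (s - p) / np.clip(p * (1 - p), 1e-6, None)
        grad += reward_fn(s) * score
    p += lr * grad / n_samples
    return np.clip(p, 1e-3, 1 - 1e-3)
```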
2311.12047 | Jiali Cheng | Jiali Cheng, Hadi Amiri | MultiDelete for Multimodal Machine Unlearning | ECCV 2024 | null | null | null | cs.AI cs.CL cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Machine Unlearning removes specific knowledge about training data samples
from an already trained model. It has significant practical benefits, such as
purging private, inaccurate, or outdated information from trained models
without the need for complete re-training. Unlearning within a multimodal
setting presents unique challenges due to the complex dependencies between
different data modalities and the expensive cost of training on large
multimodal datasets and architectures. This paper presents the first machine
unlearning approach for multimodal data and models, titled MultiDelete, which
is designed to decouple associations between unimodal data points during
unlearning without losing the overall representation strength of the trained
model. MultiDelete advocates for three key properties for effective multimodal
unlearning: (a): modality decoupling, which effectively decouples the
association between individual unimodal data points marked for deletion,
rendering them as unrelated data points, (b): multimodal knowledge retention,
which retains the multimodal representation post-unlearning, and (c): unimodal
knowledge retention, which retains the unimodal representation post-unlearning.
MultiDelete is efficient to train and is not constrained by using a strongly
convex loss -- a common restriction among existing baselines. Experiments on
two architectures and four datasets, including image-text and graph-text
datasets, show that MultiDelete gains an average improvement of 17.6 points
over best performing baseline in unlearning multimodal samples, can maintain
the multimodal and unimodal knowledge of the original model post unlearning,
and can provide better protection to unlearned data against adversarial
attacks.
| [
{
"version": "v1",
"created": "Sat, 18 Nov 2023 08:30:38 GMT"
},
{
"version": "v2",
"created": "Mon, 15 Jul 2024 01:40:54 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Cheng",
"Jiali",
""
],
[
"Amiri",
"Hadi",
""
]
] | TITLE: MultiDelete for Multimodal Machine Unlearning
ABSTRACT: Machine Unlearning removes specific knowledge about training data samples
from an already trained model. It has significant practical benefits, such as
purging private, inaccurate, or outdated information from trained models
without the need for complete re-training. Unlearning within a multimodal
setting presents unique challenges due to the complex dependencies between
different data modalities and the expensive cost of training on large
multimodal datasets and architectures. This paper presents the first machine
unlearning approach for multimodal data and models, titled MultiDelete, which
is designed to decouple associations between unimodal data points during
unlearning without losing the overall representation strength of the trained
model. MultiDelete advocates for three key properties for effective multimodal
unlearning: (a): modality decoupling, which effectively decouples the
association between individual unimodal data points marked for deletion,
rendering them as unrelated data points, (b): multimodal knowledge retention,
which retains the multimodal representation post-unlearning, and (c): unimodal
knowledge retention, which retains the unimodal representation post-unlearning.
MultiDelete is efficient to train and is not constrained by using a strongly
convex loss -- a common restriction among existing baselines. Experiments on
two architectures and four datasets, including image-text and graph-text
datasets, show that MultiDelete gains an average improvement of 17.6 points
over best performing baseline in unlearning multimodal samples, can maintain
the multimodal and unimodal knowledge of the original model post unlearning,
and can provide better protection to unlearned data against adversarial
attacks.
|
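A toy objective in the spirit of MultiDelete's first two properties: push apart the image/text embeddings of forget-set pairs (modality decoupling) while keeping retain-set pairs aligned (multimodal knowledge retention). The exact losses and weights in the paper differ; this is a minimal sketch of the idea, not the authors' method.

```python
import torch
import torch.nn.functional as F

def multidelete_style_loss(img_f, txt_f, img_r, txt_r, margin=0.2):
    """Toy multimodal unlearning loss over (batch, dim) embeddings.

    - decouple: drive forget-pair similarity below `margin`, so the
      deleted pairs behave like unrelated data points;
    - retain: keep retain-pair similarity near 1, preserving the
      model's multimodal representation.
    """
    sim_forget = F.cosine_similarity(img_f, txt_f, dim=-1)
    sim_retain = F.cosine_similarity(img_r, txt_r, dim=-1)
    decouple = F.relu(sim_forget - margin).mean()
    retain = (1.0 - sim_retain).mean()
    return decouple + retain

# Usage: loss = multidelete_style_loss(fi, ft, ri, rt); loss.backward()
```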
2402.00786 | Manuel Faysse | Manuel Faysse, Patrick Fernandes, Nuno M. Guerreiro, António Loison,
Duarte M. Alves, Caio Corro, Nicolas Boizard, João Alves, Ricardo Rei,
Pedro H. Martins, Antoni Bigata Casademunt, François Yvon, André F.T.
Martins, Gautier Viaud, Céline Hudelot, Pierre Colombo | CroissantLLM: A Truly Bilingual French-English Language Model | null | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce CroissantLLM, a 1.3B language model pretrained on a set of 3T
English and French tokens, to bring to the research and industrial community a
high-performance, fully open-sourced bilingual model that runs swiftly on
consumer-grade local hardware. To that end, we pioneer the approach of training
an intrinsically bilingual model with a 1:1 English-to-French pretraining data
ratio, a custom tokenizer, and bilingual finetuning datasets. We release the
training dataset, notably containing a French split with manually curated,
high-quality, and varied data sources. To assess performance outside of
English, we craft a novel benchmark, FrenchBench, consisting of an array of
classification and generation tasks, covering various orthogonal aspects of
model performance in the French Language. Additionally, rooted in transparency
and to foster further Large Language Model research, we release codebases, and
dozens of checkpoints across various model sizes, training data distributions,
and training steps, as well as fine-tuned Chat models, and strong translation
models. We evaluate our model through the FMTI framework and validate 81% of
the transparency criteria, far beyond the scores of even most open initiatives.
This work enriches the NLP landscape, breaking away from previous
English-centric work in order to strengthen our understanding of
multilinguality in language models.
| [
{
"version": "v1",
"created": "Thu, 1 Feb 2024 17:17:55 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Feb 2024 17:43:41 GMT"
},
{
"version": "v3",
"created": "Tue, 13 Feb 2024 17:12:26 GMT"
},
{
"version": "v4",
"created": "Fri, 29 Mar 2024 14:56:42 GMT"
},
{
"version": "v5",
"created": "Wed, 9 Apr 2025 09:45:01 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Faysse",
"Manuel",
""
],
[
"Fernandes",
"Patrick",
""
],
[
"Guerreiro",
"Nuno M.",
""
],
[
"Loison",
"António",
""
],
[
"Alves",
"Duarte M.",
""
],
[
"Corro",
"Caio",
""
],
[
"Boizard",
"Nicolas",
""
],
[
"Alves",
"João",
""
],
[
"Rei",
"Ricardo",
""
],
[
"Martins",
"Pedro H.",
""
],
[
"Casademunt",
"Antoni Bigata",
""
],
[
"Yvon",
"François",
""
],
[
"Martins",
"André F. T.",
""
],
[
"Viaud",
"Gautier",
""
],
[
"Hudelot",
"Céline",
""
],
[
"Colombo",
"Pierre",
""
]
] | TITLE: CroissantLLM: A Truly Bilingual French-English Language Model
ABSTRACT: We introduce CroissantLLM, a 1.3B language model pretrained on a set of 3T
English and French tokens, to bring to the research and industrial community a
high-performance, fully open-sourced bilingual model that runs swiftly on
consumer-grade local hardware. To that end, we pioneer the approach of training
an intrinsically bilingual model with a 1:1 English-to-French pretraining data
ratio, a custom tokenizer, and bilingual finetuning datasets. We release the
training dataset, notably containing a French split with manually curated,
high-quality, and varied data sources. To assess performance outside of
English, we craft a novel benchmark, FrenchBench, consisting of an array of
classification and generation tasks, covering various orthogonal aspects of
model performance in the French Language. Additionally, rooted in transparency
and to foster further Large Language Model research, we release codebases, and
dozens of checkpoints across various model sizes, training data distributions,
and training steps, as well as fine-tuned Chat models, and strong translation
models. We evaluate our model through the FMTI framework and validate 81% of
the transparency criteria, far beyond the scores of even most open initiatives.
This work enriches the NLP landscape, breaking away from previous
English-centric work in order to strengthen our understanding of
multilinguality in language models.
|
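The 1:1 English-to-French pretraining ratio that CroissantLLM advocates can be pictured with a toy batch sampler that draws half of each batch from each language stream. This is an illustrative sketch of the data mixing only, not the authors' pipeline.

```python
import itertools
import random

def bilingual_batches(en_docs, fr_docs, batch_size=8, seed=0):
    """Yield batches with a 1:1 English-to-French document budget.

    Streams cycle independently, so the ratio holds even when one
    corpus is shorter; the generator is infinite by design.
    """
    rng = random.Random(seed)
    en, fr = itertools.cycle(en_docs), itertools.cycle(fr_docs)
    while True:
        batch = [next(en) for _ in range(batch_size // 2)]
        batch += [next(fr) for _ in range(batch_size // 2)]
        rng.shuffle(batch)   # avoid fixed language ordering in a batch
        yield batch
```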
2402.01359 | Shae McFadden | Zeliang Kan, Shae McFadden, Daniel Arp, Feargus Pendlebury, Roberto
Jordaney, Johannes Kinder, Fabio Pierazzi, Lorenzo Cavallaro | TESSERACT: Eliminating Experimental Bias in Malware Classification
across Space and Time (Extended Version) | 30 pages. arXiv admin note: text overlap with arXiv:1807.07838 | null | null | null | cs.LG cs.CR cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine learning (ML) plays a pivotal role in detecting malicious software.
Despite the high F1-scores reported in numerous studies reaching upwards of
0.99, the issue is not completely solved. Malware detectors often experience
performance decay due to constantly evolving operating systems and attack
methods, which can render previously learned knowledge insufficient for
accurate decision-making on new inputs. This paper argues that commonly
reported results are inflated due to two pervasive sources of experimental bias
in the detection task: spatial bias caused by data distributions that are not
representative of a real-world deployment; and temporal bias caused by
incorrect time splits of data, leading to unrealistic configurations. To
address these biases, we introduce a set of constraints for fair experiment
design, and propose a new metric, AUT, for classifier robustness in real-world
settings. We additionally propose an algorithm designed to tune training data
to enhance classifier performance. Finally, we present TESSERACT, an
open-source framework for realistic classifier comparison. Our evaluation
encompasses both traditional ML and deep learning methods, examining published
works on an extensive Android dataset with 259,230 samples over a five-year
span. Additionally, we conduct case studies in the Windows PE and PDF domains.
Our findings identify the existence of biases in previous studies and reveal
that significant performance enhancements are possible through appropriate,
periodic tuning. We explore how mitigation strategies may help achieve
more stable and better performance over time by employing multiple techniques
to delay performance decay.
| [
{
"version": "v1",
"created": "Fri, 2 Feb 2024 12:27:32 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 12:32:21 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Kan",
"Zeliang",
""
],
[
"McFadden",
"Shae",
""
],
[
"Arp",
"Daniel",
""
],
[
"Pendlebury",
"Feargus",
""
],
[
"Jordaney",
"Roberto",
""
],
[
"Kinder",
"Johannes",
""
],
[
"Pierazzi",
"Fabio",
""
],
[
"Cavallaro",
"Lorenzo",
""
]
] | TITLE: TESSERACT: Eliminating Experimental Bias in Malware Classification
across Space and Time (Extended Version)
ABSTRACT: Machine learning (ML) plays a pivotal role in detecting malicious software.
Despite the high F1-scores reported in numerous studies reaching upwards of
0.99, the issue is not completely solved. Malware detectors often experience
performance decay due to constantly evolving operating systems and attack
methods, which can render previously learned knowledge insufficient for
accurate decision-making on new inputs. This paper argues that commonly
reported results are inflated due to two pervasive sources of experimental bias
in the detection task: spatial bias caused by data distributions that are not
representative of a real-world deployment; and temporal bias caused by
incorrect time splits of data, leading to unrealistic configurations. To
address these biases, we introduce a set of constraints for fair experiment
design, and propose a new metric, AUT, for classifier robustness in real-world
settings. We additionally propose an algorithm designed to tune training data
to enhance classifier performance. Finally, we present TESSERACT, an
open-source framework for realistic classifier comparison. Our evaluation
encompasses both traditional ML and deep learning methods, examining published
works on an extensive Android dataset with 259,230 samples over a five-year
span. Additionally, we conduct case studies in the Windows PE and PDF domains.
Our findings identify the existence of biases in previous studies and reveal
that significant performance enhancements are possible through appropriate,
periodic tuning. We explore how mitigation strategies may help achieve
more stable and better performance over time by employing multiple techniques
to delay performance decay.
|
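The AUT metric proposed above rewards detectors whose performance decays slowly after deployment. Following the common trapezoidal reading of the metric (consult the paper for the exact definition), it is a normalized area under the performance-versus-time curve:

```python
def aut(scores):
    """Area Under Time over per-period performance scores.

    `scores` holds a metric (e.g., F1) at consecutive test periods
    after training; the trapezoidal area is normalized to [0, 1].
    """
    n = len(scores)
    if n < 2:
        raise ValueError("need at least two time slots")
    area = sum((scores[k] + scores[k + 1]) / 2 for k in range(n - 1))
    return area / (n - 1)

# Example: a detector whose F1 drops over four months.
print(aut([0.95, 0.90, 0.70, 0.55]))   # ~0.783
```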
2402.07601 | Long Teng | Long Teng and Yanhao Wang and Zhe Lin and Fei Yu | Topic-aware Most Influential Community Search in Social Networks | Accepted by Neurocomputing | null | 10.1016/j.neucom.2025.130173 | null | cs.SI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Influential community search (ICS) finds a set of densely connected and
high-impact vertices from a social network. Although great effort has been
devoted to ICS problems, most existing methods do not consider how relevant the
influential community found is to specific topics. A few attempts at
topic-aware ICS problems cannot capture the stochastic nature of community
formation and influence propagation in social networks. To address these
issues, we introduce a novel problem of topic-aware most influential community
search (TAMICS) to discover a set of vertices such that for a given topic
vector q, they induce a $(k, l, \eta)$-core in an uncertain directed
interaction graph and have the highest influence scores under the independent
cascade (IC) model. We propose an online algorithm to provide an approximate
result for any TAMICS query with bounded errors. Furthermore, we design two
index structures and an index-based heuristic algorithm for efficient TAMICS
query processing. Finally, we experimentally evaluate the efficacy and
efficiency of our proposed approaches on various real-world datasets. The
results show that (1) the communities of TAMICS have higher relevance and
social influence w.r.t. the query topics as well as structural cohesiveness
than those of several state-of-the-art topic-aware and influential CS methods
and (2) the index-based algorithm achieves speed-ups of up to three orders of
magnitude over the online algorithm with an affordable overhead for index
construction.
| [
{
"version": "v1",
"created": "Mon, 12 Feb 2024 11:59:47 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 16:51:19 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Apr 2025 04:13:54 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Teng",
"Long",
""
],
[
"Wang",
"Yanhao",
""
],
[
"Lin",
"Zhe",
""
],
[
"Yu",
"Fei",
""
]
] | TITLE: Topic-aware Most Influential Community Search in Social Networks
ABSTRACT: Influential community search (ICS) finds a set of densely connected and
high-impact vertices from a social network. Although great effort has been
devoted to ICS problems, most existing methods do not consider how relevant the
influential community found is to specific topics. A few attempts at
topic-aware ICS problems cannot capture the stochastic nature of community
formation and influence propagation in social networks. To address these
issues, we introduce a novel problem of topic-aware most influential community
search (TAMICS) to discover a set of vertices such that for a given topic
vector q, they induce a $(k, l, \eta)$-core in an uncertain directed
interaction graph and have the highest influence scores under the independent
cascade (IC) model. We propose an online algorithm to provide an approximate
result for any TAMICS query with bounded errors. Furthermore, we design two
index structures and an index-based heuristic algorithm for efficient TAMICS
query processing. Finally, we experimentally evaluate the efficacy and
efficiency of our proposed approaches on various real-world datasets. The
results show that (1) the communities of TAMICS have higher relevance and
social influence w.r.t. the query topics as well as structural cohesiveness
than those of several state-of-the-art topic-aware and influential CS methods
and (2) the index-based algorithm achieves speed-ups of up to three orders of
magnitude over the online algorithm with an affordable overhead for index
construction.
|
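The influence scores that TAMICS ranks communities by are defined under the independent cascade (IC) model; the textbook Monte Carlo estimator below shows what such a score computes. This is the standard estimator, not the paper's indexed algorithm.

```python
import random

def ic_influence(adj_p, seeds, trials=1000, rng=None):
    """Monte Carlo influence spread under the independent cascade model.

    `adj_p[u]` maps each out-neighbor v of u to an activation
    probability; each trial simulates one cascade from the seed set
    and the average number of activated nodes estimates the spread.
    """
    rng = rng or random.Random(0)
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v, p in adj_p.get(u, {}).items():
                if v not in active and rng.random() < p:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / trials
```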
2402.12513 | Usama Muneeb | Usama Muneeb and Mesrob I. Ohannessian | Induced Model Matching: Restricted Models Help Train Full-Featured
Models | null | null | null | null | cs.LG cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider scenarios where a very accurate (often small) predictive model
using restricted features is available when training a full-featured (often
larger) model. This restricted model may be thought of as "side-information",
and can come either from an auxiliary dataset or from the same dataset by
forcing the restriction. How can the restricted model be useful to the full
model? To answer this, we introduce a methodology called Induced Model Matching
(IMM). IMM aligns the context-restricted, or induced, version of the large
model with the restricted model. We relate IMM to approaches such as noising,
which is implicit in addressing the problem, and reverse knowledge distillation
from weak teachers, which is explicit but does not exploit restriction being
the nature of the weakness. We show that these prior methods can be thought of
as approximations to IMM and can be problematic in terms of consistency.
Experimentally, we first motivate IMM using logistic regression as a toy
example. We then explore it in language modeling, the application that
initially inspired it, and demonstrate it on both LSTM and transformer full
models, using bigrams as restricted models. We lastly give a simple RL example,
which shows that POMDP policies can help learn better MDP policies. The IMM
principle is thus generally applicable in common scenarios where restricted
data is cheaper to collect or restricted models are easier to learn.
| [
{
"version": "v1",
"created": "Mon, 19 Feb 2024 20:21:09 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 19:27:14 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Muneeb",
"Usama",
""
],
[
"Ohannessian",
"Mesrob I.",
""
]
] | TITLE: Induced Model Matching: Restricted Models Help Train Full-Featured
Models
ABSTRACT: We consider scenarios where a very accurate (often small) predictive model
using restricted features is available when training a full-featured (often
larger) model. This restricted model may be thought of as "side-information",
and can come either from an auxiliary dataset or from the same dataset by
forcing the restriction. How can the restricted model be useful to the full
model? To answer this, we introduce a methodology called Induced Model Matching
(IMM). IMM aligns the context-restricted, or induced, version of the large
model with the restricted model. We relate IMM to approaches such as noising,
which is implicit in addressing the problem, and reverse knowledge distillation
from weak teachers, which is explicit but does not exploit restriction being
the nature of the weakness. We show that these prior methods can be thought of
as approximations to IMM and can be problematic in terms of consistency.
Experimentally, we first motivate IMM using logistic regression as a toy
example. We then explore it in language modeling, the application that
initially inspired it, and demonstrate it on both LSTM and transformer full
models, using bigrams as restricted models. We lastly give a simple RL example,
which shows that POMDP policies can help learn better MDP policies. The IMM
principle is thus generally applicable in common scenarios where restricted
data is cheaper to collect or restricted models are easier to learn.
|
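A simplified sketch of the matching idea in the IMM abstract, for the language-modeling case with bigrams as the restricted model: averaging the full model's next-token predictions over contexts that share a last token induces a bigram model, which is then matched (via KL divergence) to the trusted bigram. Shapes and names are assumptions for illustration.

```python
import numpy as np

def imm_penalty(full_probs, last_tokens, bigram_probs, vocab):
    """Induced Model Matching penalty (toy version).

    `full_probs[i]` is the full model's next-token distribution for
    context i; `last_tokens[i]` is that context's final token, i.e.,
    the restricted feature. `bigram_probs[w]` is the restricted
    model's distribution given last token w.
    """
    penalty = 0.0
    for w in range(vocab):
        idx = np.nonzero(last_tokens == w)[0]
        if len(idx) == 0:
            continue
        induced = full_probs[idx].mean(axis=0)    # induced bigram row
        kl = np.sum(bigram_probs[w] * np.log(
            np.clip(bigram_probs[w], 1e-12, None) /
            np.clip(induced, 1e-12, None)))
        penalty += kl
    return penalty / vocab
```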
2403.04821 | Gilles Dejaegere | Gilles Dejaegere, Mahmoud Sakr | New algorithms for the simplification of multiple trajectories under
bandwidth constraints | Preprint, to be published in the proceedings of the Workshop on Big Mobility
Data Analytics (BMDA), co-located with the EDBT/ICDT 2024 Joint Conference | null | null | null | cs.OH | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This study introduces time-windowed variations of three established
trajectory simplification algorithms. These new algorithms are specifically
designed to be used in contexts with bandwidth limitations. We present the
details of these algorithms and highlight the differences compared to their
classical counterparts.
To evaluate their performance, we conduct accuracy assessments for varying
sizes of time windows, utilizing two different datasets and exploring different
compression ratios. The accuracies of the proposed algorithms are compared with
those of existing methods. Our findings demonstrate that, for larger time
windows, the enhanced version of the bandwidth-constrained STTrace outperforms
other algorithms, with the bandwidth-constrained improved version of SQUISH
also yielding satisfactory results at a lower computational cost. Conversely,
for short time windows, only the bandwidth-constrained version of Dead
Reckoning remains satisfactory.
| [
{
"version": "v1",
"created": "Thu, 7 Mar 2024 15:39:48 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Dejaegere",
"Gilles",
""
],
[
"Sakr",
"Mahmoud",
""
]
] | TITLE: New algorithms for the simplification of multiple trajectories under
bandwidth constraints
ABSTRACT: This study introduces time-windowed variations of three established
trajectory simplification algorithms. These new algorithms are specifically
designed to be used in contexts with bandwidth limitations. We present the
details of these algorithms and highlight the differences compared to their
classical counterparts.
To evaluate their performance, we conduct accuracy assessments for varying
sizes of time windows, utilizing two different datasets and exploring different
compression ratios. The accuracies of the proposed algorithms are compared with
those of existing methods. Our findings demonstrate that, for larger time
windows, the enhanced version of the bandwidth-constrained STTrace outperforms
other algorithms, with the bandwidth-constrained improved version of SQUISH
also yielding satisfactory results at a lower computational cost. Conversely,
for short time windows, only the bandwidth-constrained version of Dead
Reckoning remains satisfactory.
|
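A minimal sketch of a bandwidth-constrained, time-windowed Dead Reckoning pass, loosely modeled on the variant described above: within one window, a point is transmitted only when the constant-velocity prediction from the last kept point drifts by more than `eps`, and at most `budget` points are kept. Parameter names and the drop-on-exhausted-budget policy are illustrative assumptions.

```python
def dead_reckoning_window(points, eps, budget):
    """Simplify one window of a trajectory under a bandwidth budget.

    `points` is a list of (t, x, y) samples for the window. The
    receiver reconstructs the track by dead reckoning from the kept
    points, so only prediction errors above `eps` trigger a send.
    """
    kept = [points[0]]
    vx = vy = 0.0
    for t, x, y in points[1:]:
        t0, x0, y0 = kept[-1]
        px, py = x0 + vx * (t - t0), y0 + vy * (t - t0)  # prediction
        err = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
        if err > eps and len(kept) < budget:
            dt = max(t - t0, 1e-9)
            vx, vy = (x - x0) / dt, (y - y0) / dt        # new velocity
            kept.append((t, x, y))
    return kept
```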
2403.05821 | Shu Liu | Shu Liu, Asim Biswal, Amog Kamsetty, Audrey Cheng, Luis Gaspar
Schroeder, Liana Patel, Shiyi Cao, Xiangxi Mo, Ion Stoica, Joseph E.
Gonzalez, Matei Zaharia | Optimizing LLM Queries in Relational Data Analytics Workloads | null | null | null | null | cs.LG cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Batch data analytics is a growing application for Large Language Models
(LLMs). LLMs enable users to perform a wide range of natural language tasks,
such as classification, entity extraction, and translation, over large
datasets. However, LLM inference is highly costly and slow: for example, an
NVIDIA L4 GPU running Llama3-8B can only process 6 KB of text per second,
taking about a day to handle 15 GB of data; processing a similar amount of data
costs around $10K on OpenAI's GPT-4o. In this paper, we propose novel
techniques that can significantly reduce the cost of LLM calls for relational
data analytics workloads. Our key contribution is developing efficient
algorithms for reordering the rows and the fields within each row of an input
table to maximize key-value (KV) cache reuse when performing LLM serving. As
such, our approach can be easily applied to existing analytics systems and
serving platforms. Our evaluation shows that our solution can yield up to 3.4x
improvement in job completion time on a benchmark of diverse LLM-based queries
using Llama 3 models. Our solution also achieves a 32% cost savings under
OpenAI and Anthropic pricing models.
| [
{
"version": "v1",
"created": "Sat, 9 Mar 2024 07:01:44 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 10:23:39 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Liu",
"Shu",
""
],
[
"Biswal",
"Asim",
""
],
[
"Kamsetty",
"Amog",
""
],
[
"Cheng",
"Audrey",
""
],
[
"Schroeder",
"Luis Gaspar",
""
],
[
"Patel",
"Liana",
""
],
[
"Cao",
"Shiyi",
""
],
[
"Mo",
"Xiangxi",
""
],
[
"Stoica",
"Ion",
""
],
[
"Gonzalez",
"Joseph E.",
""
],
[
"Zaharia",
"Matei",
""
]
] | TITLE: Optimizing LLM Queries in Relational Data Analytics Workloads
ABSTRACT: Batch data analytics is a growing application for Large Language Models
(LLMs). LLMs enable users to perform a wide range of natural language tasks,
such as classification, entity extraction, and translation, over large
datasets. However, LLM inference is highly costly and slow: for example, an
NVIDIA L4 GPU running Llama3-8B can only process 6 KB of text per second,
taking about a day to handle 15 GB of data; processing a similar amount of data
costs around $10K on OpenAI's GPT-4o. In this paper, we propose novel
techniques that can significantly reduce the cost of LLM calls for relational
data analytics workloads. Our key contribution is developing efficient
algorithms for reordering the rows and the fields within each row of an input
table to maximize key-value (KV) cache reuse when performing LLM serving. As
such, our approach can be easily applied to existing analytics systems and
serving platforms. Our evaluation shows that our solution can yield up to 3.4x
improvement in job completion time on a benchmark of diverse LLM-based queries
using Llama 3 models. Our solution also achieves a 32% cost savings under
OpenAI and Anthropic pricing models.
|
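The reordering idea at the heart of the abstract above can be pictured with a greedy heuristic: put low-cardinality fields first and sort rows lexicographically, so consecutive serialized prompts share long prefixes that a serving engine with prefix (KV-cache) reuse can exploit. The paper's algorithms are more sophisticated; this sketch only illustrates the mechanism.

```python
import pandas as pd

def reorder_for_prefix_sharing(df: pd.DataFrame) -> pd.DataFrame:
    """Reorder columns and rows to lengthen shared serialized prefixes.

    Columns with few distinct values go first; a stable lexicographic
    row sort then groups identical prefixes together.
    """
    cols = sorted(df.columns, key=lambda c: df[c].nunique())
    out = df[cols].sort_values(by=cols, kind="mergesort")
    return out.reset_index(drop=True)

# Serialize each row after reordering, e.g.:
# prompts = ["; ".join(f"{c}={v}" for c, v in row.items())
#            for _, row in reorder_for_prefix_sharing(df).iterrows()]
```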
2403.12072 | Eduardo R. B. Marques | António Filgueiras, Eduardo R. B. Marques, Luís M. B. Lopes,
Miguel Marques, Hugo Silva | Floralens: a Deep Learning Model for the Portuguese Native Flora | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Machine-learning techniques, especially deep convolutional neural networks,
are pivotal for image-based identification of biological species in many
Citizen Science platforms. In this paper, we describe the construction of a
dataset for the Portuguese native flora based on publicly available
research-grade datasets, and the derivation of a high-accuracy model from it
using off-the-shelf deep convolutional neural networks. We anchored the dataset
in high-quality data provided by Sociedade Portuguesa de Botânica and added
further sampled data from research-grade datasets available from GBIF. We find
that with a careful dataset design, off-the-shelf machine-learning cloud
services such as Google's AutoML Vision produce accurate models, with results
comparable to those of Pl@ntNet, a state-of-the-art citizen science platform.
The best model we derived, dubbed Floralens, has been integrated into the
public website of Project Biolens, where we gather models for other taxa as
well. The dataset used to train the model is also publicly available on Zenodo.
| [
{
"version": "v1",
"created": "Tue, 13 Feb 2024 15:23:21 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Oct 2024 10:00:15 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Apr 2025 10:12:38 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Filgueiras",
"António",
""
],
[
"Marques",
"Eduardo R. B.",
""
],
[
"Lopes",
"Luís M. B.",
""
],
[
"Marques",
"Miguel",
""
],
[
"Silva",
"Hugo",
""
]
] | TITLE: Floralens: a Deep Learning Model for the Portuguese Native Flora
ABSTRACT: Machine-learning techniques, especially deep convolutional neural networks,
are pivotal for image-based identification of biological species in many
Citizen Science platforms. In this paper, we describe the construction of a
dataset for the Portuguese native flora based on publicly available
research-grade datasets, and the derivation of a high-accuracy model from it
using off-the-shelf deep convolutional neural networks. We anchored the dataset
in high-quality data provided by Sociedade Portuguesa de Botânica and added
further sampled data from research-grade datasets available from GBIF. We find
that with a careful dataset design, off-the-shelf machine-learning cloud
services such as Google's AutoML Vision produce accurate models, with results
comparable to those of Pl@ntNet, a state-of-the-art citizen science platform.
The best model we derived, dubbed Floralens, has been integrated into the
public website of Project Biolens, where we gather models for other taxa as
well. The dataset used to train the model is also publicly available on Zenodo.
|
2404.01663 | Meiling Tao | Xuechen Liang, Meiling Tao, Yinghui Xia, Tianyu Shi, Jun Wang,
JingSong Yang | CMAT: A Multi-Agent Collaboration Tuning Framework for Enhancing Small
Language Models | null | null | null | null | cs.CL cs.AI cs.CC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Open large language models (LLMs) have significantly advanced the field of
natural language processing, showcasing impressive performance across various
tasks. Despite the significant advancements in LLMs, their effective operation
still relies heavily on human input to accurately guide the dialogue flow, with
agent tuning being a crucial optimization technique that involves human
adjustments to the model for better response to such guidance. Addressing this
dependency, our work introduces the TinyAgent model, trained on a meticulously
curated high-quality dataset. We also present the Collaborative Multi-Agent
Tuning (CMAT) framework, an innovative system designed to augment language
agent capabilities through adaptive weight updates based on environmental
feedback. This framework fosters collaborative learning and real-time
adaptation among multiple intelligent agents, enhancing their context-awareness
and long-term memory. In this research, we propose a new communication agent
framework that integrates multi-agent systems with environmental feedback
mechanisms, offering a scalable method to explore cooperative behaviors.
Notably, our TinyAgent-7B model exhibits performance on par with GPT-3.5,
despite having fewer parameters, signifying a substantial improvement in the
efficiency and effectiveness of LLMs.
| [
{
"version": "v1",
"created": "Tue, 2 Apr 2024 06:07:35 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Apr 2024 12:40:03 GMT"
},
{
"version": "v3",
"created": "Mon, 26 Aug 2024 20:30:40 GMT"
},
{
"version": "v4",
"created": "Sun, 1 Sep 2024 22:02:32 GMT"
},
{
"version": "v5",
"created": "Sun, 23 Mar 2025 05:26:38 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Liang",
"Xuechen",
""
],
[
"Tao",
"Meiling",
""
],
[
"Xia",
"Yinghui",
""
],
[
"Shi",
"Tianyu",
""
],
[
"Wang",
"Jun",
""
],
[
"Yang",
"JingSong",
""
]
] | TITLE: CMAT: A Multi-Agent Collaboration Tuning Framework for Enhancing Small
Language Models
ABSTRACT: Open large language models (LLMs) have significantly advanced the field of
natural language processing, showcasing impressive performance across various
tasks. Despite the significant advancements in LLMs, their effective operation
still relies heavily on human input to accurately guide the dialogue flow, with
agent tuning being a crucial optimization technique that involves human
adjustments to the model for better response to such guidance. Addressing this
dependency, our work introduces the TinyAgent model, trained on a meticulously
curated high-quality dataset. We also present the Collaborative Multi-Agent
Tuning (CMAT) framework, an innovative system designed to augment language
agent capabilities through adaptive weight updates based on environmental
feedback. This framework fosters collaborative learning and real-time
adaptation among multiple intelligent agents, enhancing their context-awareness
and long-term memory. In this research, we propose a new communication agent
framework that integrates multi-agent systems with environmental feedback
mechanisms, offering a scalable method to explore cooperative behaviors.
Notably, our TinyAgent-7B model exhibits performance on par with GPT-3.5,
despite having fewer parameters, signifying a substantial improvement in the
efficiency and effectiveness of LLMs.
|
2404.16323 | Jiamin Wu | Jiamin Wu, Kenkun Liu, Han Gao, Xiaoke Jiang, Yao Yuan, Lei Zhang | LeanGaussian: Breaking Pixel or Point Cloud Correspondence in Modeling
3D Gaussians | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recently, Gaussian splatting has demonstrated significant success in novel
view synthesis. Current methods often regress Gaussians with pixel or point
cloud correspondence, linking each Gaussian with a pixel or a 3D point. This
leads to the redundancy of Gaussians being used to overfit the correspondence
rather than the objects represented by the 3D Gaussians themselves,
consequently wasting resources and lacking accurate geometries or textures. In
this paper, we introduce LeanGaussian, a novel approach that treats each query
in a deformable Transformer as one 3D Gaussian ellipsoid, breaking the pixel or
point cloud correspondence constraints. We leverage a deformable decoder to
iteratively refine the Gaussians layer-by-layer with the image features as keys
and values. Notably, the center of each 3D Gaussian is defined as 3D reference
points, which are then projected onto the image for deformable attention in 2D
space. On both the ShapeNet SRN dataset (category level) and the Google Scanned
Objects dataset (open-category level, trained with the Objaverse dataset), our
approach outperforms prior methods by approximately 6.1%, achieving a PSNR of
25.44 and 22.36, respectively. Additionally, our method achieves a 3D
reconstruction speed of 7.2 FPS and a rendering speed of 500 FPS. Codes are
available at https://github.com/jwubz123/LeanGaussian.
| [
{
"version": "v1",
"created": "Thu, 25 Apr 2024 04:18:59 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Dec 2024 03:11:06 GMT"
},
{
"version": "v3",
"created": "Mon, 7 Apr 2025 08:14:57 GMT"
},
{
"version": "v4",
"created": "Wed, 9 Apr 2025 07:00:32 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Wu",
"Jiamin",
""
],
[
"Liu",
"Kenkun",
""
],
[
"Gao",
"Han",
""
],
[
"Jiang",
"Xiaoke",
""
],
[
"Yuan",
"Yao",
""
],
[
"Zhang",
"Lei",
""
]
] | TITLE: LeanGaussian: Breaking Pixel or Point Cloud Correspondence in Modeling
3D Gaussians
ABSTRACT: Recently, Gaussian splatting has demonstrated significant success in novel
view synthesis. Current methods often regress Gaussians with pixel or point
cloud correspondence, linking each Gaussian with a pixel or a 3D point. This
leads to the redundancy of Gaussians being used to overfit the correspondence
rather than the objects represented by the 3D Gaussians themselves,
consequently wasting resources and lacking accurate geometries or textures. In
this paper, we introduce LeanGaussian, a novel approach that treats each query
in a deformable Transformer as one 3D Gaussian ellipsoid, breaking the pixel or
point cloud correspondence constraints. We leverage a deformable decoder to
iteratively refine the Gaussians layer-by-layer with the image features as keys
and values. Notably, the center of each 3D Gaussian is defined as 3D reference
points, which are then projected onto the image for deformable attention in 2D
space. On both the ShapeNet SRN dataset (category level) and the Google Scanned
Objects dataset (open-category level, trained with the Objaverse dataset), our
approach outperforms prior methods by approximately 6.1%, achieving a PSNR of
25.44 and 22.36, respectively. Additionally, our method achieves a 3D
reconstruction speed of 7.2 FPS and a rendering speed of 500 FPS. Codes are
available at https://github.com/jwubz123/LeanGaussian.
|
2405.15868 | Marco Paul E. Apolinario | Marco Paul E. Apolinario, Arani Roy, Kaushik Roy | LLS: Local Learning Rule for Deep Neural Networks Inspired by Neural
Activity Synchronization | 12 pages, 4 figures | Proceedings of the Winter Conference on Applications of Computer
Vision (WACV), 2025 | 10.1109/WACV61041.2025.00758 | null | cs.NE cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Training deep neural networks (DNNs) using traditional backpropagation (BP)
presents challenges in terms of computational complexity and energy
consumption, particularly for on-device learning where computational resources
are limited. Various alternatives to BP, including random feedback alignment,
forward-forward, and local classifiers, have been explored to address these
challenges. These methods have their advantages, but they can encounter
difficulties when dealing with intricate visual tasks or demand considerable
computational resources. In this paper, we propose a novel Local Learning rule
inspired by neural activity Synchronization phenomena (LLS) observed in the
brain. LLS utilizes fixed periodic basis vectors to synchronize neuron activity
within each layer, enabling efficient training without the need for additional
trainable parameters. We demonstrate the effectiveness of LLS and its
variations, LLS-M and LLS-MxM, on multiple image classification datasets,
achieving accuracy comparable to BP with reduced computational complexity and
minimal additional parameters. Specifically, LLS achieves comparable
performance with up to $300 \times$ fewer multiply-accumulate (MAC) operations
and half the memory requirements of BP. Furthermore, the performance of LLS on
the Visual Wake Word (VWW) dataset highlights its suitability for on-device
learning tasks, making it a promising candidate for edge hardware
implementations.
| [
{
"version": "v1",
"created": "Fri, 24 May 2024 18:24:24 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Oct 2024 16:35:59 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Apolinario",
"Marco Paul E.",
""
],
[
"Roy",
"Arani",
""
],
[
"Roy",
"Kaushik",
""
]
] | TITLE: LLS: Local Learning Rule for Deep Neural Networks Inspired by Neural
Activity Synchronization
ABSTRACT: Training deep neural networks (DNNs) using traditional backpropagation (BP)
presents challenges in terms of computational complexity and energy
consumption, particularly for on-device learning where computational resources
are limited. Various alternatives to BP, including random feedback alignment,
forward-forward, and local classifiers, have been explored to address these
challenges. These methods have their advantages, but they can encounter
difficulties when dealing with intricate visual tasks or demand considerable
computational resources. In this paper, we propose a novel Local Learning rule
inspired by neural activity Synchronization phenomena (LLS) observed in the
brain. LLS utilizes fixed periodic basis vectors to synchronize neuron activity
within each layer, enabling efficient training without the need for additional
trainable parameters. We demonstrate the effectiveness of LLS and its
variations, LLS-M and LLS-MxM, on multiple image classification datasets,
achieving accuracy comparable to BP with reduced computational complexity and
minimal additional parameters. Specifically, LLS achieves comparable
performance with up to $300 \times$ fewer multiply-accumulate (MAC) operations
and half the memory requirements of BP. Furthermore, the performance of LLS on
the Visual Wake Word (VWW) dataset highlights its suitability for on-device
learning tasks, making it a promising candidate for edge hardware
implementations.
|
2406.06650 | Geongyu Lee | Geongyu Lee, Joonho Lee, Tae-Yeong Kwak, Sun Woo Kim, Youngmee Kwon,
Chungyeul Kim, Hyeyoon Chang | Assessing the risk of recurrence in early-stage breast cancer through
H&E stained whole slide images | 20 pages, 9 figures | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Accurate prediction of the likelihood of recurrence is important in the
selection of postoperative treatment for patients with early-stage breast
cancer. In this study, we investigated whether deep learning algorithms can
predict patients' risk of recurrence by analyzing the pathology images of their
cancer histology. We analyzed 125 hematoxylin and eosin-stained whole slide
images (WSIs) from 125 patients across two institutions (National Cancer Center
and Korea University Medical Center Guro Hospital) to predict breast cancer
recurrence risk using deep learning. Sensitivity reached 0.857, 0.746, and
0.529 for low, intermediate, and high-risk categories, respectively, with
specificity of 0.816, 0.803, and 0.972, and a Pearson correlation of 0.61 with
histological grade. Class activation maps highlighted features like tubule
formation and mitotic rate, suggesting a cost-effective approach to risk
stratification, pending broader validation. These findings suggest that deep
learning models trained exclusively on hematoxylin and eosin stained whole
slide images can approximate genomic assay results, offering a cost-effective
and scalable tool for breast cancer recurrence risk assessment. However,
further validation using larger and more balanced datasets is needed to confirm
the clinical applicability of our approach.
| [
{
"version": "v1",
"created": "Mon, 10 Jun 2024 08:51:59 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 08:51:52 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Lee",
"Geongyu",
""
],
[
"Lee",
"Joonho",
""
],
[
"Kwak",
"Tae-Yeong",
""
],
[
"Kim",
"Sun Woo",
""
],
[
"Kwon",
"Youngmee",
""
],
[
"Kim",
"Chungyeul",
""
],
[
"Chang",
"Hyeyoon",
""
]
] | TITLE: Assessing the risk of recurrence in early-stage breast cancer through
H&E stained whole slide images
ABSTRACT: Accurate prediction of the likelihood of recurrence is important in the
selection of postoperative treatment for patients with early-stage breast
cancer. In this study, we investigated whether deep learning algorithms can
predict patients' risk of recurrence by analyzing the pathology images of their
cancer histology. We analyzed 125 hematoxylin and eosin-stained whole slide
images (WSIs) from 125 patients across two institutions (National Cancer Center
and Korea University Medical Center Guro Hospital) to predict breast cancer
recurrence risk using deep learning. Sensitivity reached 0.857, 0.746, and
0.529 for low, intermediate, and high-risk categories, respectively, with
specificity of 0.816, 0.803, and 0.972, and a Pearson correlation of 0.61 with
histological grade. Class activation maps highlighted features like tubule
formation and mitotic rate, suggesting a cost-effective approach to risk
stratification, pending broader validation. These findings suggest that deep
learning models trained exclusively on hematoxylin and eosin stained whole
slide images can approximate genomic assay results, offering a cost-effective
and scalable tool for breast cancer recurrence risk assessment. However,
further validation using larger and more balanced datasets is needed to confirm
the clinical applicability of our approach.
|
2406.10999 | Liman Wang | Hanyang Zhong, Liman Wang, Wenting Cao, Zeyuan Sun | Balancing Rigor and Utility: Mitigating Cognitive Biases in Large
Language Models for Multiple-Choice Questions | This work has been accepted as a full paper at the 2025 Annual
Conference of the Cognitive Science Society (CogSci 2025) and will be
presented in the form of a poster. The associated public dataset and project
website are available at: https://hanyangzhong.github.io/BRU-website/ | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper examines the role of cognitive biases in the decision-making
processes of large language models (LLMs), challenging the conventional goal of
eliminating all biases. When properly balanced, we show that certain cognitive
biases can enhance decision-making efficiency through rational deviations and
heuristic shortcuts. By introducing heuristic moderation and an abstention
option, which allows LLMs to withhold responses when uncertain, we reduce error
rates, improve decision accuracy, and optimize decision rates. Using the
Balance Rigor and Utility (BRU) dataset, developed through expert
collaboration, our findings demonstrate that targeted inspection of cognitive
biases aligns LLM decisions more closely with human reasoning, enhancing
reliability and suggesting strategies for future improvements. This approach
offers a novel way to leverage cognitive biases to improve the practical
utility of LLMs across various applications.
| [
{
"version": "v1",
"created": "Sun, 16 Jun 2024 16:25:22 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Sep 2024 20:26:30 GMT"
},
{
"version": "v3",
"created": "Mon, 9 Sep 2024 16:28:09 GMT"
},
{
"version": "v4",
"created": "Tue, 8 Apr 2025 23:59:08 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Zhong",
"Hanyang",
""
],
[
"Wang",
"Liman",
""
],
[
"Cao",
"Wenting",
""
],
[
"Sun",
"Zeyuan",
""
]
] | TITLE: Balancing Rigor and Utility: Mitigating Cognitive Biases in Large
Language Models for Multiple-Choice Questions
ABSTRACT: This paper examines the role of cognitive biases in the decision-making
processes of large language models (LLMs), challenging the conventional goal of
eliminating all biases. When properly balanced, we show that certain cognitive
biases can enhance decision-making efficiency through rational deviations and
heuristic shortcuts. By introducing heuristic moderation and an abstention
option, which allows LLMs to withhold responses when uncertain, we reduce error
rates, improve decision accuracy, and optimize decision rates. Using the
Balance Rigor and Utility (BRU) dataset, developed through expert
collaboration, our findings demonstrate that targeted inspection of cognitive
biases aligns LLM decisions more closely with human reasoning, enhancing
reliability and suggesting strategies for future improvements. This approach
offers a novel way to leverage cognitive biases to improve the practical
utility of LLMs across various applications.
|
2406.16899 | Yuni Susanti | Yuni Susanti, Nina Holsmoelle | Prompting or Fine-tuning? Exploring Large Language Models for Causal
Graph Validation | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study explores the capability of Large Language Models (LLMs) to
evaluate causality in causal graphs generated by conventional statistical
causal discovery methods, a task traditionally reliant on manual assessment by
human subject matter experts. To bridge this gap in causality assessment, LLMs
are employed to evaluate the causal relationships by determining whether a
causal connection between variable pairs can be inferred from textual context.
Our study compares two approaches: (1) a prompting-based method for zero-shot
and few-shot causal inference, and (2) fine-tuning language models for the causal
relation prediction task. While prompt-based LLMs have demonstrated versatility
across various NLP tasks, our experiments on biomedical and general-domain
datasets show that fine-tuned models consistently outperform them, achieving up
to a 20.5-point improvement in F1 score, even when using smaller-parameter
language models. These findings provide valuable insights into the strengths
and limitations of both approaches for causal graph evaluation.
| [
{
"version": "v1",
"created": "Wed, 29 May 2024 09:06:18 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 04:44:48 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Susanti",
"Yuni",
""
],
[
"Holsmoelle",
"Nina",
""
]
] | TITLE: Prompting or Fine-tuning? Exploring Large Language Models for Causal
Graph Validation
ABSTRACT: This study explores the capability of Large Language Models (LLMs) to
evaluate causality in causal graphs generated by conventional statistical
causal discovery methods, a task traditionally reliant on manual assessment by
human subject matter experts. To bridge this gap in causality assessment, LLMs
are employed to evaluate the causal relationships by determining whether a
causal connection between variable pairs can be inferred from textual context.
Our study compares two approaches: (1) a prompting-based method for zero-shot
and few-shot causal inference, and (2) fine-tuning language models for the causal
relation prediction task. While prompt-based LLMs have demonstrated versatility
across various NLP tasks, our experiments on biomedical and general-domain
datasets show that fine-tuned models consistently outperform them, achieving up
to a 20.5-point improvement in F1 score, even when using smaller-parameter
language models. These findings provide valuable insights into the strengths
and limitations of both approaches for causal graph evaluation.
|
2407.00742 | Dazhou Yu | Dazhou Yu, Yuntong Hu, Yun Li, Liang Zhao | PolygonGNN: Representation Learning for Polygonal Geometries with
Heterogeneous Visibility Graph | null | null | 10.1145/3637528.3671738 | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Polygon representation learning is essential for diverse applications,
encompassing tasks such as shape coding, building pattern classification, and
geographic question answering. While recent years have seen considerable
advancements in this field, much of the focus has been on single polygons,
overlooking the intricate inner- and inter-polygonal relationships inherent in
multipolygons. To address this gap, our study introduces a comprehensive
framework specifically designed for learning representations of polygonal
geometries, particularly multipolygons. Central to our approach is the
incorporation of a heterogeneous visibility graph, which seamlessly integrates
both inner- and inter-polygonal relationships. To enhance computational
efficiency and minimize graph redundancy, we implement a heterogeneous spanning
tree sampling method. Additionally, we devise a rotation-translation invariant
geometric representation, ensuring broader applicability across diverse
scenarios. Finally, we introduce Multipolygon-GNN, a novel model tailored to
leverage the spatial and semantic heterogeneity inherent in the visibility
graph. Experiments on five real-world and synthetic datasets demonstrate its
ability to capture informative representations for polygonal geometries. Code
and data are available at
https://github.com/dyu62/PolyGNN.
| [
{
"version": "v1",
"created": "Sun, 30 Jun 2024 16:07:49 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 06:17:32 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Yu",
"Dazhou",
""
],
[
"Hu",
"Yuntong",
""
],
[
"Li",
"Yun",
""
],
[
"Zhao",
"Liang",
""
]
] | TITLE: PolygonGNN: Representation Learning for Polygonal Geometries with
Heterogeneous Visibility Graph
ABSTRACT: Polygon representation learning is essential for diverse applications,
encompassing tasks such as shape coding, building pattern classification, and
geographic question answering. While recent years have seen considerable
advancements in this field, much of the focus has been on single polygons,
overlooking the intricate inner- and inter-polygonal relationships inherent in
multipolygons. To address this gap, our study introduces a comprehensive
framework specifically designed for learning representations of polygonal
geometries, particularly multipolygons. Central to our approach is the
incorporation of a heterogeneous visibility graph, which seamlessly integrates
both inner- and inter-polygonal relationships. To enhance computational
efficiency and minimize graph redundancy, we implement a heterogeneous spanning
tree sampling method. Additionally, we devise a rotation-translation invariant
geometric representation, ensuring broader applicability across diverse
scenarios. Finally, we introduce Multipolygon-GNN, a novel model tailored to
leverage the spatial and semantic heterogeneity inherent in the visibility
graph. Experiments on five real-world and synthetic datasets demonstrate its
ability to capture informative representations for polygonal geometries. Code
and data are available at
https://github.com/dyu62/PolyGNN.
|
2407.03038 | Feijie Wu | Feijie Wu, Xiaoze Liu, Haoyu Wang, Xingchen Wang, Lu Su, Jing Gao | Towards Federated RLHF with Aggregated Client Preference for LLMs | ICLR'25 | null | null | null | cs.CL cs.DC cs.LG | http://creativecommons.org/licenses/by/4.0/ | Reinforcement learning with human feedback (RLHF) fine-tunes a pretrained
large language model (LLM) using user preference data, enabling it to generate
content aligned with human preferences. However, due to privacy concerns, users
may be reluctant to share sensitive preference data. To address this, we
propose utilizing Federated Learning (FL) techniques, allowing large-scale
preference collection from diverse real-world users without requiring them to
transmit data to a central server. Our federated RLHF methods (i.e., FedBis and
FedBiscuit) encode each client's preferences into binary selectors and
aggregate them to capture common preferences. In particular, FedBiscuit
overcomes key challenges, such as preference heterogeneity and reward hacking,
through innovative solutions like grouping clients with similar preferences to
reduce heterogeneity and using multiple binary selectors to enhance LLM output
quality. To evaluate the performance of the proposed methods, we establish the
first federated RLHF benchmark with a heterogeneous human preference dataset.
Experimental results show that by integrating the LLM with aggregated client
preferences, FedBis and FedBiscuit significantly enhance the professionalism
and readability of the generated content.
| [
{
"version": "v1",
"created": "Wed, 3 Jul 2024 12:02:24 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Jan 2025 20:14:32 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Apr 2025 18:13:57 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Wu",
"Feijie",
""
],
[
"Liu",
"Xiaoze",
""
],
[
"Wang",
"Haoyu",
""
],
[
"Wang",
"Xingchen",
""
],
[
"Su",
"Lu",
""
],
[
"Gao",
"Jing",
""
]
] | TITLE: Towards Federated RLHF with Aggregated Client Preference for LLMs
ABSTRACT: Reinforcement learning with human feedback (RLHF) fine-tunes a pretrained
large language model (LLM) using user preference data, enabling it to generate
content aligned with human preferences. However, due to privacy concerns, users
may be reluctant to share sensitive preference data. To address this, we
propose utilizing Federated Learning (FL) techniques, allowing large-scale
preference collection from diverse real-world users without requiring them to
transmit data to a central server. Our federated RLHF methods (i.e., FedBis and
FedBiscuit) encode each client's preferences into binary selectors and
aggregate them to capture common preferences. In particular, FedBiscuit
overcomes key challenges, such as preference heterogeneity and reward hacking,
through innovative solutions like grouping clients with similar preferences to
reduce heterogeneity and using multiple binary selectors to enhance LLM output
quality. To evaluate the performance of the proposed methods, we establish the
first federated RLHF benchmark with a heterogeneous human preference dataset.
Experimental results show that by integrating the LLM with aggregated client
preferences, FedBis and FedBiscuit significantly enhance the professionalism
and readability of the generated content.
|
2407.06204 | Weilin Cai | Weilin Cai, Juyong Jiang, Fan Wang, Jing Tang, Sunghun Kim, Jiayi
Huang | A Survey on Mixture of Experts in Large Language Models | The first three authors contributed equally to this work; Accepted by
TKDE | IEEE Transactions on Knowledge and Data Engineering (TKDE) 2025 | 10.1109/TKDE.2025.3554028 | null | cs.LG cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) have achieved unprecedented advancements across
diverse fields, ranging from natural language processing to computer vision and
beyond. The prowess of LLMs is underpinned by their substantial model size,
extensive and diverse datasets, and the vast computational power harnessed
during training, all of which contribute to the emergent abilities of LLMs
(e.g., in-context learning) that are not present in small models. Within this
context, the mixture of experts (MoE) has emerged as an effective method for
substantially scaling up model capacity with minimal computation overhead,
gaining significant attention from academia and industry. Despite its growing
prevalence, the literature still lacks a systematic and comprehensive review of
MoE. This survey seeks to bridge that gap, serving as an essential resource
for researchers delving into the intricacies of MoE. We first briefly introduce
the structure of the MoE layer, followed by proposing a new taxonomy of MoE.
Next, we give an overview of the core designs for various MoE models, including both
algorithmic and systemic aspects, alongside collections of available
open-source implementations, hyperparameter configurations and empirical
evaluations. Furthermore, we delineate the multifaceted applications of MoE in
practice, and outline some potential directions for future research. To
facilitate ongoing updates and the sharing of cutting-edge advances in MoE
research, we have established a resource repository at
https://github.com/withinmiaov/A-Survey-on-Mixture-of-Experts-in-LLMs.
| [
{
"version": "v1",
"created": "Wed, 26 Jun 2024 16:34:33 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Aug 2024 07:13:37 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Apr 2025 13:54:59 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Cai",
"Weilin",
""
],
[
"Jiang",
"Juyong",
""
],
[
"Wang",
"Fan",
""
],
[
"Tang",
"Jing",
""
],
[
"Kim",
"Sunghun",
""
],
[
"Huang",
"Jiayi",
""
]
] | TITLE: A Survey on Mixture of Experts in Large Language Models
ABSTRACT: Large language models (LLMs) have achieved unprecedented advancements across
diverse fields, ranging from natural language processing to computer vision and
beyond. The prowess of LLMs is underpinned by their substantial model size,
extensive and diverse datasets, and the vast computational power harnessed
during training, all of which contribute to the emergent abilities of LLMs
(e.g., in-context learning) that are not present in small models. Within this
context, the mixture of experts (MoE) has emerged as an effective method for
substantially scaling up model capacity with minimal computation overhead,
gaining significant attention from academia and industry. Despite its growing
prevalence, the literature still lacks a systematic and comprehensive review of
MoE. This survey seeks to bridge that gap, serving as an essential resource
for researchers delving into the intricacies of MoE. We first briefly introduce
the structure of the MoE layer, followed by proposing a new taxonomy of MoE.
Next, we give an overview of the core designs for various MoE models, including both
algorithmic and systemic aspects, alongside collections of available
open-source implementations, hyperparameter configurations and empirical
evaluations. Furthermore, we delineate the multifaceted applications of MoE in
practice, and outline some potential directions for future research. To
facilitate ongoing updates and the sharing of cutting-edge advances in MoE
research, we have established a resource repository at
https://github.com/withinmiaov/A-Survey-on-Mixture-of-Experts-in-LLMs.
|
2407.17378 | Nan Peng | Nan Peng, Xun Zhou, Mingming Wang, Xiaojun Yang, Songming Chen,
Guisong Chen | PrevPredMap: Exploring Temporal Modeling with Previous Predictions for
Online Vectorized HD Map Construction | null | null | 10.1109/WACV61041.2025.00789 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Temporal information is crucial for detecting occluded instances. Existing
temporal representations have progressed from BEV or PV features to more
compact query features. Compared to these features, predictions
offer the highest level of abstraction, providing explicit information. In the
context of online vectorized HD map construction, this unique characteristic of
predictions is potentially advantageous for long-term temporal modeling and the
integration of map priors. This paper introduces PrevPredMap, a pioneering
temporal modeling framework that leverages previous predictions for
constructing online vectorized HD maps. We have meticulously crafted two
essential modules for PrevPredMap: the previous-predictions-based query
generator and the dynamic-position-query decoder. Specifically, the
previous-predictions-based query generator is designed to separately encode
different types of information from previous predictions, which are then
effectively utilized by the dynamic-position-query decoder to generate current
predictions. Furthermore, we have developed a dual-mode strategy to ensure
PrevPredMap's robust performance across both single-frame and temporal modes.
Extensive experiments demonstrate that PrevPredMap achieves state-of-the-art
performance on the nuScenes and Argoverse2 datasets. Code will be available at
https://github.com/pnnnnnnn/PrevPredMap.
| [
{
"version": "v1",
"created": "Wed, 24 Jul 2024 15:58:24 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Peng",
"Nan",
""
],
[
"Zhou",
"Xun",
""
],
[
"Wang",
"Mingming",
""
],
[
"Yang",
"Xiaojun",
""
],
[
"Chen",
"Songming",
""
],
[
"Chen",
"Guisong",
""
]
] | TITLE: PrevPredMap: Exploring Temporal Modeling with Previous Predictions for
Online Vectorized HD Map Construction
ABSTRACT: Temporal information is crucial for detecting occluded instances. Existing
temporal representations have progressed from BEV or PV features to more
compact query features. Compared to these features, predictions
offer the highest level of abstraction, providing explicit information. In the
context of online vectorized HD map construction, this unique characteristic of
predictions is potentially advantageous for long-term temporal modeling and the
integration of map priors. This paper introduces PrevPredMap, a pioneering
temporal modeling framework that leverages previous predictions for
constructing online vectorized HD maps. We have meticulously crafted two
essential modules for PrevPredMap: the previous-predictions-based query
generator and the dynamic-position-query decoder. Specifically, the
previous-predictions-based query generator is designed to separately encode
different types of information from previous predictions, which are then
effectively utilized by the dynamic-position-query decoder to generate current
predictions. Furthermore, we have developed a dual-mode strategy to ensure
PrevPredMap's robust performance across both single-frame and temporal modes.
Extensive experiments demonstrate that PrevPredMap achieves state-of-the-art
performance on the nuScenes and Argoverse2 datasets. Code will be available at
https://github.com/pnnnnnnn/PrevPredMap.
|
2408.13230 | Daniel Habermann | Daniel Habermann, Marvin Schmitt, Lars K\"uhmichel, Andreas Bulling,
Stefan T. Radev, Paul-Christian B\"urkner | Amortized Bayesian Multilevel Models | 24 pages, 13 figures | null | null | null | stat.ML cs.LG stat.CO | http://creativecommons.org/licenses/by-sa/4.0/ | Multilevel models (MLMs) are a central building block of the Bayesian
workflow. They enable joint, interpretable modeling of data across hierarchical
levels and provide a fully probabilistic quantification of uncertainty. Despite
their well-recognized advantages, MLMs pose significant computational
challenges, often rendering their estimation and evaluation intractable within
reasonable time constraints. Recent advances in simulation-based inference
offer promising solutions for addressing complex probabilistic models using
deep generative networks. However, the utility and reliability of deep learning
methods for estimating Bayesian MLMs remain largely unexplored, especially
when compared with gold-standard samplers. To this end, we explore a family of
neural network architectures that leverage the probabilistic factorization of
multilevel models to facilitate efficient neural network training and
subsequent near-instant posterior inference on unseen datasets. We test our
method on several real-world case studies and provide comprehensive comparisons
to Stan's gold standard sampler, where possible. Finally, we provide an
open-source implementation of our methods to stimulate further research in the
nascent field of amortized Bayesian inference.
| [
{
"version": "v1",
"created": "Fri, 23 Aug 2024 17:11:04 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 13:38:39 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Habermann",
"Daniel",
""
],
[
"Schmitt",
"Marvin",
""
],
[
"Kühmichel",
"Lars",
""
],
[
"Bulling",
"Andreas",
""
],
[
"Radev",
"Stefan T.",
""
],
[
"Bürkner",
"Paul-Christian",
""
]
] | TITLE: Amortized Bayesian Multilevel Models
ABSTRACT: Multilevel models (MLMs) are a central building block of the Bayesian
workflow. They enable joint, interpretable modeling of data across hierarchical
levels and provide a fully probabilistic quantification of uncertainty. Despite
their well-recognized advantages, MLMs pose significant computational
challenges, often rendering their estimation and evaluation intractable within
reasonable time constraints. Recent advances in simulation-based inference
offer promising solutions for addressing complex probabilistic models using
deep generative networks. However, the utility and reliability of deep learning
methods for estimating Bayesian MLMs remain largely unexplored, especially
when compared with gold-standard samplers. To this end, we explore a family of
neural network architectures that leverage the probabilistic factorization of
multilevel models to facilitate efficient neural network training and
subsequent near-instant posterior inference on unseen datasets. We test our
method on several real-world case studies and provide comprehensive comparisons
to Stan's gold standard sampler, where possible. Finally, we provide an
open-source implementation of our methods to stimulate further research in the
nascent field of amortized Bayesian inference.
|
2409.03025 | Manu Gaur | Manu Gaur and Darshan Singh and Makarand Tapaswi | No Detail Left Behind: Revisiting Self-Retrieval for Fine-Grained Image
Captioning | Published at Transactions on Machine Learning Research (TMLR)
https://openreview.net/forum?id=gqh0yzPYdo | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Image captioning systems are unable to generate fine-grained captions as they
are trained on data that is either noisy (alt-text) or generic (human
annotations). This is further exacerbated by maximum likelihood training that
encourages generation of frequently occurring phrases. Previous works have
tried to address this limitation by fine-tuning captioners with a
self-retrieval (SR) reward. However, we find that SR fine-tuning has a tendency
to reduce caption faithfulness and even hallucinate. In this work, we
circumvent this bottleneck by improving the MLE initialization of the
captioning system and designing a curriculum for the SR fine-tuning process. To
this end, we present (1) Visual Caption Boosting, a novel framework to
instill fine-grainedness in generic image captioning datasets while remaining
anchored in human annotations; and (2) BagCurri, a carefully designed training
curriculum that more optimally leverages the contrastive nature of the
self-retrieval reward. Jointly, they enable the captioner to describe
fine-grained aspects in the image while preserving faithfulness to ground-truth
captions. Our approach outperforms previous work by +8.9% on SR against 99
random distractors (RD100) (Dessi et al., 2023); and +7.6% on ImageCoDe.
Additionally, existing metrics to evaluate captioning systems fail to reward
diversity or evaluate a model's fine-grained understanding ability. Our third
contribution addresses this by proposing self-retrieval from the lens of
evaluation. We introduce TrueMatch, a benchmark comprising bags of highly
similar images that uses SR to assess the captioner's ability to capture subtle
visual distinctions. We evaluate and compare several state-of-the-art
open-source MLLMs on TrueMatch, and find that our SR approach outperforms them
all by a significant margin (e.g., +4.8% to +7.1% over Cambrian) while having 1-2
orders of magnitude fewer parameters.
| [
{
"version": "v1",
"created": "Wed, 4 Sep 2024 18:32:39 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 04:34:41 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Gaur",
"Manu",
""
],
[
"Singh",
"Darshan",
""
],
[
"Tapaswi",
"Makarand",
""
]
] | TITLE: No Detail Left Behind: Revisiting Self-Retrieval for Fine-Grained Image
Captioning
ABSTRACT: Image captioning systems are unable to generate fine-grained captions as they
are trained on data that is either noisy (alt-text) or generic (human
annotations). This is further exacerbated by maximum likelihood training that
encourages generation of frequently occurring phrases. Previous works have
tried to address this limitation by fine-tuning captioners with a
self-retrieval (SR) reward. However, we find that SR fine-tuning has a tendency
to reduce caption faithfulness and even hallucinate. In this work, we
circumvent this bottleneck by improving the MLE initialization of the
captioning system and designing a curriculum for the SR fine-tuning process. To
this end, we present (1) Visual Caption Boosting, a novel framework to
instill fine-grainedness in generic image captioning datasets while remaining
anchored in human annotations; and (2) BagCurri, a carefully designed training
curriculum that more optimally leverages the contrastive nature of the
self-retrieval reward. Jointly, they enable the captioner to describe
fine-grained aspects in the image while preserving faithfulness to ground-truth
captions. Our approach outperforms previous work by +8.9% on SR against 99
random distractors (RD100) (Dessi et al., 2023); and +7.6% on ImageCoDe.
Additionally, existing metrics to evaluate captioning systems fail to reward
diversity or evaluate a model's fine-grained understanding ability. Our third
contribution addresses this by proposing self-retrieval from the lens of
evaluation. We introduce TrueMatch, a benchmark comprising bags of highly
similar images that uses SR to assess the captioner's ability to capture subtle
visual distinctions. We evaluate and compare several state-of-the-art
open-source MLLMs on TrueMatch, and find that our SR approach outperforms them
all by a significant margin (e.g., +4.8% to +7.1% over Cambrian) while having 1-2
orders of magnitude fewer parameters.
|
2409.13415 | Raghunath Sahoo | Kamaljeet Singh, Kangkan Goswami, Raghunath Sahoo, and Sumanta Samal | Design and development of advanced Al-Ti-V alloys for beampipe
applications in particle accelerators | Same as the published version | Phys. Rev. Accel. Beams 28, 043101 (2025) | 10.1103/PhysRevAccelBeams.28.043101 | null | physics.acc-ph cond-mat.mtrl-sci hep-ex nucl-ex | http://creativecommons.org/licenses/by-sa/4.0/ | The present investigation reports the design and development of an advanced
material with a high figure of merit (FoM) for beampipe applications in
particle accelerators by bringing together computational and
experimental approaches. Machine learning algorithms have been used to predict
the phase(s), low density, and high radiation length of the designed Al-Ti-V
alloys. Al-Ti-V alloys with various compositions for single-phase and
dual-phase mixtures, liquidus temperature, and density values are obtained
using the Latin hypercube sampling method in TC Python Thermo-Calc software.
The obtained dataset is utilized to train the machine-learning algorithms.
Classification algorithms such as XGBoost and regression models such as Linear
Regression and Random Forest regressor have been used to compute the number of
phases, radiation length, and density, respectively. The XGBoost algorithm
shows an accuracy of $98\%$, the Linear Regression model shows an accuracy of $94\%$,
and the Random Forest regressor model is accurate up to $99\%$. The developed
Al-Ti-V alloys exhibit high radiation length as well as a good combination of
high elastic modulus and toughness due to the synergistic effect of the
presence of hard $Al_3Ti$ phase along with a minor volume fraction of FCC
$(Al)_{ss}$ solid solution phase mixture. The comparison of our alloys, alloy-1
($Al_{75.2}Ti_{22.8}V_{2}$) and alloy-2 ($Al_{89}Ti_{10}V_{1}$) shows an
increase in the radiation length by seven times and a decrease in the density
by two to three times as compared to stainless steel 304, the preferred
material for constructing beampipes in low-energy particle accelerators.
Further, we experimentally verify the elastic modulus of the alloy-1 and
compute the FoM equal to 0.416, which is better than other existing materials
for beampipes in low-energy experiments.
| [
{
"version": "v1",
"created": "Fri, 20 Sep 2024 11:27:13 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 09:58:04 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Singh",
"Kamaljeet",
""
],
[
"Goswami",
"Kangkan",
""
],
[
"Sahoo",
"Raghunath",
""
],
[
"Samal",
"Sumanta",
""
]
] | TITLE: Design and development of advanced Al-Ti-V alloys for beampipe
applications in particle accelerators
ABSTRACT: The present investigation reports the design and development of an advanced
material with a high figure of merit (FoM) for beampipe applications in
particle accelerators by bringing together computational and
experimental approaches. Machine learning algorithms have been used to predict
the phase(s), low density, and high radiation length of the designed Al-Ti-V
alloys. Al-Ti-V alloys with various compositions for single-phase and
dual-phase mixtures, liquidus temperature, and density values are obtained
using the Latin hypercube sampling method in TC Python Thermo-Calc software.
The obtained dataset is utilized to train the machine-learning algorithms.
Classification algorithms such as XGBoost and regression models such as Linear
Regression and Random Forest regressor have been used to compute the number of
phases, radiation length, and density, respectively. The XGBoost algorithm
shows an accuracy of $98\%$, the Linear Regression model shows an accuracy of $94\%$,
and the Random Forest regressor model is accurate up to $99\%$. The developed
Al-Ti-V alloys exhibit high radiation length as well as a good combination of
high elastic modulus and toughness due to the synergistic effect of the
presence of hard $Al_3Ti$ phase along with a minor volume fraction of FCC
$(Al)_{ss}$ solid solution phase mixture. The comparison of our alloys, alloy-1
($Al_{75.2}Ti_{22.8}V_{2}$) and alloy-2 ($Al_{89}Ti_{10}V_{1}$) shows an
increase in the radiation length by seven times and a decrease in the density
by two to three times as compared to stainless steel 304, the preferred
material for constructing beampipes in low-energy particle accelerators.
Further, we experimentally verify the elastic modulus of the alloy-1 and
compute the FoM equal to 0.416, which is better than other existing materials
for beampipes in low-energy experiments.
|
2409.16507 | Ryan Lagerquist | Ryan Lagerquist, Galina Chirokova, Robert DeMaria, Mark DeMaria, Imme
Ebert-Uphoff | Center-fixing of tropical cyclones using uncertainty-aware deep learning
applied to high-temporal-resolution geostationary satellite imagery | Submitted to AMS journal Weather and Forecasting. Main body is 64
pages and 17 figures; supplement is another 33 pages and 31 figures | null | null | null | physics.ao-ph cs.AI | http://creativecommons.org/licenses/by/4.0/ | Determining the location of a tropical cyclone's (TC) surface circulation
center -- "center-fixing" -- is a critical first step in the TC-forecasting
process, affecting current and future estimates of track, intensity, and
structure. Despite a recent increase in automated center-fixing methods, only
one such method (ARCHER-2) is operational, and its best performance is achieved
when using microwave or scatterometer data, which are not available at every
forecast cycle. We develop a deep-learning algorithm called GeoCenter; besides
a few scalars in the operational ATCF, it relies only on geostationary IR
satellite imagery, which is available for all TC basins at high frequency (10
min) and low latency (< 10 min) during both day and night. GeoCenter ingests an
animation (time series) of IR images, including 9 channels at lag times up to 4
hours. The animation is centered at a "first guess" location, offset from the
true TC-center location by 48 km on average and sometimes > 100 km; GeoCenter
is tasked with correcting this offset. On an independent testing dataset,
GeoCenter achieves a mean/median/RMS (root mean square) error of 26.6/22.2/32.4
km for all systems, 24.7/20.8/30.0 km for tropical systems, and 14.6/12.5/17.3
km for category-2--5 hurricanes. These values are similar to ARCHER-2 errors
with microwave or scatterometer data, and better than ARCHER-2 errors when only
IR data are available. GeoCenter also performs skillful uncertainty
quantification, producing a well-calibrated ensemble of 150 TC-center
locations. Furthermore, all predictors used by GeoCenter are available in real
time, which would make GeoCenter easy to implement operationally every 10 min.
| [
{
"version": "v1",
"created": "Tue, 24 Sep 2024 23:39:56 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 18:34:36 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Lagerquist",
"Ryan",
""
],
[
"Chirokova",
"Galina",
""
],
[
"DeMaria",
"Robert",
""
],
[
"DeMaria",
"Mark",
""
],
[
"Ebert-Uphoff",
"Imme",
""
]
] | TITLE: Center-fixing of tropical cyclones using uncertainty-aware deep learning
applied to high-temporal-resolution geostationary satellite imagery
ABSTRACT: Determining the location of a tropical cyclone's (TC) surface circulation
center -- "center-fixing" -- is a critical first step in the TC-forecasting
process, affecting current and future estimates of track, intensity, and
structure. Despite a recent increase in automated center-fixing methods, only
one such method (ARCHER-2) is operational, and its best performance is achieved
when using microwave or scatterometer data, which are not available at every
forecast cycle. We develop a deep-learning algorithm called GeoCenter; besides
a few scalars in the operational ATCF, it relies only on geostationary IR
satellite imagery, which is available for all TC basins at high frequency (10
min) and low latency (< 10 min) during both day and night. GeoCenter ingests an
animation (time series) of IR images, including 9 channels at lag times up to 4
hours. The animation is centered at a "first guess" location, offset from the
true TC-center location by 48 km on average and sometimes > 100 km; GeoCenter
is tasked with correcting this offset. On an independent testing dataset,
GeoCenter achieves a mean/median/RMS (root mean square) error of 26.6/22.2/32.4
km for all systems, 24.7/20.8/30.0 km for tropical systems, and 14.6/12.5/17.3
km for category-2--5 hurricanes. These values are similar to ARCHER-2 errors
with microwave or scatterometer data, and better than ARCHER-2 errors when only
IR data are available. GeoCenter also performs skillful uncertainty
quantification, producing a well-calibrated ensemble of 150 TC-center
locations. Furthermore, all predictors used by GeoCenter are available in real
time, which would make GeoCenter easy to implement operationally every 10 min.
|
2410.00876 | Sharmishtha Dutta | Sharmishtha Dutta, Alex Gittens, Mohammed J. Zaki, Charu C. Aggarwal | Replacing Paths with Connection-Biased Attention for Knowledge Graph
Completion | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Knowledge graph (KG) completion aims to identify additional facts that can be
inferred from the existing facts in the KG. Recent developments in this field
have explored this task in the inductive setting, where at test time one sees
entities that were not present during training; the most performant models in
the inductive setting have employed path encoding modules in addition to
standard subgraph encoding modules. This work similarly focuses on KG
completion in the inductive setting, without the explicit use of path
encodings, which can be time-consuming and introduce several hyperparameters
that require costly hyperparameter optimization. Our approach uses a
Transformer-based subgraph encoding module only; we introduce connection-biased
attention and entity role embeddings into the subgraph encoding module to
eliminate the need for an expensive and time-consuming path encoding module.
Evaluations on standard inductive KG completion benchmark datasets demonstrate
that our \textbf{C}onnection-\textbf{B}iased \textbf{Li}nk \textbf{P}rediction
(CBLiP) model has superior performance to models that do not use path
information. Compared to models that utilize path information, CBLiP shows
competitive or superior performance while being faster. Additionally, to show
that the effectiveness of connection-biased attention and entity role
embeddings also holds in the transductive setting, we compare CBLiP's
performance on the relation prediction task in that setting.
| [
{
"version": "v1",
"created": "Tue, 1 Oct 2024 17:12:41 GMT"
},
{
"version": "v2",
"created": "Thu, 19 Dec 2024 20:34:15 GMT"
},
{
"version": "v3",
"created": "Sun, 23 Feb 2025 22:52:22 GMT"
},
{
"version": "v4",
"created": "Wed, 9 Apr 2025 02:12:28 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Dutta",
"Sharmishtha",
""
],
[
"Gittens",
"Alex",
""
],
[
"Zaki",
"Mohammed J.",
""
],
[
"Aggarwal",
"Charu C.",
""
]
] | TITLE: Replacing Paths with Connection-Biased Attention for Knowledge Graph
Completion
ABSTRACT: Knowledge graph (KG) completion aims to identify additional facts that can be
inferred from the existing facts in the KG. Recent developments in this field
have explored this task in the inductive setting, where at test time one sees
entities that were not present during training; the most performant models in
the inductive setting have employed path encoding modules in addition to
standard subgraph encoding modules. This work similarly focuses on KG
completion in the inductive setting, without the explicit use of path
encodings, which can be time-consuming and introduce several hyperparameters
that require costly hyperparameter optimization. Our approach uses a
Transformer-based subgraph encoding module only; we introduce connection-biased
attention and entity role embeddings into the subgraph encoding module to
eliminate the need for an expensive and time-consuming path encoding module.
Evaluations on standard inductive KG completion benchmark datasets demonstrate
that our \textbf{C}onnection-\textbf{B}iased \textbf{Li}nk \textbf{P}rediction
(CBLiP) model has superior performance to models that do not use path
information. Compared to models that utilize path information, CBLiP shows
competitive or superior performance while being faster. Additionally, to show
that the effectiveness of connection-biased attention and entity role
embeddings also holds in the transductive setting, we compare CBLiP's
performance on the relation prediction task in that setting.
|
2410.07991 | Lorenzo Cima | Tommaso Giorgi, Lorenzo Cima, Tiziano Fagni, Marco Avvenuti, Stefano
Cresci | Human and LLM Biases in Hate Speech Annotations: A Socio-Demographic
Analysis of Annotators and Targets | null | null | null | null | cs.CL cs.AI cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rise of online platforms has exacerbated the spread of hate speech, demanding
scalable and effective detection. However, the accuracy of hate speech
detection systems heavily relies on human-labeled data, which is inherently
susceptible to biases. While previous work has examined the issue, the
interplay between the characteristics of the annotator and those of the target
of the hate is still unexplored. We fill this gap by leveraging an extensive
dataset with rich socio-demographic information of both annotators and targets,
uncovering how human biases manifest in relation to the target's attributes.
Our analysis surfaces the presence of widespread biases, which we
quantitatively describe and characterize based on their intensity and
prevalence, revealing marked differences. Furthermore, we compare human biases
with those exhibited by persona-based LLMs. Our findings indicate that while
persona-based LLMs do exhibit biases, these differ significantly from those of
human annotators. Overall, our work offers new and nuanced results on human
biases in hate speech annotations, as well as fresh insights into the design of
AI-driven hate speech detection systems.
| [
{
"version": "v1",
"created": "Thu, 10 Oct 2024 14:48:57 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Oct 2024 14:44:45 GMT"
},
{
"version": "v3",
"created": "Sun, 20 Oct 2024 08:13:18 GMT"
},
{
"version": "v4",
"created": "Thu, 19 Dec 2024 15:16:49 GMT"
},
{
"version": "v5",
"created": "Wed, 9 Apr 2025 15:05:27 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Giorgi",
"Tommaso",
""
],
[
"Cima",
"Lorenzo",
""
],
[
"Fagni",
"Tiziano",
""
],
[
"Avvenuti",
"Marco",
""
],
[
"Cresci",
"Stefano",
""
]
] | TITLE: Human and LLM Biases in Hate Speech Annotations: A Socio-Demographic
Analysis of Annotators and Targets
ABSTRACT: The rise of online platforms has exacerbated the spread of hate speech, demanding
scalable and effective detection. However, the accuracy of hate speech
detection systems heavily relies on human-labeled data, which is inherently
susceptible to biases. While previous work has examined the issue, the
interplay between the characteristics of the annotator and those of the target
of the hate is still unexplored. We fill this gap by leveraging an extensive
dataset with rich socio-demographic information of both annotators and targets,
uncovering how human biases manifest in relation to the target's attributes.
Our analysis surfaces the presence of widespread biases, which we
quantitatively describe and characterize based on their intensity and
prevalence, revealing marked differences. Furthermore, we compare human biases
with those exhibited by persona-based LLMs. Our findings indicate that while
persona-based LLMs do exhibit biases, these differ significantly from those of
human annotators. Overall, our work offers new and nuanced results on human
biases in hate speech annotations, as well as fresh insights into the design of
AI-driven hate speech detection systems.
|
2410.08427 | Jens Dietrich | Jens Dietrich, Tim White, Behnaz Hassanshahi, Paddy Krishnan | Levels of Binary Equivalence for the Comparison of Binaries from
Alternative Builds | 20 pages, 1 figure, 10 tables | null | null | null | cs.CR cs.SE | http://creativecommons.org/licenses/by-sa/4.0/ | In response to challenges in software supply chain security, several
organisations have created infrastructures to independently build commodity
open source projects and release the resulting binaries. Build platform
variability can strengthen security as it facilitates the detection of
compromised build environments. Furthermore, by improving the security posture
of the build platform and collecting provenance information during the build,
the resulting artifacts can be used with greater trust. Such offerings are now
available from Google, Oracle and RedHat. The availability of multiple binaries
built from the same sources creates new challenges and opportunities, and
raises questions such as: 'Does build A confirm the integrity of build B?' or
'Can build A reveal a compromised build B?'. To answer such questions requires
a notion of equivalence between binaries. We demonstrate that the obvious
approach based on bitwise equality has significant shortcomings in practice,
and that there is value in opting for alternative notions. We conceptualise
this by introducing levels of equivalence, inspired by clone detection types.
We demonstrate the value of these new levels through several experiments. We
construct a dataset consisting of Java binaries built from the same sources
independently by different providers, resulting in 14,156 pairs of binaries in
total. We then compare the compiled class files in those jar files and find
that for 3,750 pairs of jars (26.49%) there is at least one such file that is
different, also forcing the jar files and their cryptographic hashes to be
different. However, based on the new equivalence levels, we can still establish
that many of them are practically equivalent. We evaluate several candidate
equivalence relations on a semi-synthetic dataset that provides oracles
consisting of pairs of binaries that either should be, or must not be
equivalent.
| [
{
"version": "v1",
"created": "Fri, 11 Oct 2024 00:16:26 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 08:55:38 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Dietrich",
"Jens",
""
],
[
"White",
"Tim",
""
],
[
"Hassanshahi",
"Behnaz",
""
],
[
"Krishnan",
"Paddy",
""
]
] | TITLE: Levels of Binary Equivalence for the Comparison of Binaries from
Alternative Builds
ABSTRACT: In response to challenges in software supply chain security, several
organisations have created infrastructures to independently build commodity
open source projects and release the resulting binaries. Build platform
variability can strengthen security as it facilitates the detection of
compromised build environments. Furthermore, by improving the security posture
of the build platform and collecting provenance information during the build,
the resulting artifacts can be used with greater trust. Such offerings are now
available from Google, Oracle and RedHat. The availability of multiple binaries
built from the same sources creates new challenges and opportunities, and
raises questions such as: 'Does build A confirm the integrity of build B?' or
'Can build A reveal a compromised build B?'. To answer such questions requires
a notion of equivalence between binaries. We demonstrate that the obvious
approach based on bitwise equality has significant shortcomings in practice,
and that there is value in opting for alternative notions. We conceptualise
this by introducing levels of equivalence, inspired by clone detection types.
We demonstrate the value of these new levels through several experiments. We
construct a dataset consisting of Java binaries built from the same sources
independently by different providers, resulting in 14,156 pairs of binaries in
total. We then compare the compiled class files in those jar files and find
that for 3,750 pairs of jars (26.49%) there is at least one such file that is
different, also forcing the jar files and their cryptographic hashes to be
different. However, based on the new equivalence levels, we can still establish
that many of them are practically equivalent. We evaluate several candidate
equivalence relations on a semi-synthetic dataset that provides oracles
consisting of pairs of binaries that either should be, or must not be
equivalent.
|
2410.12695 | Phoenix Yu | Phoenix Yu, Tilo Burghardt, Andrew W Dowsey, Neill W Campbell | Holstein-Friesian Re-Identification using Multiple Cameras and
Self-Supervision on a Working Farm | 24 pages, 10 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We present MultiCamCows2024, a farm-scale image dataset filmed across
multiple cameras for the biometric identification of individual
Holstein-Friesian cattle exploiting their unique black and white coat-patterns.
Captured by three ceiling-mounted visual sensors covering adjacent barn areas
over seven days on a working dairy farm, the dataset comprises 101,329 images
of 90 cows, plus underlying original CCTV footage. The dataset is provided with
full computer vision recognition baselines, that is, both a supervised and a
self-supervised learning framework for individual cow identification trained on
cattle tracklets. We report a performance above 96% single image identification
accuracy from the dataset and demonstrate that combining data from multiple
cameras during learning enhances self-supervised identification. We show that
our framework enables automatic cattle identification, barring only the simple
human verification of tracklet integrity during data collection. Crucially, our
study highlights that multi-camera, supervised and self-supervised components
in tandem not only deliver highly accurate individual cow identification, but
also achieve this efficiently with no labelling of cattle identities by humans.
We argue that this improvement in efficacy has practical implications for
livestock management, behaviour analysis, and agricultural monitoring. For
reproducibility and practical ease of use, we publish all key software and code
including re-identification components and the species detector with this
paper, available at https://tinyurl.com/MultiCamCows2024.
| [
{
"version": "v1",
"created": "Wed, 16 Oct 2024 15:58:47 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 17:01:38 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Yu",
"Phoenix",
""
],
[
"Burghardt",
"Tilo",
""
],
[
"Dowsey",
"Andrew W",
""
],
[
"Campbell",
"Neill W",
""
]
] | TITLE: Holstein-Friesian Re-Identification using Multiple Cameras and
Self-Supervision on a Working Farm
ABSTRACT: We present MultiCamCows2024, a farm-scale image dataset filmed across
multiple cameras for the biometric identification of individual
Holstein-Friesian cattle exploiting their unique black and white coat-patterns.
Captured by three ceiling-mounted visual sensors covering adjacent barn areas
over seven days on a working dairy farm, the dataset comprises 101,329 images
of 90 cows, plus underlying original CCTV footage. The dataset is provided with
full computer vision recognition baselines, that is, both a supervised and a
self-supervised learning framework for individual cow identification trained on
cattle tracklets. We report a performance above 96% single image identification
accuracy from the dataset and demonstrate that combining data from multiple
cameras during learning enhances self-supervised identification. We show that
our framework enables automatic cattle identification, barring only the simple
human verification of tracklet integrity during data collection. Crucially, our
study highlights that multi-camera, supervised and self-supervised components
in tandem not only deliver highly accurate individual cow identification, but
also achieve this efficiently with no labelling of cattle identities by humans.
We argue that this improvement in efficacy has practical implications for
livestock management, behaviour analysis, and agricultural monitoring. For
reproducibility and practical ease of use, we publish all key software and code
including re-identification components and the species detector with this
paper, available at https://tinyurl.com/MultiCamCows2024.
|
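The tracklet-based self-supervision described above lends itself to a contrastive objective in which images from the same automatically extracted tracklet act as positives. A minimal PyTorch sketch under that assumption; this is one plausible recipe, not necessarily the paper's exact pipeline:

    import torch
    import torch.nn.functional as F

    def tracklet_contrastive_loss(emb, tracklet_ids, temperature=0.1):
        # emb: (N, D) image embeddings; tracklet_ids: (N,) long tensor giving
        # the tracklet each image came from. Images sharing a tracklet are
        # treated as the same (unlabelled) cow; assumes >= 2 images/tracklet.
        emb = F.normalize(emb, dim=1)
        sim = emb @ emb.t() / temperature
        self_mask = torch.eye(emb.size(0), dtype=torch.bool, device=emb.device)
        sim = sim.masked_fill(self_mask, float('-inf'))      # drop self-pairs
        pos = tracklet_ids.unsqueeze(0) == tracklet_ids.unsqueeze(1)
        pos &= ~self_mask
        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
        return -log_prob[pos].mean()   # simplified supervised-contrastive loss

Only tracklet integrity, not cow identity, is verified by a human in this setup, which is what makes the labelling-free claim possible.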
2410.15198 | Md Elias Hossain | Elias Hossain, Tasfia Nuzhat, Shamsul Masum, Shahram Rahimi and
Noorbakhsh Amiri Golilarz | Medical-GAT: Cancer Document Classification Leveraging Graph-Based
Residual Network for Scenarios with Limited Data | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Accurate classification of cancer-related medical abstracts is crucial for
healthcare management and research. However, obtaining large, labeled datasets
in the medical domain is challenging due to privacy concerns and the complexity
of clinical data. This scarcity of annotated data impedes the development of
effective machine learning models for cancer document classification. To
address this challenge, we present a curated dataset of 1,874 biomedical
abstracts, categorized into thyroid cancer, colon cancer, lung cancer, and
generic topics. Our research focuses on leveraging this dataset to improve
classification performance, particularly in data-scarce scenarios. We introduce
a Residual Graph Attention Network (R-GAT) with multiple graph attention layers
that capture the semantic information and structural relationships within
cancer-related documents. Our R-GAT model is compared with various techniques,
including transformer-based models such as Bidirectional Encoder
Representations from Transformers (BERT), RoBERTa, and domain-specific models
like BioBERT and Bio+ClinicalBERT. We also evaluate deep learning models
(CNNs, LSTMs) and traditional machine learning models (Logistic Regression,
SVM). Additionally, we explore ensemble approaches that combine deep learning
models to enhance classification. Various feature extraction methods are
assessed, including Term Frequency-Inverse Document Frequency (TF-IDF) with
unigrams and bigrams, Word2Vec, and tokenizers from BERT and RoBERTa. The R-GAT
model outperforms other techniques, achieving precision, recall, and F1 scores
of 0.99, 0.97, and 0.98 for thyroid cancer; 0.96, 0.94, and 0.95 for colon
cancer; 0.96, 0.99, and 0.97 for lung cancer; and 0.95, 0.96, and 0.95 for
generic topics.
| [
{
"version": "v1",
"created": "Sat, 19 Oct 2024 20:07:40 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Oct 2024 14:42:30 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Mar 2025 02:20:22 GMT"
},
{
"version": "v4",
"created": "Tue, 8 Apr 2025 22:53:41 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Hossain",
"Elias",
""
],
[
"Nuzhat",
"Tasfia",
""
],
[
"Masum",
"Shamsul",
""
],
[
"Rahimi",
"Shahram",
""
],
[
"Golilarz",
"Noorbakhsh Amiri",
""
]
] | TITLE: Medical-GAT: Cancer Document Classification Leveraging Graph-Based
Residual Network for Scenarios with Limited Data
ABSTRACT: Accurate classification of cancer-related medical abstracts is crucial for
healthcare management and research. However, obtaining large, labeled datasets
in the medical domain is challenging due to privacy concerns and the complexity
of clinical data. This scarcity of annotated data impedes the development of
effective machine learning models for cancer document classification. To
address this challenge, we present a curated dataset of 1,874 biomedical
abstracts, categorized into thyroid cancer, colon cancer, lung cancer, and
generic topics. Our research focuses on leveraging this dataset to improve
classification performance, particularly in data-scarce scenarios. We introduce
a Residual Graph Attention Network (R-GAT) with multiple graph attention layers
that capture the semantic information and structural relationships within
cancer-related documents. Our R-GAT model is compared with various techniques,
including transformer-based models such as Bidirectional Encoder
Representations from Transformers (BERT), RoBERTa, and domain-specific models
like BioBERT and Bio+ClinicalBERT. We also evaluate deep learning models
(CNNs, LSTMs) and traditional machine learning models (Logistic Regression,
SVM). Additionally, we explore ensemble approaches that combine deep learning
models to enhance classification. Various feature extraction methods are
assessed, including Term Frequency-Inverse Document Frequency (TF-IDF) with
unigrams and bigrams, Word2Vec, and tokenizers from BERT and RoBERTa. The R-GAT
model outperforms other techniques, achieving precision, recall, and F1 scores
of 0.99, 0.97, and 0.98 for thyroid cancer; 0.96, 0.94, and 0.95 for colon
cancer; 0.96, 0.99, and 0.97 for lung cancer; and 0.95, 0.96, and 0.95 for
generic topics.
|
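The residual graph-attention design can be pictured as a stack of attention layers with skip connections. A minimal sketch using PyTorch Geometric's GATConv; the dimensions, dropout, and normalisation here are assumptions rather than the paper's exact configuration:

    import torch
    import torch.nn.functional as F
    from torch_geometric.nn import GATConv

    class ResidualGATBlock(torch.nn.Module):
        # One graph-attention layer wrapped in a residual connection, the
        # unit an R-GAT stacks to mix semantic and structural information.
        def __init__(self, dim, heads=4):
            super().__init__()
            self.gat = GATConv(dim, dim, heads=heads, concat=False, dropout=0.2)
            self.norm = torch.nn.LayerNorm(dim)

        def forward(self, x, edge_index):
            return self.norm(x + F.elu(self.gat(x, edge_index)))

The skip connection is what lets several attention layers be stacked without degrading the small-data training signal.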
2410.18388 | Bo Han | Bo Han, Yuheng Jia, Hui Liu, Junhui Hou | Irregular Tensor Low-Rank Representation for Hyperspectral Image
Representation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spectral variations pose a common challenge in analyzing hyperspectral images
(HSI). To address this, low-rank tensor representation has emerged as a robust
strategy, leveraging inherent correlations within HSI data. However, the
spatial distribution of ground objects in HSIs is inherently irregular,
existing naturally in tensor format, with numerous class-specific regions
manifesting as irregular tensors. Current low-rank representation techniques
are designed for regular tensor structures and overlook this fundamental
irregularity in real-world HSIs, leading to performance limitations. To tackle
this issue, we propose a novel model for irregular tensor low-rank
representation tailored to efficiently model irregular 3D cubes. By
incorporating a non-convex nuclear norm to promote low-rankness and integrating
a global negative low-rank term to enhance the discriminative ability, our
proposed model is formulated as a constrained optimization problem and solved
using an alternating augmented Lagrangian method. Experimental validation
conducted on four public datasets demonstrates the superior performance of our
method compared to existing state-of-the-art approaches. The code is publicly
available at https://github.com/hb-studying/ITLRR.
| [
{
"version": "v1",
"created": "Thu, 24 Oct 2024 02:56:22 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Feb 2025 13:44:29 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Apr 2025 02:24:14 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Han",
"Bo",
""
],
[
"Jia",
"Yuheng",
""
],
[
"Liu",
"Hui",
""
],
[
"Hou",
"Junhui",
""
]
] | TITLE: Irregular Tensor Low-Rank Representation for Hyperspectral Image
Representation
ABSTRACT: Spectral variations pose a common challenge in analyzing hyperspectral images
(HSI). To address this, low-rank tensor representation has emerged as a robust
strategy, leveraging inherent correlations within HSI data. However, the
spatial distribution of ground objects in HSIs is inherently irregular,
existing naturally in tensor format, with numerous class-specific regions
manifesting as irregular tensors. Current low-rank representation techniques
are designed for regular tensor structures and overlook this fundamental
irregularity in real-world HSIs, leading to performance limitations. To tackle
this issue, we propose a novel model for irregular tensor low-rank
representation tailored to efficiently model irregular 3D cubes. By
incorporating a non-convex nuclear norm to promote low-rankness and integrating
a global negative low-rank term to enhance the discriminative ability, our
proposed model is formulated as a constrained optimization problem and solved
using an alternating augmented Lagrangian method. Experimental validation
conducted on four public datasets demonstrates the superior performance of our
method compared to existing state-of-the-art approaches. The code is publicly
available at https://github.com/hb-studying/ITLRR.
|
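Schematically, the objective described above combines a non-convex low-rank penalty on the irregular class regions with a global negative low-rank term. In illustrative LRR-style notation (not the paper's exact formulation):

    \min_{Z,E}\ \sum_{k}\|Z_k\|_{\gamma}\;-\;\lambda\,\|Z\|_{*}\;+\;\beta\,\|E\|_{2,1}
    \quad\text{s.t.}\quad X = XZ + E,

where each Z_k is the representation block of an irregular class-specific region, \|\cdot\|_{\gamma} is a non-convex surrogate of the nuclear norm promoting within-region low-rankness, and the negative global term sharpens discrimination between regions; the constrained problem is then handled by an alternating augmented Lagrangian method, as the abstract states.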
2410.21591 | Zifeng Wang | Zifeng Wang, Benjamin Danek, Ziwei Yang, Zheng Chen, Jimeng Sun | Can Large Language Models Replace Data Scientists in Biomedical
Research? | null | null | null | null | cs.AI cs.CL q-bio.GN q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Data science plays a critical role in biomedical research, but it requires
professionals with expertise in coding and medical data analysis. Large
language models (LLMs) have shown great potential in supporting medical tasks
and performing well in general coding tests. However, existing evaluations fail
to assess their capability in biomedical data science, particularly in handling
diverse data types such as genomics and clinical datasets. To address this gap,
we developed a benchmark of data science coding tasks derived from the analyses
of 39 published studies. This benchmark comprises 293 coding tasks (128 in
Python and 165 in R) performed on real-world TCGA-type genomics and clinical
data. Our findings reveal that the vanilla prompting of LLMs yields suboptimal
performances due to drawbacks in following input instructions, understanding
target data, and adhering to standard analysis practices. Next, we benchmarked
six cutting-edge LLMs and advanced adaptation methods, finding two methods to
be particularly effective: chain-of-thought prompting, which provides a
step-by-step plan for data analysis and led to a 21% code accuracy
improvement (56.6% versus 35.3%); and self-reflection, enabling LLMs to refine
the buggy code iteratively, yielding an 11% code accuracy improvement (45.5%
versus 34.3%). Building on these insights, we developed a platform that
integrates LLMs into the data science workflow for medical professionals. In a
user study with five medical professionals, we found that while LLMs cannot
fully automate programming tasks, they significantly streamline the programming
process. We found that 80% of their submitted code solutions were incorporated
from LLM-generated code, with up to 96% reuse in some cases. Our analysis
highlights the potential of LLMs to enhance data science efficiency in
biomedical research when integrated into expert workflows.
| [
{
"version": "v1",
"created": "Mon, 28 Oct 2024 22:48:06 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 21:48:54 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Wang",
"Zifeng",
""
],
[
"Danek",
"Benjamin",
""
],
[
"Yang",
"Ziwei",
""
],
[
"Chen",
"Zheng",
""
],
[
"Sun",
"Jimeng",
""
]
] | TITLE: Can Large Language Models Replace Data Scientists in Biomedical
Research?
ABSTRACT: Data science plays a critical role in biomedical research, but it requires
professionals with expertise in coding and medical data analysis. Large
language models (LLMs) have shown great potential in supporting medical tasks
and performing well in general coding tests. However, existing evaluations fail
to assess their capability in biomedical data science, particularly in handling
diverse data types such as genomics and clinical datasets. To address this gap,
we developed a benchmark of data science coding tasks derived from the analyses
of 39 published studies. This benchmark comprises 293 coding tasks (128 in
Python and 165 in R) performed on real-world TCGA-type genomics and clinical
data. Our findings reveal that the vanilla prompting of LLMs yields suboptimal
performances due to drawbacks in following input instructions, understanding
target data, and adhering to standard analysis practices. Next, we benchmarked
six cutting-edge LLMs and advanced adaptation methods, finding two methods to
be particularly effective: chain-of-thought prompting, which provides a
step-by-step plan for data analysis and led to a 21% code accuracy
improvement (56.6% versus 35.3%); and self-reflection, enabling LLMs to refine
the buggy code iteratively, yielding an 11% code accuracy improvement (45.5%
versus 34.3%). Building on these insights, we developed a platform that
integrates LLMs into the data science workflow for medical professionals. In a
user study with five medical professionals, we found that while LLMs cannot
fully automate programming tasks, they significantly streamline the programming
process. We found that 80% of their submitted code solutions were incorporated
from LLM-generated code, with up to 96% reuse in some cases. Our analysis
highlights the potential of LLMs to enhance data science efficiency in
biomedical research when integrated into expert workflows.
|
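Of the two adaptation methods, self-reflection is easy to picture as a test-and-repair loop. A minimal sketch; `llm` and `run_tests` are hypothetical callables standing in for the model API and the benchmark's checker, and the paper's actual prompts and platform differ:

    def self_reflect(task, llm, run_tests, max_rounds=3):
        # Iteratively ask an LLM to repair its own analysis code.
        # `llm(prompt) -> code` and `run_tests(code) -> (ok, log)` are
        # hypothetical callables, not an API from the paper.
        code = llm(f"Write analysis code for:\n{task}")
        for _ in range(max_rounds):
            ok, log = run_tests(code)
            if ok:
                return code
            code = llm(f"Task:\n{task}\n\nPrevious code:\n{code}\n\n"
                       f"It failed with:\n{log}\nReflect and return fixed code.")
        return code

Chain-of-thought prompting would instead prepend a step-by-step analysis plan to the first request.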
2410.22622 | Dung Nguyen | Dung Thuy Nguyen, Taylor T. Johnson, Kevin Leach | PARDON: Privacy-Aware and Robust Federated Domain Generalization | 2025 IEEE 45th International Conference on Distributed Computing
Systems (ICDCS) | null | null | null | cs.LG cs.CV cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Federated Learning (FL) shows promise in preserving privacy and enabling
collaborative learning. However, most current solutions focus on private data
collected from a single domain. A significant challenge arises when client data
comes from diverse domains (i.e., domain shift), leading to poor performance on
unseen domains. Existing Federated Domain Generalization approaches address
this problem but assume each client holds data for an entire domain, limiting
their practicality in real-world scenarios with domain-based heterogeneity and
client sampling. In addition, certain methods enable information sharing among
clients, raising privacy concerns as this information could be used to
reconstruct sensitive private data.
To overcome this, we introduce FISC, a novel FedDG paradigm designed to
robustly handle more complicated domain distributions between clients while
ensuring security. FISC enables learning across domains by extracting an
interpolative style from local styles and employing contrastive learning. This
strategy gives clients multi-domain representations and unbiased convergent
targets. Empirical results on multiple datasets, including PACS, Office-Home,
and IWildCam, show FISC outperforms state-of-the-art (SOTA) methods. Our method
achieves accuracy improvements ranging from 3.64% to 57.22% on unseen domains.
Our code is available at
https://github.com/judydnguyen/PARDON-FedDG.
| [
{
"version": "v1",
"created": "Wed, 30 Oct 2024 00:50:23 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 22:15:47 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Nguyen",
"Dung Thuy",
""
],
[
"Johnson",
"Taylor T.",
""
],
[
"Leach",
"Kevin",
""
]
] | TITLE: PARDON: Privacy-Aware and Robust Federated Domain Generalization
ABSTRACT: Federated Learning (FL) shows promise in preserving privacy and enabling
collaborative learning. However, most current solutions focus on private data
collected from a single domain. A significant challenge arises when client data
comes from diverse domains (i.e., domain shift), leading to poor performance on
unseen domains. Existing Federated Domain Generalization approaches address
this problem but assume each client holds data for an entire domain, limiting
their practicality in real-world scenarios with domain-based heterogeneity and
client sampling. In addition, certain methods enable information sharing among
clients, raising privacy concerns as this information could be used to
reconstruct sensitive private data.
To overcome this, we introduce FISC, a novel FedDG paradigm designed to
robustly handle more complicated domain distributions between clients while
ensuring security. FISC enables learning across domains by extracting an
interpolative style from local styles and employing contrastive learning. This
strategy gives clients multi-domain representations and unbiased convergent
targets. Empirical results on multiple datasets, including PACS, Office-Home,
and IWildCam, show FISC outperforms state-of-the-art (SOTA) methods. Our method
achieves accuracy improvements ranging from 3.64% to 57.22% on unseen domains.
Our code is available at
https://github.com/judydnguyen/PARDON-FedDG.
|
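The "interpolative style" idea can be approximated with per-channel feature statistics in the spirit of AdaIN-style mixing. A generic sketch under that assumption; FISC's actual style extraction and contrastive pairing are defined in the paper:

    import torch

    def channel_stats(feat):
        # "Style" = per-channel mean/std of intermediate features (N, C, H, W).
        mu = feat.mean(dim=(2, 3), keepdim=True)
        sigma = feat.std(dim=(2, 3), keepdim=True) + 1e-6
        return mu, sigma

    def apply_interpolated_style(feat, styles, weights):
        # Re-normalise features toward an interpolation of client styles.
        # `styles` is a list of (mu, sigma) pairs; `weights` sums to one.
        mu, sigma = channel_stats(feat)
        normalized = (feat - mu) / sigma
        mix_mu = sum(w * m for w, (m, _) in zip(weights, styles))
        mix_sigma = sum(w * s for w, (_, s) in zip(weights, styles))
        return normalized * mix_sigma + mix_mu

Sharing only summary statistics rather than raw features is also what limits the reconstruction risk the abstract raises.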
2411.03299 | Roodabeh Safavi | Monika Henzinger, Roodabeh Safavi, Salil Vadhan | Concurrent Composition for Differentially Private Continual Mechanisms | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many intended uses of differential privacy involve a $\textit{continual
mechanism}$ that is set up to run continuously over a long period of time,
making more statistical releases as either queries come in or the dataset is
updated. In this paper, we give the first general treatment of privacy against
$\textit{adaptive}$ adversaries for mechanisms that support dataset updates and
a variety of queries, all arbitrarily interleaved. It also models a very
general notion of neighboring, which includes both event-level and user-level
privacy.
We prove several $\textit{concurrent}$ composition theorems for continual
mechanisms, which ensure privacy even when an adversary can interleave queries
and dataset updates to the different composed mechanisms. Previous concurrent
composition theorems for differential privacy were only for the case when the
dataset is static, with no adaptive updates. Moreover, we also give the first
interactive and continual generalizations of the "parallel composition theorem"
for noninteractive differential privacy. Specifically, we show that the
analogue of the noninteractive parallel composition theorem holds if either
there are no adaptive dataset updates or each of the composed mechanisms
satisfies pure differential privacy, but it fails to hold for composing
approximately differentially private mechanisms with dataset updates.
We then formalize a set of general conditions on a continual mechanism $M$
that runs multiple continual sub-mechanisms such that the privacy guarantees of
$M$ follow directly using the above concurrent composition theorems on the
sub-mechanisms, without further privacy loss. This enables us to give a simpler
and more modular privacy analysis of a recent continual histogram mechanism of
Henzinger, Sricharan, and Steiner. In the case of approximate DP, ours is the
first proof showing that its privacy holds against adaptive adversaries.
| [
{
"version": "v1",
"created": "Tue, 5 Nov 2024 17:50:39 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 18:47:59 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Henzinger",
"Monika",
""
],
[
"Safavi",
"Roodabeh",
""
],
[
"Vadhan",
"Salil",
""
]
] | TITLE: Concurrent Composition for Differentially Private Continual Mechanisms
ABSTRACT: Many intended uses of differential privacy involve a $\textit{continual
mechanism}$ that is set up to run continuously over a long period of time,
making more statistical releases as either queries come in or the dataset is
updated. In this paper, we give the first general treatment of privacy against
$\textit{adaptive}$ adversaries for mechanisms that support dataset updates and
a variety of queries, all arbitrarily interleaved. It also models a very
general notion of neighboring, which includes both event-level and user-level
privacy.
We prove several $\textit{concurrent}$ composition theorems for continual
mechanisms, which ensure privacy even when an adversary can interleave queries
and dataset updates to the different composed mechanisms. Previous concurrent
composition theorems for differential privacy were only for the case when the
dataset is static, with no adaptive updates. Moreover, we also give the first
interactive and continual generalizations of the "parallel composition theorem"
for noninteractive differential privacy. Specifically, we show that the
analogue of the noninteractive parallel composition theorem holds if either
there are no adaptive dataset updates or each of the composed mechanisms
satisfies pure differential privacy, but it fails to hold for composing
approximately differentially private mechanisms with dataset updates.
We then formalize a set of general conditions on a continual mechanism $M$
that runs multiple continual sub-mechanisms such that the privacy guarantees of
$M$ follow directly using the above concurrent composition theorems on the
sub-mechanisms, without further privacy loss. This enables us to give a simpler
and more modular privacy analysis of a recent continual histogram mechanism of
Henzinger, Sricharan, and Steiner. In the case of approximate DP, ours is the
first proof showing that its privacy holds against adaptive adversaries.
|
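For orientation, the classical noninteractive statements that the paper generalizes read as follows. Basic composition:

    \text{If each } M_i \text{ is } (\varepsilon_i,\delta_i)\text{-DP, then }
    (M_1,\dots,M_k) \text{ is } \Big(\sum_{i=1}^{k}\varepsilon_i,\ \sum_{i=1}^{k}\delta_i\Big)\text{-DP},

while the parallel composition theorem says that mechanisms run on disjoint parts of the dataset jointly satisfy \max_i \varepsilon_i-DP. The paper's contribution is establishing when such guarantees survive adaptive, arbitrarily interleaved queries and dataset updates across interactive continual mechanisms.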
2411.03861 | Joseph Geo Benjamin | Joseph Geo Benjamin, Mothilal Asokan, Mohammad Yaqub, Karthik
Nandakumar | FedSECA: Sign Election and Coordinate-wise Aggregation of Gradients for
Byzantine Tolerant Federated Learning | Accepted in 4th Workshop on Federated Learning for Computer Vision
(FedVision-2025), held in conjunction with CVPR-2025 | null | null | null | cs.CV cs.CR | http://creativecommons.org/licenses/by/4.0/ | One of the most common defense strategies against Byzantine clients in
federated learning (FL) is to employ a robust aggregator mechanism that makes
the training more resilient. While many existing Byzantine robust aggregators
provide theoretical convergence guarantees and are empirically effective
against certain categories of attacks, we observe that certain high-strength
attacks can subvert the robust aggregator and collapse the training. To
overcome this limitation, we propose a method called FedSECA for robust Sign
Election and Coordinate-wise Aggregation of gradients in FL that is less
susceptible to malicious updates by an omniscient attacker. The proposed method
has two main components. The Concordance Ratio Induced Sign Election (CRISE)
module determines the consensus direction (elected sign) for each individual
parameter gradient through a weighted voting strategy. The client weights are
assigned based on a novel metric called concordance ratio, which quantifies the
degree of sign agreement between the client gradient updates. Based on the
elected sign, a Robust Coordinate-wise Aggregation (RoCA) strategy is employed,
where variance-reduced sparse gradients are aggregated only if they are in
alignment with the corresponding elected sign. We compare our proposed FedSECA
method against 10 robust aggregators under 7 Byzantine attacks on 3 datasets
and architectures. The results show that existing robust aggregators fail for
at least some attacks, while FedSECA exhibits better robustness. Code -
https://github.com/JosephGeoBenjamin/FedSECA-ByzantineTolerance
| [
{
"version": "v1",
"created": "Wed, 6 Nov 2024 12:14:11 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 21:19:40 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Benjamin",
"Joseph Geo",
""
],
[
"Asokan",
"Mothilal",
""
],
[
"Yaqub",
"Mohammad",
""
],
[
"Nandakumar",
"Karthik",
""
]
] | TITLE: FedSECA: Sign Election and Coordinate-wise Aggregation of Gradients for
Byzantine Tolerant Federated Learning
ABSTRACT: One of the most common defense strategies against Byzantine clients in
federated learning (FL) is to employ a robust aggregator mechanism that makes
the training more resilient. While many existing Byzantine robust aggregators
provide theoretical convergence guarantees and are empirically effective
against certain categories of attacks, we observe that certain high-strength
attacks can subvert the robust aggregator and collapse the training. To
overcome this limitation, we propose a method called FedSECA for robust Sign
Election and Coordinate-wise Aggregation of gradients in FL that is less
susceptible to malicious updates by an omniscient attacker. The proposed method
has two main components. The Concordance Ratio Induced Sign Election (CRISE)
module determines the consensus direction (elected sign) for each individual
parameter gradient through a weighted voting strategy. The client weights are
assigned based on a novel metric called concordance ratio, which quantifies the
degree of sign agreement between the client gradient updates. Based on the
elected sign, a Robust Coordinate-wise Aggregation (RoCA) strategy is employed,
where variance-reduced sparse gradients are aggregated only if they are in
alignment with the corresponding elected sign. We compare our proposed FedSECA
method against 10 robust aggregators under 7 Byzantine attacks on 3 datasets
and architectures. The results show that existing robust aggregators fail for
at least some attacks, while FedSECA exhibits better robustness. Code -
https://github.com/JosephGeoBenjamin/FedSECA-ByzantineTolerance
|
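The two components translate naturally into a coordinate-wise procedure: elect a per-parameter sign by weighted vote, then average only the aligned entries. A simplified NumPy sketch; the concordance-ratio weighting and the variance-reduced sparsification here are rough stand-ins for the paper's definitions:

    import numpy as np

    def sign_election_aggregate(grads, eps=1e-12):
        # grads: (n_clients, d) array of client gradient updates.
        signs = np.sign(grads)
        # concordance: how much each client's signs agree with the crowd
        mean_sign = signs.mean(axis=0)
        concordance = (signs * mean_sign).mean(axis=1)        # (n_clients,)
        weights = np.clip(concordance, 0, None)
        weights = weights / (weights.sum() + eps)
        # weighted vote elects a sign per coordinate (CRISE-like step)
        elected = np.sign((weights[:, None] * signs).sum(axis=0))
        # aggregate only entries aligned with the elected sign (RoCA-like step)
        aligned = signs == elected[None, :]
        return np.where(aligned, grads, 0.0).sum(axis=0) / (aligned.sum(axis=0) + eps)

Discarding misaligned coordinates is what blunts an omniscient attacker who crafts updates to flip the aggregate.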
2411.04502 | Sunan Zhao | Sunan Zhao, Zhijie Li, Boyu Fan, Yunpeng Wang, Huiyu Yang, Jianchun
Wang | LESnets (Large-Eddy Simulation nets): Physics-informed neural operator
for large-eddy simulation of turbulence | 37 pages, 28 figures, 73 conferences | null | null | null | physics.flu-dyn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Acquisition of large datasets for three-dimensional (3D) partial differential
equations (PDE) is usually very expensive. Physics-informed neural operator
(PINO) eliminates the high costs associated with generation of training
datasets, and shows great potential in a variety of partial differential
equations. In this work, we employ physics-informed neural operator, encoding
the large-eddy simulation (LES) equations directly into the neural operator for
simulating three-dimensional incompressible turbulent flows. We develop the
LESnets (Large-Eddy Simulation nets) by adding large-eddy simulation equations
to two different data-driven models, including Fourier neural operator (FNO)
and implicit Fourier neural operator (IFNO) without using label data. Notably,
by leveraging only PDE constraints to learn the spatio-temporal dynamics,
LESnets models retain the computational efficiency of data-driven approaches
while obviating the necessity for data. Meanwhile, using LES equations as PDE
constraints makes it possible to efficiently predict complex turbulence at
coarse grids. We investigate the performance of the LESnets models with two
standard three-dimensional turbulent flows: decaying homogeneous isotropic
turbulence and temporally evolving turbulent mixing layer. In the numerical
experiments, the LESnets models show accuracy similar to traditional
large-eddy simulation and to data-driven models including FNO and
IFNO, and exhibit a robust generalization ability to unseen regimes of flow
fields. By integrating a single set of flow data, the LESnets models can
automatically learn the coefficient of the subgrid scale (SGS) model during the
training of the neural operator. Moreover, the well-trained LESnets models are
significantly faster than traditional LES, and exhibit comparable
computational efficiency to the data-driven FNO and IFNO models.
| [
{
"version": "v1",
"created": "Thu, 7 Nov 2024 07:53:01 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 07:31:17 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Apr 2025 05:25:43 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Zhao",
"Sunan",
""
],
[
"Li",
"Zhijie",
""
],
[
"Fan",
"Boyu",
""
],
[
"Wang",
"Yunpeng",
""
],
[
"Yang",
"Huiyu",
""
],
[
"Wang",
"Jianchun",
""
]
] | TITLE: LESnets (Large-Eddy Simulation nets): Physics-informed neural operator
for large-eddy simulation of turbulence
ABSTRACT: Acquisition of large datasets for three-dimensional (3D) partial differential
equations (PDE) is usually very expensive. Physics-informed neural operator
(PINO) eliminates the high costs associated with generation of training
datasets, and shows great potential in a variety of partial differential
equations. In this work, we employ physics-informed neural operator, encoding
the large-eddy simulation (LES) equations directly into the neural operator for
simulating three-dimensional incompressible turbulent flows. We develop the
LESnets (Large-Eddy Simulation nets) by adding large-eddy simulation equations
to two different data-driven models, including Fourier neural operator (FNO)
and implicit Fourier neural operator (IFNO) without using label data. Notably,
by leveraging only PDE constraints to learn the spatio-temporal dynamics,
LESnets models retain the computational efficiency of data-driven approaches
while obviating the necessity for data. Meanwhile, using LES equations as PDE
constraints makes it possible to efficiently predict complex turbulence at
coarse grids. We investigate the performance of the LESnets models with two
standard three-dimensional turbulent flows: decaying homogeneous isotropic
turbulence and temporally evolving turbulent mixing layer. In the numerical
experiments, the LESnets models show accuracy similar to traditional
large-eddy simulation and to data-driven models including FNO and
IFNO, and exhibit a robust generalization ability to unseen regimes of flow
fields. By integrating a single set of flow data, the LESnets models can
automatically learn the coefficient of the subgrid scale (SGS) model during the
training of the neural operator. Moreover, the well-trained LESnets models are
significantly faster than traditional LES, and exhibit comparable
computational efficiency to the data-driven FNO and IFNO models.
|
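The core ingredient of a physics-informed neural operator is a label-free loss built from PDE residuals via automatic differentiation. A toy illustration on 1-D viscous Burgers (the paper instead encodes the filtered 3-D LES equations with a subgrid-scale term into FNO/IFNO):

    import torch

    def pde_residual_loss(model, x, t, nu=0.01):
        # Physics-informed loss for 1-D Burgers: u_t + u*u_x - nu*u_xx = 0.
        # `model` is any network mapping (x, t) points to u; no labels used.
        x = x.clone().requires_grad_(True)
        t = t.clone().requires_grad_(True)
        u = model(x, t)
        u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
        u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
        u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
        return ((u_t + u * u_x - nu * u_xx) ** 2).mean()

Minimising such a residual is what removes the need for expensive simulated training datasets while keeping neural-operator inference speed.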
2411.06565 | Ting-Ju Wei | Ting-Ju Wei and Chuin-Shan Chen | Foundation Model for Composite Microstructures: Reconstruction,
Stiffness, and Nonlinear Behavior Prediction | null | null | null | null | cs.CE cs.AI | http://creativecommons.org/licenses/by/4.0/ | The rapid advancement of machine learning has unlocked numerous opportunities
for materials science, particularly in accelerating the design and analysis of
materials. However, a significant challenge lies in the scarcity and high cost
of obtaining high-quality materials datasets. While foundation models
pre-trained on large datasets have excelled in fields like natural language
processing by leveraging latent features through transfer learning, their
application in materials science remains limited. Here, we present a foundation
model specifically designed for composite materials. Pre-trained on a dataset
of short-fiber composites to learn robust latent features, the model accurately
predicts homogenized stiffness during transfer learning, even with limited
training data. Additionally, our model effectively predicts the material's
nonlinear behavior by transferring these learned features to an
Interaction-based Material Network, which is a constitutive surrogate model.
These results demonstrate the potential of our foundation model to capture
complex material behaviors. Our findings validate the feasibility and
effectiveness of foundation models in composite materials. We anticipate
extending this approach to more complex three-dimensional composite materials,
polycrystalline materials, and beyond. Moreover, this framework enables
high-accuracy predictions even when experimental data are scarce, paving the
way for more efficient and cost-effective materials design and analysis.
| [
{
"version": "v1",
"created": "Sun, 10 Nov 2024 19:06:25 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Feb 2025 14:57:37 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Apr 2025 19:00:34 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Wei",
"Ting-Ju",
""
],
[
"Chen",
"Chuin-Shan",
""
]
] | TITLE: Foundation Model for Composite Microstructures: Reconstruction,
Stiffness, and Nonlinear Behavior Prediction
ABSTRACT: The rapid advancement of machine learning has unlocked numerous opportunities
for materials science, particularly in accelerating the design and analysis of
materials. However, a significant challenge lies in the scarcity and high cost
of obtaining high-quality materials datasets. While foundation models
pre-trained on large datasets have excelled in fields like natural language
processing by leveraging latent features through transfer learning, their
application in materials science remains limited. Here, we present a foundation
model specifically designed for composite materials. Pre-trained on a dataset
of short-fiber composites to learn robust latent features, the model accurately
predicts homogenized stiffness during transfer learning, even with limited
training data. Additionally, our model effectively predicts the material's
nonlinear behavior by transferring these learned features to an
Interaction-based Material Network, which is a constitutive surrogate model.
These results demonstrate the potential of our foundation model to capture
complex material behaviors. Our findings validate the feasibility and
effectiveness of foundation models in composite materials. We anticipate
extending this approach to more complex three-dimensional composite materials,
polycrystalline materials, and beyond. Moreover, this framework enables
high-accuracy predictions even when experimental data are scarce, paving the
way for more efficient and cost-effective materials design and analysis.
|
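Mechanically, the transfer step amounts to freezing the pretrained encoder and fitting a small head on scarce labels. A minimal sketch with stand-in shapes; the paper's backbone, latent size, and output dimensionality differ:

    import torch

    encoder = torch.nn.Sequential(        # stand-in for the pretrained backbone
        torch.nn.Flatten(), torch.nn.Linear(64 * 64, 256), torch.nn.ReLU())
    for p in encoder.parameters():
        p.requires_grad = False           # reuse the learned latent features
    head = torch.nn.Linear(256, 6)        # predicts homogenized stiffness terms
    opt = torch.optim.Adam(head.parameters(), lr=1e-3)

The same frozen features are then handed to the Interaction-based Material Network to surrogate the nonlinear constitutive response.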
2411.07413 | Futoon M. Abushaqra PhD | Futoon M. Abushaqra, Hao Xue, Yongli Ren and Flora D. Salim | ODEStream: A Buffer-Free Online Learning Framework with ODE-based
Adaptor for Streaming Time Series Forecasting | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Addressing the challenges of irregularity and concept drift in streaming time
series is crucial for real-world predictive modelling. Previous studies in time
series continual learning often propose models that require buffering long
sequences, potentially restricting the responsiveness of the inference system.
Moreover, these models are typically designed for regularly sampled data, an
unrealistic assumption in real-world scenarios. This paper introduces
ODEStream, a novel buffer-free continual learning framework that incorporates a
temporal isolation layer to capture temporal dependencies within the data.
Simultaneously, it leverages the capability of neural ordinary differential
equations to process irregular sequences and generate a continuous data
representation, enabling seamless adaptation to changing dynamics in a data
streaming scenario. Our approach focuses on learning how the dynamics and
distribution of historical data change over time, facilitating direct
processing of streaming sequences. Evaluations on benchmark real-world datasets
demonstrate that ODEStream outperforms the state-of-the-art online learning and
streaming analysis baseline models, providing accurate predictions over
extended periods while minimising performance degradation over time by learning
how the sequence dynamics change. The implementation of ODEStream is available
at: https://github.com/FtoonAbushaqra/ODEStream.git.
| [
{
"version": "v1",
"created": "Mon, 11 Nov 2024 22:36:33 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 13:29:09 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Abushaqra",
"Futoon M.",
""
],
[
"Xue",
"Hao",
""
],
[
"Ren",
"Yongli",
""
],
[
"Salim",
"Flora D.",
""
]
] | TITLE: ODEStream: A Buffer-Free Online Learning Framework with ODE-based
Adaptor for Streaming Time Series Forecasting
ABSTRACT: Addressing the challenges of irregularity and concept drift in streaming time
series is crucial for real-world predictive modelling. Previous studies in time
series continual learning often propose models that require buffering long
sequences, potentially restricting the responsiveness of the inference system.
Moreover, these models are typically designed for regularly sampled data, an
unrealistic assumption in real-world scenarios. This paper introduces
ODEStream, a novel buffer-free continual learning framework that incorporates a
temporal isolation layer to capture temporal dependencies within the data.
Simultaneously, it leverages the capability of neural ordinary differential
equations to process irregular sequences and generate a continuous data
representation, enabling seamless adaptation to changing dynamics in a data
streaming scenario. Our approach focuses on learning how the dynamics and
distribution of historical data change over time, facilitating direct
processing of streaming sequences. Evaluations on benchmark real-world datasets
demonstrate that ODEStream outperforms the state-of-the-art online learning and
streaming analysis baseline models, providing accurate predictions over
extended periods while minimising performance degradation over time by learning
how the sequence dynamics change. The implementation of ODEStream is available
at: https://github.com/FtoonAbushaqra/ODEStream.git.
|
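The neural-ODE ingredient that handles irregular sampling can be shown in a few lines with the widely used torchdiffeq solver; ODEStream's temporal isolation layer and online adaptation sit on top of this and are not sketched here:

    import torch
    from torchdiffeq import odeint    # pip install torchdiffeq

    class ODEFunc(torch.nn.Module):
        # Latent dynamics dh/dt = f(t, h); irregular timestamps are handled
        # by integrating between observation times, with no buffered window.
        def __init__(self, dim):
            super().__init__()
            self.net = torch.nn.Sequential(
                torch.nn.Linear(dim, 64), torch.nn.Tanh(), torch.nn.Linear(64, dim))

        def forward(self, t, h):
            return self.net(h)

    func = ODEFunc(dim=16)
    h0 = torch.zeros(1, 16)
    ts = torch.tensor([0.0, 0.3, 1.1, 1.15, 2.7])   # irregular observation times
    hs = odeint(func, h0, ts)                       # latent state at each time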
2411.08397 | Aoi Ito | Aoi Ito, Kota Dohi, Yohei Kawaguchi | CLaSP: Learning Concepts for Time-Series Signals from Natural Language
Supervision | null | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents CLaSP, a novel model for retrieving time-series signals
using natural language queries that describe signal characteristics. The
ability to search time-series signals based on descriptive queries is essential
in domains such as industrial diagnostics, where data scientists often need to
find signals with specific characteristics. However, existing methods rely on
sketch-based inputs, predefined synonym dictionaries, or domain-specific manual
designs, limiting their scalability and adaptability. CLaSP addresses these
challenges by employing contrastive learning to map time-series signals to
natural language descriptions. Unlike prior approaches, it eliminates the need
for predefined synonym dictionaries and leverages the rich contextual knowledge
of large language models (LLMs). Using the TRUCE and SUSHI datasets, which pair
time-series signals with natural language descriptions, we demonstrate that
CLaSP achieves high accuracy in retrieving a variety of time series patterns
based on natural language queries.
| [
{
"version": "v1",
"created": "Wed, 13 Nov 2024 07:32:58 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 08:01:55 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Ito",
"Aoi",
""
],
[
"Dohi",
"Kota",
""
],
[
"Kawaguchi",
"Yohei",
""
]
] | TITLE: CLaSP: Learning Concepts for Time-Series Signals from Natural Language
Supervision
ABSTRACT: This paper presents CLaSP, a novel model for retrieving time-series signals
using natural language queries that describe signal characteristics. The
ability to search time-series signals based on descriptive queries is essential
in domains such as industrial diagnostics, where data scientists often need to
find signals with specific characteristics. However, existing methods rely on
sketch-based inputs, predefined synonym dictionaries, or domain-specific manual
designs, limiting their scalability and adaptability. CLaSP addresses these
challenges by employing contrastive learning to map time-series signals to
natural language descriptions. Unlike prior approaches, it eliminates the need
for predefined synonym dictionaries and leverages the rich contextual knowledge
of large language models (LLMs). Using the TRUCE and SUSHI datasets, which pair
time-series signals with natural language descriptions, we demonstrate that
CLaSP achieves high accuracy in retrieving a variety of time series patterns
based on natural language queries.
|
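Contrastive signal-text alignment of this kind is typically trained with a symmetric InfoNCE objective over matched (signal, description) pairs. A minimal sketch assuming two encoders that already produce embeddings; CLaSP's encoders and training details are in the paper:

    import torch
    import torch.nn.functional as F

    def clip_style_loss(sig_emb, txt_emb, temperature=0.07):
        # Matched (time series, caption) pairs sit on the diagonal.
        sig = F.normalize(sig_emb, dim=1)
        txt = F.normalize(txt_emb, dim=1)
        logits = sig @ txt.t() / temperature
        targets = torch.arange(sig.size(0), device=sig.device)
        return (F.cross_entropy(logits, targets) +
                F.cross_entropy(logits.t(), targets)) / 2

At retrieval time, a natural-language query is embedded once and signals are ranked by cosine similarity, with no synonym dictionary required.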
2411.09216 | Ryan Krueger | Ryan K. Krueger, Megan C. Engel, Ryan Hausen, Michael P. Brenner | Fitting Coarse-Grained Models to Macroscopic Experimental Data via
Automatic Differentiation | null | null | null | null | physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Developing physics-based models for molecular simulation requires fitting
many unknown parameters to diverse experimental datasets. Traditionally, this
process is piecemeal and difficult to reproduce, leading to a fragmented
landscape of models. Here, we establish a systematic, extensible framework for
fitting coarse-grained molecular models to macroscopic experimental data by
leveraging recently developed methods for computing low-variance gradient
estimates with automatic differentiation. Using a widely validated DNA force
field as an exemplar, we develop methods for optimizing structural, mechanical,
and thermodynamic properties across a range of simulation techniques, including
enhanced sampling and external forcing, spanning micro- and millisecond
timescales. We highlight how gradients enable efficient sensitivity analyses
that yield physical insight. We then demonstrate the broad applicability of
these techniques by optimizing diverse biomolecular systems, including RNA and
DNA-protein hybrid models. We show how conflict-free gradient methods from
multi-task learning can be adapted to impose multiple constraints
simultaneously without compromising accuracy. This approach provides a
foundation for transparent, reproducible, community-driven force field
development, accelerating progress in molecular modeling.
| [
{
"version": "v1",
"created": "Thu, 14 Nov 2024 06:28:05 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 03:09:39 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Krueger",
"Ryan K.",
""
],
[
"Engel",
"Megan C.",
""
],
[
"Hausen",
"Ryan",
""
],
[
"Brenner",
"Michael P.",
""
]
] | TITLE: Fitting Coarse-Grained Models to Macroscopic Experimental Data via
Automatic Differentiation
ABSTRACT: Developing physics-based models for molecular simulation requires fitting
many unknown parameters to diverse experimental datasets. Traditionally, this
process is piecemeal and difficult to reproduce, leading to a fragmented
landscape of models. Here, we establish a systematic, extensible framework for
fitting coarse-grained molecular models to macroscopic experimental data by
leveraging recently developed methods for computing low-variance gradient
estimates with automatic differentiation. Using a widely validated DNA force
field as an exemplar, we develop methods for optimizing structural, mechanical,
and thermodynamic properties across a range of simulation techniques, including
enhanced sampling and external forcing, spanning micro- and millisecond
timescales. We highlight how gradients enable efficient sensitivity analyses
that yield physical insight. We then demonstrate the broad applicability of
these techniques by optimizing diverse biomolecular systems, including RNA and
DNA-protein hybrid models. We show how conflict-free gradient methods from
multi-task learning can be adapted to impose multiple constraints
simultaneously without compromising accuracy. This approach provides a
foundation for transparent, reproducible, community-driven force field
development, accelerating progress in molecular modeling.
|
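The framework's key move is differentiating a macroscopic observable of a simulation with respect to force-field parameters. A toy end-to-end example in JAX, with a damped relaxation standing in for the molecular dynamics:

    import jax
    import jax.numpy as jnp

    def simulate(k_spring, x0, steps=100, dt=1e-2):
        # Toy differentiable "simulation": damped relaxation of a spring,
        # standing in for a coarse-grained molecular-dynamics trajectory.
        def step(x, _):
            x = x - dt * k_spring * x
            return x, x
        _, traj = jax.lax.scan(step, x0, None, length=steps)
        return traj

    def loss(k_spring, x0, target_mean):
        # Squared error between a simulated macroscopic average and experiment.
        return (simulate(k_spring, x0).mean() - target_mean) ** 2

    g = jax.grad(loss)(2.0, jnp.array(1.0), 0.1)  # d(loss)/d(force-field param)

The same gradients double as sensitivity analyses: they report how strongly each parameter influences each fitted observable.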
2411.12556 | Xiang Li | Xiang Li, Jianpeng Qi, Zhongying Zhao, Guanjie Zheng, Lei Cao, Junyu
Dong, Yanwei Yu | UMGAD: Unsupervised Multiplex Graph Anomaly Detection | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph anomaly detection (GAD) is a critical task in graph machine learning,
with the primary objective of identifying anomalous nodes that deviate
significantly from the majority. This task is widely applied in various
real-world scenarios, including fraud detection and social network analysis.
However, existing GAD methods still face two major challenges: (1) They are
often limited to detecting anomalies in single-type interaction graphs and
struggle with multiple interaction types in multiplex heterogeneous graphs. (2)
In unsupervised scenarios, selecting appropriate anomaly score thresholds
remains a significant challenge for accurate anomaly detection. To address the
above challenges, we propose a novel Unsupervised Multiplex Graph Anomaly
Detection method, named UMGAD. We first learn multi-relational correlations
among nodes in multiplex heterogeneous graphs and capture anomaly information
during node attribute and structure reconstruction through graph-masked
autoencoder (GMAE). Then, to further extract abnormal information, we generate
attribute-level and subgraph-level augmented-view graphs, respectively, and
perform attribute and structure reconstruction through GMAE. Finally, we learn
to optimize node attributes and structural features through contrastive
learning between original-view and augmented-view graphs to improve the model's
ability to capture anomalies. Meanwhile, we propose a new anomaly score
threshold selection strategy, which allows the model to be independent of
ground truth information in real unsupervised scenarios. Extensive experiments
on six datasets show that our UMGAD significantly outperforms state-of-the-art
methods, achieving average improvements of 12.25% in AUC and 11.29% in Macro-F1
across all datasets.
| [
{
"version": "v1",
"created": "Tue, 19 Nov 2024 15:15:45 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 13:29:03 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Mar 2025 09:56:09 GMT"
},
{
"version": "v4",
"created": "Wed, 9 Apr 2025 04:11:23 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Li",
"Xiang",
""
],
[
"Qi",
"Jianpeng",
""
],
[
"Zhao",
"Zhongying",
""
],
[
"Zheng",
"Guanjie",
""
],
[
"Cao",
"Lei",
""
],
[
"Dong",
"Junyu",
""
],
[
"Yu",
"Yanwei",
""
]
] | TITLE: UMGAD: Unsupervised Multiplex Graph Anomaly Detection
ABSTRACT: Graph anomaly detection (GAD) is a critical task in graph machine learning,
with the primary objective of identifying anomalous nodes that deviate
significantly from the majority. This task is widely applied in various
real-world scenarios, including fraud detection and social network analysis.
However, existing GAD methods still face two major challenges: (1) They are
often limited to detecting anomalies in single-type interaction graphs and
struggle with multiple interaction types in multiplex heterogeneous graphs. (2)
In unsupervised scenarios, selecting appropriate anomaly score thresholds
remains a significant challenge for accurate anomaly detection. To address the
above challenges, we propose a novel Unsupervised Multiplex Graph Anomaly
Detection method, named UMGAD. We first learn multi-relational correlations
among nodes in multiplex heterogeneous graphs and capture anomaly information
during node attribute and structure reconstruction through graph-masked
autoencoder (GMAE). Then, to further extract abnormal information, we generate
attribute-level and subgraph-level augmented-view graphs, respectively, and
perform attribute and structure reconstruction through GMAE. Finally, we learn
to optimize node attributes and structural features through contrastive
learning between original-view and augmented-view graphs to improve the model's
ability to capture anomalies. Meanwhile, we propose a new anomaly score
threshold selection strategy, which allows the model to be independent of
ground truth information in real unsupervised scenarios. Extensive experiments
on six datasets show that our UMGAD significantly outperforms state-of-the-art
methods, achieving average improvements of 12.25% in AUC and 11.29% in Macro-F1
across all datasets.
|
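The graph-masked autoencoding step can be sketched as mask, reconstruct, score. The encoder and decoder below are generic callables, and the graph structure (edge_index) that a real GMAE conditions on is omitted for brevity:

    import torch

    def masked_reconstruction_scores(x, encoder, decoder, mask_ratio=0.3):
        # Hide a fraction of node attributes, reconstruct them, and use the
        # per-node error both as training signal and as an anomaly cue.
        mask = torch.rand(x.size(0)) < mask_ratio
        x_masked = x.clone()
        x_masked[mask] = 0.0                      # simple zero-masking
        x_hat = decoder(encoder(x_masked))
        err = ((x_hat - x) ** 2).mean(dim=1)      # per-node reconstruction error
        return err[mask].mean(), err              # (training loss, anomaly scores)

UMGAD applies this across relation types and augmented views, then sets the score threshold with its label-free selection strategy.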
2411.12946 | Gabriel Chua | Gabriel Chua, Shing Yee Chan, Shaun Khoo | A Flexible Large Language Models Guardrail Development Methodology
Applied to Off-Topic Prompt Detection | 8 pages, 5 figures | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) are prone to off-topic misuse, where users may
prompt these models to perform tasks beyond their intended scope. Current
guardrails, which often rely on curated examples or custom classifiers, suffer
from high false-positive rates, limited adaptability, and the impracticality of
requiring real-world data that is not available in pre-production. In this
paper, we introduce a flexible, data-free guardrail development methodology
that addresses these challenges. By thoroughly defining the problem space
qualitatively and passing this to an LLM to generate diverse prompts, we
construct a synthetic dataset to benchmark and train off-topic guardrails that
outperform heuristic approaches. Additionally, by framing the task as
classifying whether the user prompt is relevant with respect to the system
prompt, our guardrails effectively generalize to other misuse categories,
including jailbreak and harmful prompts. Lastly, we further contribute to the
field by open-sourcing both the synthetic dataset and the off-topic guardrail
models, providing valuable resources for developing guardrails in
pre-production environments and supporting future research and development in
LLM safety.
| [
{
"version": "v1",
"created": "Wed, 20 Nov 2024 00:31:23 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 08:59:26 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Chua",
"Gabriel",
""
],
[
"Chan",
"Shing Yee",
""
],
[
"Khoo",
"Shaun",
""
]
] | TITLE: A Flexible Large Language Models Guardrail Development Methodology
Applied to Off-Topic Prompt Detection
ABSTRACT: Large Language Models (LLMs) are prone to off-topic misuse, where users may
prompt these models to perform tasks beyond their intended scope. Current
guardrails, which often rely on curated examples or custom classifiers, suffer
from high false-positive rates, limited adaptability, and the impracticality of
requiring real-world data that is not available in pre-production. In this
paper, we introduce a flexible, data-free guardrail development methodology
that addresses these challenges. By thoroughly defining the problem space
qualitatively and passing this to an LLM to generate diverse prompts, we
construct a synthetic dataset to benchmark and train off-topic guardrails that
outperform heuristic approaches. Additionally, by framing the task as
classifying whether the user prompt is relevant with respect to the system
prompt, our guardrails effectively generalize to other misuse categories,
including jailbreak and harmful prompts. Lastly, we further contribute to the
field by open-sourcing both the synthetic dataset and the off-topic guardrail
models, providing valuable resources for developing guardrails in
pre-production environments and supporting future research and development in
LLM safety.
|
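The data-free recipe boils down to asking an LLM to synthesise both in-scope and out-of-scope user prompts for a given system prompt, then training a relevance classifier on the pairs. A hypothetical helper illustrating the shape of that dataset; `llm` is a stand-in callable assumed to return a list of strings:

    def build_offtopic_dataset(llm, system_prompt, n=100):
        # Synthesise labelled (system prompt, user prompt) pairs with an LLM;
        # `llm(prompt) -> list[str]` is an assumption, not an API from the paper.
        on = llm(f"Write {n} user requests that are in scope for: {system_prompt}")
        off = llm(f"Write {n} user requests that are out of scope for: {system_prompt}")
        return ([(system_prompt, u, 1) for u in on] +
                [(system_prompt, u, 0) for u in off])

A binary classifier trained on such triples then scores the relevance of any incoming user prompt against the deployed system prompt.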
2411.15209 | Xinye Chen | Erin Carson, Xinye Chen, and Cheng Kang | Quantized symbolic time series approximation | null | null | null | null | cs.LG eess.SP stat.ML | http://creativecommons.org/licenses/by/4.0/ | Time series are ubiquitous in numerous science and engineering domains, e.g.,
signal processing, bioinformatics, and astronomy. Previous work has verified
the efficacy of symbolic time series representation in a variety of engineering
applications due to its storage efficiency and numerosity reduction. The most
recent symbolic aggregate approximation technique, ABBA, has been shown to
preserve essential shape information of time series and improve downstream
applications, e.g., neural network inference regarding prediction and anomaly
detection in time series.
Motivated by the emergence of high-performance hardware which enables
efficient computation for low bit-width representations, we present a new
quantization-based ABBA symbolic approximation technique, QABBA, which exhibits
improved storage efficiency while retaining the original speed and accuracy of
symbolic reconstruction. We prove an upper bound for the error arising from
quantization and discuss how the number of bits should be chosen to balance
this with other errors.
An application of QABBA with large language models (LLMs) for time series
regression is also presented, and its utility is investigated. By representing
time series as symbolic chains of patterns, QABBA not only avoids training
embeddings from scratch, but also achieves a new state-of-the-art on the
Monash regression dataset. The symbolic approximation to the time series offers
a more efficient way to fine-tune LLMs on the time series regression task which
contains various application domains. We further present a set of extensive
experiments performed across various well-established datasets to demonstrate
the advantages of the QABBA method for symbolic approximation.
| [
{
"version": "v1",
"created": "Wed, 20 Nov 2024 10:32:22 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 13:46:27 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Carson",
"Erin",
""
],
[
"Chen",
"Xinye",
""
],
[
"Kang",
"Cheng",
""
]
] | TITLE: Quantized symbolic time series approximation
ABSTRACT: Time series are ubiquitous in numerous science and engineering domains, e.g.,
signal processing, bioinformatics, and astronomy. Previous work has verified
the efficacy of symbolic time series representation in a variety of engineering
applications due to its storage efficiency and numerosity reduction. The most
recent symbolic aggregate approximation technique, ABBA, has been shown to
preserve essential shape information of time series and improve downstream
applications, e.g., neural network inference regarding prediction and anomaly
detection in time series.
Motivated by the emergence of high-performance hardware which enables
efficient computation for low bit-width representations, we present a new
quantization-based ABBA symbolic approximation technique, QABBA, which exhibits
improved storage efficiency while retaining the original speed and accuracy of
symbolic reconstruction. We prove an upper bound for the error arising from
quantization and discuss how the number of bits should be chosen to balance
this with other errors.
An application of QABBA with large language models (LLMs) for time series
regression is also presented, and its utility is investigated. By representing
time series as symbolic chains of patterns, QABBA not only avoids training
embeddings from scratch, but also achieves a new state-of-the-art on the
Monash regression dataset. The symbolic approximation to the time series offers
a more efficient way to fine-tune LLMs on the time series regression task which
contains various application domains. We further present a set of extensive
experiments performed across various well-established datasets to demonstrate
the advantages of the QABBA method for symbolic approximation.
|
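The quantization step can be pictured as uniformly discretising the (length, increment) tuples that ABBA compresses a series into. A generic uniform quantizer under that assumption; QABBA's exact scheme, bit-width choice, and proven error bound are developed in the paper:

    import numpy as np

    def quantize_abba_pieces(pieces, bits=8):
        # Uniformly quantize ABBA (length, increment) pieces to 2**bits levels.
        # The uint8 storage below assumes bits <= 8.
        pieces = np.asarray(pieces, dtype=np.float64)      # shape (n, 2)
        lo, hi = pieces.min(axis=0), pieces.max(axis=0)
        scale = (hi - lo) / (2 ** bits - 1)
        scale[scale == 0] = 1.0                            # guard constant columns
        codes = np.round((pieces - lo) / scale).astype(np.uint8)
        dequant = codes * scale + lo                       # reconstruction
        return codes, dequant

Only `codes` plus the per-column (lo, scale) pair need to be stored, which is where the additional storage saving over ABBA comes from.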
2411.18923 | Dennis Singh Moirangthem Dr | Meher Bhardwaj, Hrishikesh Ethari, and Dennis Singh Moirangthem | EzSQL: An SQL intermediate representation for improving SQL-to-text
Generation | Under revision and review at the Expert Systems With Applications journal
after first review | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | The SQL-to-text generation task traditionally uses template-based, Seq2Seq,
tree-to-sequence, and graph-to-sequence models. Recent models take advantage of
pre-trained generative language models for this task in the Seq2Seq framework.
However, treating SQL as a sequence of inputs to the pre-trained models is not
optimal. In this work, we put forward a new SQL intermediate representation
called EzSQL to align SQL with the natural language text sequence. EzSQL
simplifies the SQL queries and brings them closer to natural language text by
modifying operators and keywords, which can usually be described in natural
language. EzSQL also removes the need for set operators. Our proposed
SQL-to-text generation model uses EzSQL as the input to a pre-trained
generative language model for generating the text descriptions. We demonstrate
that our model is an effective state-of-the-art method to generate text
narrations from SQL queries on the WikiSQL and Spider datasets. We also show
that by generating pretraining data using our SQL-to-text generation model, we
can enhance the performance of Text-to-SQL parsers.
| [
{
"version": "v1",
"created": "Thu, 28 Nov 2024 05:24:46 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 05:40:29 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Bhardwaj",
"Meher",
""
],
[
"Ethari",
"Hrishikesh",
""
],
[
"Moirangthem",
"Dennis Singh",
""
]
] | TITLE: EzSQL: An SQL intermediate representation for improving SQL-to-text
Generation
ABSTRACT: The SQL-to-text generation task traditionally uses template-based, Seq2Seq,
tree-to-sequence, and graph-to-sequence models. Recent models take advantage of
pre-trained generative language models for this task in the Seq2Seq framework.
However, treating SQL as a sequence of inputs to the pre-trained models is not
optimal. In this work, we put forward a new SQL intermediate representation
called EzSQL to align SQL with the natural language text sequence. EzSQL
simplifies the SQL queries and brings them closer to natural language text by
modifying operators and keywords, which can usually be described in natural
language. EzSQL also removes the need for set operators. Our proposed
SQL-to-text generation model uses EzSQL as the input to a pre-trained
generative language model for generating the text descriptions. We demonstrate
that our model is an effective state-of-the-art method to generate text
narrations from SQL queries on the WikiSQL and Spider datasets. We also show
that by generating pretraining data using our SQL-to-text generation model, we
can enhance the performance of Text-to-SQL parsers.
|
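To see the flavour of such rewriting, here is a toy transform that maps comparison operators to the natural-language phrases they are usually read as; EzSQL's actual rules, including the removal of set operators, are richer and are specified in the paper:

    def naturalize_comparisons(sql):
        # Toy rewrite of comparison operators into readable phrases.
        # Illustrative only; this is not the EzSQL specification.
        replacements = {
            ">=": "greater than or equal to",
            "<=": "less than or equal to",
            "!=": "not equal to",
            ">": "greater than",
            "<": "less than",
        }
        for op, phrase in replacements.items():
            sql = sql.replace(op, f" {phrase} ")
        return " ".join(sql.split())

Bringing the query surface closer to natural language in this way is what lets a pretrained generative model treat SQL-to-text as near-paraphrasing.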
2411.19942 | Hang Ye | Hang Ye, Xiaoxuan Ma, Hai Ci, Wentao Zhu, Yizhou Wang | FreeCloth: Free-form Generation Enhances Challenging Clothed Human
Modeling | 23 pages, 26 figures | null | null | null | cs.CV cs.GR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Achieving realistic animated human avatars requires accurate modeling of
pose-dependent clothing deformations. Existing learning-based methods heavily
rely on the Linear Blend Skinning (LBS) of minimally-clothed human models like
SMPL to model deformation. However, they struggle to handle loose clothing,
such as long dresses, where the canonicalization process becomes ill-defined
when the clothing is far from the body, leading to disjointed and fragmented
results. To overcome this limitation, we propose FreeCloth, a novel hybrid
framework to model challenging clothed humans. Our core idea is to use
dedicated strategies to model different regions, depending on whether they are
close to or distant from the body. Specifically, we segment the human body into
three categories: unclothed, deformed, and generated. We simply replicate
unclothed regions that require no deformation. For deformed regions close to
the body, we leverage LBS to handle the deformation. As for the generated
regions, which correspond to loose clothing areas, we introduce a novel
free-form, part-aware generator to model them, as they are less affected by
movements. This free-form generation paradigm brings enhanced flexibility and
expressiveness to our hybrid framework, enabling it to capture the intricate
geometric details of challenging loose clothing, such as skirts and dresses.
Experimental results on the benchmark dataset featuring loose clothing
demonstrate that FreeCloth achieves state-of-the-art performance with superior
visual fidelity and realism, particularly in the most challenging cases.
| [
{
"version": "v1",
"created": "Fri, 29 Nov 2024 18:58:17 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 07:24:19 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Apr 2025 12:48:01 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Ye",
"Hang",
""
],
[
"Ma",
"Xiaoxuan",
""
],
[
"Ci",
"Hai",
""
],
[
"Zhu",
"Wentao",
""
],
[
"Wang",
"Yizhou",
""
]
] | TITLE: FreeCloth: Free-form Generation Enhances Challenging Clothed Human
Modeling
ABSTRACT: Achieving realistic animated human avatars requires accurate modeling of
pose-dependent clothing deformations. Existing learning-based methods heavily
rely on the Linear Blend Skinning (LBS) of minimally-clothed human models like
SMPL to model deformation. However, they struggle to handle loose clothing,
such as long dresses, where the canonicalization process becomes ill-defined
when the clothing is far from the body, leading to disjointed and fragmented
results. To overcome this limitation, we propose FreeCloth, a novel hybrid
framework to model challenging clothed humans. Our core idea is to use
dedicated strategies to model different regions, depending on whether they are
close to or distant from the body. Specifically, we segment the human body into
three categories: unclothed, deformed, and generated. We simply replicate
unclothed regions that require no deformation. For deformed regions close to
the body, we leverage LBS to handle the deformation. As for the generated
regions, which correspond to loose clothing areas, we introduce a novel
free-form, part-aware generator to model them, as they are less affected by
movements. This free-form generation paradigm brings enhanced flexibility and
expressiveness to our hybrid framework, enabling it to capture the intricate
geometric details of challenging loose clothing, such as skirts and dresses.
Experimental results on the benchmark dataset featuring loose clothing
demonstrate that FreeCloth achieves state-of-the-art performance with superior
visual fidelity and realism, particularly in the most challenging cases.
|
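The FreeCloth record above leans on Linear Blend Skinning (LBS) for regions close to the body. For reference, a self-contained NumPy sketch of plain LBS (the standard formulation, not the authors' code):

```python
import numpy as np

def lbs(vertices, weights, rotations, translations):
    """Linear Blend Skinning: each vertex is a weighted sum of per-bone rigid
    transforms. vertices: (V,3), weights: (V,B) rows summing to 1,
    rotations: (B,3,3), translations: (B,3)."""
    # Transform every vertex by every bone: (B,V,3)
    per_bone = np.einsum('bij,vj->bvi', rotations, vertices) + translations[:, None, :]
    # Blend with per-vertex skinning weights: (V,3)
    return np.einsum('vb,bvi->vi', weights, per_bone)

V, B = 4, 2
verts = np.random.rand(V, 3)
w = np.random.rand(V, B); w /= w.sum(1, keepdims=True)
R = np.stack([np.eye(3)] * B); t = np.zeros((B, 3))
assert np.allclose(lbs(verts, w, R, t), verts)  # identity bones leave vertices unchanged
```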
2412.02993 | Jiongtong Hu | Jiongtong Hu, Wei Zhuo, Jun Cheng, Yingying Liu, Wufeng Xue and Dong
Ni | EchoONE: Segmenting Multiple echocardiography Planes in One Model | Accepted by CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the clinical practice of echocardiography, multiple planes
containing the heart structures of different views are usually required in
the screening, diagnosis, and treatment of cardiac disease. AI models for
echocardiography have to be tailored for each specific plane due to the
dramatic structural differences, resulting in repeated development effort and
extra complexity. An effective solution to such a multi-plane segmentation (MPS)
problem is in high demand for medical images, yet it has not been well
investigated. In this paper, we propose a novel solution, EchoONE, for this
problem with a SAM-based segmentation architecture, a prior-composable mask
learning (PC-Mask) module for semantic-aware dense prompt generation, and a
learnable CNN-branch with a simple yet effective local feature fusion and
adaption (LFFA) module for adapting SAM. We extensively evaluated our method on
multiple internal and external echocardiography datasets, and achieved
consistently state-of-the-art performance for multi-source datasets with
different heart planes. This is the first time that the MPS problem is solved
in one model for echocardiography data. The code will be available at
https://github.com/a2502503/EchoONE.
| [
{
"version": "v1",
"created": "Wed, 4 Dec 2024 03:19:43 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 13:59:01 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Apr 2025 03:11:43 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Hu",
"Jiongtong",
""
],
[
"Zhuo",
"Wei",
""
],
[
"Cheng",
"Jun",
""
],
[
"Liu",
"Yingying",
""
],
[
"Xue",
"Wufeng",
""
],
[
"Ni",
"Dong",
""
]
] | TITLE: EchoONE: Segmenting Multiple echocardiography Planes in One Model
ABSTRACT: In the clinical practice of echocardiography, multiple planes
containing the heart structures of different views are usually required in
the screening, diagnosis, and treatment of cardiac disease. AI models for
echocardiography have to be tailored for each specific plane due to the
dramatic structural differences, resulting in repeated development effort and
extra complexity. An effective solution to such a multi-plane segmentation (MPS)
problem is in high demand for medical images, yet it has not been well
investigated. In this paper, we propose a novel solution, EchoONE, for this
problem with a SAM-based segmentation architecture, a prior-composable mask
learning (PC-Mask) module for semantic-aware dense prompt generation, and a
learnable CNN-branch with a simple yet effective local feature fusion and
adaption (LFFA) module for adapting SAM. We extensively evaluated our method on
multiple internal and external echocardiography datasets, and achieved
consistently state-of-the-art performance for multi-source datasets with
different heart planes. This is the first time that the MPS problem is solved
in one model for echocardiography data. The code will be available at
https://github.com/a2502503/EchoONE.
|
2412.04244 | Dingxi Zhang | Rao Fu, Dingxi Zhang, Alex Jiang, Wanjia Fu, Austin Funk, Daniel
Ritchie, Srinath Sridhar | GigaHands: A Massive Annotated Dataset of Bimanual Hand Activities | CVPR 2025 Highlight | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding bimanual human hand activities is a critical problem in AI and
robotics. We cannot build large models of bimanual activities because existing
datasets lack the scale, coverage of diverse hand activities, and detailed
annotations. We introduce GigaHands, a massive annotated dataset capturing 34
hours of bimanual hand activities from 56 subjects and 417 objects, totaling
14k motion clips derived from 183 million frames paired with 84k text
annotations. Our markerless capture setup and data acquisition protocol enable
fully automatic 3D hand and object estimation while minimizing the effort
required for text annotation. The scale and diversity of GigaHands enable broad
applications, including text-driven action synthesis, hand motion captioning,
and dynamic radiance field reconstruction. Our website is available at
https://ivl.cs.brown.edu/research/gigahands.html.
| [
{
"version": "v1",
"created": "Thu, 5 Dec 2024 15:26:51 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Dec 2024 22:20:30 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Apr 2025 10:18:05 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Fu",
"Rao",
""
],
[
"Zhang",
"Dingxi",
""
],
[
"Jiang",
"Alex",
""
],
[
"Fu",
"Wanjia",
""
],
[
"Funk",
"Austin",
""
],
[
"Ritchie",
"Daniel",
""
],
[
"Sridhar",
"Srinath",
""
]
] | TITLE: GigaHands: A Massive Annotated Dataset of Bimanual Hand Activities
ABSTRACT: Understanding bimanual human hand activities is a critical problem in AI and
robotics. We cannot build large models of bimanual activities because existing
datasets lack the scale, coverage of diverse hand activities, and detailed
annotations. We introduce GigaHands, a massive annotated dataset capturing 34
hours of bimanual hand activities from 56 subjects and 417 objects, totaling
14k motion clips derived from 183 million frames paired with 84k text
annotations. Our markerless capture setup and data acquisition protocol enable
fully automatic 3D hand and object estimation while minimizing the effort
required for text annotation. The scale and diversity of GigaHands enable broad
applications, including text-driven action synthesis, hand motion captioning,
and dynamic radiance field reconstruction. Our website is available at
https://ivl.cs.brown.edu/research/gigahands.html.
|
2412.10972 | Luis Wiedmann | Luis Wiedmann, Luca Wiehe, David Rozenberszki | DCSEG: Decoupled 3D Open-Set Segmentation using Gaussian Splatting | To be published in CVPR Workshop on Open-World 3D Scene Understanding
with Foundation Models | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Open-set 3D segmentation represents a major point of interest for multiple
downstream robotics and augmented/virtual reality applications. We present a
decoupled 3D segmentation pipeline to ensure modularity and adaptability to
novel 3D representations as well as semantic segmentation foundation models. We
first reconstruct a scene with 3D Gaussians and learn class-agnostic features
through contrastive supervision from a 2D instance proposal network. These 3D
features are then clustered to form coarse object- or part-level masks.
Finally, we match each 3D cluster to class-aware masks predicted by a 2D
open-vocabulary segmentation model, assigning semantic labels without
retraining the 3D representation. Our decoupled design (1) provides a
plug-and-play interface for swapping different 2D or 3D modules, (2) ensures
multi-object instance segmentation at no extra cost, and (3) leverages rich 3D
geometry for robust scene understanding. We evaluate on synthetic and
real-world indoor datasets, demonstrating improved performance over comparable
NeRF-based pipelines on mIoU and mAcc, particularly for challenging or
long-tail classes. We also show how varying the 2D backbone affects the final
segmentation, highlighting the modularity of our framework. These results
confirm that decoupling 3D mask proposal and semantic classification can
deliver flexible, efficient, and open-vocabulary 3D segmentation.
| [
{
"version": "v1",
"created": "Sat, 14 Dec 2024 21:26:44 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 22:38:24 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Wiedmann",
"Luis",
""
],
[
"Wiehe",
"Luca",
""
],
[
"Rozenberszki",
"David",
""
]
] | TITLE: DCSEG: Decoupled 3D Open-Set Segmentation using Gaussian Splatting
ABSTRACT: Open-set 3D segmentation represents a major point of interest for multiple
downstream robotics and augmented/virtual reality applications. We present a
decoupled 3D segmentation pipeline to ensure modularity and adaptability to
novel 3D representations as well as semantic segmentation foundation models. We
first reconstruct a scene with 3D Gaussians and learn class-agnostic features
through contrastive supervision from a 2D instance proposal network. These 3D
features are then clustered to form coarse object- or part-level masks.
Finally, we match each 3D cluster to class-aware masks predicted by a 2D
open-vocabulary segmentation model, assigning semantic labels without
retraining the 3D representation. Our decoupled design (1) provides a
plug-and-play interface for swapping different 2D or 3D modules, (2) ensures
multi-object instance segmentation at no extra cost, and (3) leverages rich 3D
geometry for robust scene understanding. We evaluate on synthetic and
real-world indoor datasets, demonstrating improved performance over comparable
NeRF-based pipelines on mIoU and mAcc, particularly for challenging or
long-tail classes. We also show how varying the 2D backbone affects the final
segmentation, highlighting the modularity of our framework. These results
confirm that decoupling 3D mask proposal and semantic classification can
deliver flexible, efficient, and open-vocabulary 3D segmentation.
|
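To make the decoupled pipeline in the DCSEG record above concrete, here is a toy sketch of its last two stages on stand-in data: cluster class-agnostic per-point features into coarse masks, then give each cluster the semantic label it overlaps most (majority vote). The contrastive feature learning and the 2D models are out of scope here, and KMeans is only one possible clustering choice.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
feats = rng.random((1000, 16))             # stand-in per-Gaussian contrastive features
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(feats)

sem = rng.integers(0, 3, size=1000)        # stand-in per-point open-vocabulary labels
for c in np.unique(clusters):
    label = np.bincount(sem[clusters == c]).argmax()  # majority vote per cluster
    print(f"cluster {c}: semantic label {label}")
```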
2412.11589 | Yu-Hsuan Huang | Yu-Hsuan Huang, Ling Lo, Hongxia Xie, Hong-Han Shuai, Wen-Huang Cheng | Future Sight and Tough Fights: Revolutionizing Sequential Recommendation
with FENRec | Our code is available at https://github.com/uikdwnd/FENRec | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sequential recommendation (SR) systems predict user preferences by analyzing
time-ordered interaction sequences. A common challenge for SR is data sparsity,
as users typically interact with only a limited number of items. While
contrastive learning has been employed in previous approaches to address these
challenges, such methods often adopt binary labels, missing finer patterns and
overlooking detailed information in subsequent behaviors of users.
Additionally, they rely on random sampling to select negatives in contrastive
learning, which may not yield sufficiently hard negatives during later training
stages. In this paper, we propose Future data utilization with Enduring
Negatives for contrastive learning in sequential Recommendation (FENRec). Our
approach aims to leverage future data with time-dependent soft labels and
generate enduring hard negatives from existing data, thereby enhancing the
effectiveness in tackling data sparsity. Experimental results demonstrate our
state-of-the-art performance across four benchmark datasets, with an average
improvement of 6.16\% across all metrics.
| [
{
"version": "v1",
"created": "Mon, 16 Dec 2024 09:20:29 GMT"
},
{
"version": "v2",
"created": "Fri, 27 Dec 2024 07:36:52 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Feb 2025 08:36:53 GMT"
},
{
"version": "v4",
"created": "Wed, 9 Apr 2025 03:06:59 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Huang",
"Yu-Hsuan",
""
],
[
"Lo",
"Ling",
""
],
[
"Xie",
"Hongxia",
""
],
[
"Shuai",
"Hong-Han",
""
],
[
"Cheng",
"Wen-Huang",
""
]
] | TITLE: Future Sight and Tough Fights: Revolutionizing Sequential Recommendation
with FENRec
ABSTRACT: Sequential recommendation (SR) systems predict user preferences by analyzing
time-ordered interaction sequences. A common challenge for SR is data sparsity,
as users typically interact with only a limited number of items. While
contrastive learning has been employed in previous approaches to address these
challenges, such methods often adopt binary labels, missing finer patterns and
overlooking detailed information in subsequent behaviors of users.
Additionally, they rely on random sampling to select negatives in contrastive
learning, which may not yield sufficiently hard negatives during later training
stages. In this paper, we propose Future data utilization with Enduring
Negatives for contrastive learning in sequential Recommendation (FENRec). Our
approach aims to leverage future data with time-dependent soft labels and
generate enduring hard negatives from existing data, thereby enhancing the
effectiveness in tackling data sparsity. Experimental results demonstrate our
state-of-the-art performance across four benchmark datasets, with an average
improvement of 6.16\% across all metrics.
|
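The FENRec record above mentions time-dependent soft labels built from future interactions. One plausible weighting is exponential decay with temporal distance; this is an assumption for illustration, and the paper's exact scheme may differ:

```python
import numpy as np

def soft_labels(future_items, n_items, decay=0.5):
    """future_items: item ids ordered by how soon they occur after the target."""
    y = np.zeros(n_items)
    for step, item in enumerate(future_items):
        y[item] += decay ** step          # sooner interactions get larger weight
    return y / y.sum()                    # normalize to a label distribution

print(soft_labels([3, 1, 4], n_items=6))  # item 3 (soonest) gets the largest mass
```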
2412.12225 | Pan Wang | Pan Wang, Qiang Zhou, Yawen Wu, Tianlong Chen, Jingtong Hu | DLF: Disentangled-Language-Focused Multimodal Sentiment Analysis | AAAI 2025 accepted | null | null | null | cs.LG cs.AI cs.CL cs.MM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Multimodal Sentiment Analysis (MSA) leverages heterogeneous modalities, such
as language, vision, and audio, to enhance the understanding of human
sentiment. While existing models often focus on extracting shared information
across modalities or directly fusing heterogeneous modalities, such approaches
can introduce redundancy and conflicts due to equal treatment of all modalities
and the mutual transfer of information between modality pairs. To address these
issues, we propose a Disentangled-Language-Focused (DLF) multimodal
representation learning framework, which incorporates a feature disentanglement
module to separate modality-shared and modality-specific information. To
further reduce redundancy and enhance language-targeted features, four
geometric measures are introduced to refine the disentanglement process. A
Language-Focused Attractor (LFA) is further developed to strengthen language
representation by leveraging complementary modality-specific information
through a language-guided cross-attention mechanism. The framework also employs
hierarchical predictions to improve overall accuracy. Extensive experiments on
two popular MSA datasets, CMU-MOSI and CMU-MOSEI, demonstrate the significant
performance gains achieved by the proposed DLF framework. Comprehensive
ablation studies further validate the effectiveness of the feature
disentanglement module, language-focused attractor, and hierarchical
predictions. Our code is available at https://github.com/pwang322/DLF.
| [
{
"version": "v1",
"created": "Mon, 16 Dec 2024 10:03:44 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Dec 2024 19:23:17 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Apr 2025 00:52:30 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Wang",
"Pan",
""
],
[
"Zhou",
"Qiang",
""
],
[
"Wu",
"Yawen",
""
],
[
"Chen",
"Tianlong",
""
],
[
"Hu",
"Jingtong",
""
]
] | TITLE: DLF: Disentangled-Language-Focused Multimodal Sentiment Analysis
ABSTRACT: Multimodal Sentiment Analysis (MSA) leverages heterogeneous modalities, such
as language, vision, and audio, to enhance the understanding of human
sentiment. While existing models often focus on extracting shared information
across modalities or directly fusing heterogeneous modalities, such approaches
can introduce redundancy and conflicts due to equal treatment of all modalities
and the mutual transfer of information between modality pairs. To address these
issues, we propose a Disentangled-Language-Focused (DLF) multimodal
representation learning framework, which incorporates a feature disentanglement
module to separate modality-shared and modality-specific information. To
further reduce redundancy and enhance language-targeted features, four
geometric measures are introduced to refine the disentanglement process. A
Language-Focused Attractor (LFA) is further developed to strengthen language
representation by leveraging complementary modality-specific information
through a language-guided cross-attention mechanism. The framework also employs
hierarchical predictions to improve overall accuracy. Extensive experiments on
two popular MSA datasets, CMU-MOSI and CMU-MOSEI, demonstrate the significant
performance gains achieved by the proposed DLF framework. Comprehensive
ablation studies further validate the effectiveness of the feature
disentanglement module, language-focused attractor, and hierarchical
predictions. Our code is available at https://github.com/pwang322/DLF.
|
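The Language-Focused Attractor in the DLF record above is built around language-guided cross-attention. A generic single-head version (queries from language, keys/values from another modality; the LFA adds further structure on top of this primitive) can be sketched as:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(lang, other, d):
    """lang: (Tl, d) language features; other: (Tm, d) audio/vision features."""
    scores = lang @ other.T / np.sqrt(d)   # (Tl, Tm) scaled similarities
    return softmax(scores) @ other         # language queries attend to the modality

out = cross_attention(np.random.rand(8, 32), np.random.rand(20, 32), 32)
print(out.shape)  # (8, 32): one attended vector per language token
```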
2412.12448 | Sheng Cheng | Sheng Cheng, Ran Tao, Yuliang Gu, Shenlong Wang, Xiaofeng Wang, Naira
Hovakimyan | Task-Parameter Nexus: Task-Specific Parameter Learning for Model-Based
Control | null | null | null | null | cs.RO cs.SY eess.SY | http://creativecommons.org/licenses/by/4.0/ | This paper presents the Task-Parameter Nexus (TPN), a learning-based approach
for online determination of the (near-)optimal control parameters of
model-based controllers (MBCs) for tracking tasks. In TPN, a deep neural
network is introduced to predict the control parameters for any given tracking
task at runtime, especially when optimal parameters for new tasks are not
immediately available. To train this network, we constructed a trajectory bank
with various speeds and curvatures that represent different motion
characteristics. Then, for each trajectory in the bank, we auto-tune the
optimal control parameters offline and use them as the corresponding ground
truth. With this dataset, the TPN is trained by supervised learning. We
evaluated the TPN on the quadrotor platform. In simulation experiments, it is
shown that the TPN can predict near-optimal control parameters for a spectrum
of tracking tasks, demonstrating its robust generalization capabilities to
unseen tasks.
| [
{
"version": "v1",
"created": "Tue, 17 Dec 2024 01:24:02 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 16:54:38 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Cheng",
"Sheng",
""
],
[
"Tao",
"Ran",
""
],
[
"Gu",
"Yuliang",
""
],
[
"Wang",
"Shenlong",
""
],
[
"Wang",
"Xiaofeng",
""
],
[
"Hovakimyan",
"Naira",
""
]
] | TITLE: Task-Parameter Nexus: Task-Specific Parameter Learning for Model-Based
Control
ABSTRACT: This paper presents the Task-Parameter Nexus (TPN), a learning-based approach
for online determination of the (near-)optimal control parameters of
model-based controllers (MBCs) for tracking tasks. In TPN, a deep neural
network is introduced to predict the control parameters for any given tracking
task at runtime, especially when optimal parameters for new tasks are not
immediately available. To train this network, we constructed a trajectory bank
with various speeds and curvatures that represent different motion
characteristics. Then, for each trajectory in the bank, we auto-tune the
optimal control parameters offline and use them as the corresponding ground
truth. With this dataset, the TPN is trained by supervised learning. We
evaluated the TPN on the quadrotor platform. In simulation experiments, it is
shown that the TPN can predict near-optimal control parameters for a spectrum
of tracking tasks, demonstrating its robust generalization capabilities to
unseen tasks.
|
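At its core, the TPN record above is supervised regression from task descriptors to offline auto-tuned controller gains. A toy stand-in with synthetic data (the real system predicts MBC parameters for quadrotor trajectories; descriptors and gains below are made up):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 2))         # stand-in [speed, curvature] descriptors
y = np.c_[2 + 3 * X[:, 0], 1 + 2 * X[:, 1]]  # pretend auto-tuned gains [kp, kd]

tpn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
tpn.fit(X[:400], y[:400])                    # supervised learning on the "trajectory bank"
print("held-out R^2:", tpn.score(X[400:], y[400:]))
```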
2412.16615 | Feixiang Guo | Luo Ji, Feixiang Guo, Teng Chen, Qingqing Gu, Xiaoyu Wang, Ningyuan
Xi, Yihong Wang, Peng Yu, Yue Zhao, Hongyang Lei, Zhonglin Jiang, Yong Chen | Large Language Model Can Be a Foundation for Hidden Rationale-Based
Retrieval | 10 pages, 3 figures, ECIR 2025 | null | null | null | cs.IR cs.CL cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Despite recent advancements in Retrieval-Augmented Generation (RAG)
systems, most retrieval methodologies are developed for factual
retrieval, which assumes the query and positive documents are semantically similar.
In this paper, we instead propose and study a more challenging type of
retrieval task, called hidden rationale retrieval, in which query and document
are not similar but can be inferred by reasoning chains, logic relationships,
or empirical experiences. To address such problems, an instruction-tuned large
language model (LLM) with a cross-encoder architecture could be a reasonable
choice. To further strengthen pioneering LLM-based retrievers, we design a
special instruction that transforms the retrieval task into a generative task
by prompting the LLM to answer a binary-choice question. The model can be
fine-tuned with direct preference optimization (DPO). The framework is also
optimized for computational efficiency with no performance degradation. We name
this retrieval framework RaHoRe and verify its zero-shot and fine-tuned
performance superiority on Emotional Support Conversation (ESC), compared with
previous retrieval works. Our study suggests the potential to employ LLMs as a
foundation for a wider scope of retrieval tasks. Our codes, models, and
datasets are available at https://github.com/flyfree5/LaHoRe.
| [
{
"version": "v1",
"created": "Sat, 21 Dec 2024 13:19:15 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 14:08:58 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Ji",
"Luo",
""
],
[
"Guo",
"Feixiang",
""
],
[
"Chen",
"Teng",
""
],
[
"Gu",
"Qingqing",
""
],
[
"Wang",
"Xiaoyu",
""
],
[
"Xi",
"Ningyuan",
""
],
[
"Wang",
"Yihong",
""
],
[
"Yu",
"Peng",
""
],
[
"Zhao",
"Yue",
""
],
[
"Lei",
"Hongyang",
""
],
[
"Jiang",
"Zhonglin",
""
],
[
"Chen",
"Yong",
""
]
] | TITLE: Large Language Model Can Be a Foundation for Hidden Rationale-Based
Retrieval
ABSTRACT: Despite recent advancements in Retrieval-Augmented Generation (RAG)
systems, most retrieval methodologies are developed for factual
retrieval, which assumes the query and positive documents are semantically similar.
In this paper, we instead propose and study a more challenging type of
retrieval task, called hidden rationale retrieval, in which query and document
are not similar but can be inferred by reasoning chains, logic relationships,
or empirical experiences. To address such problems, an instruction-tuned large
language model (LLM) with a cross-encoder architecture could be a reasonable
choice. To further strengthen pioneering LLM-based retrievers, we design a
special instruction that transforms the retrieval task into a generative task
by prompting the LLM to answer a binary-choice question. The model can be
fine-tuned with direct preference optimization (DPO). The framework is also
optimized for computational efficiency with no performance degradation. We name
this retrieval framework RaHoRe and verify its zero-shot and fine-tuned
performance superiority on Emotional Support Conversation (ESC), compared with
previous retrieval works. Our study suggests the potential to employ LLMs as a
foundation for a wider scope of retrieval tasks. Our codes, models, and
datasets are available at https://github.com/flyfree5/LaHoRe.
|
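The RaHoRe record above turns retrieval into a binary-choice generation task. A sketch of the scoring loop follows; `yes_logprob` is a placeholder, not a real library call, standing in for whatever interface returns the LLM's log-probability of answering "Yes", and the prompt wording is invented for illustration.

```python
PROMPT = (
    "Would the following document help respond to the query?\n"
    "Query: {q}\nDocument: {d}\nAnswer Yes or No:"
)

def yes_logprob(text: str) -> float:
    # Stub: replace with a call to an actual instruction-tuned LLM.
    return -float(len(text) % 7)

def rank(query: str, docs: list[str]) -> list[str]:
    """Rank documents by the model's confidence in answering Yes."""
    scores = {d: yes_logprob(PROMPT.format(q=query, d=d)) for d in docs}
    return sorted(docs, key=scores.get, reverse=True)

print(rank("I feel anxious before exams",
           ["Try slow breathing exercises", "Stock prices rose today"]))
```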
2412.16742 | Yung-Hong Sun | Yung-Hong Sun, Gefei Shen, Jiangang Chen, Jayer Fernandes, Amber L.
Shada, Charles P. Heise, Hongrui Jiang, Yu Hen Hu | EasyVis2: A Real Time Multi-view 3D Visualization System for
Laparoscopic Surgery Training Enhanced by a Deep Neural Network YOLOv8-Pose | 11 pages (12 pages with citations), 12 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | EasyVis2 is a system designed to provide hands-free, real-time 3D
visualization for laparoscopic surgery. It incorporates a surgical trocar
equipped with an array of micro-cameras, which can be inserted into the body
cavity to offer an enhanced field of view and a 3D perspective of the surgical
procedure. A specialized deep neural network algorithm, YOLOv8-Pose, is
utilized to estimate the position and orientation of surgical instruments in
each individual camera view. These multi-view estimates enable the calculation
of 3D poses of surgical tools, facilitating the rendering of a 3D surface model
of the instruments, overlaid on the background scene, for real-time
visualization. This study presents methods for adapting YOLOv8-Pose to the
EasyVis2 system, including the development of a tailored training dataset.
Experimental results demonstrate that, with an identical number of cameras, the
new system improves 3D reconstruction accuracy and reduces computation time.
Additionally, the adapted YOLOv8-Pose system shows high accuracy in 2D pose
estimation.
| [
{
"version": "v1",
"created": "Sat, 21 Dec 2024 19:26:19 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 21:14:22 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Sun",
"Yung-Hong",
""
],
[
"Shen",
"Gefei",
""
],
[
"Chen",
"Jiangang",
""
],
[
"Fernandes",
"Jayer",
""
],
[
"Shada",
"Amber L.",
""
],
[
"Heise",
"Charles P.",
""
],
[
"Jiang",
"Hongrui",
""
],
[
"Hu",
"Yu Hen",
""
]
] | TITLE: EasyVis2: A Real Time Multi-view 3D Visualization System for
Laparoscopic Surgery Training Enhanced by a Deep Neural Network YOLOv8-Pose
ABSTRACT: EasyVis2 is a system designed to provide hands-free, real-time 3D
visualization for laparoscopic surgery. It incorporates a surgical trocar
equipped with an array of micro-cameras, which can be inserted into the body
cavity to offer an enhanced field of view and a 3D perspective of the surgical
procedure. A specialized deep neural network algorithm, YOLOv8-Pose, is
utilized to estimate the position and orientation of surgical instruments in
each individual camera view. These multi-view estimates enable the calculation
of 3D poses of surgical tools, facilitating the rendering of a 3D surface model
of the instruments, overlaid on the background scene, for real-time
visualization. This study presents methods for adapting YOLOv8-Pose to the
EasyVis2 system, including the development of a tailored training dataset.
Experimental results demonstrate that, with an identical number of cameras, the
new system improves 3D reconstruction accuracy and reduces computation time.
Additionally, the adapted YOLOv8-Pose system shows high accuracy in 2D pose
estimation.
|
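The per-view 2D keypoints in the EasyVis2 record above must be lifted to 3D poses. A standard linear triangulation (DLT) step, shown here as the generic textbook method rather than the authors' implementation:

```python
import numpy as np

def triangulate(proj_mats, points_2d):
    """proj_mats: list of 3x4 camera matrices; points_2d: matching (x, y) pairs."""
    rows = []
    for P, (x, y) in zip(proj_mats, points_2d):
        rows.append(x * P[2] - P[0])      # standard DLT constraints per view
        rows.append(y * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]                            # null vector = homogeneous 3D point
    return X[:3] / X[3]                   # dehomogenize

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X = np.array([0.2, 0.1, 2.0, 1.0])
uv = [(P @ X)[:2] / (P @ X)[2] for P in (P1, P2)]
print(triangulate([P1, P2], uv))          # ~ [0.2 0.1 2.0]
```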
2501.03225 | Yuhui Zhang | Yuhui Zhang, Yuchang Su, Yiming Liu, Xiaohan Wang, James Burgess,
Elaine Sui, Chenyu Wang, Josiah Aklilu, Alejandro Lozano, Anjiang Wei, Ludwig
Schmidt, Serena Yeung-Levy | Automated Generation of Challenging Multiple-Choice Questions for Vision
Language Model Evaluation | CVPR 2025 | null | null | null | cs.CV cs.AI cs.CL cs.CY cs.LG | http://creativecommons.org/licenses/by/4.0/ | The rapid development of vision language models (VLMs) demands rigorous and
reliable evaluation. However, current visual question answering (VQA)
benchmarks often depend on open-ended questions, making accurate evaluation
difficult due to the variability in natural language responses. To address
this, we introduce AutoConverter, an agentic framework that automatically
converts these open-ended questions into multiple-choice format, enabling
objective evaluation while reducing the costly multiple-choice question
creation process. Our experiments demonstrate that AutoConverter can generate
correct and challenging multiple-choice questions, with VLMs demonstrating
consistently similar or lower accuracy on these questions compared to
human-created ones. Using AutoConverter, we construct VMCBench, a benchmark
created by transforming 20 existing VQA datasets into a unified multiple-choice
format, totaling 9,018 questions. We comprehensively evaluate 33
state-of-the-art VLMs on VMCBench, setting a new standard for scalable,
consistent, and reproducible VLM evaluation.
| [
{
"version": "v1",
"created": "Mon, 6 Jan 2025 18:57:31 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 17:25:07 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Zhang",
"Yuhui",
""
],
[
"Su",
"Yuchang",
""
],
[
"Liu",
"Yiming",
""
],
[
"Wang",
"Xiaohan",
""
],
[
"Burgess",
"James",
""
],
[
"Sui",
"Elaine",
""
],
[
"Wang",
"Chenyu",
""
],
[
"Aklilu",
"Josiah",
""
],
[
"Lozano",
"Alejandro",
""
],
[
"Wei",
"Anjiang",
""
],
[
"Schmidt",
"Ludwig",
""
],
[
"Yeung-Levy",
"Serena",
""
]
] | TITLE: Automated Generation of Challenging Multiple-Choice Questions for Vision
Language Model Evaluation
ABSTRACT: The rapid development of vision language models (VLMs) demands rigorous and
reliable evaluation. However, current visual question answering (VQA)
benchmarks often depend on open-ended questions, making accurate evaluation
difficult due to the variability in natural language responses. To address
this, we introduce AutoConverter, an agentic framework that automatically
converts these open-ended questions into multiple-choice format, enabling
objective evaluation while reducing the costly multiple-choice question
creation process. Our experiments demonstrate that AutoConverter can generate
correct and challenging multiple-choice questions, with VLMs demonstrating
consistently similar or lower accuracy on these questions compared to
human-created ones. Using AutoConverter, we construct VMCBench, a benchmark
created by transforming 20 existing VQA datasets into a unified multiple-choice
format, totaling 9,018 questions. We comprehensively evaluate 33
state-of-the-art VLMs on VMCBench, setting a new standard for scalable,
consistent, and reproducible VLM evaluation.
|
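AutoConverter, per the record above, uses an agentic pipeline to generate and vet distractors. Stripped to a single deterministic step with distractors supplied by hand (a simplification, not the paper's method), the conversion itself is just:

```python
import random

def to_multiple_choice(question, answer, distractors, seed=0):
    """Wrap an open-ended VQA pair into a four-option item (distractors given)."""
    options = distractors + [answer]
    random.Random(seed).shuffle(options)       # fixed seed keeps the item stable
    letters = "ABCD"
    body = "\n".join([question] + [f"{l}. {o}" for l, o in zip(letters, options)])
    return body, letters[options.index(answer)]

q, gold = to_multiple_choice("What color is the bus?", "red",
                             ["blue", "green", "yellow"])
print(q, "\nCorrect:", gold)
```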
2501.03916 | Bo Zhang | Jiakang Yuan, Xiangchao Yan, Shiyang Feng, Bo Zhang, Tao Chen, Botian
Shi, Wanli Ouyang, Yu Qiao, Lei Bai, Bowen Zhou | Dolphin: Moving Towards Closed-loop Auto-research through Thinking,
Practice, and Feedback | 21 pages, 12 figures, and our homepage:
https://alpha-innovator.github.io/Dolphin-project-page | null | null | null | cs.AI cs.CL cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The scientific research paradigm is undergoing a profound transformation
owing to the development of Artificial Intelligence (AI). Recent works
demonstrate that various AI-assisted research methods can largely improve
research efficiency by improving data analysis, accelerating computation, and
fostering novel idea generation. To further move towards the ultimate goal
(i.e., automatic scientific research), in this paper, we introduce Dolphin, a
closed-loop LLM-driven framework to enhance the automation level of scientific
research. Dolphin first generates novel ideas based on feedback from previous
experiments and relevant papers ranked by the topic and task attributes. Then,
the generated ideas can be implemented using a code template refined and
debugged with the designed exception-traceback-guided local code structure.
Finally, Dolphin automatically analyzes the results of each idea and feeds the
results back to the next round of idea generation. Experiments are conducted on
the benchmark datasets of different topics and a subset of MLE-bench. Results
show that Dolphin can continuously improve the performance of the input topic
in a loop. We highlight that Dolphin can automatically propose methods that are
comparable to the state-of-the-art in some tasks such as 3D point
classification.
| [
{
"version": "v1",
"created": "Tue, 7 Jan 2025 16:31:10 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Jan 2025 13:14:28 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Apr 2025 16:27:02 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Yuan",
"Jiakang",
""
],
[
"Yan",
"Xiangchao",
""
],
[
"Feng",
"Shiyang",
""
],
[
"Zhang",
"Bo",
""
],
[
"Chen",
"Tao",
""
],
[
"Shi",
"Botian",
""
],
[
"Ouyang",
"Wanli",
""
],
[
"Qiao",
"Yu",
""
],
[
"Bai",
"Lei",
""
],
[
"Zhou",
"Bowen",
""
]
] | TITLE: Dolphin: Moving Towards Closed-loop Auto-research through Thinking,
Practice, and Feedback
ABSTRACT: The scientific research paradigm is undergoing a profound transformation
owing to the development of Artificial Intelligence (AI). Recent works
demonstrate that various AI-assisted research methods can largely improve
research efficiency by improving data analysis, accelerating computation, and
fostering novel idea generation. To further move towards the ultimate goal
(i.e., automatic scientific research), in this paper, we introduce Dolphin, a
closed-loop LLM-driven framework to enhance the automation level of scientific
research. Dolphin first generates novel ideas based on feedback from previous
experiments and relevant papers ranked by the topic and task attributes. Then,
the generated ideas can be implemented using a code template refined and
debugged with the designed exception-traceback-guided local code structure.
Finally, Dolphin automatically analyzes the results of each idea and feeds the
results back to the next round of idea generation. Experiments are conducted on
the benchmark datasets of different topics and a subset of MLE-bench. Results
show that Dolphin can continuously improve the performance of the input topic
in a loop. We highlight that Dolphin can automatically propose methods that are
comparable to the state-of-the-art in some tasks such as 3D point
classification.
|
2501.10481 | Qinyi Tian | Qinyi Tian, Winston Lindqwister, Manolis Veveakis, Laura E. Dalton | Learning Latent Hardening (LLH): Enhancing Deep Learning with Domain
Knowledge for Material Inverse Problems | null | null | null | null | cs.LG cond-mat.mtrl-sci cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Advancements in deep learning and machine learning have improved the ability
to model complex, nonlinear relationships, such as those encountered in complex
material inverse problems. However, the effectiveness of these methods often
depends on large datasets, which are not always available. In this study, the
incorporation of domain-specific knowledge of the mechanical behavior of
material microstructures is investigated to evaluate the impact on the
predictive performance of the models in data-scarce scenarios. To overcome data
limitations, a two-step framework, Learning Latent Hardening (LLH), is
proposed. In the first step of LLH, a Deep Neural Network is employed to
reconstruct full stress-strain curves from randomly selected portions of the
stress-strain curves to capture the latent mechanical response of a material
based on key microstructural features. In the second step of LLH, the results
of the reconstructed stress-strain curves are leveraged to predict key
microstructural features of porous materials. The performance of six deep
learning and/or machine learning models trained with and without domain
knowledge are compared: Convolutional Neural Networks, Deep Neural Networks,
Extreme Gradient Boosting, K-Nearest Neighbors, Long Short-Term Memory, and
Random Forest. The results from the models with domain-specific information
consistently achieved higher $R^2$ values compared to models without prior
knowledge. Models without domain knowledge missed critical patterns linking
stress-strain behavior to microstructural changes, whereas domain-informed
models better identified essential stress-strain features predictive of
microstructure. These findings highlight the importance of integrating
domain-specific knowledge with deep learning to achieve accurate outcomes in
materials science.
| [
{
"version": "v1",
"created": "Fri, 17 Jan 2025 03:09:25 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Feb 2025 04:15:56 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Apr 2025 03:04:57 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Tian",
"Qinyi",
""
],
[
"Lindqwister",
"Winston",
""
],
[
"Veveakis",
"Manolis",
""
],
[
"Dalton",
"Laura E.",
""
]
] | TITLE: Learning Latent Hardening (LLH): Enhancing Deep Learning with Domain
Knowledge for Material Inverse Problems
ABSTRACT: Advancements in deep learning and machine learning have improved the ability
to model complex, nonlinear relationships, such as those encountered in complex
material inverse problems. However, the effectiveness of these methods often
depends on large datasets, which are not always available. In this study, the
incorporation of domain-specific knowledge of the mechanical behavior of
material microstructures is investigated to evaluate the impact on the
predictive performance of the models in data-scarce scenarios. To overcome data
limitations, a two-step framework, Learning Latent Hardening (LLH), is
proposed. In the first step of LLH, a Deep Neural Network is employed to
reconstruct full stress-strain curves from randomly selected portions of the
stress-strain curves to capture the latent mechanical response of a material
based on key microstructural features. In the second step of LLH, the results
of the reconstructed stress-strain curves are leveraged to predict key
microstructural features of porous materials. The performance of six deep
learning and/or machine learning models trained with and without domain
knowledge is compared: Convolutional Neural Networks, Deep Neural Networks,
Extreme Gradient Boosting, K-Nearest Neighbors, Long Short-Term Memory, and
Random Forest. The results from the models with domain-specific information
consistently achieved higher $R^2$ values compared to models without prior
knowledge. Models without domain knowledge missed critical patterns linking
stress-strain behavior to microstructural changes, whereas domain-informed
models better identified essential stress-strain features predictive of
microstructure. These findings highlight the importance of integrating
domain-specific knowledge with deep learning to achieve accurate outcomes in
materials science.
|
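Step one of LLH in the record above reconstructs full stress-strain curves from randomly selected portions. A toy version on synthetic curves (a single shared mask and a small MLP here, purely as an assumption for brevity; the paper uses per-sample random portions and real microstructure-linked data):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
strain = np.linspace(0, 1, 50)
curves = np.array([k * strain / (1 + k * strain)      # synthetic hardening-like curves
                   for k in rng.uniform(1, 9, 300)])

masked = curves.copy()
masked[:, rng.choice(50, size=25, replace=False)] = 0.0  # hide half of every curve

net = MLPRegressor(hidden_layer_sizes=(128,), max_iter=3000, random_state=1)
net.fit(masked[:250], curves[:250])                   # learn to fill in the gaps
print("reconstruction R^2:", net.score(masked[250:], curves[250:]))
```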
2501.10629 | Jiajia Guo | Jiajia Guo, Yiming Cui, Chao-Kai Wen, Shi Jin | Prompt-Enabled Large AI Models for CSI Feedback | 13 pages, 11 figures, 1 table | null | null | null | cs.IT eess.SP math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial intelligence (AI) has emerged as a promising tool for channel
state information (CSI) feedback. While recent research primarily focuses on
improving feedback accuracy on a specific dataset through novel architectures,
the underlying mechanism of AI-based CSI feedback remains unclear. This study
explores the mechanism through analyzing performance across diverse datasets,
with findings suggesting that superior feedback performance stems from AI
models' strong fitting capabilities and their ability to leverage environmental
knowledge. Building on these findings, we propose a prompt-enabled large AI
model (LAM) for CSI feedback. The LAM employs powerful transformer blocks and
is trained on extensive datasets from various scenarios. Meanwhile, the channel
distribution (environmental knowledge) -- represented as the mean of channel
magnitude in the angular-delay domain -- is incorporated as a prompt within the
decoder to further enhance reconstruction quality. Simulation results confirm
that the proposed prompt-enabled LAM significantly improves feedback accuracy
and generalization performance while reducing data collection requirements in
new scenarios.
| [
{
"version": "v1",
"created": "Sat, 18 Jan 2025 02:12:47 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Mar 2025 19:05:58 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Apr 2025 01:26:11 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Guo",
"Jiajia",
""
],
[
"Cui",
"Yiming",
""
],
[
"Wen",
"Chao-Kai",
""
],
[
"Jin",
"Shi",
""
]
] | TITLE: Prompt-Enabled Large AI Models for CSI Feedback
ABSTRACT: Artificial intelligence (AI) has emerged as a promising tool for channel
state information (CSI) feedback. While recent research primarily focuses on
improving feedback accuracy on a specific dataset through novel architectures,
the underlying mechanism of AI-based CSI feedback remains unclear. This study
explores the mechanism through analyzing performance across diverse datasets,
with findings suggesting that superior feedback performance stems from AI
models' strong fitting capabilities and their ability to leverage environmental
knowledge. Building on these findings, we propose a prompt-enabled large AI
model (LAM) for CSI feedback. The LAM employs powerful transformer blocks and
is trained on extensive datasets from various scenarios. Meanwhile, the channel
distribution (environmental knowledge) -- represented as the mean of channel
magnitude in the angular-delay domain -- is incorporated as a prompt within the
decoder to further enhance reconstruction quality. Simulation results confirm
that the proposed prompt-enabled LAM significantly improves feedback accuracy
and generalization performance while reducing data collection requirements in
new scenarios.
|
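The prompt in the CSI-feedback record above is the mean channel magnitude in the angular-delay domain, which is reachable from spatial-frequency channels via a 2D DFT. A sketch on synthetic channels (shapes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
H = (rng.standard_normal((100, 32, 64)) +
     1j * rng.standard_normal((100, 32, 64)))  # (samples, antennas, subcarriers)

H_ad = np.fft.fft2(H, axes=(1, 2))             # to the angular-delay domain
prompt = np.abs(H_ad).mean(axis=0)             # environment knowledge as a prompt
print(prompt.shape)                            # (32, 64)
```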
2501.12900 | Ido Kanter | Ella Koresh, Ronit D. Gross, Yuval Meir, Yarden Tzach, Tal Halevi, and
Ido Kanter | Unified CNNs and transformers underlying learning mechanism reveals
multi-head attention modus vivendi | 31 pages, 11 figures, A short YouTube Video describing the main
results https://www.youtube.com/watch?v=7I8bp7UAudk | Physica A, Statistical Mechanics and its Applications, 666 (2025)
130529 | 10.1016/j.physa.2025.130529 | null | cs.LG cs.CV | http://creativecommons.org/licenses/by/4.0/ | Convolutional neural networks (CNNs) evaluate short-range correlations in
input images which progress along the layers, whereas vision transformer (ViT)
architectures evaluate long-range correlations, using repeated transformer
encoders composed of fully connected layers. Both are designed to solve complex
classification tasks but from different perspectives. This study demonstrates
that CNNs and ViT architectures stem from a unified underlying learning
mechanism, which quantitatively measures the single-nodal performance (SNP) of
each node in feedforward (FF) and multi-head attention (MHA) sub-blocks. Each
node identifies small clusters of possible output labels, with additional noise
represented as labels outside these clusters. These features are progressively
sharpened along the transformer encoders, enhancing the signal-to-noise ratio.
This unified underlying learning mechanism leads to two main findings. First,
it enables an efficient applied nodal diagonal connection (ANDC) pruning
technique without affecting the accuracy. Second, based on the SNP, spontaneous
symmetry breaking occurs among the MHA heads, such that each head focuses its
attention on a subset of labels through cooperation among its SNPs.
Consequently, each head becomes an expert in recognizing its designated labels,
representing a quantitative MHA modus vivendi mechanism. This statistical
mechanics-inspired viewpoint makes it possible to reveal the macroscopic
behavior of the entire network from the microscopic performance of each node. These results are
based on a compact convolutional transformer architecture trained on the
CIFAR-100 and Flowers-102 datasets and call for their extension to other
architectures and applications, such as natural language processing.
| [
{
"version": "v1",
"created": "Wed, 22 Jan 2025 14:19:48 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 13:41:43 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Apr 2025 13:06:49 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Koresh",
"Ella",
""
],
[
"Gross",
"Ronit D.",
""
],
[
"Meir",
"Yuval",
""
],
[
"Tzach",
"Yarden",
""
],
[
"Halevi",
"Tal",
""
],
[
"Kanter",
"Ido",
""
]
] | TITLE: Unified CNNs and transformers underlying learning mechanism reveals
multi-head attention modus vivendi
ABSTRACT: Convolutional neural networks (CNNs) evaluate short-range correlations in
input images which progress along the layers, whereas vision transformer (ViT)
architectures evaluate long-range correlations, using repeated transformer
encoders composed of fully connected layers. Both are designed to solve complex
classification tasks but from different perspectives. This study demonstrates
that CNNs and ViT architectures stem from a unified underlying learning
mechanism, which quantitatively measures the single-nodal performance (SNP) of
each node in feedforward (FF) and multi-head attention (MHA) sub-blocks. Each
node identifies small clusters of possible output labels, with additional noise
represented as labels outside these clusters. These features are progressively
sharpened along the transformer encoders, enhancing the signal-to-noise ratio.
This unified underlying learning mechanism leads to two main findings. First,
it enables an efficient applied nodal diagonal connection (ANDC) pruning
technique without affecting the accuracy. Second, based on the SNP, spontaneous
symmetry breaking occurs among the MHA heads, such that each head focuses its
attention on a subset of labels through cooperation among its SNPs.
Consequently, each head becomes an expert in recognizing its designated labels,
representing a quantitative MHA modus vivendi mechanism. This statistical
mechanics-inspired viewpoint makes it possible to reveal the macroscopic
behavior of the entire network from the microscopic performance of each node. These results are
based on a compact convolutional transformer architecture trained on the
CIFAR-100 and Flowers-102 datasets and call for their extension to other
architectures and applications, such as natural language processing.
|
2502.02514 | Jan Dubi\'nski | Antoni Kowalczuk, Jan Dubi\'nski, Franziska Boenisch, Adam Dziedzic | Privacy Attacks on Image AutoRegressive Models | Code: https://github.com/sprintml/privacy_attacks_against_iars | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Image autoregressive generation has emerged as a powerful new paradigm, with
image autoregressive models (IARs) matching state-of-the-art diffusion models
(DMs) in image quality (FID: 1.48 vs. 1.58) while allowing for higher
generation speed. However, the privacy risks associated with IARs remain
unexplored, raising concerns about their responsible deployment. To address
this gap, we conduct a comprehensive privacy analysis of IARs, comparing their
privacy risks to those of DMs as a reference point. Specifically, we develop a
novel membership inference attack (MIA) that achieves a remarkably high success
rate in detecting training images, with a True Positive Rate at False Positive
Rate = 1% (TPR@FPR=1%) of 86.38%, compared to just 6.38% for DMs using
comparable attacks. We leverage our novel MIA to perform dataset inference (DI)
for IARs and show that it requires as few as 6 samples to detect dataset
membership, compared to 200 samples for DI in DMs. This confirms a higher level
of information leakage in IARs. Finally, we are able to extract hundreds of
training data points from an IAR (e.g., 698 from VAR-d30). Our results suggest
a fundamental privacy-utility trade-off: while IARs excel in image generation
quality and speed, they are empirically significantly more vulnerable to
privacy attacks compared to DMs that achieve similar performance. This trend
suggests that incorporating techniques from DMs into IARs, such as modeling the
per-token probability distribution using a diffusion procedure, could help
mitigate IARs' vulnerability to privacy attacks. We make our code available at:
https://github.com/sprintml/privacy_attacks_against_iars
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2025 17:33:08 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 17:28:09 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Apr 2025 08:33:54 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Kowalczuk",
"Antoni",
""
],
[
"Dubiński",
"Jan",
""
],
[
"Boenisch",
"Franziska",
""
],
[
"Dziedzic",
"Adam",
""
]
] | TITLE: Privacy Attacks on Image AutoRegressive Models
ABSTRACT: Image autoregressive generation has emerged as a powerful new paradigm, with
image autoregressive models (IARs) matching state-of-the-art diffusion models
(DMs) in image quality (FID: 1.48 vs. 1.58) while allowing for higher
generation speed. However, the privacy risks associated with IARs remain
unexplored, raising concerns about their responsible deployment. To address
this gap, we conduct a comprehensive privacy analysis of IARs, comparing their
privacy risks to those of DMs as a reference point. Specifically, we develop a
novel membership inference attack (MIA) that achieves a remarkably high success
rate in detecting training images, with a True Positive Rate at False Positive
Rate = 1% (TPR@FPR=1%) of 86.38%, compared to just 6.38% for DMs using
comparable attacks. We leverage our novel MIA to perform dataset inference (DI)
for IARs and show that it requires as few as 6 samples to detect dataset
membership, compared to 200 samples for DI in DMs. This confirms a higher level
of information leakage in IARs. Finally, we are able to extract hundreds of
training data points from an IAR (e.g., 698 from VAR-d30). Our results suggest
a fundamental privacy-utility trade-off: while IARs excel in image generation
quality and speed, they are empirically significantly more vulnerable to
privacy attacks compared to DMs that achieve similar performance. This trend
suggests that incorporating techniques from DMs into IARs, such as modeling the
per-token probability distribution using a diffusion procedure, could help
mitigate IARs' vulnerability to privacy attacks. We make our code available at:
https://github.com/sprintml/privacy_attacks_against_iars
|
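The TPR@FPR=1% metric quoted in the record above can be read off from attack scores directly. A generic score-threshold illustration on synthetic Gaussian scores (the paper's MIA itself is far stronger than this toy separation):

```python
import numpy as np

rng = np.random.default_rng(0)
member_scores = rng.normal(1.0, 1.0, 1000)       # higher = "more member-like"
nonmember_scores = rng.normal(0.0, 1.0, 1000)

threshold = np.quantile(nonmember_scores, 0.99)  # admit 1% false positives
tpr = (member_scores > threshold).mean()         # fraction of members caught
print(f"TPR@FPR=1%: {tpr:.2%}")
```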
2502.02862 | Peiyan Yue | Peiyan Yue, Die Cai, Chu Guo, Mengxing Liu, Jun Xia, Yi Wang | Learning Generalizable Features for Tibial Plateau Fracture Segmentation
Using Masked Autoencoder and Limited Annotations | 5 pages, 6 figures. Accepted to IEEE EMBC 2025 | null | null | null | eess.IV cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate automated segmentation of tibial plateau fractures (TPF) from
computed tomography (CT) requires large amounts of annotated data to train deep
learning models, but obtaining such annotations presents unique challenges. The
process demands expert knowledge to identify diverse fracture patterns, assess
severity, and account for individual anatomical variations, making the
annotation process highly time-consuming and expensive. Although
semi-supervised learning methods can utilize unlabeled data, existing
approaches often struggle with the complexity and variability of fracture
morphologies, as well as limited generalizability across datasets. To tackle
these issues, we propose an effective training strategy based on masked
autoencoder (MAE) for accurate TPF segmentation in CT. Our method leverages
MAE pretraining to capture global skeletal structures and fine-grained fracture
details from unlabeled data, followed by fine-tuning with a small set of
labeled data. This strategy reduces the dependence on extensive annotations
while enhancing the model's ability to learn generalizable and transferable
features. The proposed method is evaluated on an in-house dataset containing
180 CT scans with TPF. Experimental results demonstrate that our method
consistently outperforms semi-supervised methods, achieving an average Dice
similarity coefficient (DSC) of 95.81%, average symmetric surface distance
(ASSD) of 1.91mm, and Hausdorff distance (95HD) of 9.42mm with only 20
annotated cases. Moreover, our method exhibits strong transferability when
applied to another public pelvic CT dataset with hip fractures, highlighting
its potential for broader applications in fracture segmentation tasks.
| [
{
"version": "v1",
"created": "Wed, 5 Feb 2025 03:44:52 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 05:15:50 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Yue",
"Peiyan",
""
],
[
"Cai",
"Die",
""
],
[
"Guo",
"Chu",
""
],
[
"Liu",
"Mengxing",
""
],
[
"Xia",
"Jun",
""
],
[
"Wang",
"Yi",
""
]
] | TITLE: Learning Generalizable Features for Tibial Plateau Fracture Segmentation
Using Masked Autoencoder and Limited Annotations
ABSTRACT: Accurate automated segmentation of tibial plateau fractures (TPF) from
computed tomography (CT) requires large amounts of annotated data to train deep
learning models, but obtaining such annotations presents unique challenges. The
process demands expert knowledge to identify diverse fracture patterns, assess
severity, and account for individual anatomical variations, making the
annotation process highly time-consuming and expensive. Although
semi-supervised learning methods can utilize unlabeled data, existing
approaches often struggle with the complexity and variability of fracture
morphologies, as well as limited generalizability across datasets. To tackle
these issues, we propose an effective training strategy based on masked
autoencoder (MAE) for accurate TPF segmentation in CT. Our method leverages
MAE pretraining to capture global skeletal structures and fine-grained fracture
details from unlabeled data, followed by fine-tuning with a small set of
labeled data. This strategy reduces the dependence on extensive annotations
while enhancing the model's ability to learn generalizable and transferable
features. The proposed method is evaluated on an in-house dataset containing
180 CT scans with TPF. Experimental results demonstrate that our method
consistently outperforms semi-supervised methods, achieving an average Dice
similarity coefficient (DSC) of 95.81%, average symmetric surface distance
(ASSD) of 1.91mm, and Hausdorff distance (95HD) of 9.42mm with only 20
annotated cases. Moreover, our method exhibits strong transferability when
applied to another public pelvic CT dataset with hip fractures, highlighting
its potential for broader applications in fracture segmentation tasks.
|
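MAE pretraining, as used in the record above, starts by patchifying an image and exposing only a random subset of patches to the encoder. A minimal input-preparation sketch (2D slice, 16x16 patches, 25% kept; patch size and keep ratio are assumptions, and the model itself is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((224, 224))                    # stand-in CT slice
p = 16
# Split into 14x14 non-overlapping patches, flattened to vectors: (196, 256)
patches = img.reshape(14, p, 14, p).swapaxes(1, 2).reshape(-1, p * p)

keep = rng.choice(len(patches), size=len(patches) // 4, replace=False)
visible = patches[keep]                         # the encoder sees only these
print(visible.shape)                            # (49, 256); the rest is reconstructed
```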
2502.03307 | Yu Wang | Yu Wang and Lei Sang and Yi Zhang and Yiwen Zhang | Intent Representation Learning with Large Language Model for
Recommendation | Accepted by SIGIR 2025 Full Paper | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Intent-based recommender systems have garnered significant attention for
uncovering latent fine-grained preferences. Intents, as underlying factors of
interactions, are crucial for improving recommendation interpretability. Most
methods define intents as learnable parameters updated alongside interactions.
However, existing frameworks often overlook textual information (e.g., user
reviews, item descriptions), which is crucial for alleviating the sparsity of
interaction intents. Exploring these multimodal intents, especially the
inherent differences in representation spaces, poses two key challenges: i) How
to align multimodal intents and effectively mitigate noise issues; ii) How to
extract and match latent key intents across modalities. To tackle these
challenges, we propose a model-agnostic framework, Intent Representation
Learning with Large Language Model (IRLLRec), which leverages large language
models (LLMs) to construct multimodal intents and enhance recommendations.
Specifically, IRLLRec employs a dual-tower architecture to learn multimodal
intent representations. Next, we propose pairwise and translation alignment to
eliminate inter-modal differences and enhance robustness against noisy input
features. Finally, to better match textual and interaction-based intents, we
employ momentum distillation to perform teacher-student learning on fused
intent representations. Empirical evaluations on three datasets show that our
IRLLRec framework outperforms baselines. Code available at
https://github.com/wangyu0627/IRLLRec.
| [
{
"version": "v1",
"created": "Wed, 5 Feb 2025 16:08:05 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Feb 2025 14:29:44 GMT"
},
{
"version": "v3",
"created": "Wed, 12 Feb 2025 08:16:44 GMT"
},
{
"version": "v4",
"created": "Wed, 9 Apr 2025 07:21:18 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Wang",
"Yu",
""
],
[
"Sang",
"Lei",
""
],
[
"Zhang",
"Yi",
""
],
[
"Zhang",
"Yiwen",
""
]
] | TITLE: Intent Representation Learning with Large Language Model for
Recommendation
ABSTRACT: Intent-based recommender systems have garnered significant attention for
uncovering latent fine-grained preferences. Intents, as underlying factors of
interactions, are crucial for improving recommendation interpretability. Most
methods define intents as learnable parameters updated alongside interactions.
However, existing frameworks often overlook textual information (e.g., user
reviews, item descriptions), which is crucial for alleviating the sparsity of
interaction intents. Exploring these multimodal intents, especially the
inherent differences in representation spaces, poses two key challenges: i) How
to align multimodal intents and effectively mitigate noise issues; ii) How to
extract and match latent key intents across modalities. To tackle these
challenges, we propose a model-agnostic framework, Intent Representation
Learning with Large Language Model (IRLLRec), which leverages large language
models (LLMs) to construct multimodal intents and enhance recommendations.
Specifically, IRLLRec employs a dual-tower architecture to learn multimodal
intent representations. Next, we propose pairwise and translation alignment to
eliminate inter-modal differences and enhance robustness against noisy input
features. Finally, to better match textual and interaction-based intents, we
employ momentum distillation to perform teacher-student learning on fused
intent representations. Empirical evaluations on three datasets show that our
IRLLRec framework outperforms baselines. Code available at
https://github.com/wangyu0627/IRLLRec.
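To make the pairwise alignment idea more concrete, here is a minimal InfoNCE-style sketch that pulls matched textual and interaction intent embeddings together; the loss form, temperature, and dimensions are assumptions, not IRLLRec's exact objective.

```python
# Illustrative pairwise alignment between textual and interaction intents
# (assumed InfoNCE form; the paper's actual losses may differ).
import torch
import torch.nn.functional as F

def pairwise_alignment_loss(text_intents, inter_intents, temperature=0.2):
    """Both inputs: (num_intents, dim); row i of each tensor describes
    the same latent intent in the two modalities."""
    t = F.normalize(text_intents, dim=-1)
    v = F.normalize(inter_intents, dim=-1)
    logits = t @ v.T / temperature          # cosine similarity matrix
    targets = torch.arange(t.size(0))       # matched pairs on the diagonal
    return F.cross_entropy(logits, targets)

loss = pairwise_alignment_loss(torch.randn(64, 128), torch.randn(64, 128))
```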
|
2502.03375 | Songwen Hu | Songwen Hu, Ryan A. Rossi, Tong Yu, Junda Wu, Handong Zhao, Sungchul
Kim, Shuai Li | Interactive Visualization Recommendation with Hier-SUCB | null | null | 10.1145/3696410.3714697 | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | Visualization recommendation aims to enable rapid visual analysis of massive
datasets. In real-world scenarios, it is essential to quickly gather and
comprehend user preferences to cover users from diverse backgrounds, including
varying skill levels and analytical tasks. Previous approaches to personalized
visualization recommendations are non-interactive and rely on initial user data
for new users. As a result, these models cannot effectively explore options or
adapt to real-time feedback. To address this limitation, we propose an
interactive personalized visualization recommendation (PVisRec) system that
learns from user feedback collected in previous interactions. For more interactive and
accurate recommendations, we propose Hier-SUCB, a contextual combinatorial
semi-bandit in the PVisRec setting. Theoretically, we show an improved overall
regret bound with the same rank of time but an improved rank of action space.
We further demonstrate the effectiveness of Hier-SUCB through extensive
experiments where it is comparable to offline methods and outperforms other
bandit algorithms in the setting of visualization recommendation.
| [
{
"version": "v1",
"created": "Wed, 5 Feb 2025 17:14:45 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Feb 2025 03:46:29 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Feb 2025 02:17:49 GMT"
},
{
"version": "v4",
"created": "Sun, 9 Mar 2025 04:14:14 GMT"
},
{
"version": "v5",
"created": "Tue, 8 Apr 2025 21:05:45 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Hu",
"Songwen",
""
],
[
"Rossi",
"Ryan A.",
""
],
[
"Yu",
"Tong",
""
],
[
"Wu",
"Junda",
""
],
[
"Zhao",
"Handong",
""
],
[
"Kim",
"Sungchul",
""
],
[
"Li",
"Shuai",
""
]
] | TITLE: Interactive Visualization Recommendation with Hier-SUCB
ABSTRACT: Visualization recommendation aims to enable rapid visual analysis of massive
datasets. In real-world scenarios, it is essential to quickly gather and
comprehend user preferences to cover users from diverse backgrounds, including
varying skill levels and analytical tasks. Previous approaches to personalized
visualization recommendations are non-interactive and rely on initial user data
for new users. As a result, these models cannot effectively explore options or
adapt to real-time feedback. To address this limitation, we propose an
interactive personalized visualization recommendation (PVisRec) system that
learns from user feedback collected in previous interactions. For more interactive and
accurate recommendations, we propose Hier-SUCB, a contextual combinatorial
semi-bandit in the PVisRec setting. Theoretically, we show an improved overall
regret bound with the same rank of time but an improved rank of action space.
We further demonstrate the effectiveness of Hier-SUCB through extensive
experiments where it is comparable to offline methods and outperforms other
bandit algorithms in the setting of visualization recommendation.
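For readers unfamiliar with the bandit machinery, the following LinUCB-style sketch shows the kind of contextual upper-confidence update that algorithms such as Hier-SUCB build on; the hierarchical, combinatorial structure of the actual method is omitted, and all names here are illustrative.

```python
# A minimal LinUCB-style contextual bandit arm (illustrative background
# only; Hier-SUCB's semi-bandit structure is more involved).
import numpy as np

class LinUCBArm:
    def __init__(self, dim, alpha=1.0):
        self.A = np.eye(dim)       # regularized design matrix
        self.b = np.zeros(dim)     # reward-weighted feature sum
        self.alpha = alpha         # exploration strength

    def ucb(self, x):
        """Upper confidence bound for context vector x."""
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b
        return theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)

    def update(self, x, reward):
        self.A += np.outer(x, x)
        self.b += reward * x

arm = LinUCBArm(dim=8)
x = np.random.rand(8)
score = arm.ucb(x)         # explore/exploit score for a candidate config
arm.update(x, reward=1.0)  # e.g., the user accepted the recommendation
```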
|
2502.12063 | Lester Mackey | Annabelle Michael Carrell, Albert Gong, Abhishek Shetty, Raaz Dwivedi,
Lester Mackey | Low-Rank Thinning | null | null | null | null | stat.ML cs.LG math.OC math.ST stat.ME stat.TH | http://creativecommons.org/licenses/by/4.0/ | The goal in thinning is to summarize a dataset using a small set of
representative points. Remarkably, sub-Gaussian thinning algorithms like Kernel
Halving and Compress can match the quality of uniform subsampling while
substantially reducing the number of summary points. However, existing
guarantees cover only a restricted range of distributions and kernel-based
quality measures and suffer from pessimistic dimension dependence. To address
these deficiencies, we introduce a new low-rank analysis of sub-Gaussian
thinning that applies to any distribution and any kernel, guaranteeing
high-quality compression whenever the kernel or data matrix is approximately
low-rank. To demonstrate the broad applicability of the techniques, we design
practical sub-Gaussian thinning approaches that improve upon the best known
guarantees for approximating attention in transformers, accelerating stochastic
gradient training through reordering, and distinguishing distributions in
near-linear time.
| [
{
"version": "v1",
"created": "Mon, 17 Feb 2025 17:30:14 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 14:13:04 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Apr 2025 17:36:49 GMT"
},
{
"version": "v4",
"created": "Tue, 8 Apr 2025 21:57:48 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Carrell",
"Annabelle Michael",
""
],
[
"Gong",
"Albert",
""
],
[
"Shetty",
"Abhishek",
""
],
[
"Dwivedi",
"Raaz",
""
],
[
"Mackey",
"Lester",
""
]
] | TITLE: Low-Rank Thinning
ABSTRACT: The goal in thinning is to summarize a dataset using a small set of
representative points. Remarkably, sub-Gaussian thinning algorithms like Kernel
Halving and Compress can match the quality of uniform subsampling while
substantially reducing the number of summary points. However, existing
guarantees cover only a restricted range of distributions and kernel-based
quality measures and suffer from pessimistic dimension dependence. To address
these deficiencies, we introduce a new low-rank analysis of sub-Gaussian
thinning that applies to any distribution and any kernel, guaranteeing
high-quality compression whenever the kernel or data matrix is approximately
low-rank. To demonstrate the broad applicability of the techniques, we design
practical sub-Gaussian thinning approaches that improve upon the best known
guarantees for approximating attention in transformers, accelerating stochastic
gradient training through reordering, and distinguishing distributions in
near-linear time.
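As background for the kernel-based quality measures mentioned above, this sketch computes the maximum mean discrepancy (MMD) between a dataset and a candidate summary under a Gaussian kernel; it illustrates the evaluation target that thinning aims to shrink, not the paper's algorithms.

```python
# MMD between a full dataset and a coreset under a Gaussian kernel
# (illustrative quality measure; not Kernel Halving or Compress).
import numpy as np

def gaussian_kernel(X, Y, bandwidth=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def mmd(X, coreset, bandwidth=1.0):
    kxx = gaussian_kernel(X, X, bandwidth).mean()
    kyy = gaussian_kernel(coreset, coreset, bandwidth).mean()
    kxy = gaussian_kernel(X, coreset, bandwidth).mean()
    return kxx + kyy - 2 * kxy

X = np.random.randn(512, 2)
subsample = X[np.random.choice(512, 32, replace=False)]  # uniform baseline
print(mmd(X, subsample))  # sub-Gaussian thinning aims to drive this lower
```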
|
2502.18389 | Nicola Cecere | Nicola Cecere, Andrea Bacciu, Ignacio Fern\'andez Tob\'ias, Amin
Mantrach | Monte Carlo Temperature: a robust sampling strategy for LLM's
uncertainty quantification methods | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Uncertainty quantification (UQ) in Large Language Models (LLMs) is essential
for their safe and reliable deployment, particularly in critical applications
where incorrect outputs can have serious consequences. Current UQ methods
typically rely on querying the model multiple times using non-zero temperature
sampling to generate diverse outputs for uncertainty estimation. However, the
impact of selecting a given temperature parameter is understudied, and our
analysis reveals that temperature plays a fundamental role in the quality of
uncertainty estimates. The conventional approach of identifying optimal
temperature values requires expensive hyperparameter optimization (HPO) that
must be repeated for each new model-dataset combination. We propose Monte Carlo
Temperature (MCT), a robust sampling strategy that eliminates the need for
temperature calibration. Our analysis reveals that: 1) MCT provides more robust
uncertainty estimates across a wide range of temperatures, 2) MCT improves the
performance of UQ methods by replacing fixed-temperature strategies that do not
rely on HPO, and 3) MCT achieves statistical parity with oracle temperatures,
which represent the ideal outcome of a well-tuned but computationally expensive
HPO process. These findings demonstrate that effective UQ can be achieved
without the computational burden of temperature parameter calibration.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2025 17:33:20 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 16:40:21 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Cecere",
"Nicola",
""
],
[
"Bacciu",
"Andrea",
""
],
[
"Tobías",
"Ignacio Fernández",
""
],
[
"Mantrach",
"Amin",
""
]
] | TITLE: Monte Carlo Temperature: a robust sampling strategy for LLM's
uncertainty quantification methods
ABSTRACT: Uncertainty quantification (UQ) in Large Language Models (LLMs) is essential
for their safe and reliable deployment, particularly in critical applications
where incorrect outputs can have serious consequences. Current UQ methods
typically rely on querying the model multiple times using non-zero temperature
sampling to generate diverse outputs for uncertainty estimation. However, the
impact of selecting a given temperature parameter is understudied, and our
analysis reveals that temperature plays a fundamental role in the quality of
uncertainty estimates. The conventional approach of identifying optimal
temperature values requires expensive hyperparameter optimization (HPO) that
must be repeated for each new model-dataset combination. We propose Monte Carlo
Temperature (MCT), a robust sampling strategy that eliminates the need for
temperature calibration. Our analysis reveals that: 1) MCT provides more robust
uncertainty estimates across a wide range of temperatures, 2) MCT improves the
performance of UQ methods by replacing fixed-temperature strategies that do not
rely on HPO, and 3) MCT achieves statistical parity with oracle temperatures,
which represent the ideal outcome of a well-tuned but computationally expensive
HPO process. These findings demonstrate that effective UQ can be achieved
without the computational burden of temperature parameter calibration.
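A minimal sketch of the sampling strategy, assuming a hypothetical model.generate API: rather than fixing one tuned temperature, each of the N samples draws its temperature from a prior range.

```python
# Monte Carlo over temperature instead of a single tuned value
# (model.generate is a hypothetical call; range is an assumption).
import random

def mct_samples(model, prompt, n_samples=10, t_range=(0.1, 1.5)):
    outputs = []
    for _ in range(n_samples):
        t = random.uniform(*t_range)   # sample a temperature per generation
        outputs.append(model.generate(prompt, temperature=t))
    return outputs  # feed these to any sampling-based UQ method
```

The diversity of the resulting outputs then drives the uncertainty estimate, with no per-model, per-dataset temperature search.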
|
2502.19217 | Nikita Shvetsov | Nikita Shvetsov, Thomas K. Kilvaer, Masoud Tafavvoghi, Anders Sildnes,
Kajsa M{\o}llersen, Lill-Tove Rasmussen Busund, Lars Ailo Bongo | A Lightweight and Extensible Cell Segmentation and Classification Model
for Whole Slide Images | 30 pages, 11 figures | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Developing clinically useful cell-level analysis tools in digital pathology
remains challenging due to limitations in dataset granularity, inconsistent
annotations, high computational demands, and difficulties integrating new
technologies into workflows. To address these issues, we propose a solution
that enhances data quality, model performance, and usability by creating a
lightweight, extensible cell segmentation and classification model. First, we
update data labels through cross-relabeling to refine annotations of PanNuke
and MoNuSAC, producing a unified dataset with seven distinct cell types.
Second, we leverage the H-Optimus foundation model as a fixed encoder to
improve feature representation for simultaneous segmentation and classification
tasks. Third, to address foundation models' computational demands, we distill
knowledge to reduce model size and complexity while maintaining comparable
performance. Finally, we integrate the distilled model into QuPath, a widely
used open-source digital pathology platform. Results demonstrate improved
segmentation and classification performance using the H-Optimus-based model
compared to a CNN-based model. Specifically, average $R^2$ improved from 0.575
to 0.871, and average $PQ$ score improved from 0.450 to 0.492, indicating
better alignment with actual cell counts and enhanced segmentation quality. The
distilled model maintains comparable performance while reducing parameter count
by a factor of 48. By reducing computational complexity and integrating into
workflows, this approach may significantly impact diagnostics, reduce
pathologist workload, and improve outcomes. Although the method shows promise,
extensive validation is necessary prior to clinical deployment.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 15:19:52 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 11:06:08 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Shvetsov",
"Nikita",
""
],
[
"Kilvaer",
"Thomas K.",
""
],
[
"Tafavvoghi",
"Masoud",
""
],
[
"Sildnes",
"Anders",
""
],
[
"Møllersen",
"Kajsa",
""
],
[
"Busund",
"Lill-Tove Rasmussen",
""
],
[
"Bongo",
"Lars Ailo",
""
]
] | TITLE: A Lightweight and Extensible Cell Segmentation and Classification Model
for Whole Slide Images
ABSTRACT: Developing clinically useful cell-level analysis tools in digital pathology
remains challenging due to limitations in dataset granularity, inconsistent
annotations, high computational demands, and difficulties integrating new
technologies into workflows. To address these issues, we propose a solution
that enhances data quality, model performance, and usability by creating a
lightweight, extensible cell segmentation and classification model. First, we
update data labels through cross-relabeling to refine annotations of PanNuke
and MoNuSAC, producing a unified dataset with seven distinct cell types.
Second, we leverage the H-Optimus foundation model as a fixed encoder to
improve feature representation for simultaneous segmentation and classification
tasks. Third, to address foundation models' computational demands, we distill
knowledge to reduce model size and complexity while maintaining comparable
performance. Finally, we integrate the distilled model into QuPath, a widely
used open-source digital pathology platform. Results demonstrate improved
segmentation and classification performance using the H-Optimus-based model
compared to a CNN-based model. Specifically, average $R^2$ improved from 0.575
to 0.871, and average $PQ$ score improved from 0.450 to 0.492, indicating
better alignment with actual cell counts and enhanced segmentation quality. The
distilled model maintains comparable performance while reducing parameter count
by a factor of 48. By reducing computational complexity and integrating into
workflows, this approach may significantly impact diagnostics, reduce
pathologist workload, and improve outcomes. Although the method shows promise,
extensive validation is necessary prior to clinical deployment.
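The distillation step can be illustrated with a standard temperature-scaled KL loss on logits; the abstract does not specify the recipe, so the loss form and temperature below are assumptions.

```python
# Logit distillation from the H-Optimus-based teacher to a small student
# (standard soft-target KL; the paper's exact recipe may differ).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Soft-target KL loss; gradients flow only into the student."""
    p_teacher = F.softmax(teacher_logits.detach() / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

loss = distillation_loss(torch.randn(4, 7), torch.randn(4, 7))  # 7 cell types
```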
|
2503.05639 | Yuxuan Bian | Yuxuan Bian, Zhaoyang Zhang, Xuan Ju, Mingdeng Cao, Liangbin Xie, Ying
Shan, Qiang Xu | VideoPainter: Any-length Video Inpainting and Editing with Plug-and-Play
Context Control | Project page available at
https://yxbian23.github.io/project/video-painter | null | null | null | cs.CV cs.AI cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video inpainting, which aims to restore corrupted video content, has
experienced substantial progress. Despite these advances, existing methods,
whether propagating unmasked region pixels through optical flow and receptive
field priors, or extending image-inpainting models temporally, face challenges
in generating fully masked objects or balancing the competing objectives of
background context preservation and foreground generation in one model,
respectively. To address these limitations, we propose a novel dual-stream
paradigm VideoPainter that incorporates an efficient context encoder
(comprising only 6% of the backbone parameters) to process masked videos and
inject backbone-aware background contextual cues to any pre-trained video DiT,
producing semantically consistent content in a plug-and-play manner. This
architectural separation significantly reduces the model's learning complexity
while enabling nuanced integration of crucial background context. We also
introduce a novel target region ID resampling technique that enables any-length
video inpainting, greatly enhancing its practical applicability. Additionally,
we establish a scalable dataset pipeline leveraging current vision
understanding models, contributing VPData and VPBench to facilitate
segmentation-based inpainting training and assessment, the largest video
inpainting dataset and benchmark to date with over 390K diverse clips. Using
inpainting as a pipeline basis, we also explore downstream applications
including video editing and video editing pair data generation, demonstrating
competitive performance and significant practical potential. Extensive
experiments demonstrate VideoPainter's superior performance in both any-length
video inpainting and editing, across eight key metrics, including video
quality, mask region preservation, and textual coherence.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 17:59:46 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 18:56:32 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Apr 2025 02:05:33 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Bian",
"Yuxuan",
""
],
[
"Zhang",
"Zhaoyang",
""
],
[
"Ju",
"Xuan",
""
],
[
"Cao",
"Mingdeng",
""
],
[
"Xie",
"Liangbin",
""
],
[
"Shan",
"Ying",
""
],
[
"Xu",
"Qiang",
""
]
] | TITLE: VideoPainter: Any-length Video Inpainting and Editing with Plug-and-Play
Context Control
ABSTRACT: Video inpainting, which aims to restore corrupted video content, has
experienced substantial progress. Despite these advances, existing methods,
whether propagating unmasked region pixels through optical flow and receptive
field priors, or extending image-inpainting models temporally, face challenges
in generating fully masked objects or balancing the competing objectives of
background context preservation and foreground generation in one model,
respectively. To address these limitations, we propose a novel dual-stream
paradigm VideoPainter that incorporates an efficient context encoder
(comprising only 6% of the backbone parameters) to process masked videos and
inject backbone-aware background contextual cues to any pre-trained video DiT,
producing semantically consistent content in a plug-and-play manner. This
architectural separation significantly reduces the model's learning complexity
while enabling nuanced integration of crucial background context. We also
introduce a novel target region ID resampling technique that enables any-length
video inpainting, greatly enhancing its practical applicability. Additionally,
we establish a scalable dataset pipeline leveraging current vision
understanding models, contributing VPData and VPBench to facilitate
segmentation-based inpainting training and assessment, the largest video
inpainting dataset and benchmark to date with over 390K diverse clips. Using
inpainting as a pipeline basis, we also explore downstream applications
including video editing and video editing pair data generation, demonstrating
competitive performance and significant practical potential. Extensive
experiments demonstrate VideoPainter's superior performance in both any-length
video inpainting and editing, across eight key metrics, including video
quality, mask region preservation, and textual coherence.
|
2503.08688 | Ariba Khan | Ariba Khan, Stephen Casper, Dylan Hadfield-Menell | Randomness, Not Representation: The Unreliability of Evaluating Cultural
Alignment in LLMs | null | null | null | null | cs.CY | http://creativecommons.org/licenses/by/4.0/ | Research on the 'cultural alignment' of Large Language Models (LLMs) has
emerged in response to growing interest in understanding representation across
diverse stakeholders. Current approaches to evaluating cultural alignment
through survey-based assessments that borrow from social science methodologies
often overlook systematic robustness checks. Here, we identify and test three
assumptions behind current survey-based evaluation methods: (1) Stability: that
cultural alignment is a property of LLMs rather than an artifact of evaluation
design, (2) Extrapolability: that alignment with one culture on a narrow set of
issues predicts alignment with that culture on others, and (3) Steerability:
that LLMs can be reliably prompted to represent specific cultural perspectives.
Through experiments examining both explicit and implicit preferences of leading
LLMs, we find a high level of instability across presentation formats,
incoherence between evaluated versus held-out cultural dimensions, and erratic
behavior under prompt steering. We show that these inconsistencies can cause
the results of an evaluation to be very sensitive to minor variations in
methodology. Finally, we demonstrate in a case study on evaluation design that
narrow experiments and a selective assessment of evidence can be used to paint
an incomplete picture of LLMs' cultural alignment properties. Overall, these
results highlight significant limitations of current survey-based approaches to
evaluating the cultural alignment of LLMs and highlight a need for systematic
robustness checks and red-teaming for evaluation results. Data and code are
available at
https://huggingface.co./datasets/akhan02/cultural-dimension-cover-letters and
https://github.com/ariba-k/llm-cultural-alignment-evaluation, respectively.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 17:59:53 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 21:11:19 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Khan",
"Ariba",
""
],
[
"Casper",
"Stephen",
""
],
[
"Hadfield-Menell",
"Dylan",
""
]
] | TITLE: Randomness, Not Representation: The Unreliability of Evaluating Cultural
Alignment in LLMs
ABSTRACT: Research on the 'cultural alignment' of Large Language Models (LLMs) has
emerged in response to growing interest in understanding representation across
diverse stakeholders. Current approaches to evaluating cultural alignment
through survey-based assessments that borrow from social science methodologies
often overlook systematic robustness checks. Here, we identify and test three
assumptions behind current survey-based evaluation methods: (1) Stability: that
cultural alignment is a property of LLMs rather than an artifact of evaluation
design, (2) Extrapolability: that alignment with one culture on a narrow set of
issues predicts alignment with that culture on others, and (3) Steerability:
that LLMs can be reliably prompted to represent specific cultural perspectives.
Through experiments examining both explicit and implicit preferences of leading
LLMs, we find a high level of instability across presentation formats,
incoherence between evaluated versus held-out cultural dimensions, and erratic
behavior under prompt steering. We show that these inconsistencies can cause
the results of an evaluation to be very sensitive to minor variations in
methodology. Finally, we demonstrate in a case study on evaluation design that
narrow experiments and a selective assessment of evidence can be used to paint
an incomplete picture of LLMs' cultural alignment properties. Overall, these
results highlight significant limitations of current survey-based approaches to
evaluating the cultural alignment of LLMs and highlight a need for systematic
robustness checks and red-teaming for evaluation results. Data and code are
available at
https://huggingface.co./datasets/akhan02/cultural-dimension-cover-letters and
https://github.com/ariba-k/llm-cultural-alignment-evaluation, respectively.
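In the spirit of the robustness checks the paper calls for, here is a minimal stability probe that re-asks a survey item with shuffled option order; ask_model is a hypothetical LLM call, and this is not the authors' evaluation code.

```python
# Stability check: does the model's survey answer survive a shuffled
# option order? (ask_model is a hypothetical callable returning the
# chosen option text.)
import random

def stability_rate(ask_model, question, options, n_trials=20):
    answers = []
    for _ in range(n_trials):
        shuffled = random.sample(options, k=len(options))  # new order each time
        answers.append(ask_model(question, shuffled))
    top = max(set(answers), key=answers.count)
    return answers.count(top) / n_trials  # 1.0 = perfectly stable

# rate = stability_rate(ask_model, "Is tradition important?", ["Yes", "No"])
```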
|
2503.12978 | Yang Ji | Yang Ji, Ying Sun, Hengshu Zhu | Enhancing Job Salary Prediction with Disentangled Composition Effect
Modeling: A Neural Prototyping Approach | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the era of the knowledge economy, understanding how job skills influence
salary is crucial for promoting recruitment with competitive salary systems and
aligned salary expectations. Despite efforts on salary prediction based on job
positions and talent demographics, there is still a lack of methods to effectively
discern the set-structured skills' intricate composition effect on job salary.
While recent advances in neural networks have significantly improved accurate
set-based quantitative modeling, their lack of explainability hinders obtaining
insights into the skills' composition effects. Indeed, model explanation for
set data is challenging due to the combinatorial nature, rich semantics, and
unique format. To this end, in this paper, we propose a novel intrinsically
explainable set-based neural prototyping approach, namely \textbf{LGDESetNet},
for explainable salary prediction that can reveal disentangled skill sets that
impact salary from both local and global perspectives. Specifically, we propose
a skill graph-enhanced disentangled discrete subset selection layer to identify
multi-faceted influential input subsets with varied semantics. Furthermore, we
propose a set-oriented prototype learning method to extract globally
influential prototypical sets. The resulting output is transparently derived
from the semantic interplay between these input subsets and global prototypes.
Extensive experiments on four real-world datasets demonstrate that our method
achieves superior performance over state-of-the-art baselines in salary
prediction while providing explainable insights into salary-influencing
patterns.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 09:36:07 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Mar 2025 03:28:19 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Apr 2025 02:23:34 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Ji",
"Yang",
""
],
[
"Sun",
"Ying",
""
],
[
"Zhu",
"Hengshu",
""
]
] | TITLE: Enhancing Job Salary Prediction with Disentangled Composition Effect
Modeling: A Neural Prototyping Approach
ABSTRACT: In the era of the knowledge economy, understanding how job skills influence
salary is crucial for promoting recruitment with competitive salary systems and
aligned salary expectations. Despite efforts on salary prediction based on job
positions and talent demographics, there is still a lack of methods to effectively
discern the set-structured skills' intricate composition effect on job salary.
While recent advances in neural networks have significantly improved accurate
set-based quantitative modeling, their lack of explainability hinders obtaining
insights into the skills' composition effects. Indeed, model explanation for
set data is challenging due to the combinatorial nature, rich semantics, and
unique format. To this end, in this paper, we propose a novel intrinsically
explainable set-based neural prototyping approach, namely \textbf{LGDESetNet},
for explainable salary prediction that can reveal disentangled skill sets that
impact salary from both local and global perspectives. Specifically, we propose
a skill graph-enhanced disentangled discrete subset selection layer to identify
multi-faceted influential input subsets with varied semantics. Furthermore, we
propose a set-oriented prototype learning method to extract globally
influential prototypical sets. The resulting output is transparently derived
from the semantic interplay between these input subsets and global prototypes.
Extensive experiments on four real-world datasets demonstrate that our method
achieves superior performance over state-of-the-art baselines in salary
prediction while providing explainable insights into salary-influencing
patterns.
|
2503.15050 | Aolin Chen | Aolin Chen, Haojun Wu, Qi Xin, Steven P. Reiss, Jifeng Xuan | Studying and Understanding the Effectiveness and Failures of
Conversational LLM-Based Repair | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automated program repair (APR) is designed to automate the process of
bug-fixing. In recent years, thanks to the rapid development of large language
models (LLMs), automated repair has achieved remarkable progress. Advanced APR
techniques powered by conversational LLMs, most notably ChatGPT, have exhibited
impressive repair abilities and gained increasing popularity due to the
capabilities of the underlying LLMs in providing repair feedback and performing
iterative patch improvement. Despite this superiority, conversational APR
techniques still fail to repair a large number of bugs. For example, a
state-of-the-art conversational technique ChatRepair does not correctly repair
over half of the single-function bugs in the Defects4J dataset. To understand
the effectiveness and failures of conversational LLM-based repair and provide
possible directions for improvement, we studied the exemplary ChatRepair with a
focus on comparing the effectiveness of its cloze-style and full function
repair strategies, assessing its key iterative component for patch improvement,
and analyzing the repair failures. Our study has led to a series of findings,
which we believe provide key implications for future research.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 09:39:32 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 14:18:47 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Chen",
"Aolin",
""
],
[
"Wu",
"Haojun",
""
],
[
"Xin",
"Qi",
""
],
[
"Reiss",
"Steven P.",
""
],
[
"Xuan",
"Jifeng",
""
]
] | TITLE: Studying and Understanding the Effectiveness and Failures of
Conversational LLM-Based Repair
ABSTRACT: Automated program repair (APR) is designed to automate the process of
bug-fixing. In recent years, thanks to the rapid development of large language
models (LLMs), automated repair has achieved remarkable progress. Advanced APR
techniques powered by conversational LLMs, most notably ChatGPT, have exhibited
impressive repair abilities and gained increasing popularity due to the
capabilities of the underlying LLMs in providing repair feedback and performing
iterative patch improvement. Despite this superiority, conversational APR
techniques still fail to repair a large number of bugs. For example, a
state-of-the-art conversational technique ChatRepair does not correctly repair
over half of the single-function bugs in the Defects4J dataset. To understand
the effectiveness and failures of conversational LLM-based repair and provide
possible directions for improvement, we studied the exemplary ChatRepair with a
focus on comparing the effectiveness of its cloze-style and full function
repair strategies, assessing its key iterative component for patch improvement,
and analyzing the repair failures. Our study has led to a series of findings,
which we believe provide key implications for future research.
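To contrast the two repair strategies studied, the sketch below shows illustrative cloze-style and full-function prompts; the wording is hypothetical and not ChatRepair's actual templates.

```python
# Illustrative prompt builders for the two repair strategies compared
# in the study (hypothetical wording, not ChatRepair's templates).
def cloze_prompt(buggy_function: str, buggy_lines: str) -> str:
    # Cloze-style: mask only the suspected hunk and ask for an infill.
    masked = buggy_function.replace(buggy_lines, ">>> INFILL <<<")
    return (
        "The following function is buggy. Replace >>> INFILL <<< "
        f"with correct code:\n{masked}"
    )

def full_function_prompt(buggy_function: str, failing_test: str) -> str:
    # Full-function: regenerate the whole function given a failing test.
    return (
        "This function fails the test below. Rewrite the whole function.\n"
        f"Function:\n{buggy_function}\nFailing test:\n{failing_test}"
    )
```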
|
2503.22026 | SaiKiran Tedla | SaiKiran Tedla, Junyong Lee, Beixuan Yang, Mahmoud Afifi, Michael S.
Brown | Multispectral Demosaicing via Dual Cameras | null | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multispectral (MS) images capture detailed scene information across a wide
range of spectral bands, making them invaluable for applications requiring rich
spectral data. Integrating MS imaging into multi-camera devices, such as
smartphones, has the potential to enhance both spectral applications and RGB
image quality. A critical step in processing MS data is demosaicing, which
reconstructs color information from the mosaic MS images captured by the
camera. This paper proposes a method for MS image demosaicing specifically
designed for dual-camera setups where both RGB and MS cameras capture the same
scene. Our approach leverages co-captured RGB images, which typically have
higher spatial fidelity, to guide the demosaicing of lower-fidelity MS images.
We introduce the Dual-camera RGB-MS Dataset - a large collection of paired RGB
and MS mosaiced images with ground-truth demosaiced outputs - that enables
training and evaluation of our method. Experimental results demonstrate that
our method achieves state-of-the-art accuracy compared to existing techniques.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 22:40:55 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 00:18:02 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Tedla",
"SaiKiran",
""
],
[
"Lee",
"Junyong",
""
],
[
"Yang",
"Beixuan",
""
],
[
"Afifi",
"Mahmoud",
""
],
[
"Brown",
"Michael S.",
""
]
] | TITLE: Multispectral Demosaicing via Dual Cameras
ABSTRACT: Multispectral (MS) images capture detailed scene information across a wide
range of spectral bands, making them invaluable for applications requiring rich
spectral data. Integrating MS imaging into multi-camera devices, such as
smartphones, has the potential to enhance both spectral applications and RGB
image quality. A critical step in processing MS data is demosaicing, which
reconstructs color information from the mosaic MS images captured by the
camera. This paper proposes a method for MS image demosaicing specifically
designed for dual-camera setups where both RGB and MS cameras capture the same
scene. Our approach leverages co-captured RGB images, which typically have
higher spatial fidelity, to guide the demosaicing of lower-fidelity MS images.
We introduce the Dual-camera RGB-MS Dataset - a large collection of paired RGB
and MS mosaiced images with ground-truth demosaiced outputs - that enables
training and evaluation of our method. Experimental results demonstrate that
our method achieves state-of-the-art accuracy compared to existing techniques.
|
2503.22352 | Bar{\i}\c{s} Batuhan Topal | Bar{\i}\c{s} Batuhan Topal, Umut \"Ozyurt, Zafer Do\u{g}an Budak,
Ramazan Gokberk Cinbis | Meta-LoRA: Meta-Learning LoRA Components for Domain-Aware ID
Personalization | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in text-to-image generative models, particularly latent
diffusion models (LDMs), have demonstrated remarkable capabilities in
synthesizing high-quality images from textual prompts. However, achieving
identity personalization, ensuring that a model consistently generates
subject-specific outputs from limited reference images, remains a fundamental
challenge. To address this, we introduce Meta-Low-Rank Adaptation (Meta-LoRA),
a novel framework that leverages meta-learning to encode domain-specific priors
into LoRA-based identity personalization. Our method introduces a structured
three-layer LoRA architecture that separates identity-agnostic knowledge from
identity-specific adaptation. In the first stage, the LoRA Meta-Down layers are
meta-trained across multiple subjects, learning a shared manifold that captures
general identity-related features. In the second stage, only the LoRA-Mid and
LoRA-Up layers are optimized to specialize on a given subject, significantly
reducing adaptation time while improving identity fidelity. To evaluate our
approach, we introduce Meta-PHD, a new benchmark dataset for identity
personalization, and compare Meta-LoRA against state-of-the-art methods. Our
results demonstrate that Meta-LoRA achieves superior identity retention,
computational efficiency, and adaptability across diverse identity conditions.
Our code, model weights, and dataset are released on
barisbatuhan.github.io/Meta-LoRA.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 11:47:33 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 07:33:11 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Topal",
"Barış Batuhan",
""
],
[
"Özyurt",
"Umut",
""
],
[
"Budak",
"Zafer Doğan",
""
],
[
"Cinbis",
"Ramazan Gokberk",
""
]
] | TITLE: Meta-LoRA: Meta-Learning LoRA Components for Domain-Aware ID
Personalization
ABSTRACT: Recent advancements in text-to-image generative models, particularly latent
diffusion models (LDMs), have demonstrated remarkable capabilities in
synthesizing high-quality images from textual prompts. However, achieving
identity personalization, ensuring that a model consistently generates
subject-specific outputs from limited reference images, remains a fundamental
challenge. To address this, we introduce Meta-Low-Rank Adaptation (Meta-LoRA),
a novel framework that leverages meta-learning to encode domain-specific priors
into LoRA-based identity personalization. Our method introduces a structured
three-layer LoRA architecture that separates identity-agnostic knowledge from
identity-specific adaptation. In the first stage, the LoRA Meta-Down layers are
meta-trained across multiple subjects, learning a shared manifold that captures
general identity-related features. In the second stage, only the LoRA-Mid and
LoRA-Up layers are optimized to specialize on a given subject, significantly
reducing adaptation time while improving identity fidelity. To evaluate our
approach, we introduce Meta-PHD, a new benchmark dataset for identity
personalization, and compare Meta-LoRA against state-of-the-art methods. Our
results demonstrate that Meta-LoRA achieves superior identity retention,
computational efficiency, and adaptability across diverse identity conditions.
Our code, model weights, and dataset are released on
barisbatuhan.github.io/Meta-LoRA.
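A minimal sketch of the three-factor decomposition described above: a shared, meta-trained Meta-Down projection with per-subject Mid and Up factors; dimensions, ranks, and initialization are assumptions, not the released architecture.

```python
# Three-layer LoRA as described in the abstract: Meta-Down is meta-trained
# across subjects (stage 1), then frozen while Mid/Up specialize (stage 2).
import torch
import torch.nn as nn

class MetaLoRALinear(nn.Module):
    def __init__(self, dim=768, r_down=64, r_mid=16):
        super().__init__()
        self.base = nn.Linear(dim, dim)                      # frozen pretrained weight
        self.meta_down = nn.Linear(dim, r_down, bias=False)  # stage-1, shared
        self.mid = nn.Linear(r_down, r_mid, bias=False)      # stage-2, per-identity
        self.up = nn.Linear(r_mid, dim, bias=False)          # stage-2, per-identity
        nn.init.zeros_(self.up.weight)                       # LoRA delta starts at zero
        for p in self.base.parameters():
            p.requires_grad = False

    def freeze_meta(self):
        """Call after meta-training, before per-subject adaptation."""
        for p in self.meta_down.parameters():
            p.requires_grad = False

    def forward(self, x):
        return self.base(x) + self.up(self.mid(self.meta_down(x)))
```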
|
2504.00513 | Asma Yamani | Asma Yamani, Malak Baslyman, Moataz Ahmed | Leveraging LLMs for User Stories in AI Systems: UStAI Dataset | null | null | null | null | cs.SE cs.AI | http://creativecommons.org/licenses/by/4.0/ | AI systems are gaining widespread adoption across various sectors and
domains. Creating high-quality AI system requirements is crucial for aligning
the AI system with business goals and consumer values and for social
responsibility. However, with the uncertain nature of AI systems and the heavy
reliance on sensitive data, more research is needed to address the elicitation
and analysis of AI systems requirements. With the proprietary nature of many AI
systems, there is a lack of open-source requirements artifacts and technical
requirements documents for AI systems, limiting broader research and
investigation. With Large Language Models (LLMs) emerging as a promising
alternative to human-generated text, this paper investigates the potential use
of LLMs to generate user stories for AI systems based on abstracts from
scholarly papers. We conducted an empirical evaluation using three LLMs and
generated $1260$ user stories from $42$ abstracts from $26$ domains. We assess
their quality using the Quality User Story (QUS) framework. Moreover, we
identify relevant non-functional requirements (NFRs) and ethical principles.
Our analysis demonstrates that the investigated LLMs can generate user stories
inspired by the needs of various stakeholders, offering a promising approach
for generating user stories for research purposes and for aiding in the early
requirements elicitation phase of AI systems. We have compiled and curated a
collection of stories generated by various LLMs into a dataset (UStAI), which
is now publicly available for use.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 08:03:40 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Yamani",
"Asma",
""
],
[
"Baslyman",
"Malak",
""
],
[
"Ahmed",
"Moataz",
""
]
] | TITLE: Leveraging LLMs for User Stories in AI Systems: UStAI Dataset
ABSTRACT: AI systems are gaining widespread adoption across various sectors and
domains. Creating high-quality AI system requirements is crucial for aligning
the AI system with business goals and consumer values and for social
responsibility. However, with the uncertain nature of AI systems and the heavy
reliance on sensitive data, more research is needed to address the elicitation
and analysis of AI systems requirements. With the proprietary nature of many AI
systems, there is a lack of open-source requirements artifacts and technical
requirements documents for AI systems, limiting broader research and
investigation. With Large Language Models (LLMs) emerging as a promising
alternative to human-generated text, this paper investigates the potential use
of LLMs to generate user stories for AI systems based on abstracts from
scholarly papers. We conducted an empirical evaluation using three LLMs and
generated $1260$ user stories from $42$ abstracts from $26$ domains. We assess
their quality using the Quality User Story (QUS) framework. Moreover, we
identify relevant non-functional requirements (NFRs) and ethical principles.
Our analysis demonstrates that the investigated LLMs can generate user stories
inspired by the needs of various stakeholders, offering a promising approach
for generating user stories for research purposes and for aiding in the early
requirements elicitation phase of AI systems. We have compiled and curated a
collection of stories generated by various LLMs into a dataset (UStAI), which
is now publicly available for use.
|
2504.00825 | Mohamed Benzaghta | Mohamed Benzaghta, Giovanni Geraci, David L\'opez-P\'erez, and Alvaro
Valcarce | Data-driven Optimization and Transfer Learning for Cellular Network
Antenna Configurations | null | null | null | null | cs.IT cs.NI eess.SP math.IT | http://creativecommons.org/licenses/by/4.0/ | We propose a data-driven approach for large-scale cellular network
optimization, using a production cellular network in London as a case study and
employing Sionna ray tracing for site-specific channel propagation modeling. We
optimize base station antenna tilts and half-power beamwidths, resulting in
more than double the 10\%-worst user rates compared to a 3GPP baseline. In
scenarios involving aerial users, we identify configurations that increase
their median rates fivefold without compromising ground user performance. We
further demonstrate the efficacy of model generalization through transfer
learning, leveraging available data from a scenario source to predict the
optimal solution for a scenario target within a similar number of iterations,
without requiring a new initial dataset, and with a negligible performance
loss.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 14:13:33 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Benzaghta",
"Mohamed",
""
],
[
"Geraci",
"Giovanni",
""
],
[
"López-Pérez",
"David",
""
],
[
"Valcarce",
"Alvaro",
""
]
] | TITLE: Data-driven Optimization and Transfer Learning for Cellular Network
Antenna Configurations
ABSTRACT: We propose a data-driven approach for large-scale cellular network
optimization, using a production cellular network in London as a case study and
employing Sionna ray tracing for site-specific channel propagation modeling. We
optimize base station antenna tilts and half-power beamwidths, resulting in
more than double the 10\%-worst user rates compared to a 3GPP baseline. In
scenarios involving aerial users, we identify configurations that increase
their median rates fivefold without compromising ground user performance. We
further demonstrate the efficacy of model generalization through transfer
learning, leveraging available data from a scenario source to predict the
optimal solution for a scenario target within a similar number of iterations,
without requiring a new initial dataset, and with a negligible performance
loss.
|
2504.00859 | Mahan Rafidashti | Mahan Rafidashti, Ji Lan, Maryam Fatemi, Junsheng Fu, Lars
Hammarstrand, Lennart Svensson | NeuRadar: Neural Radiance Fields for Automotive Radar Point Clouds | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Radar is an important sensor for autonomous driving (AD) systems due to its
robustness to adverse weather and different lighting conditions. Novel view
synthesis using neural radiance fields (NeRFs) has recently received
considerable attention in AD due to its potential to enable efficient testing
and validation but remains unexplored for radar point clouds. In this paper, we
present NeuRadar, a NeRF-based model that jointly generates radar point clouds,
camera images, and lidar point clouds. We explore set-based object detection
methods such as DETR, and propose an encoder-based solution grounded in the
NeRF geometry for improved generalizability. We propose both a deterministic
and a probabilistic point cloud representation to accurately model the radar
behavior, with the latter being able to capture radar's stochastic behavior. We
achieve realistic reconstruction results for two automotive datasets,
establishing a baseline for NeRF-based radar point cloud simulation models. In
addition, we release radar data for ZOD's Sequences and Drives to enable
further research in this field. To encourage further development of radar
NeRFs, we release the source code for NeuRadar.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 14:50:19 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 12:30:13 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Rafidashti",
"Mahan",
""
],
[
"Lan",
"Ji",
""
],
[
"Fatemi",
"Maryam",
""
],
[
"Fu",
"Junsheng",
""
],
[
"Hammarstrand",
"Lars",
""
],
[
"Svensson",
"Lennart",
""
]
] | TITLE: NeuRadar: Neural Radiance Fields for Automotive Radar Point Clouds
ABSTRACT: Radar is an important sensor for autonomous driving (AD) systems due to its
robustness to adverse weather and different lighting conditions. Novel view
synthesis using neural radiance fields (NeRFs) has recently received
considerable attention in AD due to its potential to enable efficient testing
and validation but remains unexplored for radar point clouds. In this paper, we
present NeuRadar, a NeRF-based model that jointly generates radar point clouds,
camera images, and lidar point clouds. We explore set-based object detection
methods such as DETR, and propose an encoder-based solution grounded in the
NeRF geometry for improved generalizability. We propose both a deterministic
and a probabilistic point cloud representation to accurately model the radar
behavior, with the latter being able to capture radar's stochastic behavior. We
achieve realistic reconstruction results for two automotive datasets,
establishing a baseline for NeRF-based radar point cloud simulation models. In
addition, we release radar data for ZOD's Sequences and Drives to enable
further research in this field. To encourage further development of radar
NeRFs, we release the source code for NeuRadar.
|
2504.01466 | Kaiwei Zhang | Kaiwei Zhang, Dandan Zhu, Xiongkuo Min, Guangtao Zhai | Mesh Mamba: A Unified State Space Model for Saliency Prediction in
Non-Textured and Textured Meshes | to be published in CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Mesh saliency enhances the adaptability of 3D vision by identifying and
emphasizing regions that naturally attract visual attention. To investigate the
interaction between geometric structure and texture in shaping visual
attention, we establish a comprehensive mesh saliency dataset, which is the
first to systematically capture the differences in saliency distribution under
both textured and non-textured visual conditions. Furthermore, we introduce
Mesh Mamba, a unified saliency prediction model based on a state space model
(SSM), designed to adapt across various mesh types. Mesh Mamba effectively
analyzes the geometric structure of the mesh while seamlessly incorporating
texture features into the topological framework, ensuring coherence throughout
appearance-enhanced modeling. More importantly, by subgraph embedding and a
bidirectional SSM, the model enables global context modeling for both local
geometry and texture, preserving the topological structure and improving the
understanding of visual details and structural complexity. Through extensive
theoretical and empirical validation, our model not only improves performance
across various mesh types but also demonstrates high scalability and
versatility, particularly through cross validations of various visual features.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 08:22:25 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 08:35:39 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Zhang",
"Kaiwei",
""
],
[
"Zhu",
"Dandan",
""
],
[
"Min",
"Xiongkuo",
""
],
[
"Zhai",
"Guangtao",
""
]
] | TITLE: Mesh Mamba: A Unified State Space Model for Saliency Prediction in
Non-Textured and Textured Meshes
ABSTRACT: Mesh saliency enhances the adaptability of 3D vision by identifying and
emphasizing regions that naturally attract visual attention. To investigate the
interaction between geometric structure and texture in shaping visual
attention, we establish a comprehensive mesh saliency dataset, which is the
first to systematically capture the differences in saliency distribution under
both textured and non-textured visual conditions. Furthermore, we introduce
Mesh Mamba, a unified saliency prediction model based on a state space model
(SSM), designed to adapt across various mesh types. Mesh Mamba effectively
analyzes the geometric structure of the mesh while seamlessly incorporating
texture features into the topological framework, ensuring coherence throughout
appearance-enhanced modeling. More importantly, by subgraph embedding and a
bidirectional SSM, the model enables global context modeling for both local
geometry and texture, preserving the topological structure and improving the
understanding of visual details and structural complexity. Through extensive
theoretical and empirical validation, our model not only improves performance
across various mesh types but also demonstrates high scalability and
versatility, particularly through cross validations of various visual features.
|
2504.01732 | Ulas Gunes | Ulas Gunes, Matias Turkulainen, Xuqian Ren, Arno Solin, Juho Kannala,
Esa Rahtu | FIORD: A Fisheye Indoor-Outdoor Dataset with LIDAR Ground Truth for 3D
Scene Reconstruction and Benchmarking | SCIA 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The development of large-scale 3D scene reconstruction and novel view
synthesis methods mostly rely on datasets comprising perspective images with
narrow fields of view (FoV). While effective for small-scale scenes, these
datasets require large image sets and extensive structure-from-motion (SfM)
processing, limiting scalability. To address this, we introduce a fisheye image
dataset tailored for scene reconstruction tasks. Using dual 200-degree fisheye
lenses, our dataset provides full 360-degree coverage of 5 indoor and 5 outdoor
scenes. Each scene has sparse SfM point clouds and precise LIDAR-derived dense
point clouds that can be used as geometric ground-truth, enabling robust
benchmarking under challenging conditions such as occlusions and reflections.
While the baseline experiments focus on vanilla Gaussian Splatting and NeRF
based Nerfacto methods, the dataset supports diverse approaches for scene
reconstruction, novel view synthesis, and image-based rendering.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 13:41:23 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 13:59:22 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Gunes",
"Ulas",
""
],
[
"Turkulainen",
"Matias",
""
],
[
"Ren",
"Xuqian",
""
],
[
"Solin",
"Arno",
""
],
[
"Kannala",
"Juho",
""
],
[
"Rahtu",
"Esa",
""
]
] | TITLE: FIORD: A Fisheye Indoor-Outdoor Dataset with LIDAR Ground Truth for 3D
Scene Reconstruction and Benchmarking
ABSTRACT: The development of large-scale 3D scene reconstruction and novel view
synthesis methods mostly rely on datasets comprising perspective images with
narrow fields of view (FoV). While effective for small-scale scenes, these
datasets require large image sets and extensive structure-from-motion (SfM)
processing, limiting scalability. To address this, we introduce a fisheye image
dataset tailored for scene reconstruction tasks. Using dual 200-degree fisheye
lenses, our dataset provides full 360-degree coverage of 5 indoor and 5 outdoor
scenes. Each scene has sparse SfM point clouds and precise LIDAR-derived dense
point clouds that can be used as geometric ground-truth, enabling robust
benchmarking under challenging conditions such as occlusions and reflections.
While the baseline experiments focus on vanilla Gaussian Splatting and NeRF
based Nerfacto methods, the dataset supports diverse approaches for scene
reconstruction, novel view synthesis, and image-based rendering.
|
2504.02407 | Ruitong Xiao | Xiaohui Sun, Ruitong Xiao, Jianye Mo, Bowen Wu, Qun Yu, Baoxun Wang | F5R-TTS: Improving Flow-Matching based Text-to-Speech with Group
Relative Policy Optimization | null | null | null | null | cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present F5R-TTS, a novel text-to-speech (TTS) system that integrates
Group Relative Policy Optimization (GRPO) into a flow-matching based
architecture. By reformulating the deterministic outputs of flow-matching TTS
into probabilistic Gaussian distributions, our approach enables seamless
integration of reinforcement learning algorithms. During pretraining, we train
a probabilistically reformulated flow-matching based model which is derived
from F5-TTS with an open-source dataset. In the subsequent reinforcement
learning (RL) phase, we employ a GRPO-driven enhancement stage that leverages
dual reward metrics: word error rate (WER) computed via automatic speech
recognition and speaker similarity (SIM) assessed by verification models.
Experimental results on zero-shot voice cloning demonstrate that F5R-TTS
achieves significant improvements in both speech intelligibility (a 29.5%
relative reduction in WER) and speaker similarity (a 4.6% relative increase in
SIM score) compared to conventional flow-matching based TTS systems. Audio
samples are available at https://frontierlabs.github.io/F5R.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 08:57:15 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 02:53:57 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Sun",
"Xiaohui",
""
],
[
"Xiao",
"Ruitong",
""
],
[
"Mo",
"Jianye",
""
],
[
"Wu",
"Bowen",
""
],
[
"Yu",
"Qun",
""
],
[
"Wang",
"Baoxun",
""
]
] | TITLE: F5R-TTS: Improving Flow-Matching based Text-to-Speech with Group
Relative Policy Optimization
ABSTRACT: We present F5R-TTS, a novel text-to-speech (TTS) system that integrates
Group Relative Policy Optimization (GRPO) into a flow-matching based
architecture. By reformulating the deterministic outputs of flow-matching TTS
into probabilistic Gaussian distributions, our approach enables seamless
integration of reinforcement learning algorithms. During pretraining, we train
a probabilistically reformulated flow-matching based model which is derived
from F5-TTS with an open-source dataset. In the subsequent reinforcement
learning (RL) phase, we employ a GRPO-driven enhancement stage that leverages
dual reward metrics: word error rate (WER) computed via automatic speech
recognition and speaker similarity (SIM) assessed by verification models.
Experimental results on zero-shot voice cloning demonstrate that F5R-TTS
achieves significant improvements in both speech intelligibility (a 29.5%
relative reduction in WER) and speaker similarity (a 4.6% relative increase in
SIM score) compared to conventional flow-matching based TTS systems. Audio
samples are available at https://frontierlabs.github.io/F5R.
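The dual-reward design can be sketched as follows, with hypothetical wer() and sim() scorers and an assumed linear weighting; GRPO then standardizes rewards within each group of sampled outputs.

```python
# Dual reward for the RL stage: lower WER and higher speaker similarity
# both raise the reward (weights and scorer interfaces are assumptions).
def tts_reward(generated_audio, reference_text, reference_speaker,
               wer, sim, w_wer=1.0, w_sim=1.0):
    return (-w_wer * wer(generated_audio, reference_text)
            + w_sim * sim(generated_audio, reference_speaker))

def group_advantages(rewards):
    """GRPO-style normalization within one group of sampled outputs."""
    mu = sum(rewards) / len(rewards)
    var = sum((r - mu) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5 or 1.0  # guard against zero variance
    return [(r - mu) / std for r in rewards]
```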
|
2504.03043 | Joel Sol | Joel Sol, Shadi Alijani, Homayoun Najjaran | Sliced Wasserstein Discrepancy in Disentangling Representation and
Adaptation Networks for Unsupervised Domain Adaptation | 6 pages, 3 figures, submitted to IEEE conference | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This paper introduces DRANet-SWD as a novel complete pipeline for
disentangling content and style representations of images for unsupervised
domain adaptation (UDA). The approach builds upon DRANet by incorporating the
sliced Wasserstein discrepancy (SWD) as a style loss instead of the traditional
Gram matrix loss. The potential advantages of SWD over the Gram matrix loss for
capturing style variations in domain adaptation are investigated. Experiments
using digit classification datasets and driving scenario segmentation validate
the method, demonstrating that DRANet-SWD enhances performance. Results
indicate that SWD provides a more robust statistical comparison of feature
distributions, leading to better style adaptation. These findings highlight the
effectiveness of SWD in refining feature alignment and improving domain
adaptation tasks across these benchmarks. Our code can be found here.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 21:43:47 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 05:25:42 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Sol",
"Joel",
""
],
[
"Alijani",
"Shadi",
""
],
[
"Najjaran",
"Homayoun",
""
]
] | TITLE: Sliced Wasserstein Discrepancy in Disentangling Representation and
Adaptation Networks for Unsupervised Domain Adaptation
ABSTRACT: This paper introduces DRANet-SWD as a novel complete pipeline for
disentangling content and style representations of images for unsupervised
domain adaptation (UDA). The approach builds upon DRANet by incorporating the
sliced Wasserstein discrepancy (SWD) as a style loss instead of the traditional
Gram matrix loss. The potential advantages of SWD over the Gram matrix loss for
capturing style variations in domain adaptation are investigated. Experiments
using digit classification datasets and driving scenario segmentation validate
the method, demonstrating that DRANet-SWD enhances performance. Results
indicate that SWD provides a more robust statistical comparison of feature
distributions, leading to better style adaptation. These findings highlight the
effectiveness of SWD in refining feature alignment and improving domain
adaptation tasks across these benchmarks. Our code can be found here.
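
Since the abstract contrasts SWD with a Gram-matrix style loss, a minimal sketch of a sliced Wasserstein discrepancy between two feature batches may clarify the idea: project features onto random unit directions, sort the resulting 1-D projections, and compare them. The projection count, squared-difference reduction, and the assumption of equal-sized batches are illustrative choices, not DRANet-SWD's exact loss.

```python
# Minimal SWD sketch, assuming two equal-sized (N, D) feature batches.
import torch

def sliced_wasserstein(x: torch.Tensor, y: torch.Tensor, n_proj: int = 128) -> torch.Tensor:
    theta = torch.randn(x.size(1), n_proj, device=x.device)
    theta = theta / theta.norm(dim=0, keepdim=True)   # random unit directions
    px = (x @ theta).sort(dim=0).values               # sorted 1-D projections
    py = (y @ theta).sort(dim=0).values
    return ((px - py) ** 2).mean()                    # average over slices

# Usage: style loss between feature batches from two domains.
style_loss = sliced_wasserstein(torch.randn(64, 256), torch.randn(64, 256))
```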
|
2504.03133 | Zahid Hassan Tushar | Zahid Hassan Tushar, Adeleke Ademakinwa, Jianwu Wang, Zhibo Zhang,
Sanjay Purushotham | Joint Retrieval of Cloud properties using Attention-based Deep Learning
Models | 6 Pages, 4 figures, to be published in 2025 IEEE International
Geoscience and Remote Sensing Symposium (IGARSS 2025) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate cloud property retrieval is vital for understanding cloud behavior
and its impact on climate, including applications in weather forecasting,
climate modeling, and estimating Earth's radiation balance. The Independent
Pixel Approximation (IPA), a widely used physics-based approach, simplifies
radiative transfer calculations by assuming each pixel is independent of its
neighbors. While computationally efficient, IPA has significant limitations,
such as inaccuracies from 3D radiative effects, errors at cloud edges, and
ineffectiveness for overlapping or heterogeneous cloud fields. Recent
AI/ML-based deep learning models have improved retrieval accuracy by leveraging
spatial relationships across pixels. However, these models are often
memory-intensive, retrieve only a single cloud property, or struggle with joint
property retrievals. To overcome these challenges, we introduce CloudUNet with
Attention Module (CAM), a compact UNet-based model that employs attention
mechanisms to reduce errors in thick, overlapping cloud regions and a
specialized loss function for joint retrieval of Cloud Optical Thickness (COT)
and Cloud Effective Radius (CER). Experiments on a Large Eddy Simulation (LES)
dataset show that our CAM model outperforms state-of-the-art deep learning
methods, reducing mean absolute errors (MAE) by 34% for COT and 42% for CER,
and achieving 76% and 86% lower MAE for COT and CER retrievals compared to the
IPA method.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 03:01:19 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 13:19:52 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Tushar",
"Zahid Hassan",
""
],
[
"Ademakinwa",
"Adeleke",
""
],
[
"Wang",
"Jianwu",
""
],
[
"Zhang",
"Zhibo",
""
],
[
"Purushotham",
"Sanjay",
""
]
] | TITLE: Joint Retrieval of Cloud properties using Attention-based Deep Learning
Models
ABSTRACT: Accurate cloud property retrieval is vital for understanding cloud behavior
and its impact on climate, including applications in weather forecasting,
climate modeling, and estimating Earth's radiation balance. The Independent
Pixel Approximation (IPA), a widely used physics-based approach, simplifies
radiative transfer calculations by assuming each pixel is independent of its
neighbors. While computationally efficient, IPA has significant limitations,
such as inaccuracies from 3D radiative effects, errors at cloud edges, and
ineffectiveness for overlapping or heterogeneous cloud fields. Recent
AI/ML-based deep learning models have improved retrieval accuracy by leveraging
spatial relationships across pixels. However, these models are often
memory-intensive, retrieve only a single cloud property, or struggle with joint
property retrievals. To overcome these challenges, we introduce CloudUNet with
Attention Module (CAM), a compact UNet-based model that employs attention
mechanisms to reduce errors in thick, overlapping cloud regions and a
specialized loss function for joint retrieval of Cloud Optical Thickness (COT)
and Cloud Effective Radius (CER). Experiments on a Large Eddy Simulation (LES)
dataset show that our CAM model outperforms state-of-the-art deep learning
methods, reducing mean absolute errors (MAE) by 34% for COT and 42% for CER,
and achieving 76% and 86% lower MAE for COT and CER retrievals compared to the
IPA method.
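
The "specialized loss function for joint retrieval" is not spelled out in the abstract; the sketch below shows one plausible form, a weighted per-property L1 objective over a two-channel prediction. The channel layout, weights, and L1 choice are assumptions, not the paper's loss.

```python
# Hypothetical joint COT/CER retrieval loss; layout and weights are assumed.
import torch
import torch.nn.functional as F

def joint_retrieval_loss(pred: torch.Tensor, target: torch.Tensor,
                         w_cot: float = 1.0, w_cer: float = 1.0) -> torch.Tensor:
    # pred, target: (B, 2, H, W) with channel 0 = COT, channel 1 = CER.
    loss_cot = F.l1_loss(pred[:, 0], target[:, 0])
    loss_cer = F.l1_loss(pred[:, 1], target[:, 1])
    return w_cot * loss_cot + w_cer * loss_cer
```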
|
2504.03770 | Shenzhe Zhu | Yi Nian, Shenzhe Zhu, Yuehan Qin, Li Li, Ziyi Wang, Chaowei Xiao, Yue
Zhao | JailDAM: Jailbreak Detection with Adaptive Memory for Vision-Language
Model | null | null | null | null | cs.CR cs.AI | http://creativecommons.org/licenses/by/4.0/ | Multimodal large language models (MLLMs) excel in vision-language tasks but
also pose significant risks of generating harmful content, particularly through
jailbreak attacks. Jailbreak attacks refer to intentional manipulations that
bypass safety mechanisms in models, leading to the generation of inappropriate
or unsafe content. Detecting such attacks is critical to ensuring the
responsible deployment of MLLMs. Existing jailbreak detection methods face
three primary challenges: (1) Many rely on model hidden states or gradients,
limiting their applicability to white-box models, where the internal workings
of the model are accessible; (2) They involve high computational overhead from
uncertainty-based analysis, which limits real-time detection; and (3) They
require fully labeled harmful datasets, which are often scarce in real-world
settings. To address these issues, we introduce a test-time adaptive framework
called JAILDAM. Our method leverages a memory-based approach guided by
policy-driven unsafe knowledge representations, eliminating the need for
explicit exposure to harmful data. By dynamically updating unsafe knowledge
during test time, our framework improves generalization to unseen jailbreak
strategies while maintaining efficiency. Experiments on multiple VLM jailbreak
benchmarks demonstrate that JAILDAM delivers state-of-the-art performance in
harmful content detection, improving both accuracy and speed.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 05:00:28 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 20:25:30 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Nian",
"Yi",
""
],
[
"Zhu",
"Shenzhe",
""
],
[
"Qin",
"Yuehan",
""
],
[
"Li",
"Li",
""
],
[
"Wang",
"Ziyi",
""
],
[
"Xiao",
"Chaowei",
""
],
[
"Zhao",
"Yue",
""
]
] | TITLE: JailDAM: Jailbreak Detection with Adaptive Memory for Vision-Language
Model
ABSTRACT: Multimodal large language models (MLLMs) excel in vision-language tasks but
also pose significant risks of generating harmful content, particularly through
jailbreak attacks. Jailbreak attacks refer to intentional manipulations that
bypass safety mechanisms in models, leading to the generation of inappropriate
or unsafe content. Detecting such attacks is critical to ensuring the
responsible deployment of MLLMs. Existing jailbreak detection methods face
three primary challenges: (1) Many rely on model hidden states or gradients,
limiting their applicability to white-box models, where the internal workings
of the model are accessible; (2) They involve high computational overhead from
uncertainty-based analysis, which limits real-time detection; and (3) They
require fully labeled harmful datasets, which are often scarce in real-world
settings. To address these issues, we introduce a test-time adaptive framework
called JAILDAM. Our method leverages a memory-based approach guided by
policy-driven unsafe knowledge representations, eliminating the need for
explicit exposure to harmful data. By dynamically updating unsafe knowledge
during test time, our framework improves generalization to unseen jailbreak
strategies while maintaining efficiency. Experiments on multiple VLM jailbreak
benchmarks demonstrate that JAILDAM delivers state-of-the-art performance in
harmful content detection, improving both accuracy and speed.
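
In the spirit of the adaptive-memory idea described above, the sketch below scores an input embedding against a bank of unsafe-concept embeddings and nudges the closest slot at test time. The cosine-similarity scoring rule, threshold, and update rate are assumptions, not JAILDAM's exact procedure.

```python
# Illustrative memory-based detector with a test-time update (assumed design).
import numpy as np

class AdaptiveMemoryDetector:
    def __init__(self, unsafe_concepts: np.ndarray, threshold: float = 0.75, lr: float = 0.05):
        # unsafe_concepts: (K, D) unit-norm embeddings of unsafe concepts.
        self.memory = unsafe_concepts
        self.threshold = threshold
        self.lr = lr

    def score(self, emb: np.ndarray) -> float:
        # Max cosine similarity between the input and the memory slots.
        sims = self.memory @ emb / (np.linalg.norm(emb) + 1e-8)
        return float(sims.max())

    def detect_and_update(self, emb: np.ndarray) -> bool:
        flagged = self.score(emb) >= self.threshold
        if flagged:  # move the closest slot toward the new example
            k = int(np.argmax(self.memory @ emb))
            slot = (1 - self.lr) * self.memory[k] + self.lr * emb / np.linalg.norm(emb)
            self.memory[k] = slot / np.linalg.norm(slot)
        return flagged
```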
|
2504.03784 | Kai Ye | Kai Ye, Hongyi Zhou, Jin Zhu, Francesco Quinzan, Chengchung Shi | Robust Reinforcement Learning from Human Feedback for Large Language
Models Fine-Tuning | null | null | null | null | stat.ML cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement learning from human feedback (RLHF) has emerged as a key
technique for aligning the output of large language models (LLMs) with human
preferences. To learn the reward function, most existing RLHF algorithms use
the Bradley-Terry model, which relies on assumptions about human preferences
that may not reflect the complexity and variability of real-world judgments. In
this paper, we propose a robust algorithm to enhance the performance of
existing approaches under such reward model misspecifications. Theoretically,
our algorithm reduces the variance of reward and policy estimators, leading to
improved regret bounds. Empirical evaluations on LLM benchmark datasets
demonstrate that the proposed algorithm consistently outperforms existing
methods, with 77-81% of responses being favored over baselines on the Anthropic
Helpful and Harmless dataset.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 16:16:35 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Apr 2025 03:41:09 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Ye",
"Kai",
""
],
[
"Zhou",
"Hongyi",
""
],
[
"Zhu",
"Jin",
""
],
[
"Quinzan",
"Francesco",
""
],
[
"Shi",
"Chengchung",
""
]
] | TITLE: Robust Reinforcement Learning from Human Feedback for Large Language
Models Fine-Tuning
ABSTRACT: Reinforcement learning from human feedback (RLHF) has emerged as a key
technique for aligning the output of large language models (LLMs) with human
preferences. To learn the reward function, most existing RLHF algorithms use
the Bradley-Terry model, which relies on assumptions about human preferences
that may not reflect the complexity and variability of real-world judgments. In
this paper, we propose a robust algorithm to enhance the performance of
existing approaches under such reward model misspecifications. Theoretically,
our algorithm reduces the variance of reward and policy estimators, leading to
improved regret bounds. Empirical evaluations on LLM benchmark datasets
demonstrate that the proposed algorithm consistently outperforms existing
methods, with 77-81% of responses being favored over baselines on the Anthropic
Helpful and Harmless dataset.
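
Since the abstract frames the contribution against the Bradley-Terry model, a short sketch of the standard Bradley-Terry preference loss that most RLHF reward models optimize may help; the paper's robust estimator modifies this baseline and is not reproduced here.

```python
# Standard Bradley-Terry negative log-likelihood over preference pairs.
import torch
import torch.nn.functional as F

def bradley_terry_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # P(chosen > rejected) = sigmoid(r_chosen - r_rejected).
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Usage with a reward model's scalar outputs on paired responses:
loss = bradley_terry_loss(torch.tensor([1.3, 0.2]), torch.tensor([0.4, 0.5]))
```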
|
2504.04079 | Ashwin Vinod | Ashwin Vinod, Chandrajit Bajaj | Scalable Robust Bayesian Co-Clustering with Compositional ELBOs | null | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Co-clustering exploits the duality of instances and features to
simultaneously uncover meaningful groups in both dimensions, often
outperforming traditional clustering in high-dimensional or sparse data
settings. Although recent deep learning approaches successfully integrate
feature learning and cluster assignment, they remain susceptible to noise and
can suffer from posterior collapse within standard autoencoders. In this paper,
we present the first fully variational Co-clustering framework that directly
learns row and column clusters in the latent space, leveraging a doubly
reparameterized ELBO to improve gradient signal-to-noise separation. Our
unsupervised model integrates a Variational Deep Embedding with a Gaussian
Mixture Model (GMM) prior for both instances and features, providing a built-in
clustering mechanism that naturally aligns latent modes with row and column
clusters. Furthermore, our regularized end-to-end noise learning Compositional
ELBO architecture jointly reconstructs the data while regularizing against
noise through the KL divergence, thus gracefully handling corrupted or missing
inputs in a single training pipeline. To counteract posterior collapse, we
introduce a scale modification that increases the encoder's latent means only
in the reconstruction pathway, preserving richer latent representations without
inflating the KL term. Finally, a mutual information-based cross-loss ensures
coherent co-clustering of rows and columns. Empirical results on diverse
real-world datasets from multiple modalities, numerical, textual, and
image-based, demonstrate that our method not only preserves the advantages of
prior Co-clustering approaches but also exceeds them in accuracy and
robustness, particularly in high-dimensional or noisy settings.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 06:48:05 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 18:02:36 GMT"
}
] | 2025-04-10T00:00:00 | [
[
"Vinod",
"Ashwin",
""
],
[
"Bajaj",
"Chandrajit",
""
]
] | TITLE: Scalable Robust Bayesian Co-Clustering with Compositional ELBOs
ABSTRACT: Co-clustering exploits the duality of instances and features to
simultaneously uncover meaningful groups in both dimensions, often
outperforming traditional clustering in high-dimensional or sparse data
settings. Although recent deep learning approaches successfully integrate
feature learning and cluster assignment, they remain susceptible to noise and
can suffer from posterior collapse within standard autoencoders. In this paper,
we present the first fully variational Co-clustering framework that directly
learns row and column clusters in the latent space, leveraging a doubly
reparameterized ELBO to improve gradient signal-to-noise separation. Our
unsupervised model integrates a Variational Deep Embedding with a Gaussian
Mixture Model (GMM) prior for both instances and features, providing a built-in
clustering mechanism that naturally aligns latent modes with row and column
clusters. Furthermore, our regularized end-to-end noise learning Compositional
ELBO architecture jointly reconstructs the data while regularizing against
noise through the KL divergence, thus gracefully handling corrupted or missing
inputs in a single training pipeline. To counteract posterior collapse, we
introduce a scale modification that increases the encoder's latent means only
in the reconstruction pathway, preserving richer latent representations without
inflating the KL term. Finally, a mutual information-based cross-loss ensures
coherent co-clustering of rows and columns. Empirical results on diverse
real-world datasets from multiple modalities, numerical, textual, and
image-based, demonstrate that our method not only preserves the advantages of
prior Co-clustering approaches but also exceeds them in accuracy and
robustness, particularly in high-dimensional or noisy settings.
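
As a minimal sketch of the building block behind a GMM-prior variational model of this kind, the code below evaluates one Monte Carlo ELBO term, E_q[log p(z) - log q(z|x)], for a Gaussian-mixture prior. Shapes, the component count, and the single-sample estimator are assumptions; the paper's doubly reparameterized ELBO and co-clustering losses are not reproduced.

```python
# One Monte Carlo ELBO term for a VAE with a Gaussian mixture prior (sketch).
import torch
import torch.distributions as D

def gmm_prior_elbo_term(z, mu_q, logvar_q, pi, mu_p, logvar_p):
    # z, mu_q, logvar_q: (B, D); pi: (K,); mu_p, logvar_p: (K, D).
    q = D.Normal(mu_q, (0.5 * logvar_q).exp())
    log_q = q.log_prob(z).sum(-1)                               # (B,)
    comp = D.Independent(D.Normal(mu_p, (0.5 * logvar_p).exp()), 1)
    mix = D.MixtureSameFamily(D.Categorical(probs=pi), comp)
    log_p = mix.log_prob(z)                                     # (B,)
    return (log_p - log_q).mean()

# Usage with illustrative shapes: batch 8, latent dim 16, 5 mixture components.
B, Dz, K = 8, 16, 5
term = gmm_prior_elbo_term(torch.randn(B, Dz), torch.randn(B, Dz), torch.zeros(B, Dz),
                           torch.full((K,), 1.0 / K), torch.randn(K, Dz), torch.zeros(K, Dz))
```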
|