id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2504.07209 | Rolf Van Der Hulst | Rolf van der Hulst, Matthias Walter | Implied Integrality in Mixed-Integer Optimization | 21 pages, 2 figures, IPCO 2025 journal version with proofs | null | null | null | cs.DM math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Implied-integer detection is a well-known presolving technique that is used
by many Mixed-Integer Linear Programming solvers. Informally, a variable is
said to be implied integer if its integrality is enforced implicitly by
integrality of other variables and the constraints of a problem. In this paper
we formalize the definition of implied integrality by taking a polyhedral
perspective. Our main result characterizes implied integrality as occurring
when a subset of integer variables is fixed to integer values and the
polyhedron on the remaining variables is integral. While integral polyhedra are
well-understood theoretically, existing detection methods infer implied
integrality only for one variable at a time. We introduce new detection methods
based on the recognition of integral polyhedra, extending existing techniques to
multiple variables. Additionally, we discuss the computational complexity of
recognizing implied integers. We conduct experiments using a new detection
method that uses totally unimodular submatrices to identify implied
integrality. For the MIPLIB 2017 collection dataset our results indicate that,
on average, 18.8% of the variables are classified as implied integer after
presolving, compared to just 3.3% identified by state-of-the-art techniques. We
are able to reduce the average percentage of variables whose integrality needs
to be enforced after presolving from 70.2% to 59.0%.
| [
{
"version": "v1",
"created": "Wed, 9 Apr 2025 18:36:22 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"van der Hulst",
"Rolf",
""
],
[
"Walter",
"Matthias",
""
]
] | TITLE: Implied Integrality in Mixed-Integer Optimization
ABSTRACT: Implied-integer detection is a well-known presolving technique that is used
by many Mixed-Integer Linear Programming solvers. Informally, a variable is
said to be implied integer if its integrality is enforced implicitly by
integrality of other variables and the constraints of a problem. In this paper
we formalize the definition of implied integrality by taking a polyhedral
perspective. Our main result characterizes implied integrality as occurring
when a subset of integer variables is fixed to integer values and the
polyhedron on the remaining variables is integral. While integral polyhedra are
well-understood theoretically, existing detection methods infer implied
integrality only for one variable at a time. We introduce new detection methods
based on the recognition of integral polyhedra, extending existing techniques to
multiple variables. Additionally, we discuss the computational complexity of
recognizing implied integers. We conduct experiments using a new detection
method that uses totally unimodular submatrices to identify implied
integrality. For the MIPLIB 2017 collection dataset our results indicate that,
on average, 18.8% of the variables are classified as implied integer after
presolving, compared to just 3.3% identified by state-of-the-art techniques. We
are able to reduce the average percentage of variables whose integrality needs
to be enforced after presolving from 70.2% to 59.0%.
|
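The detection method above exploits the fact that a system $\{x : Ax \le b\}$ with integral $b$ describes an integral polyhedron whenever $A$ is totally unimodular. Below is a brute-force Python check of total unimodularity; it is exponential and only illustrates the definition on small matrices, not the paper's detection method.

```python
import itertools
import numpy as np

def is_totally_unimodular(A):
    """Check that every square submatrix of A has determinant in {-1, 0, 1}.
    Exponential brute force -- only to illustrate the definition on small
    matrices, not the paper's TU-submatrix detection."""
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in itertools.combinations(range(m), k):
            for cols in itertools.combinations(range(n), k):
                d = round(np.linalg.det(A[np.ix_(rows, cols)]))
                if d not in (-1, 0, 1):
                    return False
    return True

# An interval (consecutive-ones) matrix, a classic TU example:
A = np.array([[1, 1, 0],
              [0, 1, 1],
              [0, 0, 1]])
print(is_totally_unimodular(A))  # True
```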
2504.07210 | Paul Borne--Pons | Paul Borne--Pons (Adobe Research), Mikolaj Czerkawski (Asterisk Labs),
Rosalie Martin (Adobe Research) and Romain Rouffet (Adobe Research) | MESA: Text-Driven Terrain Generation Using Latent Diffusion and Global
Copernicus Data | Accepted at CVPR 2025 Workshop MORSE | null | null | null | cs.GR cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Terrain modeling has traditionally relied on procedural techniques, which
often require extensive domain expertise and handcrafted rules. In this paper,
we present MESA, a novel data-centric alternative that trains a diffusion
model on global remote sensing data. This approach leverages large-scale
geospatial information to generate high-quality terrain samples from text
descriptions, showcasing a flexible and scalable solution for terrain
generation. The model's capabilities are demonstrated through extensive
experiments, highlighting its ability to generate realistic and diverse terrain
landscapes. The dataset produced to support this work, the Major TOM Core-DEM
extension dataset, is released openly as a comprehensive resource for global
terrain data. The results suggest that data-driven models, trained on remote
sensing data, can provide a powerful tool for realistic terrain modeling and
generation.
| [
{
"version": "v1",
"created": "Wed, 9 Apr 2025 18:37:24 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Borne--Pons",
"Paul",
"",
"Adobe Research"
],
[
"Czerkawski",
"Mikolaj",
"",
"Asterisk Labs"
],
[
"Martin",
"Rosalie",
"",
"Adobe Research"
],
[
"Rouffet",
"Romain",
"",
"Adobe Research"
]
] | TITLE: MESA: Text-Driven Terrain Generation Using Latent Diffusion and Global
Copernicus Data
ABSTRACT: Terrain modeling has traditionally relied on procedural techniques, which
often require extensive domain expertise and handcrafted rules. In this paper,
we present MESA, a novel data-centric alternative that trains a diffusion
model on global remote sensing data. This approach leverages large-scale
geospatial information to generate high-quality terrain samples from text
descriptions, showcasing a flexible and scalable solution for terrain
generation. The model's capabilities are demonstrated through extensive
experiments, highlighting its ability to generate realistic and diverse terrain
landscapes. The dataset produced to support this work, the Major TOM Core-DEM
extension dataset, is released openly as a comprehensive resource for global
terrain data. The results suggest that data-driven models, trained on remote
sensing data, can provide a powerful tool for realistic terrain modeling and
generation.
|
2504.07229 | Karan Singla | Lakshmipathi Balaji, Karan Singla | Visual-Aware Speech Recognition for Noisy Scenarios | null | null | null | null | cs.CL eess.AS eess.SP | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Humans have the ability to utilize visual cues, such as lip movements and
visual scenes, to enhance auditory perception, particularly in noisy
environments. However, current Automatic Speech Recognition (ASR) or
Audio-Visual Speech Recognition (AVSR) models often struggle in noisy
scenarios. To solve this task, we propose a model that improves transcription
by correlating noise sources to visual cues. Unlike works that rely on lip
motion and require the speaker's visibility, we exploit broader visual
information from the environment. This allows our model to naturally filter
speech from noise and improve transcription, much like humans do in noisy
scenarios. Our method re-purposes pretrained speech and visual encoders,
linking them with multi-headed attention. This approach enables the
transcription of speech and the prediction of noise labels in video inputs. We
introduce a scalable pipeline to develop audio-visual datasets, where visual
cues correlate to noise in the audio. We show significant improvements over
existing audio-only models in noisy scenarios. Results also highlight that
visual cues play a vital role in improved transcription accuracy.
| [
{
"version": "v1",
"created": "Wed, 9 Apr 2025 19:09:54 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Balaji",
"Lakshmipathi",
""
],
[
"Singla",
"Karan",
""
]
] | TITLE: Visual-Aware Speech Recognition for Noisy Scenarios
ABSTRACT: Humans have the ability to utilize visual cues, such as lip movements and
visual scenes, to enhance auditory perception, particularly in noisy
environments. However, current Automatic Speech Recognition (ASR) or
Audio-Visual Speech Recognition (AVSR) models often struggle in noisy
scenarios. To solve this task, we propose a model that improves transcription
by correlating noise sources to visual cues. Unlike works that rely on lip
motion and require the speaker's visibility, we exploit broader visual
information from the environment. This allows our model to naturally filter
speech from noise and improve transcription, much like humans do in noisy
scenarios. Our method re-purposes pretrained speech and visual encoders,
linking them with multi-headed attention. This approach enables the
transcription of speech and the prediction of noise labels in video inputs. We
introduce a scalable pipeline to develop audio-visual datasets, where visual
cues correlate to noise in the audio. We show significant improvements over
existing audio-only models in noisy scenarios. Results also highlight that
visual cues play a vital role in improved transcription accuracy.
|
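The abstract above links pretrained speech and visual encoders with multi-headed attention. The sketch below shows one generic way such a fusion layer can look in PyTorch; the module name, dimensions, and residual wiring are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class AudioVisualFusion(nn.Module):
    """Audio frames attend over visual scene features -- a minimal stand-in
    for a multi-headed-attention link between pretrained speech and visual
    encoders (dimensions assumed for illustration)."""
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, audio_feats, visual_feats):
        # audio_feats: (B, T_audio, d), visual_feats: (B, T_visual, d)
        fused, _ = self.attn(query=audio_feats, key=visual_feats,
                             value=visual_feats)
        return self.norm(audio_feats + fused)  # residual connection

fusion = AudioVisualFusion()
a = torch.randn(2, 100, 512)   # 100 audio frames
v = torch.randn(2, 16, 512)    # 16 visual tokens
print(fusion(a, v).shape)      # torch.Size([2, 100, 512])
```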
2504.07231 | David Akhihiero | David Akhihiero and Jason N. Gross | A Pointcloud Registration Framework for Relocalization in Subterranean
Environments | null | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Relocalization, the process of re-establishing a robot's position within an
environment, is crucial for ensuring accurate navigation and task execution
when external positioning information, such as GPS, is unavailable or has been
lost. Subterranean environments present significant challenges for
relocalization due to limited external positioning information, poor lighting
that affects camera localization, irregular and often non-distinct surfaces,
and dust, which can introduce noise and occlusion in sensor data. In this work,
we propose a robust, computationally friendly framework for relocalization
through point cloud registration utilizing a prior point cloud map. The
framework employs Intrinsic Shape Signatures (ISS) to select feature points in
both the target and prior point clouds. The Fast Point Feature Histogram (FPFH)
algorithm is utilized to create descriptors for these feature points, and
matching these descriptors yields correspondences between the point clouds. A
3D transformation is estimated using the matched points, which initializes a
Normal Distribution Transform (NDT) registration. The transformation result
from NDT is further refined using the Iterative Closest Point (ICP)
registration algorithm. This framework enhances registration accuracy even in
challenging conditions, such as dust interference and significant initial
transformations between the target and source, making it suitable for
autonomous robots operating in underground mines and tunnels. This framework
was validated with experiments in simulated and real-world mine datasets,
demonstrating its potential for improving relocalization.
| [
{
"version": "v1",
"created": "Wed, 9 Apr 2025 19:13:08 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Akhihiero",
"David",
""
],
[
"Gross",
"Jason N.",
""
]
] | TITLE: A Pointcloud Registration Framework for Relocalization in Subterranean
Environments
ABSTRACT: Relocalization, the process of re-establishing a robot's position within an
environment, is crucial for ensuring accurate navigation and task execution
when external positioning information, such as GPS, is unavailable or has been
lost. Subterranean environments present significant challenges for
relocalization due to limited external positioning information, poor lighting
that affects camera localization, irregular and often non-distinct surfaces,
and dust, which can introduce noise and occlusion in sensor data. In this work,
we propose a robust, computationally friendly framework for relocalization
through point cloud registration utilizing a prior point cloud map. The
framework employs Intrinsic Shape Signatures (ISS) to select feature points in
both the target and prior point clouds. The Fast Point Feature Histogram (FPFH)
algorithm is utilized to create descriptors for these feature points, and
matching these descriptors yields correspondences between the point clouds. A
3D transformation is estimated using the matched points, which initializes a
Normal Distribution Transform (NDT) registration. The transformation result
from NDT is further refined using the Iterative Closest Point (ICP)
registration algorithm. This framework enhances registration accuracy even in
challenging conditions, such as dust interference and significant initial
transformations between the target and source, making it suitable for
autonomous robots operating in underground mines and tunnels. This framework
was validated with experiments in simulated and real-world mine datasets,
demonstrating its potential for improving relocalization.
|
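The pipeline above (ISS keypoints, FPFH descriptors, coarse alignment, ICP refinement) maps closely onto Open3D primitives. A minimal sketch follows; Open3D ships no NDT registration, so RANSAC-based feature matching stands in for the NDT initialization stage, and all radii and thresholds are placeholder values.

```python
import open3d as o3d

def relocalize(source, target, voxel=0.5):
    """ISS keypoints + FPFH matching for a coarse pose, then ICP refinement.
    Sketch only: RANSAC feature matching replaces the paper's NDT stage,
    and radii/thresholds are placeholders."""
    def prep(pcd):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
        keys = o3d.geometry.keypoint.compute_iss_keypoints(down)
        # Normals re-estimated on the sparse keypoint cloud for simplicity.
        keys.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            keys,
            o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
        return down, keys, fpfh

    src_down, src_keys, src_fpfh = prep(source)
    tgt_down, tgt_keys, tgt_fpfh = prep(target)
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src_keys, tgt_keys, src_fpfh, tgt_fpfh, True, 3 * voxel,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        4, [],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    fine = o3d.pipelines.registration.registration_icp(
        src_down, tgt_down, voxel, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return fine.transformation
```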
2504.07237 | Shenghao Xie | David P. Woodruff, Shenghao Xie, Samson Zhou | Perfect Sampling in Turnstile Streams Beyond Small Moments | To appear in PODS 2025 | null | null | null | cs.DS | http://creativecommons.org/licenses/by/4.0/ | Given a vector $x \in \mathbb{R}^n$ induced by a turnstile stream $S$ and a
non-negative function $G: \mathbb{R} \to \mathbb{R}$, a perfect $G$-sampler
outputs an index $i$ with probability $\frac{G(x_i)}{\sum_{j\in[n]}
G(x_j)}+\frac{1}{\text{poly}(n)}$. Jayaram and Woodruff (FOCS 2018) introduced
a perfect $L_p$-sampler, where $G(z)=|z|^p$, for $p\in(0,2]$. In this paper, we
solve this problem for $p>2$ by a sampling-and-rejection method. Our algorithm
runs in $n^{1-2/p} \cdot \text{polylog}(n)$ bits of space, which is tight up to
polylogarithmic factors in $n$. Our algorithm also provides a
$(1+\varepsilon)$-approximation to the sampled item $x_i$ with high probability
using an additional $\varepsilon^{-2} n^{1-2/p} \cdot \text{polylog}(n)$ bits
of space.
Interestingly, we show our techniques can be generalized to perfect
polynomial samplers on turnstile streams, which is a class of functions that is
not scale-invariant, in contrast to the existing perfect $L_p$ samplers. We
also achieve perfect samplers for the logarithmic function $G(z)=\log(1+|z|)$
and the cap function $G(z)=\min(T,|z|^p)$. Finally, we give an application of
our results to the problem of norm/moment estimation for a subset $\mathcal{Q}$
of coordinates of a vector, revealed only after the data stream is processed,
e.g., when the set $\mathcal{Q}$ represents a range query, or the set
$[n]\setminus\mathcal{Q}$ represents a collection of entities who wish for their
information to be expunged from the dataset.
| [
{
"version": "v1",
"created": "Wed, 9 Apr 2025 19:25:46 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Woodruff",
"David P.",
""
],
[
"Xie",
"Shenghao",
""
],
[
"Zhou",
"Samson",
""
]
] | TITLE: Perfect Sampling in Turnstile Streams Beyond Small Moments
ABSTRACT: Given a vector $x \in \mathbb{R}^n$ induced by a turnstile stream $S$ and a
non-negative function $G: \mathbb{R} \to \mathbb{R}$, a perfect $G$-sampler
outputs an index $i$ with probability $\frac{G(x_i)}{\sum_{j\in[n]}
G(x_j)}+\frac{1}{\text{poly}(n)}$. Jayaram and Woodruff (FOCS 2018) introduced
a perfect $L_p$-sampler, where $G(z)=|z|^p$, for $p\in(0,2]$. In this paper, we
solve this problem for $p>2$ by a sampling-and-rejection method. Our algorithm
runs in $n^{1-2/p} \cdot \text{polylog}(n)$ bits of space, which is tight up to
polylogarithmic factors in $n$. Our algorithm also provides a
$(1+\varepsilon)$-approximation to the sampled item $x_i$ with high probability
using an additional $\varepsilon^{-2} n^{1-2/p} \cdot \text{polylog}(n)$ bits
of space.
Interestingly, we show our techniques can be generalized to perfect
polynomial samplers on turnstile streams, which is a class of functions that is
not scale-invariant, in contrast to the existing perfect $L_p$ samplers. We
also achieve perfect samplers for the logarithmic function $G(z)=\log(1+|z|)$
and the cap function $G(z)=\min(T,|z|^p)$. Finally, we give an application of
our results to the problem of norm/moment estimation for a subset $\mathcal{Q}$
of coordinates of a vector, revealed only after the data stream is processed,
e.g., when the set $\mathcal{Q}$ represents a range query, or the set
$[n]\setminus\mathcal{Q}$ represents a collection of entities who wish for their
information to be expunged from the dataset.
|
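For reference, the distribution a perfect $G$-sampler targets is easy to state offline. The toy Python snippet below draws from $\Pr[i] = G(x_i)/\sum_j G(x_j)$ given the full vector; the paper's contribution, a small-space streaming algorithm achieving this for $p > 2$, is not reproduced here.

```python
import random

def g_sample(x, G):
    """Draw index i with probability G(x[i]) / sum_j G(x[j]). An offline
    reference for the sampler's target distribution only; the paper's
    small-space streaming algorithm is not reproduced here."""
    weights = [G(v) for v in x]
    return random.choices(range(len(x)), weights=weights, k=1)[0]

x = [3.0, -1.0, 0.0, 2.0]
p = 3                                      # an L_p sampler in the p > 2 regime
print(g_sample(x, lambda z: abs(z) ** p))  # index 0 drawn w.p. 27/36 = 0.75
```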
2504.07252 | Rajhans Singh | Rajhans Singh, Rafael Bidese Puhl, Kshitiz Dhakal, Sudhir Sornapudi | Few-Shot Adaptation of Grounding DINO for Agricultural Domain | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Deep learning models are transforming agricultural applications by enabling
automated phenotyping, monitoring, and yield estimation. However, their
effectiveness heavily depends on large amounts of annotated training data,
which can be labor and time intensive. Recent advances in open-set object
detection, particularly with models like Grounding-DINO, offer a potential
solution to detect regions of interest based on text prompt input. Initial
zero-shot experiments revealed challenges in crafting effective text prompts,
especially for complex objects like individual leaves and visually similar
classes. To address these limitations, we propose an efficient few-shot
adaptation method that simplifies the Grounding-DINO architecture by removing
the text encoder module (BERT) and introducing a randomly initialized trainable
text embedding. This method achieves superior performance across multiple
agricultural datasets, including plant-weed detection, plant counting, insect
identification, fruit counting, and remote sensing tasks. Specifically, it
demonstrates up to a $\sim24\%$ higher mAP than fully fine-tuned YOLO models on
agricultural datasets and outperforms previous state-of-the-art methods by
$\sim10\%$ in remote sensing, under few-shot learning conditions. Our method
offers a promising solution for automating annotation and accelerating the
development of specialized agricultural AI solutions.
| [
{
"version": "v1",
"created": "Wed, 9 Apr 2025 19:57:25 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Singh",
"Rajhans",
""
],
[
"Puhl",
"Rafael Bidese",
""
],
[
"Dhakal",
"Kshitiz",
""
],
[
"Sornapudi",
"Sudhir",
""
]
] | TITLE: Few-Shot Adaptation of Grounding DINO for Agricultural Domain
ABSTRACT: Deep learning models are transforming agricultural applications by enabling
automated phenotyping, monitoring, and yield estimation. However, their
effectiveness heavily depends on large amounts of annotated training data,
which can be labor and time intensive. Recent advances in open-set object
detection, particularly with models like Grounding-DINO, offer a potential
solution to detect regions of interest based on text prompt input. Initial
zero-shot experiments revealed challenges in crafting effective text prompts,
especially for complex objects like individual leaves and visually similar
classes. To address these limitations, we propose an efficient few-shot
adaptation method that simplifies the Grounding-DINO architecture by removing
the text encoder module (BERT) and introducing a randomly initialized trainable
text embedding. This method achieves superior performance across multiple
agricultural datasets, including plant-weed detection, plant counting, insect
identification, fruit counting, and remote sensing tasks. Specifically, it
demonstrates up to a $\sim24\%$ higher mAP than fully fine-tuned YOLO models on
agricultural datasets and outperforms previous state-of-the-art methods by
$\sim10\%$ in remote sensing, under few-shot learning conditions. Our method
offers a promising solution for automating annotation and accelerating the
development of specialized agricultural AI solutions.
|
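The adaptation above removes the BERT text encoder and learns a randomly initialized text embedding instead. A hypothetical PyTorch sketch of that idea follows; the class count, feature dimension, and initialization scale are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class LearnedClassEmbedding(nn.Module):
    """Randomly initialized, trainable embeddings standing in for
    Grounding-DINO's BERT text features -- one vector per target class.
    A sketch of the idea only; sizes are assumptions."""
    def __init__(self, num_classes=5, d_model=256):
        super().__init__()
        self.embed = nn.Parameter(torch.randn(num_classes, d_model) * 0.02)

    def forward(self, batch_size):
        # (B, num_classes, d) "text" features for the cross-modality decoder
        return self.embed.unsqueeze(0).expand(batch_size, -1, -1)

emb = LearnedClassEmbedding()
print(emb(2).shape)  # torch.Size([2, 5, 256])
```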
2504.07261 | Dheeraj Baby | Dheeraj Baby and Boran Han and Shuai Zhang and Cuixiong Hu and Yuyang
Wang and Yu-Xiang Wang | Adapting to Online Distribution Shifts in Deep Learning: A Black-Box
Approach | To appear at AISTATS 2025 | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | We study the well-motivated problem of online distribution shift in which the
data arrive in batches and the distribution of each batch can change
arbitrarily over time. Since the shifts can be large or small, abrupt or
gradual, the length of the relevant historical data to learn from may vary over
time, which poses a major challenge in designing algorithms that can
automatically adapt to the best ``attention span'' while remaining
computationally efficient. We propose a meta-algorithm that takes any network
architecture and any Online Learner (OL) algorithm as input and produces a new
algorithm which provably enhances the performance of the given OL under
non-stationarity. Our algorithm is efficient (it requires maintaining only
$O(\log(T))$ OL instances) and adaptive (it automatically chooses OL instances
with the ideal ``attention'' length at every timestamp). Experiments on various
real-world datasets across text and image modalities show that our method
consistently improves the accuracy of user-specified OL algorithms for
classification tasks. Key novel algorithmic ingredients include a
\emph{multi-resolution instance} design inspired by wavelet theory and a
cross-validation-through-time technique. Both could be of independent interest.
| [
{
"version": "v1",
"created": "Wed, 9 Apr 2025 20:34:24 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Baby",
"Dheeraj",
""
],
[
"Han",
"Boran",
""
],
[
"Zhang",
"Shuai",
""
],
[
"Hu",
"Cuixiong",
""
],
[
"Wang",
"Yuyang",
""
],
[
"Wang",
"Yu-Xiang",
""
]
] | TITLE: Adapting to Online Distribution Shifts in Deep Learning: A Black-Box
Approach
ABSTRACT: We study the well-motivated problem of online distribution shift in which the
data arrive in batches and the distribution of each batch can change
arbitrarily over time. Since the shifts can be large or small, abrupt or
gradual, the length of the relevant historical data to learn from may vary over
time, which poses a major challenge in designing algorithms that can
automatically adapt to the best ``attention span'' while remaining
computationally efficient. We propose a meta-algorithm that takes any network
architecture and any Online Learner (OL) algorithm as input and produces a new
algorithm which provably enhances the performance of the given OL under
non-stationarity. Our algorithm is efficient (it requires maintaining only
$O(\log(T))$ OL instances) and adaptive (it automatically chooses OL instances
with the ideal ``attention'' length at every timestamp). Experiments on various
real-world datasets across text and image modalities show that our method
consistently improves the accuracy of user-specified OL algorithms for
classification tasks. Key novel algorithmic ingredients include a
\emph{multi-resolution instance} design inspired by wavelet theory and a
cross-validation-through-time technique. Both could be of independent interest.
|
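The meta-algorithm above maintains $O(\log T)$ online-learner instances at geometrically spaced restart scales and serves predictions from the one with the best recent loss. The sketch below illustrates that multi-resolution bookkeeping under an assumed interface (make_learner() returns an object whose update(batch) trains on the batch and returns its loss); the paper's exact restart schedule and selection rule may differ.

```python
import math

class MultiResolutionWrapper:
    """Maintain O(log T) copies of a base online learner, each restarted on
    its own geometric schedule, and serve the copy with the best recent loss.
    Sketch under an assumed learner interface."""
    def __init__(self, make_learner, horizon):
        self.scales = [2 ** k for k in range(int(math.log2(horizon)) + 1)]
        self.make_learner = make_learner
        self.learners = [make_learner() for _ in self.scales]
        self.recent = [0.0] * len(self.scales)

    def step(self, t, batch):
        for k, scale in enumerate(self.scales):
            if t > 0 and t % scale == 0:      # restart on a 2^k cadence
                self.learners[k] = self.make_learner()
                self.recent[k] = 0.0
            loss = self.learners[k].update(batch)
            # EMA of losses as a cheap cross-validation-through-time proxy
            self.recent[k] = 0.9 * self.recent[k] + 0.1 * loss
        best = min(range(len(self.scales)), key=self.recent.__getitem__)
        return self.learners[best]            # predict with the best learner
```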
2504.07274 | Agam Shah | Nikita Tatarinov, Siddhant Sukhani, Agam Shah, Sudheer Chava | Language Modeling for the Future of Finance: A Quantitative Survey into
Metrics, Tasks, and Data Opportunities | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Recent advances in language modeling have led to growing interest in applying
Natural Language Processing (NLP) techniques to financial problems, enabling
new approaches to analysis and decision-making. To systematically examine this
trend, we review 374 NLP research papers published between 2017 and 2024 across
38 conferences and workshops, with a focused analysis of 221 papers that
directly address finance-related tasks. We evaluate these papers across 11
qualitative and quantitative dimensions, identifying key trends such as the
increasing use of general-purpose language models, steady progress in sentiment
analysis and information extraction, and emerging efforts around explainability
and privacy-preserving methods. We also discuss the use of evaluation metrics,
highlighting the importance of domain-specific ones to complement standard
machine learning metrics. Our findings emphasize the need for more accessible,
adaptive datasets and highlight the significance of incorporating financial
crisis periods to strengthen model robustness under real-world conditions. This
survey provides a structured overview of NLP research applied to finance and
offers practical insights for researchers and practitioners working at this
intersection.
| [
{
"version": "v1",
"created": "Wed, 9 Apr 2025 21:02:12 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Tatarinov",
"Nikita",
""
],
[
"Sukhani",
"Siddhant",
""
],
[
"Shah",
"Agam",
""
],
[
"Chava",
"Sudheer",
""
]
] | TITLE: Language Modeling for the Future of Finance: A Quantitative Survey into
Metrics, Tasks, and Data Opportunities
ABSTRACT: Recent advances in language modeling have led to growing interest in applying
Natural Language Processing (NLP) techniques to financial problems, enabling
new approaches to analysis and decision-making. To systematically examine this
trend, we review 374 NLP research papers published between 2017 and 2024 across
38 conferences and workshops, with a focused analysis of 221 papers that
directly address finance-related tasks. We evaluate these papers across 11
qualitative and quantitative dimensions, identifying key trends such as the
increasing use of general-purpose language models, steady progress in sentiment
analysis and information extraction, and emerging efforts around explainability
and privacy-preserving methods. We also discuss the use of evaluation metrics,
highlighting the importance of domain-specific ones to complement standard
machine learning metrics. Our findings emphasize the need for more accessible,
adaptive datasets and highlight the significance of incorporating financial
crisis periods to strengthen model robustness under real-world conditions. This
survey provides a structured overview of NLP research applied to finance and
offers practical insights for researchers and practitioners working at this
intersection.
|
2504.07297 | Robert Appleton | Robert J Appleton, Brian C Barnes, Alejandro Strachan | Data Fusion of Deep Learned Molecular Embeddings for Property Prediction | null | null | null | null | cs.LG cond-mat.mtrl-sci | http://creativecommons.org/licenses/by/4.0/ | Data-driven approaches such as deep learning can result in predictive models
for material properties with exceptional accuracy and efficiency. However, in
many problems data is sparse, severely limiting their accuracy and
applicability. To improve predictions, techniques such as transfer learning and
multi-task learning have been used. The performance of multi-task learning
models depends on the strength of the underlying correlations between tasks and
the completeness of the dataset. We find that standard multi-task models tend
to underperform when trained on sparse datasets with weakly correlated
properties. To address this gap, we use data fusion techniques to combine the
learned molecular embeddings of various single-task models and train a
multi-task model on this combined embedding. We apply this technique to a
widely used benchmark dataset of quantum chemistry data for small molecules as
well as a newly compiled sparse dataset of experimental data collected from
literature and our own quantum chemistry and thermochemical calculations. The
results show that the fused, multi-task models outperform standard multi-task
models for sparse datasets and can provide enhanced prediction on data-limited
properties compared to single-task models.
| [
{
"version": "v1",
"created": "Wed, 9 Apr 2025 21:40:15 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Appleton",
"Robert J",
""
],
[
"Barnes",
"Brian C",
""
],
[
"Strachan",
"Alejandro",
""
]
] | TITLE: Data Fusion of Deep Learned Molecular Embeddings for Property Prediction
ABSTRACT: Data-driven approaches such as deep learning can result in predictive models
for material properties with exceptional accuracy and efficiency. However, in
many problems data is sparse, severely limiting their accuracy and
applicability. To improve predictions, techniques such as transfer learning and
multi-task learning have been used. The performance of multi-task learning
models depends on the strength of the underlying correlations between tasks and
the completeness of the dataset. We find that standard multi-task models tend
to underperform when trained on sparse datasets with weakly correlated
properties. To address this gap, we use data fusion techniques to combine the
learned molecular embeddings of various single-task models and train a
multi-task model on this combined embedding. We apply this technique to a
widely used benchmark dataset of quantum chemistry data for small molecules as
well as a newly compiled sparse dataset of experimental data collected from
literature and our own quantum chemistry and thermochemical calculations. The
results show that the fused, multi-task models outperform standard multi-task
models for sparse datasets and can provide enhanced prediction on data-limited
properties compared to single-task models.
|
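The fusion step above concatenates the learned embeddings of several single-task models and trains one multi-task model on the result. A minimal sketch with synthetic data follows; the shapes and the MLP head are illustrative choices, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical learned embeddings from three single-task models, one row per
# molecule (dimensions invented for illustration).
rng = np.random.default_rng(0)
emb_a = rng.normal(size=(200, 32))
emb_b = rng.normal(size=(200, 32))
emb_c = rng.normal(size=(200, 32))
targets = rng.normal(size=(200, 3))  # three sparse, weakly correlated properties

# Data fusion step: concatenate the single-task embeddings...
fused = np.hstack([emb_a, emb_b, emb_c])
# ...and train one multi-task model on the combined representation.
model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500)
model.fit(fused, targets)
```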
2504.07313 | Benomar Mohammed Lamine | Mohammed Lamine Benomar, Nesma Settouti, Eric Debreuve, Xavier
Descombes, Damien Ambrosetti | Identifying regions of interest in whole slide images of renal cell
carcinoma | null | null | 10.1007/s42600-021-00178-9 | null | eess.IV cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | The histopathological images contain a huge amount of information, which can
make diagnosis an extremely time-consuming and tedious task. In this study, we
developed a completely automated system to detect regions of interest (ROIs) in
whole slide images (WSI) of renal cell carcinoma (RCC), to reduce analysis time
and assist pathologists in making more accurate decisions. The proposed
approach is based on an efficient texture descriptor named dominant rotated
local binary pattern (DRLBP) and color transformation to reveal and exploit the
immense texture variability at the microscopic high magnifications level.
Thereby, the DRLBPs retain the structural information and utilize the magnitude
values in a local neighborhood for more discriminative power. For the
classification of the relevant ROIs, feature extraction of WSIs patches was
performed on the color channels separately to form the histograms. Next, we
used the most frequently occurring patterns as a feature selection step to
discard non-informative features. The performances of different classifiers on
a set of 1800 kidney cancer patches originating from 12 whole slide images were
compared and evaluated. Furthermore, the small size of the image dataset allows
us to investigate a deep learning approach based on transfer learning for image
patches classification by using deep features and fine-tuning methods. High
recognition accuracy was obtained and the classifiers are efficient; the best
precision result, 99.17%, was achieved with SVM. Moreover, transfer learning
models perform well with comparable performance, and the highest precision
using ResNet-50 reached 98.50%. The proposed approach results revealed a very
efficient image classification and demonstrated efficacy in identifying ROIs.
This study presents an automatic system to detect regions of interest relevant
to the diagnosis of kidney cancer in whole slide histopathology images.
| [
{
"version": "v1",
"created": "Wed, 9 Apr 2025 22:28:26 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Benomar",
"Mohammed Lamine",
""
],
[
"Settouti",
"Nesma",
""
],
[
"Debreuve",
"Eric",
""
],
[
"Descombes",
"Xavier",
""
],
[
"Ambrosetti",
"Damien",
""
]
] | TITLE: Identifying regions of interest in whole slide images of renal cell
carcinoma
ABSTRACT: The histopathological images contain a huge amount of information, which can
make diagnosis an extremely time-consuming and tedious task. In this study, we
developed a completely automated system to detect regions of interest (ROIs) in
whole slide images (WSI) of renal cell carcinoma (RCC), to reduce analysis time
and assist pathologists in making more accurate decisions. The proposed
approach is based on an efficient texture descriptor named dominant rotated
local binary pattern (DRLBP) and color transformation to reveal and exploit the
immense texture variability at the microscopic high magnifications level.
Thereby, the DRLBPs retain the structural information and utilize the magnitude
values in a local neighborhood for more discriminative power. For the
classification of the relevant ROIs, feature extraction of WSIs patches was
performed on the color channels separately to form the histograms. Next, we
used the most frequently occurring patterns as a feature selection step to
discard non-informative features. The performances of different classifiers on
a set of 1800 kidney cancer patches originating from 12 whole slide images were
compared and evaluated. Furthermore, the small size of the image dataset allows
us to investigate a deep learning approach based on transfer learning for image
patches classification by using deep features and fine-tuning methods. High
recognition accuracy was obtained and the classifiers are efficient; the best
precision result, 99.17%, was achieved with SVM. Moreover, transfer learning
models perform well with comparable performance, and the highest precision
using ResNet-50 reached 98.50%. The proposed approach results revealed a very
efficient image classification and demonstrated efficacy in identifying ROIs.
This study presents an automatic system to detect regions of interest relevant
to the diagnosis of kidney cancer in whole slide histopathology images.
|
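The descriptor pipeline above builds per-channel texture histograms and feeds them to classifiers such as SVM. The sketch below uses scikit-image's rotation-invariant LBP as a stand-in for DRLBP, which scikit-image does not implement; data shapes and labels are invented for illustration.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(channel, P=8, R=1):
    """Rotation-invariant LBP histogram for one color channel. Plain 'ror'
    LBP stands in for the paper's DRLBP, which scikit-image lacks."""
    lbp = local_binary_pattern(channel, P, R, method="ror")
    hist, _ = np.histogram(lbp, bins=2 ** P, range=(0, 2 ** P), density=True)
    return hist

def patch_features(patch_rgb):
    # Per the paper, histograms are built on each color channel separately.
    return np.concatenate([lbp_histogram(patch_rgb[..., c]) for c in range(3)])

# Hypothetical labeled patches (shapes and labels invented for illustration).
patches = np.random.randint(0, 256, size=(40, 64, 64, 3))
labels = np.array([0, 1] * 20)                    # ROI vs. non-ROI
X = np.vstack([patch_features(p) for p in patches])
clf = SVC(kernel="rbf").fit(X, labels)
```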
2504.07334 | Cindy Le | Chendi Lin, Heshan Liu, Qunshu Lin, Zachary Bright, Shitao Tang, Yihui
He, Minghao Liu, Ling Zhu, Cindy Le | Objaverse++: Curated 3D Object Dataset with Quality Annotations | 8 pages, 8 figures. Accepted to CVPR 2025 Workshop on Efficient Large
Vision Models (April 2025) | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | This paper presents Objaverse++, a curated subset of Objaverse enhanced with
detailed attribute annotations by human experts. Recent advances in 3D content
generation have been driven by large-scale datasets such as Objaverse, which
contains over 800,000 3D objects collected from the Internet. Although
Objaverse represents the largest available 3D asset collection, its utility is
limited by the predominance of low-quality models. To address this limitation,
we manually annotate 10,000 3D objects with detailed attributes, including
aesthetic quality scores, texture color classifications, multi-object
composition flags, transparency characteristics, etc. Then, we trained a neural
network capable of annotating the tags for the rest of the Objaverse dataset.
Through experiments and a user study on generation results, we demonstrate that
models pre-trained on our quality-focused subset achieve better performance
than those trained on the larger dataset of Objaverse in image-to-3D generation
tasks. In addition, by comparing multiple subsets of training data filtered by
our tags, our results show that the higher the data quality, the faster the
training loss converges. These findings suggest that careful curation and rich
annotation can compensate for the raw dataset size, potentially offering a more
efficient path to develop 3D generative models. We release our enhanced dataset
of approximately 500,000 curated 3D models to facilitate further research on
various downstream tasks in 3D computer vision. In the near future, we aim to
extend our annotations to cover the entire Objaverse dataset.
| [
{
"version": "v1",
"created": "Wed, 9 Apr 2025 23:29:08 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Lin",
"Chendi",
""
],
[
"Liu",
"Heshan",
""
],
[
"Lin",
"Qunshu",
""
],
[
"Bright",
"Zachary",
""
],
[
"Tang",
"Shitao",
""
],
[
"He",
"Yihui",
""
],
[
"Liu",
"Minghao",
""
],
[
"Zhu",
"Ling",
""
],
[
"Le",
"Cindy",
""
]
] | TITLE: Objaverse++: Curated 3D Object Dataset with Quality Annotations
ABSTRACT: This paper presents Objaverse++, a curated subset of Objaverse enhanced with
detailed attribute annotations by human experts. Recent advances in 3D content
generation have been driven by large-scale datasets such as Objaverse, which
contains over 800,000 3D objects collected from the Internet. Although
Objaverse represents the largest available 3D asset collection, its utility is
limited by the predominance of low-quality models. To address this limitation,
we manually annotate 10,000 3D objects with detailed attributes, including
aesthetic quality scores, texture color classifications, multi-object
composition flags, transparency characteristics, etc. Then, we trained a neural
network capable of annotating the tags for the rest of the Objaverse dataset.
Through experiments and a user study on generation results, we demonstrate that
models pre-trained on our quality-focused subset achieve better performance
than those trained on the larger dataset of Objaverse in image-to-3D generation
tasks. In addition, by comparing multiple subsets of training data filtered by
our tags, our results show that the higher the data quality, the faster the
training loss converges. These findings suggest that careful curation and rich
annotation can compensate for the raw dataset size, potentially offering a more
efficient path to develop 3D generative models. We release our enhanced dataset
of approximately 500,000 curated 3D models to facilitate further research on
various downstream tasks in 3D computer vision. In the near future, we aim to
extend our annotations to cover the entire Objaverse dataset.
|
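A curation workflow like the one above reduces, at training time, to filtering by the annotated tags before pre-training. A sketch with a hypothetical schema follows; the column names are illustrative, not the released dataset's actual fields.

```python
import pandas as pd

# Hypothetical annotation table -- column names are illustrative, not the
# released dataset's actual schema.
tags = pd.DataFrame({
    "uid": ["a", "b", "c"],
    "aesthetic_score": [4, 1, 3],
    "is_multi_object": [False, True, False],
    "is_transparent": [False, False, True],
})

# Keep single, opaque objects above a quality threshold for pre-training.
subset = tags[(tags.aesthetic_score >= 3)
              & ~tags.is_multi_object
              & ~tags.is_transparent]
print(subset.uid.tolist())  # ['a']
```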
2504.07335 | Akash Jadhav | Akash Jadhav, Michael Greenspan | DLTPose: 6DoF Pose Estimation From Accurate Dense Surface Point
Estimates | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose DLTPose, a novel method for 6DoF object pose estimation from RGB-D
images that combines the accuracy of sparse keypoint methods with the
robustness of dense pixel-wise predictions. DLTPose predicts per-pixel radial
distances to a set of minimally four keypoints, which are then fed into our
novel Direct Linear Transform (DLT) formulation to produce accurate 3D object
frame surface estimates, leading to better 6DoF pose estimation. Additionally,
we introduce a novel symmetry-aware keypoint ordering approach, designed to
handle object symmetries that otherwise cause inconsistencies in keypoint
assignments. Previous keypoint-based methods relied on fixed keypoint
orderings, which failed to account for the multiple valid configurations
exhibited by symmetric objects, which our ordering approach exploits to enhance
the model's ability to learn stable keypoint representations. Extensive
experiments on the benchmark LINEMOD, Occlusion LINEMOD and YCB-Video datasets
show that DLTPose outperforms existing methods, especially for symmetric and
occluded objects, demonstrating superior Mean Average Recall values of 86.5%
(LM), 79.7% (LM-O) and 89.5% (YCB-V). The code is available at
https://anonymous.4open.science/r/DLTPose_/ .
| [
{
"version": "v1",
"created": "Wed, 9 Apr 2025 23:30:22 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Jadhav",
"Akash",
""
],
[
"Greenspan",
"Michael",
""
]
] | TITLE: DLTPose: 6DoF Pose Estimation From Accurate Dense Surface Point
Estimates
ABSTRACT: We propose DLTPose, a novel method for 6DoF object pose estimation from RGB-D
images that combines the accuracy of sparse keypoint methods with the
robustness of dense pixel-wise predictions. DLTPose predicts per-pixel radial
distances to a set of minimally four keypoints, which are then fed into our
novel Direct Linear Transform (DLT) formulation to produce accurate 3D object
frame surface estimates, leading to better 6DoF pose estimation. Additionally,
we introduce a novel symmetry-aware keypoint ordering approach, designed to
handle object symmetries that otherwise cause inconsistencies in keypoint
assignments. Previous keypoint-based methods relied on fixed keypoint
orderings, which failed to account for the multiple valid configurations
exhibited by symmetric objects, which our ordering approach exploits to enhance
the model's ability to learn stable keypoint representations. Extensive
experiments on the benchmark LINEMOD, Occlusion LINEMOD and YCB-Video datasets
show that DLTPose outperforms existing methods, especially for symmetric and
occluded objects, demonstrating superior Mean Average Recall values of 86.5%
(LM), 79.7% (LM-O) and 89.5% (YCB-V). The code is available at
https://anonymous.4open.science/r/DLTPose_/ .
|
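The abstract above does not spell out the paper's DLT formulation, but recovering a 3D point from predicted radial distances to at least four known keypoints admits a standard linearization: subtracting one range equation $\|p - k_i\|^2 = r_i^2$ from the others yields a linear system in $p$. A generic multilateration sketch:

```python
import numpy as np

def point_from_distances(keypoints, dists):
    """Recover a 3D point from distances to >= 4 known keypoints. Subtracting
    the first range equation ||p - k_i||^2 = r_i^2 from the rest gives the
    linear system 2(k_0 - k_i) . p = r_i^2 - r_0^2 - ||k_i||^2 + ||k_0||^2,
    solved in least squares. Generic sketch, not the paper's formulation."""
    k0, r0 = keypoints[0], dists[0]
    A = 2.0 * (k0 - keypoints[1:])
    b = (dists[1:] ** 2 - r0 ** 2
         - np.sum(keypoints[1:] ** 2, axis=1) + k0 @ k0)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

keys = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
p_true = np.array([0.3, 0.2, 0.5])
r = np.linalg.norm(keys - p_true, axis=1)
print(point_from_distances(keys, r))   # ~ [0.3 0.2 0.5]
```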
2504.07336 | Siyuan Dai | Siyuan Dai, Kai Ye, Guodong Liu, Haoteng Tang, Liang Zhan | Zeus: Zero-shot LLM Instruction for Union Segmentation in Multimodal
Medical Imaging | 21 pages, 4 figures, In Press by a journal | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Medical image segmentation has achieved remarkable success through the
continuous advancement of UNet-based and Transformer-based foundation
backbones. However, clinical diagnosis in the real world often requires
integrating domain knowledge, especially textual information. Multimodal
learning involving visual and text modalities has been shown to be a solution,
but collecting paired vision-language datasets is expensive and time-consuming,
posing significant challenges. Inspired by the superior ability of Large
Language Models (LLMs) in numerous cross-modal tasks, we propose a novel
Vision-LLM union framework to address the issues. Specifically, we introduce
frozen LLMs for zero-shot instruction generation based on corresponding medical
images, imitating the radiology scanning and report generation process. To
better approximate real-world diagnostic processes, we generate more precise
text instruction from multimodal radiology images (e.g., T1-w or T2-w MRI and
CT), drawing on the impressive semantic understanding and rich knowledge of
LLMs. This process emphasizes extracting special features from different
modalities and reuniting the information for the ultimate clinical diagnosis.
With the generated text instructions, our proposed union segmentation
framework can handle multimodal segmentation without prior collected
vision-language datasets. To evaluate our proposed method, we conduct
comprehensive experiments with influential baselines; the statistical results
and the visualized case study demonstrate the superiority of our novel method.
| [
{
"version": "v1",
"created": "Wed, 9 Apr 2025 23:33:35 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Dai",
"Siyuan",
""
],
[
"Ye",
"Kai",
""
],
[
"Liu",
"Guodong",
""
],
[
"Tang",
"Haoteng",
""
],
[
"Zhan",
"Liang",
""
]
] | TITLE: Zeus: Zero-shot LLM Instruction for Union Segmentation in Multimodal
Medical Imaging
ABSTRACT: Medical image segmentation has achieved remarkable success through the
continuous advancement of UNet-based and Transformer-based foundation
backbones. However, clinical diagnosis in the real world often requires
integrating domain knowledge, especially textual information. Multimodal
learning involving visual and text modalities has been shown to be a solution,
but collecting paired vision-language datasets is expensive and time-consuming,
posing significant challenges. Inspired by the superior ability of Large
Language Models (LLMs) in numerous cross-modal tasks, we propose a novel
Vision-LLM union framework to address the issues. Specifically, we introduce
frozen LLMs for zero-shot instruction generation based on corresponding medical
images, imitating the radiology scanning and report generation process. To
better approximate real-world diagnostic processes, we generate more precise
text instruction from multimodal radiology images (e.g., T1-w or T2-w MRI and
CT), drawing on the impressive semantic understanding and rich knowledge of
LLMs. This process emphasizes extracting special features from different
modalities and reuniting the information for the ultimate clinical diagnosis.
With the generated text instructions, our proposed union segmentation
framework can handle multimodal segmentation without prior collected
vision-language datasets. To evaluate our proposed method, we conduct
comprehensive experiments with influential baselines; the statistical results
and the visualized case study demonstrate the superiority of our novel method.
|
2504.07345 | Minh Quan | Minh K. Quan, Mayuri Wijayasundara, Sujeeva Setunge, Pubudu N.
Pathirana | Quantum-Inspired Genetic Algorithm for Robust Source Separation in Smart
City Acoustics | 6 pages, 2 figures, IEEE International Conference on Communications
(ICC 2025) | null | null | null | cs.SD cs.AI eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The cacophony of urban sounds presents a significant challenge for smart city
applications that rely on accurate acoustic scene analysis. Effectively
analyzing these complex soundscapes, often characterized by overlapping sound
sources, diverse acoustic events, and unpredictable noise levels, requires
precise source separation. This task becomes more complicated when only limited
training data is available. This paper introduces a novel Quantum-Inspired
Genetic Algorithm (p-QIGA) for source separation, drawing inspiration from
quantum information theory to enhance acoustic scene analysis in smart cities.
By leveraging quantum superposition for efficient solution space exploration
and entanglement to handle correlated sources, p-QIGA achieves robust
separation even with limited data. These quantum-inspired concepts are
integrated into a genetic algorithm framework to optimize source separation
parameters. The effectiveness of our approach is demonstrated on two datasets:
the TAU Urban Acoustic Scenes 2020 Mobile dataset, representing typical urban
soundscapes, and the Silent Cities dataset, capturing quieter urban
environments during the COVID-19 pandemic. Experimental results show that the
p-QIGA achieves accuracy comparable to state-of-the-art methods while
exhibiting superior resilience to noise and limited training data, achieving up
to 8.2 dB signal-to-distortion ratio (SDR) in noisy environments and
outperforming baseline methods by up to 2 dB with only 10% of the training
data. This research highlights the potential of p-QIGA to advance acoustic
signal processing in smart cities, particularly for noise pollution monitoring
and acoustic surveillance.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 00:05:35 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Quan",
"Minh K.",
""
],
[
"Wijayasundara",
"Mayuri",
""
],
[
"Setunge",
"Sujeeva",
""
],
[
"Pathirana",
"Pubudu N.",
""
]
] | TITLE: Quantum-Inspired Genetic Algorithm for Robust Source Separation in Smart
City Acoustics
ABSTRACT: The cacophony of urban sounds presents a significant challenge for smart city
applications that rely on accurate acoustic scene analysis. Effectively
analyzing these complex soundscapes, often characterized by overlapping sound
sources, diverse acoustic events, and unpredictable noise levels, requires
precise source separation. This task becomes more complicated when only limited
training data is available. This paper introduces a novel Quantum-Inspired
Genetic Algorithm (p-QIGA) for source separation, drawing inspiration from
quantum information theory to enhance acoustic scene analysis in smart cities.
By leveraging quantum superposition for efficient solution space exploration
and entanglement to handle correlated sources, p-QIGA achieves robust
separation even with limited data. These quantum-inspired concepts are
integrated into a genetic algorithm framework to optimize source separation
parameters. The effectiveness of our approach is demonstrated on two datasets:
the TAU Urban Acoustic Scenes 2020 Mobile dataset, representing typical urban
soundscapes, and the Silent Cities dataset, capturing quieter urban
environments during the COVID-19 pandemic. Experimental results show that the
p-QIGA achieves accuracy comparable to state-of-the-art methods while
exhibiting superior resilience to noise and limited training data, achieving up
to 8.2 dB signal-to-distortion ratio (SDR) in noisy environments and
outperforming baseline methods by up to 2 dB with only 10% of the training
data. This research highlights the potential of p-QIGA to advance acoustic
signal processing in smart cities, particularly for noise pollution monitoring
and acoustic surveillance.
|
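A quantum-inspired GA typically encodes each gene as a probability amplitude, "observes" the population to obtain candidate bitstrings, and rotates the amplitudes toward the best observation. The sketch below illustrates that loop generically on a toy objective; it is not the paper's p-QIGA or its source-separation encoding.

```python
import numpy as np

def qiga(fitness, n_bits=16, pop=10, iters=200, delta=0.05, seed=0):
    """Minimal quantum-inspired GA: each gene stores the probability of
    observing '1' (a classical stand-in for Q-bit amplitudes). Observation
    collapses the population to bitstrings; probabilities are then rotated
    toward the best bitstring seen so far. Generic illustration only."""
    rng = np.random.default_rng(seed)
    probs = np.full((pop, n_bits), 0.5)           # maximally "superposed"
    best_bits, best_fit = None, -np.inf
    for _ in range(iters):
        bits = (rng.random(probs.shape) < probs).astype(int)   # observe
        fits = np.array([fitness(b) for b in bits])
        if fits.max() > best_fit:
            best_fit, best_bits = fits.max(), bits[fits.argmax()].copy()
        probs += delta * (best_bits - probs)      # rotate toward the best
        probs = probs.clip(0.05, 0.95)            # keep some exploration
    return best_bits, best_fit

bits, fit = qiga(lambda b: b.sum())               # toy objective: count ones
print(fit)                                        # typically reaches 16
```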
2504.07360 | Taibiao Zhao | Taibiao Zhao, Xiaobing Chen, and Mingxuan Sun | Enhancing Time Series Forecasting via Multi-Level Text Alignment with
LLMs | null | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | The adaptation of large language models (LLMs) to time series forecasting
poses unique challenges, as time series data is continuous in nature, while
LLMs operate on discrete tokens. Despite the success of LLMs in natural
language processing (NLP) and other structured domains, aligning time series
data with language-based representations while maintaining both predictive
accuracy and interpretability remains a significant hurdle. Existing methods
have attempted to reprogram time series data into text-based forms, but these
often fall short in delivering meaningful, interpretable results. In this
paper, we propose a multi-level text alignment framework for time series
forecasting using LLMs that not only improves prediction accuracy but also
enhances the interpretability of time series representations. Our method
decomposes time series into trend, seasonal, and residual components, which are
then reprogrammed into component-specific text representations. We introduce a
multi-level alignment mechanism, where component-specific embeddings are
aligned with pre-trained word tokens, enabling more interpretable forecasts.
Experiments on multiple datasets demonstrate that our method outperforms
state-of-the-art models in accuracy while providing good interpretability.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 01:02:37 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Zhao",
"Taibiao",
""
],
[
"Chen",
"Xiaobing",
""
],
[
"Sun",
"Mingxuan",
""
]
] | TITLE: Enhancing Time Series Forecasting via Multi-Level Text Alignment with
LLMs
ABSTRACT: The adaptation of large language models (LLMs) to time series forecasting
poses unique challenges, as time series data is continuous in nature, while
LLMs operate on discrete tokens. Despite the success of LLMs in natural
language processing (NLP) and other structured domains, aligning time series
data with language-based representations while maintaining both predictive
accuracy and interpretability remains a significant hurdle. Existing methods
have attempted to reprogram time series data into text-based forms, but these
often fall short in delivering meaningful, interpretable results. In this
paper, we propose a multi-level text alignment framework for time series
forecasting using LLMs that not only improves prediction accuracy but also
enhances the interpretability of time series representations. Our method
decomposes time series into trend, seasonal, and residual components, which are
then reprogrammed into component-specific text representations. We introduce a
multi-level alignment mechanism, where component-specific embeddings are
aligned with pre-trained word tokens, enabling more interpretable forecasts.
Experiments on multiple datasets demonstrate that our method outperforms
state-of-the-art models in accuracy while providing good interpretability.
|
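The decomposition the framework starts from (trend, seasonal, residual) is standard. A minimal sketch using statsmodels on a toy series follows; in the paper, each component would then be reprogrammed into its own text representation and aligned with pre-trained word tokens.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Toy monthly series: linear trend + yearly seasonality + noise.
idx = pd.date_range("2020-01-01", periods=96, freq="MS")
y = pd.Series(np.linspace(0, 8, 96)
              + 2 * np.sin(2 * np.pi * np.arange(96) / 12)
              + np.random.default_rng(0).normal(0, 0.3, 96), index=idx)

# The decomposition the framework starts from; each component would then be
# reprogrammed into its own text representation and aligned with word tokens.
parts = seasonal_decompose(y, model="additive", period=12)
trend, seasonal, resid = parts.trend, parts.seasonal, parts.resid
```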
2504.07363 | Yi Zhang | Yi Zhang, Yiwen Zhang, Yu Wang, Tong Chen, Hongzhi Yin | Towards Distribution Matching between Collaborative and Language Spaces
for Generative Recommendation | Accepted by SIGIR2025 | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generative recommendation aims to learn the underlying generative process
over the entire item set to produce recommendations for users. Although it
leverages non-linear probabilistic models to surpass the limited modeling
capacity of linear factor models, it is often constrained by a trade-off
between representation ability and tractability. With the rise of a new
generation of generative methods based on pre-trained language models (LMs),
incorporating LMs into general recommendation with implicit feedback has gained
considerable attention. However, adapting them to generative recommendation
remains challenging. The core reason lies in the mismatch between the
input-output formats and semantics of generative models and LMs, making it
challenging to achieve optimal alignment in the feature space. This work
addresses this issue by proposing a model-agnostic generative recommendation
framework called DMRec, which introduces a probabilistic meta-network to bridge
the outputs of LMs with user interactions, thereby enabling an equivalent
probabilistic modeling process. Subsequently, we design three cross-space
distribution matching processes aimed at maximizing shared information while
preserving the unique semantics of each space and filtering out irrelevant
information. We apply DMRec to three different types of generative
recommendation methods and conduct extensive experiments on three public
datasets. The experimental results demonstrate that DMRec can effectively
enhance the recommendation performance of these generative models, and it shows
significant advantages over mainstream LM-enhanced recommendation methods.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 01:09:30 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Zhang",
"Yi",
""
],
[
"Zhang",
"Yiwen",
""
],
[
"Wang",
"Yu",
""
],
[
"Chen",
"Tong",
""
],
[
"Yin",
"Hongzhi",
""
]
] | TITLE: Towards Distribution Matching between Collaborative and Language Spaces
for Generative Recommendation
ABSTRACT: Generative recommendation aims to learn the underlying generative process
over the entire item set to produce recommendations for users. Although it
leverages non-linear probabilistic models to surpass the limited modeling
capacity of linear factor models, it is often constrained by a trade-off
between representation ability and tractability. With the rise of a new
generation of generative methods based on pre-trained language models (LMs),
incorporating LMs into general recommendation with implicit feedback has gained
considerable attention. However, adapting them to generative recommendation
remains challenging. The core reason lies in the mismatch between the
input-output formats and semantics of generative models and LMs, making it
challenging to achieve optimal alignment in the feature space. This work
addresses this issue by proposing a model-agnostic generative recommendation
framework called DMRec, which introduces a probabilistic meta-network to bridge
the outputs of LMs with user interactions, thereby enabling an equivalent
probabilistic modeling process. Subsequently, we design three cross-space
distribution matching processes aimed at maximizing shared information while
preserving the unique semantics of each space and filtering out irrelevant
information. We apply DMRec to three different types of generative
recommendation methods and conduct extensive experiments on three public
datasets. The experimental results demonstrate that DMRec can effectively
enhance the recommendation performance of these generative models, and it shows
significant advantages over mainstream LM-enhanced recommendation methods.
|
2504.07375 | Junyi Ma | Junyi Ma, Wentao Bao, Jingyi Xu, Guanzhong Sun, Xieyuanli Chen,
Hesheng Wang | Novel Diffusion Models for Multimodal 3D Hand Trajectory Prediction | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Predicting hand motion is critical for understanding human intentions and
bridging the action space between human movements and robot manipulations.
Existing hand trajectory prediction (HTP) methods forecast the future hand
waypoints in 3D space conditioned on past egocentric observations. However,
such models are only designed to accommodate 2D egocentric video inputs. There
is a lack of awareness of multimodal environmental information from both 2D and
3D observations, hindering the further improvement of 3D HTP performance. In
addition, these models overlook the synergy between hand movements and headset
camera egomotion, either predicting hand trajectories in isolation or encoding
egomotion only from past frames. To address these limitations, we propose novel
diffusion models (MMTwin) for multimodal 3D hand trajectory prediction. MMTwin
is designed to absorb multimodal information as input encompassing 2D RGB
images, 3D point clouds, past hand waypoints, and text prompt. Besides, two
latent diffusion models, the egomotion diffusion and the HTP diffusion as
twins, are integrated into MMTwin to predict camera egomotion and future hand
trajectories concurrently. We propose a novel hybrid Mamba-Transformer module
as the denoising model of the HTP diffusion to better fuse multimodal features.
The experimental results on three publicly available datasets and our
self-recorded data demonstrate that our proposed MMTwin can predict plausible
future 3D hand trajectories compared to the state-of-the-art baselines, and
generalizes well to unseen environments. The code and pretrained models will be
released at https://github.com/IRMVLab/MMTwin.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 01:29:50 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Ma",
"Junyi",
""
],
[
"Bao",
"Wentao",
""
],
[
"Xu",
"Jingyi",
""
],
[
"Sun",
"Guanzhong",
""
],
[
"Chen",
"Xieyuanli",
""
],
[
"Wang",
"Hesheng",
""
]
] | TITLE: Novel Diffusion Models for Multimodal 3D Hand Trajectory Prediction
ABSTRACT: Predicting hand motion is critical for understanding human intentions and
bridging the action space between human movements and robot manipulations.
Existing hand trajectory prediction (HTP) methods forecast the future hand
waypoints in 3D space conditioned on past egocentric observations. However,
such models are only designed to accommodate 2D egocentric video inputs. There
is a lack of awareness of multimodal environmental information from both 2D and
3D observations, hindering the further improvement of 3D HTP performance. In
addition, these models overlook the synergy between hand movements and headset
camera egomotion, either predicting hand trajectories in isolation or encoding
egomotion only from past frames. To address these limitations, we propose novel
diffusion models (MMTwin) for multimodal 3D hand trajectory prediction. MMTwin
is designed to absorb multimodal information as input encompassing 2D RGB
images, 3D point clouds, past hand waypoints, and text prompts. In addition, two
latent diffusion models, the egomotion diffusion and the HTP diffusion as
twins, are integrated into MMTwin to predict camera egomotion and future hand
trajectories concurrently. We propose a novel hybrid Mamba-Transformer module
as the denoising model of the HTP diffusion to better fuse multimodal features.
The experimental results on three publicly available datasets and our
self-recorded data demonstrate that our proposed MMTwin can predict plausible
future 3D hand trajectories compared to the state-of-the-art baselines, and
generalizes well to unseen environments. The code and pretrained models will be
released at https://github.com/IRMVLab/MMTwin.
|
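As context for the twin latent diffusion models in MMTwin, here is a minimal sketch of the standard epsilon-prediction diffusion objective over a toy waypoint sequence; the tiny MLP denoiser and linear noise schedule are assumptions standing in for the paper's hybrid Mamba-Transformer denoiser.

```python
import torch
import torch.nn as nn

T, horizon = 100, 8                        # diffusion steps, waypoints to predict
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

denoiser = nn.Sequential(nn.Linear(horizon * 3 + 1, 128), nn.ReLU(),
                         nn.Linear(128, horizon * 3))

def q_sample(x0, t, noise):
    # Forward process: corrupt clean 3D waypoints x0 at step t.
    a = alphas_bar[t].sqrt().view(-1, 1)
    return a * x0 + (1 - alphas_bar[t]).sqrt().view(-1, 1) * noise

x0 = torch.randn(16, horizon * 3)          # toy "clean" trajectories
t = torch.randint(0, T, (16,))
noise = torch.randn_like(x0)
xt = q_sample(x0, t, noise)
pred = denoiser(torch.cat([xt, t.float().view(-1, 1) / T], dim=-1))
loss = ((pred - noise) ** 2).mean()        # epsilon-prediction objective
print(float(loss))
```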
2504.07378 | Yongkang Dai | Yongkang Dai, Xiaoshui Huang, Yunpeng Bai, Hao Guo, Hongping Gan, Ling
Yang, Yilei Shi | BRepFormer: Transformer-Based B-rep Geometric Feature Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recognizing geometric features on B-rep models is a cornerstone technique for
multimedia content-based retrieval and has been widely applied in intelligent
manufacturing. However, previous research often merely focused on Machining
Feature Recognition (MFR), falling short in effectively capturing the intricate
topological and geometric characteristics of complex geometry features. In this
paper, we propose BRepFormer, a novel transformer-based model to recognize both
machining features and the features of complex CAD models. BRepFormer encodes and
fuses the geometric and topological features of the models. Afterwards,
BRepFormer utilizes a transformer architecture for feature propagation and a
recognition head to identify geometry features. During each iteration of the
transformer, we incorporate a bias that combines edge features and topology
features to reinforce geometric constraints on each face. In addition, we also
propose a dataset named Complex B-rep Feature Dataset (CBF), comprising 20,000
B-rep models. By covering more complex B-rep models, it is better aligned with
industrial applications. The experimental results demonstrate that BRepFormer
achieves state-of-the-art accuracy on the MFInstSeg, MFTRCAD, and our CBF
datasets.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 01:36:06 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Dai",
"Yongkang",
""
],
[
"Huang",
"Xiaoshui",
""
],
[
"Bai",
"Yunpeng",
""
],
[
"Guo",
"Hao",
""
],
[
"Gan",
"Hongping",
""
],
[
"Yang",
"Ling",
""
],
[
"Shi",
"Yilei",
""
]
] | TITLE: BRepFormer: Transformer-Based B-rep Geometric Feature Recognition
ABSTRACT: Recognizing geometric features on B-rep models is a cornerstone technique for
multimedia content-based retrieval and has been widely applied in intelligent
manufacturing. However, previous research often merely focused on Machining
Feature Recognition (MFR), falling short in effectively capturing the intricate
topological and geometric characteristics of complex geometry features. In this
paper, we propose BRepFormer, a novel transformer-based model to recognize both
machining features and the features of complex CAD models. BRepFormer encodes and
fuses the geometric and topological features of the models. Afterwards,
BRepFormer utilizes a transformer architecture for feature propagation and a
recognition head to identify geometry features. During each iteration of the
transformer, we incorporate a bias that combines edge features and topology
features to reinforce geometric constraints on each face. In addition, we also
propose a dataset named Complex B-rep Feature Dataset (CBF), comprising 20,000
B-rep models. By covering more complex B-rep models, it is better aligned with
industrial applications. The experimental results demonstrate that BRepFormer
achieves state-of-the-art accuracy on the MFInstSeg, MFTRCAD, and our CBF
datasets.
|
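The attention bias mechanism the BRepFormer abstract describes, combining edge and topology features to constrain face-to-face attention, can be sketched as an additive pairwise bias on attention scores; all shapes and the bias MLP below are illustrative assumptions.

```python
import torch
import torch.nn as nn

n_faces, d = 10, 32
face_feats = torch.randn(n_faces, d)
edge_topo_feats = torch.randn(n_faces, n_faces, 8)  # pairwise edge+topology features

q_proj, k_proj = nn.Linear(d, d), nn.Linear(d, d)
bias_mlp = nn.Linear(8, 1)                          # pairwise features -> scalar bias

q, k = q_proj(face_feats), k_proj(face_feats)
scores = q @ k.t() / d ** 0.5                       # (n_faces, n_faces)
scores = scores + bias_mlp(edge_topo_feats).squeeze(-1)  # geometric-constraint bias
attn = torch.softmax(scores, dim=-1)
out = attn @ face_feats                             # propagated face features
print(out.shape)
```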
2504.07382 | Zhishuo Xu | Qingchao Jiang, Zhishuo Xu, Zhiying Zhu, Ning Chen, Haoyue Wang,
Zhongjie Ba | Model Discrepancy Learning: Synthetic Faces Detection Based on
Multi-Reconstruction | 6 pages, 6 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Advances in image generation enable hyper-realistic synthetic faces but also
pose risks, thus making synthetic face detection crucial. Previous research
focuses on the general differences between generated images and real images,
often overlooking the discrepancies among various generative techniques. In
this paper, we explore the intrinsic relationship between synthetic images and
their corresponding generation technologies. We find that specific images
exhibit significant reconstruction discrepancies across different generative
methods and that matching generation techniques provide more accurate
reconstructions. Based on this insight, we propose a Multi-Reconstruction-based
detector. By reversing and reconstructing images using multiple generative
models, we analyze the reconstruction differences among real, GAN-generated,
and DM-generated images to facilitate effective differentiation. Additionally,
we introduce the Asian Synthetic Face Dataset (ASFD), containing synthetic
Asian faces generated with various GANs and DMs. This dataset complements
existing synthetic face datasets. Experimental results demonstrate that our
detector achieves exceptional performance, with strong generalization and
robustness.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 01:54:02 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Jiang",
"Qingchao",
""
],
[
"Xu",
"Zhishuo",
""
],
[
"Zhu",
"Zhiying",
""
],
[
"Chen",
"Ning",
""
],
[
"Wang",
"Haoyue",
""
],
[
"Ba",
"Zhongjie",
""
]
] | TITLE: Model Discrepancy Learning: Synthetic Faces Detection Based on
Multi-Reconstruction
ABSTRACT: Advances in image generation enable hyper-realistic synthetic faces but also
pose risks, thus making synthetic face detection crucial. Previous research
focuses on the general differences between generated images and real images,
often overlooking the discrepancies among various generative techniques. In
this paper, we explore the intrinsic relationship between synthetic images and
their corresponding generation technologies. We find that specific images
exhibit significant reconstruction discrepancies across different generative
methods and that matching generation techniques provide more accurate
reconstructions. Based on this insight, we propose a Multi-Reconstruction-based
detector. By reversing and reconstructing images using multiple generative
models, we analyze the reconstruction differences among real, GAN-generated,
and DM-generated images to facilitate effective differentiation. Additionally,
we introduce the Asian Synthetic Face Dataset (ASFD), containing synthetic
Asian faces generated with various GANs and DMs. This dataset complements
existing synthetic face datasets. Experimental results demonstrate that our
detector achieves exceptional performance, with strong generalization and
robustness.
|
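A hedged sketch of the decision rule suggested by the abstract above: invert and reconstruct an image with several generative models and use the vector of reconstruction errors as a detection feature. The reconstruct functions are placeholders for real GAN-inversion and diffusion-inversion pipelines.

```python
import numpy as np

def reconstruct_with_gan(img):   # placeholder for GAN inversion + synthesis
    return img + np.random.normal(0, 0.05, img.shape)

def reconstruct_with_dm(img):    # placeholder for diffusion inversion + sampling
    return img + np.random.normal(0, 0.10, img.shape)

def recon_error_features(img):
    errs = []
    for recon in (reconstruct_with_gan, reconstruct_with_dm):
        errs.append(float(np.mean((recon(img) - img) ** 2)))
    return np.array(errs)        # feed this vector to a downstream classifier

img = np.random.rand(64, 64, 3)
print(recon_error_features(img))
```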
2504.07392 | Darian Toma\v{s}evi\'c | Darian Toma\v{s}evi\'c, Fadi Boutros, Chenhao Lin, Naser Damer,
Vitomir \v{S}truc and Peter Peer | ID-Booth: Identity-consistent Face Generation with Diffusion Models | IEEE International Conference on Automatic Face and Gesture
Recognition (FG) 2025, 14 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in generative modeling have enabled the generation of
high-quality synthetic data that is applicable in a variety of domains,
including face recognition. Here, state-of-the-art generative models typically
rely on conditioning and fine-tuning of powerful pretrained diffusion models to
facilitate the synthesis of realistic images of a desired identity. Yet, these
models often do not consider the identity of subjects during training, leading
to poor consistency between generated and intended identities. In contrast,
methods that employ identity-based training objectives tend to overfit on
various aspects of the identity, and in turn, lower the diversity of images
that can be generated. To address these issues, we present in this paper a
novel generative diffusion-based framework, called ID-Booth. ID-Booth consists
of a denoising network responsible for data generation, a variational
auto-encoder for mapping images to and from a lower-dimensional latent space,
and a text encoder that allows for prompt-based control over the generation
procedure. The framework utilizes a novel triplet identity training objective
and enables identity-consistent image generation while retaining the synthesis
capabilities of pretrained diffusion models. Experiments with a
state-of-the-art latent diffusion model and diverse prompts reveal that our
method facilitates better intra-identity consistency and inter-identity
separability than competing methods, while achieving higher image diversity. In
turn, the produced data allows for effective augmentation of small-scale
datasets and training of better-performing recognition models in a
privacy-preserving manner. The source code for the ID-Booth framework is
publicly available at https://github.com/dariant/ID-Booth.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 02:20:18 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Tomašević",
"Darian",
""
],
[
"Boutros",
"Fadi",
""
],
[
"Lin",
"Chenhao",
""
],
[
"Damer",
"Naser",
""
],
[
"Štruc",
"Vitomir",
""
],
[
"Peer",
"Peter",
""
]
] | TITLE: ID-Booth: Identity-consistent Face Generation with Diffusion Models
ABSTRACT: Recent advances in generative modeling have enabled the generation of
high-quality synthetic data that is applicable in a variety of domains,
including face recognition. Here, state-of-the-art generative models typically
rely on conditioning and fine-tuning of powerful pretrained diffusion models to
facilitate the synthesis of realistic images of a desired identity. Yet, these
models often do not consider the identity of subjects during training, leading
to poor consistency between generated and intended identities. In contrast,
methods that employ identity-based training objectives tend to overfit on
various aspects of the identity, and in turn, lower the diversity of images
that can be generated. To address these issues, we present in this paper a
novel generative diffusion-based framework, called ID-Booth. ID-Booth consists
of a denoising network responsible for data generation, a variational
auto-encoder for mapping images to and from a lower-dimensional latent space,
and a text encoder that allows for prompt-based control over the generation
procedure. The framework utilizes a novel triplet identity training objective
and enables identity-consistent image generation while retaining the synthesis
capabilities of pretrained diffusion models. Experiments with a
state-of-the-art latent diffusion model and diverse prompts reveal that our
method facilitates better intra-identity consistency and inter-identity
separability than competing methods, while achieving higher image diversity. In
turn, the produced data allows for effective augmentation of small-scale
datasets and training of better-performing recognition models in a
privacy-preserving manner. The source code for the ID-Booth framework is
publicly available at https://github.com/dariant/ID-Booth.
|
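The "triplet identity training objective" in ID-Booth belongs to the family of triplet margin losses over identity embeddings; a generic sketch follows, with the caveat that the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def triplet_identity_loss(anchor, positive, negative, margin=0.3):
    # Pull same-identity embeddings together, push other identities away.
    d_pos = 1.0 - F.cosine_similarity(anchor, positive)
    d_neg = 1.0 - F.cosine_similarity(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

emb = lambda n: F.normalize(torch.randn(n, 512), dim=-1)  # toy face embeddings
print(float(triplet_identity_loss(emb(8), emb(8), emb(8))))
```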
2504.07395 | Arya Fayyazi | Arya Fayyazi, Mehdi Kamal, Massoud Pedram | FAIR-SIGHT: Fairness Assurance in Image Recognition via Simultaneous
Conformal Thresholding and Dynamic Output Repair | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | We introduce FAIR-SIGHT, an innovative post-hoc framework designed to ensure
fairness in computer vision systems by combining conformal prediction with a
dynamic output repair mechanism. Our approach calculates a fairness-aware
non-conformity score that simultaneously assesses prediction errors and
fairness violations. Using conformal prediction, we establish an adaptive
threshold that provides rigorous finite-sample, distribution-free guarantees.
When the non-conformity score for a new image exceeds the calibrated threshold,
FAIR-SIGHT implements targeted corrective adjustments, such as logit shifts for
classification and confidence recalibration for detection, to reduce both group
and individual fairness disparities, all without the need for retraining or
having access to internal model parameters. Comprehensive theoretical analysis
validates our method's error control and convergence properties. At the same
time, extensive empirical evaluations on benchmark datasets show that
FAIR-SIGHT significantly reduces fairness disparities while preserving high
predictive performance.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 02:23:06 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Fayyazi",
"Arya",
""
],
[
"Kamal",
"Mehdi",
""
],
[
"Pedram",
"Massoud",
""
]
] | TITLE: FAIR-SIGHT: Fairness Assurance in Image Recognition via Simultaneous
Conformal Thresholding and Dynamic Output Repair
ABSTRACT: We introduce FAIR-SIGHT, an innovative post-hoc framework designed to ensure
fairness in computer vision systems by combining conformal prediction with a
dynamic output repair mechanism. Our approach calculates a fairness-aware
non-conformity score that simultaneously assesses prediction errors and
fairness violations. Using conformal prediction, we establish an adaptive
threshold that provides rigorous finite-sample, distribution-free guarantees.
When the non-conformity score for a new image exceeds the calibrated threshold,
FAIR-SIGHT implements targeted corrective adjustments, such as logit shifts for
classification and confidence recalibration for detection, to reduce both group
and individual fairness disparities, all without the need for retraining or
having access to internal model parameters. Comprehensive theoretical analysis
validates our method's error control and convergence properties. At the same
time, extensive empirical evaluations on benchmark datasets show that
FAIR-SIGHT significantly reduces fairness disparities while preserving high
predictive performance.
|
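Split-conformal calibration, the mechanism behind FAIR-SIGHT's finite-sample threshold guarantee, reduces to a quantile computation on calibration scores; the sketch below pairs it with a toy logit-shift repair. The score definition and the repair rule are assumptions, not the paper's exact choices.

```python
import numpy as np

def conformal_threshold(cal_scores, alpha=0.1):
    # Finite-sample valid (1 - alpha) quantile on the calibration set.
    n = len(cal_scores)
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(cal_scores, min(q, 1.0))

rng = np.random.default_rng(0)
cal_scores = rng.random(500)              # stand-in non-conformity scores
tau = conformal_threshold(cal_scores)

logits = rng.normal(size=5)
score_new = 0.97
if score_new > tau:                       # flagged: apply a corrective shift
    logits = logits - 0.5 * (logits - logits.mean())
print(tau, logits)
```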
2504.07396 | Kenya Sakka | Kenya Sakka, Kosuke Mitarai and Keisuke Fujii | Automating quantum feature map design via large language models | 39 pages, 6 figures | null | null | null | quant-ph cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quantum feature maps are a key component of quantum machine learning,
encoding classical data into quantum states to exploit the expressive power of
high-dimensional Hilbert spaces. Despite their theoretical promise, designing
quantum feature maps that offer practical advantages over classical methods
remains an open challenge. In this work, we propose an agentic system that
autonomously generates, evaluates, and refines quantum feature maps using large
language models. The system consists of five components: Generation, Storage,
Validation, Evaluation, and Review. Using these components, it iteratively
improves quantum feature maps. Experiments on the MNIST dataset show that it
can successfully discover and refine feature maps without human intervention.
The best feature map generated outperforms existing quantum baselines and
achieves competitive accuracy compared to classical kernels across MNIST,
Fashion-MNIST, and CIFAR-10. Our approach provides a framework for exploring
dataset-adaptive quantum features and highlights the potential of LLM-driven
automation in quantum algorithm design.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 02:27:45 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Sakka",
"Kenya",
""
],
[
"Mitarai",
"Kosuke",
""
],
[
"Fujii",
"Keisuke",
""
]
] | TITLE: Automating quantum feature map design via large language models
ABSTRACT: Quantum feature maps are a key component of quantum machine learning,
encoding classical data into quantum states to exploit the expressive power of
high-dimensional Hilbert spaces. Despite their theoretical promise, designing
quantum feature maps that offer practical advantages over classical methods
remains an open challenge. In this work, we propose an agentic system that
autonomously generates, evaluates, and refines quantum feature maps using large
language models. The system consists of five components: Generation, Storage,
Validation, Evaluation, and Review. Using these components, it iteratively
improves quantum feature maps. Experiments on the MNIST dataset show that it
can successfully discover and refine feature maps without human intervention.
The best feature map generated outperforms existing quantum baselines and
achieves competitive accuracy compared to classical kernels across MNIST,
Fashion-MNIST, and CIFAR-10. Our approach provides a framework for exploring
dataset-adaptive quantum features and highlights the potential of LLM-driven
automation in quantum algorithm design.
|
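The five-component loop (Generation, Storage, Validation, Evaluation, Review) can be summarized as a simple control flow; the sketch below stubs each component, whereas the real system would call an LLM for generation and review and a quantum simulator for kernel evaluation.

```python
import random

def generate(history):        # stub: LLM proposes a candidate feature-map circuit
    return {"circuit": f"candidate_{len(history)}"}

def validate(fm):             # stub: does the circuit compile / obey qubit limits?
    return True

def evaluate(fm):             # stub: e.g. SVM accuracy with the induced kernel
    return random.random()

def review(fm, score):        # stub: LLM critiques the result to steer generation
    return f"score={score:.3f}; try deeper entanglement"

storage, best = [], (None, -1.0)
for _ in range(10):
    fm = generate(storage)
    if not validate(fm):
        continue
    score = evaluate(fm)
    storage.append((fm, score, review(fm, score)))
    if score > best[1]:
        best = (fm, score)
print(best)
```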
2504.07397 | Mojtaba Mohasel | Seyed Mojtaba Mohasel, John Sheppard, Lindsey K. Molina, Richard R.
Neptune, Shane R. Wurdeman, Corey A. Pew | MicroNAS: An Automated Framework for Developing a Fall Detection System | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | This work presents MicroNAS, an automated neural architecture search tool
specifically designed to create models optimized for microcontrollers with
small memory resources. The ESP32 microcontroller, with 320 KB of memory, is
used as the target platform. The artificial intelligence contribution lies in a
novel method for optimizing convolutional neural network and gated recurrent
unit architectures by considering the memory size of the target microcontroller
as a guide. A comparison is made between memory-driven model optimization and
traditional two-stage methods, which use pruning, to show the effectiveness of
the proposed framework. To demonstrate the engineering application of MicroNAS,
a fall detection system (FDS) for lower-limb amputees is developed as a pilot
study. A critical challenge in fall detection studies, class imbalance in the
dataset, is addressed. The results show that MicroNAS models achieved higher
F1-scores than alternative approaches, such as ensemble methods and H2O
Automated Machine Learning, presenting a significant step forward in real-time
FDS development. Biomechanists using body-worn sensors for activity detection
can adopt the open-source code to design machine learning models tailored for
microcontroller platforms with limited memory.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 02:32:47 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Mohasel",
"Seyed Mojtaba",
""
],
[
"Sheppard",
"John",
""
],
[
"Molina",
"Lindsey K.",
""
],
[
"Neptune",
"Richard R.",
""
],
[
"Wurdeman",
"Shane R.",
""
],
[
"Pew",
"Corey A.",
""
]
] | TITLE: MicroNAS: An Automated Framework for Developing a Fall Detection System
ABSTRACT: This work presents MicroNAS, an automated neural architecture search tool
specifically designed to create models optimized for microcontrollers with
small memory resources. The ESP32 microcontroller, with 320 KB of memory, is
used as the target platform. The artificial intelligence contribution lies in a
novel method for optimizing convolutional neural network and gated recurrent
unit architectures by considering the memory size of the target microcontroller
as a guide. A comparison is made between memory-driven model optimization and
traditional two-stage methods, which use pruning, to show the effectiveness of
the proposed framework. To demonstrate the engineering application of MicroNAS,
a fall detection system (FDS) for lower-limb amputees is developed as a pilot
study. A critical challenge in fall detection studies, class imbalance in the
dataset, is addressed. The results show that MicroNAS models achieved higher
F1-scores than alternative approaches, such as ensemble methods and H2O
Automated Machine Learning, presenting a significant step forward in real-time
FDS development. Biomechanists using body-worn sensors for activity detection
can adopt the open-source code to design machine learning models tailored for
microcontroller platforms with limited memory.
|
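Using the target microcontroller's memory as a hard search constraint, as MicroNAS does for the ESP32's 320 KB, can be sketched as a feasibility filter over candidate CNN+GRU configurations; the parameter formula and float32-weight assumption below are illustrative simplifications, not the paper's search space.

```python
import itertools

BUDGET_BYTES = 320 * 1024          # ESP32 memory budget from the abstract

def cnn_gru_param_bytes(conv_filters, gru_units, n_features=6, n_classes=2):
    # Parameter bytes for a tiny 1D-CNN + GRU classifier, float32 weights.
    conv = 3 * n_features * conv_filters + conv_filters           # k=3 conv1d
    gru = 3 * (gru_units * (conv_filters + gru_units) + gru_units)
    head = gru_units * n_classes + n_classes
    return 4 * (conv + gru + head)

feasible = [(f, u)
            for f, u in itertools.product([8, 16, 32, 64], [32, 64, 128, 256])
            if cnn_gru_param_bytes(f, u) <= BUDGET_BYTES]
print(len(feasible), "of 16 candidate architectures fit the budget")
```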
2504.07398 | Jun Yuan | Jun Yuan | A Novel Mamba-based Sequential Recommendation Method | null | null | null | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sequential recommendation (SR), which encodes user activity to predict the
next action, has emerged as a widely adopted strategy in developing commercial
personalized recommendation systems. Although Transformer-based models have
proven effective for sequential recommendation, the complexity of the
self-attention module in Transformers scales quadratically with the sequence
length. Controlling model complexity is essential for large-scale
recommendation systems, as these systems may need to handle billion-scale
vocabularies that evolve continuously, as well as user behavior sequences that
can exceed tens of thousands in length. In this paper, we propose Hydra, a
novel multi-head latent Mamba architecture that employs multiple low-dimensional
Mamba layers and fully connected layers coupled with positional encoding to
simultaneously capture historical and item information within each latent
subspace. Our proposed method not only enables scaling up to large-scale
parameters but also extends to multi-domain recommendation by integrating and
fine-tuning LLMs. Through extensive experiments on public datasets, we
demonstrate how Hydra effectively addresses the effectiveness-efficiency
dilemma, outperforming state-of-the-art sequential recommendation baselines
with significantly fewer parameters and reduced training time.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 02:43:19 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Yuan",
"Jun",
""
]
] | TITLE: A Novel Mamba-based Sequential Recommendation Method
ABSTRACT: Sequential recommendation (SR), which encodes user activity to predict the
next action, has emerged as a widely adopted strategy in developing commercial
personalized recommendation systems. Although Transformer-based models have
proven effective for sequential recommendation, the complexity of the
self-attention module in Transformers scales quadratically with the sequence
length. Controlling model complexity is essential for large-scale
recommendation systems, as these systems may need to handle billion-scale
vocabularies that evolve continuously, as well as user behavior sequences that
can exceed tens of thousands in length. In this paper, we propose Hydra, a
novel multi-head latent Mamba architecture that employs multiple low-dimensional
Mamba layers and fully connected layers coupled with positional encoding to
simultaneously capture historical and item information within each latent
subspace. Our proposed method not only enables scaling up to large-scale
parameters but also extends to multi-domain recommendation by integrating and
fine-tuning LLMs. Through extensive experiments on public datasets, we
demonstrate how Hydra effectively addresses the effectiveness-efficiency
dilemma, outperforming state-of-the-art sequential recommendation baselines
with significantly fewer parameters and reduced training time.
|
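A hedged sketch of a multi-head latent recurrence in the spirit of the abstract: each head runs a low-dimensional gated linear scan in O(sequence length), in contrast to quadratic self-attention. The real selective-scan (Mamba) machinery is omitted; this is not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiHeadLatentRecurrence(nn.Module):
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.h, self.d = n_heads, d_model // n_heads
        self.decay = nn.Parameter(torch.rand(n_heads, self.d))  # per-head gate
        self.proj_in = nn.Linear(d_model, d_model)
        self.proj_out = nn.Linear(d_model, d_model)

    def forward(self, x):                  # x: (batch, seq, d_model)
        b, t, _ = x.shape
        u = self.proj_in(x).view(b, t, self.h, self.d)
        a = torch.sigmoid(self.decay)      # (h, d), in (0, 1)
        state = torch.zeros(b, self.h, self.d)
        outs = []
        for i in range(t):                 # O(t) scan, not O(t^2) attention
            state = a * state + (1 - a) * u[:, i]
            outs.append(state)
        y = torch.stack(outs, dim=1).reshape(b, t, -1)
        return self.proj_out(y)

seq = torch.randn(2, 50, 64)               # toy user behavior sequence
print(MultiHeadLatentRecurrence()(seq).shape)
```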
2504.07400 | Nishanth Sridhar Nakshatri | Nishanth Nakshatri, Nikhil Mehta, Siyi Liu, Sihao Chen, Daniel J.
Hopkins, Dan Roth, Dan Goldwasser | Talking Point based Ideological Discourse Analysis in News Events | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Analyzing ideological discourse even in the age of LLMs remains a challenge,
as these models often struggle to capture the key elements that shape
real-world narratives. Specifically, LLMs fail to focus on characteristic
elements driving dominant discourses and lack the ability to integrate
contextual information required for understanding abstract ideological views.
To address these limitations, we propose a framework motivated by the theory of
ideological discourse analysis to analyze news articles related to real-world
events. Our framework represents the news articles using a relational structure
- talking points, which captures the interaction between entities, their roles,
and media frames along with a topic of discussion. It then constructs a
vocabulary of repeating themes - prominent talking points - which are used to
generate ideology-specific viewpoints (or partisan perspectives). We evaluate
our framework's ability to generate these perspectives through automated tasks
- ideology and partisan classification tasks, supplemented by human validation.
Additionally, we demonstrate straightforward applicability of our framework in
creating event snapshots, a visual way of interpreting event discourse. We
release the resulting dataset and model to the community to support further
research.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 02:52:34 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Nakshatri",
"Nishanth",
""
],
[
"Mehta",
"Nikhil",
""
],
[
"Liu",
"Siyi",
""
],
[
"Chen",
"Sihao",
""
],
[
"Hopkins",
"Daniel J.",
""
],
[
"Roth",
"Dan",
""
],
[
"Goldwasser",
"Dan",
""
]
] | TITLE: Talking Point based Ideological Discourse Analysis in News Events
ABSTRACT: Analyzing ideological discourse even in the age of LLMs remains a challenge,
as these models often struggle to capture the key elements that shape
real-world narratives. Specifically, LLMs fail to focus on characteristic
elements driving dominant discourses and lack the ability to integrate
contextual information required for understanding abstract ideological views.
To address these limitations, we propose a framework motivated by the theory of
ideological discourse analysis to analyze news articles related to real-world
events. Our framework represents the news articles using a relational structure
- talking points, which captures the interaction between entities, their roles,
and media frames along with a topic of discussion. It then constructs a
vocabulary of repeating themes - prominent talking points - which are used to
generate ideology-specific viewpoints (or partisan perspectives). We evaluate
our framework's ability to generate these perspectives through automated tasks
- ideology and partisan classification tasks, supplemented by human validation.
Additionally, we demonstrate straightforward applicability of our framework in
creating event snapshots, a visual way of interpreting event discourse. We
release the resulting dataset and model to the community to support further
research.
|
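One plausible schema for the "talking point" relational structure (entities, roles, media frames, topic) and for the vocabulary of prominent talking points; field names and the frequency heuristic are assumptions, not the authors' exact representation.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class TalkingPoint:
    topic: str
    entities: list          # e.g. ["Senate", "governor"]
    roles: dict             # entity -> role, e.g. {"Senate": "blocker"}
    frames: list            # media frames, e.g. ["economic", "legality"]

def prominent_talking_points(points, k=3):
    # A vocabulary of repeating themes: most frequent (topic, frame) pairs.
    counts = Counter((p.topic, f) for p in points for f in p.frames)
    return counts.most_common(k)

pts = [TalkingPoint("immigration", ["governor"], {"governor": "proposer"},
                    ["legality", "economic"]),
       TalkingPoint("immigration", ["Senate"], {"Senate": "blocker"},
                    ["legality"])]
print(prominent_talking_points(pts))
```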
2504.07403 | Sahasrajit Sarmasarkar | Sahasrajit Sarmasarkar, Zhihao Jiang, Ashish Goel, Aleksandra Korolova
and Kamesh Munagala | Multi-Selection for Recommendation Systems | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | We present the construction of a multi-selection model to answer
differentially private queries in the context of recommendation systems. The
server sends back multiple recommendations and a ``local model'' to the user,
which the user can run locally on its device to select the item that best fits
its private features. We study a setup where the server uses a deep neural
network (trained on the Movielens 25M dataset) as the ground truth for movie
recommendation. In the multi-selection paradigm, the average recommendation
utility is approximately 97\% of the optimal utility (as determined by the
ground truth neural network) while maintaining a local differential privacy
guarantee with $\epsilon$ of approximately 1 with respect to feature vectors of
neighboring users. This is in comparison to an average recommendation utility
of 91\% in the non-multi-selection regime under the same constraints.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 02:57:14 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Sarmasarkar",
"Sahasrajit",
""
],
[
"Jiang",
"Zhihao",
""
],
[
"Goel",
"Ashish",
""
],
[
"Korolova",
"Aleksandra",
""
],
[
"Munagala",
"Kamesh",
""
]
] | TITLE: Multi-Selection for Recommendation Systems
ABSTRACT: We present the construction of a multi-selection model to answer
differentially private queries in the context of recommendation systems. The
server sends back multiple recommendations and a ``local model'' to the user,
which the user can run locally on its device to select the item that best fits
its private features. We study a setup where the server uses a deep neural
network (trained on the Movielens 25M dataset) as the ground truth for movie
recommendation. In the multi-selection paradigm, the average recommendation
utility is approximately 97\% of the optimal utility (as determined by the
ground truth neural network) while maintaining a local differential privacy
guarantee with $\epsilon$ of approximately 1 with respect to feature vectors of
neighboring users. This is in comparison to an average recommendation utility
of 91\% in the non-multi-selection regime under the same constraints.
|
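The user-side half of the multi-selection protocol can be sketched as follows: the user releases a Laplace-perturbed feature vector (local DP, in the $\epsilon \approx 1$ regime the abstract mentions) and then scores the returned candidates on-device with its true features. The mechanism and the linear "local model" are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def laplace_ldp(x, epsilon=1.0, sensitivity=1.0):
    # Locally differentially private release of a feature vector.
    return x + rng.laplace(scale=sensitivity / epsilon, size=x.shape)

true_features = rng.random(16)
query = laplace_ldp(true_features)        # this is what the server sees

# Server returns k candidate items plus a small local scoring model (here a
# linear model per item); the user picks the best item privately on-device.
candidate_weights = rng.random((5, 16))   # k=5 local model rows
best_item = int(np.argmax(candidate_weights @ true_features))
print(best_item)
```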
2504.07406 | Yu-Hua Chen | Yu-Hua Chen, Yuan-Chiao Cheng, Yen-Tung Yeh, Jui-Te Wu, Jyh-Shing
Roger Jang and Yi-Hsuan Yang | Towards Generalizability to Tone and Content Variations in the
Transcription of Amplifier Rendered Electric Guitar Audio | null | null | null | null | cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | Transcribing electric guitar recordings is challenging due to the scarcity of
diverse datasets and the complex tone-related variations introduced by
amplifiers, cabinets, and effect pedals. To address these issues, we introduce
EGDB-PG, a novel dataset designed to capture a wide range of tone-related
characteristics across various amplifier-cabinet configurations. In addition,
we propose the Tone-informed Transformer (TIT), a Transformer-based
transcription model enhanced with a tone embedding mechanism that leverages
learned representations to improve the model's adaptability to tone-related
nuances. Experiments demonstrate that TIT, trained on EGDB-PG, outperforms
existing baselines across diverse amplifier types, with transcription accuracy
improvements driven by the dataset's diversity and the tone embedding
technique. Through detailed benchmarking and ablation studies, we evaluate the
impact of tone augmentation, content augmentation, audio normalization, and
tone embedding on transcription performance. This work advances electric guitar
transcription by overcoming limitations in dataset diversity and tone modeling,
providing a robust foundation for future research.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 03:01:14 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Chen",
"Yu-Hua",
""
],
[
"Cheng",
"Yuan-Chiao",
""
],
[
"Yeh",
"Yen-Tung",
""
],
[
"Wu",
"Jui-Te",
""
],
[
"Jang",
"Jyh-Shing Roger",
""
],
[
"Yang",
"Yi-Hsuan",
""
]
] | TITLE: Towards Generalizability to Tone and Content Variations in the
Transcription of Amplifier Rendered Electric Guitar Audio
ABSTRACT: Transcribing electric guitar recordings is challenging due to the scarcity of
diverse datasets and the complex tone-related variations introduced by
amplifiers, cabinets, and effect pedals. To address these issues, we introduce
EGDB-PG, a novel dataset designed to capture a wide range of tone-related
characteristics across various amplifier-cabinet configurations. In addition,
we propose the Tone-informed Transformer (TIT), a Transformer-based
transcription model enhanced with a tone embedding mechanism that leverages
learned representations to improve the model's adaptability to tone-related
nuances. Experiments demonstrate that TIT, trained on EGDB-PG, outperforms
existing baselines across diverse amplifier types, with transcription accuracy
improvements driven by the dataset's diversity and the tone embedding
technique. Through detailed benchmarking and ablation studies, we evaluate the
impact of tone augmentation, content augmentation, audio normalization, and
tone embedding on transcription performance. This work advances electric guitar
transcription by overcoming limitations in dataset diversity and tone modeling,
providing a robust foundation for future research.
|
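One common way to realize a tone embedding mechanism, sketched here under the assumption of prepending a learned tone token to the transcription encoder's input; TIT's actual conditioning may differ.

```python
import torch
import torch.nn as nn

d_model = 128
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
tone_table = nn.Embedding(32, d_model)    # 32 hypothetical amp/cabinet tones

audio_frames = torch.randn(2, 200, d_model)            # toy spectrogram features
tone = tone_table(torch.tensor([3, 17])).unsqueeze(1)  # (batch, 1, d_model)
x = torch.cat([tone, audio_frames], dim=1)             # tone token + frames
notes_logits = nn.Linear(d_model, 88)(encoder(x)[:, 1:])  # drop tone token
print(notes_logits.shape)                              # (2, 200, 88) piano roll
```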
2504.07415 | Jonggwon Park | Kyoyun Choi, Byungmu Yoon, Soobum Kim, Jonggwon Park | Leveraging LLMs for Multimodal Retrieval-Augmented Radiology Report
Generation via Key Phrase Extraction | null | null | null | null | cs.CV cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automated radiology report generation (RRG) holds potential to reduce
radiologists' workload, especially as recent advancements in large language
models (LLMs) enable the development of multimodal models for chest X-ray (CXR)
report generation. However, multimodal LLMs (MLLMs) are resource-intensive,
requiring vast datasets and substantial computational cost for training. To
address these challenges, we propose a retrieval-augmented generation approach
that leverages multimodal retrieval and LLMs to generate radiology reports
while mitigating hallucinations and reducing computational demands. Our method
uses LLMs to extract key phrases from radiology reports, effectively focusing
on essential diagnostic information. Through exploring effective training
strategies, including image encoder structure search, adding noise to text
embeddings, and additional training objectives, we combine complementary
pre-trained image encoders and adopt contrastive learning between text and
semantic image embeddings. We evaluate our approach on the MIMIC-CXR dataset,
achieving state-of-the-art results on CheXbert metrics and competitive RadGraph
F1 metric alongside MLLMs, without requiring LLM fine-tuning. Our method
demonstrates robust generalization for multi-view RRG, making it suitable for
comprehensive clinical applications.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 03:14:01 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Choi",
"Kyoyun",
""
],
[
"Yoon",
"Byungmu",
""
],
[
"Kim",
"Soobum",
""
],
[
"Park",
"Jonggwon",
""
]
] | TITLE: Leveraging LLMs for Multimodal Retrieval-Augmented Radiology Report
Generation via Key Phrase Extraction
ABSTRACT: Automated radiology report generation (RRG) holds potential to reduce
radiologists' workload, especially as recent advancements in large language
models (LLMs) enable the development of multimodal models for chest X-ray (CXR)
report generation. However, multimodal LLMs (MLLMs) are resource-intensive,
requiring vast datasets and substantial computational cost for training. To
address these challenges, we propose a retrieval-augmented generation approach
that leverages multimodal retrieval and LLMs to generate radiology reports
while mitigating hallucinations and reducing computational demands. Our method
uses LLMs to extract key phrases from radiology reports, effectively focusing
on essential diagnostic information. Through exploring effective training
strategies, including image encoder structure search, adding noise to text
embeddings, and additional training objectives, we combine complementary
pre-trained image encoders and adopt contrastive learning between text and
semantic image embeddings. We evaluate our approach on the MIMIC-CXR dataset,
achieving state-of-the-art results on CheXbert metrics and competitive RadGraph
F1 metric alongside MLLMs, without requiring LLM fine-tuning. Our method
demonstrates robust generalization for multi-view RRG, making it suitable for
comprehensive clinical applications.
|
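A hedged sketch of the multimodal retrieval step: an image embedding retrieves key phrases from prior reports by cosine similarity, and the retrieved phrases condition the report generator. The random encoders below stand in for the pretrained image/text encoders the paper combines.

```python
import numpy as np

rng = np.random.default_rng(0)
phrase_bank = ["no acute cardiopulmonary process",
               "small right pleural effusion",
               "mild cardiomegaly",
               "left lower lobe opacity"]
phrase_emb = rng.normal(size=(len(phrase_bank), 256))
phrase_emb /= np.linalg.norm(phrase_emb, axis=1, keepdims=True)

def retrieve_phrases(image_emb, k=2):
    image_emb = image_emb / np.linalg.norm(image_emb)
    sims = phrase_emb @ image_emb
    return [phrase_bank[i] for i in np.argsort(-sims)[:k]]

query = rng.normal(size=256)              # stand-in CXR embedding
print(retrieve_phrases(query))            # phrases to condition the LLM on
```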
2504.07418 | Anning Hu | Anning Hu, Ang Li, Xirui Jin, and Danping Zou | ThermoStereoRT: Thermal Stereo Matching in Real Time via Knowledge
Distillation and Attention-based Refinement | 7 pages, 6 figures, 4 tables. Accepted to IEEE ICRA 2025. This is the
preprint version | IEEE International Conference on Robotics and Automation (ICRA),
2025 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce ThermoStereoRT, a real-time thermal stereo matching method
designed for all-weather conditions that recovers disparity from two rectified
thermal stereo images, envisioning applications such as night-time drone
surveillance or under-bed cleaning robots. Leveraging a lightweight yet
powerful backbone, ThermoStereoRT constructs a 3D cost volume from thermal
images and employs multi-scale attention mechanisms to produce an initial
disparity map. To refine this map, we design a novel channel and spatial
attention module. Addressing the challenge of sparse ground truth data in
thermal imagery, we utilize knowledge distillation to boost performance without
increasing computational demands. Comprehensive evaluations on multiple
datasets demonstrate that ThermoStereoRT delivers both real-time capacity and
robust accuracy, making it a promising solution for real-world deployment in
various challenging environments. Our code will be released on
https://github.com/SJTU-ViSYS-team/ThermoStereoRT
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 03:24:21 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Hu",
"Anning",
""
],
[
"Li",
"Ang",
""
],
[
"Jin",
"Xirui",
""
],
[
"Zou",
"Danping",
""
]
] | TITLE: ThermoStereoRT: Thermal Stereo Matching in Real Time via Knowledge
Distillation and Attention-based Refinement
ABSTRACT: We introduce ThermoStereoRT, a real-time thermal stereo matching method
designed for all-weather conditions that recovers disparity from two rectified
thermal stereo images, envisioning applications such as night-time drone
surveillance or under-bed cleaning robots. Leveraging a lightweight yet
powerful backbone, ThermoStereoRT constructs a 3D cost volume from thermal
images and employs multi-scale attention mechanisms to produce an initial
disparity map. To refine this map, we design a novel channel and spatial
attention module. Addressing the challenge of sparse ground truth data in
thermal imagery, we utilize knowledge distillation to boost performance without
increasing computational demands. Comprehensive evaluations on multiple
datasets demonstrate that ThermoStereoRT delivers both real-time capability and
robust accuracy, making it a promising solution for real-world deployment in
various challenging environments. Our code will be released on
https://github.com/SJTU-ViSYS-team/ThermoStereoRT
|
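Constructing a 3D cost volume from rectified stereo features, the standard operation the abstract refers to, looks like the following; the correlation-style matching cost and feature sizes are assumptions.

```python
import torch

def build_cost_volume(feat_l, feat_r, max_disp=24):
    # feat_*: (batch, channels, H, W) -> cost: (batch, max_disp, H, W)
    b, c, h, w = feat_l.shape
    cost = feat_l.new_zeros(b, max_disp, h, w)
    for d in range(max_disp):
        if d == 0:
            cost[:, d] = (feat_l * feat_r).mean(dim=1)
        else:
            cost[:, d, :, d:] = (feat_l[..., d:] * feat_r[..., :-d]).mean(dim=1)
    return cost

fl, fr = torch.randn(1, 32, 60, 80), torch.randn(1, 32, 60, 80)
print(build_cost_volume(fl, fr).shape)    # torch.Size([1, 24, 60, 80])
```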
2504.07421 | Amirhossein Abaskohi | Amirhossein Abaskohi, Amrutha Varshini Ramesh, Shailesh Nanisetty,
Chirag Goel, David Vazquez, Christopher Pal, Spandana Gella, Giuseppe
Carenini, Issam H. Laradji | AgentAda: Skill-Adaptive Data Analytics for Tailored Insight Discovery | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | We introduce AgentAda, the first LLM-powered analytics agent that can learn
and use new analytics skills to extract more specialized insights. Unlike
existing methods that require users to manually decide which data analytics
method to apply, AgentAda automatically identifies the skill needed from a
library of analytical skills to perform the analysis. This also allows AgentAda
to use skills that existing LLMs cannot perform out of the box. The library
covers a range of methods, including clustering, predictive modeling, and NLP
techniques like BERT, which allow AgentAda to handle complex analytics tasks
based on what the user needs. AgentAda's dataset-to-insight extraction strategy
consists of three key steps: (I) a question generator to generate queries
relevant to the user's goal and persona, (II) a hybrid Retrieval-Augmented
Generation (RAG)-based skill matcher to choose the best data analytics skill
from the skill library, and (III) a code generator that produces executable
code based on the retrieved skill's documentation to extract key patterns. We
also introduce KaggleBench, a benchmark of curated notebooks across diverse
domains, to evaluate AgentAda's performance. We conducted a human evaluation
demonstrating that AgentAda provides more insightful analytics than existing
tools, with 48.78% of evaluators preferring its analyses, compared to 27.67%
for the unskilled agent. We also propose a novel LLM-as-a-judge approach that
we show is aligned with human evaluation as a way to automate insight quality
evaluation at larger scale.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 03:27:25 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Abaskohi",
"Amirhossein",
""
],
[
"Ramesh",
"Amrutha Varshini",
""
],
[
"Nanisetty",
"Shailesh",
""
],
[
"Goel",
"Chirag",
""
],
[
"Vazquez",
"David",
""
],
[
"Pal",
"Christopher",
""
],
[
"Gella",
"Spandana",
""
],
[
"Carenini",
"Giuseppe",
""
],
[
"Laradji",
"Issam H.",
""
]
] | TITLE: AgentAda: Skill-Adaptive Data Analytics for Tailored Insight Discovery
ABSTRACT: We introduce AgentAda, the first LLM-powered analytics agent that can learn
and use new analytics skills to extract more specialized insights. Unlike
existing methods that require users to manually decide which data analytics
method to apply, AgentAda automatically identifies the skill needed from a
library of analytical skills to perform the analysis. This also allows AgentAda
to use skills that existing LLMs cannot perform out of the box. The library
covers a range of methods, including clustering, predictive modeling, and NLP
techniques like BERT, which allow AgentAda to handle complex analytics tasks
based on what the user needs. AgentAda's dataset-to-insight extraction strategy
consists of three key steps: (I) a question generator to generate queries
relevant to the user's goal and persona, (II) a hybrid Retrieval-Augmented
Generation (RAG)-based skill matcher to choose the best data analytics skill
from the skill library, and (III) a code generator that produces executable
code based on the retrieved skill's documentation to extract key patterns. We
also introduce KaggleBench, a benchmark of curated notebooks across diverse
domains, to evaluate AgentAda's performance. We conducted a human evaluation
demonstrating that AgentAda provides more insightful analytics than existing
tools, with 48.78% of evaluators preferring its analyses, compared to 27.67%
for the unskilled agent. We also propose a novel LLM-as-a-judge approach that
we show is aligned with human evaluation as a way to automate insight quality
evaluation at larger scale.
|
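Step (II), the hybrid RAG-based skill matcher, can be sketched as a mix of lexical and dense similarity over skill documentation; the scoring mix, toy lexical score, and skill library below are illustrative assumptions.

```python
import numpy as np

skills = {"clustering": "group rows into segments with kmeans or dbscan",
          "forecasting": "predict future values of a time series",
          "topic modeling": "extract themes from free text with bert embeddings"}

rng = np.random.default_rng(0)
emb = {name: rng.normal(size=64) for name in skills}  # stand-in doc embeddings

def match_skill(query, query_emb, alpha=0.5):
    def lexical(doc):                     # toy BM25 stand-in: term overlap
        return len(set(query.lower().split()) & set(doc.split()))
    def dense(name):
        v = emb[name]
        return float(query_emb @ v /
                     (np.linalg.norm(query_emb) * np.linalg.norm(v)))
    scores = {n: alpha * lexical(d) + (1 - alpha) * dense(n)
              for n, d in skills.items()}
    return max(scores, key=scores.get)

print(match_skill("find customer segments", rng.normal(size=64)))
```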
2504.07422 | Yixin Zhang | Yixin Zhang, Yisong Chen | The Role of Machine Learning in Reducing Healthcare Costs: The Impact of
Medication Adherence and Preventive Care on Hospitalization Expenses | null | null | null | null | cs.LG cs.AI cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study reveals the important role of preventive care and medication
adherence in reducing hospitalizations. By using a structured dataset of 1,171
patients, four machine learning models Logistic Regression, Gradient Boosting,
Random Forest, and Artificial Neural Networks are applied to predict five-year
hospitalization risk, with the Gradient Boosting model achieving the highest
accuracy of 81.2%. The results demonstrate that high medication adherence and
consistent preventive care reduce hospitalization risk by 38.3% and 37.7%,
respectively. The findings also suggest that targeted preventive care
can have positive Return on Investment (ROI), and therefore ML models can
effectively direct personalized interventions and contribute to long-term
medical savings.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 03:28:42 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Zhang",
"Yixin",
""
],
[
"Chen",
"Yisong",
""
]
] | TITLE: The Role of Machine Learning in Reducing Healthcare Costs: The Impact of
Medication Adherence and Preventive Care on Hospitalization Expenses
ABSTRACT: This study reveals the important role of preventive care and medication
adherence in reducing hospitalizations. By using a structured dataset of 1,171
patients, four machine learning models Logistic Regression, Gradient Boosting,
Random Forest, and Artificial Neural Networks are applied to predict five-year
hospitalization risk, with the Gradient Boosting model achieving the highest
accuracy of 81.2%. The results demonstrate that high medication adherence and
consistent preventive care reduce hospitalization risk by 38.3% and 37.7%,
respectively. The findings also suggest that targeted preventive care
can have positive Return on Investment (ROI), and therefore ML models can
effectively direct personalized interventions and contribute to long-term
medical savings.
|
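The reported pipeline, a gradient boosting classifier for five-year hospitalization risk, is straightforward to sketch with scikit-learn; the synthetic features below merely stand in for the study's 1,171-patient dataset and do not reproduce its 81.2% result.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1171
X = np.column_stack([rng.random(n),           # medication adherence (0-1)
                     rng.integers(0, 2, n),   # consistent preventive care
                     rng.normal(55, 15, n)])  # age
# Toy label: worse adherence/care raises hospitalization risk, plus noise.
y = (0.6 * (1 - X[:, 0]) + 0.3 * (1 - X[:, 1])
     + 0.4 * rng.random(n) > 0.6).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(Xtr, ytr)
print("5-year hospitalization risk accuracy:", clf.score(Xte, yte))
```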
2504.07426 | Xinyu Tian | Xinyu Tian and Xiaotong Shen | Conditional Data Synthesis Augmentation | null | null | null | null | stat.ME cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reliable machine learning and statistical analysis rely on diverse,
well-distributed training data. However, real-world datasets are often limited
in size and exhibit underrepresentation across key subpopulations, leading to
biased predictions and reduced performance, particularly in supervised tasks
such as classification. To address these challenges, we propose Conditional
Data Synthesis Augmentation (CoDSA), a novel framework that leverages
generative models, such as diffusion models, to synthesize high-fidelity data
for improving model performance across multimodal domains including tabular,
textual, and image data. CoDSA generates synthetic samples that faithfully
capture the conditional distributions of the original data, with a focus on
under-sampled or high-interest regions. Through transfer learning, CoDSA
fine-tunes pre-trained generative models to enhance the realism of synthetic
data and increase sample density in sparse areas. This process preserves
inter-modal relationships, mitigates data imbalance, improves domain
adaptation, and boosts generalization. We also introduce a theoretical
framework that quantifies the statistical accuracy improvements enabled by
CoDSA as a function of synthetic sample volume and targeted region allocation,
providing formal guarantees of its effectiveness. Extensive experiments
demonstrate that CoDSA consistently outperforms non-adaptive augmentation
strategies and state-of-the-art baselines in both supervised and unsupervised
settings.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 03:38:11 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Tian",
"Xinyu",
""
],
[
"Shen",
"Xiaotong",
""
]
] | TITLE: Conditional Data Synthesis Augmentation
ABSTRACT: Reliable machine learning and statistical analysis rely on diverse,
well-distributed training data. However, real-world datasets are often limited
in size and exhibit underrepresentation across key subpopulations, leading to
biased predictions and reduced performance, particularly in supervised tasks
such as classification. To address these challenges, we propose Conditional
Data Synthesis Augmentation (CoDSA), a novel framework that leverages
generative models, such as diffusion models, to synthesize high-fidelity data
for improving model performance across multimodal domains including tabular,
textual, and image data. CoDSA generates synthetic samples that faithfully
capture the conditional distributions of the original data, with a focus on
under-sampled or high-interest regions. Through transfer learning, CoDSA
fine-tunes pre-trained generative models to enhance the realism of synthetic
data and increase sample density in sparse areas. This process preserves
inter-modal relationships, mitigates data imbalance, improves domain
adaptation, and boosts generalization. We also introduce a theoretical
framework that quantifies the statistical accuracy improvements enabled by
CoDSA as a function of synthetic sample volume and targeted region allocation,
providing formal guarantees of its effectiveness. Extensive experiments
demonstrate that CoDSA consistently outperforms non-adaptive augmentation
strategies and state-of-the-art baselines in both supervised and unsupervised
settings.
|
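CoDSA's targeted allocation idea, drawing more synthetic samples for under-sampled regions from a conditional generator, can be sketched as follows; the Gaussian "generator" is a trivial stand-in for a fine-tuned conditional diffusion model.

```python
import numpy as np

rng = np.random.default_rng(0)
group_counts = {"majority": 900, "minority": 100}
target_per_group = 600                      # desired post-augmentation count

def conditional_generator(group, n):        # placeholder for cond. diffusion
    mean = 0.0 if group == "majority" else 2.0
    return rng.normal(mean, 1.0, size=(n, 4))

synthetic = {g: conditional_generator(g, max(0, target_per_group - c))
             for g, c in group_counts.items()}
print({g: s.shape for g, s in synthetic.items()})  # minority gets 500 samples
```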
2504.07439 | Qi Liu | Qi Liu, Haozhe Duan, Yiqun Chen, Quanfeng Lu, Weiwei Sun, Jiaxin Mao | LLM4Ranking: An Easy-to-use Framework of Utilizing Large Language Models
for Document Reranking | null | null | null | null | cs.IR cs.CL | http://creativecommons.org/licenses/by/4.0/ | Utilizing large language models (LLMs) for document reranking has been a
popular and promising research direction in recent years; many studies are
dedicated to improving the performance and efficiency of using LLMs for
reranking. It can also be applied in many real-world applications,
such as search engines or retrieval-augmented generation. In response to the
growing demand for research and application in practice, we introduce a unified
framework, \textbf{LLM4Ranking}, which enables users to adopt different ranking
methods using open-source or closed-source API-based LLMs. Our framework
provides a simple and extensible interface for document reranking with LLMs, as
well as easy-to-use evaluation and fine-tuning scripts for this task. We
conducted experiments based on this framework and evaluated various models and
methods on several widely used datasets, providing reproducibility results on
utilizing LLMs for document reranking. Our code is publicly available at
https://github.com/liuqi6777/llm4ranking.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 04:08:38 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Liu",
"Qi",
""
],
[
"Duan",
"Haozhe",
""
],
[
"Chen",
"Yiqun",
""
],
[
"Lu",
"Quanfeng",
""
],
[
"Sun",
"Weiwei",
""
],
[
"Mao",
"Jiaxin",
""
]
] | TITLE: LLM4Ranking: An Easy-to-use Framework of Utilizing Large Language Models
for Document Reranking
ABSTRACT: Utilizing large language models (LLMs) for document reranking has been a
popular and promising research direction in recent years; many studies are
dedicated to improving the performance and efficiency of using LLMs for
reranking. It can also be applied in many real-world applications,
such as search engines or retrieval-augmented generation. In response to the
growing demand for research and application in practice, we introduce a unified
framework, \textbf{LLM4Ranking}, which enables users to adopt different ranking
methods using open-source or closed-source API-based LLMs. Our framework
provides a simple and extensible interface for document reranking with LLMs, as
well as easy-to-use evaluation and fine-tuning scripts for this task. We
conducted experiments based on this framework and evaluated various models and
methods on several widely used datasets, providing reproducible results on
utilizing LLMs for document reranking. Our code is publicly available at
https://github.com/liuqi6777/llm4ranking.
|
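A hedged sketch of what a "simple and extensible interface for document reranking" can look like; class and method names are assumptions, not LLM4Ranking's actual API (see the linked repository for that).

```python
from abc import ABC, abstractmethod

class Reranker(ABC):
    @abstractmethod
    def score(self, query: str, doc: str) -> float: ...

    def rerank(self, query, docs):
        return sorted(docs, key=lambda d: self.score(query, d), reverse=True)

class OverlapReranker(Reranker):
    # Stand-in scorer; a real subclass would prompt an LLM (pointwise,
    # pairwise, or listwise) and parse its relevance judgment.
    def score(self, query, doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))

docs = ["LLMs can rerank documents", "weather today",
        "document reranking with LLMs"]
print(OverlapReranker().rerank("LLM document reranking", docs))
```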
2504.07441 | Pengyu Wang | Huilin Yin, Pengyu Wang, Senmao Li, Jun Yan, and Daniel Watzenig | WS-DETR: Robust Water Surface Object Detection through Vision-Radar
Fusion with Detection Transformer | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robust object detection for Unmanned Surface Vehicles (USVs) in complex water
environments is essential for reliable navigation and operation. Specifically,
water surface object detection faces challenges from blurred edges and diverse
object scales. Although vision-radar fusion offers a feasible solution,
existing approaches suffer from cross-modal feature conflicts, which negatively
affect model robustness. To address this problem, we propose a robust
vision-radar fusion model WS-DETR. In particular, we first introduce a
Multi-Scale Edge Information Integration (MSEII) module to enhance edge
perception and a Hierarchical Feature Aggregator (HiFA) to boost multi-scale
object detection in the encoder. Then, we adopt self-moving point
representations for continuous convolution and residual connection to
efficiently extract irregular features from irregular point
cloud data. To further mitigate cross-modal conflicts, an Adaptive Feature
Interactive Fusion (AFIF) module is introduced to integrate visual and radar
features through geometric alignment and semantic fusion. Extensive experiments
on the WaterScenes dataset demonstrate that WS-DETR achieves state-of-the-art
(SOTA) performance, maintaining its superiority even under adverse weather and
lighting conditions.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 04:16:46 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Yin",
"Huilin",
""
],
[
"Wang",
"Pengyu",
""
],
[
"Li",
"Senmao",
""
],
[
"Yan",
"Jun",
""
],
[
"Watzenig",
"Daniel",
""
]
] | TITLE: WS-DETR: Robust Water Surface Object Detection through Vision-Radar
Fusion with Detection Transformer
ABSTRACT: Robust object detection for Unmanned Surface Vehicles (USVs) in complex water
environments is essential for reliable navigation and operation. Specifically,
water surface object detection faces challenges from blurred edges and diverse
object scales. Although vision-radar fusion offers a feasible solution,
existing approaches suffer from cross-modal feature conflicts, which negatively
affect model robustness. To address this problem, we propose a robust
vision-radar fusion model WS-DETR. In particular, we first introduce a
Multi-Scale Edge Information Integration (MSEII) module to enhance edge
perception and a Hierarchical Feature Aggregator (HiFA) to boost multi-scale
object detection in the encoder. Then, we adopt self-moving point
representations for continuous convolution and residual connection to
efficiently extract irregular features from irregular point
cloud data. To further mitigate cross-modal conflicts, an Adaptive Feature
Interactive Fusion (AFIF) module is introduced to integrate visual and radar
features through geometric alignment and semantic fusion. Extensive experiments
on the WaterScenes dataset demonstrate that WS-DETR achieves state-of-the-art
(SOTA) performance, maintaining its superiority even under adverse weather and
lighting conditions.
|
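A rough, hedged analogue of adaptive vision-radar fusion: radar features are attended into the visual tokens and blended through a learned gate, which is one generic way to mitigate cross-modal conflicts. Shapes and the gating form are assumptions, not the paper's AFIF module.

```python
import torch
import torch.nn as nn

d = 64
vis = torch.randn(1, 400, d)               # 20x20 visual tokens
radar = torch.randn(1, 50, d)              # 50 radar point features (aligned)

attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
gate = nn.Sequential(nn.Linear(2 * d, d), nn.Sigmoid())

radar_ctx, _ = attn(query=vis, key=radar, value=radar)  # radar -> vision
g = gate(torch.cat([vis, radar_ctx], dim=-1))           # adaptive weighting
fused = g * radar_ctx + (1 - g) * vis                   # soft conflict handling
print(fused.shape)
```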
2504.07450 | Weijie Chen | Weijie Chen, James Wang, Alan McMillan | Synthetic CT Generation from Time-of-Flight Non-Attenuation-Corrected
PET for Whole-Body PET Attenuation Correction | 4 pages, 2 figures, ISBI 2025 | null | null | null | eess.IV cs.AI cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Positron Emission Tomography (PET) imaging requires accurate attenuation
correction (AC) to account for photon loss due to tissue density variations. In
PET/MR systems, computed tomography (CT), which offers a straightforward
estimation of AC is not available. This study presents a deep learning approach
to generate synthetic CT (sCT) images directly from Time-of-Flight (TOF)
non-attenuation corrected (NAC) PET images, enhancing AC for PET/MR. We first
evaluated models pre-trained on large-scale natural image datasets for a
CT-to-CT reconstruction task, finding that the pre-trained model outperformed
those trained solely on medical datasets. The pre-trained model was then
fine-tuned using an institutional dataset of 35 TOF NAC PET and CT volume
pairs, achieving the lowest mean absolute error (MAE) of 74.49 HU and highest
peak signal-to-noise ratio (PSNR) of 28.66 dB within the body contour region.
Visual assessments demonstrated improved reconstruction of both bone and soft
tissue structures from TOF NAC PET images. This work highlights the
effectiveness of using pre-trained deep learning models for medical image
translation tasks. Future work will assess the impact of sCT on PET attenuation
correction and explore additional neural network architectures and datasets to
further enhance performance and practical applications in PET imaging.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 04:49:41 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Chen",
"Weijie",
""
],
[
"Wang",
"James",
""
],
[
"McMillan",
"Alan",
""
]
] | TITLE: Synthetic CT Generation from Time-of-Flight Non-Attenuation-Corrected
PET for Whole-Body PET Attenuation Correction
ABSTRACT: Positron Emission Tomography (PET) imaging requires accurate attenuation
correction (AC) to account for photon loss due to tissue density variations. In
PET/MR systems, computed tomography (CT), which offers a straightforward
estimation of AC, is not available. This study presents a deep learning approach
to generate synthetic CT (sCT) images directly from Time-of-Flight (TOF)
non-attenuation corrected (NAC) PET images, enhancing AC for PET/MR. We first
evaluated models pre-trained on large-scale natural image datasets for a
CT-to-CT reconstruction task, finding that the pre-trained model outperformed
those trained solely on medical datasets. The pre-trained model was then
fine-tuned using an institutional dataset of 35 TOF NAC PET and CT volume
pairs, achieving the lowest mean absolute error (MAE) of 74.49 HU and highest
peak signal-to-noise ratio (PSNR) of 28.66 dB within the body contour region.
Visual assessments demonstrated improved reconstruction of both bone and soft
tissue structures from TOF NAC PET images. This work highlights the
effectiveness of using pre-trained deep learning models for medical image
translation tasks. Future work will assess the impact of sCT on PET attenuation
correction and explore additional neural network architectures and datasets to
further enhance performance and practical applications in PET imaging.
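MAE in Hounsfield units and PSNR restricted to a body contour, as reported
above, are standard metrics. A small sketch of how they might be computed is
given below; the mask construction and the HU data range are assumptions for
illustration, not the paper's evaluation code.

```python
# Sketch: MAE (HU) and PSNR restricted to a body-contour mask.
# `mask`, `data_range`, and the array shapes are illustrative assumptions.
import numpy as np

def masked_mae(ct_true, ct_pred, mask):
    # Mean absolute error in Hounsfield units over masked voxels only.
    return np.abs(ct_true[mask] - ct_pred[mask]).mean()

def masked_psnr(ct_true, ct_pred, mask, data_range=4000.0):
    # PSNR = 10 * log10(MAX^2 / MSE); data_range spans the HU window.
    mse = ((ct_true[mask] - ct_pred[mask]) ** 2).mean()
    return 10.0 * np.log10(data_range**2 / mse)

rng = np.random.default_rng(0)
ct_true = rng.uniform(-1000, 3000, size=(64, 64, 64))
ct_pred = ct_true + rng.normal(0, 50, size=ct_true.shape)
mask = ct_true > -500  # crude stand-in for a body contour
print(masked_mae(ct_true, ct_pred, mask), masked_psnr(ct_true, ct_pred, mask))
```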
|
2504.07453 | Anzhen Li | Anzhen Li, Shufan Qing, Xiaochang Li, Rui Mao and Mingchen Feng | Probability Estimation and Scheduling Optimization for Battery Swap
Stations via LRU-Enhanced Genetic Algorithm and Dual-Factor Decision System | null | null | null | null | cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To address the challenges of limited Battery Swap Stations datasets, high
operational costs, and fluctuating user charging demand, this research proposes
a probability estimation model based on charging pile data and constructs nine
scenario-specific battery swap demand datasets. In addition, this study
combines a Least Recently Used (LRU) strategy with a Genetic Algorithm (GA) and incorporates a
guided search mechanism, which effectively enhances the global optimization
capability. Thus, a dual-factor decision-making based charging schedule
optimization system is constructed. Experimental results show that the
constructed datasets exhibit stable trend characteristics, adhering to 24-hour
and 168-hour periodicity patterns, with outlier ratios consistently below
3.26%, confirming data validity. Compared to the baseline, the improved
algorithm finds individuals with better fitness in 80% of test regions under
the same number of iterations. When benchmarked against an immediate
swap-and-charge strategy, our
algorithm achieves a peak cost reduction of 13.96%. Moreover, peak user
satisfaction reaches 98.57%, while the average iteration time remains below 0.6
seconds, demonstrating good computational efficiency. The complete datasets and
optimization algorithm are open-sourced at
https://github.com/qingshufan/GA-EVLRU.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 04:58:24 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Li",
"Anzhen",
""
],
[
"Qing",
"Shufan",
""
],
[
"Li",
"Xiaochang",
""
],
[
"Mao",
"Rui",
""
],
[
"Feng",
"Mingchen",
""
]
] | TITLE: Probability Estimation and Scheduling Optimization for Battery Swap
Stations via LRU-Enhanced Genetic Algorithm and Dual-Factor Decision System
ABSTRACT: To address the challenges of limited Battery Swap Stations datasets, high
operational costs, and fluctuating user charging demand, this research proposes
a probability estimation model based on charging pile data and constructs nine
scenario-specific battery swap demand datasets. In addition, this study
combines a Least Recently Used (LRU) strategy with a Genetic Algorithm (GA) and incorporates a
guided search mechanism, which effectively enhances the global optimization
capability. Thus, a dual-factor decision-making based charging schedule
optimization system is constructed. Experimental results show that the
constructed datasets exhibit stable trend characteristics, adhering to 24-hour
and 168-hour periodicity patterns, with outlier ratios consistently below
3.26%, confirming data validity. Compared to the baseline, the improved
algorithm finds individuals with better fitness in 80% of test regions under
the same number of iterations. When benchmarked against an immediate
swap-and-charge strategy, our
algorithm achieves a peak cost reduction of 13.96%. Moreover, peak user
satisfaction reaches 98.57%, while the average iteration time remains below 0.6
seconds, demonstrating good computational efficiency. The complete datasets and
optimization algorithm are open-sourced at
https://github.com/qingshufan/GA-EVLRU.
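The abstract does not spell out how the LRU strategy is coupled with the GA;
one plausible reading, penalizing recently selected solutions to keep the
search diverse, is sketched below. All names and parameters are hypothetical;
the authors' actual coupling is in the linked repository.

```python
# Hypothetical sketch of an LRU-style diversity mechanism inside a GA loop.
# The actual GA-EVLRU coupling may differ; see the authors' repository.
import random
from collections import OrderedDict

def evolve(population, fitness, lru_capacity=64, penalty=0.1, generations=100):
    recently_used = OrderedDict()  # acts as an LRU set of solution signatures
    for _ in range(generations):
        def score(ind):
            s = fitness(ind)
            if tuple(ind) in recently_used:
                s -= penalty  # discourage re-selecting stale solutions
            return s
        population.sort(key=score, reverse=True)
        parents = population[: len(population) // 2]
        for p in parents:
            recently_used[tuple(p)] = True
            recently_used.move_to_end(tuple(p))
            if len(recently_used) > lru_capacity:
                recently_used.popitem(last=False)  # evict least recently used
        # One-point crossover refills the population.
        children = []
        while len(parents) + len(children) < len(population):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))
            children.append(a[:cut] + b[cut:])
        population = parents + children
    return max(population, key=fitness)

pop = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]
best = evolve(pop, fitness=lambda ind: sum(ind))
print(best, sum(best))
```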
|
2504.07454 | Zitian Tang | Zitian Tang, Shijie Wang, Junho Cho, Jaewook Yoo, Chen Sun | How Can Objects Help Video-Language Understanding? | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How multimodal large language models (MLLMs) perceive the visual world
remains a mystery. At one extreme, object and relation modeling may be
implicitly implemented with inductive biases, for example by treating objects
as tokens. At the other extreme, empirical results reveal the surprising
finding that simply performing visual captioning, which tends to ignore the
spatial configuration of the objects, serves as a strong baseline for video
understanding. We aim to answer the question: how can objects help
video-language understanding in MLLMs? We tackle the question from the object
representation and adaptation perspectives. Specifically, we investigate the
trade-off between representation expressiveness (e.g., distributed versus
symbolic) and integration difficulty (e.g., data-efficiency when learning the
adapters). Through extensive evaluations on five video question answering
datasets, we confirm that explicit integration of object-centric representation
remains necessary, and the symbolic objects can be most easily integrated while
being performant for question answering. We hope our findings can encourage the
community to explore the explicit integration of perception modules into MLLM
design. Our code and models will be publicly released.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 04:59:28 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Tang",
"Zitian",
""
],
[
"Wang",
"Shijie",
""
],
[
"Cho",
"Junho",
""
],
[
"Yoo",
"Jaewook",
""
],
[
"Sun",
"Chen",
""
]
] | TITLE: How Can Objects Help Video-Language Understanding?
ABSTRACT: How multimodal large language models (MLLMs) perceive the visual world
remains a mystery. At one extreme, object and relation modeling may be
implicitly implemented with inductive biases, for example by treating objects
as tokens. At the other extreme, empirical results reveal the surprising
finding that simply performing visual captioning, which tends to ignore the
spatial configuration of the objects, serves as a strong baseline for video
understanding. We aim to answer the question: how can objects help
video-language understanding in MLLMs? We tackle the question from the object
representation and adaptation perspectives. Specifically, we investigate the
trade-off between representation expressiveness (e.g., distributed versus
symbolic) and integration difficulty (e.g., data-efficiency when learning the
adapters). Through extensive evaluations on five video question answering
datasets, we confirm that explicit integration of object-centric representation
remains necessary, and the symbolic objects can be most easily integrated while
being performant for question answering. We hope our findings can encourage the
community to explore the explicit integration of perception modules into MLLM
design. Our code and models will be publicly released.
|
2504.07461 | Yijiang Li | Yiting Zhang, Yijiang Li, Tianwei Zhao, Kaijie Zhu, Haohan Wang, Nuno
Vasconcelos | Achilles Heel of Distributed Multi-Agent Systems | null | null | null | null | cs.MA | http://creativecommons.org/licenses/by/4.0/ | Multi-agent systems (MAS) have demonstrated exceptional capabilities in
addressing complex challenges, largely due to the integration of multiple large
language models (LLMs). However, the heterogeneity of LLMs, the challenge of
scaling to large numbers of LLMs, and local computational constraints pose significant
challenges to hosting these models locally. To address these issues, we propose
a new framework termed Distributed Multi-Agent System (DMAS). In DMAS,
heterogeneous third-party agents function as service providers managed remotely
by a central MAS server and each agent offers its services through API
interfaces. However, the distributed nature of DMAS introduces several concerns
about trustworthiness. In this paper, we study the Achilles heel of distributed
multi-agent systems, identifying four critical trustworthiness challenges: free
riding, susceptibility to malicious attacks, communication inefficiencies, and
system instability. Extensive experiments across seven frameworks and four
datasets reveal significant vulnerabilities of the DMAS. These attack
strategies can lead to a performance degradation of up to 80% and attain a 100%
success rate in executing free riding and malicious attacks. We envision our
work will serve as a useful red-teaming tool for evaluating future multi-agent
systems and spark further research on trustworthiness challenges in distributed
multi-agent systems.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 05:16:11 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Zhang",
"Yiting",
""
],
[
"Li",
"Yijiang",
""
],
[
"Zhao",
"Tianwei",
""
],
[
"Zhu",
"Kaijie",
""
],
[
"Wang",
"Haohan",
""
],
[
"Vasconcelos",
"Nuno",
""
]
] | TITLE: Achilles Heel of Distributed Multi-Agent Systems
ABSTRACT: Multi-agent systems (MAS) have demonstrated exceptional capabilities in
addressing complex challenges, largely due to the integration of multiple large
language models (LLMs). However, the heterogeneity of LLMs, the challenge of
scaling to large numbers of LLMs, and local computational constraints pose significant
challenges to hosting these models locally. To address these issues, we propose
a new framework termed Distributed Multi-Agent System (DMAS). In DMAS,
heterogeneous third-party agents function as service providers managed remotely
by a central MAS server and each agent offers its services through API
interfaces. However, the distributed nature of DMAS introduces several concerns
about trustworthiness. In this paper, we study the Achilles heel of distributed
multi-agent systems, identifying four critical trustworthiness challenges: free
riding, susceptibility to malicious attacks, communication inefficiencies, and
system instability. Extensive experiments across seven frameworks and four
datasets reveal significant vulnerabilities of the DMAS. These attack
strategies can lead to a performance degradation of up to 80% and attain a 100%
success rate in executing free riding and malicious attacks. We envision our
work will serve as a useful red-teaming tool for evaluating future multi-agent
systems and spark further research on trustworthiness challenges in distributed
multi-agent systems.
|
2504.07462 | Hengrun Zhao | Hengrun Zhao, Yunzhi Zhuge, Yifan Wang, Lijun Wang, Huchuan Lu, Yu
Zeng | Learning Universal Features for Generalizable Image Forgery Localization | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, advanced image editing and generation methods have rapidly
evolved, making detecting and locating forged image content increasingly
challenging. Most existing image forgery detection methods rely on identifying
the editing traces left in the image. However, because the traces of different
forgeries are distinct, these methods can identify familiar forgeries included
in the training data but struggle to handle unseen ones. In response, we
present an approach for Generalizable Image Forgery Localization (GIFL). Once
trained, our model can detect both seen and unseen forgeries, providing a more
practical and efficient solution to counter false information in the era of
generative AI. Our method focuses on learning general features from the
pristine content rather than traces of specific forgeries, which are relatively
consistent across different types of forgeries and therefore can be used as
universal features to locate unseen forgeries. Additionally, as existing image
forgery datasets are still dominated by traditional hand-crafted forgeries, we
construct a new dataset consisting of images edited by various popular deep
generative image editing methods to further encourage research in detecting
images manipulated by deep generative models. Extensive experimental results
show that the proposed approach outperforms state-of-the-art methods in the
detection of unseen forgeries and also demonstrates competitive results for
seen forgeries. The code and dataset are available at
https://github.com/ZhaoHengrun/GIFL.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 05:20:29 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Zhao",
"Hengrun",
""
],
[
"Zhuge",
"Yunzhi",
""
],
[
"Wang",
"Yifan",
""
],
[
"Wang",
"Lijun",
""
],
[
"Lu",
"Huchuan",
""
],
[
"Zeng",
"Yu",
""
]
] | TITLE: Learning Universal Features for Generalizable Image Forgery Localization
ABSTRACT: In recent years, advanced image editing and generation methods have rapidly
evolved, making detecting and locating forged image content increasingly
challenging. Most existing image forgery detection methods rely on identifying
the editing traces left in the image. However, because the traces of different
forgeries are distinct, these methods can identify familiar forgeries included
in the training data but struggle to handle unseen ones. In response, we
present an approach for Generalizable Image Forgery Localization (GIFL). Once
trained, our model can detect both seen and unseen forgeries, providing a more
practical and efficient solution to counter false information in the era of
generative AI. Our method focuses on learning general features from the
pristine content rather than traces of specific forgeries, which are relatively
consistent across different types of forgeries and therefore can be used as
universal features to locate unseen forgeries. Additionally, as existing image
forgery datasets are still dominated by traditional hand-crafted forgeries, we
construct a new dataset consisting of images edited by various popular deep
generative image editing methods to further encourage research in detecting
images manipulated by deep generative models. Extensive experimental results
show that the proposed approach outperforms state-of-the-art methods in the
detection of unseen forgeries and also demonstrates competitive results for
seen forgeries. The code and dataset are available at
https://github.com/ZhaoHengrun/GIFL.
|
2504.07468 | Santanu Roy Dr | Santanu Roy, Ashvath Suresh, Palak Sahu, and Tulika Rudra Gupta | Novel Pooling-based VGG-Lite for Pneumonia and Covid-19 Detection from
Imbalanced Chest X-Ray Datasets | 12 pages | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | This paper proposes a novel pooling-based VGG-Lite model in order to mitigate
class imbalance issues in Chest X-Ray (CXR) datasets. Automatic Pneumonia
detection from CXR images by deep learning models has emerged as a prominent
and dynamic area of research since the emergence of Covid-19 in 2020. However,
standard Convolutional Neural Network (CNN) models encounter
challenges associated with class imbalance, a prevalent issue found in many
medical datasets. The innovations introduced in the proposed model architecture
include: (I) A very lightweight CNN model, `VGG-Lite', is proposed as a base
model, inspired by VGG-16 and MobileNet-V2 architecture. (II) On top of this
base model, we leverage an ``Edge Enhanced Module (EEM)" through a parallel
branch, consisting of a ``negative image layer", and a novel custom pooling
layer ``2Max-Min Pooling". This 2Max-Min Pooling layer is entirely novel in
this investigation, providing more attention to edge components within
pneumonia CXR images. Thus, it works as an efficient spatial attention module
(SAM). We have implemented the proposed framework on two separate CXR datasets.
The first dataset is obtained from a readily available source on the internet,
and the second dataset is a more challenging CXR dataset, assembled by our
research team from three different sources. Experimental results reveal that
our proposed framework has outperformed pre-trained CNN models, and three
recent models, ``Vision Transformer", ``Pooling-based Vision
Transformer (PiT)'' and ``PneuNet", by substantial margins on both datasets.
The proposed framework, VGG-Lite with EEM, has achieved a macro average of 95%
accuracy, 97.1% precision, 96.1% recall, and 96.6% F1 score on the ``Pneumonia
Imbalance CXR dataset", without employing any pre-processing technique.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 05:38:46 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Roy",
"Santanu",
""
],
[
"Suresh",
"Ashvath",
""
],
[
"Sahu",
"Palak",
""
],
[
"Gupta",
"Tulika Rudra",
""
]
] | TITLE: Novel Pooling-based VGG-Lite for Pneumonia and Covid-19 Detection from
Imbalanced Chest X-Ray Datasets
ABSTRACT: This paper proposes a novel pooling-based VGG-Lite model in order to mitigate
class imbalance issues in Chest X-Ray (CXR) datasets. Automatic Pneumonia
detection from CXR images by deep learning models has emerged as a prominent
and dynamic area of research since the emergence of Covid-19 in 2020. However,
standard Convolutional Neural Network (CNN) models encounter
challenges associated with class imbalance, a prevalent issue found in many
medical datasets. The innovations introduced in the proposed model architecture
include: (I) A very lightweight CNN model, `VGG-Lite', is proposed as a base
model, inspired by VGG-16 and MobileNet-V2 architecture. (II) On top of this
base model, we leverage an ``Edge Enhanced Module (EEM)" through a parallel
branch, consisting of a ``negative image layer", and a novel custom pooling
layer ``2Max-Min Pooling". This 2Max-Min Pooling layer is entirely novel in
this investigation, providing more attention to edge components within
pneumonia CXR images. Thus, it works as an efficient spatial attention module
(SAM). We have implemented the proposed framework on two separate CXR datasets.
The first dataset is obtained from a readily available source on the internet,
and the second dataset is a more challenging CXR dataset, assembled by our
research team from three different sources. Experimental results reveal that
our proposed framework has outperformed pre-trained CNN models, and three
recent models, ``Vision Transformer", ``Pooling-based Vision
Transformer (PiT)'' and ``PneuNet", by substantial margins on both datasets.
The proposed framework, VGG-Lite with EEM, has achieved a macro average of 95%
accuracy, 97.1% precision, 96.1% recall, and 96.6% F1 score on the ``Pneumonia
Imbalance CXR dataset", without employing any pre-processing technique.
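The paper does not define ``2Max-Min Pooling" beyond its edge-enhancing role.
One speculative reading, doubling the local maximum and subtracting the local
minimum so that high-contrast (edge) windows respond strongly, is sketched
below purely as an assumption, not the authors' layer.

```python
# Hypothetical interpretation of a "2Max-Min" pooling layer (assumption:
# 2 * maxpool - minpool, which responds strongly to high local contrast,
# i.e. edges). This is NOT the paper's definition.
import torch
import torch.nn.functional as F

def two_max_min_pool(x: torch.Tensor, kernel: int = 2, stride: int = 2) -> torch.Tensor:
    max_pool = F.max_pool2d(x, kernel, stride)
    min_pool = -F.max_pool2d(-x, kernel, stride)  # min-pool via negated max-pool
    return 2.0 * max_pool - min_pool

x = torch.randn(1, 3, 224, 224)
print(two_max_min_pool(x).shape)  # torch.Size([1, 3, 112, 112])
```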
|
2504.07471 | Yongcheol Kim | Erdenebileg Batbaatar, Jeonggeol Kim, Yongcheol Kim, Young Yoon | Traversal Learning Coordination For Lossless And Efficient Distributed
Learning | null | null | null | null | cs.LG cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we introduce Traversal Learning (TL), a novel approach
designed to address the problem of decreased quality encountered in popular
distributed learning (DL) paradigms such as Federated Learning (FL), Split
Learning (SL), and SplitFed Learning (SFL). Traditional FL suffers from an
accuracy drop during aggregation due to its averaging function, while SL and
SFL face increased loss due to the independent gradient updates on each split
network. TL adopts a unique strategy where the model traverses the nodes during
forward propagation (FP) and performs backward propagation (BP) on the
orchestrator, effectively implementing centralized learning (CL) principles
within a distributed environment. The orchestrator is tasked with generating
virtual batches and planning the sequential node visits of the model during FP,
aligning them with the ordered index of the data within these batches. We
conducted experiments on six datasets representing diverse characteristics
across various domains. Our evaluation demonstrates that TL is on par with
classic CL approaches in terms of accurate inference, thereby offering a viable
and robust solution for DL tasks. TL outperformed other DL methods and improved
accuracy by 7.85% for independent and identically distributed (IID) datasets,
macro F1-score by 1.06% for non-IID datasets, accuracy by 2.60% for text
classification, and AUC by 3.88% and 4.54% for medical and financial datasets,
respectively. By effectively preserving data privacy while maintaining
performance, TL represents a significant advancement in DL methodologies.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 05:48:57 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Batbaatar",
"Erdenebileg",
""
],
[
"Kim",
"Jeonggeol",
""
],
[
"Kim",
"Yongcheol",
""
],
[
"Yoon",
"Young",
""
]
] | TITLE: Traversal Learning Coordination For Lossless And Efficient Distributed
Learning
ABSTRACT: In this paper, we introduce Traversal Learning (TL), a novel approach
designed to address the problem of decreased quality encountered in popular
distributed learning (DL) paradigms such as Federated Learning (FL), Split
Learning (SL), and SplitFed Learning (SFL). Traditional FL suffers from an
accuracy drop during aggregation due to its averaging function, while SL and
SFL face increased loss due to the independent gradient updates on each split
network. TL adopts a unique strategy where the model traverses the nodes during
forward propagation (FP) and performs backward propagation (BP) on the
orchestrator, effectively implementing centralized learning (CL) principles
within a distributed environment. The orchestrator is tasked with generating
virtual batches and planning the sequential node visits of the model during FP,
aligning them with the ordered index of the data within these batches. We
conducted experiments on six datasets representing diverse characteristics
across various domains. Our evaluation demonstrates that TL is on par with
classic CL approaches in terms of accurate inference, thereby offering a viable
and robust solution for DL tasks. TL outperformed other DL methods and improved
accuracy by 7.85% for independent and identically distributed (IID) datasets,
macro F1-score by 1.06% for non-IID datasets, accuracy by 2.60% for text
classification, and AUC by 3.88% and 4.54% for medical and financial datasets,
respectively. By effectively preserving data privacy while maintaining
performance, TL represents a significant advancement in DL methodologies.
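A schematic, single-process simulation of the TL control flow described above:
the model visits nodes in the orchestrator's planned order for forward
propagation, while the loss and backward pass remain on the orchestrator. The
node data, plan, and model are illustrative assumptions.

```python
# Schematic sketch of the TL idea: the model traverses nodes in a planned
# order for forward propagation (FP); loss and backward propagation (BP)
# stay on the orchestrator. Node access is simulated in-process.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Orchestrator's plan: a virtual batch is an ordered sequence of node visits.
nodes = {n: (torch.randn(8, 16), torch.randint(0, 2, (8,))) for n in "ABC"}
plan = ["A", "B", "C"]

logits, labels = [], []
for node in plan:                      # FP traverses the nodes in order
    x, y = nodes[node]                 # data conceptually never leaves the node
    logits.append(model(x))            # the model travels to the node instead
    labels.append(y)

loss = loss_fn(torch.cat(logits), torch.cat(labels))  # BP on the orchestrator
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```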
|
2504.07476 | Yan Xu | Yan Xu, Zhenqiang Zhang, Zhiwei Zhou, Liting Geng, Yue Li and Jintao
Li | CMEdataset Advancing China Map Detection and Standardization with
Digital Image Resources | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Digital images of China's maps play a crucial role in map detection,
particularly in ensuring national sovereignty, territorial integrity, and map
compliance. However, there is currently no publicly available dataset
specifically dedicated to problematic maps; we fill this gap with the CME
dataset. Existing datasets
primarily focus on general map data and are insufficient for effectively
identifying complex issues such as national boundary misrepresentations,
missing elements, and blurred boundaries. Therefore, this study creates a
Problematic Map dataset that covers five key problem areas, aiming to provide
diverse samples for problematic map detection technologies, support
high-precision map compliance detection, and enhance map data quality and
timeliness. This dataset not only provides essential resources for map
compliance, national security monitoring, and map updates, but also fosters
innovation and application of related technologies.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 06:04:16 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Xu",
"Yan",
""
],
[
"Zhang",
"Zhenqiang",
""
],
[
"Zhou",
"Zhiwei",
""
],
[
"Geng",
"Liting",
""
],
[
"Li",
"Yue",
""
],
[
"Li",
"Jintao",
""
]
] | TITLE: CMEdataset Advancing China Map Detection and Standardization with
Digital Image Resources
ABSTRACT: Digital images of China's maps play a crucial role in map detection,
particularly in ensuring national sovereignty, territorial integrity, and map
compliance. However, there is currently no publicly available dataset
specifically dedicated to problematic maps; we fill this gap with the CME
dataset. Existing datasets
primarily focus on general map data and are insufficient for effectively
identifying complex issues such as national boundary misrepresentations,
missing elements, and blurred boundaries. Therefore, this study creates a
Problematic Map dataset that covers five key problem areas, aiming to provide
diverse samples for problematic map detection technologies, support
high-precision map compliance detection, and enhance map data quality and
timeliness. This dataset not only provides essential resources for map
compliance, national security monitoring, and map updates, but also fosters
innovation and application of related technologies.
|
2504.07478 | Caroline Panggabean | Caroline Panggabean, Chandrasekar Venkatachalam, Priyanka Shah, Sincy
John, Renuka Devi P, and Shanmugavalli Venkatachalam | Intelligent DoS and DDoS Detection: A Hybrid GRU-NTM Approach to Network
Security | Accepted at the 2024 5th International Conference on Smart
Electronics and Communication (ICOSEC). This is the accepted manuscript
version. The final version is published by IEEE at
https://doi.org/10.1109/ICOSEC61587.2024.10722438 | null | 10.1109/ICOSEC61587.2024.10722438 | null | cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Detecting Denial of Service (DoS) and Distributed Denial of Service (DDoS)
attacks remains a critical challenge in cybersecurity. This research introduces
a hybrid deep learning model combining Gated Recurrent Units (GRUs) and a
Neural Turing Machine (NTM) for enhanced intrusion detection. Trained on the
UNSW-NB15 and BoT-IoT datasets, the model employs GRU layers for sequential
data processing and an NTM for long-term pattern recognition. The proposed
approach achieves 99% accuracy in distinguishing between normal, DoS, and DDoS
traffic. These findings offer promising advancements in real-time threat
detection and contribute to improved network security across various domains.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 06:08:04 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Panggabean",
"Caroline",
""
],
[
"Venkatachalam",
"Chandrasekar",
""
],
[
"Shah",
"Priyanka",
""
],
[
"John",
"Sincy",
""
],
[
"P",
"Renuka Devi",
""
],
[
"Venkatachalam",
"Shanmugavalli",
""
]
] | TITLE: Intelligent DoS and DDoS Detection: A Hybrid GRU-NTM Approach to Network
Security
ABSTRACT: Detecting Denial of Service (DoS) and Distributed Denial of Service (DDoS)
attacks remains a critical challenge in cybersecurity. This research introduces
a hybrid deep learning model combining Gated Recurrent Units (GRUs) and a
Neural Turing Machine (NTM) for enhanced intrusion detection. Trained on the
UNSW-NB15 and BoT-IoT datasets, the model employs GRU layers for sequential
data processing and an NTM for long-term pattern recognition. The proposed
approach achieves 99% accuracy in distinguishing between normal, DoS, and DDoS
traffic. These findings offer promising advancements in real-time threat
detection and contribute to improved network security across various domains.
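A Neural Turing Machine uses learned read/write heads; as a simplified
stand-in, the sketch below stacks GRU layers and adds a single content-based
memory read before a three-way (normal/DoS/DDoS) classifier. Layer sizes and
the feature count are illustrative, not the paper's configuration.

```python
# Simplified stand-in for a GRU + external-memory classifier (a full NTM has
# learned read/write heads; one content-based read illustrates the idea).
import torch
import torch.nn as nn

class GRUMemoryClassifier(nn.Module):
    def __init__(self, n_features, hidden=64, mem_slots=32, n_classes=3):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, num_layers=2, batch_first=True)
        self.memory = nn.Parameter(torch.randn(mem_slots, hidden))  # memory bank
        self.head = nn.Linear(2 * hidden, n_classes)  # normal / DoS / DDoS

    def forward(self, x):
        _, h = self.gru(x)               # h: (layers, batch, hidden)
        q = h[-1]                        # last layer's state as the read query
        attn = torch.softmax(q @ self.memory.t(), dim=-1)  # content addressing
        read = attn @ self.memory        # weighted memory read
        return self.head(torch.cat([q, read], dim=-1))

model = GRUMemoryClassifier(n_features=40)  # feature count is illustrative
print(model(torch.randn(4, 100, 40)).shape)  # torch.Size([4, 3])
```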
|
2504.07480 | Marios Papachristou | Marios Papachristou, Jon Kleinberg | Echoes of Disagreement: Measuring Disparity in Social Consensus | null | null | null | null | cs.SI physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | Public discourse and opinions stem from multiple social groups. Each group
has beliefs about a topic (such as vaccination, abortion, gay marriage, etc.),
and opinions are exchanged and blended to produce consensus. A particular
measure of interest is the influence of each group on the
consensus and the disparity between groups in the extent to which they
influence the consensus. In this paper, we study and give provable algorithms
for optimizing the disparity under the DeGroot or the Friedkin-Johnsen models
of opinion dynamics. Our findings provide simple poly-time algorithms to
optimize disparity for most cases, fully characterize the instances that
optimize disparity, and show how simple interventions such as contracting
vertices or adding links affect disparity. Finally, we test our developed
algorithms in a variety of real-world datasets.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 06:18:27 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Papachristou",
"Marios",
""
],
[
"Kleinberg",
"Jon",
""
]
] | TITLE: Echoes of Disagreement: Measuring Disparity in Social Consensus
ABSTRACT: Public discourse and opinions stem from multiple social groups. Each group
has beliefs about a topic (such as vaccination, abortion, gay marriage, etc.),
and opinions are exchanged and blended to produce consensus. A particular
measure of interest is the influence of each group on the
consensus and the disparity between groups in the extent to which they
influence the consensus. In this paper, we study and give provable algorithms
for optimizing the disparity under the DeGroot or the Friedkin-Johnsen models
of opinion dynamics. Our findings provide simple poly-time algorithms to
optimize disparity for most cases, fully characterize the instances that
optimize disparity, and show how simple interventions such as contracting
vertices or adding links affect disparity. Finally, we test our developed
algorithms in a variety of real-world datasets.
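The Friedkin-Johnsen (FJ) dynamics are standard: x_{t+1} = A s + (I - A) W x_t,
with equilibrium z = (I - (I - A) W)^{-1} A s, where W holds influence weights,
s the innate opinions, and A the anchoring strengths. The sketch below computes
the equilibrium on a toy graph and splits the average final opinion into
per-group influence; the graph, anchoring weights, and group assignment are toy
assumptions.

```python
# FJ equilibrium z = (I - (I - A) W)^{-1} A s on a toy 4-node graph, plus a
# per-group attribution of the average final opinion.
import numpy as np

n = 4
W = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.5, 0.0, 0.25, 0.25],
              [0.5, 0.25, 0.0, 0.25],
              [0.0, 0.5, 0.5, 0.0]])   # row-stochastic influence weights
A = np.diag([0.3, 0.3, 0.3, 0.3])      # anchoring to innate opinions s
s = np.array([1.0, 1.0, 0.0, 0.0])     # group 1 = nodes {0,1}, group 2 = {2,3}

M = np.linalg.inv(np.eye(n) - (np.eye(n) - A) @ W) @ A  # so z = M @ s
z = M @ s
# (1/n) * 1^T M weights each agent's innate input in the mean final opinion.
influence = np.ones(n) @ M / n
print("equilibrium:", z)
print("group influences:", influence[:2].sum(), influence[2:].sum())
```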
|
2504.07485 | Armin Bernstetter | Markus Schl\"uter, Tom Kwasnitschka, Armin Bernstetter, Jens Karstens | Rendering Large Volume Datasets in Unreal Engine 5: A Survey | Technical Report | null | null | null | cs.GR | http://creativecommons.org/licenses/by-sa/4.0/ | In this technical report, we discuss several approaches to in-core rendering
of large volumetric datasets in Unreal Engine 5 (UE5). We explore the following
methods: the TBRayMarcher Plugin, the Niagara Fluids Plugin, and various
approaches using Sparse Volume Textures (SVT), with a particular focus on
Heterogeneous Volumes (HV). We found the HV approach to be the most promising.
The biggest challenge we encountered with other approaches was the need to
chunk datasets so that each fits into volume textures smaller than one
gigavoxel. While this enables display of the entire dataset at reasonable frame
rates, it introduces noticeable artifacts at chunk borders due to incorrect
lighting, as each chunk lacks information about its neighbors. After addressing
some (signed) int32 overflows in the Engine's SVT-related source code by
converting them to (unsigned) uint32 or int64, the SVT-based HV system
allows us to render sparse datasets up to 32k x 32k x 16k voxels, provided the
compressed tile data (including MIP data and padding for correct interpolation)
does not exceed 4 gigavoxels. In the future, we intend to extend the existing
SVT streaming functionality to support out-of-core rendering, in order to
eventually overcome VRAM limitations, graphics API constraints, and the
performance issues associated with 64-bit arithmetic in GPU shaders.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 06:42:19 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Schlüter",
"Markus",
""
],
[
"Kwasnitschka",
"Tom",
""
],
[
"Bernstetter",
"Armin",
""
],
[
"Karstens",
"Jens",
""
]
] | TITLE: Rendering Large Volume Datasets in Unreal Engine 5: A Survey
ABSTRACT: In this technical report, we discuss several approaches to in-core rendering
of large volumetric datasets in Unreal Engine 5 (UE5). We explore the following
methods: the TBRayMarcher Plugin, the Niagara Fluids Plugin, and various
approaches using Sparse Volume Textures (SVT), with a particular focus on
Heterogeneous Volumes (HV). We found the HV approach to be the most promising.
The biggest challenge we encountered with other approaches was the need to
chunk datasets so that each fits into volume textures smaller than one
gigavoxel. While this enables display of the entire dataset at reasonable frame
rates, it introduces noticeable artifacts at chunk borders due to incorrect
lighting, as each chunk lacks information about its neighbors. After addressing
some (signed) int32 overflows in the Engine's SVT-related source code by
converting them to (unsigned) uint32 or int64, the SVT-based HV system
allows us to render sparse datasets up to 32k x 32k x 16k voxels, provided the
compressed tile data (including MIP data and padding for correct interpolation)
does not exceed 4 gigavoxels. In the future, we intend to extend the existing
SVT streaming functionality to support out-of-core rendering, in order to
eventually overcome VRAM limitations, graphics API constraints, and the
performance issues associated with 64-bit arithmetic in GPU shaders.
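The reported sizes make the int32 issue concrete. Reading ``gigavoxel" as 2^30
voxels (an assumption; it could also mean 10^9), the arithmetic below shows
that both the full dataset and the 4-gigavoxel tile budget exceed the signed
32-bit range, while uint32 just covers 2^32 voxels.

```python
# Why the int32 fixes matter: voxel counts at the reported sizes exceed the
# signed 32-bit range, so flat indices need uint32 or int64.
INT32_MAX, UINT32_MAX = 2**31 - 1, 2**32 - 1

dataset_voxels = 32_768 * 32_768 * 16_384     # "32k x 32k x 16k" = 2^44
tile_budget    = 4 * 2**30                    # "4 gigavoxels" = 2^32 (assumed)

print(dataset_voxels)                  # 17,592,186,044,416
print(dataset_voxels > INT32_MAX)      # True: far beyond signed int32
print(tile_budget > INT32_MAX)         # True: even the tile budget overflows int32
print(tile_budget <= UINT32_MAX + 1)   # True: uint32 just accommodates 2^32 voxels
```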
|
2504.07494 | Shihong Gao | Shihong Gao, Xin Zhang, Yanyan Shen, Lei Chen | Apt-Serve: Adaptive Request Scheduling on Hybrid Cache for Scalable LLM
Inference Serving | null | null | 10.1145/3725394 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language model (LLM) inference serving systems are essential to various
LLM-based applications. As demand for LLM services continues to grow, scaling
these systems to handle high request rates while meeting latency Service-Level
Objectives (SLOs), referred to as effective throughput, becomes critical.
However, existing systems often struggle to improve effective throughput,
primarily due to a significant decline in Time To First Token (TTFT) SLO
attainment. We identify two major causes of this bottleneck: (1)
memory-intensive KV cache that limits batch size expansion under GPU memory
constraints, and (2) rigid batch composition enforced by the default
First-Come-First-Serve scheduling policy. In this paper, we introduce
Apt-Serve, a scalable framework designed to enhance effective throughput in LLM
inference serving. Apt-Serve features a new hybrid cache scheme that combines
KV cache with a memory-efficient hidden cache for reusable input hidden state
vectors, allowing large batch sizes and improving request concurrency. Based on
the hybrid cache, Apt-Serve employs an adaptive runtime scheduling mechanism
that dynamically optimizes batch composition. We formally define the adaptive
scheduling optimization problem and propose an efficient algorithm with
theoretical guarantees. Extensive evaluations on three real-world datasets and
LLMs ranging from 13B to 66B parameters demonstrate that Apt-Serve achieves up
to 8.8x improvement in effective throughput compared to the state-of-the-art
inference serving systems.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 06:51:23 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Gao",
"Shihong",
""
],
[
"Zhang",
"Xin",
""
],
[
"Shen",
"Yanyan",
""
],
[
"Chen",
"Lei",
""
]
] | TITLE: Apt-Serve: Adaptive Request Scheduling on Hybrid Cache for Scalable LLM
Inference Serving
ABSTRACT: Large language model (LLM) inference serving systems are essential to various
LLM-based applications. As demand for LLM services continues to grow, scaling
these systems to handle high request rates while meeting latency Service-Level
Objectives (SLOs), referred to as effective throughput, becomes critical.
However, existing systems often struggle to improve effective throughput,
primarily due to a significant decline in Time To First Token (TTFT) SLO
attainment. We identify two major causes of this bottleneck: (1)
memory-intensive KV cache that limits batch size expansion under GPU memory
constraints, and (2) rigid batch composition enforced by the default
First-Come-First-Serve scheduling policy. In this paper, we introduce
Apt-Serve, a scalable framework designed to enhance effective throughput in LLM
inference serving. Apt-Serve features a new hybrid cache scheme that combines
KV cache with a memory-efficient hidden cache for reusable input hidden state
vectors, allowing large batch sizes and improving request concurrency. Based on
the hybrid cache, Apt-Serve employs an adaptive runtime scheduling mechanism
that dynamically optimizes batch composition. We formally define the adaptive
scheduling optimization problem and propose an efficient algorithm with
theoretical guarantees. Extensive evaluations on three real-world datasets and
LLMs ranging from 13B to 66B parameters demonstrate that Apt-Serve achieves up
to 8.8x improvement in effective throughput compared to the state-of-the-art
inference serving systems.
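Apt-Serve's scheduler solves a formally defined adaptive optimization problem;
the toy greedy composer below only illustrates the underlying intuition that
requests served from a compact hidden cache cost less memory than full
KV-cache requests, so more of them fit in one batch. The costs, policy, and
all names are assumptions.

```python
# Hypothetical sketch of batch composition under a cache-memory budget:
# requests using the compact hidden cache cost less than full KV cache.
from dataclasses import dataclass

@dataclass
class Request:
    rid: int
    tokens: int
    reusable: bool  # hidden state vectors can be reused for this request

def compose_batch(queue, budget, kv_cost=2.0, hidden_cost=0.5):
    # Greedy: honor queue order (waiting time), but charge each request by
    # the cache type it would occupy.
    batch, used = [], 0.0
    for req in queue:
        cost = req.tokens * (hidden_cost if req.reusable else kv_cost)
        if used + cost <= budget:
            batch.append(req.rid)
            used += cost
    return batch, used

queue = [Request(i, tokens=128, reusable=(i % 2 == 0)) for i in range(10)]
print(compose_batch(queue, budget=1024.0))
```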
|
2504.07503 | Jinze Chen | Jinze Chen, Wei Zhai, Yang Cao, Bin Li, Zheng-Jun Zha | Event Signal Filtering via Probability Flux Estimation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Events offer a novel paradigm for capturing scene dynamics via asynchronous
sensing, but their inherent randomness often leads to degraded signal quality.
Event signal filtering is thus essential for enhancing fidelity by reducing
this internal randomness and ensuring consistent outputs across diverse
acquisition conditions. Unlike traditional time series that rely on fixed
temporal sampling to capture steady-state behaviors, events encode transient
dynamics through polarity and event intervals, making signal modeling
significantly more complex. To address this, the theoretical foundation of
event generation is revisited through the lens of diffusion processes. The
state and process information within events is modeled as continuous
probability flux at threshold boundaries of the underlying irradiance
diffusion. Building on this insight, a generative, online filtering framework
called Event Density Flow Filter (EDFilter) is introduced. EDFilter estimates
event correlation by reconstructing the continuous probability flux from
discrete events using nonparametric kernel smoothing, and then resamples
filtered events from this flux. To optimize fidelity over time, spatial and
temporal kernels are employed in a time-varying optimization framework. A fast
recursive solver with O(1) complexity is proposed, leveraging state-space
models and lookup tables for efficient likelihood computation. Furthermore, a
new real-world benchmark Rotary Event Dataset (RED) is released, offering
microsecond-level ground truth irradiance for full-reference event filtering
evaluation. Extensive experiments validate EDFilter's performance across tasks
like event filtering, super-resolution, and direct event-based blob tracking.
Significant gains in downstream applications such as SLAM and video
reconstruction underscore its robustness and effectiveness.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 07:03:08 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Chen",
"Jinze",
""
],
[
"Zhai",
"Wei",
""
],
[
"Cao",
"Yang",
""
],
[
"Li",
"Bin",
""
],
[
"Zha",
"Zheng-Jun",
""
]
] | TITLE: Event Signal Filtering via Probability Flux Estimation
ABSTRACT: Events offer a novel paradigm for capturing scene dynamics via asynchronous
sensing, but their inherent randomness often leads to degraded signal quality.
Event signal filtering is thus essential for enhancing fidelity by reducing
this internal randomness and ensuring consistent outputs across diverse
acquisition conditions. Unlike traditional time series that rely on fixed
temporal sampling to capture steady-state behaviors, events encode transient
dynamics through polarity and event intervals, making signal modeling
significantly more complex. To address this, the theoretical foundation of
event generation is revisited through the lens of diffusion processes. The
state and process information within events is modeled as continuous
probability flux at threshold boundaries of the underlying irradiance
diffusion. Building on this insight, a generative, online filtering framework
called Event Density Flow Filter (EDFilter) is introduced. EDFilter estimates
event correlation by reconstructing the continuous probability flux from
discrete events using nonparametric kernel smoothing, and then resamples
filtered events from this flux. To optimize fidelity over time, spatial and
temporal kernels are employed in a time-varying optimization framework. A fast
recursive solver with O(1) complexity is proposed, leveraging state-space
models and lookup tables for efficient likelihood computation. Furthermore, a
new real-world benchmark Rotary Event Dataset (RED) is released, offering
microsecond-level ground truth irradiance for full-reference event filtering
evaluation. Extensive experiments validate EDFilter's performance across tasks
like event filtering, super-resolution, and direct event-based blob tracking.
Significant gains in downstream applications such as SLAM and video
reconstruction underscore its robustness and effectiveness.
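The core reconstruct-then-resample idea can be illustrated in one dimension
(real event streams also carry pixel coordinates and polarity): estimate a
continuous event rate from discrete timestamps by kernel smoothing, then draw
filtered events from the smoothed flux. The Gaussian kernel, bandwidth, and
rates below are illustrative assumptions, not EDFilter's actual kernels.

```python
# 1-D sketch: kernel-smoothed event rate, then inverse-CDF resampling.
import numpy as np

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 1.0, 500))          # noisy event timestamps (s)

def kde_rate(query, events, bandwidth=0.01):
    # Gaussian kernel estimate of the event rate at each query time.
    d = (query[:, None] - events[None, :]) / bandwidth
    return np.exp(-0.5 * d**2).sum(axis=1) / (bandwidth * np.sqrt(2 * np.pi))

grid = np.linspace(0.0, 1.0, 1000)
rate = kde_rate(grid, t)

# Resample events proportionally to the smoothed rate (inverse-CDF sampling).
cdf = np.cumsum(rate); cdf /= cdf[-1]
resampled = np.interp(rng.uniform(size=400), cdf, grid)
print(resampled.min(), resampled.max(), len(resampled))
```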
|
2504.07507 | Zhiwei Zhang | Zhiwei Zhang, Ruichen Yang, Ke Wu, Zijun Xu, Jingchu Liu, Lisen Mu,
Zhongxue Gan and Wenchao Ding | Drive in Corridors: Enhancing the Safety of End-to-end Autonomous
Driving via Corridor Learning and Planning | 8 pages, 4 figures | null | null | null | cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Safety remains one of the most critical challenges in autonomous driving
systems. In recent years, end-to-end driving has shown great promise in
advancing vehicle autonomy in a scalable manner. However, existing approaches
often face safety risks due to the lack of explicit behavior constraints. To
address this issue, we uncover a new paradigm by introducing the corridor as
the intermediate representation. Widely adopted in robotics planning, a
corridor represents a spatio-temporal obstacle-free zone for the vehicle to
traverse. To ensure accurate corridor prediction in diverse traffic scenarios,
we develop a comprehensive learning pipeline including data annotation,
architecture refinement and loss formulation. The predicted corridor is further
integrated as the constraint in a trajectory optimization process. By extending
the differentiability of the optimization, we enable the optimized trajectory
to be seamlessly trained within the end-to-end learning framework, improving
both safety and interpretability. Experimental results on the nuScenes dataset
demonstrate state-of-the-art performance of our approach, showing a 66.7%
reduction in collisions with agents and a 46.5% reduction with curbs,
significantly enhancing the safety of end-to-end driving. Additionally,
incorporating the corridor contributes to higher success rates in closed-loop
evaluations.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 07:10:40 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Zhang",
"Zhiwei",
""
],
[
"Yang",
"Ruichen",
""
],
[
"Wu",
"Ke",
""
],
[
"Xu",
"Zijun",
""
],
[
"Liu",
"Jingchu",
""
],
[
"Mu",
"Lisen",
""
],
[
"Gan",
"Zhongxue",
""
],
[
"Ding",
"Wenchao",
""
]
] | TITLE: Drive in Corridors: Enhancing the Safety of End-to-end Autonomous
Driving via Corridor Learning and Planning
ABSTRACT: Safety remains one of the most critical challenges in autonomous driving
systems. In recent years, end-to-end driving has shown great promise in
advancing vehicle autonomy in a scalable manner. However, existing approaches
often face safety risks due to the lack of explicit behavior constraints. To
address this issue, we uncover a new paradigm by introducing the corridor as
the intermediate representation. Widely adopted in robotics planning, a
corridor represents a spatio-temporal obstacle-free zone for the vehicle to
traverse. To ensure accurate corridor prediction in diverse traffic scenarios,
we develop a comprehensive learning pipeline including data annotation,
architecture refinement and loss formulation. The predicted corridor is further
integrated as the constraint in a trajectory optimization process. By extending
the differentiability of the optimization, we enable the optimized trajectory
to be seamlessly trained within the end-to-end learning framework, improving
both safety and interpretability. Experimental results on the nuScenes dataset
demonstrate state-of-the-art performance of our approach, showing a 66.7%
reduction in collisions with agents and a 46.5% reduction with curbs,
significantly enhancing the safety of end-to-end driving. Additionally,
incorporating the corridor contributes to higher success rates in closed-loop
evaluations.
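The paper embeds the predicted corridor as a constraint in a differentiable
trajectory optimizer trained end to end. The toy sketch below conveys only the
flavor of that coupling, enforcing per-timestep axis-aligned corridor boxes
through a differentiable hinge penalty; the boxes, weights, and costs are
made-up values, not the authors' formulation.

```python
# Toy corridor-constrained trajectory optimization via a hinge penalty.
import torch

# Corridor: one axis-aligned (xmin, xmax, ymin, ymax) box per timestep.
corridor = torch.tensor([[0.0, 1.0, -0.5, 0.5]] * 10)
traj = torch.zeros(10, 2, requires_grad=True)  # (x, y) waypoints
target = torch.tensor([2.0, 0.0])              # pull toward a goal

opt = torch.optim.Adam([traj], lr=0.05)
for _ in range(200):
    goal_cost = ((traj[-1] - target) ** 2).sum()
    smooth_cost = ((traj[1:] - traj[:-1]) ** 2).sum()
    # Hinge penalty: zero inside the box, quadratic once a waypoint exits it.
    vx = torch.relu(corridor[:, 0] - traj[:, 0]) + torch.relu(traj[:, 0] - corridor[:, 1])
    vy = torch.relu(corridor[:, 2] - traj[:, 1]) + torch.relu(traj[:, 1] - corridor[:, 3])
    corridor_cost = (vx**2 + vy**2).sum()
    loss = goal_cost + smooth_cost + 10.0 * corridor_cost
    opt.zero_grad(); loss.backward(); opt.step()
print(traj.detach()[-1])  # final waypoint settles near the corridor's x-limit
```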
|
2504.07522 | Jose Cribeiro-Ramallo | Jose Cribeiro-Ramallo, Federico Matteucci, Paul Enciu, Alexander
Jenke, Vadim Arzamasov, Thorsten Strufe, Klemens B\"ohm | Adversarial Subspace Generation for Outlier Detection in
High-Dimensional Data | 35 pages, pre-print | null | null | null | cs.LG cs.AI math.ST stat.TH | http://creativecommons.org/licenses/by/4.0/ | Outlier detection in high-dimensional tabular data is challenging since data
is often distributed across multiple lower-dimensional subspaces -- a
phenomenon known as the Multiple Views effect (MV). This effect led to a large
body of research focused on mining such subspaces, known as subspace selection.
However, as the precise nature of the MV effect was not well understood,
traditional methods had to rely on heuristic-driven search schemes that
struggle to accurately capture the true structure of the data. Properly
identifying these subspaces is critical for unsupervised tasks such as outlier
detection or clustering, where misrepresenting the underlying data structure
can hinder the performance. We introduce Myopic Subspace Theory (MST), a new
theoretical framework that mathematically formulates the Multiple Views effect
and casts subspace selection as a stochastic optimization problem. Based on
MST, we introduce V-GAN, a generative method trained to solve such an
optimization problem. This approach avoids any exhaustive search over the
feature space while ensuring that the intrinsic data structure is preserved.
Experiments on 42 real-world datasets show that using V-GAN subspaces to build
ensemble methods leads to a significant increase in one-class classification
performance -- compared to existing subspace selection, feature selection, and
embedding methods. Further experiments on synthetic data show that V-GAN
identifies subspaces more accurately while scaling better than other relevant
subspace selection methods. These results confirm the theoretical guarantees of
our approach and also highlight its practical viability in high-dimensional
settings.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 07:40:02 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Cribeiro-Ramallo",
"Jose",
""
],
[
"Matteucci",
"Federico",
""
],
[
"Enciu",
"Paul",
""
],
[
"Jenke",
"Alexander",
""
],
[
"Arzamasov",
"Vadim",
""
],
[
"Strufe",
"Thorsten",
""
],
[
"Böhm",
"Klemens",
""
]
] | TITLE: Adversarial Subspace Generation for Outlier Detection in
High-Dimensional Data
ABSTRACT: Outlier detection in high-dimensional tabular data is challenging since data
is often distributed across multiple lower-dimensional subspaces -- a
phenomenon known as the Multiple Views effect (MV). This effect led to a large
body of research focused on mining such subspaces, known as subspace selection.
However, as the precise nature of the MV effect was not well understood,
traditional methods had to rely on heuristic-driven search schemes that
struggle to accurately capture the true structure of the data. Properly
identifying these subspaces is critical for unsupervised tasks such as outlier
detection or clustering, where misrepresenting the underlying data structure
can hinder the performance. We introduce Myopic Subspace Theory (MST), a new
theoretical framework that mathematically formulates the Multiple Views effect
and casts subspace selection as a stochastic optimization problem. Based on
MST, we introduce V-GAN, a generative method trained to solve such an
optimization problem. This approach avoids any exhaustive search over the
feature space while ensuring that the intrinsic data structure is preserved.
Experiments on 42 real-world datasets show that using V-GAN subspaces to build
ensemble methods leads to a significant increase in one-class classification
performance -- compared to existing subspace selection, feature selection, and
embedding methods. Further experiments on synthetic data show that V-GAN
identifies subspaces more accurately while scaling better than other relevant
subspace selection methods. These results confirm the theoretical guarantees of
our approach and also highlight its practical viability in high-dimensional
settings.
|
2504.07524 | Xu Zhao | Xu Zhao, Pengju Zhang, Bo Liu, and Yihong Wu | DGOcc: Depth-aware Global Query-based Network for Monocular 3D Occupancy
Prediction | under review | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Monocular 3D occupancy prediction, aiming to predict the occupancy and
semantics within interesting regions of 3D scenes from only 2D images, has
garnered increasing attention recently for its vital role in 3D scene
understanding. Predicting the 3D occupancy of large-scale outdoor scenes from
2D images is ill-posed and resource-intensive. In this paper, we present
\textbf{DGOcc}, a \textbf{D}epth-aware \textbf{G}lobal query-based network for
monocular 3D \textbf{Occ}upancy prediction. We first explore prior depth maps
to extract depth context features that provide explicit geometric information
for the occupancy network. Then, in order to fully exploit the depth context
features, we propose a Global Query-based (GQ) Module. The cooperation of
attention mechanisms and scale-aware operations facilitates the feature
interaction between images and 3D voxels. Moreover, a Hierarchical Supervision
Strategy (HSS) is designed to avoid upsampling the high-dimensional 3D voxel
features to full resolution, which mitigates GPU memory utilization and time
cost. Extensive experiments on SemanticKITTI and SSCBench-KITTI-360 datasets
demonstrate that the proposed method achieves the best performance on monocular
semantic occupancy prediction while reducing GPU and time overhead.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 07:44:55 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Zhao",
"Xu",
""
],
[
"Zhang",
"Pengju",
""
],
[
"Liu",
"Bo",
""
],
[
"Wu",
"Yihong",
""
]
] | TITLE: DGOcc: Depth-aware Global Query-based Network for Monocular 3D Occupancy
Prediction
ABSTRACT: Monocular 3D occupancy prediction, aiming to predict the occupancy and
semantics within interesting regions of 3D scenes from only 2D images, has
garnered increasing attention recently for its vital role in 3D scene
understanding. Predicting the 3D occupancy of large-scale outdoor scenes from
2D images is ill-posed and resource-intensive. In this paper, we present
\textbf{DGOcc}, a \textbf{D}epth-aware \textbf{G}lobal query-based network for
monocular 3D \textbf{Occ}upancy prediction. We first explore prior depth maps
to extract depth context features that provide explicit geometric information
for the occupancy network. Then, in order to fully exploit the depth context
features, we propose a Global Query-based (GQ) Module. The cooperation of
attention mechanisms and scale-aware operations facilitates the feature
interaction between images and 3D voxels. Moreover, a Hierarchical Supervision
Strategy (HSS) is designed to avoid upsampling the high-dimensional 3D voxel
features to full resolution, which mitigates GPU memory utilization and time
cost. Extensive experiments on SemanticKITTI and SSCBench-KITTI-360 datasets
demonstrate that the proposed method achieves the best performance on monocular
semantic occupancy prediction while reducing GPU and time overhead.
|
2504.07532 | Tuhin Chakrabarty Mr | Tuhin Chakrabarty, Philippe Laban, Chien-Sheng Wu | AI-Slop to AI-Polish? Aligning Language Models through Edit-Based
Writing Rewards and Test-time Computation | Under Submission | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | AI-generated text is proliferating across domains, from creative writing and
journalism to marketing content and scientific articles. Models can follow
user-provided instructions to generate coherent and grammatically correct
outputs but in this work, we study a more fundamental question: how do we
evaluate and improve the writing quality of AI-generated text? Writing quality
assessment has received less attention from the community, in part because it
is fundamentally subjective and requires expertise. We first introduce the
Writing Quality Benchmark (WQ) by consolidating five writing-preference
datasets into 4,729 writing quality judgments. Our experiments show that
competitive baselines, including state-of-the-art LLMs that excel at reasoning
tasks, barely outperform random baselines on WQ. We then train specialized
Writing Quality Reward Models (WQRM) of various sizes for writing quality
assessment that demonstrate strong generalization on four out-of-distribution
test sets and 74% accuracy on the WQ benchmark. To further show WQRM's
practical benefits during inference, we leverage additional test-time compute
to generate and rank multiple candidate revisions, allowing us to select
higher-quality outputs from an initial draft. Human evaluation with 9
experienced writers confirms that WQRM-based selection produces writing samples
preferred by experts 66% of the time overall, and 72.2% of the time when the reward gap is larger than
1 point. We release our datasets and models to encourage community engagement
with writing quality assessment and development of AI writing systems better
aligned with human preferences.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 07:58:05 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Chakrabarty",
"Tuhin",
""
],
[
"Laban",
"Philippe",
""
],
[
"Wu",
"Chien-Sheng",
""
]
] | TITLE: AI-Slop to AI-Polish? Aligning Language Models through Edit-Based
Writing Rewards and Test-time Computation
ABSTRACT: AI-generated text is proliferating across domains, from creative writing and
journalism to marketing content and scientific articles. Models can follow
user-provided instructions to generate coherent and grammatically correct
outputs but in this work, we study a more fundamental question: how do we
evaluate and improve the writing quality of AI-generated text? Writing quality
assessment has received less attention from the community, in part because it
is fundamentally subjective and requires expertise. We first introduce the
Writing Quality Benchmark (WQ) by consolidating five writing-preference
datasets into 4,729 writing quality judgments. Our experiments show that
competitive baselines, including state-of-the-art LLMs that excel at reasoning
tasks, barely outperform random baselines on WQ. We then train specialized
Writing Quality Reward Models (WQRM) of various sizes for writing quality
assessment that demonstrate strong generalization on four out-of-distribution
test sets and 74% accuracy on the WQ benchmark. To further show WQRM's
practical benefits during inference, we leverage additional test-time compute
to generate and rank multiple candidate revisions, allowing us to select
higher-quality outputs from an initial draft. Human evaluation with 9
experienced writers confirms that WQRM-based selection produces writing samples
preferred by experts 66% of the time overall, and 72.2% of the time when the reward gap is larger than
1 point. We release our datasets and models to encourage community engagement
with writing quality assessment and development of AI writing systems better
aligned with human preferences.
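The test-time-compute recipe described above is essentially best-of-n
selection under the reward model. A minimal sketch follows; `generate` and
`wqrm_score` are placeholders standing in for the authors' models, not real
APIs.

```python
# Best-of-n selection: generate candidate revisions, keep the one the reward
# model scores highest. The stand-in functions below are toys so it runs.
def best_of_n(draft, generate, wqrm_score, n=8):
    candidates = [draft] + [generate(draft) for _ in range(n)]
    scored = sorted(candidates, key=wqrm_score, reverse=True)
    return scored[0]  # highest-reward revision wins

import random
def generate(text):   return text + random.choice([" (tightened)", " (expanded)"])
def wqrm_score(text): return -len(text)  # toy reward: shorter is "better"
print(best_of_n("An initial draft.", generate, wqrm_score))
```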
|
2504.07540 | Jos\'e I. Orlicki | Jos\'e I. Orlicki | PoGO: A Scalable Proof of Useful Work via Quantized Gradient Descent and
Merkle Proofs | 14 pages, 1 figure, 1 table | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | We present a design called \emph{Proof of Gradient Optimization} (PoGO) for
blockchain consensus, where miners produce verifiable evidence of training
large-scale machine-learning models. Building on previous work, we incorporate
\emph{quantized gradients} (4-bit precision) to reduce storage and computation
requirements, while still preserving the ability of verifiers to check that
real progress has been made on lowering the model's loss. Additionally, we
employ Merkle proofs over the full 32-bit model to handle large parameter sets
and to enable random leaf checks with minimal on-chain data. We illustrate
these ideas using GPT-3 (175B parameters) as a reference example and also refer
to smaller but high-performance models (e.g., \emph{Gemma~3} with 27B
parameters). We provide an empirical cost analysis showing that verification is
significantly cheaper than training, thanks in part to quantization and
sampling. We also discuss the necessity of longer block times (potentially
hours) when incorporating meaningful training steps, the trade-offs when using
specialized GPU hardware, and how binary diffs may incrementally optimize
updates. Finally, we note that fine-tuning can be handled in a similar manner,
merely changing the dataset and the manner of sampling but preserving the
overall verification flow. Our protocol allows verifiers to issue either
\emph{positive} or \emph{negative} attestations; these are aggregated at
finalization to either confirm the update or slash the miner.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 08:09:34 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Orlicki",
"José I.",
""
]
] | TITLE: PoGO: A Scalable Proof of Useful Work via Quantized Gradient Descent and
Merkle Proofs
ABSTRACT: We present a design called \emph{Proof of Gradient Optimization} (PoGO) for
blockchain consensus, where miners produce verifiable evidence of training
large-scale machine-learning models. Building on previous work, we incorporate
\emph{quantized gradients} (4-bit precision) to reduce storage and computation
requirements, while still preserving the ability of verifiers to check that
real progress has been made on lowering the model's loss. Additionally, we
employ Merkle proofs over the full 32-bit model to handle large parameter sets
and to enable random leaf checks with minimal on-chain data. We illustrate
these ideas using GPT-3 (175B parameters) as a reference example and also refer
to smaller but high-performance models (e.g., \emph{Gemma~3} with 27B
parameters). We provide an empirical cost analysis showing that verification is
significantly cheaper than training, thanks in part to quantization and
sampling. We also discuss the necessity of longer block times (potentially
hours) when incorporating meaningful training steps, the trade-offs when using
specialized GPU hardware, and how binary diffs may incrementally optimize
updates. Finally, we note that fine-tuning can be handled in a similar manner,
merely changing the dataset and the manner of sampling but preserving the
overall verification flow. Our protocol allows verifiers to issue either
\emph{positive} or \emph{negative} attestations; these are aggregated at
finalization to either confirm the update or slash the miner.
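An illustrative sketch of the "Merkle proofs with random leaf checks" ingredient: a prover commits to parameter shards via a Merkle root, and a verifier spot-checks one randomly sampled shard with an audit path of logarithmic size. This is a generic Merkle construction under assumed shard names, not the PoGO protocol itself.

```python
# Minimal Merkle tree for spot-checking committed model parameters.
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate last node on odd levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Return the sibling hashes needed to verify leaves[index]."""
    level = [_h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append(level[index ^ 1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_leaf(leaf, index, proof, root):
    node = _h(leaf)
    for sibling in proof:
        node = _h(node + sibling) if index % 2 == 0 else _h(sibling + node)
        index //= 2
    return node == root

# A verifier samples a random leaf and checks it against the committed root.
params = [f"chunk-{i}".encode() for i in range(8)]   # stand-in parameter shards
root = merkle_root(params)
assert verify_leaf(params[3], 3, merkle_proof(params, 3), root)
```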
|
2504.07542 | Hongyu Lyu | Hongyu Lyu, Julie Stephany Berrio, Mao Shan, Stewart Worrall | SydneyScapes: Image Segmentation for Australian Environments | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autonomous Vehicles (AVs) are being partially deployed and tested across
various global locations, including China, the USA, Germany, France, Japan,
Korea, and the UK, but with limited demonstrations in Australia. The
integration of machine learning (ML) into AV perception systems highlights the
need for locally labelled datasets to develop and test algorithms in specific
environments. To address this, we introduce SydneyScapes - a dataset tailored
for computer vision tasks of image semantic, instance, and panoptic
segmentation. This dataset, collected from Sydney and surrounding cities in New
South Wales (NSW), Australia, consists of 756 images with high-quality
pixel-level annotations. It is designed to assist the AV industry and researchers
by providing annotated data and tools for algorithm development, testing, and
deployment in the Australian context. Additionally, we offer benchmarking
results using state-of-the-art algorithms to establish reference points for
future research and development. The dataset is publicly available at
https://hdl.handle.net/2123/33051.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 08:11:17 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Lyu",
"Hongyu",
""
],
[
"Berrio",
"Julie Stephany",
""
],
[
"Shan",
"Mao",
""
],
[
"Worrall",
"Stewart",
""
]
] | TITLE: SydneyScapes: Image Segmentation for Australian Environments
ABSTRACT: Autonomous Vehicles (AVs) are being partially deployed and tested across
various global locations, including China, the USA, Germany, France, Japan,
Korea, and the UK, but with limited demonstrations in Australia. The
integration of machine learning (ML) into AV perception systems highlights the
need for locally labelled datasets to develop and test algorithms in specific
environments. To address this, we introduce SydneyScapes - a dataset tailored
for computer vision tasks of image semantic, instance, and panoptic
segmentation. This dataset, collected from Sydney and surrounding cities in New
South Wales (NSW), Australia, consists of 756 images with high-quality
pixel-level annotations. It is designed to assist the AV industry and researchers
by providing annotated data and tools for algorithm development, testing, and
deployment in the Australian context. Additionally, we offer benchmarking
results using state-of-the-art algorithms to establish reference points for
future research and development. The dataset is publicly available at
https://hdl.handle.net/2123/33051.
|
2504.07560 | Moritz Rempe | Moritz Rempe, Fabian H\"orst, Helmut Becker, Marco Schlimbach, Lukas
Rotkopf, Kevin Kr\"oninger, Jens Kleesiek | PhaseGen: A Diffusion-Based Approach for Complex-Valued MRI Data
Generation | null | null | null | null | eess.IV cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Magnetic resonance imaging (MRI) raw data, or k-Space data, is
complex-valued, containing both magnitude and phase information. However,
clinical and existing Artificial Intelligence (AI)-based methods focus only on
magnitude images, discarding the phase data despite its potential for
downstream tasks, such as tumor segmentation and classification. In this work,
we introduce $\textit{PhaseGen}$, a novel complex-valued diffusion model for
generating synthetic MRI raw data conditioned on magnitude images, commonly
used in clinical practice. This enables the creation of artificial
complex-valued raw data, allowing pretraining for models that require k-Space
information. We evaluate PhaseGen on two tasks: skull-stripping directly in
k-Space and MRI reconstruction using the publicly available FastMRI dataset.
Our results show that training with synthetic phase data significantly improves
generalization for skull-stripping on real-world data, increasing
segmentation accuracy from $41.1\%$ to $80.1\%$, and enhances MRI
reconstruction when combined with limited real-world data. This work presents a
step forward in utilizing generative AI to bridge the gap between
magnitude-based datasets and the complex-valued nature of MRI raw data. This
approach allows researchers to leverage the vast amount of available image
domain data in combination with the information-rich k-Space data for more
accurate and efficient diagnostic tasks. We make our code publicly
$\href{https://github.com/TIO-IKIM/PhaseGen}{\text{available here}}$.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 08:44:19 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Rempe",
"Moritz",
""
],
[
"Hörst",
"Fabian",
""
],
[
"Becker",
"Helmut",
""
],
[
"Schlimbach",
"Marco",
""
],
[
"Rotkopf",
"Lukas",
""
],
[
"Kröninger",
"Kevin",
""
],
[
"Kleesiek",
"Jens",
""
]
] | TITLE: PhaseGen: A Diffusion-Based Approach for Complex-Valued MRI Data
Generation
ABSTRACT: Magnetic resonance imaging (MRI) raw data, or k-Space data, is
complex-valued, containing both magnitude and phase information. However,
clinical and existing Artificial Intelligence (AI)-based methods focus only on
magnitude images, discarding the phase data despite its potential for
downstream tasks, such as tumor segmentation and classification. In this work,
we introduce $\textit{PhaseGen}$, a novel complex-valued diffusion model for
generating synthetic MRI raw data conditioned on magnitude images, commonly
used in clinical practice. This enables the creation of artificial
complex-valued raw data, allowing pretraining for models that require k-Space
information. We evaluate PhaseGen on two tasks: skull-stripping directly in
k-Space and MRI reconstruction using the publicly available FastMRI dataset.
Our results show that training with synthetic phase data significantly improves
generalization for skull-stripping on real-world data, increasing
segmentation accuracy from $41.1\%$ to $80.1\%$, and enhances MRI
reconstruction when combined with limited real-world data. This work presents a
step forward in utilizing generative AI to bridge the gap between
magnitude-based datasets and the complex-valued nature of MRI raw data. This
approach allows researchers to leverage the vast amount of available image
domain data in combination with the information-rich k-Space data for more
accurate and efficient diagnostic tasks. We make our code publicly
$\href{https://github.com/TIO-IKIM/PhaseGen}{\text{available here}}$.
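A small illustration of the underlying relationship the model exploits (not the PhaseGen architecture): pairing a magnitude image with a phase map yields a complex-valued image, whose Fourier transform gives synthetic k-space. The arrays here are random stand-ins.

```python
# Magnitude + phase -> complex image -> synthetic k-space, and back.
import numpy as np

rng = np.random.default_rng(0)
magnitude = rng.random((256, 256))               # stand-in magnitude image
phase = rng.uniform(-np.pi, np.pi, (256, 256))   # phase a generative model would supply

complex_image = magnitude * np.exp(1j * phase)          # complex-valued image
kspace = np.fft.fftshift(np.fft.fft2(complex_image))    # synthetic raw k-space

# Round trip: the inverse transform recovers the original magnitude.
recovered = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))
assert np.allclose(recovered, magnitude)
```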
|
2504.07566 | Fabrizio Garuti | Fabrizio Garuti, Enver Sangineto, Simone Luetto, Lorenzo Forni, Rita
Cucchiara | Diffusion Transformers for Tabular Data Time Series Generation | 26 pages, 19 figures, 13 tables | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tabular data generation has recently attracted growing interest due to its
different application scenarios. However, generating time series of tabular
data, where each element of the series depends on the others, remains a largely
unexplored domain. This gap is probably due to the difficulty of jointly
solving different problems, chief among which are the heterogeneity of tabular
data (a problem common to non-time-dependent approaches) and the variable
length of a time series. In this paper, we propose a Diffusion Transformers
(DiTs) based approach for tabular data series generation. Inspired by the
recent success of DiTs in image and video generation, we extend this framework
to deal with heterogeneous data and variable-length sequences. Using extensive
experiments on six datasets, we show that the proposed approach outperforms
previous work by a large margin.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 08:56:09 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Garuti",
"Fabrizio",
""
],
[
"Sangineto",
"Enver",
""
],
[
"Luetto",
"Simone",
""
],
[
"Forni",
"Lorenzo",
""
],
[
"Cucchiara",
"Rita",
""
]
] | TITLE: Diffusion Transformers for Tabular Data Time Series Generation
ABSTRACT: Tabular data generation has recently attracted growing interest due to its
different application scenarios. However, generating time series of tabular
data, where each element of the series depends on the others, remains a largely
unexplored domain. This gap is probably due to the difficulty of jointly
solving different problems, chief among which are the heterogeneity of tabular
data (a problem common to non-time-dependent approaches) and the variable
length of a time series. In this paper, we propose a Diffusion Transformers
(DiTs) based approach for tabular data series generation. Inspired by the
recent success of DiTs in image and video generation, we extend this framework
to deal with heterogeneous data and variable-length sequences. Using extensive
experiments on six datasets, we show that the proposed approach outperforms
previous work by a large margin.
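One practical ingredient implied above is handling variable-length sequences in a single batch. A minimal padding-and-masking sketch follows; the field dimensions are illustrative and this is not the paper's architecture.

```python
# Batching variable-length tabular series with padding and an attention mask.
import torch

def pad_series(series_list, pad_value=0.0):
    """series_list: list of (T_i, F) float tensors with varying lengths T_i."""
    max_len = max(s.shape[0] for s in series_list)
    feat_dim = series_list[0].shape[1]
    batch = torch.full((len(series_list), max_len, feat_dim), pad_value)
    mask = torch.zeros(len(series_list), max_len, dtype=torch.bool)
    for i, s in enumerate(series_list):
        batch[i, : s.shape[0]] = s
        mask[i, : s.shape[0]] = True    # True marks real (non-padded) rows
    return batch, mask

rows = [torch.randn(5, 8), torch.randn(3, 8), torch.randn(7, 8)]
batch, mask = pad_series(rows)
print(batch.shape, mask.sum(dim=1))     # torch.Size([3, 7, 8]) tensor([5, 3, 7])
```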
|
2504.07567 | Urszula Czerwinska | Urszula Czerwinska, Cenk Bircanoglu and Jeremy Chamoux | Benchmarking Image Embeddings for E-Commerce: Evaluating Off-the Shelf
Foundation Models, Fine-Tuning Strategies and Practical Trade-offs | accepted at Future Technologies Conference (FTC 2025) | null | null | 11AB1 | cs.CV cs.AI cs.CE cs.IR cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We benchmark foundation model image embeddings for classification and
retrieval in e-Commerce, evaluating their suitability for real-world
applications. Our study spans embeddings from pre-trained convolutional and
transformer models trained via supervised, self-supervised, and text-image
contrastive learning. We assess full fine-tuning and transfer learning
(top-tuning) on six diverse e-Commerce datasets: fashion, consumer goods, cars,
food, and retail. Results show full fine-tuning consistently performs well,
while text-image and self-supervised embeddings can match its performance with
less training. While supervised embeddings remain stable across architectures,
SSL and contrastive embeddings vary significantly, often benefiting from
top-tuning. Top-tuning emerges as an efficient alternative to full fine-tuning,
reducing computational costs. We also explore cross-tuning, noting its impact
depends on dataset characteristics. Our findings offer practical guidelines for
embedding selection and fine-tuning strategies, balancing efficiency and
performance.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 08:57:28 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Czerwinska",
"Urszula",
""
],
[
"Bircanoglu",
"Cenk",
""
],
[
"Chamoux",
"Jeremy",
""
]
] | TITLE: Benchmarking Image Embeddings for E-Commerce: Evaluating Off-the Shelf
Foundation Models, Fine-Tuning Strategies and Practical Trade-offs
ABSTRACT: We benchmark foundation model image embeddings for classification and
retrieval in e-Commerce, evaluating their suitability for real-world
applications. Our study spans embeddings from pre-trained convolutional and
transformer models trained via supervised, self-supervised, and text-image
contrastive learning. We assess full fine-tuning and transfer learning
(top-tuning) on six diverse e-Commerce datasets: fashion, consumer goods, cars,
food, and retail. Results show full fine-tuning consistently performs well,
while text-image and self-supervised embeddings can match its performance with
less training. While supervised embeddings remain stable across architectures,
SSL and contrastive embeddings vary significantly, often benefiting from
top-tuning. Top-tuning emerges as an efficient alternative to full fine-tuning,
reducing computational costs. We also explore cross-tuning, noting its impact
depends on dataset characteristics. Our findings offer practical guidelines for
embedding selection and fine-tuning strategies, balancing efficiency and
performance.
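A minimal sketch of "top-tuning" as described above: freeze a pre-trained backbone and train only a small head on its embeddings. The backbone here is a stand-in module, not one of the foundation models from the study.

```python
# Top-tuning: frozen encoder, trainable head.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512))  # stand-in encoder
for p in backbone.parameters():
    p.requires_grad = False             # frozen: no gradient updates

head = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)  # only head params
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(16, 3, 32, 32)
labels = torch.randint(0, 10, (16,))
with torch.no_grad():                    # embeddings can even be cached offline
    emb = backbone(images)
loss = loss_fn(head(emb), labels)
loss.backward()
optimizer.step()
```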
|
2504.07570 | Erhan Zhang | Erhan Zhang, Xingzhu Wang, Peiyuan Gong, Zixuan Yang, Jiaxin Mao | Exploring Human-Like Thinking in Search Simulations with Large Language
Models | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Simulating user search behavior is a critical task in information retrieval,
which can be employed for user behavior modeling, data augmentation, and system
evaluation. Recent advancements in large language models (LLMs) have opened up
new possibilities for generating human-like actions including querying,
browsing, and clicking. In this work, we explore the integration of human-like
thinking into search simulations by leveraging LLMs to simulate users' hidden
cognitive processes. Specifically, given a search task and context, we prompt
LLMs to first think like a human before executing the corresponding action. As
existing search datasets do not include users' thought processes, we conducted
a user study to collect a new dataset enriched with users' explicit thinking.
We investigate the impact of incorporating such human-like thinking on
simulation performance and apply supervised fine-tuning (SFT) to teach LLMs to
emulate both human thinking and actions. Our experiments span two dimensions in
leveraging LLMs for user simulation: (1) with or without explicit thinking, and
(2) with or without fine-tuning on the thinking-augmented dataset. The results
demonstrate the feasibility and potential of incorporating human-like thinking
in user simulations, though performance improvements on some metrics remain
modest. We believe this exploration provides new avenues and inspirations for
advancing user behavior modeling in search simulations.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 09:04:58 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Zhang",
"Erhan",
""
],
[
"Wang",
"Xingzhu",
""
],
[
"Gong",
"Peiyuan",
""
],
[
"Yang",
"Zixuan",
""
],
[
"Mao",
"Jiaxin",
""
]
] | TITLE: Exploring Human-Like Thinking in Search Simulations with Large Language
Models
ABSTRACT: Simulating user search behavior is a critical task in information retrieval,
which can be employed for user behavior modeling, data augmentation, and system
evaluation. Recent advancements in large language models (LLMs) have opened up
new possibilities for generating human-like actions including querying,
browsing, and clicking. In this work, we explore the integration of human-like
thinking into search simulations by leveraging LLMs to simulate users' hidden
cognitive processes. Specifically, given a search task and context, we prompt
LLMs to first think like a human before executing the corresponding action. As
existing search datasets do not include users' thought processes, we conducted
a user study to collect a new dataset enriched with users' explicit thinking.
We investigate the impact of incorporating such human-like thinking on
simulation performance and apply supervised fine-tuning (SFT) to teach LLMs to
emulate both human thinking and actions. Our experiments span two dimensions in
leveraging LLMs for user simulation: (1) with or without explicit thinking, and
(2) with or without fine-tuning on the thinking-augmented dataset. The results
demonstrate the feasibility and potential of incorporating human-like thinking
in user simulations, though performance improvements on some metrics remain
modest. We believe this exploration provides new avenues and inspirations for
advancing user behavior modeling in search simulations.
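An illustrative scaffold for the "think before acting" idea: the simulator is prompted to emit an explicit thought, then exactly one action. The template, action vocabulary, and `call_llm` function are assumptions for illustration, not the study's exact prompts.

```python
# Think-then-act prompting for user simulation (illustrative template only).
THINK_THEN_ACT = """You are simulating a search-engine user.
Task: {task}
Context so far: {context}

First, think like this user would (one short paragraph starting with "Thought:").
Then output exactly one action on a new line starting with "Action:",
chosen from: QUERY <text>, CLICK <result id>, STOP.
"""

def simulate_step(task, context, call_llm):
    """Return the simulated user's (thought, action) for one step."""
    reply = call_llm(THINK_THEN_ACT.format(task=task, context=context))
    thought, _, action = reply.partition("Action:")
    return thought.replace("Thought:", "").strip(), action.strip()
```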
|
2504.07575 | Shanshan Wu | Shanshan Wu, Shuchang Liu, Shuai Zhang, Xiaoyu Yang, Xiang Li, Lantao
Hu, Han Li | Explicit Uncertainty Modeling for Video Watch Time Prediction | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In video recommendation, a critical component that determines the system's
recommendation accuracy is the watch-time prediction module, since how long a
user watches a video directly reflects personalized preferences. One of the key
challenges of this problem is the user's stochastic watch-time behavior. To
improve the prediction accuracy for such an uncertain behavior, existing
approaches show that one can either reduce the noise through duration bias
modeling or formulate a distribution modeling task to capture the uncertainty.
However, the uncontrolled uncertainty is not always equally distributed across
users and videos, inducing a balancing paradox between the model accuracy and
the ability to capture out-of-distribution samples. In practice, we find that
the uncertainty of the watch-time prediction model also provides key
information about user behavior, which, in turn, could benefit the prediction
task itself. Following this notion, we derive an explicit uncertainty modeling
strategy for the prediction model and propose an adversarial optimization
framework that can better exploit the user watch-time behavior. This framework
has been deployed online on an industrial video sharing platform that serves
hundreds of millions of daily active users, where it achieved a significant
0.31% increase in users' video watch time in an online A/B test.
Furthermore, extended offline experiments on two public datasets verify the
effectiveness of the proposed framework across various watch-time prediction
backbones.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 09:19:19 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Wu",
"Shanshan",
""
],
[
"Liu",
"Shuchang",
""
],
[
"Zhang",
"Shuai",
""
],
[
"Yang",
"Xiaoyu",
""
],
[
"Li",
"Xiang",
""
],
[
"Hu",
"Lantao",
""
],
[
"Li",
"Han",
""
]
] | TITLE: Explicit Uncertainty Modeling for Video Watch Time Prediction
ABSTRACT: In video recommendation, a critical component that determines the system's
recommendation accuracy is the watch-time prediction module, since how long a
user watches a video directly reflects personalized preferences. One of the key
challenges of this problem is the user's stochastic watch-time behavior. To
improve the prediction accuracy for such an uncertain behavior, existing
approaches show that one can either reduce the noise through duration bias
modeling or formulate a distribution modeling task to capture the uncertainty.
However, the uncontrolled uncertainty is not always equally distributed across
users and videos, inducing a balancing paradox between the model accuracy and
the ability to capture out-of-distribution samples. In practice, we find that
the uncertainty of the watch-time prediction model also provides key
information about user behavior, which, in turn, could benefit the prediction
task itself. Following this notion, we derive an explicit uncertainty modeling
strategy for the prediction model and propose an adversarial optimization
framework that can better exploit the user watch-time behavior. This framework
has been deployed online on an industrial video sharing platform that serves
hundreds of millions of daily active users, where it achieved a significant
0.31% increase in users' video watch time in an online A/B test.
Furthermore, extended offline experiments on two public datasets verify the
effectiveness of the proposed framework across various watch-time prediction
backbones.
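A minimal sketch of explicit uncertainty modeling for a regression target such as watch time: the model predicts a mean and a variance and is trained with a Gaussian negative log-likelihood, so the predicted variance is itself a usable uncertainty signal. This illustrates the general idea only; the paper's adversarial optimization framework is not reproduced here.

```python
# Predicting (mean, variance) and training with a Gaussian NLL.
import torch
import torch.nn as nn

class MeanVarianceHead(nn.Module):
    def __init__(self, in_dim):
        super().__init__()
        self.mu = nn.Linear(in_dim, 1)
        self.log_var = nn.Linear(in_dim, 1)   # log-variance keeps variance positive

    def forward(self, x):
        return self.mu(x).squeeze(-1), self.log_var(x).squeeze(-1)

head = MeanVarianceHead(in_dim=32)
features = torch.randn(64, 32)                # stand-in user/video features
watch_time = torch.rand(64) * 300             # stand-in targets in seconds

mu, log_var = head(features)
nll = 0.5 * (log_var + (watch_time - mu) ** 2 / log_var.exp()).mean()
nll.backward()
```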
|
2504.07578 | Federico Mazzone | Federico Mazzone, Trevor Brown, Florian Kerschbaum, Kevin H. Wilson,
Maarten Everts, Florian Hahn, Andreas Peter | Privacy-Preserving Vertical K-Means Clustering | null | null | null | null | cs.CR cs.LG | http://creativecommons.org/licenses/by/4.0/ | Clustering is a fundamental data processing task used for grouping records
based on one or more features. In the vertically partitioned setting, data is
distributed among entities, with each holding only a subset of those features.
A key challenge in this scenario is that computing distances between records
requires access to all distributed features, which may be privacy-sensitive and
cannot be directly shared with other parties. The goal is to compute the joint
clusters while preserving the privacy of each entity's dataset. Existing
solutions using secret sharing or garbled circuits implement privacy-preserving
variants of Lloyd's algorithm but incur high communication costs, scaling as
O(nkt), where n is the number of data points, k the number of clusters, and t
the number of rounds. These methods become impractical for large datasets or
several parties, limiting their use to LAN settings only. On the other hand, a
different line of solutions relies on differential privacy (DP) to outsource the
local features of the parties to a central server. However, they often
significantly degrade the utility of the clustering outcome due to excessive
noise. In this work, we propose a novel solution based on homomorphic
encryption and DP, reducing communication complexity to O(n+kt). In our method,
parties securely outsource their features once, allowing a computing party to
perform clustering operations under encryption. DP is applied only to the
clusters' centroids, ensuring privacy with minimal impact on utility. Our
solution clusters 100,000 two-dimensional points into five clusters using only
73MB of communication, compared to 101GB for existing works, and completes in
just under 3 minutes on a 100Mbps network, whereas existing works take over 1
day. This makes our solution practical even for WAN deployments, all while
maintaining accuracy comparable to plaintext k-means algorithms.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 09:20:56 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Mazzone",
"Federico",
""
],
[
"Brown",
"Trevor",
""
],
[
"Kerschbaum",
"Florian",
""
],
[
"Wilson",
"Kevin H.",
""
],
[
"Everts",
"Maarten",
""
],
[
"Hahn",
"Florian",
""
],
[
"Peter",
"Andreas",
""
]
] | TITLE: Privacy-Preserving Vertical K-Means Clustering
ABSTRACT: Clustering is a fundamental data processing task used for grouping records
based on one or more features. In the vertically partitioned setting, data is
distributed among entities, with each holding only a subset of those features.
A key challenge in this scenario is that computing distances between records
requires access to all distributed features, which may be privacy-sensitive and
cannot be directly shared with other parties. The goal is to compute the joint
clusters while preserving the privacy of each entity's dataset. Existing
solutions using secret sharing or garbled circuits implement privacy-preserving
variants of Lloyd's algorithm but incur high communication costs, scaling as
O(nkt), where n is the number of data points, k the number of clusters, and t
the number of rounds. These methods become impractical for large datasets or
several parties, limiting their use to LAN settings only. On the other hand, a
different line of solutions relies on differential privacy (DP) to outsource the
local features of the parties to a central server. However, they often
significantly degrade the utility of the clustering outcome due to excessive
noise. In this work, we propose a novel solution based on homomorphic
encryption and DP, reducing communication complexity to O(n+kt). In our method,
parties securely outsource their features once, allowing a computing party to
perform clustering operations under encryption. DP is applied only to the
clusters' centroids, ensuring privacy with minimal impact on utility. Our
solution clusters 100,000 two-dimensional points into five clusters using only
73MB of communication, compared to 101GB for existing works, and completes in
just under 3 minutes on a 100Mbps network, whereas existing works take over 1
day. This makes our solution practical even for WAN deployments, all while
maintaining accuracy comparable to plaintext k-means algorithms.
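A sketch of the "DP only on centroids" idea in plaintext: one Lloyd update where noise is added only to the released centroid sums and counts. The Laplace scales are illustrative rather than calibrated to a privacy budget, and the homomorphic-encryption layer that would protect the per-point computation is omitted.

```python
# One noisy Lloyd step: perturb centroid statistics, not individual points.
import numpy as np

rng = np.random.default_rng(0)
points = rng.random((1000, 2))                 # stand-in joint feature vectors
k, eps = 5, 1.0
centroids = points[rng.choice(len(points), k, replace=False)]

assign = np.argmin(((points[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
for j in range(k):
    members = points[assign == j]
    noisy_sum = members.sum(0) + rng.laplace(0, 1.0 / eps, size=2)
    noisy_count = max(len(members) + rng.laplace(0, 1.0 / eps), 1.0)
    centroids[j] = noisy_sum / noisy_count     # only the centroid is perturbed
```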
|
2504.07583 | Patrick Fernandes | Patrick Fernandes, Sweta Agrawal, Emmanouil Zaranis, Andr\'e F.T.
Martins, Graham Neubig | Do LLMs Understand Your Translations? Evaluating Paragraph-level MT with
Question Answering | null | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the steady progress in machine translation evaluation, existing
automatic metrics struggle to capture how well meaning is preserved beyond
sentence boundaries. We posit that reliance on a single intrinsic quality
score, trained to mimic human judgments, might be insufficient for evaluating
translations of long, complex passages, and a more ``pragmatic'' approach that
assesses how accurately key information is conveyed by a translation in context
is needed. We introduce TREQA (Translation Evaluation via Question-Answering),
a framework that extrinsically evaluates translation quality by assessing how
accurately candidate translations answer reading comprehension questions that
target key information in the original source or reference texts. In
challenging domains that require long-range understanding, such as literary
texts, we show that TREQA is competitive with and, in some cases, outperforms
state-of-the-art neural and LLM-based metrics in ranking alternative
paragraph-level translations, despite never being explicitly optimized to
correlate with human judgments. Furthermore, the generated questions and
answers offer interpretability: empirical analysis shows that they effectively
target translation errors identified by experts in evaluated datasets. Our code
is available at https://github.com/deep-spin/treqa
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 09:24:54 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Fernandes",
"Patrick",
""
],
[
"Agrawal",
"Sweta",
""
],
[
"Zaranis",
"Emmanouil",
""
],
[
"Martins",
"André F. T.",
""
],
[
"Neubig",
"Graham",
""
]
] | TITLE: Do LLMs Understand Your Translations? Evaluating Paragraph-level MT with
Question Answering
ABSTRACT: Despite the steady progress in machine translation evaluation, existing
automatic metrics struggle to capture how well meaning is preserved beyond
sentence boundaries. We posit that reliance on a single intrinsic quality
score, trained to mimic human judgments, might be insufficient for evaluating
translations of long, complex passages, and a more ``pragmatic'' approach that
assesses how accurately key information is conveyed by a translation in context
is needed. We introduce TREQA (Translation Evaluation via Question-Answering),
a framework that extrinsically evaluates translation quality by assessing how
accurately candidate translations answer reading comprehension questions that
target key information in the original source or reference texts. In
challenging domains that require long-range understanding, such as literary
texts, we show that TREQA is competitive with and, in some cases, outperforms
state-of-the-art neural and LLM-based metrics in ranking alternative
paragraph-level translations, despite never being explicitly optimized to
correlate with human judgments. Furthermore, the generated questions and
answers offer interpretability: empirical analysis shows that they effectively
target translation errors identified by experts in evaluated datasets. Our code
is available at https://github.com/deep-spin/treqa
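A skeleton of QA-based translation evaluation in the spirit of TREQA: generate questions grounded in the source or reference, answer them from the candidate translation alone, and score the fraction answered correctly. The `gen_questions`, `answer`, and `match` callables are hypothetical stand-ins for LLM-backed components; see the released code for the actual framework.

```python
# Extrinsic, QA-based scoring of a candidate translation.
def treqa_style_score(source_or_ref, candidate, gen_questions, answer, match):
    """Fraction of grounded questions the candidate answers correctly."""
    qa_pairs = gen_questions(source_or_ref)        # [(question, gold_answer), ...]
    if not qa_pairs:
        return 0.0
    correct = sum(
        match(answer(question, candidate), gold)   # answered from the candidate only
        for question, gold in qa_pairs
    )
    return correct / len(qa_pairs)
```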
|
2504.07590 | Xingyuan Wei | Xingyuan Wei, Zijun Cheng, Ning Li, Qiujian Lv, Ziyang Yu, Degang Sun | DWFS-Obfuscation: Dynamic Weighted Feature Selection for Robust Malware
Familial Classification under Obfuscation | 15 pages, 1 figure | null | null | null | cs.CR | http://creativecommons.org/licenses/by/4.0/ | Due to its open-source nature, the Android operating system has consistently
been a primary target for attackers. Learning-based methods have made
significant progress in the field of Android malware detection. However,
traditional detection methods based on static features struggle to identify
obfuscated malicious code, while methods relying on dynamic analysis suffer
from low efficiency. To address this, we propose a dynamic weighted feature
selection method that analyzes the importance and stability of features,
calculates scores to filter out the most robust features, and combines these
selected features with the program's structural information. We then utilize
graph neural networks for classification, thereby improving the robustness and
accuracy of the detection system. We analyzed 8,664 malware samples from eight
malware families and tested a total of 44,940 malware variants generated using
seven obfuscation strategies. Experiments demonstrate that our proposed method
achieves an F1-score of 95.56% on the unobfuscated dataset and 92.28% on the
obfuscated dataset, indicating that the model can effectively detect obfuscated
malware.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 09:37:43 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Wei",
"Xingyuan",
""
],
[
"Cheng",
"Zijun",
""
],
[
"Li",
"Ning",
""
],
[
"Lv",
"Qiujian",
""
],
[
"Yu",
"Ziyang",
""
],
[
"Sun",
"Degang",
""
]
] | TITLE: DWFS-Obfuscation: Dynamic Weighted Feature Selection for Robust Malware
Familial Classification under Obfuscation
ABSTRACT: Due to its open-source nature, the Android operating system has consistently
been a primary target for attackers. Learning-based methods have made
significant progress in the field of Android malware detection. However,
traditional detection methods based on static features struggle to identify
obfuscated malicious code, while methods relying on dynamic analysis suffer
from low efficiency. To address this, we propose a dynamic weighted feature
selection method that analyzes the importance and stability of features,
calculates scores to filter out the most robust features, and combines these
selected features with the program's structural information. We then utilize
graph neural networks for classification, thereby improving the robustness and
accuracy of the detection system. We analyzed 8,664 malware samples from eight
malware families and tested a total of 44,940 malware variants generated using
seven obfuscation strategies. Experiments demonstrate that our proposed method
achieves an F1-score of 95.56% on the unobfuscated dataset and 92.28% on the
obfuscated dataset, indicating that the model can effectively detect obfuscated
malware.
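A sketch of a weighted feature score combining importance and stability, as the abstract describes at a high level. The exact weighting in the paper may differ; this formulation (stability as one minus the normalized standard deviation across runs) is an assumption for illustration.

```python
# Score features by importance and stability, then keep the most robust ones.
import numpy as np

def feature_scores(importances, alpha=0.5):
    """importances: (n_runs, n_features) estimates across resamples or
    obfuscation variants. Returns a blended importance/stability score."""
    mean_imp = importances.mean(axis=0)
    stability = 1.0 - importances.std(axis=0) / (mean_imp + 1e-8)
    return alpha * mean_imp + (1.0 - alpha) * np.clip(stability, 0.0, 1.0)

runs = np.abs(np.random.default_rng(0).normal(size=(10, 6)))
scores = feature_scores(runs)
robust = np.argsort(scores)[::-1][:3]    # indices of the top-scoring features
```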
|
2504.07597 | Zhenliang Zhang | Zhe Sun, Rujie Wu, Xiaodong Yang, Hongzhao Xie, Haiyan Jiang, Junda
Bi, Zhenliang Zhang | Learning Long Short-Term Intention within Human Daily Behaviors | null | null | null | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the domain of autonomous household robots, it is of utmost importance for
robots to understand human behaviors and provide appropriate services. This
requires the robots to possess the capability to analyze complex human
behaviors and predict the true intentions of humans. Traditionally, humans are
perceived as flawless, with their decisions acting as the standards that robots
should strive to align with. However, this raises a pertinent question: What if
humans make mistakes? In this research, we present a unique task, termed "long
short-term intention prediction". This task requires robots to predict the
long-term intention of humans, which aligns with human values, and the
short-term intention of humans, which reflects the immediate action intention.
Meanwhile, the robots need to detect the potential non-consistency between the
short-term and long-term intentions, and provide necessary warnings and
suggestions. To facilitate this task, we propose a long short-term intention
model to represent the complex intention states, and build a dataset to train
this intention model. Then we propose a two-stage method to integrate the
intention model for robots: i) predicting human intentions of both value-based
long-term intentions and action-based short-term intentions; and ii) analyzing
the consistency between the long-term and short-term intentions. Experimental
results indicate that the proposed long short-term intention model can assist
robots in comprehending human behavioral patterns over both long-term and
short-term durations, which helps determine the consistency between long-term
and short-term intentions of humans.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 09:50:18 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Sun",
"Zhe",
""
],
[
"Wu",
"Rujie",
""
],
[
"Yang",
"Xiaodong",
""
],
[
"Xie",
"Hongzhao",
""
],
[
"Jiang",
"Haiyan",
""
],
[
"Bi",
"Junda",
""
],
[
"Zhang",
"Zhenliang",
""
]
] | TITLE: Learning Long Short-Term Intention within Human Daily Behaviors
ABSTRACT: In the domain of autonomous household robots, it is of utmost importance for
robots to understand human behaviors and provide appropriate services. This
requires the robots to possess the capability to analyze complex human
behaviors and predict the true intentions of humans. Traditionally, humans are
perceived as flawless, with their decisions acting as the standards that robots
should strive to align with. However, this raises a pertinent question: What if
humans make mistakes? In this research, we present a unique task, termed "long
short-term intention prediction". This task requires robots to predict the
long-term intention of humans, which aligns with human values, and the
short-term intention of humans, which reflects the immediate action intention.
Meanwhile, the robots need to detect the potential non-consistency between the
short-term and long-term intentions, and provide necessary warnings and
suggestions. To facilitate this task, we propose a long short-term intention
model to represent the complex intention states, and build a dataset to train
this intention model. Then we propose a two-stage method to integrate the
intention model for robots: i) predicting human intentions of both value-based
long-term intentions and action-based short-term intentions; and ii) analyzing
the consistency between the long-term and short-term intentions. Experimental
results indicate that the proposed long short-term intention model can assist
robots in comprehending human behavioral patterns over both long-term and
short-term durations, which helps determine the consistency between long-term
and short-term intentions of humans.
|
2504.07598 | Ioan-Adrian Cosma Mr. | Adrian Cosma and Andy C\v{a}trun\v{a} and Emilian R\v{a}doi | On Model and Data Scaling for Skeleton-based Self-Supervised Gait
Recognition | 10 pages, 10 Figures, 3 Tables | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Gait recognition from video streams is a challenging problem in computer
vision biometrics due to the subtle differences between gaits and numerous
confounding factors. Recent advancements in self-supervised pretraining have
led to the development of robust gait recognition models that are invariant to
walking covariates. While neural scaling laws have transformed model
development in other domains by linking performance to data, model size, and
compute, their applicability to gait remains unexplored. In this work, we
conduct the first empirical scaling study of skeleton-based self-supervised
gait recognition to quantify the effect of data quantity, model size, and
compute on downstream gait recognition performance. We pretrain multiple
variants of GaitPT - a transformer-based architecture - on a dataset of 2.7
million walking sequences collected in the wild. We evaluate zero-shot
performance across four benchmark datasets to derive scaling laws for data,
model size, and compute. Our findings demonstrate predictable power-law
improvements in performance with increased scale and confirm that data and
compute scaling significantly influence downstream accuracy. We further isolate
architectural contributions by comparing GaitPT with GaitFormer under
controlled compute budgets. These results provide practical insights into
resource allocation and performance estimation for real-world gait recognition
systems.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 09:51:22 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Cosma",
"Adrian",
""
],
[
"Cǎtrunǎ",
"Andy",
""
],
[
"Rǎdoi",
"Emilian",
""
]
] | TITLE: On Model and Data Scaling for Skeleton-based Self-Supervised Gait
Recognition
ABSTRACT: Gait recognition from video streams is a challenging problem in computer
vision biometrics due to the subtle differences between gaits and numerous
confounding factors. Recent advancements in self-supervised pretraining have
led to the development of robust gait recognition models that are invariant to
walking covariates. While neural scaling laws have transformed model
development in other domains by linking performance to data, model size, and
compute, their applicability to gait remains unexplored. In this work, we
conduct the first empirical scaling study of skeleton-based self-supervised
gait recognition to quantify the effect of data quantity, model size, and
compute on downstream gait recognition performance. We pretrain multiple
variants of GaitPT - a transformer-based architecture - on a dataset of 2.7
million walking sequences collected in the wild. We evaluate zero-shot
performance across four benchmark datasets to derive scaling laws for data,
model size, and compute. Our findings demonstrate predictable power-law
improvements in performance with increased scale and confirm that data and
compute scaling significantly influence downstream accuracy. We further isolate
architectural contributions by comparing GaitPT with GaitFormer under
controlled compute budgets. These results provide practical insights into
resource allocation and performance estimation for real-world gait recognition
systems.
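A minimal sketch of deriving a power-law scaling law of the form error = a * N^(-b): a linear fit in log-log space recovers the exponent. The data values below are synthetic, not results from the paper.

```python
# Fit error ~ a * N^(-b) via linear regression in log-log space.
import numpy as np

N = np.array([1e4, 3e4, 1e5, 3e5, 1e6, 2.7e6])    # e.g., walking sequences seen
error = 0.9 * N ** -0.15 * np.exp(np.random.default_rng(0).normal(0, 0.02, N.size))

slope, intercept = np.polyfit(np.log(N), np.log(error), deg=1)
a, b = np.exp(intercept), -slope
print(f"error ~ {a:.3f} * N^(-{b:.3f})")           # b is the scaling exponent
```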
|
2504.07603 | Youngwan Jin | Youngwan Jin, Michal Kovac, Yagiz Nalcakan, Hyeongjin Ju, Hanbin Song,
Sanghyeop Yeo and Shiho Kim | RASMD: RGB And SWIR Multispectral Driving Dataset for Robust Perception
in Adverse Conditions | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Current autonomous driving algorithms heavily rely on the visible spectrum,
which is prone to performance degradation in adverse conditions like fog, rain,
snow, glare, and high contrast. Although other spectral bands like
near-infrared (NIR) and long-wave infrared (LWIR) can enhance vision perception
in such situations, they have limitations and lack large-scale datasets and
benchmarks. Short-wave infrared (SWIR) imaging offers several advantages over
NIR and LWIR. However, no publicly available large-scale datasets currently
incorporate SWIR data for autonomous driving. To address this gap, we introduce
the RGB and SWIR Multispectral Driving (RASMD) dataset, which comprises 100,000
synchronized and spatially aligned RGB-SWIR image pairs collected across
diverse locations, lighting, and weather conditions. In addition, we provide a
subset for RGB-SWIR translation, along with object detection annotations for a
selection of challenging traffic scenarios, to demonstrate the utility of SWIR imaging
through experiments on both object detection and RGB-to-SWIR image translation.
Our experiments show that combining RGB and SWIR data in an ensemble framework
significantly improves detection accuracy compared to RGB-only approaches,
particularly in conditions where visible-spectrum sensors struggle. We
anticipate that the RASMD dataset will advance research in multispectral
imaging for autonomous driving and robust perception systems.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 09:54:57 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Jin",
"Youngwan",
""
],
[
"Kovac",
"Michal",
""
],
[
"Nalcakan",
"Yagiz",
""
],
[
"Ju",
"Hyeongjin",
""
],
[
"Song",
"Hanbin",
""
],
[
"Yeo",
"Sanghyeop",
""
],
[
"Kim",
"Shiho",
""
]
] | TITLE: RASMD: RGB And SWIR Multispectral Driving Dataset for Robust Perception
in Adverse Conditions
ABSTRACT: Current autonomous driving algorithms heavily rely on the visible spectrum,
which is prone to performance degradation in adverse conditions like fog, rain,
snow, glare, and high contrast. Although other spectral bands like
near-infrared (NIR) and long-wave infrared (LWIR) can enhance vision perception
in such situations, they have limitations and lack large-scale datasets and
benchmarks. Short-wave infrared (SWIR) imaging offers several advantages over
NIR and LWIR. However, no publicly available large-scale datasets currently
incorporate SWIR data for autonomous driving. To address this gap, we introduce
the RGB and SWIR Multispectral Driving (RASMD) dataset, which comprises 100,000
synchronized and spatially aligned RGB-SWIR image pairs collected across
diverse locations, lighting, and weather conditions. In addition, we provide a
subset for RGB-SWIR translation, along with object detection annotations for a
selection of challenging traffic scenarios, to demonstrate the utility of SWIR imaging
through experiments on both object detection and RGB-to-SWIR image translation.
Our experiments show that combining RGB and SWIR data in an ensemble framework
significantly improves detection accuracy compared to RGB-only approaches,
particularly in conditions where visible-spectrum sensors struggle. We
anticipate that the RASMD dataset will advance research in multispectral
imaging for autonomous driving and robust perception systems.
|
2504.07645 | Mohamed Barakathullah Malik | Malik M Barakathullah and Immanuel Koh | Prediction of Usage Probabilities of Shopping-Mall Corridors Using
Heterogeneous Graph Neural Networks | 17 pages, working manuscript with partial results | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | We present a graph neural network (GNN) based method for predicting the
usage probabilities of shopping-mall corridors. The heterogeneous graph
network of shops and corridor paths is obtained from floorplans of the malls
by creating vector layers for corridors, shops and entrances. These are
subsequently assimilated into nodes and edges of graphs. The prediction of the
usage probability is based on the shop features, namely, the area and usage
categories they fall into, and on the graph connecting these shops, corridor
junctions and entrances by corridor paths. Though the presented method is
applicable for training on datasets obtained from a field survey or from
pedestrian-detecting sensors, the target data of the supervised deep-learning
workflow in this work are obtained from a probability method. We also include
a context-specific representation learning of latent features. The
usage-probability prediction is made on each edge, which is a connection by a
section of corridor path between the adjacent nodes representing the shops or
corridor points. To create a feature for each edge, the hidden-layer feature
vectors acquired in the message-passing GNN layers at the nodes of each edge
are averaged and concatenated with the vector obtained by their multiplication.
These edge-features are then passed to multilayer perceptrons (MLP) to make the
final prediction of usage probability on each edge. The samples of synthetic
learning dataset for each shopping mall are obtained by changing the shops'
usage and area categories, and by subsequently feeding the graph into the
probability model.
When including different shopping malls in a single dataset, we also propose
to consider graph-level features to inform the model with specific identifying
features of each mall.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 10:48:36 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Barakathullah",
"Malik M",
""
],
[
"Koh",
"Immanuel",
""
]
] | TITLE: Prediction of Usage Probabilities of Shopping-Mall Corridors Using
Heterogeneous Graph Neural Networks
ABSTRACT: We present a graph neural network (GNN) based method for predicting the
usage probabilities of shopping-mall corridors. The heterogeneous graph
network of shops and corridor paths is obtained from floorplans of the malls
by creating vector layers for corridors, shops and entrances. These are
subsequently assimilated into nodes and edges of graphs. The prediction of the
usage probability is based on the shop features, namely, the area and usage
categories they fall into, and on the graph connecting these shops, corridor
junctions and entrances by corridor paths. Though the presented method is
applicable for training on datasets obtained from a field survey or from
pedestrian-detecting sensors, the target data of the supervised deep-learning
workflow in this work are obtained from a probability method. We also include
a context-specific representation learning of latent features. The
usage-probability prediction is made on each edge, which is a connection by a
section of corridor path between the adjacent nodes representing the shops or
corridor points. To create a feature for each edge, the hidden-layer feature
vectors acquired in the message-passing GNN layers at the nodes of each edge
are averaged and concatenated with the vector obtained by their multiplication.
These edge-features are then passed to multilayer perceptrons (MLP) to make the
final prediction of usage probability on each edge. The samples of the synthetic
learning dataset for each shopping mall are obtained by changing the shops'
usage and area categories, and by subsequently feeding the graph into the
probability model.
When including different shopping malls in a single dataset, we also propose
to consider graph-level features to inform the model with specific identifying
features of each mall.
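A direct sketch of the edge-feature construction described above: for an edge (u, v), average the two node embeddings, multiply them elementwise, concatenate the results, and pass them through an MLP that predicts the usage probability. Dimensions and the MLP shape are illustrative.

```python
# Edge features from node embeddings: concat(average, elementwise product) -> MLP.
import torch
import torch.nn as nn

hidden = 16
node_emb = torch.randn(10, hidden)               # GNN outputs for 10 nodes
edges = torch.tensor([[0, 1], [1, 4], [4, 7]])   # (u, v) index pairs

h_u, h_v = node_emb[edges[:, 0]], node_emb[edges[:, 1]]
edge_feat = torch.cat([(h_u + h_v) / 2, h_u * h_v], dim=-1)   # (n_edges, 2*hidden)

mlp = nn.Sequential(nn.Linear(2 * hidden, 32), nn.ReLU(), nn.Linear(32, 1))
usage_prob = torch.sigmoid(mlp(edge_feat)).squeeze(-1)        # one prob per edge
```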
|
2504.07646 | Alfredo Garrachon | Alfredo Garrach\'on Ruiz, Tom\'as de la Rosa, Daniel Borrajo | On the Temporal Question-Answering Capabilities of Large Language Models
Over Anonymized Data | 18 pages, 7 tables, 5 figures | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The applicability of Large Language Models (LLMs) in temporal reasoning tasks
over data that is not present during training is still a field that remains to
be explored. In this paper we work on this topic, focusing on structured and
semi-structured anonymized data. We not only develop a direct LLM pipeline, but
also compare various methodologies and conduct an in-depth analysis. We
identified and examined seventeen common temporal reasoning tasks in natural
language, focusing on their algorithmic components. To assess LLM performance,
we created the \textit{Reasoning and Answering Temporal Ability} dataset
(RATA), featuring semi-structured anonymized data to ensure reliance on
reasoning rather than on prior knowledge. We compared several methodologies,
involving SoTA techniques such as Tree-of-Thought, self-reflection, and code
execution, tuned specifically for this scenario. Our results suggest that
achieving scalable and reliable solutions requires more than just standalone
LLMs, highlighting the need for integrated approaches.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 10:48:42 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Ruiz",
"Alfredo Garrachón",
""
],
[
"de la Rosa",
"Tomás",
""
],
[
"Borrajo",
"Daniel",
""
]
] | TITLE: On the Temporal Question-Answering Capabilities of Large Language Models
Over Anonymized Data
ABSTRACT: The applicability of Large Language Models (LLMs) in temporal reasoning tasks
over data that is not present during training is still a field that remains to
be explored. In this paper we work on this topic, focusing on structured and
semi-structured anonymized data. We not only develop a direct LLM pipeline, but
also compare various methodologies and conduct an in-depth analysis. We
identified and examined seventeen common temporal reasoning tasks in natural
language, focusing on their algorithmic components. To assess LLM performance,
we created the \textit{Reasoning and Answering Temporal Ability} dataset
(RATA), featuring semi-structured anonymized data to ensure reliance on
reasoning rather than on prior knowledge. We compared several methodologies,
involving SoTA techniques such as Tree-of-Thought, self-reflection, and code
execution, tuned specifically for this scenario. Our results suggest that
achieving scalable and reliable solutions requires more than just standalone
LLMs, highlighting the need for integrated approaches.
|
2504.07661 | Xiaowu Zhang | Xiaowu Zhang and Hongfei Zhao and Jingyi Hou and Zhijie Liu | Unveiling the Impact of Multimodal Features on Chinese Spelling
Correction: From Analysis to Design | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Chinese Spelling Correction (CSC) task focuses on detecting and
correcting spelling errors in sentences. Current research primarily explores
two approaches: traditional multimodal pre-trained models and large language
models (LLMs). However, LLMs face limitations in CSC, particularly
over-correction, making them suboptimal for this task. While existing studies
have investigated the use of phonetic and graphemic information in multimodal
CSC models, effectively leveraging these features to enhance correction
performance remains a challenge. To address this, we propose the Multimodal
Analysis for Character Usage (\textbf{MACU}) experiment, identifying potential
improvements for multimodal correction. Based on empirical findings, we
introduce \textbf{NamBert}, a novel multimodal model for Chinese spelling
correction. Experiments on benchmark datasets demonstrate NamBert's superiority
over SOTA methods. We also conduct a comprehensive comparison between NamBert
and LLMs, systematically evaluating their strengths and limitations in CSC. Our
code and model are available at https://github.com/iioSnail/NamBert.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 11:19:09 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Zhang",
"Xiaowu",
""
],
[
"Zhao",
"Hongfei",
""
],
[
"Hou",
"Jingyi",
""
],
[
"Liu",
"Zhijie",
""
]
] | TITLE: Unveiling the Impact of Multimodal Features on Chinese Spelling
Correction: From Analysis to Design
ABSTRACT: The Chinese Spelling Correction (CSC) task focuses on detecting and
correcting spelling errors in sentences. Current research primarily explores
two approaches: traditional multimodal pre-trained models and large language
models (LLMs). However, LLMs face limitations in CSC, particularly
over-correction, making them suboptimal for this task. While existing studies
have investigated the use of phonetic and graphemic information in multimodal
CSC models, effectively leveraging these features to enhance correction
performance remains a challenge. To address this, we propose the Multimodal
Analysis for Character Usage (\textbf{MACU}) experiment, identifying potential
improvements for multimodal correction. Based on empirical findings, we
introduce \textbf{NamBert}, a novel multimodal model for Chinese spelling
correction. Experiments on benchmark datasets demonstrate NamBert's superiority
over SOTA methods. We also conduct a comprehensive comparison between NamBert
and LLMs, systematically evaluating their strengths and limitations in CSC. Our
code and model are available at https://github.com/iioSnail/NamBert.
|
2504.07664 | Asma Yamani | Asma Yamani, Nadeen AlAmoudi, Salma Albilali, Malak Baslyman,
Jameleddine Hassine | Data Requirement Goal Modeling for Machine Learning Systems | null | null | null | null | cs.SE cs.LG | http://creativecommons.org/licenses/by/4.0/ | Machine Learning (ML) has been integrated into various software and systems.
Two main components are essential for training an ML model: the training data
and the ML algorithm. Given the critical role of data in ML system development,
it has become increasingly important to assess the quality of data attributes
and ensure that the data meets specific requirements before its utilization.
This work proposes an approach to guide non-experts in identifying data
requirements for ML systems using goal modeling. In this approach, we first
develop the Data Requirement Goal Model (DRGM) by surveying the white
literature to identify and categorize the issues and challenges faced by data
scientists and requirement engineers working on ML-related projects. An initial
DRGM was built to accommodate common tasks that would generalize across
projects. Then, based on insights from both white and gray literature, a
customization mechanism is built to help adjust the tasks, the KPIs, and the
importance of goals for different elements within the DRGM. The generated model can aid
its users in evaluating different datasets using GRL evaluation strategies. We
then validate the approach through two illustrative examples based on
real-world projects. The results from the illustrative examples demonstrate
that the data requirements identified by the proposed approach align with the
requirements of real-world projects, demonstrating the practicality and
effectiveness of the proposed framework. The proposed dataset selection
customization mechanism and the proposed DRGM are helpful in guiding
non-experts in identifying the data requirements for machine learning systems
tailored to a specific ML problem. This approach also aids in evaluating
different dataset alternatives to choose the optimum dataset for the problem.
For future work, we recommend implementing tool support to generate the DRGM
based on a chatbot interface.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 11:30:25 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Yamani",
"Asma",
""
],
[
"AlAmoudi",
"Nadeen",
""
],
[
"Albilali",
"Salma",
""
],
[
"Baslyman",
"Malak",
""
],
[
"Hassine",
"Jameleddine",
""
]
] | TITLE: Data Requirement Goal Modeling for Machine Learning Systems
ABSTRACT: Machine Learning (ML) has been integrated into various software and systems.
Two main components are essential for training an ML model: the training data
and the ML algorithm. Given the critical role of data in ML system development,
it has become increasingly important to assess the quality of data attributes
and ensure that the data meets specific requirements before its utilization.
This work proposes an approach to guide non-experts in identifying data
requirements for ML systems using goal modeling. In this approach, we first
develop the Data Requirement Goal Model (DRGM) by surveying the white
literature to identify and categorize the issues and challenges faced by data
scientists and requirement engineers working on ML-related projects. An initial
DRGM was built to accommodate common tasks that would generalize across
projects. Then, based on insights from both white and gray literature, a
customization mechanism is built to help adjust the tasks, the KPIs, and the
importance of goals for different elements within the DRGM. The generated model can aid
its users in evaluating different datasets using GRL evaluation strategies. We
then validate the approach through two illustrative examples based on
real-world projects. The results from the illustrative examples demonstrate
that the data requirements identified by the proposed approach align with the
requirements of real-world projects, confirming the practicality and
effectiveness of the proposed framework. The proposed dataset selection
customization mechanism and the proposed DRGM are helpful in guiding
non-experts in identifying the data requirements for machine learning systems
tailored to a specific ML problem. This approach also aids in evaluating
different dataset alternatives to choose the optimum dataset for the problem.
For future work, we recommend implementing tool support to generate the DRGM
based on a chatbot interface.
|
2504.07667 | Yujin Wang | Yujin Wang, Jiarui Wu, Yichen Bian, Fan Zhang, Tianfan Xue | S2R-HDR: A Large-Scale Rendered Dataset for HDR Fusion | https://openimaginglab.github.io/S2R-HDR | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The generalization of learning-based high dynamic range (HDR) fusion is often
limited by the availability of training data, as collecting large-scale HDR
images from dynamic scenes is both costly and technically challenging. To
address these challenges, we propose S2R-HDR, the first large-scale
high-quality synthetic dataset for HDR fusion, with 24,000 HDR samples. Using
Unreal Engine 5, we design a diverse set of realistic HDR scenes that encompass
various dynamic elements, motion types, high dynamic range scenes, and
lighting. Additionally, we develop an efficient rendering pipeline to generate
realistic HDR images. To further mitigate the domain gap between synthetic and
real-world data, we introduce S2R-Adapter, a domain adaptation method designed to
bridge this gap and enhance the generalization ability of models. Experimental
results on real-world datasets demonstrate that our approach achieves
state-of-the-art HDR reconstruction performance. Dataset and code will be
available at https://openimaginglab.github.io/S2R-HDR.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 11:39:56 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Wang",
"Yujin",
""
],
[
"Wu",
"Jiarui",
""
],
[
"Bian",
"Yichen",
""
],
[
"Zhang",
"Fan",
""
],
[
"Xue",
"Tianfan",
""
]
] | TITLE: S2R-HDR: A Large-Scale Rendered Dataset for HDR Fusion
ABSTRACT: The generalization of learning-based high dynamic range (HDR) fusion is often
limited by the availability of training data, as collecting large-scale HDR
images from dynamic scenes is both costly and technically challenging. To
address these challenges, we propose S2R-HDR, the first large-scale
high-quality synthetic dataset for HDR fusion, with 24,000 HDR samples. Using
Unreal Engine 5, we design a diverse set of realistic HDR scenes that encompass
various dynamic elements, motion types, high dynamic range scenes, and
lighting. Additionally, we develop an efficient rendering pipeline to generate
realistic HDR images. To further mitigate the domain gap between synthetic and
real-world data, we introduce S2R-Adapter, a domain adaptation method designed to
bridge this gap and enhance the generalization ability of models. Experimental
results on real-world datasets demonstrate that our approach achieves
state-of-the-art HDR reconstruction performance. Dataset and code will be
available at https://openimaginglab.github.io/S2R-HDR.
|
2504.07670 | Anne-Sofie Maerten | Anne-Sofie Maerten and Li-Wei Chen and Stefanie De Winter and
Christophe Bossens and Johan Wagemans | LAPIS: A novel dataset for personalized image aesthetic assessment | accepted at the CVPR 2025 workshop on AI for Creative Visual Content
Generation Editing and Understanding (CVEU) | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We present the Leuven Art Personalized Image Set (LAPIS), a novel dataset for
personalized image aesthetic assessment (PIAA). It is the first dataset with
images of artworks that is suitable for PIAA. LAPIS consists of 11,723 images
and was meticulously curated in collaboration with art historians. Each image
has an aesthetics score and a set of image attributes known to relate to
aesthetic appreciation. Besides rich image attributes, LAPIS offers rich
personal attributes of each annotator. We implemented two existing
state-of-the-art PIAA models and assessed their performance on LAPIS. We assess
the contribution of personal attributes and image attributes through ablation
studies and find that performance deteriorates when certain personal and image
attributes are removed. An analysis of failure cases reveals that both existing
models make similar incorrect predictions, highlighting the need for
improvements in artistic image aesthetic assessment. The LAPIS project page can
be found at: https://github.com/Anne-SofieMaerten/LAPIS
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 11:42:56 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Maerten",
"Anne-Sofie",
""
],
[
"Chen",
"Li-Wei",
""
],
[
"De Winter",
"Stefanie",
""
],
[
"Bossens",
"Christophe",
""
],
[
"Wagemans",
"Johan",
""
]
] | TITLE: LAPIS: A novel dataset for personalized image aesthetic assessment
ABSTRACT: We present the Leuven Art Personalized Image Set (LAPIS), a novel dataset for
personalized image aesthetic assessment (PIAA). It is the first dataset with
images of artworks that is suitable for PIAA. LAPIS consists of 11,723 images
and was meticulously curated in collaboration with art historians. Each image
has an aesthetics score and a set of image attributes known to relate to
aesthetic appreciation. Besides rich image attributes, LAPIS offers rich
personal attributes of each annotator. We implemented two existing
state-of-the-art PIAA models and assessed their performance on LAPIS. We assess
the contribution of personal attributes and image attributes through ablation
studies and find that performance deteriorates when certain personal and image
attributes are removed. An analysis of failure cases reveals that both existing
models make similar incorrect predictions, highlighting the need for
improvements in artistic image aesthetic assessment. The LAPIS project page can
be found at: https://github.com/Anne-SofieMaerten/LAPIS
|
2504.07677 | Jiyong Oh Dr. | Hye-Min Won, Jieun Lee, Jiyong Oh | Localization Meets Uncertainty: Uncertainty-Aware Multi-Modal
Localization | 14 pages, 6 figures | null | null | null | cs.RO cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Reliable localization is critical for robot navigation in complex indoor
environments. In this paper, we propose an uncertainty-aware localization
method that enhances the reliability of localization outputs without modifying
the prediction model itself. This study introduces a percentile-based rejection
strategy that filters out unreliable 3-DoF pose predictions based on aleatoric
and epistemic uncertainties the network estimates. We apply this approach to a
multi-modal end-to-end localization framework that fuses RGB images and 2D LiDAR data,
and we evaluate it across three real-world datasets collected using a
commercialized serving robot. Experimental results show that applying stricter
uncertainty thresholds consistently improves pose accuracy. Specifically, the
mean position error is reduced by 41.0%, 56.7%, and 69.4%, and the mean
orientation error by 55.6%, 65.7%, and 73.3%, when applying 90%, 80%, and 70%
thresholds, respectively. Furthermore, the rejection strategy effectively
removes extreme outliers, resulting in better alignment with ground truth
trajectories. To the best of our knowledge, this is the first study to
quantitatively demonstrate the benefits of percentile-based uncertainty
rejection in multi-modal end-to-end localization tasks. Our approach provides a
practical means to enhance the reliability and accuracy of localization systems
in real-world deployments.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 12:07:24 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Won",
"Hye-Min",
""
],
[
"Lee",
"Jieun",
""
],
[
"Oh",
"Jiyong",
""
]
] | TITLE: Localization Meets Uncertainty: Uncertainty-Aware Multi-Modal
Localization
ABSTRACT: Reliable localization is critical for robot navigation in complex indoor
environments. In this paper, we propose an uncertainty-aware localization
method that enhances the reliability of localization outputs without modifying
the prediction model itself. This study introduces a percentile-based rejection
strategy that filters out unreliable 3-DoF pose predictions based on aleatoric
and epistemic uncertainties the network estimates. We apply this approach to a
multi-modal end-to-end localization framework that fuses RGB images and 2D LiDAR data,
and we evaluate it across three real-world datasets collected using a
commercialized serving robot. Experimental results show that applying stricter
uncertainty thresholds consistently improves pose accuracy. Specifically, the
mean position error is reduced by 41.0%, 56.7%, and 69.4%, and the mean
orientation error by 55.6%, 65.7%, and 73.3%, when applying 90%, 80%, and 70%
thresholds, respectively. Furthermore, the rejection strategy effectively
removes extreme outliers, resulting in better alignment with ground truth
trajectories. To the best of our knowledge, this is the first study to
quantitatively demonstrate the benefits of percentile-based uncertainty
rejection in multi-modal end-to-end localization tasks. Our approach provides a
practical means to enhance the reliability and accuracy of localization systems
in real-world deployments.
|
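The percentile-based rejection strategy in the abstract above (2504.07677) reduces, in essence, to thresholding a per-prediction uncertainty score at a chosen percentile and discarding everything above it. A minimal sketch, assuming the aleatoric and epistemic terms are simply summed into one score; the paper's exact combination rule is not stated here:

import numpy as np

def reject_by_percentile(poses, aleatoric, epistemic, keep_percent=90.0):
    """Keep only the keep_percent% most certain 3-DoF pose predictions.

    poses:     (N, 3) predicted (x, y, yaw) poses
    aleatoric: (N,) aleatoric uncertainty per prediction
    epistemic: (N,) epistemic uncertainty per prediction
    """
    # Assumed scoring rule: total uncertainty = aleatoric + epistemic.
    total = np.asarray(aleatoric) + np.asarray(epistemic)
    # Predictions above the keep_percent-th percentile are rejected.
    mask = total <= np.percentile(total, keep_percent)
    return poses[mask], mask

# Stricter thresholds (90% -> 70%) keep fewer but more certain poses.
rng = np.random.default_rng(0)
poses, alea, epis = rng.normal(size=(100, 3)), rng.random(100), rng.random(100)
for p in (90, 80, 70):
    kept, _ = reject_by_percentile(poses, alea, epis, keep_percent=p)
    print(f"{p}% threshold -> {len(kept)} poses kept")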
2504.07687 | Yihao Wang | Yihao Wang, Zhong Qian, Peifeng Li | FMNV: A Dataset of Media-Published News Videos for Fake News Detection | null | null | null | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | News media, particularly video-based platforms, have become deeply embedded
in daily life, concurrently amplifying risks of misinformation dissemination.
Consequently, multimodal fake news detection has garnered significant research
attention. However, existing datasets predominantly comprise user-generated
videos characterized by crude editing and limited public engagement, whereas
professionally crafted fake news videos disseminated by media outlets, often
politically or virally motivated, pose substantially greater societal harm. To
address this gap, we construct FMNV, a novel dataset exclusively composed of
news videos published by media organizations. Through empirical analysis of
existing datasets and our curated collection, we categorize fake news videos
into four distinct types. Building upon this taxonomy, we employ Large Language
Models (LLMs) to automatically generate deceptive content by manipulating
authentic media-published news videos. Furthermore, we propose FMNVD, a
baseline model featuring a dual-stream architecture integrating CLIP and Faster
R-CNN for video feature extraction, enhanced by co-attention mechanisms for
feature refinement and multimodal aggregation. Comparative experiments
demonstrate both the generalization capability of FMNV across multiple
baselines and the superior detection efficacy of FMNVD. This work establishes
critical benchmarks for detecting high-impact fake news in media ecosystems
while advancing methodologies for cross-modal inconsistency analysis.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 12:16:32 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Wang",
"Yihao",
""
],
[
"Qian",
"Zhong",
""
],
[
"Li",
"Peifeng",
""
]
] | TITLE: FMNV: A Dataset of Media-Published News Videos for Fake News Detection
ABSTRACT: News media, particularly video-based platforms, have become deeply embedded
in daily life, concurrently amplifying risks of misinformation dissemination.
Consequently, multimodal fake news detection has garnered significant research
attention. However, existing datasets predominantly comprise user-generated
videos characterized by crude editing and limited public engagement, whereas
professionally crafted fake news videos disseminated by media outlets, often
politically or virally motivated, pose substantially greater societal harm. To
address this gap, we construct FMNV, a novel dataset exclusively composed of
news videos published by media organizations. Through empirical analysis of
existing datasets and our curated collection, we categorize fake news videos
into four distinct types. Building upon this taxonomy, we employ Large Language
Models (LLMs) to automatically generate deceptive content by manipulating
authentic media-published news videos. Furthermore, we propose FMNVD, a
baseline model featuring a dual-stream architecture integrating CLIP and Faster
R-CNN for video feature extraction, enhanced by co-attention mechanisms for
feature refinement and multimodal aggregation. Comparative experiments
demonstrate both the generalization capability of FMNV across multiple
baselines and the superior detection efficacy of FMNVD. This work establishes
critical benchmarks for detecting high-impact fake news in media ecosystems
while advancing methodologies for cross-modal inconsistency analysis.
|
2504.07698 | Shiki Sato | Shiki Sato, Jun Baba, Asahi Hentona, Shinji Iwata, Akifumi Yoshimoto,
Koichiro Yoshino | Proactive User Information Acquisition via Chats on User-Favored Topics | 23 pages | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Chat-oriented dialogue systems designed to provide tangible benefits, such as
sharing the latest news or preventing frailty in senior citizens, often require
Proactive acquisition of specific user Information via chats on user-faVOred
Topics (PIVOT). This study proposes the PIVOT task, designed to advance the
technical foundation for these systems. In this task, a system needs to acquire
a user's answers to predefined questions without making the conversation feel
abrupt while engaging in a chat on a predefined topic. We found that even
recent large language models (LLMs) show a low success rate in the PIVOT task.
We constructed a dataset suitable for the analysis to develop more effective
systems. Finally, we developed a simple but effective system for this task by
incorporating insights obtained through the analysis of this dataset.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 12:32:16 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Sato",
"Shiki",
""
],
[
"Baba",
"Jun",
""
],
[
"Hentona",
"Asahi",
""
],
[
"Iwata",
"Shinji",
""
],
[
"Yoshimoto",
"Akifumi",
""
],
[
"Yoshino",
"Koichiro",
""
]
] | TITLE: Proactive User Information Acquisition via Chats on User-Favored Topics
ABSTRACT: Chat-oriented dialogue systems designed to provide tangible benefits, such as
sharing the latest news or preventing frailty in senior citizens, often require
Proactive acquisition of specific user Information via chats on user-faVOred
Topics (PIVOT). This study proposes the PIVOT task, designed to advance the
technical foundation for these systems. In this task, a system needs to acquire
a user's answers to predefined questions without making the conversation feel
abrupt while engaging in a chat on a predefined topic. We found that even
recent large language models (LLMs) show a low success rate in the PIVOT task.
We constructed a dataset suitable for the analysis to develop more effective
systems. Finally, we developed a simple but effective system for this task by
incorporating insights obtained through the analysis of this dataset.
|
2504.07717 | Yang Jiao | Yang Jiao, Xiaodong Wang, Kai Yang | PR-Attack: Coordinated Prompt-RAG Attacks on Retrieval-Augmented
Generation in Large Language Models via Bilevel Optimization | Accepted at SIGIR 2025 | null | null | null | cs.CR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have demonstrated remarkable performance across
a wide range of applications, e.g., medical question-answering, mathematical
sciences, and code generation. However, they also exhibit inherent limitations,
such as outdated knowledge and susceptibility to hallucinations.
Retrieval-Augmented Generation (RAG) has emerged as a promising paradigm to
address these issues, but it also introduces new vulnerabilities. Recent
efforts have focused on the security of RAG-based LLMs, yet existing attack
methods face three critical challenges: (1) their effectiveness declines
sharply when only a limited number of poisoned texts can be injected into the
knowledge database, (2) they lack sufficient stealth, as the attacks are often
detectable by anomaly detection systems, which compromises their effectiveness,
and (3) they rely on heuristic approaches to generate poisoned texts, lacking
formal optimization frameworks and theoretical guarantees, which limits their
effectiveness and applicability. To address these issues, we propose the
coordinated Prompt-RAG Attack (PR-Attack), a novel optimization-driven attack
that introduces a small number of poisoned texts into the knowledge database
while embedding a backdoor trigger within the prompt. When activated, the
trigger causes the LLM to generate pre-designed responses to targeted queries,
while maintaining normal behavior in other contexts. This ensures both high
effectiveness and stealth. We formulate the attack generation process as a
bilevel optimization problem leveraging a principled optimization framework to
develop optimal poisoned texts and triggers. Extensive experiments across
diverse LLMs and datasets demonstrate the effectiveness of PR-Attack, achieving
a high attack success rate even with a limited number of poisoned texts and
significantly improved stealth compared to existing methods.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 13:09:50 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Jiao",
"Yang",
""
],
[
"Wang",
"Xiaodong",
""
],
[
"Yang",
"Kai",
""
]
] | TITLE: PR-Attack: Coordinated Prompt-RAG Attacks on Retrieval-Augmented
Generation in Large Language Models via Bilevel Optimization
ABSTRACT: Large Language Models (LLMs) have demonstrated remarkable performance across
a wide range of applications, e.g., medical question-answering, mathematical
sciences, and code generation. However, they also exhibit inherent limitations,
such as outdated knowledge and susceptibility to hallucinations.
Retrieval-Augmented Generation (RAG) has emerged as a promising paradigm to
address these issues, but it also introduces new vulnerabilities. Recent
efforts have focused on the security of RAG-based LLMs, yet existing attack
methods face three critical challenges: (1) their effectiveness declines
sharply when only a limited number of poisoned texts can be injected into the
knowledge database, (2) they lack sufficient stealth, as the attacks are often
detectable by anomaly detection systems, which compromises their effectiveness,
and (3) they rely on heuristic approaches to generate poisoned texts, lacking
formal optimization frameworks and theoretical guarantees, which limits their
effectiveness and applicability. To address these issues, we propose the
coordinated Prompt-RAG Attack (PR-Attack), a novel optimization-driven attack
that introduces a small number of poisoned texts into the knowledge database
while embedding a backdoor trigger within the prompt. When activated, the
trigger causes the LLM to generate pre-designed responses to targeted queries,
while maintaining normal behavior in other contexts. This ensures both high
effectiveness and stealth. We formulate the attack generation process as a
bilevel optimization problem leveraging a principled optimization framework to
develop optimal poisoned texts and triggers. Extensive experiments across
diverse LLMs and datasets demonstrate the effectiveness of PR-Attack, achieving
a high attack success rate even with a limited number of poisoned texts and
significantly improved stealth compared to existing methods.
|
2504.07718 | Zehong Ma | Zehong Ma, Hao Chen, Wei Zeng, Limin Su, and Shiliang Zhang | Multi-modal Reference Learning for Fine-grained Text-to-Image Retrieval | TMM25 | null | 10.1109/TMM.2025.3543066 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fine-grained text-to-image retrieval aims to retrieve a fine-grained target
image with a given text query. Existing methods typically assume that each
training image is accurately depicted by its textual descriptions. However,
textual descriptions can be ambiguous and fail to depict discriminative visual
details in images, leading to inaccurate representation learning. To alleviate
the effects of text ambiguity, we propose a Multi-Modal Reference learning
framework to learn robust representations. We first propose a multi-modal
reference construction module to aggregate all visual and textual details of
the same object into a comprehensive multi-modal reference. The multi-modal
reference hence facilitates the subsequent representation learning and
retrieval similarity computation. Specifically, a reference-guided
representation learning module is proposed to use multi-modal references to
learn more accurate visual and textual representations. Additionally, we
introduce a reference-based refinement method that employs the object
references to compute a reference-based similarity that refines the initial
retrieval results. Extensive experiments are conducted on five fine-grained
text-to-image retrieval datasets for different text-to-image retrieval tasks.
The proposed method has achieved superior performance over state-of-the-art
methods. For instance, on the text-to-person image retrieval dataset RSTPReid,
our method achieves the Rank1 accuracy of 56.2\%, surpassing the recent CFine
by 5.6\%.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 13:09:52 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Ma",
"Zehong",
""
],
[
"Chen",
"Hao",
""
],
[
"Zeng",
"Wei",
""
],
[
"Su",
"Limin",
""
],
[
"Zhang",
"Shiliang",
""
]
] | TITLE: Multi-modal Reference Learning for Fine-grained Text-to-Image Retrieval
ABSTRACT: Fine-grained text-to-image retrieval aims to retrieve a fine-grained target
image with a given text query. Existing methods typically assume that each
training image is accurately depicted by its textual descriptions. However,
textual descriptions can be ambiguous and fail to depict discriminative visual
details in images, leading to inaccurate representation learning. To alleviate
the effects of text ambiguity, we propose a Multi-Modal Reference learning
framework to learn robust representations. We first propose a multi-modal
reference construction module to aggregate all visual and textual details of
the same object into a comprehensive multi-modal reference. The multi-modal
reference hence facilitates the subsequent representation learning and
retrieval similarity computation. Specifically, a reference-guided
representation learning module is proposed to use multi-modal references to
learn more accurate visual and textual representations. Additionally, we
introduce a reference-based refinement method that employs the object
references to compute a reference-based similarity that refines the initial
retrieval results. Extensive experiments are conducted on five fine-grained
text-to-image retrieval datasets for different text-to-image retrieval tasks.
The proposed method has achieved superior performance over state-of-the-art
methods. For instance, on the text-to-person image retrieval dataset RSTPReid,
our method achieves the Rank1 accuracy of 56.2\%, surpassing the recent CFine
by 5.6\%.
|
2504.07724 | Penglei Sun | Yixiang Chen, Penglei Sun, Xiang Li and Xiaowen Chu | MRD-RAG: Enhancing Medical Diagnosis with Multi-Round
Retrieval-Augmented Generation | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | In recent years, accurately and quickly deploying medical large language
models (LLMs) has become a significant trend. Among these, retrieval-augmented
generation (RAG) has garnered significant attention due to its features of
rapid deployment and privacy protection. However, existing medical RAG
frameworks still have shortcomings. Most existing medical RAG frameworks are
designed for single-round question answering tasks and are not suitable for
multi-round diagnostic dialogue. On the other hand, existing medical
multi-round RAG frameworks do not consider the interconnections between
potential diseases to inquire precisely like a doctor. To address these issues,
we propose a Multi-Round Diagnostic RAG (MRD-RAG) framework that mimics the
doctor's diagnostic process. This RAG framework can analyze the diagnostic
information of potential diseases and accurately conduct multi-round diagnosis
like a doctor. To evaluate the effectiveness of our proposed framework, we
conduct experiments on two modern medical datasets and two traditional Chinese
medicine datasets, with evaluations by GPT and human doctors on different
methods. The results indicate that our RAG framework can significantly enhance
the diagnostic performance of LLMs, highlighting the potential of our approach
in medical diagnosis. The code and data can be found on our project website
https://github.com/YixiangCh/MRD-RAG/tree/master.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 13:17:51 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Chen",
"Yixiang",
""
],
[
"Sun",
"Penglei",
""
],
[
"Li",
"Xiang",
""
],
[
"Chu",
"Xiaowen",
""
]
] | TITLE: MRD-RAG: Enhancing Medical Diagnosis with Multi-Round
Retrieval-Augmented Generation
ABSTRACT: In recent years, accurately and quickly deploying medical large language
models (LLMs) has become a significant trend. Among these, retrieval-augmented
generation (RAG) has garnered significant attention due to its features of
rapid deployment and privacy protection. However, existing medical RAG
frameworks still have shortcomings. Most existing medical RAG frameworks are
designed for single-round question answering tasks and are not suitable for
multi-round diagnostic dialogue. On the other hand, existing medical
multi-round RAG frameworks do not consider the interconnections between
potential diseases to inquire precisely like a doctor. To address these issues,
we propose a Multi-Round Diagnostic RAG (MRD-RAG) framework that mimics the
doctor's diagnostic process. This RAG framework can analyze the diagnostic
information of potential diseases and accurately conduct multi-round diagnosis
like a doctor. To evaluate the effectiveness of our proposed framework, we
conduct experiments on two modern medical datasets and two traditional Chinese
medicine datasets, with evaluations by GPT and human doctors on different
methods. The results indicate that our RAG framework can significantly enhance
the diagnostic performance of LLMs, highlighting the potential of our approach
in medical diagnosis. The code and data can be found on our project website
https://github.com/YixiangCh/MRD-RAG/tree/master.
|
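The multi-round behaviour described in the MRD-RAG abstract above (2504.07724) amounts to a retrieve-then-ask loop over a growing dialogue state. The skeleton below is a generic illustration under that reading, with retrieve and generate as placeholder callables (a retriever over disease entries and an LLM); it is not the MRD-RAG implementation, and the "Diagnosis:" stop convention is an assumption:

def multi_round_diagnosis(patient_reply, retrieve, generate, max_rounds=5):
    """Generic multi-round RAG loop.

    patient_reply(question)     -> the patient's answer (or a simulator)
    retrieve(history)           -> knowledge passages for the current dialogue
    generate(history, passages) -> next doctor utterance
    """
    history = []
    for _ in range(max_rounds):
        passages = retrieve(history)             # condition on the full dialogue
        utterance = generate(history, passages)  # ask a question or conclude
        history.append(("doctor", utterance))
        if utterance.startswith("Diagnosis:"):   # assumed stop signal
            break
        history.append(("patient", patient_reply(utterance)))
    return history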
2504.07726 | Riya Bansal | Riya Bansal, Nikhil Kumar Rajput | Quantum Machine Learning: Unveiling Trends, Impacts through Bibliometric
Analysis | null | null | null | null | cs.DL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quantum Machine Learning (QML) is the intersection of two revolutionary
fields: quantum computing and machine learning. It promises to unlock
unparalleled capabilities in data analysis, model building, and problem-solving
by harnessing the unique properties of quantum mechanics. This research
endeavors to conduct a comprehensive bibliometric analysis of scientific
information pertaining to QML covering the period from 2000 to 2023. An
extensive dataset comprising 9493 scholarly works is meticulously examined to
unveil notable trends, impact factors, and funding patterns within the domain.
Additionally, the study employs bibliometric mapping techniques to visually
illustrate the network relationships among key countries, institutions,
authors, patent citations and significant keywords in QML research. The
analysis reveals a consistent growth in publications over the examined period.
The findings highlight the United States and China as prominent contributors,
exhibiting substantial publication and citation metrics. Notably, the study
concludes that QML, as a research subject, is currently in a formative stage,
characterized by robust scholarly activity and ongoing development.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 13:18:48 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Bansal",
"Riya",
""
],
[
"Rajput",
"Nikhil Kumar",
""
]
] | TITLE: Quantum Machine Learning: Unveiling Trends, Impacts through Bibliometric
Analysis
ABSTRACT: Quantum Machine Learning (QML) is the intersection of two revolutionary
fields: quantum computing and machine learning. It promises to unlock
unparalleled capabilities in data analysis, model building, and problem-solving
by harnessing the unique properties of quantum mechanics. This research
endeavors to conduct a comprehensive bibliometric analysis of scientific
information pertaining to QML covering the period from 2000 to 2023. An
extensive dataset comprising 9493 scholarly works is meticulously examined to
unveil notable trends, impact factors, and funding patterns within the domain.
Additionally, the study employs bibliometric mapping techniques to visually
illustrate the network relationships among key countries, institutions,
authors, patent citations and significant keywords in QML research. The
analysis reveals a consistent growth in publications over the examined period.
The findings highlight the United States and China as prominent contributors,
exhibiting substantial publication and citation metrics. Notably, the study
concludes that QML, as a research subject, is currently in a formative stage,
characterized by robust scholarly activity and ongoing development.
|
2504.07729 | Tejas Sudharshan Mathai | Nicole Tran, Anisa Prasad, Yan Zhuang, Tejas Sudharshan Mathai, Boah
Kim, Sydney Lewis, Pritam Mukherjee, Jianfei Liu, Ronald M. Summers | Benchmarking Multi-Organ Segmentation Tools for Multi-Parametric
T1-weighted Abdominal MRI | Published at SPIE Medical Imaging 2025 | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The segmentation of multiple organs in multi-parametric MRI studies is
critical for many applications in radiology, such as correlating imaging
biomarkers with disease status (e.g., cirrhosis, diabetes). Recently, three
publicly available tools, namely MRSegmentator (MRSeg), TotalSegmentator MRI
(TS), and TotalVibeSegmentator (VIBE), have been proposed for multi-organ
segmentation in MRI. However, the performance of these tools on specific MRI
sequence types has not yet been quantified. In this work, a subset of 40
volumes from the public Duke Liver Dataset was curated. The curated dataset
contained 10 volumes each from the pre-contrast fat-saturated T1, arterial T1w,
venous T1w, and delayed T1w phases, respectively. Ten abdominal structures were
manually annotated in these volumes. Next, the performance of the three public
tools was benchmarked on this curated dataset. The results indicated that MRSeg
obtained a Dice score of 80.7 $\pm$ 18.6 and Hausdorff Distance (HD) error of
8.9 $\pm$ 10.4 mm. It fared the best ($p < .05$) across the different sequence
types in contrast to TS and VIBE.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 13:27:27 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Tran",
"Nicole",
""
],
[
"Prasad",
"Anisa",
""
],
[
"Zhuang",
"Yan",
""
],
[
"Mathai",
"Tejas Sudharshan",
""
],
[
"Kim",
"Boah",
""
],
[
"Lewis",
"Sydney",
""
],
[
"Mukherjee",
"Pritam",
""
],
[
"Liu",
"Jianfei",
""
],
[
"Summers",
"Ronald M.",
""
]
] | TITLE: Benchmarking Multi-Organ Segmentation Tools for Multi-Parametric
T1-weighted Abdominal MRI
ABSTRACT: The segmentation of multiple organs in multi-parametric MRI studies is
critical for many applications in radiology, such as correlating imaging
biomarkers with disease status (e.g., cirrhosis, diabetes). Recently, three
publicly available tools, namely MRSegmentator (MRSeg), TotalSegmentator MRI
(TS), and TotalVibeSegmentator (VIBE), have been proposed for multi-organ
segmentation in MRI. However, the performance of these tools on specific MRI
sequence types has not yet been quantified. In this work, a subset of 40
volumes from the public Duke Liver Dataset was curated. The curated dataset
contained 10 volumes each from the pre-contrast fat-saturated T1, arterial T1w,
venous T1w, and delayed T1w phases, respectively. Ten abdominal structures were
manually annotated in these volumes. Next, the performance of the three public
tools was benchmarked on this curated dataset. The results indicated that MRSeg
obtained a Dice score of 80.7 $\pm$ 18.6 and Hausdorff Distance (HD) error of
8.9 $\pm$ 10.4 mm. It fared the best ($p < .05$) across the different sequence
types in contrast to TS and VIBE.
|
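The headline metric in the benchmarking abstract above (2504.07729) is the Dice score, a volumetric overlap measure between a predicted and a ground-truth mask. A minimal sketch of a per-organ Dice computation (the standard formula, not the study's actual evaluation code):

import numpy as np

def dice_score(pred, gt, eps=1e-8):
    """Dice = 2*|A & B| / (|A| + |B|) for two binary masks of any shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    # eps guards against the degenerate case of two empty masks.
    return 2.0 * intersection / (pred.sum() + gt.sum() + eps)

# Toy 3D example: two masks overlapping on one slice -> Dice = 0.5.
pred = np.zeros((4, 4, 4), dtype=bool); pred[:2] = True
gt = np.zeros((4, 4, 4), dtype=bool); gt[1:3] = True
print(round(float(dice_score(pred, gt)), 3))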
2504.07740 | Keyu Liang | Keyu Liang, Zhongxin Liu, Chao Liu, Zhiyuan Wan, David Lo and Xiaohu
Yang | Zero-Shot Cross-Domain Code Search without Fine-Tuning | null | null | 10.1145/3729357 | null | cs.SE cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Code search aims to retrieve semantically relevant code snippets for natural
language queries. While pre-trained language models (PLMs) have shown
remarkable performance in this task, they struggle in cross-domain scenarios,
often requiring costly fine-tuning or facing performance drops in zero-shot
settings. RAPID, which generates synthetic data for model fine-tuning, is
currently the only effective method for zero-shot cross-domain code search.
Despite its effectiveness, RAPID demands substantial computational resources
for fine-tuning and needs to maintain specialized models for each domain,
underscoring the need for a zero-shot, fine-tuning-free approach for
cross-domain code search.
The key to tackling zero-shot cross-domain code search lies in bridging the
gaps among domains. In this work, we propose to break the query-code matching
process of code search into two simpler tasks: query-comment matching and
code-code matching. Our empirical study reveals the strong complementarity
among the three matching schemas in zero-shot cross-domain settings, i.e.,
query-code, query-comment, and code-code matching. Based on the findings, we
propose CodeBridge, a zero-shot, fine-tuning-free approach for cross-domain
code search. Specifically, CodeBridge uses Large Language Models (LLMs) to
generate comments and pseudo-code, then combines query-code, query-comment, and
code-code matching via PLM-based similarity scoring and sampling-based fusion.
Experimental results show that our approach outperforms the state-of-the-art
PLM-based code search approaches, i.e., CoCoSoDa and UniXcoder, by an average
of 21.4% and 24.9% in MRR, respectively, across three datasets. Our approach
also yields results that are better than or comparable to those of the
zero-shot cross-domain code search approach RAPID, which requires costly
fine-tuning.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 13:36:37 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Liang",
"Keyu",
""
],
[
"Liu",
"Zhongxin",
""
],
[
"Liu",
"Chao",
""
],
[
"Wan",
"Zhiyuan",
""
],
[
"Lo",
"David",
""
],
[
"Yang",
"Xiaohu",
""
]
] | TITLE: Zero-Shot Cross-Domain Code Search without Fine-Tuning
ABSTRACT: Code search aims to retrieve semantically relevant code snippets for natural
language queries. While pre-trained language models (PLMs) have shown
remarkable performance in this task, they struggle in cross-domain scenarios,
often requiring costly fine-tuning or facing performance drops in zero-shot
settings. RAPID, which generates synthetic data for model fine-tuning, is
currently the only effective method for zero-shot cross-domain code search.
Despite its effectiveness, RAPID demands substantial computational resources
for fine-tuning and needs to maintain specialized models for each domain,
underscoring the need for a zero-shot, fine-tuning-free approach for
cross-domain code search.
The key to tackling zero-shot cross-domain code search lies in bridging the
gaps among domains. In this work, we propose to break the query-code matching
process of code search into two simpler tasks: query-comment matching and
code-code matching. Our empirical study reveals the strong complementarity
among the three matching schemas in zero-shot cross-domain settings, i.e.,
query-code, query-comment, and code-code matching. Based on the findings, we
propose CodeBridge, a zero-shot, fine-tuning-free approach for cross-domain
code search. Specifically, CodeBridge uses Large Language Models (LLMs) to
generate comments and pseudo-code, then combines query-code, query-comment, and
code-code matching via PLM-based similarity scoring and sampling-based fusion.
Experimental results show that our approach outperforms the state-of-the-art
PLM-based code search approaches, i.e., CoCoSoDa and UniXcoder, by an average
of 21.4% and 24.9% in MRR, respectively, across three datasets. Our approach
also yields results that are better than or comparable to those of the
zero-shot cross-domain code search approach RAPID, which requires costly
fine-tuning.
|
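The fusion step in the CodeBridge abstract above (2504.07740) scores a candidate snippet under three matching schemas and combines them. A minimal score-level sketch, assuming embeddings come from some PLM encoder and using a plain weighted average; the paper's actual sampling-based fusion is more involved:

import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def fused_score(query_emb, code_emb, comment_emb, pseudo_code_emb,
                weights=(1.0, 1.0, 1.0)):
    """Combine the three matching schemas for one (query, candidate) pair.

    comment_emb:     embedding of an LLM-generated comment for the candidate
    pseudo_code_emb: embedding of LLM-generated pseudo-code for the query
    """
    scores = [
        cosine(query_emb, code_emb),        # query-code matching
        cosine(query_emb, comment_emb),     # query-comment matching
        cosine(pseudo_code_emb, code_emb),  # code-code matching
    ]
    w = np.asarray(weights, dtype=float)
    return float(np.dot(w, scores) / w.sum())

# Rank candidates by fused_score and return the top-k as retrieval results.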
2504.07742 | Haowei Wang | Qiyu Wei, Haowei Wang, Zirui Cao, Songhao Wang, Richard Allmendinger,
Mauricio A \'Alvarez | Gradient-based Sample Selection for Faster Bayesian Optimization | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bayesian optimization (BO) is an effective technique for black-box
optimization. However, its applicability is typically limited to
moderate-budget problems due to the cubic complexity in computing the Gaussian
process (GP) surrogate model. In large-budget scenarios, directly employing the
standard GP model faces significant challenges in computational time and
resource requirements. In this paper, we propose a novel approach,
gradient-based sample selection Bayesian Optimization (GSSBO), to enhance the
computational efficiency of BO. The GP model is constructed on a selected set
of samples instead of the whole dataset. These samples are selected by
leveraging gradient information to maintain diversity and representativeness. We
provide a theoretical analysis of the gradient-based sample selection strategy
and obtain explicit sublinear regret bounds for our proposed framework.
Extensive experiments on synthetic and real-world tasks demonstrate that our
approach significantly reduces the computational cost of GP fitting in BO while
maintaining optimization performance comparable to baseline methods.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 13:38:15 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Wei",
"Qiyu",
""
],
[
"Wang",
"Haowei",
""
],
[
"Cao",
"Zirui",
""
],
[
"Wang",
"Songhao",
""
],
[
"Allmendinger",
"Richard",
""
],
[
"Álvarez",
"Mauricio A",
""
]
] | TITLE: Gradient-based Sample Selection for Faster Bayesian Optimization
ABSTRACT: Bayesian optimization (BO) is an effective technique for black-box
optimization. However, its applicability is typically limited to
moderate-budget problems due to the cubic complexity in computing the Gaussian
process (GP) surrogate model. In large-budget scenarios, directly employing the
standard GP model faces significant challenges in computational time and
resource requirements. In this paper, we propose a novel approach,
gradient-based sample selection Bayesian Optimization (GSSBO), to enhance the
computational efficiency of BO. The GP model is constructed on a selected set
of samples instead of the whole dataset. These samples are selected by
leveraging gradient information to maintain diversity and representativeness. We
provide a theoretical analysis of the gradient-based sample selection strategy
and obtain explicit sublinear regret bounds for our proposed framework.
Extensive experiments on synthetic and real-world tasks demonstrate that our
approach significantly reduces the computational cost of GP fitting in BO while
maintaining optimization performance comparable to baseline methods.
|
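The core idea in the GSSBO abstract above (2504.07742) is to fit the GP on a gradient-informed subset instead of all observations. The sketch below illustrates that idea with a crude finite-difference gradient proxy plus random padding for diversity; it is an illustrative stand-in, not the paper's selection rule:

import numpy as np

def select_samples(X, y, k, rng=None):
    """Pick k of n observations to fit the GP surrogate on.

    Gradient proxy: for each point, the largest |dy|/|dx| to any other
    point; steep-region points are kept, the rest is filled randomly.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    n = len(X)
    # Pairwise distances; the identity term avoids division by zero on
    # the diagonal (where the numerator |dy| is zero anyway).
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1) + np.eye(n)
    grad_proxy = (np.abs(y[:, None] - y[None, :]) / dist).max(axis=1)
    steep = np.argsort(grad_proxy)[-(k // 2):]       # steepest half
    rest = np.setdiff1d(np.arange(n), steep)
    random_part = rng.choice(rest, size=k - len(steep), replace=False)
    return np.concatenate([steep, random_part])

# Inside the BO loop: fit the GP on X[idx], y[idx] rather than all of X.
X = np.random.default_rng(1).uniform(-2, 2, size=(200, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1])
print(select_samples(X, y, k=40).shape)   # (40,)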
2504.07744 | Jenna Kline | Jenna Kline, Samuel Stevens, Guy Maalouf, Camille Rondeau Saint-Jean,
Dat Nguyen Ngoc, Majid Mirmehdi, David Guerin, Tilo Burghardt, Elzbieta
Pastucha, Blair Costelloe, Matthew Watson, Thomas Richardson, and Ulrik Pagh
Schultz Lundquist | MMLA: Multi-Environment, Multi-Species, Low-Altitude Aerial Footage
Dataset | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Real-time wildlife detection in drone imagery is critical for numerous
applications, including animal ecology, conservation, and biodiversity
monitoring. Low-altitude drone missions are effective for collecting
fine-grained animal movement and behavior data, particularly if missions are
automated for increased speed and consistency. However, little work exists on
evaluating computer vision models on low-altitude aerial imagery and
generalizability across different species and settings. To fill this gap, we
present a novel multi-environment, multi-species, low-altitude aerial footage
(MMLA) dataset. MMLA consists of drone footage collected across three diverse
environments: Ol Pejeta Conservancy and Mpala Research Centre in Kenya, and The
Wilds Conservation Center in Ohio, and includes five species: Plains zebras,
Grevy's zebras, giraffes, onagers, and African Painted Dogs. We comprehensively
evaluate three YOLO models (YOLOv5m, YOLOv8m, and YOLOv11m) for detecting
animals. Results demonstrate significant performance disparities across
locations and species-specific detection variations. Our work highlights the
importance of evaluating detection algorithms across different environments for
robust wildlife monitoring applications using drones.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 13:40:27 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Kline",
"Jenna",
""
],
[
"Stevens",
"Samuel",
""
],
[
"Maalouf",
"Guy",
""
],
[
"Saint-Jean",
"Camille Rondeau",
""
],
[
"Ngoc",
"Dat Nguyen",
""
],
[
"Mirmehdi",
"Majid",
""
],
[
"Guerin",
"David",
""
],
[
"Burghardt",
"Tilo",
""
],
[
"Pastucha",
"Elzbieta",
""
],
[
"Costelloe",
"Blair",
""
],
[
"Watson",
"Matthew",
""
],
[
"Richardson",
"Thomas",
""
],
[
"Lundquist",
"Ulrik Pagh Schultz",
""
]
] | TITLE: MMLA: Multi-Environment, Multi-Species, Low-Altitude Aerial Footage
Dataset
ABSTRACT: Real-time wildlife detection in drone imagery is critical for numerous
applications, including animal ecology, conservation, and biodiversity
monitoring. Low-altitude drone missions are effective for collecting
fine-grained animal movement and behavior data, particularly if missions are
automated for increased speed and consistency. However, little work exists on
evaluating computer vision models on low-altitude aerial imagery and
generalizability across different species and settings. To fill this gap, we
present a novel multi-environment, multi-species, low-altitude aerial footage
(MMLA) dataset. MMLA consists of drone footage collected across three diverse
environments: Ol Pejeta Conservancy and Mpala Research Centre in Kenya, and The
Wilds Conservation Center in Ohio, and includes five species: Plains zebras,
Grevy's zebras, giraffes, onagers, and African Painted Dogs. We comprehensively
evaluate three YOLO models (YOLOv5m, YOLOv8m, and YOLOv11m) for detecting
animals. Results demonstrate significant performance disparities across
locations and species-specific detection variations. Our work highlights the
importance of evaluating detection algorithms across different environments for
robust wildlife monitoring applications using drones.
|
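Benchmarking the three detectors named in the MMLA abstract above (2504.07744) with the ultralytics package follows the usual val() pattern sketched below. The weight names are standard ultralytics checkpoints, but "mmla.yaml" is a hypothetical dataset config; the dataset's actual layout is not specified here:

from ultralytics import YOLO

# Compare three YOLO generations on the same test split.
for weights in ("yolov5mu.pt", "yolov8m.pt", "yolo11m.pt"):
    model = YOLO(weights)                 # downloads the checkpoint if absent
    metrics = model.val(data="mmla.yaml", split="test")  # hypothetical config
    print(weights, "mAP50-95:", metrics.box.map)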
2504.07745 | Zikai Song | Yangliu Hu, Zikai Song, Na Feng, Yawei Luo, Junqing Yu, Yi-Ping Phoebe
Chen, Wei Yang | SF2T: Self-supervised Fragment Finetuning of Video-LLMs for Fine-Grained
Understanding | Accepted to CVPR2025 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video-based Large Language Models (Video-LLMs) have witnessed substantial
advancements in recent years, propelled by the advancement in multi-modal LLMs.
Although these models have demonstrated proficiency in providing the overall
description of videos, they struggle with fine-grained understanding,
particularly in aspects such as visual dynamics and video details inquiries. To
tackle these shortcomings, we find that fine-tuning Video-LLMs on
self-supervised fragment tasks greatly improves their fine-grained video
understanding abilities. Hence we propose two key contributions: (1)
Self-Supervised Fragment Fine-Tuning (SF$^2$T), a novel effortless fine-tuning
method, employs the rich inherent characteristics of videos for training, while
unlocking more fine-grained understanding ability of Video-LLMs. Moreover, it
relieves researchers from labor-intensive annotations and smartly circumvents
the limitations of natural language, which often fails to capture the complex
spatiotemporal variations in videos; (2) A novel benchmark dataset, namely
FineVidBench, for rigorously assessing Video-LLMs' performance at both the
scene and fragment levels, offering a comprehensive evaluation of their
capabilities. We assessed multiple models and validated the effectiveness of
SF$^2$T on them. Experimental results reveal that our approach improves their
ability to capture and interpret spatiotemporal details.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 13:40:34 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Hu",
"Yangliu",
""
],
[
"Song",
"Zikai",
""
],
[
"Feng",
"Na",
""
],
[
"Luo",
"Yawei",
""
],
[
"Yu",
"Junqing",
""
],
[
"Chen",
"Yi-Ping Phoebe",
""
],
[
"Yang",
"Wei",
""
]
] | TITLE: SF2T: Self-supervised Fragment Finetuning of Video-LLMs for Fine-Grained
Understanding
ABSTRACT: Video-based Large Language Models (Video-LLMs) have witnessed substantial
advancements in recent years, propelled by the advancement in multi-modal LLMs.
Although these models have demonstrated proficiency in providing the overall
description of videos, they struggle with fine-grained understanding,
particularly in aspects such as visual dynamics and video details inquiries. To
tackle these shortcomings, we find that fine-tuning Video-LLMs on
self-supervised fragment tasks greatly improves their fine-grained video
understanding abilities. Hence we propose two key contributions: (1)
Self-Supervised Fragment Fine-Tuning (SF$^2$T), a novel effortless fine-tuning
method, employs the rich inherent characteristics of videos for training, while
unlocking more fine-grained understanding ability of Video-LLMs. Moreover, it
relieves researchers from labor-intensive annotations and smartly circumvents
the limitations of natural language, which often fails to capture the complex
spatiotemporal variations in videos; (2) A novel benchmark dataset, namely
FineVidBench, for rigorously assessing Video-LLMs' performance at both the
scene and fragment levels, offering a comprehensive evaluation of their
capabilities. We assessed multiple models and validated the effectiveness of
SF$^2$T on them. Experimental results reveal that our approach improves their
ability to capture and interpret spatiotemporal details.
|
2504.07749 | Erik Velldal | Vladislav Mikhailov, Tita Enstad, David Samuel, Hans Christian
Farseth{\aa}s, Andrey Kutuzov, Erik Velldal, Lilja {\O}vrelid | NorEval: A Norwegian Language Understanding and Generation Evaluation
Benchmark | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces NorEval, a new and comprehensive evaluation suite for
large-scale standardized benchmarking of Norwegian generative language models
(LMs). NorEval consists of 24 high-quality human-created datasets -- of which
five are created from scratch. In contrast to existing benchmarks for
Norwegian, NorEval covers a broad spectrum of task categories targeting
Norwegian language understanding and generation, establishes human baselines,
and focuses on both of the official written standards of the Norwegian
language: Bokm{\aa}l and Nynorsk. All our datasets and a collection of over 100
human-written prompts are integrated into LM Evaluation Harness, ensuring
flexible and reproducible evaluation. We describe the NorEval design and
present the results of benchmarking 19 open-source pre-trained and
instruction-tuned LMs for Norwegian in various scenarios. Our benchmark,
evaluation framework, and annotation materials are publicly available.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 13:44:55 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Mikhailov",
"Vladislav",
""
],
[
"Enstad",
"Tita",
""
],
[
"Samuel",
"David",
""
],
[
"Farsethås",
"Hans Christian",
""
],
[
"Kutuzov",
"Andrey",
""
],
[
"Velldal",
"Erik",
""
],
[
"Øvrelid",
"Lilja",
""
]
] | TITLE: NorEval: A Norwegian Language Understanding and Generation Evaluation
Benchmark
ABSTRACT: This paper introduces NorEval, a new and comprehensive evaluation suite for
large-scale standardized benchmarking of Norwegian generative language models
(LMs). NorEval consists of 24 high-quality human-created datasets -- of which
five are created from scratch. In contrast to existing benchmarks for
Norwegian, NorEval covers a broad spectrum of task categories targeting
Norwegian language understanding and generation, establishes human baselines,
and focuses on both of the official written standards of the Norwegian
language: Bokm{\aa}l and Nynorsk. All our datasets and a collection of over 100
human-written prompts are integrated into LM Evaluation Harness, ensuring
flexible and reproducible evaluation. We describe the NorEval design and
present the results of benchmarking 19 open-source pre-trained and
instruction-tuned LMs for Norwegian in various scenarios. Our benchmark,
evaluation framework, and annotation materials are publicly available.
|
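Because NorEval (abstract above, 2504.07749) is integrated into LM Evaluation Harness, running it should look like any other harness evaluation. The sketch below uses the harness's Python entry point, lm_eval.simple_evaluate, with a placeholder task name, since NorEval's registered task identifiers are not listed here:

import lm_eval

# Evaluate a Hugging Face model on one harness task; swap the task name
# for a NorEval task identifier once the benchmark is installed locally.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=gpt2",
    tasks=["hellaswag"],      # placeholder, not a NorEval task name
    num_fewshot=0,
)
print(results["results"])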
2504.07753 | Zini Chen | Zini Chen, Yao Xiao, Junyan Zhang, Shaoyu Wang, Liu Shi and Qiegen Liu | Virtual-mask Informed Prior for Sparse-view Dual-Energy CT
Reconstruction | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sparse-view sampling in dual-energy computed tomography (DECT) significantly
reduces radiation dose and increases imaging speed, yet is highly prone to
artifacts. Although diffusion models have demonstrated potential in effectively
handling incomplete data, most existing methods in this field focus on the
image do-main and lack global constraints, which consequently leads to
insufficient reconstruction quality. In this study, we propose a dual-domain
virtual-mask in-formed diffusion model for sparse-view reconstruction by
leveraging the high inter-channel correlation in DECT. Specifically, the study
designs a virtual mask and applies it to the high-energy and low-energy data to
perform perturbation operations, thus constructing high-dimensional tensors
that serve as the prior information of the diffusion model. In addition, a
dual-domain collaboration strategy is adopted to integrate the information of
the randomly selected high-frequency components in the wavelet domain with the
information in the projection domain, for the purpose of optimizing the global
structures and local details. Experimental results indicated that the present
method exhibits excellent performance across multiple datasets.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 13:54:26 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Chen",
"Zini",
""
],
[
"Xiao",
"Yao",
""
],
[
"Zhang",
"Junyan",
""
],
[
"Wang",
"Shaoyu",
""
],
[
"Shi",
"Liu",
""
],
[
"Liu",
"Qiegen",
""
]
] | TITLE: Virtual-mask Informed Prior for Sparse-view Dual-Energy CT
Reconstruction
ABSTRACT: Sparse-view sampling in dual-energy computed tomography (DECT) significantly
reduces radiation dose and increases imaging speed, yet is highly prone to
artifacts. Although diffusion models have demonstrated potential in effectively
handling incomplete data, most existing methods in this field focus on the
image domain and lack global constraints, which consequently leads to
insufficient reconstruction quality. In this study, we propose a dual-domain
virtual-mask informed diffusion model for sparse-view reconstruction by
leveraging the high inter-channel correlation in DECT. Specifically, the study
designs a virtual mask and applies it to the high-energy and low-energy data to
perform perturbation operations, thus constructing high-dimensional tensors
that serve as the prior information of the diffusion model. In addition, a
dual-domain collaboration strategy is adopted to integrate the information of
the randomly selected high-frequency components in the wavelet domain with the
information in the projection domain, for the purpose of optimizing the global
structures and local details. Experimental results indicated that the present
method exhibits excellent performance across multiple datasets.
|
2504.07754 | Bo Zhang | Bo Zhang, Hui Ma, Dailin Li, Jian Ding, Jian Wang, Bo Xu, HongFei Lin | Efficient Tuning of Large Language Models for Knowledge-Grounded
Dialogue Generation | Accepted at TACL; pre-MIT Press publication version. Code and data
are available at https://github.com/zhangbo-nlp/KEDiT | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) demonstrate remarkable text comprehension and
generation capabilities but often lack the ability to utilize up-to-date or
domain-specific knowledge not included in their training data. To address this
gap, we introduce KEDiT, an efficient method for fine-tuning LLMs for
knowledge-grounded dialogue generation. KEDiT operates in two main phases:
first, it employs an information bottleneck to compress retrieved knowledge
into learnable parameters, retaining essential information while minimizing
computational overhead. Second, a lightweight knowledge-aware adapter
integrates these compressed knowledge vectors into the LLM during fine-tuning,
updating less than 2\% of the model parameters. The experimental results on the
Wizard of Wikipedia and a newly constructed PubMed-Dialog dataset demonstrate
that KEDiT excels in generating contextually relevant and informative
responses, outperforming competitive baselines in automatic, LLM-based, and
human evaluations. This approach effectively combines the strengths of
pretrained LLMs with the adaptability needed for incorporating dynamic
knowledge, presenting a scalable solution for fields such as medicine.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 13:54:36 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Zhang",
"Bo",
""
],
[
"Ma",
"Hui",
""
],
[
"Li",
"Dailin",
""
],
[
"Ding",
"Jian",
""
],
[
"Wang",
"Jian",
""
],
[
"Xu",
"Bo",
""
],
[
"Lin",
"HongFei",
""
]
] | TITLE: Efficient Tuning of Large Language Models for Knowledge-Grounded
Dialogue Generation
ABSTRACT: Large language models (LLMs) demonstrate remarkable text comprehension and
generation capabilities but often lack the ability to utilize up-to-date or
domain-specific knowledge not included in their training data. To address this
gap, we introduce KEDiT, an efficient method for fine-tuning LLMs for
knowledge-grounded dialogue generation. KEDiT operates in two main phases:
first, it employs an information bottleneck to compress retrieved knowledge
into learnable parameters, retaining essential information while minimizing
computational overhead. Second, a lightweight knowledge-aware adapter
integrates these compressed knowledge vectors into the LLM during fine-tuning,
updating less than 2\% of the model parameters. The experimental results on the
Wizard of Wikipedia and a newly constructed PubMed-Dialog dataset demonstrate
that KEDiT excels in generating contextually relevant and informative
responses, outperforming competitive baselines in automatic, LLM-based, and
human evaluations. This approach effectively combines the strengths of
pretrained LLMs with the adaptability needed for incorporating dynamic
knowledge, presenting a scalable solution for fields such as medicine.
|
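The "lightweight knowledge-aware adapter" in the KEDiT abstract above (2504.07754) is described only at a high level; the module below is a generic bottleneck adapter in that spirit, fusing a compressed knowledge vector into a frozen model's hidden states through a small number of trainable parameters. The dimensions and the concatenation-based fusion are illustrative assumptions, not the paper's architecture:

import torch
import torch.nn as nn

class KnowledgeAdapter(nn.Module):
    """Bottleneck adapter: only these small layers are trained, so the
    updated parameters stay a tiny fraction of the frozen base LLM."""

    def __init__(self, hidden_dim=768, knowledge_dim=64, bottleneck=32):
        super().__init__()
        self.down = nn.Linear(hidden_dim + knowledge_dim, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_dim)
        self.act = nn.GELU()

    def forward(self, hidden, knowledge):
        # hidden: (batch, seq, hidden_dim); knowledge: (batch, knowledge_dim)
        k = knowledge.unsqueeze(1).expand(-1, hidden.size(1), -1)
        fused = torch.cat([hidden, k], dim=-1)
        # Residual keeps the frozen model's behaviour as the default path.
        return hidden + self.up(self.act(self.down(fused)))

adapter = KnowledgeAdapter()
out = adapter(torch.randn(2, 10, 768), torch.randn(2, 64))
print(out.shape)   # torch.Size([2, 10, 768])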
2504.07760 | Zhenhuan Zhou | Zhenhuan Zhou, Yuchen Zhang, Ruihong Xu, Xuansen Zhao and Tao Li | PRAD: Periapical Radiograph Analysis Dataset and Benchmark Model
Development | 11 pages & Under Review | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Deep learning (DL), a pivotal technology in artificial intelligence, has
recently gained substantial traction in the domain of dental auxiliary
diagnosis. However, its application has predominantly been confined to imaging
modalities such as panoramic radiographs and Cone Beam Computed Tomography,
with limited focus on auxiliary analysis specifically targeting Periapical
Radiographs (PR). PR are the most extensively utilized imaging modality in
endodontics and periodontics due to their capability to capture detailed local
lesions at a low cost. Nevertheless, challenges such as resolution limitations
and artifacts complicate the annotation and recognition of PR, leading to a
scarcity of publicly available, large-scale, high-quality PR analysis datasets.
This scarcity has somewhat impeded the advancement of DL applications in PR
analysis. In this paper, we present PRAD-10K, a dataset for PR analysis.
PRAD-10K comprises 10,000 clinical periapical radiograph images, with
pixel-level annotations provided by professional dentists for nine distinct
anatomical structures, lesions, and artificial restorations or medical devices.
We also include classification labels for images with typical conditions or
lesions. Furthermore, we introduce a DL network named PRNet to establish
benchmarks for PR segmentation tasks. Experimental results demonstrate that
PRNet surpasses previous state-of-the-art medical image segmentation models on
the PRAD-10K dataset. The code and dataset will be made publicly available.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 13:58:58 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Zhou",
"Zhenhuan",
""
],
[
"Zhang",
"Yuchen",
""
],
[
"Xu",
"Ruihong",
""
],
[
"Zhao",
"Xuansen",
""
],
[
"Li",
"Tao",
""
]
] | TITLE: PRAD: Periapical Radiograph Analysis Dataset and Benchmark Model
Development
ABSTRACT: Deep learning (DL), a pivotal technology in artificial intelligence, has
recently gained substantial traction in the domain of dental auxiliary
diagnosis. However, its application has predominantly been confined to imaging
modalities such as panoramic radiographs and Cone Beam Computed Tomography,
with limited focus on auxiliary analysis specifically targeting Periapical
Radiographs (PR). PR are the most extensively utilized imaging modality in
endodontics and periodontics due to their capability to capture detailed local
lesions at a low cost. Nevertheless, challenges such as resolution limitations
and artifacts complicate the annotation and recognition of PR, leading to a
scarcity of publicly available, large-scale, high-quality PR analysis datasets.
This scarcity has somewhat impeded the advancement of DL applications in PR
analysis. In this paper, we present PRAD-10K, a dataset for PR analysis.
PRAD-10K comprises 10,000 clinical periapical radiograph images, with
pixel-level annotations provided by professional dentists for nine distinct
anatomical structures, lesions, and artificial restorations or medical devices.
We also include classification labels for images with typical conditions or
lesions. Furthermore, we introduce a DL network named PRNet to establish
benchmarks for PR segmentation tasks. Experimental results demonstrate that
PRNet surpasses previous state-of-the-art medical image segmentation models on
the PRAD-10K dataset. The code and dataset will be made publicly available.
|
2504.07775 | Lorenzo Lasagni | Lorenzo Lasagni, Antonio Ciccarone, Renzo Guerrini, Matteo Lenge and
Ludovico D'incerti | Focal Cortical Dysplasia Type II Detection Using Cross Modality Transfer
Learning and Grad-CAM in 3D-CNNs for MRI Analysis | null | null | null | null | eess.IV cs.CV physics.med-ph | http://creativecommons.org/licenses/by/4.0/ | Focal cortical dysplasia (FCD) type II is a major cause of drug-resistant
epilepsy, often curable only by surgery. Despite its clinical importance, the
diagnosis of FCD is very difficult in MRI because of subtle abnormalities,
leading to misdiagnosis. This study investigates the use of 3D convolutional
neural networks (3D-CNNs) for FCD detection, using a dataset of 170 subjects
(85 FCD patients and 85 controls) composed of T1-weighted and FLAIR MRI scans.
In particular, it investigates the benefits obtained from cross-modality
transfer learning and explainable artificial intelligence (XAI) techniques, in
particular Gradient-weighted Class Activation Mapping (Grad-CAM). ResNet
architectures (ResNet-18, -34, and -50) were implemented, employing transfer
learning strategies that used pre-trained weights from segmentation tasks.
Results indicate that transfer learning significantly enhances classification
accuracy (up to 80.3%) and interpretability, as measured by a novel Heat-Score
metric, which evaluates the model's focus on clinically relevant regions.
Improvements in the Heat-Score metric underscore the model's seizure zone
localization capabilities, bringing AI predictions and clinical insights closer
together. These results highlight the importance of transfer learning,
including cross-modality, and XAI in advancing AI-based medical diagnostics,
especially for difficult-to-diagnose pathologies such as FCD.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 14:15:16 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Lasagni",
"Lorenzo",
""
],
[
"Ciccarone",
"Antonio",
""
],
[
"Guerrini",
"Renzo",
""
],
[
"Lenge",
"Matteo",
""
],
[
"D'incerti",
"Ludovico",
""
]
] | TITLE: Focal Cortical Dysplasia Type II Detection Using Cross Modality Transfer
Learning and Grad-CAM in 3D-CNNs for MRI Analysis
ABSTRACT: Focal cortical dysplasia (FCD) type II is a major cause of drug-resistant
epilepsy, often curable only by surgery. Despite its clinical importance, the
diagnosis of FCD is very difficult in MRI because of subtle abnormalities,
leading to misdiagnosis. This study investigates the use of 3D convolutional
neural networks (3D-CNNs) for FCD detection, using a dataset of 170 subjects
(85 FCD patients and 85 controls) composed of T1-weighted and FLAIR MRI scans.
In particular, it investigates the benefits obtained from cross-modality
transfer learning and explainable artificial intelligence (XAI) techniques, in
particular Gradient-weighted Class Activation Mapping (Grad-CAM). ResNet
architectures (ResNet-18, -34, and -50) were implemented, employing transfer
learning strategies that used pre-trained weights from segmentation tasks.
Results indicate that transfer learning significantly enhances classification
accuracy (up to 80.3%) and interpretability, as measured by a novel Heat-Score
metric, which evaluates the model's focus on clinically relevant regions.
Improvements in the Heat-Score metric underscore the model's seizure zone
localization capabilities, bringing AI predictions and clinical insights closer
together. These results highlight the importance of transfer learning,
including cross-modality, and XAI in advancing AI-based medical diagnostics,
especially for difficult-to-diagnose pathologies such as FCD.
|
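Grad-CAM, used above for interpretability, is a standard technique: the class score's gradients, spatially pooled, weight the activations of a chosen convolutional layer. A minimal 2D PyTorch sketch (the paper uses 3D-CNNs, but the mechanics are the same):

```python
# Standard Grad-CAM via hooks on a ResNet layer; not the authors' code.
import torch
import torchvision

model = torchvision.models.resnet18().eval()
acts, grads = {}, {}
layer = model.layer4
layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

x = torch.randn(1, 3, 224, 224)
score = model(x)[0].max()          # top class score
score.backward()

w = grads["g"].mean(dim=(2, 3), keepdim=True)           # pooled gradients
cam = torch.relu((w * acts["a"]).sum(dim=1)).squeeze()  # activation map
print(cam.shape)  # spatial heat map, torch.Size([7, 7])
```

A Heat-Score-style metric would then compare such maps against clinically annotated regions; the metric's exact definition is the paper's own.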
2504.07785 | Zhun Zhong | Yan Zhang and Lechao Cheng and Yaxiong Wang and Zhun Zhong and Meng
Wang | Towards Micro-Action Recognition with Limited Annotations: An
Asynchronous Pseudo Labeling and Training Approach | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Micro-Action Recognition (MAR) aims to classify subtle human actions in
video. However, annotating MAR datasets is particularly challenging due to the
subtlety of actions. To this end, we introduce the setting of Semi-Supervised
MAR (SSMAR), where only a portion of the samples is labeled. We first apply
traditional Semi-Supervised Learning (SSL) methods to SSMAR and find that these
methods tend to overfit on inaccurate pseudo-labels, leading to error
accumulation and degraded performance. This issue primarily arises from the
common practice of directly using the classifier's predictions as
pseudo-labels to train the model. To solve this issue, we propose a novel
framework, called Asynchronous Pseudo Labeling and Training (APLT), which
explicitly separates the pseudo-labeling process from model training.
Specifically, we introduce a semi-supervised clustering method during the
offline pseudo-labeling phase to generate more accurate pseudo-labels.
Moreover, a self-adaptive thresholding strategy is proposed to dynamically
filter noisy labels of different classes. We then build a memory-based
prototype classifier based on the filtered pseudo-labels, which is fixed and
used to guide the subsequent model training phase. By alternating the two
pseudo-labeling and model training phases in an asynchronous manner, the model
not only learns from more accurate pseudo-labels but also avoids the
overfitting issue. Experiments on three MAR datasets show that our APLT largely
outperforms state-of-the-art SSL methods. For instance, APLT improves accuracy
by 14.5\% over FixMatch on the MA-12 dataset when using only 50\% labeled data.
Code will be publicly available.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 14:22:15 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Zhang",
"Yan",
""
],
[
"Cheng",
"Lechao",
""
],
[
"Wang",
"Yaxiong",
""
],
[
"Zhong",
"Zhun",
""
],
[
"Wang",
"Meng",
""
]
] | TITLE: Towards Micro-Action Recognition with Limited Annotations: An
Asynchronous Pseudo Labeling and Training Approach
ABSTRACT: Micro-Action Recognition (MAR) aims to classify subtle human actions in
video. However, annotating MAR datasets is particularly challenging due to the
subtlety of actions. To this end, we introduce the setting of Semi-Supervised
MAR (SSMAR), where only a portion of the samples is labeled. We first apply
traditional Semi-Supervised Learning (SSL) methods to SSMAR and find that these
methods tend to overfit on inaccurate pseudo-labels, leading to error
accumulation and degraded performance. This issue primarily arises from the
common practice of directly using the classifier's predictions as
pseudo-labels to train the model. To solve this issue, we propose a novel
framework, called Asynchronous Pseudo Labeling and Training (APLT), which
explicitly separates the pseudo-labeling process from model training.
Specifically, we introduce a semi-supervised clustering method during the
offline pseudo-labeling phase to generate more accurate pseudo-labels.
Moreover, a self-adaptive thresholding strategy is proposed to dynamically
filter noisy labels of different classes. We then build a memory-based
prototype classifier based on the filtered pseudo-labels, which is fixed and
used to guide the subsequent model training phase. By alternating the two
pseudo-labeling and model training phases in an asynchronous manner, the model
not only learns from more accurate pseudo-labels but also avoids the
overfitting issue. Experiments on three MAR datasets show that our APLT largely
outperforms state-of-the-art SSL methods. For instance, APLT improves accuracy
by 14.5\% over FixMatch on the MA-12 dataset when using only 50\% labeled data.
Code will be publicly available.
|
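One ingredient of APLT, self-adaptive thresholding for pseudo-label filtering, can be illustrated as follows. The specific rule here (scaling a base threshold by mean per-class confidence, so hard classes are not filtered out entirely) is an assumption for illustration, not the authors' formula.

```python
# Illustrative per-class adaptive pseudo-label filtering; not the paper's code.
import numpy as np

def filter_pseudo_labels(probs, base_tau=0.9):
    """probs: (N, C) predicted class probabilities for unlabeled samples."""
    preds = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    C = probs.shape[1]
    # mean confidence per predicted class; empty classes fall back to the mean
    class_conf = np.array([conf[preds == c].mean() if (preds == c).any()
                           else conf.mean() for c in range(C)])
    tau = base_tau * class_conf / class_conf.max()  # lower bar for hard classes
    keep = conf >= tau[preds]
    return preds[keep], np.flatnonzero(keep)

probs = np.random.dirichlet(np.ones(12), size=100)  # 100 samples, 12 classes
labels, idx = filter_pseudo_labels(probs)
print(len(idx), "samples kept")
```

In APLT the filtered labels then feed a fixed memory-based prototype classifier, keeping pseudo-labeling decoupled from model training.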
2504.07792 | Jakob Gr\"avinghoff | Alexander Brettmann, Jakob Gr\"avinghoff, Marlene R\"uschoff, Marie
Westhues | Breaking the Barriers: Video Vision Transformers for Word-Level Sign
Language Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sign language is a fundamental means of communication for the deaf and
hard-of-hearing (DHH) community, enabling nuanced expression through gestures,
facial expressions, and body movements. Despite its critical role in
facilitating interaction within the DHH population, significant barriers
persist due to the limited fluency in sign language among the hearing
population. Overcoming this communication gap through automatic sign language
recognition (SLR) remains a challenge, particularly at the dynamic word level,
where temporal and spatial dependencies must be effectively recognized. While
Convolutional Neural Networks have shown potential in SLR, they are
computationally intensive and have difficulty capturing global temporal
dependencies between video sequences. To address these limitations, we propose
a Video Vision Transformer (ViViT) model for word-level American Sign Language
(ASL) recognition. Transformer models make use of self-attention mechanisms to
effectively capture global relationships across spatial and temporal
dimensions, which makes them suitable for complex gesture recognition tasks.
The VideoMAE model achieves a Top-1 accuracy of 75.58% on the WLASL100 dataset,
highlighting its strong performance compared to traditional CNNs with 65.89%.
Our study demonstrates that transformer-based architectures have great
potential to advance SLR, overcome communication barriers and promote the
inclusion of DHH individuals.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 14:27:25 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Brettmann",
"Alexander",
""
],
[
"Grävinghoff",
"Jakob",
""
],
[
"Rüschoff",
"Marlene",
""
],
[
"Westhues",
"Marie",
""
]
] | TITLE: Breaking the Barriers: Video Vision Transformers for Word-Level Sign
Language Recognition
ABSTRACT: Sign language is a fundamental means of communication for the deaf and
hard-of-hearing (DHH) community, enabling nuanced expression through gestures,
facial expressions, and body movements. Despite its critical role in
facilitating interaction within the DHH population, significant barriers
persist due to the limited fluency in sign language among the hearing
population. Overcoming this communication gap through automatic sign language
recognition (SLR) remains a challenge, particularly at the dynamic word level,
where temporal and spatial dependencies must be effectively recognized. While
Convolutional Neural Networks have shown potential in SLR, they are
computationally intensive and have difficulty capturing global temporal
dependencies between video sequences. To address these limitations, we propose
a Video Vision Transformer (ViViT) model for word-level American Sign Language
(ASL) recognition. Transformer models make use of self-attention mechanisms to
effectively capture global relationships across spatial and temporal
dimensions, which makes them suitable for complex gesture recognition tasks.
The VideoMAE model achieves a Top-1 accuracy of 75.58% on the WLASL100 dataset,
highlighting its strong performance compared to traditional CNNs with 65.89%.
Our study demonstrates that transformer-based architectures have great
potential to advance SLR, overcome communication barriers and promote the
inclusion of DHH individuals.
|
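A VideoMAE classifier of the kind evaluated above can be instantiated with the Hugging Face transformers library; the sketch below uses random weights via the config (so nothing is downloaded) and 100 output classes to match WLASL100. Shapes follow the library defaults and are assumptions about the paper's exact setup.

```python
# Randomly initialized VideoMAE classifier; config values are library defaults.
import torch
from transformers import VideoMAEConfig, VideoMAEForVideoClassification

config = VideoMAEConfig(num_labels=100)      # word-level ASL: 100 glosses
model = VideoMAEForVideoClassification(config)

video = torch.randn(1, 16, 3, 224, 224)     # (batch, frames, channels, H, W)
logits = model(pixel_values=video).logits
print(logits.shape)                          # torch.Size([1, 100])
```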
2504.07794 | Alireza Salemi | Alireza Salemi, Chris Samarinas, Hamed Zamani | Plan-and-Refine: Diverse and Comprehensive Retrieval-Augmented
Generation | null | null | null | null | cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper studies the limitations of (retrieval-augmented) large language
models (LLMs) in generating diverse and comprehensive responses, and introduces
the Plan-and-Refine (P&R) framework based on a two-phase system design. In the
global exploration phase, P&R generates a diverse set of plans for the given
input, where each plan consists of a list of diverse query aspects with
corresponding additional descriptions. This phase is followed by a local
exploitation phase that generates a response proposal for the input query
conditioned on each plan and iteratively refines the proposal to improve its
quality. Finally, a reward model is employed to select the proposal
with the highest factuality and coverage. We conduct our experiments based on
the ICAT evaluation methodology--a recent approach for answer factuality and
comprehensiveness evaluation. Experiments on two diverse information-seeking
benchmarks adopted from non-factoid question answering and TREC search
result diversification tasks demonstrate that P&R significantly outperforms
baselines, achieving up to a 13.1% improvement on the ANTIQUE dataset and a
15.41% improvement on the TREC dataset. Furthermore, a smaller-scale user study
confirms the substantial efficacy of the P&R framework.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 14:32:32 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Salemi",
"Alireza",
""
],
[
"Samarinas",
"Chris",
""
],
[
"Zamani",
"Hamed",
""
]
] | TITLE: Plan-and-Refine: Diverse and Comprehensive Retrieval-Augmented
Generation
ABSTRACT: This paper studies the limitations of (retrieval-augmented) large language
models (LLMs) in generating diverse and comprehensive responses, and introduces
the Plan-and-Refine (P&R) framework based on a two-phase system design. In the
global exploration phase, P&R generates a diverse set of plans for the given
input, where each plan consists of a list of diverse query aspects with
corresponding additional descriptions. This phase is followed by a local
exploitation phase that generates a response proposal for the input query
conditioned on each plan and iteratively refines the proposal to improve its
quality. Finally, a reward model is employed to select the proposal
with the highest factuality and coverage. We conduct our experiments based on
the ICAT evaluation methodology--a recent approach for answer factuality and
comprehensiveness evaluation. Experiments on two diverse information-seeking
benchmarks adopted from non-factoid question answering and TREC search
result diversification tasks demonstrate that P&R significantly outperforms
baselines, achieving up to a 13.1% improvement on the ANTIQUE dataset and a
15.41% improvement on the TREC dataset. Furthermore, a smaller-scale user study
confirms the substantial efficacy of the P&R framework.
|
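The two-phase control flow of P&R can be summarized as a runnable skeleton. Here generate_plans, generate, refine, and reward are stand-ins for the LLM and reward-model calls, so only the loop structure reflects the abstract.

```python
# Schematic Plan-and-Refine loop with stubbed model calls.
import random

def generate_plans(query, n_plans=3):
    return [f"plan {i} for: {query}" for i in range(n_plans)]  # global exploration

def generate(query, plan):
    return f"draft answer to '{query}' following {plan}"

def refine(query, proposal):
    return proposal + " (refined)"            # local exploitation step

def reward(proposal):
    return random.random()  # stand-in for a factuality + coverage reward model

def plan_and_refine(query, n_refine=2):
    proposals = []
    for plan in generate_plans(query):
        p = generate(query, plan)
        for _ in range(n_refine):
            p = refine(query, p)
        proposals.append(p)
    return max(proposals, key=reward)         # keep the highest-reward proposal

print(plan_and_refine("What causes aurora borealis?"))
```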
2504.07810 | Julia Navarro | Daniel Torres, Joan Duran, Julia Navarro, Catalina Sbert | Nonlocal Retinex-Based Variational Model and its Deep Unfolding Twin for
Low-Light Image Enhancement | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Images captured under low-light conditions present significant limitations in
many applications, as poor lighting can obscure details, reduce contrast, and
hide noise. Removing the illumination effects and enhancing the quality of such
images is crucial for many tasks, such as image segmentation and object
detection. In this paper, we propose a variational method for low-light image
enhancement based on the Retinex decomposition into illumination, reflectance,
and noise components. A color correction pre-processing step is applied to the
low-light image, which is then used as the observed input in the decomposition.
Moreover, our model integrates a novel nonlocal gradient-type fidelity term
designed to preserve structural details. Additionally, we propose an automatic
gamma correction module. Building on the proposed variational approach, we
extend the model by introducing its deep unfolding counterpart, in which the
proximal operators are replaced with learnable networks. We propose
cross-attention mechanisms to capture long-range dependencies in both the
nonlocal prior of the reflectance and the nonlocal gradient-based constraint.
Experimental results demonstrate that both methods compare favorably with
several recent and state-of-the-art techniques across different datasets. In
particular, despite not relying on learning strategies, the variational model
outperforms most deep learning approaches both visually and in terms of quality
metrics.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 14:48:26 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Torres",
"Daniel",
""
],
[
"Duran",
"Joan",
""
],
[
"Navarro",
"Julia",
""
],
[
"Sbert",
"Catalina",
""
]
] | TITLE: Nonlocal Retinex-Based Variational Model and its Deep Unfolding Twin for
Low-Light Image Enhancement
ABSTRACT: Images captured under low-light conditions present significant limitations in
many applications, as poor lighting can obscure details, reduce contrast, and
hide noise. Removing the illumination effects and enhancing the quality of such
images is crucial for many tasks, such as image segmentation and object
detection. In this paper, we propose a variational method for low-light image
enhancement based on the Retinex decomposition into illumination, reflectance,
and noise components. A color correction pre-processing step is applied to the
low-light image, which is then used as the observed input in the decomposition.
Moreover, our model integrates a novel nonlocal gradient-type fidelity term
designed to preserve structural details. Additionally, we propose an automatic
gamma correction module. Building on the proposed variational approach, we
extend the model by introducing its deep unfolding counterpart, in which the
proximal operators are replaced with learnable networks. We propose
cross-attention mechanisms to capture long-range dependencies in both the
nonlocal prior of the reflectance and the nonlocal gradient-based constraint.
Experimental results demonstrate that both methods compare favorably with
several recent and state-of-the-art techniques across different datasets. In
particular, despite not relying on learning strategies, the variational model
outperforms most deep learning approaches both visually and in terms of quality
metrics.
|
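One way to realize an automatic gamma correction module of the kind mentioned above is to pick gamma so the mean illumination maps to a target brightness. The rule below is an illustrative assumption, not the paper's formula.

```python
# Toy automatic gamma correction: choose gamma with m**gamma == target_mean,
# where m is the mean of the illumination map.
import numpy as np

def auto_gamma(illumination, target_mean=0.5):
    m = float(illumination.mean())
    gamma = np.log(target_mean) / np.log(max(m, 1e-6))
    return np.clip(illumination, 1e-6, 1.0) ** gamma

L = np.random.rand(64, 64) * 0.2          # a dark illumination map
print(L.mean(), auto_gamma(L).mean())     # brightened toward the target
```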
2504.07822 | Wanna Cui | Wanna Cui and Peizheng Wang and Faliang Yin | DG-STMTL: A Novel Graph Convolutional Network for Multi-Task
Spatio-Temporal Traffic Forecasting | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | Spatio-temporal traffic prediction is crucial in intelligent transportation
systems. The key challenge of accurate prediction is how to model the complex
spatio-temporal dependencies and adapt to the inherent dynamics in data.
Traditional Graph Convolutional Networks (GCNs) often struggle with static
adjacency matrices that introduce domain bias or learnable matrices that may
overfit to specific patterns. This challenge becomes more complex when
considering Multi-Task Learning (MTL). While MTL has the potential to enhance
prediction accuracy through task synergies, it can also face significant
hurdles due to task interference. To overcome these challenges, this study
introduces a novel MTL framework, Dynamic Group-wise Spatio-Temporal Multi-Task
Learning (DG-STMTL). DG-STMTL proposes a hybrid adjacency matrix generation
module that combines static matrices with dynamic ones through a task-specific
gating mechanism. We also introduce a group-wise GCN module to enhance the
modelling capability of spatio-temporal dependencies. We conduct extensive
experiments on two real-world datasets to evaluate our method. Results show
that our method outperforms other state-of-the-art methods, indicating its
effectiveness and robustness.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 15:00:20 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Cui",
"Wanna",
""
],
[
"Wang",
"Peizheng",
""
],
[
"Yin",
"Faliang",
""
]
] | TITLE: DG-STMTL: A Novel Graph Convolutional Network for Multi-Task
Spatio-Temporal Traffic Forecasting
ABSTRACT: Spatio-temporal traffic prediction is crucial in intelligent transportation
systems. The key challenge of accurate prediction is how to model the complex
spatio-temporal dependencies and adapt to the inherent dynamics in data.
Traditional Graph Convolutional Networks (GCNs) often struggle with static
adjacency matrices that introduce domain bias or learnable matrices that may
overfit to specific patterns. This challenge becomes more complex when
considering Multi-Task Learning (MTL). While MTL has the potential to enhance
prediction accuracy through task synergies, it can also face significant
hurdles due to task interference. To overcome these challenges, this study
introduces a novel MTL framework, Dynamic Group-wise Spatio-Temporal Multi-Task
Learning (DG-STMTL). DG-STMTL proposes a hybrid adjacency matrix generation
module that combines static matrices with dynamic ones through a task-specific
gating mechanism. We also introduce a group-wise GCN module to enhance the
modelling capability of spatio-temporal dependencies. We conduct extensive
experiments on two real-world datasets to evaluate our method. Results show
that our method outperforms other state-of-the-art methods, indicating its
effectiveness and robustness.
|
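A minimal sketch of the hybrid adjacency idea behind DG-STMTL: a task-specific gate mixes a fixed static adjacency matrix with a learnable dynamic one. Shapes, initialization, and the sigmoid gate are assumptions for illustration.

```python
# Gated combination of static and learnable dynamic adjacency, per task.
import torch
import torch.nn as nn

class HybridAdjacency(nn.Module):
    def __init__(self, n_nodes, n_tasks):
        super().__init__()
        self.dynamic = nn.Parameter(torch.randn(n_tasks, n_nodes, n_nodes) * 0.01)
        self.gate = nn.Parameter(torch.zeros(n_tasks, 1, 1))  # per-task mix weight

    def forward(self, static_adj, task_id):
        g = torch.sigmoid(self.gate[task_id])
        dyn = torch.softmax(self.dynamic[task_id], dim=-1)    # row-normalized
        return g * dyn + (1 - g) * static_adj

n = 5
hybrid = HybridAdjacency(n_nodes=n, n_tasks=2)
print(hybrid(torch.eye(n), task_id=0).shape)  # torch.Size([5, 5])
```

The mixed matrix would then drive a (group-wise) GCN layer; the grouping scheme itself is specific to the paper.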
2504.07827 | Yi Huang | Yi Huang, Ke Zhang, Wei Liu, Yuanyuan Wang, Vishal M. Patel, Le Lu, Xu
Han, Dakai Jin and Ke Yan | HarmonySeg: Tubular Structure Segmentation with Deep-Shallow Feature
Fusion and Growth-Suppression Balanced Loss | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Accurate segmentation of tubular structures in medical images, such as
vessels and airway trees, is crucial for computer-aided diagnosis,
radiotherapy, and surgical planning. However, significant challenges exist in
algorithm design when faced with diverse sizes, complex topologies, and (often)
incomplete data annotation of these structures. We address these difficulties
by proposing a new tubular structure segmentation framework named HarmonySeg.
First, we design a deep-to-shallow decoder network featuring flexible
convolution blocks with varying receptive fields, which enables the model to
effectively adapt to tubular structures of different scales. Second, to
highlight potential anatomical regions and improve the recall of small tubular
structures, we incorporate vesselness maps as auxiliary information. These maps
are aligned with image features through a shallow-and-deep fusion module, which
simultaneously eliminates unreasonable candidates to maintain high precision.
Finally, we introduce a topology-preserving loss function that leverages
contextual and shape priors to balance the growth and suppression of tubular
structures, which also allows the model to handle low-quality and incomplete
annotations. Extensive quantitative experiments are conducted on four public
datasets. The results show that our model can accurately segment 2D and 3D
tubular structures and outperform existing state-of-the-art methods. External
validation on a private dataset also demonstrates good generalizability.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 15:04:42 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Huang",
"Yi",
""
],
[
"Zhang",
"Ke",
""
],
[
"Liu",
"Wei",
""
],
[
"Wang",
"Yuanyuan",
""
],
[
"Patel",
"Vishal M.",
""
],
[
"Lu",
"Le",
""
],
[
"Han",
"Xu",
""
],
[
"Jin",
"Dakai",
""
],
[
"Yan",
"Ke",
""
]
] | TITLE: HarmonySeg: Tubular Structure Segmentation with Deep-Shallow Feature
Fusion and Growth-Suppression Balanced Loss
ABSTRACT: Accurate segmentation of tubular structures in medical images, such as
vessels and airway trees, is crucial for computer-aided diagnosis,
radiotherapy, and surgical planning. However, significant challenges exist in
algorithm design when faced with diverse sizes, complex topologies, and (often)
incomplete data annotation of these structures. We address these difficulties
by proposing a new tubular structure segmentation framework named HarmonySeg.
First, we design a deep-to-shallow decoder network featuring flexible
convolution blocks with varying receptive fields, which enables the model to
effectively adapt to tubular structures of different scales. Second, to
highlight potential anatomical regions and improve the recall of small tubular
structures, we incorporate vesselness maps as auxiliary information. These maps
are aligned with image features through a shallow-and-deep fusion module, which
simultaneously eliminates unreasonable candidates to maintain high precision.
Finally, we introduce a topology-preserving loss function that leverages
contextual and shape priors to balance the growth and suppression of tubular
structures, which also allows the model to handle low-quality and incomplete
annotations. Extensive quantitative experiments are conducted on four public
datasets. The results show that our model can accurately segment 2D and 3D
tubular structures and outperform existing state-of-the-art methods. External
validation on a private dataset also demonstrates good generalizability.
|
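Vesselness maps like the auxiliary inputs HarmonySeg uses are classically computed with the Frangi filter, which scikit-image provides; a toy 2D example (the filter also supports 3D volumes):

```python
# Frangi vesselness on a synthetic bright tube; illustrates the auxiliary
# input only, not HarmonySeg itself.
import numpy as np
from skimage.filters import frangi

img = np.zeros((64, 64))
img[30:34, 5:60] = 1.0                      # a synthetic bright tube
vesselness = frangi(img, black_ridges=False)
print(vesselness.max() > vesselness[0, 0])  # tube region scores highest
```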
2504.07835 | Xinye Chen | Erin Carson, Xinye Chen | Pychop: Emulating Low-Precision Arithmetic in Numerical Methods and
Neural Networks | null | null | null | null | cs.LG cs.NA math.NA | http://creativecommons.org/licenses/by/4.0/ | Motivated by the growing demand for low-precision arithmetic in computational
science, we exploit lower-precision emulation in Python -- widely regarded as
the dominant programming language for numerical analysis and machine learning.
Low-precision training has revolutionized deep learning by enabling more
efficient computation and reduced memory and energy consumption while
maintaining model fidelity. To better enable numerical experimentation with and
exploration of low-precision computation, we developed the Pychop library,
which supports customizable floating-point formats and a comprehensive set of
rounding modes in Python, allowing users to benefit from fast, low-precision
emulation in numerous applications. Pychop also introduces interfaces for both
PyTorch and JAX, enabling efficient low-precision emulation on GPUs for neural
network training and inference with unparalleled flexibility.
In this paper, we offer a comprehensive exposition of the design,
implementation, validation, and practical application of Pychop, establishing
it as a foundational tool for advancing efficient mixed-precision algorithms.
Furthermore, we present empirical results on low-precision emulation for image
classification and object detection using published datasets, illustrating the
sensitivity of the use of low precision and offering valuable insights into its
impact. Pychop enables in-depth investigations into the effects of numerical
precision, facilitates the development of novel hardware accelerators, and
integrates seamlessly into existing deep learning workflows. Software and
experimental code are publicly available at
https://github.com/inEXASCALE/pychop.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 15:12:29 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Carson",
"Erin",
""
],
[
"Chen",
"Xinye",
""
]
] | TITLE: Pychop: Emulating Low-Precision Arithmetic in Numerical Methods and
Neural Networks
ABSTRACT: Motivated by the growing demand for low-precision arithmetic in computational
science, we exploit lower-precision emulation in Python -- widely regarded as
the dominant programming language for numerical analysis and machine learning.
Low-precision training has revolutionized deep learning by enabling more
efficient computation and reduced memory and energy consumption while
maintaining model fidelity. To better enable numerical experimentation with and
exploration of low-precision computation, we developed the Pychop library,
which supports customizable floating-point formats and a comprehensive set of
rounding modes in Python, allowing users to benefit from fast, low-precision
emulation in numerous applications. Pychop also introduces interfaces for both
PyTorch and JAX, enabling efficient low-precision emulation on GPUs for neural
network training and inference with unparalleled flexibility.
In this paper, we offer a comprehensive exposition of the design,
implementation, validation, and practical application of Pychop, establishing
it as a foundational tool for advancing efficient mixed-precision algorithms.
Furthermore, we present empirical results on low-precision emulation for image
classification and object detection using published datasets, illustrating the
sensitivity of the use of low precision and offering valuable insights into its
impact. Pychop enables in-depth investigations into the effects of numerical
precision, facilitates the development of novel hardware accelerators, and
integrates seamlessly into existing deep learning workflows. Software and
experimental code are publicly available at
https://github.com/inEXASCALE/pychop.
|
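The core operation Pychop provides, rounding higher-precision values through a lower-precision format, can be mimicked for IEEE half precision with plain NumPy. Pychop generalizes this to customizable formats and rounding modes; see the repository linked above for its actual API.

```python
# Round-to-nearest fp16 emulation on fp64 data using only NumPy.
import numpy as np

def chop_to_half(x):
    """Store x in IEEE half precision, then continue computing in fp64."""
    return x.astype(np.float16).astype(np.float64)

x = np.linspace(0, 1, 5)
print(chop_to_half(x) - x)   # rounding error introduced by fp16 storage

# Effect on a dot product when the operands are stored in low precision:
a, b = np.random.rand(1000), np.random.rand(1000)
print(abs(np.dot(chop_to_half(a), chop_to_half(b)) - np.dot(a, b)))
```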
2504.07836 | Junli Liu | Junli Liu, Qizhi Chen, Zhigang Wang, Yiwen Tang, Yiting Zhang, Chi
Yan, Dong Wang, Xuelong Li, Bin Zhao | AerialVG: A Challenging Benchmark for Aerial Visual Grounding by
Exploring Positional Relations | 8 pages, 6 figures | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual grounding (VG) aims to localize target objects in an image based on
natural language descriptions. In this paper, we propose AerialVG, a new task
focusing on visual grounding from aerial views. Compared to traditional VG,
AerialVG poses new challenges, \emph{e.g.}, appearance-based grounding is
insufficient to distinguish among multiple visually similar objects, and
positional relations should be emphasized. Besides, existing VG models struggle
when applied to aerial imagery, where high-resolution images cause significant
difficulties. To address these challenges, we introduce the first AerialVG
dataset, consisting of 5K real-world aerial images, 50K manually annotated
descriptions, and 103K objects. In particular, each annotation in the AerialVG
dataset contains multiple target objects annotated with relative spatial
relations, requiring models to perform comprehensive spatial reasoning.
Furthermore, we propose an innovative model especially for the AerialVG task,
where a Hierarchical Cross-Attention is devised to focus on target regions, and
a Relation-Aware Grounding module is designed to infer positional relations.
Experimental results validate the effectiveness of our dataset and method,
highlighting the importance of spatial reasoning in aerial visual grounding.
The code and dataset will be released.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 15:13:00 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Liu",
"Junli",
""
],
[
"Chen",
"Qizhi",
""
],
[
"Wang",
"Zhigang",
""
],
[
"Tang",
"Yiwen",
""
],
[
"Zhang",
"Yiting",
""
],
[
"Yan",
"Chi",
""
],
[
"Wang",
"Dong",
""
],
[
"Li",
"Xuelong",
""
],
[
"Zhao",
"Bin",
""
]
] | TITLE: AerialVG: A Challenging Benchmark for Aerial Visual Grounding by
Exploring Positional Relations
ABSTRACT: Visual grounding (VG) aims to localize target objects in an image based on
natural language descriptions. In this paper, we propose AerialVG, a new task
focusing on visual grounding from aerial views. Compared to traditional VG,
AerialVG poses new challenges, \emph{e.g.}, appearance-based grounding is
insufficient to distinguish among multiple visually similar objects, and
positional relations should be emphasized. Besides, existing VG models struggle
when applied to aerial imagery, where high-resolution images cause significant
difficulties. To address these challenges, we introduce the first AerialVG
dataset, consisting of 5K real-world aerial images, 50K manually annotated
descriptions, and 103K objects. In particular, each annotation in the AerialVG
dataset contains multiple target objects annotated with relative spatial
relations, requiring models to perform comprehensive spatial reasoning.
Furthermore, we propose an innovative model especially for the AerialVG task,
where a Hierarchical Cross-Attention is devised to focus on target regions, and
a Relation-Aware Grounding module is designed to infer positional relations.
Experimental results validate the effectiveness of our dataset and method,
highlighting the importance of spatial reasoning in aerial visual grounding.
The code and dataset will be released.
|
2504.07839 | Zhiwei Xu | Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang,
Hai Wan, Xibin Zhao | Deep Learning-based Intrusion Detection Systems: A Survey | 40 pages, 238 citations | null | null | null | cs.CR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Intrusion Detection Systems (IDS) have long been a hot topic in the
cybersecurity community. In recent years, with the introduction of deep
learning (DL) techniques, IDS have made great progress due to their increasing
generalizability. The rationale behind this is that by learning the underlying
patterns of known system behaviors, IDS detection can be generalized to
intrusions that exploit zero-day vulnerabilities. In this survey, we refer to
this type of IDS as DL-based IDS (DL-IDS). From the perspective of DL, this
survey systematically reviews all the stages of DL-IDS, including data
collection, log storage, log parsing, graph summarization, attack detection,
and attack investigation. To accommodate current researchers, a section
describing the publicly available benchmark datasets is included. This survey
further discusses current challenges and potential future research directions,
aiming to help researchers understand the basic ideas and visions of DL-IDS
research, as well as to motivate their research interests.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 15:18:56 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Xu",
"Zhiwei",
""
],
[
"Wu",
"Yujuan",
""
],
[
"Wang",
"Shiheng",
""
],
[
"Gao",
"Jiabao",
""
],
[
"Qiu",
"Tian",
""
],
[
"Wang",
"Ziqi",
""
],
[
"Wan",
"Hai",
""
],
[
"Zhao",
"Xibin",
""
]
] | TITLE: Deep Learning-based Intrusion Detection Systems: A Survey
ABSTRACT: Intrusion Detection Systems (IDS) have long been a hot topic in the
cybersecurity community. In recent years, with the introduction of deep
learning (DL) techniques, IDS have made great progress due to their increasing
generalizability. The rationale behind this is that by learning the underlying
patterns of known system behaviors, IDS detection can be generalized to
intrusions that exploit zero-day vulnerabilities. In this survey, we refer to
this type of IDS as DL-based IDS (DL-IDS). From the perspective of DL, this
survey systematically reviews all the stages of DL-IDS, including data
collection, log storage, log parsing, graph summarization, attack detection,
and attack investigation. To accommodate current researchers, a section
describing the publicly available benchmark datasets is included. This survey
further discusses current challenges and potential future research directions,
aiming to help researchers understand the basic ideas and visions of DL-IDS
research, as well as to motivate their research interests.
|
2504.07840 | Cansu Koyuturk | Cansu Koyuturk, Emily Theophilou, Sabrina Patania, Gregor Donabauer,
Andrea Martinenghi, Chiara Antico, Alessia Telari, Alessia Testa, Sathya
Bursic, Franca Garzotto, Davinia Hernandez-Leo, Udo Kruschwitz, Davide Taibi,
Simona Amenta, Martin Ruskov and Dimitri Ognibene | Understanding Learner-LLM Chatbot Interactions and the Impact of
Prompting Guidelines | Accepted for AIED 2025, the 26th International Conference on
Artificial Intelligence in Education, July 22 - 26, 2025, Palermo, Italy | null | null | null | cs.HC cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have transformed human-computer interaction by
enabling natural language-based communication with AI-powered chatbots. These
models are designed to be intuitive and user-friendly, allowing users to
articulate requests with minimal effort. However, despite their accessibility,
studies reveal that users often struggle with effective prompting, resulting in
inefficient responses. Existing research has highlighted both the limitations
of LLMs in interpreting vague or poorly structured prompts and the difficulties
users face in crafting precise queries. This study investigates learner-AI
interactions through an educational experiment in which participants receive
structured guidance on effective prompting. We introduce and compare three
types of prompting guidelines: a task-specific framework developed through a
structured methodology and two baseline approaches. To assess user behavior and
prompting efficacy, we analyze a dataset of 642 interactions from 107 users.
Using Von NeuMidas, an extended pragmatic annotation schema for LLM interaction
analysis, we categorize common prompting errors and identify recurring
behavioral patterns. We then evaluate the impact of different guidelines by
examining changes in user behavior, adherence to prompting strategies, and the
overall quality of AI-generated responses. Our findings provide a deeper
understanding of how users engage with LLMs and the role of structured
prompting guidance in enhancing AI-assisted communication. By comparing
different instructional frameworks, we offer insights into more effective
approaches for improving user competency in AI interactions, with implications
for AI literacy, chatbot usability, and the design of more responsive AI
systems.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 15:20:43 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Koyuturk",
"Cansu",
""
],
[
"Theophilou",
"Emily",
""
],
[
"Patania",
"Sabrina",
""
],
[
"Donabauer",
"Gregor",
""
],
[
"Martinenghi",
"Andrea",
""
],
[
"Antico",
"Chiara",
""
],
[
"Telari",
"Alessia",
""
],
[
"Testa",
"Alessia",
""
],
[
"Bursic",
"Sathya",
""
],
[
"Garzotto",
"Franca",
""
],
[
"Hernandez-Leo",
"Davinia",
""
],
[
"Kruschwitz",
"Udo",
""
],
[
"Taibi",
"Davide",
""
],
[
"Amenta",
"Simona",
""
],
[
"Ruskov",
"Martin",
""
],
[
"Ognibene",
"Dimitri",
""
]
] | TITLE: Understanding Learner-LLM Chatbot Interactions and the Impact of
Prompting Guidelines
ABSTRACT: Large Language Models (LLMs) have transformed human-computer interaction by
enabling natural language-based communication with AI-powered chatbots. These
models are designed to be intuitive and user-friendly, allowing users to
articulate requests with minimal effort. However, despite their accessibility,
studies reveal that users often struggle with effective prompting, resulting in
inefficient responses. Existing research has highlighted both the limitations
of LLMs in interpreting vague or poorly structured prompts and the difficulties
users face in crafting precise queries. This study investigates learner-AI
interactions through an educational experiment in which participants receive
structured guidance on effective prompting. We introduce and compare three
types of prompting guidelines: a task-specific framework developed through a
structured methodology and two baseline approaches. To assess user behavior and
prompting efficacy, we analyze a dataset of 642 interactions from 107 users.
Using Von NeuMidas, an extended pragmatic annotation schema for LLM interaction
analysis, we categorize common prompting errors and identify recurring
behavioral patterns. We then evaluate the impact of different guidelines by
examining changes in user behavior, adherence to prompting strategies, and the
overall quality of AI-generated responses. Our findings provide a deeper
understanding of how users engage with LLMs and the role of structured
prompting guidance in enhancing AI-assisted communication. By comparing
different instructional frameworks, we offer insights into more effective
approaches for improving user competency in AI interactions, with implications
for AI literacy, chatbot usability, and the design of more responsive AI
systems.
|
2504.07853 | Jiayin Zhao | Jiayin Zhao, Zhenqi Fu, Tao Yu, Hui Qiao | V2V3D: View-to-View Denoised 3D Reconstruction for Light-Field
Microscopy | CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Light field microscopy (LFM) has gained significant attention due to its
ability to capture snapshot-based, large-scale 3D fluorescence images. However,
existing LFM reconstruction algorithms are highly sensitive to sensor noise or
require hard-to-get ground-truth annotated data for training. To address these
challenges, this paper introduces V2V3D, an unsupervised view2view-based
framework that establishes a new paradigm for joint optimization of image
denoising and 3D reconstruction in a unified architecture. We assume that the
LF images are derived from a consistent 3D signal, with the noise in each view
being independent. This enables V2V3D to incorporate the principle of
noise2noise for effective denoising. To enhance the recovery of high-frequency
details, we propose a novel wave-optics-based feature alignment technique,
which transforms the point spread function, used for forward propagation in
wave optics, into convolution kernels specifically designed for feature
alignment. Moreover, we introduce an LFM dataset containing LF images and their
corresponding 3D intensity volumes. Extensive experiments demonstrate that our
approach achieves high computational efficiency and outperforms other
state-of-the-art methods. These advancements position V2V3D as a promising
solution for 3D imaging under challenging conditions.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 15:29:26 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Zhao",
"Jiayin",
""
],
[
"Fu",
"Zhenqi",
""
],
[
"Yu",
"Tao",
""
],
[
"Qiao",
"Hui",
""
]
] | TITLE: V2V3D: View-to-View Denoised 3D Reconstruction for Light-Field
Microscopy
ABSTRACT: Light field microscopy (LFM) has gained significant attention due to its
ability to capture snapshot-based, large-scale 3D fluorescence images. However,
existing LFM reconstruction algorithms are highly sensitive to sensor noise or
require hard-to-get ground-truth annotated data for training. To address these
challenges, this paper introduces V2V3D, an unsupervised view2view-based
framework that establishes a new paradigm for joint optimization of image
denoising and 3D reconstruction in a unified architecture. We assume that the
LF images are derived from a consistent 3D signal, with the noise in each view
being independent. This enables V2V3D to incorporate the principle of
noise2noise for effective denoising. To enhance the recovery of high-frequency
details, we propose a novel wave-optics-based feature alignment technique,
which transforms the point spread function, used for forward propagation in
wave optics, into convolution kernels specifically designed for feature
alignment. Moreover, we introduce an LFM dataset containing LF images and their
corresponding 3D intensity volumes. Extensive experiments demonstrate that our
approach achieves high computational efficiency and outperforms other
state-of-the-art methods. These advancements position V2V3D as a promising
solution for 3D imaging under challenging conditions.
|
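The noise2noise principle V2V3D leverages: two views containing independent noise realizations of the same signal can supervise each other, so no clean target is needed. A toy 2D training loop (the network and shapes are placeholders, not the paper's wave-optics architecture):

```python
# Minimal noise2noise-style training: predict one noisy view from the other.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

clean = torch.rand(8, 1, 32, 32)
view_a = clean + 0.1 * torch.randn_like(clean)   # independent noise per view
view_b = clean + 0.1 * torch.randn_like(clean)

for _ in range(5):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(view_a), view_b)  # noisy target only
    loss.backward()
    opt.step()
print(f"noise2noise loss: {loss.item():.4f}")
```

Because the noise in the target is zero-mean and independent of the input, minimizing this loss converges (in expectation) to predicting the clean signal.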
2504.07867 | Joshua Li | Joshua Li, Fernando Jose Pena Cantu, Emily Yu, Alexander Wong, Yuchen
Cui, Yuhao Chen | SAMJAM: Zero-Shot Video Scene Graph Generation for Egocentric Kitchen
Videos | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Video Scene Graph Generation (VidSGG) is an important topic in understanding
dynamic kitchen environments. Current models for VidSGG require extensive
training to produce scene graphs. Recently, Vision Language Models (VLM) and
Vision Foundation Models (VFM) have demonstrated impressive zero-shot
capabilities in a variety of tasks. However, VLMs like Gemini struggle with the
dynamics for VidSGG, failing to maintain stable object identities across
frames. To overcome this limitation, we propose SAMJAM, a zero-shot pipeline
that combines SAM2's temporal tracking with Gemini's semantic understanding.
SAM2 also improves upon Gemini's object grounding by producing more accurate
bounding boxes. In our method, we first prompt Gemini to generate a frame-level
scene graph. Then, we employ a matching algorithm to map each object in the
scene graph with a SAM2-generated or SAM2-propagated mask, producing a
temporally-consistent scene graph in dynamic environments. Finally, we repeat
this process for each of the following frames. We empirically demonstrate
that SAMJAM outperforms Gemini by 8.33% in mean recall on the EPIC-KITCHENS and
EPIC-KITCHENS-100 datasets.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 15:43:10 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Li",
"Joshua",
""
],
[
"Cantu",
"Fernando Jose Pena",
""
],
[
"Yu",
"Emily",
""
],
[
"Wong",
"Alexander",
""
],
[
"Cui",
"Yuchen",
""
],
[
"Chen",
"Yuhao",
""
]
] | TITLE: SAMJAM: Zero-Shot Video Scene Graph Generation for Egocentric Kitchen
Videos
ABSTRACT: Video Scene Graph Generation (VidSGG) is an important topic in understanding
dynamic kitchen environments. Current models for VidSGG require extensive
training to produce scene graphs. Recently, Vision Language Models (VLM) and
Vision Foundation Models (VFM) have demonstrated impressive zero-shot
capabilities in a variety of tasks. However, VLMs like Gemini struggle with the
dynamics for VidSGG, failing to maintain stable object identities across
frames. To overcome this limitation, we propose SAMJAM, a zero-shot pipeline
that combines SAM2's temporal tracking with Gemini's semantic understanding.
SAM2 also improves upon Gemini's object grounding by producing more accurate
bounding boxes. In our method, we first prompt Gemini to generate a frame-level
scene graph. Then, we employ a matching algorithm to map each object in the
scene graph with a SAM2-generated or SAM2-propagated mask, producing a
temporally-consistent scene graph in dynamic environments. Finally, we repeat
this process for each of the following frames. We empirically demonstrate
that SAMJAM outperforms Gemini by 8.33% in mean recall on the EPIC-KITCHENS and
EPIC-KITCHENS-100 datasets.
|
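A plausible sketch of the matching step between Gemini's scene-graph objects and SAM2 masks: Hungarian assignment on an IoU cost over bounding boxes. The box format and cost choice are assumptions rather than SAMJAM's exact algorithm.

```python
# Hungarian matching of predicted boxes to tracked masks by IoU.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """a, b: [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

gemini_boxes = np.array([[0, 0, 10, 10], [20, 20, 30, 30]])
sam2_boxes = np.array([[19, 21, 31, 29], [1, 0, 9, 11]])

cost = -np.array([[iou(g, s) for s in sam2_boxes] for g in gemini_boxes])
rows, cols = linear_sum_assignment(cost)      # maximize total IoU
for r, c in zip(rows, cols):
    print(f"scene-graph object {r} -> SAM2 mask {c} (IoU {-cost[r, c]:.2f})")
```

Propagating the matched mask identities across frames is what keeps the resulting scene graph temporally consistent.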
2504.07870 | Yize Chen | Ben Cheng, Yize Chen | Open Datasets for Grid Modeling and Visualization: An Alberta Power
Network Case | In submission, code available at
https://github.com/BenCheng2/CarbonDistributionMap | null | null | null | cs.HC cs.SY eess.SP eess.SY | http://creativecommons.org/licenses/by/4.0/ | In the power and energy industry, multiple entities in grid operational logs
are frequently recorded and updated. Thanks to recent advances in IT facilities
and smart metering services, a variety of datasets such as system load,
generation mix, and grid connection are often publicly available. While these
resources are valuable in evaluating power grid's operational conditions and
system resilience, the lack of fine-grained, accurate locational information
constrains the usage of current data, which further hinders the development of
smart grid and renewables integration. For instance, electricity end users are
not aware of nodal generation mix or carbon emissions, while the general public
have limited understanding about the effect of demand response or renewables
integration if only the whole system's demands and generations are available.
In this work, we focus on recovering power grid topology and line flow
directions from open public datasets. Taking the Alberta grid as a working
example, we start by mapping multi-modal power system datasets to the grid
topology integrated with geographical information. By designing a novel
optimization-based scheme to recover line flow directions, we are able to
analyze and visualize the interactions between generations and demand vectors
in an efficient manner. The proposed research is fully open-sourced and highly
generalizable; it can help model and visualize grid information, create
synthetic datasets, and facilitate analytics and decision-making frameworks for
the clean energy transition.
| [
{
"version": "v1",
"created": "Thu, 10 Apr 2025 15:45:07 GMT"
}
] | 2025-04-11T00:00:00 | [
[
"Cheng",
"Ben",
""
],
[
"Chen",
"Yize",
""
]
] | TITLE: Open Datasets for Grid Modeling and Visualization: An Alberta Power
Network Case
ABSTRACT: In the power and energy industry, multiple entities in grid operational logs
are frequently recorded and updated. Thanks to recent advances in IT facilities
and smart metering services, a variety of datasets such as system load,
generation mix, and grid connection are often publicly available. While these
resources are valuable in evaluating power grid's operational conditions and
system resilience, the lack of fine-grained, accurate locational information
constrains the usage of current data, which further hinders the development of
smart grid and renewables integration. For instance, electricity end users are
not aware of nodal generation mix or carbon emissions, while the general public
have limited understanding about the effect of demand response or renewables
integration if only the whole system's demands and generations are available.
In this work, we focus on recovering power grid topology and line flow
directions from open public datasets. Taking the Alberta grid as a working
example, we start by mapping multi-modal power system datasets to the grid
topology integrated with geographical information. By designing a novel
optimization-based scheme to recover line flow directions, we are able to
analyze and visualize the interactions between generations and demand vectors
in an efficient manner. The proposed research is fully open-sourced and highly
generalizable; it can help model and visualize grid information, create
synthetic datasets, and facilitate analytics and decision-making frameworks for
the clean energy transition.
|
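The core identity behind recovering line flows (and thus directions) from public data is flow conservation: B f = p, where B is the node-line incidence matrix, f the line flows, and p the net nodal injections. A toy least-squares version of this step (the paper's optimization over real Alberta data is richer):

```python
# Recover line flows on a 3-bus example from net injections via B f = p.
import numpy as np

# Lines oriented (0->1), (1->2), (0->2); columns are lines.
B = np.array([[ 1,  0,  1],
              [-1,  1,  0],
              [ 0, -1, -1]], dtype=float)   # node-line incidence matrix
p = np.array([1.5, -0.5, -1.0])             # net injections (gen minus load)

f, *_ = np.linalg.lstsq(B, p, rcond=None)   # flows satisfying conservation
for i, flow in enumerate(f):
    direction = "as oriented" if flow >= 0 else "reversed"
    print(f"line {i}: flow {flow:+.2f} ({direction})")
```

The sign of each recovered flow gives the line's direction relative to its assumed orientation, which is exactly the quantity missing from the public logs.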