Dataset Viewer
Auto-converted to Parquet
Column          Type
id              string
submitter       string
authors         string
title           string
comments        string
journal-ref     string
doi             string
report-no       string
categories      string
license         string
abstract        string
versions        list
update_date     timestamp[s]
authors_parsed  sequence
prompt          string
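The columns above are the dataset's flattened schema. As a quick orientation, the following minimal Python sketch shows one way to load such an auto-converted Parquet dataset and inspect a row; the repository id is a placeholder, not the real dataset path.

    from datasets import load_dataset  # pip install datasets

    # Hypothetical repo id -- substitute the actual dataset path on the Hub.
    ds = load_dataset("example-org/arxiv-metadata-prompts", split="train")

    row = ds[0]
    print(row["id"], "-", row["title"])
    print(row["categories"])           # space-separated arXiv category codes
    print(row["update_date"])          # from the timestamp[s] column
    for v in row["versions"]:          # list of {"version", "created"} records
        print(v["version"], v["created"])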
2204.07661
Soumyajit Gupta
Soumyajit Gupta, Venelin Kovatchev, Anubrata Das, Maria De-Arteaga, Matthew Lease
Finding Pareto Trade-offs in Fair and Accurate Detection of Toxic Speech
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Optimizing NLP models for fairness poses many challenges. Lack of differentiable fairness measures prevents gradient-based loss training or requires surrogate losses that diverge from the true metric of interest. In addition, competing objectives (e.g., accuracy vs. fairness) often require making trade-offs based on stakeholder preferences, but stakeholders may not know their preferences before seeing system performance under different trade-off settings. To address these challenges, we begin by formulating a differentiable version of a popular fairness measure, Accuracy Parity, to provide balanced accuracy across demographic groups. Next, we show how model-agnostic, HyperNetwork optimization can efficiently train arbitrary NLP model architectures to learn Pareto-optimal trade-offs between competing metrics. Focusing on the task of toxic language detection, we show the generality and efficacy of our methods across two datasets, three neural architectures, and three fairness losses.
[ { "version": "v1", "created": "Fri, 15 Apr 2022 22:11:25 GMT" }, { "version": "v2", "created": "Tue, 10 May 2022 18:36:41 GMT" }, { "version": "v3", "created": "Thu, 10 Apr 2025 00:29:44 GMT" } ]
2025-04-11T00:00:00
[ [ "Gupta", "Soumyajit", "" ], [ "Kovatchev", "Venelin", "" ], [ "Das", "Anubrata", "" ], [ "De-Arteaga", "Maria", "" ], [ "Lease", "Matthew", "" ] ]
TITLE: Finding Pareto Trade-offs in Fair and Accurate Detection of Toxic Speech ABSTRACT: Optimizing NLP models for fairness poses many challenges. Lack of differentiable fairness measures prevents gradient-based loss training or requires surrogate losses that diverge from the true metric of interest. In addition, competing objectives (e.g., accuracy vs. fairness) often require making trade-offs based on stakeholder preferences, but stakeholders may not know their preferences before seeing system performance under different trade-off settings. To address these challenges, we begin by formulating a differentiable version of a popular fairness measure, Accuracy Parity, to provide balanced accuracy across demographic groups. Next, we show how model-agnostic, HyperNetwork optimization can efficiently train arbitrary NLP model architectures to learn Pareto-optimal trade-offs between competing metrics. Focusing on the task of toxic language detection, we show the generality and efficacy of our methods across two datasets, three neural architectures, and three fairness losses.
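As the first record above shows, the prompt column is a deterministic concatenation of the title and abstract columns, and authors_parsed stores [last, first, suffix] triples. The helpers below are an illustrative sketch of that layout, not the dataset's original build code.

    def build_prompt(row: dict) -> str:
        # Reproduce the observed layout: "TITLE: <title> ABSTRACT: <abstract>"
        return f"TITLE: {row['title']} ABSTRACT: {row['abstract']}"

    def display_authors(parsed: list) -> str:
        # authors_parsed entries are [last, first, suffix] triples.
        return ", ".join(
            " ".join(part for part in (first, last, suffix) if part)
            for last, first, suffix in parsed
        )

    # Usage on a record shaped like the rows above:
    record = {
        "title": "Finding Pareto Trade-offs in Fair and Accurate Detection of Toxic Speech",
        "abstract": "Optimizing NLP models for fairness poses many challenges. ...",
    }
    print(build_prompt(record))
    print(display_authors([["Gupta", "Soumyajit", ""], ["Kovatchev", "Venelin", ""]]))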
2303.17408
Yucheng Ruan
Yucheng Ruan, Xiang Lan, Daniel J. Tan, Hairil Rizal Abdullah, Mengling Feng
P-Transformer: A Prompt-based Multimodal Transformer Architecture For Medical Tabular Data
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Medical tabular data, abundant in Electronic Health Records (EHRs), is a valuable resource for diverse medical tasks such as risk prediction. While deep learning approaches, particularly transformer-based models, have shown remarkable performance in tabular data prediction, challenges remain in adapting existing work effectively to the medical domain, such as ignoring unstructured free text and underutilizing the textual information in structured data. To address these issues, we propose PTransformer, a \underline{P}rompt-based multimodal \underline{Transformer} architecture designed specifically for medical tabular data. This framework consists of two critical components: a tabular cell embedding generator and a tabular transformer. The former efficiently encodes diverse modalities from both structured and unstructured tabular data into a harmonized language semantic space with the help of a pre-trained sentence encoder and medical prompts. The latter integrates cell representations to generate patient embeddings for various medical tasks. In comprehensive experiments on two real-world datasets for three medical tasks, PTransformer demonstrated improvements of 10.9%/11.0% and 0.5%/2.2% on RMSE/MAE, and 1.6%/0.8% on BACC/AUROC, over state-of-the-art (SOTA) baselines in predictive performance.
[ { "version": "v1", "created": "Thu, 30 Mar 2023 14:25:44 GMT" }, { "version": "v2", "created": "Wed, 9 Aug 2023 08:58:25 GMT" }, { "version": "v3", "created": "Tue, 9 Jan 2024 10:28:00 GMT" }, { "version": "v4", "created": "Thu, 10 Apr 2025 06:24:36 GMT" } ]
2025-04-11T00:00:00
[ [ "Ruan", "Yucheng", "" ], [ "Lan", "Xiang", "" ], [ "Tan", "Daniel J.", "" ], [ "Abdullah", "Hairil Rizal", "" ], [ "Feng", "Mengling", "" ] ]
TITLE: P-Transformer: A Prompt-based Multimodal Transformer Architecture For Medical Tabular Data ABSTRACT: Medical tabular data, abundant in Electronic Health Records (EHRs), is a valuable resource for diverse medical tasks such as risk prediction. While deep learning approaches, particularly transformer-based models, have shown remarkable performance in tabular data prediction, challenges remain in adapting existing work effectively to the medical domain, such as ignoring unstructured free text and underutilizing the textual information in structured data. To address these issues, we propose PTransformer, a \underline{P}rompt-based multimodal \underline{Transformer} architecture designed specifically for medical tabular data. This framework consists of two critical components: a tabular cell embedding generator and a tabular transformer. The former efficiently encodes diverse modalities from both structured and unstructured tabular data into a harmonized language semantic space with the help of a pre-trained sentence encoder and medical prompts. The latter integrates cell representations to generate patient embeddings for various medical tasks. In comprehensive experiments on two real-world datasets for three medical tasks, PTransformer demonstrated improvements of 10.9%/11.0% and 0.5%/2.2% on RMSE/MAE, and 1.6%/0.8% on BACC/AUROC, over state-of-the-art (SOTA) baselines in predictive performance.
2305.03535
Jonas Hein
Jonas Hein, Nicola Cavalcanti, Daniel Suter, Lukas Zingg, Fabio Carrillo, Lilian Calvet, Mazda Farshad, Marc Pollefeys, Nassir Navab, Philipp Fürnstahl
Next-generation Surgical Navigation: Marker-less Multi-view 6DoF Pose Estimation of Surgical Instruments
Accepted for publication in Medical Image Analysis. Project page: https://jonashein.github.io/mvpsp/
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
State-of-the-art research in traditional computer vision is increasingly leveraged in the surgical domain. A particular focus in computer-assisted surgery is to replace marker-based tracking systems for instrument localization with pure image-based 6DoF pose estimation using deep-learning methods. However, state-of-the-art single-view pose estimation methods do not yet meet the accuracy required for surgical navigation. In this context, we investigate the benefits of multi-view setups for highly accurate and occlusion-robust 6DoF pose estimation of surgical instruments and derive recommendations for an ideal camera system that addresses the challenges in the operating room. The contributions of this work are threefold. First, we present a multi-camera capture setup consisting of static and head-mounted cameras, which allows us to study the performance of pose estimation methods under various camera configurations. Second, we publish a multi-view RGB-D video dataset of ex-vivo spine surgeries, captured in a surgical wet lab and a real operating theatre and including rich annotations for surgeon, instrument, and patient anatomy. Third, we evaluate three state-of-the-art single-view and multi-view methods for the task of 6DoF pose estimation of surgical instruments and analyze the influence of camera configurations, training data, and occlusions on the pose accuracy and generalization ability. The best method utilizes five cameras in a multi-view pose optimization and achieves an average position and orientation error of 1.01 mm and 0.89{\deg} for a surgical drill as well as 2.79 mm and 3.33{\deg} for a screwdriver under optimal conditions. Our results demonstrate that marker-less tracking of surgical instruments is becoming a feasible alternative to existing marker-based systems.
[ { "version": "v1", "created": "Fri, 5 May 2023 13:42:19 GMT" }, { "version": "v2", "created": "Fri, 22 Dec 2023 20:52:50 GMT" }, { "version": "v3", "created": "Thu, 10 Apr 2025 17:23:33 GMT" } ]
2025-04-11T00:00:00
[ [ "Hein", "Jonas", "" ], [ "Cavalcanti", "Nicola", "" ], [ "Suter", "Daniel", "" ], [ "Zingg", "Lukas", "" ], [ "Carrillo", "Fabio", "" ], [ "Calvet", "Lilian", "" ], [ "Farshad", "Mazda", "" ], [ "Pollefeys", "Marc", "" ], [ "Navab", "Nassir", "" ], [ "Fürnstahl", "Philipp", "" ] ]
TITLE: Next-generation Surgical Navigation: Marker-less Multi-view 6DoF Pose Estimation of Surgical Instruments ABSTRACT: State-of-the-art research in traditional computer vision is increasingly leveraged in the surgical domain. A particular focus in computer-assisted surgery is to replace marker-based tracking systems for instrument localization with pure image-based 6DoF pose estimation using deep-learning methods. However, state-of-the-art single-view pose estimation methods do not yet meet the accuracy required for surgical navigation. In this context, we investigate the benefits of multi-view setups for highly accurate and occlusion-robust 6DoF pose estimation of surgical instruments and derive recommendations for an ideal camera system that addresses the challenges in the operating room. The contributions of this work are threefold. First, we present a multi-camera capture setup consisting of static and head-mounted cameras, which allows us to study the performance of pose estimation methods under various camera configurations. Second, we publish a multi-view RGB-D video dataset of ex-vivo spine surgeries, captured in a surgical wet lab and a real operating theatre and including rich annotations for surgeon, instrument, and patient anatomy. Third, we evaluate three state-of-the-art single-view and multi-view methods for the task of 6DoF pose estimation of surgical instruments and analyze the influence of camera configurations, training data, and occlusions on the pose accuracy and generalization ability. The best method utilizes five cameras in a multi-view pose optimization and achieves an average position and orientation error of 1.01 mm and 0.89{\deg} for a surgical drill as well as 2.79 mm and 3.33{\deg} for a screwdriver under optimal conditions. Our results demonstrate that marker-less tracking of surgical instruments is becoming a feasible alternative to existing marker-based systems.
2305.15932
Jie He
Jie He and Simon Chi Lok U and Víctor Gutiérrez-Basulto and Jeff Z. Pan
BUCA: A Binary Classification Approach to Unsupervised Commonsense Question Answering
There is a text error in Table 10
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
Unsupervised commonsense reasoning (UCR) is becoming increasingly popular as the construction of commonsense reasoning datasets is expensive, and they are inevitably limited in their scope. A popular approach to UCR is to fine-tune language models with external knowledge (e.g., knowledge graphs), but this usually requires a large number of training examples. In this paper, we propose to transform the downstream multiple choice question answering task into a simpler binary classification task by ranking all candidate answers according to their reasonableness. To this end, for training the model, we convert the knowledge graph triples into reasonable and unreasonable texts. Extensive experimental results show the effectiveness of our approach on various multiple choice question answering benchmarks. Furthermore, compared with existing UCR approaches using KGs, ours is less data hungry. Our code is available at https://github.com/probe2/BUCA.
[ { "version": "v1", "created": "Thu, 25 May 2023 10:59:47 GMT" }, { "version": "v2", "created": "Wed, 7 Jun 2023 20:33:09 GMT" }, { "version": "v3", "created": "Mon, 10 Mar 2025 09:28:29 GMT" }, { "version": "v4", "created": "Wed, 9 Apr 2025 21:40:10 GMT" } ]
2025-04-11T00:00:00
[ [ "He", "Jie", "" ], [ "U", "Simon Chi Lok", "" ], [ "Gutiérrez-Basulto", "Víctor", "" ], [ "Pan", "Jeff Z.", "" ] ]
TITLE: BUCA: A Binary Classification Approach to Unsupervised Commonsense Question Answering ABSTRACT: Unsupervised commonsense reasoning (UCR) is becoming increasingly popular as the construction of commonsense reasoning datasets is expensive, and they are inevitably limited in their scope. A popular approach to UCR is to fine-tune language models with external knowledge (e.g., knowledge graphs), but this usually requires a large number of training examples. In this paper, we propose to transform the downstream multiple choice question answering task into a simpler binary classification task by ranking all candidate answers according to their reasonableness. To this end, for training the model, we convert the knowledge graph triples into reasonable and unreasonable texts. Extensive experimental results show the effectiveness of our approach on various multiple choice question answering benchmarks. Furthermore, compared with existing UCR approaches using KGs, ours is less data hungry. Our code is available at https://github.com/probe2/BUCA.
2308.07223
Melanie Roschewitz
Mélanie Roschewitz and Ben Glocker
Distance Matters For Improving Performance Estimation Under Covariate Shift
Accepted to ICCV Workshop on Uncertainty Quantification for Computer Vision 2023
Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2023, pp. 4549-4559
10.1109/ICCVW60793.2023.00489
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Performance estimation under covariate shift is a crucial component of safe AI model deployment, especially for sensitive use cases. Recently, several solutions were proposed to tackle this problem, most leveraging model predictions or softmax confidence to derive accuracy estimates. However, under dataset shifts, confidence scores may become ill-calibrated if samples are too far from the training distribution. In this work, we show that taking into account distances of test samples to their expected training distribution can significantly improve performance estimation under covariate shift. Precisely, we introduce a "distance-check" to flag samples that lie too far from the expected distribution, to avoid relying on their untrustworthy model outputs in the accuracy estimation step. We demonstrate the effectiveness of this method on 13 image classification tasks, across a wide range of natural and synthetic distribution shifts and hundreds of models, with a median relative MAE improvement of 27% over the best baseline across all tasks, and SOTA performance on 10 out of 13 tasks. Our code is publicly available at https://github.com/melanibe/distance_matters_performance_estimation.
[ { "version": "v1", "created": "Mon, 14 Aug 2023 15:49:19 GMT" } ]
2025-04-11T00:00:00
[ [ "Roschewitz", "Mélanie", "" ], [ "Glocker", "Ben", "" ] ]
TITLE: Distance Matters For Improving Performance Estimation Under Covariate Shift ABSTRACT: Performance estimation under covariate shift is a crucial component of safe AI model deployment, especially for sensitive use cases. Recently, several solutions were proposed to tackle this problem, most leveraging model predictions or softmax confidence to derive accuracy estimates. However, under dataset shifts, confidence scores may become ill-calibrated if samples are too far from the training distribution. In this work, we show that taking into account distances of test samples to their expected training distribution can significantly improve performance estimation under covariate shift. Precisely, we introduce a "distance-check" to flag samples that lie too far from the expected distribution, to avoid relying on their untrustworthy model outputs in the accuracy estimation step. We demonstrate the effectiveness of this method on 13 image classification tasks, across a wide range of natural and synthetic distribution shifts and hundreds of models, with a median relative MAE improvement of 27% over the best baseline across all tasks, and SOTA performance on 10 out of 13 tasks. Our code is publicly available at https://github.com/melanibe/distance_matters_performance_estimation.
2309.15408
Mario Beraha
Mario Beraha, Stefano Favaro, Matteo Sesia
A smoothed-Bayesian approach to frequency recovery from sketched data
null
null
null
null
stat.ME cs.DS cs.IR math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We provide a novel statistical perspective on a classical problem at the intersection of computer science and information theory: recovering the empirical frequency of a symbol in a large discrete dataset using only a compressed representation, or sketch, obtained via random hashing. Departing from traditional algorithmic approaches, recent works have proposed Bayesian nonparametric (BNP) methods that can provide more informative frequency estimates by leveraging modeling assumptions about the distribution of the sketched data. In this paper, we propose a smoothed-Bayesian method, inspired by existing BNP approaches but designed in a frequentist framework to overcome the computational limitations of the BNP approaches when dealing with large-scale data from realistic distributions, including those with power-law tail behaviors. For sketches obtained with a single hash function, our approach is supported by rigorous frequentist properties, including unbiasedness and optimality under a squared error loss function within an intuitive class of linear estimators. For sketches with multiple hash functions, we introduce an approach based on multi-view learning to construct computationally efficient frequency estimators. We validate our method on synthetic and real data, comparing its performance to that of existing alternatives.
[ { "version": "v1", "created": "Wed, 27 Sep 2023 05:20:53 GMT" }, { "version": "v2", "created": "Wed, 12 Jun 2024 13:15:11 GMT" }, { "version": "v3", "created": "Thu, 28 Nov 2024 13:46:38 GMT" }, { "version": "v4", "created": "Thu, 10 Apr 2025 08:21:29 GMT" } ]
2025-04-11T00:00:00
[ [ "Beraha", "Mario", "" ], [ "Favaro", "Stefano", "" ], [ "Sesia", "Matteo", "" ] ]
TITLE: A smoothed-Bayesian approach to frequency recovery from sketched data ABSTRACT: We provide a novel statistical perspective on a classical problem at the intersection of computer science and information theory: recovering the empirical frequency of a symbol in a large discrete dataset using only a compressed representation, or sketch, obtained via random hashing. Departing from traditional algorithmic approaches, recent works have proposed Bayesian nonparametric (BNP) methods that can provide more informative frequency estimates by leveraging modeling assumptions about the distribution of the sketched data. In this paper, we propose a smoothed-Bayesian method, inspired by existing BNP approaches but designed in a frequentist framework to overcome the computational limitations of the BNP approaches when dealing with large-scale data from realistic distributions, including those with power-law tail behaviors. For sketches obtained with a single hash function, our approach is supported by rigorous frequentist properties, including unbiasedness and optimality under a squared error loss function within an intuitive class of linear estimators. For sketches with multiple hash functions, we introduce an approach based on multi-view learning to construct computationally efficient frequency estimators. We validate our method on synthetic and real data, comparing its performance to that of existing alternatives.
2311.06293
Zeynab Kaseb
Zeynab Kaseb, Matthias Moller, Giorgio Tosti Balducci, Peter Palensky, Pedro P. Vergara
Quantum Neural Networks for Power Flow Analysis
8 pages, 13 figures
null
10.1016/j.epsr.2024.110677
null
quant-ph cs.LG cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
This paper explores the potential application of quantum and hybrid quantum-classical neural networks in power flow analysis. Experiments are conducted using two datasets based on 4-bus and 33-bus test systems. A systematic performance comparison is also conducted among quantum, hybrid quantum-classical, and classical neural networks. The comparison is based on (i) generalization ability, (ii) robustness, (iii) training dataset size needed, (iv) training error, and (v) training process stability. The results show that the developed hybrid quantum-classical neural network outperforms both quantum and classical neural networks, and hence can improve deep learning-based power flow analysis in the noisy-intermediate-scale quantum (NISQ) and fault-tolerant quantum (FTQ) era.
[ { "version": "v1", "created": "Sat, 4 Nov 2023 11:25:31 GMT" }, { "version": "v2", "created": "Sun, 10 Mar 2024 15:49:57 GMT" } ]
2025-04-11T00:00:00
[ [ "Kaseb", "Zeynab", "" ], [ "Moller", "Matthias", "" ], [ "Balducci", "Giorgio Tosti", "" ], [ "Palensky", "Peter", "" ], [ "Vergara", "Pedro P.", "" ] ]
TITLE: Quantum Neural Networks for Power Flow Analysis ABSTRACT: This paper explores the potential application of quantum and hybrid quantum-classical neural networks in power flow analysis. Experiments are conducted using two datasets based on 4-bus and 33-bus test systems. A systematic performance comparison is also conducted among quantum, hybrid quantum-classical, and classical neural networks. The comparison is based on (i) generalization ability, (ii) robustness, (iii) training dataset size needed, (iv) training error, and (v) training process stability. The results show that the developed hybrid quantum-classical neural network outperforms both quantum and classical neural networks, and hence can improve deep learning-based power flow analysis in the noisy-intermediate-scale quantum (NISQ) and fault-tolerant quantum (FTQ) era.
2311.13706
Nicolás Gaggion Ph.D.
Nicolás Gaggion, Benjamin A. Matheson, Yan Xia, Rodrigo Bonazzola, Nishant Ravikumar, Zeike A. Taylor, Diego H. Milone, Alejandro F. Frangi, Enzo Ferrante
Multi-view Hybrid Graph Convolutional Network for Volume-to-mesh Reconstruction in Cardiovascular MRI
null
null
null
null
eess.IV cs.CV
http://creativecommons.org/licenses/by-sa/4.0/
Cardiovascular magnetic resonance imaging is emerging as a crucial tool to examine cardiac morphology and function. Essential to this endeavour are anatomical 3D surface and volumetric meshes derived from CMR images, which facilitate computational anatomy studies, biomarker discovery, and in-silico simulations. Traditional approaches typically follow complex multi-step pipelines, first segmenting images and then reconstructing meshes, making them time-consuming and prone to error propagation. In response, we introduce HybridVNet, a novel architecture for direct image-to-mesh extraction seamlessly integrating standard convolutional neural networks with graph convolutions, which we prove can efficiently handle surface and volumetric meshes by encoding them as graph structures. To further enhance accuracy, we propose a multi-view HybridVNet architecture which processes both long axis and short axis CMR, showing that it can increase the performance of cardiac MR mesh generation. Our model combines traditional convolutional networks with variational graph generative models, deep supervision and mesh-specific regularisation. Experiments on a comprehensive dataset from the UK Biobank confirm the potential of HybridVNet to significantly advance cardiac imaging and computational cardiology by efficiently generating high-fidelity meshes from CMR images. Multi-view HybridVNet outperforms the state-of-the-art, achieving improvements of up to $\sim$27\% reduction in Mean Contour Distance (from 1.86 mm to 1.35 mm for the LV Myocardium), up to $\sim$18\% improvement in Hausdorff distance (from 4.74 mm to 3.89 mm, for the LV Endocardium), and up to $\sim$8\% in Dice Coefficient (from 0.78 to 0.84, for the LV Myocardium), highlighting its superior accuracy.
[ { "version": "v1", "created": "Wed, 22 Nov 2023 21:51:29 GMT" }, { "version": "v2", "created": "Tue, 13 Aug 2024 19:18:41 GMT" }, { "version": "v3", "created": "Thu, 10 Apr 2025 16:25:45 GMT" } ]
2025-04-11T00:00:00
[ [ "Gaggion", "Nicolás", "" ], [ "Matheson", "Benjamin A.", "" ], [ "Xia", "Yan", "" ], [ "Bonazzola", "Rodrigo", "" ], [ "Ravikumar", "Nishant", "" ], [ "Taylor", "Zeike A.", "" ], [ "Milone", "Diego H.", "" ], [ "Frangi", "Alejandro F.", "" ], [ "Ferrante", "Enzo", "" ] ]
TITLE: Multi-view Hybrid Graph Convolutional Network for Volume-to-mesh Reconstruction in Cardiovascular MRI ABSTRACT: Cardiovascular magnetic resonance imaging is emerging as a crucial tool to examine cardiac morphology and function. Essential to this endeavour are anatomical 3D surface and volumetric meshes derived from CMR images, which facilitate computational anatomy studies, biomarker discovery, and in-silico simulations. Traditional approaches typically follow complex multi-step pipelines, first segmenting images and then reconstructing meshes, making them time-consuming and prone to error propagation. In response, we introduce HybridVNet, a novel architecture for direct image-to-mesh extraction seamlessly integrating standard convolutional neural networks with graph convolutions, which we prove can efficiently handle surface and volumetric meshes by encoding them as graph structures. To further enhance accuracy, we propose a multi-view HybridVNet architecture which processes both long axis and short axis CMR, showing that it can increase the performance of cardiac MR mesh generation. Our model combines traditional convolutional networks with variational graph generative models, deep supervision and mesh-specific regularisation. Experiments on a comprehensive dataset from the UK Biobank confirm the potential of HybridVNet to significantly advance cardiac imaging and computational cardiology by efficiently generating high-fidelity meshes from CMR images. Multi-view HybridVNet outperforms the state-of-the-art, achieving improvements of up to $\sim$27\% reduction in Mean Contour Distance (from 1.86 mm to 1.35 mm for the LV Myocardium), up to $\sim$18\% improvement in Hausdorff distance (from 4.74 mm to 3.89 mm, for the LV Endocardium), and up to $\sim$8\% in Dice Coefficient (from 0.78 to 0.84, for the LV Myocardium), highlighting its superior accuracy.
2401.03048
Xin Ma
Xin Ma, Yaohui Wang, Xinyuan Chen, Gengyun Jia, Ziwei Liu, Yuan-Fang Li, Cunjian Chen, Yu Qiao
Latte: Latent Diffusion Transformer for Video Generation
Accepted by Transactions on Machine Learning Research 2025; Project page: https://maxin-cn.github.io/latte_project
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a novel Latent Diffusion Transformer, namely Latte, for video generation. Latte first extracts spatio-temporal tokens from input videos and then adopts a series of Transformer blocks to model video distribution in the latent space. In order to model a substantial number of tokens extracted from videos, four efficient variants are introduced from the perspective of decomposing the spatial and temporal dimensions of input videos. To improve the quality of generated videos, we determine the best practices of Latte through rigorous experimental analysis, including video clip patch embedding, model variants, timestep-class information injection, temporal positional embedding, and learning strategies. Our comprehensive evaluation demonstrates that Latte achieves state-of-the-art performance across four standard video generation datasets, i.e., FaceForensics, SkyTimelapse, UCF101, and Taichi-HD. In addition, we extend Latte to the text-to-video generation (T2V) task, where Latte achieves results comparable to recent T2V models. We strongly believe that Latte provides valuable insights for future research on incorporating Transformers into diffusion models for video generation.
[ { "version": "v1", "created": "Fri, 5 Jan 2024 19:55:15 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 09:28:20 GMT" } ]
2025-04-11T00:00:00
[ [ "Ma", "Xin", "" ], [ "Wang", "Yaohui", "" ], [ "Chen", "Xinyuan", "" ], [ "Jia", "Gengyun", "" ], [ "Liu", "Ziwei", "" ], [ "Li", "Yuan-Fang", "" ], [ "Chen", "Cunjian", "" ], [ "Qiao", "Yu", "" ] ]
TITLE: Latte: Latent Diffusion Transformer for Video Generation ABSTRACT: We propose a novel Latent Diffusion Transformer, namely Latte, for video generation. Latte first extracts spatio-temporal tokens from input videos and then adopts a series of Transformer blocks to model video distribution in the latent space. In order to model a substantial number of tokens extracted from videos, four efficient variants are introduced from the perspective of decomposing the spatial and temporal dimensions of input videos. To improve the quality of generated videos, we determine the best practices of Latte through rigorous experimental analysis, including video clip patch embedding, model variants, timestep-class information injection, temporal positional embedding, and learning strategies. Our comprehensive evaluation demonstrates that Latte achieves state-of-the-art performance across four standard video generation datasets, i.e., FaceForensics, SkyTimelapse, UCF101, and Taichi-HD. In addition, we extend Latte to the text-to-video generation (T2V) task, where Latte achieves results comparable to recent T2V models. We strongly believe that Latte provides valuable insights for future research on incorporating Transformers into diffusion models for video generation.
2402.18206
Abhijnya Bhat
Rishubh Parihar, Abhijnya Bhat, Abhipsa Basu, Saswat Mallick, Jogendra Nath Kundu, R. Venkatesh Babu
Balancing Act: Distribution-Guided Debiasing in Diffusion Models
CVPR 2024. Project Page : https://ab-34.github.io/balancing_act/
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Diffusion Models (DMs) have emerged as powerful generative models with unprecedented image generation capability. These models are widely used for data augmentation and creative applications. However, DMs reflect the biases present in the training datasets. This is especially concerning in the context of faces, where the DM prefers one demographic subgroup over others (e.g., female vs. male). In this work, we present a method for debiasing DMs without relying on additional data or model retraining. Specifically, we propose Distribution Guidance, which enforces the generated images to follow the prescribed attribute distribution. To realize this, we build on the key insight that the latent features of the denoising UNet hold rich demographic semantics, and the same can be leveraged to guide debiased generation. We train an Attribute Distribution Predictor (ADP), a small MLP that maps the latent features to the distribution of attributes. ADP is trained with pseudo labels generated from existing attribute classifiers. The proposed Distribution Guidance with ADP enables fair generation. Our method reduces bias across single/multiple attributes and outperforms the baseline by a significant margin for unconditional and text-conditional diffusion models. Further, we present a downstream task of training a fair attribute classifier by rebalancing the training set with our generated data.
[ { "version": "v1", "created": "Wed, 28 Feb 2024 09:53:17 GMT" }, { "version": "v2", "created": "Wed, 22 May 2024 17:23:22 GMT" }, { "version": "v3", "created": "Wed, 29 May 2024 13:33:57 GMT" }, { "version": "v4", "created": "Thu, 10 Apr 2025 14:39:59 GMT" } ]
2025-04-11T00:00:00
[ [ "Parihar", "Rishubh", "" ], [ "Bhat", "Abhijnya", "" ], [ "Basu", "Abhipsa", "" ], [ "Mallick", "Saswat", "" ], [ "Kundu", "Jogendra Nath", "" ], [ "Babu", "R. Venkatesh", "" ] ]
TITLE: Balancing Act: Distribution-Guided Debiasing in Diffusion Models ABSTRACT: Diffusion Models (DMs) have emerged as powerful generative models with unprecedented image generation capability. These models are widely used for data augmentation and creative applications. However, DMs reflect the biases present in the training datasets. This is especially concerning in the context of faces, where the DM prefers one demographic subgroup over others (e.g., female vs. male). In this work, we present a method for debiasing DMs without relying on additional data or model retraining. Specifically, we propose Distribution Guidance, which enforces the generated images to follow the prescribed attribute distribution. To realize this, we build on the key insight that the latent features of the denoising UNet hold rich demographic semantics, and the same can be leveraged to guide debiased generation. We train an Attribute Distribution Predictor (ADP), a small MLP that maps the latent features to the distribution of attributes. ADP is trained with pseudo labels generated from existing attribute classifiers. The proposed Distribution Guidance with ADP enables fair generation. Our method reduces bias across single/multiple attributes and outperforms the baseline by a significant margin for unconditional and text-conditional diffusion models. Further, we present a downstream task of training a fair attribute classifier by rebalancing the training set with our generated data.
2402.18307
Joanne Lin
Joanne Lin, Nantheera Anantrasirichai, David Bull
Multi-Scale Denoising in the Feature Space for Low-Light Instance Segmentation
Accepted by ICASSP 2025
2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Hyderabad, India, 2025, pp. 1-5
10.1109/ICASSP49660.2025.10889336
ICASSP 2025
cs.CV
http://creativecommons.org/licenses/by/4.0/
Instance segmentation for low-light imagery remains largely unexplored due to the challenges imposed by such conditions, for example, shot noise due to low photon count, color distortions, and reduced contrast. In this paper, we propose an end-to-end solution to address this challenging task. Our proposed method implements weighted non-local blocks (wNLB) in the feature extractor. This integration enables an inherent denoising process at the feature level. As a result, our method eliminates the need for aligned ground truth images during training, thus supporting training on real-world low-light datasets. We introduce additional learnable weights at each layer in order to enhance the network's adaptability to real-world noise characteristics, which affect different feature scales in different ways. Experimental results on several object detectors show that the proposed method outperforms the pretrained networks with an Average Precision (AP) improvement of at least +7.6, with the introduction of wNLB further enhancing AP by up to +1.3.
[ { "version": "v1", "created": "Wed, 28 Feb 2024 13:07:16 GMT" }, { "version": "v2", "created": "Tue, 10 Sep 2024 08:50:11 GMT" }, { "version": "v3", "created": "Thu, 2 Jan 2025 23:06:37 GMT" } ]
2025-04-11T00:00:00
[ [ "Lin", "Joanne", "" ], [ "Anantrasirichai", "Nantheera", "" ], [ "Bull", "David", "" ] ]
TITLE: Multi-Scale Denoising in the Feature Space for Low-Light Instance Segmentation ABSTRACT: Instance segmentation for low-light imagery remains largely unexplored due to the challenges imposed by such conditions, for example, shot noise due to low photon count, color distortions, and reduced contrast. In this paper, we propose an end-to-end solution to address this challenging task. Our proposed method implements weighted non-local blocks (wNLB) in the feature extractor. This integration enables an inherent denoising process at the feature level. As a result, our method eliminates the need for aligned ground truth images during training, thus supporting training on real-world low-light datasets. We introduce additional learnable weights at each layer in order to enhance the network's adaptability to real-world noise characteristics, which affect different feature scales in different ways. Experimental results on several object detectors show that the proposed method outperforms the pretrained networks with an Average Precision (AP) improvement of at least +7.6, with the introduction of wNLB further enhancing AP by up to +1.3.
2404.05297
Sujin Han
Sujin Han, Jinseo Kim, Sung-Ju Lee, Insu Yun
Automated Attack Synthesis for Constant Product Market Makers
22 pages, 16 figures, 8 tables. Accepted at ACM ISSTA 2025
null
10.1145/3728872
null
cs.CR cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Decentralized Finance (DeFi) enables many novel applications that were impossible in traditional finance. However, it also introduces new types of vulnerabilities. An example of such vulnerabilities is a composability bug between token contracts and a Decentralized Exchange (DEX) that follows the Constant Product Market Maker (CPMM) model. This type of bug, which we refer to as a CPMM composability bug, originates from issues in token contracts that make them incompatible with CPMMs, thereby endangering other tokens within the CPMM ecosystem. Since 2022, 23 such exploits have resulted in a total loss of 2.2M USD. BlockSec, a smart contract auditing company, reported that 138 such exploits occurred in February 2023 alone. In this paper, we propose CPMMX, a tool that automatically detects CPMM composability bugs across entire blockchains. To achieve such scalability, we first formalized CPMM composability bugs and found that these bugs can be induced by breaking two safety invariants. Based on this finding, we designed CPMMX equipped with a two-step approach, called shallow-then-deep search. In more detail, it first uses shallow search to find transactions that break the invariants. Then, it uses deep search to refine these transactions, making them profitable for the attacker. We evaluated CPMMX against five baselines on two public datasets and one synthetic dataset. In our evaluation, CPMMX detected 1.5x to 2.5x more vulnerabilities than baseline methods. It also analyzed contracts significantly faster, achieving higher F1 scores than the baselines. Additionally, we applied CPMMX to all contracts on the latest blocks of the Ethereum and Binance networks and discovered 26 new exploits that can result in 15.7K USD profit in total.
[ { "version": "v1", "created": "Mon, 8 Apr 2024 08:35:15 GMT" }, { "version": "v2", "created": "Thu, 25 Apr 2024 01:02:53 GMT" }, { "version": "v3", "created": "Thu, 10 Apr 2025 06:19:13 GMT" } ]
2025-04-11T00:00:00
[ [ "Han", "Sujin", "" ], [ "Kim", "Jinseo", "" ], [ "Lee", "Sung-Ju", "" ], [ "Yun", "Insu", "" ] ]
TITLE: Automated Attack Synthesis for Constant Product Market Makers ABSTRACT: Decentralized Finance (DeFi) enables many novel applications that were impossible in traditional finance. However, it also introduces new types of vulnerabilities. An example of such vulnerabilities is a composability bug between token contracts and a Decentralized Exchange (DEX) that follows the Constant Product Market Maker (CPMM) model. This type of bug, which we refer to as a CPMM composability bug, originates from issues in token contracts that make them incompatible with CPMMs, thereby endangering other tokens within the CPMM ecosystem. Since 2022, 23 such exploits have resulted in a total loss of 2.2M USD. BlockSec, a smart contract auditing company, reported that 138 such exploits occurred in February 2023 alone. In this paper, we propose CPMMX, a tool that automatically detects CPMM composability bugs across entire blockchains. To achieve such scalability, we first formalized CPMM composability bugs and found that these bugs can be induced by breaking two safety invariants. Based on this finding, we designed CPMMX equipped with a two-step approach, called shallow-then-deep search. In more detail, it first uses shallow search to find transactions that break the invariants. Then, it uses deep search to refine these transactions, making them profitable for the attacker. We evaluated CPMMX against five baselines on two public datasets and one synthetic dataset. In our evaluation, CPMMX detected 1.5x to 2.5x more vulnerabilities than baseline methods. It also analyzed contracts significantly faster, achieving higher F1 scores than the baselines. Additionally, we applied CPMMX to all contracts on the latest blocks of the Ethereum and Binance networks and discovered 26 new exploits that can result in 15.7K USD profit in total.
2404.10702
Arka Ujjal Dey
Arka Ujjal Dey, Artemis Llabrés, Ernest Valveny and Dimosthenis Karatzas
Retrieval Augmented Verification for Zero-Shot Detection of Multimodal Disinformation
null
null
null
null
cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rise of disinformation on social media, especially through the strategic manipulation or repurposing of images, paired with provocative text, presents a complex challenge for traditional fact-checking methods. In this paper, we introduce a novel zero-shot approach to identify and interpret such multimodal disinformation, leveraging real-time evidence from credible sources. Our framework goes beyond simple true-or-false classifications by analyzing both the textual and visual components of social media claims in a structured, interpretable manner. By constructing a graph-based representation of entities and relationships within the claim, combined with pretrained visual features, our system automatically retrieves and matches external evidence to identify inconsistencies. Unlike traditional models dependent on labeled datasets, our method empowers users with transparency, illuminating exactly which aspects of the claim hold up to scrutiny and which do not. Our framework achieves competitive performance with state-of-the-art methods while offering enhanced explainability.
[ { "version": "v1", "created": "Tue, 16 Apr 2024 16:19:22 GMT" }, { "version": "v2", "created": "Mon, 29 Apr 2024 17:19:53 GMT" }, { "version": "v3", "created": "Wed, 9 Apr 2025 22:23:08 GMT" } ]
2025-04-11T00:00:00
[ [ "Dey", "Arka Ujjal", "" ], [ "Llabrés", "Artemis", "" ], [ "Valveny", "Ernest", "" ], [ "Karatzas", "Dimosthenis", "" ] ]
TITLE: Retrieval Augmented Verification for Zero-Shot Detection of Multimodal Disinformation ABSTRACT: The rise of disinformation on social media, especially through the strategic manipulation or repurposing of images, paired with provocative text, presents a complex challenge for traditional fact-checking methods. In this paper, we introduce a novel zero-shot approach to identify and interpret such multimodal disinformation, leveraging real-time evidence from credible sources. Our framework goes beyond simple true-or-false classifications by analyzing both the textual and visual components of social media claims in a structured, interpretable manner. By constructing a graph-based representation of entities and relationships within the claim, combined with pretrained visual features, our system automatically retrieves and matches external evidence to identify inconsistencies. Unlike traditional models dependent on labeled datasets, our method empowers users with transparency, illuminating exactly which aspects of the claim hold up to scrutiny and which do not. Our framework achieves competitive performance with state-of-the-art methods while offering enhanced explainability.
2405.16924
Francesco Montagna
Francesco Montagna, Max Cairney-Leeming, Dhanya Sridhar, Francesco Locatello
Demystifying amortized causal discovery with transformers
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Supervised learning approaches for causal discovery from observational data often achieve competitive performance despite seemingly avoiding explicit assumptions that traditional methods make for identifiability. In this work, we investigate CSIvA (Ke et al., 2023), a transformer-based model promising to train on synthetic data and transfer to real data. First, we bridge the gap with existing identifiability theory and show that constraints on the training data distribution implicitly define a prior on the test observations. Consistent with classical approaches, good performance is achieved when we have a good prior on the test data, and the underlying model is identifiable. At the same time, we find new trade-offs. Training on datasets generated from different classes of causal models, unambiguously identifiable in isolation, improves the test generalization. Performance is still guaranteed, as the ambiguous cases resulting from the mixture of identifiable causal models are unlikely to occur (which we formally prove). Overall, our study finds that amortized causal discovery still needs to obey identifiability theory, but it also differs from classical methods in how the assumptions are formulated, trading more reliance on assumptions on the noise type for fewer hypotheses on the mechanisms.
[ { "version": "v1", "created": "Mon, 27 May 2024 08:17:49 GMT" }, { "version": "v2", "created": "Wed, 9 Apr 2025 20:30:46 GMT" } ]
2025-04-11T00:00:00
[ [ "Montagna", "Francesco", "" ], [ "Cairney-Leeming", "Max", "" ], [ "Sridhar", "Dhanya", "" ], [ "Locatello", "Francesco", "" ] ]
TITLE: Demystifying amortized causal discovery with transformers ABSTRACT: Supervised learning approaches for causal discovery from observational data often achieve competitive performance despite seemingly avoiding explicit assumptions that traditional methods make for identifiability. In this work, we investigate CSIvA (Ke et al., 2023), a transformer-based model promising to train on synthetic data and transfer to real data. First, we bridge the gap with existing identifiability theory and show that constraints on the training data distribution implicitly define a prior on the test observations. Consistent with classical approaches, good performance is achieved when we have a good prior on the test data, and the underlying model is identifiable. At the same time, we find new trade-offs. Training on datasets generated from different classes of causal models, unambiguously identifiable in isolation, improves the test generalization. Performance is still guaranteed, as the ambiguous cases resulting from the mixture of identifiable causal models are unlikely to occur (which we formally prove). Overall, our study finds that amortized causal discovery still needs to obey identifiability theory, but it also differs from classical methods in how the assumptions are formulated, trading more reliance on assumptions on the noise type for fewer hypotheses on the mechanisms.
2405.18560
Shubhang Bhatnagar
Shubhang Bhatnagar, Narendra Ahuja
Potential Field Based Deep Metric Learning
Accepted to CVPR 2025
null
null
null
cs.CV cs.AI cs.IR cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep metric learning (DML) involves training a network to learn a semantically meaningful representation space. Many current approaches mine n-tuples of examples and model interactions within each tuple. We present a novel, compositional DML model that, instead of operating on tuples, represents the influence of each example (embedding) by a continuous potential field, and superposes the fields to obtain their combined global potential field. We use attractive/repulsive potential fields to represent interactions among embeddings from images of the same/different classes. Contrary to typical learning methods, where mutual influence of samples is proportional to their distance, we enforce reduction in such influence with distance, leading to a decaying field. We show that such decay helps improve performance on real-world datasets with large intra-class variations and label noise. Like other proxy-based methods, we also use proxies to succinctly represent sub-populations of examples. We evaluate our method on three standard DML benchmarks (Cars-196, CUB-200-2011, and SOP), where it outperforms state-of-the-art baselines.
[ { "version": "v1", "created": "Tue, 28 May 2024 20:10:06 GMT" }, { "version": "v2", "created": "Sun, 1 Dec 2024 05:22:22 GMT" }, { "version": "v3", "created": "Thu, 10 Apr 2025 04:49:39 GMT" } ]
2025-04-11T00:00:00
[ [ "Bhatnagar", "Shubhang", "" ], [ "Ahuja", "Narendra", "" ] ]
TITLE: Potential Field Based Deep Metric Learning ABSTRACT: Deep metric learning (DML) involves training a network to learn a semantically meaningful representation space. Many current approaches mine n-tuples of examples and model interactions within each tuple. We present a novel, compositional DML model that, instead of operating on tuples, represents the influence of each example (embedding) by a continuous potential field, and superposes the fields to obtain their combined global potential field. We use attractive/repulsive potential fields to represent interactions among embeddings from images of the same/different classes. Contrary to typical learning methods, where mutual influence of samples is proportional to their distance, we enforce reduction in such influence with distance, leading to a decaying field. We show that such decay helps improve performance on real-world datasets with large intra-class variations and label noise. Like other proxy-based methods, we also use proxies to succinctly represent sub-populations of examples. We evaluate our method on three standard DML benchmarks (Cars-196, CUB-200-2011, and SOP), where it outperforms state-of-the-art baselines.
2406.07409
Juntao You
HanQin Cai, Longxiu Huang, Xiliang Lu, Juntao You
Accelerating Ill-conditioned Hankel Matrix Recovery via Structured Newton-like Descent
null
null
null
null
stat.ML cs.IT cs.LG eess.SP math.IT math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies the robust Hankel recovery problem, which simultaneously removes sparse outliers and fills in missing entries from the partial observation. We propose a novel non-convex algorithm, coined Hankel Structured Newton-Like Descent (HSNLD), to tackle the robust Hankel recovery problem. HSNLD is highly efficient with linear convergence, and its convergence rate is independent of the condition number of the underlying Hankel matrix. The recovery guarantee has been established under some mild conditions. Numerical experiments on both synthetic and real datasets show the superior performance of HSNLD against state-of-the-art algorithms.
[ { "version": "v1", "created": "Tue, 11 Jun 2024 16:14:30 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 07:55:52 GMT" } ]
2025-04-11T00:00:00
[ [ "Cai", "HanQin", "" ], [ "Huang", "Longxiu", "" ], [ "Lu", "Xiliang", "" ], [ "You", "Juntao", "" ] ]
TITLE: Accelerating Ill-conditioned Hankel Matrix Recovery via Structured Newton-like Descent ABSTRACT: This paper studies the robust Hankel recovery problem, which simultaneously removes sparse outliers and fills in missing entries from the partial observation. We propose a novel non-convex algorithm, coined Hankel Structured Newton-Like Descent (HSNLD), to tackle the robust Hankel recovery problem. HSNLD is highly efficient with linear convergence, and its convergence rate is independent of the condition number of the underlying Hankel matrix. The recovery guarantee has been established under some mild conditions. Numerical experiments on both synthetic and real datasets show the superior performance of HSNLD against state-of-the-art algorithms.
2406.12123
Jingxi Xu
Jingxi Xu, Runsheng Wang, Siqi Shang, Ava Chen, Lauren Winterbottom, To-Liang Hsu, Wenxi Chen, Khondoker Ahmed, Pedro Leandro La Rotta, Xinyue Zhu, Dawn M. Nilsen, Joel Stein, Matei Ciocarlie
ChatEMG: Synthetic Data Generation to Control a Robotic Hand Orthosis for Stroke
8 pages; accepted to RA-L in November 2024; published at RA-L in February 2025
null
null
null
cs.RO cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Intent inferral on a hand orthosis for stroke patients is challenging due to the difficulty of data collection. Additionally, EMG signals exhibit significant variations across different conditions, sessions, and subjects, making it hard for classifiers to generalize. Traditional approaches require a large labeled dataset from the new condition, session, or subject to train intent classifiers; however, this data collection process is burdensome and time-consuming. In this paper, we propose ChatEMG, an autoregressive generative model that can generate synthetic EMG signals conditioned on prompts (i.e., a given sequence of EMG signals). ChatEMG enables us to collect only a small dataset from the new condition, session, or subject and expand it with synthetic samples conditioned on prompts from this new context. ChatEMG leverages a vast repository of previous data via generative training while still remaining context-specific via prompting. Our experiments show that these synthetic samples are classifier-agnostic and can improve intent inferral accuracy for different types of classifiers. We demonstrate that our complete approach can be integrated into a single patient session, including the use of the classifier for functional orthosis-assisted tasks. To the best of our knowledge, this is the first time an intent classifier trained partially on synthetic data has been deployed for functional control of an orthosis by a stroke survivor. Videos, source code, and additional information can be found at https://jxu.ai/chatemg.
[ { "version": "v1", "created": "Mon, 17 Jun 2024 22:04:44 GMT" }, { "version": "v2", "created": "Fri, 22 Nov 2024 23:15:26 GMT" }, { "version": "v3", "created": "Wed, 9 Apr 2025 21:49:04 GMT" } ]
2025-04-11T00:00:00
[ [ "Xu", "Jingxi", "" ], [ "Wang", "Runsheng", "" ], [ "Shang", "Siqi", "" ], [ "Chen", "Ava", "" ], [ "Winterbottom", "Lauren", "" ], [ "Hsu", "To-Liang", "" ], [ "Chen", "Wenxi", "" ], [ "Ahmed", "Khondoker", "" ], [ "La Rotta", "Pedro Leandro", "" ], [ "Zhu", "Xinyue", "" ], [ "Nilsen", "Dawn M.", "" ], [ "Stein", "Joel", "" ], [ "Ciocarlie", "Matei", "" ] ]
TITLE: ChatEMG: Synthetic Data Generation to Control a Robotic Hand Orthosis for Stroke ABSTRACT: Intent inferral on a hand orthosis for stroke patients is challenging due to the difficulty of data collection. Additionally, EMG signals exhibit significant variations across different conditions, sessions, and subjects, making it hard for classifiers to generalize. Traditional approaches require a large labeled dataset from the new condition, session, or subject to train intent classifiers; however, this data collection process is burdensome and time-consuming. In this paper, we propose ChatEMG, an autoregressive generative model that can generate synthetic EMG signals conditioned on prompts (i.e., a given sequence of EMG signals). ChatEMG enables us to collect only a small dataset from the new condition, session, or subject and expand it with synthetic samples conditioned on prompts from this new context. ChatEMG leverages a vast repository of previous data via generative training while still remaining context-specific via prompting. Our experiments show that these synthetic samples are classifier-agnostic and can improve intent inferral accuracy for different types of classifiers. We demonstrate that our complete approach can be integrated into a single patient session, including the use of the classifier for functional orthosis-assisted tasks. To the best of our knowledge, this is the first time an intent classifier trained partially on synthetic data has been deployed for functional control of an orthosis by a stroke survivor. Videos, source code, and additional information can be found at https://jxu.ai/chatemg.
2406.19388
Moayed Haji-Ali
Moayed Haji-Ali, Willi Menapace, Aliaksandr Siarohin, Guha Balakrishnan, Sergey Tulyakov, Vicente Ordonez
Taming Data and Transformers for Scalable Audio Generation
Project Webpage: https://snap-research.github.io/GenAU/
null
null
null
cs.SD cs.CL cs.CV cs.MM eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The scalability of ambient sound generators is hindered by data scarcity, insufficient caption quality, and limited scalability in model architecture. This work addresses these challenges by advancing both data and model scaling. First, we propose an efficient and scalable dataset collection pipeline tailored for ambient audio generation, resulting in AutoReCap-XL, the largest ambient audio-text dataset with over 47 million clips. To provide high-quality textual annotations, we propose AutoCap, a high-quality automatic audio captioning model. By adopting a Q-Former module and leveraging audio metadata, AutoCap substantially enhances caption quality, reaching a CIDEr score of $83.2$, a $3.2\%$ improvement over previous captioning models. Finally, we propose GenAu, a scalable transformer-based audio generation architecture that we scale up to 1.25B parameters. We demonstrate its benefits from data scaling with synthetic captions as well as model size scaling. When compared to baseline audio generators trained at similar size and data scale, GenAu obtains significant improvements of $4.7\%$ in FAD score, $11.1\%$ in IS, and $13.5\%$ in CLAP score. Our code, model checkpoints, and dataset are publicly available.
[ { "version": "v1", "created": "Thu, 27 Jun 2024 17:58:54 GMT" }, { "version": "v2", "created": "Thu, 24 Oct 2024 17:56:21 GMT" }, { "version": "v3", "created": "Thu, 10 Apr 2025 17:55:02 GMT" } ]
2025-04-11T00:00:00
[ [ "Haji-Ali", "Moayed", "" ], [ "Menapace", "Willi", "" ], [ "Siarohin", "Aliaksandr", "" ], [ "Balakrishnan", "Guha", "" ], [ "Tulyakov", "Sergey", "" ], [ "Ordonez", "Vicente", "" ] ]
TITLE: Taming Data and Transformers for Scalable Audio Generation ABSTRACT: The scalability of ambient sound generators is hindered by data scarcity, insufficient caption quality, and limited scalability in model architecture. This work addresses these challenges by advancing both data and model scaling. First, we propose an efficient and scalable dataset collection pipeline tailored for ambient audio generation, resulting in AutoReCap-XL, the largest ambient audio-text dataset with over 47 million clips. To provide high-quality textual annotations, we propose AutoCap, a high-quality automatic audio captioning model. By adopting a Q-Former module and leveraging audio metadata, AutoCap substantially enhances caption quality, reaching a CIDEr score of $83.2$, a $3.2\%$ improvement over previous captioning models. Finally, we propose GenAu, a scalable transformer-based audio generation architecture that we scale up to 1.25B parameters. We demonstrate its benefits from data scaling with synthetic captions as well as model size scaling. When compared to baseline audio generators trained at similar size and data scale, GenAu obtains significant improvements of $4.7\%$ in FAD score, $11.1\%$ in IS, and $13.5\%$ in CLAP score. Our code, model checkpoints, and dataset are publicly available.
2407.15817
Baudouin Denis de Senneville PhD
Florian Robert, Alexia Calovoulos, Laurent Facq, Fanny Decoeur, Etienne Gontier, Christophe F. Grosset, Baudouin Denis de Senneville
Enhancing Cell Instance Segmentation in Scanning Electron Microscopy Images via a Deep Contour Closing Operator
13 pages, 8 figures, 2 tables
null
10.1016/j.compbiomed.2025.109972
null
eess.IV cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurately segmenting and individualizing cells in SEM images is a highly promising technique for elucidating tissue architecture in oncology. While current AI-based methods are effective, errors persist, necessitating time-consuming manual corrections, particularly in areas where the quality of cell contours in the image is poor and requires gap filling. This study presents a novel AI-driven approach for refining cell boundary delineation to improve instance-based cell segmentation in SEM images, also reducing the necessity for residual manual correction. A CNN, COp-Net, is introduced to address gaps in cell contours, effectively filling in regions with deficient or absent information. The network takes as input cell contour probability maps with potentially inadequate or missing information and outputs corrected cell contour delineations. The lack of training data was addressed by generating low-integrity probability maps using a tailored PDE. We showcase the efficacy of our approach in augmenting cell boundary precision using both private SEM images from PDX hepatoblastoma tissues and publicly accessible image datasets. The proposed cell contour closing operator exhibits a notable improvement on the tested datasets, achieving increases of close to 50% (private data) and 10% (public data) in the proportion of accurately delineated cells compared to state-of-the-art methods. Additionally, the need for manual corrections was significantly reduced, thereby facilitating the overall digitalization process. Our results demonstrate a notable enhancement in the accuracy of cell instance segmentation, particularly in highly challenging regions where image quality compromises the integrity of cell boundaries, necessitating gap filling. Therefore, our work should ultimately facilitate the study of tumour tissue bioarchitecture in the onconanotomy field.
[ { "version": "v1", "created": "Mon, 22 Jul 2024 17:32:06 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 09:20:30 GMT" } ]
2025-04-11T00:00:00
[ [ "Robert", "Florian", "" ], [ "Calovoulos", "Alexia", "" ], [ "Facq", "Laurent", "" ], [ "Decoeur", "Fanny", "" ], [ "Gontier", "Etienne", "" ], [ "Grosset", "Christophe F.", "" ], [ "de Senneville", "Baudouin Denis", "" ] ]
TITLE: Enhancing Cell Instance Segmentation in Scanning Electron Microscopy Images via a Deep Contour Closing Operator ABSTRACT: Accurately segmenting and individualizing cells in SEM images is a highly promising technique for elucidating tissue architecture in oncology. While current AI-based methods are effective, errors persist, necessitating time-consuming manual corrections, particularly in areas where the quality of cell contours in the image is poor and requires gap filling. This study presents a novel AI-driven approach for refining cell boundary delineation to improve instance-based cell segmentation in SEM images, also reducing the necessity for residual manual correction. A CNN, COp-Net, is introduced to address gaps in cell contours, effectively filling in regions with deficient or absent information. The network takes as input cell contour probability maps with potentially inadequate or missing information and outputs corrected cell contour delineations. The lack of training data was addressed by generating low-integrity probability maps using a tailored PDE. We showcase the efficacy of our approach in augmenting cell boundary precision using both private SEM images from PDX hepatoblastoma tissues and publicly accessible image datasets. The proposed cell contour closing operator exhibits a notable improvement on the tested datasets, achieving increases of close to 50% (private data) and 10% (public data) in the proportion of accurately delineated cells compared to state-of-the-art methods. Additionally, the need for manual corrections was significantly reduced, thereby facilitating the overall digitalization process. Our results demonstrate a notable enhancement in the accuracy of cell instance segmentation, particularly in highly challenging regions where image quality compromises the integrity of cell boundaries, necessitating gap filling. Therefore, our work should ultimately facilitate the study of tumour tissue bioarchitecture in the onconanotomy field.
2409.10365
Melanie Roschewitz
M\'elanie Roschewitz, Fabio De Sousa Ribeiro, Tian Xia, Galvin Khara, Ben Glocker
Robust image representations with counterfactual contrastive learning
Code available at https://github.com/biomedia-mira/counterfactual-contrastive/
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Contrastive pretraining can substantially increase model generalisation and downstream performance. However, the quality of the learned representations is highly dependent on the data augmentation strategy applied to generate positive pairs. Positive contrastive pairs should preserve semantic meaning while discarding unwanted variations related to the data acquisition domain. Traditional contrastive pipelines attempt to simulate domain shifts through pre-defined generic image transformations. However, these do not always mimic realistic and relevant domain variations for medical imaging, such as scanner differences. To tackle this issue, we herein introduce counterfactual contrastive learning, a novel framework leveraging recent advances in causal image synthesis to create contrastive positive pairs that faithfully capture relevant domain variations. Our method, evaluated across five datasets encompassing both chest radiography and mammography data, for two established contrastive objectives (SimCLR and DINO-v2), outperforms standard contrastive learning in terms of robustness to acquisition shift. Notably, counterfactual contrastive learning achieves superior downstream performance on both in-distribution and external datasets, especially for images acquired with scanners under-represented in the training set. Further experiments show that the proposed framework extends beyond acquisition shifts, with models trained with counterfactual contrastive learning reducing subgroup disparities across biological sex.
[ { "version": "v1", "created": "Mon, 16 Sep 2024 15:11:00 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 16:19:20 GMT" } ]
2025-04-11T00:00:00
[ [ "Roschewitz", "Mélanie", "" ], [ "Ribeiro", "Fabio De Sousa", "" ], [ "Xia", "Tian", "" ], [ "Khara", "Galvin", "" ], [ "Glocker", "Ben", "" ] ]
TITLE: Robust image representations with counterfactual contrastive learning ABSTRACT: Contrastive pretraining can substantially increase model generalisation and downstream performance. However, the quality of the learned representations is highly dependent on the data augmentation strategy applied to generate positive pairs. Positive contrastive pairs should preserve semantic meaning while discarding unwanted variations related to the data acquisition domain. Traditional contrastive pipelines attempt to simulate domain shifts through pre-defined generic image transformations. However, these do not always mimic realistic and relevant domain variations for medical imaging, such as scanner differences. To tackle this issue, we herein introduce counterfactual contrastive learning, a novel framework leveraging recent advances in causal image synthesis to create contrastive positive pairs that faithfully capture relevant domain variations. Our method, evaluated across five datasets encompassing both chest radiography and mammography data, for two established contrastive objectives (SimCLR and DINO-v2), outperforms standard contrastive learning in terms of robustness to acquisition shift. Notably, counterfactual contrastive learning achieves superior downstream performance on both in-distribution and external datasets, especially for images acquired with scanners under-represented in the training set. Further experiments show that the proposed framework extends beyond acquisition shifts, with models trained with counterfactual contrastive learning reducing subgroup disparities across biological sex.
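In SimCLR-style training, the change the abstract describes amounts to swapping the second augmented view for a counterfactual of the same image. A minimal sketch of that loss, assuming a toy encoder and using additive noise as a stand-in for the causal counterfactual generator:

```python
import torch
import torch.nn.functional as F

def counterfactual_info_nce(encoder, x, x_cf, temperature=0.1):
    """InfoNCE where each image's positive is its counterfactual (same
    content, different simulated scanner/domain) instead of a generic
    augmentation; negatives are the rest of the batch."""
    z1 = F.normalize(encoder(x), dim=1)      # (B, D)
    z2 = F.normalize(encoder(x_cf), dim=1)   # (B, D)
    logits = z1 @ z2.t() / temperature       # (B, B) similarities
    labels = torch.arange(x.size(0))         # positives on the diagonal
    return F.cross_entropy(logits, labels)

encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(32 * 32, 128))
x = torch.randn(8, 1, 32, 32)             # batch of scans
x_cf = x + 0.1 * torch.randn_like(x)      # stand-in for causal counterfactuals
print(counterfactual_info_nce(encoder, x, x_cf).item())
```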
2409.19075
Jie He
Yu Fu, Jie He, Yifan Yang, Qun Liu, Deyi Xiong
Meta-RTL: Reinforcement-Based Meta-Transfer Learning for Low-Resource Commonsense Reasoning
There is a text error in Table 6
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Meta-learning has been widely used to exploit rich-resource source tasks to improve the performance of low-resource target tasks. Unfortunately, most existing meta-learning approaches treat different source tasks equally, ignoring the relatedness of source tasks to the target task in knowledge transfer. To mitigate this issue, we propose a reinforcement-based multi-source meta-transfer learning framework (Meta-RTL) for low-resource commonsense reasoning. In this framework, we present a reinforcement-based approach to dynamically estimating source task weights that measure the contribution of the corresponding tasks to the target task in the meta-transfer learning. The differences between the general loss of the meta model and task-specific losses of source-specific temporal meta models on sampled target data are fed into the policy network of the reinforcement learning module as rewards. The policy network is built upon LSTMs that capture long-term dependencies on source task weight estimation across meta-learning iterations. We evaluate the proposed Meta-RTL using both BERT and ALBERT as the backbone of the meta model on three commonsense reasoning benchmark datasets. Experimental results demonstrate that Meta-RTL substantially outperforms strong baselines and previous task selection strategies and achieves larger improvements in extremely low-resource settings.
[ { "version": "v1", "created": "Fri, 27 Sep 2024 18:22:22 GMT" }, { "version": "v2", "created": "Tue, 11 Mar 2025 09:31:15 GMT" }, { "version": "v3", "created": "Wed, 9 Apr 2025 21:49:23 GMT" } ]
2025-04-11T00:00:00
[ [ "Fu", "Yu", "" ], [ "He", "Jie", "" ], [ "Yang", "Yifan", "" ], [ "Liu", "Qun", "" ], [ "Xiong", "Deyi", "" ] ]
TITLE: Meta-RTL: Reinforcement-Based Meta-Transfer Learning for Low-Resource Commonsense Reasoning ABSTRACT: Meta-learning has been widely used to exploit rich-resource source tasks to improve the performance of low-resource target tasks. Unfortunately, most existing meta-learning approaches treat different source tasks equally, ignoring the relatedness of source tasks to the target task in knowledge transfer. To mitigate this issue, we propose a reinforcement-based multi-source meta-transfer learning framework (Meta-RTL) for low-resource commonsense reasoning. In this framework, we present a reinforcement-based approach to dynamically estimating source task weights that measure the contribution of the corresponding tasks to the target task in the meta-transfer learning. The differences between the general loss of the meta model and task-specific losses of source-specific temporal meta models on sampled target data are fed into the policy network of the reinforcement learning module as rewards. The policy network is built upon LSTMs that capture long-term dependencies on source task weight estimation across meta-learning iterations. We evaluate the proposed Meta-RTL using both BERT and ALBERT as the backbone of the meta model on three commonsense reasoning benchmark datasets. Experimental results demonstrate that Meta-RTL substantially outperforms strong baselines and previous task selection strategies and achieves larger improvements in extremely low-resource settings.
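The reward and weighting mechanism can be illustrated compactly. This is a simplified reading of the abstract, with stub losses: the reward for a source task is the gap between the meta model's general loss and that source's temporal meta-model loss on sampled target data, and an LSTM policy turns the reward history into source weights.

```python
import torch
import torch.nn as nn

class SourceWeightPolicy(nn.Module):
    """LSTM policy mapping a history of per-source rewards to a
    distribution of source-task weights (illustrative, not the paper's
    exact architecture)."""
    def __init__(self, n_sources, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_sources, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_sources)

    def forward(self, reward_history):            # (1, T, n_sources)
        out, _ = self.lstm(reward_history)
        return torch.softmax(self.head(out[:, -1]), dim=-1)

n_sources = 3
policy = SourceWeightPolicy(n_sources)
general_loss = torch.tensor([1.20])               # meta model, target batch
source_losses = torch.tensor([1.35, 1.05, 1.50])  # source-specific models
rewards = (general_loss - source_losses).view(1, 1, -1)  # positive = helpful
print(policy(rewards))   # weights for the next meta-transfer step
```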
2410.01595
Amin Karimi Monsefi
Pouyan Navard, Amin Karimi Monsefi, Mengxi Zhou, Wei-Lun Chao, Alper Yilmaz, Rajiv Ramnath
KnobGen: Controlling the Sophistication of Artwork in Sketch-Based Diffusion Models
Accepted to CVPR 2025 Workshop on CVEU
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Recent advances in diffusion models have significantly improved text-to-image (T2I) generation, but they often struggle to balance fine-grained precision with high-level control. Methods like ControlNet and T2I-Adapter excel at following sketches by seasoned artists but tend to be overly rigid, replicating unintentional flaws in sketches from novice users. Meanwhile, coarse-grained methods, such as sketch-based abstraction frameworks, offer more accessible input handling but lack the precise control needed for detailed, professional use. To address these limitations, we propose KnobGen, a dual-pathway framework that democratizes sketch-based image generation by seamlessly adapting to varying levels of sketch complexity and user skill. KnobGen uses a Coarse-Grained Controller (CGC) module for high-level semantics and a Fine-Grained Controller (FGC) module for detailed refinement. The relative strength of these two modules can be adjusted through our knob inference mechanism to align with the user's specific needs. These mechanisms ensure that KnobGen can flexibly generate images from both novice sketches and those drawn by seasoned artists. This maintains control over the final output while preserving the natural appearance of the image, as evidenced on the MultiGen-20M dataset and a newly collected sketch dataset.
[ { "version": "v1", "created": "Wed, 2 Oct 2024 14:33:12 GMT" }, { "version": "v2", "created": "Fri, 11 Oct 2024 12:47:48 GMT" }, { "version": "v3", "created": "Wed, 9 Apr 2025 22:27:10 GMT" } ]
2025-04-11T00:00:00
[ [ "Navard", "Pouyan", "" ], [ "Monsefi", "Amin Karimi", "" ], [ "Zhou", "Mengxi", "" ], [ "Chao", "Wei-Lun", "" ], [ "Yilmaz", "Alper", "" ], [ "Ramnath", "Rajiv", "" ] ]
TITLE: KnobGen: Controlling the Sophistication of Artwork in Sketch-Based Diffusion Models ABSTRACT: Recent advances in diffusion models have significantly improved text-to-image (T2I) generation, but they often struggle to balance fine-grained precision with high-level control. Methods like ControlNet and T2I-Adapter excel at following sketches by seasoned artists but tend to be overly rigid, replicating unintentional flaws in sketches from novice users. Meanwhile, coarse-grained methods, such as sketch-based abstraction frameworks, offer more accessible input handling but lack the precise control needed for detailed, professional use. To address these limitations, we propose KnobGen, a dual-pathway framework that democratizes sketch-based image generation by seamlessly adapting to varying levels of sketch complexity and user skill. KnobGen uses a Coarse-Grained Controller (CGC) module for high-level semantics and a Fine-Grained Controller (FGC) module for detailed refinement. The relative strength of these two modules can be adjusted through our knob inference mechanism to align with the user's specific needs. These mechanisms ensure that KnobGen can flexibly generate images from both novice sketches and those drawn by seasoned artists. This maintains control over the final output while preserving the natural appearance of the image, as evidenced on the MultiGen-20M dataset and a newly collected sketch dataset.
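The knob idea reduces, at its simplest, to a convex blend of the two controllers' conditioning signals. The actual knob inference mechanism is learned, so treat this as an illustrative stand-in:

```python
import torch

def knob_blend(cgc_feat, fgc_feat, knob):
    """Blend coarse-grained (CGC) and fine-grained (FGC) conditioning
    features with a scalar knob in [0, 1]: knob -> 1 follows the sketch
    strokes closely, knob -> 0 leans on high-level semantics."""
    assert 0.0 <= knob <= 1.0
    return knob * fgc_feat + (1.0 - knob) * cgc_feat

cgc = torch.randn(1, 64, 32, 32)              # semantics pathway features
fgc = torch.randn(1, 64, 32, 32)              # sketch-detail pathway features
novice_cond = knob_blend(cgc, fgc, knob=0.2)  # forgive rough novice sketches
artist_cond = knob_blend(cgc, fgc, knob=0.9)  # honor a seasoned artist's lines
```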
2410.03749
Larry Liebovitch
K. Lian (1), L. S. Liebovitch (1), M. Wild (1), H. West (1), P. T. Coleman (1), F. Chen (2), E. Kimani (2), K. Sieck (2) ((1) Columbia University, (2) Toyota Research Institute)
Machine Learning Classification of Peaceful Countries: A Comparative Analysis and Dataset Optimization
5 pages, 5 figures
2025 59th Annual Conference on Information Sciences and Systems (CISS), Baltimore, MD, USA, 2025, pp. 1-5
10.1109/CISS64860.2025.10944706
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
This paper presents a machine learning approach to classify countries as peaceful or non-peaceful using linguistic patterns extracted from global media articles. We employ vector embeddings and cosine similarity to develop a supervised classification model that effectively identifies peaceful countries. Additionally, we explore the impact of dataset size on model performance, investigating how shrinking the dataset influences classification accuracy. Our results highlight the challenges and opportunities associated with using large-scale text data for peace studies.
[ { "version": "v1", "created": "Tue, 1 Oct 2024 19:28:03 GMT" } ]
2025-04-11T00:00:00
[ [ "Lian", "K.", "" ], [ "Liebovitch", "L. S.", "" ], [ "Wild", "M.", "" ], [ "West", "H.", "" ], [ "Coleman", "P. T.", "" ], [ "Chen", "F.", "" ], [ "Kimani", "E.", "" ], [ "Sieck", "K.", "" ] ]
TITLE: Machine Learning Classification of Peaceful Countries: A Comparative Analysis and Dataset Optimization ABSTRACT: This paper presents a machine learning approach to classify countries as peaceful or non-peaceful using linguistic patterns extracted from global media articles. We employ vector embeddings and cosine similarity to develop a supervised classification model that effectively identifies peaceful countries. Additionally, we explore the impact of dataset size on model performance, investigating how shrinking the dataset influences classification accuracy. Our results highlight the challenges and opportunities associated with using large-scale text data for peace studies.
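A minimal version of the described pipeline (vector embeddings plus cosine similarity feeding a supervised classifier) looks like the sketch below. Everything here is synthetic and the feature construction is an assumption; the paper's embedding model and exact features are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
country_embs = rng.normal(size=(120, 384))   # one embedding per country,
labels = rng.integers(0, 2, size=120)        # e.g. averaged over its articles

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

idx_tr, idx_te = train_test_split(np.arange(120), test_size=0.3, random_state=0)
peace_c = country_embs[idx_tr][labels[idx_tr] == 1].mean(axis=0)
conflict_c = country_embs[idx_tr][labels[idx_tr] == 0].mean(axis=0)

# Cosine similarities to the training-set class centroids as features.
feats = np.stack([[cosine(e, peace_c), cosine(e, conflict_c)]
                  for e in country_embs])
clf = LogisticRegression().fit(feats[idx_tr], labels[idx_tr])
print("held-out accuracy:", clf.score(feats[idx_te], labels[idx_te]))
```

Shrinking `idx_tr` is also the natural handle for the paper's dataset-size experiments.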
2410.08206
Idil Esen Zulfikar
Ilya Fradlin, Idil Esen Zulfikar, Kadir Yilmaz, Theodora Kontogianni, Bastian Leibe
Interactive4D: Interactive 4D LiDAR Segmentation
Accepted to ICRA2025!
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Interactive segmentation has an important role in facilitating the annotation process of future LiDAR datasets. Existing approaches sequentially segment individual objects at each LiDAR scan, repeating the process throughout the entire sequence, which is redundant and ineffective. In this work, we propose interactive 4D segmentation, a new paradigm that allows segmenting multiple objects on multiple LiDAR scans simultaneously, and Interactive4D, the first interactive 4D segmentation model that segments multiple objects on superimposed consecutive LiDAR scans in a single iteration by utilizing the sequential nature of LiDAR data. While performing interactive segmentation, our model leverages the entire space-time volume, leading to more efficient segmentation. Operating on the 4D volume, it directly provides consistent instance IDs over time and also simplifies tracking annotations. Moreover, we show that click simulations are crucial for successful model training on LiDAR point clouds. To this end, we design a click simulation strategy that is better suited for the characteristics of LiDAR data. To demonstrate its accuracy and effectiveness, we evaluate Interactive4D on multiple LiDAR datasets, where Interactive4D achieves a new state-of-the-art by a large margin. We publicly release the code and models at https://vision.rwth-aachen.de/Interactive4D.
[ { "version": "v1", "created": "Thu, 10 Oct 2024 17:59:45 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 17:59:53 GMT" } ]
2025-04-11T00:00:00
[ [ "Fradlin", "Ilya", "" ], [ "Zulfikar", "Idil Esen", "" ], [ "Yilmaz", "Kadir", "" ], [ "Kontogianni", "Theodora", "" ], [ "Leibe", "Bastian", "" ] ]
TITLE: Interactive4D: Interactive 4D LiDAR Segmentation ABSTRACT: Interactive segmentation has an important role in facilitating the annotation process of future LiDAR datasets. Existing approaches sequentially segment individual objects at each LiDAR scan, repeating the process throughout the entire sequence, which is redundant and ineffective. In this work, we propose interactive 4D segmentation, a new paradigm that allows segmenting multiple objects on multiple LiDAR scans simultaneously, and Interactive4D, the first interactive 4D segmentation model that segments multiple objects on superimposed consecutive LiDAR scans in a single iteration by utilizing the sequential nature of LiDAR data. While performing interactive segmentation, our model leverages the entire space-time volume, leading to more efficient segmentation. Operating on the 4D volume, it directly provides consistent instance IDs over time and also simplifies tracking annotations. Moreover, we show that click simulations are crucial for successful model training on LiDAR point clouds. To this end, we design a click simulation strategy that is better suited for the characteristics of LiDAR data. To demonstrate its accuracy and effectiveness, we evaluate Interactive4D on multiple LiDAR datasets, where Interactive4D achieves a new state-of-the-art by a large margin. We publicly release the code and models at https://vision.rwth-aachen.de/Interactive4D.
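The "superimposed consecutive LiDAR scans" that the model segments can be built as below: registered points stacked with a time coordinate, so one interactive pass labels objects consistently across scans. A minimal sketch with identity poses as placeholders:

```python
import numpy as np

def build_4d_volume(scans, poses):
    """Superimpose T consecutive LiDAR scans into one (N, 4) point set
    of [x, y, z, t].
    scans: list of (N_t, 3) arrays in sensor frame
    poses: list of (4, 4) sensor-to-world transforms"""
    points4d = []
    for t, (pts, pose) in enumerate(zip(scans, poses)):
        homog = np.hstack([pts, np.ones((len(pts), 1))])   # (N_t, 4)
        world = (pose @ homog.T).T[:, :3]                  # register to world
        points4d.append(np.hstack([world, np.full((len(pts), 1), t)]))
    return np.vstack(points4d)

scans = [np.random.rand(1000, 3) for _ in range(4)]
poses = [np.eye(4) for _ in range(4)]                      # placeholder poses
print(build_4d_volume(scans, poses).shape)                 # (4000, 4)
```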
2410.13453
Ant Duru
Ant Duru, Alptekin Temizel
Adaptive Augmentation Policy Optimization with LLM Feedback
15 pages, 4 tables, 3 figures; submitted for consideration to the 2025 Medical Image Understanding and Analysis Conference (MIUA)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data augmentation is a critical component of deep learning pipelines, enhancing model generalization by increasing dataset diversity. Traditional augmentation strategies rely on manually designed transformations, stochastic sampling, or automated search-based approaches. Although automated methods improve performance, they often require extensive computational resources and are tailored to specific datasets. In this work, we propose a Large Language Model (LLM)-guided augmentation optimization strategy that refines augmentation policies based on model performance feedback. We introduce two approaches: (1) LLM-Guided Augmentation Policy Optimization, where augmentation policies are selected by an LLM prior to training and iteratively refined across multiple training cycles, and (2) Adaptive LLM-Guided Augmentation Policy Optimization, where policies adapt in real-time based on performance metrics. This in-training approach eliminates the need for full model retraining before receiving LLM feedback, thereby reducing computational costs while improving performance. Our methodology employs an LLM to dynamically select augmentation transformations based on dataset characteristics, model architecture, and prior training outcomes. Unlike traditional search-based methods, our approach leverages the contextual knowledge of LLMs, particularly in specialized domains like medical imaging, to recommend augmentation strategies tailored to domain-specific data. We evaluate our approach on multiple domain-specific image classification datasets where augmentation is key to model robustness. Results show that LLM-guided augmentation optimization outperforms traditional methods, improving model accuracy. These findings highlight the potential of LLMs in automating and adapting deep learning training workflows.
[ { "version": "v1", "created": "Thu, 17 Oct 2024 11:26:10 GMT" }, { "version": "v2", "created": "Tue, 8 Apr 2025 11:05:01 GMT" }, { "version": "v3", "created": "Wed, 9 Apr 2025 18:00:00 GMT" } ]
2025-04-11T00:00:00
[ [ "Duru", "Ant", "" ], [ "Temizel", "Alptekin", "" ] ]
TITLE: Adaptive Augmentation Policy Optimization with LLM Feedback ABSTRACT: Data augmentation is a critical component of deep learning pipelines, enhancing model generalization by increasing dataset diversity. Traditional augmentation strategies rely on manually designed transformations, stochastic sampling, or automated search-based approaches. Although automated methods improve performance, they often require extensive computational resources and are tailored to specific datasets. In this work, we propose a Large Language Model (LLM)-guided augmentation optimization strategy that refines augmentation policies based on model performance feedback. We introduce two approaches: (1) LLM-Guided Augmentation Policy Optimization, where augmentation policies are selected by an LLM prior to training and iteratively refined across multiple training cycles, and (2) Adaptive LLM-Guided Augmentation Policy Optimization, where policies adapt in real-time based on performance metrics. This in-training approach eliminates the need for full model retraining before receiving LLM feedback, thereby reducing computational costs while improving performance. Our methodology employs an LLM to dynamically select augmentation transformations based on dataset characteristics, model architecture, and prior training outcomes. Unlike traditional search-based methods, our approach leverages the contextual knowledge of LLMs, particularly in specialized domains like medical imaging, to recommend augmentation strategies tailored to domain-specific data. We evaluate our approach on multiple domain-specific image classification datasets where augmentation is key to model robustness. Results show that LLM-guided augmentation optimization outperforms traditional methods, improving model accuracy. These findings highlight the potential of LLMs in automating and adapting deep learning training workflows.
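The feedback loop described in the abstract can be sketched as follows. The LLM call is stubbed and the prompt wording is an assumption; the point is the shape of the loop: describe dataset, model, and past trials, ask for a policy, train, and feed the metric back.

```python
import json

def ask_llm_for_policy(llm, dataset_desc, model_desc, history):
    """Query an LLM for the next augmentation policy given dataset and
    model descriptions plus past (policy, accuracy) trials. `llm` is any
    chat-completion callable returning JSON text."""
    prompt = (
        f"Dataset: {dataset_desc}\nModel: {model_desc}\n"
        f"Past trials: {json.dumps(history)}\n"
        "Propose a JSON list of torchvision-style transforms with "
        "parameters expected to improve validation accuracy."
    )
    return json.loads(llm(prompt))

llm = lambda p: '[{"name": "RandomRotation", "degrees": 15}]'  # stub reply
history = []
for cycle in range(3):
    policy = ask_llm_for_policy(llm, "chest X-rays, 5 classes",
                                "ResNet-18", history)
    val_acc = 0.80 + 0.01 * cycle      # stand-in for train-and-evaluate
    history.append({"policy": policy, "val_acc": val_acc})
print(history[-1])
```

In the adaptive variant, the same query would be issued mid-training on intermediate metrics rather than after full training runs.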
2410.22318
Can Chen
Can Chen, Jun-Kun Wang
Online Detecting LLM-Generated Texts via Sequential Hypothesis Testing by Betting
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Developing algorithms to differentiate between machine-generated texts and human-written texts has garnered substantial attention in recent years. Existing methods in this direction typically concern an offline setting where a dataset containing a mix of real and machine-generated texts is given upfront, and the task is to determine whether each sample in the dataset is from a large language model (LLM) or a human. However, in many practical scenarios, sources such as news websites, social media accounts, or other forums publish content in a streaming fashion. Therefore, in this online scenario, how to quickly and accurately determine whether the source is an LLM with strong statistical guarantees is crucial for these media or platforms to function effectively and prevent the spread of misinformation and other potential misuse of LLMs. To tackle the problem of online detection, we develop an algorithm based on the techniques of sequential hypothesis testing by betting that not only builds upon and complements existing offline detection techniques but also enjoys statistical guarantees, which include a controlled false positive rate and the expected time to correctly identify a source as an LLM. Experiments were conducted to demonstrate the effectiveness of our method.
[ { "version": "v1", "created": "Tue, 29 Oct 2024 17:55:14 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 00:51:20 GMT" } ]
2025-04-11T00:00:00
[ [ "Chen", "Can", "" ], [ "Wang", "Jun-Kun", "" ] ]
TITLE: Online Detecting LLM-Generated Texts via Sequential Hypothesis Testing by Betting ABSTRACT: Developing algorithms to differentiate between machine-generated texts and human-written texts has garnered substantial attention in recent years. Existing methods in this direction typically concern an offline setting where a dataset containing a mix of real and machine-generated texts is given upfront, and the task is to determine whether each sample in the dataset is from a large language model (LLM) or a human. However, in many practical scenarios, sources such as news websites, social media accounts, or other forums publish content in a streaming fashion. Therefore, in this online scenario, how to quickly and accurately determine whether the source is an LLM with strong statistical guarantees is crucial for these media or platforms to function effectively and prevent the spread of misinformation and other potential misuse of LLMs. To tackle the problem of online detection, we develop an algorithm based on the techniques of sequential hypothesis testing by betting that not only builds upon and complements existing offline detection techniques but also enjoys statistical guarantees, which include a controlled false positive rate and the expected time to correctly identify a source as an LLM. Experiments were conducted to demonstrate the effectiveness of our method.
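The flavor of a test-by-betting procedure can be shown in a few lines. The sketch assumes a stream of per-text scores from any offline detector whose mean is at most mu0 when the source is human; the wealth process is then a nonnegative supermartingale under that null, and Ville's inequality makes rejection at wealth >= 1/alpha control the false positive rate at alpha. The fixed bet fraction is a simplification of the paper's betting strategy.

```python
import numpy as np

def sequential_llm_test(scores, mu0=0.5, alpha=0.05, lam=0.5):
    """Declare the source an LLM once the betting wealth crosses 1/alpha;
    return the stopping time, or None if the stream ends first."""
    wealth = 1.0
    for t, s in enumerate(scores, start=1):
        wealth *= 1.0 + lam * (s - mu0)   # bet that scores exceed mu0
        if wealth >= 1.0 / alpha:
            return t
    return None

rng = np.random.default_rng(0)
llm_stream = rng.uniform(0.4, 1.0, size=200)  # detector leans "machine"
print(sequential_llm_test(llm_stream))        # small stopping time
```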
2410.23780
Maixuan Xue
Xinyuan Chang, Maixuan Xue, Xinran Liu, Zheng Pan, Xing Wei
Driving by the Rules: A Benchmark for Integrating Traffic Sign Regulations into Vectorized HD Map
26 pages, 16 figures. Accepted as a Highlight at CVPR 2025. Project page: https://miv-xjtu.github.io/MapDR/
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Ensuring adherence to traffic sign regulations is essential for both human and autonomous vehicle navigation. Current online mapping solutions often prioritize the construction of the geometric and connectivity layers of HD maps while overlooking the traffic regulation layer within HD maps. Addressing this gap, we introduce MapDR, a novel dataset designed for the extraction of Driving Rules from traffic signs and their association with vectorized, locally perceived HD Maps. MapDR features over $10,000$ annotated video clips that capture the intricate correlation between traffic sign regulations and lanes. Built upon this benchmark and the newly defined task of integrating traffic regulations into online HD maps, we provide modular and end-to-end solutions: VLE-MEE and RuleVLM, offering a strong baseline for advancing autonomous driving technology. This work fills a critical gap in the integration of traffic sign rules, contributing to the development of reliable autonomous driving systems. Code is available at https://github.com/MIV-XJTU/MapDR.
[ { "version": "v1", "created": "Thu, 31 Oct 2024 09:53:21 GMT" }, { "version": "v2", "created": "Mon, 6 Jan 2025 12:07:55 GMT" }, { "version": "v3", "created": "Thu, 10 Apr 2025 11:13:00 GMT" } ]
2025-04-11T00:00:00
[ [ "Chang", "Xinyuan", "" ], [ "Xue", "Maixuan", "" ], [ "Liu", "Xinran", "" ], [ "Pan", "Zheng", "" ], [ "Wei", "Xing", "" ] ]
TITLE: Driving by the Rules: A Benchmark for Integrating Traffic Sign Regulations into Vectorized HD Map ABSTRACT: Ensuring adherence to traffic sign regulations is essential for both human and autonomous vehicle navigation. Current online mapping solutions often prioritize the construction of the geometric and connectivity layers of HD maps while overlooking the traffic regulation layer within HD maps. Addressing this gap, we introduce MapDR, a novel dataset designed for the extraction of Driving Rules from traffic signs and their association with vectorized, locally perceived HD Maps. MapDR features over $10,000$ annotated video clips that capture the intricate correlation between traffic sign regulations and lanes. Built upon this benchmark and the newly defined task of integrating traffic regulations into online HD maps, we provide modular and end-to-end solutions: VLE-MEE and RuleVLM, offering a strong baseline for advancing autonomous driving technology. This work fills a critical gap in the integration of traffic sign rules, contributing to the development of reliable autonomous driving systems. Code is available at https://github.com/MIV-XJTU/MapDR.
2411.08033
Yushi Lan
Yushi Lan, Shangchen Zhou, Zhaoyang Lyu, Fangzhou Hong, Shuai Yang, Bo Dai, Xingang Pan, Chen Change Loy
GaussianAnything: Interactive Point Cloud Flow Matching For 3D Object Generation
ICLR 2025 project page: https://nirvanalan.github.io/projects/GA/
null
null
null
cs.CV cs.AI cs.GR
http://creativecommons.org/licenses/by-nc-nd/4.0/
While 3D content generation has advanced significantly, existing methods still face challenges with input formats, latent space design, and output representations. This paper introduces a novel 3D generation framework that addresses these challenges, offering scalable, high-quality 3D generation with an interactive Point Cloud-structured Latent space. Our framework employs a Variational Autoencoder (VAE) with multi-view posed RGB-D(epth)-N(ormal) renderings as input, using a unique latent space design that preserves 3D shape information, and incorporates a cascaded latent flow-based model for improved shape-texture disentanglement. The proposed method, GaussianAnything, supports multi-modal conditional 3D generation, allowing for point cloud, caption, and single image inputs. Notably, the newly proposed latent space naturally enables geometry-texture disentanglement, thus allowing 3D-aware editing. Experimental results demonstrate the effectiveness of our approach on multiple datasets, outperforming existing native 3D methods in both text- and image-conditioned 3D generation.
[ { "version": "v1", "created": "Tue, 12 Nov 2024 18:59:32 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 12:24:52 GMT" } ]
2025-04-11T00:00:00
[ [ "Lan", "Yushi", "" ], [ "Zhou", "Shangchen", "" ], [ "Lyu", "Zhaoyang", "" ], [ "Hong", "Fangzhou", "" ], [ "Yang", "Shuai", "" ], [ "Dai", "Bo", "" ], [ "Pan", "Xingang", "" ], [ "Loy", "Chen Change", "" ] ]
TITLE: GaussianAnything: Interactive Point Cloud Flow Matching For 3D Object Generation ABSTRACT: While 3D content generation has advanced significantly, existing methods still face challenges with input formats, latent space design, and output representations. This paper introduces a novel 3D generation framework that addresses these challenges, offering scalable, high-quality 3D generation with an interactive Point Cloud-structured Latent space. Our framework employs a Variational Autoencoder (VAE) with multi-view posed RGB-D(epth)-N(ormal) renderings as input, using a unique latent space design that preserves 3D shape information, and incorporates a cascaded latent flow-based model for improved shape-texture disentanglement. The proposed method, GaussianAnything, supports multi-modal conditional 3D generation, allowing for point cloud, caption, and single image inputs. Notably, the newly proposed latent space naturally enables geometry-texture disentanglement, thus allowing 3D-aware editing. Experimental results demonstrate the effectiveness of our approach on multiple datasets, outperforming existing native 3D methods in both text- and image-conditioned 3D generation.
2411.08753
Sagnik Majumder
Sagnik Majumder, Tushar Nagarajan, Ziad Al-Halah, Reina Pradhan, Kristen Grauman
Which Viewpoint Shows it Best? Language for Weakly Supervising View Selection in Multi-view Instructional Videos
Accepted to CVPR 2025 (Highlight)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a multi-view video, which viewpoint is most informative for a human observer? Existing methods rely on heuristics or expensive "best-view" supervision to answer this question, limiting their applicability. We propose a weakly supervised approach that leverages language accompanying an instructional multi-view video as a means to recover its most informative viewpoint(s). Our key hypothesis is that the more accurately an individual view can predict a view-agnostic text summary, the more informative it is. To put this into action, we propose LangView, a framework that uses the relative accuracy of view-dependent caption predictions as a proxy for best-view pseudo-labels. Then, those pseudo-labels are used to train a view selector, together with an auxiliary camera pose predictor that enhances view-sensitivity. During inference, our model takes as input only a multi-view video (no language or camera poses) and returns the best viewpoint to watch at each timestep. On two challenging datasets comprising diverse multi-camera setups and how-to activities, our model consistently outperforms state-of-the-art baselines, both with quantitative metrics and human evaluation. Project page: https://vision.cs.utexas.edu/projects/which-view-shows-it-best.
[ { "version": "v1", "created": "Wed, 13 Nov 2024 16:31:08 GMT" }, { "version": "v2", "created": "Thu, 26 Dec 2024 15:49:20 GMT" }, { "version": "v3", "created": "Fri, 4 Apr 2025 20:45:11 GMT" }, { "version": "v4", "created": "Thu, 10 Apr 2025 02:02:49 GMT" } ]
2025-04-11T00:00:00
[ [ "Majumder", "Sagnik", "" ], [ "Nagarajan", "Tushar", "" ], [ "Al-Halah", "Ziad", "" ], [ "Pradhan", "Reina", "" ], [ "Grauman", "Kristen", "" ] ]
TITLE: Which Viewpoint Shows it Best? Language for Weakly Supervising View Selection in Multi-view Instructional Videos ABSTRACT: Given a multi-view video, which viewpoint is most informative for a human observer? Existing methods rely on heuristics or expensive "best-view" supervision to answer this question, limiting their applicability. We propose a weakly supervised approach that leverages language accompanying an instructional multi-view video as a means to recover its most informative viewpoint(s). Our key hypothesis is that the more accurately an individual view can predict a view-agnostic text summary, the more informative it is. To put this into action, we propose LangView, a framework that uses the relative accuracy of view-dependent caption predictions as a proxy for best-view pseudo-labels. Then, those pseudo-labels are used to train a view selector, together with an auxiliary camera pose predictor that enhances view-sensitivity. During inference, our model takes as input only a multi-view video (no language or camera poses) and returns the best viewpoint to watch at each timestep. On two challenging datasets comprising diverse multi-camera setups and how-to activities, our model consistently outperforms state-of-the-art baselines, both with quantitative metrics and human evaluation. Project page: https://vision.cs.utexas.edu/projects/which-view-shows-it-best.
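The pseudo-labeling step reduces to an argmax over per-view caption accuracy. A tiny sketch of that proxy, with made-up scores:

```python
import torch

def best_view_pseudo_labels(caption_scores):
    """caption_scores: (T, V), where entry (t, v) measures how well a
    captioner conditioned on view v alone predicts the view-agnostic
    text summary at timestep t; the most predictive view becomes the
    pseudo-label for training the view selector."""
    return caption_scores.argmax(dim=1)

scores = torch.tensor([[0.2, 0.7, 0.1],
                       [0.5, 0.4, 0.6]])    # 2 timesteps, 3 cameras
print(best_view_pseudo_labels(scores))      # tensor([1, 2])
```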
2411.10739
Jiangang Chen
Jiangang Chen, Yung-Hong Sun, Kristen Pickett, Barbara King, Yu Hen Hu, Hongrui Jiang
A Wearable Gait Monitoring System for 17 Gait Parameters Based on Computer Vision
13 pages, 14 figures. This paper was submitted for publication to the IEEE Transactions on Instrumentation and Measurement
null
10.1109/TIM.2025.3557814
null
eess.SY cs.CV cs.SY eess.SP
http://creativecommons.org/licenses/by/4.0/
We developed a shoe-mounted gait monitoring system capable of tracking up to 17 gait parameters, including gait length, step time, and stride velocity. The system employs a stereo camera mounted on one shoe to track a marker placed on the opposite shoe, enabling the estimation of spatial gait parameters. Additionally, a Force Sensitive Resistor (FSR) affixed to the heel of the shoe, combined with a custom-designed algorithm, is utilized to measure temporal gait parameters. Through testing on multiple participants and comparison with a gait mat, the proposed gait monitoring system exhibited notable performance, with the accuracy of all measured gait parameters exceeding 93.61%. The system also demonstrated a low drift of 4.89% during long-distance walking. A gait identification task conducted on participants using a trained Transformer model achieved 95.7% accuracy on the dataset collected by the proposed system, demonstrating that our hardware has the potential to collect long-sequence gait data suitable for integration with current Large Language Models (LLMs). The system is cost-effective, user-friendly, and well-suited for real-life measurements.
[ { "version": "v1", "created": "Sat, 16 Nov 2024 08:25:22 GMT" } ]
2025-04-11T00:00:00
[ [ "Chen", "Jiangang", "" ], [ "Sun", "Yung-Hong", "" ], [ "Pickett", "Kristen", "" ], [ "King", "Barbara", "" ], [ "Hu", "Yu Hen", "" ], [ "Jiang", "Hongrui", "" ] ]
TITLE: A Wearable Gait Monitoring System for 17 Gait Parameters Based on Computer Vision ABSTRACT: We developed a shoe-mounted gait monitoring system capable of tracking up to 17 gait parameters, including gait length, step time, and stride velocity. The system employs a stereo camera mounted on one shoe to track a marker placed on the opposite shoe, enabling the estimation of spatial gait parameters. Additionally, a Force Sensitive Resistor (FSR) affixed to the heel of the shoe, combined with a custom-designed algorithm, is utilized to measure temporal gait parameters. Through testing on multiple participants and comparison with a gait mat, the proposed gait monitoring system exhibited notable performance, with the accuracy of all measured gait parameters exceeding 93.61%. The system also demonstrated a low drift of 4.89% during long-distance walking. A gait identification task conducted on participants using a trained Transformer model achieved 95.7% accuracy on the dataset collected by the proposed system, demonstrating that our hardware has the potential to collect long-sequence gait data suitable for integration with current Large Language Models (LLMs). The system is cost-effective, user-friendly, and well-suited for real-life measurements.
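The temporal side of such a system is plain arithmetic on FSR event timestamps. A sketch for one foot, assuming heel-strike and toe-off times have already been extracted by thresholding the FSR signal:

```python
import numpy as np

def temporal_gait_parameters(heel_strikes, toe_offs):
    """Derive basic temporal parameters (seconds) for one foot."""
    hs = np.asarray(heel_strikes)
    to = np.asarray(toe_offs)
    stride_times = np.diff(hs)                   # same-foot strike to strike
    stance_times = to - hs[: len(to)]            # heel strike -> toe off
    swing_times = stride_times - stance_times[: len(stride_times)]
    cadence = 2 * 60.0 / stride_times.mean()     # steps/min, both feet
    return dict(stride_time=stride_times.mean(),
                stance_time=stance_times.mean(),
                swing_time=swing_times.mean(),
                cadence=cadence)

hs = [0.00, 1.10, 2.18, 3.30]                    # heel strikes (s)
to = [0.68, 1.80, 2.90]                          # toe offs (s)
print(temporal_gait_parameters(hs, to))
```

Spatial parameters (gait length, stride velocity) would come from the stereo-tracked marker positions at these same event times.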
2411.11260
Cameron Morin
Cameron Morin and Matti Marttinen Larsson
Large corpora and large language models: a replicable method for automating grammatical annotation
null
Linguistics Vanguard, 1-10 (2025)
10.1515/lingvan-2024-0228
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Much linguistic research relies on annotated datasets of features extracted from text corpora, but the rapid quantitative growth of these corpora has created practical difficulties for linguists to manually annotate large data samples. In this paper, we present a replicable, supervised method that leverages large language models for assisting the linguist in grammatical annotation through prompt engineering, training, and evaluation. We introduce a methodological pipeline applied to the case study of formal variation in the English evaluative verb construction 'consider X (as) (to be) Y', based on the large language model Claude 3.5 Sonnet and corpus data from Davies' NOW and EnTenTen21 (SketchEngine). Overall, we reach a model accuracy of over 90% on our held-out test samples with only a small amount of training data, validating the method for the annotation of very large quantities of tokens of the construction in the future. We discuss the generalisability of our results for a wider range of case studies of grammatical constructions and grammatical variation and change, underlining the value of AI copilots as tools for future linguistic research, notwithstanding some important caveats.
[ { "version": "v1", "created": "Mon, 18 Nov 2024 03:29:48 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 07:24:50 GMT" } ]
2025-04-11T00:00:00
[ [ "Morin", "Cameron", "" ], [ "Larsson", "Matti Marttinen", "" ] ]
TITLE: Large corpora and large language models: a replicable method for automating grammatical annotation ABSTRACT: Much linguistic research relies on annotated datasets of features extracted from text corpora, but the rapid quantitative growth of these corpora has created practical difficulties for linguists to manually annotate large data samples. In this paper, we present a replicable, supervised method that leverages large language models for assisting the linguist in grammatical annotation through prompt engineering, training, and evaluation. We introduce a methodological pipeline applied to the case study of formal variation in the English evaluative verb construction 'consider X (as) (to be) Y', based on the large language model Claude 3.5 Sonnet and corpus data from Davies' NOW and EnTenTen21 (SketchEngine). Overall, we reach a model accuracy of over 90% on our held-out test samples with only a small amount of training data, validating the method for the annotation of very large quantities of tokens of the construction in the future. We discuss the generalisability of our results for a wider range of case studies of grammatical constructions and grammatical variation and change, underlining the value of AI copilots as tools for future linguistic research, notwithstanding some important caveats.
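The supervised loop the paper describes (prompt, annotate, evaluate against a gold sample) can be sketched as below. The prompt wording, label set, and stubbed API call are illustrative assumptions, not the paper's exact materials.

```python
def annotate_consider_token(llm, sentence):
    """Ask an LLM to label one corpus hit of 'consider X (as) (to be) Y'
    with its formal variant; `llm` is any text-in, text-out callable."""
    prompt = (
        "You are annotating the English construction "
        "'consider X (as) (to be) Y'. Label the sentence below with "
        "exactly one of: BARE, AS, TO_BE.\nSentence: " + sentence
    )
    return llm(prompt).strip()

llm = lambda p: "BARE"   # stub standing in for a Claude 3.5 Sonnet call

gold = [("We consider this a success.", "BARE"),
        ("They consider him as a friend.", "AS"),
        ("I consider it to be unfair.", "TO_BE")]
preds = [annotate_consider_token(llm, s) for s, _ in gold]
accuracy = sum(p == y for p, (_, y) in zip(preds, gold)) / len(gold)
print(f"accuracy on gold sample: {accuracy:.2f}")   # gate before scaling up
```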
2411.15139
Bencheng Liao
Bencheng Liao, Shaoyu Chen, Haoran Yin, Bo Jiang, Cheng Wang, Sixu Yan, Xinbang Zhang, Xiangyu Li, Ying Zhang, Qian Zhang, Xinggang Wang
DiffusionDrive: Truncated Diffusion Model for End-to-End Autonomous Driving
Accepted to CVPR 2025 as Highlight. Code & demo & model are available at https://github.com/hustvl/DiffusionDrive
null
null
null
cs.CV cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Recently, the diffusion model has emerged as a powerful generative technique for robotic policy learning, capable of modeling multi-mode action distributions. Leveraging its capability for end-to-end autonomous driving is a promising direction. However, the numerous denoising steps in the robotic diffusion policy and the more dynamic, open-world nature of traffic scenes pose substantial challenges for generating diverse driving actions at a real-time speed. To address these challenges, we propose a novel truncated diffusion policy that incorporates prior multi-mode anchors and truncates the diffusion schedule, enabling the model to learn denoising from an anchored Gaussian distribution to the multi-mode driving action distribution. Additionally, we design an efficient cascade diffusion decoder for enhanced interaction with conditional scene context. The proposed model, DiffusionDrive, demonstrates a 10$\times$ reduction in denoising steps compared to the vanilla diffusion policy, delivering superior diversity and quality in just 2 steps. On the planning-oriented NAVSIM dataset, with the aligned ResNet-34 backbone, DiffusionDrive achieves 88.1 PDMS without bells and whistles, setting a new record, while running at a real-time speed of 45 FPS on an NVIDIA 4090. Qualitative results on challenging scenarios further confirm that DiffusionDrive can robustly generate diverse plausible driving actions. Code and model will be available at https://github.com/hustvl/DiffusionDrive.
[ { "version": "v1", "created": "Fri, 22 Nov 2024 18:59:47 GMT" }, { "version": "v2", "created": "Mon, 24 Mar 2025 03:02:15 GMT" }, { "version": "v3", "created": "Thu, 10 Apr 2025 08:48:37 GMT" } ]
2025-04-11T00:00:00
[ [ "Liao", "Bencheng", "" ], [ "Chen", "Shaoyu", "" ], [ "Yin", "Haoran", "" ], [ "Jiang", "Bo", "" ], [ "Wang", "Cheng", "" ], [ "Yan", "Sixu", "" ], [ "Zhang", "Xinbang", "" ], [ "Li", "Xiangyu", "" ], [ "Zhang", "Ying", "" ], [ "Zhang", "Qian", "" ], [ "Wang", "Xinggang", "" ] ]
TITLE: DiffusionDrive: Truncated Diffusion Model for End-to-End Autonomous Driving ABSTRACT: Recently, the diffusion model has emerged as a powerful generative technique for robotic policy learning, capable of modeling multi-mode action distributions. Leveraging its capability for end-to-end autonomous driving is a promising direction. However, the numerous denoising steps in the robotic diffusion policy and the more dynamic, open-world nature of traffic scenes pose substantial challenges for generating diverse driving actions at a real-time speed. To address these challenges, we propose a novel truncated diffusion policy that incorporates prior multi-mode anchors and truncates the diffusion schedule, enabling the model to learn denoising from an anchored Gaussian distribution to the multi-mode driving action distribution. Additionally, we design an efficient cascade diffusion decoder for enhanced interaction with conditional scene context. The proposed model, DiffusionDrive, demonstrates a 10$\times$ reduction in denoising steps compared to the vanilla diffusion policy, delivering superior diversity and quality in just 2 steps. On the planning-oriented NAVSIM dataset, with the aligned ResNet-34 backbone, DiffusionDrive achieves 88.1 PDMS without bells and whistles, setting a new record, while running at a real-time speed of 45 FPS on an NVIDIA 4090. Qualitative results on challenging scenarios further confirm that DiffusionDrive can robustly generate diverse plausible driving actions. Code and model will be available at https://github.com/hustvl/DiffusionDrive.
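The truncation idea can be conveyed with a toy sampler: start from prior anchors perturbed by a small anchored Gaussian and denoise for only a couple of steps, rather than from pure noise over many steps. The denoiser here is a placeholder, and the scheduling details are simplified away.

```python
import torch

@torch.no_grad()
def truncated_diffusion_sample(denoiser, anchors, sigma=0.5, steps=2):
    """One candidate plan per anchor: perturb anchors with anchored
    Gaussian noise, then run a short truncated denoising schedule."""
    x = anchors + sigma * torch.randn_like(anchors)
    for k in range(steps):
        x = denoiser(x, step=k)
    return x

anchors = torch.randn(20, 8, 2)                 # 20 modes, 8 waypoints, (x, y)
denoiser = lambda x, step: 0.5 * (x + anchors)  # placeholder network
print(truncated_diffusion_sample(denoiser, anchors).shape)  # diverse plans
```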
2411.17060
Mark Iskarous
Mark M. Iskarous, Zan Chaudhry, Fangjie Li, Samuel Bello, Sriramana Sankar, Ariel Slepyan, Natasha Chugh, Christopher L. Hunt, Rebecca J. Greene, Nitish V. Thakor
Invariant neuromorphic representations of tactile stimuli improve robustness of a real-time texture classification system
34 pages, 9 figures, 1 table
null
10.1002/aisy.202401078
null
cs.RO eess.SP
http://creativecommons.org/licenses/by-nc-nd/4.0/
Humans have an exquisite sense of touch which robotic and prosthetic systems aim to recreate. We developed algorithms to create neuron-like (neuromorphic) spiking representations of texture that are invariant to the scanning speed and contact force applied in the sensing process. The spiking representations are based on mimicking activity from mechanoreceptors in human skin and further processing up to the brain. The neuromorphic encoding process transforms analog sensor readings into speed and force invariant spiking representations in three sequential stages: the force invariance module (in the analog domain), the spiking activity encoding module (transforms from the analog to the spiking domain), and the speed invariance module (in the spiking domain). The algorithms were tested on a tactile texture dataset collected in 15 speed-force conditions. An offline texture classification system built on the invariant representations has higher classification accuracy, improved computational efficiency, and increased capability to identify textures explored in novel speed-force conditions. The speed invariance algorithm was adapted to a real-time human-operated texture classification system. Similarly, the invariant representations improved classification accuracy, computational efficiency, and capability to identify textures explored in novel conditions. The invariant representation is even more crucial in this context due to human imprecision, which appears to the classification system as a novel condition. These results demonstrate that invariant neuromorphic representations enable better-performing neurorobotic tactile sensing systems. Furthermore, because the neuromorphic representations are based on biological processing, this work can be used in the future as the basis for naturalistic sensory feedback for upper limb amputees.
[ { "version": "v1", "created": "Tue, 26 Nov 2024 02:57:37 GMT" } ]
2025-04-11T00:00:00
[ [ "Iskarous", "Mark M.", "" ], [ "Chaudhry", "Zan", "" ], [ "Li", "Fangjie", "" ], [ "Bello", "Samuel", "" ], [ "Sankar", "Sriramana", "" ], [ "Slepyan", "Ariel", "" ], [ "Chugh", "Natasha", "" ], [ "Hunt", "Christopher L.", "" ], [ "Greene", "Rebecca J.", "" ], [ "Thakor", "Nitish V.", "" ] ]
TITLE: Invariant neuromorphic representations of tactile stimuli improve robustness of a real-time texture classification system ABSTRACT: Humans have an exquisite sense of touch which robotic and prosthetic systems aim to recreate. We developed algorithms to create neuron-like (neuromorphic) spiking representations of texture that are invariant to the scanning speed and contact force applied in the sensing process. The spiking representations are based on mimicking activity from mechanoreceptors in human skin and further processing up to the brain. The neuromorphic encoding process transforms analog sensor readings into speed and force invariant spiking representations in three sequential stages: the force invariance module (in the analog domain), the spiking activity encoding module (transforms from the analog to the spiking domain), and the speed invariance module (in the spiking domain). The algorithms were tested on a tactile texture dataset collected in 15 speed-force conditions. An offline texture classification system built on the invariant representations has higher classification accuracy, improved computational efficiency, and increased capability to identify textures explored in novel speed-force conditions. The speed invariance algorithm was adapted to a real-time human-operated texture classification system. Similarly, the invariant representations improved classification accuracy, computational efficiency, and capability to identify textures explored in novel conditions. The invariant representation is even more crucial in this context due to human imprecision, which appears to the classification system as a novel condition. These results demonstrate that invariant neuromorphic representations enable better-performing neurorobotic tactile sensing systems. Furthermore, because the neuromorphic representations are based on biological processing, this work can be used in the future as the basis for naturalistic sensory feedback for upper limb amputees.
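The three-stage pipeline named in the abstract can be mimicked with simple stand-ins: force normalisation in the analog domain, delta-modulation spike encoding, and a time-to-distance rescaling by scanning speed. The paper's actual, biologically grounded modules differ; this only illustrates the staging.

```python
import numpy as np

def encode_invariant_spikes(signal, force, speed, fs=1000.0, delta=0.05):
    """Return spike positions along the texture (time * speed)."""
    x = signal / max(force, 1e-6)             # stage 1: force invariance
    spike_times, last = [], x[0]
    for i, v in enumerate(x):                 # stage 2: analog -> spikes
        if abs(v - last) >= delta:            # delta-modulation encoder
            spike_times.append(i / fs)
            last = v
    return np.array(spike_times) * speed      # stage 3: speed invariance

t = np.linspace(0, 1, 1000)
texture = np.sin(2 * np.pi * 20 * t)          # same texture, two scans:
slow = encode_invariant_spikes(texture, force=1.0, speed=10.0)
fast = encode_invariant_spikes(texture[::2], force=1.0, speed=20.0)
print(len(slow), len(fast))                   # similar spatial spike patterns
```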
2411.19050
Nicola Fanelli
Nicola Fanelli, Gennaro Vessio, Giovanna Castellano
I Dream My Painting: Connecting MLLMs and Diffusion Models via Prompt Generation for Text-Guided Multi-Mask Inpainting
Accepted at WACV 2025
null
10.1109/WACV61041.2025.00592
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inpainting focuses on filling missing or corrupted regions of an image to blend seamlessly with its surrounding content and style. While conditional diffusion models have proven effective for text-guided inpainting, we introduce the novel task of multi-mask inpainting, where multiple regions are simultaneously inpainted using distinct prompts. Furthermore, we design a fine-tuning procedure for multimodal LLMs, such as LLaVA, to generate multi-mask prompts automatically using corrupted images as inputs. These models can generate helpful and detailed prompt suggestions for filling the masked regions. The generated prompts are then fed to Stable Diffusion, which is fine-tuned for the multi-mask inpainting problem using rectified cross-attention, enforcing prompts onto their designated regions for filling. Experiments on digitized paintings from WikiArt and the Densely Captioned Images dataset demonstrate that our pipeline delivers creative and accurate inpainting results. Our code, data, and trained models are available at https://cilabuniba.github.io/i-dream-my-painting.
[ { "version": "v1", "created": "Thu, 28 Nov 2024 10:55:09 GMT" }, { "version": "v2", "created": "Fri, 6 Dec 2024 10:58:53 GMT" } ]
2025-04-11T00:00:00
[ [ "Fanelli", "Nicola", "" ], [ "Vessio", "Gennaro", "" ], [ "Castellano", "Giovanna", "" ] ]
TITLE: I Dream My Painting: Connecting MLLMs and Diffusion Models via Prompt Generation for Text-Guided Multi-Mask Inpainting ABSTRACT: Inpainting focuses on filling missing or corrupted regions of an image to blend seamlessly with its surrounding content and style. While conditional diffusion models have proven effective for text-guided inpainting, we introduce the novel task of multi-mask inpainting, where multiple regions are simultaneously inpainted using distinct prompts. Furthermore, we design a fine-tuning procedure for multimodal LLMs, such as LLaVA, to generate multi-mask prompts automatically using corrupted images as inputs. These models can generate helpful and detailed prompt suggestions for filling the masked regions. The generated prompts are then fed to Stable Diffusion, which is fine-tuned for the multi-mask inpainting problem using rectified cross-attention, enforcing prompts onto their designated regions for filling. Experiments on digitized paintings from WikiArt and the Densely Captioned Images dataset demonstrate that our pipeline delivers creative and accurate inpainting results. Our code, data, and trained models are available at https://cilabuniba.github.io/i-dream-my-painting.
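The "rectified cross-attention" can be pictured as an additive attention mask that confines each prompt's tokens to its own region. A minimal sketch of building such a mask (implementation details in the paper may differ):

```python
import torch

def rectified_attention_mask(region_masks, token_prompt_ids):
    """region_masks: (P, H*W) binary, one spatial mask per prompt;
    token_prompt_ids: (L,) prompt index of each text token.
    Returns an (H*W, L) bias: 0 where a pixel may attend to a token,
    -inf elsewhere; add it to the cross-attention logits."""
    allowed = region_masks[token_prompt_ids].t() > 0   # (H*W, L)
    bias = torch.full(allowed.shape, float("-inf"))
    bias[allowed] = 0.0
    return bias

P, HW = 2, 16
region_masks = torch.zeros(P, HW)
region_masks[0, :8] = 1                       # prompt 0 owns the left half
region_masks[1, 8:] = 1                       # prompt 1 owns the right half
token_prompt_ids = torch.tensor([0, 0, 0, 1, 1, 1])
print(rectified_attention_mask(region_masks, token_prompt_ids).shape)  # (16, 6)
```

Pixels outside every prompt's region would need ordinary (unmasked) attention handling in a full implementation.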
2411.19346
Mohamed Fazli Imam
Mohamed Fazli Imam, Rufael Fedaku Marew, Jameel Hassan, Mustansar Fiaz, Alham Fikri Aji, Hisham Cholakkal
CLIP meets DINO for Tuning Zero-Shot Classifier using Unlabeled Image Collections
null
null
null
null
cs.CV cs.CL cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
In the era of foundation models, CLIP has emerged as a powerful tool for aligning text & visual modalities into a common embedding space. However, the alignment objective used to train CLIP often results in subpar visual features for fine-grained tasks. In contrast, SSL-pretrained models like DINO excel at extracting rich visual features due to their specialized training paradigm. Yet, these SSL models require an additional supervised linear probing step, which relies on fully labeled data that is often expensive and difficult to obtain at scale. In this paper, we propose a label-free prompt-tuning method that leverages the rich visual features of self-supervised learning models (DINO) and the broad textual knowledge of large language models (LLMs) to substantially enhance CLIP-based image classification performance using unlabeled images. Our approach unfolds in three key steps: (1) We generate robust textual feature embeddings that more accurately represent object classes by leveraging class-specific descriptions from LLMs, enabling more effective zero-shot classification compared to CLIP's default name-specific prompts. (2) These textual embeddings are then used to produce pseudo-labels to train an alignment module that integrates the complementary strengths of LLM description-based textual embeddings & DINO's visual features. (3) Finally, we prompt-tune CLIP's vision encoder through DINO-assisted supervision using the trained alignment module. This three-step process allows us to harness the best of visual & textual foundation models, resulting in a powerful and efficient approach that surpasses state-of-the-art label-free classification methods. Notably, our framework, NoLA (No Labels Attached), achieves an average absolute gain of 3.6% over the state-of-the-art LaFTer across 11 diverse image classification datasets. Our code & models can be found at https://github.com/fazliimam/NoLA.
[ { "version": "v1", "created": "Thu, 28 Nov 2024 19:48:54 GMT" }, { "version": "v2", "created": "Fri, 7 Mar 2025 08:08:18 GMT" }, { "version": "v3", "created": "Thu, 10 Apr 2025 11:09:41 GMT" } ]
2025-04-11T00:00:00
[ [ "Imam", "Mohamed Fazli", "" ], [ "Marew", "Rufael Fedaku", "" ], [ "Hassan", "Jameel", "" ], [ "Fiaz", "Mustansar", "" ], [ "Aji", "Alham Fikri", "" ], [ "Cholakkal", "Hisham", "" ] ]
TITLE: CLIP meets DINO for Tuning Zero-Shot Classifier using Unlabeled Image Collections ABSTRACT: In the era of foundation models, CLIP has emerged as a powerful tool for aligning text & visual modalities into a common embedding space. However, the alignment objective used to train CLIP often results in subpar visual features for fine-grained tasks. In contrast, SSL-pretrained models like DINO excel at extracting rich visual features due to their specialized training paradigm. Yet, these SSL models require an additional supervised linear probing step, which relies on fully labeled data that is often expensive and difficult to obtain at scale. In this paper, we propose a label-free prompt-tuning method that leverages the rich visual features of self-supervised learning models (DINO) and the broad textual knowledge of large language models (LLMs) to substantially enhance CLIP-based image classification performance using unlabeled images. Our approach unfolds in three key steps: (1) We generate robust textual feature embeddings that more accurately represent object classes by leveraging class-specific descriptions from LLMs, enabling more effective zero-shot classification compared to CLIP's default name-specific prompts. (2) These textual embeddings are then used to produce pseudo-labels to train an alignment module that integrates the complementary strengths of LLM description-based textual embeddings & DINO's visual features. (3) Finally, we prompt-tune CLIP's vision encoder through DINO-assisted supervision using the trained alignment module. This three-step process allows us to harness the best of visual & textual foundation models, resulting in a powerful and efficient approach that surpasses state-of-the-art label-free classification methods. Notably, our framework, NoLA (No Labels Attached), achieves an average absolute gain of 3.6% over the state-of-the-art LaFTer across 11 diverse image classification datasets. Our code & models can be found at https://github.com/fazliimam/NoLA.
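Step (2) of the pipeline lends itself to a small sketch: pseudo-labels come from cosine similarity between visual features and class text embeddings averaged over several LLM descriptions. The random tensors below stand in for DINO features and CLIP-encoded descriptions; everything here is a simplified assumption about the method:

```python
import torch
import torch.nn.functional as F

def pseudo_labels(dino_feats, desc_embeds):
    """Label each unlabeled image by cosine similarity to class embeddings.

    dino_feats:  (N, d)    visual features of unlabeled images
    desc_embeds: (C, K, d) K LLM-description embeddings for each of C classes
    """
    class_embeds = F.normalize(desc_embeds.mean(dim=1), dim=-1)  # average per class
    feats = F.normalize(dino_feats, dim=-1)
    sims = feats @ class_embeds.T                                # (N, C)
    conf, labels = sims.max(dim=-1)
    return labels, conf  # confident labels can then train the alignment module

N, C, K, d = 32, 5, 8, 64
labels, conf = pseudo_labels(torch.randn(N, d), torch.randn(C, K, d))
print(labels[:5], conf[:5])
```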
2412.01721
Marc Van Droogenbroeck
Floriane Magera and Thomas Hoyoux and Olivier Barnich and Marc Van Droogenbroeck
BroadTrack: Broadcast Camera Tracking for Soccer
12 pages, 4 figures, 3 tables, 60 references
null
10.1109/wacv61041.2025.00602
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Camera calibration and localization, sometimes simply named camera calibration, enables many applications in the context of soccer broadcasting, for instance regarding the interpretation and analysis of the game, or the insertion of augmented reality graphics for storytelling or refereeing purposes. To contribute to such applications, the research community has typically focused on single-view calibration methods, leveraging the near-omnipresence of soccer field markings in wide-angle broadcast views, but leaving all temporal aspects, if considered at all, to general-purpose tracking or filtering techniques. Only a few contributions have leveraged domain-specific knowledge for this tracking task, and, as a result, the field still lacks a truly performant, off-the-shelf camera tracking system tailored for soccer broadcasting, specifically for elevated tripod-mounted cameras around the stadium. In this work, we present such a system capable of addressing the task of soccer broadcast camera tracking efficiently, robustly, and accurately, far outperforming the most precise state-of-the-art methods. By combining the available open-source soccer field detectors with carefully designed camera and tripod models, our tracking system, BroadTrack, halves the mean reprojection error rate and gains more than 15% in terms of Jaccard index for camera calibration on the SoccerNet dataset. Furthermore, as the SoccerNet dataset videos are relatively short (30 seconds), we also present qualitative results on a 20-minute broadcast clip to showcase the robustness and the soundness of our system.
[ { "version": "v1", "created": "Mon, 2 Dec 2024 17:10:52 GMT" } ]
2025-04-11T00:00:00
[ [ "Magera", "Floriane", "" ], [ "Hoyoux", "Thomas", "" ], [ "Barnich", "Olivier", "" ], [ "Van Droogenbroeck", "Marc", "" ] ]
TITLE: BroadTrack: Broadcast Camera Tracking for Soccer ABSTRACT: Camera calibration and localization, sometimes simply named camera calibration, enables many applications in the context of soccer broadcasting, for instance regarding the interpretation and analysis of the game, or the insertion of augmented reality graphics for storytelling or refereeing purposes. To contribute to such applications, the research community has typically focused on single-view calibration methods, leveraging the near-omnipresence of soccer field markings in wide-angle broadcast views, but leaving all temporal aspects, if considered at all, to general-purpose tracking or filtering techniques. Only a few contributions have leveraged domain-specific knowledge for this tracking task, and, as a result, the field still lacks a truly performant, off-the-shelf camera tracking system tailored for soccer broadcasting, specifically for elevated tripod-mounted cameras around the stadium. In this work, we present such a system capable of addressing the task of soccer broadcast camera tracking efficiently, robustly, and accurately, far outperforming the most precise state-of-the-art methods. By combining the available open-source soccer field detectors with carefully designed camera and tripod models, our tracking system, BroadTrack, halves the mean reprojection error rate and gains more than 15% in terms of Jaccard index for camera calibration on the SoccerNet dataset. Furthermore, as the SoccerNet dataset videos are relatively short (30 seconds), we also present qualitative results on a 20-minute broadcast clip to showcase the robustness and the soundness of our system.
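The mean reprojection error that BroadTrack is reported to halve is straightforward to compute under a pinhole camera model; the sketch below uses made-up camera parameters and noisy synthetic detections, not SoccerNet data:

```python
import numpy as np

def project(points_3d, K, R, t):
    # Standard pinhole model: x = K [R | t] X, then divide by depth.
    cam = R @ points_3d.T + t[:, None]          # (3, N) points in camera frame
    uv = K @ cam
    return (uv[:2] / uv[2]).T                   # (N, 2) pixel coordinates

def mean_reprojection_error(points_3d, points_2d, K, R, t):
    # Average pixel distance between projected field landmarks and their
    # detected image positions -- the quantity the abstract refers to.
    proj = project(points_3d, K, R, t)
    return np.linalg.norm(proj - points_2d, axis=1).mean()

# Toy usage with an arbitrary camera (values are illustrative, not SoccerNet's).
K = np.array([[1500.0, 0, 960], [0, 1500.0, 540], [0, 0, 1]])
R, t = np.eye(3), np.array([0.0, 0.0, 50.0])    # camera 50 m from the points
pts3d = np.random.uniform(-30, 30, size=(10, 3)) * np.array([1, 1, 0])  # flat field
pts2d = project(pts3d, K, R, t) + np.random.normal(0, 1.0, (10, 2))     # noisy detections
print(mean_reprojection_error(pts3d, pts2d, K, R, t))
```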
2412.08864
Jiankang Wang
Jiankang Wang, Jianjun Xu, Xiaorui Wang, Yuxin Wang, Mengting Xing, Shancheng Fang, Zhineng Chen, Hongtao Xie, Yongdong Zhang
A Graph-Based Synthetic Data Pipeline for Scaling High-Quality Reasoning Instructions
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Synthesizing high-quality reasoning data for continual training has been proven to be effective in enhancing the performance of Large Language Models (LLMs). However, previous synthetic approaches struggle to easily scale up data and incur high costs in the pursuit of high quality. In this paper, we propose the Graph-based Synthetic Data Pipeline (GSDP), an economical and scalable framework for high-quality reasoning data synthesis. Inspired by knowledge graphs, we extract knowledge points from seed data and construct a knowledge-point relationship graph to explore their interconnections. By exploring the implicit relationships among knowledge points, our method achieves $\times$255 data expansion. Furthermore, GSDP, led by open-source models, achieves synthesis quality comparable to GPT-4-0613 while maintaining $\times$100 lower costs. To tackle the most challenging mathematical reasoning task, we present the GSDP-MATH dataset comprising over 1.91 million pairs of math problems and answers. After fine-tuning on GSDP-MATH, GSDP-7B based on Mistral-7B achieves 37.7% accuracy on MATH and 78.4% on GSM8K, demonstrating the effectiveness of our method. The dataset and models will be released in https://github.com/Jayce1kk/GSDP.
[ { "version": "v1", "created": "Thu, 12 Dec 2024 01:52:25 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 10:47:53 GMT" } ]
2025-04-11T00:00:00
[ [ "Wang", "Jiankang", "" ], [ "Xu", "Jianjun", "" ], [ "Wang", "Xiaorui", "" ], [ "Wang", "Yuxin", "" ], [ "Xing", "Mengting", "" ], [ "Fang", "Shancheng", "" ], [ "Chen", "Zhineng", "" ], [ "Xie", "Hongtao", "" ], [ "Zhang", "Yongdong", "" ] ]
TITLE: A Graph-Based Synthetic Data Pipeline for Scaling High-Quality Reasoning Instructions ABSTRACT: Synthesizing high-quality reasoning data for continual training has been proven to be effective in enhancing the performance of Large Language Models (LLMs). However, previous synthetic approaches struggle to easily scale up data and incur high costs in the pursuit of high quality. In this paper, we propose the Graph-based Synthetic Data Pipeline (GSDP), an economical and scalable framework for high-quality reasoning data synthesis. Inspired by knowledge graphs, we extract knowledge points from seed data and construct a knowledge-point relationship graph to explore their interconnections. By exploring the implicit relationships among knowledge points, our method achieves $\times$255 data expansion. Furthermore, GSDP, led by open-source models, achieves synthesis quality comparable to GPT-4-0613 while maintaining $\times$100 lower costs. To tackle the most challenging mathematical reasoning task, we present the GSDP-MATH dataset comprising over 1.91 million pairs of math problems and answers. After fine-tuning on GSDP-MATH, GSDP-7B based on Mistral-7B achieves 37.7% accuracy on MATH and 78.4% on GSM8K, demonstrating the effectiveness of our method. The dataset and models will be released in https://github.com/Jayce1kk/GSDP.
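The knowledge-point graph idea can be sketched compactly: build a co-occurrence graph from seed problems, then mine implicit relationships as new knowledge-point combinations to prompt a synthesis model with. Interpreting "implicit relationships" as 2-hop neighbors is an illustrative guess, not the paper's definition:

```python
from itertools import combinations
from collections import defaultdict

def build_graph(seed_problems):
    # Add an edge between two knowledge points whenever a seed problem uses both.
    graph = defaultdict(set)
    for kps in seed_problems:
        for a, b in combinations(sorted(kps), 2):
            graph[a].add(b)
            graph[b].add(a)
    return graph

def candidate_combinations(graph):
    # Pairs not directly linked but sharing a neighbor (2-hop). Each yields a
    # new knowledge-point pair for synthesis -- the source of data expansion.
    seen, out = set(), []
    for a in graph:
        for mid in graph[a]:
            for b in graph[mid]:
                if b != a and b not in graph[a] and (a, b) not in seen:
                    seen.update({(a, b), (b, a)})
                    out.append((a, b))
    return out

seeds = [{"fractions", "ratios"}, {"ratios", "percent"}, {"percent", "interest"}]
print(candidate_combinations(build_graph(seeds)))
# e.g. [('fractions', 'percent'), ('ratios', 'interest')]
```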
2412.08912
Ali Mollaahmadi Dehaghi
Ali Mollaahmadi Dehaghi, Reza Razavi, Mohammad Moshirpour
Reversing the Damage: A QP-Aware Transformer-Diffusion Approach for 8K Video Restoration under Codec Compression
12 pages, 8 figures
null
10.1109/WACV61041.2025.00130
null
cs.CV cs.MM
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this paper, we introduce DiQP, a novel Transformer-Diffusion model for restoring 8K video quality degraded by codec compression. To the best of our knowledge, our model is the first to restore the artifacts introduced by various codecs (AV1, HEVC) via Denoising Diffusion without modeling additional noise. This approach allows us to model the complex, non-Gaussian nature of compression artifacts, effectively learning to reverse the degradation. Our architecture combines the power of Transformers to capture long-range dependencies with an enhanced windowed mechanism that preserves spatiotemporal context within groups of pixels across frames. To further enhance restoration, the model incorporates auxiliary "Look Ahead" and "Look Around" modules, providing both future and surrounding frame information to aid in reconstructing fine details and enhancing overall visual quality. Extensive experiments on different datasets demonstrate that our model outperforms state-of-the-art methods, particularly for high-resolution videos such as 4K and 8K, showcasing its effectiveness in restoring perceptually pleasing videos from highly compressed sources.
[ { "version": "v1", "created": "Thu, 12 Dec 2024 03:49:22 GMT" } ]
2025-04-11T00:00:00
[ [ "Dehaghi", "Ali Mollaahmadi", "" ], [ "Razavi", "Reza", "" ], [ "Moshirpour", "Mohammad", "" ] ]
TITLE: Reversing the Damage: A QP-Aware Transformer-Diffusion Approach for 8K Video Restoration under Codec Compression ABSTRACT: In this paper, we introduce DiQP, a novel Transformer-Diffusion model for restoring 8K video quality degraded by codec compression. To the best of our knowledge, our model is the first to restore the artifacts introduced by various codecs (AV1, HEVC) via Denoising Diffusion without modeling additional noise. This approach allows us to model the complex, non-Gaussian nature of compression artifacts, effectively learning to reverse the degradation. Our architecture combines the power of Transformers to capture long-range dependencies with an enhanced windowed mechanism that preserves spatiotemporal context within groups of pixels across frames. To further enhance restoration, the model incorporates auxiliary "Look Ahead" and "Look Around" modules, providing both future and surrounding frame information to aid in reconstructing fine details and enhancing overall visual quality. Extensive experiments on different datasets demonstrate that our model outperforms state-of-the-art methods, particularly for high-resolution videos such as 4K and 8K, showcasing its effectiveness in restoring perceptually pleasing videos from highly compressed sources.
2412.14602
Yuxuan Liang
Yuxuan Liang, Wentao Zhang, Zeang Sheng, Ling Yang, Quanqing Xu, Jiawei Jiang, Yunhai Tong, Bin Cui
Towards Scalable and Deep Graph Neural Networks via Noise Masking
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, Graph Neural Networks (GNNs) have achieved remarkable success in many graph mining tasks. However, scaling them to large graphs is challenging due to the high computational and storage costs of repeated feature propagation and non-linear transformation during training. One commonly employed approach to address this challenge is model-simplification, which executes Propagation (P) only once during pre-processing, then Combines (C) the resulting receptive fields in different ways and feeds them into a simple model for better performance. Despite their high predictive performance and scalability, these methods still face two limitations. First, existing approaches mainly focus on exploring different C methods from the model perspective, neglecting the crucial problem of performance degradation with increasing P depth from the data-centric perspective, known as the over-smoothing problem. Second, pre-processing overhead takes up most of the end-to-end processing time, especially for large-scale graphs. To address these limitations, we present random walk with noise masking (RMask), a plug-and-play module compatible with existing model-simplification works. This module enables the exploration of deeper GNNs while preserving their scalability. Unlike previous model-simplification works, we focus on continuous P, find that the noise inside each P step is the cause of the over-smoothing issue, and use an efficient masking mechanism to eliminate it. Experimental results on six real-world datasets demonstrate that model-simplification works equipped with RMask yield superior performance compared to their original versions and can make a good trade-off between accuracy and efficiency.
[ { "version": "v1", "created": "Thu, 19 Dec 2024 07:48:14 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 02:16:19 GMT" } ]
2025-04-11T00:00:00
[ [ "Liang", "Yuxuan", "" ], [ "Zhang", "Wentao", "" ], [ "Sheng", "Zeang", "" ], [ "Yang", "Ling", "" ], [ "Xu", "Quanqing", "" ], [ "Jiang", "Jiawei", "" ], [ "Tong", "Yunhai", "" ], [ "Cui", "Bin", "" ] ]
TITLE: Towards Scalable and Deep Graph Neural Networks via Noise Masking ABSTRACT: In recent years, Graph Neural Networks (GNNs) have achieved remarkable success in many graph mining tasks. However, scaling them to large graphs is challenging due to the high computational and storage costs of repeated feature propagation and non-linear transformation during training. One commonly employed approach to address this challenge is model-simplification, which executes Propagation (P) only once during pre-processing, then Combines (C) the resulting receptive fields in different ways and feeds them into a simple model for better performance. Despite their high predictive performance and scalability, these methods still face two limitations. First, existing approaches mainly focus on exploring different C methods from the model perspective, neglecting the crucial problem of performance degradation with increasing P depth from the data-centric perspective, known as the over-smoothing problem. Second, pre-processing overhead takes up most of the end-to-end processing time, especially for large-scale graphs. To address these limitations, we present random walk with noise masking (RMask), a plug-and-play module compatible with existing model-simplification works. This module enables the exploration of deeper GNNs while preserving their scalability. Unlike previous model-simplification works, we focus on continuous P, find that the noise inside each P step is the cause of the over-smoothing issue, and use an efficient masking mechanism to eliminate it. Experimental results on six real-world datasets demonstrate that model-simplification works equipped with RMask yield superior performance compared to their original versions and can make a good trade-off between accuracy and efficiency.
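A minimal sketch of pre-processed propagation with a per-step masking hook follows. Treating the signal shared by all nodes as the "noise" to mask out is an illustrative proxy for RMask's mechanism, not the paper's actual definition:

```python
import numpy as np

def normalized_adj(A):
    # Symmetric normalization with self-loops, as in standard GCN pre-processing.
    A = A + np.eye(len(A))
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(1)))
    return d_inv_sqrt @ A @ d_inv_sqrt

def propagate_with_masking(A_hat, X, depth, alpha=0.9):
    # One P step per iteration; after each step, mask out part of the component
    # shared by all nodes (the direction in which over-smoothing collapses
    # features). This masking rule is an illustrative stand-in.
    feats, H = [X], X
    for _ in range(depth):
        H = A_hat @ H
        H = H - alpha * H.mean(axis=0, keepdims=True)
        feats.append(H)
    return feats  # the C step can then combine these receptive fields

A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
X = np.random.randn(4, 3)
views = propagate_with_masking(normalized_adj(A), X, depth=8)
print(len(views), views[-1].std())
```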
2412.14719
Kun Li
Kun Li, Dan Guo, Guoliang Chen, Chunxiao Fan, Jingyuan Xu, Zhiliang Wu, Hehe Fan, Meng Wang
Prototypical Calibrating Ambiguous Samples for Micro-Action Recognition
Fix typos; Accepted by AAAI 2025
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Micro-Action Recognition (MAR) has gained increasing attention due to its crucial role as a form of non-verbal communication in social interactions, with promising potential for applications in human communication and emotion analysis. However, current approaches often overlook the inherent ambiguity in micro-actions, which arises from the wide category range and subtle visual differences between categories. This oversight hampers the accuracy of micro-action recognition. In this paper, we propose a novel Prototypical Calibrating Ambiguous Network (PCAN) to expose and mitigate the ambiguity of MAR. Firstly, we employ a hierarchical action-tree to identify ambiguous samples, categorizing them into distinct sets of false negatives and false positives, considering both body- and action-level categories. Secondly, we implement an ambiguous contrastive refinement module to calibrate these ambiguous samples by regulating the distance between ambiguous samples and their corresponding prototypes. This calibration process aims to pull false negative (FN) samples closer to their respective prototypes and push false positive (FP) samples apart from their affiliated prototypes. In addition, we propose a new prototypical diversity amplification loss to strengthen the model's capacity by amplifying the differences between different prototypes. Finally, we propose a prototype-guided rectification to rectify predictions by incorporating the representability of prototypes. Extensive experiments conducted on the benchmark dataset demonstrate the superior performance of our method compared to existing approaches. The code is available at https://github.com/kunli-cs/PCAN.
[ { "version": "v1", "created": "Thu, 19 Dec 2024 10:41:24 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 05:13:15 GMT" } ]
2025-04-11T00:00:00
[ [ "Li", "Kun", "" ], [ "Guo", "Dan", "" ], [ "Chen", "Guoliang", "" ], [ "Fan", "Chunxiao", "" ], [ "Xu", "Jingyuan", "" ], [ "Wu", "Zhiliang", "" ], [ "Fan", "Hehe", "" ], [ "Wang", "Meng", "" ] ]
TITLE: Prototypical Calibrating Ambiguous Samples for Micro-Action Recognition ABSTRACT: Micro-Action Recognition (MAR) has gained increasing attention due to its crucial role as a form of non-verbal communication in social interactions, with promising potential for applications in human communication and emotion analysis. However, current approaches often overlook the inherent ambiguity in micro-actions, which arises from the wide category range and subtle visual differences between categories. This oversight hampers the accuracy of micro-action recognition. In this paper, we propose a novel Prototypical Calibrating Ambiguous Network (PCAN) to expose and mitigate the ambiguity of MAR. Firstly, we employ a hierarchical action-tree to identify ambiguous samples, categorizing them into distinct sets of false negatives and false positives, considering both body- and action-level categories. Secondly, we implement an ambiguous contrastive refinement module to calibrate these ambiguous samples by regulating the distance between ambiguous samples and their corresponding prototypes. This calibration process aims to pull false negative (FN) samples closer to their respective prototypes and push false positive (FP) samples apart from their affiliated prototypes. In addition, we propose a new prototypical diversity amplification loss to strengthen the model's capacity by amplifying the differences between different prototypes. Finally, we propose a prototype-guided rectification to rectify predictions by incorporating the representability of prototypes. Extensive experiments conducted on the benchmark dataset demonstrate the superior performance of our method compared to existing approaches. The code is available at https://github.com/kunli-cs/PCAN.
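The pull/push calibration can be sketched as a simple prototype-based loss; the margin form below is an assumption, and the full PCAN objective additionally uses the action-tree, the diversity amplification loss, and prototype-guided rectification:

```python
import torch
import torch.nn.functional as F

def calibration_loss(fn_emb, fn_labels, fp_emb, fp_labels, prototypes, margin=0.5):
    """Toy ambiguous-sample calibration: pull false negatives toward their
    class prototype, push false positives at least `margin` away from the
    prototype they were wrongly attracted to."""
    protos = F.normalize(prototypes, dim=-1)
    pull = 1 - F.cosine_similarity(fn_emb, protos[fn_labels]).mean()
    push = F.relu(F.cosine_similarity(fp_emb, protos[fp_labels]) - margin).mean()
    return pull + push

# Toy usage with random embeddings standing in for micro-action features.
C, d = 10, 32
protos = torch.randn(C, d)
fn, fn_y = torch.randn(6, d), torch.randint(0, C, (6,))
fp, fp_y = torch.randn(6, d), torch.randint(0, C, (6,))
print(calibration_loss(fn, fn_y, fp, fp_y, protos))
```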
2412.17534
Ege Yiğit Çelik
Ege Yiğit Çelik and Selma Tekir
CiteBART: Learning to Generate Citations for Local Citation Recommendation
17 pages, 2 figures, 10 tables
null
null
null
cs.IR cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
Local citation recommendation (LCR) suggests a set of papers for a citation placeholder within a given context. The task has evolved as generative approaches have become more promising than the traditional pre-fetch and re-rank-based state-of-the-art approaches. This paper introduces citation-specific pre-training within an encoder-decoder architecture, where author-date citation tokens are masked and the model learns to reconstruct them to fulfill LCR. There are two variants for this pre-training. In the local context-only base scheme (CiteBART-Base), the citation token in a local context is masked, and the model learns to predict the citation. The global version (CiteBART-Global) extends the local context with the citing paper's title and abstract to enrich the learning signal. CiteBART-Global achieves state-of-the-art performance on LCR benchmarks except for the FullTextPeerRead dataset, which is too small to show the advantage of generative pre-training. The effect is significant in the larger benchmarks, e.g., Refseer and ArXiv, with the Refseer benchmark-trained model emerging as the best-performing model. We perform comprehensive experiments, including an ablation study, a qualitative analysis, and a taxonomy of hallucinations with detailed statistics. Our analyses confirm that CiteBART-Global has a cross-dataset generalization capability; the macro hallucination rate (MaHR) at the top-3 predictions is 4\%, and when the ground-truth is in the top-k prediction list, the hallucination tendency in the other predictions drops significantly.
[ { "version": "v1", "created": "Mon, 23 Dec 2024 12:58:30 GMT" }, { "version": "v2", "created": "Wed, 9 Apr 2025 20:23:16 GMT" } ]
2025-04-11T00:00:00
[ [ "Çelik", "Ege Yiğit", "" ], [ "Tekir", "Selma", "" ] ]
TITLE: CiteBART: Learning to Generate Citations for Local Citation Recommendation ABSTRACT: Local citation recommendation (LCR) suggests a set of papers for a citation placeholder within a given context. The task has evolved as generative approaches have become more promising than the traditional pre-fetch and re-rank-based state-of-the-art approaches. This paper introduces citation-specific pre-training within an encoder-decoder architecture, where author-date citation tokens are masked and the model learns to reconstruct them to fulfill LCR. There are two variants for this pre-training. In the local context-only base scheme (CiteBART-Base), the citation token in a local context is masked, and the model learns to predict the citation. The global version (CiteBART-Global) extends the local context with the citing paper's title and abstract to enrich the learning signal. CiteBART-Global achieves state-of-the-art performance on LCR benchmarks except for the FullTextPeerRead dataset, which is too small to show the advantage of generative pre-training. The effect is significant in the larger benchmarks, e.g., Refseer and ArXiv, with the Refseer benchmark-trained model emerging as the best-performing model. We perform comprehensive experiments, including an ablation study, a qualitative analysis, and a taxonomy of hallucinations with detailed statistics. Our analyses confirm that CiteBART-Global has a cross-dataset generalization capability; the macro hallucination rate (MaHR) at the top-3 predictions is 4\%, and when the ground-truth is in the top-k prediction list, the hallucination tendency in the other predictions drops significantly.
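The citation-masking pre-training objective is easy to illustrate: find author-date citations in a context, replace them with a mask token, and keep the originals as reconstruction targets. The regex and the `<mask>` token below are illustrative assumptions; BART's actual infilling token and the paper's exact citation grammar may differ:

```python
import re

# Author-date citations like "(Lewis et al., 2020)" or "(Devlin and Chang, 2019)".
CITE = re.compile(r"\([A-Z][A-Za-z-]+(?: et al\.| and [A-Z][A-Za-z-]+)?, \d{4}\)")

def make_pretraining_pair(context):
    """Return (masked input, target citations): the encoder sees the context
    with citations masked, and the decoder learns to reconstruct them."""
    targets = CITE.findall(context)
    masked = CITE.sub("<mask>", context)
    return masked, targets

ctx = "Pre-training with denoising objectives (Lewis et al., 2020) improved summarization."
print(make_pretraining_pair(ctx))
# ('Pre-training with denoising objectives <mask> improved summarization.',
#  ['(Lewis et al., 2020)'])
```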
2412.20439
Wangyu Wu
Wangyu Wu, Xianglin Qiu, Siqi Song, Zhenhong Chen, Xiaowei Huang, Fei Ma, Jimin Xiao
Image Augmentation Agent for Weakly Supervised Semantic Segmentation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Weakly-supervised semantic segmentation (WSSS) has achieved remarkable progress using only image-level labels. However, most existing WSSS methods focus on designing new network structures and loss functions to generate more accurate dense labels, overlooking the limitations imposed by fixed datasets, which can constrain performance improvements. We argue that more diverse training images provide WSSS with richer information and help the model understand more comprehensive semantic patterns. Therefore, in this paper, we introduce a novel approach called Image Augmentation Agent (IAA), which shows that it is possible to enhance WSSS from a data-generation perspective. The core of IAA is an augmentation agent that leverages large language models (LLMs) and diffusion models to automatically generate additional images for WSSS. In practice, to address the instability in prompt generation by LLMs, we develop a prompt self-refinement mechanism. It allows LLMs to re-evaluate the rationality of generated prompts to produce more coherent prompts. Additionally, we insert an online filter into the diffusion generation process to dynamically ensure the quality and balance of generated images. Experimental results show that our method significantly surpasses state-of-the-art WSSS approaches on the PASCAL VOC 2012 and MS COCO 2014 datasets.
[ { "version": "v1", "created": "Sun, 29 Dec 2024 11:32:55 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 08:36:11 GMT" } ]
2025-04-11T00:00:00
[ [ "Wu", "Wangyu", "" ], [ "Qiu", "Xianglin", "" ], [ "Song", "Siqi", "" ], [ "Chen", "Zhenhong", "" ], [ "Huang", "Xiaowei", "" ], [ "Ma", "Fei", "" ], [ "Xiao", "Jimin", "" ] ]
TITLE: Image Augmentation Agent for Weakly Supervised Semantic Segmentation ABSTRACT: Weakly-supervised semantic segmentation (WSSS) has achieved remarkable progress using only image-level labels. However, most existing WSSS methods focus on designing new network structures and loss functions to generate more accurate dense labels, overlooking the limitations imposed by fixed datasets, which can constrain performance improvements. We argue that more diverse training images provide WSSS with richer information and help the model understand more comprehensive semantic patterns. Therefore, in this paper, we introduce a novel approach called Image Augmentation Agent (IAA), which shows that it is possible to enhance WSSS from a data-generation perspective. The core of IAA is an augmentation agent that leverages large language models (LLMs) and diffusion models to automatically generate additional images for WSSS. In practice, to address the instability in prompt generation by LLMs, we develop a prompt self-refinement mechanism. It allows LLMs to re-evaluate the rationality of generated prompts to produce more coherent prompts. Additionally, we insert an online filter into the diffusion generation process to dynamically ensure the quality and balance of generated images. Experimental results show that our method significantly surpasses state-of-the-art WSSS approaches on the PASCAL VOC 2012 and MS COCO 2014 datasets.
2412.21037
Soujanya Poria
Chia-Yu Hung, Navonil Majumder, Zhifeng Kong, Ambuj Mehrish, Amir Ali Bagherzadeh, Chuan Li, Rafael Valle, Bryan Catanzaro, Soujanya Poria
TangoFlux: Super Fast and Faithful Text to Audio Generation with Flow Matching and Clap-Ranked Preference Optimization
https://tangoflux.github.io/
null
null
null
cs.SD cs.AI cs.CL eess.AS
http://creativecommons.org/licenses/by-sa/4.0/
We introduce TangoFlux, an efficient Text-to-Audio (TTA) generative model with 515M parameters, capable of generating up to 30 seconds of 44.1kHz audio in just 3.7 seconds on a single A40 GPU. A key challenge in aligning TTA models lies in the difficulty of creating preference pairs, as TTA lacks structured mechanisms like verifiable rewards or gold-standard answers available for Large Language Models (LLMs). To address this, we propose CLAP-Ranked Preference Optimization (CRPO), a novel framework that iteratively generates and optimizes preference data to enhance TTA alignment. We demonstrate that the audio preference dataset generated using CRPO outperforms existing alternatives. With this framework, TangoFlux achieves state-of-the-art performance across both objective and subjective benchmarks. We open source all code and models to support further research in TTA generation.
[ { "version": "v1", "created": "Mon, 30 Dec 2024 16:02:44 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 05:01:32 GMT" } ]
2025-04-11T00:00:00
[ [ "Hung", "Chia-Yu", "" ], [ "Majumder", "Navonil", "" ], [ "Kong", "Zhifeng", "" ], [ "Mehrish", "Ambuj", "" ], [ "Bagherzadeh", "Amir Ali", "" ], [ "Li", "Chuan", "" ], [ "Valle", "Rafael", "" ], [ "Catanzaro", "Bryan", "" ], [ "Poria", "Soujanya", "" ] ]
TITLE: TangoFlux: Super Fast and Faithful Text to Audio Generation with Flow Matching and Clap-Ranked Preference Optimization ABSTRACT: We introduce TangoFlux, an efficient Text-to-Audio (TTA) generative model with 515M parameters, capable of generating up to 30 seconds of 44.1kHz audio in just 3.7 seconds on a single A40 GPU. A key challenge in aligning TTA models lies in the difficulty of creating preference pairs, as TTA lacks structured mechanisms like verifiable rewards or gold-standard answers available for Large Language Models (LLMs). To address this, we propose CLAP-Ranked Preference Optimization (CRPO), a novel framework that iteratively generates and optimizes preference data to enhance TTA alignment. We demonstrate that the audio preference dataset generated using CRPO outperforms existing alternatives. With this framework, TangoFlux achieves state-of-the-art performance across both objective and subjective benchmarks. We open source all code and models to support further research in TTA generation.
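One CRPO iteration can be sketched as: sample several audio candidates per prompt, rank them with a CLAP text-audio similarity, and keep the extremes as a (chosen, rejected) preference pair for DPO-style optimization. The `clap_score` stub below stands in for a real pretrained CLAP model, and the generator is a placeholder:

```python
import random

def clap_score(prompt, audio):
    # Stand-in for CLAP's text-audio similarity; a real implementation would
    # embed both with a pretrained CLAP model and take cosine similarity.
    return random.Random(hash((prompt, audio))).random()

def crpo_pair(model_generate, prompt, n_candidates=4):
    """One CRPO-style step, sketched: generate candidates, rank them by CLAP
    score, and return the best/worst as a preference pair."""
    candidates = [model_generate(prompt) for _ in range(n_candidates)]
    ranked = sorted(candidates, key=lambda a: clap_score(prompt, a), reverse=True)
    return {"prompt": prompt, "chosen": ranked[0], "rejected": ranked[-1]}

# Toy usage with a fake generator producing audio identifiers.
fake_generator = lambda p: f"audio_for_{p.replace(' ', '_')}_{random.randint(0, 999)}"
print(crpo_pair(fake_generator, "rain on a tin roof"))
```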
2501.05449
MD Arafat Alam Khandaker Arafat
Md. Arafat Alam Khandaker, Ziyan Shirin Raha, Shifat Islam, Tashreef Muhammad
Explainable AI-Enhanced Deep Learning for Pumpkin Leaf Disease Detection: A Comparative Analysis of CNN Architectures
Accepted in 2024 27th International Conference on Computer and Information Technology (ICCIT)
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Pumpkin leaf diseases are significant threats to agricultural productivity, requiring a timely and precise diagnosis for effective management. Traditional identification methods are laborious and susceptible to human error, emphasizing the necessity for automated solutions. This study employs the "Pumpkin Leaf Disease Dataset", which comprises 2000 high-resolution images separated into five categories: downy mildew, powdery mildew, mosaic disease, bacterial leaf spot, and healthy leaves. The dataset was rigorously assembled from several agricultural fields to ensure a strong representation for model training. We explored several well-established deep learning architectures, including DenseNet201, DenseNet121, DenseNet169, Xception, ResNet50, ResNet101 and InceptionResNetV2, and observed that ResNet50 performed most effectively, with an accuracy of 90.5% and comparable precision, recall, and F1-Score. We used Explainable AI (XAI) approaches like Grad-CAM, Grad-CAM++, Score-CAM, and Layer-CAM to provide meaningful representations of model decision-making processes, which improved understanding and trust in automated disease diagnostics. These findings demonstrate ResNet50's potential to revolutionize pumpkin leaf disease detection, allowing for earlier and more accurate treatments.
[ { "version": "v1", "created": "Thu, 9 Jan 2025 18:59:35 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 17:35:24 GMT" } ]
2025-04-11T00:00:00
[ [ "Khandaker", "Md. Arafat Alam", "" ], [ "Raha", "Ziyan Shirin", "" ], [ "Islam", "Shifat", "" ], [ "Muhammad", "Tashreef", "" ] ]
TITLE: Explainable AI-Enhanced Deep Learning for Pumpkin Leaf Disease Detection: A Comparative Analysis of CNN Architectures ABSTRACT: Pumpkin leaf diseases are significant threats to agricultural productivity, requiring a timely and precise diagnosis for effective management. Traditional identification methods are laborious and susceptible to human error, emphasizing the necessity for automated solutions. This study employs the "Pumpkin Leaf Disease Dataset", which comprises 2000 high-resolution images separated into five categories: downy mildew, powdery mildew, mosaic disease, bacterial leaf spot, and healthy leaves. The dataset was rigorously assembled from several agricultural fields to ensure a strong representation for model training. We explored several well-established deep learning architectures, including DenseNet201, DenseNet121, DenseNet169, Xception, ResNet50, ResNet101 and InceptionResNetV2, and observed that ResNet50 performed most effectively, with an accuracy of 90.5% and comparable precision, recall, and F1-Score. We used Explainable AI (XAI) approaches like Grad-CAM, Grad-CAM++, Score-CAM, and Layer-CAM to provide meaningful representations of model decision-making processes, which improved understanding and trust in automated disease diagnostics. These findings demonstrate ResNet50's potential to revolutionize pumpkin leaf disease detection, allowing for earlier and more accurate treatments.
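Of the XAI methods listed, Grad-CAM is simple enough to sketch end-to-end with torchvision's ResNet50: weight the last convolutional block's activations by the spatially averaged gradients of the target class score. Untrained weights and a random input keep this self-contained; it is a generic Grad-CAM sketch, not the study's exact code:

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

def grad_cam(model, layer, x, class_idx=None):
    """Minimal Grad-CAM: activations of `layer`, weighted by the global
    average of the gradients of the class score, then ReLU-rectified."""
    store = {}
    handle = layer.register_forward_hook(lambda m, i, o: store.update(a=o))
    logits = model(x)
    handle.remove()
    idx = int(logits.argmax(1)) if class_idx is None else class_idx
    grads = torch.autograd.grad(logits[0, idx], store["a"])[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)          # GAP of gradients
    cam = F.relu((weights * store["a"]).sum(dim=1))         # (1, H, W) map
    return cam / cam.max().clamp(min=1e-8)

# Untrained weights for a self-contained demo; swap in pretrained weights and
# a real leaf image to reproduce the kind of maps the study inspects.
model = resnet50(weights=None).eval()
x = torch.randn(1, 3, 224, 224)
print(grad_cam(model, model.layer4, x).shape)  # torch.Size([1, 7, 7])
```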
2501.06465
Ye Chen
Ye Chen, Dongdong Huang, Haoyun Xu, Cong Fu, Lin Sheng, Qingli Zhou, Yuqiang Shen, Kai Wang
MedCT: A Clinical Terminology Graph for Generative AI Applications in Healthcare
Accepted into ICCS 2025 and published in Springer's LNCS Series
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
We introduce the world's first clinical terminology for the Chinese healthcare community, namely MedCT, accompanied by a clinical foundation model MedBERT and an entity linking model MedLink. The MedCT system enables standardized and programmable representation of Chinese clinical data, thereby stimulating the development of new medicines, treatment pathways, and better patient outcomes for the populous Chinese community. Moreover, the MedCT knowledge graph provides a principled mechanism to minimize the hallucination problem of large language models (LLMs), therefore achieving significant levels of accuracy and safety in LLM-based clinical applications. By leveraging the LLMs' emergent capabilities of generativeness and expressiveness, we were able to rapidly build a production-quality terminology system and deploy it to real-world clinical settings within three months, while classical terminologies like SNOMED CT have gone through more than twenty years of development. Our experiments show that the MedCT system achieves state-of-the-art (SOTA) performance in semantic matching and entity linking tasks, not only for Chinese but also for English. We also conducted a longitudinal field experiment by applying MedCT and LLMs in a representative spectrum of clinical tasks, including electronic health record (EHR) auto-generation and medical document search for diagnostic decision making. Our study shows a multitude of values of MedCT for clinical workflows and patient outcomes, especially in the new genre of clinical LLM applications. We present our approach in sufficient engineering detail, such that implementing a clinical terminology for other non-English societies should be readily reproducible. We openly release our terminology, models and algorithms, along with real-world clinical datasets, to support further development.
[ { "version": "v1", "created": "Sat, 11 Jan 2025 07:35:51 GMT" }, { "version": "v2", "created": "Tue, 21 Jan 2025 01:56:11 GMT" }, { "version": "v3", "created": "Thu, 10 Apr 2025 07:29:10 GMT" } ]
2025-04-11T00:00:00
[ [ "Chen", "Ye", "" ], [ "Huang", "Dongdong", "" ], [ "Xu", "Haoyun", "" ], [ "Fu", "Cong", "" ], [ "Sheng", "Lin", "" ], [ "Zhou", "Qingli", "" ], [ "Shen", "Yuqiang", "" ], [ "Wang", "Kai", "" ] ]
TITLE: MedCT: A Clinical Terminology Graph for Generative AI Applications in Healthcare ABSTRACT: We introduce the world's first clinical terminology for the Chinese healthcare community, namely MedCT, accompanied by a clinical foundation model MedBERT and an entity linking model MedLink. The MedCT system enables standardized and programmable representation of Chinese clinical data, thereby stimulating the development of new medicines, treatment pathways, and better patient outcomes for the populous Chinese community. Moreover, the MedCT knowledge graph provides a principled mechanism to minimize the hallucination problem of large language models (LLMs), therefore achieving significant levels of accuracy and safety in LLM-based clinical applications. By leveraging the LLMs' emergent capabilities of generativeness and expressiveness, we were able to rapidly build a production-quality terminology system and deploy it to real-world clinical settings within three months, while classical terminologies like SNOMED CT have gone through more than twenty years of development. Our experiments show that the MedCT system achieves state-of-the-art (SOTA) performance in semantic matching and entity linking tasks, not only for Chinese but also for English. We also conducted a longitudinal field experiment by applying MedCT and LLMs in a representative spectrum of clinical tasks, including electronic health record (EHR) auto-generation and medical document search for diagnostic decision making. Our study shows a multitude of values of MedCT for clinical workflows and patient outcomes, especially in the new genre of clinical LLM applications. We present our approach in sufficient engineering detail, such that implementing a clinical terminology for other non-English societies should be readily reproducible. We openly release our terminology, models and algorithms, along with real-world clinical datasets, to support further development.
2501.07824
Joonho Ko
Joonho Ko, Jinheon Baek, Sung Ju Hwang
Real-time Verification and Refinement of Language Model Text Generation
null
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large language models (LLMs) have shown remarkable performance across a wide range of natural language tasks. However, a critical challenge remains in that they sometimes generate factually incorrect answers. To address this, much previous work has focused on identifying errors in the generation and further refining it; however, these methods are slow at deployment time since they verify an LLM's response only after the entire generation (from the first to the last token) is done. Further, we observe that once LLMs generate incorrect tokens early on, there is a higher likelihood that subsequent tokens will also be factually incorrect. To this end, in this work, we propose Streaming-VR (Streaming Verification and Refinement), a novel approach designed to enhance the efficiency of verification and refinement of LLM outputs. Specifically, the proposed Streaming-VR enables on-the-fly verification and correction of tokens as they are being generated, similar to a streaming process, ensuring that each subset of tokens is checked and refined in real-time by another LLM as the LLM constructs its response. Through comprehensive evaluations on multiple datasets, we demonstrate that our approach not only enhances the factual accuracy of LLMs, but also offers a more efficient solution compared to prior refinement methods.
[ { "version": "v1", "created": "Tue, 14 Jan 2025 03:59:48 GMT" }, { "version": "v2", "created": "Mon, 17 Feb 2025 13:26:52 GMT" }, { "version": "v3", "created": "Thu, 10 Apr 2025 06:39:35 GMT" } ]
2025-04-11T00:00:00
[ [ "Ko", "Joonho", "" ], [ "Baek", "Jinheon", "" ], [ "Hwang", "Sung Ju", "" ] ]
TITLE: Real-time Verification and Refinement of Language Model Text Generation ABSTRACT: Large language models (LLMs) have shown remarkable performance across a wide range of natural language tasks. However, a critical challenge remains in that they sometimes generate factually incorrect answers. To address this, much previous work has focused on identifying errors in the generation and further refining it; however, these methods are slow at deployment time since they verify an LLM's response only after the entire generation (from the first to the last token) is done. Further, we observe that once LLMs generate incorrect tokens early on, there is a higher likelihood that subsequent tokens will also be factually incorrect. To this end, in this work, we propose Streaming-VR (Streaming Verification and Refinement), a novel approach designed to enhance the efficiency of verification and refinement of LLM outputs. Specifically, the proposed Streaming-VR enables on-the-fly verification and correction of tokens as they are being generated, similar to a streaming process, ensuring that each subset of tokens is checked and refined in real-time by another LLM as the LLM constructs its response. Through comprehensive evaluations on multiple datasets, we demonstrate that our approach not only enhances the factual accuracy of LLMs, but also offers a more efficient solution compared to prior refinement methods.
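The streaming verify-and-refine loop can be sketched with stubbed components: tokens are buffered into chunks, and each chunk is checked (and corrected if needed) before generation continues. The chunk size and the toy verifier/refiner below are illustrative stand-ins for the second LLM:

```python
def streaming_verify(token_stream, verify, refine, chunk_size=8):
    """Sketch of the Streaming-VR idea: verify (and if needed refine) each
    chunk of tokens while generation is still running, instead of waiting
    for the full response."""
    output, buffer = [], []
    for tok in token_stream:
        buffer.append(tok)
        if len(buffer) == chunk_size:
            chunk = buffer if verify(buffer, output) else refine(buffer, output)
            output.extend(chunk)
            buffer = []
    # Flush whatever remains after the stream ends.
    output.extend(buffer if verify(buffer, output) else refine(buffer, output))
    return output

# Toy stubs: flag chunks containing "1971" as wrong, "fix" them in place.
verify = lambda chunk, ctx: "1971" not in chunk
refine = lambda chunk, ctx: ["1969" if t == "1971" else t for t in chunk]
stream = iter("Apollo 11 landed on the Moon in 1971 , a milestone .".split())
print(" ".join(streaming_verify(stream, verify, refine, chunk_size=4)))
```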
2501.14174
Junyeob Baek
Junyeob Baek, Yi-Fu Wu, Gautam Singh, Sungjin Ahn
Dreamweaver: Learning Compositional World Models from Pixels
null
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Humans have an innate ability to decompose their perceptions of the world into objects and their attributes, such as colors, shapes, and movement patterns. This cognitive process enables us to imagine novel futures by recombining familiar concepts. However, replicating this ability in artificial intelligence systems has proven challenging, particularly when it comes to modeling videos into compositional concepts and generating unseen, recomposed futures without relying on auxiliary data, such as text, masks, or bounding boxes. In this paper, we propose Dreamweaver, a neural architecture designed to discover hierarchical and compositional representations from raw videos and generate compositional future simulations. Our approach leverages a novel Recurrent Block-Slot Unit (RBSU) to decompose videos into their constituent objects and attributes. In addition, Dreamweaver uses a multi-future-frame prediction objective to more effectively capture disentangled representations for dynamic as well as static concepts. In experiments, we demonstrate that our model outperforms current state-of-the-art baselines for world modeling when evaluated under the DCI framework across multiple datasets. Furthermore, we show how the modularized concept representations of our model enable compositional imagination, allowing the generation of novel videos by recombining attributes from previously seen objects. cun-bjy.github.io/dreamweaver-website
[ { "version": "v1", "created": "Fri, 24 Jan 2025 01:50:19 GMT" }, { "version": "v2", "created": "Tue, 18 Feb 2025 08:16:03 GMT" }, { "version": "v3", "created": "Thu, 27 Feb 2025 16:09:15 GMT" }, { "version": "v4", "created": "Fri, 28 Feb 2025 08:12:21 GMT" }, { "version": "v5", "created": "Thu, 10 Apr 2025 13:12:34 GMT" } ]
2025-04-11T00:00:00
[ [ "Baek", "Junyeob", "" ], [ "Wu", "Yi-Fu", "" ], [ "Singh", "Gautam", "" ], [ "Ahn", "Sungjin", "" ] ]
TITLE: Dreamweaver: Learning Compositional World Models from Pixels ABSTRACT: Humans have an innate ability to decompose their perceptions of the world into objects and their attributes, such as colors, shapes, and movement patterns. This cognitive process enables us to imagine novel futures by recombining familiar concepts. However, replicating this ability in artificial intelligence systems has proven challenging, particularly when it comes to modeling videos into compositional concepts and generating unseen, recomposed futures without relying on auxiliary data, such as text, masks, or bounding boxes. In this paper, we propose Dreamweaver, a neural architecture designed to discover hierarchical and compositional representations from raw videos and generate compositional future simulations. Our approach leverages a novel Recurrent Block-Slot Unit (RBSU) to decompose videos into their constituent objects and attributes. In addition, Dreamweaver uses a multi-future-frame prediction objective to more effectively capture disentangled representations for dynamic as well as static concepts. In experiments, we demonstrate that our model outperforms current state-of-the-art baselines for world modeling when evaluated under the DCI framework across multiple datasets. Furthermore, we show how the modularized concept representations of our model enable compositional imagination, allowing the generation of novel videos by recombining attributes from previously seen objects. cun-bjy.github.io/dreamweaver-website
2501.15293
Sajjad Saleem
Rubab Hafeez, Sadia Waheed, Syeda Aleena Naqvi, Fahad Maqbool, Amna Sarwar, Sajjad Saleem, Muhammad Imran Sharif, Kamran Siddique, Zahid Akhtar
Deep Learning in Early Alzheimer's disease's Detection: A Comprehensive Survey of Classification, Segmentation, and Feature Extraction Methods
22 pages
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Alzheimer's disease is a deadly neurological condition, impairing important memory and brain functions. Alzheimer's disease promotes brain shrinkage, ultimately leading to dementia. Dementia diagnosis typically takes 2.8 to 4.4 years after the first clinical indication. Advancements in computing and information technology have led to many techniques for studying Alzheimer's disease. Early identification and therapy are crucial for preventing Alzheimer's disease, as early-onset dementia affects people before the age of 65, while late-onset dementia occurs after this age. According to the 2015 World Alzheimer's disease Report, there are 46.8 million individuals worldwide suffering from dementia, with an anticipated 74.7 million more by 2030 and 131.5 million by 2050. Deep Learning has outperformed conventional Machine Learning techniques by identifying intricate structures in high-dimensional data. Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) have achieved an accuracy of up to 96.0% for Alzheimer's disease classification, and 84.2% for mild cognitive impairment (MCI) conversion prediction. Few literature surveys on applying ML to predict dementia are available, and they lack congenital observations. In contrast, this survey focuses on a specific data channel for dementia detection. This study evaluated Deep Learning algorithms for early Alzheimer's disease detection, using openly accessible datasets, feature segmentation, and classification methods. This article also identifies research gaps and limitations in detecting Alzheimer's disease, which can inform future research.
[ { "version": "v1", "created": "Sat, 25 Jan 2025 18:00:17 GMT" }, { "version": "v2", "created": "Wed, 29 Jan 2025 10:30:35 GMT" }, { "version": "v3", "created": "Wed, 9 Apr 2025 22:39:50 GMT" } ]
2025-04-11T00:00:00
[ [ "Hafeez", "Rubab", "" ], [ "Waheed", "Sadia", "" ], [ "Naqvi", "Syeda Aleena", "" ], [ "Maqbool", "Fahad", "" ], [ "Sarwar", "Amna", "" ], [ "Saleem", "Sajjad", "" ], [ "Sharif", "Muhammad Imran", "" ], [ "Siddique", "Kamran", "" ], [ "Akhtar", "Zahid", "" ] ]
TITLE: Deep Learning in Early Alzheimer's disease's Detection: A Comprehensive Survey of Classification, Segmentation, and Feature Extraction Methods ABSTRACT: Alzheimer's disease is a deadly neurological condition, impairing important memory and brain functions. Alzheimer's disease promotes brain shrinkage, ultimately leading to dementia. Dementia diagnosis typically takes 2.8 to 4.4 years after the first clinical indication. Advancements in computing and information technology have led to many techniques for studying Alzheimer's disease. Early identification and therapy are crucial for preventing Alzheimer's disease, as early-onset dementia affects people before the age of 65, while late-onset dementia occurs after this age. According to the 2015 World Alzheimer's disease Report, there are 46.8 million individuals worldwide suffering from dementia, with an anticipated 74.7 million more by 2030 and 131.5 million by 2050. Deep Learning has outperformed conventional Machine Learning techniques by identifying intricate structures in high-dimensional data. Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) have achieved an accuracy of up to 96.0% for Alzheimer's disease classification, and 84.2% for mild cognitive impairment (MCI) conversion prediction. Few literature surveys on applying ML to predict dementia are available, and they lack congenital observations. In contrast, this survey focuses on a specific data channel for dementia detection. This study evaluated Deep Learning algorithms for early Alzheimer's disease detection, using openly accessible datasets, feature segmentation, and classification methods. This article also identifies research gaps and limitations in detecting Alzheimer's disease, which can inform future research.
2502.04790
Yuting Zeng
Yuting Zeng, Weizhe Huang, Lei Jiang, Tongxuan Liu, Xitai Jin, Chen Tianying Tiana, Jing Li, Xiaohua Xu
S$^2$-MAD: Breaking the Token Barrier to Enhance Multi-Agent Debate Efficiency
Accepted to NAACL 2025 Main
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Large language models (LLMs) have demonstrated remarkable capabilities across various natural language processing (NLP) scenarios, but they still face challenges when handling complex arithmetic and logical reasoning tasks. While Chain-Of-Thought (CoT) reasoning, self-consistency (SC) and self-correction strategies have attempted to guide models in sequential, multi-step reasoning, Multi-agent Debate (MAD) has emerged as a viable approach for enhancing the reasoning capabilities of LLMs. By increasing both the number of agents and the frequency of debates, the performance of LLMs improves significantly. However, this strategy results in a significant increase in token costs, presenting a barrier to scalability. To address this challenge, we introduce a novel sparsification strategy designed to reduce token costs within MAD. This approach minimizes ineffective exchanges of information and unproductive discussions among agents, thereby enhancing the overall efficiency of the debate process. We conduct comparative experiments on multiple datasets across various models, demonstrating that our approach substantially reduces the token costs of MAD. Specifically, compared to MAD, our approach achieves a reduction of up to 94.5\% in token costs while keeping performance degradation below 2.0\%.
[ { "version": "v1", "created": "Fri, 7 Feb 2025 09:49:56 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 02:29:35 GMT" } ]
2025-04-11T00:00:00
[ [ "Zeng", "Yuting", "" ], [ "Huang", "Weizhe", "" ], [ "Jiang", "Lei", "" ], [ "Liu", "Tongxuan", "" ], [ "Jin", "Xitai", "" ], [ "Tiana", "Chen Tianying", "" ], [ "Li", "Jing", "" ], [ "Xu", "Xiaohua", "" ] ]
TITLE: S$^2$-MAD: Breaking the Token Barrier to Enhance Multi-Agent Debate Efficiency ABSTRACT: Large language models (LLMs) have demonstrated remarkable capabilities across various natural language processing (NLP) scenarios, but they still face challenges when handling complex arithmetic and logical reasoning tasks. While Chain-Of-Thought (CoT) reasoning, self-consistency (SC) and self-correction strategies have attempted to guide models in sequential, multi-step reasoning, Multi-agent Debate (MAD) has emerged as a viable approach for enhancing the reasoning capabilities of LLMs. By increasing both the number of agents and the frequency of debates, the performance of LLMs improves significantly. However, this strategy results in a significant increase in token costs, presenting a barrier to scalability. To address this challenge, we introduce a novel sparsification strategy designed to reduce token costs within MAD. This approach minimizes ineffective exchanges of information and unproductive discussions among agents, thereby enhancing the overall efficiency of the debate process. We conduct comparative experiments on multiple datasets across various models, demonstrating that our approach substantially reduces the token costs of MAD. Specifically, compared to MAD, our approach achieves a reduction of up to 94.5\% in token costs while keeping performance degradation below 2.0\%.
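One plausible reading of the sparsification strategy is a gate that skips exchanging a response when it is nearly identical to what has already been said, since broadcasting it would spend tokens without informing the debate. The similarity gate below is an illustrative guess, not the paper's exact criterion:

```python
import difflib

def should_exchange(response, prior_responses, threshold=0.9):
    # Skip broadcasting a response that is nearly identical to anything
    # already said -- exchanging it would cost tokens without adding
    # information to the debate.
    for prior in prior_responses:
        if difflib.SequenceMatcher(None, response, prior).ratio() >= threshold:
            return False
    return True

def debate_round(agents, question, history):
    exchanged = []
    for agent in agents:
        answer = agent(question, history)
        if should_exchange(answer, exchanged + history):
            exchanged.append(answer)  # only informative answers cost tokens
    return exchanged

# Toy agents: two near-duplicates and one genuinely different answer.
agents = [lambda q, h: "42 because 6*7", lambda q, h: "42 because 6 * 7",
          lambda q, h: "It is 42; 40+2 also works"]
print(debate_round(agents, "What is 6*7?", []))
```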
2502.05780
Danny Wang
Danny Wang, Ruihong Qiu, Guangdong Bai, Zi Huang
GOLD: Graph Out-of-Distribution Detection via Implicit Adversarial Latent Generation
ICLR25
null
null
null
cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Despite graph neural networks' (GNNs) great success in modelling graph-structured data, out-of-distribution (OOD) test instances still pose a great challenge for current GNNs. One of the most effective techniques to detect OOD nodes is to expose the detector model to an additional OOD node-set, yet the extra OOD instances are often difficult to obtain in practice. Recent methods for image data address this problem using OOD data synthesis, typically relying on pre-trained generative models like Stable Diffusion. However, these approaches require vast amounts of additional data, as well as one-for-all pre-trained generative models, which are not available for graph data. Therefore, we propose the GOLD framework for graph OOD detection, an implicit adversarial learning pipeline with synthetic OOD exposure that requires no pre-trained models. The implicit adversarial training process employs a novel alternating optimisation framework by training: (1) a latent generative model to regularly imitate the in-distribution (ID) embeddings from an evolving GNN, and (2) a GNN encoder and an OOD detector to accurately classify ID data while increasing the energy divergence between the ID embeddings and the generative model's synthetic embeddings. This novel approach implicitly transforms the synthetic embeddings into pseudo-OOD instances relative to the ID data, effectively simulating exposure to OOD scenarios without auxiliary data. Extensive OOD detection experiments are conducted on five benchmark graph datasets, verifying the superior performance of GOLD without using real OOD data compared with the state-of-the-art OOD exposure and non-exposure baselines.
[ { "version": "v1", "created": "Sun, 9 Feb 2025 05:19:53 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 11:41:23 GMT" } ]
2025-04-11T00:00:00
[ [ "Wang", "Danny", "" ], [ "Qiu", "Ruihong", "" ], [ "Bai", "Guangdong", "" ], [ "Huang", "Zi", "" ] ]
TITLE: GOLD: Graph Out-of-Distribution Detection via Implicit Adversarial Latent Generation ABSTRACT: Despite graph neural networks' (GNNs) great success in modelling graph-structured data, out-of-distribution (OOD) test instances still pose a major challenge for current GNNs. One of the most effective techniques to detect OOD nodes is to expose the detector model to an additional OOD node-set, yet the extra OOD instances are often difficult to obtain in practice. Recent methods for image data address this problem using OOD data synthesis, typically relying on pre-trained generative models like Stable Diffusion. However, these approaches require vast amounts of additional data, as well as one-for-all pre-trained generative models, which are not available for graph data. Therefore, we propose the GOLD framework for graph OOD detection, an implicit adversarial learning pipeline that provides synthetic OOD exposure without pre-trained models. The implicit adversarial training process employs a novel alternating optimisation framework by training: (1) a latent generative model to regularly imitate the in-distribution (ID) embeddings from an evolving GNN, and (2) a GNN encoder and an OOD detector to accurately classify ID data while increasing the energy divergence between the ID embeddings and the generative model's synthetic embeddings. This novel approach implicitly transforms the synthetic embeddings into pseudo-OOD instances relative to the ID data, effectively simulating exposure to OOD scenarios without auxiliary data. Extensive OOD detection experiments are conducted on five benchmark graph datasets, verifying the superior performance of GOLD without using real OOD data compared with the state-of-the-art OOD exposure and non-exposure baselines.
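A toy sketch of the alternating optimisation described above, with MLPs standing in for the GNN encoder and latent generator, a free-energy score as the OOD detector, and a moment-matching generator loss plus a hinge on the energy gap — all of these concrete choices are assumptions, since the abstract does not specify them:

```python
# Toy sketch of GOLD-style alternating optimisation (assumed details).
import torch
import torch.nn as nn

torch.manual_seed(0)
enc = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))  # stands in for the GNN
clf = nn.Linear(8, 3)                                                # ID classifier head
gen = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 8))   # latent generator

def energy(z):
    """Free-energy OOD score: low for ID-like embeddings, high for OOD."""
    return -torch.logsumexp(clf(z), dim=-1)

opt_model = torch.optim.Adam(list(enc.parameters()) + list(clf.parameters()), lr=1e-3)
opt_gen = torch.optim.Adam(gen.parameters(), lr=1e-3)

x, y = torch.randn(64, 16), torch.randint(0, 3, (64,))  # toy ID features/labels

for step in range(200):
    # (1) Generator step: imitate the current ID embedding distribution
    # (moment matching here is an assumption; GOLD's generator may differ).
    z_id = enc(x).detach()
    z_fake = gen(torch.randn(64, 4))
    loss_gen = ((z_fake.mean(0) - z_id.mean(0)).pow(2).sum()
                + (z_fake.std(0) - z_id.std(0)).pow(2).sum())
    opt_gen.zero_grad(); loss_gen.backward(); opt_gen.step()

    # (2) Model step: classify ID data while pushing the energies of the
    # synthetic (pseudo-OOD) embeddings above the ID energies by a margin.
    z_id = enc(x)
    z_fake = gen(torch.randn(64, 4)).detach()
    loss_cls = nn.functional.cross_entropy(clf(z_id), y)
    gap = torch.relu(1.0 + energy(z_id) - energy(z_fake)).mean()
    loss = loss_cls + gap
    opt_model.zero_grad(); loss.backward(); opt_model.step()

print("ID energy:", energy(enc(x)).mean().item(),
      "synthetic energy:", energy(gen(torch.randn(64, 4))).mean().item())
```

After training, ID embeddings should score low energy and the generator's synthetic embeddings high energy, giving a threshold-based OOD detector without any real OOD data.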
2502.06193
Ruiqi Wang
Ruiqi Wang, Jiyu Guo, Cuiyun Gao, Guodong Fan, Chun Yong Chong, Xin Xia
Can LLMs Replace Human Evaluators? An Empirical Study of LLM-as-a-Judge in Software Engineering
Accepted by ISSTA 2025: https://conf.researchr.org/details/issta-2025/issta-2025-papers/85/Can-LLMs-replace-Human-Evaluators-An-Empirical-Study-of-LLM-as-a-Judge-in-Software-E
null
10.1145/3728963
null
cs.SE cs.AI
http://creativecommons.org/licenses/by/4.0/
Recently, large language models (LLMs) have been deployed to tackle various software engineering (SE) tasks like code generation, significantly advancing the automation of SE tasks. However, assessing the quality of LLM-generated code and text remains challenging. The commonly used Pass@k metric necessitates extensive unit tests and configured environments, incurs high labor costs, and is not suitable for evaluating LLM-generated text. Conventional metrics like BLEU, which measure only lexical rather than semantic similarity, have also come under scrutiny. In response, a new trend has emerged to employ LLMs for automated evaluation, known as LLM-as-a-judge. These LLM-as-a-judge methods are claimed to better mimic human assessment than conventional metrics without relying on high-quality reference answers. Nevertheless, their actual alignment with human judgment in SE tasks remains unexplored. In this paper, we empirically explore LLM-as-a-judge methods for evaluating SE tasks, focusing on their alignment with human judgments. We select seven LLM-as-a-judge methods that utilize general-purpose LLMs, alongside two LLMs specifically fine-tuned for evaluation. After generating and manually scoring LLM responses on three recent SE datasets of code translation, code generation, and code summarization, we prompt these methods to evaluate each response. We then compare the scores generated by these methods with human evaluation. The results indicate that output-based methods reach the highest Pearson correlation of 81.32 and 68.51 with human scores in code translation and generation, achieving near-human evaluation, noticeably outperforming ChrF++, one of the best conventional metrics, at 34.23 and 64.92. Such output-based methods prompt LLMs to output judgments directly, and exhibit more balanced score distributions that resemble human score patterns. Finally, we provide...
[ { "version": "v1", "created": "Mon, 10 Feb 2025 06:49:29 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 07:33:55 GMT" } ]
2025-04-11T00:00:00
[ [ "Wang", "Ruiqi", "" ], [ "Guo", "Jiyu", "" ], [ "Gao", "Cuiyun", "" ], [ "Fan", "Guodong", "" ], [ "Chong", "Chun Yong", "" ], [ "Xia", "Xin", "" ] ]
TITLE: Can LLMs Replace Human Evaluators? An Empirical Study of LLM-as-a-Judge in Software Engineering ABSTRACT: Recently, large language models (LLMs) have been deployed to tackle various software engineering (SE) tasks like code generation, significantly advancing the automation of SE tasks. However, assessing the quality of LLM-generated code and text remains challenging. The commonly used Pass@k metric necessitates extensive unit tests and configured environments, incurs high labor costs, and is not suitable for evaluating LLM-generated text. Conventional metrics like BLEU, which measure only lexical rather than semantic similarity, have also come under scrutiny. In response, a new trend has emerged to employ LLMs for automated evaluation, known as LLM-as-a-judge. These LLM-as-a-judge methods are claimed to better mimic human assessment than conventional metrics without relying on high-quality reference answers. Nevertheless, their actual alignment with human judgment in SE tasks remains unexplored. In this paper, we empirically explore LLM-as-a-judge methods for evaluating SE tasks, focusing on their alignment with human judgments. We select seven LLM-as-a-judge methods that utilize general-purpose LLMs, alongside two LLMs specifically fine-tuned for evaluation. After generating and manually scoring LLM responses on three recent SE datasets of code translation, code generation, and code summarization, we prompt these methods to evaluate each response. We then compare the scores generated by these methods with human evaluation. The results indicate that output-based methods reach the highest Pearson correlation of 81.32 and 68.51 with human scores in code translation and generation, achieving near-human evaluation, noticeably outperforming ChrF++, one of the best conventional metrics, at 34.23 and 64.92. Such output-based methods prompt LLMs to output judgments directly, and exhibit more balanced score distributions that resemble human score patterns. Finally, we provide...
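The alignment measure reported above, Pearson correlation between judge and human scores, is straightforward to reproduce; the ratings below are made-up placeholders, not data from the paper:

```python
# Pearson correlation between (hypothetical) LLM-judge and human scores.
import numpy as np

human = np.array([4, 2, 5, 3, 1, 4, 5, 2], dtype=float)  # placeholder ratings
judge = np.array([5, 2, 4, 3, 1, 4, 5, 3], dtype=float)  # placeholder ratings

r = np.corrcoef(human, judge)[0, 1]           # Pearson r in [-1, 1]
print(f"Pearson correlation: {r * 100:.2f}")  # reported on a 0-100 scale above
```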
2502.07532
Erik Larsson
Erik Larsson, Joel Oskarsson, Tomas Landelius, Fredrik Lindsten
Diffusion-LAM: Probabilistic Limited Area Weather Forecasting with Diffusion
Accepted, camera ready version
null
null
null
cs.LG physics.ao-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine learning methods have been shown to be effective for weather forecasting, offering speed and accuracy advantages over traditional numerical models. While early efforts primarily concentrated on deterministic predictions, the field has increasingly shifted toward probabilistic forecasting to better capture forecast uncertainty. Most machine learning-based models have been designed for global-scale predictions, with only limited work targeting regional or limited area forecasting, which allows more specialized and flexible modeling for specific locations. This work introduces Diffusion-LAM, a probabilistic limited area weather model leveraging conditional diffusion. By conditioning on boundary data from surrounding regions, our approach generates forecasts within a defined area. Experimental results on the MEPS limited area dataset demonstrate the potential of Diffusion-LAM to deliver accurate probabilistic forecasts, highlighting its promise for limited-area weather prediction.
[ { "version": "v1", "created": "Tue, 11 Feb 2025 13:15:16 GMT" }, { "version": "v2", "created": "Thu, 13 Feb 2025 13:55:08 GMT" }, { "version": "v3", "created": "Thu, 10 Apr 2025 12:10:33 GMT" } ]
2025-04-11T00:00:00
[ [ "Larsson", "Erik", "" ], [ "Oskarsson", "Joel", "" ], [ "Landelius", "Tomas", "" ], [ "Lindsten", "Fredrik", "" ] ]
TITLE: Diffusion-LAM: Probabilistic Limited Area Weather Forecasting with Diffusion ABSTRACT: Machine learning methods have been shown to be effective for weather forecasting, offering speed and accuracy advantages over traditional numerical models. While early efforts primarily concentrated on deterministic predictions, the field has increasingly shifted toward probabilistic forecasting to better capture forecast uncertainty. Most machine learning-based models have been designed for global-scale predictions, with only limited work targeting regional or limited area forecasting, which allows more specialized and flexible modeling for specific locations. This work introduces Diffusion-LAM, a probabilistic limited area weather model leveraging conditional diffusion. By conditioning on boundary data from surrounding regions, our approach generates forecasts within a defined area. Experimental results on the MEPS limited area dataset demonstrate the potential of Diffusion-LAM to deliver accurate probabilistic forecasts, highlighting its promise for limited-area weather prediction.
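One common way to condition a diffusion sampler on known boundary data is inpainting-style replacement at each denoising step; whether Diffusion-LAM conditions this way is not stated in the abstract, so the sketch below (toy denoiser included) is purely an assumption:

```python
# Toy sketch of inpainting-style boundary conditioning for a diffusion
# sampler (an assumed scheme; the abstract does not specify Diffusion-LAM's).
import numpy as np

rng = np.random.default_rng(0)
H = W = 16
betas = np.linspace(1e-4, 0.02, 100)      # noise schedule
alphas_bar = np.cumprod(1.0 - betas)

# Known boundary data, e.g. taken from a surrounding (global) forecast.
boundary = np.zeros((H, W), dtype=bool)
boundary[:2, :] = boundary[-2:, :] = boundary[:, :2] = boundary[:, -2:] = True
boundary_field = rng.normal(size=(H, W))

def denoiser(x_t, t):
    """Stand-in for a trained denoising network: mild shrinkage toward 0."""
    return 0.9 * x_t

x = rng.normal(size=(H, W))               # start from pure noise
for t in reversed(range(len(betas))):
    x = denoiser(x, t)                    # one (toy) reverse step
    if t > 0:
        x = x + np.sqrt(betas[t]) * rng.normal(size=(H, W))
    # Condition on the boundary: overwrite boundary pixels with the known
    # field, noised to the current diffusion level so scales stay consistent.
    noised = (np.sqrt(alphas_bar[t]) * boundary_field
              + np.sqrt(1.0 - alphas_bar[t]) * rng.normal(size=(H, W)))
    x[boundary] = noised[boundary]

rmse = np.sqrt(((x - boundary_field)[boundary] ** 2).mean())
print(f"boundary RMSE after sampling: {rmse:.4f}")
```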
2502.09535
Carlton Shepherd
Carlton Shepherd and Elliot Hurley
Entropy Collapse in Mobile Sensors: The Hidden Risks of Sensor-Based Security
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Mobile sensor data has been proposed for security-critical applications such as device pairing, proximity detection, and continuous authentication. However, the foundational premise that these signals provide sufficient entropy remains under-explored. In this work, we systematically analyse the entropy of mobile sensor data across four diverse datasets spanning multiple application contexts. Our findings reveal pervasive biases, with single-sensor mean min-entropy values ranging from 3.408-4.483 bits (S.D.=1.018-1.574) despite Shannon entropy being several multiples higher, showing a significant collapse from average- to worst-case settings. We further demonstrate that correlations between sensor modalities reduce the worst-case entropy of using multiple sensors by up to ~75% compared to average-case Shannon entropy. This brings joint min-entropy well below 10 bits in many cases and, even in the best case, yields only ~24 bits of min-entropy when combining 20 sensor modalities. These results raise the serious risk of attacks that exhaustively search the space of possible sensor measurements. Our work also calls into question the widely held assumption that adding more sensors inherently yields higher security, and we strongly urge caution when relying on mobile sensor data for security applications.
[ { "version": "v1", "created": "Thu, 13 Feb 2025 17:50:58 GMT" }, { "version": "v2", "created": "Fri, 14 Feb 2025 10:43:47 GMT" }, { "version": "v3", "created": "Mon, 17 Feb 2025 22:41:20 GMT" }, { "version": "v4", "created": "Tue, 25 Mar 2025 11:42:52 GMT" }, { "version": "v5", "created": "Wed, 26 Mar 2025 22:14:50 GMT" }, { "version": "v6", "created": "Thu, 10 Apr 2025 17:53:17 GMT" } ]
2025-04-11T00:00:00
[ [ "Shepherd", "Carlton", "" ], [ "Hurley", "Elliot", "" ] ]
TITLE: Entropy Collapse in Mobile Sensors: The Hidden Risks of Sensor-Based Security ABSTRACT: Mobile sensor data has been proposed for security-critical applications such as device pairing, proximity detection, and continuous authentication. However, the foundational premise that these signals provide sufficient entropy remains under-explored. In this work, we systematically analyse the entropy of mobile sensor data across four diverse datasets spanning multiple application contexts. Our findings reveal pervasive biases, with single-sensor mean min-entropy values ranging from 3.408-4.483 bits (S.D.=1.018-1.574) despite Shannon entropy being several multiples higher, showing a significant collapse from average- to worst-case settings. We further demonstrate that correlations between sensor modalities reduce the worst-case entropy of using multiple sensors by up to ~75% compared to average-case Shannon entropy. This brings joint min-entropy well below 10 bits in many cases and, even in the best case, yields only ~24 bits of min-entropy when combining 20 sensor modalities. These results raise the serious risk of attacks that exhaustively search the space of possible sensor measurements. Our work also calls into question the widely held assumption that adding more sensors inherently yields higher security, and we strongly urge caution when relying on mobile sensor data for security applications.
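The average- to worst-case collapse described above follows from the definitions: Shannon entropy H = -Σ p log2 p averages surprise over all outcomes, while min-entropy H_min = -log2 max p is pinned to the single most likely reading. A toy biased distribution (not drawn from the paper's datasets) makes the gap concrete:

```python
# Shannon vs. min-entropy for a toy biased sensor-reading distribution.
import numpy as np

# 256 quantized sensor values: one dominant reading (e.g., the phone lying
# flat on a desk) plus a uniform remainder. Purely illustrative numbers.
p = np.full(256, 0.5 / 255)
p[0] = 0.5

shannon = -np.sum(p * np.log2(p))   # ~5.0 bits (average case)
min_entropy = -np.log2(p.max())     # 1.0 bit   (worst case)
print(f"Shannon: {shannon:.3f} bits, min-entropy: {min_entropy:.3f} bits")
```

An attacker guessing the most likely reading succeeds with probability 2^(-H_min), so it is min-entropy, not Shannon entropy, that bounds the security of keys derived from such signals.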