
FLANEC: Exploring FLAN-T5 for Post-ASR Error Correction

Model Overview

FLANEC is an encoder-decoder model based on FLAN-T5, specifically fine-tuned for post-Automatic Speech Recognition (ASR) error correction, also known as Generative Speech Error Correction (GenSEC). The model utilizes n-best hypotheses from ASR systems to enhance the accuracy and grammaticality of final transcriptions by generating a single corrected output. FLANEC models are trained on diverse subsets of the HyPoradise dataset, leveraging multiple ASR domains to provide robust, scalable error correction across different types of audio data.

FLANEC was developed for the GenSEC Task 1 challenge at SLT 2024 (see the challenge website for details).

⚠️ IMPORTANT: This repository contains the Single-Dataset (SD) versions of FLANEC models. Each model is trained on a single specific dataset from the HyPoradise collection, allowing for domain-specialized ASR error correction. For models trained on the cumulative dataset (CD), please see the related models section below.

Repository Structure

This repository contains multiple model variants trained individually on each dataset from the HyPoradise collection:

flanec-sd-models/
├── flanec-base-sd-ft/     # Base models (250M params) with full fine-tuning
│   ├── atis/              # ATIS dataset model
│   ├── chime4/            # CHiME-4 dataset model
│   └── ...                # Other dataset models
├── flanec-base-sd-lora/   # Base models with LoRA fine-tuning
├── flanec-large-sd-ft/    # Large models (800M params) with full fine-tuning
├── flanec-large-sd-lora/  # Large models with LoRA fine-tuning
├── flanec-xl-sd-ft/       # XL models (3B params) with full fine-tuning
└── flanec-xl-sd-lora/     # XL models with LoRA fine-tuning

Each dataset directory contains the best model checkpoint along with its tokenizer.
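
Before downloading anything, you can verify this layout and see which files a given variant ships with by listing the repository contents. A minimal sketch using the huggingface_hub Python client (the flanec-base-sd-ft/atis/ prefix is just one example from the tree above):

from huggingface_hub import list_repo_files

# List every file in the repository, then keep only the ATIS base model with full fine-tuning
files = list_repo_files("morenolq/flanec-sd-models")
atis_files = [f for f in files if f.startswith("flanec-base-sd-ft/atis/")]
print("\n".join(atis_files))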

Getting Started

Cloning the Repository

Warning: This repository is very large because it contains multiple model variants across different sizes and datasets.

git clone https://huggingface.co/morenolq/flanec-sd-models

To avoid downloading the full repository, you can use the Hugging Face Hub CLI to fetch only a specific model:

# Install the Hugging Face Hub CLI if you haven't already
pip install -U "huggingface_hub[cli]"

# Clone only a specific model variant and dataset
huggingface-cli download morenolq/flanec-sd-models --include "flanec-base-sd-ft/atis/**" --local-dir flanec-sd-models
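
The same selective download can be done from Python. A minimal sketch using huggingface_hub.snapshot_download, mirroring the CLI command above:

from huggingface_hub import snapshot_download

# Download only the ATIS base model trained with full fine-tuning
snapshot_download(
    repo_id="morenolq/flanec-sd-models",
    allow_patterns="flanec-base-sd-ft/atis/**",
    local_dir="flanec-sd-models",
)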

Using a Model

To use a specific model:

from transformers import T5ForConditionalGeneration, T5Tokenizer

# Choose a specific model path based on:
# 1. Model size (base, large, xl)
# 2. Training method (ft, lora)
# 3. Dataset (atis, wsj, chime4, etc.)
model_path = "path/to/flanec-sd-models/flanec-base-sd-ft/atis"
tokenizer = T5Tokenizer.from_pretrained(model_path)
model = T5ForConditionalGeneration.from_pretrained(model_path)

# Example input with n-best ASR hypotheses
input_text = """Generate the correct transcription for the following n-best list of ASR hypotheses:

1. i need to fly from dallas to chicago next monday
2. i need to fly from dallas to chicago next thursday
3. i need to fly from dallas to chicago on monday
4. i need to fly dallas to chicago next monday
5. i need to fly from dallas chicago next monday"""

input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=128)
corrected_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(corrected_text)
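
The example above loads a fully fine-tuned (ft) checkpoint. If you use one of the -lora variants and its directory contains a PEFT adapter rather than merged weights (look for an adapter_config.json among its files), the adapter can be loaded on top of the corresponding FLAN-T5 base model. This is a minimal sketch with the peft library; the google/flan-t5-base identifier and the adapter path are assumptions to adjust for your chosen size and dataset:

from peft import PeftModel
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Assumed paths: pick the base model size and adapter directory matching your variant
base_model_name = "google/flan-t5-base"
adapter_path = "path/to/flanec-sd-models/flanec-base-sd-lora/atis"

tokenizer = T5Tokenizer.from_pretrained(adapter_path)
base_model = T5ForConditionalGeneration.from_pretrained(base_model_name)

# Attach the LoRA adapter; merge_and_unload() folds it into the base weights for plain inference
model = PeftModel.from_pretrained(base_model, adapter_path)
model = model.merge_and_unload()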

Model Variants

Available Model Sizes

  • Base: ~250 million parameters
  • Large: ~800 million parameters
  • XL: ~3 billion parameters

Training Methods

  • Full Fine-tuning (ft): All model parameters are updated during training
  • LoRA (lora): Low-Rank Adaptation for parameter-efficient fine-tuning

Datasets

All models are trained on specific subsets of the HyPoradise dataset:

  1. WSJ: Business and financial news.
  2. ATIS: Airline travel queries.
  3. CHiME-4: Noisy speech.
  4. Tedlium-3: TED talks.
  5. CV-accent: Accented speech.
  6. SwitchBoard: Conversational speech.
  7. LRS2: BBC program audio.
  8. CORAAL: Accented African American English speech.

For more details on each dataset, see the HyPoradise paper.

Related Models

If you're looking for models trained on the combined datasets (Cumulative Dataset, CD), please check the companion repositories:

  • Full Fine-tuning (FT) Cumulative Dataset Models
  • LoRA Cumulative Dataset Models

Performance Overview

Our research demonstrated that:

  • Single-dataset models excel at their specific domains but may not generalize well to others
  • Larger models generally deliver better performance within their domain
  • Full fine-tuning typically outperforms LoRA, especially for larger models
  • The CORAAL dataset presents unique challenges across all model configurations

For detailed performance metrics and analysis, please see the FlanEC paper.

Intended Use

FLANEC is designed for the task of Generative Speech Error Correction (GenSEC). The models are suitable for post-processing ASR outputs to correct grammatical and linguistic errors, and support English only.

Citation

Please use the following citation to reference this work in your research:

@inproceedings{quatra_2024_flanec,
  author    = {Moreno La Quatra and Valerio Mario Salerno and Yu Tsao and Sabato Marco Siniscalchi},
  title     = {FlanEC: Exploring Flan-T5 for Post-ASR Error Correction},
  booktitle = {2024 IEEE Spoken Language Technology Workshop (SLT)},
  year      = {2024},
  doi       = {10.1109/slt61566.2024.10832257},
  url       = {https://doi.org/10.1109/slt61566.2024.10832257}
}