Glossary
This glossary defines general machine learning and πŸ€— Transformers terms to help you better understand the
documentation.
A
attention mask
The attention mask is an optional argument used when batching sequences together.
This argument indicates to the model which tokens should be attended to, and which should not.
For example, consider these two sequences:
```python
>>> from transformers import BertTokenizer

>>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-cased")
>>> sequence_a = "This is a short sequence."
>>> sequence_b = "This is a rather long sequence. It is at least longer than the sequence A."

>>> encoded_sequence_a = tokenizer(sequence_a)["input_ids"]
>>> encoded_sequence_b = tokenizer(sequence_b)["input_ids"]
```
The encoded versions have different lengths:
```python
>>> len(encoded_sequence_a), len(encoded_sequence_b)
(8, 19)
```
Therefore, we can't put them together in the same tensor as-is. The first sequence needs to be padded up to the length
of the second one, or the second one needs to be truncated down to the length of the first one.
In the first case, the list of IDs will be extended by the padding indices. We can pass a list to the tokenizer and ask
it to pad like this:
```python
>>> padded_sequences = tokenizer([sequence_a, sequence_b], padding=True)
```
We can see that 0s have been added on the right of the first sentence to make it the same length as the second one:
```python
>>> padded_sequences["input_ids"]
[[101, 1188, 1110, 170, 1603, 4954, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [101, 1188, 1110, 170, 1897, 1263, 4954, 119, 1135, 1110, 1120, 1655, 2039, 1190, 1103, 4954, 138, 119, 102]]
```
This can then be converted into a tensor in PyTorch or TensorFlow. The attention mask is a binary tensor indicating the
position of the padded indices so that the model does not attend to them. For the [BertTokenizer], 1 indicates a
value that should be attended to, while 0 indicates a padded value. This attention mask is in the dictionary returned
by the tokenizer under the key "attention_mask":
```python
>>> padded_sequences["attention_mask"]
[[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]
```
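To go one step further, the tokenizer can return framework tensors directly. The sketch below assumes PyTorch is installed and reuses the two sequences from above with the standard return_tensors argument:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-cased")

sequence_a = "This is a short sequence."
sequence_b = "This is a rather long sequence. It is at least longer than the sequence A."

# Pad the batch and return PyTorch tensors in one call
batch = tokenizer([sequence_a, sequence_b], padding=True, return_tensors="pt")

print(batch["input_ids"].shape)       # torch.Size([2, 19])
print(batch["attention_mask"].shape)  # torch.Size([2, 19])

# Positions where attention_mask == 0 are padding and are ignored by the model
print(batch["attention_mask"][0])
```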
autoencoding models
See encoder models and masked language modeling
autoregressive models
See causal language modeling and decoder models
B
backbone
The backbone is the network (embeddings and layers) that outputs the raw hidden states or features. It is usually connected to a head which accepts the features as its input to make a prediction. For example, [ViTModel] is a backbone without a specific head on top. Other models, such as DPT, can also use [ViTModel] as their backbone.
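As a minimal sketch of the backbone/head split (the checkpoint name and image size are illustrative assumptions), the base [ViTModel] returns raw hidden states, while [ViTForImageClassification] adds a classification head that turns them into logits:

```python
import torch
from transformers import ViTModel, ViTForImageClassification

# A dummy batch of one 224x224 RGB image
pixel_values = torch.rand(1, 3, 224, 224)

# Backbone only: outputs raw hidden states (features), one vector per patch plus the CLS token
backbone = ViTModel.from_pretrained("google/vit-base-patch16-224")
features = backbone(pixel_values).last_hidden_state
print(features.shape)  # torch.Size([1, 197, 768])

# Backbone + classification head: outputs logits over the label set
classifier = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")
logits = classifier(pixel_values).logits
print(logits.shape)  # torch.Size([1, num_labels])
```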
C
causal language modeling
A pretraining task where the model reads the text in order and has to predict the next word. It's usually done by reading the whole sentence, but using a mask inside the model to hide the future tokens at a certain timestep.
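To make the "mask inside the model" concrete, here is a small framework-level sketch (plain PyTorch, not a specific Transformers API) of the lower-triangular causal mask that prevents each position from attending to future tokens:

```python
import torch

seq_length = 5

# 1 means "may attend", 0 means "hidden future token"
causal_mask = torch.tril(torch.ones(seq_length, seq_length, dtype=torch.long))
print(causal_mask)
# tensor([[1, 0, 0, 0, 0],
#         [1, 1, 0, 0, 0],
#         [1, 1, 1, 0, 0],
#         [1, 1, 1, 1, 0],
#         [1, 1, 1, 1, 1]])
# Row i is the mask used when predicting token i + 1: only positions <= i are visible.
```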
channel
Color images are made up of some combination of values in three channels (red, green, and blue, or RGB), while grayscale images only have one channel. In πŸ€— Transformers, the channel can be the first or last dimension of an image's tensor: [n_channels, height, width] or [height, width, n_channels].
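A short sketch of the two layouts (plain PyTorch, illustrative shapes only): the same image tensor can be stored channels-last or channels-first, and permute converts between them.

```python
import torch

height, width, n_channels = 32, 32, 3

# Channels-last layout: [height, width, n_channels]
image_hwc = torch.rand(height, width, n_channels)

# Channels-first layout: [n_channels, height, width]
image_chw = image_hwc.permute(2, 0, 1)

print(image_hwc.shape)  # torch.Size([32, 32, 3])
print(image_chw.shape)  # torch.Size([3, 32, 32])
```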
connectionist temporal classification (CTC)
An algorithm which allows a model to learn without knowing exactly how the input and output are aligned; CTC calculates the distribution of all possible outputs for a given input and chooses the most likely output from it. CTC is commonly used in speech recognition tasks because speech doesn't always cleanly align with the transcript for a variety of reasons such as a speaker's different speech rates.
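The sketch below illustrates the idea with PyTorch's built-in torch.nn.CTCLoss rather than a Transformers API; the shapes, the blank index, and the random inputs are illustrative assumptions.

```python
import torch
import torch.nn as nn

batch_size, input_length, num_classes = 2, 50, 32  # class 0 is the CTC "blank"
target_length = 10

# Log-probabilities over classes for each timestep: [input_length, batch_size, num_classes]
log_probs = torch.randn(input_length, batch_size, num_classes).log_softmax(dim=-1)

# Target transcripts (class indices 1..num_classes-1) and their lengths
targets = torch.randint(1, num_classes, (batch_size, target_length))
input_lengths = torch.full((batch_size,), input_length, dtype=torch.long)
target_lengths = torch.full((batch_size,), target_length, dtype=torch.long)

# CTC marginalizes over all possible alignments between the 50 input frames and the 10 target tokens
ctc_loss = nn.CTCLoss(blank=0)
loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
print(loss)
```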
convolution
A type of layer in a neural network where the input matrix is multiplied element-wise by a smaller matrix (kernel or filter) and the values are summed up in a new matrix. This is known as a convolutional operation which is repeated over the entire input matrix. Each operation is applied to a different segment of the input matrix. Convolutional neural networks (CNNs) are commonly used in computer vision.
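A minimal sketch with torch.nn.Conv2d (plain PyTorch, illustrative shapes): a 3x3 kernel slides over the input, and each output value is the sum of an element-wise product between the kernel and one input segment.

```python
import torch
import torch.nn as nn

# One grayscale image: [batch_size, n_channels, height, width]
image = torch.rand(1, 1, 28, 28)

# A single 3x3 kernel (filter), stride 1, no padding
conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3)

output = conv(image)
print(output.shape)  # torch.Size([1, 1, 26, 26]) -> the kernel visits 26 x 26 positions
```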
D
DataParallel (DP)
Parallelism technique for training on multiple GPUs where the same setup is replicated multiple times, with each instance
receiving a distinct data slice. The processing is done in parallel and all setups are synchronized at the end of each training step.
Learn more about how DataParallel works here.
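As a minimal PyTorch-level sketch (any nn.Module, including a Transformers model, can be wrapped the same way), torch.nn.DataParallel replicates the module and splits each batch across the visible GPUs:

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Linear(10, 2)  # stand-in for any model, e.g. a Transformers model

if torch.cuda.device_count() > 1:
    # Replicate the model on every visible GPU; each replica receives a slice of the batch
    model = nn.DataParallel(model)

model = model.to(device)

inputs = torch.rand(32, 10).to(device)
outputs = model(inputs)  # per-GPU outputs are gathered back into one tensor
print(outputs.shape)  # torch.Size([32, 2])
```

Note that for real multi-GPU training, PyTorch recommends DistributedDataParallel (DDP) over DataParallel.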
decoder input IDs
This input is specific to encoder-decoder models, and contains the input IDs that will be fed to the decoder. These
inputs should be used for sequence to sequence tasks, such as translation or summarization, and are usually built in a
way specific to each model.
Most encoder-decoder models (BART, T5) create their decoder_input_ids on their own from the labels. In such models,
passing the labels is the preferred way to handle training.
Please check each model's docs to see how they handle these input IDs for sequence to sequence training.
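As a hedged sketch of the usual training pattern (the checkpoint name is an illustrative choice, and the tokenizer's text_target argument requires a reasonably recent Transformers version), passing labels to a seq2seq model such as T5 lets it build the decoder_input_ids internally:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-small")

inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
labels = tokenizer(text_target="Das Haus ist wunderbar.", return_tensors="pt").input_ids

# No decoder_input_ids are passed: the model derives them from the labels
outputs = model(**inputs, labels=labels)
print(outputs.loss)
```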
decoder models
Also referred to as autoregressive models, decoder models involve a pretraining task (called causal language modeling) where the model reads the text in order and has to predict the next word. It's usually done by reading the whole sentence with a mask to hide future tokens at a certain timestep.
deep learning (DL)
Machine learning algorithms which use neural networks with several layers.
E
encoder models
Also known as autoencoding models, encoder models take an input (such as text or images) and transform it into a condensed numerical representation called an embedding. Oftentimes, encoder models are pretrained using techniques like masked language modeling, which masks parts of the input sequence and forces the model to create more meaningful representations.
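As a small sketch (the checkpoint name is an illustrative choice), an encoder model such as BERT turns text into a sequence of embeddings exposed as last_hidden_state:

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
model = AutoModel.from_pretrained("google-bert/bert-base-cased")

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model(**inputs)

# One embedding vector per input token: [batch_size, sequence_length, hidden_size]
print(outputs.last_hidden_state.shape)  # e.g. torch.Size([1, 6, 768])
```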
F
feature extraction
The process of selecting and transforming raw data into a set of features that are more informative and useful for machine learning algorithms. Some examples of feature extraction include transforming raw text into word embeddings and extracting important features such as edges or shapes from image/video data.
feed forward chunking
In each residual attention block in transformers, the self-attention layer is usually followed by 2 feed forward layers.
The intermediate embedding size of the feed forward layers is often bigger than the hidden size of the model (e.g., 3072 versus a hidden size of 768 for
google-bert/bert-base-uncased).
For an input of size [batch_size, sequence_length], the memory required to store the intermediate feed forward
embeddings [batch_size, sequence_length, config.intermediate_size] can account for a large fraction of the memory
use. The authors of Reformer: The Efficient Transformer noticed that since the
computation is independent of the sequence_length dimension, it is mathematically equivalent to compute the output
embeddings of both feed forward layers [batch_size, config.hidden_size]_0, ..., [batch_size, config.hidden_size]_n
individually and concat them afterward to [batch_size, sequence_length, config.hidden_size] with n = sequence_length, which trades increased computation time against reduced memory use, but yields a mathematically
equivalent result.
For models employing the function [apply_chunking_to_forward], the chunk_size defines the number of output
embeddings that are computed in parallel and thus defines the trade-off between memory and time complexity. If
chunk_size is set to 0, no feed forward chunking is done.
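The equivalence can be checked with a small standalone sketch (plain PyTorch, not the internal [apply_chunking_to_forward] implementation): applying the feed forward layers to sequence chunks and concatenating the results matches applying them to the full tensor, while only materializing a smaller intermediate embedding at a time.

```python
import torch
import torch.nn as nn

batch_size, sequence_length, hidden_size, intermediate_size = 2, 16, 8, 32

feed_forward = nn.Sequential(
    nn.Linear(hidden_size, intermediate_size),  # large intermediate embedding
    nn.GELU(),
    nn.Linear(intermediate_size, hidden_size),
)

hidden_states = torch.rand(batch_size, sequence_length, hidden_size)

# Full computation: materializes [batch_size, sequence_length, intermediate_size] at once
full_output = feed_forward(hidden_states)

# Chunked computation: process chunk_size positions at a time along the sequence dimension
chunk_size = 4
chunks = hidden_states.split(chunk_size, dim=1)
chunked_output = torch.cat([feed_forward(chunk) for chunk in chunks], dim=1)

print(torch.allclose(full_output, chunked_output, atol=1e-6))  # True
```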
finetuned models
Finetuning is a form of transfer learning which involves taking a pretrained model, freezing its weights, and replacing the output layer with a newly added model head. The model head is trained on your target dataset.
See the Fine-tune a pretrained model tutorial for more details, and learn how to fine-tune models with πŸ€— Transformers.
H
head
The model head refers to the last layer of a neural network that accepts the raw hidden states and projects them onto a different dimension. There is a different model head for each task. For example:
[GPT2ForSequenceClassification] is a sequence classification head - a linear layer - on top of the base [GPT2Model].
[ViTForImageClassification] is an image classification head - a linear layer on top of the final hidden state of the CLS token - on top of the base [ViTModel].
[Wav2Vec2ForCTC] is a language modeling head with CTC on top of the base [Wav2Vec2Model].
I
image patch
Vision-based Transformer models split an image into smaller patches which are linearly embedded, and then passed as a sequence to the model. You can find the patch_size - or resolution - of the model in its configuration.
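For example (the checkpoint name is an illustrative choice), the patch size can be read directly from the model configuration:

```python
from transformers import ViTConfig

config = ViTConfig.from_pretrained("google/vit-base-patch16-224")
print(config.patch_size)  # 16
print(config.image_size)  # 224 -> (224 / 16) ** 2 = 196 patches per image
```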
inference
Inference is the process of evaluating a model on new data after training is complete. See the Pipeline for inference tutorial to learn how to perform inference with πŸ€— Transformers.
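A minimal sketch using [pipeline], which downloads a default pretrained model for the task and runs inference on new text:

```python
from transformers import pipeline

# The pipeline picks a default pretrained model and tokenizer for the task
classifier = pipeline("sentiment-analysis")
print(classifier("This glossary is really helpful!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```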
input IDs
The input IDs are often the only required parameters to be passed to the model as input. They are token indices,
numerical representations of tokens building the sequences that will be used as input by the model.
Each tokenizer works differently but the underlying mechanism remains the same. Here's an example using the BERT
tokenizer, which is a WordPiece tokenizer:
```python
>>> from transformers import BertTokenizer

>>> tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-cased")
>>> sequence = "A Titan RTX has 24GB of VRAM"
```
The tokenizer takes care of splitting the sequence into tokens available in the tokenizer vocabulary.
```python
>>> tokenized_sequence = tokenizer.tokenize(sequence)
```
The tokens are either words or subwords. Here for instance, "VRAM" wasn't in the model vocabulary, so it's been split
into "V", "RA" and "M". To indicate those tokens are not separate words but parts of the same word, a double-hash prefix
is added for "RA" and "M":
```python
>>> print(tokenized_sequence)
['A', 'Titan', 'R', '##T', '##X', 'has', '24', '##GB', 'of', 'V', '##RA', '##M']
```
These tokens can then be converted into IDs which are understandable by the model. This can be done by directly feeding the sentence to the tokenizer, which leverages the Rust implementation of πŸ€— Tokenizers for peak performance.
```python
>>> inputs = tokenizer(sequence)
```
The tokenizer returns a dictionary with all the arguments necessary for its corresponding model to work properly. The
token indices are under the key input_ids:
```python
>>> encoded_sequence = inputs["input_ids"]
>>> print(encoded_sequence)
[101, 138, 18696, 155, 1942, 3190, 1144, 1572, 13745, 1104, 159, 9664, 2107, 102]
```
Note that the tokenizer automatically adds "special tokens" (if the associated model relies on them) which are special
IDs the model sometimes uses.
If we decode the previous sequence of ids,
```python
>>> decoded_sequence = tokenizer.decode(encoded_sequence)
```
we will see
```python
>>> print(decoded_sequence)
[CLS] A Titan RTX has 24GB of VRAM [SEP]
```
because this is the way a [BertModel] is going to expect its inputs.
L
labels
The labels are an optional argument which can be passed in order for the model to compute the loss itself. These labels
should be the expected prediction of the model: it will use its standard loss function to compute the loss between its
predictions and the expected value (the label).
These labels are different according to the model head, for example:
For sequence classification models ([BertForSequenceClassification]), the model expects a tensor of dimension (batch_size) with each value of the batch corresponding to the expected label of the entire sequence.
For token classification models ([BertForTokenClassification]), the model expects a tensor of dimension (batch_size, seq_length) with each value corresponding to the expected label of each individual token.
For masked language modeling ([BertForMaskedLM]), the model expects a tensor of dimension (batch_size, seq_length) with each value corresponding to the expected label of each individual token: the labels being the token ID for the masked token, and values to be ignored for the rest (usually -100).
For sequence to sequence tasks ([BartForConditionalGeneration], [MBartForConditionalGeneration]), the model expects a tensor of dimension (batch_size, tgt_seq_length) with each value corresponding to the target sequences associated with each input sequence. During training, both BART and T5 will make the appropriate decoder_input_ids and decoder attention masks internally. They usually do not need to be supplied. This does not apply to models leveraging the Encoder-Decoder framework.
For image classification models ([ViTForImageClassification]), the model expects a tensor of dimension (batch_size) with each value of the batch corresponding to the expected label of each individual image.
For semantic segmentation models ([SegformerForSemanticSegmentation]), the model expects a tensor of dimension (batch_size, height, width) with each value of the batch corresponding to the expected label of each individual pixel.
For object detection models ([DetrForObjectDetection]), the model expects a list of dictionaries with a class_labels and boxes key where each value of the batch corresponds to the expected label and number of bounding boxes of each individual image.
For automatic speech recognition models ([Wav2Vec2ForCTC]), the model expects a tensor of dimension (batch_size, target_length) with each value corresponding to the expected label of each individual token.
Each model's labels may be different, so be sure to always check the documentation of each model for more information
about their specific labels!
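As a minimal sketch for the sequence classification case above (the checkpoint, number of labels, and label value are illustrative assumptions), passing labels makes the model return the loss alongside the logits:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-cased", num_labels=2)

inputs = tokenizer("This is a short sequence.", return_tensors="pt")
labels = torch.tensor([1])  # one expected label for the whole sequence -> shape (batch_size,)

outputs = model(**inputs, labels=labels)
print(outputs.loss)    # scalar loss computed by the model head
print(outputs.logits)  # raw predictions of shape (batch_size, num_labels)
```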