Each model's labels may be different, so be sure to always check the documentation of each model for more information about their specific labels!
The base models ([BertModel]) do not accept labels, as these are the base transformer models, simply outputting features.
large language models (LLM)
A generic term that refers to transformer language models (GPT-3, BLOOM, OPT) that were trained on a large quantity of data. These models also tend to have a large number of learnable parameters (e.g. 175 billion for GPT-3).
M
masked language modeling (MLM)
A pretraining task where the model sees a corrupted version of the texts, usually done by masking some tokens randomly, and has to predict the original text.
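As a quick illustration, the fill-mask pipeline lets you try this objective at inference time; a minimal sketch, where the checkpoint is an assumption chosen for the example:

```python
# A minimal sketch of the masked language modeling objective at inference time;
# the checkpoint is an assumption chosen for illustration.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="google-bert/bert-base-uncased")
# The model proposes the most likely tokens for the masked position.
print(unmasker("Paris is the [MASK] of France."))
```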
multimodal
A task that combines text with another kind of input (for instance, images).
N
Natural language generation (NLG)
All tasks related to generating text (for instance, Write With Transformers, translation).
Natural language processing (NLP)
A generic way to say "deal with texts".
Natural language understanding (NLU)
All tasks related to understanding what is in a text (for instance, classifying the whole text or individual words).
P
pipeline
A pipeline in 🤗 Transformers is an abstraction referring to a series of steps that are executed in a specific order to preprocess and transform data and return a prediction from a model. Some example stages found in a pipeline might be data preprocessing, feature extraction, and normalization.
For more details, see Pipelines for inference.
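A minimal sketch of a pipeline call; the sentiment-analysis task is an assumption chosen for illustration, and a default checkpoint is downloaded when none is specified:

```python
# A minimal pipeline sketch; the task is an assumption chosen for illustration.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
# Preprocessing, the model forward pass, and post-processing all happen inside this call.
print(classifier("We are very happy to show you the 🤗 Transformers library."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```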
PipelineParallel (PP)
Parallelism technique in which the model is split up vertically (layer-level) across multiple GPUs, so that only one or several layers of the model are placed on a single GPU. Each GPU processes a different stage of the pipeline in parallel, working on a small chunk of the batch. Learn more about how PipelineParallel works here.
pixel values
A tensor of the numerical representations of an image that is passed to a model. The pixel values have a shape of [batch_size, num_channels, height, width], and are generated from an image processor.
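A minimal sketch of how an image processor produces them; the ViT checkpoint and the example image URL are assumptions chosen for illustration:

```python
# A minimal sketch, assuming a ViT checkpoint and an example image; other image
# processors produce pixel values the same way.
from PIL import Image
import requests
from transformers import AutoImageProcessor

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
inputs = image_processor(image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # [batch_size, num_channels, height, width]
```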
pooling
An operation that reduces a matrix into a smaller matrix, either by taking the maximum or average of the pooled dimension(s). Pooling layers are commonly found between convolutional layers to downsample the feature representation.
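A toy example of max pooling; PyTorch is an assumption here, the glossary itself is framework-agnostic:

```python
# A toy max pooling example in PyTorch (an assumption; any framework works the same way).
import torch

x = torch.arange(1.0, 17.0).reshape(1, 1, 4, 4)  # a 4x4 "feature map"
pooled = torch.nn.MaxPool2d(kernel_size=2)(x)    # each 2x2 block is reduced to its maximum
print(pooled.squeeze())
# tensor([[ 6.,  8.],
#         [14., 16.]])
```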
position IDs
Contrary to RNNs that have the position of each token embedded within them, transformers are unaware of the position of each token. Therefore, the position IDs (position_ids) are used by the model to identify each token's position in the list of tokens.
They are an optional parameter. If no position_ids are passed to the model, the IDs are automatically created as absolute positional embeddings.
Absolute positional embeddings are selected in the range [0, config.max_position_embeddings - 1]. Some models use other types of positional embeddings, such as sinusoidal position embeddings or relative position embeddings.
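A minimal sketch of passing explicit position IDs; the BERT checkpoint is an assumption, and omitting position_ids would give the same result here:

```python
# A minimal sketch, assuming a BERT checkpoint; passing position_ids explicitly is
# optional and mirrors what the model creates internally when it is omitted.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-cased")
model = BertModel.from_pretrained("google-bert/bert-base-cased")

inputs = tokenizer("HuggingFace is based in NYC", return_tensors="pt")
seq_length = inputs["input_ids"].shape[1]
position_ids = torch.arange(seq_length).unsqueeze(0)  # [0, 1, 2, ..., seq_length - 1]

outputs = model(**inputs, position_ids=position_ids)
```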
preprocessing
The task of preparing raw data into a format that can be easily consumed by machine learning models. For example, text is typically preprocessed by tokenization. To gain a better idea of what preprocessing looks like for other input types, check out the Preprocess tutorial.
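For text, a minimal sketch looks like this; the checkpoint and example sentences are assumptions:

```python
# A minimal text preprocessing sketch; the checkpoint and sentences are assumptions.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
batch = tokenizer(
    ["HuggingFace is based in NYC", "Preprocessing turns raw text into tensors."],
    padding=True,
    return_tensors="pt",
)
print(batch["input_ids"].shape)  # (batch_size, sequence_length)
```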
pretrained model
A model that has been pretrained on some data (for instance all of Wikipedia). Pretraining methods involve a self-supervised objective, which can be reading the text and trying to predict the next word (see causal language modeling) or masking some words and trying to predict them (see masked language modeling).
Speech and vision models have their own pretraining objectives. For example, Wav2Vec2 is a speech model pretrained on a contrastive task which requires the model to identify the "true" speech representation from a set of "false" speech representations. On the other hand, BEiT is a vision model pretrained on a masked image modeling task which masks some of the image patches and requires the model to predict the masked patches (similar to the masked language modeling objective).
R
recurrent neural network (RNN)
A type of model that uses a loop over a layer to process texts.
representation learning
A subfield of machine learning which focuses on learning meaningful representations of raw data. Some examples of representation learning techniques include word embeddings, autoencoders, and Generative Adversarial Networks (GANs).
S
sampling rate
A measurement in hertz of the number of samples (the audio signal) taken per second. The sampling rate is a result of discretizing a continuous signal such as speech.
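A quick worked example of the relationship between sampling rate, duration, and number of samples:

```python
# A quick illustration: at 16 kHz, one second of audio is 16,000 samples.
sampling_rate = 16_000        # samples per second (Hz)
duration_seconds = 2.5
num_samples = int(sampling_rate * duration_seconds)
print(num_samples)            # 40000
```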
self-attention
Each element of the input finds out which other elements of the input it should attend to.
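A toy scaled dot-product self-attention to make this concrete; PyTorch is an assumption, and the dimensions are arbitrary:

```python
# A toy scaled dot-product self-attention sketch in PyTorch (an assumption);
# the sequence length and hidden size are arbitrary.
import torch
import torch.nn.functional as F

hidden_size = 8
x = torch.randn(1, 5, hidden_size)                   # (batch, sequence_length, hidden_size)
w_q, w_k, w_v = (torch.nn.Linear(hidden_size, hidden_size) for _ in range(3))

q, k, v = w_q(x), w_k(x), w_v(x)
scores = q @ k.transpose(-2, -1) / hidden_size**0.5  # each position scores every other position
weights = F.softmax(scores, dim=-1)                  # how much each position attends to the others
output = weights @ v                                 # contextualized representations, (1, 5, 8)
```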
self-supervised learning
A category of machine learning techniques in which a model creates its own learning objective from unlabeled data. It differs from unsupervised learning and supervised learning in that the learning process is supervised, but not explicitly by the user.
One example of self-supervised learning is masked language modeling, where a model is passed sentences with a proportion of their tokens removed and learns to predict the missing tokens.
semi-supervised learning
A broad category of machine learning training techniques that leverages a small amount of labeled data together with a larger quantity of unlabeled data to improve the accuracy of a model, unlike supervised learning and unsupervised learning.
An example of a semi-supervised learning approach is "self-training", in which a model is trained on labeled data and then used to make predictions on the unlabeled data. The portion of the unlabeled data that the model predicts with the most confidence gets added to the labeled dataset and used to retrain the model.
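A small, self-contained sketch of this self-training loop using scikit-learn, which is an assumption (the glossary does not prescribe a library); the dataset, base classifier, and confidence threshold are all illustrative choices:

```python
# A self-training sketch with scikit-learn (an assumption); the dataset, base
# classifier, and confidence threshold are illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=200, random_state=0)
y_partial = y.copy()
y_partial[50:] = -1  # scikit-learn marks unlabeled examples with -1

# Confident pseudo-labels on the unlabeled pool are added and the base model is retrained.
self_training = SelfTrainingClassifier(LogisticRegression(), threshold=0.9)
self_training.fit(X, y_partial)
print(self_training.score(X, y))
```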
sequence-to-sequence (seq2seq)
Models that generate a new sequence from an input, like translation models, or summarization models (such as Bart or T5).
Sharded DDP
Another name for the foundational ZeRO concept as used by various other implementations of ZeRO.
stride
In convolution or pooling, the stride refers to the distance the kernel is moved over a matrix. A stride of 1 means the kernel is moved one pixel over at a time, and a stride of 2 means the kernel is moved two pixels over at a time.
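A toy example of how the stride changes the output size; PyTorch is an assumption:

```python
# A toy example in PyTorch (an assumption) showing how stride affects the output size.
import torch

x = torch.randn(1, 1, 8, 8)                                    # an 8x8 input
conv_stride_1 = torch.nn.Conv2d(1, 1, kernel_size=3, stride=1)
conv_stride_2 = torch.nn.Conv2d(1, 1, kernel_size=3, stride=2)
print(conv_stride_1(x).shape)  # torch.Size([1, 1, 6, 6]): the kernel moves one pixel at a time
print(conv_stride_2(x).shape)  # torch.Size([1, 1, 3, 3]): the kernel moves two pixels at a time
```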
supervised learning
A form of model training that directly uses labeled data to correct and instruct model performance. Data is fed into the model being trained, and its predictions are compared to the known labels. The model updates its weights based on how incorrect its predictions were, and the process is repeated to optimize model performance.
T
Tensor Parallelism (TP)
Parallelism technique for training on multiple GPUs in which each tensor is split up into multiple chunks, so instead of having the whole tensor reside on a single GPU, each shard of the tensor resides on its designated GPU. Shards get processed separately and in parallel on different GPUs, and the results are synced at the end of the processing step. This is what is sometimes called horizontal parallelism, as the splitting happens at the horizontal level.
Learn more about Tensor Parallelism here.
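A toy, single-device illustration of the idea (not a real multi-GPU setup): splitting a weight matrix column-wise, multiplying each shard independently, and concatenating the partial results reproduces the unsharded computation.

```python
# A toy, single-device illustration of column-wise tensor sharding (PyTorch is an
# assumption); in real tensor parallelism each shard would live on its own GPU.
import torch

x = torch.randn(4, 16)                      # activations
w = torch.randn(16, 32)                     # a weight matrix to shard
shards = torch.chunk(w, chunks=2, dim=1)    # each column shard would go to a different device
partial = [x @ shard for shard in shards]   # processed separately and in parallel
combined = torch.cat(partial, dim=1)        # results are synced/concatenated at the end
assert torch.allclose(combined, x @ w, atol=1e-5)
```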
token
A part of a sentence, usually a word, but can also be a subword (uncommon words are often split into subwords) or a punctuation symbol.
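A minimal sketch of subword tokenization; the checkpoint is an assumption, and the exact split depends on its vocabulary:

```python
# A minimal tokenization sketch; the checkpoint is an assumption and the exact
# subword split depends on its vocabulary.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
print(tokenizer.tokenize("Tokenization splits uncommon words into subwords."))
# Uncommon words come back as several '##'-prefixed word pieces.
```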
token type IDs
Some models' purpose is to do classification on pairs of sentences or question answering.
These require two different sequences to be joined in a single "input_ids" entry, which usually is performed with the help of special tokens, such as the classifier ([CLS]) and separator ([SEP]) tokens. For example, the BERT model builds its two sequence input as such:

```python
[CLS] SEQUENCE_A [SEP] SEQUENCE_B [SEP]
```

We can use our tokenizer to automatically generate such a sentence by passing the two sequences to tokenizer as two arguments (and not a list, like before) like this:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-cased")
sequence_a = "HuggingFace is based in NYC"
sequence_b = "Where is HuggingFace based?"

encoded_dict = tokenizer(sequence_a, sequence_b)
decoded = tokenizer.decode(encoded_dict["input_ids"])
```

which will return:

```python
print(decoded)
# [CLS] HuggingFace is based in NYC [SEP] Where is HuggingFace based? [SEP]
```

This is enough for some models to understand where one sequence ends and where another begins. However, other models, such as BERT, also deploy token type IDs (also called segment IDs). They are represented as a binary mask identifying the two types of sequence in the model.
The tokenizer returns this mask as the "token_type_ids" entry:

```python
print(encoded_dict["token_type_ids"])
# [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```

The first sequence, the "context" used for the question, has all its tokens represented by a 0, whereas the second sequence, corresponding to the "question", has all its tokens represented by a 1.
Some models, like [XLNetModel], use an additional token represented by a 2.
transfer learning
A technique that involves taking a pretrained model and adapting it to a dataset specific to your task. Instead of training a model from scratch, you can leverage knowledge obtained from an existing model as a starting point. This speeds up the learning process and reduces the amount of training data needed.
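A minimal sketch of the usual starting point in 🤗 Transformers; the checkpoint and number of labels are assumptions:

```python
# A minimal transfer learning sketch; the checkpoint and num_labels are assumptions.
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "google-bert/bert-base-cased", num_labels=2
)
# The pretrained encoder weights are reused; only the new classification head is
# randomly initialized, and both are then fine-tuned on the task-specific dataset.
```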
transformer
A deep learning model architecture based on self-attention.
U
unsupervised learning
A form of model training in which data provided to the model is not labeled. Unsupervised learning techniques leverage statistical information of the data distribution to find patterns useful for the task at hand.
Z
Zero Redundancy Optimizer (ZeRO)
Parallelism technique which performs sharding of the tensors somewhat similar to TensorParallel, except the whole tensor gets reconstructed in time for a forward or backward computation, therefore the model doesn't need to be modified. This method also supports various offloading techniques to compensate for limited GPU memory.
Learn more about ZeRO here.