
wav2vec2

mindnlp.transformers.models.wav2vec2.configuration_wav2vec2

Wav2Vec2 model configuration

mindnlp.transformers.models.wav2vec2.configuration_wav2vec2.Wav2Vec2Config

Bases: PretrainedConfig

This is the configuration class to store the configuration of a [Wav2Vec2Model]. It is used to instantiate a Wav2Vec2 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Wav2Vec2 facebook/wav2vec2-base-960h architecture.

Configuration objects inherit from [PretrainedConfig] and can be used to control the model outputs. Read the documentation from [PretrainedConfig] for more information.
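
As with any [PretrainedConfig], an instance can be serialized to disk and reloaded. A brief sketch (the directory name is illustrative only):

```python
from mindnlp.transformers.models.wav2vec2.configuration_wav2vec2 import Wav2Vec2Config

config = Wav2Vec2Config(hidden_dropout=0.2)

# Methods inherited from PretrainedConfig: write config.json and load it back.
config.save_pretrained("./my_wav2vec2_config")  # illustrative path
reloaded = Wav2Vec2Config.from_pretrained("./my_wav2vec2_config")
assert reloaded.hidden_dropout == 0.2
```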

PARAMETER DESCRIPTION
vocab_size

Vocabulary size of the Wav2Vec2 model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling [Wav2Vec2Model] or [TFWav2Vec2Model].

TYPE: `int`, *optional*, defaults to 32 DEFAULT: 32

hidden_size

Dimensionality of the encoder layers and the pooler layer.

TYPE: `int`, *optional*, defaults to 768 DEFAULT: 768

num_hidden_layers

Number of hidden layers in the Transformer encoder.

TYPE: `int`, *optional*, defaults to 12 DEFAULT: 12

num_attention_heads

Number of attention heads for each attention layer in the Transformer encoder.

TYPE: `int`, *optional*, defaults to 12 DEFAULT: 12

intermediate_size

Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.

TYPE: `int`, *optional*, defaults to 3072 DEFAULT: 3072

hidden_act

The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported.

TYPE: `str` or `function`, *optional*, defaults to `"gelu"` DEFAULT: 'gelu'

hidden_dropout

The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.

TYPE: `float`, *optional*, defaults to 0.1 DEFAULT: 0.1

activation_dropout

The dropout ratio for activations inside the fully connected layer.

TYPE: `float`, *optional*, defaults to 0.1 DEFAULT: 0.1

attention_dropout

The dropout ratio for the attention probabilities.

TYPE: `float`, *optional*, defaults to 0.1 DEFAULT: 0.1

final_dropout

The dropout probability for the final projection layer of [Wav2Vec2ForCTC].

TYPE: `float`, *optional*, defaults to 0.1 DEFAULT: 0.1

layerdrop

The LayerDrop probability. See the LayerDrop paper for more details.

TYPE: `float`, *optional*, defaults to 0.1 DEFAULT: 0.1

initializer_range

The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

TYPE: `float`, *optional*, defaults to 0.02 DEFAULT: 0.02

layer_norm_eps

The epsilon used by the layer normalization layers.

TYPE: `float`, *optional*, defaults to 1e-05 DEFAULT: 1e-05

feat_extract_norm

The norm to be applied to the 1D convolutional layers in the feature encoder. One of "group" for group normalization of only the first 1D convolutional layer or "layer" for layer normalization of all 1D convolutional layers.

TYPE: `str`, *optional*, defaults to `"group"` DEFAULT: 'group'

feat_proj_dropout

The dropout probability for the output of the feature encoder.

TYPE: `float`, *optional*, defaults to 0.0 DEFAULT: 0.0

feat_extract_activation

The non-linear activation function (function or string) in the 1D convolutional layers of the feature extractor. If string, "gelu", "relu", "selu" and "gelu_new" are supported.

TYPE: `str`, *optional*, defaults to `"gelu"` DEFAULT: 'gelu'

feat_quantizer_dropout

The dropout probability for quantized feature encoder states.

TYPE: `float`, *optional*, defaults to 0.0 DEFAULT: 0.0

conv_dim

A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the feature encoder. The length of conv_dim defines the number of 1D convolutional layers.

TYPE: `Tuple[int]` or `List[int]`, *optional*, defaults to `(512, 512, 512, 512, 512, 512, 512)` DEFAULT: (512, 512, 512, 512, 512, 512, 512)

conv_stride

A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length of conv_stride defines the number of convolutional layers and has to match the length of conv_dim.

TYPE: `Tuple[int]` or `List[int]`, *optional*, defaults to `(5, 2, 2, 2, 2, 2, 2)` DEFAULT: (5, 2, 2, 2, 2, 2, 2)

conv_kernel

A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The length of conv_kernel defines the number of convolutional layers and has to match the length of conv_dim.

TYPE: `Tuple[int]` or `List[int]`, *optional*, defaults to `(10, 3, 3, 3, 3, 2, 2)` DEFAULT: (10, 3, 3, 3, 3, 2, 2)
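
The lengths of conv_dim, conv_stride and conv_kernel must match; together they determine how many waveform samples map to one frame of the feature encoder output. A minimal sketch (illustrative only, not part of the library) of the resulting output length with the default 7-layer encoder and no padding:

```python
from mindnlp.transformers.models.wav2vec2.configuration_wav2vec2 import Wav2Vec2Config

config = Wav2Vec2Config()

def feature_encoder_output_length(num_samples, config):
    # Each 1D conv layer (no padding) shortens the sequence:
    # out = (in - kernel) // stride + 1
    length = num_samples
    for kernel, stride in zip(config.conv_kernel, config.conv_stride):
        length = (length - kernel) // stride + 1
    return length

# 1 second of 16 kHz audio -> 49 frames with the default kernels and strides
print(feature_encoder_output_length(16000, config))
```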

conv_bias

Whether the 1D convolutional layers have a bias.

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

num_conv_pos_embeddings

Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional embeddings layer.

TYPE: `int`, *optional*, defaults to 128 DEFAULT: 128

num_conv_pos_embedding_groups

Number of groups of 1D convolutional positional embeddings layer.

TYPE: `int`, *optional*, defaults to 16 DEFAULT: 16

do_stable_layer_norm

Whether to apply the stable layer norm architecture of the Transformer encoder. do_stable_layer_norm is True corresponds to applying layer norm before the attention layer, whereas do_stable_layer_norm is False corresponds to applying layer norm after the attention layer.

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

apply_spec_augment

Whether to apply SpecAugment data augmentation to the outputs of the feature encoder. For reference see SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition.

TYPE: `bool`, *optional*, defaults to `True` DEFAULT: True

mask_time_prob

Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking procedure generates mask_time_prob*len(time_axis)/mask_time_length independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, mask_time_prob should be prob_vector_start*mask_time_length. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if apply_spec_augment is True. A sketch of this calculation follows the masking parameters below.

TYPE: `float`, *optional*, defaults to 0.05 DEFAULT: 0.05

mask_time_length

Length of vector span along the time axis.

TYPE: `int`, *optional*, defaults to 10 DEFAULT: 10

mask_time_min_masks

The minimum number of masks of length mask_time_length generated along the time axis, each time step, irrespectively of mask_time_prob. Only relevant if mask_time_prob*len(time_axis)/mask_time_length < mask_time_min_masks.

TYPE: `int`, *optional*, defaults to 2 DEFAULT: 2

mask_feature_prob

Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The masking procedure generates mask_feature_prob*len(feature_axis)/mask_feature_length independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, mask_feature_prob should be prob_vector_start*mask_feature_length. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if apply_spec_augment is True.

TYPE: `float`, *optional*, defaults to 0.0 DEFAULT: 0.0

mask_feature_length

Length of vector span along the feature axis.

TYPE: `int`, *optional*, defaults to 10 DEFAULT: 10

mask_feature_min_masks

The minimum number of masks of length mask_feature_length generated along the feature axis, each time step, irrespectively of mask_feature_prob. Only relevant if mask_feature_prob*len(feature_axis)/mask_feature_length < mask_feature_min_masks.

TYPE: `int`, *optional*, defaults to 0 DEFAULT: 0
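
The masking parameters above interact as documented in mask_time_prob. A minimal sketch (it mirrors the documented formula only, not the library's internal masking code) of how the number of time masks is derived from mask_time_prob, mask_time_length and mask_time_min_masks:

```python
from mindnlp.transformers.models.wav2vec2.configuration_wav2vec2 import Wav2Vec2Config

config = Wav2Vec2Config()  # mask_time_prob=0.05, mask_time_length=10, mask_time_min_masks=2

def expected_num_time_masks(sequence_length, config):
    # Number of independent masks placed along the time axis; each mask covers
    # mask_time_length consecutive frames, so roughly mask_time_prob of all
    # frames are masked (ignoring overlap between masks).
    num_masks = int(config.mask_time_prob * sequence_length / config.mask_time_length)
    # Never place fewer than mask_time_min_masks masks.
    return max(num_masks, config.mask_time_min_masks)

# For a 49-frame sequence: int(0.05 * 49 / 10) = 0, so the minimum of 2 applies.
print(expected_num_time_masks(49, config))
```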

num_codevectors_per_group

Number of entries in each quantization codebook (group).

TYPE: `int`, *optional*, defaults to 320 DEFAULT: 320

num_codevector_groups

Number of codevector groups for product codevector quantization.

TYPE: `int`, *optional*, defaults to 2 DEFAULT: 2

contrastive_logits_temperature

The temperature kappa in the contrastive loss.

TYPE: `float`, *optional*, defaults to 0.1 DEFAULT: 0.1

feat_quantizer_dropout

The dropout probability for the output of the feature encoder that's used by the quantizer.

TYPE: `float`, *optional*, defaults to 0.0 DEFAULT: 0.0

num_negatives

Number of negative samples for the contrastive loss.

TYPE: `int`, *optional*, defaults to 100 DEFAULT: 100

codevector_dim

Dimensionality of the quantized feature vectors.

TYPE: `int`, *optional*, defaults to 256 DEFAULT: 256

proj_codevector_dim

Dimensionality of the final projection of both the quantized and the transformer features.

TYPE: `int`, *optional*, defaults to 256 DEFAULT: 256

diversity_loss_weight

The weight of the codebook diversity loss component.

TYPE: `float`, *optional*, defaults to 0.1 DEFAULT: 0.1

ctc_loss_reduction

Specifies the reduction to apply to the output of torch.nn.CTCLoss. Only relevant when training an instance of [Wav2Vec2ForCTC].

TYPE: `str`, *optional*, defaults to `"sum"` DEFAULT: 'sum'

ctc_zero_infinity

Whether to zero infinite losses and the associated gradients of torch.nn.CTCLoss. Infinite losses mainly occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance of [Wav2Vec2ForCTC].

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

use_weighted_layer_sum

Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an instance of [Wav2Vec2ForSequenceClassification].

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

classifier_proj_size

Dimensionality of the projection before token mean-pooling for classification.

TYPE: `int`, *optional*, defaults to 256 DEFAULT: 256

tdnn_dim

A tuple of integers defining the number of output channels of each 1D convolutional layer in the TDNN module of the XVector model. The length of tdnn_dim defines the number of TDNN layers.

TYPE: `Tuple[int]` or `List[int]`, *optional*, defaults to `(512, 512, 512, 512, 1500)` DEFAULT: (512, 512, 512, 512, 1500)

tdnn_kernel

A tuple of integers defining the kernel size of each 1D convolutional layer in the TDNN module of the XVector model. The length of tdnn_kernel has to match the length of tdnn_dim.

TYPE: `Tuple[int]` or `List[int]`, *optional*, defaults to `(5, 3, 3, 1, 1)` DEFAULT: (5, 3, 3, 1, 1)

tdnn_dilation

A tuple of integers defining the dilation factor of each 1D convolutional layer in TDNN module of the XVector model. The length of tdnn_dilation has to match the length of tdnn_dim.

TYPE: `Tuple[int]` or `List[int]`, *optional*, defaults to `(1, 2, 3, 1, 1)` DEFAULT: (1, 2, 3, 1, 1)

xvector_output_dim

Dimensionality of the XVector embedding vectors.

TYPE: `int`, *optional*, defaults to 512 DEFAULT: 512

add_adapter

Whether a convolutional network should be stacked on top of the Wav2Vec2 Encoder. Can be very useful for warm-starting Wav2Vec2 for SpeechEncoderDecoder models.

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

adapter_kernel_size

Kernel size of the convolutional layers in the adapter network. Only relevant if add_adapter is True.

TYPE: `int`, *optional*, defaults to 3 DEFAULT: 3

adapter_stride

Stride of the convolutional layers in the adapter network. Only relevant if add_adapter is True.

TYPE: `int`, *optional*, defaults to 2 DEFAULT: 2

num_adapter_layers

Number of convolutional layers that should be used in the adapter network. Only relevant if add_adapter is True.

TYPE: `int`, *optional*, defaults to 3 DEFAULT: 3

adapter_attn_dim

Dimension of the attention adapter weights to be used in each attention block. An example of a model using attention adapters is facebook/mms-1b-all.

TYPE: `int`, *optional* DEFAULT: None

output_hidden_size

Dimensionality of the encoder output layer. If not defined, this defaults to hidden_size. Only relevant if add_adapter is True.

TYPE: `int`, *optional* DEFAULT: None

Example
>>> from transformers import Wav2Vec2Config, Wav2Vec2Model
...
>>> # Initializing a Wav2Vec2 facebook/wav2vec2-base-960h style configuration
>>> configuration = Wav2Vec2Config()
...
>>> # Initializing a model (with random weights) from the facebook/wav2vec2-base-960h style configuration
>>> model = Wav2Vec2Model(configuration)
...
>>> # Accessing the model configuration
>>> configuration = model.config
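
Any of the parameters above can also be overridden at construction time. A hedged sketch (the values are illustrative only) of building a non-default configuration, e.g. with a larger CTC vocabulary and the convolutional adapter enabled:

```python
from mindnlp.transformers.models.wav2vec2.configuration_wav2vec2 import Wav2Vec2Config

# Override only the parameters that differ from the facebook/wav2vec2-base-960h defaults.
custom_config = Wav2Vec2Config(
    vocab_size=39,              # e.g. a character vocabulary for CTC fine-tuning
    ctc_loss_reduction="mean",
    add_adapter=True,           # stack the convolutional adapter on top of the encoder
    adapter_stride=2,
    output_hidden_size=512,     # adapter output dimensionality
)

print(custom_config.num_feat_extract_layers)  # 7, derived from len(conv_dim)
print(custom_config.output_hidden_size)       # 512
```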
Source code in mindnlp\transformers\models\wav2vec2\configuration_wav2vec2.py
class Wav2Vec2Config(PretrainedConfig):
    r"""
    This is the configuration class to store the configuration of a [`Wav2Vec2Model`]. It is used to instantiate a
    Wav2Vec2 model according to the specified arguments, defining the model architecture. Instantiating a configuration
    with the defaults will yield a similar configuration to that of the Wav2Vec2
    [facebook/wav2vec2-base-960h](https://hf-mirror.com/facebook/wav2vec2-base-960h) architecture.

    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
    documentation from [`PretrainedConfig`] for more information.


    Args:
        vocab_size (`int`, *optional*, defaults to 32):
            Vocabulary size of the Wav2Vec2 model. Defines the number of different tokens that can be represented by
            the `inputs_ids` passed when calling [`Wav2Vec2Model`] or [`TFWav2Vec2Model`].
        hidden_size (`int`, *optional*, defaults to 768):
            Dimensionality of the encoder layers and the pooler layer.
        num_hidden_layers (`int`, *optional*, defaults to 12):
            Number of hidden layers in the Transformer encoder.
        num_attention_heads (`int`, *optional*, defaults to 12):
            Number of attention heads for each attention layer in the Transformer encoder.
        intermediate_size (`int`, *optional*, defaults to 3072):
            Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
        hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
            The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
            `"relu"`, `"selu"` and `"gelu_new"` are supported.
        hidden_dropout (`float`, *optional*, defaults to 0.1):
            The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
        activation_dropout (`float`, *optional*, defaults to 0.1):
            The dropout ratio for activations inside the fully connected layer.
        attention_dropout (`float`, *optional*, defaults to 0.1):
            The dropout ratio for the attention probabilities.
        final_dropout (`float`, *optional*, defaults to 0.1):
            The dropout probability for the final projection layer of [`Wav2Vec2ForCTC`].
        layerdrop (`float`, *optional*, defaults to 0.1):
            The LayerDrop probability. See the [LayerDrop paper](see https://arxiv.org/abs/1909.11556) for more
            details.
        initializer_range (`float`, *optional*, defaults to 0.02):
            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
        layer_norm_eps (`float`, *optional*, defaults to 1e-05):
            The epsilon used by the layer normalization layers.
        feat_extract_norm (`str`, *optional*, defaults to `"group"`):
            The norm to be applied to 1D convolutional layers in feature encoder. One of `"group"` for group
            normalization of only the first 1D convolutional layer or `"layer"` for layer normalization of all 1D
            convolutional layers.
        feat_proj_dropout (`float`, *optional*, defaults to 0.0):
            The dropout probability for output of the feature encoder.
        feat_extract_activation (`str`, *optional*, defaults to `"gelu"`):
            The non-linear activation function (function or string) in the 1D convolutional layers of the feature
            extractor. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
        feat_quantizer_dropout (`float`, *optional*, defaults to 0.0):
            The dropout probability for quantized feature encoder states.
        conv_dim (`Tuple[int]` or `List[int]`, *optional*, defaults to `(512, 512, 512, 512, 512, 512, 512)`):
            A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the
            feature encoder. The length of *conv_dim* defines the number of 1D convolutional layers.
        conv_stride (`Tuple[int]` or `List[int]`, *optional*, defaults to `(5, 2, 2, 2, 2, 2, 2)`):
            A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length
            of *conv_stride* defines the number of convolutional layers and has to match the length of *conv_dim*.
        conv_kernel (`Tuple[int]` or `List[int]`, *optional*, defaults to `(10, 3, 3, 3, 3, 2, 2)`):
            A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The
            length of *conv_kernel* defines the number of convolutional layers and has to match the length of
            *conv_dim*.
        conv_bias (`bool`, *optional*, defaults to `False`):
            Whether the 1D convolutional layers have a bias.
        num_conv_pos_embeddings (`int`, *optional*, defaults to 128):
            Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional
            embeddings layer.
        num_conv_pos_embedding_groups (`int`, *optional*, defaults to 16):
            Number of groups of 1D convolutional positional embeddings layer.
        do_stable_layer_norm (`bool`, *optional*, defaults to `False`):
            Whether to apply *stable* layer norm architecture of the Transformer encoder. `do_stable_layer_norm is
            True` corresponds to applying layer norm before the attention layer, whereas `do_stable_layer_norm is
            False` corresponds to applying layer norm after the attention layer.
        apply_spec_augment (`bool`, *optional*, defaults to `True`):
            Whether to apply *SpecAugment* data augmentation to the outputs of the feature encoder. For reference see
            [SpecAugment: A Simple Data Augmentation Method for Automatic Speech
            Recognition](https://arxiv.org/abs/1904.08779).
        mask_time_prob (`float`, *optional*, defaults to 0.05):
            Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking
            procedure generates `mask_time_prob*len(time_axis)/mask_time_length` independent masks over the axis. If
            reasoning from the probability of each feature vector to be chosen as the start of the vector span to be
            masked, *mask_time_prob* should be `prob_vector_start*mask_time_length`. Note that overlap may decrease the
            actual percentage of masked vectors. This is only relevant if `apply_spec_augment is True`.
        mask_time_length (`int`, *optional*, defaults to 10):
            Length of vector span along the time axis.
        mask_time_min_masks (`int`, *optional*, defaults to 2):
            The minimum number of masks of length `mask_time_length` generated along the time axis, each time step,
            irrespectively of `mask_time_prob`. Only relevant if `mask_time_prob*len(time_axis)/mask_time_length <
            mask_time_min_masks`.
        mask_feature_prob (`float`, *optional*, defaults to 0.0):
            Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The
            masking procedure generates `mask_feature_prob*len(feature_axis)/mask_feature_length` independent masks over
            the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector
            span to be masked, *mask_feature_prob* should be `prob_vector_start*mask_feature_length`. Note that overlap
            may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment is
            True`.
        mask_feature_length (`int`, *optional*, defaults to 10):
            Length of vector span along the feature axis.
        mask_feature_min_masks (`int`, *optional*, defaults to 0):
            The minimum number of masks of length `mask_feature_length` generated along the feature axis, each time
            step, irrespectively of `mask_feature_prob`. Only relevant if
            `mask_feature_prob*len(feature_axis)/mask_feature_length < mask_feature_min_masks`.
        num_codevectors_per_group (`int`, *optional*, defaults to 320):
            Number of entries in each quantization codebook (group).
        num_codevector_groups (`int`, *optional*, defaults to 2):
            Number of codevector groups for product codevector quantization.
        contrastive_logits_temperature (`float`, *optional*, defaults to 0.1):
            The temperature *kappa* in the contrastive loss.
        feat_quantizer_dropout (`float`, *optional*, defaults to 0.0):
            The dropout probability for the output of the feature encoder that's used by the quantizer.
        num_negatives (`int`, *optional*, defaults to 100):
            Number of negative samples for the contrastive loss.
        codevector_dim (`int`, *optional*, defaults to 256):
            Dimensionality of the quantized feature vectors.
        proj_codevector_dim (`int`, *optional*, defaults to 256):
            Dimensionality of the final projection of both the quantized and the transformer features.
        diversity_loss_weight (`float`, *optional*, defaults to 0.1):
            The weight of the codebook diversity loss component.
        ctc_loss_reduction (`str`, *optional*, defaults to `"sum"`):
            Specifies the reduction to apply to the output of `torch.nn.CTCLoss`. Only relevant when training an
            instance of [`Wav2Vec2ForCTC`].
        ctc_zero_infinity (`bool`, *optional*, defaults to `False`):
            Whether to zero infinite losses and the associated gradients of `torch.nn.CTCLoss`. Infinite losses mainly
            occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance
            of [`Wav2Vec2ForCTC`].
        use_weighted_layer_sum (`bool`, *optional*, defaults to `False`):
            Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an
            instance of [`Wav2Vec2ForSequenceClassification`].
        classifier_proj_size (`int`, *optional*, defaults to 256):
            Dimensionality of the projection before token mean-pooling for classification.
        tdnn_dim (`Tuple[int]` or `List[int]`, *optional*, defaults to `(512, 512, 512, 512, 1500)`):
            A tuple of integers defining the number of output channels of each 1D convolutional layer in the *TDNN*
            module of the *XVector* model. The length of *tdnn_dim* defines the number of *TDNN* layers.
        tdnn_kernel (`Tuple[int]` or `List[int]`, *optional*, defaults to `(5, 3, 3, 1, 1)`):
            A tuple of integers defining the kernel size of each 1D convolutional layer in the *TDNN* module of the
            *XVector* model. The length of *tdnn_kernel* has to match the length of *tdnn_dim*.
        tdnn_dilation (`Tuple[int]` or `List[int]`, *optional*, defaults to `(1, 2, 3, 1, 1)`):
            A tuple of integers defining the dilation factor of each 1D convolutional layer in *TDNN* module of the
            *XVector* model. The length of *tdnn_dilation* has to match the length of *tdnn_dim*.
        xvector_output_dim (`int`, *optional*, defaults to 512):
            Dimensionality of the *XVector* embedding vectors.
        add_adapter (`bool`, *optional*, defaults to `False`):
            Whether a convolutional network should be stacked on top of the Wav2Vec2 Encoder. Can be very useful for
            warm-starting Wav2Vec2 for SpeechEncoderDecoder models.
        adapter_kernel_size (`int`, *optional*, defaults to 3):
            Kernel size of the convolutional layers in the adapter network. Only relevant if `add_adapter is True`.
        adapter_stride (`int`, *optional*, defaults to 2):
            Stride of the convolutional layers in the adapter network. Only relevant if `add_adapter is True`.
        num_adapter_layers (`int`, *optional*, defaults to 3):
            Number of convolutional layers that should be used in the adapter network. Only relevant if `add_adapter is
            True`.
        adapter_attn_dim (`int`, *optional*):
            Dimension of the attention adapter weights to be used in each attention block. An example of a model using
            attention adapters is [facebook/mms-1b-all](https://hf-mirror.com/facebook/mms-1b-all).
        output_hidden_size (`int`, *optional*):
            Dimensionality of the encoder output layer. If not defined, this defaults to *hidden_size*. Only relevant
            if `add_adapter is True`.

    Example:
        ```python
        >>> from transformers import Wav2Vec2Config, Wav2Vec2Model
        ...
        >>> # Initializing a Wav2Vec2 facebook/wav2vec2-base-960h style configuration
        >>> configuration = Wav2Vec2Config()
        ...
        >>> # Initializing a model (with random weights) from the facebook/wav2vec2-base-960h style configuration
        >>> model = Wav2Vec2Model(configuration)
        ...
        >>> # Accessing the model configuration
        >>> configuration = model.config
        ```
    """
    model_type = "wav2vec2"

    def __init__(
        self,
        vocab_size=32,
        hidden_size=768,
        num_hidden_layers=12,
        num_attention_heads=12,
        intermediate_size=3072,
        hidden_act="gelu",
        hidden_dropout=0.1,
        activation_dropout=0.1,
        attention_dropout=0.1,
        feat_proj_dropout=0.0,
        feat_quantizer_dropout=0.0,
        final_dropout=0.1,
        layerdrop=0.1,
        initializer_range=0.02,
        layer_norm_eps=1e-5,
        feat_extract_norm="group",
        feat_extract_activation="gelu",
        conv_dim=(512, 512, 512, 512, 512, 512, 512),
        conv_stride=(5, 2, 2, 2, 2, 2, 2),
        conv_kernel=(10, 3, 3, 3, 3, 2, 2),
        conv_bias=False,
        num_conv_pos_embeddings=128,
        num_conv_pos_embedding_groups=16,
        do_stable_layer_norm=False,
        apply_spec_augment=True,
        mask_time_prob=0.05,
        mask_time_length=10,
        mask_time_min_masks=2,
        mask_feature_prob=0.0,
        mask_feature_length=10,
        mask_feature_min_masks=0,
        num_codevectors_per_group=320,
        num_codevector_groups=2,
        contrastive_logits_temperature=0.1,
        num_negatives=100,
        codevector_dim=256,
        proj_codevector_dim=256,
        diversity_loss_weight=0.1,
        ctc_loss_reduction="sum",
        ctc_zero_infinity=False,
        use_weighted_layer_sum=False,
        classifier_proj_size=256,
        tdnn_dim=(512, 512, 512, 512, 1500),
        tdnn_kernel=(5, 3, 3, 1, 1),
        tdnn_dilation=(1, 2, 3, 1, 1),
        xvector_output_dim=512,
        pad_token_id=0,
        bos_token_id=1,
        eos_token_id=2,
        add_adapter=False,
        adapter_kernel_size=3,
        adapter_stride=2,
        num_adapter_layers=3,
        output_hidden_size=None,
        adapter_attn_dim=None,
        **kwargs,
    ):
        """
        Initializes a new instance of the Wav2Vec2Config class.

        Args:
            self: The class instance.
            vocab_size (int, optional): The size of the vocabulary. Defaults to 32.
            hidden_size (int, optional): The size of the hidden layers. Defaults to 768.
            num_hidden_layers (int, optional): The number of hidden layers. Defaults to 12.
            num_attention_heads (int, optional): The number of attention heads. Defaults to 12.
            intermediate_size (int, optional): The size of the intermediate layers. Defaults to 3072.
            hidden_act (str, optional): The activation function for the hidden layers. Defaults to 'gelu'.
            hidden_dropout (float, optional): The dropout rate for the hidden layers. Defaults to 0.1.
            activation_dropout (float, optional): The dropout rate for the activation function. Defaults to 0.1.
            attention_dropout (float, optional): The dropout rate for the attention mechanism. Defaults to 0.1.
            feat_proj_dropout (float, optional): The dropout rate for the feature projection. Defaults to 0.0.
            feat_quantizer_dropout (float, optional): The dropout rate for the feature quantizer. Defaults to 0.0.
            final_dropout (float, optional): The final dropout rate. Defaults to 0.1.
            layerdrop (float, optional): The layer dropout rate. Defaults to 0.1.
            initializer_range (float, optional): The range for weight initialization. Defaults to 0.02.
            layer_norm_eps (float, optional): The epsilon value for layer normalization. Defaults to 1e-05.
            feat_extract_norm (str, optional): The normalization method for feature extraction. Defaults to 'group'.
            feat_extract_activation (str, optional): The activation function for feature extraction. Defaults to 'gelu'.
            conv_dim (tuple, optional): The dimensions for convolutional layers. Defaults to (512, 512, 512, 512, 512, 512, 512).
            conv_stride (tuple, optional): The stride for convolutional layers. Defaults to (5, 2, 2, 2, 2, 2, 2).
            conv_kernel (tuple, optional): The kernel size for convolutional layers. Defaults to (10, 3, 3, 3, 3, 2, 2).
            conv_bias (bool, optional): Whether to include bias in convolutional layers. Defaults to False.
            num_conv_pos_embeddings (int, optional): The number of positional embeddings for convolutional layers. Defaults to 128.
            num_conv_pos_embedding_groups (int, optional): The number of groups for positional embeddings. Defaults to 16.
            do_stable_layer_norm (bool, optional): Whether to use stable layer normalization. Defaults to False.
            apply_spec_augment (bool, optional): Whether to apply SpecAugment during training. Defaults to True.
            mask_time_prob (float, optional): The probability of masking time steps during SpecAugment. Defaults to 0.05.
            mask_time_length (int, optional): The maximum length of time masking during SpecAugment. Defaults to 10.
            mask_time_min_masks (int, optional): The minimum number of time masks during SpecAugment. Defaults to 2.
            mask_feature_prob (float, optional): The probability of masking features during SpecAugment. Defaults to 0.0.
            mask_feature_length (int, optional): The maximum length of feature masking during SpecAugment. Defaults to 10.
            mask_feature_min_masks (int, optional): The minimum number of feature masks during SpecAugment. Defaults to 0.
            num_codevectors_per_group (int, optional): The number of codevectors per group for quantization. Defaults to 320.
            num_codevector_groups (int, optional): The number of codevector groups for quantization. Defaults to 2.
            contrastive_logits_temperature (float, optional): The temperature for contrastive loss. Defaults to 0.1.
            num_negatives (int, optional): The number of negative samples for contrastive loss. Defaults to 100.
            codevector_dim (int, optional): The dimension of the codevectors. Defaults to 256.
            proj_codevector_dim (int, optional): The dimension of projected codevectors. Defaults to 256.
            diversity_loss_weight (float, optional): The weight for diversity loss. Defaults to 0.1.
            ctc_loss_reduction (str, optional): The reduction method for CTC loss. Defaults to 'sum'.
            ctc_zero_infinity (bool, optional): Whether to zero out infinity in CTC loss. Defaults to False.
            use_weighted_layer_sum (bool, optional): Whether to use weighted layer sum. Defaults to False.
            classifier_proj_size (int, optional): The size of the projection for the classifier. Defaults to 256.
            tdnn_dim (tuple, optional): The dimensions for time-delay neural network layers. Defaults to (512, 512, 512, 512, 1500).
            tdnn_kernel (tuple, optional): The kernel size for time-delay neural network layers. Defaults to (5, 3, 3, 1, 1).
            tdnn_dilation (tuple, optional): The dilation for time-delay neural network layers. Defaults to (1, 2, 3, 1, 1).
            xvector_output_dim (int, optional): The output dimension for x-vector representation. Defaults to 512.
            pad_token_id (int, optional): The token ID for padding. Defaults to 0.
            bos_token_id (int, optional): The token ID for the beginning of sentence. Defaults to 1.
            eos_token_id (int, optional): The token ID for the end of sentence. Defaults to 2.
            add_adapter (bool, optional): Whether to add adapter layers. Defaults to False.
            adapter_kernel_size (int, optional): The kernel size for adapter layers. Defaults to 3.
            adapter_stride (int, optional): The stride for adapter layers. Defaults to 2.
            num_adapter_layers (int, optional): The number of adapter layers. Defaults to 3.
            output_hidden_size (int, optional): The size of the output hidden layers. Defaults to None.
            adapter_attn_dim (int, optional): The attention dimension for adapter layers. Defaults to None.

        Returns:
            None.

        Raises:
            ValueError: If the configuration for convolutional layers is incorrect,
                i.e., if the dimensions, strides, or kernel sizes are not of the same length.

        """
        super().__init__(**kwargs, pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id)
        self.hidden_size = hidden_size
        self.feat_extract_norm = feat_extract_norm
        self.feat_extract_activation = feat_extract_activation
        self.conv_dim = list(conv_dim)
        self.conv_stride = list(conv_stride)
        self.conv_kernel = list(conv_kernel)
        self.conv_bias = conv_bias
        self.num_conv_pos_embeddings = num_conv_pos_embeddings
        self.num_conv_pos_embedding_groups = num_conv_pos_embedding_groups
        self.num_feat_extract_layers = len(self.conv_dim)
        self.num_hidden_layers = num_hidden_layers
        self.intermediate_size = intermediate_size
        self.hidden_act = hidden_act
        self.num_attention_heads = num_attention_heads
        self.hidden_dropout = hidden_dropout
        self.attention_dropout = attention_dropout
        self.activation_dropout = activation_dropout
        self.feat_proj_dropout = feat_proj_dropout
        self.final_dropout = final_dropout
        self.layerdrop = layerdrop
        self.layer_norm_eps = layer_norm_eps
        self.initializer_range = initializer_range
        self.vocab_size = vocab_size
        self.do_stable_layer_norm = do_stable_layer_norm
        self.use_weighted_layer_sum = use_weighted_layer_sum

        if (
            (len(self.conv_stride) != self.num_feat_extract_layers)
            or (len(self.conv_kernel) != self.num_feat_extract_layers)
            or (len(self.conv_dim) != self.num_feat_extract_layers)
        ):
            raise ValueError(
                "Configuration for convolutional layers is incorrect. It is required that `len(config.conv_dim)` =="
                " `len(config.conv_stride)` == `len(config.conv_kernel)`, but is `len(config.conv_dim) ="
                f" {len(self.conv_dim)}`, `len(config.conv_stride) = {len(self.conv_stride)}`,"
                f" `len(config.conv_kernel) = {len(self.conv_kernel)}`."
            )

        # fine-tuning config parameters for SpecAugment: https://arxiv.org/abs/1904.08779
        self.apply_spec_augment = apply_spec_augment
        self.mask_time_prob = mask_time_prob
        self.mask_time_length = mask_time_length
        self.mask_time_min_masks = mask_time_min_masks
        self.mask_feature_prob = mask_feature_prob
        self.mask_feature_length = mask_feature_length
        self.mask_feature_min_masks = mask_feature_min_masks

        # parameters for pretraining with codevector quantized representations
        self.num_codevectors_per_group = num_codevectors_per_group
        self.num_codevector_groups = num_codevector_groups
        self.contrastive_logits_temperature = contrastive_logits_temperature
        self.feat_quantizer_dropout = feat_quantizer_dropout
        self.num_negatives = num_negatives
        self.codevector_dim = codevector_dim
        self.proj_codevector_dim = proj_codevector_dim
        self.diversity_loss_weight = diversity_loss_weight

        # ctc loss
        self.ctc_loss_reduction = ctc_loss_reduction
        self.ctc_zero_infinity = ctc_zero_infinity

        # adapter
        self.add_adapter = add_adapter
        self.adapter_kernel_size = adapter_kernel_size
        self.adapter_stride = adapter_stride
        self.num_adapter_layers = num_adapter_layers
        self.output_hidden_size = output_hidden_size or hidden_size
        self.adapter_attn_dim = adapter_attn_dim

        # SequenceClassification-specific parameter. Feel free to ignore for other classes.
        self.classifier_proj_size = classifier_proj_size

        # XVector-specific parameters. Feel free to ignore for other classes.
        self.tdnn_dim = list(tdnn_dim)
        self.tdnn_kernel = list(tdnn_kernel)
        self.tdnn_dilation = list(tdnn_dilation)
        self.xvector_output_dim = xvector_output_dim

    @property
    def inputs_to_logits_ratio(self):
        """
        Calculates the ratio of inputs to logits for the Wav2Vec2Config class.

        Args:
            self (Wav2Vec2Config): The instance of the Wav2Vec2Config class.

        Returns:
            int: The product of all `conv_stride` values, i.e. the number of input audio samples that map to one output frame.

        Raises:
            None.

        This method calculates the ratio of inputs to logits by multiplying the convolution stride values.
        The convolution stride values are accessed using the self.conv_stride attribute. The functools.reduce() function
        is used to multiply all the stride values together. If there are no stride values, the ratio is assumed to be
        1. The calculated ratio is then returned as the output of this method.
        """
        return functools.reduce(operator.mul, self.conv_stride, 1)

mindnlp.transformers.models.wav2vec2.configuration_wav2vec2.Wav2Vec2Config.inputs_to_logits_ratio property

Calculates the ratio of inputs to logits for the Wav2Vec2Config class.

PARAMETER DESCRIPTION
self

The instance of the Wav2Vec2Config class.

TYPE: Wav2Vec2Config

RETURNS DESCRIPTION
int

The product of all conv_stride values, i.e. the number of input audio samples that map to one output frame.

This method calculates the ratio of inputs to logits by multiplying the convolution stride values. The convolution stride values are accessed using the self.conv_stride attribute. The functools.reduce() function is used to multiply all the stride values together. If there are no stride values, the ratio is assumed to be 1. The calculated ratio is then returned as the output of this method.
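
A minimal usage sketch (illustrative only): the property gives the overall downsampling factor of the feature encoder, so it can be used to estimate how many output frames (and hence CTC logits) a raw waveform produces.

```python
from mindnlp.transformers.models.wav2vec2.configuration_wav2vec2 import Wav2Vec2Config

config = Wav2Vec2Config()

# Product of the default conv_stride (5, 2, 2, 2, 2, 2, 2) = 320:
# every output frame covers roughly 320 input samples (20 ms at 16 kHz).
print(config.inputs_to_logits_ratio)  # 320

num_samples = 16000  # 1 second of 16 kHz audio
approx_num_frames = num_samples // config.inputs_to_logits_ratio
print(approx_num_frames)  # 50 (the exact per-layer conv arithmetic gives 49)
```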

mindnlp.transformers.models.wav2vec2.configuration_wav2vec2.Wav2Vec2Config.__init__(vocab_size=32, hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072, hidden_act='gelu', hidden_dropout=0.1, activation_dropout=0.1, attention_dropout=0.1, feat_proj_dropout=0.0, feat_quantizer_dropout=0.0, final_dropout=0.1, layerdrop=0.1, initializer_range=0.02, layer_norm_eps=1e-05, feat_extract_norm='group', feat_extract_activation='gelu', conv_dim=(512, 512, 512, 512, 512, 512, 512), conv_stride=(5, 2, 2, 2, 2, 2, 2), conv_kernel=(10, 3, 3, 3, 3, 2, 2), conv_bias=False, num_conv_pos_embeddings=128, num_conv_pos_embedding_groups=16, do_stable_layer_norm=False, apply_spec_augment=True, mask_time_prob=0.05, mask_time_length=10, mask_time_min_masks=2, mask_feature_prob=0.0, mask_feature_length=10, mask_feature_min_masks=0, num_codevectors_per_group=320, num_codevector_groups=2, contrastive_logits_temperature=0.1, num_negatives=100, codevector_dim=256, proj_codevector_dim=256, diversity_loss_weight=0.1, ctc_loss_reduction='sum', ctc_zero_infinity=False, use_weighted_layer_sum=False, classifier_proj_size=256, tdnn_dim=(512, 512, 512, 512, 1500), tdnn_kernel=(5, 3, 3, 1, 1), tdnn_dilation=(1, 2, 3, 1, 1), xvector_output_dim=512, pad_token_id=0, bos_token_id=1, eos_token_id=2, add_adapter=False, adapter_kernel_size=3, adapter_stride=2, num_adapter_layers=3, output_hidden_size=None, adapter_attn_dim=None, **kwargs)

Initializes a new instance of the Wav2Vec2Config class.

PARAMETER DESCRIPTION
self

The class instance.

vocab_size

The size of the vocabulary. Defaults to 32.

TYPE: int DEFAULT: 32

hidden_size

The size of the hidden layers. Defaults to 768.

TYPE: int DEFAULT: 768

num_hidden_layers

The number of hidden layers. Defaults to 12.

TYPE: int DEFAULT: 12

num_attention_heads

The number of attention heads. Defaults to 12.

TYPE: int DEFAULT: 12

intermediate_size

The size of the intermediate layers. Defaults to 3072.

TYPE: int DEFAULT: 3072

hidden_act

The activation function for the hidden layers. Defaults to 'gelu'.

TYPE: str DEFAULT: 'gelu'

hidden_dropout

The dropout rate for the hidden layers. Defaults to 0.1.

TYPE: float DEFAULT: 0.1

activation_dropout

The dropout rate for the activation function. Defaults to 0.1.

TYPE: float DEFAULT: 0.1

attention_dropout

The dropout rate for the attention mechanism. Defaults to 0.1.

TYPE: float DEFAULT: 0.1

feat_proj_dropout

The dropout rate for the feature projection. Defaults to 0.0.

TYPE: float DEFAULT: 0.0

feat_quantizer_dropout

The dropout rate for the feature quantizer. Defaults to 0.0.

TYPE: float DEFAULT: 0.0

final_dropout

The final dropout rate. Defaults to 0.1.

TYPE: float DEFAULT: 0.1

layerdrop

The layer dropout rate. Defaults to 0.1.

TYPE: float DEFAULT: 0.1

initializer_range

The range for weight initialization. Defaults to 0.02.

TYPE: float DEFAULT: 0.02

layer_norm_eps

The epsilon value for layer normalization. Defaults to 1e-05.

TYPE: float DEFAULT: 1e-05

feat_extract_norm

The normalization method for feature extraction. Defaults to 'group'.

TYPE: str DEFAULT: 'group'

feat_extract_activation

The activation function for feature extraction. Defaults to 'gelu'.

TYPE: str DEFAULT: 'gelu'

conv_dim

The dimensions for convolutional layers. Defaults to (512, 512, 512, 512, 512, 512, 512).

TYPE: tuple DEFAULT: (512, 512, 512, 512, 512, 512, 512)

conv_stride

The stride for convolutional layers. Defaults to (5, 2, 2, 2, 2, 2, 2).

TYPE: tuple DEFAULT: (5, 2, 2, 2, 2, 2, 2)

conv_kernel

The kernel size for convolutional layers. Defaults to (10, 3, 3, 3, 3, 2, 2).

TYPE: tuple DEFAULT: (10, 3, 3, 3, 3, 2, 2)

conv_bias

Whether to include bias in convolutional layers. Defaults to False.

TYPE: bool DEFAULT: False

num_conv_pos_embeddings

The number of positional embeddings for convolutional layers. Defaults to 128.

TYPE: int DEFAULT: 128

num_conv_pos_embedding_groups

The number of groups for positional embeddings. Defaults to 16.

TYPE: int DEFAULT: 16

do_stable_layer_norm

Whether to use stable layer normalization. Defaults to False.

TYPE: bool DEFAULT: False

apply_spec_augment

Whether to apply SpecAugment during training. Defaults to True.

TYPE: bool DEFAULT: True

mask_time_prob

The probability of masking time steps during SpecAugment. Defaults to 0.05.

TYPE: float DEFAULT: 0.05

mask_time_length

The maximum length of time masking during SpecAugment. Defaults to 10.

TYPE: int DEFAULT: 10

mask_time_min_masks

The minimum number of time masks during SpecAugment. Defaults to 2.

TYPE: int DEFAULT: 2

mask_feature_prob

The probability of masking features during SpecAugment. Defaults to 0.0.

TYPE: float DEFAULT: 0.0

mask_feature_length

The maximum length of feature masking during SpecAugment. Defaults to 10.

TYPE: int DEFAULT: 10

mask_feature_min_masks

The minimum number of feature masks during SpecAugment. Defaults to 0.

TYPE: int DEFAULT: 0

num_codevectors_per_group

The number of codevectors per group for quantization. Defaults to 320.

TYPE: int DEFAULT: 320

num_codevector_groups

The number of codevector groups for quantization. Defaults to 2.

TYPE: int DEFAULT: 2

contrastive_logits_temperature

The temperature for contrastive loss. Defaults to 0.1.

TYPE: float DEFAULT: 0.1

num_negatives

The number of negative samples for contrastive loss. Defaults to 100.

TYPE: int DEFAULT: 100

codevector_dim

The dimension of the codevectors. Defaults to 256.

TYPE: int DEFAULT: 256

proj_codevector_dim

The dimension of projected codevectors. Defaults to 256.

TYPE: int DEFAULT: 256

diversity_loss_weight

The weight for diversity loss. Defaults to 0.1.

TYPE: float DEFAULT: 0.1

ctc_loss_reduction

The reduction method for CTC loss. Defaults to 'sum'.

TYPE: str DEFAULT: 'sum'

ctc_zero_infinity

Whether to zero out infinity in CTC loss. Defaults to False.

TYPE: bool DEFAULT: False

use_weighted_layer_sum

Whether to use weighted layer sum. Defaults to False.

TYPE: bool DEFAULT: False

classifier_proj_size

The size of the projection for the classifier. Defaults to 256.

TYPE: int DEFAULT: 256

tdnn_dim

The dimensions for time-delay neural network layers. Defaults to (512, 512, 512, 512, 1500).

TYPE: tuple DEFAULT: (512, 512, 512, 512, 1500)

tdnn_kernel

The kernel size for time-delay neural network layers. Defaults to (5, 3, 3, 1, 1).

TYPE: tuple DEFAULT: (5, 3, 3, 1, 1)

tdnn_dilation

The dilation for time-delay neural network layers. Defaults to (1, 2, 3, 1, 1).

TYPE: tuple DEFAULT: (1, 2, 3, 1, 1)

xvector_output_dim

The output dimension for x-vector representation. Defaults to 512.

TYPE: int DEFAULT: 512

pad_token_id

The token ID for padding. Defaults to 0.

TYPE: int DEFAULT: 0

bos_token_id

The token ID for the beginning of sentence. Defaults to 1.

TYPE: int DEFAULT: 1

eos_token_id

The token ID for the end of sentence. Defaults to 2.

TYPE: int DEFAULT: 2

add_adapter

Whether to add adapter layers. Defaults to False.

TYPE: bool DEFAULT: False

adapter_kernel_size

The kernel size for adapter layers. Defaults to 3.

TYPE: int DEFAULT: 3

adapter_stride

The stride for adapter layers. Defaults to 2.

TYPE: int DEFAULT: 2

num_adapter_layers

The number of adapter layers. Defaults to 3.

TYPE: int DEFAULT: 3

output_hidden_size

The size of the output hidden layers. Defaults to None.

TYPE: int DEFAULT: None

adapter_attn_dim

The attention dimension for adapter layers. Defaults to None.

TYPE: int DEFAULT: None

RETURNS DESCRIPTION

None.

RAISES DESCRIPTION
ValueError

If the configuration for convolutional layers is incorrect, i.e., if the dimensions, strides, or kernel sizes are not of the same length.
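
For illustration, a hedged sketch of a configuration that triggers this check because conv_stride has fewer entries than the default conv_dim and conv_kernel:

```python
from mindnlp.transformers.models.wav2vec2.configuration_wav2vec2 import Wav2Vec2Config

try:
    # conv_stride has 6 entries while conv_dim and conv_kernel keep their default length of 7.
    Wav2Vec2Config(conv_stride=(5, 2, 2, 2, 2, 2))
except ValueError as err:
    print(err)  # explains that len(conv_dim), len(conv_stride) and len(conv_kernel) must match
```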

Source code in mindnlp\transformers\models\wav2vec2\configuration_wav2vec2.py
def __init__(
    self,
    vocab_size=32,
    hidden_size=768,
    num_hidden_layers=12,
    num_attention_heads=12,
    intermediate_size=3072,
    hidden_act="gelu",
    hidden_dropout=0.1,
    activation_dropout=0.1,
    attention_dropout=0.1,
    feat_proj_dropout=0.0,
    feat_quantizer_dropout=0.0,
    final_dropout=0.1,
    layerdrop=0.1,
    initializer_range=0.02,
    layer_norm_eps=1e-5,
    feat_extract_norm="group",
    feat_extract_activation="gelu",
    conv_dim=(512, 512, 512, 512, 512, 512, 512),
    conv_stride=(5, 2, 2, 2, 2, 2, 2),
    conv_kernel=(10, 3, 3, 3, 3, 2, 2),
    conv_bias=False,
    num_conv_pos_embeddings=128,
    num_conv_pos_embedding_groups=16,
    do_stable_layer_norm=False,
    apply_spec_augment=True,
    mask_time_prob=0.05,
    mask_time_length=10,
    mask_time_min_masks=2,
    mask_feature_prob=0.0,
    mask_feature_length=10,
    mask_feature_min_masks=0,
    num_codevectors_per_group=320,
    num_codevector_groups=2,
    contrastive_logits_temperature=0.1,
    num_negatives=100,
    codevector_dim=256,
    proj_codevector_dim=256,
    diversity_loss_weight=0.1,
    ctc_loss_reduction="sum",
    ctc_zero_infinity=False,
    use_weighted_layer_sum=False,
    classifier_proj_size=256,
    tdnn_dim=(512, 512, 512, 512, 1500),
    tdnn_kernel=(5, 3, 3, 1, 1),
    tdnn_dilation=(1, 2, 3, 1, 1),
    xvector_output_dim=512,
    pad_token_id=0,
    bos_token_id=1,
    eos_token_id=2,
    add_adapter=False,
    adapter_kernel_size=3,
    adapter_stride=2,
    num_adapter_layers=3,
    output_hidden_size=None,
    adapter_attn_dim=None,
    **kwargs,
):
    """
    Initializes a new instance of the Wav2Vec2Config class.

    Args:
        self: The class instance.
        vocab_size (int, optional): The size of the vocabulary. Defaults to 32.
        hidden_size (int, optional): The size of the hidden layers. Defaults to 768.
        num_hidden_layers (int, optional): The number of hidden layers. Defaults to 12.
        num_attention_heads (int, optional): The number of attention heads. Defaults to 12.
        intermediate_size (int, optional): The size of the intermediate layers. Defaults to 3072.
        hidden_act (str, optional): The activation function for the hidden layers. Defaults to 'gelu'.
        hidden_dropout (float, optional): The dropout rate for the hidden layers. Defaults to 0.1.
        activation_dropout (float, optional): The dropout rate for the activation function. Defaults to 0.1.
        attention_dropout (float, optional): The dropout rate for the attention mechanism. Defaults to 0.1.
        feat_proj_dropout (float, optional): The dropout rate for the feature projection. Defaults to 0.0.
        feat_quantizer_dropout (float, optional): The dropout rate for the feature quantizer. Defaults to 0.0.
        final_dropout (float, optional): The final dropout rate. Defaults to 0.1.
        layerdrop (float, optional): The layer dropout rate. Defaults to 0.1.
        initializer_range (float, optional): The range for weight initialization. Defaults to 0.02.
        layer_norm_eps (float, optional): The epsilon value for layer normalization. Defaults to 1e-05.
        feat_extract_norm (str, optional): The normalization method for feature extraction. Defaults to 'group'.
        feat_extract_activation (str, optional): The activation function for feature extraction. Defaults to 'gelu'.
        conv_dim (tuple, optional): The dimensions for convolutional layers. Defaults to (512, 512, 512, 512, 512, 512, 512).
        conv_stride (tuple, optional): The stride for convolutional layers. Defaults to (5, 2, 2, 2, 2, 2, 2).
        conv_kernel (tuple, optional): The kernel size for convolutional layers. Defaults to (10, 3, 3, 3, 3, 2, 2).
        conv_bias (bool, optional): Whether to include bias in convolutional layers. Defaults to False.
        num_conv_pos_embeddings (int, optional): The number of positional embeddings for convolutional layers. Defaults to 128.
        num_conv_pos_embedding_groups (int, optional): The number of groups for positional embeddings. Defaults to 16.
        do_stable_layer_norm (bool, optional): Whether to use stable layer normalization. Defaults to False.
        apply_spec_augment (bool, optional): Whether to apply SpecAugment during training. Defaults to True.
        mask_time_prob (float, optional): The probability of masking time steps during SpecAugment. Defaults to 0.05.
        mask_time_length (int, optional): The maximum length of time masking during SpecAugment. Defaults to 10.
        mask_time_min_masks (int, optional): The minimum number of time masks during SpecAugment. Defaults to 2.
        mask_feature_prob (float, optional): The probability of masking features during SpecAugment. Defaults to 0.0.
        mask_feature_length (int, optional): The maximum length of feature masking during SpecAugment. Defaults to 10.
        mask_feature_min_masks (int, optional): The minimum number of feature masks during SpecAugment. Defaults to 0.
        num_codevectors_per_group (int, optional): The number of codevectors per group for quantization. Defaults to 320.
        num_codevector_groups (int, optional): The number of codevector groups for quantization. Defaults to 2.
        contrastive_logits_temperature (float, optional): The temperature for contrastive loss. Defaults to 0.1.
        num_negatives (int, optional): The number of negative samples for contrastive loss. Defaults to 100.
        codevector_dim (int, optional): The dimension of the codevectors. Defaults to 256.
        proj_codevector_dim (int, optional): The dimension of projected codevectors. Defaults to 256.
        diversity_loss_weight (float, optional): The weight for diversity loss. Defaults to 0.1.
        ctc_loss_reduction (str, optional): The reduction method for CTC loss. Defaults to 'sum'.
        ctc_zero_infinity (bool, optional): Whether to zero out infinity in CTC loss. Defaults to False.
        use_weighted_layer_sum (bool, optional): Whether to use weighted layer sum. Defaults to False.
        classifier_proj_size (int, optional): The size of the projection for the classifier. Defaults to 256.
        tdnn_dim (tuple, optional): The dimensions for time-delay neural network layers. Defaults to (512, 512, 512, 512, 1500).
        tdnn_kernel (tuple, optional): The kernel size for time-delay neural network layers. Defaults to (5, 3, 3, 1, 1).
        tdnn_dilation (tuple, optional): The dilation for time-delay neural network layers. Defaults to (1, 2, 3, 1, 1).
        xvector_output_dim (int, optional): The output dimension for x-vector representation. Defaults to 512.
        pad_token_id (int, optional): The token ID for padding. Defaults to 0.
        bos_token_id (int, optional): The token ID for the beginning of sentence. Defaults to 1.
        eos_token_id (int, optional): The token ID for the end of sentence. Defaults to 2.
        add_adapter (bool, optional): Whether to add adapter layers. Defaults to False.
        adapter_kernel_size (int, optional): The kernel size for adapter layers. Defaults to 3.
        adapter_stride (int, optional): The stride for adapter layers. Defaults to 2.
        num_adapter_layers (int, optional): The number of adapter layers. Defaults to 3.
        output_hidden_size (int, optional): The dimensionality of the encoder output layer; falls back to hidden_size when None. Defaults to None.
        adapter_attn_dim (int, optional): The attention dimension for adapter layers. Defaults to None.

    Returns:
        None.

    Raises:
        ValueError: If the configuration for convolutional layers is incorrect,
            i.e., if the dimensions, strides, or kernel sizes are not of the same length.

    """
    super().__init__(**kwargs, pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id)
    self.hidden_size = hidden_size
    self.feat_extract_norm = feat_extract_norm
    self.feat_extract_activation = feat_extract_activation
    self.conv_dim = list(conv_dim)
    self.conv_stride = list(conv_stride)
    self.conv_kernel = list(conv_kernel)
    self.conv_bias = conv_bias
    self.num_conv_pos_embeddings = num_conv_pos_embeddings
    self.num_conv_pos_embedding_groups = num_conv_pos_embedding_groups
    self.num_feat_extract_layers = len(self.conv_dim)
    self.num_hidden_layers = num_hidden_layers
    self.intermediate_size = intermediate_size
    self.hidden_act = hidden_act
    self.num_attention_heads = num_attention_heads
    self.hidden_dropout = hidden_dropout
    self.attention_dropout = attention_dropout
    self.activation_dropout = activation_dropout
    self.feat_proj_dropout = feat_proj_dropout
    self.final_dropout = final_dropout
    self.layerdrop = layerdrop
    self.layer_norm_eps = layer_norm_eps
    self.initializer_range = initializer_range
    self.vocab_size = vocab_size
    self.do_stable_layer_norm = do_stable_layer_norm
    self.use_weighted_layer_sum = use_weighted_layer_sum

    if (
        (len(self.conv_stride) != self.num_feat_extract_layers)
        or (len(self.conv_kernel) != self.num_feat_extract_layers)
        or (len(self.conv_dim) != self.num_feat_extract_layers)
    ):
        raise ValueError(
            "Configuration for convolutional layers is incorrect. It is required that `len(config.conv_dim)` =="
            " `len(config.conv_stride)` == `len(config.conv_kernel)`, but is `len(config.conv_dim) ="
            f" {len(self.conv_dim)}`, `len(config.conv_stride) = {len(self.conv_stride)}`,"
            f" `len(config.conv_kernel) = {len(self.conv_kernel)}`."
        )

    # fine-tuning config parameters for SpecAugment: https://arxiv.org/abs/1904.08779
    self.apply_spec_augment = apply_spec_augment
    self.mask_time_prob = mask_time_prob
    self.mask_time_length = mask_time_length
    self.mask_time_min_masks = mask_time_min_masks
    self.mask_feature_prob = mask_feature_prob
    self.mask_feature_length = mask_feature_length
    self.mask_feature_min_masks = mask_feature_min_masks

    # parameters for pretraining with codevector quantized representations
    self.num_codevectors_per_group = num_codevectors_per_group
    self.num_codevector_groups = num_codevector_groups
    self.contrastive_logits_temperature = contrastive_logits_temperature
    self.feat_quantizer_dropout = feat_quantizer_dropout
    self.num_negatives = num_negatives
    self.codevector_dim = codevector_dim
    self.proj_codevector_dim = proj_codevector_dim
    self.diversity_loss_weight = diversity_loss_weight

    # ctc loss
    self.ctc_loss_reduction = ctc_loss_reduction
    self.ctc_zero_infinity = ctc_zero_infinity

    # adapter
    self.add_adapter = add_adapter
    self.adapter_kernel_size = adapter_kernel_size
    self.adapter_stride = adapter_stride
    self.num_adapter_layers = num_adapter_layers
    self.output_hidden_size = output_hidden_size or hidden_size
    self.adapter_attn_dim = adapter_attn_dim

    # SequenceClassification-specific parameter. Feel free to ignore for other classes.
    self.classifier_proj_size = classifier_proj_size

    # XVector-specific parameters. Feel free to ignore for other classes.
    self.tdnn_dim = list(tdnn_dim)
    self.tdnn_kernel = list(tdnn_kernel)
    self.tdnn_dilation = list(tdnn_dilation)
    self.xvector_output_dim = xvector_output_dim
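
Example: a minimal sketch of instantiating the configuration. The reduced values below are illustrative, not a published checkpoint; the only hard requirement is that conv_dim, conv_stride and conv_kernel have equal lengths, otherwise __init__ raises the ValueError documented above.

from mindnlp.transformers.models.wav2vec2.configuration_wav2vec2 import Wav2Vec2Config

# Defaults reproduce a facebook/wav2vec2-base-960h-style architecture.
config = Wav2Vec2Config()

# Illustrative reduced feature encoder: the three conv_* tuples must have equal lengths.
small_config = Wav2Vec2Config(
    vocab_size=32,
    hidden_size=256,
    num_hidden_layers=6,
    num_attention_heads=4,
    conv_dim=(256, 256, 256),
    conv_stride=(5, 2, 2),
    conv_kernel=(10, 3, 3),
)
print(small_config.num_feat_extract_layers)  # 3, derived from len(conv_dim)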

mindnlp.transformers.models.wav2vec2.feature_extraction_wav2vec2

Feature extractor class for Wav2Vec2

mindnlp.transformers.models.wav2vec2.feature_extraction_wav2vec2.Wav2Vec2FeatureExtractor

Bases: SequenceFeatureExtractor

Constructs a Wav2Vec2 feature extractor.

This feature extractor inherits from [~feature_extraction_sequence_utils.SequenceFeatureExtractor] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

PARAMETER DESCRIPTION
feature_size

The feature dimension of the extracted features.

TYPE: `int`, defaults to 1 DEFAULT: 1

sampling_rate

The sampling rate at which the audio files should be digitized, expressed in hertz (Hz).

TYPE: `int`, defaults to 16000 DEFAULT: 16000

padding_value

The value that is used to fill the padding values.

TYPE: `float`, defaults to 0.0 DEFAULT: 0.0

do_normalize

Whether or not to zero-mean unit-variance normalize the input. Normalizing can help to significantly improve the performance for some models, e.g., wav2vec2-lv60.

TYPE: `bool`, *optional*, defaults to `True` DEFAULT: True

return_attention_mask

Whether or not [~Wav2Vec2FeatureExtractor.__call__] should return attention_mask.

Wav2Vec2 models that have set config.feat_extract_norm == "group", such as wav2vec2-base, have not been trained using attention_mask. For such models, input_values should simply be padded with 0 and no attention_mask should be passed.

For Wav2Vec2 models that have set config.feat_extract_norm == "layer", such as wav2vec2-lv60, attention_mask should be passed for batched inference.

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False
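
Example: an illustrative sketch of constructing the extractor for the two normalization regimes described above (the argument values mirror the defaults and are not tied to a specific checkpoint):

from mindnlp.transformers.models.wav2vec2.feature_extraction_wav2vec2 import Wav2Vec2FeatureExtractor

# "group"-norm checkpoints (e.g. wav2vec2-base): pad with 0 and omit the attention mask.
group_norm_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16000, padding_value=0.0,
    do_normalize=True, return_attention_mask=False,
)

# "layer"-norm checkpoints (e.g. wav2vec2-lv60): return an attention mask for batched inference.
layer_norm_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16000, padding_value=0.0,
    do_normalize=True, return_attention_mask=True,
)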

Source code in mindnlp\transformers\models\wav2vec2\feature_extraction_wav2vec2.py
class Wav2Vec2FeatureExtractor(SequenceFeatureExtractor):
    r"""
    Constructs a Wav2Vec2 feature extractor.

    This feature extractor inherits from [`~feature_extraction_sequence_utils.SequenceFeatureExtractor`] which contains
    most of the main methods. Users should refer to this superclass for more information regarding those methods.

    Args:
        feature_size (`int`, defaults to 1):
            The feature dimension of the extracted features.
        sampling_rate (`int`, defaults to 16000):
            The sampling rate at which the audio files should be digitized, expressed in hertz (Hz).
        padding_value (`float`, defaults to 0.0):
            The value that is used to fill the padding values.
        do_normalize (`bool`, *optional*, defaults to `True`):
            Whether or not to zero-mean unit-variance normalize the input. Normalizing can help to significantly
            improve the performance for some models, *e.g.*,
            [wav2vec2-lv60](https://hf-mirror.com/models?search=lv60).
        return_attention_mask (`bool`, *optional*, defaults to `False`):
            Whether or not [`~Wav2Vec2FeatureExtractor.__call__`] should return `attention_mask`.

            <Tip>

            Wav2Vec2 models that have set `config.feat_extract_norm == "group"`, such as
            [wav2vec2-base](https://hf-mirror.com/facebook/wav2vec2-base-960h), have **not** been trained using
            `attention_mask`. For such models, `input_values` should simply be padded with 0 and no `attention_mask`
            should be passed.

            For Wav2Vec2 models that have set `config.feat_extract_norm == "layer"`, such as
            [wav2vec2-lv60](https://hf-mirror.com/facebook/wav2vec2-large-960h-lv60-self), `attention_mask` should be
            passed for batched inference.

            </Tip>"""
    model_input_names = ["input_values", "attention_mask"]

    def __init__(
        self,
        feature_size=1,
        sampling_rate=16000,
        padding_value=0.0,
        return_attention_mask=False,
        do_normalize=True,
        **kwargs,
    ):
        """
        Initialize the Wav2Vec2FeatureExtractor class.

        Args:
            self (object): The instance of the class.
            feature_size (int, optional): The size of the input features. Defaults to 1.
            sampling_rate (int, optional): The sampling rate of the audio data. Defaults to 16000.
            padding_value (float, optional): The value used for padding sequences. Defaults to 0.0.
            return_attention_mask (bool, optional): Whether to return the attention mask. Defaults to False.
            do_normalize (bool, optional): Whether to normalize the input features. Defaults to True.
            **kwargs: Additional keyword arguments.

        Returns:
            None.

        Raises:
            None.
        """
        super().__init__(feature_size=feature_size, sampling_rate=sampling_rate, padding_value=padding_value, **kwargs)
        self.return_attention_mask = return_attention_mask
        self.do_normalize = do_normalize

    @staticmethod
    def zero_mean_unit_var_norm(
        input_values: List[np.ndarray], attention_mask: List[np.ndarray], padding_value: float = 0.0
    ) -> List[np.ndarray]:
        """
        Every array in the list is normalized to have zero mean and unit variance
        """
        if attention_mask is not None:
            attention_mask = np.array(attention_mask, np.int32)
            normed_input_values = []

            for vector, length in zip(input_values, attention_mask.sum(-1)):
                normed_slice = (vector - vector[:length].mean()) / np.sqrt(vector[:length].var() + 1e-7)
                if length < normed_slice.shape[0]:
                    normed_slice[length:] = padding_value

                normed_input_values.append(normed_slice)
        else:
            normed_input_values = [(x - x.mean()) / np.sqrt(x.var() + 1e-7) for x in input_values]

        return normed_input_values

    def __call__(
        self,
        raw_speech: Union[np.ndarray, List[float], List[np.ndarray], List[List[float]]],
        padding: Union[bool, str, PaddingStrategy] = False,
        max_length: Optional[int] = None,
        truncation: bool = False,
        pad_to_multiple_of: Optional[int] = None,
        return_attention_mask: Optional[bool] = None,
        return_tensors: Optional[Union[str, TensorType]] = None,
        sampling_rate: Optional[int] = None,
        **kwargs,
    ) -> BatchFeature:
        """
        Main method to featurize and prepare for the model one or several sequence(s).

        Args:
            raw_speech (`np.ndarray`, `List[float]`, `List[np.ndarray]`, `List[List[float]]`):
                The sequence or batch of sequences to be padded. Each sequence can be a numpy array, a list of float
                values, a list of numpy arrays or a list of list of float values. Must be mono channel audio, not
                stereo, i.e. single float per timestep.
            padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*, defaults to `False`):
                Select a strategy to pad the returned sequences (according to the model's padding side and padding
                index) among:

                - `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
                sequence is provided).
                - `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum
                acceptable input length for the model if that argument is not provided.
                - `False` or `'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of different
                lengths).
            max_length (`int`, *optional*):
                Maximum length of the returned list and optionally padding length (see above).
            truncation (`bool`):
                Activates truncation to cut input sequences longer than *max_length* to *max_length*.
            pad_to_multiple_of (`int`, *optional*):
                If set will pad the sequence to a multiple of the provided value.

                This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability
                `>= 7.5` (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128.
            return_attention_mask (`bool`, *optional*):
                Whether to return the attention mask. If left to the default, will return the attention mask according
                to the specific feature_extractor's default.

                [What are attention masks?](../glossary#attention-mask)

                <Tip>

                Wav2Vec2 models that have set `config.feat_extract_norm == "group"`, such as
                [wav2vec2-base](https://hf-mirror.com/facebook/wav2vec2-base-960h), have **not** been trained using
                `attention_mask`. For such models, `input_values` should simply be padded with 0 and no
                `attention_mask` should be passed.

                For Wav2Vec2 models that have set `config.feat_extract_norm == "layer"`, such as
                [wav2vec2-lv60](https://hf-mirror.com/facebook/wav2vec2-large-960h-lv60-self), `attention_mask` should
                be passed for batched inference.

                </Tip>

            return_tensors (`str` or [`~utils.TensorType`], *optional*):
                If set, will return tensors instead of list of python integers. Acceptable values are:

                - `'tf'`: Return TensorFlow `tf.constant` objects.
                - `'pt'`: Return PyTorch `torch.Tensor` objects.
                - `'np'`: Return Numpy `np.ndarray` objects.
            sampling_rate (`int`, *optional*):
                The sampling rate at which the `raw_speech` input was sampled. It is strongly recommended to pass
                `sampling_rate` at the forward call to prevent silent errors.
            padding_value (`float`, defaults to 0.0):
                The value that is used to fill the padded values.
        """
        if sampling_rate is not None:
            if sampling_rate != self.sampling_rate:
                raise ValueError(
                    f"The model corresponding to this feature extractor: {self} was trained using a sampling rate of"
                    f" {self.sampling_rate}. Please make sure that the provided `raw_speech` input was sampled with"
                    f" {self.sampling_rate} and not {sampling_rate}."
                )
        else:
            logger.warning(
                "It is strongly recommended to pass the ``sampling_rate`` argument to this function. "
                "Failing to do so can result in silent errors that might be hard to debug."
            )

        is_batched_numpy = isinstance(raw_speech, np.ndarray) and len(raw_speech.shape) > 1
        if is_batched_numpy and len(raw_speech.shape) > 2:
            raise ValueError(f"Only mono-channel audio is supported for input to {self}")
        is_batched = is_batched_numpy or (
            isinstance(raw_speech, (list, tuple)) and (isinstance(raw_speech[0], (np.ndarray, tuple, list)))
        )

        # always return batch
        if not is_batched:
            raw_speech = [raw_speech]

        # convert into correct format for padding
        encoded_inputs = BatchFeature({"input_values": raw_speech})

        padded_inputs = self.pad(
            encoded_inputs,
            padding=padding,
            max_length=max_length,
            truncation=truncation,
            pad_to_multiple_of=pad_to_multiple_of,
            return_attention_mask=return_attention_mask,
        )

        # convert input values to correct format
        input_values = padded_inputs["input_values"]
        if not isinstance(input_values[0], np.ndarray):
            padded_inputs["input_values"] = [np.asarray(array, dtype=np.float32) for array in input_values]
        elif (
            not isinstance(input_values, np.ndarray)
            and isinstance(input_values[0], np.ndarray)
            and input_values[0].dtype is np.dtype(np.float64)
        ):
            padded_inputs["input_values"] = [array.astype(np.float32) for array in input_values]
        elif isinstance(input_values, np.ndarray) and input_values.dtype is np.dtype(np.float64):
            padded_inputs["input_values"] = input_values.astype(np.float32)

        # convert attention_mask to correct format
        attention_mask = padded_inputs.get("attention_mask")
        if attention_mask is not None:
            padded_inputs["attention_mask"] = [np.asarray(array, dtype=np.int32) for array in attention_mask]

        # zero-mean and unit-variance normalization
        if self.do_normalize:
            attention_mask = (
                attention_mask
                if self._get_padding_strategies(padding, max_length=max_length) is not PaddingStrategy.DO_NOT_PAD
                else None
            )
            padded_inputs["input_values"] = self.zero_mean_unit_var_norm(
                padded_inputs["input_values"], attention_mask=attention_mask, padding_value=self.padding_value
            )

        if return_tensors is not None:
            padded_inputs = padded_inputs.convert_to_tensors(return_tensors)

        return padded_inputs

mindnlp.transformers.models.wav2vec2.feature_extraction_wav2vec2.Wav2Vec2FeatureExtractor.__call__(raw_speech, padding=False, max_length=None, truncation=False, pad_to_multiple_of=None, return_attention_mask=None, return_tensors=None, sampling_rate=None, **kwargs)

Main method to featurize and prepare for the model one or several sequence(s).

PARAMETER DESCRIPTION
raw_speech

The sequence or batch of sequences to be padded. Each sequence can be a numpy array, a list of float values, a list of numpy arrays or a list of list of float values. Must be mono channel audio, not stereo, i.e. single float per timestep.

TYPE: `np.ndarray`, `List[float]`, `List[np.ndarray]`, `List[List[float]]`

padding

Select a strategy to pad the returned sequences (according to the model's padding side and padding index) among:

  • True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence is provided).
  • 'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided.
  • False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths).

TYPE: `bool`, `str` or [`~utils.PaddingStrategy`], *optional*, defaults to `False` DEFAULT: False

max_length

Maximum length of the returned list and optionally padding length (see above).

TYPE: `int`, *optional* DEFAULT: None

truncation

Activates truncation to cut input sequences longer than max_length to max_length.

TYPE: `bool` DEFAULT: False

pad_to_multiple_of

If set will pad the sequence to a multiple of the provided value.

This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128.

TYPE: `int`, *optional* DEFAULT: None

return_attention_mask

Whether to return the attention mask. If left to the default, will return the attention mask according to the specific feature_extractor's default.

What are attention masks?

Wav2Vec2 models that have set config.feat_extract_norm == "group", such as wav2vec2-base, have not been trained using attention_mask. For such models, input_values should simply be padded with 0 and no attention_mask should be passed.

For Wav2Vec2 models that have set config.feat_extract_norm == "layer", such as wav2vec2-lv60, attention_mask should be passed for batched inference.

TYPE: `bool`, *optional* DEFAULT: None

return_tensors

If set, will return tensors instead of list of python integers. Acceptable values are:

  • 'tf': Return TensorFlow tf.constant objects.
  • 'pt': Return PyTorch torch.Tensor objects.
  • 'np': Return Numpy np.ndarray objects.

TYPE: `str` or [`~utils.TensorType`], *optional* DEFAULT: None

sampling_rate

The sampling rate at which the raw_speech input was sampled. It is strongly recommended to pass sampling_rate at the forward call to prevent silent errors.

TYPE: `int`, *optional* DEFAULT: None

padding_value

The value that is used to fill the padded values.

TYPE: `float`, defaults to 0.0
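
Example: an illustrative call featurizing two mono clips of different lengths (the random arrays stand in for real 16 kHz audio):

import numpy as np
from mindnlp.transformers.models.wav2vec2.feature_extraction_wav2vec2 import Wav2Vec2FeatureExtractor

extractor = Wav2Vec2FeatureExtractor(return_attention_mask=True)

# Two mono clips of 1.0 s and 0.5 s at 16 kHz (random data as a stand-in for real audio).
clips = [np.random.randn(16000).astype(np.float32),
         np.random.randn(8000).astype(np.float32)]

batch = extractor(clips, sampling_rate=16000, padding=True, return_tensors="np")
print(batch["input_values"].shape)    # (2, 16000): padded to the longest clip
print(batch["attention_mask"].shape)  # (2, 16000): zeros over the padded tail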

Source code in mindnlp\transformers\models\wav2vec2\feature_extraction_wav2vec2.py
def __call__(
    self,
    raw_speech: Union[np.ndarray, List[float], List[np.ndarray], List[List[float]]],
    padding: Union[bool, str, PaddingStrategy] = False,
    max_length: Optional[int] = None,
    truncation: bool = False,
    pad_to_multiple_of: Optional[int] = None,
    return_attention_mask: Optional[bool] = None,
    return_tensors: Optional[Union[str, TensorType]] = None,
    sampling_rate: Optional[int] = None,
    **kwargs,
) -> BatchFeature:
    """
    Main method to featurize and prepare for the model one or several sequence(s).

    Args:
        raw_speech (`np.ndarray`, `List[float]`, `List[np.ndarray]`, `List[List[float]]`):
            The sequence or batch of sequences to be padded. Each sequence can be a numpy array, a list of float
            values, a list of numpy arrays or a list of list of float values. Must be mono channel audio, not
            stereo, i.e. single float per timestep.
        padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*, defaults to `False`):
            Select a strategy to pad the returned sequences (according to the model's padding side and padding
            index) among:

            - `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
            sequence is provided).
            - `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum
            acceptable input length for the model if that argument is not provided.
            - `False` or `'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of different
            lengths).
        max_length (`int`, *optional*):
            Maximum length of the returned list and optionally padding length (see above).
        truncation (`bool`):
            Activates truncation to cut input sequences longer than *max_length* to *max_length*.
        pad_to_multiple_of (`int`, *optional*):
            If set will pad the sequence to a multiple of the provided value.

            This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability
            `>= 7.5` (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128.
        return_attention_mask (`bool`, *optional*):
            Whether to return the attention mask. If left to the default, will return the attention mask according
            to the specific feature_extractor's default.

            [What are attention masks?](../glossary#attention-mask)

            <Tip>

            Wav2Vec2 models that have set `config.feat_extract_norm == "group"`, such as
            [wav2vec2-base](https://hf-mirror.com/facebook/wav2vec2-base-960h), have **not** been trained using
            `attention_mask`. For such models, `input_values` should simply be padded with 0 and no
            `attention_mask` should be passed.

            For Wav2Vec2 models that have set `config.feat_extract_norm == "layer"`, such as
            [wav2vec2-lv60](https://hf-mirror.com/facebook/wav2vec2-large-960h-lv60-self), `attention_mask` should
            be passed for batched inference.

            </Tip>

        return_tensors (`str` or [`~utils.TensorType`], *optional*):
            If set, will return tensors instead of list of python integers. Acceptable values are:

            - `'tf'`: Return TensorFlow `tf.constant` objects.
            - `'pt'`: Return PyTorch `torch.Tensor` objects.
            - `'np'`: Return Numpy `np.ndarray` objects.
        sampling_rate (`int`, *optional*):
            The sampling rate at which the `raw_speech` input was sampled. It is strongly recommended to pass
            `sampling_rate` at the forward call to prevent silent errors.
        padding_value (`float`, defaults to 0.0):
            The value that is used to fill the padded values.
    """
    if sampling_rate is not None:
        if sampling_rate != self.sampling_rate:
            raise ValueError(
                f"The model corresponding to this feature extractor: {self} was trained using a sampling rate of"
                f" {self.sampling_rate}. Please make sure that the provided `raw_speech` input was sampled with"
                f" {self.sampling_rate} and not {sampling_rate}."
            )
    else:
        logger.warning(
            "It is strongly recommended to pass the ``sampling_rate`` argument to this function. "
            "Failing to do so can result in silent errors that might be hard to debug."
        )

    is_batched_numpy = isinstance(raw_speech, np.ndarray) and len(raw_speech.shape) > 1
    if is_batched_numpy and len(raw_speech.shape) > 2:
        raise ValueError(f"Only mono-channel audio is supported for input to {self}")
    is_batched = is_batched_numpy or (
        isinstance(raw_speech, (list, tuple)) and (isinstance(raw_speech[0], (np.ndarray, tuple, list)))
    )

    # always return batch
    if not is_batched:
        raw_speech = [raw_speech]

    # convert into correct format for padding
    encoded_inputs = BatchFeature({"input_values": raw_speech})

    padded_inputs = self.pad(
        encoded_inputs,
        padding=padding,
        max_length=max_length,
        truncation=truncation,
        pad_to_multiple_of=pad_to_multiple_of,
        return_attention_mask=return_attention_mask,
    )

    # convert input values to correct format
    input_values = padded_inputs["input_values"]
    if not isinstance(input_values[0], np.ndarray):
        padded_inputs["input_values"] = [np.asarray(array, dtype=np.float32) for array in input_values]
    elif (
        not isinstance(input_values, np.ndarray)
        and isinstance(input_values[0], np.ndarray)
        and input_values[0].dtype is np.dtype(np.float64)
    ):
        padded_inputs["input_values"] = [array.astype(np.float32) for array in input_values]
    elif isinstance(input_values, np.ndarray) and input_values.dtype is np.dtype(np.float64):
        padded_inputs["input_values"] = input_values.astype(np.float32)

    # convert attention_mask to correct format
    attention_mask = padded_inputs.get("attention_mask")
    if attention_mask is not None:
        padded_inputs["attention_mask"] = [np.asarray(array, dtype=np.int32) for array in attention_mask]

    # zero-mean and unit-variance normalization
    if self.do_normalize:
        attention_mask = (
            attention_mask
            if self._get_padding_strategies(padding, max_length=max_length) is not PaddingStrategy.DO_NOT_PAD
            else None
        )
        padded_inputs["input_values"] = self.zero_mean_unit_var_norm(
            padded_inputs["input_values"], attention_mask=attention_mask, padding_value=self.padding_value
        )

    if return_tensors is not None:
        padded_inputs = padded_inputs.convert_to_tensors(return_tensors)

    return padded_inputs

mindnlp.transformers.models.wav2vec2.feature_extraction_wav2vec2.Wav2Vec2FeatureExtractor.__init__(feature_size=1, sampling_rate=16000, padding_value=0.0, return_attention_mask=False, do_normalize=True, **kwargs)

Initialize the Wav2Vec2FeatureExtractor class.

PARAMETER DESCRIPTION
self

The instance of the class.

TYPE: object

feature_size

The size of the input features. Defaults to 1.

TYPE: int DEFAULT: 1

sampling_rate

The sampling rate of the audio data. Defaults to 16000.

TYPE: int DEFAULT: 16000

padding_value

The value used for padding sequences. Defaults to 0.0.

TYPE: float DEFAULT: 0.0

return_attention_mask

Whether to return the attention mask. Defaults to False.

TYPE: bool DEFAULT: False

do_normalize

Whether to normalize the input features. Defaults to True.

TYPE: bool DEFAULT: True

**kwargs

Additional keyword arguments.

DEFAULT: {}

RETURNS DESCRIPTION

None.

Source code in mindnlp\transformers\models\wav2vec2\feature_extraction_wav2vec2.py
def __init__(
    self,
    feature_size=1,
    sampling_rate=16000,
    padding_value=0.0,
    return_attention_mask=False,
    do_normalize=True,
    **kwargs,
):
    """
    Initialize the Wav2Vec2FeatureExtractor class.

    Args:
        self (object): The instance of the class.
        feature_size (int, optional): The size of the input features. Defaults to 1.
        sampling_rate (int, optional): The sampling rate of the audio data. Defaults to 16000.
        padding_value (float, optional): The value used for padding sequences. Defaults to 0.0.
        return_attention_mask (bool, optional): Whether to return the attention mask. Defaults to False.
        do_normalize (bool, optional): Whether to normalize the input features. Defaults to True.
        **kwargs: Additional keyword arguments.

    Returns:
        None.

    Raises:
        None.
    """
    super().__init__(feature_size=feature_size, sampling_rate=sampling_rate, padding_value=padding_value, **kwargs)
    self.return_attention_mask = return_attention_mask
    self.do_normalize = do_normalize

mindnlp.transformers.models.wav2vec2.feature_extraction_wav2vec2.Wav2Vec2FeatureExtractor.zero_mean_unit_var_norm(input_values, attention_mask, padding_value=0.0) staticmethod

Every array in the list is normalized to have zero mean and unit variance
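
A small worked example (illustrative): statistics are computed only over the unmasked prefix of each array, and the padded tail is reset to padding_value.

import numpy as np
from mindnlp.transformers.models.wav2vec2.feature_extraction_wav2vec2 import Wav2Vec2FeatureExtractor

x = np.array([1.0, 2.0, 3.0, 0.0, 0.0], dtype=np.float32)  # last two samples are padding
mask = np.array([1, 1, 1, 0, 0], dtype=np.int32)

normed = Wav2Vec2FeatureExtractor.zero_mean_unit_var_norm([x], [mask], padding_value=0.0)[0]
print(normed[:3].mean(), normed[:3].std())  # ~0.0 and ~1.0 over the valid samples
print(normed[3:])                           # [0. 0.]: padded positions reset to padding_value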

Source code in mindnlp\transformers\models\wav2vec2\feature_extraction_wav2vec2.py
@staticmethod
def zero_mean_unit_var_norm(
    input_values: List[np.ndarray], attention_mask: List[np.ndarray], padding_value: float = 0.0
) -> List[np.ndarray]:
    """
    Every array in the list is normalized to have zero mean and unit variance
    """
    if attention_mask is not None:
        attention_mask = np.array(attention_mask, np.int32)
        normed_input_values = []

        for vector, length in zip(input_values, attention_mask.sum(-1)):
            normed_slice = (vector - vector[:length].mean()) / np.sqrt(vector[:length].var() + 1e-7)
            if length < normed_slice.shape[0]:
                normed_slice[length:] = padding_value

            normed_input_values.append(normed_slice)
    else:
        normed_input_values = [(x - x.mean()) / np.sqrt(x.var() + 1e-7) for x in input_values]

    return normed_input_values

mindnlp.transformers.models.wav2vec2.processing_wav2vec2

Speech processor class for Wav2Vec2

mindnlp.transformers.models.wav2vec2.processing_wav2vec2.Wav2Vec2Processor

Bases: ProcessorMixin

Constructs a Wav2Vec2 processor which wraps a Wav2Vec2 feature extractor and a Wav2Vec2 CTC tokenizer into a single processor.

[Wav2Vec2Processor] offers all the functionalities of [Wav2Vec2FeatureExtractor] and [PreTrainedTokenizer]. See the docstring of [~Wav2Vec2Processor.__call__] and [~Wav2Vec2Processor.decode] for more information.

PARAMETER DESCRIPTION
feature_extractor

An instance of [Wav2Vec2FeatureExtractor]. The feature extractor is a required input.

TYPE: `Wav2Vec2FeatureExtractor`

tokenizer

An instance of [PreTrainedTokenizer]. The tokenizer is a required input.

TYPE: [`PreTrainedTokenizer`]
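
Example: an illustrative sketch of assembling a processor from its two parts; "vocab.json" is a hypothetical CTC vocabulary file that you would provide (real checkpoints ship their own):

from mindnlp.transformers.models.wav2vec2.feature_extraction_wav2vec2 import Wav2Vec2FeatureExtractor
from mindnlp.transformers.models.wav2vec2.tokenization_wav2vec2 import Wav2Vec2CTCTokenizer
from mindnlp.transformers.models.wav2vec2.processing_wav2vec2 import Wav2Vec2Processor

feature_extractor = Wav2Vec2FeatureExtractor(return_attention_mask=False)
tokenizer = Wav2Vec2CTCTokenizer("vocab.json")  # hypothetical vocabulary file
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)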

Source code in mindnlp\transformers\models\wav2vec2\processing_wav2vec2.py
class Wav2Vec2Processor(ProcessorMixin):
    r"""
    Constructs a Wav2Vec2 processor which wraps a Wav2Vec2 feature extractor and a Wav2Vec2 CTC tokenizer into a single
    processor.

    [`Wav2Vec2Processor`] offers all the functionalities of [`Wav2Vec2FeatureExtractor`] and [`PreTrainedTokenizer`].
    See the docstring of [`~Wav2Vec2Processor.__call__`] and [`~Wav2Vec2Processor.decode`] for more information.

    Args:
        feature_extractor (`Wav2Vec2FeatureExtractor`):
            An instance of [`Wav2Vec2FeatureExtractor`]. The feature extractor is a required input.
        tokenizer ([`PreTrainedTokenizer`]):
            An instance of [`PreTrainedTokenizer`]. The tokenizer is a required input.
    """
    feature_extractor_class = "Wav2Vec2FeatureExtractor"
    tokenizer_class = "AutoTokenizer"

    def __init__(self, feature_extractor, tokenizer):
        """
        Initializes a new instance of the Wav2Vec2Processor class.

        Args:
            self (Wav2Vec2Processor): The current instance of the Wav2Vec2Processor class.
            feature_extractor (object): The feature extractor used for processing input data.
                It should be an instance of a feature extraction class.
            tokenizer (object): The tokenizer used for tokenizing input data.
                It should be an instance of a tokenizer class.

        Returns:
            None.

        Raises:
            None.
        """
        super().__init__(feature_extractor, tokenizer)
        self.current_processor = self.feature_extractor
        self._in_target_context_manager = False

    @classmethod
    def from_pretrained(cls, pretrained_model_name_or_path, **kwargs):
        """
        This method creates an instance of the Wav2Vec2Processor class from a pre-trained model.

        Args:
            cls (class): The class itself.
            pretrained_model_name_or_path (str): The name or path of the pre-trained model to load.

        Returns:
            None.

        Raises:
            OSError: If an OSError occurs during the loading process.
            FutureWarning: If the tokenizer is being loaded from a config that does not include a `tokenizer_class`
                attribute, a FutureWarning is issued. It advises adding a `'tokenizer_class': 'Wav2Vec2CTCTokenizer'`
                attribute to either the `config.json` or `tokenizer_config.json` file to suppress the warning.
        """
        try:
            return super().from_pretrained(pretrained_model_name_or_path, **kwargs)
        except OSError:
            warnings.warn(
                f"Loading a tokenizer inside {cls.__name__} from a config that does not"
                " include a `tokenizer_class` attribute is deprecated and will be "
                "removed in v5. Please add `'tokenizer_class': 'Wav2Vec2CTCTokenizer'`"
                " attribute to either your `config.json` or `tokenizer_config.json` "
                "file to suppress this warning: ",
                FutureWarning,
            )

            feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(pretrained_model_name_or_path, **kwargs)
            tokenizer = Wav2Vec2CTCTokenizer.from_pretrained(pretrained_model_name_or_path, **kwargs)

            return cls(feature_extractor=feature_extractor, tokenizer=tokenizer)

    def __call__(self, *args, **kwargs):
        """
        When used in normal mode, this method forwards all its arguments to Wav2Vec2FeatureExtractor's
        [`~Wav2Vec2FeatureExtractor.__call__`] and returns its output. If used in the context
        [`~Wav2Vec2Processor.as_target_processor`] this method forwards all its arguments to PreTrainedTokenizer's
        [`~PreTrainedTokenizer.__call__`]. Please refer to the docstring of the above two methods for more information.
        """
        # For backward compatibility
        if self._in_target_context_manager:
            return self.current_processor(*args, **kwargs)

        if "raw_speech" in kwargs:
            warnings.warn("Using `raw_speech` as a keyword argument is deprecated. Use `audio` instead.")
            audio = kwargs.pop("raw_speech")
        else:
            audio = kwargs.pop("audio", None)
        sampling_rate = kwargs.pop("sampling_rate", None)
        text = kwargs.pop("text", None)
        if len(args) > 0:
            audio = args[0]
            args = args[1:]

        if audio is None and text is None:
            raise ValueError("You need to specify either an `audio` or `text` input to process.")

        if audio is not None:
            inputs = self.feature_extractor(audio, *args, sampling_rate=sampling_rate, **kwargs)
        if text is not None:
            encodings = self.tokenizer(text, **kwargs)

        if text is None:
            return inputs
        elif audio is None:
            return encodings
        else:
            inputs["labels"] = encodings["input_ids"]
            return inputs

    def pad(self, *args, **kwargs):
        """
        When used in normal mode, this method forwards all its arguments to Wav2Vec2FeatureExtractor's
        [`~Wav2Vec2FeatureExtractor.pad`] and returns its output. If used in the context
        [`~Wav2Vec2Processor.as_target_processor`] this method forwards all its arguments to PreTrainedTokenizer's
        [`~PreTrainedTokenizer.pad`]. Please refer to the docstring of the above two methods for more information.
        """
        # For backward compatibility
        if self._in_target_context_manager:
            return self.current_processor.pad(*args, **kwargs)

        input_features = kwargs.pop("input_features", None)
        labels = kwargs.pop("labels", None)
        if len(args) > 0:
            input_features = args[0]
            args = args[1:]

        if input_features is not None:
            input_features = self.feature_extractor.pad(input_features, *args, **kwargs)
        if labels is not None:
            labels = self.tokenizer.pad(labels, **kwargs)

        if labels is None:
            return input_features
        elif input_features is None:
            return labels
        else:
            input_features["labels"] = labels["input_ids"]
            return input_features

    def batch_decode(self, *args, **kwargs):
        """
        This method forwards all its arguments to PreTrainedTokenizer's [`~PreTrainedTokenizer.batch_decode`]. Please
        refer to the docstring of this method for more information.
        """
        return self.tokenizer.batch_decode(*args, **kwargs)

    def decode(self, *args, **kwargs):
        """
        This method forwards all its arguments to PreTrainedTokenizer's [`~PreTrainedTokenizer.decode`]. Please refer
        to the docstring of this method for more information.
        """
        return self.tokenizer.decode(*args, **kwargs)

    @contextmanager
    def as_target_processor(self):
        """
        Temporarily sets the tokenizer for processing the input. Useful for encoding the labels when fine-tuning
        Wav2Vec2.
        """
        warnings.warn(
            "`as_target_processor` is deprecated. You can process your "
            "labels by using the argument `text` of the regular `__call__` method (either in the same call as "
            "your audio inputs, or in a separate call."
        )
        self._in_target_context_manager = True
        self.current_processor = self.tokenizer
        yield
        self.current_processor = self.feature_extractor
        self._in_target_context_manager = False

mindnlp.transformers.models.wav2vec2.processing_wav2vec2.Wav2Vec2Processor.__call__(*args, **kwargs)

When used in normal mode, this method forwards all its arguments to Wav2Vec2FeatureExtractor's [~Wav2Vec2FeatureExtractor.__call__] and returns its output. If used in the context [~Wav2Vec2Processor.as_target_processor] this method forwards all its arguments to PreTrainedTokenizer's [~PreTrainedTokenizer.__call__]. Please refer to the docstring of the above two methods for more information.
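
Example: an illustrative call that routes audio to the feature extractor and text to the tokenizer in one pass (processor assembled as in the earlier sketch; the random waveform stands in for real audio):

import numpy as np

audio = np.random.randn(16000).astype(np.float32)  # stand-in for 1 s of 16 kHz audio

inputs = processor(audio=audio, sampling_rate=16000, text="HI WORLD")
# Audio features land under "input_values"; the tokenized text is attached as "labels".
print(inputs["input_values"][0].shape)  # (16000,)
print(inputs["labels"])                 # token ids produced by the CTC tokenizer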

Source code in mindnlp\transformers\models\wav2vec2\processing_wav2vec2.py
def __call__(self, *args, **kwargs):
    """
    When used in normal mode, this method forwards all its arguments to Wav2Vec2FeatureExtractor's
    [`~Wav2Vec2FeatureExtractor.__call__`] and returns its output. If used in the context
    [`~Wav2Vec2Processor.as_target_processor`] this method forwards all its arguments to PreTrainedTokenizer's
    [`~PreTrainedTokenizer.__call__`]. Please refer to the docstring of the above two methods for more information.
    """
    # For backward compatibility
    if self._in_target_context_manager:
        return self.current_processor(*args, **kwargs)

    if "raw_speech" in kwargs:
        warnings.warn("Using `raw_speech` as a keyword argument is deprecated. Use `audio` instead.")
        audio = kwargs.pop("raw_speech")
    else:
        audio = kwargs.pop("audio", None)
    sampling_rate = kwargs.pop("sampling_rate", None)
    text = kwargs.pop("text", None)
    if len(args) > 0:
        audio = args[0]
        args = args[1:]

    if audio is None and text is None:
        raise ValueError("You need to specify either an `audio` or `text` input to process.")

    if audio is not None:
        inputs = self.feature_extractor(audio, *args, sampling_rate=sampling_rate, **kwargs)
    if text is not None:
        encodings = self.tokenizer(text, **kwargs)

    if text is None:
        return inputs
    elif audio is None:
        return encodings
    else:
        inputs["labels"] = encodings["input_ids"]
        return inputs

mindnlp.transformers.models.wav2vec2.processing_wav2vec2.Wav2Vec2Processor.__init__(feature_extractor, tokenizer)

Initializes a new instance of the Wav2Vec2Processor class.

PARAMETER DESCRIPTION
self

The current instance of the Wav2Vec2Processor class.

TYPE: Wav2Vec2Processor

feature_extractor

The feature extractor used for processing input data. It should be an instance of a feature extraction class.

TYPE: object

tokenizer

The tokenizer used for tokenizing input data. It should be an instance of a tokenizer class.

TYPE: object

RETURNS DESCRIPTION

None.

Source code in mindnlp\transformers\models\wav2vec2\processing_wav2vec2.py
def __init__(self, feature_extractor, tokenizer):
    """
    Initializes a new instance of the Wav2Vec2Processor class.

    Args:
        self (Wav2Vec2Processor): The current instance of the Wav2Vec2Processor class.
        feature_extractor (object): The feature extractor used for processing input data.
            It should be an instance of a feature extraction class.
        tokenizer (object): The tokenizer used for tokenizing input data.
            It should be an instance of a tokenizer class.

    Returns:
        None.

    Raises:
        None.
    """
    super().__init__(feature_extractor, tokenizer)
    self.current_processor = self.feature_extractor
    self._in_target_context_manager = False

mindnlp.transformers.models.wav2vec2.processing_wav2vec2.Wav2Vec2Processor.as_target_processor()

Temporarily sets the tokenizer for processing the input. Useful for encoding the labels when fine-tuning Wav2Vec2.

Source code in mindnlp\transformers\models\wav2vec2\processing_wav2vec2.py
@contextmanager
def as_target_processor(self):
    """
    Temporarily sets the tokenizer for processing the input. Useful for encoding the labels when fine-tuning
    Wav2Vec2.
    """
    warnings.warn(
        "`as_target_processor` is deprecated. You can process your "
        "labels by using the argument `text` of the regular `__call__` method (either in the same call as "
        "your audio inputs, or in a separate call."
    )
    self._in_target_context_manager = True
    self.current_processor = self.tokenizer
    yield
    self.current_processor = self.feature_extractor
    self._in_target_context_manager = False

mindnlp.transformers.models.wav2vec2.processing_wav2vec2.Wav2Vec2Processor.batch_decode(*args, **kwargs)

This method forwards all its arguments to PreTrainedTokenizer's [~PreTrainedTokenizer.batch_decode]. Please refer to the docstring of this method for more information.
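
Example: an illustrative decode of predicted id sequences with a processor such as the ones constructed above (in practice the ids would come from an argmax over CTC logits; the values below are hypothetical):

predicted_ids = [[5, 6, 4, 7, 8, 9, 10, 11], [5, 6]]  # hypothetical id sequences
transcripts = processor.batch_decode(predicted_ids)
print(transcripts)  # one decoded string per sequence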

Source code in mindnlp\transformers\models\wav2vec2\processing_wav2vec2.py
def batch_decode(self, *args, **kwargs):
    """
    This method forwards all its arguments to PreTrainedTokenizer's [`~PreTrainedTokenizer.batch_decode`]. Please
    refer to the docstring of this method for more information.
    """
    return self.tokenizer.batch_decode(*args, **kwargs)

mindnlp.transformers.models.wav2vec2.processing_wav2vec2.Wav2Vec2Processor.decode(*args, **kwargs)

This method forwards all its arguments to PreTrainedTokenizer's [~PreTrainedTokenizer.decode]. Please refer to the docstring of this method for more information.

Source code in mindnlp\transformers\models\wav2vec2\processing_wav2vec2.py
def decode(self, *args, **kwargs):
    """
    This method forwards all its arguments to PreTrainedTokenizer's [`~PreTrainedTokenizer.decode`]. Please refer
    to the docstring of this method for more information.
    """
    return self.tokenizer.decode(*args, **kwargs)

mindnlp.transformers.models.wav2vec2.processing_wav2vec2.Wav2Vec2Processor.from_pretrained(pretrained_model_name_or_path, **kwargs) classmethod

This method creates an instance of the Wav2Vec2Processor class from a pre-trained model.

PARAMETER DESCRIPTION
cls

The class itself.

TYPE: class

pretrained_model_name_or_path

The name or path of the pre-trained model to load.

TYPE: str

RETURNS DESCRIPTION

None.

RAISES DESCRIPTION
OSError

If an OSError occurs during the loading process.

FutureWarning

If the tokenizer is being loaded from a config that does not include a tokenizer_class attribute, a FutureWarning is issued. It advises adding a 'tokenizer_class': 'Wav2Vec2CTCTokenizer' attribute to either the config.json or tokenizer_config.json file to suppress the warning.
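
Example (assuming the checkpoint is reachable from your environment):

from mindnlp.transformers.models.wav2vec2.processing_wav2vec2 import Wav2Vec2Processor

# Loads the paired feature extractor and tokenizer from a single checkpoint.
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")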

Source code in mindnlp\transformers\models\wav2vec2\processing_wav2vec2.py
@classmethod
def from_pretrained(cls, pretrained_model_name_or_path, **kwargs):
    """
    This method creates an instance of the Wav2Vec2Processor class from a pre-trained model.

    Args:
        cls (class): The class itself.
        pretrained_model_name_or_path (str): The name or path of the pre-trained model to load.

    Returns:
        None.

    Raises:
        OSError: If an OSError occurs during the loading process.
        FutureWarning: If the tokenizer is being loaded from a config that does not include a `tokenizer_class`
            attribute, a FutureWarning is issued. It advises adding a `'tokenizer_class': 'Wav2Vec2CTCTokenizer'`
            attribute to either the `config.json` or `tokenizer_config.json` file to suppress the warning.
    """
    try:
        return super().from_pretrained(pretrained_model_name_or_path, **kwargs)
    except OSError:
        warnings.warn(
            f"Loading a tokenizer inside {cls.__name__} from a config that does not"
            " include a `tokenizer_class` attribute is deprecated and will be "
            "removed in v5. Please add `'tokenizer_class': 'Wav2Vec2CTCTokenizer'`"
            " attribute to either your `config.json` or `tokenizer_config.json` "
            "file to suppress this warning: ",
            FutureWarning,
        )

        feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(pretrained_model_name_or_path, **kwargs)
        tokenizer = Wav2Vec2CTCTokenizer.from_pretrained(pretrained_model_name_or_path, **kwargs)

        return cls(feature_extractor=feature_extractor, tokenizer=tokenizer)

mindnlp.transformers.models.wav2vec2.processing_wav2vec2.Wav2Vec2Processor.pad(*args, **kwargs)

When used in normal mode, this method forwards all its arguments to Wav2Vec2FeatureExtractor's [~Wav2Vec2FeatureExtractor.pad] and returns its output. If used in the context [~Wav2Vec2Processor.as_target_processor] this method forwards all its arguments to PreTrainedTokenizer's [~PreTrainedTokenizer.pad]. Please refer to the docstring of the above two methods for more information.
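
Example: an illustrative padding step, e.g. inside a CTC data collator, with a processor such as the ones constructed above (the feature arrays and label ids below are hypothetical):

import numpy as np

features = [{"input_values": np.random.randn(16000).astype(np.float32)},
            {"input_values": np.random.randn(12000).astype(np.float32)}]
labels = [{"input_ids": [5, 6, 7]}, {"input_ids": [8, 9]}]

padded = processor.pad(input_features=features, labels=labels,
                       padding=True, return_tensors="np")
print(padded["input_values"].shape)  # (2, 16000)
print(padded["labels"].shape)        # (2, 3): the shorter label sequence is padded with the pad token id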

Source code in mindnlp\transformers\models\wav2vec2\processing_wav2vec2.py
def pad(self, *args, **kwargs):
    """
    When used in normal mode, this method forwards all its arguments to Wav2Vec2FeatureExtractor's
    [`~Wav2Vec2FeatureExtractor.pad`] and returns its output. If used in the context
    [`~Wav2Vec2Processor.as_target_processor`] this method forwards all its arguments to PreTrainedTokenizer's
    [`~PreTrainedTokenizer.pad`]. Please refer to the docstring of the above two methods for more information.
    """
    # For backward compatibility
    if self._in_target_context_manager:
        return self.current_processor.pad(*args, **kwargs)

    input_features = kwargs.pop("input_features", None)
    labels = kwargs.pop("labels", None)
    if len(args) > 0:
        input_features = args[0]
        args = args[1:]

    if input_features is not None:
        input_features = self.feature_extractor.pad(input_features, *args, **kwargs)
    if labels is not None:
        labels = self.tokenizer.pad(labels, **kwargs)

    if labels is None:
        return input_features
    elif input_features is None:
        return labels
    else:
        input_features["labels"] = labels["input_ids"]
        return input_features

mindnlp.transformers.models.wav2vec2.tokenization_wav2vec2

Tokenization class for Wav2Vec2.

mindnlp.transformers.models.wav2vec2.tokenization_wav2vec2.Wav2Vec2CTCTokenizer

Bases: PreTrainedTokenizer

Constructs a Wav2Vec2CTC tokenizer.

This tokenizer inherits from [PreTrainedTokenizer] which contains some of the main methods. Users should refer to the superclass for more information regarding such methods.

PARAMETER DESCRIPTION
vocab_file

File containing the vocabulary.

TYPE: `str`

bos_token

The beginning of sentence token.

TYPE: `str`, *optional*, defaults to `"<s>"` DEFAULT: '<s>'

eos_token

The end of sentence token.

TYPE: `str`, *optional*, defaults to `"</s>"` DEFAULT: '</s>'

unk_token

The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.

TYPE: `str`, *optional*, defaults to `"<unk>"` DEFAULT: '<unk>'

pad_token

The token used for padding, for example when batching sequences of different lengths.

TYPE: `str`, *optional*, defaults to `"<pad>"` DEFAULT: '<pad>'

word_delimiter_token

The token used for defining the end of a word.

TYPE: `str`, *optional*, defaults to `"|"` DEFAULT: '|'

do_lower_case

Whether or not to accept lowercase input and lowercase the output when decoding.

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

target_lang

A target language the tokenizer should set by default. target_lang has to be defined for multi-lingual, nested vocabulary such as facebook/mms-1b-all.

TYPE: `str`, *optional* DEFAULT: None
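
Example: an illustrative round trip with a tiny hypothetical vocabulary (real checkpoints ship their own vocab.json):

import json
from mindnlp.transformers.models.wav2vec2.tokenization_wav2vec2 import Wav2Vec2CTCTokenizer

# A tiny hypothetical CTC vocabulary; "|" is the word delimiter token.
vocab = {"<pad>": 0, "<s>": 1, "</s>": 2, "<unk>": 3, "|": 4,
         "H": 5, "I": 6, "W": 7, "O": 8, "R": 9, "L": 10, "D": 11}
with open("vocab.json", "w", encoding="utf-8") as f:
    json.dump(vocab, f)

tokenizer = Wav2Vec2CTCTokenizer("vocab.json")
ids = tokenizer("HI WORLD")["input_ids"]
print(ids)                    # one id per character; the space is encoded as "|"
print(tokenizer.decode(ids))  # "HI WORLD"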

Source code in mindnlp\transformers\models\wav2vec2\tokenization_wav2vec2.py
class Wav2Vec2CTCTokenizer(PreTrainedTokenizer):

    """
    Constructs a Wav2Vec2CTC tokenizer.

    This tokenizer inherits from [`PreTrainedTokenizer`] which contains some of the main methods. Users should refer to
    the superclass for more information regarding such methods.

    Args:
        vocab_file (`str`):
            File containing the vocabulary.
        bos_token (`str`, *optional*, defaults to `"<s>"`):
            The beginning of sentence token.
        eos_token (`str`, *optional*, defaults to `"</s>"`):
            The end of sentence token.
        unk_token (`str`, *optional*, defaults to `"<unk>"`):
            The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
            token instead.
        pad_token (`str`, *optional*, defaults to `"<pad>"`):
            The token used for padding, for example when batching sequences of different lengths.
        word_delimiter_token (`str`, *optional*, defaults to `"|"`):
            The token used for defining the end of a word.
        do_lower_case (`bool`, *optional*, defaults to `False`):
            Whether or not to accept lowercase input and lowercase the output when decoding.
        target_lang (`str`, *optional*):
            A target language the tokenizer should set by default. `target_lang` has to be defined for multi-lingual,
            nested vocabulary such as [facebook/mms-1b-all](https://hf-mirror.com/facebook/mms-1b-all).

        **kwargs
            Additional keyword arguments passed along to [`PreTrainedTokenizer`]
    """
    vocab_files_names = VOCAB_FILES_NAMES
    pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
    max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
    model_input_names = ["input_ids", "attention_mask"]

    def __init__(
        self,
        vocab_file,
        bos_token="<s>",
        eos_token="</s>",
        unk_token="<unk>",
        pad_token="<pad>",
        word_delimiter_token="|",
        replace_word_delimiter_char=" ",
        do_lower_case=False,
        target_lang=None,
        **kwargs,
    ):
        """
        Initializes a new instance of the Wav2Vec2CTCTokenizer class.

        Args:
            self (Wav2Vec2CTCTokenizer): The instance of the Wav2Vec2CTCTokenizer class.
            vocab_file (str): The path to the vocabulary file.
            bos_token (str, optional): The beginning of sentence token. Default is '<s>'.
            eos_token (str, optional): The end of sentence token. Default is '</s>'.
            unk_token (str, optional): The unknown token. Default is '<unk>'.
            pad_token (str, optional): The padding token. Default is '<pad>'.
            word_delimiter_token (str, optional): The word delimiter token. Default is '|'.
            replace_word_delimiter_char (str, optional): The character used to replace the word delimiter. Default is ' '.
            do_lower_case (bool, optional): Whether to convert all tokens to lowercase. Default is False.
            target_lang (str, optional): The target language for encoding. Default is None.
            **kwargs: Additional keyword arguments.

        Returns:
            None

        Raises:
            None
        """
        self._word_delimiter_token = word_delimiter_token

        self.do_lower_case = do_lower_case
        self.replace_word_delimiter_char = replace_word_delimiter_char
        self.target_lang = target_lang

        with open(vocab_file, encoding="utf-8") as vocab_handle:
            self.vocab = json.load(vocab_handle)

        # if target lang is defined vocab must be a nested dict
        # with each target lang being one vocabulary
        if target_lang is not None:
            self.encoder = self.vocab[target_lang]
        else:
            self.encoder = self.vocab

        self.decoder = {v: k for k, v in self.encoder.items()}

        super().__init__(
            unk_token=unk_token,
            bos_token=bos_token,
            eos_token=eos_token,
            pad_token=pad_token,
            do_lower_case=do_lower_case,
            word_delimiter_token=word_delimiter_token,
            replace_word_delimiter_char=replace_word_delimiter_char,
            target_lang=target_lang,
            **kwargs,
        )

        # make sure that tokens made of several
        # characters are not split at tokenization
        for token in self.encoder.keys():
            if len(token) > 1:
                self.add_tokens(AddedToken(token, rstrip=True, lstrip=True, normalized=False))

    def set_target_lang(self, target_lang: str):
        """
        Set the target language of a nested multi-lingual dictionary
        """
        if self.vocab == self.encoder:
            raise ValueError(f"{self.vocab} is not a multi-lingual, nested tokenizer. Cannot set target language.")

        if target_lang not in self.vocab:
            raise ValueError(f"{target_lang} does not exist. Choose one of {', '.join(self.vocab.keys())}.")

        self.target_lang = target_lang
        self.init_kwargs["target_lang"] = target_lang
        self.encoder = self.vocab[target_lang]
        self.decoder = {v: k for k, v in self.encoder.items()}

        # make sure that tokens made of several
        # characters are not split at tokenization
        for token in self.encoder.keys():
            if len(token) > 1:
                self.add_tokens(AddedToken(token, rstrip=True, lstrip=True, normalized=False))

    @property
    def word_delimiter_token(self) -> str:
        """
        `str`: Word delimiter token. Logs an error if used before it has been set.
        """
        if self._word_delimiter_token is None and self.verbose:
            logger.error("Using word_delimiter_token, but it is not set yet.")
            return None
        return str(self._word_delimiter_token)

    @property
    def word_delimiter_token_id(self) -> Optional[int]:
        """
        `Optional[int]`: Id of the word_delimiter_token in the vocabulary. Returns `None` if the token has not been
        set.
        """
        if self._word_delimiter_token is None:
            return None
        return self.convert_tokens_to_ids(self.word_delimiter_token)

    @word_delimiter_token.setter
    def word_delimiter_token(self, value):
        """
        Sets the word delimiter token for the Wav2Vec2CTCTokenizer.

        Args:
            self (Wav2Vec2CTCTokenizer): The instance of the Wav2Vec2CTCTokenizer class.
            value (str): The word delimiter token to be set.

        Returns:
            None.

        Raises:
            None.
        """
        self._word_delimiter_token = value

    @word_delimiter_token_id.setter
    def word_delimiter_token_id(self, value):
        """
        Sets the word delimiter token ID for the Wav2Vec2CTCTokenizer.

        Args:
            self (Wav2Vec2CTCTokenizer): The Wav2Vec2CTCTokenizer instance.
            value (str): The token to use as the word delimiter; it is converted to its id via `convert_tokens_to_ids`.

        Returns:
            None.

        Raises:
            None.
        """
        self._word_delimiter_token = self.convert_tokens_to_ids(value)

    @property
    def vocab_size(self) -> int:
        """
        Returns the size of the vocabulary used by the Wav2Vec2CTCTokenizer.

        Args:
            self: An instance of the Wav2Vec2CTCTokenizer class.

        Returns:
            int: The size of the vocabulary, which represents the total number of unique tokens in the decoder.

        Raises:
            None.

        Example:
            ```python
            >>> tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("facebook/wav2vec2-base-960h")
            >>> tokenizer.vocab_size
            32
            ```
        """
        return len(self.decoder)

    def get_vocab(self) -> Dict:
        """
        Returns the vocabulary used by the Wav2Vec2CTCTokenizer.

        Args:
            self (Wav2Vec2CTCTokenizer): An instance of the Wav2Vec2CTCTokenizer class.

        Returns:
            Dict: A dictionary representing the vocabulary used by the tokenizer.
                The keys are the tokens and the values are the corresponding token IDs.

        Raises:
            None.

        This method retrieves the vocabulary used by the Wav2Vec2CTCTokenizer instance. The vocabulary is a dictionary
        that combines the encoder and added_tokens_encoder dictionaries. The encoder dictionary maps tokens to unique
        integer IDs, while the added_tokens_encoder dictionary contains additional tokens added by the user.
        The resulting vocabulary dictionary is returned.
        """
        vocab = dict(self.encoder)
        vocab.update(self.added_tokens_encoder)
        return vocab

    def _add_tokens(self, new_tokens: Union[List[str], List[AddedToken]], special_tokens: bool = False) -> int:
        """
        Add tokens to the Wav2Vec2CTCTokenizer's vocabulary.

        Args:
            self (Wav2Vec2CTCTokenizer): The instance of the Wav2Vec2CTCTokenizer class.
            new_tokens (Union[List[str], List[AddedToken]]): A list of new tokens to be added to the vocabulary.
                Each token can be either a string or an instance of AddedToken.
            special_tokens (bool, optional): A flag indicating whether the new tokens are special tokens.
                Defaults to False.

        Returns:
            int: The number of tokens added to the vocabulary.

        Raises:
            None

        This method takes a list of new tokens and adds them to the vocabulary of the Wav2Vec2CTCTokenizer.
        The new tokens can be either strings or instances of AddedToken. If a token is a string, a default AddedToken
        object will be created with the token as its text and the following default values for its attributes:
        rstrip=False, lstrip=False, normalized=False. If a token is already an instance of AddedToken,
        it will be added as is. The method then calls the super()._add_tokens() method to add the tokens to the
        vocabulary. The special_tokens flag can be used to indicate whether the new tokens are special tokens.
        """
        # Overwritten to never strip!
        to_add = []
        for token in new_tokens:
            if isinstance(token, str):
                to_add.append(AddedToken(token, rstrip=False, lstrip=False, normalized=False))
            else:
                to_add.append(token)

        return super()._add_tokens(to_add, special_tokens)

    def _tokenize(self, text, **kwargs):
        """
        Converts a string into a sequence of tokens (string), using the tokenizer.
        """
        if self.do_lower_case:
            text = text.upper()

        return list(text.replace(" ", self.word_delimiter_token))

    def _convert_token_to_id(self, token: str) -> int:
        """Converts a token (str) in an index (integer) using the vocab."""
        return self.encoder.get(token, self.encoder.get(self.unk_token))

    def _convert_id_to_token(self, index: int) -> str:
        """Converts an index (integer) in a token (str) using the vocab."""
        result = self.decoder.get(index, self.unk_token)
        return result

    def convert_tokens_to_string(
        self,
        tokens: List[str],
        group_tokens: bool = True,
        spaces_between_special_tokens: bool = False,
        output_char_offsets: bool = False,
        output_word_offsets: bool = False,
    ) -> Dict[str, Union[str, float]]:
        """
        Converts connectionist-temporal-classification (CTC) output tokens into a single string.
        """
        if len(tokens) == 0:
            return {"text": "", "char_offsets": [], "word_offsets": []}
        # group same tokens into non-repeating tokens in CTC style decoding
        if group_tokens:
            chars, char_repetitions = zip(*((token, len(list(group_iter))) for token, group_iter in groupby(tokens)))
        else:
            chars = tokens
            char_repetitions = len(tokens) * [1]

        # filter self.pad_token which is used as CTC-blank token
        processed_chars = list(filter(lambda char: char != self.pad_token, chars))

        # replace delimiter token
        processed_chars = [
            self.replace_word_delimiter_char if char == self.word_delimiter_token else char for char in processed_chars
        ]

        # retrieve offsets
        char_offsets = word_offsets = None
        if output_char_offsets or output_word_offsets:
            char_offsets = self._compute_offsets(char_repetitions, chars, self.pad_token)

            if len(char_offsets) != len(processed_chars):
                raise ValueError(
                    f"`char_offsets`: {char_offsets} and `processed_tokens`: {processed_chars}"
                    " have to be of the same length, but are: "
                    f"`len(offsets)`: {len(char_offsets)} and `len(processed_tokens)`:"
                    f" {len(processed_chars)}"
                )

            # set tokens to correct processed token
            for i, char in enumerate(processed_chars):
                char_offsets[i]["char"] = char

            # retrieve word offsets from character offsets
            word_offsets = None
            if output_word_offsets:
                word_offsets = self._get_word_offsets(char_offsets, self.replace_word_delimiter_char)

            # don't output chars if not set to True
            if not output_char_offsets:
                char_offsets = None

        # join to string
        join_char = " " if spaces_between_special_tokens else ""
        string = join_char.join(processed_chars).strip()

        if self.do_lower_case:
            string = string.lower()

        return {"text": string, "char_offsets": char_offsets, "word_offsets": word_offsets}

    @staticmethod
    def _compute_offsets(
        char_repetitions: List[int], chars: List[str], ctc_token: int
    ) -> List[Dict[str, Union[str, int]]]:
        """
        Compute offsets for characters based on char repetitions and tokens.

        Args:
            char_repetitions (List[int]): A list of integers representing the number of repetitions for each character.
            chars (List[str]): A list of characters.
            ctc_token (int): The CTC token to be filtered out from the offsets.

        Returns:
            List[Dict[str, Union[str, int]]]: A list of dictionaries where each dictionary contains the character,
                start offset, and end offset.

        Raises:
            None
        """
        end_indices = np.asarray(char_repetitions).cumsum()
        start_indices = np.concatenate(([0], end_indices[:-1]))

        offsets = [
            {"char": t, "start_offset": s, "end_offset": e} for t, s, e in zip(chars, start_indices, end_indices)
        ]

        # filter out CTC token
        offsets = list(filter(lambda offsets: offsets["char"] != ctc_token, offsets))
        return offsets

    @staticmethod
    def _get_word_offsets(
        offsets: Dict[str, Union[str, float]], word_delimiter_char: str = " "
    ) -> Dict[str, Union[str, float]]:
        """
        Method to extract word offsets from a given set of character offsets.

        Args:
            offsets (Dict[str, Union[str, float]]): A dictionary containing character offsets with keys 'char',
                'start_offset', and 'end_offset'. The 'char' key represents the character, 'start_offset' represents
                the start offset, and 'end_offset' represents the end offset.
            word_delimiter_char (str, optional): The character used as a word delimiter. Defaults to a space character.

        Returns:
            Dict[str, Union[str, float]]: A dictionary containing word offsets with keys 'word', 'start_offset',
                and 'end_offset'. The 'word' key represents the extracted word, 'start_offset' represents the start
                offset, and 'end_offset' represents the end offset.

        Raises:
            None
        """
        word_offsets = []

        last_state = "SPACE"
        word = ""
        start_offset = 0
        end_offset = 0
        for offset in offsets:
            char = offset["char"]
            state = "SPACE" if char == word_delimiter_char else "WORD"

            if state == last_state:
                # If we are in the same state as before, we simply repeat what we've done before
                end_offset = offset["end_offset"]
                word += char
            else:
                # Switching state
                if state == "SPACE":
                    # Finishing a word
                    word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
                else:
                    # Starting a new word
                    start_offset = offset["start_offset"]
                    end_offset = offset["end_offset"]
                    word = char

            last_state = state
        if last_state == "WORD":
            word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})

        return word_offsets

    def prepare_for_tokenization(self, text, is_split_into_words=False, **kwargs):
        """
        Prepare the input text for tokenization.

        Args:
            self (Wav2Vec2CTCTokenizer): The instance of the Wav2Vec2CTCTokenizer class.
            text (str): The input text to be prepared for tokenization.
            is_split_into_words (bool): A flag indicating whether the input text is already split into words.
                If True, the input text is expected to be split into words;
                otherwise, the input text is treated as a continuous string.
                Defaults to False.

        Returns:
            tuple: A tuple containing the prepared text and optional keyword arguments.

        Raises:
            None
        """
        if is_split_into_words:
            text = " " + text
        return (text, kwargs)

    def _decode(
        self,
        token_ids: List[int],
        skip_special_tokens: bool = False,
        clean_up_tokenization_spaces: bool = None,
        group_tokens: bool = True,
        spaces_between_special_tokens: bool = False,
        output_word_offsets: Optional[bool] = False,
        output_char_offsets: Optional[bool] = False,
    ) -> str:
        """
        special _decode function is needed for Wav2Vec2Tokenizer because added tokens should be treated exactly the
        same as tokens of the base vocabulary and therefore the function `convert_tokens_to_string` has to be called on
        the whole token list and not individually on added tokens
        """
        filtered_tokens = self.convert_ids_to_tokens(token_ids, skip_special_tokens=skip_special_tokens)

        result = []
        for token in filtered_tokens:
            if skip_special_tokens and token in self.all_special_ids:
                continue
            result.append(token)

        string_output = self.convert_tokens_to_string(
            result,
            group_tokens=group_tokens,
            spaces_between_special_tokens=spaces_between_special_tokens,
            output_word_offsets=output_word_offsets,
            output_char_offsets=output_char_offsets,
        )

        text = string_output["text"]

        clean_up_tokenization_spaces = (
            clean_up_tokenization_spaces
            if clean_up_tokenization_spaces is not None
            else self.clean_up_tokenization_spaces
        )
        if clean_up_tokenization_spaces:
            text = self.clean_up_tokenization(text)

        if output_word_offsets or output_char_offsets:
            return Wav2Vec2CTCTokenizerOutput(
                text=text,
                char_offsets=string_output["char_offsets"],
                word_offsets=string_output["word_offsets"],
            )
        else:
            return text

    # overwritten from `tokenization_utils_base.py` because tokenizer can output
    # `ModelOutput` which should not be a list for batched output and
    # because we need docs for `output_char_offsets` here
    def batch_decode(
        self,
        sequences: Union[List[int], List[List[int]], "np.ndarray", "Tensor"],
        skip_special_tokens: bool = False,
        clean_up_tokenization_spaces: bool = None,
        output_char_offsets: bool = False,
        output_word_offsets: bool = False,
        **kwargs,
    ) -> List[str]:
        """
        Convert a list of lists of token ids into a list of strings by calling decode.

        Args:
            sequences (`Union[List[int], List[List[int]], np.ndarray, torch.Tensor, tf.Tensor]`):
                List of tokenized input ids. Can be obtained using the `__call__` method.
            skip_special_tokens (`bool`, *optional*, defaults to `False`):
                Whether or not to remove special tokens in the decoding.
            clean_up_tokenization_spaces (`bool`, *optional*):
                Whether or not to clean up the tokenization spaces.
            output_char_offsets (`bool`, *optional*, defaults to `False`):
                Whether or not to output character offsets. Character offsets can be used in combination with the
                sampling rate and model downsampling rate to compute the time-stamps of transcribed characters.

                <Tip>

                Please take a look at the Example of [`~Wav2Vec2CTCTokenizer.decode`] to better understand how to make
                use of `output_char_offsets`. [`~Wav2Vec2CTCTokenizer.batch_decode`] works the same way with batched
                output.

                </Tip>

            output_word_offsets (`bool`, *optional*, defaults to `False`):
                Whether or not to output word offsets. Word offsets can be used in combination with the sampling rate
                and model downsampling rate to compute the time-stamps of transcribed words.

                <Tip>

                Please take a look at the Example of [`~Wav2Vec2CTCTokenizer.decode`] to better understand how to make
                use of `output_word_offsets`. [`~Wav2Vec2CTCTokenizer.batch_decode`] works the same way with batched
                output.

                </Tip>

            kwargs (additional keyword arguments, *optional*):
                Will be passed to the underlying model specific decode method.

        Returns:
            `List[str]` or [`~models.wav2vec2.tokenization_wav2vec2.Wav2Vec2CTCTokenizerOutput`]: The list of decoded
                sentences. Will be a [`~models.wav2vec2.tokenization_wav2vec2.Wav2Vec2CTCTokenizerOutput`] when
                `output_char_offsets == True` or `output_word_offsets == True`.
        """
        batch_decoded = [
            self.decode(
                seq,
                skip_special_tokens=skip_special_tokens,
                clean_up_tokenization_spaces=clean_up_tokenization_spaces,
                output_char_offsets=output_char_offsets,
                output_word_offsets=output_word_offsets,
                **kwargs,
            )
            for seq in sequences
        ]
        if output_char_offsets or output_word_offsets:
            # transform list of dicts to dict of lists
            return Wav2Vec2CTCTokenizerOutput({k: [d[k] for d in batch_decoded] for k in batch_decoded[0]})

        return batch_decoded

    # overwritten from `tokenization_utils_base.py` because we need docs for `output_char_offsets`
    # and `output_word_offsets` here
    def decode(
        self,
        token_ids: Union[int, List[int], "np.ndarray", "Tensor"],
        skip_special_tokens: bool = False,
        clean_up_tokenization_spaces: bool = None,
        output_char_offsets: bool = False,
        output_word_offsets: bool = False,
        **kwargs,
    ) -> str:
        """
        Converts a sequence of ids into a string, using the tokenizer and vocabulary with options to remove special
        tokens and clean up tokenization spaces.

        Similar to doing `self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids))`.

        Args:
            token_ids (`Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]`):
                List of tokenized input ids. Can be obtained using the `__call__` method.
            skip_special_tokens (`bool`, *optional*, defaults to `False`):
                Whether or not to remove special tokens in the decoding.
            clean_up_tokenization_spaces (`bool`, *optional*):
                Whether or not to clean up the tokenization spaces.
            output_char_offsets (`bool`, *optional*, defaults to `False`):
                Whether or not to output character offsets. Character offsets can be used in combination with the
                sampling rate and model downsampling rate to compute the time-stamps of transcribed characters.

                <Tip>

                Please take a look at the example below to better understand how to make use of `output_char_offsets`.

                </Tip>

            output_word_offsets (`bool`, *optional*, defaults to `False`):
                Whether or not to output word offsets. Word offsets can be used in combination with the sampling rate
                and model downsampling rate to compute the time-stamps of transcribed words.

                <Tip>

                Please take a look at the example below to better understand how to make use of `output_word_offsets`.

                </Tip>

            kwargs (additional keyword arguments, *optional*):
                Will be passed to the underlying model specific decode method.

        Returns:
            `str` or [`~models.wav2vec2.tokenization_wav2vec2.Wav2Vec2CTCTokenizerOutput`]: The decoded
                sentence. Will be a [`~models.wav2vec2.tokenization_wav2vec2.Wav2Vec2CTCTokenizerOutput`] when
                `output_char_offsets == True` or `output_word_offsets == True`.

        Example:
            ```python
            >>> # Let's see how to retrieve time steps for a model
            >>> from mindnlp.transformers import AutoTokenizer, AutoFeatureExtractor, AutoModelForCTC
            >>> from datasets import load_dataset
            >>> import datasets
            ...
            >>> # import model, feature extractor, tokenizer
            >>> model = AutoModelForCTC.from_pretrained("facebook/wav2vec2-base-960h")
            >>> tokenizer = AutoTokenizer.from_pretrained("facebook/wav2vec2-base-960h")
            >>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
            ...
            >>> # load first sample of English common_voice
            >>> dataset = load_dataset("mozilla-foundation/common_voice_11_0", "en", split="train", streaming=True)
            >>> dataset = dataset.cast_column("audio", datasets.Audio(sampling_rate=16_000))
            >>> dataset_iter = iter(dataset)
            >>> sample = next(dataset_iter)
            ...
            >>> # forward sample through model to get greedily predicted transcription ids
            >>> input_values = feature_extractor(sample["audio"]["array"], return_tensors="ms").input_values
            >>> logits = model(input_values).logits[0]
            >>> pred_ids = logits.argmax(axis=-1)
            ...
            >>> # retrieve word stamps (analogous commands for `output_char_offsets`)
            >>> outputs = tokenizer.decode(pred_ids, output_word_offsets=True)
            >>> # compute `time_offset` in seconds as product of downsampling ratio and sampling_rate
            >>> time_offset = model.config.inputs_to_logits_ratio / feature_extractor.sampling_rate
            ...
            >>> word_offsets = [
            ...     {
            ...         "word": d["word"],
            ...         "start_time": round(d["start_offset"] * time_offset, 2),
            ...         "end_time": round(d["end_offset"] * time_offset, 2),
            ...     }
            ...     for d in outputs.word_offsets
            ... ]
            >>> # compare word offsets with audio `en_train_0/common_voice_en_19121553.mp3` online on the dataset viewer:
            >>> # https://hf-mirror.com/datasets/mozilla-foundation/common_voice_11_0/viewer/en
            >>> word_offsets[:3]
            [{'word': 'THE', 'start_time': 0.7, 'end_time': 0.78}, {'word': 'TRICK', 'start_time': 0.88, 'end_time': 1.08}, {'word': 'APPEARS', 'start_time': 1.2, 'end_time': 1.64}]
            ```
        """
        # Convert inputs to python lists
        token_ids = to_py_obj(token_ids)

        return self._decode(
            token_ids=token_ids,
            skip_special_tokens=skip_special_tokens,
            clean_up_tokenization_spaces=clean_up_tokenization_spaces,
            output_char_offsets=output_char_offsets,
            output_word_offsets=output_word_offsets,
            **kwargs,
        )

    def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
        """
        Save the vocabulary to a specified directory.

        Args:
            self: The instance of the Wav2Vec2CTCTokenizer class.
            save_directory (str): The directory where the vocabulary will be saved.
            filename_prefix (Optional[str]): An optional prefix to be added to the filename. Defaults to None.

        Returns:
            Tuple[str]: A tuple containing the file path of the saved vocabulary.

        Raises:
            None: if `save_directory` is not a valid directory, an error is logged and `None` is returned instead of raising.
        """
        if not os.path.isdir(save_directory):
            logger.error(f"Vocabulary path ({save_directory}) should be a directory")
            return
        vocab_file = os.path.join(
            save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
        )

        with open(vocab_file, "w", encoding="utf-8") as f:
            f.write(json.dumps(self.vocab, indent=2, sort_keys=True, ensure_ascii=False) + "\n")

        return (vocab_file,)
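
The following is a minimal usage sketch (not part of the source above): it assumes `Wav2Vec2CTCTokenizer` is exported from `mindnlp.transformers` and uses the facebook/wav2vec2-base-960h vocabulary referenced throughout this page. Note that `decode` performs CTC-style grouping by default, so repeated characters collapse unless `group_tokens=False` is forwarded to `_decode`.

Example
>>> from mindnlp.transformers import Wav2Vec2CTCTokenizer
>>> tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("facebook/wav2vec2-base-960h")
>>> tokens = tokenizer.tokenize("HELLO WORLD")   # character-level; spaces become "|"
>>> tokens
['H', 'E', 'L', 'L', 'O', '|', 'W', 'O', 'R', 'L', 'D']
>>> ids = tokenizer.convert_tokens_to_ids(tokens)
>>> tokenizer.decode(ids)                        # CTC grouping collapses the double "L"
'HELO WORLD'
>>> tokenizer.decode(ids, group_tokens=False)
'HELLO WORLD'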

mindnlp.transformers.models.wav2vec2.tokenization_wav2vec2.Wav2Vec2CTCTokenizer.vocab_size: int property

Returns the size of the vocabulary used by the Wav2Vec2CTCTokenizer.

PARAMETER DESCRIPTION
self

An instance of the Wav2Vec2CTCTokenizer class.

RETURNS DESCRIPTION
int

The size of the vocabulary, which represents the total number of unique tokens in the decoder.

TYPE: int

Example
>>> tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("facebook/wav2vec2-base-960h")
>>> tokenizer.vocab_size
32

mindnlp.transformers.models.wav2vec2.tokenization_wav2vec2.Wav2Vec2CTCTokenizer.word_delimiter_token: str property writable

str: Word delimiter token. Logs an error if used before it has been set.

mindnlp.transformers.models.wav2vec2.tokenization_wav2vec2.Wav2Vec2CTCTokenizer.word_delimiter_token_id: Optional[int] property writable

Optional[int]: Id of the word_delimiter_token in the vocabulary. Returns None if the token has not been set.

mindnlp.transformers.models.wav2vec2.tokenization_wav2vec2.Wav2Vec2CTCTokenizer.__init__(vocab_file, bos_token='<s>', eos_token='</s>', unk_token='<unk>', pad_token='<pad>', word_delimiter_token='|', replace_word_delimiter_char=' ', do_lower_case=False, target_lang=None, **kwargs)

Initializes a new instance of the Wav2Vec2CTCTokenizer class.

PARAMETER DESCRIPTION
self

The instance of the Wav2Vec2CTCTokenizer class.

TYPE: Wav2Vec2CTCTokenizer

vocab_file

The path to the vocabulary file.

TYPE: str

bos_token

The beginning of sentence token. Default is '<s>'.

TYPE: str DEFAULT: '<s>'

eos_token

The end of sentence token. Default is '</s>'.

TYPE: str DEFAULT: '</s>'

unk_token

The unknown token. Default is '<unk>'.

TYPE: str DEFAULT: '<unk>'

pad_token

The padding token. Default is '<pad>'.

TYPE: str DEFAULT: '<pad>'

word_delimiter_token

The word delimiter token. Default is '|'.

TYPE: str DEFAULT: '|'

replace_word_delimiter_char

The character used to replace the word delimiter. Default is ' '.

TYPE: str DEFAULT: ' '

do_lower_case

Whether to convert all tokens to lowercase. Default is False.

TYPE: bool DEFAULT: False

target_lang

The target language for encoding. Default is None.

TYPE: str DEFAULT: None

**kwargs

Additional keyword arguments.

DEFAULT: {}

RETURNS DESCRIPTION

None

Source code in mindnlp\transformers\models\wav2vec2\tokenization_wav2vec2.py
def __init__(
    self,
    vocab_file,
    bos_token="<s>",
    eos_token="</s>",
    unk_token="<unk>",
    pad_token="<pad>",
    word_delimiter_token="|",
    replace_word_delimiter_char=" ",
    do_lower_case=False,
    target_lang=None,
    **kwargs,
):
    """
    Initializes a new instance of the Wav2Vec2CTCTokenizer class.

    Args:
        self (Wav2Vec2CTCTokenizer): The instance of the Wav2Vec2CTCTokenizer class.
        vocab_file (str): The path to the vocabulary file.
        bos_token (str, optional): The beginning of sentence token. Default is '<s>'.
        eos_token (str, optional): The end of sentence token. Default is '</s>'.
        unk_token (str, optional): The unknown token. Default is '<unk>'.
        pad_token (str, optional): The padding token. Default is '<pad>'.
        word_delimiter_token (str, optional): The word delimiter token. Default is '|'.
        replace_word_delimiter_char (str, optional): The character used to replace the word delimiter. Default is ' '.
        do_lower_case (bool, optional): Whether to convert all tokens to lowercase. Default is False.
        target_lang (str, optional): The target language for encoding. Default is None.
        **kwargs: Additional keyword arguments.

    Returns:
        None

    Raises:
        None
    """
    self._word_delimiter_token = word_delimiter_token

    self.do_lower_case = do_lower_case
    self.replace_word_delimiter_char = replace_word_delimiter_char
    self.target_lang = target_lang

    with open(vocab_file, encoding="utf-8") as vocab_handle:
        self.vocab = json.load(vocab_handle)

    # if target lang is defined vocab must be a nested dict
    # with each target lang being one vocabulary
    if target_lang is not None:
        self.encoder = self.vocab[target_lang]
    else:
        self.encoder = self.vocab

    self.decoder = {v: k for k, v in self.encoder.items()}

    super().__init__(
        unk_token=unk_token,
        bos_token=bos_token,
        eos_token=eos_token,
        pad_token=pad_token,
        do_lower_case=do_lower_case,
        word_delimiter_token=word_delimiter_token,
        replace_word_delimiter_char=replace_word_delimiter_char,
        target_lang=target_lang,
        **kwargs,
    )

    # make sure that tokens made of several
    # characters are not split at tokenization
    for token in self.encoder.keys():
        if len(token) > 1:
            self.add_tokens(AddedToken(token, rstrip=True, lstrip=True, normalized=False))

mindnlp.transformers.models.wav2vec2.tokenization_wav2vec2.Wav2Vec2CTCTokenizer.batch_decode(sequences, skip_special_tokens=False, clean_up_tokenization_spaces=None, output_char_offsets=False, output_word_offsets=False, **kwargs)

Convert a list of lists of token ids into a list of strings by calling decode.

PARAMETER DESCRIPTION
sequences

List of tokenized input ids. Can be obtained using the __call__ method.

TYPE: `Union[List[int], List[List[int]], np.ndarray, torch.Tensor, tf.Tensor]`

skip_special_tokens

Whether or not to remove special tokens in the decoding.

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

clean_up_tokenization_spaces

Whether or not to clean up the tokenization spaces.

TYPE: `bool`, *optional* DEFAULT: None

output_char_offsets

Whether or not to output character offsets. Character offsets can be used in combination with the sampling rate and model downsampling rate to compute the time-stamps of transcribed characters.

Please take a look at the Example of [~Wav2Vec2CTCTokenizer.decode] to better understand how to make use of output_char_offsets. [~Wav2Vec2CTCTokenizer.batch_decode] works the same way with batched output.

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

output_word_offsets

Whether or not to output word offsets. Word offsets can be used in combination with the sampling rate and model downsampling rate to compute the time-stamps of transcribed words.

Please take a look at the Example of [~Wav2Vec2CTCTokenizer.decode] to better understand how to make use of output_word_offsets. [~Wav2Vec2CTCTokenizer.batch_decode] works the same way with batched output.

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

kwargs

Will be passed to the underlying model specific decode method.

TYPE: additional keyword arguments, *optional* DEFAULT: {}

RETURNS DESCRIPTION
List[str]

List[str] or [~models.wav2vec2.tokenization_wav2vec2.Wav2Vec2CTCTokenizerOutput]: The list of decoded sentences. Will be a [~models.wav2vec2.tokenization_wav2vec2.Wav2Vec2CTCTokenizerOutput] when output_char_offsets == True or output_word_offsets == True.

Source code in mindnlp\transformers\models\wav2vec2\tokenization_wav2vec2.py
def batch_decode(
    self,
    sequences: Union[List[int], List[List[int]], "np.ndarray", "Tensor"],
    skip_special_tokens: bool = False,
    clean_up_tokenization_spaces: bool = None,
    output_char_offsets: bool = False,
    output_word_offsets: bool = False,
    **kwargs,
) -> List[str]:
    """
    Convert a list of lists of token ids into a list of strings by calling decode.

    Args:
        sequences (`Union[List[int], List[List[int]], np.ndarray, torch.Tensor, tf.Tensor]`):
            List of tokenized input ids. Can be obtained using the `__call__` method.
        skip_special_tokens (`bool`, *optional*, defaults to `False`):
            Whether or not to remove special tokens in the decoding.
        clean_up_tokenization_spaces (`bool`, *optional*):
            Whether or not to clean up the tokenization spaces.
        output_char_offsets (`bool`, *optional*, defaults to `False`):
            Whether or not to output character offsets. Character offsets can be used in combination with the
            sampling rate and model downsampling rate to compute the time-stamps of transcribed characters.

            <Tip>

            Please take a look at the Example of [`~Wav2Vec2CTCTokenizer.decode`] to better understand how to make
            use of `output_char_offsets`. [`~Wav2Vec2CTCTokenizer.batch_decode`] works the same way with batched
            output.

            </Tip>

        output_word_offsets (`bool`, *optional*, defaults to `False`):
            Whether or not to output word offsets. Word offsets can be used in combination with the sampling rate
            and model downsampling rate to compute the time-stamps of transcribed words.

            <Tip>

            Please take a look at the Example of [`~Wav2Vec2CTCTokenizer.decode`] to better understand how to make
            use of `output_word_offsets`. [`~Wav2Vec2CTCTokenizer.batch_decode`] works the same way with batched
            output.

            </Tip>

        kwargs (additional keyword arguments, *optional*):
            Will be passed to the underlying model specific decode method.

    Returns:
        `List[str]` or [`~models.wav2vec2.tokenization_wav2vec2.Wav2Vec2CTCTokenizerOutput`]: The list of decoded
            sentences. Will be a [`~models.wav2vec2.tokenization_wav2vec2.Wav2Vec2CTCTokenizerOutput`] when
            `output_char_offsets == True` or `output_word_offsets == True`.
    """
    batch_decoded = [
        self.decode(
            seq,
            skip_special_tokens=skip_special_tokens,
            clean_up_tokenization_spaces=clean_up_tokenization_spaces,
            output_char_offsets=output_char_offsets,
            output_word_offsets=output_word_offsets,
            **kwargs,
        )
        for seq in sequences
    ]
    if output_char_offsets or output_word_offsets:
        # transform list of dicts to dict of lists
        return Wav2Vec2CTCTokenizerOutput({k: [d[k] for d in batch_decoded] for k in batch_decoded[0]})

    return batch_decoded
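
As a rough, illustrative sketch of driving `batch_decode` (assuming `tokenizer` was loaded as above and `logits` is the `(batch, time, vocab)` output of a Wav2Vec2 CTC model already converted to a NumPy array): greedy ids are decoded per sample, and when offsets are requested the batch result is a single `Wav2Vec2CTCTokenizerOutput` whose fields are lists with one entry per sample.

Example
>>> import numpy as np
>>> pred_ids = np.argmax(logits, axis=-1)                      # shape (batch, time)
>>> transcriptions = tokenizer.batch_decode(pred_ids)          # List[str], one per sample
>>> outputs = tokenizer.batch_decode(pred_ids, output_word_offsets=True)
>>> # outputs.text and outputs.word_offsets are lists whose length equals the batch size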

mindnlp.transformers.models.wav2vec2.tokenization_wav2vec2.Wav2Vec2CTCTokenizer.convert_tokens_to_string(tokens, group_tokens=True, spaces_between_special_tokens=False, output_char_offsets=False, output_word_offsets=False)

Converts connectionist-temporal-classification (CTC) output tokens into a single string.

Source code in mindnlp\transformers\models\wav2vec2\tokenization_wav2vec2.py
def convert_tokens_to_string(
    self,
    tokens: List[str],
    group_tokens: bool = True,
    spaces_between_special_tokens: bool = False,
    output_char_offsets: bool = False,
    output_word_offsets: bool = False,
) -> Dict[str, Union[str, float]]:
    """
    Converts connectionist-temporal-classification (CTC) output tokens into a single string.
    """
    if len(tokens) == 0:
        return {"text": "", "char_offsets": [], "word_offsets": []}
    # group same tokens into non-repeating tokens in CTC style decoding
    if group_tokens:
        chars, char_repetitions = zip(*((token, len(list(group_iter))) for token, group_iter in groupby(tokens)))
    else:
        chars = tokens
        char_repetitions = len(tokens) * [1]

    # filter self.pad_token which is used as CTC-blank token
    processed_chars = list(filter(lambda char: char != self.pad_token, chars))

    # replace delimiter token
    processed_chars = [
        self.replace_word_delimiter_char if char == self.word_delimiter_token else char for char in processed_chars
    ]

    # retrieve offsets
    char_offsets = word_offsets = None
    if output_char_offsets or output_word_offsets:
        char_offsets = self._compute_offsets(char_repetitions, chars, self.pad_token)

        if len(char_offsets) != len(processed_chars):
            raise ValueError(
                f"`char_offsets`: {char_offsets} and `processed_tokens`: {processed_chars}"
                " have to be of the same length, but are: "
                f"`len(offsets)`: {len(char_offsets)} and `len(processed_tokens)`:"
                f" {len(processed_chars)}"
            )

        # set tokens to correct processed token
        for i, char in enumerate(processed_chars):
            char_offsets[i]["char"] = char

        # retrieve word offsets from character offsets
        word_offsets = None
        if output_word_offsets:
            word_offsets = self._get_word_offsets(char_offsets, self.replace_word_delimiter_char)

        # don't output chars if not set to True
        if not output_char_offsets:
            char_offsets = None

    # join to string
    join_char = " " if spaces_between_special_tokens else ""
    string = join_char.join(processed_chars).strip()

    if self.do_lower_case:
        string = string.lower()

    return {"text": string, "char_offsets": char_offsets, "word_offsets": word_offsets}

mindnlp.transformers.models.wav2vec2.tokenization_wav2vec2.Wav2Vec2CTCTokenizer.decode(token_ids, skip_special_tokens=False, clean_up_tokenization_spaces=None, output_char_offsets=False, output_word_offsets=False, **kwargs)

Converts a sequence of ids into a string, using the tokenizer and vocabulary with options to remove special tokens and clean up tokenization spaces.

Similar to doing self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids)).

PARAMETER DESCRIPTION
token_ids

List of tokenized input ids. Can be obtained using the __call__ method.

TYPE: `Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]`

skip_special_tokens

Whether or not to remove special tokens in the decoding.

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

clean_up_tokenization_spaces

Whether or not to clean up the tokenization spaces.

TYPE: `bool`, *optional* DEFAULT: None

output_char_offsets

Whether or not to output character offsets. Character offsets can be used in combination with the sampling rate and model downsampling rate to compute the time-stamps of transcribed characters.

Please take a look at the example below to better understand how to make use of output_char_offsets.

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

output_word_offsets

Whether or not to output word offsets. Word offsets can be used in combination with the sampling rate and model downsampling rate to compute the time-stamps of transcribed words.

Please take a look at the example below to better understand how to make use of output_word_offsets.

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

kwargs

Will be passed to the underlying model specific decode method.

TYPE: additional keyword arguments, *optional* DEFAULT: {}

RETURNS DESCRIPTION
str

str or [~models.wav2vec2.tokenization_wav2vec2.Wav2Vec2CTCTokenizerOutput]: The decoded sentence. Will be a [~models.wav2vec2.tokenization_wav2vec2.Wav2Vec2CTCTokenizerOutput] when output_char_offsets == True or output_word_offsets == True.

Example
>>> # Let's see how to retrieve time steps for a model
>>> from mindnlp.transformers import AutoTokenizer, AutoFeatureExtractor, AutoModelForCTC
>>> from datasets import load_dataset
>>> import datasets
...
>>> # import model, feature extractor, tokenizer
>>> model = AutoModelForCTC.from_pretrained("facebook/wav2vec2-base-960h")
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/wav2vec2-base-960h")
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
...
>>> # load first sample of English common_voice
>>> dataset = load_dataset("mozilla-foundation/common_voice_11_0", "en", split="train", streaming=True)
>>> dataset = dataset.cast_column("audio", datasets.Audio(sampling_rate=16_000))
>>> dataset_iter = iter(dataset)
>>> sample = next(dataset_iter)
...
>>> # forward sample through model to get greedily predicted transcription ids
>>> input_values = feature_extractor(sample["audio"]["array"], return_tensors="ms").input_values
>>> logits = model(input_values).logits[0]
>>> pred_ids = logits.argmax(axis=-1)
...
>>> # retrieve word stamps (analogous commands for `output_char_offsets`)
>>> outputs = tokenizer.decode(pred_ids, output_word_offsets=True)
>>> # compute `time_offset` in seconds as product of downsampling ratio and sampling_rate
>>> time_offset = model.config.inputs_to_logits_ratio / feature_extractor.sampling_rate
...
>>> word_offsets = [
...     {
...         "word": d["word"],
...         "start_time": round(d["start_offset"] * time_offset, 2),
...         "end_time": round(d["end_offset"] * time_offset, 2),
...     }
...     for d in outputs.word_offsets
... ]
>>> # compare word offsets with audio `en_train_0/common_voice_en_19121553.mp3` online on the dataset viewer:
>>> # https://hf-mirror.com/datasets/mozilla-foundation/common_voice_11_0/viewer/en
>>> word_offsets[:3]
[{'word': 'THE', 'start_time': 0.7, 'end_time': 0.78}, {'word': 'TRICK', 'start_time': 0.88, 'end_time': 1.08}, {'word': 'APPEARS', 'start_time': 1.2, 'end_time': 1.64}]
Source code in mindnlp\transformers\models\wav2vec2\tokenization_wav2vec2.py
def decode(
    self,
    token_ids: Union[int, List[int], "np.ndarray", "Tensor"],
    skip_special_tokens: bool = False,
    clean_up_tokenization_spaces: bool = None,
    output_char_offsets: bool = False,
    output_word_offsets: bool = False,
    **kwargs,
) -> str:
    """
    Converts a sequence of ids into a string, using the tokenizer and vocabulary with options to remove special
    tokens and clean up tokenization spaces.

    Similar to doing `self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids))`.

    Args:
        token_ids (`Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]`):
            List of tokenized input ids. Can be obtained using the `__call__` method.
        skip_special_tokens (`bool`, *optional*, defaults to `False`):
            Whether or not to remove special tokens in the decoding.
        clean_up_tokenization_spaces (`bool`, *optional*):
            Whether or not to clean up the tokenization spaces.
        output_char_offsets (`bool`, *optional*, defaults to `False`):
            Whether or not to output character offsets. Character offsets can be used in combination with the
            sampling rate and model downsampling rate to compute the time-stamps of transcribed characters.

            <Tip>

            Please take a look at the example below to better understand how to make use of `output_char_offsets`.

            </Tip>

        output_word_offsets (`bool`, *optional*, defaults to `False`):
            Whether or not to output word offsets. Word offsets can be used in combination with the sampling rate
            and model downsampling rate to compute the time-stamps of transcribed words.

            <Tip>

            Please take a look at the example below to better understand how to make use of `output_word_offsets`.

            </Tip>

        kwargs (additional keyword arguments, *optional*):
            Will be passed to the underlying model specific decode method.

    Returns:
        `str` or [`~models.wav2vec2.tokenization_wav2vec2.Wav2Vec2CTCTokenizerOutput`]: The decoded
            sentence. Will be a [`~models.wav2vec2.tokenization_wav2vec2.Wav2Vec2CTCTokenizerOutput`] when
            `output_char_offsets == True` or `output_word_offsets == True`.

    Example:
        ```python
        >>> # Let's see how to retrieve time steps for a model
        >>> from mindnlp.transformers import AutoTokenizer, AutoFeatureExtractor, AutoModelForCTC
        >>> from datasets import load_dataset
        >>> import datasets
        ...
        >>> # import model, feature extractor, tokenizer
        >>> model = AutoModelForCTC.from_pretrained("facebook/wav2vec2-base-960h")
        >>> tokenizer = AutoTokenizer.from_pretrained("facebook/wav2vec2-base-960h")
        >>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
        ...
        >>> # load first sample of English common_voice
        >>> dataset = load_dataset("mozilla-foundation/common_voice_11_0", "en", split="train", streaming=True)
        >>> dataset = dataset.cast_column("audio", datasets.Audio(sampling_rate=16_000))
        >>> dataset_iter = iter(dataset)
        >>> sample = next(dataset_iter)
        ...
        >>> # forward sample through model to get greedily predicted transcription ids
        >>> input_values = feature_extractor(sample["audio"]["array"], return_tensors="ms").input_values
        >>> logits = model(input_values).logits[0]
        >>> pred_ids = logits.argmax(axis=-1)
        ...
        >>> # retrieve word stamps (analogous commands for `output_char_offsets`)
        >>> outputs = tokenizer.decode(pred_ids, output_word_offsets=True)
        >>> # compute `time_offset` in seconds as product of downsampling ratio and sampling_rate
        >>> time_offset = model.config.inputs_to_logits_ratio / feature_extractor.sampling_rate
        ...
        >>> word_offsets = [
        ...     {
        ...         "word": d["word"],
        ...         "start_time": round(d["start_offset"] * time_offset, 2),
        ...         "end_time": round(d["end_offset"] * time_offset, 2),
        ...     }
        ...     for d in outputs.word_offsets
        ... ]
        >>> # compare word offsets with audio `en_train_0/common_voice_en_19121553.mp3` online on the dataset viewer:
        >>> # https://hf-mirror.com/datasets/mozilla-foundation/common_voice_11_0/viewer/en
        >>> word_offsets[:3]
        [{'word': 'THE', 'start_time': 0.7, 'end_time': 0.78}, {'word': 'TRICK', 'start_time': 0.88, 'end_time': 1.08}, {'word': 'APPEARS', 'start_time': 1.2, 'end_time': 1.64}]
        ```
    """
    # Convert inputs to python lists
    token_ids = to_py_obj(token_ids)

    return self._decode(
        token_ids=token_ids,
        skip_special_tokens=skip_special_tokens,
        clean_up_tokenization_spaces=clean_up_tokenization_spaces,
        output_char_offsets=output_char_offsets,
        output_word_offsets=output_word_offsets,
        **kwargs,
    )

mindnlp.transformers.models.wav2vec2.tokenization_wav2vec2.Wav2Vec2CTCTokenizer.get_vocab()

Returns the vocabulary used by the Wav2Vec2CTCTokenizer.

PARAMETER DESCRIPTION
self

An instance of the Wav2Vec2CTCTokenizer class.

TYPE: Wav2Vec2CTCTokenizer

RETURNS DESCRIPTION
Dict

A dictionary representing the vocabulary used by the tokenizer. The keys are the tokens and the values are the corresponding token IDs.

TYPE: Dict

This method retrieves the vocabulary used by the Wav2Vec2CTCTokenizer instance. The vocabulary is a dictionary that combines the encoder and added_tokens_encoder dictionaries. The encoder dictionary maps tokens to unique integer IDs, while the added_tokens_encoder dictionary contains additional tokens added by the user. The resulting vocabulary dictionary is returned.

Source code in mindnlp\transformers\models\wav2vec2\tokenization_wav2vec2.py
def get_vocab(self) -> Dict:
    """
    Returns the vocabulary used by the Wav2Vec2CTCTokenizer.

    Args:
        self (Wav2Vec2CTCTokenizer): An instance of the Wav2Vec2CTCTokenizer class.

    Returns:
        Dict: A dictionary representing the vocabulary used by the tokenizer.
            The keys are the tokens and the values are the corresponding token IDs.

    Raises:
        None.

    This method retrieves the vocabulary used by the Wav2Vec2CTCTokenizer instance. The vocabulary is a dictionary
    that combines the encoder and added_tokens_encoder dictionaries. The encoder dictionary maps tokens to unique
    integer IDs, while the added_tokens_encoder dictionary contains additional tokens added by the user.
    The resulting vocabulary dictionary is returned.
    """
    vocab = dict(self.encoder)
    vocab.update(self.added_tokens_encoder)
    return vocab

mindnlp.transformers.models.wav2vec2.tokenization_wav2vec2.Wav2Vec2CTCTokenizer.prepare_for_tokenization(text, is_split_into_words=False, **kwargs)

Prepare the input text for tokenization.

PARAMETER DESCRIPTION
self

The instance of the Wav2Vec2CTCTokenizer class.

TYPE: Wav2Vec2CTCTokenizer

text

The input text to be prepared for tokenization.

TYPE: str

is_split_into_words

A flag indicating whether the input text is already split into words. If True, the input text is expected to be split into words; otherwise, the input text is treated as a continuous string. Defaults to False.

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION
tuple

A tuple containing the prepared text and optional keyword arguments.

Source code in mindnlp\transformers\models\wav2vec2\tokenization_wav2vec2.py
def prepare_for_tokenization(self, text, is_split_into_words=False, **kwargs):
    """
    Prepare the input text for tokenization.

    Args:
        self (Wav2Vec2CTCTokenizer): The instance of the Wav2Vec2CTCTokenizer class.
        text (str): The input text to be prepared for tokenization.
        is_split_into_words (bool): A flag indicating whether the input text is already split into words.
            If True, the input text is expected to be split into words;
            otherwise, the input text is treated as a continuous string.
            Defaults to False.

    Returns:
        tuple: A tuple containing the prepared text and optional keyword arguments.

    Raises:
        None
    """
    if is_split_into_words:
        text = " " + text
    return (text, kwargs)

mindnlp.transformers.models.wav2vec2.tokenization_wav2vec2.Wav2Vec2CTCTokenizer.save_vocabulary(save_directory, filename_prefix=None)

Save the vocabulary to a specified directory.

PARAMETER DESCRIPTION
self

The instance of the Wav2Vec2CTCTokenizer class.

save_directory

The directory where the vocabulary will be saved.

TYPE: str

filename_prefix

An optional prefix to be added to the filename. Defaults to None.

TYPE: Optional[str] DEFAULT: None

RETURNS DESCRIPTION
Tuple[str]

Tuple[str]: A tuple containing the file path of the saved vocabulary.

RAISES DESCRIPTION
OSError

If the save_directory is not a valid directory.

Source code in mindnlp\transformers\models\wav2vec2\tokenization_wav2vec2.py
def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
    """
    Save the vocabulary to a specified directory.

    Args:
        self: The instance of the Wav2Vec2CTCTokenizer class.
        save_directory (str): The directory where the vocabulary will be saved.
        filename_prefix (Optional[str]): An optional prefix to be added to the filename. Defaults to None.

    Returns:
        Tuple[str]: A tuple containing the file path of the saved vocabulary.

    Raises:
        OSError: If the save_directory is not a valid directory.
    """
    if not os.path.isdir(save_directory):
        logger.error(f"Vocabulary path ({save_directory}) should be a directory")
        return
    vocab_file = os.path.join(
        save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
    )

    with open(vocab_file, "w", encoding="utf-8") as f:
        f.write(json.dumps(self.vocab, indent=2, sort_keys=True, ensure_ascii=False) + "\n")

    return (vocab_file,)
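
A library-free sketch of the path construction and file format used above: the vocabulary is written as pretty-printed JSON to "<prefix>-vocab.json", or plain "vocab.json" when no prefix is given (assuming VOCAB_FILES_NAMES["vocab_file"] is "vocab.json").

import json
import os
import tempfile

vocab = {"<pad>": 0, "<s>": 1, "</s>": 2, "<unk>": 3, "|": 4, "A": 5}
save_directory = tempfile.mkdtemp()
filename_prefix = "my-model"          # hypothetical prefix

vocab_file = os.path.join(
    save_directory, (filename_prefix + "-" if filename_prefix else "") + "vocab.json"
)
with open(vocab_file, "w", encoding="utf-8") as f:
    f.write(json.dumps(vocab, indent=2, sort_keys=True, ensure_ascii=False) + "\n")

print(vocab_file)                     # .../my-model-vocab.json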

mindnlp.transformers.models.wav2vec2.tokenization_wav2vec2.Wav2Vec2CTCTokenizer.set_target_lang(target_lang)

Set the target language of a nested multi-lingual dictionary

Source code in mindnlp\transformers\models\wav2vec2\tokenization_wav2vec2.py
def set_target_lang(self, target_lang: str):
    """
    Set the target language of a nested multi-lingual dictionary
    """
    if self.vocab == self.encoder:
        raise ValueError(f"{self.vocab} is not a multi-lingual, nested tokenizer. Cannot set target language.")

    if target_lang not in self.vocab:
        raise ValueError(f"{target_lang} does not exist. Choose one of {', '.join(self.vocab.keys())}.")

    self.target_lang = target_lang
    self.init_kwargs["target_lang"] = target_lang
    self.encoder = self.vocab[target_lang]
    self.decoder = {v: k for k, v in self.encoder.items()}

    # make sure that tokens made of several
    # characters are not split at tokenization
    for token in self.encoder.keys():
        if len(token) > 1:
            self.add_tokens(AddedToken(token, rstrip=True, lstrip=True, normalized=False))
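
The following library-free sketch shows the nested vocabulary layout this method expects and what switching the target language does: the encoder/decoder pair is rebuilt from the chosen sub-vocabulary. The language codes are purely illustrative.

nested_vocab = {
    "eng": {"<pad>": 0, "|": 1, "a": 2, "b": 3},
    "fra": {"<pad>": 0, "|": 1, "é": 2, "à": 3},
}

target_lang = "fra"
if target_lang not in nested_vocab:
    raise ValueError(f"{target_lang} does not exist. Choose one of {', '.join(nested_vocab)}.")

encoder = nested_vocab[target_lang]              # tokens -> ids for the selected language
decoder = {v: k for k, v in encoder.items()}     # ids -> tokens
print(decoder[2])                                # é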

mindnlp.transformers.models.wav2vec2.tokenization_wav2vec2.Wav2Vec2CTCTokenizerOutput dataclass

Bases: ModelOutput

Output type of [Wav2Vec2CTCTokenizer], with transcription.

PARAMETER DESCRIPTION
text

Decoded logits in text form. Usually the speech transcription.

TYPE: list of `str` or `str`

char_offsets

Offsets of the decoded characters. In combination with sampling rate and model downsampling rate, char offsets can be used to compute time stamps for each character.

TYPE: list of `List[Dict[str, Union[int, str]]]` or `List[Dict[str, Union[int, str]]]` DEFAULT: None

word_offsets

Offsets of the decoded words. In combination with sampling rate and model downsampling rate word offsets can be used to compute time stamps for each word.

TYPE: list of `List[Dict[str, Union[int, str]]]` or `List[Dict[str, Union[int, str]]]` DEFAULT: None

Source code in mindnlp\transformers\models\wav2vec2\tokenization_wav2vec2.py
@dataclass
class Wav2Vec2CTCTokenizerOutput(ModelOutput):
    """
    Output type of [`Wav2Vec2CTCTokenizer`], with transcription.

    Args:
        text (list of `str` or `str`):
            Decoded logits in text form. Usually the speech transcription.
        char_offsets (list of `List[Dict[str, Union[int, str]]]` or `List[Dict[str, Union[int, str]]]`):
            Offsets of the decoded characters. In combination with sampling rate and model downsampling rate, char
            offsets can be used to compute time stamps for each character.
        word_offsets (list of `List[Dict[str, Union[int, str]]]` or `List[Dict[str, Union[int, str]]]`):
            Offsets of the decoded words. In combination with sampling rate and model downsampling rate word offsets
            can be used to compute time stamps for each word.
    """
    text: Union[List[str], str]
    char_offsets: Union[List[ListOfDict], ListOfDict] = None
    word_offsets: Union[List[ListOfDict], ListOfDict] = None
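
As an illustration of the note above, the sketch below converts character offsets (expressed in model output frames) into seconds. The 320-samples-per-frame downsampling factor and the char/start_offset/end_offset dictionary keys are assumptions typical of wav2vec2-base style checkpoints; verify them against your model before relying on the numbers.

sampling_rate = 16_000
downsample = 320                              # samples per output frame (assumption)
time_per_frame = downsample / sampling_rate   # 0.02 s per frame

char_offsets = [
    {"char": "H", "start_offset": 10, "end_offset": 11},
    {"char": "I", "start_offset": 12, "end_offset": 13},
]
for o in char_offsets:
    start = round(o["start_offset"] * time_per_frame, 3)
    end = round(o["end_offset"] * time_per_frame, 3)
    print(o["char"], start, end)              # H 0.2 0.22 / I 0.24 0.26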

mindnlp.transformers.models.wav2vec2.tokenization_wav2vec2.Wav2Vec2Tokenizer

Bases: PreTrainedTokenizer

Constructs a Wav2Vec2 tokenizer.

This tokenizer inherits from [PreTrainedTokenizer] which contains some of the main methods. Users should refer to the superclass for more information regarding such methods.

PARAMETER DESCRIPTION
vocab_file

File containing the vocabulary.

TYPE: `str`

bos_token

The beginning of sentence token.

TYPE: `str`, *optional*, defaults to `"<s>"` DEFAULT: '<s>'

eos_token

The end of sentence token.

TYPE: `str`, *optional*, defaults to `"</s>"` DEFAULT: '</s>'

unk_token

The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.

TYPE: `str`, *optional*, defaults to `"<unk>"` DEFAULT: '<unk>'

pad_token

The token used for padding, for example when batching sequences of different lengths.

TYPE: `str`, *optional*, defaults to `"<pad>"` DEFAULT: '<pad>'

word_delimiter_token

The token used for defining the end of a word.

TYPE: `str`, *optional*, defaults to `"|"` DEFAULT: '|'

do_lower_case

Whether or not to lowercase the output when decoding.

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

do_normalize

Whether or not to zero-mean unit-variance normalize the input. Normalizing can help to significantly improve the performance for some models, e.g., wav2vec2-lv60.

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

return_attention_mask

Whether or not [~Wav2Vec2Tokenizer.__call__] should return attention_mask.

Wav2Vec2 models that have set config.feat_extract_norm == "group", such as wav2vec2-base, have not been trained using attention_mask. For such models, input_values should simply be padded with 0 and no attention_mask should be passed.

For Wav2Vec2 models that have set config.feat_extract_norm == "layer", such as wav2vec2-lv60, attention_mask should be passed for batched inference.

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

Source code in mindnlp\transformers\models\wav2vec2\tokenization_wav2vec2.py
class Wav2Vec2Tokenizer(PreTrainedTokenizer):
    """
    Constructs a Wav2Vec2 tokenizer.

    This tokenizer inherits from [`PreTrainedTokenizer`] which contains some of the main methods. Users should refer to
    the superclass for more information regarding such methods.

    Args:
        vocab_file (`str`):
            File containing the vocabulary.
        bos_token (`str`, *optional*, defaults to `"<s>"`):
            The beginning of sentence token.
        eos_token (`str`, *optional*, defaults to `"</s>"`):
            The end of sentence token.
        unk_token (`str`, *optional*, defaults to `"<unk>"`):
            The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
            token instead.
        pad_token (`str`, *optional*, defaults to `"<pad>"`):
            The token used for padding, for example when batching sequences of different lengths.
        word_delimiter_token (`str`, *optional*, defaults to `"|"`):
            The token used for defining the end of a word.
        do_lower_case (`bool`, *optional*, defaults to `False`):
            Whether or not to lowercase the output when decoding.
        do_normalize (`bool`, *optional*, defaults to `False`):
            Whether or not to zero-mean unit-variance normalize the input. Normalizing can help to significantly
            improve the performance for some models, *e.g.*,
            [wav2vec2-lv60](https://hf-mirror.com/models?search=lv60).
        return_attention_mask (`bool`, *optional*, defaults to `False`):
            Whether or not [`~Wav2Vec2Tokenizer.__call__`] should return `attention_mask`.

            <Tip>

            Wav2Vec2 models that have set `config.feat_extract_norm == "group"`, such as
            [wav2vec2-base](https://hf-mirror.com/facebook/wav2vec2-base-960h), have **not** been trained using
            `attention_mask`. For such models, `input_values` should simply be padded with 0 and no `attention_mask`
            should be passed.

            For Wav2Vec2 models that have set `config.feat_extract_norm == "layer"`, such as
            [wav2vec2-lv60](https://hf-mirror.com/facebook/wav2vec2-large-960h-lv60-self), `attention_mask` should be
            passed for batched inference.

            </Tip>

        **kwargs
            Additional keyword arguments passed along to [`PreTrainedTokenizer`]
    """
    vocab_files_names = VOCAB_FILES_NAMES
    pretrained_vocab_files_map = {
        "vocab_file": {
            "facebook/wav2vec2-base-960h": "https://hf-mirror.com/facebook/wav2vec2-base-960h/resolve/main/vocab.json"
        },
        "tokenizer_config_file": {
            "facebook/wav2vec2-base-960h": (
                "https://hf-mirror.com/facebook/wav2vec2-base-960h/resolve/main/tokenizer.json"
            ),
        },
    }
    model_input_names = ["input_values", "attention_mask"]

    def __init__(
        self,
        vocab_file,
        bos_token="<s>",
        eos_token="</s>",
        unk_token="<unk>",
        pad_token="<pad>",
        word_delimiter_token="|",
        do_lower_case=False,
        do_normalize=False,
        return_attention_mask=False,
        **kwargs,
    ):
        """
        Initializes a new instance of the Wav2Vec2Tokenizer class.

        Args:
            self: The instance of the class.
            vocab_file (str): The path to the vocabulary file.
            bos_token (str, optional): The beginning of sentence token. Default is '<s>'.
            eos_token (str, optional): The end of sentence token. Default is '</s>'.
            unk_token (str, optional): The unknown token. Default is '<unk>'.
            pad_token (str, optional): The padding token. Default is '<pad>'.
            word_delimiter_token (str, optional): The word delimiter token. Default is '|'.
            do_lower_case (bool, optional): Whether to convert tokens to lowercase. Default is False.
            do_normalize (bool, optional): Whether to apply text normalization. Default is False.
            return_attention_mask (bool, optional): Whether to return the attention mask. Default is False.

        Returns:
            None

        Raises:
            FutureWarning: This class is deprecated.
                Please use Wav2Vec2Processor or Wav2Vec2CTCTokenizer instead.
        """
        warnings.warn(
            "The class `Wav2Vec2Tokenizer` is deprecated. Please use"
            " `Wav2Vec2Processor` or `Wav2Vec2CTCTokenizer` instead.",
            FutureWarning,
        )

        self._word_delimiter_token = word_delimiter_token

        self.do_lower_case = do_lower_case
        self.return_attention_mask = return_attention_mask
        self.do_normalize = do_normalize

        with open(vocab_file, encoding="utf-8") as vocab_handle:
            self.encoder = json.load(vocab_handle)

        self.decoder = {v: k for k, v in self.encoder.items()}

        super().__init__(
            unk_token=unk_token,
            bos_token=bos_token,
            eos_token=eos_token,
            pad_token=pad_token,
            do_lower_case=do_lower_case,
            do_normalize=do_normalize,
            return_attention_mask=return_attention_mask,
            word_delimiter_token=word_delimiter_token,
            **kwargs,
        )

    @property
    def word_delimiter_token(self) -> str:
        """
        `str`: Word delimiter token. Log an error if used while not having been set.
        """
        if self._word_delimiter_token is None and self.verbose:
            logger.error("Using word_delimiter_token, but it is not set yet.")
            return None
        return str(self._word_delimiter_token)

    @property
    def word_delimiter_token_id(self) -> Optional[int]:
        """
        `Optional[int]`: Id of the word_delimiter_token in the vocabulary. Returns `None` if the token has not been
        set.
        """
        if self._word_delimiter_token is None:
            return None
        return self.convert_tokens_to_ids(self.word_delimiter_token)

    @word_delimiter_token.setter
    def word_delimiter_token(self, value):
        """
        word_delimiter_token

        Setter method for setting the word delimiter token in the Wav2Vec2Tokenizer class.

        Args:
            self (Wav2Vec2Tokenizer): The instance of the Wav2Vec2Tokenizer class.
            value (str): The value to be set as the word delimiter token. Should be a string
                representing the word delimiter token.

        Returns:
            None.

        Raises:
            None.
        """
        self._word_delimiter_token = value

    @word_delimiter_token_id.setter
    def word_delimiter_token_id(self, value):
        """
        Method to set the token ID for word delimiter in the Wav2Vec2Tokenizer class.

        Args:
            self (Wav2Vec2Tokenizer): The instance of the Wav2Vec2Tokenizer class.
                This parameter refers to the tokenizer object itself.
            value (Union[int, List[int]]): The new token ID or list of token IDs for word delimiter.
                The value should be an integer or a list of integers representing token IDs.
                If a list is provided, the tokens will be converted to their corresponding IDs.

        Returns:
            None: This method does not return any value. It sets the word delimiter token ID internally.

        Raises:
            ValueError: If the provided value is not a valid integer or list of integers.
            TypeError: If the provided value is not of type int or list.
        """
        self._word_delimiter_token = self.convert_tokens_to_ids(value)

    def __call__(
        self,
        raw_speech: Union[np.ndarray, List[float], List[np.ndarray], List[List[float]]],
        padding: Union[bool, str, PaddingStrategy] = False,
        max_length: Optional[int] = None,
        pad_to_multiple_of: Optional[int] = None,
        return_tensors: Optional[Union[str, TensorType]] = None,
        verbose: bool = True,
        **kwargs,
    ) -> BatchEncoding:
        """
        Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of
        sequences.

        Args:
            raw_speech (`np.ndarray`, `List[float]`, `List[np.ndarray]`, `List[List[float]]`):
                The sequence or batch of sequences to be padded. Each sequence can be a numpy array, a list of float
                values, a list of numpy array or a list of list of float values. Must be mono channel audio, not
                stereo, i.e. single float per timestep.
        """
        is_batched_numpy = isinstance(raw_speech, np.ndarray) and len(raw_speech.shape) > 1
        if is_batched_numpy and len(raw_speech.shape) > 2:
            raise ValueError(f"Only mono-channel audio is supported for input to {self}")
        is_batched = is_batched_numpy or (
            isinstance(raw_speech, (list, tuple)) and (isinstance(raw_speech[0], (np.ndarray, tuple, list)))
        )

        # make sure input is in list format
        if is_batched and not isinstance(raw_speech[0], np.ndarray):
            raw_speech = [np.asarray(speech) for speech in raw_speech]
        elif not is_batched and not isinstance(raw_speech, np.ndarray):
            raw_speech = np.asarray(raw_speech)

        # always return batch
        if not is_batched:
            raw_speech = [raw_speech]

        # zero-mean and unit-variance normalization
        if self.do_normalize:
            raw_speech = [(x - np.mean(x)) / np.sqrt(np.var(x) + 1e-5) for x in raw_speech]

        # convert into correct format for padding
        encoded_inputs = BatchEncoding({"input_values": raw_speech})

        padded_inputs = self.pad(
            encoded_inputs,
            padding=padding,
            max_length=max_length,
            pad_to_multiple_of=pad_to_multiple_of,
            return_attention_mask=self.return_attention_mask,
            return_tensors=return_tensors,
            verbose=verbose,
        )

        return padded_inputs

    @property
    def vocab_size(self) -> int:
        """
        Method to retrieve the vocabulary size of the Wav2Vec2Tokenizer instance.

        Args:
            self (Wav2Vec2Tokenizer): The instance of the Wav2Vec2Tokenizer class.
                This parameter refers to the current instance of the Wav2Vec2Tokenizer class.
                It is used to access the decoder attribute to calculate the vocabulary size.

        Returns:
            int: An integer representing the size of the vocabulary.
                The return value corresponds to the number of elements in the decoder attribute of the instance.

        Raises:
            None.
        """
        return len(self.decoder)

    def get_vocab(self) -> Dict:
        """
        This method returns a vocabulary dictionary containing the encoder and added tokens encoder.

        Args:
            self (Wav2Vec2Tokenizer): The instance of the Wav2Vec2Tokenizer class.

        Returns:
            Dict: A dictionary containing the combined encoder and added tokens encoder.

        Raises:
            None.
        """
        return dict(self.encoder, **self.added_tokens_encoder)

    def _convert_token_to_id(self, token: str) -> int:
        """Converts a token (str) in an index (integer) using the vocab."""
        return self.encoder.get(token, self.encoder.get(self.unk_token))

    def _convert_id_to_token(self, index: int) -> str:
        """Converts an index (integer) in a token (str) using the vocab."""
        result = self.decoder.get(index, self.unk_token)
        return result

    def convert_tokens_to_string(self, tokens: List[str]) -> str:
        """
        Converts connectionist-temporal-classification (CTC) output tokens into a single string.
        """
        # group same tokens into non-repeating tokens in CTC style decoding
        grouped_tokens = [token_group[0] for token_group in groupby(tokens)]

        # filter self.pad_token which is used as CTC-blank token
        filtered_tokens = list(filter(lambda token: token != self.pad_token, grouped_tokens))

        # replace delimiter token
        string = "".join([" " if token == self.word_delimiter_token else token for token in filtered_tokens]).strip()

        if self.do_lower_case:
            string = string.lower()

        return string

    def _decode(
        self,
        token_ids: List[int],
        skip_special_tokens: bool = False,
        clean_up_tokenization_spaces: bool = None,
        **kwargs,
    ) -> str:
        """
        special _decode function is needed for Wav2Vec2Tokenizer because added tokens should be treated exactly the
        same as tokens of the base vocabulary and therefore the function `convert_tokens_to_string` has to be called on
        the whole token list and not individually on added tokens
        """
        filtered_tokens = self.convert_ids_to_tokens(token_ids, skip_special_tokens=skip_special_tokens)

        result = []
        for token in filtered_tokens:
            if skip_special_tokens and token in self.all_special_ids:
                continue
            result.append(token)

        text = self.convert_tokens_to_string(result)

        clean_up_tokenization_spaces = (
            clean_up_tokenization_spaces
            if clean_up_tokenization_spaces is not None
            else self.clean_up_tokenization_spaces
        )
        if clean_up_tokenization_spaces:
            clean_text = self.clean_up_tokenization(text)
            return clean_text
        else:
            return text

    def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
        """
        Saves the vocabulary of the Wav2Vec2Tokenizer to a file.

        Args:
            self (Wav2Vec2Tokenizer): An instance of the Wav2Vec2Tokenizer class.
            save_directory (str): The directory where the vocabulary file will be saved.
            filename_prefix (Optional[str], optional): A prefix to be added to the filename. Defaults to None.

        Returns:
            Tuple[str]: A tuple containing the path to the saved vocabulary file.

        Raises:
            FileNotFoundError: If the specified save_directory does not exist.
            IsADirectoryError: If save_directory is not a directory.
        """
        if not os.path.isdir(save_directory):
            logger.error(f"Vocabulary path ({save_directory}) should be a directory")
            return
        vocab_file = os.path.join(
            save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
        )

        with open(vocab_file, "w", encoding="utf-8") as f:
            f.write(json.dumps(self.encoder, indent=2, sort_keys=True, ensure_ascii=False) + "\n")

        return (vocab_file,)

mindnlp.transformers.models.wav2vec2.tokenization_wav2vec2.Wav2Vec2Tokenizer.vocab_size: int property

Method to retrieve the vocabulary size of the Wav2Vec2Tokenizer instance.

PARAMETER DESCRIPTION
self

The instance of the Wav2Vec2Tokenizer class. This parameter refers to the current instance of the Wav2Vec2Tokenizer class. It is used to access the decoder attribute to calculate the vocabulary size.

TYPE: Wav2Vec2Tokenizer

RETURNS DESCRIPTION
int

An integer representing the size of the vocabulary. The return value corresponds to the number of elements in the decoder attribute of the instance.

TYPE: int

mindnlp.transformers.models.wav2vec2.tokenization_wav2vec2.Wav2Vec2Tokenizer.word_delimiter_token: str property writable

str: Word delimiter token. Log an error if used while not having been set.

mindnlp.transformers.models.wav2vec2.tokenization_wav2vec2.Wav2Vec2Tokenizer.word_delimiter_token_id: Optional[int] property writable

Optional[int]: Id of the word_delimiter_token in the vocabulary. Returns None if the token has not been set.

mindnlp.transformers.models.wav2vec2.tokenization_wav2vec2.Wav2Vec2Tokenizer.__call__(raw_speech, padding=False, max_length=None, pad_to_multiple_of=None, return_tensors=None, verbose=True, **kwargs)

Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of sequences.

PARAMETER DESCRIPTION
raw_speech

The sequence or batch of sequences to be padded. Each sequence can be a numpy array, a list of float values, a list of numpy array or a list of list of float values. Must be mono channel audio, not stereo, i.e. single float per timestep.

TYPE: `np.ndarray`, `List[float]`, `List[np.ndarray]`, `List[List[float]]`

Source code in mindnlp\transformers\models\wav2vec2\tokenization_wav2vec2.py
def __call__(
    self,
    raw_speech: Union[np.ndarray, List[float], List[np.ndarray], List[List[float]]],
    padding: Union[bool, str, PaddingStrategy] = False,
    max_length: Optional[int] = None,
    pad_to_multiple_of: Optional[int] = None,
    return_tensors: Optional[Union[str, TensorType]] = None,
    verbose: bool = True,
    **kwargs,
) -> BatchEncoding:
    """
    Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of
    sequences.

    Args:
        raw_speech (`np.ndarray`, `List[float]`, `List[np.ndarray]`, `List[List[float]]`):
            The sequence or batch of sequences to be padded. Each sequence can be a numpy array, a list of float
            values, a list of numpy array or a list of list of float values. Must be mono channel audio, not
            stereo, i.e. single float per timestep.
    """
    is_batched_numpy = isinstance(raw_speech, np.ndarray) and len(raw_speech.shape) > 1
    if is_batched_numpy and len(raw_speech.shape) > 2:
        raise ValueError(f"Only mono-channel audio is supported for input to {self}")
    is_batched = is_batched_numpy or (
        isinstance(raw_speech, (list, tuple)) and (isinstance(raw_speech[0], (np.ndarray, tuple, list)))
    )

    # make sure input is in list format
    if is_batched and not isinstance(raw_speech[0], np.ndarray):
        raw_speech = [np.asarray(speech) for speech in raw_speech]
    elif not is_batched and not isinstance(raw_speech, np.ndarray):
        raw_speech = np.asarray(raw_speech)

    # always return batch
    if not is_batched:
        raw_speech = [raw_speech]

    # zero-mean and unit-variance normalization
    if self.do_normalize:
        raw_speech = [(x - np.mean(x)) / np.sqrt(np.var(x) + 1e-5) for x in raw_speech]

    # convert into correct format for padding
    encoded_inputs = BatchEncoding({"input_values": raw_speech})

    padded_inputs = self.pad(
        encoded_inputs,
        padding=padding,
        max_length=max_length,
        pad_to_multiple_of=pad_to_multiple_of,
        return_attention_mask=self.return_attention_mask,
        return_tensors=return_tensors,
        verbose=verbose,
    )

    return padded_inputs
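
The library-free sketch below reproduces the preprocessing step that __call__ applies when do_normalize=True: a single utterance is wrapped into a batch and scaled to zero mean and unit variance before padding.

import numpy as np

raw_speech = np.array([0.1, 0.3, -0.2, 0.05], dtype=np.float32)   # toy mono waveform
batch = [raw_speech]                                               # __call__ always works on a batch

normalized = [(x - np.mean(x)) / np.sqrt(np.var(x) + 1e-5) for x in batch]
print(np.round(normalized[0], 3))                # roughly zero-mean, unit-variance samples
print(round(float(np.mean(normalized[0])), 6))   # ~0.0, up to float error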

mindnlp.transformers.models.wav2vec2.tokenization_wav2vec2.Wav2Vec2Tokenizer.__init__(vocab_file, bos_token='<s>', eos_token='</s>', unk_token='<unk>', pad_token='<pad>', word_delimiter_token='|', do_lower_case=False, do_normalize=False, return_attention_mask=False, **kwargs)

Initializes a new instance of the Wav2Vec2Tokenizer class.

PARAMETER DESCRIPTION
self

The instance of the class.

vocab_file

The path to the vocabulary file.

TYPE: str

bos_token

The beginning of sentence token. Default is '<s>'.

TYPE: str DEFAULT: '<s>'

eos_token

The end of sentence token. Default is '</s>'.

TYPE: str DEFAULT: '</s>'

unk_token

The unknown token. Default is '<unk>'.

TYPE: str DEFAULT: '<unk>'

pad_token

The padding token. Default is '<pad>'.

TYPE: str DEFAULT: '<pad>'

word_delimiter_token

The word delimiter token. Default is '|'.

TYPE: str DEFAULT: '|'

do_lower_case

Whether to convert tokens to lowercase. Default is False.

TYPE: bool DEFAULT: False

do_normalize

Whether to apply text normalization. Default is False.

TYPE: bool DEFAULT: False

return_attention_mask

Whether to return the attention mask. Default is False.

TYPE: bool DEFAULT: False

RETURNS DESCRIPTION

None

RAISES DESCRIPTION
FutureWarning

This class is deprecated. Please use Wav2Vec2Processor or Wav2Vec2CTCTokenizer instead.

Source code in mindnlp\transformers\models\wav2vec2\tokenization_wav2vec2.py
def __init__(
    self,
    vocab_file,
    bos_token="<s>",
    eos_token="</s>",
    unk_token="<unk>",
    pad_token="<pad>",
    word_delimiter_token="|",
    do_lower_case=False,
    do_normalize=False,
    return_attention_mask=False,
    **kwargs,
):
    """
    Initializes a new instance of the Wav2Vec2Tokenizer class.

    Args:
        self: The instance of the class.
        vocab_file (str): The path to the vocabulary file.
        bos_token (str, optional): The beginning of sentence token. Default is '<s>'.
        eos_token (str, optional): The end of sentence token. Default is '</s>'.
        unk_token (str, optional): The unknown token. Default is '<unk>'.
        pad_token (str, optional): The padding token. Default is '<pad>'.
        word_delimiter_token (str, optional): The word delimiter token. Default is '|'.
        do_lower_case (bool, optional): Whether to convert tokens to lowercase. Default is False.
        do_normalize (bool, optional): Whether to apply text normalization. Default is False.
        return_attention_mask (bool, optional): Whether to return the attention mask. Default is False.

    Returns:
        None

    Raises:
        FutureWarning: This class is deprecated.
            Please use Wav2Vec2Processor or Wav2Vec2CTCTokenizer instead.
    """
    warnings.warn(
        "The class `Wav2Vec2Tokenizer` is deprecated. Please use"
        " `Wav2Vec2Processor` or `Wav2Vec2CTCTokenizer` instead.",
        FutureWarning,
    )

    self._word_delimiter_token = word_delimiter_token

    self.do_lower_case = do_lower_case
    self.return_attention_mask = return_attention_mask
    self.do_normalize = do_normalize

    with open(vocab_file, encoding="utf-8") as vocab_handle:
        self.encoder = json.load(vocab_handle)

    self.decoder = {v: k for k, v in self.encoder.items()}

    super().__init__(
        unk_token=unk_token,
        bos_token=bos_token,
        eos_token=eos_token,
        pad_token=pad_token,
        do_lower_case=do_lower_case,
        do_normalize=do_normalize,
        return_attention_mask=return_attention_mask,
        word_delimiter_token=word_delimiter_token,
        **kwargs,
    )

mindnlp.transformers.models.wav2vec2.tokenization_wav2vec2.Wav2Vec2Tokenizer.convert_tokens_to_string(tokens)

Converts connectionist-temporal-classification (CTC) output tokens into a single string.

Source code in mindnlp\transformers\models\wav2vec2\tokenization_wav2vec2.py
def convert_tokens_to_string(self, tokens: List[str]) -> str:
    """
    Converts connectionist-temporal-classification (CTC) output tokens into a single string.
    """
    # group same tokens into non-repeating tokens in CTC style decoding
    grouped_tokens = [token_group[0] for token_group in groupby(tokens)]

    # filter self.pad_token which is used as CTC-blank token
    filtered_tokens = list(filter(lambda token: token != self.pad_token, grouped_tokens))

    # replace delimiter token
    string = "".join([" " if token == self.word_delimiter_token else token for token in filtered_tokens]).strip()

    if self.do_lower_case:
        string = string.lower()

    return string
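
The snippet below is a library-free walk-through of the CTC decoding rule implemented above: collapse repeated tokens, drop the pad token (used as the CTC blank), then map the word delimiter to a space. The double "L" in "HELLO" survives only because a blank separates the repeats.

from itertools import groupby

pad_token, word_delimiter_token = "<pad>", "|"
tokens = ["H", "H", "<pad>", "E", "L", "L", "<pad>", "L", "O", "|", "|", "Y", "O", "U"]

grouped = [group[0] for group in groupby(tokens)]        # collapse consecutive repeats
filtered = [t for t in grouped if t != pad_token]        # remove CTC blanks
text = "".join(" " if t == word_delimiter_token else t for t in filtered).strip()
print(text)                                              # HELLO YOU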

mindnlp.transformers.models.wav2vec2.tokenization_wav2vec2.Wav2Vec2Tokenizer.get_vocab()

This method returns a vocabulary dictionary containing the encoder and added tokens encoder.

PARAMETER DESCRIPTION
self

The instance of the Wav2Vec2Tokenizer class.

TYPE: Wav2Vec2Tokenizer

RETURNS DESCRIPTION
Dict

A dictionary containing the combined encoder and added tokens encoder.

TYPE: Dict

Source code in mindnlp\transformers\models\wav2vec2\tokenization_wav2vec2.py
def get_vocab(self) -> Dict:
    """
    This method returns a vocabulary dictionary containing the encoder and added tokens encoder.

    Args:
        self (Wav2Vec2Tokenizer): The instance of the Wav2Vec2Tokenizer class.

    Returns:
        Dict: A dictionary containing the combined encoder and added tokens encoder.

    Raises:
        None.
    """
    return dict(self.encoder, **self.added_tokens_encoder)

mindnlp.transformers.models.wav2vec2.tokenization_wav2vec2.Wav2Vec2Tokenizer.save_vocabulary(save_directory, filename_prefix=None)

Saves the vocabulary of the Wav2Vec2Tokenizer to a file.

PARAMETER DESCRIPTION
self

An instance of the Wav2Vec2Tokenizer class.

TYPE: Wav2Vec2Tokenizer

save_directory

The directory where the vocabulary file will be saved.

TYPE: str

filename_prefix

A prefix to be added to the filename. Defaults to None.

TYPE: Optional[str] DEFAULT: None

RETURNS DESCRIPTION
Tuple[str]

Tuple[str]: A tuple containing the path to the saved vocabulary file.

RAISES DESCRIPTION
FileNotFoundError

If the specified save_directory does not exist.

IsADirectoryError

If save_directory is not a directory.

Source code in mindnlp\transformers\models\wav2vec2\tokenization_wav2vec2.py
def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
    """
    Saves the vocabulary of the Wav2Vec2Tokenizer to a file.

    Args:
        self (Wav2Vec2Tokenizer): An instance of the Wav2Vec2Tokenizer class.
        save_directory (str): The directory where the vocabulary file will be saved.
        filename_prefix (Optional[str], optional): A prefix to be added to the filename. Defaults to None.

    Returns:
        Tuple[str]: A tuple containing the path to the saved vocabulary file.

    Raises:
        FileNotFoundError: If the specified save_directory does not exist.
        IsADirectoryError: If save_directory is not a directory.
    """
    if not os.path.isdir(save_directory):
        logger.error(f"Vocabulary path ({save_directory}) should be a directory")
        return
    vocab_file = os.path.join(
        save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
    )

    with open(vocab_file, "w", encoding="utf-8") as f:
        f.write(json.dumps(self.encoder, indent=2, sort_keys=True, ensure_ascii=False) + "\n")

    return (vocab_file,)

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2

MindSpore Wav2Vec2 model.

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2Attention

Bases: Module

Multi-headed attention from 'Attention Is All You Need' paper

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
class Wav2Vec2Attention(nn.Module):
    """Multi-headed attention from 'Attention Is All You Need' paper"""

    def __init__(
        self,
        embed_dim: int,
        num_heads: int,
        dropout: float = 0.0,
        is_decoder: bool = False,
        bias: bool = True,
        is_causal: bool = False,
        config: Optional[Wav2Vec2Config] = None,
    ):
        super().__init__()
        self.embed_dim = embed_dim
        self.num_heads = num_heads
        self.dropout = dropout
        self.head_dim = embed_dim // num_heads
        self.config = config

        if (self.head_dim * num_heads) != self.embed_dim:
            raise ValueError(
                f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim}"
                f" and `num_heads`: {num_heads})."
            )
        self.scaling = self.head_dim**-0.5
        self.is_decoder = is_decoder
        self.is_causal = is_causal

        self.k_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
        self.v_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
        self.q_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
        self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias)

    def _shape(self, tensor: mindspore.Tensor, seq_len: int, bsz: int):
        return ops.transpose(tensor.view(bsz, seq_len, self.num_heads, self.head_dim), 1, 2)

    def forward(
        self,
        hidden_states: mindspore.Tensor,
        key_value_states: Optional[mindspore.Tensor] = None,
        past_key_value: Optional[Tuple[mindspore.Tensor]] = None,
        attention_mask: Optional[mindspore.Tensor] = None,
        layer_head_mask: Optional[mindspore.Tensor] = None,
        output_attentions: bool = False,
    ) -> Tuple[mindspore.Tensor, Optional[mindspore.Tensor], Optional[Tuple[mindspore.Tensor]]]:
        """Input shape: Batch x Time x Channel"""

        # if key_value_states are provided this layer is used as a cross-attention layer
        # for the decoder
        is_cross_attention = key_value_states is not None

        bsz, tgt_len, _ = hidden_states.shape

        # get query proj
        query_states = self.q_proj(hidden_states) * self.scaling
        # get key, value proj
        # `past_key_value[0].shape[2] == key_value_states.shape[1]`
        # is checking that the `sequence_length` of the `past_key_value` is the same as
        # the provided `key_value_states` to support prefix tuning
        if (
            is_cross_attention
            and past_key_value is not None
            and past_key_value[0].shape[2] == key_value_states.shape[1]
        ):
            # reuse k,v, cross_attentions
            key_states = past_key_value[0]
            value_states = past_key_value[1]
        elif is_cross_attention:
            # cross_attentions
            key_states = self._shape(self.k_proj(key_value_states), -1, bsz)
            value_states = self._shape(self.v_proj(key_value_states), -1, bsz)
        elif past_key_value is not None:
            # reuse k, v, self_attention
            key_states = self._shape(self.k_proj(hidden_states), -1, bsz)
            value_states = self._shape(self.v_proj(hidden_states), -1, bsz)
            key_states = ops.cat([past_key_value[0], key_states], dim=2)
            value_states = ops.cat([past_key_value[1], value_states], dim=2)
        else:
            # self_attention
            key_states = self._shape(self.k_proj(hidden_states), -1, bsz)
            value_states = self._shape(self.v_proj(hidden_states), -1, bsz)

        if self.is_decoder:
            # if cross_attention save Tuple(mindspore.Tensor, mindspore.Tensor) of all cross attention key/value_states.
            # Further calls to cross_attention layer can then reuse all cross-attention
            # key/value_states (first "if" case)
            # if uni-directional self-attention (decoder) save Tuple(mindspore.Tensor, mindspore.Tensor) of
            # all previous decoder key/value_states. Further calls to uni-directional self-attention
            # can concat previous decoder key/value_states to current projected key/value_states (third "elif" case)
            # if encoder bi-directional self-attention `past_key_value` is always `None`
            past_key_value = (key_states, value_states)

        proj_shape = (bsz * self.num_heads, -1, self.head_dim)
        query_states = self._shape(query_states, tgt_len, bsz).view(*proj_shape)
        key_states = key_states.reshape(*proj_shape)
        value_states = value_states.reshape(*proj_shape)

        src_len = key_states.shape[1]
        attn_weights = ops.bmm(query_states, ops.transpose(key_states, 1, 2))

        if attn_weights.shape != (bsz * self.num_heads, tgt_len, src_len):
            raise ValueError(
                f"Attention weights should be of size {(bsz * self.num_heads, tgt_len, src_len)}, but is"
                f" {attn_weights.shape}"
            )

        if attention_mask is not None:
            if attention_mask.shape != (bsz, 1, tgt_len, src_len):
                raise ValueError(
                    f"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.shape}"
                )
            attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) + attention_mask
            attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len)

        attn_weights = nn.functional.softmax(attn_weights, dim=-1)

        if layer_head_mask is not None:
            if layer_head_mask.shape != (self.num_heads,):
                raise ValueError(
                    f"Head mask for a single layer should be of size {(self.num_heads,)}, but is"
                    f" {layer_head_mask.shape}"
                )
            attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(bsz, self.num_heads, tgt_len, src_len)
            attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len)

        if output_attentions:
            # this operation is a bit awkward, but it's required to
            # make sure that attn_weights keeps its gradient.
            # In order to do so, attn_weights have to be reshaped
            # twice and have to be reused in the following
            attn_weights_reshaped = attn_weights.view(bsz, self.num_heads, tgt_len, src_len)
            attn_weights = attn_weights_reshaped.view(bsz * self.num_heads, tgt_len, src_len)
        else:
            attn_weights_reshaped = None

        attn_probs = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training)

        attn_output = ops.bmm(attn_probs, value_states)

        if attn_output.shape != (bsz * self.num_heads, tgt_len, self.head_dim):
            raise ValueError(
                f"`attn_output` should be of size {(bsz * self.num_heads, tgt_len, self.head_dim)}, but is"
                f" {attn_output.shape}"
            )

        attn_output = attn_output.view(bsz, self.num_heads, tgt_len, self.head_dim)
        attn_output = ops.transpose(attn_output, 1, 2)

        # Use the `embed_dim` from the config (stored in the class) rather than `hidden_state` because `attn_output` can be
        # partitioned across GPUs when using tensor-parallelism.
        attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim)

        attn_output = self.out_proj(attn_output)

        return attn_output, attn_weights_reshaped, past_key_value

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2Attention.forward(hidden_states, key_value_states=None, past_key_value=None, attention_mask=None, layer_head_mask=None, output_attentions=False)

Input shape: Batch x Time x Channel

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
def forward(
    self,
    hidden_states: mindspore.Tensor,
    key_value_states: Optional[mindspore.Tensor] = None,
    past_key_value: Optional[Tuple[mindspore.Tensor]] = None,
    attention_mask: Optional[mindspore.Tensor] = None,
    layer_head_mask: Optional[mindspore.Tensor] = None,
    output_attentions: bool = False,
) -> Tuple[mindspore.Tensor, Optional[mindspore.Tensor], Optional[Tuple[mindspore.Tensor]]]:
    """Input shape: Batch x Time x Channel"""

    # if key_value_states are provided this layer is used as a cross-attention layer
    # for the decoder
    is_cross_attention = key_value_states is not None

    bsz, tgt_len, _ = hidden_states.shape

    # get query proj
    query_states = self.q_proj(hidden_states) * self.scaling
    # get key, value proj
    # `past_key_value[0].shape[2] == key_value_states.shape[1]`
    # is checking that the `sequence_length` of the `past_key_value` is the same as
    # the provided `key_value_states` to support prefix tuning
    if (
        is_cross_attention
        and past_key_value is not None
        and past_key_value[0].shape[2] == key_value_states.shape[1]
    ):
        # reuse k,v, cross_attentions
        key_states = past_key_value[0]
        value_states = past_key_value[1]
    elif is_cross_attention:
        # cross_attentions
        key_states = self._shape(self.k_proj(key_value_states), -1, bsz)
        value_states = self._shape(self.v_proj(key_value_states), -1, bsz)
    elif past_key_value is not None:
        # reuse k, v, self_attention
        key_states = self._shape(self.k_proj(hidden_states), -1, bsz)
        value_states = self._shape(self.v_proj(hidden_states), -1, bsz)
        key_states = ops.cat([past_key_value[0], key_states], dim=2)
        value_states = ops.cat([past_key_value[1], value_states], dim=2)
    else:
        # self_attention
        key_states = self._shape(self.k_proj(hidden_states), -1, bsz)
        value_states = self._shape(self.v_proj(hidden_states), -1, bsz)

    if self.is_decoder:
        # if cross_attention save Tuple(mindspore.Tensor, mindspore.Tensor) of all cross attention key/value_states.
        # Further calls to cross_attention layer can then reuse all cross-attention
        # key/value_states (first "if" case)
        # if uni-directional self-attention (decoder) save Tuple(mindspore.Tensor, mindspore.Tensor) of
        # all previous decoder key/value_states. Further calls to uni-directional self-attention
        # can concat previous decoder key/value_states to current projected key/value_states (third "elif" case)
        # if encoder bi-directional self-attention `past_key_value` is always `None`
        past_key_value = (key_states, value_states)

    proj_shape = (bsz * self.num_heads, -1, self.head_dim)
    query_states = self._shape(query_states, tgt_len, bsz).view(*proj_shape)
    key_states = key_states.reshape(*proj_shape)
    value_states = value_states.reshape(*proj_shape)

    src_len = key_states.shape[1]
    attn_weights = ops.bmm(query_states, ops.transpose(key_states, 1, 2))

    if attn_weights.shape != (bsz * self.num_heads, tgt_len, src_len):
        raise ValueError(
            f"Attention weights should be of size {(bsz * self.num_heads, tgt_len, src_len)}, but is"
            f" {attn_weights.shape}"
        )

    if attention_mask is not None:
        if attention_mask.shape != (bsz, 1, tgt_len, src_len):
            raise ValueError(
                f"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.shape}"
            )
        attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) + attention_mask
        attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len)

    attn_weights = nn.functional.softmax(attn_weights, dim=-1)

    if layer_head_mask is not None:
        if layer_head_mask.shape != (self.num_heads,):
            raise ValueError(
                f"Head mask for a single layer should be of size {(self.num_heads,)}, but is"
                f" {layer_head_mask.shape}"
            )
        attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(bsz, self.num_heads, tgt_len, src_len)
        attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len)

    if output_attentions:
        # this operation is a bit awkward, but it's required to
        # make sure that attn_weights keeps its gradient.
        # In order to do so, attn_weights have to be reshaped
        # twice and have to be reused in the following
        attn_weights_reshaped = attn_weights.view(bsz, self.num_heads, tgt_len, src_len)
        attn_weights = attn_weights_reshaped.view(bsz * self.num_heads, tgt_len, src_len)
    else:
        attn_weights_reshaped = None

    attn_probs = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training)

    attn_output = ops.bmm(attn_probs, value_states)

    if attn_output.shape != (bsz * self.num_heads, tgt_len, self.head_dim):
        raise ValueError(
            f"`attn_output` should be of size {(bsz * self.num_heads, tgt_len, self.head_dim)}, but is"
            f" {attn_output.shape}"
        )

    attn_output = attn_output.view(bsz, self.num_heads, tgt_len, self.head_dim)
    attn_output = ops.transpose(attn_output, 1, 2)

    # Use the `embed_dim` from the config (stored in the class) rather than `hidden_state` because `attn_output` can be
    # partitioned across GPUs when using tensor-parallelism.
    attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim)

    attn_output = self.out_proj(attn_output)

    return attn_output, attn_weights_reshaped, past_key_value
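
To make the reshaping above easier to follow, here is a NumPy sketch (not from the library; no masking, dropout, or key/value caching) of the same multi-head attention: project, split into heads, scaled dot-product with a softmax over keys, then merge the heads back. All sizes are illustrative.

import numpy as np

bsz, tgt_len, embed_dim, num_heads = 2, 5, 8, 2
head_dim = embed_dim // num_heads
scaling = head_dim ** -0.5

rng = np.random.default_rng(0)
hidden = rng.standard_normal((bsz, tgt_len, embed_dim))
Wq, Wk, Wv = (rng.standard_normal((embed_dim, embed_dim)) for _ in range(3))

def split_heads(x):
    # (bsz, seq, embed_dim) -> (bsz * num_heads, seq, head_dim)
    return x.reshape(bsz, -1, num_heads, head_dim).transpose(0, 2, 1, 3).reshape(bsz * num_heads, -1, head_dim)

q = split_heads(hidden @ Wq * scaling)
k = split_heads(hidden @ Wk)
v = split_heads(hidden @ Wv)

weights = q @ k.transpose(0, 2, 1)                        # (bsz*heads, tgt_len, src_len)
weights = np.exp(weights - weights.max(-1, keepdims=True))
weights /= weights.sum(-1, keepdims=True)                 # softmax over the key axis
out = (weights @ v).reshape(bsz, num_heads, tgt_len, head_dim)
out = out.transpose(0, 2, 1, 3).reshape(bsz, tgt_len, embed_dim)
print(out.shape)                                          # (2, 5, 8)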

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2AttnAdapterLayer

Bases: Module

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
class Wav2Vec2AttnAdapterLayer(nn.Module):
    def __init__(self, config):
        """
        Implements adapter modules directly with 3D tensor weight as parameters and without using ModuleList to speed
        up training throughput.
        """
        super().__init__()
        self.input_dim = config.adapter_attn_dim
        self.hidden_dim = config.hidden_size

        self.norm = nn.LayerNorm(self.hidden_dim)
        self.linear_1 = nn.Linear(self.hidden_dim, self.input_dim)
        self.act_fn = nn.ReLU()
        self.linear_2 = nn.Linear(self.input_dim, self.hidden_dim)

    def forward(self, hidden_states: mindspore.Tensor):
        hidden_states = self.norm(hidden_states)

        hidden_states = self.linear_1(hidden_states)
        hidden_states = self.act_fn(hidden_states)
        hidden_states = self.linear_2(hidden_states)

        return hidden_states

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2AttnAdapterLayer.__init__(config)

Implements adapter modules directly with 3D tensor weight as parameters and without using ModuleList to speed up training throughput.

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
def __init__(self, config):
    """
    Implements adapter modules directly with 3D tensor weight as parameters and without using ModuleList to speed
    up training throughput.
    """
    super().__init__()
    self.input_dim = config.adapter_attn_dim
    self.hidden_dim = config.hidden_size

    self.norm = nn.LayerNorm(self.hidden_dim)
    self.linear_1 = nn.Linear(self.hidden_dim, self.input_dim)
    self.act_fn = nn.ReLU()
    self.linear_2 = nn.Linear(self.input_dim, self.hidden_dim)
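
The sketch below shows the bottleneck shape of this adapter without the framework: layer-normalize, project hidden_size down to adapter_attn_dim, apply ReLU, and project back up. The dimensions are illustrative, not taken from a real config.

import numpy as np

hidden_size, adapter_attn_dim = 8, 4
rng = np.random.default_rng(0)
x = rng.standard_normal((3, hidden_size))                 # (seq_len, hidden_size)

# LayerNorm over the last axis (no learned affine, for brevity)
x = (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + 1e-5)

down = rng.standard_normal((hidden_size, adapter_attn_dim))
up = rng.standard_normal((adapter_attn_dim, hidden_size))
out = np.maximum(x @ down, 0.0) @ up                      # down-project, ReLU, up-project
print(out.shape)                                          # (3, 8)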

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2FeatureEncoder

Bases: Module

Construct the features from raw audio waveform

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
class Wav2Vec2FeatureEncoder(nn.Module):
    """Construct the features from raw audio waveform"""

    def __init__(self, config):
        super().__init__()

        if config.feat_extract_norm == "group":
            conv_layers = [Wav2Vec2GroupNormConvLayer(config, layer_id=0)] + [
                Wav2Vec2NoLayerNormConvLayer(config, layer_id=i + 1) for i in range(config.num_feat_extract_layers - 1)
            ]
        elif config.feat_extract_norm == "layer":
            conv_layers = [
                Wav2Vec2LayerNormConvLayer(config, layer_id=i) for i in range(config.num_feat_extract_layers)
            ]
        else:
            raise ValueError(
                f"`config.feat_extract_norm` is {config.feat_extract_norm}, but has to be one of ['group', 'layer']"
            )
        self.conv_layers = nn.ModuleList(conv_layers)
        self.gradient_checkpointing = False
        self._requires_grad = True

    def _freeze_parameters(self):
        for param in self.parameters():
            param.requires_grad = False
        self._requires_grad = False

    def forward(self, input_values):
        hidden_states = input_values[:, None]

        # make sure hidden_states require grad for gradient_checkpointing
        if self._requires_grad and self.training:
            hidden_states.requires_grad = True

        for conv_layer in self.conv_layers:
            if self._requires_grad and self.gradient_checkpointing and self.training:
                hidden_states = self._gradient_checkpointing_func(
                    conv_layer.__call__,
                    hidden_states,
                )
            else:
                hidden_states = conv_layer(hidden_states)

        return hidden_states
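
As an aside, the sketch below shows how a stack of strided 1D convolutions shortens the raw waveform into feature frames. The kernel sizes and strides are the usual wav2vec2-base defaults (conv_kernel / conv_stride in the configuration) and are stated here as an assumption; check them against your checkpoint.

def conv_out_length(length, kernel, stride):
    # standard 1D convolution output length with no padding
    return (length - kernel) // stride + 1

kernels = (10, 3, 3, 3, 3, 2, 2)   # assumed wav2vec2-base defaults
strides = (5, 2, 2, 2, 2, 2, 2)

length = 16_000                    # one second of 16 kHz mono audio
for k, s in zip(kernels, strides):
    length = conv_out_length(length, k, s)
print(length)                      # 49 feature frames per second of audio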

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForAudioFrameClassification

Bases: Wav2Vec2PreTrainedModel

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
class Wav2Vec2ForAudioFrameClassification(Wav2Vec2PreTrainedModel):
    def __init__(self, config):
        super().__init__(config)

        if hasattr(config, "add_adapter") and config.add_adapter:
            raise ValueError(
                "Audio frame classification does not support the use of Wav2Vec2 adapters (config.add_adapter=True)"
            )
        self.wav2vec2 = Wav2Vec2Model(config)
        num_layers = config.num_hidden_layers + 1  # transformer layers + input embeddings
        if config.use_weighted_layer_sum:
            self.layer_weights = nn.Parameter(ops.ones(num_layers) / num_layers)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.num_labels = config.num_labels

        self.init_weights()

    def freeze_feature_extractor(self):
        """
        Calling this function will disable the gradient computation for the feature encoder so that its parameter will
        not be updated during training.
        """
        warnings.warn(
            "The method `freeze_feature_extractor` is deprecated and will be removed in Transformers v5. "
            "Please use the equivalent `freeze_feature_encoder` method instead.",
            FutureWarning,
        )
        self.freeze_feature_encoder()

    def freeze_feature_encoder(self):
        """
        Calling this function will disable the gradient computation for the feature encoder so that its parameter will
        not be updated during training.
        """
        self.wav2vec2.feature_extractor._freeze_parameters()

    def freeze_base_model(self):
        """
        Calling this function will disable the gradient computation for the base model so that its parameters will not
        be updated during training. Only the classification head will be updated.
        """
        for param in self.wav2vec2.parameters():
            param.requires_grad = False

    def forward(
        self,
        input_values: Optional[mindspore.Tensor],
        attention_mask: Optional[mindspore.Tensor] = None,
        labels: Optional[mindspore.Tensor] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple, TokenClassifierOutput]:
        r"""
        labels (`mindspore.Tensor` of shape `(batch_size,)`, *optional*):
            Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
            config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
            `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
        """

        return_dict = return_dict if return_dict is not None else self.config.use_return_dict
        output_hidden_states = True if self.config.use_weighted_layer_sum else output_hidden_states

        outputs = self.wav2vec2(
            input_values,
            attention_mask=attention_mask,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

        if self.config.use_weighted_layer_sum:
            hidden_states = outputs[_HIDDEN_STATES_START_POSITION]
            hidden_states = ops.stack(hidden_states, dim=1)
            norm_weights = nn.functional.softmax(self.layer_weights, dim=-1)
            hidden_states = (hidden_states * norm_weights.view(-1, 1, 1)).sum(dim=1)
        else:
            hidden_states = outputs[0]

        logits = self.classifier(hidden_states)

        loss = None
        if labels is not None:
            loss_fct = CrossEntropyLoss()
            loss = loss_fct(logits.view(-1, self.num_labels), ops.argmax(labels.view(-1, self.num_labels), dim=1))

        if not return_dict:
            output = (logits,) + outputs[_HIDDEN_STATES_START_POSITION:]
            return output

        return TokenClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForAudioFrameClassification.forward(input_values, attention_mask=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None)

labels (mindspore.Tensor of shape (batch_size,), optional): Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Square loss); if config.num_labels > 1, a classification loss is computed (Cross-Entropy).

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
def forward(
    self,
    input_values: Optional[mindspore.Tensor],
    attention_mask: Optional[mindspore.Tensor] = None,
    labels: Optional[mindspore.Tensor] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Union[Tuple, TokenClassifierOutput]:
    r"""
    labels (`mindspore.Tensor` of shape `(batch_size,)`, *optional*):
        Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
        config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
        `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
    """

    return_dict = return_dict if return_dict is not None else self.config.use_return_dict
    output_hidden_states = True if self.config.use_weighted_layer_sum else output_hidden_states

    outputs = self.wav2vec2(
        input_values,
        attention_mask=attention_mask,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        return_dict=return_dict,
    )

    if self.config.use_weighted_layer_sum:
        hidden_states = outputs[_HIDDEN_STATES_START_POSITION]
        hidden_states = ops.stack(hidden_states, dim=1)
        norm_weights = nn.functional.softmax(self.layer_weights, dim=-1)
        hidden_states = (hidden_states * norm_weights.view(-1, 1, 1)).sum(dim=1)
    else:
        hidden_states = outputs[0]

    logits = self.classifier(hidden_states)

    loss = None
    if labels is not None:
        loss_fct = CrossEntropyLoss()
        loss = loss_fct(logits.view(-1, self.num_labels), ops.argmax(labels.view(-1, self.num_labels), dim=1))

    if not return_dict:
        output = (logits,) + outputs[_HIDDEN_STATES_START_POSITION:]
        return output

    return TokenClassifierOutput(
        loss=loss,
        logits=logits,
        hidden_states=outputs.hidden_states,
        attentions=outputs.attentions,
    )
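
The forward pass produces one logit vector per encoded audio frame (roughly one frame every 20 ms for 16 kHz input). Below is a minimal usage sketch, not taken from the source: the checkpoint name, the `mindnlp.transformers` import path, and the dummy audio are assumptions, with mindnlp expected to mirror the Hugging Face Transformers API.

```python
import numpy as np
from mindnlp.transformers import AutoFeatureExtractor, Wav2Vec2ForAudioFrameClassification

# Hypothetical speaker-diarization checkpoint; any frame-classification checkpoint works the same way.
feature_extractor = AutoFeatureExtractor.from_pretrained("anton-l/wav2vec2-base-superb-sd")
model = Wav2Vec2ForAudioFrameClassification.from_pretrained("anton-l/wav2vec2-base-superb-sd")

audio = np.random.randn(16000).astype(np.float32)              # dummy 1 s waveform at 16 kHz
inputs = feature_extractor(audio, sampling_rate=16000, return_tensors="ms")

logits = model(inputs.input_values).logits                     # (batch, num_frames, num_labels)
frame_labels = logits.asnumpy().argmax(axis=-1)                # one predicted class per ~20 ms frame
```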

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForAudioFrameClassification.freeze_base_model()

Calling this function will disable the gradient computation for the base model so that its parameters will not be updated during training. Only the classification head will be updated.

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
def freeze_base_model(self):
    """
    Calling this function will disable the gradient computation for the base model so that its parameters will not
    be updated during training. Only the classification head will be updated.
    """
    for param in self.wav2vec2.parameters():
        param.requires_grad = False

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForAudioFrameClassification.freeze_feature_encoder()

Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training.

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
def freeze_feature_encoder(self):
    """
    Calling this function will disable the gradient computation for the feature encoder so that its parameter will
    not be updated during training.
    """
    self.wav2vec2.feature_extractor._freeze_parameters()
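
A common fine-tuning recipe is to keep the convolutional feature encoder frozen and update only the Transformer layers and the classification head. A minimal sketch, assuming the `mindnlp.transformers` import path and that `from_pretrained` accepts a `num_labels` config override as in the Transformers API:

```python
from mindnlp.transformers import Wav2Vec2ForAudioFrameClassification

model = Wav2Vec2ForAudioFrameClassification.from_pretrained(
    "facebook/wav2vec2-base", num_labels=2                      # num_labels as a config override is assumed
)
model.freeze_feature_encoder()                                  # conv feature encoder: requires_grad=False
# model.freeze_base_model()                                     # alternatively, update only the classification head

trainable = [p for p in model.parameters() if p.requires_grad]  # parameters the optimizer should receive
```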

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForAudioFrameClassification.freeze_feature_extractor()

Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training.

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
def freeze_feature_extractor(self):
    """
    Calling this function will disable the gradient computation for the feature encoder so that its parameter will
    not be updated during training.
    """
    warnings.warn(
        "The method `freeze_feature_extractor` is deprecated and will be removed in Transformers v5. "
        "Please use the equivalent `freeze_feature_encoder` method instead.",
        FutureWarning,
    )
    self.freeze_feature_encoder()

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForCTC

Bases: Wav2Vec2PreTrainedModel

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
class Wav2Vec2ForCTC(Wav2Vec2PreTrainedModel):
    def __init__(self, config, target_lang: Optional[str] = None):
        super().__init__(config)

        self.wav2vec2 = Wav2Vec2Model(config)
        self.dropout = nn.Dropout(config.final_dropout)

        self.target_lang = target_lang

        if config.vocab_size is None:
            raise ValueError(
                f"You are trying to instantiate {self.__class__} with a configuration that "
                "does not define the vocabulary size of the language model head. Please "
                "instantiate the model as follows: `Wav2Vec2ForCTC.from_pretrained(..., vocab_size=vocab_size)`. "
                "or define `vocab_size` of your model's configuration."
            )
        output_hidden_size = (
            config.output_hidden_size if hasattr(config, "add_adapter") and config.add_adapter else config.hidden_size
        )
        self.lm_head = nn.Linear(output_hidden_size, config.vocab_size)

        # Initialize weights and apply final processing
        self.post_init()

    def tie_weights(self):
        """
        This method overwrites [`~PreTrainedModel.tie_weights`] so that adapter weights can be correctly loaded when
        passing `target_lang=...` to `from_pretrained(...)`.

        This method is **not** supposed to be called by the user and is prone to be changed in the future.
        """

        # Note that `tie_weights` is usually used to tie input and output embedding weights. The method is re-purposed to
        # correctly load adapter layers for Wav2Vec2 so that we do not have to introduce a new API to
        # [`PreTrainedModel`]. While slightly hacky, Wav2Vec2 never has to tie input and output embeddings, so that it is
        # ok to repurpose this function here.
        target_lang = self.target_lang

        if target_lang is not None and getattr(self.config, "adapter_attn_dim", None) is None:
            raise ValueError(f"Cannot pass `target_lang`: {target_lang} if `config.adapter_attn_dim` is not defined.")
        elif target_lang is None and getattr(self.config, "adapter_attn_dim", None) is not None:
            logger.info("By default `target_lang` is set to 'eng'.")
        elif target_lang is not None:
            self.load_adapter(target_lang, force_load=True)

    def freeze_feature_extractor(self):
        """
        Calling this function will disable the gradient computation for the feature encoder so that its parameter will
        not be updated during training.
        """
        warnings.warn(
            "The method `freeze_feature_extractor` is deprecated and will be removed in Transformers v5. "
            "Please use the equivalent `freeze_feature_encoder` method instead.",
            FutureWarning,
        )
        self.freeze_feature_encoder()

    def freeze_feature_encoder(self):
        """
        Calling this function will disable the gradient computation for the feature encoder so that its parameter will
        not be updated during training.
        """
        self.wav2vec2.feature_extractor._freeze_parameters()

    def freeze_base_model(self):
        """
        Calling this function will disable the gradient computation for the base model so that its parameters will not
        be updated during training. Only the classification head will be updated.
        """
        for param in self.wav2vec2.parameters():
            param.requires_grad = False

    def forward(
        self,
        input_values: Optional[mindspore.Tensor],
        attention_mask: Optional[mindspore.Tensor] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
        labels: Optional[mindspore.Tensor] = None,
    ) -> Union[Tuple, CausalLMOutput]:
        r"""
        labels (`mindspore.Tensor` of shape `(batch_size, target_length)`, *optional*):
            Labels for connectionist temporal classification. Note that `target_length` has to be smaller or equal to
            the sequence length of the output logits. Indices are selected in `[-100, 0, ..., config.vocab_size - 1]`.
            All labels set to `-100` are ignored (masked), the loss is only computed for labels in `[0, ...,
            config.vocab_size - 1]`.
        """
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        if labels is not None and labels.max() >= self.config.vocab_size:
            raise ValueError(f"Label values must be <= vocab_size: {self.config.vocab_size}")

        outputs = self.wav2vec2(
            input_values,
            attention_mask=attention_mask,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

        hidden_states = outputs[0]
        hidden_states = self.dropout(hidden_states)

        logits = self.lm_head(hidden_states)

        loss = None
        if labels is not None:
            # retrieve loss input_lengths from attention_mask
            attention_mask = (
                attention_mask if attention_mask is not None else ops.ones_like(input_values, dtype=mindspore.int64)
            )
            input_lengths = self._get_feat_extract_output_lengths(attention_mask.sum(-1)).to(mindspore.int64)

            # assuming that padded tokens are filled with -100
            # when not being attended to
            labels_mask = labels >= 0
            target_lengths = labels_mask.sum(-1)
            flattened_targets = labels.masked_select(labels_mask)

            # ctc_loss doesn't support fp16
            log_probs = ops.transpose(nn.functional.log_softmax(logits, dim=-1, dtype=mindspore.float32), 0, 1)

            loss = nn.functional.ctc_loss(
                log_probs,
                labels,
                input_lengths,
                target_lengths,
                blank=self.config.pad_token_id,
                reduction=self.config.ctc_loss_reduction,
                zero_infinity=self.config.ctc_zero_infinity,
            )

        if not return_dict:
            output = (logits,) + outputs[_HIDDEN_STATES_START_POSITION:]
            return ((loss,) + output) if loss is not None else output

        return CausalLMOutput(
            loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions
        )

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForCTC.forward(input_values, attention_mask=None, output_attentions=None, output_hidden_states=None, return_dict=None, labels=None)

labels (mindspore.Tensor of shape (batch_size, target_length), optional): Labels for connectionist temporal classification. Note that target_length has to be smaller than or equal to the sequence length of the output logits. Indices are selected in [-100, 0, ..., config.vocab_size - 1]. All labels set to -100 are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size - 1].

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
def forward(
    self,
    input_values: Optional[mindspore.Tensor],
    attention_mask: Optional[mindspore.Tensor] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    labels: Optional[mindspore.Tensor] = None,
) -> Union[Tuple, CausalLMOutput]:
    r"""
    labels (`mindspore.Tensor` of shape `(batch_size, target_length)`, *optional*):
        Labels for connectionist temporal classification. Note that `target_length` has to be smaller or equal to
        the sequence length of the output logits. Indices are selected in `[-100, 0, ..., config.vocab_size - 1]`.
        All labels set to `-100` are ignored (masked), the loss is only computed for labels in `[0, ...,
        config.vocab_size - 1]`.
    """
    return_dict = return_dict if return_dict is not None else self.config.use_return_dict

    if labels is not None and labels.max() >= self.config.vocab_size:
        raise ValueError(f"Label values must be <= vocab_size: {self.config.vocab_size}")

    outputs = self.wav2vec2(
        input_values,
        attention_mask=attention_mask,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        return_dict=return_dict,
    )

    hidden_states = outputs[0]
    hidden_states = self.dropout(hidden_states)

    logits = self.lm_head(hidden_states)

    loss = None
    if labels is not None:
        # retrieve loss input_lengths from attention_mask
        attention_mask = (
            attention_mask if attention_mask is not None else ops.ones_like(input_values, dtype=mindspore.int64)
        )
        input_lengths = self._get_feat_extract_output_lengths(attention_mask.sum(-1)).to(mindspore.int64)

        # assuming that padded tokens are filled with -100
        # when not being attended to
        labels_mask = labels >= 0
        target_lengths = labels_mask.sum(-1)
        flattened_targets = labels.masked_select(labels_mask)

        # ctc_loss doesn't support fp16
        log_probs = ops.transpose(nn.functional.log_softmax(logits, dim=-1, dtype=mindspore.float32), 0, 1)

        loss = nn.functional.ctc_loss(
            log_probs,
            labels,
            input_lengths,
            target_lengths,
            blank=self.config.pad_token_id,
            reduction=self.config.ctc_loss_reduction,
            zero_infinity=self.config.ctc_zero_infinity,
        )

    if not return_dict:
        output = (logits,) + outputs[_HIDDEN_STATES_START_POSITION:]
        return ((loss,) + output) if loss is not None else output

    return CausalLMOutput(
        loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions
    )
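
A minimal speech-recognition sketch built on this forward pass. The checkpoint name, the `mindnlp.transformers` import path, and the dummy waveform are assumptions; mindnlp is expected to mirror the Transformers API here.

```python
import numpy as np
from mindnlp.transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

speech = np.random.randn(16000).astype(np.float32)             # dummy 1 s waveform at 16 kHz
inputs = processor(speech, sampling_rate=16000, return_tensors="ms")

logits = model(inputs.input_values).logits                     # (batch, num_frames, vocab_size)
pred_ids = logits.asnumpy().argmax(axis=-1)
transcription = processor.batch_decode(pred_ids)               # greedy CTC decoding

# For fine-tuning, pass padded labels (pad positions set to -100) and read the CTC loss:
# loss = model(inputs.input_values, labels=labels).loss
```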

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForCTC.freeze_base_model()

Calling this function will disable the gradient computation for the base model so that its parameters will not be updated during training. Only the classification head will be updated.

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
def freeze_base_model(self):
    """
    Calling this function will disable the gradient computation for the base model so that its parameters will not
    be updated during training. Only the classification head will be updated.
    """
    for param in self.wav2vec2.parameters():
        param.requires_grad = False

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForCTC.freeze_feature_encoder()

Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training.

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
def freeze_feature_encoder(self):
    """
    Calling this function will disable the gradient computation for the feature encoder so that its parameter will
    not be updated during training.
    """
    self.wav2vec2.feature_extractor._freeze_parameters()

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForCTC.freeze_feature_extractor()

Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training.

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
def freeze_feature_extractor(self):
    """
    Calling this function will disable the gradient computation for the feature encoder so that its parameter will
    not be updated during training.
    """
    warnings.warn(
        "The method `freeze_feature_extractor` is deprecated and will be removed in Transformers v5. "
        "Please use the equivalent `freeze_feature_encoder` method instead.",
        FutureWarning,
    )
    self.freeze_feature_encoder()

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForCTC.tie_weights()

This method overwrites [~PreTrainedModel.tie_weights] so that adapter weights can be correctly loaded when passing target_lang=... to from_pretrained(...).

This method is not supposed to be called by the user and may change in the future.

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
def tie_weights(self):
    """
    This method overwrites [`~PreTrainedModel.tie_weights`] so that adapter weights can be correctly loaded when
    passing `target_lang=...` to `from_pretrained(...)`.

    This method is **not** supposed to be called by the user and is prone to be changed in the future.
    """

    # Note that `tie_weights` is usually used to tie input and output embedding weights. The method is re-purposed to
    # correctly load adapter layers for Wav2Vec2 so that we do not have to introduce a new API to
    # [`PreTrainedModel`]. While slightly hacky, Wav2Vec2 never has to tie input and output embeddings, so that it is
    # ok to repurpose this function here.
    target_lang = self.target_lang

    if target_lang is not None and getattr(self.config, "adapter_attn_dim", None) is None:
        raise ValueError(f"Cannot pass `target_lang`: {target_lang} if `config.adapter_attn_dim` is not defined.")
    elif target_lang is None and getattr(self.config, "adapter_attn_dim", None) is not None:
        logger.info("By default `target_lang` is set to 'eng'.")
    elif target_lang is not None:
        self.load_adapter(target_lang, force_load=True)
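
In practice `tie_weights` is triggered indirectly by passing `target_lang` to `from_pretrained`. A sketch under the assumption that an adapter-equipped checkpoint (such as the MMS models) is used and that mindnlp forwards `target_lang` like the Transformers API; the checkpoint name is illustrative.

```python
from mindnlp.transformers import Wav2Vec2ForCTC

# Loading with `target_lang` makes `tie_weights` call `load_adapter("fra", force_load=True)` under the hood.
model = Wav2Vec2ForCTC.from_pretrained("facebook/mms-1b-all", target_lang="fra")

# Later, switch to another language adapter without reloading the full model.
model.load_adapter("eng")
```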

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForPreTraining

Bases: Wav2Vec2PreTrainedModel

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
class Wav2Vec2ForPreTraining(Wav2Vec2PreTrainedModel):
    def __init__(self, config: Wav2Vec2Config):
        super().__init__(config)
        self.wav2vec2 = Wav2Vec2Model(config)
        self.dropout_features = nn.Dropout(config.feat_quantizer_dropout)

        self.quantizer = Wav2Vec2GumbelVectorQuantizer(config)

        self.project_hid = nn.Linear(config.hidden_size, config.proj_codevector_dim)
        self.project_q = nn.Linear(config.codevector_dim, config.proj_codevector_dim)

        # Initialize weights and apply final processing
        self.post_init()

    def set_gumbel_temperature(self, temperature: int):
        """
        Set the Gumbel softmax temperature to a given value. Only necessary for training
        """
        self.quantizer.temperature = temperature

    def freeze_feature_extractor(self):
        """
        Calling this function will disable the gradient computation for the feature encoder so that its parameters will
        not be updated during training.
        """
        warnings.warn(
            "The method `freeze_feature_extractor` is deprecated and will be removed in Transformers v5. "
            "Please use the equivalent `freeze_feature_encoder` method instead.",
            FutureWarning,
        )
        self.freeze_feature_encoder()

    def freeze_feature_encoder(self):
        """
        Calling this function will disable the gradient computation for the feature encoder so that its parameter will
        not be updated during training.
        """
        self.wav2vec2.feature_extractor._freeze_parameters()

    @staticmethod
    def compute_contrastive_logits(
        target_features: mindspore.Tensor,
        negative_features: mindspore.Tensor,
        predicted_features: mindspore.Tensor,
        temperature: int = 0.1,
    ):
        """
        Compute logits for contrastive loss based using cosine similarity as the distance measure between
        `[positive_feature, negative_features]` and `[predicted_features]`. Additionally, temperature can be applied.
        """
        target_features = ops.cat([target_features, negative_features], dim=0)

        logits = nn.functional.cosine_similarity(predicted_features.float(), target_features.float(), dim=-1).type_as(
            target_features
        )

        # apply temperature
        logits = logits / temperature
        return logits

    def forward(
        self,
        input_values: Optional[mindspore.Tensor],
        attention_mask: Optional[mindspore.Tensor] = None,
        mask_time_indices: Optional[mindspore.Tensor] = None,
        sampled_negative_indices: Optional[mindspore.Tensor] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple, Wav2Vec2ForPreTrainingOutput]:
        r"""
        mask_time_indices (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
            Indices to mask extracted features for contrastive loss. When in training mode, model learns to predict
            masked extracted features in *config.proj_codevector_dim* space.
        sampled_negative_indices (`mindspore.Tensor` of shape `(batch_size, sequence_length, num_negatives)`, *optional*):
            Indices indicating which quantized target vectors are used as negative sampled vectors in contrastive loss.
            Required input for pre-training.

        Returns:

        Example:

        ```python
        >>> import mindspore
        >>> from transformers import AutoFeatureExtractor, Wav2Vec2ForPreTraining
        >>> from transformers.models.wav2vec2.modeling_wav2vec2 import _compute_mask_indices, _sample_negative_indices
        >>> from datasets import load_dataset

        >>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
        >>> model = Wav2Vec2ForPreTraining.from_pretrained("facebook/wav2vec2-base")

        >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
        >>> input_values = feature_extractor(ds[0]["audio"]["array"], return_tensors="ms").input_values  # Batch size 1

        >>> # compute masked indices
        >>> batch_size, raw_sequence_length = input_values.shape
        >>> sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length).item()
        >>> mask_time_indices = _compute_mask_indices(
        ...     shape=(batch_size, sequence_length), mask_prob=0.2, mask_length=2
        ... )
        >>> sampled_negative_indices = _sample_negative_indices(
        ...     features_shape=(batch_size, sequence_length),
        ...     num_negatives=model.config.num_negatives,
        ...     mask_time_indices=mask_time_indices,
        ... )
        >>> mask_time_indices = mindspore.tensor(data=mask_time_indices, dtype=mindspore.int64)
        >>> sampled_negative_indices = mindspore.tensor(
        ...     data=sampled_negative_indices, dtype=mindspore.int64
        ... )

        >>> with no_grad():
        ...     outputs = model(input_values, mask_time_indices=mask_time_indices)

        >>> # compute cosine similarity between predicted (=projected_states) and target (=projected_quantized_states)
        >>> cosine_sim = ops.cosine_similarity(outputs.projected_states, outputs.projected_quantized_states, dim=-1)

        >>> # show that cosine similarity is much higher than random
        >>> cosine_sim[mask_time_indices.to(mindspore.bool_)].mean() > 0.5
        tensor(True)

        >>> # for contrastive loss training model should be put into train mode
        >>> model = model.train()
        >>> loss = model(
        ...     input_values, mask_time_indices=mask_time_indices, sampled_negative_indices=sampled_negative_indices
        ... ).loss
        ```"""

        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        if mask_time_indices is not None:
            mask_time_indices = mask_time_indices.to(mindspore.bool_)

        outputs = self.wav2vec2(
            input_values,
            attention_mask=attention_mask,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            mask_time_indices=mask_time_indices,
            return_dict=return_dict,
        )

        # 1. project all transformed features (including masked) to final vq dim
        transformer_features = self.project_hid(outputs[0])

        # 2. quantize all (unmasked) extracted features and project to final vq dim
        extract_features = self.dropout_features(outputs[1])

        if attention_mask is not None:
            # compute reduced attention_mask correponding to feature vectors
            attention_mask = self._get_feature_vector_attention_mask(
                extract_features.shape[1], attention_mask, add_adapter=False
            )

        quantized_features, codevector_perplexity = self.quantizer(
            extract_features, mask_time_indices=mask_time_indices
        )

        quantized_features = quantized_features.to(self.project_q.weight.dtype)
        quantized_features = self.project_q(quantized_features)

        loss = contrastive_loss = diversity_loss = None
        if sampled_negative_indices is not None:
            batch_size, sequence_length, hidden_size = quantized_features.shape

            # for training, we sample negatives
            # 3. sample K negatives (distractors) quantized states for contrastive loss
            # if attention_mask is passed, make sure that padded feature vectors cannot be sampled
            # sample negative quantized vectors BTC => (BxT)C
            negative_quantized_features = quantized_features.view(-1, hidden_size)[
                sampled_negative_indices.long().view(-1)
            ]
            negative_quantized_features = negative_quantized_features.view(
                batch_size, sequence_length, -1, hidden_size
            ).permute(2, 0, 1, 3)

            # 4. compute logits, corresponding to `logs = sim(c_t, [q_t, \sim{q}_t]) / \kappa`
            # of equation (3) in https://arxiv.org/pdf/2006.11477.pdf
            logits = self.compute_contrastive_logits(
                quantized_features[None, :],
                negative_quantized_features,
                transformer_features,
                self.config.contrastive_logits_temperature,
            )

            # 5. if a negative vector is identical to the positive (i.e. when codebook utilization is low),
            # its cosine similarity will be masked
            neg_is_pos = (quantized_features == negative_quantized_features).all(-1)

            if neg_is_pos.any():
                logits[1:][neg_is_pos] = float(ops.finfo(logits.dtype).min)

            # 6. compute contrastive loss \mathbf{L}_m = cross_entropy(logs) =
            # -log(exp(sim(c_t, q_t)/\kappa) / \sum_{\sim{q}} exp(sim(c_t, \sim{q})/\kappa))
            logits = ops.transpose(logits, 0, 2).reshape(-1, logits.shape[0])
            target = ops.transpose(((1 - mask_time_indices.long()) * -100), 0, 1).flatten()

            contrastive_loss = nn.functional.cross_entropy(logits.float(), target, reduction="sum")
            # 7. compute diversity loss: \mathbf{L}_d
            num_codevectors = self.config.num_codevectors_per_group * self.config.num_codevector_groups
            diversity_loss = ((num_codevectors - codevector_perplexity) / num_codevectors) * mask_time_indices.sum()
            # 8. \mathbf{L} = \mathbf{L}_m + \alpha * \mathbf{L}_d
            loss = contrastive_loss + self.config.diversity_loss_weight * diversity_loss

        if not return_dict:
            if loss is not None:
                return (loss, transformer_features, quantized_features, codevector_perplexity) + outputs[2:]
            return (transformer_features, quantized_features, codevector_perplexity) + outputs[2:]

        return Wav2Vec2ForPreTrainingOutput(
            loss=loss,
            projected_states=transformer_features,
            projected_quantized_states=quantized_features,
            codevector_perplexity=codevector_perplexity,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
            contrastive_loss=contrastive_loss,
            diversity_loss=diversity_loss,
        )

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForPreTraining.compute_contrastive_logits(target_features, negative_features, predicted_features, temperature=0.1) staticmethod

Compute logits for the contrastive loss using cosine similarity as the distance measure between [positive_feature, negative_features] and [predicted_features]. A temperature can additionally be applied.

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
@staticmethod
def compute_contrastive_logits(
    target_features: mindspore.Tensor,
    negative_features: mindspore.Tensor,
    predicted_features: mindspore.Tensor,
    temperature: int = 0.1,
):
    """
    Compute logits for contrastive loss based using cosine similarity as the distance measure between
    `[positive_feature, negative_features]` and `[predicted_features]`. Additionally, temperature can be applied.
    """
    target_features = ops.cat([target_features, negative_features], dim=0)

    logits = nn.functional.cosine_similarity(predicted_features.float(), target_features.float(), dim=-1).type_as(
        target_features
    )

    # apply temperature
    logits = logits / temperature
    return logits
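
A NumPy re-derivation (synthetic tensors, not the library code) of what this helper computes, assuming K sampled negatives per time step: the positives are concatenated in front of the negatives, so the result has shape (K + 1, batch, seq_len) and row 0 scores the positive targets.

```python
import numpy as np

B, T, D, K = 2, 50, 256, 10
positives = np.random.randn(1, B, T, D).astype(np.float32)    # true quantized targets q_t
negatives = np.random.randn(K, B, T, D).astype(np.float32)    # K sampled distractors per time step
predicted = np.random.randn(1, B, T, D).astype(np.float32)    # projected transformer outputs c_t

targets = np.concatenate([positives, negatives], axis=0)      # (K + 1, B, T, D), as ops.cat does
cos = (predicted * targets).sum(-1) / (
    np.linalg.norm(predicted, axis=-1) * np.linalg.norm(targets, axis=-1)
)
logits = cos / 0.1                                             # divide by the temperature kappa
print(logits.shape)                                            # (K + 1, B, T); row 0 holds the positive similarities
```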

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForPreTraining.forward(input_values, attention_mask=None, mask_time_indices=None, sampled_negative_indices=None, output_attentions=None, output_hidden_states=None, return_dict=None)

mask_time_indices (mindspore.Tensor of shape (batch_size, sequence_length), optional): Indices to mask extracted features for contrastive loss. When in training mode, model learns to predict masked extracted features in config.proj_codevector_dim space.

sampled_negative_indices (mindspore.Tensor of shape (batch_size, sequence_length, num_negatives), optional): Indices indicating which quantized target vectors are used as negative sampled vectors in contrastive loss. Required input for pre-training.

Returns: Wav2Vec2ForPreTrainingOutput or tuple(mindspore.Tensor)

Example:

>>> import mindspore
>>> from transformers import AutoFeatureExtractor, Wav2Vec2ForPreTraining
>>> from transformers.models.wav2vec2.modeling_wav2vec2 import _compute_mask_indices, _sample_negative_indices
>>> from datasets import load_dataset

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
>>> model = Wav2Vec2ForPreTraining.from_pretrained("facebook/wav2vec2-base")

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> input_values = feature_extractor(ds[0]["audio"]["array"], return_tensors="ms").input_values  # Batch size 1

>>> # compute masked indices
>>> batch_size, raw_sequence_length = input_values.shape
>>> sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length).item()
>>> mask_time_indices = _compute_mask_indices(
...     shape=(batch_size, sequence_length), mask_prob=0.2, mask_length=2
... )
>>> sampled_negative_indices = _sample_negative_indices(
...     features_shape=(batch_size, sequence_length),
...     num_negatives=model.config.num_negatives,
...     mask_time_indices=mask_time_indices,
... )
>>> mask_time_indices = mindspore.tensor(data=mask_time_indices, dtype=mindspore.int64)
>>> sampled_negative_indices = mindspore.tensor(
...     data=sampled_negative_indices, dtype=mindspore.int64
... )

>>> with no_grad():
...     outputs = model(input_values, mask_time_indices=mask_time_indices)

>>> # compute cosine similarity between predicted (=projected_states) and target (=projected_quantized_states)
>>> cosine_sim = ops.cosine_similarity(outputs.projected_states, outputs.projected_quantized_states, dim=-1)

>>> # show that cosine similarity is much higher than random
>>> cosine_sim[mask_time_indices.to(mindspore.bool_)].mean() > 0.5
tensor(True)

>>> # for contrastive loss training model should be put into train mode
>>> model = model.train()
>>> loss = model(
...     input_values, mask_time_indices=mask_time_indices, sampled_negative_indices=sampled_negative_indices
... ).loss
Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
def forward(
    self,
    input_values: Optional[mindspore.Tensor],
    attention_mask: Optional[mindspore.Tensor] = None,
    mask_time_indices: Optional[mindspore.Tensor] = None,
    sampled_negative_indices: Optional[mindspore.Tensor] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Union[Tuple, Wav2Vec2ForPreTrainingOutput]:
    r"""
    mask_time_indices (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
        Indices to mask extracted features for contrastive loss. When in training mode, model learns to predict
        masked extracted features in *config.proj_codevector_dim* space.
    sampled_negative_indices (`mindspore.Tensor` of shape `(batch_size, sequence_length, num_negatives)`, *optional*):
        Indices indicating which quantized target vectors are used as negative sampled vectors in contrastive loss.
        Required input for pre-training.

    Returns:

    Example:

    ```python
    >>> import mindspore
    >>> from transformers import AutoFeatureExtractor, Wav2Vec2ForPreTraining
    >>> from transformers.models.wav2vec2.modeling_wav2vec2 import _compute_mask_indices, _sample_negative_indices
    >>> from datasets import load_dataset

    >>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
    >>> model = Wav2Vec2ForPreTraining.from_pretrained("facebook/wav2vec2-base")

    >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
    >>> input_values = feature_extractor(ds[0]["audio"]["array"], return_tensors="ms").input_values  # Batch size 1

    >>> # compute masked indices
    >>> batch_size, raw_sequence_length = input_values.shape
    >>> sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length).item()
    >>> mask_time_indices = _compute_mask_indices(
    ...     shape=(batch_size, sequence_length), mask_prob=0.2, mask_length=2
    ... )
    >>> sampled_negative_indices = _sample_negative_indices(
    ...     features_shape=(batch_size, sequence_length),
    ...     num_negatives=model.config.num_negatives,
    ...     mask_time_indices=mask_time_indices,
    ... )
    >>> mask_time_indices = mindspore.tensor(data=mask_time_indices, dtype=mindspore.int64)
    >>> sampled_negative_indices = mindspore.tensor(
    ...     data=sampled_negative_indices, dtype=mindspore.int64
    ... )

    >>> with no_grad():
    ...     outputs = model(input_values, mask_time_indices=mask_time_indices)

    >>> # compute cosine similarity between predicted (=projected_states) and target (=projected_quantized_states)
    >>> cosine_sim = ops.cosine_similarity(outputs.projected_states, outputs.projected_quantized_states, dim=-1)

    >>> # show that cosine similarity is much higher than random
    >>> cosine_sim[mask_time_indices.to(mindspore.bool_)].mean() > 0.5
    tensor(True)

    >>> # for contrastive loss training model should be put into train mode
    >>> model = model.train()
    >>> loss = model(
    ...     input_values, mask_time_indices=mask_time_indices, sampled_negative_indices=sampled_negative_indices
    ... ).loss
    ```"""

    return_dict = return_dict if return_dict is not None else self.config.use_return_dict

    if mask_time_indices is not None:
        mask_time_indices = mask_time_indices.to(mindspore.bool_)

    outputs = self.wav2vec2(
        input_values,
        attention_mask=attention_mask,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        mask_time_indices=mask_time_indices,
        return_dict=return_dict,
    )

    # 1. project all transformed features (including masked) to final vq dim
    transformer_features = self.project_hid(outputs[0])

    # 2. quantize all (unmasked) extracted features and project to final vq dim
    extract_features = self.dropout_features(outputs[1])

    if attention_mask is not None:
        # compute reduced attention_mask correponding to feature vectors
        attention_mask = self._get_feature_vector_attention_mask(
            extract_features.shape[1], attention_mask, add_adapter=False
        )

    quantized_features, codevector_perplexity = self.quantizer(
        extract_features, mask_time_indices=mask_time_indices
    )

    quantized_features = quantized_features.to(self.project_q.weight.dtype)
    quantized_features = self.project_q(quantized_features)

    loss = contrastive_loss = diversity_loss = None
    if sampled_negative_indices is not None:
        batch_size, sequence_length, hidden_size = quantized_features.shape

        # for training, we sample negatives
        # 3. sample K negatives (distractors) quantized states for contrastive loss
        # if attention_mask is passed, make sure that padded feature vectors cannot be sampled
        # sample negative quantized vectors BTC => (BxT)C
        negative_quantized_features = quantized_features.view(-1, hidden_size)[
            sampled_negative_indices.long().view(-1)
        ]
        negative_quantized_features = negative_quantized_features.view(
            batch_size, sequence_length, -1, hidden_size
        ).permute(2, 0, 1, 3)

        # 4. compute logits, corresponding to `logs = sim(c_t, [q_t, \sim{q}_t]) / \kappa`
        # of equation (3) in https://arxiv.org/pdf/2006.11477.pdf
        logits = self.compute_contrastive_logits(
            quantized_features[None, :],
            negative_quantized_features,
            transformer_features,
            self.config.contrastive_logits_temperature,
        )

        # 5. if a negative vector is identical to the positive (i.e. when codebook utilization is low),
        # its cosine similarity will be masked
        neg_is_pos = (quantized_features == negative_quantized_features).all(-1)

        if neg_is_pos.any():
            logits[1:][neg_is_pos] = float(ops.finfo(logits.dtype).min)

        # 6. compute contrastive loss \mathbf{L}_m = cross_entropy(logs) =
        # -log(exp(sim(c_t, q_t)/\kappa) / \sum_{\sim{q}} exp(sim(c_t, \sim{q})/\kappa))
        logits = ops.transpose(logits, 0, 2).reshape(-1, logits.shape[0])
        target = ops.transpose(((1 - mask_time_indices.long()) * -100), 0, 1).flatten()

        contrastive_loss = nn.functional.cross_entropy(logits.float(), target, reduction="sum")
        # 7. compute diversity loss: \mathbf{L}_d
        num_codevectors = self.config.num_codevectors_per_group * self.config.num_codevector_groups
        diversity_loss = ((num_codevectors - codevector_perplexity) / num_codevectors) * mask_time_indices.sum()
        # 8. \mathbf{L} = \mathbf{L}_m + \alpha * \mathbf{L}_d
        loss = contrastive_loss + self.config.diversity_loss_weight * diversity_loss

    if not return_dict:
        if loss is not None:
            return (loss, transformer_features, quantized_features, codevector_perplexity) + outputs[2:]
        return (transformer_features, quantized_features, codevector_perplexity) + outputs[2:]

    return Wav2Vec2ForPreTrainingOutput(
        loss=loss,
        projected_states=transformer_features,
        projected_quantized_states=quantized_features,
        codevector_perplexity=codevector_perplexity,
        hidden_states=outputs.hidden_states,
        attentions=outputs.attentions,
        contrastive_loss=contrastive_loss,
        diversity_loss=diversity_loss,
    )

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForPreTraining.freeze_feature_encoder()

Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training.

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
def freeze_feature_encoder(self):
    """
    Calling this function will disable the gradient computation for the feature encoder so that its parameter will
    not be updated during training.
    """
    self.wav2vec2.feature_extractor._freeze_parameters()

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForPreTraining.freeze_feature_extractor()

Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training.

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
def freeze_feature_extractor(self):
    """
    Calling this function will disable the gradient computation for the feature encoder so that its parameters will
    not be updated during training.
    """
    warnings.warn(
        "The method `freeze_feature_extractor` is deprecated and will be removed in Transformers v5. "
        "Please use the equivalent `freeze_feature_encoder` method instead.",
        FutureWarning,
    )
    self.freeze_feature_encoder()

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForPreTraining.set_gumbel_temperature(temperature)

Set the Gumbel softmax temperature to a given value. Only necessary for training

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
def set_gumbel_temperature(self, temperature: int):
    """
    Set the Gumbel softmax temperature to a given value. Only necessary for training
    """
    self.quantizer.temperature = temperature
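
During pre-training the Gumbel temperature is usually annealed rather than fixed; the wav2vec 2.0 paper decays it from 2.0 towards a floor of 0.5 by a factor of 0.999995 per update. A minimal schedule sketch, where `model` is assumed to be a Wav2Vec2ForPreTraining instance from the surrounding training loop:

```python
num_training_steps = 400_000                      # illustrative schedule length
max_temp, min_temp, decay = 2.0, 0.5, 0.999995    # values used in the wav2vec 2.0 paper

for step in range(num_training_steps):
    # `model` is assumed to come from the surrounding pre-training setup
    model.set_gumbel_temperature(max(max_temp * decay ** step, min_temp))
    # ... pre-training forward / backward / optimizer update for this step ...
```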

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForPreTrainingOutput dataclass

Bases: ModelOutput

Output type of [Wav2Vec2ForPreTraining], with potential hidden states and attentions.

PARAMETER DESCRIPTION
loss

Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the official paper (https://arxiv.org/pdf/2006.11477.pdf).

TYPE: *optional*, returned when `sample_negative_indices` are passed, `mindspore.Tensor` of shape `(1,)` DEFAULT: None

projected_states

Hidden-states of the model projected to config.proj_codevector_dim that can be used to predict the masked projected quantized states.

TYPE: `mindspore.Tensor` of shape `(batch_size, sequence_length, config.proj_codevector_dim)` DEFAULT: None

projected_quantized_states

Quantized extracted feature vectors projected to config.proj_codevector_dim representing the positive target vectors for contrastive loss.

TYPE: `mindspore.Tensor` of shape `(batch_size, sequence_length, config.proj_codevector_dim)` DEFAULT: None

hidden_states

Tuple of mindspore.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

TYPE: `tuple(mindspore.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True` DEFAULT: None

attentions

Tuple of mindspore.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

TYPE: `tuple(mindspore.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True` DEFAULT: None

contrastive_loss

The contrastive loss (L_m) as stated in the official paper (https://arxiv.org/pdf/2006.11477.pdf).

TYPE: *optional*, returned when `sample_negative_indices` are passed, `mindspore.Tensor` of shape `(1,)` DEFAULT: None

diversity_loss

The diversity loss (L_d) as stated in the official paper (https://arxiv.org/pdf/2006.11477.pdf).

TYPE: *optional*, returned when `sample_negative_indices` are passed, `mindspore.Tensor` of shape `(1,)` DEFAULT: None

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
@dataclass
class Wav2Vec2ForPreTrainingOutput(ModelOutput):
    """
    Output type of [`Wav2Vec2ForPreTraining`], with potential hidden states and attentions.

    Args:
        loss (*optional*, returned when `sample_negative_indices` are passed, `mindspore.Tensor` of shape `(1,)`):
            Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the [official
            paper](https://arxiv.org/pdf/2006.11477.pdf) . (classification) loss.
        projected_states (`mindspore.Tensor` of shape `(batch_size, sequence_length, config.proj_codevector_dim)`):
            Hidden-states of the model projected to *config.proj_codevector_dim* that can be used to predict the masked
            projected quantized states.
        projected_quantized_states (`mindspore.Tensor` of shape `(batch_size, sequence_length, config.proj_codevector_dim)`):
            Quantized extracted feature vectors projected to *config.proj_codevector_dim* representing the positive
            target vectors for contrastive loss.
        hidden_states (`tuple(mindspore.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
            Tuple of `mindspore.Tensor` (one for the output of the embeddings + one for the output of each layer) of
            shape `(batch_size, sequence_length, hidden_size)`.

            Hidden-states of the model at the output of each layer plus the initial embedding outputs.
        attentions (`tuple(mindspore.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
            Tuple of `mindspore.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
            heads.
        contrastive_loss (*optional*, returned when `sample_negative_indices` are passed, `mindspore.Tensor` of shape `(1,)`):
            The contrastive loss (L_m) as stated in the [official paper](https://arxiv.org/pdf/2006.11477.pdf) .
        diversity_loss (*optional*, returned when `sample_negative_indices` are passed, `mindspore.Tensor` of shape `(1,)`):
            The diversity loss (L_d) as stated in the [official paper](https://arxiv.org/pdf/2006.11477.pdf) .
    """

    loss: Optional[mindspore.Tensor] = None
    projected_states: mindspore.Tensor = None
    projected_quantized_states: mindspore.Tensor = None
    codevector_perplexity: mindspore.Tensor = None
    hidden_states: Optional[Tuple[mindspore.Tensor]] = None
    attentions: Optional[Tuple[mindspore.Tensor]] = None
    contrastive_loss: Optional[mindspore.Tensor] = None
    diversity_loss: Optional[mindspore.Tensor] = None
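
A brief field-access sketch, assuming `outputs` is the Wav2Vec2ForPreTrainingOutput returned by the pre-training forward pass above with `sampled_negative_indices` supplied; `codevector_perplexity` is the additional field not listed in the parameter table and acts as a codebook-usage diagnostic.

```python
# `outputs` is assumed to come from a Wav2Vec2ForPreTraining forward pass (see the example above).
total_loss = outputs.loss                                   # contrastive_loss + diversity_loss_weight * diversity_loss
assert outputs.projected_states.shape == outputs.projected_quantized_states.shape
print(total_loss, outputs.contrastive_loss, outputs.diversity_loss)
print(outputs.codevector_perplexity)                        # higher values mean more codebook entries are in use
```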

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForSequenceClassification

Bases: Wav2Vec2PreTrainedModel

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
class Wav2Vec2ForSequenceClassification(Wav2Vec2PreTrainedModel):
    def __init__(self, config):
        super().__init__(config)

        if hasattr(config, "add_adapter") and config.add_adapter:
            raise ValueError(
                "Sequence classification does not support the use of Wav2Vec2 adapters (config.add_adapter=True)"
            )
        self.wav2vec2 = Wav2Vec2Model(config)
        num_layers = config.num_hidden_layers + 1  # transformer layers + input embeddings
        if config.use_weighted_layer_sum:
            self.layer_weights = nn.Parameter(ops.ones(num_layers) / num_layers)
        self.projector = nn.Linear(config.hidden_size, config.classifier_proj_size)
        self.classifier = nn.Linear(config.classifier_proj_size, config.num_labels)

        # Initialize weights and apply final processing
        self.post_init()

    def freeze_feature_extractor(self):
        """
        Calling this function will disable the gradient computation for the feature encoder so that its parameters will
        not be updated during training.
        """
        warnings.warn(
            "The method `freeze_feature_extractor` is deprecated and will be removed in Transformers v5. "
            "Please use the equivalent `freeze_feature_encoder` method instead.",
            FutureWarning,
        )
        self.freeze_feature_encoder()

    def freeze_feature_encoder(self):
        """
        Calling this function will disable the gradient computation for the feature encoder so that its parameter will
        not be updated during training.
        """
        self.wav2vec2.feature_extractor._freeze_parameters()

    def freeze_base_model(self):
        """
        Calling this function will disable the gradient computation for the base model so that its parameters will not
        be updated during training. Only the classification head will be updated.
        """
        for param in self.wav2vec2.parameters():
            param.requires_grad = False

    def forward(
        self,
        input_values: Optional[mindspore.Tensor],
        attention_mask: Optional[mindspore.Tensor] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
        labels: Optional[mindspore.Tensor] = None,
    ) -> Union[Tuple, SequenceClassifierOutput]:
        r"""
        labels (`mindspore.Tensor` of shape `(batch_size,)`, *optional*):
            Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
            config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
            `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
        """

        return_dict = return_dict if return_dict is not None else self.config.use_return_dict
        output_hidden_states = True if self.config.use_weighted_layer_sum else output_hidden_states

        outputs = self.wav2vec2(
            input_values,
            attention_mask=attention_mask,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

        if self.config.use_weighted_layer_sum:
            hidden_states = outputs[_HIDDEN_STATES_START_POSITION]
            hidden_states = ops.stack(hidden_states, dim=1)
            norm_weights = nn.functional.softmax(self.layer_weights, dim=-1)
            hidden_states = (hidden_states * norm_weights.view(-1, 1, 1)).sum(dim=1)
        else:
            hidden_states = outputs[0]

        hidden_states = self.projector(hidden_states)
        if attention_mask is None:
            pooled_output = ops.mean(hidden_states, dim=1)
        else:
            padding_mask = self._get_feature_vector_attention_mask(hidden_states.shape[1], attention_mask)
            hidden_states[~padding_mask] = 0.0
            pooled_output = ops.sum(hidden_states, dim=1) / ops.sum(padding_mask, dim=1).view(-1, 1)

        logits = self.classifier(pooled_output)

        loss = None
        if labels is not None:
            loss_fct = CrossEntropyLoss()
            loss = loss_fct(logits.view(-1, self.config.num_labels), labels.view(-1))

        if not return_dict:
            output = (logits,) + outputs[_HIDDEN_STATES_START_POSITION:]
            return ((loss,) + output) if loss is not None else output

        return SequenceClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForSequenceClassification.forward(input_values, attention_mask=None, output_attentions=None, output_hidden_states=None, return_dict=None, labels=None)

labels (mindspore.Tensor of shape (batch_size,), optional): Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Square loss); if config.num_labels > 1, a classification loss is computed (Cross-Entropy).
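
Example (a minimal sketch, not taken from this page: the checkpoint name, the AutoFeatureExtractor import, and the return_tensors="ms" convention are assumptions):

>>> import numpy as np
>>> from mindnlp.transformers import AutoFeatureExtractor, Wav2Vec2ForSequenceClassification
>>> ckpt = "superb/wav2vec2-base-superb-ks"  # hypothetical audio-classification checkpoint
>>> feature_extractor = AutoFeatureExtractor.from_pretrained(ckpt)
>>> model = Wav2Vec2ForSequenceClassification.from_pretrained(ckpt)
>>> waveform = np.random.randn(16000).astype(np.float32)  # 1 s of 16 kHz audio as a stand-in
>>> inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="ms")
>>> outputs = model(input_values=inputs["input_values"])
>>> predicted_id = int(outputs.logits.asnumpy().argmax(axis=-1)[0])
>>> print(model.config.id2label[predicted_id])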

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
def forward(
    self,
    input_values: Optional[mindspore.Tensor],
    attention_mask: Optional[mindspore.Tensor] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    labels: Optional[mindspore.Tensor] = None,
) -> Union[Tuple, SequenceClassifierOutput]:
    r"""
    labels (`mindspore.Tensor` of shape `(batch_size,)`, *optional*):
        Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
        config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
        `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
    """

    return_dict = return_dict if return_dict is not None else self.config.use_return_dict
    output_hidden_states = True if self.config.use_weighted_layer_sum else output_hidden_states

    outputs = self.wav2vec2(
        input_values,
        attention_mask=attention_mask,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        return_dict=return_dict,
    )

    if self.config.use_weighted_layer_sum:
        hidden_states = outputs[_HIDDEN_STATES_START_POSITION]
        hidden_states = ops.stack(hidden_states, dim=1)
        norm_weights = nn.functional.softmax(self.layer_weights, dim=-1)
        hidden_states = (hidden_states * norm_weights.view(-1, 1, 1)).sum(dim=1)
    else:
        hidden_states = outputs[0]

    hidden_states = self.projector(hidden_states)
    if attention_mask is None:
        pooled_output = ops.mean(hidden_states, dim=1)
    else:
        padding_mask = self._get_feature_vector_attention_mask(hidden_states.shape[1], attention_mask)
        hidden_states[~padding_mask] = 0.0
        pooled_output = ops.sum(hidden_states, dim=1) / ops.sum(padding_mask, dim=1).view(-1, 1)

    logits = self.classifier(pooled_output)

    loss = None
    if labels is not None:
        loss_fct = CrossEntropyLoss()
        loss = loss_fct(logits.view(-1, self.config.num_labels), labels.view(-1))

    if not return_dict:
        output = (logits,) + outputs[_HIDDEN_STATES_START_POSITION:]
        return ((loss,) + output) if loss is not None else output

    return SequenceClassifierOutput(
        loss=loss,
        logits=logits,
        hidden_states=outputs.hidden_states,
        attentions=outputs.attentions,
    )

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForSequenceClassification.freeze_base_model()

Calling this function will disable the gradient computation for the base model so that its parameters will not be updated during training. Only the classification head will be updated.
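
For head-only fine-tuning, a minimal sketch (the checkpoint name is an assumption) is:

>>> model = Wav2Vec2ForSequenceClassification.from_pretrained("facebook/wav2vec2-base")
>>> model.freeze_base_model()
>>> trainable = [p for p in model.parameters() if p.requires_grad]  # projector + classifier only
>>> len(trainable)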

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
def freeze_base_model(self):
    """
    Calling this function will disable the gradient computation for the base model so that its parameters will not
    be updated during training. Only the classification head will be updated.
    """
    for param in self.wav2vec2.parameters():
        param.requires_grad = False

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForSequenceClassification.freeze_feature_encoder()

Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training.

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
def freeze_feature_encoder(self):
    """
    Calling this function will disable the gradient computation for the feature encoder so that its parameter will
    not be updated during training.
    """
    self.wav2vec2.feature_extractor._freeze_parameters()

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForSequenceClassification.freeze_feature_extractor()

Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training.

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
def freeze_feature_extractor(self):
    """
    Calling this function will disable the gradient computation for the feature encoder so that its parameters will
    not be updated during training.
    """
    warnings.warn(
        "The method `freeze_feature_extractor` is deprecated and will be removed in Transformers v5. "
        "Please use the equivalent `freeze_feature_encoder` method instead.",
        FutureWarning,
    )
    self.freeze_feature_encoder()

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForXVector

Bases: Wav2Vec2PreTrainedModel

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
class Wav2Vec2ForXVector(Wav2Vec2PreTrainedModel):
    def __init__(self, config):
        super().__init__(config)

        self.wav2vec2 = Wav2Vec2Model(config)
        num_layers = config.num_hidden_layers + 1  # transformer layers + input embeddings
        if config.use_weighted_layer_sum:
            self.layer_weights = nn.Parameter(ops.ones(num_layers) / num_layers)
        self.projector = nn.Linear(config.hidden_size, config.tdnn_dim[0])

        tdnn_layers = [TDNNLayer(config, i) for i in range(len(config.tdnn_dim))]
        self.tdnn = nn.ModuleList(tdnn_layers)

        self.feature_extractor = nn.Linear(config.tdnn_dim[-1] * 2, config.xvector_output_dim)
        self.classifier = nn.Linear(config.xvector_output_dim, config.xvector_output_dim)

        self.objective = AMSoftmaxLoss(config.xvector_output_dim, config.num_labels)

        self.init_weights()

    def freeze_feature_extractor(self):
        """
        Calling this function will disable the gradient computation for the feature encoder so that its parameter will
        not be updated during training.
        """
        warnings.warn(
            "The method `freeze_feature_extractor` is deprecated and will be removed in Transformers v5. "
            "Please use the equivalent `freeze_feature_encoder` method instead.",
            FutureWarning,
        )
        self.freeze_feature_encoder()

    def freeze_feature_encoder(self):
        """
        Calling this function will disable the gradient computation for the feature encoder so that its parameter will
        not be updated during training.
        """
        self.wav2vec2.feature_extractor._freeze_parameters()

    def freeze_base_model(self):
        """
        Calling this function will disable the gradient computation for the base model so that its parameters will not
        be updated during training. Only the classification head will be updated.
        """
        for param in self.wav2vec2.parameters():
            param.requires_grad = False

    def _get_tdnn_output_lengths(self, input_lengths: Union[mindspore.Tensor, int]):
        """
        Computes the output length of the TDNN layers
        """

        def _conv_out_length(input_length, kernel_size, stride):
            # 1D convolutional layer output length formula taken
            return (input_length - kernel_size) // stride + 1

        for kernel_size in self.config.tdnn_kernel:
            input_lengths = _conv_out_length(input_lengths, kernel_size, 1)

        return input_lengths

    def forward(
        self,
        input_values: Optional[mindspore.Tensor],
        attention_mask: Optional[mindspore.Tensor] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
        labels: Optional[mindspore.Tensor] = None,
    ) -> Union[Tuple, XVectorOutput]:
        r"""
        labels (`mindspore.Tensor` of shape `(batch_size,)`, *optional*):
            Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
            config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
            `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
        """

        return_dict = return_dict if return_dict is not None else self.config.use_return_dict
        output_hidden_states = True if self.config.use_weighted_layer_sum else output_hidden_states

        outputs = self.wav2vec2(
            input_values,
            attention_mask=attention_mask,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

        if self.config.use_weighted_layer_sum:
            hidden_states = outputs[_HIDDEN_STATES_START_POSITION]
            hidden_states = ops.stack(hidden_states, dim=1)
            norm_weights = nn.functional.softmax(self.layer_weights, dim=-1)
            hidden_states = (hidden_states * norm_weights.view(-1, 1, 1)).sum(dim=1)
        else:
            hidden_states = outputs[0]

        hidden_states = self.projector(hidden_states)

        for tdnn_layer in self.tdnn:
            hidden_states = tdnn_layer(hidden_states)

        # Statistic Pooling
        if attention_mask is None:
            mean_features = ops.mean(hidden_states, dim=1)
            std_features = ops.std(hidden_states, dim=1)
        else:
            feat_extract_output_lengths = self._get_feat_extract_output_lengths(attention_mask.sum(dim=1))
            tdnn_output_lengths = self._get_tdnn_output_lengths(feat_extract_output_lengths)
            mean_features = []
            std_features = []
            for i, length in enumerate(tdnn_output_lengths):
                mean_features.append(ops.mean(hidden_states[i, :length], dim=0))
                std_features.append(ops.std(hidden_states[i, :length], dim=0))
            mean_features = ops.stack(mean_features)
            std_features = ops.stack(std_features)
        statistic_pooling = ops.cat([mean_features, std_features], dim=-1)

        output_embeddings = self.feature_extractor(statistic_pooling)
        logits = self.classifier(output_embeddings)

        loss = None
        if labels is not None:
            loss = self.objective(logits, labels)

        if not return_dict:
            output = (logits, output_embeddings) + outputs[_HIDDEN_STATES_START_POSITION:]
            return ((loss,) + output) if loss is not None else output

        return XVectorOutput(
            loss=loss,
            logits=logits,
            embeddings=output_embeddings,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForXVector.forward(input_values, attention_mask=None, output_attentions=None, output_hidden_states=None, return_dict=None, labels=None)

labels (mindspore.Tensor of shape (batch_size,), optional): Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Square loss); if config.num_labels > 1, a classification loss is computed (Cross-Entropy).
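
A hedged sketch of extracting speaker embeddings and comparing two utterances with cosine similarity (the checkpoint name, the feature extractor class, and the similarity threshold are assumptions):

>>> import numpy as np
>>> from mindnlp.transformers import AutoFeatureExtractor, Wav2Vec2ForXVector
>>> ckpt = "anton-l/wav2vec2-base-superb-sv"  # hypothetical speaker-verification checkpoint
>>> feature_extractor = AutoFeatureExtractor.from_pretrained(ckpt)
>>> model = Wav2Vec2ForXVector.from_pretrained(ckpt)
>>> audio = [np.random.randn(16000).astype(np.float32) for _ in range(2)]  # two stand-in utterances
>>> inputs = feature_extractor(audio, sampling_rate=16000, padding=True, return_tensors="ms")
>>> outputs = model(input_values=inputs["input_values"], attention_mask=inputs["attention_mask"])
>>> emb = outputs.embeddings.asnumpy()
>>> cos = float(np.dot(emb[0], emb[1]) / (np.linalg.norm(emb[0]) * np.linalg.norm(emb[1])))
>>> same_speaker = cos > 0.86  # arbitrary illustrative threshold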

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
def forward(
    self,
    input_values: Optional[mindspore.Tensor],
    attention_mask: Optional[mindspore.Tensor] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    labels: Optional[mindspore.Tensor] = None,
) -> Union[Tuple, XVectorOutput]:
    r"""
    labels (`mindspore.Tensor` of shape `(batch_size,)`, *optional*):
        Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
        config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
        `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
    """

    return_dict = return_dict if return_dict is not None else self.config.use_return_dict
    output_hidden_states = True if self.config.use_weighted_layer_sum else output_hidden_states

    outputs = self.wav2vec2(
        input_values,
        attention_mask=attention_mask,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        return_dict=return_dict,
    )

    if self.config.use_weighted_layer_sum:
        hidden_states = outputs[_HIDDEN_STATES_START_POSITION]
        hidden_states = ops.stack(hidden_states, dim=1)
        norm_weights = nn.functional.softmax(self.layer_weights, dim=-1)
        hidden_states = (hidden_states * norm_weights.view(-1, 1, 1)).sum(dim=1)
    else:
        hidden_states = outputs[0]

    hidden_states = self.projector(hidden_states)

    for tdnn_layer in self.tdnn:
        hidden_states = tdnn_layer(hidden_states)

    # Statistic Pooling
    if attention_mask is None:
        mean_features = ops.mean(hidden_states, dim=1)
        std_features = ops.std(hidden_states, dim=1)
    else:
        feat_extract_output_lengths = self._get_feat_extract_output_lengths(attention_mask.sum(dim=1))
        tdnn_output_lengths = self._get_tdnn_output_lengths(feat_extract_output_lengths)
        mean_features = []
        std_features = []
        for i, length in enumerate(tdnn_output_lengths):
            mean_features.append(ops.mean(hidden_states[i, :length], dim=0))
            std_features.append(ops.std(hidden_states[i, :length], dim=0))
        mean_features = ops.stack(mean_features)
        std_features = ops.stack(std_features)
    statistic_pooling = ops.cat([mean_features, std_features], dim=-1)

    output_embeddings = self.feature_extractor(statistic_pooling)
    logits = self.classifier(output_embeddings)

    loss = None
    if labels is not None:
        loss = self.objective(logits, labels)

    if not return_dict:
        output = (logits, output_embeddings) + outputs[_HIDDEN_STATES_START_POSITION:]
        return ((loss,) + output) if loss is not None else output

    return XVectorOutput(
        loss=loss,
        logits=logits,
        embeddings=output_embeddings,
        hidden_states=outputs.hidden_states,
        attentions=outputs.attentions,
    )

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForXVector.freeze_base_model()

Calling this function will disable the gradient computation for the base model so that its parameters will not be updated during training. Only the classification head will be updated.

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
def freeze_base_model(self):
    """
    Calling this function will disable the gradient computation for the base model so that its parameters will not
    be updated during training. Only the classification head will be updated.
    """
    for param in self.wav2vec2.parameters():
        param.requires_grad = False

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForXVector.freeze_feature_encoder()

Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training.

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
def freeze_feature_encoder(self):
    """
    Calling this function will disable the gradient computation for the feature encoder so that its parameter will
    not be updated during training.
    """
    self.wav2vec2.feature_extractor._freeze_parameters()

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForXVector.freeze_feature_extractor()

Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training.

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
def freeze_feature_extractor(self):
    """
    Calling this function will disable the gradient computation for the feature encoder so that its parameter will
    not be updated during training.
    """
    warnings.warn(
        "The method `freeze_feature_extractor` is deprecated and will be removed in Transformers v5. "
        "Please use the equivalent `freeze_feature_encoder` method instead.",
        FutureWarning,
    )
    self.freeze_feature_encoder()

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2GumbelVectorQuantizer

Bases: Module

Vector quantization using gumbel softmax. See "CATEGORICAL REPARAMETERIZATION WITH GUMBEL-SOFTMAX" (https://arxiv.org/abs/1611.01144) for more information.
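
The gumbel-softmax trick draws approximately one-hot codeword assignments from a categorical distribution while keeping the sampling step differentiable. Below is a standalone NumPy sketch of that sampling step, not the module's implementation (the temperature value and array shapes are illustrative only):

import numpy as np

def gumbel_softmax_sample(logits, tau=2.0, hard=True, rng=np.random.default_rng(0)):
    # add Gumbel(0, 1) noise, then take a temperature-scaled softmax over codewords
    gumbel_noise = -np.log(-np.log(rng.uniform(size=logits.shape) + 1e-20) + 1e-20)
    y_soft = np.exp((logits + gumbel_noise) / tau)
    y_soft /= y_soft.sum(axis=-1, keepdims=True)
    if not hard:
        return y_soft
    # hard=True: snap each row to a one-hot vector (frameworks keep gradients
    # flowing via the straight-through estimator, which NumPy cannot express)
    y_hard = np.zeros_like(y_soft)
    y_hard[np.arange(len(y_soft)), y_soft.argmax(axis=-1)] = 1.0
    return y_hard

logits = np.random.randn(4, 320)  # 4 time steps, 320 codewords in one group
sample = gumbel_softmax_sample(logits)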

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
class Wav2Vec2GumbelVectorQuantizer(nn.Module):
    """
    Vector quantization using gumbel softmax. See `[CATEGORICAL REPARAMETERIZATION WITH
    GUMBEL-SOFTMAX](https://arxiv.org/pdf/1611.01144.pdf) for more information.
    """

    def __init__(self, config):
        super().__init__()
        self.num_groups = config.num_codevector_groups
        self.num_vars = config.num_codevectors_per_group

        if config.codevector_dim % self.num_groups != 0:
            raise ValueError(
                f"`config.codevector_dim {config.codevector_dim} must be divisible "
                f"by `config.num_codevector_groups` {self.num_groups} for concatenation"
            )

        # storage for codebook variables (codewords)
        self.codevectors = nn.Parameter(
            ops.randn(1, self.num_groups * self.num_vars, config.codevector_dim // self.num_groups)
        )
        self.weight_proj = nn.Linear(config.conv_dim[-1], self.num_groups * self.num_vars)

        # can be decayed for training
        self.temperature = 2

    @staticmethod
    def _compute_perplexity(probs, mask=None):
        if mask is not None:
            mask_extended = mask.flatten()[:, None, None].broadcast_to(probs.shape)
            probs = ops.where(mask_extended, probs, ops.zeros_like(probs))
            marginal_probs = ops.sum(probs, dim=0) / mask.sum()
        else:
            marginal_probs = ops.mean(probs, dim=0)

        perplexity = ops.exp(-ops.sum(marginal_probs * ops.log(marginal_probs + 1e-7), dim=-1)).sum()
        return perplexity

    def forward(self, hidden_states, mask_time_indices=None):
        batch_size, sequence_length, hidden_size = hidden_states.shape

        # project to codevector dim
        hidden_states = self.weight_proj(hidden_states)
        hidden_states = hidden_states.view(batch_size * sequence_length * self.num_groups, -1)

        if self.training:
            # sample code vector probs via gumbel in differentiateable way
            codevector_probs = nn.functional.gumbel_softmax(
                hidden_states.float(), tau=self.temperature, hard=True
            ).type_as(hidden_states)

            # compute perplexity
            codevector_soft_dist = ops.softmax(
                hidden_states.view(batch_size * sequence_length, self.num_groups, -1).float(), dim=-1
            )
            perplexity = self._compute_perplexity(codevector_soft_dist, mask_time_indices)
        else:
            # take argmax in non-differentiable way
            # comptute hard codevector distribution (one hot)
            codevector_idx = ops.argmax(hidden_states, dim=-1).view(-1, 1)
            codevector_probs = ops.scatter(
                ops.zeros(hidden_states.shape, dtype=hidden_states.dtype),
                -1, codevector_idx, ops.ones(codevector_idx.shape, dtype=hidden_states.dtype)
            )
            codevector_probs = codevector_probs.view(batch_size * sequence_length, self.num_groups, -1)

            perplexity = self._compute_perplexity(codevector_probs, mask_time_indices)

        codevector_probs = codevector_probs.view(batch_size * sequence_length, -1)
        # use probs to retrieve codevectors
        codevectors_per_group = codevector_probs.unsqueeze(-1) * self.codevectors
        codevectors = codevectors_per_group.view(batch_size * sequence_length, self.num_groups, self.num_vars, -1)
        codevectors = codevectors.sum(-2).view(batch_size, sequence_length, -1)

        return codevectors, perplexity

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2Model

Bases: Wav2Vec2PreTrainedModel

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
class Wav2Vec2Model(Wav2Vec2PreTrainedModel):
    def __init__(self, config: Wav2Vec2Config):
        super().__init__(config)
        self.config = config
        self.feature_extractor = Wav2Vec2FeatureEncoder(config)
        self.feature_projection = Wav2Vec2FeatureProjection(config)

        # model only needs masking vector if mask prob is > 0.0
        if config.mask_time_prob > 0.0 or config.mask_feature_prob > 0.0:
            self.masked_spec_embed = nn.Parameter(ops.randn(config.hidden_size))

        if config.do_stable_layer_norm:
            self.encoder = Wav2Vec2EncoderStableLayerNorm(config)
        else:
            self.encoder = Wav2Vec2Encoder(config)

        self.adapter = Wav2Vec2Adapter(config) if config.add_adapter else None

        # Initialize weights and apply final processing
        self.post_init()

    def freeze_feature_extractor(self):
        """
        Calling this function will disable the gradient computation for the feature encoder so that its parameters will
        not be updated during training.
        """
        warnings.warn(
            "The method `freeze_feature_extractor` is deprecated and will be removed in Transformers v5. "
            "Please use the equivalent `freeze_feature_encoder` method instead.",
            FutureWarning,
        )
        self.freeze_feature_encoder()

    def freeze_feature_encoder(self):
        """
        Calling this function will disable the gradient computation for the feature encoder so that its parameter will
        not be updated during training.
        """
        self.feature_extractor._freeze_parameters()

    def _mask_hidden_states(
        self,
        hidden_states: mindspore.Tensor,
        mask_time_indices: Optional[mindspore.Tensor] = None,
        attention_mask: Optional[mindspore.Tensor] = None,
    ):
        """
        Masks extracted features along time axis and/or along feature axis according to
        [SpecAugment](https://arxiv.org/abs/1904.08779).
        """

        # `config.apply_spec_augment` can set masking to False
        if not getattr(self.config, "apply_spec_augment", True):
            return hidden_states

        # generate indices & apply SpecAugment along time axis
        batch_size, sequence_length, hidden_size = hidden_states.shape

        if mask_time_indices is not None:
            # apply SpecAugment along time axis with given mask_time_indices
            hidden_states[mask_time_indices] = self.masked_spec_embed.to(hidden_states.dtype)
        elif self.config.mask_time_prob > 0 and self.training:
            mask_time_indices = _compute_mask_indices(
                (batch_size, sequence_length),
                mask_prob=self.config.mask_time_prob,
                mask_length=self.config.mask_time_length,
                attention_mask=attention_mask,
                min_masks=self.config.mask_time_min_masks,
            )
            mask_time_indices = mindspore.tensor(mask_time_indices, dtype=mindspore.bool_)
            hidden_states[mask_time_indices] = self.masked_spec_embed.to(hidden_states.dtype)

        if self.config.mask_feature_prob > 0 and self.training:
            # generate indices & apply SpecAugment along feature axis
            mask_feature_indices = _compute_mask_indices(
                (batch_size, hidden_size),
                mask_prob=self.config.mask_feature_prob,
                mask_length=self.config.mask_feature_length,
                min_masks=self.config.mask_feature_min_masks,
            )
            mask_feature_indices = mindspore.tensor(mask_feature_indices, dtype=mindspore.bool_)
            mask_feature_indices = mask_feature_indices[:, None].broadcast_to((-1, sequence_length, -1))
            hidden_states[mask_feature_indices] = 0

        return hidden_states

    def forward(
        self,
        input_values: Optional[mindspore.Tensor],
        attention_mask: Optional[mindspore.Tensor] = None,
        mask_time_indices: Optional[mindspore.Tensor] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple, Wav2Vec2BaseModelOutput]:
        output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
        output_hidden_states = (
            output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
        )
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        extract_features = self.feature_extractor(input_values)
        extract_features = ops.transpose(extract_features, 1, 2)

        if attention_mask is not None:
            # compute reduced attention_mask corresponding to feature vectors
            attention_mask = self._get_feature_vector_attention_mask(
                extract_features.shape[1], attention_mask, add_adapter=False
            )

        hidden_states, extract_features = self.feature_projection(extract_features)
        hidden_states = self._mask_hidden_states(
            hidden_states, mask_time_indices=mask_time_indices, attention_mask=attention_mask
        )

        encoder_outputs = self.encoder(
            hidden_states,
            attention_mask=attention_mask,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

        hidden_states = encoder_outputs[0]

        if self.adapter is not None:
            hidden_states = self.adapter(hidden_states)

        if not return_dict:
            return (hidden_states, extract_features) + encoder_outputs[1:]

        return Wav2Vec2BaseModelOutput(
            last_hidden_state=hidden_states,
            extract_features=extract_features,
            hidden_states=encoder_outputs.hidden_states,
            attentions=encoder_outputs.attentions,
        )
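
A hedged usage sketch for the base model (the checkpoint name, the AutoFeatureExtractor import, and the return_tensors="ms" convention are assumptions):

>>> import numpy as np
>>> from mindnlp.transformers import AutoFeatureExtractor, Wav2Vec2Model
>>> ckpt = "facebook/wav2vec2-base-960h"
>>> feature_extractor = AutoFeatureExtractor.from_pretrained(ckpt)
>>> model = Wav2Vec2Model.from_pretrained(ckpt)
>>> waveform = np.random.randn(16000).astype(np.float32)  # 1 s of 16 kHz audio as a stand-in
>>> inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="ms")
>>> outputs = model(input_values=inputs["input_values"])
>>> outputs.last_hidden_state.shape  # (batch, frames, hidden_size), roughly one frame per 20 ms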

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2Model.freeze_feature_encoder()

Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training.

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
def freeze_feature_encoder(self):
    """
    Calling this function will disable the gradient computation for the feature encoder so that its parameter will
    not be updated during training.
    """
    self.feature_extractor._freeze_parameters()

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2Model.freeze_feature_extractor()

Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training.

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
def freeze_feature_extractor(self):
    """
    Calling this function will disable the gradient computation for the feature encoder so that its parameters will
    not be updated during training.
    """
    warnings.warn(
        "The method `freeze_feature_extractor` is deprecated and will be removed in Transformers v5. "
        "Please use the equivalent `freeze_feature_encoder` method instead.",
        FutureWarning,
    )
    self.freeze_feature_encoder()

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2PreTrainedModel

Bases: PreTrainedModel

An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models.

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
class Wav2Vec2PreTrainedModel(PreTrainedModel):
    """
    An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
    models.
    """

    config_class = Wav2Vec2Config
    base_model_prefix = "wav2vec2"
    main_input_name = "input_values"
    supports_gradient_checkpointing = True

    def _init_weights(self, module):
        """Initialize the weights"""
        # Wav2Vec2ForPreTraining last 2 linear layers need standard Linear init.
        if isinstance(module, Wav2Vec2ForPreTraining):
            module.project_hid.reset_parameters()
            module.project_q.reset_parameters()
            module.project_hid._is_initialized = True
            module.project_q._is_initialized = True
        # gumbel softmax requires special init
        elif isinstance(module, Wav2Vec2GumbelVectorQuantizer):
            nn.init.normal_(module.weight_proj.weight, mean=0.0, std=1)
            nn.init.zeros_(module.weight_proj.bias)
            nn.init.uniform_(module.codevectors)
        elif isinstance(module, Wav2Vec2PositionalConvEmbedding):
            nn.init.normal_(
                module.conv.weight,
                mean=0,
                std=2 * math.sqrt(1 / (module.conv.kernel_size[0] * module.conv.in_channels)),
            )
            nn.init.constant_(module.conv.bias, 0)
        elif isinstance(module, Wav2Vec2FeatureProjection):
            k = math.sqrt(1 / module.projection.in_features)
            nn.init.uniform_(module.projection.weight, a=-k, b=k)
            nn.init.uniform_(module.projection.bias, a=-k, b=k)
        elif isinstance(module, nn.Linear):
            nn.init.normal_(module.weight, mean=0.0, std=self.config.initializer_range)
            if module.bias is not None:
                nn.init.zeros_(module.bias)
        elif isinstance(module, (nn.LayerNorm, nn.GroupNorm)):
            nn.init.zeros_(module.bias)
            nn.init.ones_(module.weight)
        elif isinstance(module, nn.Conv1d):
            nn.init.kaiming_normal_(module.weight)
            if module.bias is not None:
                k = math.sqrt(module.groups / (module.in_channels * module.kernel_size[0]))
                nn.init.uniform_(module.bias, a=-k, b=k)

    def _get_feat_extract_output_lengths(
        self, input_lengths: Union[mindspore.Tensor, int], add_adapter: Optional[bool] = None
    ):
        """
        Computes the output length of the convolutional layers
        """

        add_adapter = self.config.add_adapter if add_adapter is None else add_adapter

        def _conv_out_length(input_length, kernel_size, stride):
            # 1D convolutional layer output length formula taken
            return ops.div(input_length - kernel_size, stride, rounding_mode="floor") + 1

        for kernel_size, stride in zip(self.config.conv_kernel, self.config.conv_stride):
            input_lengths = _conv_out_length(input_lengths, kernel_size, stride)

        if add_adapter:
            for _ in range(self.config.num_adapter_layers):
                input_lengths = _conv_out_length(input_lengths, 1, self.config.adapter_stride)

        return input_lengths

    def _get_feature_vector_attention_mask(
        self, feature_vector_length: int, attention_mask: mindspore.Tensor, add_adapter=None
    ):
        # Effectively attention_mask.sum(-1), but not inplace to be able to run
        # on inference mode.
        non_padded_lengths = ops.cumsum(attention_mask, dim=-1)[:, -1]

        output_lengths = self._get_feat_extract_output_lengths(non_padded_lengths, add_adapter=add_adapter)
        output_lengths = output_lengths.to(mindspore.int64)

        batch_size = attention_mask.shape[0]

        attention_mask = ops.zeros(
            (batch_size, feature_vector_length), dtype=attention_mask.dtype
        )
        # these two operations makes sure that all values before the output lengths idxs are attended to
        attention_mask[(ops.arange(attention_mask.shape[0]), output_lengths - 1)] = 1
        attention_mask = attention_mask.flip([-1]).int().cumsum(-1).flip([-1]).bool()
        return attention_mask

    def _get_adapters(self):
        if self.config.adapter_attn_dim is None:
            raise ValueError(f"{self.__class__} has no adapter layers. Make sure to define `config.adapter_attn_dim`.")

        adapter_weights = {}
        for name, module in self.named_modules():
            if isinstance(module, Wav2Vec2AttnAdapterLayer):
                for param_name, param in module.named_parameters():
                    adapter_weights[".".join([name, param_name])] = param

        if isinstance(self, Wav2Vec2ForCTC):
            for name, param in self.lm_head.named_parameters():
                adapter_weights[".".join(["lm_head", name])] = param

        return adapter_weights

    def init_adapter_layers(self):
        """
        (Re-)initialize attention adapter layers and lm head for adapter-only fine-tuning
        """
        # init attention adapters
        for module in self.modules():
            if isinstance(module, Wav2Vec2AttnAdapterLayer):
                self._init_weights(module)

        # init lm head
        if isinstance(self, Wav2Vec2ForCTC):
            self._init_weights(self.lm_head)

    def load_adapter(self, target_lang: str, force_load=True, **kwargs):
        r"""
        Load a language adapter model from a pre-trained adapter model.

        Parameters:
            target_lang (`str`):
                Has to be a language id of an existing adapter weight. Adapter weights are stored in the format
                adapter.<lang>.safetensors or adapter.<lang>.bin
            force_load (`bool`, defaults to `True`):
                Whether the weights shall be loaded even if `target_lang` matches `self.target_lang`.
            cache_dir (`Union[str, os.PathLike]`, *optional*):
                Path to a directory in which a downloaded pretrained model configuration should be cached if the
                standard cache should not be used.
            force_download (`bool`, *optional*, defaults to `False`):
                Whether or not to force the (re-)download of the model weights and configuration files, overriding the
                cached versions if they exist.
            resume_download:
                Deprecated and ignored. All downloads are now resumed by default when possible.
                Will be removed in v5 of Transformers.
            proxies (`Dict[str, str]`, *optional*):
                A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128',
                'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
            local_files_only(`bool`, *optional*, defaults to `False`):
                Whether or not to only look at local files (i.e., do not try to download the model).
            token (`str` or `bool`, *optional*):
                The token to use as HTTP bearer authorization for remote files. If `True`, or not specified, will use
                the token generated when running `huggingface-cli login` (stored in `~/.huggingface`).
            revision (`str`, *optional*, defaults to `"main"`):
                The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
                git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
                identifier allowed by git.

                <Tip>

                To test a pull request you made on the Hub, you can pass `revision="refs/pr/<pr_number>".

                </Tip>

            mirror (`str`, *optional*):
                Mirror source to accelerate downloads in China. If you are from China and have an accessibility
                problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety.
                Please refer to the mirror site for more information.

        <Tip>

        Activate the special ["offline-mode"](https://huggingface.co/transformers/installation.html#offline-mode) to
        use this method in a firewalled environment.

        </Tip>

        Examples:

        ```python
        >>> from transformers import Wav2Vec2ForCTC, AutoProcessor

        >>> ckpt = "facebook/mms-1b-all"
        >>> processor = AutoProcessor.from_pretrained(ckpt)
        >>> model = Wav2Vec2ForCTC.from_pretrained(ckpt, target_lang="eng")
        >>> # set specific language
        >>> processor.tokenizer.set_target_lang("spa")
        >>> model.load_adapter("spa")
        ```
        """
        if self.config.adapter_attn_dim is None:
            raise ValueError(f"Cannot load_adapter for {target_lang} if `config.adapter_attn_dim` is not defined.")

        if target_lang == self.target_lang and not force_load: # pylint: disable=access-member-before-definition
            logger.warning(f"Adapter weights are already set to {target_lang}.")
            return

        cache_dir = kwargs.pop("cache_dir", None)
        force_download = kwargs.pop("force_download", False)
        resume_download = kwargs.pop("resume_download", None)
        proxies = kwargs.pop("proxies", None)
        local_files_only = kwargs.pop("local_files_only", False)
        token = kwargs.pop("token", None)
        use_auth_token = kwargs.pop("use_auth_token", None)
        revision = kwargs.pop("revision", None)
        use_safetensors = kwargs.pop("use_safetensors", None if is_safetensors_available() else False)

        if use_auth_token is not None:
            warnings.warn(
                "The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.",
                FutureWarning,
            )
            if token is not None:
                raise ValueError(
                    "`token` and `use_auth_token` are both specified. Please set only the argument `token`."
                )
            token = use_auth_token

        model_path_or_id = self.config._name_or_path
        state_dict = None

        # 1. Let's first try loading a safetensors adapter weight
        if use_safetensors is not False:
            filepath = WAV2VEC2_ADAPTER_SAFE_FILE.format(target_lang)

            try:
                weight_path = cached_file(
                    model_path_or_id,
                    filename=filepath,
                    force_download=force_download,
                    resume_download=resume_download,
                    proxies=proxies,
                    local_files_only=local_files_only,
                    token=token,
                    revision=revision,
                    cache_dir=cache_dir,
                )

                state_dict = safe_load_file(weight_path)

            except EnvironmentError:
                if use_safetensors:
                    # Raise any environment error raise by `cached_file`. It will have a helpful error message adapted
                    # to the original exception.
                    raise

            except Exception:
                # For any other exception, we throw a generic error.
                if use_safetensors:
                    raise EnvironmentError(
                        f"Can't load the model for '{model_path_or_id}'. If you were trying to load it"
                        " from 'https://huggingface.co/models', make sure you don't have a local directory with the"
                        f" same name. Otherwise, make sure '{model_path_or_id}' is the correct path to a"
                        f" directory containing a file named {filepath}."
                    )

        # 2. If this didn't work let's try loading a PyTorch adapter weight
        if state_dict is None:
            filepath = WAV2VEC2_ADAPTER_PT_FILE.format(target_lang)

            try:
                weight_path = cached_file(
                    model_path_or_id,
                    filename=filepath,
                    force_download=force_download,
                    resume_download=resume_download,
                    proxies=proxies,
                    local_files_only=local_files_only,
                    token=token,
                    revision=revision,
                    cache_dir=cache_dir,
                )

                state_dict = load(weight_path)

            except EnvironmentError:
                # Raise any environment error raise by `cached_file`. It will have a helpful error message adapted
                # to the original exception.
                raise

            except Exception as e:
                print(e)
                # For any other exception, we throw a generic error.
                raise EnvironmentError(
                    f"Can't load the model for '{model_path_or_id}'. If you were trying to load it"
                    " from 'https://huggingface.co/models', make sure you don't have a local directory with the"
                    f" same name. Otherwise, make sure '{model_path_or_id}' is the correct path to a"
                    f" directory containing a file named {filepath}."
                )

        adapter_weights = self._get_adapters()
        unexpected_keys = set(state_dict.keys()) - set(adapter_weights.keys())
        missing_keys = set(adapter_weights.keys()) - set(state_dict.keys())

        if len(unexpected_keys) > 0:
            raise ValueError(f"The adapter weights {weight_path} has unexpected keys: {', '.join(unexpected_keys)}.")
        elif len(missing_keys) > 0:
            raise ValueError(f"The adapter weights {weight_path} has missing keys: {', '.join(missing_keys)}.")

        # make sure now vocab size is correct
        target_vocab_size = state_dict["lm_head.weight"].shape[0]
        if target_vocab_size != self.config.vocab_size:
            self.lm_head = nn.Linear(
                self.config.output_hidden_size, target_vocab_size, dtype=self.dtype
            )
            self.config.vocab_size = target_vocab_size

        # make sure that adapter weights are put in exactly the same precision and device placement and overwritten adapter weights
        state_dict = {k: v.to(adapter_weights[k].dtype) for k, v in state_dict.items()}
        self.load_state_dict(state_dict, strict=False)

        # set target language corectly
        self.target_lang = target_lang

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2PreTrainedModel.init_adapter_layers()

(Re-)initialize attention adapter layers and lm head for adapter-only fine-tuning
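
A hedged sketch of the adapter-only fine-tuning setup this enables: freeze all weights, re-initialize the adapters and lm head, then mark only the parameters returned by _get_adapters() as trainable (the MMS checkpoint follows the load_adapter example further below):

>>> from mindnlp.transformers import Wav2Vec2ForCTC
>>> model = Wav2Vec2ForCTC.from_pretrained("facebook/mms-1b-all", target_lang="eng")
>>> for param in model.parameters():
...     param.requires_grad = False
>>> model.init_adapter_layers()  # fresh adapter and lm head weights
>>> for param in model._get_adapters().values():
...     param.requires_grad = True  # train adapters (and lm head) only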

Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
def init_adapter_layers(self):
    """
    (Re-)initialize attention adapter layers and lm head for adapter-only fine-tuning
    """
    # init attention adapters
    for module in self.modules():
        if isinstance(module, Wav2Vec2AttnAdapterLayer):
            self._init_weights(module)

    # init lm head
    if isinstance(self, Wav2Vec2ForCTC):
        self._init_weights(self.lm_head)

mindnlp.transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2PreTrainedModel.load_adapter(target_lang, force_load=True, **kwargs)

Load a language adapter model from a pre-trained adapter model.

PARAMETER DESCRIPTION
target_lang

Has to be a language id of an existing adapter weight. Adapter weights are stored in the format adapter.<lang>.safetensors or adapter.<lang>.bin

TYPE: `str`

force_load

Whether the weights shall be loaded even if target_lang matches self.target_lang.

TYPE: `bool`, defaults to `True` DEFAULT: True

cache_dir

Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

TYPE: `Union[str, os.PathLike]`, *optional*

force_download

Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

TYPE: `bool`, *optional*, defaults to `False`

resume_download

Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.

proxies

A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.

TYPE: `Dict[str, str]`, *optional*

local_files_only

Whether or not to only look at local files (i.e., do not try to download the model).

TYPE: `bool`, *optional*, defaults to `False`

token

The token to use as HTTP bearer authorization for remote files. If True, or not specified, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).

TYPE: `str` or `bool`, *optional*

revision

The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.

To test a pull request you made on the Hub, you can pass `revision="refs/pr/<pr_number>"`.

TYPE: `str`, *optional*, defaults to `"main"`

mirror

Mirror source to accelerate downloads in China. If you are from China and have an accessibility problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. Please refer to the mirror site for more information.

TYPE: `str`, *optional*

Activate the special "offline-mode" to use this method in a firewalled environment.

Examples:

>>> from transformers import Wav2Vec2ForCTC, AutoProcessor

>>> ckpt = "facebook/mms-1b-all"
>>> processor = AutoProcessor.from_pretrained(ckpt)
>>> model = Wav2Vec2ForCTC.from_pretrained(ckpt, target_lang="eng")
>>> # set specific language
>>> processor.tokenizer.set_target_lang("spa")
>>> model.load_adapter("spa")
Source code in mindnlp\transformers\models\wav2vec2\modeling_wav2vec2.py
def load_adapter(self, target_lang: str, force_load=True, **kwargs):
    r"""
    Load a language adapter model from a pre-trained adapter model.

    Parameters:
        target_lang (`str`):
            Has to be a language id of an existing adapter weight. Adapter weights are stored in the format
            adapter.<lang>.safetensors or adapter.<lang>.bin
        force_load (`bool`, defaults to `True`):
            Whether the weights shall be loaded even if `target_lang` matches `self.target_lang`.
        cache_dir (`Union[str, os.PathLike]`, *optional*):
            Path to a directory in which a downloaded pretrained model configuration should be cached if the
            standard cache should not be used.
        force_download (`bool`, *optional*, defaults to `False`):
            Whether or not to force the (re-)download of the model weights and configuration files, overriding the
            cached versions if they exist.
        resume_download:
            Deprecated and ignored. All downloads are now resumed by default when possible.
            Will be removed in v5 of Transformers.
        proxies (`Dict[str, str]`, *optional*):
            A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128',
            'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
        local_files_only(`bool`, *optional*, defaults to `False`):
            Whether or not to only look at local files (i.e., do not try to download the model).
        token (`str` or `bool`, *optional*):
            The token to use as HTTP bearer authorization for remote files. If `True`, or not specified, will use
            the token generated when running `huggingface-cli login` (stored in `~/.huggingface`).
        revision (`str`, *optional*, defaults to `"main"`):
            The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
            git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
            identifier allowed by git.

            <Tip>

            To test a pull request you made on the Hub, you can pass `revision="refs/pr/<pr_number>".

            </Tip>

        mirror (`str`, *optional*):
            Mirror source to accelerate downloads in China. If you are from China and have an accessibility
            problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety.
            Please refer to the mirror site for more information.

    <Tip>

    Activate the special ["offline-mode"](https://huggingface.co/transformers/installation.html#offline-mode) to
    use this method in a firewalled environment.

    </Tip>

    Examples:

    ```python
    >>> from transformers import Wav2Vec2ForCTC, AutoProcessor

    >>> ckpt = "facebook/mms-1b-all"
    >>> processor = AutoProcessor.from_pretrained(ckpt)
    >>> model = Wav2Vec2ForCTC.from_pretrained(ckpt, target_lang="eng")
    >>> # set specific language
    >>> processor.tokenizer.set_target_lang("spa")
    >>> model.load_adapter("spa")
    ```
    """
    if self.config.adapter_attn_dim is None:
        raise ValueError(f"Cannot load_adapter for {target_lang} if `config.adapter_attn_dim` is not defined.")

    if target_lang == self.target_lang and not force_load: # pylint: disable=access-member-before-definition
        logger.warning(f"Adapter weights are already set to {target_lang}.")
        return

    cache_dir = kwargs.pop("cache_dir", None)
    force_download = kwargs.pop("force_download", False)
    resume_download = kwargs.pop("resume_download", None)
    proxies = kwargs.pop("proxies", None)
    local_files_only = kwargs.pop("local_files_only", False)
    token = kwargs.pop("token", None)
    use_auth_token = kwargs.pop("use_auth_token", None)
    revision = kwargs.pop("revision", None)
    use_safetensors = kwargs.pop("use_safetensors", None if is_safetensors_available() else False)

    if use_auth_token is not None:
        warnings.warn(
            "The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.",
            FutureWarning,
        )
        if token is not None:
            raise ValueError(
                "`token` and `use_auth_token` are both specified. Please set only the argument `token`."
            )
        token = use_auth_token

    model_path_or_id = self.config._name_or_path
    state_dict = None

    # 1. Let's first try loading a safetensors adapter weight
    if use_safetensors is not False:
        filepath = WAV2VEC2_ADAPTER_SAFE_FILE.format(target_lang)

        try:
            weight_path = cached_file(
                model_path_or_id,
                filename=filepath,
                force_download=force_download,
                resume_download=resume_download,
                proxies=proxies,
                local_files_only=local_files_only,
                token=token,
                revision=revision,
                cache_dir=cache_dir,
            )

            state_dict = safe_load_file(weight_path)

        except EnvironmentError:
            if use_safetensors:
                # Raise any environment error raised by `cached_file`. It will have a helpful error message adapted
                # to the original exception.
                raise

        except Exception:
            # For any other exception, we throw a generic error.
            if use_safetensors:
                raise EnvironmentError(
                    f"Can't load the model for '{model_path_or_id}'. If you were trying to load it"
                    " from 'https://huggingface.co/models', make sure you don't have a local directory with the"
                    f" same name. Otherwise, make sure '{model_path_or_id}' is the correct path to a"
                    f" directory containing a file named {filepath}."
                )

    # 2. If this didn't work let's try loading a PyTorch adapter weight
    if state_dict is None:
        filepath = WAV2VEC2_ADAPTER_PT_FILE.format(target_lang)

        try:
            weight_path = cached_file(
                model_path_or_id,
                filename=filepath,
                force_download=force_download,
                resume_download=resume_download,
                proxies=proxies,
                local_files_only=local_files_only,
                token=token,
                revision=revision,
                cache_dir=cache_dir,
            )

            state_dict = load(weight_path)

        except EnvironmentError:
            # Raise any environment error raised by `cached_file`. It will have a helpful error message adapted
            # to the original exception.
            raise

        except Exception:
            # For any other exception, we throw a generic error.
            raise EnvironmentError(
                f"Can't load the model for '{model_path_or_id}'. If you were trying to load it"
                " from 'https://huggingface.co/models', make sure you don't have a local directory with the"
                f" same name. Otherwise, make sure '{model_path_or_id}' is the correct path to a"
                f" directory containing a file named {filepath}."
            )

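    # check that the loaded state dict contains exactly the adapter parameters of the current model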
    adapter_weights = self._get_adapters()
    unexpected_keys = set(state_dict.keys()) - set(adapter_weights.keys())
    missing_keys = set(adapter_weights.keys()) - set(state_dict.keys())

    if len(unexpected_keys) > 0:
        raise ValueError(f"The adapter weights at {weight_path} have unexpected keys: {', '.join(unexpected_keys)}.")
    elif len(missing_keys) > 0:
        raise ValueError(f"The adapter weights at {weight_path} are missing keys: {', '.join(missing_keys)}.")

    # make sure the vocab size is now correct
    target_vocab_size = state_dict["lm_head.weight"].shape[0]
    if target_vocab_size != self.config.vocab_size:
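        # MMS adapters may come with a different output vocabulary, so re-create `lm_head` with the target size;
        # its weights are then filled from `state_dict` by `load_state_dict` below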
        self.lm_head = nn.Linear(
            self.config.output_hidden_size, target_vocab_size, dtype=self.dtype
        )
        self.config.vocab_size = target_vocab_size

    # make sure the loaded adapter weights are cast to the same dtype as the existing adapter weights before overwriting them
    state_dict = {k: v.to(adapter_weights[k].dtype) for k, v in state_dict.items()}
    self.load_state_dict(state_dict, strict=False)

    # set the target language correctly
    self.target_lang = target_lang
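
The docstring example above shows a single language switch; the same pattern extends to transcribing audio in several MMS languages from one loaded checkpoint by switching the tokenizer vocabulary and the adapter weights before each call. Below is a minimal sketch, not part of the library: the `transcribe` helper, the placeholder silent audio, and the `return_tensors="ms"` choice (MindSpore tensors) are assumptions layered on top of the documented `set_target_lang` and `load_adapter` calls, and the checkpoint and language ids follow the docstring example.

```python
import numpy as np
from mindnlp.transformers import AutoProcessor, Wav2Vec2ForCTC

ckpt = "facebook/mms-1b-all"
processor = AutoProcessor.from_pretrained(ckpt)
model = Wav2Vec2ForCTC.from_pretrained(ckpt, target_lang="eng")


def transcribe(audio, lang):
    # switch the tokenizer vocabulary and the adapter weights to the requested language
    processor.tokenizer.set_target_lang(lang)
    model.load_adapter(lang)
    inputs = processor(audio, sampling_rate=16_000, return_tensors="ms")
    logits = model(**inputs).logits
    predicted_ids = logits.argmax(axis=-1)
    return processor.batch_decode(predicted_ids)[0]


# placeholder audio: one second of silence at 16 kHz (replace with real speech)
audio = np.zeros(16_000, dtype=np.float32)
print(transcribe(audio, "eng"))
print(transcribe(audio, "spa"))
```

Because `load_adapter` only overwrites the adapter and `lm_head` parameters, switching languages this way avoids re-loading the full MMS checkpoint for every language.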