roberta

mindnlp.transformers.models.roberta.modeling_roberta

RoBERTa model, based on BERT.

mindnlp.transformers.models.roberta.modeling_roberta.RobertaAttention

Bases: Module

This class represents the attention mechanism used in the Roberta model. It is a subclass of nn.Module.

The RobertaAttention class implements the attention mechanism used in the Roberta model. It consists of a self-attention module and a self-output module: the self-attention module computes attention scores over the input hidden states (and, when encoder hidden states are supplied, performs cross-attention over them), while the self-output module projects the attention output and combines it with the residual input.

The class provides the following methods:

  • __init__: Initializes the RobertaAttention instance. It takes a configuration object and an optional position_embedding_type as arguments. The config object contains the model configuration, while position_embedding_type specifies the type of position embedding to be used.

  • prune_heads: Prunes the specified attention heads. It takes a list of heads to be pruned as input. This method updates the attention module by removing the pruned heads and adjusting the attention head size accordingly.

  • forward: Constructs the attention output given the input hidden states and optional arguments. It computes the attention scores using the self-attention module and applies the self-output module to generate the final attention output. This method returns a tuple containing the attention output and optional additional outputs.

Note
  • The 'hidden_states' argument is a tensor representing the input hidden states.
  • The 'attention_mask' argument is an optional tensor specifying the attention mask.
  • The 'head_mask' argument is an optional tensor indicating which attention heads to mask.
  • The 'encoder_hidden_states' and 'encoder_attention_mask' arguments are optional tensors representing the hidden states and attention mask of the encoder.
  • The 'past_key_value' argument is an optional tuple of past key-value tensors.
  • The 'output_attentions' argument is a boolean flag indicating whether to output the attention scores.

Please refer to the RobertaSelfAttention and RobertaSelfOutput classes for more information about the self-attention and self-output modules used in this class.
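
The following is a minimal, illustrative sketch of exercising this block in isolation. It assumes that RobertaConfig can be imported from mindnlp.transformers (mirroring the Hugging Face layout); the small configuration values and tensor shapes are made up for demonstration only.

import numpy as np
import mindspore
from mindnlp.transformers import RobertaConfig  # assumed export; may instead live under models.roberta.configuration_roberta
from mindnlp.transformers.models.roberta.modeling_roberta import RobertaAttention

# Deliberately tiny, illustrative configuration.
config = RobertaConfig(hidden_size=64, num_attention_heads=4, num_hidden_layers=2,
                       intermediate_size=128, vocab_size=100, max_position_embeddings=40)

attn = RobertaAttention(config)
print(type(attn.self).__name__, type(attn.output).__name__)  # RobertaSelfAttention RobertaSelfOutput

# Random hidden states of shape (batch_size=2, seq_length=8, hidden_size=64).
hidden_states = mindspore.Tensor(np.random.randn(2, 8, 64).astype(np.float32))

# forward returns a tuple; with output_attentions=True it also carries the attention probabilities.
attention_output, attention_probs = attn(hidden_states, output_attentions=True)
print(attention_output.shape)  # (2, 8, 64)
print(attention_probs.shape)   # (2, 4, 8, 8): one attention map per head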

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py, lines 486-621
class RobertaAttention(nn.Module):

    """
    This class represents the attention mechanism used in the Roberta model. It is a subclass of nn.Module.

    The RobertaAttention class implements the attention mechanism used in the Roberta model.
    It consists of a self-attention module and a self-output module. The self-attention module is responsible for
    computing the attention scores between the input hidden states and itself, while the self-output module applies
    a linear transformation to the attention output.

    The class provides the following methods:

    - __init__: Initializes the RobertaAttention instance. It takes a configuration object and an optional position_embedding_type as arguments. The config object
    contains the model configuration, while the position_embedding_type specifies the type of position embedding to be used.

    - prune_heads: Prunes the specified attention heads. It takes a list of heads to be pruned as input. This method updates the attention module by removing the pruned heads and adjusting the
    attention head size accordingly.

    - forward: Constructs the attention output given the input hidden states and optional arguments.
    It computes the attention scores using the self-attention module and applies the self-output module to generate
    the final attention output. This method returns a tuple containing the attention output and optional additional
    outputs.

    Note:
        - The 'hidden_states' argument is a tensor representing the input hidden states.
        - The 'attention_mask' argument is an optional tensor specifying the attention mask.
        - The 'head_mask' argument is an optional tensor indicating which attention heads to mask.
        - The 'encoder_hidden_states' and 'encoder_attention_mask' arguments are optional tensors representing the hidden
        states and attention mask of the encoder.
        - The 'past_key_value' argument is an  optional tuple of past key-value tensors.
        - The 'output_attentions' argument is a boolean flag indicating whether to output the attention scores.

    Please refer to the RobertaSelfAttention and RobertaSelfOutput classes for more information about the self-attention
    and self-output modules used in this class.
    """
    def __init__(self, config, position_embedding_type=None):
        """
        Initializes a new instance of the RobertaAttention class.

        Args:
            self (object): The instance of the class.
            config (object): The configuration object for the attention mechanism.
            position_embedding_type (str, optional): The type of position embedding to be used.
                Default is None. If provided, it should be a string representing the type of position embedding.

        Returns:
            None.

        Raises:
            None.
        """
        super().__init__()
        self.self = RobertaSelfAttention(config, position_embedding_type=position_embedding_type)
        self.output = RobertaSelfOutput(config)
        self.pruned_heads = set()

    def prune_heads(self, heads):
        """
        Prunes the attention heads in the RobertaAttention class.

        Args:
            self (RobertaAttention): The instance of the RobertaAttention class.
            heads (List[int]): The list of attention heads to be pruned.

        Returns:
            None

        Raises:
            None
        """
        if len(heads) == 0:
            return
        heads, index = find_pruneable_heads_and_indices(
            heads, self.self.num_attention_heads, self.self.attention_head_size, self.pruned_heads
        )

        # Prune linear layers
        self.self.query = prune_linear_layer(self.self.query, index)
        self.self.key = prune_linear_layer(self.self.key, index)
        self.self.value = prune_linear_layer(self.self.value, index)
        self.output.dense = prune_linear_layer(self.output.dense, index, dim=1)

        # Update hyper params and store pruned heads
        self.self.num_attention_heads = self.self.num_attention_heads - len(heads)
        self.self.all_head_size = self.self.attention_head_size * self.self.num_attention_heads
        self.pruned_heads = self.pruned_heads.union(heads)

    def forward(
        self,
        hidden_states: mindspore.Tensor,
        attention_mask: Optional[mindspore.Tensor] = None,
        head_mask: Optional[mindspore.Tensor] = None,
        encoder_hidden_states: Optional[mindspore.Tensor] = None,
        encoder_attention_mask: Optional[mindspore.Tensor] = None,
        past_key_value: Optional[Tuple[Tuple[mindspore.Tensor]]] = None,
        output_attentions: Optional[bool] = False,
    ) -> Tuple[mindspore.Tensor]:
        """
        Constructs the attention mechanism for the RobertaAttention class.

        Args:
            self: The instance of the RobertaAttention class.
            hidden_states (mindspore.Tensor): The input hidden states for the attention mechanism.
            attention_mask (Optional[mindspore.Tensor]): An optional mask tensor to mask out specific attention weights.
                Defaults to None.
            head_mask (Optional[mindspore.Tensor]): An optional mask tensor to mask out specific attention heads.
                Defaults to None.
            encoder_hidden_states (Optional[mindspore.Tensor]): An optional tensor representing hidden states from
                the encoder. Defaults to None.
            encoder_attention_mask (Optional[mindspore.Tensor]): An optional mask tensor to mask out specific attention
                weights from the encoder. Defaults to None.
            past_key_value (Optional[Tuple[Tuple[mindspore.Tensor]]]): An optional tuple of tensor tuples representing
                previous key-value pairs. Defaults to None.
            output_attentions (Optional[bool]): An optional flag to indicate whether to output attention weights.
                Defaults to False.

        Returns:
            Tuple[mindspore.Tensor]: A tuple containing the attention output tensor and any additional outputs
                from the mechanism.

        Raises:
            None.

        """
        self_outputs = self.self(
            hidden_states,
            attention_mask,
            head_mask,
            encoder_hidden_states,
            encoder_attention_mask,
            past_key_value,
            output_attentions,
        )
        attention_output = self.output(self_outputs[0], hidden_states)
        outputs = (attention_output,) + self_outputs[1:]  # add attentions if we output them
        return outputs

mindnlp.transformers.models.roberta.modeling_roberta.RobertaAttention.__init__(config, position_embedding_type=None)

Initializes a new instance of the RobertaAttention class.

PARAMETER DESCRIPTION
  • self (object): The instance of the class.
  • config (object): The configuration object for the attention mechanism.
  • position_embedding_type (str, default None): The type of position embedding to be used. If provided, it should be a string representing the type of position embedding.

RETURNS DESCRIPTION
  • None.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py, lines 521-540
def __init__(self, config, position_embedding_type=None):
    """
    Initializes a new instance of the RobertaAttention class.

    Args:
        self (object): The instance of the class.
        config (object): The configuration object for the attention mechanism.
        position_embedding_type (str, optional): The type of position embedding to be used.
            Default is None. If provided, it should be a string representing the type of position embedding.

    Returns:
        None.

    Raises:
        None.
    """
    super().__init__()
    self.self = RobertaSelfAttention(config, position_embedding_type=position_embedding_type)
    self.output = RobertaSelfOutput(config)
    self.pruned_heads = set()

mindnlp.transformers.models.roberta.modeling_roberta.RobertaAttention.forward(hidden_states, attention_mask=None, head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, past_key_value=None, output_attentions=False)

Constructs the attention mechanism for the RobertaAttention class.

PARAMETER DESCRIPTION
  • self: The instance of the RobertaAttention class.
  • hidden_states (Tensor): The input hidden states for the attention mechanism.
  • attention_mask (Optional[Tensor], default None): An optional mask tensor to mask out specific attention weights.
  • head_mask (Optional[Tensor], default None): An optional mask tensor to mask out specific attention heads.
  • encoder_hidden_states (Optional[Tensor], default None): An optional tensor representing hidden states from the encoder.
  • encoder_attention_mask (Optional[Tensor], default None): An optional mask tensor to mask out specific attention weights from the encoder.
  • past_key_value (Optional[Tuple[Tuple[Tensor]]], default None): An optional tuple of tensor tuples representing previous key-value pairs.
  • output_attentions (Optional[bool], default False): An optional flag indicating whether to output attention weights.

RETURNS DESCRIPTION
  • Tuple[mindspore.Tensor]: A tuple containing the attention output tensor and any additional outputs from the mechanism.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py, lines 573-621
def forward(
    self,
    hidden_states: mindspore.Tensor,
    attention_mask: Optional[mindspore.Tensor] = None,
    head_mask: Optional[mindspore.Tensor] = None,
    encoder_hidden_states: Optional[mindspore.Tensor] = None,
    encoder_attention_mask: Optional[mindspore.Tensor] = None,
    past_key_value: Optional[Tuple[Tuple[mindspore.Tensor]]] = None,
    output_attentions: Optional[bool] = False,
) -> Tuple[mindspore.Tensor]:
    """
    Constructs the attention mechanism for the RobertaAttention class.

    Args:
        self: The instance of the RobertaAttention class.
        hidden_states (mindspore.Tensor): The input hidden states for the attention mechanism.
        attention_mask (Optional[mindspore.Tensor]): An optional mask tensor to mask out specific attention weights.
            Defaults to None.
        head_mask (Optional[mindspore.Tensor]): An optional mask tensor to mask out specific attention heads.
            Defaults to None.
        encoder_hidden_states (Optional[mindspore.Tensor]): An optional tensor representing hidden states from
            the encoder. Defaults to None.
        encoder_attention_mask (Optional[mindspore.Tensor]): An optional mask tensor to mask out specific attention
            weights from the encoder. Defaults to None.
        past_key_value (Optional[Tuple[Tuple[mindspore.Tensor]]]): An optional tuple of tensor tuples representing
            previous key-value pairs. Defaults to None.
        output_attentions (Optional[bool]): An optional flag to indicate whether to output attention weights.
            Defaults to False.

    Returns:
        Tuple[mindspore.Tensor]: A tuple containing the attention output tensor and any additional outputs
            from the mechanism.

    Raises:
        None.

    """
    self_outputs = self.self(
        hidden_states,
        attention_mask,
        head_mask,
        encoder_hidden_states,
        encoder_attention_mask,
        past_key_value,
        output_attentions,
    )
    attention_output = self.output(self_outputs[0], hidden_states)
    outputs = (attention_output,) + self_outputs[1:]  # add attentions if we output them
    return outputs

mindnlp.transformers.models.roberta.modeling_roberta.RobertaAttention.prune_heads(heads)

Prunes the attention heads in the RobertaAttention class.

PARAMETER DESCRIPTION
  • self (RobertaAttention): The instance of the RobertaAttention class.
  • heads (List[int]): The list of attention heads to be pruned.

RETURNS DESCRIPTION
  • None.
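
As a rough illustration of the bookkeeping described above (same assumptions as the earlier sketch: RobertaConfig importable from mindnlp.transformers, illustrative sizes), pruning heads shrinks the query/key/value projections and the output projection in place:

from mindnlp.transformers import RobertaConfig  # assumed export
from mindnlp.transformers.models.roberta.modeling_roberta import RobertaAttention

config = RobertaConfig(hidden_size=64, num_attention_heads=4, intermediate_size=128,
                       vocab_size=100, max_position_embeddings=40)
attn = RobertaAttention(config)
print(attn.self.num_attention_heads, attn.self.all_head_size)  # 4 64

# Remove heads 0 and 2; the surviving heads keep their original attention_head_size (16).
attn.prune_heads([0, 2])

print(attn.self.num_attention_heads, attn.self.all_head_size)  # 2 32
print(attn.pruned_heads)                                       # {0, 2}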

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py, lines 542-571
def prune_heads(self, heads):
    """
    Prunes the attention heads in the RobertaAttention class.

    Args:
        self (RobertaAttention): The instance of the RobertaAttention class.
        heads (List[int]): The list of attention heads to be pruned.

    Returns:
        None

    Raises:
        None
    """
    if len(heads) == 0:
        return
    heads, index = find_pruneable_heads_and_indices(
        heads, self.self.num_attention_heads, self.self.attention_head_size, self.pruned_heads
    )

    # Prune linear layers
    self.self.query = prune_linear_layer(self.self.query, index)
    self.self.key = prune_linear_layer(self.self.key, index)
    self.self.value = prune_linear_layer(self.self.value, index)
    self.output.dense = prune_linear_layer(self.output.dense, index, dim=1)

    # Update hyper params and store pruned heads
    self.self.num_attention_heads = self.self.num_attention_heads - len(heads)
    self.self.all_head_size = self.self.attention_head_size * self.self.num_attention_heads
    self.pruned_heads = self.pruned_heads.union(heads)

mindnlp.transformers.models.roberta.modeling_roberta.RobertaClassificationHead

Bases: Module

Head for sentence-level classification tasks.
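
The head pools the first token (<s>, the RoBERTa equivalent of [CLS]) and maps it to num_labels logits. A hedged sketch with illustrative sizes, again assuming RobertaConfig is importable from mindnlp.transformers:

import numpy as np
import mindspore
from mindnlp.transformers import RobertaConfig  # assumed export
from mindnlp.transformers.models.roberta.modeling_roberta import RobertaClassificationHead

config = RobertaConfig(hidden_size=64, vocab_size=100, max_position_embeddings=40,
                       num_labels=3, classifier_dropout=0.1)
head = RobertaClassificationHead(config)

# `features` is typically the encoder's sequence output: (batch_size, seq_length, hidden_size).
features = mindspore.Tensor(np.random.randn(2, 8, 64).astype(np.float32))

logits = head(features)  # <s> token -> dropout -> dense -> tanh -> dropout -> out_proj
print(logits.shape)      # (2, 3): one row of logits per example, not per token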

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py, lines 2327-2383
class RobertaClassificationHead(nn.Module):
    """Head for sentence-level classification tasks."""
    def __init__(self, config):
        """
        Initialize the RobertaClassificationHead class.

        Args:
            self (object): The instance of the class.
            config (object): Configuration object containing parameters for the classification head.
                This object should have the following attributes:

                - hidden_size (int): The size of the hidden layers.
                - classifier_dropout (float, optional): The dropout probability for the classifier. If not provided,
                defaults to config.hidden_dropout_prob.
                - hidden_dropout_prob (float): The default dropout probability for hidden layers.
                - num_labels (int): The number of output labels.

        Returns:
            None.

        Raises:
            TypeError: If the config parameter is not provided.
            ValueError: If the config parameter is missing any of the required attributes.
        """
        super().__init__()
        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
        classifier_dropout = (
            config.classifier_dropout
            if config.classifier_dropout is not None
            else config.hidden_dropout_prob
        )
        self.dropout = nn.Dropout(p=classifier_dropout)
        self.out_proj = nn.Linear(config.hidden_size, config.num_labels)

    def forward(self, features, **kwargs):
        """
        Constructs the classification head for a Roberta model.

        Args:
            self (RobertaClassificationHead): The instance of the RobertaClassificationHead class.
            features (mindspore.Tensor): The input features for the classification head.
                It should have shape (batch_size, seq_length, hidden_size).

        Returns:
            mindspore.Tensor: The output tensor after passing through the classification head.
                It has shape (batch_size, num_labels).

        Raises:
            None.
        """
        x = features[:, 0, :]  # take <s> token (equiv. to [CLS])
        x = self.dropout(x)
        x = self.dense(x)
        x = ops.tanh(x)
        x = self.dropout(x)
        x = self.out_proj(x)
        return x

mindnlp.transformers.models.roberta.modeling_roberta.RobertaClassificationHead.__init__(config)

Initialize the RobertaClassificationHead class.

PARAMETER DESCRIPTION
  • self (object): The instance of the class.
  • config (object): Configuration object containing parameters for the classification head. It should have the following attributes:
      • hidden_size (int): The size of the hidden layers.
      • classifier_dropout (float, optional): The dropout probability for the classifier. If not provided, defaults to config.hidden_dropout_prob.
      • hidden_dropout_prob (float): The default dropout probability for hidden layers.
      • num_labels (int): The number of output labels.

RETURNS DESCRIPTION
  • None.

RAISES DESCRIPTION
  • TypeError: If the config parameter is not provided.
  • ValueError: If the config parameter is missing any of the required attributes.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py, lines 2329-2359
def __init__(self, config):
    """
    Initialize the RobertaClassificationHead class.

    Args:
        self (object): The instance of the class.
        config (object): Configuration object containing parameters for the classification head.
            This object should have the following attributes:

            - hidden_size (int): The size of the hidden layers.
            - classifier_dropout (float, optional): The dropout probability for the classifier. If not provided,
            defaults to config.hidden_dropout_prob.
            - hidden_dropout_prob (float): The default dropout probability for hidden layers.
            - num_labels (int): The number of output labels.

    Returns:
        None.

    Raises:
        TypeError: If the config parameter is not provided.
        ValueError: If the config parameter is missing any of the required attributes.
    """
    super().__init__()
    self.dense = nn.Linear(config.hidden_size, config.hidden_size)
    classifier_dropout = (
        config.classifier_dropout
        if config.classifier_dropout is not None
        else config.hidden_dropout_prob
    )
    self.dropout = nn.Dropout(p=classifier_dropout)
    self.out_proj = nn.Linear(config.hidden_size, config.num_labels)

mindnlp.transformers.models.roberta.modeling_roberta.RobertaClassificationHead.forward(features, **kwargs)

Constructs the classification head for a Roberta model.

PARAMETER DESCRIPTION
  • self (RobertaClassificationHead): The instance of the RobertaClassificationHead class.
  • features (Tensor): The input features for the classification head. It should have shape (batch_size, seq_length, hidden_size).

RETURNS DESCRIPTION
  • mindspore.Tensor: The output tensor after passing through the classification head. It has shape (batch_size, num_labels).

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py, lines 2361-2383
def forward(self, features, **kwargs):
    """
    Constructs the classification head for a Roberta model.

    Args:
        self (RobertaClassificationHead): The instance of the RobertaClassificationHead class.
        features (mindspore.Tensor): The input features for the classification head.
            It should have shape (batch_size, seq_length, hidden_size).

    Returns:
        mindspore.Tensor: The output tensor after passing through the classification head.
            It has shape (batch_size, num_labels).

    Raises:
        None.
    """
    x = features[:, 0, :]  # take <s> token (equiv. to [CLS])
    x = self.dropout(x)
    x = self.dense(x)
    x = ops.tanh(x)
    x = self.dropout(x)
    x = self.out_proj(x)
    return x

mindnlp.transformers.models.roberta.modeling_roberta.RobertaEmbeddings

Bases: Module

Same as BertEmbeddings with a tiny tweak for positional embeddings indexing.
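
The "tiny tweak" is that position ids are offset by the padding index and padded slots keep padding_idx, so padding never receives a real position. The snippet below is an illustrative re-implementation of what the module-level create_position_ids_from_input_ids helper used by this class is expected to compute (values and padding_idx chosen for demonstration):

import mindspore
from mindspore import ops

padding_idx = 1  # RoBERTa conventionally uses pad_token_id = 1

# Two sequences; the second one is padded at the end.
input_ids = mindspore.Tensor([[5, 6, 7, 8],
                              [5, 6, 1, 1]], mindspore.int64)

# Count only non-pad tokens, shift past padding_idx, and keep padded slots at padding_idx.
mask = (input_ids != padding_idx).astype(mindspore.int64)
position_ids = ops.cumsum(mask, 1) * mask + padding_idx
print(position_ids)
# [[2 3 4 5]
#  [2 3 1 1]]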

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py, lines 52-202
class RobertaEmbeddings(nn.Module):
    """
    Same as BertEmbeddings with a tiny tweak for positional embeddings indexing.
    """
    # Copied from transformers.models.bert.modeling_bert.BertEmbeddings.__init__
    def __init__(self, config):
        """
        Initializes the RobertaEmbeddings class with the provided configuration.

        Args:
            self (RobertaEmbeddings): The instance of the RobertaEmbeddings class.
            config (object):
                A configuration object containing the following attributes:

                - vocab_size (int): The size of the vocabulary.
                - hidden_size (int): The size of the hidden layers.
                - max_position_embeddings (int): The maximum number of positional embeddings.
                - type_vocab_size (int): The size of the token type vocabulary.
                - layer_norm_eps (float): The epsilon value for Layer Normalization.
                - hidden_dropout_prob (float): The dropout probability.
                - position_embedding_type (str, optional): The type of position embedding, defaults to 'absolute'.
                - pad_token_id (int): The token ID for padding.

        Returns:
            None.

        Raises:
            AttributeError: If the config object is missing required attributes.
            ValueError: If the config attributes are not of the expected types.
            RuntimeError: If there are issues with initializing embeddings or layers.
        """
        super().__init__()
        self.word_embeddings = nn.Embedding(
            config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id
        )
        self.position_embeddings = nn.Embedding(
            config.max_position_embeddings, config.hidden_size
        )
        self.token_type_embeddings = nn.Embedding(
            config.type_vocab_size, config.hidden_size
        )

        # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load
        # any TensorFlow checkpoint file
        self.LayerNorm = nn.LayerNorm(
            [config.hidden_size], eps=config.layer_norm_eps
        )
        self.dropout = nn.Dropout(p=config.hidden_dropout_prob)
        # position_ids (1, len position emb) is contiguous in memory and exported when serialized
        self.position_embedding_type = getattr(
            config, "position_embedding_type", "absolute"
        )
        self.position_ids = ops.arange(config.max_position_embeddings).view((1, -1))
        self.token_type_ids = ops.zeros(self.position_ids.shape, dtype=mindspore.int64)

        # End copy
        self.padding_idx = config.pad_token_id
        self.position_embeddings = nn.Embedding(
            config.max_position_embeddings,
            config.hidden_size,
            padding_idx=self.padding_idx,
        )

    def forward(
        self,
        input_ids=None,
        token_type_ids=None,
        position_ids=None,
        inputs_embeds=None,
        past_key_values_length=0,
    ):
        """
        This method forwards the embeddings for the Roberta model.

        Args:
            self (object): The instance of the class.
            input_ids (Union[None, Tensor]): The input tensor containing the tokenized input.
            token_type_ids (Union[None, Tensor]): The tensor containing token type ids for differentiating
                token types in the input.
            position_ids (Union[None, Tensor]): The tensor containing the position ids for each token in the input.
            inputs_embeds (Union[None, Tensor]): The tensor containing the input embeddings.
            past_key_values_length (int): The length of past key values.

        Returns:
            mindspore.Tensor: The final embeddings of shape (batch_size, seq_length, hidden_size).

        Raises:
            ValueError: If the input shape is not valid.
            AttributeError: If the 'token_type_ids' attribute is not found.
            TypeError: If the data type of the tensors is not supported.
        """
        if position_ids is None:
            if input_ids is not None:
                # Create the position ids from the input token ids. Any padded tokens remain padded.
                position_ids = create_position_ids_from_input_ids(
                    input_ids, self.padding_idx, past_key_values_length
                )
            else:
                position_ids = self.create_position_ids_from_inputs_embeds(
                    inputs_embeds
                )

        if input_ids is not None:
            input_shape = input_ids.shape
        else:
            input_shape = inputs_embeds.shape[:-1]

        seq_length = input_shape[1]

        # Setting the token_type_ids to the registered buffer defined in the constructor, where it is all zeros.
        # This usually occurs when token_type_ids is auto-generated; the registered buffer helps users trace the
        # model without passing token_type_ids, and solves issue #5664.
        if token_type_ids is None:
            if hasattr(self, "token_type_ids"):
                buffered_token_type_ids = self.token_type_ids[:, :seq_length]
                buffered_token_type_ids_expanded = buffered_token_type_ids.expand(
                    (input_shape[0], seq_length)
                )
                token_type_ids = buffered_token_type_ids_expanded
            else:
                token_type_ids = ops.zeros(input_shape, dtype=mindspore.int64)
        if inputs_embeds is None:
            inputs_embeds = self.word_embeddings(input_ids)
        token_type_embeddings = self.token_type_embeddings(token_type_ids)

        embeddings = inputs_embeds + token_type_embeddings
        if self.position_embedding_type == "absolute":
            position_embeddings = self.position_embeddings(position_ids)
            embeddings += position_embeddings
        embeddings = self.LayerNorm(embeddings)
        embeddings = self.dropout(embeddings)
        return embeddings

    def create_position_ids_from_inputs_embeds(self, inputs_embeds):
        """
        We are provided embeddings directly. We cannot infer which are padded so just generate sequential position ids.

        Args:
            inputs_embeds: mindspore.Tensor

        Returns: mindspore.Tensor
        """
        input_shape = inputs_embeds.shape[:-1]
        sequence_length = input_shape[1]

        position_ids = ops.arange(
            self.padding_idx + 1,
            sequence_length + self.padding_idx + 1,
            dtype=mindspore.int64,
        )
        return position_ids.unsqueeze(0).broadcast_to(input_shape)

mindnlp.transformers.models.roberta.modeling_roberta.RobertaEmbeddings.__init__(config)

Initializes the RobertaEmbeddings class with the provided configuration.

PARAMETER DESCRIPTION
  • self (RobertaEmbeddings): The instance of the RobertaEmbeddings class.
  • config (object): A configuration object containing the following attributes:
      • vocab_size (int): The size of the vocabulary.
      • hidden_size (int): The size of the hidden layers.
      • max_position_embeddings (int): The maximum number of positional embeddings.
      • type_vocab_size (int): The size of the token type vocabulary.
      • layer_norm_eps (float): The epsilon value for Layer Normalization.
      • hidden_dropout_prob (float): The dropout probability.
      • position_embedding_type (str, optional): The type of position embedding, defaults to 'absolute'.
      • pad_token_id (int): The token ID for padding.

RETURNS DESCRIPTION
  • None.

RAISES DESCRIPTION
  • AttributeError: If the config object is missing required attributes.
  • ValueError: If the config attributes are not of the expected types.
  • RuntimeError: If there are issues with initializing embeddings or layers.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py, lines 57-113
def __init__(self, config):
    """
    Initializes the RobertaEmbeddings class with the provided configuration.

    Args:
        self (RobertaEmbeddings): The instance of the RobertaEmbeddings class.
        config (object):
            A configuration object containing the following attributes:

            - vocab_size (int): The size of the vocabulary.
            - hidden_size (int): The size of the hidden layers.
            - max_position_embeddings (int): The maximum number of positional embeddings.
            - type_vocab_size (int): The size of the token type vocabulary.
            - layer_norm_eps (float): The epsilon value for Layer Normalization.
            - hidden_dropout_prob (float): The dropout probability.
            - position_embedding_type (str, optional): The type of position embedding, defaults to 'absolute'.
            - pad_token_id (int): The token ID for padding.

    Returns:
        None.

    Raises:
        AttributeError: If the config object is missing required attributes.
        ValueError: If the config attributes are not of the expected types.
        RuntimeError: If there are issues with initializing embeddings or layers.
    """
    super().__init__()
    self.word_embeddings = nn.Embedding(
        config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id
    )
    self.position_embeddings = nn.Embedding(
        config.max_position_embeddings, config.hidden_size
    )
    self.token_type_embeddings = nn.Embedding(
        config.type_vocab_size, config.hidden_size
    )

    # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load
    # any TensorFlow checkpoint file
    self.LayerNorm = nn.LayerNorm(
        [config.hidden_size], eps=config.layer_norm_eps
    )
    self.dropout = nn.Dropout(p=config.hidden_dropout_prob)
    # position_ids (1, len position emb) is contiguous in memory and exported when serialized
    self.position_embedding_type = getattr(
        config, "position_embedding_type", "absolute"
    )
    self.position_ids = ops.arange(config.max_position_embeddings).view((1, -1))
    self.token_type_ids = ops.zeros(self.position_ids.shape, dtype=mindspore.int64)

    # End copy
    self.padding_idx = config.pad_token_id
    self.position_embeddings = nn.Embedding(
        config.max_position_embeddings,
        config.hidden_size,
        padding_idx=self.padding_idx,
    )

mindnlp.transformers.models.roberta.modeling_roberta.RobertaEmbeddings.create_position_ids_from_inputs_embeds(inputs_embeds)

We are provided embeddings directly. We cannot infer which are padded so just generate sequential position ids.

PARAMETER DESCRIPTION
  • inputs_embeds (mindspore.Tensor): The input embeddings.

RETURNS DESCRIPTION
  • mindspore.Tensor: The generated position ids.
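
When only embeddings are provided, padding cannot be detected, so the ids are simply sequential and start right after padding_idx. A small illustrative sketch (assumes RobertaConfig is importable from mindnlp.transformers; sizes are made up):

import numpy as np
import mindspore
from mindnlp.transformers import RobertaConfig  # assumed export
from mindnlp.transformers.models.roberta.modeling_roberta import RobertaEmbeddings

config = RobertaConfig(vocab_size=100, hidden_size=64, max_position_embeddings=40,
                       type_vocab_size=1, pad_token_id=1)
embeddings = RobertaEmbeddings(config)

inputs_embeds = mindspore.Tensor(np.random.randn(2, 5, 64).astype(np.float32))
position_ids = embeddings.create_position_ids_from_inputs_embeds(inputs_embeds)
print(position_ids)
# [[2 3 4 5 6]
#  [2 3 4 5 6]]  <- sequential, starting at padding_idx + 1, identical for every row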

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py, lines 185-202
def create_position_ids_from_inputs_embeds(self, inputs_embeds):
    """
    We are provided embeddings directly. We cannot infer which are padded so just generate sequential position ids.

    Args:
        inputs_embeds: mindspore.Tensor

    Returns: mindspore.Tensor
    """
    input_shape = inputs_embeds.shape[:-1]
    sequence_length = input_shape[1]

    position_ids = ops.arange(
        self.padding_idx + 1,
        sequence_length + self.padding_idx + 1,
        dtype=mindspore.int64,
    )
    return position_ids.unsqueeze(0).broadcast_to(input_shape)

mindnlp.transformers.models.roberta.modeling_roberta.RobertaEmbeddings.forward(input_ids=None, token_type_ids=None, position_ids=None, inputs_embeds=None, past_key_values_length=0)

This method forwards the embeddings for the Roberta model.

PARAMETER DESCRIPTION
  • self (object): The instance of the class.
  • input_ids (Union[None, Tensor], default None): The input tensor containing the tokenized input.
  • token_type_ids (Union[None, Tensor], default None): The tensor containing token type ids for differentiating token types in the input.
  • position_ids (Union[None, Tensor], default None): The tensor containing the position ids for each token in the input.
  • inputs_embeds (Union[None, Tensor], default None): The tensor containing the input embeddings.
  • past_key_values_length (int, default 0): The length of past key values.

RETURNS DESCRIPTION
  • mindspore.Tensor: The final embeddings, obtained by summing the word, token type and position embeddings and applying LayerNorm and dropout.

RAISES DESCRIPTION
  • ValueError: If the input shape is not valid.
  • AttributeError: If the 'token_type_ids' attribute is not found.
  • TypeError: If the data type of the tensors is not supported.
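
Putting the pieces together, the forward pass sums word, token type and (absolute) position embeddings and then applies LayerNorm and dropout. A hedged end-to-end sketch with illustrative sizes (same RobertaConfig import assumption as above):

import mindspore
from mindnlp.transformers import RobertaConfig  # assumed export
from mindnlp.transformers.models.roberta.modeling_roberta import RobertaEmbeddings

config = RobertaConfig(vocab_size=100, hidden_size=64, max_position_embeddings=40,
                       type_vocab_size=1, pad_token_id=1)
embeddings = RobertaEmbeddings(config)

# The second sequence ends with a pad token (id 1); its position id stays at padding_idx.
input_ids = mindspore.Tensor([[0, 5, 6, 7, 2],
                              [0, 5, 6, 2, 1]], mindspore.int64)

out = embeddings(input_ids)
print(out.shape)  # (2, 5, 64)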

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py, lines 115-183
def forward(
    self,
    input_ids=None,
    token_type_ids=None,
    position_ids=None,
    inputs_embeds=None,
    past_key_values_length=0,
):
    """
    This method forwards the embeddings for the Roberta model.

    Args:
        self (object): The instance of the class.
        input_ids (Union[None, Tensor]): The input tensor containing the tokenized input.
        token_type_ids (Union[None, Tensor]): The tensor containing token type ids for differentiating
            token types in the input.
        position_ids (Union[None, Tensor]): The tensor containing the position ids for each token in the input.
        inputs_embeds (Union[None, Tensor]): The tensor containing the input embeddings.
        past_key_values_length (int): The length of past key values.

    Returns:
        mindspore.Tensor: The final embeddings of shape (batch_size, seq_length, hidden_size).

    Raises:
        ValueError: If the input shape is not valid.
        AttributeError: If the 'token_type_ids' attribute is not found.
        TypeError: If the data type of the tensors is not supported.
    """
    if position_ids is None:
        if input_ids is not None:
            # Create the position ids from the input token ids. Any padded tokens remain padded.
            position_ids = create_position_ids_from_input_ids(
                input_ids, self.padding_idx, past_key_values_length
            )
        else:
            position_ids = self.create_position_ids_from_inputs_embeds(
                inputs_embeds
            )

    if input_ids is not None:
        input_shape = input_ids.shape
    else:
        input_shape = inputs_embeds.shape[:-1]

    seq_length = input_shape[1]

    # Setting the token_type_ids to the registered buffer defined in the constructor, where it is all zeros.
    # This usually occurs when token_type_ids is auto-generated; the registered buffer helps users trace the
    # model without passing token_type_ids, and solves issue #5664.
    if token_type_ids is None:
        if hasattr(self, "token_type_ids"):
            buffered_token_type_ids = self.token_type_ids[:, :seq_length]
            buffered_token_type_ids_expanded = buffered_token_type_ids.expand(
                (input_shape[0], seq_length)
            )
            token_type_ids = buffered_token_type_ids_expanded
        else:
            token_type_ids = ops.zeros(input_shape, dtype=mindspore.int64)
    if inputs_embeds is None:
        inputs_embeds = self.word_embeddings(input_ids)
    token_type_embeddings = self.token_type_embeddings(token_type_ids)

    embeddings = inputs_embeds + token_type_embeddings
    if self.position_embedding_type == "absolute":
        position_embeddings = self.position_embeddings(position_ids)
        embeddings += position_embeddings
    embeddings = self.LayerNorm(embeddings)
    embeddings = self.dropout(embeddings)
    return embeddings

mindnlp.transformers.models.roberta.modeling_roberta.RobertaEncoder

Bases: Module

This class represents a RobertaEncoder, which is a neural network encoder for the RoBERTa model. It inherits from the nn.Module class and is responsible for encoding input sequences using a stack of multiple RobertaLayer modules.

The RobertaEncoder class contains an __init__ method to initialize the encoder with a given configuration, and a forward method to perform the encoding process. The forward method takes in various input tensors and optional parameters, and returns the encoded output and optional additional information such as hidden states, attentions, and cross-attentions.

The encoder utilizes a stack of RobertaLayer modules, where each layer applies a series of transformations to the input hidden states using self-attention and optionally cross-attention mechanisms. The forward method iterates through the layers, applying the transformations and updating the hidden states accordingly.

Additionally, the encoder supports gradient checkpointing and caching of past key values for efficient training and inference.
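
A brief, illustrative look at the construction described above: the encoder simply stacks config.num_hidden_layers RobertaLayer modules in an nn.ModuleList and walks through them in forward (sizes are made up; assumes RobertaConfig is importable from mindnlp.transformers):

from mindnlp.transformers import RobertaConfig  # assumed export
from mindnlp.transformers.models.roberta.modeling_roberta import RobertaEncoder

config = RobertaConfig(hidden_size=64, num_attention_heads=4, num_hidden_layers=3,
                       intermediate_size=128, vocab_size=100, max_position_embeddings=40)
encoder = RobertaEncoder(config)

print(len(encoder.layer))               # 3: one RobertaLayer per hidden layer
print(type(encoder.layer[0]).__name__)  # RobertaLayer
print(encoder.gradient_checkpointing)   # False by default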

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py, lines 938-1096
class RobertaEncoder(nn.Module):

    """
    This class represents a RobertaEncoder, which is a neural network encoder for the RoBERTa model.
    It inherits from the nn.Module class and is responsible for encoding input sequences using a stack of
    multiple RobertaLayer modules.

    The RobertaEncoder class contains an __init__ method to initialize the encoder with a given configuration,
    and a forward method to perform the encoding process. The forward method takes in various input tensors and
    optional parameters, and returns the encoded output and optional additional information such as hidden states,
    attentions, and cross-attentions.

    The encoder utilizes a stack of RobertaLayer modules, where each layer applies a series of transformations to the
    input hidden states using self-attention and optionally cross-attention mechanisms. The forward method iterates
    through the layers, applying the transformations and updating the hidden states accordingly.

    Additionally, the encoder supports gradient checkpointing and caching of past key values for efficient training
    and inference.

    """
    def __init__(self, config):
        """
        Initializes a new instance of the RobertaEncoder class.

        Args:
            self (RobertaEncoder): The instance of the RobertaEncoder class.
            config (dict): A dictionary containing configuration parameters for the encoder.
                It should include the following keys:

                - num_hidden_layers (int): The number of hidden layers in the encoder.

        Returns:
            None.

        Raises:
            None.
        """
        super().__init__()
        self.config = config
        self.layer = nn.ModuleList([RobertaLayer(config) for _ in range(config.num_hidden_layers)])
        self.gradient_checkpointing = False

    def forward(
        self,
        hidden_states: mindspore.Tensor,
        attention_mask: Optional[mindspore.Tensor] = None,
        head_mask: Optional[mindspore.Tensor] = None,
        encoder_hidden_states: Optional[mindspore.Tensor] = None,
        encoder_attention_mask: Optional[mindspore.Tensor] = None,
        past_key_values: Optional[Tuple[Tuple[mindspore.Tensor]]] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = False,
        output_hidden_states: Optional[bool] = False,
        return_dict: Optional[bool] = True,
    ) -> Union[Tuple[mindspore.Tensor], BaseModelOutputWithPastAndCrossAttentions]:
        """
        Constructs the RobertaEncoder.

        Args:
            self: The object instance.
            hidden_states (mindspore.Tensor): The input hidden states of the encoder layer.
                Shape: (batch_size, sequence_length, hidden_size).
            attention_mask (Optional[mindspore.Tensor]): The attention mask tensor.
                If provided, should be of shape (batch_size, sequence_length), with 0s indicating tokens to be masked
                and 1s indicating tokens to be attended to.
            head_mask (Optional[mindspore.Tensor]): The head mask tensor. If provided, should be of
                shape (num_layers, num_heads), with 0s indicating heads to be masked and 1s indicating heads to be used.
            encoder_hidden_states (Optional[mindspore.Tensor]): The hidden states of the encoder layer.
                Shape: (batch_size, sequence_length, hidden_size).
            encoder_attention_mask (Optional[mindspore.Tensor]): The attention mask tensor for encoder layer.
                If provided, should be of shape (batch_size, sequence_length), with 0s indicating tokens to be
                masked and 1s indicating tokens to be attended to.
            past_key_values (Optional[Tuple[Tuple[mindspore.Tensor]]]): The past key values. If provided,
                should be of shape (num_layers, 2, batch_size, num_heads, sequence_length, hidden_size // num_heads).
            use_cache (Optional[bool]): Whether to use cache. If True, the cache will be used and updated.
                If False, the cache will be ignored. Default: None.
            output_attentions (Optional[bool]): Whether to output attentions. If True, attentions will be output.
                Default: False.
            output_hidden_states (Optional[bool]): Whether to output hidden states.
                If True, hidden states will be output. Default: False.
            return_dict (Optional[bool]): Whether to return a dictionary as output. If True, a dictionary containing
                the output tensors will be returned. If False, a tuple will be returned. Default: True.

        Returns:
            Union[Tuple[mindspore.Tensor], BaseModelOutputWithPastAndCrossAttentions]:
                The output of the encoder layer. If return_dict is True, a dictionary containing the output tensors will
                be returned. If return_dict is False, a tuple of tensors will be returned. The output tensors include:

                - last_hidden_state (mindspore.Tensor): The last hidden state of the encoder layer.
                Shape: (batch_size, sequence_length, hidden_size).
                - past_key_values (Tuple[Tuple[mindspore.Tensor]]): The updated past key values. If use_cache is True,
                the key values for each layer will be returned. Shape: (num_layers, 2, batch_size, num_heads,
                sequence_length, hidden_size // num_heads).
                - hidden_states (Tuple[mindspore.Tensor]): The hidden states of the encoder layer.
                If output_hidden_states is True, all hidden states for each layer will be returned. Shape: (num_layers,
                batch_size, sequence_length, hidden_size).
                - attentions (Tuple[mindspore.Tensor]): The self-attention weights of the encoder layer.
                If output_attentions is True, all self-attention weights for each layer will be returned. Shape:
                (num_layers, batch_size, num_heads, sequence_length, sequence_length).
                - cross_attentions (Tuple[mindspore.Tensor]): The cross-attention weights of the encoder layer.
                If output_attentions is True and add_cross_attention is True, all cross-attention weights for each
                layer will be returned. Shape: (num_layers, batch_size, num_heads, sequence_length, encoder_sequence_length).

        Raises:
            None.
        """
        all_hidden_states = () if output_hidden_states else None
        all_self_attentions = () if output_attentions else None
        all_cross_attentions = () if output_attentions and self.config.add_cross_attention else None

        next_decoder_cache = () if use_cache else None
        for i, layer_module in enumerate(self.layer):
            if output_hidden_states:
                all_hidden_states = all_hidden_states + (hidden_states,)

            layer_head_mask = head_mask[i] if head_mask is not None else None
            past_key_value = past_key_values[i] if past_key_values is not None else None

            layer_outputs = layer_module(
                hidden_states,
                attention_mask,
                layer_head_mask,
                encoder_hidden_states,
                encoder_attention_mask,
                past_key_value,
                output_attentions,
            )

            hidden_states = layer_outputs[0]
            if use_cache:
                next_decoder_cache += (layer_outputs[-1],)
            if output_attentions:
                all_self_attentions = all_self_attentions + (layer_outputs[1],)
                if self.config.add_cross_attention:
                    all_cross_attentions = all_cross_attentions + (layer_outputs[2],)

        if output_hidden_states:
            all_hidden_states = all_hidden_states + (hidden_states,)

        if not return_dict:
            return tuple(
                v
                for v in [
                    hidden_states,
                    next_decoder_cache,
                    all_hidden_states,
                    all_self_attentions,
                    all_cross_attentions,
                ]
                if v is not None
            )
        return BaseModelOutputWithPastAndCrossAttentions(
            last_hidden_state=hidden_states,
            past_key_values=next_decoder_cache,
            hidden_states=all_hidden_states,
            attentions=all_self_attentions,
            cross_attentions=all_cross_attentions,
        )

mindnlp.transformers.models.roberta.modeling_roberta.RobertaEncoder.__init__(config)

Initializes a new instance of the RobertaEncoder class.

PARAMETER DESCRIPTION
  • self (RobertaEncoder): The instance of the RobertaEncoder class.
  • config (dict): A dictionary containing configuration parameters for the encoder. It should include the following keys:
      • num_hidden_layers (int): The number of hidden layers in the encoder.

RETURNS DESCRIPTION
  • None.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py, lines 959-979
def __init__(self, config):
    """
    Initializes a new instance of the RobertaEncoder class.

    Args:
        self (RobertaEncoder): The instance of the RobertaEncoder class.
        config (dict): A dictionary containing configuration parameters for the encoder.
            It should include the following keys:

            - num_hidden_layers (int): The number of hidden layers in the encoder.

    Returns:
        None.

    Raises:
        None.
    """
    super().__init__()
    self.config = config
    self.layer = nn.ModuleList([RobertaLayer(config) for _ in range(config.num_hidden_layers)])
    self.gradient_checkpointing = False

mindnlp.transformers.models.roberta.modeling_roberta.RobertaEncoder.forward(hidden_states, attention_mask=None, head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, past_key_values=None, use_cache=None, output_attentions=False, output_hidden_states=False, return_dict=True)

Constructs the RobertaEncoder.

PARAMETER DESCRIPTION
  • self: The object instance.
  • hidden_states (Tensor): The input hidden states of the encoder layer. Shape: (batch_size, sequence_length, hidden_size).
  • attention_mask (Optional[Tensor], default None): The attention mask tensor. If provided, should be of shape (batch_size, sequence_length), with 0s indicating tokens to be masked and 1s indicating tokens to be attended to.
  • head_mask (Optional[Tensor], default None): The head mask tensor. If provided, should be of shape (num_layers, num_heads), with 0s indicating heads to be masked and 1s indicating heads to be used.
  • encoder_hidden_states (Optional[Tensor], default None): The hidden states of the encoder layer. Shape: (batch_size, sequence_length, hidden_size).
  • encoder_attention_mask (Optional[Tensor], default None): The attention mask tensor for the encoder layer. If provided, should be of shape (batch_size, sequence_length), with 0s indicating tokens to be masked and 1s indicating tokens to be attended to.
  • past_key_values (Optional[Tuple[Tuple[Tensor]]], default None): The past key values. If provided, should be of shape (num_layers, 2, batch_size, num_heads, sequence_length, hidden_size // num_heads).
  • use_cache (Optional[bool], default None): Whether to use the cache. If True, the cache will be used and updated. If False, the cache will be ignored.
  • output_attentions (Optional[bool], default False): Whether to output attentions. If True, attentions will be output.
  • output_hidden_states (Optional[bool], default False): Whether to output hidden states. If True, hidden states will be output.
  • return_dict (Optional[bool], default True): Whether to return a model output object. If True, a BaseModelOutputWithPastAndCrossAttentions containing the output tensors will be returned. If False, a tuple will be returned.

RETURNS DESCRIPTION
Union[Tuple[mindspore.Tensor], BaseModelOutputWithPastAndCrossAttentions]: The output of the encoder. If return_dict is True, a BaseModelOutputWithPastAndCrossAttentions containing the output tensors is returned; if False, a tuple of tensors is returned. The output tensors include:

  • last_hidden_state (mindspore.Tensor): The last hidden state of the encoder. Shape: (batch_size, sequence_length, hidden_size).
  • past_key_values (Tuple[Tuple[mindspore.Tensor]]): The updated past key values, returned when use_cache is True. Shape: (num_layers, 2, batch_size, num_heads, sequence_length, hidden_size // num_heads).
  • hidden_states (Tuple[mindspore.Tensor]): The hidden states of each encoder layer, returned when output_hidden_states is True. Shape: (num_layers, batch_size, sequence_length, hidden_size).
  • attentions (Tuple[mindspore.Tensor]): The self-attention weights of each layer, returned when output_attentions is True. Shape: (num_layers, batch_size, num_heads, sequence_length, sequence_length).
  • cross_attentions (Tuple[mindspore.Tensor]): The cross-attention weights of each layer, returned when output_attentions is True and add_cross_attention is True. Shape: (num_layers, batch_size, num_heads, sequence_length, encoder_sequence_length).
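
To make the return structure concrete, the sketch below runs the tiny encoder from the earlier example: with output_hidden_states=True the hidden_states tuple holds the input plus one entry per layer, and with return_dict=False the same values come back as a plain tuple with None entries skipped. (Illustrative sizes; assumes RobertaConfig is importable from mindnlp.transformers.)

import numpy as np
import mindspore
from mindnlp.transformers import RobertaConfig  # assumed export
from mindnlp.transformers.models.roberta.modeling_roberta import RobertaEncoder

config = RobertaConfig(hidden_size=64, num_attention_heads=4, num_hidden_layers=3,
                       intermediate_size=128, vocab_size=100, max_position_embeddings=40)
encoder = RobertaEncoder(config)

hidden_states = mindspore.Tensor(np.random.randn(2, 8, 64).astype(np.float32))

out = encoder(hidden_states, output_hidden_states=True, output_attentions=True)
print(out.last_hidden_state.shape)  # (2, 8, 64)
print(len(out.hidden_states))       # 4: the input plus one state per layer
print(len(out.attentions))          # 3: one self-attention map per layer

# return_dict=False yields the populated values as a plain tuple.
tuple_out = encoder(hidden_states, return_dict=False)
print(len(tuple_out))               # 1: only last_hidden_state when no optional outputs are requested
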
Source code in mindnlp\transformers\models\roberta\modeling_roberta.py, lines 981-1096
def forward(
    self,
    hidden_states: mindspore.Tensor,
    attention_mask: Optional[mindspore.Tensor] = None,
    head_mask: Optional[mindspore.Tensor] = None,
    encoder_hidden_states: Optional[mindspore.Tensor] = None,
    encoder_attention_mask: Optional[mindspore.Tensor] = None,
    past_key_values: Optional[Tuple[Tuple[mindspore.Tensor]]] = None,
    use_cache: Optional[bool] = None,
    output_attentions: Optional[bool] = False,
    output_hidden_states: Optional[bool] = False,
    return_dict: Optional[bool] = True,
) -> Union[Tuple[mindspore.Tensor], BaseModelOutputWithPastAndCrossAttentions]:
    """
    Constructs the RobertaEncoder.

    Args:
        self: The object instance.
        hidden_states (mindspore.Tensor): The input hidden states of the encoder layer.
            Shape: (batch_size, sequence_length, hidden_size).
        attention_mask (Optional[mindspore.Tensor]): The attention mask tensor.
            If provided, should be of shape (batch_size, sequence_length), with 0s indicating tokens to be masked
            and 1s indicating tokens to be attended to.
        head_mask (Optional[mindspore.Tensor]): The head mask tensor. If provided, should be of
            shape (num_layers, num_heads), with 0s indicating heads to be masked and 1s indicating heads to be used.
        encoder_hidden_states (Optional[mindspore.Tensor]): The hidden states of the encoder layer.
            Shape: (batch_size, sequence_length, hidden_size).
        encoder_attention_mask (Optional[mindspore.Tensor]): The attention mask tensor for encoder layer.
            If provided, should be of shape (batch_size, sequence_length), with 0s indicating tokens to be
            masked and 1s indicating tokens to be attended to.
        past_key_values (Optional[Tuple[Tuple[mindspore.Tensor]]]): The past key values. If provided,
            should be of shape (num_layers, 2, batch_size, num_heads, sequence_length, hidden_size // num_heads).
        use_cache (Optional[bool]): Whether to use cache. If True, the cache will be used and updated.
            If False, the cache will be ignored. Default: None.
        output_attentions (Optional[bool]): Whether to output attentions. If True, attentions will be output.
            Default: False.
        output_hidden_states (Optional[bool]): Whether to output hidden states.
            If True, hidden states will be output. Default: False.
        return_dict (Optional[bool]): Whether to return a dictionary as output. If True, a dictionary containing
            the output tensors will be returned. If False, a tuple will be returned. Default: True.

    Returns:
        Union[Tuple[mindspore.Tensor], BaseModelOutputWithPastAndCrossAttentions]:
            The output of the encoder layer. If return_dict is True, a dictionary containing the output tensors will
            be returned. If return_dict is False, a tuple of tensors will be returned. The output tensors include:

            - last_hidden_state (mindspore.Tensor): The last hidden state of the encoder layer.
            Shape: (batch_size, sequence_length, hidden_size).
            - past_key_values (Tuple[Tuple[mindspore.Tensor]]): The updated past key values. If use_cache is True,
            the key values for each layer will be returned. Shape: (num_layers, 2, batch_size, num_heads,
            sequence_length, hidden_size // num_heads).
            - hidden_states (Tuple[mindspore.Tensor]): The hidden states of the encoder layer.
            If output_hidden_states is True, all hidden states for each layer will be returned. Shape: (num_layers,
            batch_size, sequence_length, hidden_size).
            - attentions (Tuple[mindspore.Tensor]): The self-attention weights of the encoder layer.
            If output_attentions is True, all self-attention weights for each layer will be returned. Shape:
            (num_layers, batch_size, num_heads, sequence_length, sequence_length).
            - cross_attentions (Tuple[mindspore.Tensor]): The cross-attention weights of the encoder layer.
            If output_attentions is True and add_cross_attention is True, all cross-attention weights for each
            layer will be returned. Shape: (num_layers, batch_size, num_heads, sequence_length, encoder_sequence_length).

    Raises:
        None.
    """
    all_hidden_states = () if output_hidden_states else None
    all_self_attentions = () if output_attentions else None
    all_cross_attentions = () if output_attentions and self.config.add_cross_attention else None

    next_decoder_cache = () if use_cache else None
    for i, layer_module in enumerate(self.layer):
        if output_hidden_states:
            all_hidden_states = all_hidden_states + (hidden_states,)

        layer_head_mask = head_mask[i] if head_mask is not None else None
        past_key_value = past_key_values[i] if past_key_values is not None else None

        layer_outputs = layer_module(
            hidden_states,
            attention_mask,
            layer_head_mask,
            encoder_hidden_states,
            encoder_attention_mask,
            past_key_value,
            output_attentions,
        )

        hidden_states = layer_outputs[0]
        if use_cache:
            next_decoder_cache += (layer_outputs[-1],)
        if output_attentions:
            all_self_attentions = all_self_attentions + (layer_outputs[1],)
            if self.config.add_cross_attention:
                all_cross_attentions = all_cross_attentions + (layer_outputs[2],)

    if output_hidden_states:
        all_hidden_states = all_hidden_states + (hidden_states,)

    if not return_dict:
        return tuple(
            v
            for v in [
                hidden_states,
                next_decoder_cache,
                all_hidden_states,
                all_self_attentions,
                all_cross_attentions,
            ]
            if v is not None
        )
    return BaseModelOutputWithPastAndCrossAttentions(
        last_hidden_state=hidden_states,
        past_key_values=next_decoder_cache,
        hidden_states=all_hidden_states,
        attentions=all_self_attentions,
        cross_attentions=all_cross_attentions,
    )

mindnlp.transformers.models.roberta.modeling_roberta.RobertaForCausalLM

Bases: RobertaPreTrainedModel

RobertaForCausalLM

This class is a RoBERTa model for causal language modeling. It predicts the next word in a sequence given the previous words.

Class Inheritance

RobertaForCausalLM inherits from RobertaPreTrainedModel.

PARAMETER DESCRIPTION
config

The configuration object that specifies the model architecture and hyperparameters.

TYPE: RobertaConfig

ATTRIBUTE DESCRIPTION
roberta

The RoBERTa model that encodes the input sequence.

TYPE: RobertaModel

lm_head

The linear layer that predicts the next word in the sequence.

TYPE: RobertaLMHead

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py (lines 1417-1697)
class RobertaForCausalLM(RobertaPreTrainedModel):

    """
        RobertaForCausalLM

        This class is a RoBERTa model for causal language modeling. It predicts the next word in a sequence given the
        previous words.

        Class Inheritance:
            `RobertaForCausalLM` inherits from `RobertaPreTrainedModel`.

        Args:
            config: `RobertaConfig`
                The configuration object that specifies the model architecture and hyperparameters.

        Attributes:
            roberta: `RobertaModel`
                The RoBERTa model that encodes the input sequence.
            lm_head: `RobertaLMHead`
                The linear layer that predicts the next word in the sequence.

        Methods:
            get_output_embeddings
                Retrieve the output embeddings of the model.
            set_output_embeddings
                Set new output embeddings for the model.
            forward
                Perform the forward pass of the model for causal language modeling.
            prepare_inputs_for_generation
                Prepare the inputs for generation by removing the prefix and adjusting the attention mask.
            _reorder_cache
                Reorder the cache of past key values based on the beam index.
    """
    _tied_weights_keys = ["lm_head.decoder.weight", "lm_head.decoder.bias"]

    def __init__(self, config):
        """
        Initializes a new instance of the `RobertaForCausalLM` class.

        Args:
            self: The object itself.
            config: An instance of the `RobertaConfig` class containing the model configuration settings.
                This parameter is required for the initialization of the `RobertaModel` and `RobertaLMHead` objects.

        Returns:
            None

        Raises:
            None
        """
        super().__init__(config)

        if not config.is_decoder:
            logger.warning(
                "If you want to use `RobertaLMHeadModel` as a standalone, add `is_decoder=True.`"
            )

        self.roberta = RobertaModel(config, add_pooling_layer=False)
        self.lm_head = RobertaLMHead(config)

        # Initialize weights and apply final processing
        self.post_init()

    def get_output_embeddings(self):
        """
        Returns the output embeddings for the RobertaForCausalLM model.

        Args:
            self: An instance of the RobertaForCausalLM class.

        Returns:
            None.

        Raises:
            None.

        This method returns the output embeddings for the RobertaForCausalLM model.
        The output embeddings are obtained from the decoder of the lm_head.
        """
        return self.lm_head.decoder

    def set_output_embeddings(self, new_embeddings):
        """
        Sets the output embeddings of the RobertaForCausalLM model.

        Args:
            self (RobertaForCausalLM): The instance of the RobertaForCausalLM class.
            new_embeddings (torch.nn.Module): The new embeddings to be set as the output embeddings.

        Returns:
            None.

        Raises:
            None.
        """
        self.lm_head.decoder = new_embeddings

    def forward(
        self,
        input_ids: Optional[mindspore.Tensor] = None,
        attention_mask: Optional[mindspore.Tensor] = None,
        token_type_ids: Optional[mindspore.Tensor] = None,
        position_ids: Optional[mindspore.Tensor] = None,
        head_mask: Optional[mindspore.Tensor] = None,
        inputs_embeds: Optional[mindspore.Tensor] = None,
        encoder_hidden_states: Optional[mindspore.Tensor] = None,
        encoder_attention_mask: Optional[mindspore.Tensor] = None,
        labels: Optional[mindspore.Tensor] = None,
        past_key_values: Tuple[Tuple[mindspore.Tensor]] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple[mindspore.Tensor], CausalLMOutputWithCrossAttentions]:
        r"""
        Args:
            encoder_hidden_states  (`mindspore.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
                Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
                the model is configured as a decoder.
            encoder_attention_mask (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
                Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
                the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`:

                - 1 for tokens that are **not masked**,
                - 0 for tokens that are **masked**.
            labels (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
                Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in
                `[-100, 0, ..., config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are
                ignored (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`
            past_key_values (`tuple(tuple(mindspore.Tensor))` of length `config.n_layers` with each tuple having 4 tensors
                of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`):
                Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
                If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that
                don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all
                `decoder_input_ids` of shape `(batch_size, sequence_length)`.
            use_cache (`bool`, *optional*):
                If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
                `past_key_values`).

        Returns:
            Union[Tuple[mindspore.Tensor], CausalLMOutputWithCrossAttentions]

        Example:
            ```python
            >>> from transformers import AutoTokenizer, RobertaForCausalLM, AutoConfig
            ...
            >>> tokenizer = AutoTokenizer.from_pretrained("roberta-base")
            >>> config = AutoConfig.from_pretrained("roberta-base")
            >>> config.is_decoder = True
            >>> model = RobertaForCausalLM.from_pretrained("roberta-base", config=config)
            ...
            >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="ms")
            >>> outputs = model(**inputs)
            ...
            >>> prediction_logits = outputs.logits
            ```
        """
        return_dict = (
            return_dict if return_dict is not None else self.config.use_return_dict
        )
        if labels is not None:
            use_cache = False

        outputs = self.roberta(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            encoder_hidden_states=encoder_hidden_states,
            encoder_attention_mask=encoder_attention_mask,
            past_key_values=past_key_values,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

        sequence_output = outputs[0]
        prediction_scores = self.lm_head(sequence_output)

        lm_loss = None
        if labels is not None:
            # we are doing next-token prediction; shift prediction scores and input ids by one
            shifted_prediction_scores = prediction_scores[:, :-1, :]
            labels = labels[:, 1:]
            lm_loss = F.cross_entropy(
                shifted_prediction_scores.view(-1, self.config.vocab_size),
                labels.view(-1),
            )

        if not return_dict:
            output = (prediction_scores,) + outputs[2:]
            return ((lm_loss,) + output) if lm_loss is not None else output

        return CausalLMOutputWithCrossAttentions(
            loss=lm_loss,
            logits=prediction_scores,
            past_key_values=outputs.past_key_values,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
            cross_attentions=outputs.cross_attentions,
        )

    def prepare_inputs_for_generation(
        self, input_ids, past_key_values=None, attention_mask=None, **model_kwargs
    ):
        """
        Prepares the inputs for generation in the RobertaForCausalLM class.

        Args:
            self (RobertaForCausalLM): The instance of the RobertaForCausalLM class.
            input_ids (torch.Tensor): The input tensor of shape (batch_size, sequence_length)
                containing the input token IDs.
            past_key_values (tuple, optional): A tuple of past key values. Defaults to None.
            attention_mask (torch.Tensor, optional): The attention mask tensor of shape (batch_size, sequence_length).
                Defaults to None.
            **model_kwargs: Additional keyword arguments for the model.

        Returns:
            dict:
                A dictionary containing the prepared inputs for generation with the following key-value pairs:

                - 'input_ids' (torch.Tensor): The input tensor with modified sequence length.
                - 'attention_mask' (torch.Tensor): The attention mask tensor.
                - 'past_key_values' (tuple): The modified tuple of past key values or None.

        Raises:
            None.
        """
        input_shape = input_ids.shape
        # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly
        if attention_mask is None:
            attention_mask = ops.ones(input_shape)

        # cut decoder_input_ids if past_key_values is used
        if past_key_values is not None:
            past_length = past_key_values[0][0].shape[2]

            # Some generation methods already pass only the last input ID
            if input_ids.shape[1] > past_length:
                remove_prefix_length = past_length
            else:
                # Default to old behavior: keep only final ID
                remove_prefix_length = input_ids.shape[1] - 1

            input_ids = input_ids[:, remove_prefix_length:]

        return {
            "input_ids": input_ids,
            "attention_mask": attention_mask,
            "past_key_values": past_key_values,
        }

    def _reorder_cache(self, past_key_values, beam_idx):
        """
        Reorders the cache by selecting specific elements based on the beam indexes.

        Args:
            self (RobertaForCausalLM): The instance of the RobertaForCausalLM class.
            past_key_values (tuple): A tuple containing the past key-values for each layer.
                Each element in the tuple is a tensor representing the hidden states for a specific layer.
            beam_idx (tensor): A tensor containing the indexes of the selected beams.

        Returns:
            tuple: The reordered past key-values.
                Each element in the tuple is a tensor representing the hidden states for a specific layer.
                The tensors are selected based on the beam indexes.

        Raises:
            None.
        """
        reordered_past = ()
        for layer_past in past_key_values:
            reordered_past += (
                tuple(
                    past_state.index_select(0, beam_idx) for past_state in layer_past
                ),
            )
        return reordered_past
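
To make the cache reordering in `_reorder_cache` above concrete, here is a toy sketch (the values are made up, not taken from the model) showing how `index_select` gathers past states along the beam axis:

```python
import mindspore

# Toy values: four beams, each with a one-dimensional cached state.
past_state = mindspore.Tensor([[0.0], [1.0], [2.0], [3.0]])
# beam_idx[i] is the parent beam whose cache slot i should inherit after beam-search reordering.
beam_idx = mindspore.Tensor([2, 2, 0, 1], mindspore.int64)

reordered = past_state.index_select(0, beam_idx)
print(reordered)  # rows 2, 2, 0, 1 of past_state
```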

mindnlp.transformers.models.roberta.modeling_roberta.RobertaForCausalLM.__init__(config)

Initializes a new instance of the RobertaForCausalLM class.

PARAMETER DESCRIPTION
self

The object itself.

config

An instance of the RobertaConfig class containing the model configuration settings. This parameter is required for the initialization of the RobertaModel and RobertaLMHead objects.

RETURNS DESCRIPTION

None

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py (lines 1452-1478)
def __init__(self, config):
    """
    Initializes a new instance of the `RobertaForCausalLM` class.

    Args:
        self: The object itself.
        config: An instance of the `RobertaConfig` class containing the model configuration settings.
            This parameter is required for the initialization of the `RobertaModel` and `RobertaLMHead` objects.

    Returns:
        None

    Raises:
        None
    """
    super().__init__(config)

    if not config.is_decoder:
        logger.warning(
            "If you want to use `RobertaLMHeadModel` as a standalone, add `is_decoder=True.`"
        )

    self.roberta = RobertaModel(config, add_pooling_layer=False)
    self.lm_head = RobertaLMHead(config)

    # Initialize weights and apply final processing
    self.post_init()

mindnlp.transformers.models.roberta.modeling_roberta.RobertaForCausalLM.forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, labels=None, past_key_values=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None)

PARAMETER DESCRIPTION
encoder_hidden_states

Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.

TYPE: `mindspore.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional* DEFAULT: None

encoder_attention_mask

Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:

  • 1 for tokens that are not masked,
  • 0 for tokens that are masked.

TYPE: `mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional* DEFAULT: None

labels

Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]

TYPE: `mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional* DEFAULT: None

use_cache

If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).

TYPE: `bool`, *optional* DEFAULT: None

RETURNS DESCRIPTION
Union[Tuple[Tensor], CausalLMOutputWithCrossAttentions]

Union[Tuple[mindspore.Tensor], CausalLMOutputWithCrossAttentions]

Example
>>> from transformers import AutoTokenizer, RobertaForCausalLM, AutoConfig
...
>>> tokenizer = AutoTokenizer.from_pretrained("roberta-base")
>>> config = AutoConfig.from_pretrained("roberta-base")
>>> config.is_decoder = True
>>> model = RobertaForCausalLM.from_pretrained("roberta-base", config=config)
...
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="ms")
>>> outputs = model(**inputs)
...
>>> prediction_logits = outputs.logits
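The loss computation in the source below shifts prediction scores and labels by one position, so that position t is scored against token t + 1. The following NumPy sketch is only a toy illustration of that alignment (the ids and sizes are made up), not model code:

```python
import numpy as np

labels = np.array([[10, 11, 12, 13]])   # token ids, shape (batch_size, sequence_length)
scores = np.zeros((1, 4, 50265))        # logits, shape (batch_size, sequence_length, vocab_size)

shifted_scores = scores[:, :-1, :]      # the last position has no following token to predict
shifted_labels = labels[:, 1:]          # the first token is never a prediction target
# forward() then applies cross-entropy between these two, exactly as in
# prediction_scores[:, :-1, :] and labels[:, 1:].
print(shifted_scores.shape, shifted_labels.shape)  # (1, 3, 50265) (1, 3)
```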
Source code in mindnlp\transformers\models\roberta\modeling_roberta.py (lines 1514-1620)
def forward(
    self,
    input_ids: Optional[mindspore.Tensor] = None,
    attention_mask: Optional[mindspore.Tensor] = None,
    token_type_ids: Optional[mindspore.Tensor] = None,
    position_ids: Optional[mindspore.Tensor] = None,
    head_mask: Optional[mindspore.Tensor] = None,
    inputs_embeds: Optional[mindspore.Tensor] = None,
    encoder_hidden_states: Optional[mindspore.Tensor] = None,
    encoder_attention_mask: Optional[mindspore.Tensor] = None,
    labels: Optional[mindspore.Tensor] = None,
    past_key_values: Tuple[Tuple[mindspore.Tensor]] = None,
    use_cache: Optional[bool] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Union[Tuple[mindspore.Tensor], CausalLMOutputWithCrossAttentions]:
    r"""
    Args:
        encoder_hidden_states  (`mindspore.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
            Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
            the model is configured as a decoder.
        encoder_attention_mask (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
            Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
            the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`:

            - 1 for tokens that are **not masked**,
            - 0 for tokens that are **masked**.
        labels (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
            Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in
            `[-100, 0, ..., config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are
            ignored (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`
        past_key_values (`tuple(tuple(mindspore.Tensor))` of length `config.n_layers` with each tuple having 4 tensors
            of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`):
            Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
            If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that
            don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all
            `decoder_input_ids` of shape `(batch_size, sequence_length)`.
        use_cache (`bool`, *optional*):
            If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
            `past_key_values`).

    Returns:
        Union[Tuple[mindspore.Tensor], CausalLMOutputWithCrossAttentions]

    Example:
        ```python
        >>> from transformers import AutoTokenizer, RobertaForCausalLM, AutoConfig
        ...
        >>> tokenizer = AutoTokenizer.from_pretrained("roberta-base")
        >>> config = AutoConfig.from_pretrained("roberta-base")
        >>> config.is_decoder = True
        >>> model = RobertaForCausalLM.from_pretrained("roberta-base", config=config)
        ...
        >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="ms")
        >>> outputs = model(**inputs)
        ...
        >>> prediction_logits = outputs.logits
        ```
    """
    return_dict = (
        return_dict if return_dict is not None else self.config.use_return_dict
    )
    if labels is not None:
        use_cache = False

    outputs = self.roberta(
        input_ids,
        attention_mask=attention_mask,
        token_type_ids=token_type_ids,
        position_ids=position_ids,
        head_mask=head_mask,
        inputs_embeds=inputs_embeds,
        encoder_hidden_states=encoder_hidden_states,
        encoder_attention_mask=encoder_attention_mask,
        past_key_values=past_key_values,
        use_cache=use_cache,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        return_dict=return_dict,
    )

    sequence_output = outputs[0]
    prediction_scores = self.lm_head(sequence_output)

    lm_loss = None
    if labels is not None:
        # we are doing next-token prediction; shift prediction scores and input ids by one
        shifted_prediction_scores = prediction_scores[:, :-1, :]
        labels = labels[:, 1:]
        lm_loss = F.cross_entropy(
            shifted_prediction_scores.view(-1, self.config.vocab_size),
            labels.view(-1),
        )

    if not return_dict:
        output = (prediction_scores,) + outputs[2:]
        return ((lm_loss,) + output) if lm_loss is not None else output

    return CausalLMOutputWithCrossAttentions(
        loss=lm_loss,
        logits=prediction_scores,
        past_key_values=outputs.past_key_values,
        hidden_states=outputs.hidden_states,
        attentions=outputs.attentions,
        cross_attentions=outputs.cross_attentions,
    )

mindnlp.transformers.models.roberta.modeling_roberta.RobertaForCausalLM.get_output_embeddings()

Returns the output embeddings for the RobertaForCausalLM model.

PARAMETER DESCRIPTION
self

An instance of the RobertaForCausalLM class.

RETURNS DESCRIPTION

The decoder module of `lm_head`, which serves as the model's output embedding (vocabulary projection) layer.

This method returns the output embeddings for the RobertaForCausalLM model. The output embeddings are obtained from the decoder of the lm_head.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py (lines 1480-1496)
def get_output_embeddings(self):
    """
    Returns the output embeddings for the RobertaForCausalLM model.

    Args:
        self: An instance of the RobertaForCausalLM class.

    Returns:
        None.

    Raises:
        None.

    This method returns the output embeddings for the RobertaForCausalLM model.
    The output embeddings are obtained from the decoder of the lm_head.
    """
    return self.lm_head.decoder

mindnlp.transformers.models.roberta.modeling_roberta.RobertaForCausalLM.prepare_inputs_for_generation(input_ids, past_key_values=None, attention_mask=None, **model_kwargs)

Prepares the inputs for generation in the RobertaForCausalLM class.

PARAMETER DESCRIPTION
self

The instance of the RobertaForCausalLM class.

TYPE: RobertaForCausalLM

input_ids

The input tensor of shape (batch_size, sequence_length) containing the input token IDs.

TYPE: Tensor

past_key_values

A tuple of past key values. Defaults to None.

TYPE: tuple DEFAULT: None

attention_mask

The attention mask tensor of shape (batch_size, sequence_length). Defaults to None.

TYPE: Tensor DEFAULT: None

**model_kwargs

Additional keyword arguments for the model.

DEFAULT: {}

RETURNS DESCRIPTION
dict

A dictionary containing the prepared inputs for generation with the following key-value pairs:

  • 'input_ids' (torch.Tensor): The input tensor with modified sequence length.
  • 'attention_mask' (torch.Tensor): The attention mask tensor.
  • 'past_key_values' (tuple): The modified tuple of past key values or None.
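The prefix trimming described above can be pictured with a small example. The tensor values below are made up and only mirror the slicing that the method performs when a cache is present:

```python
import mindspore

input_ids = mindspore.Tensor([[5, 6, 7, 8]], mindspore.int64)  # full sequence generated so far
past_length = 3                                                # positions already cached per layer

# Mirrors prepare_inputs_for_generation: drop tokens the cache already covers.
if input_ids.shape[1] > past_length:
    remove_prefix_length = past_length
else:
    # Default to old behavior: keep only the final id.
    remove_prefix_length = input_ids.shape[1] - 1

print(input_ids[:, remove_prefix_length:])  # [[8]]: only the newest token is re-fed to the model
```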
Source code in mindnlp\transformers\models\roberta\modeling_roberta.py (lines 1622-1670)
def prepare_inputs_for_generation(
    self, input_ids, past_key_values=None, attention_mask=None, **model_kwargs
):
    """
    Prepares the inputs for generation in the RobertaForCausalLM class.

    Args:
        self (RobertaForCausalLM): The instance of the RobertaForCausalLM class.
        input_ids (torch.Tensor): The input tensor of shape (batch_size, sequence_length)
            containing the input token IDs.
        past_key_values (tuple, optional): A tuple of past key values. Defaults to None.
        attention_mask (torch.Tensor, optional): The attention mask tensor of shape (batch_size, sequence_length).
            Defaults to None.
        **model_kwargs: Additional keyword arguments for the model.

    Returns:
        dict:
            A dictionary containing the prepared inputs for generation with the following key-value pairs:

            - 'input_ids' (torch.Tensor): The input tensor with modified sequence length.
            - 'attention_mask' (torch.Tensor): The attention mask tensor.
            - 'past_key_values' (tuple): The modified tuple of past key values or None.

    Raises:
        None.
    """
    input_shape = input_ids.shape
    # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly
    if attention_mask is None:
        attention_mask = ops.ones(input_shape)

    # cut decoder_input_ids if past_key_values is used
    if past_key_values is not None:
        past_length = past_key_values[0][0].shape[2]

        # Some generation methods already pass only the last input ID
        if input_ids.shape[1] > past_length:
            remove_prefix_length = past_length
        else:
            # Default to old behavior: keep only final ID
            remove_prefix_length = input_ids.shape[1] - 1

        input_ids = input_ids[:, remove_prefix_length:]

    return {
        "input_ids": input_ids,
        "attention_mask": attention_mask,
        "past_key_values": past_key_values,
    }

mindnlp.transformers.models.roberta.modeling_roberta.RobertaForCausalLM.set_output_embeddings(new_embeddings)

Sets the output embeddings of the RobertaForCausalLM model.

PARAMETER DESCRIPTION
self

The instance of the RobertaForCausalLM class.

TYPE: RobertaForCausalLM

new_embeddings

The new embeddings to be set as the output embeddings.

TYPE: Module

RETURNS DESCRIPTION

None.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py (lines 1498-1512)
def set_output_embeddings(self, new_embeddings):
    """
    Sets the output embeddings of the RobertaForCausalLM model.

    Args:
        self (RobertaForCausalLM): The instance of the RobertaForCausalLM class.
        new_embeddings (torch.nn.Module): The new embeddings to be set as the output embeddings.

    Returns:
        None.

    Raises:
        None.
    """
    self.lm_head.decoder = new_embeddings

mindnlp.transformers.models.roberta.modeling_roberta.RobertaForMaskedLM

Bases: RobertaPreTrainedModel

RobertaForMaskedLM is a Python class that represents a RoBERTa model for masked language modeling tasks. This class inherits from RobertaPreTrainedModel and provides methods for initializing the model, getting and setting output embeddings, and forwarding the model for masked language modeling tasks. It also includes a detailed forward method for processing input data and computing the masked language modeling loss.

The class includes the following methods:

  • __init__: Initializes the RobertaForMaskedLM instance.
  • get_output_embeddings: Returns the output embeddings of the model.
  • set_output_embeddings: Sets the output embeddings of the model to the specified new embeddings.
  • forward: Constructs the model for masked language modeling tasks and computes the masked language modeling loss.

The forward method supports various input parameters such as input IDs, attention mask, token type IDs, position IDs, head mask, input embeddings, encoder hidden states, encoder attention mask, labels, output attentions, output hidden states, and return dictionary. It also includes detailed information about the expected shape and type of the input data, as well as the optional arguments.

Additionally, the class includes warnings and error handling for specific configurations, ensuring the proper usage of the RobertaForMaskedLM model for bi-directional self-attention.

Note

The detailed method signatures and implementation details have been omitted for brevity and clarity.
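
Since the `forward` docstring of this class carries no usage example, here is a minimal, hedged sketch of filling a masked token. The checkpoint name, the `RobertaTokenizer` import path, and weight availability are assumptions for illustration only.

```python
from mindnlp.transformers import RobertaForMaskedLM, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForMaskedLM.from_pretrained("roberta-base")

inputs = tokenizer("The capital of France is <mask>.", return_tensors="ms")
outputs = model(**inputs)

# Locate the masked position and take the highest-scoring vocabulary id there.
ids = inputs["input_ids"][0].asnumpy().tolist()
mask_index = ids.index(tokenizer.mask_token_id)
predicted_id = int(outputs.logits[0, mask_index].argmax(axis=-1))
print(tokenizer.decode([predicted_id]))
```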

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py (lines 1700-1855)
class RobertaForMaskedLM(RobertaPreTrainedModel):

    """
    `RobertaForMaskedLM` is a Python class that represents a RoBERTa model for masked language modeling tasks.
    This class inherits from `RobertaPreTrainedModel` and provides methods for initializing the model,
    getting and setting output embeddings, and forwarding the model for masked language modeling tasks.
    It also includes a detailed `forward` method for processing input data and computing the masked language
    modeling loss.

    The class includes the following methods:

    - `__init__`: Initializes the `RobertaForMaskedLM` instance.
    - `get_output_embeddings`: Returns the output embeddings of the model.
    - `set_output_embeddings`: Sets the output embeddings of the model to the specified new embeddings.
    - `forward`: Constructs the model for masked language modeling tasks and computes the masked language modeling loss.

    The `forward` method supports various input parameters such as input IDs, attention mask, token type IDs,
    position IDs, head mask, input embeddings, encoder hidden states, encoder attention mask, labels, output attentions,
    output hidden states, and return dictionary. It also includes detailed information about the expected shape and
    type of the input data, as well as the optional arguments.

    Additionally, the class includes warnings and error handling for specific configurations, ensuring the proper usage
    of the `RobertaForMaskedLM` model for bi-directional self-attention.

    Note:
        The detailed method signatures and implementation details have been omitted for brevity and clarity.
    """
    _tied_weights_keys = ["lm_head.decoder.weight", "lm_head.decoder.bias"]

    def __init__(self, config):
        """
        Initializes a new instance of the 'RobertaForMaskedLM' class.

        Args:
            self: The current object instance.
            config:
                An instance of the 'Config' class containing the configuration settings for the model.

                - Type: Config
                - Purpose: Specifies the model's configuration.
                - Restrictions: None

        Returns:
            None

        Raises:
            None
        """
        super().__init__(config)

        if config.is_decoder:
            logger.warning(
                "If you want to use `RobertaForMaskedLM` make sure `config.is_decoder=False` for "
                "bi-directional self-attention."
            )

        self.roberta = RobertaModel(config, add_pooling_layer=False)
        self.lm_head = RobertaLMHead(config)

        # Initialize weights and apply final processing
        self.post_init()

    def get_output_embeddings(self):
        """
        Returns the output embeddings for the RobertaForMaskedLM model.

        Args:
            self: An instance of the RobertaForMaskedLM class.

        Returns:
            A tensor of size (batch_size, sequence_length, hidden_size) representing the output embeddings.

        Raises:
            None.
        """
        return self.lm_head.decoder

    def set_output_embeddings(self, new_embeddings):
        """
        This method sets the output embeddings for the RobertaForMaskedLM model.

        Args:
            self (RobertaForMaskedLM): The instance of the RobertaForMaskedLM class.
            new_embeddings (torch.nn.Module): The new output embeddings to be set for the model.
                It should be an instance of torch.nn.Module.

        Returns:
            None.

        Raises:
            None.
        """
        self.lm_head.decoder = new_embeddings

    def forward(
        self,
        input_ids: Optional[mindspore.Tensor] = None,
        attention_mask: Optional[mindspore.Tensor] = None,
        token_type_ids: Optional[mindspore.Tensor] = None,
        position_ids: Optional[mindspore.Tensor] = None,
        head_mask: Optional[mindspore.Tensor] = None,
        inputs_embeds: Optional[mindspore.Tensor] = None,
        encoder_hidden_states: Optional[mindspore.Tensor] = None,
        encoder_attention_mask: Optional[mindspore.Tensor] = None,
        labels: Optional[mindspore.Tensor] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple[mindspore.Tensor], MaskedLMOutput]:
        r"""
        Args:
            labels (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
                Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
                config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the
                loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`
            kwargs (`Dict[str, any]`, optional, defaults to *{}*):
                Used to hide legacy arguments that have been deprecated.
        """
        return_dict = (
            return_dict if return_dict is not None else self.config.use_return_dict
        )

        outputs = self.roberta(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            encoder_hidden_states=encoder_hidden_states,
            encoder_attention_mask=encoder_attention_mask,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )
        sequence_output = outputs[0]
        prediction_scores = self.lm_head(sequence_output)

        masked_lm_loss = None
        if labels is not None:
            masked_lm_loss = F.cross_entropy(
                prediction_scores.view(-1, self.config.vocab_size), labels.view(-1)
            )

        if not return_dict:
            output = (prediction_scores,) + outputs[2:]
            return (
                ((masked_lm_loss,) + output) if masked_lm_loss is not None else output
            )

        return MaskedLMOutput(
            loss=masked_lm_loss,
            logits=prediction_scores,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )

mindnlp.transformers.models.roberta.modeling_roberta.RobertaForMaskedLM.__init__(config)

Initializes a new instance of the 'RobertaForMaskedLM' class.

PARAMETER DESCRIPTION
self

The current object instance.

config

An instance of the 'Config' class containing the configuration settings for the model.

  • Type: Config
  • Purpose: Specifies the model's configuration.
  • Restrictions: None

RETURNS DESCRIPTION

None

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py (lines 1729-1760)
def __init__(self, config):
    """
    Initializes a new instance of the 'RobertaForMaskedLM' class.

    Args:
        self: The current object instance.
        config:
            An instance of the 'Config' class containing the configuration settings for the model.

            - Type: Config
            - Purpose: Specifies the model's configuration.
            - Restrictions: None

    Returns:
        None

    Raises:
        None
    """
    super().__init__(config)

    if config.is_decoder:
        logger.warning(
            "If you want to use `RobertaForMaskedLM` make sure `config.is_decoder=False` for "
            "bi-directional self-attention."
        )

    self.roberta = RobertaModel(config, add_pooling_layer=False)
    self.lm_head = RobertaLMHead(config)

    # Initialize weights and apply final processing
    self.post_init()

mindnlp.transformers.models.roberta.modeling_roberta.RobertaForMaskedLM.forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None)

PARAMETER DESCRIPTION
labels

Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]

TYPE: `mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional* DEFAULT: None

kwargs

Used to hide legacy arguments that have been deprecated.

TYPE: `Dict[str, any]`, optional, defaults to *{}*
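
The `-100` convention described for `labels` can be made concrete with a small NumPy sketch. The token ids below are made up for illustration; only positions whose label is not `-100` contribute to the masked-LM loss.

```python
import numpy as np

input_ids = np.array([[0, 713, 50264, 16, 2]])  # 50264 standing in for the <mask> id
labels = np.full_like(input_ids, -100)          # ignore every position by default
labels[input_ids == 50264] = 1012               # supervise only the masked slot (made-up target id)
print(labels)                                   # [[-100 -100 1012 -100 -100]]
```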

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py (lines 1794-1855)
def forward(
    self,
    input_ids: Optional[mindspore.Tensor] = None,
    attention_mask: Optional[mindspore.Tensor] = None,
    token_type_ids: Optional[mindspore.Tensor] = None,
    position_ids: Optional[mindspore.Tensor] = None,
    head_mask: Optional[mindspore.Tensor] = None,
    inputs_embeds: Optional[mindspore.Tensor] = None,
    encoder_hidden_states: Optional[mindspore.Tensor] = None,
    encoder_attention_mask: Optional[mindspore.Tensor] = None,
    labels: Optional[mindspore.Tensor] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Union[Tuple[mindspore.Tensor], MaskedLMOutput]:
    r"""
    Args:
        labels (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
            Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
            config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the
            loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`
        kwargs (`Dict[str, any]`, optional, defaults to *{}*):
            Used to hide legacy arguments that have been deprecated.
    """
    return_dict = (
        return_dict if return_dict is not None else self.config.use_return_dict
    )

    outputs = self.roberta(
        input_ids,
        attention_mask=attention_mask,
        token_type_ids=token_type_ids,
        position_ids=position_ids,
        head_mask=head_mask,
        inputs_embeds=inputs_embeds,
        encoder_hidden_states=encoder_hidden_states,
        encoder_attention_mask=encoder_attention_mask,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        return_dict=return_dict,
    )
    sequence_output = outputs[0]
    prediction_scores = self.lm_head(sequence_output)

    masked_lm_loss = None
    if labels is not None:
        masked_lm_loss = F.cross_entropy(
            prediction_scores.view(-1, self.config.vocab_size), labels.view(-1)
        )

    if not return_dict:
        output = (prediction_scores,) + outputs[2:]
        return (
            ((masked_lm_loss,) + output) if masked_lm_loss is not None else output
        )

    return MaskedLMOutput(
        loss=masked_lm_loss,
        logits=prediction_scores,
        hidden_states=outputs.hidden_states,
        attentions=outputs.attentions,
    )

mindnlp.transformers.models.roberta.modeling_roberta.RobertaForMaskedLM.get_output_embeddings()

Returns the output embeddings for the RobertaForMaskedLM model.

PARAMETER DESCRIPTION
self

An instance of the RobertaForMaskedLM class.

RETURNS DESCRIPTION

The decoder module of `lm_head`, which projects hidden states onto the vocabulary and serves as the output embedding layer.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py (lines 1762-1775)
def get_output_embeddings(self):
    """
    Returns the output embeddings for the RobertaForMaskedLM model.

    Args:
        self: An instance of the RobertaForMaskedLM class.

    Returns:
        A tensor of size (batch_size, sequence_length, hidden_size) representing the output embeddings.

    Raises:
        None.
    """
    return self.lm_head.decoder

mindnlp.transformers.models.roberta.modeling_roberta.RobertaForMaskedLM.set_output_embeddings(new_embeddings)

This method sets the output embeddings for the RobertaForMaskedLM model.

PARAMETER DESCRIPTION
self

The instance of the RobertaForMaskedLM class.

TYPE: RobertaForMaskedLM

new_embeddings

The new output embeddings to be set for the model. It should be an instance of torch.nn.Module.

TYPE: Module

RETURNS DESCRIPTION

None.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py (lines 1777-1792)
def set_output_embeddings(self, new_embeddings):
    """
    This method sets the output embeddings for the RobertaForMaskedLM model.

    Args:
        self (RobertaForMaskedLM): The instance of the RobertaForMaskedLM class.
        new_embeddings (torch.nn.Module): The new output embeddings to be set for the model.
            It should be an instance of torch.nn.Module.

    Returns:
        None.

    Raises:
        None.
    """
    self.lm_head.decoder = new_embeddings

mindnlp.transformers.models.roberta.modeling_roberta.RobertaForMultipleChoice

Bases: RobertaPreTrainedModel

RobertaForMultipleChoice is a class for fine-tuning a pre-trained Roberta model for multiple choice tasks.

This class inherits from RobertaPreTrainedModel and implements the necessary methods for forwarding the model architecture and computing the multiple choice classification loss.

ATTRIBUTE DESCRIPTION
roberta

The RobertaModel instance for handling the main Roberta model.

TYPE: RobertaModel

dropout

Dropout layer for regularization.

TYPE: Dropout

classifier

Dense layer for classification.

TYPE: Linear

METHOD DESCRIPTION
__init__

Initializes the RobertaForMultipleChoice instance with the given configuration.

forward

Constructs the model architecture and computes the multiple choice classification loss.

PARAMETER DESCRIPTION
input_ids

Input tensor containing the token indices.

TYPE: Optional[Tensor]

token_type_ids

Input tensor containing the token type ids.

TYPE: Optional[Tensor]

attention_mask

Input tensor containing the attention mask.

TYPE: Optional[Tensor]

labels

Tensor containing the labels for classification loss.

TYPE: Optional[Tensor]

position_ids

Tensor containing the positional indices.

TYPE: Optional[Tensor]

head_mask

Tensor containing the head mask.

TYPE: Optional[Tensor]

inputs_embeds

Tensor containing the embedded input.

TYPE: Optional[Tensor]

output_attentions

Flag indicating whether to output attentions.

TYPE: Optional[bool]

output_hidden_states

Flag indicating whether to output hidden states.

TYPE: Optional[bool]

return_dict

Flag indicating whether to return outputs as a dictionary.

TYPE: Optional[bool]

RETURNS DESCRIPTION

Union[Tuple[mindspore.Tensor], MultipleChoiceModelOutput]: Tuple containing the loss and model outputs.

RAISES DESCRIPTION
ValueError

If the input shape does not match the expected dimensions for multiple choice classification.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py (lines 2058-2198)
class RobertaForMultipleChoice(RobertaPreTrainedModel):

    """
    RobertaForMultipleChoice is a class for fine-tuning a pre-trained Roberta model for multiple choice tasks.

    This class inherits from RobertaPreTrainedModel and implements the necessary methods for forwarding the model
    architecture and computing the multiple choice classification loss.

    Attributes:
        roberta (RobertaModel): The RobertaModel instance for handling the main Roberta model.
        dropout (nn.Dropout): Dropout layer for regularization.
        classifier (nn.Linear): Dense layer for classification.

    Methods:
        __init__: Initializes the RobertaForMultipleChoice instance with the given configuration.
        forward:
            Constructs the model architecture and computes the multiple choice classification loss.

    Parameters:
        input_ids (Optional[mindspore.Tensor]): Input tensor containing the token indices.
        token_type_ids (Optional[mindspore.Tensor]): Input tensor containing the token type ids.
        attention_mask (Optional[mindspore.Tensor]): Input tensor containing the attention mask.
        labels (Optional[mindspore.Tensor]): Tensor containing the labels for classification loss.
        position_ids (Optional[mindspore.Tensor]): Tensor containing the positional indices.
        head_mask (Optional[mindspore.Tensor]): Tensor containing the head mask.
        inputs_embeds (Optional[mindspore.Tensor]): Tensor containing the embedded input.
        output_attentions (Optional[bool]): Flag indicating whether to output attentions.
        output_hidden_states (Optional[bool]): Flag indicating whether to output hidden states.
        return_dict (Optional[bool]): Flag indicating whether to return outputs as a dictionary.

    Returns:
        Union[Tuple[mindspore.Tensor], MultipleChoiceModelOutput]: Tuple containing the loss and model outputs.

    Raises:
        ValueError: If the input shape does not match the expected dimensions for multiple choice classification.
    """
    def __init__(self, config):
        """
        Initializes a new instance of the `RobertaForMultipleChoice` class.

        Args:
            self: The object itself.
            config: An instance of the `RobertaConfig` class containing the model configuration settings.

        Returns:
            None.

        Raises:
            None.
        """
        super().__init__(config)

        self.roberta = RobertaModel(config)
        self.dropout = nn.Dropout(p=config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, 1)

        # Initialize weights and apply final processing
        self.post_init()

    def forward(
        self,
        input_ids: Optional[mindspore.Tensor] = None,
        token_type_ids: Optional[mindspore.Tensor] = None,
        attention_mask: Optional[mindspore.Tensor] = None,
        labels: Optional[mindspore.Tensor] = None,
        position_ids: Optional[mindspore.Tensor] = None,
        head_mask: Optional[mindspore.Tensor] = None,
        inputs_embeds: Optional[mindspore.Tensor] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple[mindspore.Tensor], MultipleChoiceModelOutput]:
        r"""
        Args:
            labels (`mindspore.Tensor` of shape `(batch_size,)`, *optional*):
                Labels for computing the multiple choice classification loss. Indices should be in `[0, ...,
                num_choices-1]` where `num_choices` is the size of the second dimension of the input tensors. (See
                `input_ids` above)
        """
        return_dict = (
            return_dict if return_dict is not None else self.config.use_return_dict
        )
        num_choices = (
            input_ids.shape[1] if input_ids is not None else inputs_embeds.shape[1]
        )

        flat_input_ids = (
            input_ids.view(-1, input_ids.shape[-1]) if input_ids is not None else None
        )
        flat_position_ids = (
            position_ids.view(-1, position_ids.shape[-1])
            if position_ids is not None
            else None
        )
        flat_token_type_ids = (
            token_type_ids.view(-1, token_type_ids.shape[-1])
            if token_type_ids is not None
            else None
        )
        flat_attention_mask = (
            attention_mask.view(-1, attention_mask.shape[-1])
            if attention_mask is not None
            else None
        )
        flat_inputs_embeds = (
            inputs_embeds.view(-1, inputs_embeds.shape[-2], inputs_embeds.shape[-1])
            if inputs_embeds is not None
            else None
        )

        outputs = self.roberta(
            flat_input_ids,
            position_ids=flat_position_ids,
            token_type_ids=flat_token_type_ids,
            attention_mask=flat_attention_mask,
            head_mask=head_mask,
            inputs_embeds=flat_inputs_embeds,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )
        pooled_output = outputs[1]

        pooled_output = self.dropout(pooled_output)
        logits = self.classifier(pooled_output)
        reshaped_logits = logits.view(-1, num_choices)

        loss = None
        if labels is not None:
            loss = F.cross_entropy(reshaped_logits, labels)

        if not return_dict:
            output = (reshaped_logits,) + outputs[2:]
            return ((loss,) + output) if loss is not None else output

        return MultipleChoiceModelOutput(
            loss=loss,
            logits=reshaped_logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )
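
For orientation, a minimal usage sketch: the multiple-choice head expects `input_ids` of shape `(batch_size, num_choices, sequence_length)` and yields one logit per choice. The snippet below assumes that `RobertaConfig` and `RobertaForMultipleChoice` are re-exported from `mindnlp.transformers` and accept transformers-style keyword arguments; the tiny configuration and random token ids are purely illustrative.

```python
import numpy as np
import mindspore
from mindnlp.transformers import RobertaConfig, RobertaForMultipleChoice  # assumed re-exports

# Hypothetical tiny config for illustration; real use would load a pretrained checkpoint.
config = RobertaConfig(vocab_size=1000, hidden_size=64, num_hidden_layers=2,
                       num_attention_heads=2, intermediate_size=128)
model = RobertaForMultipleChoice(config)

batch_size, num_choices, seq_len = 2, 4, 16
input_ids = mindspore.Tensor(
    np.random.randint(3, config.vocab_size, (batch_size, num_choices, seq_len)), mindspore.int64)
attention_mask = mindspore.ops.ones((batch_size, num_choices, seq_len), dtype=mindspore.int64)
labels = mindspore.Tensor(np.array([1, 3]), mindspore.int64)  # index of the correct choice

outputs = model(input_ids=input_ids, attention_mask=attention_mask,
                labels=labels, return_dict=True)
print(outputs.logits.shape)  # (batch_size, num_choices)
print(outputs.loss)          # cross-entropy over the choice dimension
```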

mindnlp.transformers.models.roberta.modeling_roberta.RobertaForMultipleChoice.__init__(config)

Initializes a new instance of the RobertaForMultipleChoice class.

PARAMETER DESCRIPTION
self

The object itself.

config

An instance of the RobertaConfig class containing the model configuration settings.

RETURNS DESCRIPTION

None.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
def __init__(self, config):
    """
    Initializes a new instance of the `RobertaForMultipleChoice` class.

    Args:
        self: The object itself.
        config: An instance of the `RobertaConfig` class containing the model configuration settings.

    Returns:
        None.

    Raises:
        None.
    """
    super().__init__(config)

    self.roberta = RobertaModel(config)
    self.dropout = nn.Dropout(p=config.hidden_dropout_prob)
    self.classifier = nn.Linear(config.hidden_size, 1)

    # Initialize weights and apply final processing
    self.post_init()

mindnlp.transformers.models.roberta.modeling_roberta.RobertaForMultipleChoice.forward(input_ids=None, token_type_ids=None, attention_mask=None, labels=None, position_ids=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None)

PARAMETER DESCRIPTION
labels

Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See input_ids above)

TYPE: `mindspore.Tensor` of shape `(batch_size,)`, *optional* DEFAULT: None

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
def forward(
    self,
    input_ids: Optional[mindspore.Tensor] = None,
    token_type_ids: Optional[mindspore.Tensor] = None,
    attention_mask: Optional[mindspore.Tensor] = None,
    labels: Optional[mindspore.Tensor] = None,
    position_ids: Optional[mindspore.Tensor] = None,
    head_mask: Optional[mindspore.Tensor] = None,
    inputs_embeds: Optional[mindspore.Tensor] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Union[Tuple[mindspore.Tensor], MultipleChoiceModelOutput]:
    r"""
    Args:
        labels (`mindspore.Tensor` of shape `(batch_size,)`, *optional*):
            Labels for computing the multiple choice classification loss. Indices should be in `[0, ...,
            num_choices-1]` where `num_choices` is the size of the second dimension of the input tensors. (See
            `input_ids` above)
    """
    return_dict = (
        return_dict if return_dict is not None else self.config.use_return_dict
    )
    num_choices = (
        input_ids.shape[1] if input_ids is not None else inputs_embeds.shape[1]
    )

    flat_input_ids = (
        input_ids.view(-1, input_ids.shape[-1]) if input_ids is not None else None
    )
    flat_position_ids = (
        position_ids.view(-1, position_ids.shape[-1])
        if position_ids is not None
        else None
    )
    flat_token_type_ids = (
        token_type_ids.view(-1, token_type_ids.shape[-1])
        if token_type_ids is not None
        else None
    )
    flat_attention_mask = (
        attention_mask.view(-1, attention_mask.shape[-1])
        if attention_mask is not None
        else None
    )
    flat_inputs_embeds = (
        inputs_embeds.view(-1, inputs_embeds.shape[-2], inputs_embeds.shape[-1])
        if inputs_embeds is not None
        else None
    )

    outputs = self.roberta(
        flat_input_ids,
        position_ids=flat_position_ids,
        token_type_ids=flat_token_type_ids,
        attention_mask=flat_attention_mask,
        head_mask=head_mask,
        inputs_embeds=flat_inputs_embeds,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        return_dict=return_dict,
    )
    pooled_output = outputs[1]

    pooled_output = self.dropout(pooled_output)
    logits = self.classifier(pooled_output)
    reshaped_logits = logits.view(-1, num_choices)

    loss = None
    if labels is not None:
        loss = F.cross_entropy(reshaped_logits, labels)

    if not return_dict:
        output = (reshaped_logits,) + outputs[2:]
        return ((loss,) + output) if loss is not None else output

    return MultipleChoiceModelOutput(
        loss=loss,
        logits=reshaped_logits,
        hidden_states=outputs.hidden_states,
        attentions=outputs.attentions,
    )
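
The key bookkeeping in the forward pass above is dimension folding: the choice axis is merged into the batch axis before the encoder runs, and the per-choice scores are unfolded again before the loss. A standalone sketch of that reshaping with plain MindSpore tensors (dummy shapes, no model involved):

```python
import numpy as np
import mindspore
from mindspore import ops

batch_size, num_choices, seq_len = 2, 4, 16

# (batch, choices, seq) -> (batch * choices, seq): each choice becomes its own row.
input_ids = mindspore.Tensor(
    np.random.randint(0, 100, (batch_size, num_choices, seq_len)), mindspore.int64)
flat_input_ids = input_ids.view(-1, input_ids.shape[-1])
print(flat_input_ids.shape)  # (8, 16)

# Stand-in for classifier(pooled_output): one score per flattened row.
logits = ops.randn((batch_size * num_choices, 1))
reshaped_logits = logits.view(-1, num_choices)  # back to (batch, choices)

labels = mindspore.Tensor(np.array([1, 3]), mindspore.int64)
print(ops.cross_entropy(reshaped_logits, labels))  # softmax over the choice axis
```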

mindnlp.transformers.models.roberta.modeling_roberta.RobertaForQuestionAnswering

Bases: RobertaPreTrainedModel

RobertaForQuestionAnswering is a model for extractive question answering built on the RoBERTa architecture. It inherits from RobertaPreTrainedModel and provides functionality for building the forward pass and processing inputs for question answering.

ATTRIBUTE DESCRIPTION
num_labels

The number of labels for the question answering task.

TYPE: int

roberta

The RoBERTa model used for processing input sequences.

TYPE: RobertaModel

qa_outputs

A dense layer for outputting logits for the start and end positions of the labelled span.

TYPE: Linear

METHOD DESCRIPTION
__init__

Initializes the RobertaForQuestionAnswering model with the provided configuration.

forward

Constructs the model using the input tensors and returns the output logits for start and end positions. Optionally computes the total loss if start and end positions are provided.

Args:

  • input_ids (Optional[mindspore.Tensor]): The input tensor containing token indices.
  • attention_mask (Optional[mindspore.Tensor]): The tensor indicating which tokens should be attended to.
  • token_type_ids (Optional[mindspore.Tensor]): The tensor indicating token types.
  • position_ids (Optional[mindspore.Tensor]): The tensor indicating token positions.
  • head_mask (Optional[mindspore.Tensor]): The tensor for masking specific heads in the self-attention mechanism.
  • inputs_embeds (Optional[mindspore.Tensor]): The embedded input tensors.
  • start_positions (Optional[mindspore.Tensor]): The labels for the start positions of the labelled span.
  • end_positions (Optional[mindspore.Tensor]): The labels for the end positions of the labelled span.
  • output_attentions (Optional[bool]): Flag indicating whether to output attention weights.
  • output_hidden_states (Optional[bool]): Flag indicating whether to output hidden states.
  • return_dict (Optional[bool]): Flag indicating whether to return outputs as a dictionary.

Returns:

  • Union[Tuple[mindspore.Tensor], QuestionAnsweringModelOutput]: The output logits for start and end positions, and optionally the total loss.
RAISES DESCRIPTION
ValueError

If the start_positions or end_positions have incorrect dimensions.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
class RobertaForQuestionAnswering(RobertaPreTrainedModel):

    """
    RobertaForQuestionAnswering is a class representing a model for question answering tasks based on the RoBERTa
    architecture.
    It inherits from RobertaPreTrainedModel and provides functionalities for forwarding the model and processing
    inputs for question answering.

    Attributes:
        num_labels (int): The number of labels for the question answering task.
        roberta (RobertaModel): The RoBERTa model used for processing input sequences.
        qa_outputs (mindspore.nn.Linear): A dense layer for outputting logits for the start and end positions of the
            labelled span.

    Methods:
        __init__: Initializes the RobertaForQuestionAnswering model with the provided configuration.
        forward:
            Constructs the model using the input tensors and returns the output logits for start and end positions.
            Optionally computes the total loss if start and end positions are provided.

            Args:

            - input_ids (Optional[mindspore.Tensor]): The input tensor containing token indices.
            - attention_mask (Optional[mindspore.Tensor]): The tensor indicating which tokens should be attended to.
            - token_type_ids (Optional[mindspore.Tensor]): The tensor indicating token types.
            - position_ids (Optional[mindspore.Tensor]): The tensor indicating token positions.
            - head_mask (Optional[mindspore.Tensor]): The tensor for masking specific heads in the self-attention mechanism.
            - inputs_embeds (Optional[mindspore.Tensor]): The embedded input tensors.
            - start_positions (Optional[mindspore.Tensor]): The labels for the start positions of the labelled span.
            - end_positions (Optional[mindspore.Tensor]): The labels for the end positions of the labelled span.
            - output_attentions (Optional[bool]): Flag indicating whether to output attention weights.
            - output_hidden_states (Optional[bool]): Flag indicating whether to output hidden states.
            - return_dict (Optional[bool]): Flag indicating whether to return outputs as a dictionary.

            Returns:

            - Union[Tuple[mindspore.Tensor], QuestionAnsweringModelOutput]: The output logits for start and end positions,
            and optionally the total loss.

    Raises:
        ValueError: If the start_positions or end_positions have incorrect dimensions.
    """
    def __init__(self, config):
        """
        Initializes a new instance of the RobertaForQuestionAnswering class.

        Args:
            self: The object instance itself.
            config: A configuration object containing parameters for model initialization.
                It must have the attribute 'num_labels' specifying the number of labels.

        Returns:
            None.

        Raises:
            TypeError: If the 'config' parameter is not provided or is not of the expected type.
            ValueError: If the 'num_labels' attribute is missing in the 'config' object.
            RuntimeError: If an issue occurs during the initialization process of the RobertaForQuestionAnswering object.
        """
        super().__init__(config)
        self.num_labels = config.num_labels

        self.roberta = RobertaModel(config, add_pooling_layer=False)
        self.qa_outputs = nn.Linear(config.hidden_size, config.num_labels)

        # Initialize weights and apply final processing
        self.post_init()

    def forward(
        self,
        input_ids: Optional[mindspore.Tensor] = None,
        attention_mask: Optional[mindspore.Tensor] = None,
        token_type_ids: Optional[mindspore.Tensor] = None,
        position_ids: Optional[mindspore.Tensor] = None,
        head_mask: Optional[mindspore.Tensor] = None,
        inputs_embeds: Optional[mindspore.Tensor] = None,
        start_positions: Optional[mindspore.Tensor] = None,
        end_positions: Optional[mindspore.Tensor] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple[mindspore.Tensor], QuestionAnsweringModelOutput]:
        r"""
        Args:
            start_positions (`mindspore.Tensor` of shape `(batch_size,)`, *optional*):
                Labels for position (index) of the start of the labelled span for computing the token classification loss.
                Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence
                are not taken into account for computing the loss.
            end_positions (`mindspore.Tensor` of shape `(batch_size,)`, *optional*):
                Labels for position (index) of the end of the labelled span for computing the token classification loss.
                Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence
                are not taken into account for computing the loss.
        """
        return_dict = (
            return_dict if return_dict is not None else self.config.use_return_dict
        )

        outputs = self.roberta(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

        sequence_output = outputs[0]

        logits = self.qa_outputs(sequence_output)
        start_logits, end_logits = logits.split(1, axis=-1)
        start_logits = start_logits.squeeze(-1)
        end_logits = end_logits.squeeze(-1)

        total_loss = None
        if start_positions is not None and end_positions is not None:
            # On multi-GPU the position tensors may carry an extra dimension; squeeze it away
            if start_positions.ndim > 1:
                start_positions = start_positions.squeeze(-1)
            if end_positions.ndim > 1:
                end_positions = end_positions.squeeze(-1)
            # sometimes the start/end positions are outside our model inputs, we ignore these terms
            ignored_index = start_logits.shape[1]
            start_positions = start_positions.clamp(0, ignored_index)
            end_positions = end_positions.clamp(0, ignored_index)

            start_loss = F.cross_entropy(
                start_logits, start_positions, ignore_index=ignored_index
            )
            end_loss = F.cross_entropy(
                end_logits, end_positions, ignore_index=ignored_index
            )
            total_loss = (start_loss + end_loss) / 2

        if not return_dict:
            output = (start_logits, end_logits) + outputs[2:]
            return ((total_loss,) + output) if total_loss is not None else output

        return QuestionAnsweringModelOutput(
            loss=total_loss,
            start_logits=start_logits,
            end_logits=end_logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )
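
A minimal usage sketch for the QA head: it emits one start score and one end score per token. As above, this assumes `RobertaConfig` and `RobertaForQuestionAnswering` are re-exported from `mindnlp.transformers` with transformers-style arguments; the tiny config and random ids are illustrative only.

```python
import numpy as np
import mindspore
from mindnlp.transformers import RobertaConfig, RobertaForQuestionAnswering  # assumed re-exports

config = RobertaConfig(vocab_size=1000, hidden_size=64, num_hidden_layers=2,
                       num_attention_heads=2, intermediate_size=128)
model = RobertaForQuestionAnswering(config)

seq_len = 32
input_ids = mindspore.Tensor(
    np.random.randint(3, config.vocab_size, (1, seq_len)), mindspore.int64)
outputs = model(input_ids=input_ids, return_dict=True)

# One score per token position for each boundary of the answer span.
print(outputs.start_logits.shape, outputs.end_logits.shape)  # (1, 32) (1, 32)
```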

mindnlp.transformers.models.roberta.modeling_roberta.RobertaForQuestionAnswering.__init__(config)

Initializes a new instance of the RobertaForQuestionAnswering class.

PARAMETER DESCRIPTION
self

The object instance itself.

config

A configuration object containing parameters for model initialization. It must have the attribute 'num_labels' specifying the number of labels.

RETURNS DESCRIPTION

None.

RAISES DESCRIPTION
TypeError

If the 'config' parameter is not provided or is not of the expected type.

ValueError

If the 'num_labels' attribute is missing in the 'config' object.

RuntimeError

If an issue occurs during the initialization process of the RobertaForQuestionAnswering object.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
def __init__(self, config):
    """
    Initializes a new instance of the RobertaForQuestionAnswering class.

    Args:
        self: The object instance itself.
        config: A configuration object containing parameters for model initialization.
            It must have the attribute 'num_labels' specifying the number of labels.

    Returns:
        None.

    Raises:
        TypeError: If the 'config' parameter is not provided or is not of the expected type.
        ValueError: If the 'num_labels' attribute is missing in the 'config' object.
        RuntimeError: If an issue occurs during the initialization process of the RobertaForQuestionAnswering object.
    """
    super().__init__(config)
    self.num_labels = config.num_labels

    self.roberta = RobertaModel(config, add_pooling_layer=False)
    self.qa_outputs = nn.Linear(config.hidden_size, config.num_labels)

    # Initialize weights and apply final processing
    self.post_init()

mindnlp.transformers.models.roberta.modeling_roberta.RobertaForQuestionAnswering.forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, start_positions=None, end_positions=None, output_attentions=None, output_hidden_states=None, return_dict=None)

PARAMETER DESCRIPTION
start_positions

Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.

TYPE: `mindspore.Tensor` of shape `(batch_size,)`, *optional* DEFAULT: None

end_positions

Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.

TYPE: `mindspore.Tensor` of shape `(batch_size,)`, *optional* DEFAULT: None

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
def forward(
    self,
    input_ids: Optional[mindspore.Tensor] = None,
    attention_mask: Optional[mindspore.Tensor] = None,
    token_type_ids: Optional[mindspore.Tensor] = None,
    position_ids: Optional[mindspore.Tensor] = None,
    head_mask: Optional[mindspore.Tensor] = None,
    inputs_embeds: Optional[mindspore.Tensor] = None,
    start_positions: Optional[mindspore.Tensor] = None,
    end_positions: Optional[mindspore.Tensor] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Union[Tuple[mindspore.Tensor], QuestionAnsweringModelOutput]:
    r"""
    Args:
        start_positions (`mindspore.Tensor` of shape `(batch_size,)`, *optional*):
            Labels for position (index) of the start of the labelled span for computing the token classification loss.
            Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence
            are not taken into account for computing the loss.
        end_positions (`mindspore.Tensor` of shape `(batch_size,)`, *optional*):
            Labels for position (index) of the end of the labelled span for computing the token classification loss.
            Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence
            are not taken into account for computing the loss.
    """
    return_dict = (
        return_dict if return_dict is not None else self.config.use_return_dict
    )

    outputs = self.roberta(
        input_ids,
        attention_mask=attention_mask,
        token_type_ids=token_type_ids,
        position_ids=position_ids,
        head_mask=head_mask,
        inputs_embeds=inputs_embeds,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        return_dict=return_dict,
    )

    sequence_output = outputs[0]

    logits = self.qa_outputs(sequence_output)
    start_logits, end_logits = logits.split(1, axis=-1)
    start_logits = start_logits.squeeze(-1)
    end_logits = end_logits.squeeze(-1)

    total_loss = None
    if start_positions is not None and end_positions is not None:
        # On multi-GPU the position tensors may carry an extra dimension; squeeze it away
        if start_positions.ndim > 1:
            start_positions = start_positions.squeeze(-1)
        if end_positions.ndim > 1:
            end_positions = end_positions.squeeze(-1)
        # sometimes the start/end positions are outside our model inputs, we ignore these terms
        ignored_index = start_logits.shape[1]
        start_positions = start_positions.clamp(0, ignored_index)
        end_positions = end_positions.clamp(0, ignored_index)

        start_loss = F.cross_entropy(
            start_logits, start_positions, ignore_index=ignored_index
        )
        end_loss = F.cross_entropy(
            end_logits, end_positions, ignore_index=ignored_index
        )
        total_loss = (start_loss + end_loss) / 2

    if not return_dict:
        output = (start_logits, end_logits) + outputs[2:]
        return ((total_loss,) + output) if total_loss is not None else output

    return QuestionAnsweringModelOutput(
        loss=total_loss,
        start_logits=start_logits,
        end_logits=end_logits,
        hidden_states=outputs.hidden_states,
        attentions=outputs.attentions,
    )
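
At inference time the start/end logits are typically turned into a predicted answer span by taking the argmax of each (the usual start <= end constraint is omitted here for brevity). A small sketch with toy logits:

```python
import mindspore

# Toy scores for a 6-token sequence (batch of 1).
start_logits = mindspore.Tensor([[0.1, 2.0, 0.3, 0.2, 0.1, 0.0]])
end_logits = mindspore.Tensor([[0.0, 0.1, 0.2, 3.0, 0.1, 0.0]])

start_idx = start_logits.argmax(axis=-1).asnumpy()[0]
end_idx = end_logits.argmax(axis=-1).asnumpy()[0]
print(start_idx, end_idx)  # 1 3 -> the predicted answer covers token positions 1..3
```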

mindnlp.transformers.models.roberta.modeling_roberta.RobertaForSequenceClassification

Bases: RobertaPreTrainedModel

This class represents a RoBERTa model for sequence classification tasks and is a subclass of RobertaPreTrainedModel.

The class provides an initialization method (__init__) and a forward method.

The __init__ method initializes the RobertaForSequenceClassification object from a config argument. It calls the parent class (RobertaPreTrainedModel) initializer with the provided config and sets up additional attributes such as num_labels and classifier.

The forward method takes several input arguments and returns either a tuple of tensors or a SequenceClassifierOutput object. It performs the main computation of the model: it first runs the underlying roberta encoder to obtain the sequence output, then passes that output to the classifier to obtain the logits. If labels are provided, it computes the loss according to the problem type specified in the config. The loss and other outputs are returned according to the value of the return_dict parameter.

It is important to note that this class is specifically designed for sequence classification tasks, where the labels can be used to compute either a regression loss (Mean-Square loss) or a classification loss (Cross-Entropy). The problem type is determined automatically based on the number of labels and the dtype of the labels tensor.

For more details on the usage and functionality of this class, please refer to the RobertaForSequenceClassification documentation.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
class RobertaForSequenceClassification(RobertaPreTrainedModel):

    """
    This class represents a Roberta model for sequence classification tasks.
    It is a subclass of RobertaPreTrainedModel and is specifically designed for sequence classification tasks.

    The class's code includes an initialization method (__init__) and a forward method.

    The __init__ method initializes the RobertaForSequenceClassification object by taking a config argument.
    It calls the super() method to initialize the parent class (RobertaPreTrainedModel) with the
    provided config. It also initializes other attributes such as num_labels and classifier.

    The forward method takes several input arguments and returns either a tuple of tensors or a
    SequenceClassifierOutput object. It performs the main computation of the model. It first runs the underlying
    roberta encoder to obtain the sequence output. Then, it passes the sequence output to the classifier
    to obtain the logits. If labels are provided, it calculates the loss based on the problem type
    specified in the config. The loss and other outputs are returned as per the value of the return_dict parameter.

    It is important to note that this class is specifically designed for sequence classification tasks,
    where the labels can be used to compute either a regression loss (Mean-Square loss) or a classification
    loss (Cross-Entropy). The problem type is determined automatically based on the number of labels and the dtype
    of the labels tensor.

    For more details on the usage and functionality of this class, please refer to the RobertaForSequenceClassification
    documentation.
    """
    def __init__(self, config):
        """
        Initializes a new instance of the RobertaForSequenceClassification class.

        Args:
            self: The instance of the class.
            config (RobertaConfig): The configuration object for the Roberta model.
                It contains the model configuration settings such as num_labels, which is the number of labels
                for classification. This parameter is required for configuring the model initialization.

        Returns:
            None.

        Raises:
            None.
        """
        super().__init__(config)
        self.num_labels = config.num_labels
        self.config = config

        self.roberta = RobertaModel(config, add_pooling_layer=False)
        self.classifier = RobertaClassificationHead(config)

        # Initialize weights and apply final processing
        self.post_init()

    def forward(
        self,
        input_ids: Optional[mindspore.Tensor] = None,
        attention_mask: Optional[mindspore.Tensor] = None,
        token_type_ids: Optional[mindspore.Tensor] = None,
        position_ids: Optional[mindspore.Tensor] = None,
        head_mask: Optional[mindspore.Tensor] = None,
        inputs_embeds: Optional[mindspore.Tensor] = None,
        labels: Optional[mindspore.Tensor] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple[mindspore.Tensor], SequenceClassifierOutput]:
        r"""
        Args:
            labels (`mindspore.Tensor` of shape `(batch_size,)`, *optional*):
                Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
                config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
                `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
        """
        return_dict = (
            return_dict if return_dict is not None else self.config.use_return_dict
        )

        outputs = self.roberta(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )
        sequence_output = outputs[0]
        logits = self.classifier(sequence_output)

        loss = None
        if labels is not None:
            if self.config.problem_type is None:
                if self.num_labels == 1:
                    self.config.problem_type = "regression"
                elif self.num_labels > 1 and (
                    labels.dtype in (mindspore.int32, mindspore.int64)
                ):
                    self.config.problem_type = "single_label_classification"
                else:
                    self.config.problem_type = "multi_label_classification"

            if self.config.problem_type == "regression":
                if self.num_labels == 1:
                    loss = F.mse_loss(logits.squeeze(), labels.squeeze())
                else:
                    loss = F.mse_loss(logits, labels)
            elif self.config.problem_type == "single_label_classification":
                loss = F.cross_entropy(
                    logits.view(-1, self.num_labels), labels.view(-1)
                )
            elif self.config.problem_type == "multi_label_classification":
                loss = F.binary_cross_entropy_with_logits(logits, labels)

        if not return_dict:
            output = (logits,) + outputs[2:]
            return ((loss,) + output) if loss is not None else output

        return SequenceClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )
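
A minimal usage sketch, again assuming the `mindnlp.transformers` re-exports and transformers-style config arguments (hypothetical tiny config; `num_labels` controls the width of the classification head):

```python
import numpy as np
import mindspore
from mindnlp.transformers import RobertaConfig, RobertaForSequenceClassification  # assumed re-exports

config = RobertaConfig(vocab_size=1000, hidden_size=64, num_hidden_layers=2,
                       num_attention_heads=2, intermediate_size=128, num_labels=3)
model = RobertaForSequenceClassification(config)

input_ids = mindspore.Tensor(
    np.random.randint(3, config.vocab_size, (2, 16)), mindspore.int64)
labels = mindspore.Tensor(np.array([0, 2]), mindspore.int64)  # integer ids -> single-label classification

outputs = model(input_ids=input_ids, labels=labels, return_dict=True)
print(outputs.logits.shape)  # (2, 3): one score per label for each sequence
print(outputs.loss)          # cross-entropy, selected by the problem_type logic above
```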

mindnlp.transformers.models.roberta.modeling_roberta.RobertaForSequenceClassification.__init__(config)

Initializes a new instance of the RobertaForSequenceClassification class.

PARAMETER DESCRIPTION
self

The instance of the class.

config

The configuration object for the Roberta model. It contains the model configuration settings such as num_labels, which is the number of labels for classification. This parameter is required for configuring the model initialization.

TYPE: RobertaConfig

RETURNS DESCRIPTION

None.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
def __init__(self, config):
    """
    Initializes a new instance of the RobertaForSequenceClassification class.

    Args:
        self: The instance of the class.
        config (RobertaConfig): The configuration object for the Roberta model.
            It contains the model configuration settings such as num_labels, which is the number of labels
            for classification. This parameter is required for configuring the model initialization.

    Returns:
        None.

    Raises:
        None.
    """
    super().__init__(config)
    self.num_labels = config.num_labels
    self.config = config

    self.roberta = RobertaModel(config, add_pooling_layer=False)
    self.classifier = RobertaClassificationHead(config)

    # Initialize weights and apply final processing
    self.post_init()

mindnlp.transformers.models.roberta.modeling_roberta.RobertaForSequenceClassification.forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None)

PARAMETER DESCRIPTION
labels

Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy).

TYPE: `mindspore.Tensor` of shape `(batch_size,)`, *optional* DEFAULT: None

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
def forward(
    self,
    input_ids: Optional[mindspore.Tensor] = None,
    attention_mask: Optional[mindspore.Tensor] = None,
    token_type_ids: Optional[mindspore.Tensor] = None,
    position_ids: Optional[mindspore.Tensor] = None,
    head_mask: Optional[mindspore.Tensor] = None,
    inputs_embeds: Optional[mindspore.Tensor] = None,
    labels: Optional[mindspore.Tensor] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Union[Tuple[mindspore.Tensor], SequenceClassifierOutput]:
    r"""
    Args:
        labels (`mindspore.Tensor` of shape `(batch_size,)`, *optional*):
            Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
            config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
            `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
    """
    return_dict = (
        return_dict if return_dict is not None else self.config.use_return_dict
    )

    outputs = self.roberta(
        input_ids,
        attention_mask=attention_mask,
        token_type_ids=token_type_ids,
        position_ids=position_ids,
        head_mask=head_mask,
        inputs_embeds=inputs_embeds,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        return_dict=return_dict,
    )
    sequence_output = outputs[0]
    logits = self.classifier(sequence_output)

    loss = None
    if labels is not None:
        if self.config.problem_type is None:
            if self.num_labels == 1:
                self.config.problem_type = "regression"
            elif self.num_labels > 1 and (
                labels.dtype in (mindspore.int32, mindspore.int64)
            ):
                self.config.problem_type = "single_label_classification"
            else:
                self.config.problem_type = "multi_label_classification"

        if self.config.problem_type == "regression":
            if self.num_labels == 1:
                loss = F.mse_loss(logits.squeeze(), labels.squeeze())
            else:
                loss = F.mse_loss(logits, labels)
        elif self.config.problem_type == "single_label_classification":
            loss = F.cross_entropy(
                logits.view(-1, self.num_labels), labels.view(-1)
            )
        elif self.config.problem_type == "multi_label_classification":
            loss = F.binary_cross_entropy_with_logits(logits, labels)

    if not return_dict:
        output = (logits,) + outputs[2:]
        return ((loss,) + output) if loss is not None else output

    return SequenceClassifierOutput(
        loss=loss,
        logits=logits,
        hidden_states=outputs.hidden_states,
        attentions=outputs.attentions,
    )
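
The problem_type branch above chooses the loss from the label dtype and `num_labels`: a single output with float targets is treated as regression (mean-squared error), integer targets with several labels as single-label classification (cross-entropy), and float multi-hot targets as multi-label classification (BCE with logits). A standalone sketch of the first two cases, using plain MindSpore ops (the listing's `F` helper is assumed to behave analogously):

```python
import mindspore
from mindspore import ops

# Single-label classification: integer class ids, cross-entropy over num_labels scores.
logits = mindspore.Tensor([[1.2, -0.3, 0.1], [0.0, 2.0, -1.0]])  # (batch=2, num_labels=3)
labels = mindspore.Tensor([2, 1], mindspore.int64)
print(ops.cross_entropy(logits, labels))

# Regression (num_labels == 1): float targets, mean-squared error.
logits_reg = mindspore.Tensor([[0.7], [1.4]])
targets = mindspore.Tensor([[1.0], [1.5]])
print(ops.mse_loss(logits_reg.squeeze(), targets.squeeze()))
```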

mindnlp.transformers.models.roberta.modeling_roberta.RobertaForTokenClassification

Bases: RobertaPreTrainedModel

This class represents a Roberta model for token classification. It is a subclass of the RobertaPreTrainedModel.

Class Attributes
  • num_labels (int): The number of labels for token classification.
  • roberta (RobertaModel): The RoBERTa model.
  • dropout (Dropout): The dropout layer.
  • classifier (Dense): The classifier layer.
METHOD DESCRIPTION
__init__

Initializes the RobertaForTokenClassification instance with the given configuration.

ATTRIBUTE DESCRIPTION
return_dict

Indicates whether to return a dictionary as output.

TYPE: bool

PARAMETER DESCRIPTION
input_ids

The input tensor of shape (batch_size, sequence_length).

TYPE: Optional[Tensor]

attention_mask

The attention mask tensor of shape (batch_size, sequence_length).

TYPE: Optional[Tensor]

token_type_ids

The token type IDs tensor of shape (batch_size, sequence_length).

TYPE: Optional[Tensor]

position_ids

The position IDs tensor of shape (batch_size, sequence_length).

TYPE: Optional[Tensor]

head_mask

The head mask tensor of shape (batch_size, num_heads, sequence_length, sequence_length).

TYPE: Optional[Tensor]

inputs_embeds

The embedded inputs tensor of shape (batch_size, sequence_length, hidden_size).

TYPE: Optional[Tensor]

labels

The labels tensor of shape (batch_size, sequence_length).

TYPE: Optional[Tensor]

output_attentions

Indicates whether to output attentions.

TYPE: Optional[bool]

output_hidden_states

Indicates whether to output hidden states.

TYPE: Optional[bool]

return_dict

Indicates whether to return a dictionary as output.

TYPE: Optional[bool]

RETURNS DESCRIPTION

Conditional Return:

  • If return_dict is False, returns a tuple containing the loss tensor, logits tensor, and the remaining outputs.
  • If return_dict is True, returns a TokenClassifierOutput object containing the loss tensor, logits tensor, hidden states, and attentions.
Note

The labels tensor should contain indices in the range [0, num_labels-1] for computing the token classification loss.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
class RobertaForTokenClassification(RobertaPreTrainedModel):

    """
    This class represents a Roberta model for token classification. It is a subclass of the RobertaPreTrainedModel.

    Class Attributes:
        - num_labels (int): The number of labels for token classification.
        - roberta (RobertaModel): The RoBERTa model.
        - dropout (Dropout): The dropout layer.
        - classifier (Dense): The classifier layer.

    Methods:
        __init__(self, config): Initializes the RobertaForTokenClassification instance with the given configuration.
        forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels,
            output_attentions, output_hidden_states, return_dict): Constructs the token classification model and
            returns the output.

    Attributes:
        return_dict (bool): Indicates whether to return a dictionary as output.

    Parameters:
        input_ids (Optional[mindspore.Tensor]): The input tensor of shape (batch_size, sequence_length).
        attention_mask (Optional[mindspore.Tensor]): The attention mask tensor of shape (batch_size, sequence_length).
        token_type_ids (Optional[mindspore.Tensor]): The token type IDs tensor of shape (batch_size, sequence_length).
        position_ids (Optional[mindspore.Tensor]): The position IDs tensor of shape (batch_size, sequence_length).
        head_mask (Optional[mindspore.Tensor]): The head mask tensor of shape (batch_size, num_heads, sequence_length, sequence_length).
        inputs_embeds (Optional[mindspore.Tensor]): The embedded inputs tensor of shape (batch_size, sequence_length, hidden_size).
        labels (Optional[mindspore.Tensor]): The labels tensor of shape (batch_size, sequence_length).
        output_attentions (Optional[bool]): Indicates whether to output attentions.
        output_hidden_states (Optional[bool]): Indicates whether to output hidden states.
        return_dict (Optional[bool]): Indicates whether to return a dictionary as output.

    Returns:
        Conditional Return:

            - If return_dict is False, returns a tuple containing the loss tensor, logits tensor, and the remaining outputs.
            - If return_dict is True, returns a TokenClassifierOutput object containing the loss tensor, logits tensor,
            hidden states, and attentions.

    Note:
        The labels tensor should contain indices in the range [0, num_labels-1] for computing the token
        classification loss.
    """
    def __init__(self, config):
        """
        Initializes a new instance of the `RobertaForTokenClassification` class.

        Args:
            self: The object itself.
            config: A `RobertaConfig` instance containing the configuration parameters for the model.

        Returns:
            None

        Raises:
            None
        """
        super().__init__(config)
        self.num_labels = config.num_labels

        self.roberta = RobertaModel(config, add_pooling_layer=False)
        classifier_dropout = (
            config.classifier_dropout
            if config.classifier_dropout is not None
            else config.hidden_dropout_prob
        )
        self.dropout = nn.Dropout(p=classifier_dropout)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)

        # Initialize weights and apply final processing
        self.post_init()

    def forward(
        self,
        input_ids: Optional[mindspore.Tensor] = None,
        attention_mask: Optional[mindspore.Tensor] = None,
        token_type_ids: Optional[mindspore.Tensor] = None,
        position_ids: Optional[mindspore.Tensor] = None,
        head_mask: Optional[mindspore.Tensor] = None,
        inputs_embeds: Optional[mindspore.Tensor] = None,
        labels: Optional[mindspore.Tensor] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple[mindspore.Tensor], TokenClassifierOutput]:
        r"""
        Args:
            labels (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
                Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`.
        """
        return_dict = (
            return_dict if return_dict is not None else self.config.use_return_dict
        )

        outputs = self.roberta(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )

        sequence_output = outputs[0]

        sequence_output = self.dropout(sequence_output)
        logits = self.classifier(sequence_output)

        loss = None
        if labels is not None:
            loss = F.cross_entropy(logits.view(-1, self.num_labels), labels.view(-1))

        if not return_dict:
            output = (logits,) + outputs[2:]
            return ((loss,) + output) if loss is not None else output

        return TokenClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )
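
A minimal usage sketch for the token-level head, under the same assumptions about the `mindnlp.transformers` re-exports (hypothetical tiny config): logits are produced per token, so their shape is `(batch_size, sequence_length, num_labels)`.

```python
import numpy as np
import mindspore
from mindnlp.transformers import RobertaConfig, RobertaForTokenClassification  # assumed re-exports

config = RobertaConfig(vocab_size=1000, hidden_size=64, num_hidden_layers=2,
                       num_attention_heads=2, intermediate_size=128, num_labels=5)
model = RobertaForTokenClassification(config)

input_ids = mindspore.Tensor(
    np.random.randint(3, config.vocab_size, (2, 12)), mindspore.int64)
labels = mindspore.Tensor(
    np.random.randint(0, config.num_labels, (2, 12)), mindspore.int64)

outputs = model(input_ids=input_ids, labels=labels, return_dict=True)
print(outputs.logits.shape)  # (2, 12, 5): one score per label for every token
print(outputs.loss)
```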

mindnlp.transformers.models.roberta.modeling_roberta.RobertaForTokenClassification.__init__(config)

Initializes a new instance of the RobertaForTokenClassification class.

PARAMETER DESCRIPTION
self

The object itself.

config

A RobertaConfig instance containing the configuration parameters for the model.

RETURNS DESCRIPTION

None

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
def __init__(self, config):
    """
    Initializes a new instance of the `RobertaForTokenClassification` class.

    Args:
        self: The object itself.
        config: A `RobertaConfig` instance containing the configuration parameters for the model.

    Returns:
        None

    Raises:
        None
    """
    super().__init__(config)
    self.num_labels = config.num_labels

    self.roberta = RobertaModel(config, add_pooling_layer=False)
    classifier_dropout = (
        config.classifier_dropout
        if config.classifier_dropout is not None
        else config.hidden_dropout_prob
    )
    self.dropout = nn.Dropout(p=classifier_dropout)
    self.classifier = nn.Linear(config.hidden_size, config.num_labels)

    # Initialize weights and apply final processing
    self.post_init()

mindnlp.transformers.models.roberta.modeling_roberta.RobertaForTokenClassification.forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None)

PARAMETER DESCRIPTION
labels

Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].

TYPE: `mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional* DEFAULT: None

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
def forward(
    self,
    input_ids: Optional[mindspore.Tensor] = None,
    attention_mask: Optional[mindspore.Tensor] = None,
    token_type_ids: Optional[mindspore.Tensor] = None,
    position_ids: Optional[mindspore.Tensor] = None,
    head_mask: Optional[mindspore.Tensor] = None,
    inputs_embeds: Optional[mindspore.Tensor] = None,
    labels: Optional[mindspore.Tensor] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Union[Tuple[mindspore.Tensor], TokenClassifierOutput]:
    r"""
    Args:
        labels (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
            Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`.
    """
    return_dict = (
        return_dict if return_dict is not None else self.config.use_return_dict
    )

    outputs = self.roberta(
        input_ids,
        attention_mask=attention_mask,
        token_type_ids=token_type_ids,
        position_ids=position_ids,
        head_mask=head_mask,
        inputs_embeds=inputs_embeds,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        return_dict=return_dict,
    )

    sequence_output = outputs[0]

    sequence_output = self.dropout(sequence_output)
    logits = self.classifier(sequence_output)

    loss = None
    if labels is not None:
        loss = F.cross_entropy(logits.view(-1, self.num_labels), labels.view(-1))

    if not return_dict:
        output = (logits,) + outputs[2:]
        return ((loss,) + output) if loss is not None else output

    return TokenClassifierOutput(
        loss=loss,
        logits=logits,
        hidden_states=outputs.hidden_states,
        attentions=outputs.attentions,
    )
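
To turn the per-token logits into predicted tags, the usual step is an argmax over the label dimension. A tiny sketch with toy logits:

```python
import mindspore

# Toy logits for a batch of 1, 4 tokens, 3 labels.
logits = mindspore.Tensor([[[2.0, 0.1, 0.0],
                            [0.0, 1.5, 0.2],
                            [0.1, 0.0, 3.0],
                            [1.0, 0.9, 0.8]]])

predicted_tags = logits.argmax(axis=-1)
print(predicted_tags)  # [[0 1 2 0]]: one label id per token
```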

mindnlp.transformers.models.roberta.modeling_roberta.RobertaIntermediate

Bases: Module

Represents the intermediate layer of the Roberta model for processing hidden states.

This class inherits from nn.Module and provides methods for forwarding the intermediate layer of the Roberta model.

ATTRIBUTE DESCRIPTION
dense

A dense layer with specified hidden size and intermediate size.

TYPE: Linear

intermediate_act_fn

Activation function applied to hidden states.

TYPE: function

METHOD DESCRIPTION
__init__

Initializes the RobertaIntermediate instance with the given configuration.

forward

Constructs the intermediate layer by passing the hidden states through the dense layer and activation function.

Example
>>> config = RobertaConfig(hidden_size=768, intermediate_size=3072, hidden_act='gelu')
>>> intermediate_layer = RobertaIntermediate(config)
>>> hidden_states = intermediate_layer.forward(input_hidden_states)
Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
class RobertaIntermediate(nn.Module):

    """
    Represents the intermediate layer of the Roberta model for processing hidden states.

    This class inherits from nn.Module and provides methods for forwarding the intermediate layer of the Roberta model.

    Attributes:
        dense (nn.Linear): A dense layer with specified hidden size and intermediate size.
        intermediate_act_fn (function): Activation function applied to hidden states.

    Methods:
        __init__: Initializes the RobertaIntermediate instance with the given configuration.
        forward: Constructs the intermediate layer by passing the hidden states through the dense layer and activation function.

    Example:
        ```python
        >>> config = RobertaConfig(hidden_size=768, intermediate_size=3072, hidden_act='gelu')
        >>> intermediate_layer = RobertaIntermediate(config)
        >>> hidden_states = intermediate_layer.forward(input_hidden_states)
        ```

    """
    def __init__(self, config):
        """
        Initializes a new instance of the RobertaIntermediate class.

        Args:
            self: The instance of the class.
            config: An object of type 'config' containing configuration parameters for the intermediate layer.
                It is expected to have attributes like 'hidden_size', 'intermediate_size', and 'hidden_act'.

        Returns:
            None.

        Raises:
            TypeError: If the 'config' parameter is not provided or is not of the expected type.
            ValueError: If the 'config' parameter does not contain the required attributes.
        """
        super().__init__()
        self.dense = nn.Linear(config.hidden_size, config.intermediate_size)
        if isinstance(config.hidden_act, str):
            self.intermediate_act_fn = ACT2FN[config.hidden_act]
        else:
            self.intermediate_act_fn = config.hidden_act

    def forward(self, hidden_states: mindspore.Tensor) -> mindspore.Tensor:
        """
        This method forwards the intermediate representation of the Roberta model.

        Args:
            self (RobertaIntermediate): The instance of the RobertaIntermediate class.
            hidden_states (mindspore.Tensor): The input tensor representing the hidden states.

        Returns:
            mindspore.Tensor: A tensor representing the intermediate states of the Roberta model.

        Raises:
            None
        """
        hidden_states = self.dense(hidden_states)
        hidden_states = self.intermediate_act_fn(hidden_states)
        return hidden_states

mindnlp.transformers.models.roberta.modeling_roberta.RobertaIntermediate.__init__(config)

Initializes a new instance of the RobertaIntermediate class.

PARAMETER DESCRIPTION
self

The instance of the class.

config

An object of type 'config' containing configuration parameters for the intermediate layer. It is expected to have attributes like 'hidden_size', 'intermediate_size', and 'hidden_act'.

RETURNS DESCRIPTION

None.

RAISES DESCRIPTION
TypeError

If the 'config' parameter is not provided or is not of the expected type.

ValueError

If the 'config' parameter does not contain the required attributes.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
def __init__(self, config):
    """
    Initializes a new instance of the RobertaIntermediate class.

    Args:
        self: The instance of the class.
        config: An object of type 'config' containing configuration parameters for the intermediate layer.
            It is expected to have attributes like 'hidden_size', 'intermediate_size', and 'hidden_act'.

    Returns:
        None.

    Raises:
        TypeError: If the 'config' parameter is not provided or is not of the expected type.
        ValueError: If the 'config' parameter does not contain the required attributes.
    """
    super().__init__()
    self.dense = nn.Linear(config.hidden_size, config.intermediate_size)
    if isinstance(config.hidden_act, str):
        self.intermediate_act_fn = ACT2FN[config.hidden_act]
    else:
        self.intermediate_act_fn = config.hidden_act

mindnlp.transformers.models.roberta.modeling_roberta.RobertaIntermediate.forward(hidden_states)

This method forwards the intermediate representation of the Roberta model.

PARAMETER DESCRIPTION
self

The instance of the RobertaIntermediate class.

TYPE: RobertaIntermediate

hidden_states

The input tensor representing the hidden states.

TYPE: Tensor

RETURNS DESCRIPTION
Tensor

mindspore.Tensor: A tensor representing the intermediate states of the Roberta model.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
def forward(self, hidden_states: mindspore.Tensor) -> mindspore.Tensor:
    """
    This method forwards the intermediate representation of the Roberta model.

    Args:
        self (RobertaIntermediate): The instance of the RobertaIntermediate class.
        hidden_states (mindspore.Tensor): The input tensor representing the hidden states.

    Returns:
        mindspore.Tensor: A tensor representing the intermediate states of the Roberta model.

    Raises:
        None
    """
    hidden_states = self.dense(hidden_states)
    hidden_states = self.intermediate_act_fn(hidden_states)
    return hidden_states
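
A minimal usage sketch of this module (an illustration under stated assumptions, not a documented library example): it assumes `RobertaConfig` is exported from `mindnlp.transformers` and that calling the module dispatches to `forward`.

```python
# Hedged sketch: RobertaIntermediate expands hidden_size -> intermediate_size and
# applies the configured activation. Config values below are illustrative only.
import mindspore
from mindspore import ops
from mindnlp.transformers import RobertaConfig  # assumed export path
from mindnlp.transformers.models.roberta.modeling_roberta import RobertaIntermediate

config = RobertaConfig(hidden_size=768, intermediate_size=3072, hidden_act="gelu")
intermediate = RobertaIntermediate(config)

hidden_states = ops.ones((2, 8, config.hidden_size), mindspore.float32)  # (batch, seq, hidden)
out = intermediate(hidden_states)
print(out.shape)  # expected (2, 8, 3072): per-token expansion to intermediate_size
```

Because `hidden_act` may also be a callable, a custom activation function can be supplied directly in the config instead of a string key into `ACT2FN`.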

mindnlp.transformers.models.roberta.modeling_roberta.RobertaLMHead

Bases: Module

Roberta Head for masked language modeling.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py, lines 1858-1929
class RobertaLMHead(nn.Module):
    """Roberta Head for masked language modeling."""
    def __init__(self, config):
        """
        Initialize the RobertaLMHead class.

        Args:
            self (RobertaLMHead): The instance of the RobertaLMHead class.
            config (object):
                An object containing configuration parameters.

                - hidden_size (int): The size of the hidden layer.
                - layer_norm_eps (float): Epsilon value for layer normalization.
                - vocab_size (int): The size of the vocabulary.

        Returns:
            None.

        Raises:
            TypeError: If config is not provided or is not an object.
            ValueError: If the config object does not contain the required parameters.
        """
        super().__init__()
        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
        self.layer_norm = nn.LayerNorm(
            (config.hidden_size,), eps=config.layer_norm_eps
        )

        self.decoder = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
        self.bias = Parameter(initializer("zeros", config.vocab_size), "bias")
        self.decoder.bias = self.bias

    def forward(self, features):
        """
        Constructs the output of the language model head for a given set of features.

        Args:
            self (RobertaLMHead): The instance of the RobertaLMHead class.
            features (tensor): The input features for forwarding the output.
                It should be a tensor of shape (batch_size, sequence_length, hidden_size).

        Returns:
            tensor: The forwarded output tensor of shape (batch_size, sequence_length, hidden_size).

        Raises:
            ValueError: If the input features tensor is not of the expected shape.
            RuntimeError: If there is an issue in the execution of the method.
        """
        x = self.dense(features)
        x = F.gelu(x)
        x = self.layer_norm(x)

        # project back to size of vocabulary with bias
        x = self.decoder(x)

        return x

    def _tie_weights(self):
        """
        This method ties the weights of the decoder's bias to the model's bias.

        Args:
            self (RobertaLMHead): The instance of the RobertaLMHead class.
                This parameter is used to access the decoder bias and tie it to the model's bias.

        Returns:
            None.

        Raises:
            This method does not raise any exceptions.
        """
        self.bias = self.decoder.bias

mindnlp.transformers.models.roberta.modeling_roberta.RobertaLMHead.__init__(config)

Initialize the RobertaLMHead class.

PARAMETER DESCRIPTION
self

The instance of the RobertaLMHead class.

TYPE: RobertaLMHead

config

An object containing configuration parameters.

  • hidden_size (int): The size of the hidden layer.
  • layer_norm_eps (float): Epsilon value for layer normalization.
  • vocab_size (int): The size of the vocabulary.

TYPE: object

RETURNS DESCRIPTION

None.

RAISES DESCRIPTION
TypeError

If config is not provided or is not an object.

ValueError

If the config object does not contain the required parameters.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py, lines 1860-1888
def __init__(self, config):
    """
    Initialize the RobertaLMHead class.

    Args:
        self (RobertaLMHead): The instance of the RobertaLMHead class.
        config (object):
            An object containing configuration parameters.

            - hidden_size (int): The size of the hidden layer.
            - layer_norm_eps (float): Epsilon value for layer normalization.
            - vocab_size (int): The size of the vocabulary.

    Returns:
        None.

    Raises:
        TypeError: If config is not provided or is not an object.
        ValueError: If the config object does not contain the required parameters.
    """
    super().__init__()
    self.dense = nn.Linear(config.hidden_size, config.hidden_size)
    self.layer_norm = nn.LayerNorm(
        (config.hidden_size,), eps=config.layer_norm_eps
    )

    self.decoder = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
    self.bias = Parameter(initializer("zeros", config.vocab_size), "bias")
    self.decoder.bias = self.bias

mindnlp.transformers.models.roberta.modeling_roberta.RobertaLMHead.forward(features)

Constructs the output of the language model head for a given set of features.

PARAMETER DESCRIPTION
self

The instance of the RobertaLMHead class.

TYPE: RobertaLMHead

features

The input features for forwarding the output. It should be a tensor of shape (batch_size, sequence_length, hidden_size).

TYPE: tensor

RETURNS DESCRIPTION
tensor

The forwarded output tensor of shape (batch_size, sequence_length, hidden_size).

RAISES DESCRIPTION
ValueError

If the input features tensor is not of the expected shape.

RuntimeError

If there is an issue in the execution of the method.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py, lines 1890-1913
def forward(self, features):
    """
    Constructs the output of the language model head for a given set of features.

    Args:
        self (RobertaLMHead): The instance of the RobertaLMHead class.
        features (tensor): The input features for forwarding the output.
            It should be a tensor of shape (batch_size, sequence_length, hidden_size).

    Returns:
        tensor: The forwarded output tensor of shape (batch_size, sequence_length, hidden_size).

    Raises:
        ValueError: If the input features tensor is not of the expected shape.
        RuntimeError: If there is an issue in the execution of the method.
    """
    x = self.dense(features)
    x = F.gelu(x)
    x = self.layer_norm(x)

    # project back to size of vocabulary with bias
    x = self.decoder(x)

    return x
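
The head is a small projection stack (dense, GELU, LayerNorm, then the vocabulary projection with a tied bias). A hedged usage sketch, assuming `RobertaConfig` is exported from `mindnlp.transformers`:

```python
# Hedged sketch: project encoder features to per-token vocabulary logits.
import mindspore
from mindspore import ops
from mindnlp.transformers import RobertaConfig  # assumed export path
from mindnlp.transformers.models.roberta.modeling_roberta import RobertaLMHead

config = RobertaConfig(hidden_size=768, vocab_size=50265, layer_norm_eps=1e-5)
lm_head = RobertaLMHead(config)

features = ops.ones((2, 8, config.hidden_size), mindspore.float32)  # encoder output
logits = lm_head(features)
print(logits.shape)  # expected (2, 8, 50265): one score per vocabulary token
```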

mindnlp.transformers.models.roberta.modeling_roberta.RobertaLayer

Bases: Module

Represents a layer of the Roberta model for natural language processing tasks. This layer includes self-attention and cross-attention mechanisms.

This class inherits from nn.Module and contains methods for initializing the layer and forwarding the layer's functionality.

ATTRIBUTE DESCRIPTION
chunk_size_feed_forward

The chunk size for the feed-forward computation.

TYPE: int

seq_len_dim

The dimension for sequence length.

TYPE: int

attention

The self-attention mechanism used in the layer.

TYPE: RobertaAttention

is_decoder

Indicates if the layer is used as a decoder model.

TYPE: bool

add_cross_attention

Indicates if cross-attention is added to the layer.

TYPE: bool

crossattention

The cross-attention mechanism used in the layer, if cross-attention is added.

TYPE: RobertaAttention

intermediate

The intermediate processing module of the layer.

TYPE: RobertaIntermediate

output

The output module of the layer.

TYPE: RobertaOutput

METHOD DESCRIPTION
__init__

Initializes the RobertaLayer with the given configuration.

forward

Constructs the layer using the given input and arguments, applying self-attention and cross-attention if applicable.

feed_forward_chunk

Performs the feed-forward computation using the given attention output.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py, lines 766-936
class RobertaLayer(nn.Module):

    """
    Represents a layer of the Roberta model for natural language processing tasks.
    This layer includes self-attention and cross-attention mechanisms.

    This class inherits from nn.Module and contains methods for initializing the layer and forwarding the
    layer's functionality.

    Attributes:
        chunk_size_feed_forward (int): The chunk size for the feed-forward computation.
        seq_len_dim (int): The dimension for sequence length.
        attention (RobertaAttention): The self-attention mechanism used in the layer.
        is_decoder (bool): Indicates if the layer is used as a decoder model.
        add_cross_attention (bool): Indicates if cross-attention is added to the layer.
        crossattention (RobertaAttention): The cross-attention mechanism used in the layer, if cross-attention is added.
        intermediate (RobertaIntermediate): The intermediate processing module of the layer.
        output (RobertaOutput): The output module of the layer.

    Methods:
        __init__: Initializes the RobertaLayer with the given configuration.
        forward: Constructs the layer using the given input and arguments,
            applying self-attention and cross-attention if applicable.
        feed_forward_chunk: Performs the feed-forward computation using the given attention output.
    """
    def __init__(self, config):
        """
        Initializes an instance of the `RobertaLayer` class.

        Args:
            self: The instance of the `RobertaLayer` class.
            config: An object of type `Config` containing the configuration settings for the model.

        Returns:
            None.

        Raises:
            ValueError: If `add_cross_attention` is set to `True` but the model is not used as a decoder model.

        """
        super().__init__()
        self.chunk_size_feed_forward = config.chunk_size_feed_forward
        self.seq_len_dim = 1
        self.attention = RobertaAttention(config)
        self.is_decoder = config.is_decoder
        self.add_cross_attention = config.add_cross_attention
        if self.add_cross_attention:
            if not self.is_decoder:
                raise ValueError(f"{self} should be used as a decoder model if cross attention is added")
            self.crossattention = RobertaAttention(config, position_embedding_type="absolute")
        self.intermediate = RobertaIntermediate(config)
        self.output = RobertaOutput(config)

    def forward(
        self,
        hidden_states: mindspore.Tensor,
        attention_mask: Optional[mindspore.Tensor] = None,
        head_mask: Optional[mindspore.Tensor] = None,
        encoder_hidden_states: Optional[mindspore.Tensor] = None,
        encoder_attention_mask: Optional[mindspore.Tensor] = None,
        past_key_value: Optional[Tuple[Tuple[mindspore.Tensor]]] = None,
        output_attentions: Optional[bool] = False,
    ) -> Tuple[mindspore.Tensor]:
        """
        Constructs a single layer of the Roberta model.

        Args:
            self (RobertaLayer): The instance of the RobertaLayer class.
            hidden_states (mindspore.Tensor): The input tensor of shape (batch_size, sequence_length, hidden_size)
                representing the hidden states.
            attention_mask (Optional[mindspore.Tensor]): An optional tensor of shape (batch_size, sequence_length)
                representing the attention mask. Defaults to None.
            head_mask (Optional[mindspore.Tensor]): An optional tensor of shape
                (num_attention_heads, sequence_length, sequence_length) representing the head mask. Defaults to None.
            encoder_hidden_states (Optional[mindspore.Tensor]): An optional tensor of shape
                (batch_size, encoder_sequence_length, hidden_size) representing the hidden states of the encoder.
                Defaults to None.
            encoder_attention_mask (Optional[mindspore.Tensor]): An optional tensor of shape
                (batch_size, encoder_sequence_length) representing the attention mask for the encoder. Defaults to None.
            past_key_value (Optional[Tuple[Tuple[mindspore.Tensor]]]): An optional tuple containing past key-value
                tensors. Defaults to None.
            output_attentions (Optional[bool]): An optional boolean value indicating whether to output attentions.
                Defaults to False.

        Returns:
            Tuple[mindspore.Tensor]:
                A tuple containing the following:

                - layer_output (mindspore.Tensor): The output tensor of shape (batch_size, sequence_length, hidden_size)
                representing the layer output.
                - present_key_value (mindspore.Tensor): The tensor of shape
                (batch_size, num_heads, sequence_length, hidden_size), containing the present key-value tensors.
                Only returned if self.is_decoder is True.

        Raises:
            ValueError: If `encoder_hidden_states` are passed, and `self` is not instantiated with cross-attention
                layers by setting `config.add_cross_attention=True`.
        """
        # decoder uni-directional self-attention cached key/values tuple is at positions 1,2
        self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None
        self_attention_outputs = self.attention(
            hidden_states,
            attention_mask,
            head_mask,
            output_attentions=output_attentions,
            past_key_value=self_attn_past_key_value,
        )
        attention_output = self_attention_outputs[0]

        # if decoder, the last output is tuple of self-attn cache
        if self.is_decoder:
            outputs = self_attention_outputs[1:-1]
            present_key_value = self_attention_outputs[-1]
        else:
            outputs = self_attention_outputs[1:]  # add self attentions if we output attention weights

        cross_attn_present_key_value = None
        if self.is_decoder and encoder_hidden_states is not None:
            if not hasattr(self, "crossattention"):
                raise ValueError(
                    f"If `encoder_hidden_states` are passed, {self} has to be instantiated with cross-attention layers"
                    " by setting `config.add_cross_attention=True`"
                )

            # cross_attn cached key/values tuple is at positions 3,4 of past_key_value tuple
            cross_attn_past_key_value = past_key_value[-2:] if past_key_value is not None else None
            cross_attention_outputs = self.crossattention(
                attention_output,
                attention_mask,
                head_mask,
                encoder_hidden_states,
                encoder_attention_mask,
                cross_attn_past_key_value,
                output_attentions,
            )
            attention_output = cross_attention_outputs[0]
            outputs = outputs + cross_attention_outputs[1:-1]  # add cross attentions if we output attention weights

            # add cross-attn cache to positions 3,4 of present_key_value tuple
            cross_attn_present_key_value = cross_attention_outputs[-1]
            present_key_value = present_key_value + cross_attn_present_key_value

        layer_output = apply_chunking_to_forward(
            self.feed_forward_chunk, self.chunk_size_feed_forward, self.seq_len_dim, attention_output
        )
        outputs = (layer_output,) + outputs

        # if decoder, return the attn key/values as the last output
        if self.is_decoder:
            outputs = outputs + (present_key_value,)

        return outputs

    def feed_forward_chunk(self, attention_output):
        """
        Method that carries out feed-forward processing on the attention output in a RobertaLayer.

        Args:
            self (RobertaLayer): The instance of the RobertaLayer class.
            attention_output (tensor): The input tensor representing the attention output.
                This tensor is expected to have a specific shape and structure required for processing.

        Returns:
            mindspore.Tensor: The layer output obtained by applying the intermediate and output
                modules to the attention output.

        Raises:
            None.
        """
        intermediate_output = self.intermediate(attention_output)
        layer_output = self.output(intermediate_output, attention_output)
        return layer_output

mindnlp.transformers.models.roberta.modeling_roberta.RobertaLayer.__init__(config)

Initializes an instance of the RobertaLayer class.

PARAMETER DESCRIPTION
self

The instance of the RobertaLayer class.

config

An object of type Config containing the configuration settings for the model.

RETURNS DESCRIPTION

None.

RAISES DESCRIPTION
ValueError

If add_cross_attention is set to True but the model is not used as a decoder model.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
791
792
793
794
795
796
797
798
799
800
801
802
803
804
805
806
807
808
809
810
811
812
813
814
815
816
817
def __init__(self, config):
    """
    Initializes an instance of the `RobertaLayer` class.

    Args:
        self: The instance of the `RobertaLayer` class.
        config: An object of type `Config` containing the configuration settings for the model.

    Returns:
        None.

    Raises:
        ValueError: If `add_cross_attention` is set to `True` but the model is not used as a decoder model.

    """
    super().__init__()
    self.chunk_size_feed_forward = config.chunk_size_feed_forward
    self.seq_len_dim = 1
    self.attention = RobertaAttention(config)
    self.is_decoder = config.is_decoder
    self.add_cross_attention = config.add_cross_attention
    if self.add_cross_attention:
        if not self.is_decoder:
            raise ValueError(f"{self} should be used as a decoder model if cross attention is added")
        self.crossattention = RobertaAttention(config, position_embedding_type="absolute")
    self.intermediate = RobertaIntermediate(config)
    self.output = RobertaOutput(config)

mindnlp.transformers.models.roberta.modeling_roberta.RobertaLayer.feed_forward_chunk(attention_output)

Method that carries out feed-forward processing on the attention output in a RobertaLayer.

PARAMETER DESCRIPTION
self

The instance of the RobertaLayer class.

TYPE: RobertaLayer

attention_output

The input tensor representing the attention output. This tensor is expected to have a specific shape and structure required for processing.

TYPE: tensor

RETURNS DESCRIPTION

The layer output obtained by applying the intermediate and output modules to the attention output.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
919
920
921
922
923
924
925
926
927
928
929
930
931
932
933
934
935
936
def feed_forward_chunk(self, attention_output):
    """
    Method that carries out feed-forward processing on the attention output in a RobertaLayer.

    Args:
        self (RobertaLayer): The instance of the RobertaLayer class.
        attention_output (tensor): The input tensor representing the attention output.
            This tensor is expected to have a specific shape and structure required for processing.

    Returns:
        mindspore.Tensor: The layer output obtained by applying the intermediate and output
            modules to the attention output.

    Raises:
        None.
    """
    intermediate_output = self.intermediate(attention_output)
    layer_output = self.output(intermediate_output, attention_output)
    return layer_output
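
`feed_forward_chunk` is the callback handed to `apply_chunking_to_forward`, which (when `config.chunk_size_feed_forward > 0`) slices the sequence dimension, runs the feed-forward block slice by slice, and concatenates the results. A standalone sketch of that idea, using a toy stand-in rather than the library helper:

```python
# Illustration only: chunking the sequence dimension trades peak memory for extra
# kernel launches; the concatenated result equals the unchunked one because the
# feed-forward block acts on each position independently.
import mindspore
from mindspore import ops

def feed_forward(x):
    return x * 2.0  # stand-in for the intermediate + output modules

x = ops.ones((2, 8, 4), mindspore.float32)   # (batch, seq, hidden)
chunk_size, seq_dim = 4, 1

chunks = ops.split(x, chunk_size, axis=seq_dim)
chunked = ops.cat([feed_forward(c) for c in chunks], axis=seq_dim)
full = feed_forward(x)
print(bool((chunked == full).all()))  # True
```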

mindnlp.transformers.models.roberta.modeling_roberta.RobertaLayer.forward(hidden_states, attention_mask=None, head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, past_key_value=None, output_attentions=False)

Constructs a single layer of the Roberta model.

PARAMETER DESCRIPTION
self

The instance of the RobertaLayer class.

TYPE: RobertaLayer

hidden_states

The input tensor of shape (batch_size, sequence_length, hidden_size) representing the hidden states.

TYPE: Tensor

attention_mask

An optional tensor of shape (batch_size, sequence_length) representing the attention mask. Defaults to None.

TYPE: Optional[Tensor] DEFAULT: None

head_mask

An optional tensor of shape (num_attention_heads, sequence_length, sequence_length) representing the head mask. Defaults to None.

TYPE: Optional[Tensor] DEFAULT: None

encoder_hidden_states

An optional tensor of shape (batch_size, encoder_sequence_length, hidden_size) representing the hidden states of the encoder. Defaults to None.

TYPE: Optional[Tensor] DEFAULT: None

encoder_attention_mask

An optional tensor of shape (batch_size, encoder_sequence_length) representing the attention mask for the encoder. Defaults to None.

TYPE: Optional[Tensor] DEFAULT: None

past_key_value

An optional tuple containing past key-value tensors. Defaults to None.

TYPE: Optional[Tuple[Tuple[Tensor]]] DEFAULT: None

output_attentions

An optional boolean value indicating whether to output attentions. Defaults to False.

TYPE: Optional[bool] DEFAULT: False

RETURNS DESCRIPTION
Tuple[Tensor]

Tuple[mindspore.Tensor]: A tuple containing the following:

  • layer_output (mindspore.Tensor): The output tensor of shape (batch_size, sequence_length, hidden_size) representing the layer output.
  • present_key_value (mindspore.Tensor): The tensor of shape (batch_size, num_heads, sequence_length, hidden_size), containing the present key-value tensors. Only returned if self.is_decoder is True.
RAISES DESCRIPTION
ValueError

If encoder_hidden_states are passed, and self is not instantiated with cross-attention layers by setting config.add_cross_attention=True.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
819
820
821
822
823
824
825
826
827
828
829
830
831
832
833
834
835
836
837
838
839
840
841
842
843
844
845
846
847
848
849
850
851
852
853
854
855
856
857
858
859
860
861
862
863
864
865
866
867
868
869
870
871
872
873
874
875
876
877
878
879
880
881
882
883
884
885
886
887
888
889
890
891
892
893
894
895
896
897
898
899
900
901
902
903
904
905
906
907
908
909
910
911
912
913
914
915
916
917
def forward(
    self,
    hidden_states: mindspore.Tensor,
    attention_mask: Optional[mindspore.Tensor] = None,
    head_mask: Optional[mindspore.Tensor] = None,
    encoder_hidden_states: Optional[mindspore.Tensor] = None,
    encoder_attention_mask: Optional[mindspore.Tensor] = None,
    past_key_value: Optional[Tuple[Tuple[mindspore.Tensor]]] = None,
    output_attentions: Optional[bool] = False,
) -> Tuple[mindspore.Tensor]:
    """
    Constructs a single layer of the Roberta model.

    Args:
        self (RobertaLayer): The instance of the RobertaLayer class.
        hidden_states (mindspore.Tensor): The input tensor of shape (batch_size, sequence_length, hidden_size)
            representing the hidden states.
        attention_mask (Optional[mindspore.Tensor]): An optional tensor of shape (batch_size, sequence_length)
            representing the attention mask. Defaults to None.
        head_mask (Optional[mindspore.Tensor]): An optional tensor of shape
            (num_attention_heads, sequence_length, sequence_length) representing the head mask. Defaults to None.
        encoder_hidden_states (Optional[mindspore.Tensor]): An optional tensor of shape
            (batch_size, encoder_sequence_length, hidden_size) representing the hidden states of the encoder.
            Defaults to None.
        encoder_attention_mask (Optional[mindspore.Tensor]): An optional tensor of shape
            (batch_size, encoder_sequence_length) representing the attention mask for the encoder. Defaults to None.
        past_key_value (Optional[Tuple[Tuple[mindspore.Tensor]]]): An optional tuple containing past key-value
            tensors. Defaults to None.
        output_attentions (Optional[bool]): An optional boolean value indicating whether to output attentions.
            Defaults to False.

    Returns:
        Tuple[mindspore.Tensor]:
            A tuple containing the following:

            - layer_output (mindspore.Tensor): The output tensor of shape (batch_size, sequence_length, hidden_size)
            representing the layer output.
            - present_key_value (mindspore.Tensor): The tensor of shape
            (batch_size, num_heads, sequence_length, hidden_size), containing the present key-value tensors.
            Only returned if self.is_decoder is True.

    Raises:
        ValueError: If `encoder_hidden_states` are passed, and `self` is not instantiated with cross-attention
            layers by setting `config.add_cross_attention=True`.
    """
    # decoder uni-directional self-attention cached key/values tuple is at positions 1,2
    self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None
    self_attention_outputs = self.attention(
        hidden_states,
        attention_mask,
        head_mask,
        output_attentions=output_attentions,
        past_key_value=self_attn_past_key_value,
    )
    attention_output = self_attention_outputs[0]

    # if decoder, the last output is tuple of self-attn cache
    if self.is_decoder:
        outputs = self_attention_outputs[1:-1]
        present_key_value = self_attention_outputs[-1]
    else:
        outputs = self_attention_outputs[1:]  # add self attentions if we output attention weights

    cross_attn_present_key_value = None
    if self.is_decoder and encoder_hidden_states is not None:
        if not hasattr(self, "crossattention"):
            raise ValueError(
                f"If `encoder_hidden_states` are passed, {self} has to be instantiated with cross-attention layers"
                " by setting `config.add_cross_attention=True`"
            )

        # cross_attn cached key/values tuple is at positions 3,4 of past_key_value tuple
        cross_attn_past_key_value = past_key_value[-2:] if past_key_value is not None else None
        cross_attention_outputs = self.crossattention(
            attention_output,
            attention_mask,
            head_mask,
            encoder_hidden_states,
            encoder_attention_mask,
            cross_attn_past_key_value,
            output_attentions,
        )
        attention_output = cross_attention_outputs[0]
        outputs = outputs + cross_attention_outputs[1:-1]  # add cross attentions if we output attention weights

        # add cross-attn cache to positions 3,4 of present_key_value tuple
        cross_attn_present_key_value = cross_attention_outputs[-1]
        present_key_value = present_key_value + cross_attn_present_key_value

    layer_output = apply_chunking_to_forward(
        self.feed_forward_chunk, self.chunk_size_feed_forward, self.seq_len_dim, attention_output
    )
    outputs = (layer_output,) + outputs

    # if decoder, return the attn key/values as the last output
    if self.is_decoder:
        outputs = outputs + (present_key_value,)

    return outputs
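
A hedged usage sketch for a single encoder-style layer (illustrative config values; assumes `RobertaConfig` is exported from `mindnlp.transformers`):

```python
import mindspore
from mindspore import ops
from mindnlp.transformers import RobertaConfig  # assumed export path
from mindnlp.transformers.models.roberta.modeling_roberta import RobertaLayer

config = RobertaConfig(hidden_size=64, num_attention_heads=4, intermediate_size=128)
layer = RobertaLayer(config)

hidden_states = ops.ones((2, 10, config.hidden_size), mindspore.float32)
outputs = layer(hidden_states, output_attentions=True)
layer_output, attn_probs = outputs[0], outputs[1]
print(layer_output.shape)  # expected (2, 10, 64)
print(attn_probs.shape)    # expected (2, 4, 10, 10): attention probabilities per head
```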

mindnlp.transformers.models.roberta.modeling_roberta.RobertaModel

Bases: RobertaPreTrainedModel

The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in *Attention Is All You Need* (https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.

To behave as a decoder, the model needs to be initialized with the is_decoder argument of the configuration set to True. To be used in a Seq2Seq model, the model needs to be initialized with both the is_decoder and add_cross_attention arguments set to True; an encoder_hidden_states input is then expected in the forward pass.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
Source code in mindnlp\transformers\models\roberta\modeling_roberta.py, lines 1176-1414
class RobertaModel(RobertaPreTrainedModel):
    """
    The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
    cross-attention is added between the self-attention layers, following the architecture described in *Attention is
    all you need*_ by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz
    Kaiser and Illia Polosukhin.

    To behave as a decoder the model needs to be initialized with the `is_decoder` argument of the configuration set
    to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both the `is_decoder` argument and
    `add_cross_attention` set to `True`; an `encoder_hidden_states` is then expected as an input to the forward pass.

    .. _*Attention is all you need*: https://arxiv.org/abs/1706.03762

    """
    # Copied from transformers.models.bert.modeling_bert.BertModel.__init__ with Bert->Roberta
    def __init__(self, config, add_pooling_layer=True):
        """
        Initializes a new instance of the RobertaModel class.

        Args:
            self: The current object instance.
            config (object): An instance of the configuration class that contains the model configuration parameters.
            add_pooling_layer (bool, optional): Determines whether to add a pooling layer to the model. Defaults to True.

        Returns:
            None.

        Raises:
            None.

        Description:
            This method initializes a new instance of the RobertaModel class. It takes the following parameters:

            - self: The current object instance.
            - config: An instance of the configuration class that contains the model configuration parameters.
            - add_pooling_layer: A boolean value that determines whether to add a pooling layer to the model.

            The method initializes the following attributes:

            - self.config: Stores the provided configuration object.
            - self.embeddings: An instance of the RobertaEmbeddings class, initialized with the provided configuration.
            - self.encoder: An instance of the RobertaEncoder class, initialized with the provided configuration.
            - self.pooler: An instance of the RobertaPooler class, initialized with the provided configuration
            if add_pooling_layer is True, otherwise set to None.

            After initialization, this method calls the post_init() method to perform any additional setup
            or initialization steps.
        """
        super().__init__(config)
        self.config = config

        self.embeddings = RobertaEmbeddings(config)
        self.encoder = RobertaEncoder(config)

        self.pooler = RobertaPooler(config) if add_pooling_layer else None

        # Initialize weights and apply final processing
        self.post_init()

    def get_input_embeddings(self):
        """
        Returns the input embeddings of the RobertaModel.

        Args:
            self (RobertaModel): An instance of the RobertaModel class.

        Returns:
            nn.Embedding: The word embedding module (self.embeddings.word_embeddings) used by the model.

        Raises:
            None.
        """
        return self.embeddings.word_embeddings

    def set_input_embeddings(self, value):
        """
        Sets the input embeddings for the RobertaModel.

        Args:
            self (RobertaModel): The instance of the RobertaModel class.
            value (object): The input embeddings to be set for the model. This can be a tensor or any other object
                that can be assigned to the `word_embeddings` attribute of the `embeddings` object.

        Returns:
            None.

        Raises:
            None.

        Note:
            The `word_embeddings` attribute of the `embeddings` object is a key component of the RobertaModel.
            It represents the input embeddings used for the model's forward pass.
            By setting the input embeddings using this method, you can customize the input representation for the model.

        Example:
            ```python
            >>> model = RobertaModel(config)
            >>> new_embeddings = nn.Embedding(config.vocab_size, config.hidden_size)
            >>> model.set_input_embeddings(new_embeddings)
            ```
        """
        self.embeddings.word_embeddings = value

    def _prune_heads(self, heads_to_prune):
        """
        Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base
        class PreTrainedModel
        """
        for layer, heads in heads_to_prune.items():
            self.encoder.layer[layer].attention.prune_heads(heads)

    def forward(
        self,
        input_ids: Optional[mindspore.Tensor] = None,
        attention_mask: Optional[mindspore.Tensor] = None,
        token_type_ids: Optional[mindspore.Tensor] = None,
        position_ids: Optional[mindspore.Tensor] = None,
        head_mask: Optional[mindspore.Tensor] = None,
        inputs_embeds: Optional[mindspore.Tensor] = None,
        encoder_hidden_states: Optional[mindspore.Tensor] = None,
        encoder_attention_mask: Optional[mindspore.Tensor] = None,
        past_key_values: Optional[List[mindspore.Tensor]] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple[mindspore.Tensor], BaseModelOutputWithPoolingAndCrossAttentions]:
        r"""
        Args:
            encoder_hidden_states  (`mindspore.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
                Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
                the model is configured as a decoder.
            encoder_attention_mask (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
                Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
                the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`:

                - 1 for tokens that are **not masked**,
                - 0 for tokens that are **masked**.
            past_key_values (`tuple(tuple(mindspore.Tensor))` of length `config.n_layers` with each tuple having 4
                tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`):
                Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
                If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that
                don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all
                `decoder_input_ids` of shape `(batch_size, sequence_length)`.
            use_cache (`bool`, *optional*):
                If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
                `past_key_values`).
        """
        output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
        output_hidden_states = (
            output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
        )
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        if self.config.is_decoder:
            use_cache = use_cache if use_cache is not None else self.config.use_cache
        else:
            use_cache = False

        if input_ids is not None and inputs_embeds is not None:
            raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
        if input_ids is not None:
            self.warn_if_padding_and_no_attention_mask(input_ids, attention_mask)
            input_shape = input_ids.shape
        elif inputs_embeds is not None:
            input_shape = inputs_embeds.shape[:-1]
        else:
            raise ValueError("You have to specify either input_ids or inputs_embeds")

        batch_size, seq_length = input_shape

        # past_key_values_length
        past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0

        if attention_mask is None:
            attention_mask = ops.ones(((batch_size, seq_length + past_key_values_length)))

        if token_type_ids is None:
            if hasattr(self.embeddings, "token_type_ids"):
                buffered_token_type_ids = self.embeddings.token_type_ids[:, :seq_length]
                buffered_token_type_ids_expanded = buffered_token_type_ids.broadcast_to((batch_size, seq_length))
                token_type_ids = buffered_token_type_ids_expanded
            else:
                token_type_ids = ops.zeros(input_shape, dtype=mindspore.int64)
        # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
        # ourselves in which case we just need to make it broadcastable to all heads.
        extended_attention_mask: mindspore.Tensor = self.get_extended_attention_mask(attention_mask, input_shape)

        # If a 2D or 3D attention mask is provided for the cross-attention
        # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length]
        if self.config.is_decoder and encoder_hidden_states is not None:
            encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.shape
            encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length)
            if encoder_attention_mask is None:
                encoder_attention_mask = ops.ones(encoder_hidden_shape)
            encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask)
        else:
            encoder_extended_attention_mask = None

        # Prepare head mask if needed
        # 1.0 in head_mask indicate we keep the head
        # attention_probs has shape bsz x n_heads x N x N
        # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
        # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
        head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)

        embedding_output = self.embeddings(
            input_ids=input_ids,
            position_ids=position_ids,
            token_type_ids=token_type_ids,
            inputs_embeds=inputs_embeds,
            past_key_values_length=past_key_values_length,
        )
        encoder_outputs = self.encoder(
            embedding_output,
            attention_mask=extended_attention_mask,
            head_mask=head_mask,
            encoder_hidden_states=encoder_hidden_states,
            encoder_attention_mask=encoder_extended_attention_mask,
            past_key_values=past_key_values,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )
        sequence_output = encoder_outputs[0]
        pooled_output = self.pooler(sequence_output) if self.pooler is not None else None

        if not return_dict:
            return (sequence_output, pooled_output) + encoder_outputs[1:]

        return BaseModelOutputWithPoolingAndCrossAttentions(
            last_hidden_state=sequence_output,
            pooler_output=pooled_output,
            past_key_values=encoder_outputs.past_key_values,
            hidden_states=encoder_outputs.hidden_states,
            attentions=encoder_outputs.attentions,
            cross_attentions=encoder_outputs.cross_attentions,
        )
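
A hedged end-to-end sketch using a deliberately tiny configuration (values are illustrative; assumes `RobertaConfig` is exported from `mindnlp.transformers`):

```python
import mindspore
from mindnlp.transformers import RobertaConfig  # assumed export path
from mindnlp.transformers.models.roberta.modeling_roberta import RobertaModel

config = RobertaConfig(hidden_size=64, num_attention_heads=4, intermediate_size=128,
                       num_hidden_layers=2, vocab_size=1000)
model = RobertaModel(config, add_pooling_layer=True)

input_ids = mindspore.Tensor([[0, 11, 12, 13, 14, 2]], mindspore.int64)  # toy token ids
outputs = model(input_ids)
print(outputs.last_hidden_state.shape)  # expected (1, 6, 64)
print(outputs.pooler_output.shape)      # expected (1, 64)
```

With `return_dict=False` the same call returns a plain tuple of `(sequence_output, pooled_output, ...)` instead of the output dataclass.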

mindnlp.transformers.models.roberta.modeling_roberta.RobertaModel.__init__(config, add_pooling_layer=True)

Initializes a new instance of the RobertaModel class.

PARAMETER DESCRIPTION
self

The current object instance.

config

An instance of the configuration class that contains the model configuration parameters.

TYPE: object

add_pooling_layer

Determines whether to add a pooling layer to the model. Defaults to True.

TYPE: bool DEFAULT: True

RETURNS DESCRIPTION

None.

Description

This method initializes a new instance of the RobertaModel class. It takes the following parameters:

  • self: The current object instance.
  • config: An instance of the configuration class that contains the model configuration parameters.
  • add_pooling_layer: A boolean value that determines whether to add a pooling layer to the model.

The method initializes the following attributes:

  • self.config: Stores the provided configuration object.
  • self.embeddings: An instance of the RobertaEmbeddings class, initialized with the provided configuration.
  • self.encoder: An instance of the RobertaEncoder class, initialized with the provided configuration.
  • self.pooler: An instance of the RobertaPooler class, initialized with the provided configuration if add_pooling_layer is True, otherwise set to None.

After initialization, this method calls the post_init() method to perform any additional setup or initialization steps.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
Source code in mindnlp\transformers\models\roberta\modeling_roberta.py, lines 1191-1233
def __init__(self, config, add_pooling_layer=True):
    """
    Initializes a new instance of the RobertaModel class.

    Args:
        self: The current object instance.
        config (object): An instance of the configuration class that contains the model configuration parameters.
        add_pooling_layer (bool, optional): Determines whether to add a pooling layer to the model. Defaults to True.

    Returns:
        None.

    Raises:
        None.

    Description:
        This method initializes a new instance of the RobertaModel class. It takes the following parameters:

        - self: The current object instance.
        - config: An instance of the configuration class that contains the model configuration parameters.
        - add_pooling_layer: A boolean value that determines whether to add a pooling layer to the model.

        The method initializes the following attributes:

        - self.config: Stores the provided configuration object.
        - self.embeddings: An instance of the RobertaEmbeddings class, initialized with the provided configuration.
        - self.encoder: An instance of the RobertaEncoder class, initialized with the provided configuration.
        - self.pooler: An instance of the RobertaPooler class, initialized with the provided configuration
        if add_pooling_layer is True, otherwise set to None.

        After initialization, this method calls the post_init() method to perform any additional setup
        or initialization steps.
    """
    super().__init__(config)
    self.config = config

    self.embeddings = RobertaEmbeddings(config)
    self.encoder = RobertaEncoder(config)

    self.pooler = RobertaPooler(config) if add_pooling_layer else None

    # Initialize weights and apply final processing
    self.post_init()

mindnlp.transformers.models.roberta.modeling_roberta.RobertaModel.forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, past_key_values=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None)

PARAMETER DESCRIPTION
encoder_hidden_states

Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.

TYPE: (`mindspore.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional* DEFAULT: None

encoder_attention_mask

Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]:

  • 1 for tokens that are not masked,
  • 0 for tokens that are masked.

TYPE: `mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional* DEFAULT: None

past_key_values

Contains precomputed key and value hidden states of the attention blocks, one tuple of 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head) per layer. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don't have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).

TYPE: `tuple(tuple(mindspore.Tensor))` of length `config.n_layers`, *optional* DEFAULT: None

use_cache

If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).

TYPE: `bool`, *optional* DEFAULT: None

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
Source code in mindnlp\transformers\models\roberta\modeling_roberta.py, lines 1287-1414
def forward(
    self,
    input_ids: Optional[mindspore.Tensor] = None,
    attention_mask: Optional[mindspore.Tensor] = None,
    token_type_ids: Optional[mindspore.Tensor] = None,
    position_ids: Optional[mindspore.Tensor] = None,
    head_mask: Optional[mindspore.Tensor] = None,
    inputs_embeds: Optional[mindspore.Tensor] = None,
    encoder_hidden_states: Optional[mindspore.Tensor] = None,
    encoder_attention_mask: Optional[mindspore.Tensor] = None,
    past_key_values: Optional[List[mindspore.Tensor]] = None,
    use_cache: Optional[bool] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Union[Tuple[mindspore.Tensor], BaseModelOutputWithPoolingAndCrossAttentions]:
    r"""
    Args:
        encoder_hidden_states  (`mindspore.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
            Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
            the model is configured as a decoder.
        encoder_attention_mask (`mindspore.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
            Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
            the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`:

            - 1 for tokens that are **not masked**,
            - 0 for tokens that are **masked**.
        past_key_values (`tuple(tuple(mindspore.Tensor))` of length `config.n_layers` with each tuple having 4
            tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`):
            Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
            If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that
            don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all
            `decoder_input_ids` of shape `(batch_size, sequence_length)`.
        use_cache (`bool`, *optional*):
            If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
            `past_key_values`).
    """
    output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
    output_hidden_states = (
        output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
    )
    return_dict = return_dict if return_dict is not None else self.config.use_return_dict

    if self.config.is_decoder:
        use_cache = use_cache if use_cache is not None else self.config.use_cache
    else:
        use_cache = False

    if input_ids is not None and inputs_embeds is not None:
        raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
    if input_ids is not None:
        self.warn_if_padding_and_no_attention_mask(input_ids, attention_mask)
        input_shape = input_ids.shape
    elif inputs_embeds is not None:
        input_shape = inputs_embeds.shape[:-1]
    else:
        raise ValueError("You have to specify either input_ids or inputs_embeds")

    batch_size, seq_length = input_shape

    # past_key_values_length
    past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0

    if attention_mask is None:
        attention_mask = ops.ones(((batch_size, seq_length + past_key_values_length)))

    if token_type_ids is None:
        if hasattr(self.embeddings, "token_type_ids"):
            buffered_token_type_ids = self.embeddings.token_type_ids[:, :seq_length]
            buffered_token_type_ids_expanded = buffered_token_type_ids.broadcast_to((batch_size, seq_length))
            token_type_ids = buffered_token_type_ids_expanded
        else:
            token_type_ids = ops.zeros(input_shape, dtype=mindspore.int64)
    # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
    # ourselves in which case we just need to make it broadcastable to all heads.
    extended_attention_mask: mindspore.Tensor = self.get_extended_attention_mask(attention_mask, input_shape)

    # If a 2D or 3D attention mask is provided for the cross-attention
    # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length]
    if self.config.is_decoder and encoder_hidden_states is not None:
        encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.shape
        encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length)
        if encoder_attention_mask is None:
            encoder_attention_mask = ops.ones(encoder_hidden_shape)
        encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask)
    else:
        encoder_extended_attention_mask = None

    # Prepare head mask if needed
    # 1.0 in head_mask indicate we keep the head
    # attention_probs has shape bsz x n_heads x N x N
    # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
    # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
    head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)

    embedding_output = self.embeddings(
        input_ids=input_ids,
        position_ids=position_ids,
        token_type_ids=token_type_ids,
        inputs_embeds=inputs_embeds,
        past_key_values_length=past_key_values_length,
    )
    encoder_outputs = self.encoder(
        embedding_output,
        attention_mask=extended_attention_mask,
        head_mask=head_mask,
        encoder_hidden_states=encoder_hidden_states,
        encoder_attention_mask=encoder_extended_attention_mask,
        past_key_values=past_key_values,
        use_cache=use_cache,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        return_dict=return_dict,
    )
    sequence_output = encoder_outputs[0]
    pooled_output = self.pooler(sequence_output) if self.pooler is not None else None

    if not return_dict:
        return (sequence_output, pooled_output) + encoder_outputs[1:]

    return BaseModelOutputWithPoolingAndCrossAttentions(
        last_hidden_state=sequence_output,
        pooler_output=pooled_output,
        past_key_values=encoder_outputs.past_key_values,
        hidden_states=encoder_outputs.hidden_states,
        attentions=encoder_outputs.attentions,
        cross_attentions=encoder_outputs.cross_attentions,
    )
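
When the configuration marks the model as a decoder, `use_cache=True` returns per-layer key/value caches that later steps can reuse via `past_key_values`, so only the newest token has to be fed in. A hedged sketch (illustrative config; assumes `RobertaConfig` is exported from `mindnlp.transformers`):

```python
import mindspore
from mindnlp.transformers import RobertaConfig  # assumed export path
from mindnlp.transformers.models.roberta.modeling_roberta import RobertaModel

config = RobertaConfig(hidden_size=64, num_attention_heads=4, intermediate_size=128,
                       num_hidden_layers=2, vocab_size=1000, is_decoder=True)
decoder = RobertaModel(config, add_pooling_layer=False)

ids = mindspore.Tensor([[0, 11, 12, 13, 2]], mindspore.int64)
out = decoder(ids, use_cache=True)
past = out.past_key_values              # one (key, value) pair per layer
print(len(past), past[0][0].shape)      # e.g. 2, (1, 4, 5, 16): batch, heads, cached_len, head_dim

# Next step: feed only the newest token and reuse the cached states.
next_ids = mindspore.Tensor([[14]], mindspore.int64)
out2 = decoder(next_ids, past_key_values=past, use_cache=True)
print(out2.last_hidden_state.shape)     # expected (1, 1, 64)
```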

mindnlp.transformers.models.roberta.modeling_roberta.RobertaModel.get_input_embeddings()

Returns the input embeddings of the RobertaModel.

PARAMETER DESCRIPTION
self

An instance of the RobertaModel class.

TYPE: RobertaModel

RETURNS DESCRIPTION

The word embedding module (embeddings.word_embeddings) used by the model.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
Source code in mindnlp\transformers\models\roberta\modeling_roberta.py, lines 1235-1248
def get_input_embeddings(self):
    """
    Returns the input embeddings of the RobertaModel.

    Args:
        self (RobertaModel): An instance of the RobertaModel class.

    Returns:
        nn.Embedding: The word embedding module (self.embeddings.word_embeddings) used by the model.

    Raises:
        None.
    """
    return self.embeddings.word_embeddings

mindnlp.transformers.models.roberta.modeling_roberta.RobertaModel.set_input_embeddings(value)

Sets the input embeddings for the RobertaModel.

PARAMETER DESCRIPTION
self

The instance of the RobertaModel class.

TYPE: RobertaModel

value

The input embeddings to be set for the model. This can be a tensor or any other object that can be assigned to the word_embeddings attribute of the embeddings object.

TYPE: object

RETURNS DESCRIPTION

None.

Note

The word_embeddings attribute of the embeddings object is a key component of the RobertaModel. It represents the input embeddings used for the model's forward pass. By setting the input embeddings using this method, you can customize the input representation for the model.

Example
>>> model = RobertaModel(config)
>>> new_embeddings = nn.Embedding(config.vocab_size, config.hidden_size)
>>> model.set_input_embeddings(new_embeddings)
Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
Source code in mindnlp\transformers\models\roberta\modeling_roberta.py, lines 1250-1277
def set_input_embeddings(self, value):
    """
    Sets the input embeddings for the RobertaModel.

    Args:
        self (RobertaModel): The instance of the RobertaModel class.
        value (object): The input embeddings to be set for the model. This can be a tensor or any other object
            that can be assigned to the `word_embeddings` attribute of the `embeddings` object.

    Returns:
        None.

    Raises:
        None.

    Note:
        The `word_embeddings` attribute of the `embeddings` object is a key component of the RobertaModel.
        It represents the input embeddings used for the model's forward pass.
        By setting the input embeddings using this method, you can customize the input representation for the model.

    Example:
        ```python
        >>> model = RobertaModel(config)
        >>> new_embeddings = nn.Embedding(config.vocab_size, config.hidden_size)
        >>> model.set_input_embeddings(new_embeddings)
        ```
    """
    self.embeddings.word_embeddings = value

mindnlp.transformers.models.roberta.modeling_roberta.RobertaOutput

Bases: Module

This class implements the output block of a RoBERTa transformer layer. It inherits from the nn.Module class.

The RobertaOutput class projects the intermediate (feed-forward) states back to the hidden size, applies dropout, and combines the result with the residual input through layer normalization.

ATTRIBUTE DESCRIPTION
dense

A fully connected layer that maps the intermediate states back to the hidden size.

TYPE: Linear

LayerNorm

A layer normalization module that normalizes the hidden states.

TYPE: LayerNorm

dropout

A dropout module that applies dropout to the hidden states.

TYPE: Dropout

METHOD DESCRIPTION
forward

Applies the transformation operations to the hidden states and returns the final output tensor.

Example
>>> # Create a `RobertaOutput` instance
>>> output = RobertaOutput(config)
...
>>> # Apply the transformation operations to the hidden states
>>> output_tensor = output.forward(hidden_states, input_tensor)
Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
class RobertaOutput(nn.Module):

    """
    This class implements the output sublayer of a RoBERTa transformer layer.
    It inherits from the `nn.Module` class.

    The `RobertaOutput` class projects the intermediate (feed-forward) representation back to the hidden size,
    applies dropout, and combines the result with the residual input through layer normalization.

    Attributes:
        dense (nn.Linear): A fully connected layer that maps the intermediate representation back to the hidden size.
        LayerNorm (nn.LayerNorm): A layer normalization module that normalizes the hidden states.
        dropout (nn.Dropout): A dropout module that applies dropout to the hidden states.

    Methods:
        forward:
            Applies the transformation operations to the hidden states and returns the final output tensor.

    Example:
        ```python
        >>> # Create a `RobertaOutput` instance
        >>> output = RobertaOutput(config)
        ...
        >>> # Apply the transformation operations to the hidden states
        >>> output_tensor = output.forward(hidden_states, input_tensor)
        ```
    """
    def __init__(self, config):
        """
        Initializes a new instance of the 'RobertaOutput' class.

        Args:
            self: The current instance of the class.
            config: An object of type 'Config' that holds the configuration parameters.

        Returns:
            None.

        Raises:
            None.
        """
        super().__init__()
        self.dense = nn.Linear(config.intermediate_size, config.hidden_size)
        self.LayerNorm = nn.LayerNorm([config.hidden_size], eps=config.layer_norm_eps)
        self.dropout = nn.Dropout(p=config.hidden_dropout_prob)

    def forward(self, hidden_states: mindspore.Tensor, input_tensor: mindspore.Tensor) -> mindspore.Tensor:
        """
        This method forwards the output tensor for the Roberta model.

        Args:
            self: The instance of the RobertaOutput class.
            hidden_states (mindspore.Tensor): The intermediate (feed-forward) activations from the preceding
                layer. It is expected to be a tensor of shape [batch_size, sequence_length, intermediate_size].
            input_tensor (mindspore.Tensor): The residual input to this sublayer.
                It is expected to be a tensor of shape [batch_size, sequence_length, hidden_size].

        Returns:
            mindspore.Tensor: The output tensor of shape [batch_size, sequence_length, hidden_size],
                representing the final output of this sublayer.

        Raises:
            ValueError: If the shapes of hidden_states and input_tensor are not compatible for addition.
            RuntimeError: If an error occurs during the dense, dropout, or LayerNorm operations.
        """
        hidden_states = self.dense(hidden_states)
        hidden_states = self.dropout(hidden_states)
        hidden_states = self.LayerNorm(hidden_states + input_tensor)
        return hidden_states

mindnlp.transformers.models.roberta.modeling_roberta.RobertaOutput.__init__(config)

Initializes a new instance of the 'RobertaOutput' class.

PARAMETER DESCRIPTION
self

The current instance of the class.

config

An object of type 'Config' that holds the configuration parameters.

RETURNS DESCRIPTION

None.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
def __init__(self, config):
    """
    Initializes a new instance of the 'RobertaOutput' class.

    Args:
        self: The current instance of the class.
        config: An object of type 'Config' that holds the configuration parameters.

    Returns:
        None.

    Raises:
        None.
    """
    super().__init__()
    self.dense = nn.Linear(config.intermediate_size, config.hidden_size)
    self.LayerNorm = nn.LayerNorm([config.hidden_size], eps=config.layer_norm_eps)
    self.dropout = nn.Dropout(p=config.hidden_dropout_prob)

mindnlp.transformers.models.roberta.modeling_roberta.RobertaOutput.forward(hidden_states, input_tensor)

This method forwards the output tensor for the Roberta model.

PARAMETER DESCRIPTION
self

The instance of the RobertaOutput class.

hidden_states

The intermediate (feed-forward) activations from the preceding layer. It is expected to be a tensor of shape [batch_size, sequence_length, intermediate_size].

TYPE: Tensor

input_tensor

The residual input to this sublayer. It is expected to be a tensor of shape [batch_size, sequence_length, hidden_size].

TYPE: Tensor

RETURNS DESCRIPTION
Tensor

mindspore.Tensor: The output tensor of shape [batch_size, sequence_length, hidden_size], representing the final output of this sublayer.

RAISES DESCRIPTION
ValueError

If the shapes of hidden_states and input_tensor are not compatible for addition.

RuntimeError

If an error occurs during the dense, dropout, or LayerNorm operations.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
def forward(self, hidden_states: mindspore.Tensor, input_tensor: mindspore.Tensor) -> mindspore.Tensor:
    """
    This method forwards the output tensor for the Roberta model.

    Args:
        self: The instance of the RobertaOutput class.
        hidden_states (mindspore.Tensor): The intermediate (feed-forward) activations from the preceding
            layer. It is expected to be a tensor of shape [batch_size, sequence_length, intermediate_size].
        input_tensor (mindspore.Tensor): The residual input to this sublayer.
            It is expected to be a tensor of shape [batch_size, sequence_length, hidden_size].

    Returns:
        mindspore.Tensor: The output tensor of shape [batch_size, sequence_length, hidden_size],
            representing the final output of this sublayer.

    Raises:
        ValueError: If the shapes of hidden_states and input_tensor are not compatible for addition.
        RuntimeError: If an error occurs during the dense, dropout, or LayerNorm operations.
    """
    hidden_states = self.dense(hidden_states)
    hidden_states = self.dropout(hidden_states)
    hidden_states = self.LayerNorm(hidden_states + input_tensor)
    return hidden_states

mindnlp.transformers.models.roberta.modeling_roberta.RobertaPooler

Bases: Module

This class represents a pooler for the Roberta model. It inherits from the nn.Module class and is responsible for processing hidden states to generate a pooled output.

ATTRIBUTE DESCRIPTION
dense

A fully connected layer that maps the input hidden state to the hidden size.

TYPE: Linear

activation

The activation function applied to the output of the dense layer.

TYPE: Tanh

METHOD DESCRIPTION
__init__

Initializes the RobertaPooler instance with the specified configuration.

forward

Constructs the pooled output from the input hidden states.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
class RobertaPooler(nn.Module):

    """
    This class represents a pooler for the Roberta model. It inherits from the nn.Module class and is responsible
    for processing hidden states to generate a pooled output.

    Attributes:
        dense (nn.Linear): A fully connected layer that maps the input hidden state to the hidden size.
        activation (nn.Tanh): The activation function applied to the output of the dense layer.

    Methods:
        __init__: Initializes the RobertaPooler instance with the specified configuration.
        forward: Constructs the pooled output from the input hidden states.

    """
    def __init__(self, config):
        """
        Initializes a new instance of the RobertaPooler class.

        Args:
            self: The instance of the RobertaPooler class.
            config: An object containing configuration parameters for the RobertaPooler instance.
                It is expected to have a 'hidden_size' attribute specifying the size of the hidden layer.

        Returns:
            None.

        Raises:
            AttributeError: If the 'config' parameter does not have the expected 'hidden_size' attribute.
            TypeError: If the 'config' parameter is not of the expected type.
        """
        super().__init__()
        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
        self.activation = nn.Tanh()

    def forward(self, hidden_states: mindspore.Tensor) -> mindspore.Tensor:
        """
        Constructs a pooled output tensor from the given hidden states using the RobertaPooler module.

        Args:
            self (RobertaPooler): The instance of the RobertaPooler class.
            hidden_states (mindspore.Tensor): The input hidden states tensor of shape
                (batch_size, sequence_length, hidden_size).

        Returns:
            mindspore.Tensor: The pooled output tensor of shape (batch_size, hidden_size).

        Raises:
            TypeError: If the 'hidden_states' parameter is not of type 'mindspore.Tensor'.
            ValueError: If the shape of the 'hidden_states' tensor is not (batch_size, sequence_length, hidden_size).

        Note:
            - The 'hidden_states' tensor should contain the hidden states of the sequence generated by the Roberta model.
            - The 'hidden_states' tensor should have a shape of (batch_size, sequence_length, hidden_size).
            - The 'hidden_states' tensor is expected to be the output of the Roberta model's last layer.
            - The 'hidden_states' tensor should be on the same device as the RobertaPooler module.

        Example:
            ```python
            >>> config = RobertaConfig()
            >>> roberta_pooler = RobertaPooler(config)
            >>> hidden_states = mindspore.Tensor(np.random.randn(2, 5, 768), dtype=mindspore.float32)
            >>> pooled_output = roberta_pooler.forward(hidden_states)
            ```
        """
        # We "pool" the model by simply taking the hidden state corresponding
        # to the first token.
        first_token_tensor = hidden_states[:, 0]
        pooled_output = self.dense(first_token_tensor)
        pooled_output = self.activation(pooled_output)
        return pooled_output

mindnlp.transformers.models.roberta.modeling_roberta.RobertaPooler.__init__(config)

Initializes a new instance of the RobertaPooler class.

PARAMETER DESCRIPTION
self

The instance of the RobertaPooler class.

config

An object containing configuration parameters for the RobertaPooler instance. It is expected to have a 'hidden_size' attribute specifying the size of the hidden layer.

RETURNS DESCRIPTION

None.

RAISES DESCRIPTION
AttributeError

If the 'config' parameter does not have the expected 'hidden_size' attribute.

TypeError

If the 'config' parameter is not of the expected type.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
def __init__(self, config):
    """
    Initializes a new instance of the RobertaPooler class.

    Args:
        self: The instance of the RobertaPooler class.
        config: An object containing configuration parameters for the RobertaPooler instance.
            It is expected to have a 'hidden_size' attribute specifying the size of the hidden layer.

    Returns:
        None.

    Raises:
        AttributeError: If the 'config' parameter does not have the expected 'hidden_size' attribute.
        TypeError: If the 'config' parameter is not of the expected type.
    """
    super().__init__()
    self.dense = nn.Linear(config.hidden_size, config.hidden_size)
    self.activation = nn.Tanh()

mindnlp.transformers.models.roberta.modeling_roberta.RobertaPooler.forward(hidden_states)

Constructs a pooled output tensor from the given hidden states using the RobertaPooler module.

PARAMETER DESCRIPTION
self

The instance of the RobertaPooler class.

TYPE: RobertaPooler

hidden_states

The input hidden states tensor of shape (batch_size, sequence_length, hidden_size).

TYPE: Tensor

RETURNS DESCRIPTION
Tensor

mindspore.Tensor: The pooled output tensor of shape (batch_size, hidden_size).

RAISES DESCRIPTION
TypeError

If the 'hidden_states' parameter is not of type 'mindspore.Tensor'.

ValueError

If the shape of the 'hidden_states' tensor is not (batch_size, sequence_length, hidden_size).

Note
  • The 'hidden_states' tensor should contain the hidden states of the sequence generated by the Roberta model.
  • The 'hidden_states' tensor should have a shape of (batch_size, sequence_length, hidden_size).
  • The 'hidden_states' tensor is expected to be the output of the Roberta model's last layer.
  • The 'hidden_states' tensor should be on the same device as the RobertaPooler module.
Example
>>> config = RobertaConfig()
>>> roberta_pooler = RobertaPooler(config)
>>> hidden_states = mindspore.Tensor(np.random.randn(2, 5, 768), dtype=mindspore.float32)
>>> pooled_output = roberta_pooler.forward(hidden_states)
Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
def forward(self, hidden_states: mindspore.Tensor) -> mindspore.Tensor:
    """
    Constructs a pooled output tensor from the given hidden states using the RobertaPooler module.

    Args:
        self (RobertaPooler): The instance of the RobertaPooler class.
        hidden_states (mindspore.Tensor): The input hidden states tensor of shape
            (batch_size, sequence_length, hidden_size).

    Returns:
        mindspore.Tensor: The pooled output tensor of shape (batch_size, hidden_size).

    Raises:
        TypeError: If the 'hidden_states' parameter is not of type 'mindspore.Tensor'.
        ValueError: If the shape of the 'hidden_states' tensor is not (batch_size, sequence_length, hidden_size).

    Note:
        - The 'hidden_states' tensor should contain the hidden states of the sequence generated by the Roberta model.
        - The 'hidden_states' tensor should have a shape of (batch_size, sequence_length, hidden_size).
        - The 'hidden_states' tensor is expected to be the output of the Roberta model's last layer.
        - The 'hidden_states' tensor should be on the same device as the RobertaPooler module.

    Example:
        ```python
        >>> config = RobertaConfig()
        >>> roberta_pooler = RobertaPooler(config)
        >>> hidden_states = mindspore.Tensor(np.random.randn(2, 5, 768), dtype=mindspore.float32)
        >>> pooled_output = roberta_pooler.forward(hidden_states)
        ```
    """
    # We "pool" the model by simply taking the hidden state corresponding
    # to the first token.
    first_token_tensor = hidden_states[:, 0]
    pooled_output = self.dense(first_token_tensor)
    pooled_output = self.activation(pooled_output)
    return pooled_output
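
For orientation, the sketch below shows the pooler in isolation with made-up shapes: a random stand-in for an encoder's last hidden states is reduced to one vector per example by taking the first token's hidden state and passing it through the dense and tanh layers documented above. Import paths follow the headings on this page; batch and sequence sizes are arbitrary.

```python
import numpy as np
import mindspore
from mindnlp.transformers.models.roberta.configuration_roberta import RobertaConfig
from mindnlp.transformers.models.roberta.modeling_roberta import RobertaPooler

config = RobertaConfig()                 # hidden_size defaults to 768
pooler = RobertaPooler(config)

# Stand-in for an encoder's last hidden states: (batch_size=2, seq_len=5, hidden_size=768).
hidden_states = mindspore.Tensor(np.random.randn(2, 5, 768), mindspore.float32)

pooled_output = pooler.forward(hidden_states)   # first-token hidden state -> dense -> tanh
print(pooled_output.shape)                      # (2, 768)
```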

mindnlp.transformers.models.roberta.modeling_roberta.RobertaPreTrainedModel

Bases: BertPreTrainedModel

Roberta Pretrained Model.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
class RobertaPreTrainedModel(BertPreTrainedModel):
    """Roberta Pretrained Model."""
    config_class = RobertaConfig
    base_model_prefix = "roberta"

mindnlp.transformers.models.roberta.modeling_roberta.RobertaSelfAttention

Bases: Module

RobertaSelfAttention

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
class RobertaSelfAttention(nn.Module):
    """RobertaSelfAttention"""
    def __init__(self, config, position_embedding_type=None):
        """
        Initializes an instance of the RobertaSelfAttention class.

        Args:
            self: The object itself.
            config (object): A configuration object containing various settings.
            position_embedding_type (str, optional): The type of position embedding to use. Defaults to None.

        Returns:
            None.

        Raises:
            ValueError: If the hidden size is not a multiple of the number of attention heads.

        """
        super().__init__()
        if config.hidden_size % config.num_attention_heads != 0 and not hasattr(config, "embedding_size"):
            raise ValueError(
                f"The hidden size ({config.hidden_size}) is not a multiple of the number of attention "
                f"heads ({config.num_attention_heads})"
            )

        self.num_attention_heads = config.num_attention_heads
        self.attention_head_size = int(config.hidden_size / config.num_attention_heads)
        self.all_head_size = self.num_attention_heads * self.attention_head_size

        self.query = nn.Linear(config.hidden_size, self.all_head_size)
        self.key = nn.Linear(config.hidden_size, self.all_head_size)
        self.value = nn.Linear(config.hidden_size, self.all_head_size)

        self.dropout = nn.Dropout(p=config.attention_probs_dropout_prob)
        self.position_embedding_type = position_embedding_type or getattr(
            config, "position_embedding_type", "absolute"
        )
        if self.position_embedding_type in ("relative_key_query", "relative_key"):
            self.max_position_embeddings = config.max_position_embeddings
            self.distance_embedding = nn.Embedding(2 * config.max_position_embeddings - 1, self.attention_head_size)

        self.is_decoder = config.is_decoder

    def transpose_for_scores(self, x: mindspore.Tensor) -> mindspore.Tensor:
        """
        Transposes the input tensor for computing self-attention scores.

        Args:
            self (RobertaSelfAttention): The instance of the `RobertaSelfAttention` class.
            x (mindspore.Tensor): The input tensor to be transposed.
                It should have a shape of (batch_size, sequence_length, hidden_size).

        Returns:
            mindspore.Tensor: The transposed tensor with shape
                (batch_size, num_attention_heads, sequence_length, attention_head_size).

        Raises:
            None.

        Note:
            - The `x` tensor is reshaped to have dimensions (batch_size, sequence_length, num_attention_heads,
                attention_head_size).
            - The `x` tensor is then permuted to have dimensions (batch_size, num_attention_heads, sequence_length,
                attention_head_size).

        Example:
            ```python
            >>> config = RobertaConfig()
            >>> attention = RobertaSelfAttention(config)
            >>> input_tensor = mindspore.Tensor(np.random.randn(2, 5, 768), mindspore.float32)
            >>> output_tensor = attention.transpose_for_scores(input_tensor)
            >>> print(output_tensor.shape)
            (2, 12, 5, 64)
            ```
        """
        new_x_shape = x.shape[:-1] + (self.num_attention_heads, self.attention_head_size)
        x = x.view(new_x_shape)
        return x.permute(0, 2, 1, 3)

    def forward(
        self,
        hidden_states: mindspore.Tensor,
        attention_mask: Optional[mindspore.Tensor] = None,
        head_mask: Optional[mindspore.Tensor] = None,
        encoder_hidden_states: Optional[mindspore.Tensor] = None,
        encoder_attention_mask: Optional[mindspore.Tensor] = None,
        past_key_value: Optional[Tuple[Tuple[mindspore.Tensor]]] = None,
        output_attentions: Optional[bool] = False,
    ) -> Tuple[mindspore.Tensor]:
        """
        Constructs the self-attention mechanism for the Roberta model.

        Args:
            self (RobertaSelfAttention): The instance of the RobertaSelfAttention class.
            hidden_states (mindspore.Tensor): The input hidden states of the model.
                Shape: (batch_size, sequence_length, hidden_size).
            attention_mask (Optional[mindspore.Tensor]): The attention mask tensor. Default: None.
                Shape: (batch_size, sequence_length).
            head_mask (Optional[mindspore.Tensor]): The head mask tensor. Default: None.
                Shape: (num_heads, hidden_size).
            encoder_hidden_states (Optional[mindspore.Tensor]): The hidden states of the encoder. Default: None.
                Shape: (batch_size, encoder_sequence_length, hidden_size).
            encoder_attention_mask (Optional[mindspore.Tensor]): The attention mask for the encoder. Default: None.
                Shape: (batch_size, encoder_sequence_length).
            past_key_value (Optional[Tuple[Tuple[mindspore.Tensor]]]): The past key and value tensors. Default: None.
                Shape: ((batch_size, num_heads, past_sequence_length, head_size),
                (batch_size, num_heads, past_sequence_length, head_size)).
            output_attentions (Optional[bool]): Whether to output attention probabilities. Default: False.

        Returns:
            Tuple[mindspore.Tensor]: A tuple containing the context layer tensor.
                Shape: (batch_size, sequence_length, hidden_size). Optionally, if `output_attentions` is True,
                the tuple also contains the attention probabilities tensor.
                Shape: (batch_size, num_heads, sequence_length, sequence_length).

        Raises:
            None
        """
        mixed_query_layer = self.query(hidden_states)

        # If this is instantiated as a cross-attention module, the keys
        # and values come from an encoder; the attention mask needs to be
        # such that the encoder's padding tokens are not attended to.
        is_cross_attention = encoder_hidden_states is not None
        if is_cross_attention and past_key_value is not None:
            # reuse k,v, cross_attentions
            key_layer = past_key_value[0]
            value_layer = past_key_value[1]
            attention_mask = encoder_attention_mask
        elif is_cross_attention:
            key_layer = self.transpose_for_scores(self.key(encoder_hidden_states))
            value_layer = self.transpose_for_scores(self.value(encoder_hidden_states))
            attention_mask = encoder_attention_mask
        elif past_key_value is not None:
            key_layer = self.transpose_for_scores(self.key(hidden_states))
            value_layer = self.transpose_for_scores(self.value(hidden_states))
            key_layer = ops.cat([past_key_value[0], key_layer], dim=2)
            value_layer = ops.cat([past_key_value[1], value_layer], dim=2)
        else:
            key_layer = self.transpose_for_scores(self.key(hidden_states))
            value_layer = self.transpose_for_scores(self.value(hidden_states))

        query_layer = self.transpose_for_scores(mixed_query_layer)

        use_cache = past_key_value is not None
        if self.is_decoder:
            # if cross_attention save Tuple(mindspore.Tensor, mindspore.Tensor) of all cross attention key/value_states.
            # Further calls to cross_attention layer can then reuse all cross-attention
            # key/value_states (first "if" case)
            # if uni-directional self-attention (decoder) save Tuple(mindspore.Tensor, mindspore.Tensor) of
            # all previous decoder key/value_states. Further calls to uni-directional self-attention
            # can concat previous decoder key/value_states to current projected key/value_states (third "elif" case)
            # if encoder bi-directional self-attention `past_key_value` is always `None`
            past_key_value = (key_layer, value_layer)

        # Take the dot product between "query" and "key" to get the raw attention scores.
        attention_scores = ops.matmul(query_layer, key_layer.swapaxes(-1, -2))

        if self.position_embedding_type in ("relative_key_query", "relative_key"):
            query_length, key_length = query_layer.shape[2], key_layer.shape[2]
            if use_cache:
                position_ids_l = mindspore.Tensor(key_length - 1, dtype=mindspore.int64).view(
                    -1, 1
                )
            else:
                position_ids_l = ops.arange(query_length, dtype=mindspore.int64).view(-1, 1)
            position_ids_r = ops.arange(key_length, dtype=mindspore.int64).view(1, -1)
            distance = position_ids_l - position_ids_r
            positional_embedding = self.distance_embedding(distance + self.max_position_embeddings - 1)
            positional_embedding = positional_embedding.to(dtype=query_layer.dtype)  # fp16 compatibility

            if self.position_embedding_type == "relative_key":
                relative_position_scores = ops.einsum("bhld,lrd->bhlr", query_layer, positional_embedding)
                attention_scores = attention_scores + relative_position_scores
            elif self.position_embedding_type == "relative_key_query":
                relative_position_scores_query = ops.einsum("bhld,lrd->bhlr", query_layer, positional_embedding)
                relative_position_scores_key = ops.einsum("bhrd,lrd->bhlr", key_layer, positional_embedding)
                attention_scores = attention_scores + relative_position_scores_query + relative_position_scores_key

        attention_scores = attention_scores / math.sqrt(self.attention_head_size)
        if attention_mask is not None:
            # Apply the attention mask is (precomputed for all layers in RobertaModel forward() function)
            attention_scores = attention_scores + attention_mask

        # Normalize the attention scores to probabilities.
        attention_probs = ops.softmax(attention_scores, dim=-1)

        # This is actually dropping out entire tokens to attend to, which might
        # seem a bit unusual, but is taken from the original Transformer paper.
        attention_probs = self.dropout(attention_probs)

        # Mask heads if we want to
        if head_mask is not None:
            attention_probs = attention_probs * head_mask

        context_layer = ops.matmul(attention_probs, value_layer)

        context_layer = context_layer.permute(0, 2, 1, 3)
        new_context_layer_shape = context_layer.shape[:-2] + (self.all_head_size,)
        context_layer = context_layer.view(new_context_layer_shape)

        outputs = (context_layer, attention_probs) if output_attentions else (context_layer,)

        if self.is_decoder:
            outputs = outputs + (past_key_value,)
        return outputs

mindnlp.transformers.models.roberta.modeling_roberta.RobertaSelfAttention.__init__(config, position_embedding_type=None)

Initializes an instance of the RobertaSelfAttention class.

PARAMETER DESCRIPTION
self

The object itself.

config

A configuration object containing various settings.

TYPE: object

position_embedding_type

The type of position embedding to use. Defaults to None.

TYPE: str DEFAULT: None

RETURNS DESCRIPTION

None.

RAISES DESCRIPTION
ValueError

If the hidden size is not a multiple of the number of attention heads.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
def __init__(self, config, position_embedding_type=None):
    """
    Initializes an instance of the RobertaSelfAttention class.

    Args:
        self: The object itself.
        config (object): A configuration object containing various settings.
        position_embedding_type (str, optional): The type of position embedding to use. Defaults to None.

    Returns:
        None.

    Raises:
        ValueError: If the hidden size is not a multiple of the number of attention heads.

    """
    super().__init__()
    if config.hidden_size % config.num_attention_heads != 0 and not hasattr(config, "embedding_size"):
        raise ValueError(
            f"The hidden size ({config.hidden_size}) is not a multiple of the number of attention "
            f"heads ({config.num_attention_heads})"
        )

    self.num_attention_heads = config.num_attention_heads
    self.attention_head_size = int(config.hidden_size / config.num_attention_heads)
    self.all_head_size = self.num_attention_heads * self.attention_head_size

    self.query = nn.Linear(config.hidden_size, self.all_head_size)
    self.key = nn.Linear(config.hidden_size, self.all_head_size)
    self.value = nn.Linear(config.hidden_size, self.all_head_size)

    self.dropout = nn.Dropout(p=config.attention_probs_dropout_prob)
    self.position_embedding_type = position_embedding_type or getattr(
        config, "position_embedding_type", "absolute"
    )
    if self.position_embedding_type in ("relative_key_query", "relative_key"):
        self.max_position_embeddings = config.max_position_embeddings
        self.distance_embedding = nn.Embedding(2 * config.max_position_embeddings - 1, self.attention_head_size)

    self.is_decoder = config.is_decoder

mindnlp.transformers.models.roberta.modeling_roberta.RobertaSelfAttention.forward(hidden_states, attention_mask=None, head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, past_key_value=None, output_attentions=False)

Constructs the self-attention mechanism for the Roberta model.

PARAMETER DESCRIPTION
self

The instance of the RobertaSelfAttention class.

TYPE: RobertaSelfAttention

hidden_states

The input hidden states of the model. Shape: (batch_size, sequence_length, hidden_size).

TYPE: Tensor

attention_mask

The attention mask tensor. Default: None. Shape: (batch_size, sequence_length).

TYPE: Optional[Tensor] DEFAULT: None

head_mask

The head mask tensor. Default: None. Shape: (num_heads, hidden_size).

TYPE: Optional[Tensor] DEFAULT: None

encoder_hidden_states

The hidden states of the encoder. Default: None. Shape: (batch_size, encoder_sequence_length, hidden_size).

TYPE: Optional[Tensor] DEFAULT: None

encoder_attention_mask

The attention mask for the encoder. Default: None. Shape: (batch_size, encoder_sequence_length).

TYPE: Optional[Tensor] DEFAULT: None

past_key_value

The past key and value tensors. Default: None. Shape: ((batch_size, num_heads, past_sequence_length, head_size), (batch_size, num_heads, past_sequence_length, head_size)).

TYPE: Optional[Tuple[Tuple[Tensor]]] DEFAULT: None

output_attentions

Whether to output attention probabilities. Default: False.

TYPE: Optional[bool] DEFAULT: False

RETURNS DESCRIPTION
Tuple[Tensor]

Tuple[mindspore.Tensor]: A tuple containing the context layer tensor. Shape: (batch_size, sequence_length, hidden_size). Optionally, if output_attentions is True, the tuple also contains the attention probabilities tensor. Shape: (batch_size, num_heads, sequence_length, sequence_length).

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
def forward(
    self,
    hidden_states: mindspore.Tensor,
    attention_mask: Optional[mindspore.Tensor] = None,
    head_mask: Optional[mindspore.Tensor] = None,
    encoder_hidden_states: Optional[mindspore.Tensor] = None,
    encoder_attention_mask: Optional[mindspore.Tensor] = None,
    past_key_value: Optional[Tuple[Tuple[mindspore.Tensor]]] = None,
    output_attentions: Optional[bool] = False,
) -> Tuple[mindspore.Tensor]:
    """
    Constructs the self-attention mechanism for the Roberta model.

    Args:
        self (RobertaSelfAttention): The instance of the RobertaSelfAttention class.
        hidden_states (mindspore.Tensor): The input hidden states of the model.
            Shape: (batch_size, sequence_length, hidden_size).
        attention_mask (Optional[mindspore.Tensor]): The attention mask tensor. Default: None.
            Shape: (batch_size, sequence_length).
        head_mask (Optional[mindspore.Tensor]): The head mask tensor. Default: None.
            Shape: (num_heads, hidden_size).
        encoder_hidden_states (Optional[mindspore.Tensor]): The hidden states of the encoder. Default: None.
            Shape: (batch_size, encoder_sequence_length, hidden_size).
        encoder_attention_mask (Optional[mindspore.Tensor]): The attention mask for the encoder. Default: None.
            Shape: (batch_size, encoder_sequence_length).
        past_key_value (Optional[Tuple[Tuple[mindspore.Tensor]]]): The past key and value tensors. Default: None.
            Shape: ((batch_size, num_heads, past_sequence_length, head_size),
            (batch_size, num_heads, past_sequence_length, head_size)).
        output_attentions (Optional[bool]): Whether to output attention probabilities. Default: False.

    Returns:
        Tuple[mindspore.Tensor]: A tuple containing the context layer tensor.
            Shape: (batch_size, sequence_length, hidden_size). Optionally, if `output_attentions` is True,
            the tuple also contains the attention probabilities tensor.
            Shape: (batch_size, num_heads, sequence_length, sequence_length).

    Raises:
        None
    """
    mixed_query_layer = self.query(hidden_states)

    # If this is instantiated as a cross-attention module, the keys
    # and values come from an encoder; the attention mask needs to be
    # such that the encoder's padding tokens are not attended to.
    is_cross_attention = encoder_hidden_states is not None
    if is_cross_attention and past_key_value is not None:
        # reuse k,v, cross_attentions
        key_layer = past_key_value[0]
        value_layer = past_key_value[1]
        attention_mask = encoder_attention_mask
    elif is_cross_attention:
        key_layer = self.transpose_for_scores(self.key(encoder_hidden_states))
        value_layer = self.transpose_for_scores(self.value(encoder_hidden_states))
        attention_mask = encoder_attention_mask
    elif past_key_value is not None:
        key_layer = self.transpose_for_scores(self.key(hidden_states))
        value_layer = self.transpose_for_scores(self.value(hidden_states))
        key_layer = ops.cat([past_key_value[0], key_layer], dim=2)
        value_layer = ops.cat([past_key_value[1], value_layer], dim=2)
    else:
        key_layer = self.transpose_for_scores(self.key(hidden_states))
        value_layer = self.transpose_for_scores(self.value(hidden_states))

    query_layer = self.transpose_for_scores(mixed_query_layer)

    use_cache = past_key_value is not None
    if self.is_decoder:
        # if cross_attention save Tuple(mindspore.Tensor, mindspore.Tensor) of all cross attention key/value_states.
        # Further calls to cross_attention layer can then reuse all cross-attention
        # key/value_states (first "if" case)
        # if uni-directional self-attention (decoder) save Tuple(mindspore.Tensor, mindspore.Tensor) of
        # all previous decoder key/value_states. Further calls to uni-directional self-attention
        # can concat previous decoder key/value_states to current projected key/value_states (third "elif" case)
        # if encoder bi-directional self-attention `past_key_value` is always `None`
        past_key_value = (key_layer, value_layer)

    # Take the dot product between "query" and "key" to get the raw attention scores.
    attention_scores = ops.matmul(query_layer, key_layer.swapaxes(-1, -2))

    if self.position_embedding_type in ("relative_key_query", "relative_key"):
        query_length, key_length = query_layer.shape[2], key_layer.shape[2]
        if use_cache:
            position_ids_l = mindspore.Tensor(key_length - 1, dtype=mindspore.int64).view(
                -1, 1
            )
        else:
            position_ids_l = ops.arange(query_length, dtype=mindspore.int64).view(-1, 1)
        position_ids_r = ops.arange(key_length, dtype=mindspore.int64).view(1, -1)
        distance = position_ids_l - position_ids_r
        positional_embedding = self.distance_embedding(distance + self.max_position_embeddings - 1)
        positional_embedding = positional_embedding.to(dtype=query_layer.dtype)  # fp16 compatibility

        if self.position_embedding_type == "relative_key":
            relative_position_scores = ops.einsum("bhld,lrd->bhlr", query_layer, positional_embedding)
            attention_scores = attention_scores + relative_position_scores
        elif self.position_embedding_type == "relative_key_query":
            relative_position_scores_query = ops.einsum("bhld,lrd->bhlr", query_layer, positional_embedding)
            relative_position_scores_key = ops.einsum("bhrd,lrd->bhlr", key_layer, positional_embedding)
            attention_scores = attention_scores + relative_position_scores_query + relative_position_scores_key

    attention_scores = attention_scores / math.sqrt(self.attention_head_size)
    if attention_mask is not None:
        # Apply the attention mask is (precomputed for all layers in RobertaModel forward() function)
        attention_scores = attention_scores + attention_mask

    # Normalize the attention scores to probabilities.
    attention_probs = ops.softmax(attention_scores, dim=-1)

    # This is actually dropping out entire tokens to attend to, which might
    # seem a bit unusual, but is taken from the original Transformer paper.
    attention_probs = self.dropout(attention_probs)

    # Mask heads if we want to
    if head_mask is not None:
        attention_probs = attention_probs * head_mask

    context_layer = ops.matmul(attention_probs, value_layer)

    context_layer = context_layer.permute(0, 2, 1, 3)
    new_context_layer_shape = context_layer.shape[:-2] + (self.all_head_size,)
    context_layer = context_layer.view(new_context_layer_shape)

    outputs = (context_layer, attention_probs) if output_attentions else (context_layer,)

    if self.is_decoder:
        outputs = outputs + (past_key_value,)
    return outputs
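
To make the relative-position branch of forward easier to follow: the distance matrix it embeds is the pairwise difference between query and key positions, shifted by max_position_embeddings - 1 so it indexes into the distance_embedding table. The standalone sketch below reproduces just that index computation with NumPy and made-up lengths; it is not part of the model code.

```python
import numpy as np

query_length, key_length = 4, 4
max_position_embeddings = 512            # RobertaConfig default

position_ids_l = np.arange(query_length).reshape(-1, 1)   # query positions as a column
position_ids_r = np.arange(key_length).reshape(1, -1)     # key positions as a row
distance = position_ids_l - position_ids_r                # values in [-(key_length - 1), query_length - 1]

# Shift into the valid row range of the (2 * max_position_embeddings - 1)-row embedding table.
embedding_index = distance + max_position_embeddings - 1
print(embedding_index.min(), embedding_index.max())       # 508 514 for these lengths
```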

mindnlp.transformers.models.roberta.modeling_roberta.RobertaSelfAttention.transpose_for_scores(x)

Transposes the input tensor for computing self-attention scores.

PARAMETER DESCRIPTION
self

The instance of the RobertaSelfAttention class.

TYPE: RobertaSelfAttention

x

The input tensor to be transposed. It should have a shape of (batch_size, sequence_length, hidden_size).

TYPE: Tensor

RETURNS DESCRIPTION
Tensor

mindspore.Tensor: The transposed tensor with shape (batch_size, num_attention_heads, sequence_length, attention_head_size).

Note
  • The x tensor is reshaped to have dimensions (batch_size, sequence_length, num_attention_heads, attention_head_size).
  • The x tensor is then permuted to have dimensions (batch_size, num_attention_heads, sequence_length, attention_head_size).
Example
>>> config = RobertaConfig()
>>> attention = RobertaSelfAttention(config)
>>> input_tensor = mindspore.Tensor(np.random.randn(2, 5, 768), mindspore.float32)
>>> output_tensor = attention.transpose_for_scores(input_tensor)
>>> print(output_tensor.shape)
(2, 12, 5, 64)
Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
def transpose_for_scores(self, x: mindspore.Tensor) -> mindspore.Tensor:
    """
    Transposes the input tensor for computing self-attention scores.

    Args:
        self (RobertaSelfAttention): The instance of the `RobertaSelfAttention` class.
        x (mindspore.Tensor): The input tensor to be transposed.
            It should have a shape of (batch_size, sequence_length, hidden_size).

    Returns:
        mindspore.Tensor: The transposed tensor with shape
            (batch_size, num_attention_heads, sequence_length, attention_head_size).

    Raises:
        None.

    Note:
        - The `x` tensor is reshaped to have dimensions (batch_size, sequence_length, num_attention_heads,
            attention_head_size).
        - The `x` tensor is then permuted to have dimensions (batch_size, num_attention_heads, sequence_length,
            attention_head_size).

    Example:
        ```python
        >>> config = RobertaConfig()
        >>> attention = RobertaSelfAttention(config)
        >>> input_tensor = mindspore.Tensor(np.random.randn(2, 5, 768), mindspore.float32)
        >>> output_tensor = attention.transpose_for_scores(input_tensor)
        >>> print(output_tensor.shape)
        (2, 12, 5, 64)
        ```
    """
    new_x_shape = x.shape[:-1] + (self.num_attention_heads, self.attention_head_size)
    x = x.view(new_x_shape)
    return x.permute(0, 2, 1, 3)

mindnlp.transformers.models.roberta.modeling_roberta.RobertaSelfOutput

Bases: Module

This class represents the self-output module of the Roberta attention block. It applies a dense projection and dropout to the hidden states, adds the result to the input tensor, and normalizes the sum with layer normalization.

PARAMETER DESCRIPTION
config

The configuration object that contains the settings for the module.

TYPE: obj

RETURNS DESCRIPTION
Tensor

The output tensor after applying the self-output operations.

Example
>>> config = RobertaConfig(hidden_size=768, layer_norm_eps=1e-5, hidden_dropout_prob=0.1)
>>> self_output = RobertaSelfOutput(config)
>>> hidden_states = mindspore.Tensor(...)
>>> input_tensor = mindspore.Tensor(...)
>>> output = self_output.forward(hidden_states, input_tensor)
Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
class RobertaSelfOutput(nn.Module):

    """
    This class represents the self-output module of the Roberta attention block. It applies a dense projection
    and dropout to the hidden states, adds the result to the input tensor, and normalizes the sum with
    layer normalization.

    Args:
        config (obj): The configuration object that contains the settings for the module.

    Returns:
        Tensor: The output tensor after applying the self-output operations.

    Raises:
        None.

    Example:
        ```python
        >>> config = RobertaConfig(hidden_size=768, layer_norm_eps=1e-5, hidden_dropout_prob=0.1)
        >>> self_output = RobertaSelfOutput(config)
        >>> hidden_states = mindspore.Tensor(...)
        >>> input_tensor = mindspore.Tensor(...)
        >>> output = self_output.forward(hidden_states, input_tensor)
        ```
    """
    def __init__(self, config):
        """
        Initializes a new instance of the RobertaSelfOutput class.

        Args:
            self: The instance of the class.
            config:
                An instance of the configuration class containing the following attributes:

                - hidden_size (int): The size of the hidden layer.
                - layer_norm_eps (float): The epsilon value for layer normalization.
                - hidden_dropout_prob (float): The dropout probability for the hidden layer.

        Returns:
            None.

        Raises:
            TypeError: If the provided config parameter is not of the expected type.
            ValueError: If the hidden_size attribute in the config parameter is not a positive integer.
            ValueError: If the layer_norm_eps attribute in the config parameter is not a positive float.
            ValueError: If the hidden_dropout_prob attribute in the config parameter is not a float between 0 and 1.
        """
        super().__init__()
        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
        self.LayerNorm = nn.LayerNorm([config.hidden_size], eps=config.layer_norm_eps)
        self.dropout = nn.Dropout(p=config.hidden_dropout_prob)

    def forward(self, hidden_states: mindspore.Tensor, input_tensor: mindspore.Tensor) -> mindspore.Tensor:
        """
        Constructs the output of the RobertaSelfOutput layer.

        Args:
            self (RobertaSelfOutput): The instance of the RobertaSelfOutput class.
            hidden_states (mindspore.Tensor): The tensor containing the hidden states.
                This tensor should have the shape (batch_size, sequence_length, hidden_size).
            input_tensor (mindspore.Tensor): The tensor containing the input states.
                This tensor should have the same shape as the hidden_states tensor.

        Returns:
            mindspore.Tensor: The output tensor after applying the RobertaSelfOutput layer.
                This tensor has the same shape as the input_tensor.

        Raises:
            None.
        """
        hidden_states = self.dense(hidden_states)
        hidden_states = self.dropout(hidden_states)
        hidden_states = self.LayerNorm(hidden_states + input_tensor)
        return hidden_states

mindnlp.transformers.models.roberta.modeling_roberta.RobertaSelfOutput.__init__(config)

Initializes a new instance of the RobertaSelfOutput class.

PARAMETER DESCRIPTION
self

The instance of the class.

config

An instance of the configuration class containing the following attributes:

  • hidden_size (int): The size of the hidden layer.
  • layer_norm_eps (float): The epsilon value for layer normalization.
  • hidden_dropout_prob (float): The dropout probability for the hidden layer.

RETURNS DESCRIPTION

None.

RAISES DESCRIPTION
TypeError

If the provided config parameter is not of the expected type.

ValueError

If the hidden_size attribute in the config parameter is not a positive integer.

ValueError

If the layer_norm_eps attribute in the config parameter is not a positive float.

ValueError

If the hidden_dropout_prob attribute in the config parameter is not a float between 0 and 1.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
def __init__(self, config):
    """
    Initializes a new instance of the RobertaSelfOutput class.

    Args:
        self: The instance of the class.
        config:
            An instance of the configuration class containing the following attributes:

            - hidden_size (int): The size of the hidden layer.
            - layer_norm_eps (float): The epsilon value for layer normalization.
            - hidden_dropout_prob (float): The dropout probability for the hidden layer.

    Returns:
        None.

    Raises:
        TypeError: If the provided config parameter is not of the expected type.
        ValueError: If the hidden_size attribute in the config parameter is not a positive integer.
        ValueError: If the layer_norm_eps attribute in the config parameter is not a positive float.
        ValueError: If the hidden_dropout_prob attribute in the config parameter is not a float between 0 and 1.
    """
    super().__init__()
    self.dense = nn.Linear(config.hidden_size, config.hidden_size)
    self.LayerNorm = nn.LayerNorm([config.hidden_size], eps=config.layer_norm_eps)
    self.dropout = nn.Dropout(p=config.hidden_dropout_prob)

mindnlp.transformers.models.roberta.modeling_roberta.RobertaSelfOutput.forward(hidden_states, input_tensor)

Constructs the output of the RobertaSelfOutput layer.

PARAMETER DESCRIPTION
self

The instance of the RobertaSelfOutput class.

TYPE: RobertaSelfOutput

hidden_states

The tensor containing the hidden states. This tensor should have the shape (batch_size, sequence_length, hidden_size).

TYPE: Tensor

input_tensor

The tensor containing the input states. This tensor should have the same shape as the hidden_states tensor.

TYPE: Tensor

RETURNS DESCRIPTION
Tensor

mindspore.Tensor: The output tensor after applying the RobertaSelfOutput layer. This tensor has the same shape as the input_tensor.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
def forward(self, hidden_states: mindspore.Tensor, input_tensor: mindspore.Tensor) -> mindspore.Tensor:
    """
    Constructs the output of the RobertaSelfOutput layer.

    Args:
        self (RobertaSelfOutput): The instance of the RobertaSelfOutput class.
        hidden_states (mindspore.Tensor): The tensor containing the hidden states.
            This tensor should have the shape (batch_size, sequence_length, hidden_size).
        input_tensor (mindspore.Tensor): The tensor containing the input states.
            This tensor should have the same shape as the hidden_states tensor.

    Returns:
        mindspore.Tensor: The output tensor after applying the RobertaSelfOutput layer.
            This tensor has the same shape as the input_tensor.

    Raises:
        None.
    """
    hidden_states = self.dense(hidden_states)
    hidden_states = self.dropout(hidden_states)
    hidden_states = self.LayerNorm(hidden_states + input_tensor)
    return hidden_states

mindnlp.transformers.models.roberta.modeling_roberta.create_position_ids_from_input_ids(input_ids, padding_idx, past_key_values_length=0)

Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols are ignored. This is modified from fairseq's utils.make_positions.

PARAMETER DESCRIPTION
input_ids

The tensor of token ids; positions equal to padding_idx are treated as padding.

TYPE: Tensor

padding_idx

The id of the padding token, which keeps its position id and is not counted.

TYPE: int

past_key_values_length

Number of previously cached positions by which the new position ids are offset.

TYPE: int DEFAULT: 0

RETURNS DESCRIPTION

mindspore.Tensor: Position ids with the same shape as input_ids; non-padding tokens are numbered from padding_idx + 1.

Source code in mindnlp\transformers\models\roberta\modeling_roberta.py
def create_position_ids_from_input_ids(
    input_ids, padding_idx, past_key_values_length=0
):
    """
    Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols
    are ignored. This is modified from fairseq's `utils.make_positions`.

    Args:
        input_ids: mindspore.Tensor of token ids; positions equal to `padding_idx` are treated as padding.
        padding_idx: The id of the padding token, which keeps its position id and is not counted.
        past_key_values_length: Number of previously cached positions by which the new ids are offset. Default: 0.

    Returns:
        mindspore.Tensor: Position ids with the same shape as `input_ids`.
    """
    # The series of casts and type-conversions here are carefully balanced to both work with ONNX export and XLA.
    mask = input_ids.ne(padding_idx).int()
    incremental_indices = (
        ops.cumsum(mask, dim=1).astype(mask.dtype) + past_key_values_length
    ) * mask
    return incremental_indices.long() + padding_idx
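
A quick illustration of the formula above (non-padding tokens are numbered from padding_idx + 1, padding positions keep padding_idx). The token ids below are made up; padding_idx=1 matches the default pad_token_id in RobertaConfig.

```python
import mindspore
from mindnlp.transformers.models.roberta.modeling_roberta import create_position_ids_from_input_ids

# One sequence of four real tokens followed by two padding tokens (pad id = 1).
input_ids = mindspore.Tensor([[0, 31414, 232, 2, 1, 1]], mindspore.int64)

position_ids = create_position_ids_from_input_ids(input_ids, padding_idx=1)
print(position_ids)   # [[2, 3, 4, 5, 1, 1]] -> real tokens start at padding_idx + 1

# With three previously cached positions, the new position ids are shifted by that length.
shifted = create_position_ids_from_input_ids(input_ids, padding_idx=1, past_key_values_length=3)
print(shifted)        # [[5, 6, 7, 8, 1, 1]]
```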

mindnlp.transformers.models.roberta.configuration_roberta

RoBERTa configuration

mindnlp.transformers.models.roberta.configuration_roberta.RobertaConfig

Bases: PretrainedConfig

Roberta Config.

Source code in mindnlp\transformers\models\roberta\configuration_roberta.py
class RobertaConfig(PretrainedConfig):
    """Roberta Config."""
    model_type = "roberta"

    def __init__(
        self,
        vocab_size=50265,
        hidden_size=768,
        num_hidden_layers=12,
        num_attention_heads=12,
        intermediate_size=3072,
        hidden_act="gelu",
        hidden_dropout_prob=0.1,
        attention_probs_dropout_prob=0.1,
        max_position_embeddings=512,
        type_vocab_size=2,
        initializer_range=0.02,
        layer_norm_eps=1e-12,
        pad_token_id=1,
        bos_token_id=0,
        eos_token_id=2,
        position_embedding_type="absolute",
        use_cache=True,
        classifier_dropout=None,
        **kwargs,
    ):
        """
        This method initializes an instance of the RobertaConfig class.

        Args:
            vocab_size (int): The size of the vocabulary. Default is 50265.
            hidden_size (int): The size of the hidden layers and the size of the embeddings. Default is 768.
            num_hidden_layers (int): The number of hidden layers in the model. Default is 12.
            num_attention_heads (int): The number of attention heads for each layer. Default is 12.
            intermediate_size (int): The size of the "intermediate" (i.e., feed-forward) layer in the transformer.
                Default is 3072.
            hidden_act (str): The non-linear activation function for the hidden layers. Default is 'gelu'.
            hidden_dropout_prob (float): The dropout probability for all fully connected layers in the embeddings
                and transformer layers. Default is 0.1.
            attention_probs_dropout_prob (float): The dropout probability for the attention probabilities.
                Default is 0.1.
            max_position_embeddings (int): The maximum sequence length that this model might ever be used with.
                Default is 512.
            type_vocab_size (int): The size of the "type" vocabulary. Default is 2.
            initializer_range (float): The standard deviation of the truncated_normal_initializer for initializing
                all weight matrices. Default is 0.02.
            layer_norm_eps (float): The epsilon used by LayerNorm layers. Default is 1e-12.
            pad_token_id (int): The id of the padding token. Default is 1.
            bos_token_id (int): The id of the beginning of the sequence token. Default is 0.
            eos_token_id (int): The id of the end of the sequence token. Default is 2.
            position_embedding_type (str): The type of position embedding. Default is 'absolute'.
            use_cache (bool): Whether or not to use caching for the model. Default is True.
            classifier_dropout (float): The dropout probability for the classifier. Default is None.
            **kwargs: Additional keyword arguments.

        Returns:
            None.

        Raises:
            None.
        """
        super().__init__(pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs)

        self.vocab_size = vocab_size
        self.hidden_size = hidden_size
        self.num_hidden_layers = num_hidden_layers
        self.num_attention_heads = num_attention_heads
        self.hidden_act = hidden_act
        self.intermediate_size = intermediate_size
        self.hidden_dropout_prob = hidden_dropout_prob
        self.attention_probs_dropout_prob = attention_probs_dropout_prob
        self.max_position_embeddings = max_position_embeddings
        self.type_vocab_size = type_vocab_size
        self.initializer_range = initializer_range
        self.layer_norm_eps = layer_norm_eps
        self.position_embedding_type = position_embedding_type
        self.use_cache = use_cache
        self.classifier_dropout = classifier_dropout
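
The defaults above reproduce a roberta-base-sized architecture. As a minimal, illustrative sketch (the downsized values are arbitrary, chosen only so hidden_size stays divisible by num_attention_heads):

```python
from mindnlp.transformers.models.roberta.configuration_roberta import RobertaConfig

# Default configuration: roberta-base sized (768 hidden units, 12 layers, 12 heads).
config = RobertaConfig()
assert config.hidden_size == 768 and config.num_hidden_layers == 12

# A smaller configuration for quick experiments; values are illustrative only.
tiny_config = RobertaConfig(
    hidden_size=256,
    num_hidden_layers=4,
    num_attention_heads=4,
    intermediate_size=1024,
)
```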

mindnlp.transformers.models.roberta.configuration_roberta.RobertaConfig.__init__(vocab_size=50265, hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072, hidden_act='gelu', hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, max_position_embeddings=512, type_vocab_size=2, initializer_range=0.02, layer_norm_eps=1e-12, pad_token_id=1, bos_token_id=0, eos_token_id=2, position_embedding_type='absolute', use_cache=True, classifier_dropout=None, **kwargs)

This method initializes an instance of the RobertaConfig class.

PARAMETER DESCRIPTION
vocab_size

The size of the vocabulary. Default is 50265.

TYPE: int DEFAULT: 50265

hidden_size

The size of the hidden layers and the size of the embeddings. Default is 768.

TYPE: int DEFAULT: 768

num_hidden_layers

The number of hidden layers in the model. Default is 12.

TYPE: int DEFAULT: 12

num_attention_heads

The number of attention heads for each layer. Default is 12.

TYPE: int DEFAULT: 12

intermediate_size

The size of the "intermediate" (i.e., feed-forward) layer in the transformer. Default is 3072.

TYPE: int DEFAULT: 3072

hidden_act

The non-linear activation function for the hidden layers. Default is 'gelu'.

TYPE: str DEFAULT: 'gelu'

hidden_dropout_prob

The dropout probability for all fully connected layers in the embeddings and transformer layers. Default is 0.1.

TYPE: float DEFAULT: 0.1

attention_probs_dropout_prob

The dropout probability for the attention probabilities. Default is 0.1.

TYPE: float DEFAULT: 0.1

max_position_embeddings

The maximum sequence length that this model might ever be used with. Default is 512.

TYPE: int DEFAULT: 512

type_vocab_size

The size of the "type" vocabulary. Default is 2.

TYPE: int DEFAULT: 2

initializer_range

The standard deviation of the truncated_normal_initializer for initializing all weight matrices. Default is 0.02.

TYPE: float DEFAULT: 0.02

layer_norm_eps

The epsilon used by LayerNorm layers. Default is 1e-12.

TYPE: float DEFAULT: 1e-12

pad_token_id

The id of the padding token. Default is 1.

TYPE: int DEFAULT: 1

bos_token_id

The id of the beginning of the sequence token. Default is 0.

TYPE: int DEFAULT: 0

eos_token_id

The id of the end of the sequence token. Default is 2.

TYPE: int DEFAULT: 2

position_embedding_type

The type of position embedding. Default is 'absolute'.

TYPE: str DEFAULT: 'absolute'

use_cache

Whether or not to use caching for the model. Default is True.

TYPE: bool DEFAULT: True

classifier_dropout

The dropout probability for the classifier. Default is None.

TYPE: float DEFAULT: None

**kwargs

Additional keyword arguments.

DEFAULT: {}

RETURNS DESCRIPTION

None.

Source code in mindnlp\transformers\models\roberta\configuration_roberta.py
def __init__(
    self,
    vocab_size=50265,
    hidden_size=768,
    num_hidden_layers=12,
    num_attention_heads=12,
    intermediate_size=3072,
    hidden_act="gelu",
    hidden_dropout_prob=0.1,
    attention_probs_dropout_prob=0.1,
    max_position_embeddings=512,
    type_vocab_size=2,
    initializer_range=0.02,
    layer_norm_eps=1e-12,
    pad_token_id=1,
    bos_token_id=0,
    eos_token_id=2,
    position_embedding_type="absolute",
    use_cache=True,
    classifier_dropout=None,
    **kwargs,
):
    """
    This method initializes an instance of the RobertaConfig class.

    Args:
        vocab_size (int): The size of the vocabulary. Default is 50265.
        hidden_size (int): The size of the hidden layers and the size of the embeddings. Default is 768.
        num_hidden_layers (int): The number of hidden layers in the model. Default is 12.
        num_attention_heads (int): The number of attention heads for each layer. Default is 12.
        intermediate_size (int): The size of the "intermediate" (i.e., feed-forward) layer in the transformer.
            Default is 3072.
        hidden_act (str): The non-linear activation function for the hidden layers. Default is 'gelu'.
        hidden_dropout_prob (float): The dropout probability for all fully connected layers in the embeddings
            and transformer layers. Default is 0.1.
        attention_probs_dropout_prob (float): The dropout probability for the attention probabilities.
            Default is 0.1.
        max_position_embeddings (int): The maximum sequence length that this model might ever be used with.
            Default is 512.
        type_vocab_size (int): The size of the "type" vocabulary. Default is 2.
        initializer_range (float): The standard deviation of the truncated_normal_initializer for initializing
            all weight matrices. Default is 0.02.
        layer_norm_eps (float): The epsilon used by LayerNorm layers. Default is 1e-12.
        pad_token_id (int): The id of the padding token. Default is 1.
        bos_token_id (int): The id of the beginning of the sequence token. Default is 0.
        eos_token_id (int): The id of the end of the sequence token. Default is 2.
        position_embedding_type (str): The type of position embedding. Default is 'absolute'.
        use_cache (bool): Whether or not to use caching for the model. Default is True.
        classifier_dropout (float): The dropout probability for the classifier. Default is None.
        **kwargs: Additional keyword arguments.

    Returns:
        None.

    Raises:
        None.
    """
    super().__init__(pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs)

    self.vocab_size = vocab_size
    self.hidden_size = hidden_size
    self.num_hidden_layers = num_hidden_layers
    self.num_attention_heads = num_attention_heads
    self.hidden_act = hidden_act
    self.intermediate_size = intermediate_size
    self.hidden_dropout_prob = hidden_dropout_prob
    self.attention_probs_dropout_prob = attention_probs_dropout_prob
    self.max_position_embeddings = max_position_embeddings
    self.type_vocab_size = type_vocab_size
    self.initializer_range = initializer_range
    self.layer_norm_eps = layer_norm_eps
    self.position_embedding_type = position_embedding_type
    self.use_cache = use_cache
    self.classifier_dropout = classifier_dropout
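
Example (a minimal usage sketch; it assumes `RobertaConfig` is importable from the module path shown above):
>>> from mindnlp.transformers.models.roberta.configuration_roberta import RobertaConfig
>>> # The default configuration uses roberta-base-sized hyperparameters.
>>> config = RobertaConfig()
>>> (config.hidden_size, config.num_hidden_layers, config.num_attention_heads)
(768, 12, 12)
>>> # Override selected fields; everything else keeps its default.
>>> small = RobertaConfig(num_hidden_layers=6, intermediate_size=1536)
>>> (small.num_hidden_layers, small.vocab_size)
(6, 50265)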

mindnlp.transformers.models.roberta.tokenization_roberta

Tokenization classes for RoBERTa.

mindnlp.transformers.models.roberta.tokenization_roberta.RobertaTokenizer

Bases: PreTrainedTokenizer

Constructs a RoBERTa tokenizer, derived from the GPT-2 tokenizer, using byte-level Byte-Pair-Encoding.

This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will be encoded differently depending on whether it is at the beginning of the sentence (without a space) or not:

Example
>>> from transformers import RobertaTokenizer
...
>>> tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
>>> tokenizer("Hello world")["input_ids"]
[0, 31414, 232, 2]
>>> tokenizer(" Hello world")["input_ids"]
[0, 20920, 232, 2]

You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.

When used with is_split_into_words=True, this tokenizer will add a space before each word (even the first one).

This tokenizer inherits from [PreTrainedTokenizer] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
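
Example (a sketch of the `add_prefix_space=True` workaround mentioned above; assuming the `roberta-base` checkpoint, the ids should then match the leading-space encoding shown earlier):
>>> tokenizer = RobertaTokenizer.from_pretrained("roberta-base", add_prefix_space=True)
>>> tokenizer("Hello world")["input_ids"]
[0, 20920, 232, 2]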

PARAMETER DESCRIPTION
vocab_file

Path to the vocabulary file.

TYPE: `str`

merges_file

Path to the merges file.

TYPE: `str`

errors

Paradigm to follow when decoding bytes to UTF-8. See bytes.decode for more information.

TYPE: `str`, *optional*, defaults to `"replace"` DEFAULT: 'replace'

bos_token

The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.

When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token.

TYPE: `str`, *optional*, defaults to `"<s>"` DEFAULT: '<s>'

eos_token

The end of sequence token.

When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token.

TYPE: `str`, *optional*, defaults to `"</s>"` DEFAULT: '</s>'

sep_token

The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.

TYPE: `str`, *optional*, defaults to `"</s>"` DEFAULT: '</s>'

cls_token

The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.

TYPE: `str`, *optional*, defaults to `"<s>"` DEFAULT: '<s>'

unk_token

The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.

TYPE: `str`, *optional*, defaults to `"<unk>"` DEFAULT: '<unk>'

pad_token

The token used for padding, for example when batching sequences of different lengths.

TYPE: `str`, *optional*, defaults to `"<pad>"` DEFAULT: '<pad>'

mask_token

The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.

TYPE: `str`, *optional*, defaults to `"<mask>"` DEFAULT: '<mask>'

add_prefix_space

Whether or not to add an initial space to the input. This allows the leading word to be treated just like any other word. (The RoBERTa tokenizer detects the beginning of a word by the preceding space.)

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

Source code in mindnlp\transformers\models\roberta\tokenization_roberta.py
class RobertaTokenizer(PreTrainedTokenizer):
    """
    Constructs a RoBERTa tokenizer, derived from the GPT-2 tokenizer, using byte-level Byte-Pair-Encoding.

    This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will
    be encoded differently whether it is at the beginning of the sentence (without space) or not:

    Example:
        ```python
        >>> from transformers import RobertaTokenizer
        ...
        >>> tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
        >>> tokenizer("Hello world")["input_ids"]
        [0, 31414, 232, 2]
        >>> tokenizer(" Hello world")["input_ids"]
        [0, 20920, 232, 2]
        ```

    You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer or when you
    call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.

    <Tip>

    When used with `is_split_into_words=True`, this tokenizer will add a space before each word (even the first one).

    </Tip>

    This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
    this superclass for more information regarding those methods.

    Args:
        vocab_file (`str`):
            Path to the vocabulary file.
        merges_file (`str`):
            Path to the merges file.
        errors (`str`, *optional*, defaults to `"replace"`):
            Paradigm to follow when decoding bytes to UTF-8. See
            [bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
        bos_token (`str`, *optional*, defaults to `"<s>"`):
            The beginning of sequence token that was used during pretraining. Can be used a sequence classifier token.

            <Tip>

            When building a sequence using special tokens, this is not the token that is used for the beginning of
            sequence. The token used is the `cls_token`.

            </Tip>

        eos_token (`str`, *optional*, defaults to `"</s>"`):
            The end of sequence token.

            <Tip>

            When building a sequence using special tokens, this is not the token that is used for the end of sequence.
            The token used is the `sep_token`.

            </Tip>

        sep_token (`str`, *optional*, defaults to `"</s>"`):
            The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
            sequence classification or for a text and a question for question answering. It is also used as the last
            token of a sequence built with special tokens.
        cls_token (`str`, *optional*, defaults to `"<s>"`):
            The classifier token which is used when doing sequence classification (classification of the whole sequence
            instead of per-token classification). It is the first token of the sequence when built with special tokens.
        unk_token (`str`, *optional*, defaults to `"<unk>"`):
            The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
            token instead.
        pad_token (`str`, *optional*, defaults to `"<pad>"`):
            The token used for padding, for example when batching sequences of different lengths.
        mask_token (`str`, *optional*, defaults to `"<mask>"`):
            The token used for masking values. This is the token used when training this model with masked language
            modeling. This is the token which the model will try to predict.
        add_prefix_space (`bool`, *optional*, defaults to `False`):
            Whether or not to add an initial space to the input. This allows to treat the leading word just as any
            other word. (RoBERTa tokenizer detect beginning of words by the preceding space).
    """
    vocab_files_names = VOCAB_FILES_NAMES
    pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
    max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
    model_input_names = ["input_ids", "attention_mask"]

    def __init__(
        self,
        vocab_file,
        merges_file,
        errors="replace",
        bos_token="<s>",
        eos_token="</s>",
        sep_token="</s>",
        cls_token="<s>",
        unk_token="<unk>",
        pad_token="<pad>",
        mask_token="<mask>",
        add_prefix_space=False,
        **kwargs,
    ):
        """
        This method initializes an instance of the RobertaTokenizer class.

        Args:
            self: The instance of the class.
            vocab_file (str): The path to the vocabulary file.
            merges_file (str): The path to the merges file.
            errors (str, optional): The error handling scheme for encoding and decoding. Defaults to 'replace'.
            bos_token (str, optional): The beginning of sequence token. Defaults to '<s>'.
            eos_token (str, optional): The end of sequence token. Defaults to '</s>'.
            sep_token (str, optional): The separator token. Defaults to '</s>'.
            cls_token (str, optional): The classification token. Defaults to '<s>'.
            unk_token (str, optional): The unknown token. Defaults to '<unk>'.
            pad_token (str, optional): The padding token. Defaults to '<pad>'.
            mask_token (str, optional): The mask token. Defaults to '<mask>'.
            add_prefix_space (bool, optional): Whether to add prefix space. Defaults to False.
            **kwargs: Additional keyword arguments.

        Returns:
            None.

        Raises:
            FileNotFoundError: If vocab_file or merges_file is not found.
            ValueError: If an invalid argument is provided for any token parameter.
            IOError: If an I/O error occurs when handling the files.
        """
        bos_token = AddedToken(bos_token, lstrip=False, rstrip=False) if isinstance(bos_token, str) else bos_token
        pad_token = AddedToken(pad_token, lstrip=False, rstrip=False) if isinstance(pad_token, str) else pad_token
        eos_token = AddedToken(eos_token, lstrip=False, rstrip=False) if isinstance(eos_token, str) else eos_token
        unk_token = AddedToken(unk_token, lstrip=False, rstrip=False) if isinstance(unk_token, str) else unk_token
        sep_token = AddedToken(sep_token, lstrip=False, rstrip=False) if isinstance(sep_token, str) else sep_token
        cls_token = AddedToken(cls_token, lstrip=False, rstrip=False) if isinstance(cls_token, str) else cls_token

        # Mask token behave like a normal word, i.e. include the space before it
        mask_token = (
            AddedToken(mask_token, lstrip=True, rstrip=False, normalized=False)
            if isinstance(mask_token, str)
            else mask_token
        )

        # these special tokens are not part of the vocab.json, let's add them in the correct order

        with open(vocab_file, encoding="utf-8") as vocab_handle:
            self.encoder = json.load(vocab_handle)
        self.decoder = {v: k for k, v in self.encoder.items()}
        self.errors = errors  # how to handle errors in decoding
        self.byte_encoder = bytes_to_unicode()
        self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
        with open(merges_file, encoding="utf-8") as merges_handle:
            bpe_merges = merges_handle.read().split("\n")[1:-1]
        bpe_merges = [tuple(merge.split()) for merge in bpe_merges]
        self.bpe_ranks = dict(zip(bpe_merges, range(len(bpe_merges))))
        self.cache = {}
        self.add_prefix_space = add_prefix_space

        # Should have added re.IGNORECASE so BPE merges can happen for capitalized versions of contractions
        self.pat = re.compile(r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+""")

        super().__init__(
            errors=errors,
            bos_token=bos_token,
            eos_token=eos_token,
            unk_token=unk_token,
            sep_token=sep_token,
            cls_token=cls_token,
            pad_token=pad_token,
            mask_token=mask_token,
            add_prefix_space=add_prefix_space,
            **kwargs,
        )

    @property
    def vocab_size(self):
        """
        Returns the size of the vocabulary used by the RobertaTokenizer instance.

        Args:
            self (RobertaTokenizer): The instance of the RobertaTokenizer class.

        Returns:
            int: The number of unique tokens in the vocabulary of the tokenizer.

        Raises:
            None.
        """
        return len(self.encoder)

    def get_vocab(self):
        """
        Returns a vocabulary dictionary containing both the base encoder and any additional tokens added to the tokenizer.

        Args:
            self (RobertaTokenizer): An instance of the RobertaTokenizer class.

        Returns:
            dict or None:
                The vocabulary dictionary containing the base encoder and any additional tokens added to the tokenizer.
                If the tokenizer has not been initialized with a base encoder or any additional tokens, None is returned.

        Raises:
            None.

        Note:
            The vocabulary dictionary is created by copying the base encoder dictionary and updating it with the
            added_tokens_encoder dictionary. The base encoder dictionary contains the original encoding for the tokenizer,
            while the added_tokens_encoder dictionary contains any additional tokens that have been added to the tokenizer.

        Example:
            ```python
            >>> tokenizer = RobertaTokenizer()
            >>> vocab = tokenizer.get_vocab()
            >>> print(vocab)
            {'<s>': 0, '<pad>': 1, '</s>': 2, '<unk>': 3, '<mask>': 4}
            ```
        """
        vocab = dict(self.encoder).copy()
        vocab.update(self.added_tokens_encoder)
        return vocab

    def bpe(self, token):
        """
        This method is a part of the RobertaTokenizer class and implements Byte-Pair Encoding (BPE) for tokenizing input tokens.

        Args:
            self (RobertaTokenizer): The instance of the RobertaTokenizer class.
            token (str): The input token to be tokenized using BPE. It is a string representing a single token to
                be processed. Must not be None.

        Returns:
            str: The token after applying the Byte-Pair Encoding process. Returns the token as a string after processing
                it through the BPE algorithm.

        Raises:
            ValueError: If the input token is None or empty.
            TypeError: If the input token is not a string.
            KeyError: If the input token is not found in the cache attribute of the RobertaTokenizer instance.
            IndexError: If an index is out of range while processing the token.
            Exception: Any other unexpected exceptions may be raised during the BPE process.
        """
        if token in self.cache:
            return self.cache[token]
        word = tuple(token)
        pairs = get_pairs(word)

        if not pairs:
            return token

        while True:
            bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf")))
            if bigram not in self.bpe_ranks:
                break
            first, second = bigram
            new_word = []
            i = 0
            while i < len(word):
                try:
                    j = word.index(first, i)
                except ValueError:
                    new_word.extend(word[i:])
                    break
                else:
                    new_word.extend(word[i:j])
                    i = j

                if word[i] == first and i < len(word) - 1 and word[i + 1] == second:
                    new_word.append(first + second)
                    i += 2
                else:
                    new_word.append(word[i])
                    i += 1
            new_word = tuple(new_word)
            word = new_word
            if len(word) == 1:
                break
            pairs = get_pairs(word)
        word = " ".join(word)
        self.cache[token] = word
        return word

    def _tokenize(self, text):
        """Tokenize a string."""
        bpe_tokens = []
        for token in re.findall(self.pat, text):
            token = "".join(
                self.byte_encoder[b] for b in token.encode("utf-8")
            )  # Maps all our bytes to unicode strings, avoiding control tokens of the BPE (spaces in our case)
            bpe_tokens.extend(bpe_token for bpe_token in self.bpe(token).split(" "))
        return bpe_tokens

    def _convert_token_to_id(self, token):
        """Converts a token (str) in an id using the vocab."""
        return self.encoder.get(token, self.encoder.get(self.unk_token))

    def _convert_id_to_token(self, index):
        """Converts an index (integer) in a token (str) using the vocab."""
        return self.decoder.get(index)

    def convert_tokens_to_string(self, tokens):
        """Converts a sequence of tokens (string) in a single string."""
        text = "".join(tokens)
        text = bytearray([self.byte_decoder[c] for c in text]).decode("utf-8", errors=self.errors)
        return text

    def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
        """
        Saves the vocabulary and merge files of the RobertaTokenizer.

        Args:
            self (RobertaTokenizer): An instance of the RobertaTokenizer class.
            save_directory (str): The directory where the vocabulary and merge files will be saved.
            filename_prefix (Optional[str], optional): The prefix to be added to the filenames. Defaults to None.

        Returns:
            Tuple[str]: A tuple containing the paths of the saved vocabulary and merge files.

        Raises:
            OSError: If the save_directory is not a valid directory.

        This method saves the vocabulary file and merge file used by the RobertaTokenizer.
        The vocabulary file contains the encoder dictionary in JSON format, while the merge file contains the BPE merge
        indices and tokens. The files are saved in the specified save_directory with optional filename_prefix added to
        the filenames.

        Note:
            If the save_directory does not exist, the method will raise an OSError.

        Example:
            ```python
            >>> tokenizer = RobertaTokenizer()
            >>> tokenizer.save_vocabulary('/path/to/save')
            ('/path/to/save/vocab.txt', '/path/to/save/merges.txt')
            ```
        """
        if not os.path.isdir(save_directory):
            logger.error(f"Vocabulary path ({save_directory}) should be a directory")
            return
        vocab_file = os.path.join(
            save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
        )
        merge_file = os.path.join(
            save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["merges_file"]
        )

        with open(vocab_file, "w", encoding="utf-8") as f:
            f.write(json.dumps(self.encoder, indent=2, sort_keys=True, ensure_ascii=False) + "\n")

        index = 0
        with open(merge_file, "w", encoding="utf-8") as writer:
            writer.write("#version: 0.2\n")
            for bpe_tokens, token_index in sorted(self.bpe_ranks.items(), key=lambda kv: kv[1]):
                if index != token_index:
                    logger.warning(
                        f"Saving vocabulary to {merge_file}: BPE merge indices are not consecutive."
                        " Please check that the tokenizer is not corrupted!"
                    )
                    index = token_index
                writer.write(" ".join(bpe_tokens) + "\n")
                index += 1

        return vocab_file, merge_file

    def build_inputs_with_special_tokens(
        self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
    ) -> List[int]:
        """
        Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
        adding special tokens. A RoBERTa sequence has the following format:

        - single sequence: `<s> X </s>`
        - pair of sequences: `<s> A </s></s> B </s>`

        Args:
            token_ids_0 (`List[int]`):
                List of IDs to which the special tokens will be added.
            token_ids_1 (`List[int]`, *optional*):
                Optional second list of IDs for sequence pairs.

        Returns:
            `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens.
        """
        if token_ids_1 is None:
            return [self.cls_token_id] + token_ids_0 + [self.sep_token_id]
        cls = [self.cls_token_id]
        sep = [self.sep_token_id]
        return cls + token_ids_0 + sep + sep + token_ids_1 + sep

    def get_special_tokens_mask(
        self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
    ) -> List[int]:
        """
        Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
        special tokens using the tokenizer `prepare_for_model` method.

        Args:
            token_ids_0 (`List[int]`):
                List of IDs.
            token_ids_1 (`List[int]`, *optional*):
                Optional second list of IDs for sequence pairs.
            already_has_special_tokens (`bool`, *optional*, defaults to `False`):
                Whether or not the token list is already formatted with special tokens for the model.

        Returns:
            `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
        """
        if already_has_special_tokens:
            return super().get_special_tokens_mask(
                token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True
            )

        if token_ids_1 is None:
            return [1] + ([0] * len(token_ids_0)) + [1]
        return [1] + ([0] * len(token_ids_0)) + [1, 1] + ([0] * len(token_ids_1)) + [1]

    def create_token_type_ids_from_sequences(
        self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
    ) -> List[int]:
        """
        Create a mask from the two sequences passed to be used in a sequence-pair classification task. RoBERTa does not
        make use of token type ids, therefore a list of zeros is returned.

        Args:
            token_ids_0 (`List[int]`):
                List of IDs.
            token_ids_1 (`List[int]`, *optional*):
                Optional second list of IDs for sequence pairs.

        Returns:
            `List[int]`: List of zeros.
        """
        sep = [self.sep_token_id]
        cls = [self.cls_token_id]

        if token_ids_1 is None:
            return len(cls + token_ids_0 + sep) * [0]
        return len(cls + token_ids_0 + sep + sep + token_ids_1 + sep) * [0]

    def prepare_for_tokenization(self, text, is_split_into_words=False, **kwargs):
        """
        Prepares the given text for tokenization by adding a prefix space if necessary.

        Args:
            self (RobertaTokenizer): The instance of the RobertaTokenizer class.
            text (str): The input text to be prepared for tokenization.
            is_split_into_words (bool, optional): A flag indicating whether the text is already split into words.
                Defaults to False.
            **kwargs: Additional keyword arguments.
                add_prefix_space (bool, optional):

                A flag indicating whether a prefix space should be added to the text.
                If not provided, the value from the self.add_prefix_space attribute will be used.

        Returns:
            str: The prepared text after adding a prefix space if required.

        Raises:
            None

        Note:
            - If is_split_into_words is True or add_prefix_space is True, and the text is not empty and does not start
            with a space, a prefix space will be added to the text.
            - The original kwargs dictionary is modified by removing the 'add_prefix_space' key using the pop() method.

        Example:
            ```python
            >>> tokenizer = RobertaTokenizer()
            >>> prepared_text = tokenizer.prepare_for_tokenization("Hello world!", is_split_into_words=True)
            >>> print(prepared_text)
            >>> # Output: " Hello world!"
            ```
        """
        add_prefix_space = kwargs.pop("add_prefix_space", self.add_prefix_space)
        if (is_split_into_words or add_prefix_space) and (len(text) > 0 and not text[0].isspace()):
            text = " " + text
        return (text, kwargs)

mindnlp.transformers.models.roberta.tokenization_roberta.RobertaTokenizer.vocab_size property

Returns the size of the vocabulary used by the RobertaTokenizer instance.

PARAMETER DESCRIPTION
self

The instance of the RobertaTokenizer class.

TYPE: RobertaTokenizer

RETURNS DESCRIPTION
int

The number of unique tokens in the vocabulary of the tokenizer.

mindnlp.transformers.models.roberta.tokenization_roberta.RobertaTokenizer.__init__(vocab_file, merges_file, errors='replace', bos_token='<s>', eos_token='</s>', sep_token='</s>', cls_token='<s>', unk_token='<unk>', pad_token='<pad>', mask_token='<mask>', add_prefix_space=False, **kwargs)

This method initializes an instance of the RobertaTokenizer class.

PARAMETER DESCRIPTION
self

The instance of the class.

vocab_file

The path to the vocabulary file.

TYPE: str

merges_file

The path to the merges file.

TYPE: str

errors

The error handling scheme for encoding and decoding. Defaults to 'replace'.

TYPE: str DEFAULT: 'replace'

bos_token

The beginning of sequence token. Defaults to '<s>'.

TYPE: str DEFAULT: '<s>'

eos_token

The end of sequence token. Defaults to '</s>'.

TYPE: str DEFAULT: '</s>'

sep_token

The separator token. Defaults to '</s>'.

TYPE: str DEFAULT: '</s>'

cls_token

The classification token. Defaults to '<s>'.

TYPE: str DEFAULT: '<s>'

unk_token

The unknown token. Defaults to '<unk>'.

TYPE: str DEFAULT: '<unk>'

pad_token

The padding token. Defaults to '<pad>'.

TYPE: str DEFAULT: '<pad>'

mask_token

The mask token. Defaults to '<mask>'.

TYPE: str DEFAULT: '<mask>'

add_prefix_space

Whether to add prefix space. Defaults to False.

TYPE: bool DEFAULT: False

**kwargs

Additional keyword arguments.

DEFAULT: {}

RETURNS DESCRIPTION

None.

RAISES DESCRIPTION
FileNotFoundError

If vocab_file or merges_file is not found.

ValueError

If an invalid argument is provided for any token parameter.

IOError

If an I/O error occurs when handling the files.

Source code in mindnlp\transformers\models\roberta\tokenization_roberta.py
def __init__(
    self,
    vocab_file,
    merges_file,
    errors="replace",
    bos_token="<s>",
    eos_token="</s>",
    sep_token="</s>",
    cls_token="<s>",
    unk_token="<unk>",
    pad_token="<pad>",
    mask_token="<mask>",
    add_prefix_space=False,
    **kwargs,
):
    """
    This method initializes an instance of the RobertaTokenizer class.

    Args:
        self: The instance of the class.
        vocab_file (str): The path to the vocabulary file.
        merges_file (str): The path to the merges file.
        errors (str, optional): The error handling scheme for encoding and decoding. Defaults to 'replace'.
        bos_token (str, optional): The beginning of sequence token. Defaults to '<s>'.
        eos_token (str, optional): The end of sequence token. Defaults to '</s>'.
        sep_token (str, optional): The separator token. Defaults to '</s>'.
        cls_token (str, optional): The classification token. Defaults to '<s>'.
        unk_token (str, optional): The unknown token. Defaults to '<unk>'.
        pad_token (str, optional): The padding token. Defaults to '<pad>'.
        mask_token (str, optional): The mask token. Defaults to '<mask>'.
        add_prefix_space (bool, optional): Whether to add prefix space. Defaults to False.
        **kwargs: Additional keyword arguments.

    Returns:
        None.

    Raises:
        FileNotFoundError: If vocab_file or merges_file is not found.
        ValueError: If an invalid argument is provided for any token parameter.
        IOError: If an I/O error occurs when handling the files.
    """
    bos_token = AddedToken(bos_token, lstrip=False, rstrip=False) if isinstance(bos_token, str) else bos_token
    pad_token = AddedToken(pad_token, lstrip=False, rstrip=False) if isinstance(pad_token, str) else pad_token
    eos_token = AddedToken(eos_token, lstrip=False, rstrip=False) if isinstance(eos_token, str) else eos_token
    unk_token = AddedToken(unk_token, lstrip=False, rstrip=False) if isinstance(unk_token, str) else unk_token
    sep_token = AddedToken(sep_token, lstrip=False, rstrip=False) if isinstance(sep_token, str) else sep_token
    cls_token = AddedToken(cls_token, lstrip=False, rstrip=False) if isinstance(cls_token, str) else cls_token

    # Mask token behave like a normal word, i.e. include the space before it
    mask_token = (
        AddedToken(mask_token, lstrip=True, rstrip=False, normalized=False)
        if isinstance(mask_token, str)
        else mask_token
    )

    # these special tokens are not part of the vocab.json, let's add them in the correct order

    with open(vocab_file, encoding="utf-8") as vocab_handle:
        self.encoder = json.load(vocab_handle)
    self.decoder = {v: k for k, v in self.encoder.items()}
    self.errors = errors  # how to handle errors in decoding
    self.byte_encoder = bytes_to_unicode()
    self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
    with open(merges_file, encoding="utf-8") as merges_handle:
        bpe_merges = merges_handle.read().split("\n")[1:-1]
    bpe_merges = [tuple(merge.split()) for merge in bpe_merges]
    self.bpe_ranks = dict(zip(bpe_merges, range(len(bpe_merges))))
    self.cache = {}
    self.add_prefix_space = add_prefix_space

    # Should have added re.IGNORECASE so BPE merges can happen for capitalized versions of contractions
    self.pat = re.compile(r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+""")

    super().__init__(
        errors=errors,
        bos_token=bos_token,
        eos_token=eos_token,
        unk_token=unk_token,
        sep_token=sep_token,
        cls_token=cls_token,
        pad_token=pad_token,
        mask_token=mask_token,
        add_prefix_space=add_prefix_space,
        **kwargs,
    )

mindnlp.transformers.models.roberta.tokenization_roberta.RobertaTokenizer.bpe(token)

This method is a part of the RobertaTokenizer class and implements Byte-Pair Encoding (BPE) for tokenizing input tokens.

PARAMETER DESCRIPTION
self

The instance of the RobertaTokenizer class.

TYPE: RobertaTokenizer

token

The input token to be tokenized using BPE. It is a string representing a single token to be processed. Must not be None.

TYPE: str

RETURNS DESCRIPTION
str

The token after applying the Byte-Pair Encoding process. Returns the token as a string after processing it through the BPE algorithm.

RAISES DESCRIPTION
ValueError

If the input token is None or empty.

TypeError

If the input token is not a string.

KeyError

If the input token is not found in the cache attribute of the RobertaTokenizer instance.

IndexError

If an index is out of range while processing the token.

Exception

Any other unexpected exceptions may be raised during the BPE process.

Source code in mindnlp\transformers\models\roberta\tokenization_roberta.py
def bpe(self, token):
    """
    This method is a part of the RobertaTokenizer class and implements Byte-Pair Encoding (BPE) for tokenizing input tokens.

    Args:
        self (RobertaTokenizer): The instance of the RobertaTokenizer class.
        token (str): The input token to be tokenized using BPE. It is a string representing a single token to
            be processed. Must not be None.

    Returns:
        str: The token after applying the Byte-Pair Encoding process. Returns the token as a string after processing
            it through the BPE algorithm.

    Raises:
        ValueError: If the input token is None or empty.
        TypeError: If the input token is not a string.
        KeyError: If the input token is not found in the cache attribute of the RobertaTokenizer instance.
        IndexError: If an index is out of range while processing the token.
        Exception: Any other unexpected exceptions may be raised during the BPE process.
    """
    if token in self.cache:
        return self.cache[token]
    word = tuple(token)
    pairs = get_pairs(word)

    if not pairs:
        return token

    while True:
        bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf")))
        if bigram not in self.bpe_ranks:
            break
        first, second = bigram
        new_word = []
        i = 0
        while i < len(word):
            try:
                j = word.index(first, i)
            except ValueError:
                new_word.extend(word[i:])
                break
            else:
                new_word.extend(word[i:j])
                i = j

            if word[i] == first and i < len(word) - 1 and word[i + 1] == second:
                new_word.append(first + second)
                i += 2
            else:
                new_word.append(word[i])
                i += 1
        new_word = tuple(new_word)
        word = new_word
        if len(word) == 1:
            break
        pairs = get_pairs(word)
    word = " ".join(word)
    self.cache[token] = word
    return word
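
The following is a minimal, self-contained sketch of the same merge loop using a hypothetical three-entry merge table (the real `bpe_ranks` is built from the merges file at initialization); it is meant only to illustrate how the lowest-ranked pair is merged repeatedly until no known pair remains:

```python
def get_pairs(word):
    """Return the set of adjacent symbol pairs in a word (a tuple of strings)."""
    pairs = set()
    prev = word[0]
    for ch in word[1:]:
        pairs.add((prev, ch))
        prev = ch
    return pairs


def toy_bpe(token, bpe_ranks):
    """Greedily merge the lowest-ranked adjacent pair until none is left in the table."""
    word = tuple(token)
    pairs = get_pairs(word)
    while pairs:
        bigram = min(pairs, key=lambda p: bpe_ranks.get(p, float("inf")))
        if bigram not in bpe_ranks:
            break
        first, second = bigram
        new_word, i = [], 0
        while i < len(word):
            # Merge this occurrence of the pair, otherwise copy the symbol through.
            if i < len(word) - 1 and word[i] == first and word[i + 1] == second:
                new_word.append(first + second)
                i += 2
            else:
                new_word.append(word[i])
                i += 1
        word = tuple(new_word)
        if len(word) == 1:
            break
        pairs = get_pairs(word)
    return " ".join(word)


# Hypothetical merge table: lower rank means higher merge priority.
ranks = {("h", "e"): 0, ("he", "l"): 1, ("l", "o"): 2}
print(toy_bpe("hello", ranks))  # -> "hel lo"
```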

mindnlp.transformers.models.roberta.tokenization_roberta.RobertaTokenizer.build_inputs_with_special_tokens(token_ids_0, token_ids_1=None)

Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A RoBERTa sequence has the following format:

  • single sequence: <s> X </s>
  • pair of sequences: <s> A </s></s> B </s>
PARAMETER DESCRIPTION
token_ids_0

List of IDs to which the special tokens will be added.

TYPE: `List[int]`

token_ids_1

Optional second list of IDs for sequence pairs.

TYPE: `List[int]`, *optional* DEFAULT: None

RETURNS DESCRIPTION
List[int]

List[int]: List of input IDs with the appropriate special tokens.

Source code in mindnlp\transformers\models\roberta\tokenization_roberta.py
def build_inputs_with_special_tokens(
    self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
    """
    Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
    adding special tokens. A RoBERTa sequence has the following format:

    - single sequence: `<s> X </s>`
    - pair of sequences: `<s> A </s></s> B </s>`

    Args:
        token_ids_0 (`List[int]`):
            List of IDs to which the special tokens will be added.
        token_ids_1 (`List[int]`, *optional*):
            Optional second list of IDs for sequence pairs.

    Returns:
        `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens.
    """
    if token_ids_1 is None:
        return [self.cls_token_id] + token_ids_0 + [self.sep_token_id]
    cls = [self.cls_token_id]
    sep = [self.sep_token_id]
    return cls + token_ids_0 + sep + sep + token_ids_1 + sep
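
Example (a sketch assuming the `roberta-base` checkpoint, where `cls_token_id` is 0 and `sep_token_id` is 2):
>>> tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
>>> tokenizer.build_inputs_with_special_tokens([31414, 232])
[0, 31414, 232, 2]
>>> tokenizer.build_inputs_with_special_tokens([31414, 232], [20920])
[0, 31414, 232, 2, 2, 20920, 2]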

mindnlp.transformers.models.roberta.tokenization_roberta.RobertaTokenizer.convert_tokens_to_string(tokens)

Converts a sequence of tokens (strings) into a single string.

Source code in mindnlp\transformers\models\roberta\tokenization_roberta.py
def convert_tokens_to_string(self, tokens):
    """Converts a sequence of tokens (string) in a single string."""
    text = "".join(tokens)
    text = bytearray([self.byte_decoder[c] for c in text]).decode("utf-8", errors=self.errors)
    return text
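
Example (a sketch assuming the `roberta-base` checkpoint; `Ġ` is the byte-level marker for a leading space):
>>> tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
>>> tokenizer.convert_tokens_to_string(["Hello", "Ġworld"])
'Hello world'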

mindnlp.transformers.models.roberta.tokenization_roberta.RobertaTokenizer.create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None)

Create a mask from the two sequences passed to be used in a sequence-pair classification task. RoBERTa does not make use of token type ids, therefore a list of zeros is returned.

PARAMETER DESCRIPTION
token_ids_0

List of IDs.

TYPE: `List[int]`

token_ids_1

Optional second list of IDs for sequence pairs.

TYPE: `List[int]`, *optional* DEFAULT: None

RETURNS DESCRIPTION
List[int]

List[int]: List of zeros.

Source code in mindnlp\transformers\models\roberta\tokenization_roberta.py
def create_token_type_ids_from_sequences(
    self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
    """
    Create a mask from the two sequences passed to be used in a sequence-pair classification task. RoBERTa does not
    make use of token type ids, therefore a list of zeros is returned.

    Args:
        token_ids_0 (`List[int]`):
            List of IDs.
        token_ids_1 (`List[int]`, *optional*):
            Optional second list of IDs for sequence pairs.

    Returns:
        `List[int]`: List of zeros.
    """
    sep = [self.sep_token_id]
    cls = [self.cls_token_id]

    if token_ids_1 is None:
        return len(cls + token_ids_0 + sep) * [0]
    return len(cls + token_ids_0 + sep + sep + token_ids_1 + sep) * [0]
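
Example (a sketch assuming the `roberta-base` checkpoint; the mask is all zeros and has the same length as the corresponding `build_inputs_with_special_tokens` output):
>>> tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
>>> tokenizer.create_token_type_ids_from_sequences([31414, 232])
[0, 0, 0, 0]
>>> tokenizer.create_token_type_ids_from_sequences([31414, 232], [20920])
[0, 0, 0, 0, 0, 0, 0]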

mindnlp.transformers.models.roberta.tokenization_roberta.RobertaTokenizer.get_special_tokens_mask(token_ids_0, token_ids_1=None, already_has_special_tokens=False)

Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method.

PARAMETER DESCRIPTION
token_ids_0

List of IDs.

TYPE: `List[int]`

token_ids_1

Optional second list of IDs for sequence pairs.

TYPE: `List[int]`, *optional* DEFAULT: None

already_has_special_tokens

Whether or not the token list is already formatted with special tokens for the model.

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

RETURNS DESCRIPTION
List[int]

List[int]: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.

Source code in mindnlp\transformers\models\roberta\tokenization_roberta.py
def get_special_tokens_mask(
    self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
) -> List[int]:
    """
    Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
    special tokens using the tokenizer `prepare_for_model` method.

    Args:
        token_ids_0 (`List[int]`):
            List of IDs.
        token_ids_1 (`List[int]`, *optional*):
            Optional second list of IDs for sequence pairs.
        already_has_special_tokens (`bool`, *optional*, defaults to `False`):
            Whether or not the token list is already formatted with special tokens for the model.

    Returns:
        `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
    """
    if already_has_special_tokens:
        return super().get_special_tokens_mask(
            token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True
        )

    if token_ids_1 is None:
        return [1] + ([0] * len(token_ids_0)) + [1]
    return [1] + ([0] * len(token_ids_0)) + [1, 1] + ([0] * len(token_ids_1)) + [1]
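
Example (a sketch assuming the `roberta-base` checkpoint; 1 marks positions that will hold special tokens):
>>> tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
>>> tokenizer.get_special_tokens_mask([31414, 232])
[1, 0, 0, 1]
>>> tokenizer.get_special_tokens_mask([31414, 232], [20920])
[1, 0, 0, 1, 1, 0, 1]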

mindnlp.transformers.models.roberta.tokenization_roberta.RobertaTokenizer.get_vocab()

Returns a vocabulary dictionary containing both the base encoder and any additional tokens added to the tokenizer.

PARAMETER DESCRIPTION
self

An instance of the RobertaTokenizer class.

TYPE: RobertaTokenizer

RETURNS DESCRIPTION

dict or None: The vocabulary dictionary containing the base encoder and any additional tokens added to the tokenizer. If the tokenizer has not been initialized with a base encoder or any additional tokens, None is returned.

Note

The vocabulary dictionary is created by copying the base encoder dictionary and updating it with the added_tokens_encoder dictionary. The base encoder dictionary contains the original encoding for the tokenizer, while the added_tokens_encoder dictionary contains any additional tokens that have been added to the tokenizer.

Example
>>> tokenizer = RobertaTokenizer()
>>> vocab = tokenizer.get_vocab()
>>> print(vocab)
{'<s>': 0, '<pad>': 1, '</s>': 2, '<unk>': 3, '<mask>': 4}
Source code in mindnlp\transformers\models\roberta\tokenization_roberta.py
def get_vocab(self):
    """
    Returns a vocabulary dictionary containing both the base encoder and any additional tokens added to the tokenizer.

    Args:
        self (RobertaTokenizer): An instance of the RobertaTokenizer class.

    Returns:
        dict or None:
            The vocabulary dictionary containing the base encoder and any additional tokens added to the tokenizer.
            If the tokenizer has not been initialized with a base encoder or any additional tokens, None is returned.

    Raises:
        None.

    Note:
        The vocabulary dictionary is created by copying the base encoder dictionary and updating it with the
        added_tokens_encoder dictionary. The base encoder dictionary contains the original encoding for the tokenizer,
        while the added_tokens_encoder dictionary contains any additional tokens that have been added to the tokenizer.

    Example:
        ```python
        >>> tokenizer = RobertaTokenizer()
        >>> vocab = tokenizer.get_vocab()
        >>> print(vocab)
        {'<s>': 0, '<pad>': 1, '</s>': 2, '<unk>': 3, '<mask>': 4}
        ```
    """
    vocab = dict(self.encoder).copy()
    vocab.update(self.added_tokens_encoder)
    return vocab

mindnlp.transformers.models.roberta.tokenization_roberta.RobertaTokenizer.prepare_for_tokenization(text, is_split_into_words=False, **kwargs)

Prepares the given text for tokenization by adding a prefix space if necessary.

PARAMETER DESCRIPTION
self

The instance of the RobertaTokenizer class.

TYPE: RobertaTokenizer

text

The input text to be prepared for tokenization.

TYPE: str

is_split_into_words

A flag indicating whether the text is already split into words. Defaults to False.

TYPE: bool DEFAULT: False

**kwargs

Additional keyword arguments. add_prefix_space (bool, optional):

A flag indicating whether a prefix space should be added to the text. If not provided, the value from the self.add_prefix_space attribute will be used.

DEFAULT: {}

RETURNS DESCRIPTION
tuple

The prepared text (with a prefix space added if required) together with the remaining keyword arguments.

Note
  • If is_split_into_words is True or add_prefix_space is True, and the text is not empty and does not start with a space, a prefix space will be added to the text.
  • The original kwargs dictionary is modified by removing the 'add_prefix_space' key using the pop() method.
Example
>>> tokenizer = RobertaTokenizer()
>>> prepared_text = tokenizer.prepare_for_tokenization("Hello world!", is_split_into_words=True)
>>> print(prepared_text)
>>> # Output: " Hello world!"
Source code in mindnlp\transformers\models\roberta\tokenization_roberta.py
def prepare_for_tokenization(self, text, is_split_into_words=False, **kwargs):
    """
    Prepares the given text for tokenization by adding a prefix space if necessary.

    Args:
        self (RobertaTokenizer): The instance of the RobertaTokenizer class.
        text (str): The input text to be prepared for tokenization.
        is_split_into_words (bool, optional): A flag indicating whether the text is already split into words.
            Defaults to False.
        **kwargs: Additional keyword arguments.
            add_prefix_space (bool, optional):

            A flag indicating whether a prefix space should be added to the text.
            If not provided, the value from the self.add_prefix_space attribute will be used.

    Returns:
        str: The prepared text after adding a prefix space if required.

    Raises:
        None

    Note:
        - If is_split_into_words is True or add_prefix_space is True, and the text is not empty and does not start
        with a space, a prefix space will be added to the text.
        - The original kwargs dictionary is modified by removing the 'add_prefix_space' key using the pop() method.

    Example:
        ```python
        >>> tokenizer = RobertaTokenizer()
        >>> prepared_text = tokenizer.prepare_for_tokenization("Hello world!", is_split_into_words=True)
        >>> print(prepared_text)
        >>> # Output: " Hello world!"
        ```
    """
    add_prefix_space = kwargs.pop("add_prefix_space", self.add_prefix_space)
    if (is_split_into_words or add_prefix_space) and (len(text) > 0 and not text[0].isspace()):
        text = " " + text
    return (text, kwargs)
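
Example (a sketch assuming the `roberta-base` checkpoint; note that the return value is a `(text, kwargs)` tuple rather than a bare string):
>>> tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
>>> tokenizer.prepare_for_tokenization("Hello world!", is_split_into_words=True)
(' Hello world!', {})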

mindnlp.transformers.models.roberta.tokenization_roberta.RobertaTokenizer.save_vocabulary(save_directory, filename_prefix=None)

Saves the vocabulary and merge files of the RobertaTokenizer.

PARAMETER DESCRIPTION
self

An instance of the RobertaTokenizer class.

TYPE: RobertaTokenizer

save_directory

The directory where the vocabulary and merge files will be saved.

TYPE: str

filename_prefix

The prefix to be added to the filenames. Defaults to None.

TYPE: Optional[str] DEFAULT: None

RETURNS DESCRIPTION
Tuple[str]

Tuple[str]: A tuple containing the paths of the saved vocabulary and merge files.

RAISES DESCRIPTION
OSError

If the save_directory is not a valid directory.

This method saves the vocabulary file and merge file used by the RobertaTokenizer. The vocabulary file contains the encoder dictionary in JSON format, while the merge file contains the BPE merge indices and tokens. The files are saved in the specified save_directory with optional filename_prefix added to the filenames.

Note

If save_directory is not an existing directory, the method logs an error and returns without saving.

Example
>>> tokenizer = RobertaTokenizer()
>>> tokenizer.save_vocabulary('/path/to/save')
('/path/to/save/vocab.txt', '/path/to/save/merges.txt')
Source code in mindnlp\transformers\models\roberta\tokenization_roberta.py
def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
    """
    Saves the vocabulary and merge files of the RobertaTokenizer.

    Args:
        self (RobertaTokenizer): An instance of the RobertaTokenizer class.
        save_directory (str): The directory where the vocabulary and merge files will be saved.
        filename_prefix (Optional[str], optional): The prefix to be added to the filenames. Defaults to None.

    Returns:
        Tuple[str]: A tuple containing the paths of the saved vocabulary and merge files.

    Raises:
        OSError: If the save_directory is not a valid directory.

    This method saves the vocabulary file and merge file used by the RobertaTokenizer.
    The vocabulary file contains the encoder dictionary in JSON format, while the merge file contains the BPE merge
    indices and tokens. The files are saved in the specified save_directory with optional filename_prefix added to
    the filenames.

    Note:
        If the save_directory does not exist, the method will raise an OSError.

    Example:
        ```python
        >>> tokenizer = RobertaTokenizer()
        >>> tokenizer.save_vocabulary('/path/to/save')
        ('/path/to/save/vocab.txt', '/path/to/save/merges.txt')
        ```
    """
    if not os.path.isdir(save_directory):
        logger.error(f"Vocabulary path ({save_directory}) should be a directory")
        return
    vocab_file = os.path.join(
        save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
    )
    merge_file = os.path.join(
        save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["merges_file"]
    )

    with open(vocab_file, "w", encoding="utf-8") as f:
        f.write(json.dumps(self.encoder, indent=2, sort_keys=True, ensure_ascii=False) + "\n")

    index = 0
    with open(merge_file, "w", encoding="utf-8") as writer:
        writer.write("#version: 0.2\n")
        for bpe_tokens, token_index in sorted(self.bpe_ranks.items(), key=lambda kv: kv[1]):
            if index != token_index:
                logger.warning(
                    f"Saving vocabulary to {merge_file}: BPE merge indices are not consecutive."
                    " Please check that the tokenizer is not corrupted!"
                )
                index = token_index
            writer.write(" ".join(bpe_tokens) + "\n")
            index += 1

    return vocab_file, merge_file

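As a quick sanity check, the saved files can be fed back into a new tokenizer. Below is a minimal round-trip sketch, assuming the standard `RobertaTokenizer(vocab_file, merges_file)` constructor signature and a HuggingFace-style `from_pretrained`:

```python
>>> import os, tempfile
>>> from transformers import RobertaTokenizer
...
>>> tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
>>> save_dir = tempfile.mkdtemp()
>>> vocab_file, merge_file = tokenizer.save_vocabulary(save_dir)
>>> sorted(os.listdir(save_dir))          # filenames come from VOCAB_FILES_NAMES
['merges.txt', 'vocab.json']
>>> reloaded = RobertaTokenizer(vocab_file, merge_file)
>>> reloaded.tokenize("Hello world") == tokenizer.tokenize("Hello world")
True
```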
mindnlp.transformers.models.roberta.tokenization_roberta.bytes_to_unicode() cached

Returns a mapping from utf-8 bytes to unicode strings. We specifically avoid mapping to whitespace/control characters that the bpe code barfs on.

The reversible bpe codes work on unicode strings. This means you need a large # of unicode characters in your vocab if you want to avoid UNKs. When you're at something like a 10B token dataset you end up needing around 5K for decent coverage. This is a significant percentage of your normal, say, 32K bpe vocab. To avoid that, we want lookup tables between utf-8 bytes and unicode strings.

Source code in mindnlp\transformers\models\roberta\tokenization_roberta.py
@lru_cache()
def bytes_to_unicode():
    """
    Returns list of utf-8 byte and a mapping to unicode strings. We specifically avoids mapping to whitespace/control
    characters the bpe code barfs on.

    The reversible bpe codes work on unicode strings. This means you need a large # of unicode characters in your vocab
    if you want to avoid UNKs. When you're at something like a 10B token dataset you end up needing around 5K for
    decent coverage. This is a significant percentage of your normal, say, 32K bpe vocab. To avoid that, we want lookup
    tables between utf-8 bytes and unicode strings.
    """
    bs = (
        list(range(ord("!"), ord("~") + 1)) + list(range(ord("¡"), ord("¬") + 1)) + list(range(ord("®"), ord("ÿ") + 1))
    )
    cs = bs[:]
    n = 0
    for b in range(2**8):
        if b not in bs:
            bs.append(b)
            cs.append(2**8 + n)
            n += 1
    cs = [chr(n) for n in cs]
    return dict(zip(bs, cs))

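A short illustration of what the table gives you (a hypothetical REPL session): every one of the 256 byte values maps to a distinct printable character, so arbitrary UTF-8 byte sequences can be round-tripped through "visible" unicode strings:

```python
>>> byte_encoder = bytes_to_unicode()
>>> len(byte_encoder)
256
>>> byte_encoder[ord(" ")]   # the space byte is remapped to a printable character
'Ġ'
>>> byte_decoder = {v: k for k, v in byte_encoder.items()}
>>> encoded = "".join(byte_encoder[b] for b in "héllo".encode("utf-8"))
>>> encoded
'hÃ©llo'
>>> bytes(byte_decoder[c] for c in encoded).decode("utf-8")
'héllo'
```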
mindnlp.transformers.models.roberta.tokenization_roberta.get_pairs(word)

Return set of symbol pairs in a word.

Word is represented as tuple of symbols (symbols being variable-length strings).

Source code in mindnlp\transformers\models\roberta\tokenization_roberta.py
def get_pairs(word):
    """
    Return set of symbol pairs in a word.

    Word is represented as tuple of symbols (symbols being variable-length strings).
    """
    pairs = set()
    prev_char = word[0]
    for char in word[1:]:
        pairs.add((prev_char, char))
        prev_char = char
    return pairs

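For instance, the word tuple `("h", "e", "l", "l", "o")` produces the adjacent-symbol pairs that the BPE loop then ranks against `bpe_ranks` and merges (hypothetical call):

```python
>>> sorted(get_pairs(("h", "e", "l", "l", "o")))
[('e', 'l'), ('h', 'e'), ('l', 'l'), ('l', 'o')]
```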
mindnlp.transformers.models.roberta.tokenization_roberta_fast

Fast Tokenization classes for RoBERTa.

mindnlp.transformers.models.roberta.tokenization_roberta_fast.RobertaTokenizerFast

Bases: PreTrainedTokenizerFast

Construct a "fast" RoBERTa tokenizer (backed by HuggingFace's tokenizers library), derived from the GPT-2 tokenizer, using byte-level Byte-Pair-Encoding.

This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will be encoded differently whether it is at the beginning of the sentence (without space) or not:

Example
>>> from transformers import RobertaTokenizerFast
...
>>> tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
>>> tokenizer("Hello world")["input_ids"]
[0, 31414, 232, 2]
>>> tokenizer(" Hello world")["input_ids"]
[0, 20920, 232, 2]

You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.

When used with is_split_into_words=True, this tokenizer needs to be instantiated with add_prefix_space=True.

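A hedged sketch of both workarounds (the expected ids follow from the example above): instantiating with add_prefix_space=True makes the first word encode like a word-initial token, and the same flag is required for pretokenized input:

```python
>>> from transformers import RobertaTokenizerFast
...
>>> tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base", add_prefix_space=True)
>>> tokenizer("Hello world")["input_ids"]
[0, 20920, 232, 2]
>>> tokenizer(["Hello", "world"], is_split_into_words=True)["input_ids"]
[0, 20920, 232, 2]
```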
This tokenizer inherits from [PreTrainedTokenizerFast] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

PARAMETER DESCRIPTION
vocab_file

Path to the vocabulary file.

TYPE: `str` DEFAULT: None

merges_file

Path to the merges file.

TYPE: `str` DEFAULT: None

errors

Paradigm to follow when decoding bytes to UTF-8. See bytes.decode for more information.

TYPE: `str`, *optional*, defaults to `"replace"` DEFAULT: 'replace'

bos_token

The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.

When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token.

TYPE: `str`, *optional*, defaults to `"<s>"` DEFAULT: '<s>'

eos_token

The end of sequence token.

When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token.

TYPE: `str`, *optional*, defaults to `"</s>"` DEFAULT: '</s>'

sep_token

The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.

TYPE: `str`, *optional*, defaults to `"</s>"` DEFAULT: '</s>'

cls_token

The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.

TYPE: `str`, *optional*, defaults to `"<s>"` DEFAULT: '<s>'

unk_token

The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.

TYPE: `str`, *optional*, defaults to `"<unk>"` DEFAULT: '<unk>'

pad_token

The token used for padding, for example when batching sequences of different lengths.

TYPE: `str`, *optional*, defaults to `"<pad>"` DEFAULT: '<pad>'

mask_token

The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.

TYPE: `str`, *optional*, defaults to `"<mask>"` DEFAULT: '<mask>'

add_prefix_space

Whether or not to add an initial space to the input. This allows the leading word to be treated just like any other word (the RoBERTa tokenizer detects the beginning of words by the preceding space).

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

trim_offsets

Whether the post processing step should trim offsets to avoid including whitespaces.

TYPE: `bool`, *optional*, defaults to `True` DEFAULT: True

Source code in mindnlp\transformers\models\roberta\tokenization_roberta_fast.py
class RobertaTokenizerFast(PreTrainedTokenizerFast):
    """
    Construct a "fast" RoBERTa tokenizer (backed by HuggingFace's *tokenizers* library), derived from the GPT-2
    tokenizer, using byte-level Byte-Pair-Encoding.

    This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will
    be encoded differently whether it is at the beginning of the sentence (without space) or not:

    Example:
        ```python
        >>> from transformers import RobertaTokenizerFast
        ...
        >>> tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
        >>> tokenizer("Hello world")["input_ids"]
        [0, 31414, 232, 2]
        >>> tokenizer(" Hello world")["input_ids"]
        [0, 20920, 232, 2]
        ```

    You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer or when you
    call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.

    <Tip>

    When used with `is_split_into_words=True`, this tokenizer needs to be instantiated with `add_prefix_space=True`.

    </Tip>

    This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
    refer to this superclass for more information regarding those methods.

    Args:
        vocab_file (`str`):
            Path to the vocabulary file.
        merges_file (`str`):
            Path to the merges file.
        errors (`str`, *optional*, defaults to `"replace"`):
            Paradigm to follow when decoding bytes to UTF-8. See
            [bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
        bos_token (`str`, *optional*, defaults to `"<s>"`):
            The beginning of sequence token that was used during pretraining. Can be used a sequence classifier token.

            <Tip>

            When building a sequence using special tokens, this is not the token that is used for the beginning of
            sequence. The token used is the `cls_token`.

            </Tip>

        eos_token (`str`, *optional*, defaults to `"</s>"`):
            The end of sequence token.

            <Tip>

            When building a sequence using special tokens, this is not the token that is used for the end of sequence.
            The token used is the `sep_token`.

            </Tip>

        sep_token (`str`, *optional*, defaults to `"</s>"`):
            The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
            sequence classification or for a text and a question for question answering. It is also used as the last
            token of a sequence built with special tokens.
        cls_token (`str`, *optional*, defaults to `"<s>"`):
            The classifier token which is used when doing sequence classification (classification of the whole sequence
            instead of per-token classification). It is the first token of the sequence when built with special tokens.
        unk_token (`str`, *optional*, defaults to `"<unk>"`):
            The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
            token instead.
        pad_token (`str`, *optional*, defaults to `"<pad>"`):
            The token used for padding, for example when batching sequences of different lengths.
        mask_token (`str`, *optional*, defaults to `"<mask>"`):
            The token used for masking values. This is the token used when training this model with masked language
            modeling. This is the token which the model will try to predict.
        add_prefix_space (`bool`, *optional*, defaults to `False`):
            Whether or not to add an initial space to the input. This allows to treat the leading word just as any
            other word. (RoBERTa tokenizer detect beginning of words by the preceding space).
        trim_offsets (`bool`, *optional*, defaults to `True`):
            Whether the post processing step should trim offsets to avoid including whitespaces.
    """
    vocab_files_names = VOCAB_FILES_NAMES
    pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
    max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
    model_input_names = ["input_ids", "attention_mask"]
    slow_tokenizer_class = RobertaTokenizer

    def __init__(
        self,
        vocab_file=None,
        merges_file=None,
        tokenizer_file=None,
        errors="replace",
        bos_token="<s>",
        eos_token="</s>",
        sep_token="</s>",
        cls_token="<s>",
        unk_token="<unk>",
        pad_token="<pad>",
        mask_token="<mask>",
        add_prefix_space=False,
        trim_offsets=True,
        **kwargs,
    ):
        """
        Initializes a new instance of the `RobertaTokenizerFast` class.

        Args:
            self: The instance of the class itself.
            vocab_file (str, optional): The path to the vocabulary file. Default is None.
            merges_file (str, optional): The path to the merges file. Default is None.
            tokenizer_file (str, optional): The path to the tokenizer file. Default is None.
            errors (str, optional): Specifies the error handling during tokenization. Default is 'replace'.
            bos_token (str, optional): The beginning of sentence token. Default is '<s>'.
            eos_token (str, optional): The end of sentence token. Default is '</s>'.
            sep_token (str, optional): The separator token. Default is '</s>'.
            cls_token (str, optional): The classification token. Default is '<s>'.
            unk_token (str, optional): The unknown token. Default is '<unk>'.
            pad_token (str, optional): The padding token. Default is '<pad>'.
            mask_token (str or AddedToken, optional): The masking token. Default is '<mask>'.
            add_prefix_space (bool, optional): Specifies if a space should be added as a prefix to each token.
                Default is False.
            trim_offsets (bool, optional): Specifies if offsets should be trimmed. Default is True.
            **kwargs: Additional keyword arguments.

        Returns:
            None.

        Raises:
            None.
        """
        mask_token = (
            AddedToken(mask_token, lstrip=True, rstrip=False, normalized=False)
            if isinstance(mask_token, str)
            else mask_token
        )
        super().__init__(
            vocab_file,
            merges_file,
            tokenizer_file=tokenizer_file,
            errors=errors,
            bos_token=bos_token,
            eos_token=eos_token,
            sep_token=sep_token,
            cls_token=cls_token,
            unk_token=unk_token,
            pad_token=pad_token,
            mask_token=mask_token,
            add_prefix_space=add_prefix_space,
            trim_offsets=trim_offsets,
            **kwargs,
        )

        pre_tok_state = json.loads(self.backend_tokenizer.pre_tokenizer.__getstate__())
        if pre_tok_state.get("add_prefix_space", add_prefix_space) != add_prefix_space:
            pre_tok_class = getattr(pre_tokenizers, pre_tok_state.pop("type"))
            pre_tok_state["add_prefix_space"] = add_prefix_space
            self.backend_tokenizer.pre_tokenizer = pre_tok_class(**pre_tok_state)

        self.add_prefix_space = add_prefix_space

        tokenizer_component = "post_processor"
        tokenizer_component_instance = getattr(self.backend_tokenizer, tokenizer_component, None)
        if tokenizer_component_instance:
            state = json.loads(tokenizer_component_instance.__getstate__())

            # The lists 'sep' and 'cls' must be cast to tuples for the `post_processor_class` object
            if "sep" in state:
                state["sep"] = tuple(state["sep"])
            if "cls" in state:
                state["cls"] = tuple(state["cls"])

            changes_to_apply = False

            if state.get("add_prefix_space", add_prefix_space) != add_prefix_space:
                state["add_prefix_space"] = add_prefix_space
                changes_to_apply = True

            if state.get("trim_offsets", trim_offsets) != trim_offsets:
                state["trim_offsets"] = trim_offsets
                changes_to_apply = True

            if changes_to_apply:
                component_class = getattr(processors, state.pop("type"))
                new_value = component_class(**state)
                setattr(self.backend_tokenizer, tokenizer_component, new_value)

    @property
    def mask_token(self) -> str:
        """
        Return:
            `str`: Mask token, to use when training a model with masked-language modeling. Log an error if used while not
            having been set.

        Roberta tokenizer has a special mask token to be usable in the fill-mask pipeline. The mask token will greedily
        comprise the space before the *<mask>*.
        """
        if self._mask_token is None:
            if self.verbose:
                logger.error("Using mask_token, but it is not set yet.")
            return None
        return str(self._mask_token)

    @mask_token.setter
    def mask_token(self, value):
        """
        Overriding the default behavior of the mask token to have it eat the space before it.

        This is needed to preserve backward compatibility with all the previously used models based on Roberta.
        """
        # Mask token behave like a normal word, i.e. include the space before it
        # So we set lstrip to True
        value = AddedToken(value, lstrip=True, rstrip=False) if isinstance(value, str) else value
        self._mask_token = value

    def _batch_encode_plus(self, *args, **kwargs) -> BatchEncoding:
        """
        This method, _batch_encode_plus, is a part of the RobertaTokenizerFast class and is responsible for batch
        encoding inputs.

        Args:
            self: This parameter represents the instance of the class and is required for accessing the class
                attributes and methods.

        Returns:
            BatchEncoding: This method returns a BatchEncoding object that contains the batch-encoded inputs.

        Raises:
            AssertionError: This method may raise an AssertionError if the condition 'self.add_prefix_space or not
                is_split_into_words' is not met, indicating that the class needs to be instantiated with
                add_prefix_space=True to use it with pretokenized inputs.
        """
        is_split_into_words = kwargs.get("is_split_into_words", False)
        assert self.add_prefix_space or not is_split_into_words, (
            f"You need to instantiate {self.__class__.__name__} with add_prefix_space=True "
            "to use it with pretokenized inputs."
        )

        return super()._batch_encode_plus(*args, **kwargs)

    def _encode_plus(self, *args, **kwargs) -> BatchEncoding:
        """
        Encodes the inputs into a batch of tokenized sequences using the fast version of the Roberta tokenizer.

        Args:
            self (RobertaTokenizerFast): An instance of the RobertaTokenizerFast class.

        Returns:
            BatchEncoding: A dictionary-like object containing the encoded sequences.

        Raises:
            AssertionError: If `is_split_into_words` is `True` but `add_prefix_space` is `False`.

        Note:
            This method is intended to be used with pretokenized inputs. To use it with pretokenized inputs,
            the `add_prefix_space` parameter of the `RobertaTokenizerFast` class should be set to `True`.
        """
        is_split_into_words = kwargs.get("is_split_into_words", False)

        assert self.add_prefix_space or not is_split_into_words, (
            f"You need to instantiate {self.__class__.__name__} with add_prefix_space=True "
            "to use it with pretokenized inputs."
        )

        return super()._encode_plus(*args, **kwargs)

    def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
        """
        Saves the tokenizer's vocabulary to disk.

        Args:
            self (RobertaTokenizerFast): An instance of the RobertaTokenizerFast class.
            save_directory (str): The directory path where the vocabulary files will be saved.
            filename_prefix (Optional[str], default=None): An optional prefix to add to the filenames of the
                vocabulary files. If not provided, no prefix will be added.

        Returns:
            Tuple[str]: A tuple containing the filenames of the saved vocabulary files.

        Raises:
            None.

        Note:
            The saved vocabulary files will be stored in the specified directory with the following filenames:

            - If a filename prefix is provided, the files will be named as: "{filename_prefix}_vocab.json" and
            "{filename_prefix}_merges.txt".
            - If no filename prefix is provided, the files will be named as: "vocab.json" and "merges.txt".
        """
        files = self._tokenizer.model.save(save_directory, name=filename_prefix)
        return tuple(files)

    def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
        """
        Builds inputs with special tokens for the RobertaTokenizerFast class.

        Args:
            self (RobertaTokenizerFast): The instance of the RobertaTokenizerFast class.
            token_ids_0 (List[int]): The list of token IDs for the first sequence.
            token_ids_1 (List[int], optional): The list of token IDs for the second sequence. Defaults to None.

        Returns:
            List[int]: The list of input IDs with the RoBERTa special tokens added.

        Raises:
            None

        Description:
            This method takes in two sequences of token IDs, token_ids_0 and token_ids_1, and builds a new list of
            token IDs with special tokens added. The special tokens include the beginning of sequence
            (bos_token_id) and the end of sequence (eos_token_id).

            The method first adds the bos_token_id to the beginning of the token_ids_0 list, followed by all the
            token IDs in token_ids_0, and then adds the eos_token_id to the end of the list.
            If token_ids_1 is provided, the method appends the eos_token_id, followed by all the token IDs in
            token_ids_1, and finally adds another eos_token_id to the end of the list.

            If token_ids_1 is not provided, the method simply returns the list output containing the special tokens
            and token_ids_0. If token_ids_1 is provided, the method returns the list output containing the
            special tokens, token_ids_0, special tokens, and token_ids_1.

        Example:
            ```python
            >>> tokenizer = RobertaTokenizerFast()
            >>> token_ids_0 = [10, 20, 30]
            >>> token_ids_1 = [40, 50, 60]
            >>> output = tokenizer.build_inputs_with_special_tokens(token_ids_0, token_ids_1)
            >>> print(output)
            >>> # Output: [0, 10, 20, 30, 2, 2, 40, 50, 60, 2]
            ```

        Note:
            The bos_token_id and eos_token_id are specific token IDs used to mark the beginning and end of a sequence
            respectively.
        """
        output = [self.bos_token_id] + token_ids_0 + [self.eos_token_id]
        if token_ids_1 is None:
            return output

        return output + [self.eos_token_id] + token_ids_1 + [self.eos_token_id]

    def create_token_type_ids_from_sequences(
        self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
    ) -> List[int]:
        """
        Create a mask from the two sequences passed to be used in a sequence-pair classification task. RoBERTa does not
        make use of token type ids, therefore a list of zeros is returned.

        Args:
            token_ids_0 (`List[int]`):
                List of IDs.
            token_ids_1 (`List[int]`, *optional*):
                Optional second list of IDs for sequence pairs.

        Returns:
            `List[int]`: List of zeros.
        """
        sep = [self.sep_token_id]
        cls = [self.cls_token_id]

        if token_ids_1 is None:
            return len(cls + token_ids_0 + sep) * [0]
        return len(cls + token_ids_0 + sep + sep + token_ids_1 + sep) * [0]

mindnlp.transformers.models.roberta.tokenization_roberta_fast.RobertaTokenizerFast.mask_token: str property writable

Return

str: Mask token, to use when training a model with masked-language modeling. Logs an error if used while not yet set.

The RoBERTa tokenizer has a special mask token to be usable in the fill-mask pipeline. The mask token will greedily include the space before the *<mask>*.

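A short sketch of the effect (hypothetical session): because the mask token is registered with lstrip=True, the space in front of *<mask>* is absorbed into the token itself rather than tokenized separately, which is what the fill-mask pipeline relies on:

```python
>>> from transformers import RobertaTokenizerFast
...
>>> tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
>>> tokenizer.mask_token
'<mask>'
>>> tokens = tokenizer.tokenize("The capital of France is <mask>.")
>>> "<mask>" in tokens    # kept as a single added token, space absorbed
True
```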
mindnlp.transformers.models.roberta.tokenization_roberta_fast.RobertaTokenizerFast.__init__(vocab_file=None, merges_file=None, tokenizer_file=None, errors='replace', bos_token='<s>', eos_token='</s>', sep_token='</s>', cls_token='<s>', unk_token='<unk>', pad_token='<pad>', mask_token='<mask>', add_prefix_space=False, trim_offsets=True, **kwargs)

Initializes a new instance of the RobertaTokenizerFast class.

PARAMETER DESCRIPTION
self

The instance of the class itself.

vocab_file

The path to the vocabulary file. Default is None.

TYPE: str DEFAULT: None

merges_file

The path to the merges file. Default is None.

TYPE: str DEFAULT: None

tokenizer_file

The path to the tokenizer file. Default is None.

TYPE: str DEFAULT: None

errors

Specifies the error handling during tokenization. Default is 'replace'.

TYPE: str DEFAULT: 'replace'

bos_token

The beginning of sentence token. Default is '<s>'.

TYPE: str DEFAULT: '<s>'

eos_token

The end of sentence token. Default is '</s>'.

TYPE: str DEFAULT: '</s>'

sep_token

The separator token. Default is '</s>'.

TYPE: str DEFAULT: '</s>'

cls_token

The classification token. Default is '<s>'.

TYPE: str DEFAULT: '<s>'

unk_token

The unknown token. Default is '<unk>'.

TYPE: str DEFAULT: '<unk>'

pad_token

The padding token. Default is '<pad>'.

TYPE: str DEFAULT: '<pad>'

mask_token

The masking token. Default is '<mask>'.

TYPE: str or AddedToken DEFAULT: '<mask>'

add_prefix_space

Specifies if a space should be added as a prefix to each token. Default is False.

TYPE: bool DEFAULT: False

trim_offsets

Specifies if offsets should be trimmed. Default is True.

TYPE: bool DEFAULT: True

**kwargs

Additional keyword arguments.

DEFAULT: {}

RETURNS DESCRIPTION

None.

Source code in mindnlp\transformers\models\roberta\tokenization_roberta_fast.py
def __init__(
    self,
    vocab_file=None,
    merges_file=None,
    tokenizer_file=None,
    errors="replace",
    bos_token="<s>",
    eos_token="</s>",
    sep_token="</s>",
    cls_token="<s>",
    unk_token="<unk>",
    pad_token="<pad>",
    mask_token="<mask>",
    add_prefix_space=False,
    trim_offsets=True,
    **kwargs,
):
    """
    Initializes a new instance of the `RobertaTokenizerFast` class.

    Args:
        self: The instance of the class itself.
        vocab_file (str, optional): The path to the vocabulary file. Default is None.
        merges_file (str, optional): The path to the merges file. Default is None.
        tokenizer_file (str, optional): The path to the tokenizer file. Default is None.
        errors (str, optional): Specifies the error handling during tokenization. Default is 'replace'.
        bos_token (str, optional): The beginning of sentence token. Default is '<s>'.
        eos_token (str, optional): The end of sentence token. Default is '</s>'.
        sep_token (str, optional): The separator token. Default is '</s>'.
        cls_token (str, optional): The classification token. Default is '<s>'.
        unk_token (str, optional): The unknown token. Default is '<unk>'.
        pad_token (str, optional): The padding token. Default is '<pad>'.
        mask_token (str or AddedToken, optional): The masking token. Default is '<mask>'.
        add_prefix_space (bool, optional): Specifies if a space should be added as a prefix to each token.
            Default is False.
        trim_offsets (bool, optional): Specifies if offsets should be trimmed. Default is True.
        **kwargs: Additional keyword arguments.

    Returns:
        None.

    Raises:
        None.
    """
    mask_token = (
        AddedToken(mask_token, lstrip=True, rstrip=False, normalized=False)
        if isinstance(mask_token, str)
        else mask_token
    )
    super().__init__(
        vocab_file,
        merges_file,
        tokenizer_file=tokenizer_file,
        errors=errors,
        bos_token=bos_token,
        eos_token=eos_token,
        sep_token=sep_token,
        cls_token=cls_token,
        unk_token=unk_token,
        pad_token=pad_token,
        mask_token=mask_token,
        add_prefix_space=add_prefix_space,
        trim_offsets=trim_offsets,
        **kwargs,
    )

    pre_tok_state = json.loads(self.backend_tokenizer.pre_tokenizer.__getstate__())
    if pre_tok_state.get("add_prefix_space", add_prefix_space) != add_prefix_space:
        pre_tok_class = getattr(pre_tokenizers, pre_tok_state.pop("type"))
        pre_tok_state["add_prefix_space"] = add_prefix_space
        self.backend_tokenizer.pre_tokenizer = pre_tok_class(**pre_tok_state)

    self.add_prefix_space = add_prefix_space

    tokenizer_component = "post_processor"
    tokenizer_component_instance = getattr(self.backend_tokenizer, tokenizer_component, None)
    if tokenizer_component_instance:
        state = json.loads(tokenizer_component_instance.__getstate__())

        # The lists 'sep' and 'cls' must be cast to tuples for the `post_processor_class` object
        if "sep" in state:
            state["sep"] = tuple(state["sep"])
        if "cls" in state:
            state["cls"] = tuple(state["cls"])

        changes_to_apply = False

        if state.get("add_prefix_space", add_prefix_space) != add_prefix_space:
            state["add_prefix_space"] = add_prefix_space
            changes_to_apply = True

        if state.get("trim_offsets", trim_offsets) != trim_offsets:
            state["trim_offsets"] = trim_offsets
            changes_to_apply = True

        if changes_to_apply:
            component_class = getattr(processors, state.pop("type"))
            new_value = component_class(**state)
            setattr(self.backend_tokenizer, tokenizer_component, new_value)

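The constructor keeps the Rust backend in sync with the Python-side flags. A hedged check that the pre-tokenizer state reflects `add_prefix_space`, mirroring the `json.loads(...__getstate__())` trick used in the code above:

```python
>>> import json
>>> from transformers import RobertaTokenizerFast
...
>>> tok = RobertaTokenizerFast.from_pretrained("roberta-base", add_prefix_space=True)
>>> # the backend pre-tokenizer serializes its state as JSON, as exploited in __init__
>>> json.loads(tok.backend_tokenizer.pre_tokenizer.__getstate__())["add_prefix_space"]
True
```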
mindnlp.transformers.models.roberta.tokenization_roberta_fast.RobertaTokenizerFast.build_inputs_with_special_tokens(token_ids_0, token_ids_1=None)

Builds inputs with special tokens for the RobertaTokenizerFast class.

PARAMETER DESCRIPTION
self

The instance of the RobertaTokenizerFast class.

TYPE: RobertaTokenizerFast

token_ids_0

The list of token IDs for the first sequence.

TYPE: List[int]

token_ids_1

The list of token IDs for the second sequence. Defaults to None.

TYPE: List[int] DEFAULT: None

RETURNS DESCRIPTION
List[int]

List[int]: The list of input IDs with the RoBERTa special tokens added.

Description

This method takes in two sequences of token IDs, token_ids_0 and token_ids_1, and builds a new list of token IDs with special tokens added. The special tokens include the beginning of sequence (bos_token_id) and the end of sequence (eos_token_id).

The method first adds the bos_token_id to the beginning of the token_ids_0 list, followed by all the token IDs in token_ids_0, and then adds the eos_token_id to the end of the list. If token_ids_1 is provided, the method appends the eos_token_id, followed by all the token IDs in token_ids_1, and finally adds another eos_token_id to the end of the list.

If token_ids_1 is not provided, the method simply returns the list output containing the special tokens and token_ids_0. If token_ids_1 is provided, the method returns the list output containing the special tokens, token_ids_0, special tokens, and token_ids_1.

Example
>>> tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
>>> token_ids_0 = [10, 20, 30]
>>> token_ids_1 = [40, 50, 60]
>>> tokenizer.build_inputs_with_special_tokens(token_ids_0, token_ids_1)
[0, 10, 20, 30, 2, 2, 40, 50, 60, 2]
Note

The bos_token_id and eos_token_id are specific token IDs used to mark the beginning and end of a sequence respectively.

Source code in mindnlp\transformers\models\roberta\tokenization_roberta_fast.py
def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
    """
    Builds inputs with special tokens for the RobertaTokenizerFast class.

    Args:
        self (RobertaTokenizerFast): The instance of the RobertaTokenizerFast class.
        token_ids_0 (List[int]): The list of token IDs for the first sequence.
        token_ids_1 (List[int], optional): The list of token IDs for the second sequence. Defaults to None.

    Returns:
        List[int]: The list of input IDs with the RoBERTa special tokens added.

    Raises:
        None

    Description:
        This method takes in two sequences of token IDs, token_ids_0 and token_ids_1, and builds a new list of
        token IDs with special tokens added. The special tokens include the beginning of sequence
        (bos_token_id) and the end of sequence (eos_token_id).

        The method first adds the bos_token_id to the beginning of the token_ids_0 list, followed by all the
        token IDs in token_ids_0, and then adds the eos_token_id to the end of the list.
        If token_ids_1 is provided, the method appends the eos_token_id, followed by all the token IDs in
        token_ids_1, and finally adds another eos_token_id to the end of the list.

        If token_ids_1 is not provided, the method simply returns the list output containing the special tokens
        and token_ids_0. If token_ids_1 is provided, the method returns the list output containing the
        special tokens, token_ids_0, special tokens, and token_ids_1.

    Example:
        ```python
        >>> tokenizer = RobertaTokenizerFast()
        >>> token_ids_0 = [10, 20, 30]
        >>> token_ids_1 = [40, 50, 60]
        >>> output = tokenizer.build_inputs_with_special_tokens(token_ids_0, token_ids_1)
        >>> print(output)
        >>> # Output: [0, 10, 20, 30, 2, 2, 40, 50, 60, 2]
        ```

    Note:
        The bos_token_id and eos_token_id are specific token IDs used to mark the beginning and end of a sequence
        respectively.
    """
    output = [self.bos_token_id] + token_ids_0 + [self.eos_token_id]
    if token_ids_1 is None:
        return output

    return output + [self.eos_token_id] + token_ids_1 + [self.eos_token_id]

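In token form this is the RoBERTa layout `<s> A </s>` for a single sequence and `<s> A </s></s> B </s>` for a pair. A minimal hedged check that the output is exactly bos/eos bookkeeping around whatever ids the tokenizer produced:

```python
>>> from transformers import RobertaTokenizerFast
...
>>> tok = RobertaTokenizerFast.from_pretrained("roberta-base")
>>> a = tok("Hello", add_special_tokens=False)["input_ids"]
>>> b = tok("world", add_special_tokens=False)["input_ids"]
>>> pair = tok.build_inputs_with_special_tokens(a, b)
>>> pair == [tok.bos_token_id] + a + [tok.eos_token_id, tok.eos_token_id] + b + [tok.eos_token_id]
True
```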
mindnlp.transformers.models.roberta.tokenization_roberta_fast.RobertaTokenizerFast.create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None)

Create a mask from the two sequences passed to be used in a sequence-pair classification task. RoBERTa does not make use of token type ids, therefore a list of zeros is returned.

PARAMETER DESCRIPTION
token_ids_0

List of IDs.

TYPE: `List[int]`

token_ids_1

Optional second list of IDs for sequence pairs.

TYPE: `List[int]`, *optional* DEFAULT: None

RETURNS DESCRIPTION
List[int]

List[int]: List of zeros.

Source code in mindnlp\transformers\models\roberta\tokenization_roberta_fast.py
def create_token_type_ids_from_sequences(
    self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
    """
    Create a mask from the two sequences passed to be used in a sequence-pair classification task. RoBERTa does not
    make use of token type ids, therefore a list of zeros is returned.

    Args:
        token_ids_0 (`List[int]`):
            List of IDs.
        token_ids_1 (`List[int]`, *optional*):
            Optional second list of IDs for sequence pairs.

    Returns:
        `List[int]`: List of zeros.
    """
    sep = [self.sep_token_id]
    cls = [self.cls_token_id]

    if token_ids_1 is None:
        return len(cls + token_ids_0 + sep) * [0]
    return len(cls + token_ids_0 + sep + sep + token_ids_1 + sep) * [0]

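Since RoBERTa ignores segment ids, the mask is simply a zero vector whose length matches the sequence once special tokens are added; a quick hedged check with made-up ids:

```python
>>> from transformers import RobertaTokenizerFast
...
>>> tok = RobertaTokenizerFast.from_pretrained("roberta-base")
>>> ids_a, ids_b = [10, 20, 30], [40, 50]
>>> type_ids = tok.create_token_type_ids_from_sequences(ids_a, ids_b)
>>> type_ids                     # cls + a + sep + sep + b + sep -> 9 positions, all zero
[0, 0, 0, 0, 0, 0, 0, 0, 0]
>>> len(type_ids) == len(tok.build_inputs_with_special_tokens(ids_a, ids_b))
True
```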
mindnlp.transformers.models.roberta.tokenization_roberta_fast.RobertaTokenizerFast.save_vocabulary(save_directory, filename_prefix=None)

Saves the tokenizer's vocabulary to disk.

PARAMETER DESCRIPTION
self

An instance of the RobertaTokenizerFast class.

TYPE: RobertaTokenizerFast

save_directory

The directory path where the vocabulary files will be saved.

TYPE: str

filename_prefix

An optional prefix to add to the filenames of the vocabulary files. If not provided, no prefix will be added.

TYPE: Optional[str], default=None DEFAULT: None

RETURNS DESCRIPTION
Tuple[str]

Tuple[str]: A tuple containing the filenames of the saved vocabulary files.

Note

The saved vocabulary files will be stored in the specified directory with the following filenames:

  • If a filename prefix is provided, the files will be named as: "{filename_prefix}_vocab.json" and "{filename_prefix}_merges.txt".
  • If no filename prefix is provided, the files will be named as: "vocab.json" and "merges.txt".
Source code in mindnlp\transformers\models\roberta\tokenization_roberta_fast.py
def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
    """
    Saves the tokenizer's vocabulary to disk.

    Args:
        self (RobertaTokenizerFast): An instance of the RobertaTokenizerFast class.
        save_directory (str): The directory path where the vocabulary files will be saved.
        filename_prefix (Optional[str], default=None): An optional prefix to add to the filenames of the
            vocabulary files. If not provided, no prefix will be added.

    Returns:
        Tuple[str]: A tuple containing the filenames of the saved vocabulary files.

    Raises:
        None.

    Note:
        The saved vocabulary files will be stored in the specified directory with the following filenames:

        - If a filename prefix is provided, the files will be named as: "{filename_prefix}_vocab.json" and
        "{filename_prefix}_merges.txt".
        - If no filename prefix is provided, the files will be named as: "vocab.json" and "merges.txt".
    """
    files = self._tokenizer.model.save(save_directory, name=filename_prefix)
    return tuple(files)
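A minimal usage sketch; the exact filenames come from the backing *tokenizers* model, so treat the listed names as indicative rather than guaranteed:

```python
>>> import os, tempfile
>>> from transformers import RobertaTokenizerFast
...
>>> tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
>>> files = tokenizer.save_vocabulary(tempfile.mkdtemp())
>>> sorted(os.path.basename(f) for f in files)
['merges.txt', 'vocab.json']
```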