mpt
mindnlp.transformers.models.mpt.configuration_mpt
MPT configuration
mindnlp.transformers.models.mpt.configuration_mpt.DeprecatedList
Bases: list
A list subclass that issues a warning about deprecated features when its elements are accessed.
This class inherits from the built-in list class and overrides the __getitem__ method to emit a warning message
whenever an element is accessed. The warning alerts users that archive maps are deprecated and will be removed in
version v4.40.0, as they are no longer relevant, and it recommends retrieving all checkpoints for a given
architecture with the huggingface_hub library's list_models method instead.
Source code in mindnlp\transformers\models\mpt\configuration_mpt.py
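As the warning suggests, checkpoints for a given architecture can be listed with the huggingface_hub library instead of the deprecated archive maps. A minimal sketch, assuming network access to the Hugging Face Hub; the filter value and the .id attribute reflect recent huggingface_hub versions and may differ in older releases:

```python
from huggingface_hub import list_models

# List MPT checkpoints on the Hugging Face Hub instead of relying on the
# deprecated archive maps.
for model in list_models(filter="mpt"):
    print(model.id)
```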
mindnlp.transformers.models.mpt.configuration_mpt.DeprecatedList.__getitem__(item)
Get an item from the DeprecatedList object.
| PARAMETER | DESCRIPTION |
|---|---|
| self | The instance of the DeprecatedList class. |
| item | The key (index or slice) used to retrieve an item from the DeprecatedList. |

| RETURNS | DESCRIPTION |
|---|---|
| | The element stored at item, as returned by list.__getitem__. |
Source code in mindnlp\transformers\models\mpt\configuration_mpt.py
mindnlp.transformers.models.mpt.configuration_mpt.MptAttentionConfig
Bases: PretrainedConfig
This is the configuration class to store the configuration of a [MptAttention] class. It is used to instantiate
attention layers according to the specified arguments, defining the layers' architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the MPT
mosaicml/mpt-7b architecture. Most of the arguments are kept for backward
compatibility with previous MPT models that are hosted on the Hub (previously with trust_remote_code=True).
Configuration objects inherit from [PretrainedConfig] and can be used to control the model outputs. Read the
documentation from [PretrainedConfig] for more information.
| PARAMETER | DESCRIPTION |
|---|---|
| attn_type | The type of attention to use. Options: 'multihead_attention', 'multiquery_attention'. TYPE: str DEFAULT: 'multihead_attention' |
| attn_pdrop | The dropout probability for the attention layers. TYPE: float DEFAULT: 0.0 |
| attn_impl | The attention implementation to use. One of 'torch', 'flash' or 'triton'. TYPE: str DEFAULT: 'torch' |
| clip_qkv | If not None, clip the queries, keys and values in the attention layer to this value. TYPE: float DEFAULT: None |
| softmax_scale | If not None, scale the softmax in the attention layer by this value; if None, the default scaling of 1/sqrt(hidden_size) is used. TYPE: float DEFAULT: None |
| prefix_lm | Whether the model should operate as a Prefix LM. This requires passing an extra prefix_mask argument indicating which tokens belong to the prefix. TYPE: bool DEFAULT: False |
| qk_ln | Whether to apply layer normalization to the queries and keys in the attention layer. TYPE: bool DEFAULT: False |
| attn_uses_sequence_id | Whether to restrict attention to tokens that have the same token_type_ids. When the model is in train mode, this requires passing an extra token_type_ids argument indicating which sub-sequence each token belongs to. TYPE: bool DEFAULT: False |
| alibi | Whether or not to use the alibi bias instead of positional embeddings. TYPE: bool DEFAULT: True |
| alibi_bias_max | The maximum value of the alibi bias. TYPE: int DEFAULT: 8 |
Source code in mindnlp\transformers\models\mpt\configuration_mpt.py
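For illustration, the attention sub-configuration can be constructed directly. The argument values in this sketch are arbitrary examples rather than recommended settings; unspecified arguments keep the defaults shown in the __init__ signature below:

```python
from mindnlp.transformers.models.mpt.configuration_mpt import MptAttentionConfig

# Build an attention sub-configuration; alibi stays at its default (True).
attn_config = MptAttentionConfig(
    attn_type="multihead_attention",
    attn_pdrop=0.1,
    clip_qkv=8.0,
)
print(attn_config.attn_type, attn_config.alibi)
```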
mindnlp.transformers.models.mpt.configuration_mpt.MptAttentionConfig.__init__(attn_type='multihead_attention', attn_pdrop=0.0, attn_impl='torch', clip_qkv=None, softmax_scale=None, prefix_lm=False, qk_ln=False, attn_uses_sequence_id=False, alibi=True, alibi_bias_max=8, **kwargs)
Initializes a new instance of the MptAttentionConfig class.
| PARAMETER | DESCRIPTION |
|---|---|
| self | The instance of the class. |
| attn_type | The type of attention. Must be either 'multihead_attention' or 'multiquery_attention'. TYPE: str DEFAULT: 'multihead_attention' |
| attn_pdrop | The dropout probability for attention weights. TYPE: float DEFAULT: 0.0 |
| attn_impl | The implementation of attention. TYPE: str DEFAULT: 'torch' |
| clip_qkv | The value to clip the queries, keys and values to; None disables clipping. DEFAULT: None |
| softmax_scale | The scale applied in the attention softmax; None uses the default scaling. DEFAULT: None |
| prefix_lm | Indicates whether a prefix language model is used. TYPE: bool DEFAULT: False |
| qk_ln | Indicates whether layer normalization is applied to the queries and keys. TYPE: bool DEFAULT: False |
| attn_uses_sequence_id | Indicates whether sequence IDs are used in attention. TYPE: bool DEFAULT: False |
| alibi | Indicates whether the alibi bias is used. TYPE: bool DEFAULT: True |
| alibi_bias_max | The maximum value of the alibi bias. TYPE: int DEFAULT: 8 |
| **kwargs | Additional keyword arguments. DEFAULT: {} |

| RETURNS | DESCRIPTION |
|---|---|
| | None. |

| RAISES | DESCRIPTION |
|---|---|
| ValueError | If attn_type is not either 'multihead_attention' or 'multiquery_attention'. |
Source code in mindnlp\transformers\models\mpt\configuration_mpt.py
mindnlp.transformers.models.mpt.configuration_mpt.MptAttentionConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
classmethod
Instantiates a new instance of the MptAttentionConfig class from a pre-trained model.
| PARAMETER | DESCRIPTION |
|---|---|
| cls | The class object that the method was called on. |
| pretrained_model_name_or_path | The name or path of the pre-trained model to load. TYPE: str |

| RETURNS | DESCRIPTION |
|---|---|
| PretrainedConfig | An instance of the PretrainedConfig class instantiated with the configuration of the pre-trained model. |
Source code in mindnlp\transformers\models\mpt\configuration_mpt.py
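A hedged usage sketch: loading only the attention sub-configuration of a hosted MPT checkpoint. It requires downloading the configuration file from the Hub, and the checkpoint name is the mosaicml/mpt-7b repository referenced above:

```python
from mindnlp.transformers.models.mpt.configuration_mpt import MptAttentionConfig

# Fetch the full config for the checkpoint and keep only its attn_config section.
attn_config = MptAttentionConfig.from_pretrained("mosaicml/mpt-7b")
print(attn_config.alibi, attn_config.attn_pdrop)
```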
mindnlp.transformers.models.mpt.configuration_mpt.MptConfig
Bases: PretrainedConfig
This is the configuration class to store the configuration of a [MptModel]. It is used to instantiate a Mpt model
according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to the Mpt-7b architecture
mosaicml/mpt-7b.
Configuration objects inherit from [PretrainedConfig] and can be used to control the model outputs. Read the
documentation from [PretrainedConfig] for more information.
| PARAMETER | DESCRIPTION |
|---|---|
| d_model | Dimensionality of the embeddings and hidden states. TYPE: int DEFAULT: 2048 |
| n_heads | Number of attention heads for each attention layer in the Transformer encoder. TYPE: int DEFAULT: 16 |
| n_layers | Number of hidden layers in the Transformer encoder. TYPE: int DEFAULT: 24 |
| expansion_ratio | The ratio of the up/down scale in the MLP. TYPE: int DEFAULT: 4 |
| max_seq_len | The maximum sequence length of the model. TYPE: int DEFAULT: 2048 |
| vocab_size | Vocabulary size of the Mpt model. Defines the maximum number of different tokens that can be represented by the input_ids passed when calling [MptModel]. TYPE: int DEFAULT: 50368 |
| resid_pdrop | The dropout probability applied to the attention output before combining with residual. TYPE: float DEFAULT: 0.0 |
| layer_norm_epsilon | The epsilon to use in the layer normalization layers. TYPE: float DEFAULT: 1e-05 |
| emb_pdrop | The dropout probability for the embedding layer. TYPE: float DEFAULT: 0.0 |
| learned_pos_emb | Whether to use learned positional embeddings. TYPE: bool DEFAULT: True |
| attn_config | A dictionary used to configure the model's attention module. TYPE: dict DEFAULT: None |
| init_device | The device to use for parameter initialization. Defined for backward compatibility. TYPE: str DEFAULT: 'cpu' |
| logit_scale | If not None, scale the logits by this value. TYPE: float DEFAULT: None |
| no_bias | Whether to use bias in all linear layers. TYPE: bool DEFAULT: True |
| verbose | The verbosity level to use for logging. Used in previous versions of MPT models for logging; this argument is deprecated. TYPE: int DEFAULT: 0 |
| embedding_fraction | The fraction to scale the gradients of the embedding layer by. TYPE: float DEFAULT: 1.0 |
| norm_type | Type of layer norm to use. All MPT models use the same layer norm implementation. Defined for backward compatibility. TYPE: str DEFAULT: 'low_precision_layernorm' |
| use_cache | Whether or not the model should return the last key/values attentions (not used by all models). TYPE: bool DEFAULT: False |
| initializer_range | The standard deviation of the truncated_normal_initializer for initializing all weight matrices. TYPE: float DEFAULT: 0.02 |
Example
>>> from mindnlp.transformers import MptConfig, MptModel
...
>>> # Initializing a Mpt configuration
>>> configuration = MptConfig()
...
>>> # Initializing a model (with random weights) from the configuration
>>> model = MptModel(configuration)
...
>>> # Accessing the model configuration
>>> configuration = model.config
Source code in mindnlp\transformers\models\mpt\configuration_mpt.py
mindnlp.transformers.models.mpt.configuration_mpt.MptConfig.__init__(d_model=2048, n_heads=16, n_layers=24, expansion_ratio=4, max_seq_len=2048, vocab_size=50368, resid_pdrop=0.0, layer_norm_epsilon=1e-05, emb_pdrop=0.0, learned_pos_emb=True, attn_config=None, init_device='cpu', logit_scale=None, no_bias=True, verbose=0, embedding_fraction=1.0, norm_type='low_precision_layernorm', use_cache=False, initializer_range=0.02, **kwargs)
Initializes an instance of the MptConfig class.
| PARAMETER | DESCRIPTION |
|---|---|
| self | The object instance. |
| d_model | The dimensionality of the model's hidden states. TYPE: int DEFAULT: 2048 |
| n_heads | The number of attention heads. TYPE: int DEFAULT: 16 |
| n_layers | The number of layers in the model. TYPE: int DEFAULT: 24 |
| expansion_ratio | The expansion ratio for the feed-forward layers. TYPE: int DEFAULT: 4 |
| max_seq_len | The maximum sequence length. TYPE: int DEFAULT: 2048 |
| vocab_size | The size of the vocabulary. TYPE: int DEFAULT: 50368 |
| resid_pdrop | The dropout probability for residual connections. TYPE: float DEFAULT: 0.0 |
| layer_norm_epsilon | The epsilon value for layer normalization. TYPE: float DEFAULT: 1e-05 |
| emb_pdrop | The dropout probability for token embeddings. TYPE: float DEFAULT: 0.0 |
| learned_pos_emb | Whether to use learned positional embeddings. TYPE: bool DEFAULT: True |
| attn_config | The attention configuration. TYPE: dict DEFAULT: None |
| init_device | The device to initialize the model on. TYPE: str DEFAULT: 'cpu' |
| logit_scale | The scale factor for the logits; None disables scaling. TYPE: float DEFAULT: None |
| no_bias | Whether to exclude biases in the model's linear layers. TYPE: bool DEFAULT: True |
| verbose | The verbosity level (deprecated). TYPE: int DEFAULT: 0 |
| embedding_fraction | The fraction to scale the gradients of the embedding layer by. TYPE: float DEFAULT: 1.0 |
| norm_type | The type of layer normalization. TYPE: str DEFAULT: 'low_precision_layernorm' |
| use_cache | Whether to use caching in the model. TYPE: bool DEFAULT: False |
| initializer_range | The standard deviation for weight initialization. TYPE: float DEFAULT: 0.02 |

| RETURNS | DESCRIPTION |
|---|---|
| | None. |
Source code in mindnlp\transformers\models\mpt\configuration_mpt.py
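As a sketch of how the attention sub-configuration plugs into the model configuration: the small sizes below are arbitrary, chosen only to keep the example light, and the assumption (mirroring the upstream Transformers implementation) is that attn_config may be passed as a plain dict of MptAttentionConfig arguments:

```python
from mindnlp.transformers.models.mpt.configuration_mpt import MptConfig

# A scaled-down MPT configuration with a customized attention module.
config = MptConfig(
    d_model=256,
    n_heads=4,
    n_layers=2,
    max_seq_len=512,
    attn_config={"attn_pdrop": 0.1, "alibi": True},
)
print(config.attn_config.attn_pdrop)
```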
mindnlp.transformers.models.mpt.modeling_mpt
MindSpore MPT model.
mindnlp.transformers.models.mpt.modeling_mpt.MptAttention
Bases: Module
Multi-head self-attention. Using the torch or triton attention implementation enables the user to also use an additive bias.
Source code in mindnlp\transformers\models\mpt\modeling_mpt.py
mindnlp.transformers.models.mpt.modeling_mpt.MptForCausalLM
Bases: MptPreTrainedModel
Source code in mindnlp\transformers\models\mpt\modeling_mpt.py
mindnlp.transformers.models.mpt.modeling_mpt.MptForCausalLM.forward(input_ids=None, past_key_values=None, attention_mask=None, inputs_embeds=None, labels=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None)
labels (mindspore.Tensor of shape (batch_size, sequence_length), optional):
Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set
labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100
are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size].
Source code in mindnlp\transformers\models\mpt\modeling_mpt.py
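A minimal sketch of the label convention, using a tiny randomly initialized model so no checkpoint download is needed. The call-and-output pattern (keyword arguments, outputs.loss, outputs.logits) is assumed to mirror the upstream Transformers API:

```python
import numpy as np
import mindspore
from mindnlp.transformers.models.mpt.configuration_mpt import MptConfig
from mindnlp.transformers.models.mpt.modeling_mpt import MptForCausalLM

# Tiny, randomly initialized model to keep the example lightweight.
config = MptConfig(d_model=64, n_heads=4, n_layers=2, vocab_size=128, max_seq_len=32)
model = MptForCausalLM(config)

input_ids = mindspore.Tensor(np.random.randint(0, 128, (1, 8)), mindspore.int64)
# For language modeling the labels are simply the inputs; the shift happens inside the model.
outputs = model(input_ids=input_ids, labels=input_ids)
print(outputs.loss, outputs.logits.shape)
```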
mindnlp.transformers.models.mpt.modeling_mpt.MptForQuestionAnswering
Bases: MptPreTrainedModel
Source code in mindnlp\transformers\models\mpt\modeling_mpt.py
mindnlp.transformers.models.mpt.modeling_mpt.MptForQuestionAnswering.forward(input_ids=None, attention_mask=None, inputs_embeds=None, start_positions=None, end_positions=None, output_attentions=None, output_hidden_states=None, return_dict=None)
start_positions (mindspore.Tensor of shape (batch_size,), optional):
Labels for the position (index) of the start of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
end_positions (mindspore.Tensor of shape (batch_size,), optional):
Labels for the position (index) of the end of the labelled span for computing the token classification loss.
Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence
are not taken into account for computing the loss.
Source code in mindnlp\transformers\models\mpt\modeling_mpt.py
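A sketch of how the span labels are supplied, again with a tiny randomly initialized model; all sizes and indices are illustrative, and the output attribute names follow the usual question-answering output convention:

```python
import numpy as np
import mindspore
from mindnlp.transformers.models.mpt.configuration_mpt import MptConfig
from mindnlp.transformers.models.mpt.modeling_mpt import MptForQuestionAnswering

config = MptConfig(d_model=64, n_heads=4, n_layers=2, vocab_size=128)
model = MptForQuestionAnswering(config)

input_ids = mindspore.Tensor(np.random.randint(0, 128, (1, 10)), mindspore.int64)
# One start/end index per batch element; indices past the sequence length are clamped.
start_positions = mindspore.Tensor([2], mindspore.int64)
end_positions = mindspore.Tensor([5], mindspore.int64)
outputs = model(input_ids=input_ids, start_positions=start_positions, end_positions=end_positions)
print(outputs.loss, outputs.start_logits.shape, outputs.end_logits.shape)
```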
mindnlp.transformers.models.mpt.modeling_mpt.MptForSequenceClassification
Bases: MptPreTrainedModel
Source code in mindnlp\transformers\models\mpt\modeling_mpt.py
mindnlp.transformers.models.mpt.modeling_mpt.MptForSequenceClassification.forward(input_ids=None, past_key_values=None, attention_mask=None, inputs_embeds=None, labels=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None)
labels (mindspore.Tensor of shape (batch_size,), optional):
Labels for computing the sequence classification/regression loss. Indices should be in [0, ...,
config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if
config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Source code in mindnlp\transformers\models\mpt\modeling_mpt.py
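A sketch of the label convention for single-label classification with a tiny random model. The num_labels and pad_token_id values are illustrative assumptions passed through the config kwargs; pad_token_id lets the model locate the last non-padding token for pooling:

```python
import numpy as np
import mindspore
from mindnlp.transformers.models.mpt.configuration_mpt import MptConfig
from mindnlp.transformers.models.mpt.modeling_mpt import MptForSequenceClassification

# num_labels > 1 selects the cross-entropy branch.
config = MptConfig(d_model=64, n_heads=4, n_layers=2, vocab_size=128, num_labels=3, pad_token_id=0)
model = MptForSequenceClassification(config)

input_ids = mindspore.Tensor(np.random.randint(1, 128, (1, 8)), mindspore.int64)
labels = mindspore.Tensor([2], mindspore.int64)  # one class index per sequence
outputs = model(input_ids=input_ids, labels=labels)
print(outputs.loss, outputs.logits.shape)
```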
mindnlp.transformers.models.mpt.modeling_mpt.MptForTokenClassification
Bases: MptPreTrainedModel
Source code in mindnlp\transformers\models\mpt\modeling_mpt.py
mindnlp.transformers.models.mpt.modeling_mpt.MptForTokenClassification.forward(input_ids=None, past_key_values=None, attention_mask=None, inputs_embeds=None, labels=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None, **deprecated_arguments)
labels (mindspore.Tensor of shape (batch_size, sequence_length), optional):
Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1];
a Cross-Entropy loss is computed over the individual tokens.
Source code in mindnlp\transformers\models\mpt\modeling_mpt.py
mindnlp.transformers.models.mpt.modeling_mpt.MptPreTrainedModel
Bases: PreTrainedModel
Source code in mindnlp\transformers\models\mpt\modeling_mpt.py
mindnlp.transformers.models.mpt.modeling_mpt.build_mpt_alibi_tensor(num_heads, sequence_length, alibi_bias_max=8)
Link to paper: https://arxiv.org/abs/2108.12409 - the ALiBi tensor is not causal as the original paper mentions; it relies on a translation invariance of softmax for a quick implementation. This implementation has been copied from the ALiBi implementation in the MPT source code, which leads to slightly different results than the Bloom ALiBi: https://huggingface.co/mosaicml/mpt-7b/blob/main/attention.py#L292
Source code in mindnlp\transformers\models\mpt\modeling_mpt.py
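A usage sketch: the function produces a per-head bias in which each head applies a different, geometrically decreasing slope to the (negative) token distances, which is the essence of ALiBi. The printed shape is simply whatever the implementation returns and is not asserted here:

```python
from mindnlp.transformers.models.mpt.modeling_mpt import build_mpt_alibi_tensor

# Bias tensor for 4 heads over a 16-token sequence.
alibi = build_mpt_alibi_tensor(num_heads=4, sequence_length=16, alibi_bias_max=8)
print(alibi.shape)
```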