configuration_utils

mindnlp.transformers.configuration_utils.PretrainedConfig

Base class for all configuration classes. Handles a few parameters common to all models' configurations as well as methods for loading/downloading/saving configurations.

A configuration file can be loaded and saved to disk. Loading the configuration file and using this file to initialize a model does not load the model weights. It only affects the model's configuration.

Class attributes (overridden by derived classes):

  • model_type (str) -- An identifier for the model type, serialized into the JSON file, and used to recreate the correct object in [~transformers.AutoConfig].
  • is_composition (bool) -- Whether the config class is composed of multiple sub-configs. In this case the config has to be initialized from two or more configs of type [~transformers.PretrainedConfig] like: [~transformers.EncoderDecoderConfig] or [~RagConfig].
  • keys_to_ignore_at_inference (List[str]) -- A list of keys to ignore by default when looking at dictionary outputs of the model during inference.
  • attribute_map (Dict[str, str]) -- A dict that maps model specific attribute names to the standardized naming of attributes.
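
As a quick illustration of attribute_map, here is a minimal, hypothetical sketch (the ToyConfig class and the n_embd alias are invented for illustration; the aliasing itself is performed by the __setattr__/__getattribute__ overrides shown in the source below):

```python
from mindnlp.transformers.configuration_utils import PretrainedConfig

class ToyConfig(PretrainedConfig):
    model_type = "toy"
    # Hypothetical mapping: expose the model-specific name `n_embd`
    # as an alias for the standardized `hidden_size` attribute.
    attribute_map = {"n_embd": "hidden_size"}

    def __init__(self, hidden_size=64, **kwargs):
        super().__init__(**kwargs)
        self.hidden_size = hidden_size

config = ToyConfig()
config.n_embd = 128               # write through the alias ...
assert config.hidden_size == 128  # ... and both names see the same value
```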

Common attributes (present in all subclasses):

  • vocab_size (int) -- The number of tokens in the vocabulary, which is also the first dimension of the embeddings matrix (this attribute may be missing for models that don't have a text modality like ViT).
  • hidden_size (int) -- The hidden size of the model.
  • num_attention_heads (int) -- The number of attention heads used in the multi-head attention layers of the model.
  • num_hidden_layers (int) -- The number of blocks in the model.
Args

  • name_or_path (str, optional, defaults to ""): Store the string that was passed to [PreTrainedModel.from_pretrained] or [TFPreTrainedModel.from_pretrained] as pretrained_model_name_or_path if the configuration was created with such a method.
  • output_hidden_states (bool, optional, defaults to False): Whether or not the model should return all hidden states.
  • output_attentions (bool, optional, defaults to False): Whether or not the model should return all attentions.
  • return_dict (bool, optional, defaults to True): Whether or not the model should return a [~transformers.utils.ModelOutput] instead of a plain tuple.
  • is_encoder_decoder (bool, optional, defaults to False): Whether the model is used as an encoder/decoder or not.
  • is_decoder (bool, optional, defaults to False): Whether the model is used as a decoder or not (if not, it is used as an encoder).
  • cross_attention_hidden_size (int, optional): The hidden size of the cross-attention layer in case the model is used as a decoder in an encoder-decoder setting and the cross-attention hidden dimension differs from self.config.hidden_size.
  • add_cross_attention (bool, optional, defaults to False): Whether cross-attention layers should be added to the model. Note that this option is only relevant for models that can be used as decoder models within the [EncoderDecoderModel] class, which consists of all models in AUTO_MODELS_FOR_CAUSAL_LM.
  • tie_encoder_decoder (bool, optional, defaults to False): Whether all encoder weights should be tied to their equivalent decoder weights. This requires the encoder and decoder model to have the exact same parameter names.
  • pruned_heads (Dict[int, List[int]], optional, defaults to {}): Pruned heads of the model. The keys are the selected layer indices and the associated values the list of heads to prune in said layer. For instance, `{1: [0, 2], 2: [2, 3]}` will prune heads 0 and 2 on layer 1 and heads 2 and 3 on layer 2.
  • chunk_size_feed_forward (int, optional, defaults to 0): The chunk size of all feed forward layers in the residual attention blocks. A chunk size of 0 means that the feed forward layer is not chunked. A chunk size of n means that the feed forward layer processes n < sequence_length embeddings at a time. For more information on feed forward chunking, see How does Feed Forward Chunking work?
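
For instance, a minimal sketch of passing these common arguments at construction time (instantiating the base class directly here purely for illustration; real models use a derived config):

```python
from mindnlp.transformers.configuration_utils import PretrainedConfig

config = PretrainedConfig(
    output_attentions=True,
    pruned_heads={1: [0, 2], 2: [2, 3]},  # prune heads 0/2 on layer 1 and 2/3 on layer 2
    chunk_size_feed_forward=0,            # 0 = feed forward layers are not chunked
)
assert config.output_attentions is True
assert config.pruned_heads == {1: [0, 2], 2: [2, 3]}
```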

Parameters for sequence generation

  • max_length (int, optional, defaults to 20): Maximum length that will be used by default in the generate method of the model.
  • min_length (int, optional, defaults to 0): Minimum length that will be used by default in the generate method of the model.
  • do_sample (bool, optional, defaults to False): Flag that will be used by default in the generate method of the model. Whether or not to use sampling; use greedy decoding otherwise.
  • early_stopping (bool, optional, defaults to False): Flag that will be used by default in the generate method of the model. Whether to stop the beam search when at least num_beams sentences are finished per batch or not.
  • num_beams (int, optional, defaults to 1): Number of beams for beam search that will be used by default in the generate method of the model. 1 means no beam search.
  • num_beam_groups (int, optional, defaults to 1): Number of groups to divide num_beams into in order to ensure diversity among different groups of beams that will be used by default in the generate method of the model. 1 means no group beam search.
  • diversity_penalty (float, optional, defaults to 0.0): Value to control diversity for group beam search that will be used by default in the generate method of the model. 0 means no diversity penalty. The higher the penalty, the more diverse are the outputs.
  • temperature (float, optional, defaults to 1.0): The value used to modulate the next token probabilities that will be used by default in the generate method of the model. Must be strictly positive.
  • top_k (int, optional, defaults to 50): Number of highest probability vocabulary tokens to keep for top-k-filtering that will be used by default in the generate method of the model.
  • top_p (float, optional, defaults to 1): Value that will be used by default in the generate method of the model for top_p. If set to float < 1, only the most probable tokens with probabilities that add up to top_p or higher are kept for generation.
  • typical_p (float, optional, defaults to 1): Local typicality measures how similar the conditional probability of predicting a target token next is to the expected conditional probability of predicting a random token next, given the partial text already generated. If set to float < 1, the smallest set of the most locally typical tokens with probabilities that add up to typical_p or higher are kept for generation. See this paper for more details.
  • repetition_penalty (float, optional, defaults to 1): Parameter for repetition penalty that will be used by default in the generate method of the model. 1.0 means no penalty.
  • length_penalty (float, optional, defaults to 1): Exponential penalty to the length that is used with beam-based generation. It is applied as an exponent to the sequence length, which in turn is used to divide the score of the sequence. Since the score is the log likelihood of the sequence (i.e. negative), length_penalty > 0.0 promotes longer sequences, while length_penalty < 0.0 encourages shorter sequences.
  • no_repeat_ngram_size (int, optional, defaults to 0): Value that will be used by default in the generate method of the model for no_repeat_ngram_size. If set to int > 0, all ngrams of that size can only occur once.
  • encoder_no_repeat_ngram_size (int, optional, defaults to 0): Value that will be used by default in the generate method of the model for encoder_no_repeat_ngram_size. If set to int > 0, all ngrams of that size that occur in the encoder_input_ids cannot occur in the decoder_input_ids.
  • bad_words_ids (List[int], optional): List of token ids that are not allowed to be generated that will be used by default in the generate method of the model. In order to get the tokens of the words that should not appear in the generated text, use tokenizer.encode(bad_word, add_prefix_space=True).
  • num_return_sequences (int, optional, defaults to 1): Number of independently computed returned sequences for each element in the batch that will be used by default in the generate method of the model.
  • output_scores (bool, optional, defaults to False): Whether the model should return the logits when used for generation.
  • return_dict_in_generate (bool, optional, defaults to False): Whether the model should return a [~transformers.utils.ModelOutput] instead of a torch.LongTensor.
  • forced_bos_token_id (int, optional): The id of the token to force as the first generated token after the decoder_start_token_id. Useful for multilingual models like mBART where the first generated token needs to be the target language token.
  • forced_eos_token_id (int, optional): The id of the token to force as the last generated token when max_length is reached.
  • remove_invalid_values (bool, optional): Whether to remove possible nan and inf outputs of the model to prevent the generation method from crashing. Note that using remove_invalid_values can slow down generation.
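
A minimal sketch of how these generation defaults live on the config: they are plain attributes, set in __init__ from the defaults above and overridable via keyword arguments (again using the base class directly for illustration):

```python
from mindnlp.transformers.configuration_utils import PretrainedConfig

config = PretrainedConfig(num_beams=4, max_length=64, no_repeat_ngram_size=3)
assert config.num_beams == 4
# Parameters not passed keep their documented defaults, e.g. do_sample is False.
assert config.do_sample is False

# Note: `save_pretrained` logs a warning when non-default generation
# parameters like these are stored on the model config.
```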

Parameters for fine-tuning tasks

  • architectures (List[str], optional): Model architectures that can be used with the model pretrained weights.
  • finetuning_task (str, optional): Name of the task used to fine-tune the model. This can be used when converting from an original (TensorFlow or PyTorch) checkpoint.
  • id2label (Dict[int, str], optional): A map from index (for instance prediction index, or target index) to label.
  • label2id (Dict[str, int], optional): A map from label to index for the model.
  • num_labels (int, optional): Number of labels to use in the last layer added to the model, typically for a classification task.
  • task_specific_params (Dict[str, Any], optional): Additional keyword arguments to store for the current task.
  • problem_type (str, optional): Problem type for XxxForSequenceClassification models. Can be one of "regression", "single_label_classification" or "multi_label_classification".
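
The interplay between num_labels, id2label and label2id can be sketched as follows (grounded in the num_labels property shown in the source below; the base class is used directly for illustration):

```python
from mindnlp.transformers.configuration_utils import PretrainedConfig

# Passing only `num_labels` synthesizes a default id2label/label2id pair.
config = PretrainedConfig(num_labels=3)
assert config.id2label == {0: "LABEL_0", 1: "LABEL_1", 2: "LABEL_2"}
assert config.label2id == {"LABEL_0": 0, "LABEL_1": 1, "LABEL_2": 2}

# Passing an explicit map instead lets `num_labels` be derived from it.
config = PretrainedConfig(
    id2label={0: "negative", 1: "positive"},
    label2id={"negative": 0, "positive": 1},
)
assert config.num_labels == 2
```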

Parameters linked to the tokenizer

  • tokenizer_class (str, optional): The name of the associated tokenizer class to use (if none is set, will use the tokenizer associated to the model by default).
  • prefix (str, optional): A specific prompt that should be added at the beginning of each text before calling the model.
  • bos_token_id (int, optional): The id of the beginning-of-stream token.
  • pad_token_id (int, optional): The id of the padding token.
  • eos_token_id (int, optional): The id of the end-of-stream token.
  • decoder_start_token_id (int, optional): If an encoder-decoder model starts decoding with a different token than bos, the id of that token.
  • sep_token_id (int, optional): The id of the separation token.
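
These token ids are plain integer attributes on the config; a short sketch with illustrative values:

```python
from mindnlp.transformers.configuration_utils import PretrainedConfig

# `generate` falls back to these ids when none are passed explicitly.
config = PretrainedConfig(bos_token_id=0, pad_token_id=1, eos_token_id=2)
assert (config.bos_token_id, config.pad_token_id, config.eos_token_id) == (0, 1, 2)
```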

PyTorch specific parameters

  • torchscript (bool, optional, defaults to False): Whether or not the model should be used with Torchscript.
  • tie_word_embeddings (bool, optional, defaults to True): Whether the model's input and output word embeddings should be tied. Note that this is only relevant if the model has an output word embedding layer.
  • ms_dtype (str, optional): The dtype of the weights. This attribute can be used to initialize the model to a non-default dtype (which is normally float32) and thus allow for optimal storage allocation. For example, if the saved model is float16, ideally we want to load it back using the minimal amount of memory needed to load float16 weights. Since the config object is stored in plain text, this attribute contains just the floating type string without the mindspore. prefix. For example, for mindspore.float16, ms_dtype is the "float16" string.

    This attribute is currently not used at model loading time, but this may change in future versions. Saving the dtype with save_pretrained already prepares for that.
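
A minimal sketch of the ms_dtype round-trip, assuming MindSpore is installed (the JSON file stores the plain string; __init__ resolves it to a MindSpore dtype object):

```python
from mindnlp.transformers.configuration_utils import PretrainedConfig

# "float16" is what lands in config.json; in memory it is resolved to a dtype.
config = PretrainedConfig(ms_dtype="float16")
print(config.ms_dtype)  # mindspore.float16 when MindSpore is available
```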

TensorFlow specific parameters

  • use_bfloat16 (bool, optional, defaults to False): Whether or not the model should use BFloat16 scalars (only used by some TensorFlow models).
  • tf_legacy_loss (bool, optional, defaults to False): Whether the model should use legacy TensorFlow losses. Legacy losses have variable output shapes and may not be XLA-compatible. This option is here for backward compatibility and will be removed in Transformers v5.

Source code in mindnlp\transformers\configuration_utils.py
class PretrainedConfig:
    # no-format
    r"""
    Base class for all configuration classes. Handles a few parameters common to all models' configurations as well as
    methods for loading/downloading/saving configurations.

    <Tip>

    A configuration file can be loaded and saved to disk. Loading the configuration file and using this file to
    initialize a model does **not** load the model weights. It only affects the model's configuration.

    </Tip>

    Class attributes (overridden by derived classes):

    - **model_type** (`str`) -- An identifier for the model type, serialized into the JSON file, and used to recreate
      the correct object in [`~transformers.AutoConfig`].
    - **is_composition** (`bool`) -- Whether the config class is composed of multiple sub-configs. In this case the
      config has to be initialized from two or more configs of type [`~transformers.PretrainedConfig`] like:
      [`~transformers.EncoderDecoderConfig`] or [`~RagConfig`].
    - **keys_to_ignore_at_inference** (`List[str]`) -- A list of keys to ignore by default when looking at dictionary
      outputs of the model during inference.
    - **attribute_map** (`Dict[str, str]`) -- A dict that maps model specific attribute names to the standardized
      naming of attributes.

    Common attributes (present in all subclasses):

    - **vocab_size** (`int`) -- The number of tokens in the vocabulary, which is also the first dimension of the
      embeddings matrix (this attribute may be missing for models that don't have a text modality like ViT).
    - **hidden_size** (`int`) -- The hidden size of the model.
    - **num_attention_heads** (`int`) -- The number of attention heads used in the multi-head attention layers of the
      model.
    - **num_hidden_layers** (`int`) -- The number of blocks in the model.

    Args:
        name_or_path (`str`, *optional*, defaults to `""`):
            Store the string that was passed to [`PreTrainedModel.from_pretrained`] or
            [`TFPreTrainedModel.from_pretrained`] as `pretrained_model_name_or_path` if the configuration was created
            with such a method.
        output_hidden_states (`bool`, *optional*, defaults to `False`):
            Whether or not the model should return all hidden-states.
        output_attentions (`bool`, *optional*, defaults to `False`):
            Whether or not the model should return all attentions.
        return_dict (`bool`, *optional*, defaults to `True`):
            Whether or not the model should return a [`~transformers.utils.ModelOutput`] instead of a plain tuple.
        is_encoder_decoder (`bool`, *optional*, defaults to `False`):
            Whether the model is used as an encoder/decoder or not.
        is_decoder (`bool`, *optional*, defaults to `False`):
            Whether the model is used as a decoder or not (if not, it is used as an encoder).
        cross_attention_hidden_size (`int`, *optional*):
            The hidden size of the cross-attention layer in case the model is used as a decoder in an encoder-decoder
            setting and the cross-attention hidden dimension differs from `self.config.hidden_size`.
        add_cross_attention (`bool`, *optional*, defaults to `False`):
            Whether cross-attention layers should be added to the model. Note that this option is only relevant for models
            that can be used as decoder models within the [`EncoderDecoderModel`] class, which consists of all models
            in `AUTO_MODELS_FOR_CAUSAL_LM`.
        tie_encoder_decoder (`bool`, *optional*, defaults to `False`):
            Whether all encoder weights should be tied to their equivalent decoder weights. This requires the encoder
            and decoder model to have the exact same parameter names.
        pruned_heads (`Dict[int, List[int]]`, *optional*, defaults to `{}`):
            Pruned heads of the model. The keys are the selected layer indices and the associated values, the list of
            heads to prune in said layer.

            For instance `{1: [0, 2], 2: [2, 3]}` will prune heads 0 and 2 on layer 1 and heads 2 and 3 on layer 2.
        chunk_size_feed_forward (`int`, *optional*, defaults to `0`):
            The chunk size of all feed forward layers in the residual attention blocks. A chunk size of `0` means that
            the feed forward layer is not chunked. A chunk size of n means that the feed forward layer processes `n` <
            sequence_length embeddings at a time. For more information on feed forward chunking, see [How does Feed
            Forward Chunking work?](../glossary.html#feed-forward-chunking).

        > Parameters for sequence generation

        max_length (`int`, *optional*, defaults to 20):
            Maximum length that will be used by default in the `generate` method of the model.
        min_length (`int`, *optional*, defaults to 0):
            Minimum length that will be used by default in the `generate` method of the model.
        do_sample (`bool`, *optional*, defaults to `False`):
            Flag that will be used by default in the `generate` method of the model. Whether or not to use sampling;
            use greedy decoding otherwise.
        early_stopping (`bool`, *optional*, defaults to `False`):
            Flag that will be used by default in the `generate` method of the model. Whether to stop the beam search
            when at least `num_beams` sentences are finished per batch or not.
        num_beams (`int`, *optional*, defaults to 1):
            Number of beams for beam search that will be used by default in the `generate` method of the model. 1 means
            no beam search.
        num_beam_groups (`int`, *optional*, defaults to 1):
            Number of groups to divide `num_beams` into in order to ensure diversity among different groups of beams
            that will be used by default in the `generate` method of the model. 1 means no group beam search.
        diversity_penalty (`float`, *optional*, defaults to 0.0):
            Value to control diversity for group beam search that will be used by default in the `generate` method of
            the model. 0 means no diversity penalty. The higher the penalty, the more diverse are the outputs.
        temperature (`float`, *optional*, defaults to 1.0):
            The value used to modulate the next token probabilities that will be used by default in the `generate` method
            of the model. Must be strictly positive.
        top_k (`int`, *optional*, defaults to 50):
            Number of highest probability vocabulary tokens to keep for top-k-filtering that will be used by default in
            the `generate` method of the model.
        top_p (`float`, *optional*, defaults to 1):
            Value that will be used by default in the `generate` method of the model for `top_p`. If set to float < 1,
            only the most probable tokens with probabilities that add up to `top_p` or higher are kept for generation.
        typical_p (`float`, *optional*, defaults to 1):
            Local typicality measures how similar the conditional probability of predicting a target token next is to
            the expected conditional probability of predicting a random token next, given the partial text already
            generated. If set to float < 1, the smallest set of the most locally typical tokens with probabilities that
            add up to `typical_p` or higher are kept for generation. See [this
            paper](https://arxiv.org/pdf/2202.00666.pdf) for more details.
        repetition_penalty (`float`, *optional*, defaults to 1):
            Parameter for repetition penalty that will be used by default in the `generate` method of the model. 1.0
            means no penalty.
        length_penalty (`float`, *optional*, defaults to 1):
            Exponential penalty to the length that is used with beam-based generation. It is applied as an exponent to
            the sequence length, which in turn is used to divide the score of the sequence. Since the score is the log
            likelihood of the sequence (i.e. negative), `length_penalty` > 0.0 promotes longer sequences, while
            `length_penalty` < 0.0 encourages shorter sequences.
        no_repeat_ngram_size (`int`, *optional*, defaults to 0):
            Value that will be used by default in the `generate` method of the model for `no_repeat_ngram_size`. If
            set to int > 0, all ngrams of that size can only occur once.
        encoder_no_repeat_ngram_size (`int`, *optional*, defaults to 0):
            Value that will be used by default in the `generate` method of the model for
            `encoder_no_repeat_ngram_size`. If set to int > 0, all ngrams of that size that occur in the
            `encoder_input_ids` cannot occur in the `decoder_input_ids`.
        bad_words_ids (`List[int]`, *optional*):
            List of token ids that are not allowed to be generated that will be used by default in the `generate`
            method of the model. In order to get the tokens of the words that should not appear in the generated text,
            use `tokenizer.encode(bad_word, add_prefix_space=True)`.
        num_return_sequences (`int`, *optional*, defaults to 1):
            Number of independently computed returned sequences for each element in the batch that will be used by
            default in the `generate` method of the model.
        output_scores (`bool`, *optional*, defaults to `False`):
            Whether the model should return the logits when used for generation.
        return_dict_in_generate (`bool`, *optional*, defaults to `False`):
            Whether the model should return a [`~transformers.utils.ModelOutput`] instead of a `torch.LongTensor`.
        forced_bos_token_id (`int`, *optional*):
            The id of the token to force as the first generated token after the `decoder_start_token_id`. Useful for
            multilingual models like [mBART](../model_doc/mbart) where the first generated token needs to be the target
            language token.
        forced_eos_token_id (`int`, *optional*):
            The id of the token to force as the last generated token when `max_length` is reached.
        remove_invalid_values (`bool`, *optional*):
            Whether to remove possible _nan_ and _inf_ outputs of the model to prevent the generation method from crashing.
            Note that using `remove_invalid_values` can slow down generation.

        > Parameters for fine-tuning tasks

        architectures (`List[str]`, *optional*):
            Model architectures that can be used with the model pretrained weights.
        finetuning_task (`str`, *optional*):
            Name of the task used to fine-tune the model. This can be used when converting from an original (TensorFlow
            or PyTorch) checkpoint.
        id2label (`Dict[int, str]`, *optional*):
            A map from index (for instance prediction index, or target index) to label.
        label2id (`Dict[str, int]`, *optional*): A map from label to index for the model.
        num_labels (`int`, *optional*):
            Number of labels to use in the last layer added to the model, typically for a classification task.
        task_specific_params (`Dict[str, Any]`, *optional*):
            Additional keyword arguments to store for the current task.
        problem_type (`str`, *optional*):
            Problem type for `XxxForSequenceClassification` models. Can be one of `"regression"`,
            `"single_label_classification"` or `"multi_label_classification"`.

        > Parameters linked to the tokenizer

        tokenizer_class (`str`, *optional*):
            The name of the associated tokenizer class to use (if none is set, will use the tokenizer associated to the
            model by default).
        prefix (`str`, *optional*):
            A specific prompt that should be added at the beginning of each text before calling the model.
        bos_token_id (`int`, *optional*): The id of the _beginning-of-stream_ token.
        pad_token_id (`int`, *optional*): The id of the _padding_ token.
        eos_token_id (`int`, *optional*): The id of the _end-of-stream_ token.
        decoder_start_token_id (`int`, *optional*):
            If an encoder-decoder model starts decoding with a different token than _bos_, the id of that token.
        sep_token_id (`int`, *optional*): The id of the _separation_ token.

        > PyTorch specific parameters

        torchscript (`bool`, *optional*, defaults to `False`):
            Whether or not the model should be used with Torchscript.
        tie_word_embeddings (`bool`, *optional*, defaults to `True`):
            Whether the model's input and output word embeddings should be tied. Note that this is only relevant if the
            model has an output word embedding layer.
        ms_dtype (`str`, *optional*):
            The `dtype` of the weights. This attribute can be used to initialize the model to a non-default `dtype`
            (which is normally `float32`) and thus allow for optimal storage allocation. For example, if the saved
            model is `float16`, ideally we want to load it back using the minimal amount of memory needed to load
            `float16` weights. Since the config object is stored in plain text, this attribute contains just the
            floating type string without the `mindspore.` prefix. For example, for `mindspore.float16`, `ms_dtype` is
            the `"float16"` string.

            This attribute is currently not being used during model loading time, but this may change in the future
            versions. But we can already start preparing for the future by saving the dtype with save_pretrained.

        > TensorFlow specific parameters

        use_bfloat16 (`bool`, *optional*, defaults to `False`):
            Whether or not the model should use BFloat16 scalars (only used by some TensorFlow models).
        tf_legacy_loss (`bool`, *optional*, defaults to `False`):
            Whether the model should use legacy TensorFlow losses. Legacy losses have variable output shapes and may
            not be XLA-compatible. This option is here for backward compatibility and will be removed in Transformers
            v5.
    """

    model_type: str = ""
    is_composition: bool = False
    attribute_map: Dict[str, str] = {}
    _auto_class: Optional[str] = None

    def __setattr__(self, key, value):
        if key in super().__getattribute__("attribute_map"):
            key = super().__getattribute__("attribute_map")[key]
        super().__setattr__(key, value)

    def __getattribute__(self, key):
        if key != "attribute_map" and key in super().__getattribute__("attribute_map"):
            key = super().__getattribute__("attribute_map")[key]
        return super().__getattribute__(key)
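    # Example: with a hypothetical `attribute_map = {"n_embd": "hidden_size"}`,
    # reads and writes through `config.n_embd` transparently hit
    # `config.hidden_size` via the two overrides above.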

    def __init__(self, **kwargs):
        # Attributes with defaults
        self.return_dict = kwargs.pop("return_dict", True)
        self.output_hidden_states = kwargs.pop("output_hidden_states", False)
        self.output_attentions = kwargs.pop("output_attentions", False)
        self.torchscript = kwargs.pop("torchscript", False)  # Only used by PyTorch models
        self.ms_dtype = kwargs.pop("ms_dtype", None)  # Only used by MindSpore models
        self.use_bfloat16 = kwargs.pop("use_bfloat16", False)
        self.tf_legacy_loss = kwargs.pop("tf_legacy_loss", False)  # Only used by TensorFlow models
        self.pruned_heads = kwargs.pop("pruned_heads", {})
        self.tie_word_embeddings = kwargs.pop(
            "tie_word_embeddings", True
        )  # Whether input and output word embeddings should be tied for all MLM, LM and Seq2Seq models.
        self.chunk_size_feed_forward = kwargs.pop("chunk_size_feed_forward", 0)

        # `is_decoder` is used in encoder-decoder models to differentiate the decoder from the encoder
        self.is_encoder_decoder = kwargs.pop("is_encoder_decoder", False)
        self.is_decoder = kwargs.pop("is_decoder", False)
        self.cross_attention_hidden_size = kwargs.pop("cross_attention_hidden_size", None)
        self.add_cross_attention = kwargs.pop("add_cross_attention", False)
        self.tie_encoder_decoder = kwargs.pop("tie_encoder_decoder", False)

        # Retrocompatibility: Parameters for sequence generation. While we will keep the ability to load these
        # parameters, saving them will be deprecated. In a distant future, we won't need to load them.
        for parameter_name, default_value in self._get_generation_defaults().items():
            setattr(self, parameter_name, kwargs.pop(parameter_name, default_value))

        # Fine-tuning task arguments
        self.architectures = kwargs.pop("architectures", None)
        self.finetuning_task = kwargs.pop("finetuning_task", None)
        self.id2label = kwargs.pop("id2label", None)
        self.label2id = kwargs.pop("label2id", None)
        if self.label2id is not None and not isinstance(self.label2id, dict):
            raise ValueError("Argument label2id should be a dictionary.")
        if self.id2label is not None:
            if not isinstance(self.id2label, dict):
                raise ValueError("Argument id2label should be a dictionary.")
            num_labels = kwargs.pop("num_labels", None)
            if num_labels is not None and len(self.id2label) != num_labels:
                logger.warning(
                    f"You passed along `num_labels={num_labels}` with an incompatible id to label map: "
                    f"{self.id2label}. The number of labels wil be overwritten to {self.num_labels}."
                )
            self.id2label = {int(key): value for key, value in self.id2label.items()}
            # Keys are always strings in JSON so convert ids to int here.
        else:
            self.num_labels = kwargs.pop("num_labels", 2)

        if self.ms_dtype is not None and isinstance(self.ms_dtype, str):
            # we will start using self.ms_dtype in v5, but to be consistent with
            # from_pretrained's ms_dtype arg, convert it to an actual mindspore dtype object
            if is_mindspore_available():
                import mindspore

                self.ms_dtype = getattr(mindspore, self.ms_dtype)

        # Tokenizer arguments TODO: eventually tokenizer and models should share the same config
        self.tokenizer_class = kwargs.pop("tokenizer_class", None)
        self.prefix = kwargs.pop("prefix", None)
        self.bos_token_id = kwargs.pop("bos_token_id", None)
        self.pad_token_id = kwargs.pop("pad_token_id", None)
        self.eos_token_id = kwargs.pop("eos_token_id", None)
        self.sep_token_id = kwargs.pop("sep_token_id", None)

        self.decoder_start_token_id = kwargs.pop("decoder_start_token_id", None)

        # task specific arguments
        self.task_specific_params = kwargs.pop("task_specific_params", None)

        # regression / multi-label classification
        self.problem_type = kwargs.pop("problem_type", None)
        allowed_problem_types = ("regression", "single_label_classification", "multi_label_classification")
        if self.problem_type is not None and self.problem_type not in allowed_problem_types:
            raise ValueError(
                f"The config parameter `problem_type` was not understood: received {self.problem_type} "
                "but only 'regression', 'single_label_classification' and 'multi_label_classification' are valid."
            )

        # TPU arguments
        if kwargs.pop("xla_device", None) is not None:
            logger.warning(
                "The `xla_device` argument has been deprecated in v4.4.0 of Transformers. It is ignored and you can "
                "safely remove it from your `config.json` file."
            )

        # Name or path to the pretrained checkpoint
        self._name_or_path = str(kwargs.pop("name_or_path", ""))
        # Config hash
        self._commit_hash = kwargs.pop("_commit_hash", None)

        # Attention implementation to use, if relevant.
        self._attn_implementation_internal = kwargs.pop("attn_implementation", None)

        # Drop the transformers version info
        self.transformers_version = kwargs.pop("transformers_version", None)

        # Deal with gradient checkpointing
        if kwargs.get("gradient_checkpointing", False):
            warnings.warn(
                "Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 "
                "Transformers. Using `model.gradient_checkpointing_enable()` instead, or if you are using the "
                "`Trainer` API, pass `gradient_checkpointing=True` in your `TrainingArguments`."
            )

        # Additional attributes without default values
        for key, value in kwargs.items():
            try:
                setattr(self, key, value)
            except AttributeError as err:
                logger.error(f"Can't set {key} with value {value} for {self}")
                raise err

    @property
    def name_or_path(self) -> str:
        return getattr(self, "_name_or_path", None)

    @name_or_path.setter
    def name_or_path(self, value):
        self._name_or_path = str(value)  # Make sure that name_or_path is a string (for JSON encoding)

    @property
    def use_return_dict(self) -> bool:
        """
        `bool`: Whether or not return [`~utils.ModelOutput`] instead of tuples.
        """
        # If torchscript is set, force `return_dict=False` to avoid jit errors
        return self.return_dict and not self.torchscript

    @property
    def num_labels(self) -> int:
        """
        `int`: The number of labels for classification models.
        """
        return len(self.id2label)

    @num_labels.setter
    def num_labels(self, num_labels: int):
        if not hasattr(self, "id2label") or self.id2label is None or len(self.id2label) != num_labels:
            self.id2label = {i: f"LABEL_{i}" for i in range(num_labels)}
            self.label2id = dict(zip(self.id2label.values(), self.id2label.keys()))

    @property
    def _attn_implementation(self):
        # This property is made private for now (it cannot be changed yet; a `PreTrainedModel.use_attn_implementation` method needs to be implemented first).
        if hasattr(self, "_attn_implementation_internal"):
            if self._attn_implementation_internal is None:
                # `config.attn_implementation` should never be None, for backward compatibility.
                return "eager"
            else:
                return self._attn_implementation_internal
        else:
            return "eager"

    @_attn_implementation.setter
    def _attn_implementation(self, value):
        self._attn_implementation_internal = value

    def save_pretrained(self, save_directory: Union[str, os.PathLike], **kwargs):
        """
        Save a configuration object to the directory `save_directory`, so that it can be re-loaded using the
        [`~PretrainedConfig.from_pretrained`] class method.

        Args:
            save_directory (`str` or `os.PathLike`):
                Directory where the configuration JSON file will be saved (will be created if it does not exist).
            kwargs (`Dict[str, Any]`, *optional*):
                Additional key word arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method.
        """
        self._set_token_in_kwargs(kwargs)

        if os.path.isfile(save_directory):
            raise AssertionError(f"Provided path ({save_directory}) should be a directory, not a file")

        non_default_generation_parameters = {}
        for parameter_name, default_value in self._get_generation_defaults().items():
            if hasattr(self, parameter_name) and getattr(self, parameter_name) != default_value:
                non_default_generation_parameters[parameter_name] = getattr(self, parameter_name)
        if len(non_default_generation_parameters) > 0:
            logger.warning(
                "Some non-default generation parameters are set in the model config. These should go into a "
                "GenerationConfig file instead.\n"
                f"Non-default generation parameters: {str(non_default_generation_parameters)}"
            )

        os.makedirs(save_directory, exist_ok=True)

        # If we save using the predefined names, we can load using `from_pretrained`
        output_config_file = os.path.join(save_directory, CONFIG_NAME)

        self.to_json_file(output_config_file, use_diff=True)
        logger.info(f"Configuration saved in {output_config_file}")


    @staticmethod
    def _set_token_in_kwargs(kwargs, token=None):
        """Temporary method to deal with `token` and `use_auth_token`.

        This method is to avoid applying the same changes in all model config classes that overwrite `from_pretrained`.

        Need to clean up `use_auth_token` in a follow-up PR.
        """
        # Some model config classes like CLIP define their own `from_pretrained` without the new argument `token` yet.
        if token is None:
            token = kwargs.pop("token", None)
        use_auth_token = kwargs.pop("use_auth_token", None)

        if use_auth_token is not None:
            warnings.warn(
                "The `use_auth_token` argument is deprecated. Please use `token` instead.",
                FutureWarning,
            )
            if token is not None:
                raise ValueError(
                    "`token` and `use_auth_token` are both specified. Please set only the argument `token`."
                )
            token = use_auth_token

        if token is not None:
            kwargs["token"] = token

    @classmethod
    def from_pretrained(
        cls,
        pretrained_model_name_or_path: Union[str, os.PathLike],
        cache_dir: Optional[Union[str, os.PathLike]] = None,
        force_download: bool = False,
        local_files_only: bool = False,
        token: Optional[Union[str, bool]] = None,
        revision: str = "main",
        mirror='huggingface',
        **kwargs,
    ) -> "PretrainedConfig":
        r"""
        Instantiate a [`PretrainedConfig`] (or a derived class) from a pretrained model configuration.

        Args:
            pretrained_model_name_or_path (`str` or `os.PathLike`):
                This can be either:

                - a string, the *model id* of a pretrained model configuration hosted inside a model repo on
                  huggingface.co.
                - a path to a *directory* containing a configuration file saved using the
                  [`~PretrainedConfig.save_pretrained`] method, e.g., `./my_model_directory/`.
                - a path or url to a saved configuration JSON *file*, e.g., `./my_model_directory/configuration.json`.
            cache_dir (`str` or `os.PathLike`, *optional*):
                Path to a directory in which a downloaded pretrained model configuration should be cached if the
                standard cache should not be used.
            force_download (`bool`, *optional*, defaults to `False`):
                Whether or not to force to (re-)download the configuration files and override the cached versions if
                they exist.
            resume_download:
                Deprecated and ignored. All downloads are now resumed by default when possible.
                Will be removed in v5 of Transformers.
            proxies (`Dict[str, str]`, *optional*):
                A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128',
                'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
            token (`str` or `bool`, *optional*):
                The token to use as HTTP bearer authorization for remote files. If `True`, or not specified, will use
                the token generated when running `huggingface-cli login` (stored in `~/.huggingface`).
            revision (`str`, *optional*, defaults to `"main"`):
                The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
                git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
                identifier allowed by git.

                <Tip>

                To test a pull request you made on the Hub, you can pass `revision="refs/pr/<pr_number>"`.

                </Tip>

            return_unused_kwargs (`bool`, *optional*, defaults to `False`):
                If `False`, then this function returns just the final configuration object.

                If `True`, then this functions returns a `Tuple(config, unused_kwargs)` where *unused_kwargs* is a
                dictionary consisting of the key/value pairs whose keys are not configuration attributes: i.e., the
                part of `kwargs` which has not been used to update `config` and is otherwise ignored.
            subfolder (`str`, *optional*, defaults to `""`):
                In case the relevant files are located inside a subfolder of the model repo on huggingface.co, you can
                specify the folder name here.
            kwargs (`Dict[str, Any]`, *optional*):
                The values in kwargs of any keys which are configuration attributes will be used to override the loaded
                values. Behavior concerning key/value pairs whose keys are *not* configuration attributes is controlled
                by the `return_unused_kwargs` keyword parameter.

        Returns:
            [`PretrainedConfig`]: The configuration object instantiated from this pretrained model.

        Examples:

        ```python
        # We can't directly instantiate the base class *PretrainedConfig*, so let's show the examples on a
        # derived class: BertConfig
        config = BertConfig.from_pretrained(
            "google-bert/bert-base-uncased"
        )  # Download configuration from huggingface.co and cache.
        config = BertConfig.from_pretrained(
            "./test/saved_model/"
        )  # E.g. config (or model) was saved using *save_pretrained('./test/saved_model/')*
        config = BertConfig.from_pretrained("./test/saved_model/my_configuration.json")
        config = BertConfig.from_pretrained("google-bert/bert-base-uncased", output_attentions=True, foo=False)
        assert config.output_attentions == True
        config, unused_kwargs = BertConfig.from_pretrained(
            "google-bert/bert-base-uncased", output_attentions=True, foo=False, return_unused_kwargs=True
        )
        assert config.output_attentions == True
        assert unused_kwargs == {"foo": False}
        ```"""
        kwargs["cache_dir"] = cache_dir
        kwargs["force_download"] = force_download
        kwargs["local_files_only"] = local_files_only
        kwargs["revision"] = revision
        kwargs["mirror"] = mirror

        cls._set_token_in_kwargs(kwargs, token)

        config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
        if "model_type" in config_dict and hasattr(cls, "model_type") and config_dict["model_type"] != cls.model_type:
            logger.warning(
                f"You are using a model of type {config_dict['model_type']} to instantiate a model of type "
                f"{cls.model_type}. This is not supported for all configurations of models and can yield errors."
            )

        return cls.from_dict(config_dict, **kwargs)

    @classmethod
    def get_config_dict(
        cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs
    ) -> Tuple[Dict[str, Any], Dict[str, Any]]:
        """
        From a `pretrained_model_name_or_path`, resolve to a dictionary of parameters, to be used for instantiating a
        [`PretrainedConfig`] using `from_dict`.

        Parameters:
            pretrained_model_name_or_path (`str` or `os.PathLike`):
                The identifier of the pre-trained checkpoint from which we want the dictionary of parameters.

        Returns:
            `Tuple[Dict, Dict]`: The dictionary(ies) that will be used to instantiate the configuration object.

        """
        cls._set_token_in_kwargs(kwargs)

        original_kwargs = copy.deepcopy(kwargs)
        # Get config dict associated with the base config file
        config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
        if 'torch_dtype' in config_dict:
            config_dict['ms_dtype'] = config_dict.pop('torch_dtype')

        if "_commit_hash" in config_dict:
            original_kwargs["_commit_hash"] = config_dict["_commit_hash"]

        # That config file may point us toward another config file to use.
        if "configuration_files" in config_dict:
            configuration_file = get_configuration_file(config_dict["configuration_files"])
            config_dict, kwargs = cls._get_config_dict(
                pretrained_model_name_or_path, _configuration_file=configuration_file, **original_kwargs
            )

        return config_dict, kwargs
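    # Usage sketch (model id taken from the `from_pretrained` examples above):
    # `from_pretrained` is roughly equivalent to resolving the dict first and
    # then building the config:
    #
    #   config_dict, unused = BertConfig.get_config_dict("google-bert/bert-base-uncased")
    #   config = BertConfig.from_dict(config_dict, **unused)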

    @classmethod
    def _get_config_dict(
        cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs
    ) -> Tuple[Dict[str, Any], Dict[str, Any]]:
        cache_dir = kwargs.pop("cache_dir", None)
        force_download = kwargs.pop("force_download", False)
        resume_download = kwargs.pop("resume_download", None)
        proxies = kwargs.pop("proxies", None)
        token = kwargs.pop("token", None)
        local_files_only = kwargs.pop("local_files_only", False)
        revision = kwargs.pop("revision", None)
        trust_remote_code = kwargs.pop("trust_remote_code", None)
        subfolder = kwargs.pop("subfolder", "")
        from_pipeline = kwargs.pop("_from_pipeline", None)
        from_auto_class = kwargs.pop("_from_auto", False)
        commit_hash = kwargs.pop("_commit_hash", None)
        mirror = kwargs.get('mirror', 'huggingface')

        gguf_file = kwargs.get("gguf_file", None)

        if trust_remote_code is True:
            logger.warning(
                "The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is"
                " ignored."
            )

        user_agent = {"file_type": "config", "from_auto_class": from_auto_class}
        if from_pipeline is not None:
            user_agent["using_pipeline"] = from_pipeline

        pretrained_model_name_or_path = str(pretrained_model_name_or_path)

        is_local = os.path.isdir(pretrained_model_name_or_path)
        if os.path.isfile(os.path.join(subfolder, pretrained_model_name_or_path)):
            # Special case when pretrained_model_name_or_path is a local file
            resolved_config_file = pretrained_model_name_or_path
            is_local = True
        elif is_remote_url(pretrained_model_name_or_path):
            configuration_file = pretrained_model_name_or_path if gguf_file is None else gguf_file
            resolved_config_file = download_url(pretrained_model_name_or_path)
        else:
            configuration_file = kwargs.pop("_configuration_file", CONFIG_NAME) if gguf_file is None else gguf_file

            try:
                # Load from local folder or from cache or download from model Hub and cache
                resolved_config_file = cached_file(
                    pretrained_model_name_or_path,
                    configuration_file,
                    cache_dir=cache_dir,
                    force_download=force_download,
                    proxies=proxies,
                    resume_download=resume_download,
                    local_files_only=local_files_only,
                    token=token,
                    user_agent=user_agent,
                    revision=revision,
                    subfolder=subfolder,
                    mirror=mirror,
                )
                commit_hash = extract_commit_hash(resolved_config_file, commit_hash)
            except EnvironmentError:
                # Raise any environment error raised by `cached_file`. It will have a helpful error message adapted to
                # the original exception.
                raise
            except Exception:
                # For any other exception, we throw a generic error.
                raise EnvironmentError(
                    f"Can't load the configuration of '{pretrained_model_name_or_path}'. If you were trying to load it"
                    " from 'https://huggingface.co/models', make sure you don't have a local directory with the same"
                    f" name. Otherwise, make sure '{pretrained_model_name_or_path}' is the correct path to a directory"
                    f" containing a {configuration_file} file"
                )

        try:
            # if gguf_file:
            #     config_dict = load_gguf_checkpoint(resolved_config_file, return_tensors=False)["config"]
            # else:
            # Load config dict
            config_dict = cls._dict_from_json_file(resolved_config_file)

            config_dict["_commit_hash"] = commit_hash
        except (json.JSONDecodeError, UnicodeDecodeError):
            raise EnvironmentError(
                f"It looks like the config file at '{resolved_config_file}' is not a valid JSON file."
            )

        if is_local:
            logger.info(f"loading configuration file {resolved_config_file}")
        else:
            logger.info(f"loading configuration file {configuration_file} from cache at {resolved_config_file}")

        # if "auto_map" in config_dict and not is_local:
        #     config_dict["auto_map"] = add_model_info_to_auto_map(
        #         config_dict["auto_map"], pretrained_model_name_or_path
        #     )
        # if "custom_pipelines" in config_dict and not is_local:
        #     config_dict["custom_pipelines"] = add_model_info_to_custom_pipelines(
        #         config_dict["custom_pipelines"], pretrained_model_name_or_path
        #     )
        return config_dict, kwargs

    @classmethod
    def from_dict(cls, config_dict: Dict[str, Any], **kwargs) -> "PretrainedConfig":
        """
        Instantiates a [`PretrainedConfig`] from a Python dictionary of parameters.

        Args:
            config_dict (`Dict[str, Any]`):
                Dictionary that will be used to instantiate the configuration object. Such a dictionary can be
                retrieved from a pretrained checkpoint by leveraging the [`~PretrainedConfig.get_config_dict`] method.
            kwargs (`Dict[str, Any]`):
                Additional parameters from which to initialize the configuration object.

        Returns:
            [`PretrainedConfig`]: The configuration object instantiated from those parameters.
        """
        return_unused_kwargs = kwargs.pop("return_unused_kwargs", False)
        # Those arguments may be passed along for our internal telemetry.
        # We remove them so they don't appear in `return_unused_kwargs`.
        kwargs.pop("_from_auto", None)
        kwargs.pop("_from_pipeline", None)
        # The commit hash might have been updated in the `config_dict`, we don't want the kwargs to erase that update.
        if "_commit_hash" in kwargs and "_commit_hash" in config_dict:
            kwargs["_commit_hash"] = config_dict["_commit_hash"]

        # We remove it from kwargs so that it does not appear in `return_unused_kwargs`.
        config_dict["attn_implementation"] = kwargs.pop("attn_implementation", None)

        config = cls(**config_dict)

        if hasattr(config, "pruned_heads"):
            config.pruned_heads = {int(key): value for key, value in config.pruned_heads.items()}

        # Update config with kwargs if needed
        if "num_labels" in kwargs and "id2label" in kwargs:
            num_labels = kwargs["num_labels"]
            id2label = kwargs["id2label"] if kwargs["id2label"] is not None else []
            if len(id2label) != num_labels:
                raise ValueError(
                    f"You passed along `num_labels={num_labels }` with an incompatible id to label map: "
                    f"{kwargs['id2label']}. Since those arguments are inconsistent with each other, you should remove "
                    "one of them."
                )
        to_remove = []
        for key, value in kwargs.items():
            if hasattr(config, key):
                current_attr = getattr(config, key)
                # To authorize passing a custom subconfig as kwarg in models that have nested configs.
                if isinstance(current_attr, PretrainedConfig) and isinstance(value, dict):
                    value = current_attr.__class__(**value)
                setattr(config, key, value)
                if key != "ms_dtype":
                    to_remove.append(key)
        for key in to_remove:
            kwargs.pop(key, None)

        logger.info(f"Model config {config}")
        if return_unused_kwargs:
            return config, kwargs
        else:
            return config

    @classmethod
    def from_json_file(cls, json_file: Union[str, os.PathLike]) -> "PretrainedConfig":
        """
        Instantiates a [`PretrainedConfig`] from the path to a JSON file of parameters.

        Args:
            json_file (`str` or `os.PathLike`):
                Path to the JSON file containing the parameters.

        Returns:
            [`PretrainedConfig`]: The configuration object instantiated from that JSON file.

        """
        config_dict = cls._dict_from_json_file(json_file)
        return cls(**config_dict)

    @classmethod
    def _dict_from_json_file(cls, json_file: Union[str, os.PathLike]):
        with open(json_file, "r", encoding="utf-8") as reader:
            text = reader.read()
        return json.loads(text)

    def __eq__(self, other):
        return isinstance(other, PretrainedConfig) and (self.__dict__ == other.__dict__)

    def __repr__(self):
        return f"{self.__class__.__name__} {self.to_json_string()}"

    def to_diff_dict(self) -> Dict[str, Any]:
        """
        Removes all attributes from config which correspond to the default config attributes for better readability and
        serializes to a Python dictionary.

        Returns:
            `Dict[str, Any]`: Dictionary of all the attributes that make up this configuration instance,
        """
        config_dict = self.to_dict()

        # get the default config dict
        default_config_dict = PretrainedConfig().to_dict()

        # get class specific config dict
        class_config_dict = self.__class__().to_dict() if not self.is_composition else {}

        serializable_config_dict = {}

        # only serialize values that differ from the default config
        for key, value in config_dict.items():
            if (
                isinstance(getattr(self, key, None), PretrainedConfig)
                and key in class_config_dict
                and isinstance(class_config_dict[key], dict)
            ):
                # For nested configs we need to clean the diff recursively
                diff = recursive_diff_dict(value, class_config_dict[key], config_obj=getattr(self, key, None))
                if "model_type" in value:
                    # Needs to be set even if it's not in the diff
                    diff["model_type"] = value["model_type"]
                if len(diff) > 0:
                    serializable_config_dict[key] = diff
            elif (
                key not in default_config_dict
                or key == "transformers_version"
                or value != default_config_dict[key]
                or (key in class_config_dict and value != class_config_dict[key])
            ):
                serializable_config_dict[key] = value

        if hasattr(self, "quantization_config"):
            serializable_config_dict["quantization_config"] = (
                self.quantization_config.to_dict()
                if not isinstance(self.quantization_config, dict)
                else self.quantization_config
            )

            # pop the `_pre_quantization_dtype` as dtype objects are not JSON serializable.
            _ = serializable_config_dict.pop("_pre_quantization_dtype", None)

        self.dict_ms_dtype_to_str(serializable_config_dict)

        if "_attn_implementation_internal" in serializable_config_dict:
            del serializable_config_dict["_attn_implementation_internal"]

        return serializable_config_dict

    def to_dict(self) -> Dict[str, Any]:
        """
        Serializes this instance to a Python dictionary.

        Returns:
            `Dict[str, Any]`: Dictionary of all the attributes that make up this configuration instance.
        """
        output = copy.deepcopy(self.__dict__)
        if hasattr(self.__class__, "model_type"):
            output["model_type"] = self.__class__.model_type
        if "_auto_class" in output:
            del output["_auto_class"]
        if "_commit_hash" in output:
            del output["_commit_hash"]
        if "_attn_implementation_internal" in output:
            del output["_attn_implementation_internal"]

        for key, value in output.items():
            # Deal with nested configs like CLIP
            if isinstance(value, PretrainedConfig):
                value = value.to_dict()
                del value["transformers_version"]

            output[key] = value

        if hasattr(self, "quantization_config"):
            output["quantization_config"] = (
                self.quantization_config.to_dict()
                if not isinstance(self.quantization_config, dict)
                else self.quantization_config
            )

            # pop the `_pre_quantization_dtype` as dtype objects are not JSON serializable.
            _ = output.pop("_pre_quantization_dtype", None)

        self.dict_ms_dtype_to_str(output)

        return output

    def to_json_string(self, use_diff: bool = True) -> str:
        """
        Serializes this instance to a JSON string.

        Args:
            use_diff (`bool`, *optional*, defaults to `True`):
                If set to `True`, only the difference between the config instance and the default `PretrainedConfig()`
                is serialized to JSON string.

        Returns:
            `str`: String containing all the attributes that make up this configuration instance in JSON format.
        """
        if use_diff is True:
            config_dict = self.to_diff_dict()
        else:
            config_dict = self.to_dict()
        return json.dumps(config_dict, indent=2, sort_keys=True) + "\n"

    def to_json_file(self, json_file_path: Union[str, os.PathLike], use_diff: bool = True):
        """
        Save this instance to a JSON file.

        Args:
            json_file_path (`str` or `os.PathLike`):
                Path to the JSON file in which this configuration instance's parameters will be saved.
            use_diff (`bool`, *optional*, defaults to `True`):
                If set to `True`, only the difference between the config instance and the default `PretrainedConfig()`
                is serialized to JSON file.
        """
        with open(json_file_path, "w", encoding="utf-8") as writer:
            writer.write(self.to_json_string(use_diff=use_diff))

    def update(self, config_dict: Dict[str, Any]):
        """
        Updates attributes of this class with attributes from `config_dict`.

        Args:
            config_dict (`Dict[str, Any]`): Dictionary of attributes that should be updated for this class.
        """
        for key, value in config_dict.items():
            setattr(self, key, value)

    def update_from_string(self, update_str: str):
        """
        Updates attributes of this class with attributes from `update_str`.

        The expected format is ints, floats and strings as is, and for booleans use `true` or `false`. For example:
        "n_embd=10,resid_pdrop=0.2,scale_attn_weights=false,summary_type=cls_index"

        The keys to change have to already exist in the config object.

        Args:
            update_str (`str`): String with attributes that should be updated for this class.

        """

        d = dict(x.split("=") for x in update_str.split(","))
        for k, v in d.items():
            if not hasattr(self, k):
                raise ValueError(f"key {k} isn't in the original config dict")

            old_v = getattr(self, k)
            if isinstance(old_v, bool):
                if v.lower() in ["true", "1", "y", "yes"]:
                    v = True
                elif v.lower() in ["false", "0", "n", "no"]:
                    v = False
                else:
                    raise ValueError(f"can't derive true or false from {v} (key {k})")
            elif isinstance(old_v, int):
                v = int(v)
            elif isinstance(old_v, float):
                v = float(v)
            elif not isinstance(old_v, str):
                raise TypeError(
                    f"You can only update int, float, bool or string values in the config, got {v} for key {k}"
                )

            setattr(self, k, v)

    def dict_ms_dtype_to_str(self, d: Dict[str, Any]) -> None:
        """
        Checks whether the passed dictionary and its nested dicts have a *ms_dtype* key and if it's not None,
        converts torch.dtype to a string of just the type. For example, `torch.float32` get converted into *"float32"*
        string, which can then be stored in the json format.
        """
        if d.get("ms_dtype", None) is not None and not isinstance(d["ms_dtype"], str):
            d["ms_dtype"] = str(d["ms_dtype"]).lower()
        for value in d.values():
            if isinstance(value, dict):
                self.dict_ms_dtype_to_str(value)

    @classmethod
    def register_for_auto_class(cls, auto_class="AutoConfig"):
        """
        Register this class with a given auto class. This should only be used for custom configurations as the ones in
        the library are already mapped with `AutoConfig`.

        <Tip warning={true}>

        This API is experimental and may have some slight breaking changes in the next releases.

        </Tip>

        Args:
            auto_class (`str` or `type`, *optional*, defaults to `"AutoConfig"`):
                The auto class to register this new configuration with.
        """
        if not isinstance(auto_class, str):
            auto_class = auto_class.__name__

        import mindnlp.transformers.models.auto as auto_module

        if not hasattr(auto_module, auto_class):
            raise ValueError(f"{auto_class} is not a valid auto class.")

        cls._auto_class = auto_class

    @staticmethod
    def _get_generation_defaults() -> Dict[str, Any]:
        return {
            "max_length": 20,
            "min_length": 0,
            "do_sample": False,
            "early_stopping": False,
            "num_beams": 1,
            "num_beam_groups": 1,
            "diversity_penalty": 0.0,
            "temperature": 1.0,
            "top_k": 50,
            "top_p": 1.0,
            "typical_p": 1.0,
            "repetition_penalty": 1.0,
            "length_penalty": 1.0,
            "no_repeat_ngram_size": 0,
            "encoder_no_repeat_ngram_size": 0,
            "bad_words_ids": None,
            "num_return_sequences": 1,
            "output_scores": False,
            "return_dict_in_generate": False,
            "forced_bos_token_id": None,
            "forced_eos_token_id": None,
            "remove_invalid_values": False,
            "exponential_decay_length_penalty": None,
            "suppress_tokens": None,
            "begin_suppress_tokens": None,
        }

    def _has_non_default_generation_parameters(self) -> bool:
        """
        Whether or not this instance holds non-default generation parameters.
        """
        for parameter_name, default_value in self._get_generation_defaults().items():
            if hasattr(self, parameter_name) and getattr(self, parameter_name) != default_value:
                return True
        return False
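The defaults above back the warning that `save_pretrained` emits when generation parameters are kept in the model config (see that method further down). A minimal sketch, assuming a `BertConfig`; the private helper is called directly only for illustration:

```python
from mindnlp.transformers import BertConfig

config = BertConfig()
config.max_length = 50  # differs from the default of 20 in _get_generation_defaults()
assert config._has_non_default_generation_parameters()
# save_pretrained would now log a warning suggesting a GenerationConfig file instead
```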

mindnlp.transformers.configuration_utils.PretrainedConfig.num_labels: int property writable

int: The number of labels for classification models.
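A short sketch of the writable property, assuming the behavior mirrors the upstream `transformers` setter (assigning `num_labels` regenerates the default `id2label`/`label2id` maps when their size doesn't match):

```python
from mindnlp.transformers import BertConfig

config = BertConfig()
config.num_labels = 3
print(config.id2label)  # expected: {0: 'LABEL_0', 1: 'LABEL_1', 2: 'LABEL_2'}
```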

mindnlp.transformers.configuration_utils.PretrainedConfig.use_return_dict: bool property

bool: Whether or not to return [~utils.ModelOutput] instead of tuples.

mindnlp.transformers.configuration_utils.PretrainedConfig.dict_ms_dtype_to_str(d)

Checks whether the passed dictionary and its nested dicts have an ms_dtype key and, if it's not None, converts the MindSpore dtype to a string of just the type. For example, mindspore.float32 gets converted into the "float32" string, which can then be stored in JSON format.

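A quick sketch of the conversion on a nested dict; the sample values assume MindSpore dtypes whose string form lowercases to the bare type name:

```python
import mindspore
from mindnlp.transformers import BertConfig

d = {"ms_dtype": mindspore.float32, "sub_config": {"ms_dtype": mindspore.float16}}
BertConfig().dict_ms_dtype_to_str(d)  # walks nested dicts and converts in place
print(d)  # expected: {'ms_dtype': 'float32', 'sub_config': {'ms_dtype': 'float16'}}
```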

mindnlp.transformers.configuration_utils.PretrainedConfig.from_dict(config_dict, **kwargs) classmethod

Instantiates a [PretrainedConfig] from a Python dictionary of parameters.

PARAMETER DESCRIPTION
config_dict

Dictionary that will be used to instantiate the configuration object. Such a dictionary can be retrieved from a pretrained checkpoint by leveraging the [~PretrainedConfig.get_config_dict] method.

TYPE: `Dict[str, Any]`

kwargs

Additional parameters from which to initialize the configuration object.

TYPE: `Dict[str, Any]` DEFAULT: {}

RETURNS DESCRIPTION
PretrainedConfig

[PretrainedConfig]: The configuration object instantiated from those parameters.

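A usage sketch combining `get_config_dict` and `from_dict`; the checkpoint id is illustrative and requires network access, and `return_unused_kwargs` behaves as in the `from_pretrained` example below:

```python
from mindnlp.transformers import BertConfig

config_dict, _ = BertConfig.get_config_dict("google-bert/bert-base-uncased")
config, unused_kwargs = BertConfig.from_dict(
    config_dict, output_attentions=True, foo=False, return_unused_kwargs=True
)
assert config.output_attentions is True
assert unused_kwargs == {"foo": False}  # `foo` is not a config attribute
```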

mindnlp.transformers.configuration_utils.PretrainedConfig.from_json_file(json_file) classmethod

Instantiates a [PretrainedConfig] from the path to a JSON file of parameters.

PARAMETER DESCRIPTION
json_file

Path to the JSON file containing the parameters.

TYPE: `str` or `os.PathLike`

RETURNS DESCRIPTION
PretrainedConfig

[PretrainedConfig]: The configuration object instantiated from that JSON file.

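A round-trip sketch with `to_json_file`; `use_diff=False` writes the full parameter set so the file can be read back standalone:

```python
from mindnlp.transformers import BertConfig

config = BertConfig(num_hidden_layers=6)
config.to_json_file("my_configuration.json", use_diff=False)
restored = BertConfig.from_json_file("my_configuration.json")
assert restored.num_hidden_layers == 6
```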

mindnlp.transformers.configuration_utils.PretrainedConfig.from_pretrained(pretrained_model_name_or_path, cache_dir=None, force_download=False, local_files_only=False, token=None, revision='main', mirror='huggingface', **kwargs) classmethod

Instantiate a [PretrainedConfig] (or a derived class) from a pretrained model configuration.

PARAMETER DESCRIPTION
pretrained_model_name_or_path

This can be either:

  • a string, the model id of a pretrained model configuration hosted inside a model repo on huggingface.co.
  • a path to a directory containing a configuration file saved using the [~PretrainedConfig.save_pretrained] method, e.g., ./my_model_directory/.
  • a path or url to a saved configuration JSON file, e.g., ./my_model_directory/configuration.json.

TYPE: `str` or `os.PathLike`

cache_dir

Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

TYPE: `str` or `os.PathLike`, *optional* DEFAULT: None

force_download

Whether or not to force (re-)downloading the configuration files and override the cached versions if they exist.

TYPE: `bool`, *optional*, defaults to `False` DEFAULT: False

resume_download

Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.

proxies

A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.

TYPE: `Dict[str, str]`, *optional*

token

The token to use as HTTP bearer authorization for remote files. If True, or not specified, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).

TYPE: `str` or `bool`, *optional* DEFAULT: None

revision

The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.

To test a pull request you made on the Hub, you can pass revision="refs/pr/<pr_number>".

TYPE: `str`, *optional*, defaults to `"main"` DEFAULT: 'main'

return_unused_kwargs

If False, then this function returns just the final configuration object.

If True, then this function returns a Tuple(config, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not configuration attributes: i.e., the part of kwargs which has not been used to update config and is otherwise ignored.

TYPE: `bool`, *optional*, defaults to `False`

subfolder

In case the relevant files are located inside a subfolder of the model repo on huggingface.co, you can specify the folder name here.

TYPE: `str`, *optional*, defaults to `""`

kwargs

The values in kwargs of any keys which are configuration attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not configuration attributes is controlled by the return_unused_kwargs keyword parameter.

TYPE: `Dict[str, Any]`, *optional* DEFAULT: {}

RETURNS DESCRIPTION
PretrainedConfig

[PretrainedConfig]: The configuration object instantiated from this pretrained model.

Examples:

```python
# We can't instantiate directly the base class *PretrainedConfig* so let's show the examples on a
# derived class: BertConfig
config = BertConfig.from_pretrained(
    "google-bert/bert-base-uncased"
)  # Download configuration from huggingface.co and cache.
config = BertConfig.from_pretrained(
    "./test/saved_model/"
)  # E.g. config (or model) was saved using *save_pretrained('./test/saved_model/')*
config = BertConfig.from_pretrained("./test/saved_model/my_configuration.json")
config = BertConfig.from_pretrained("google-bert/bert-base-uncased", output_attentions=True, foo=False)
assert config.output_attentions == True
config, unused_kwargs = BertConfig.from_pretrained(
    "google-bert/bert-base-uncased", output_attentions=True, foo=False, return_unused_kwargs=True
)
assert config.output_attentions == True
assert unused_kwargs == {"foo": False}
```
Source code in mindnlp\transformers\configuration_utils.py
@classmethod
def from_pretrained(
    cls,
    pretrained_model_name_or_path: Union[str, os.PathLike],
    cache_dir: Optional[Union[str, os.PathLike]] = None,
    force_download: bool = False,
    local_files_only: bool = False,
    token: Optional[Union[str, bool]] = None,
    revision: str = "main",
    mirror='huggingface',
    **kwargs,
) -> "PretrainedConfig":
    r"""
    Instantiate a [`PretrainedConfig`] (or a derived class) from a pretrained model configuration.

    Args:
        pretrained_model_name_or_path (`str` or `os.PathLike`):
            This can be either:

            - a string, the *model id* of a pretrained model configuration hosted inside a model repo on
              huggingface.co.
            - a path to a *directory* containing a configuration file saved using the
              [`~PretrainedConfig.save_pretrained`] method, e.g., `./my_model_directory/`.
            - a path or url to a saved configuration JSON *file*, e.g., `./my_model_directory/configuration.json`.
        cache_dir (`str` or `os.PathLike`, *optional*):
            Path to a directory in which a downloaded pretrained model configuration should be cached if the
            standard cache should not be used.
        force_download (`bool`, *optional*, defaults to `False`):
            Whether or not to force (re-)downloading the configuration files and override the cached versions if
            they exist.
        resume_download:
            Deprecated and ignored. All downloads are now resumed by default when possible.
            Will be removed in v5 of Transformers.
        proxies (`Dict[str, str]`, *optional*):
            A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128',
            'http://hostname': 'foo.bar:4012'}.` The proxies are used on each request.
        token (`str` or `bool`, *optional*):
            The token to use as HTTP bearer authorization for remote files. If `True`, or not specified, will use
            the token generated when running `huggingface-cli login` (stored in `~/.huggingface`).
        revision (`str`, *optional*, defaults to `"main"`):
            The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
            git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
            identifier allowed by git.

            <Tip>

            To test a pull request you made on the Hub, you can pass `revision="refs/pr/<pr_number>"`.

            </Tip>

        return_unused_kwargs (`bool`, *optional*, defaults to `False`):
            If `False`, then this function returns just the final configuration object.

            If `True`, then this function returns a `Tuple(config, unused_kwargs)` where *unused_kwargs* is a
            dictionary consisting of the key/value pairs whose keys are not configuration attributes: i.e., the
            part of `kwargs` which has not been used to update `config` and is otherwise ignored.
        subfolder (`str`, *optional*, defaults to `""`):
            In case the relevant files are located inside a subfolder of the model repo on huggingface.co, you can
            specify the folder name here.
        kwargs (`Dict[str, Any]`, *optional*):
            The values in kwargs of any keys which are configuration attributes will be used to override the loaded
            values. Behavior concerning key/value pairs whose keys are *not* configuration attributes is controlled
            by the `return_unused_kwargs` keyword parameter.

    Returns:
        [`PretrainedConfig`]: The configuration object instantiated from this pretrained model.

    Examples:

    ```python
    # We can't instantiate directly the base class *PretrainedConfig* so let's show the examples on a
    # derived class: BertConfig
    config = BertConfig.from_pretrained(
        "google-bert/bert-base-uncased"
    )  # Download configuration from huggingface.co and cache.
    config = BertConfig.from_pretrained(
        "./test/saved_model/"
    )  # E.g. config (or model) was saved using *save_pretrained('./test/saved_model/')*
    config = BertConfig.from_pretrained("./test/saved_model/my_configuration.json")
    config = BertConfig.from_pretrained("google-bert/bert-base-uncased", output_attentions=True, foo=False)
    assert config.output_attentions == True
    config, unused_kwargs = BertConfig.from_pretrained(
        "google-bert/bert-base-uncased", output_attentions=True, foo=False, return_unused_kwargs=True
    )
    assert config.output_attentions == True
    assert unused_kwargs == {"foo": False}
    ```"""
    kwargs["cache_dir"] = cache_dir
    kwargs["force_download"] = force_download
    kwargs["local_files_only"] = local_files_only
    kwargs["revision"] = revision
    kwargs["mirror"] = mirror

    cls._set_token_in_kwargs(kwargs, token)

    config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
    if "model_type" in config_dict and hasattr(cls, "model_type") and config_dict["model_type"] != cls.model_type:
        logger.warning(
            f"You are using a model of type {config_dict['model_type']} to instantiate a model of type "
            f"{cls.model_type}. This is not supported for all configurations of models and can yield errors."
        )

    return cls.from_dict(config_dict, **kwargs)

mindnlp.transformers.configuration_utils.PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) classmethod

From a pretrained_model_name_or_path, resolve to a dictionary of parameters, to be used for instantiating a [PretrainedConfig] using from_dict.

PARAMETER DESCRIPTION
pretrained_model_name_or_path

The identifier of the pre-trained checkpoint from which we want the dictionary of parameters.

TYPE: `str` or `os.PathLike`

RETURNS DESCRIPTION
Tuple[Dict[str, Any], Dict[str, Any]]

Tuple[Dict, Dict]: The dictionary(ies) that will be used to instantiate the configuration object.

Source code in mindnlp\transformers\configuration_utils.py
@classmethod
def get_config_dict(
    cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs
) -> Tuple[Dict[str, Any], Dict[str, Any]]:
    """
    From a `pretrained_model_name_or_path`, resolve to a dictionary of parameters, to be used for instantiating a
    [`PretrainedConfig`] using `from_dict`.

    Parameters:
        pretrained_model_name_or_path (`str` or `os.PathLike`):
            The identifier of the pre-trained checkpoint from which we want the dictionary of parameters.

    Returns:
        `Tuple[Dict, Dict]`: The dictionary(ies) that will be used to instantiate the configuration object.

    """
    cls._set_token_in_kwargs(kwargs)

    original_kwargs = copy.deepcopy(kwargs)
    # Get config dict associated with the base config file
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
    if 'torch_dtype' in config_dict:
        config_dict['ms_dtype'] = config_dict.pop('torch_dtype')

    if "_commit_hash" in config_dict:
        original_kwargs["_commit_hash"] = config_dict["_commit_hash"]

    # That config file may point us toward another config file to use.
    if "configuration_files" in config_dict:
        configuration_file = get_configuration_file(config_dict["configuration_files"])
        config_dict, kwargs = cls._get_config_dict(
            pretrained_model_name_or_path, _configuration_file=configuration_file, **original_kwargs
        )

    return config_dict, kwargs
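A sketch of resolving the raw dictionary without building a config object (network access assumed; note from the source above that a `torch_dtype` entry in the checkpoint is remapped to `ms_dtype`):

```python
from mindnlp.transformers import PretrainedConfig

config_dict, unused_kwargs = PretrainedConfig.get_config_dict("google-bert/bert-base-uncased")
print(config_dict["model_type"])  # expected: 'bert'
```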

mindnlp.transformers.configuration_utils.PretrainedConfig.register_for_auto_class(auto_class='AutoConfig') classmethod

Register this class with a given auto class. This should only be used for custom configurations as the ones in the library are already mapped with AutoConfig.

This API is experimental and may have some slight breaking changes in the next releases.

PARAMETER DESCRIPTION
auto_class

The auto class to register this new configuration with.

TYPE: `str` or `type`, *optional*, defaults to `"AutoConfig"` DEFAULT: 'AutoConfig'

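A sketch for a custom configuration; the class name and model type here are hypothetical:

```python
from mindnlp.transformers import PretrainedConfig

class MyCustomConfig(PretrainedConfig):
    model_type = "my-custom-model"

# register so AutoConfig can recreate this class when loading custom code
MyCustomConfig.register_for_auto_class()  # defaults to "AutoConfig"
```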

mindnlp.transformers.configuration_utils.PretrainedConfig.save_pretrained(save_directory, **kwargs)

Save a configuration object to the directory save_directory, so that it can be re-loaded using the [~PretrainedConfig.from_pretrained] class method.

PARAMETER DESCRIPTION
save_directory

Directory where the configuration JSON file will be saved (will be created if it does not exist).

TYPE: `str` or `os.PathLike`

kwargs

Additional keyword arguments passed along to the [~utils.PushToHubMixin.push_to_hub] method.

TYPE: `Dict[str, Any]`, *optional* DEFAULT: {}

Source code in mindnlp\transformers\configuration_utils.py
def save_pretrained(self, save_directory: Union[str, os.PathLike], **kwargs):
    """
    Save a configuration object to the directory `save_directory`, so that it can be re-loaded using the
    [`~PretrainedConfig.from_pretrained`] class method.

    Args:
        save_directory (`str` or `os.PathLike`):
            Directory where the configuration JSON file will be saved (will be created if it does not exist).
        kwargs (`Dict[str, Any]`, *optional*):
            Additional keyword arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method.
    """
    self._set_token_in_kwargs(kwargs)

    if os.path.isfile(save_directory):
        raise AssertionError(f"Provided path ({save_directory}) should be a directory, not a file")

    non_default_generation_parameters = {}
    for parameter_name, default_value in self._get_generation_defaults().items():
        if hasattr(self, parameter_name) and getattr(self, parameter_name) != default_value:
            non_default_generation_parameters[parameter_name] = getattr(self, parameter_name)
    if len(non_default_generation_parameters) > 0:
        logger.warning(
            "Some non-default generation parameters are set in the model config. These should go into a "
            "GenerationConfig file instead.\n"
            f"Non-default generation parameters: {str(non_default_generation_parameters)}"
        )

    os.makedirs(save_directory, exist_ok=True)

    # If we save using the predefined names, we can load using `from_pretrained`
    output_config_file = os.path.join(save_directory, CONFIG_NAME)

    self.to_json_file(output_config_file, use_diff=True)
    logger.info(f"Configuration saved in {output_config_file}")

mindnlp.transformers.configuration_utils.PretrainedConfig.to_dict()

Serializes this instance to a Python dictionary.

RETURNS DESCRIPTION
Dict[str, Any]

Dict[str, Any]: Dictionary of all the attributes that make up this configuration instance.

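Unlike `to_diff_dict` below, `to_dict` returns every attribute. A quick sketch:

```python
from mindnlp.transformers import BertConfig

config = BertConfig()
d = config.to_dict()
print(d["model_type"])         # 'bert'
print(d["output_attentions"])  # defaults are included too, e.g. False
```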

mindnlp.transformers.configuration_utils.PretrainedConfig.to_diff_dict()

Removes all attributes from the config that correspond to the default config attributes, for better readability, and serializes it to a Python dictionary.

RETURNS DESCRIPTION
Dict[str, Any]

Dict[str, Any]: Dictionary of all the attributes that make up this configuration instance.

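A sketch contrasting with `to_dict`: values that match the base-class defaults (like `output_attentions`) drop out of the diff, while explicit non-defaults stay:

```python
from mindnlp.transformers import BertConfig

config = BertConfig(num_hidden_layers=6)
diff = config.to_diff_dict()
print(diff["num_hidden_layers"])    # 6, differs from the class default of 12
print("output_attentions" in diff)  # False: matches the default, so it is omitted
```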

mindnlp.transformers.configuration_utils.PretrainedConfig.to_json_file(json_file_path, use_diff=True)

Save this instance to a JSON file.

PARAMETER DESCRIPTION
json_file_path

Path to the JSON file in which this configuration instance's parameters will be saved.

TYPE: `str` or `os.PathLike`

use_diff

If set to True, only the difference between the config instance and the default PretrainedConfig() is serialized to JSON file.

TYPE: `bool`, *optional*, defaults to `True` DEFAULT: True


mindnlp.transformers.configuration_utils.PretrainedConfig.to_json_string(use_diff=True)

Serializes this instance to a JSON string.

PARAMETER DESCRIPTION
use_diff

If set to True, only the difference between the config instance and the default PretrainedConfig() is serialized to JSON string.

TYPE: `bool`, *optional*, defaults to `True` DEFAULT: True

RETURNS DESCRIPTION
str

str: String containing all the attributes that make up this configuration instance in JSON format.

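A sketch of the `use_diff` switch:

```python
from mindnlp.transformers import BertConfig

config = BertConfig()
print(config.to_json_string())                # diff against the defaults (use_diff=True)
print(config.to_json_string(use_diff=False))  # the complete, sorted attribute set
```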

mindnlp.transformers.configuration_utils.PretrainedConfig.update(config_dict)

Updates attributes of this class with attributes from config_dict.

PARAMETER DESCRIPTION
config_dict

Dictionary of attributes that should be updated for this class.

TYPE: `Dict[str, Any]`

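A sketch; unlike `update_from_string` below, `update` takes a dict, performs no type coercion, and will also set keys that don't exist yet:

```python
from mindnlp.transformers import BertConfig

config = BertConfig()
config.update({"output_attentions": True, "num_hidden_layers": 6})
assert config.output_attentions is True and config.num_hidden_layers == 6
```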

mindnlp.transformers.configuration_utils.PretrainedConfig.update_from_string(update_str)

Updates attributes of this class with attributes from update_str.

The expected format is ints, floats and strings as is, and for booleans use true or false. For example: "n_embd=10,resid_pdrop=0.2,scale_attn_weights=false,summary_type=cls_index"

The keys to change have to already exist in the config object.

PARAMETER DESCRIPTION
update_str

String with attributes that should be updated for this class.

TYPE: `str`

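A sketch of the comma-separated format; per the source, booleans accept `true`/`false`/`1`/`0`/`y`/`n`, and each key must already exist on the config:

```python
from mindnlp.transformers import BertConfig

config = BertConfig()
config.update_from_string("num_hidden_layers=6,output_attentions=true")
assert config.num_hidden_layers == 6 and config.output_attentions is True
```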