[Bug fixes] update attribute map handler #4421

Merged 4 commits on Jan 11, 2023
28 changes: 14 additions & 14 deletions paddlenlp/transformers/configuration_utils.py
@@ -195,8 +195,12 @@ def convert_to_legacy_config(attribute_map: Dict[str, str], config: Dict[str, An
args.append(init_arg)
config["init_args"] = args

# TODO(wj-Mcat): to improve compatibility for: old local config and new PretrainedConfig, eg:
# { "init_args": [], "init_class": "", "num_classes": 12 }
    for standard_field, paddle_field in attribute_map.items():
-       config[paddle_field] = config.pop(standard_field, None) or config.pop(paddle_field, None)
+       value = config.pop(standard_field, None) or config.pop(paddle_field, None)
+       if value is not None:
+           config[paddle_field] = value
Comment on lines +201 to +203
Contributor Author

Problem

The issue was that every value in attribute_map got mapped onto the top level of the config as target_paddle_field: None. As a result, parameters nested under init_args, such as d_model, could no longer be mapped up correctly.

Why this change

  • Legacy model_config.json
{
    "init_args": [
        {
            "tie_word_embeddings": false,
            "pad_token_id": 0,
            "bos_token_id": 0,
            "eos_token_id": 1,
            "vocab_size": 32128,
            "d_model": 768,
            "d_kv": 64,
            "d_ff": 2048,
            "num_layers": 12,
            "num_decoder_layers": 12,
            "num_heads": 12,
            "relative_attention_num_buckets": 32,
            "dropout_rate": 0.1,
            "layer_norm_epsilon": 1e-06,
            "initializer_factor": 1.0,
            "feed_forward_proj": "gated-gelu",
            "init_class": "T5Model"
        }
    ],
    "init_class": "T5ForConditionalGeneration"
}

In the legacy config file the model parameters live inside init_args. This function first calls convert_to_legacy_config recursively (depth-first) to map those nested parameters, which may map e.g. hidden_size -> d_model.

However, once that finished, the old code still set d_model: None at the root of the config, so flatten_model_config could no longer map the correct d_model from init_args back up.

The new approach resolves this problem.

  • New-style config.json
{
  "architectures": [
    "T5ForConditionalGeneration"
  ],
  "bos_token_id": 0,
  "d_ff": 2048,
  "d_kv": 64,
  "d_model": 768,
  "dropout_rate": 0.1,
  "enable_recompute": false,
  "eos_token_id": 1,
  "feed_forward_proj": "gated-gelu",
  "initializer_factor": 1.0,
  "is_encoder_decoder": true,
  "layer_norm_epsilon": 1e-06,
  "model_type": "t5",
  "num_decoder_layers": 12,
  "num_heads": 12,
  "num_layers": 12,
  "pad_token_id": 0,
  "paddlenlp_version": null,
  "relative_attention_max_distance": 128,
  "relative_attention_num_buckets": 32,
  "tie_word_embeddings": false,
  "use_cache": true,
  "vocab_size": 32128
}

Since there is no init_args field here, the model parameters can still be mapped directly.
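
To make the failure mode concrete, here is a minimal sketch (not the actual PaddleNLP implementation; it omits the recursive handling of init_args and uses a tiny stand-in attribute_map) of why writing the mapped value only when it is non-None keeps the nested d_model recoverable:

from typing import Any, Dict

def convert_to_legacy_config_sketch(attribute_map: Dict[str, str], config: Dict[str, Any]) -> Dict[str, Any]:
    # Fixed behavior from the diff above: only write the paddle-style key
    # when a real value was found, so an absent standard field no longer
    # leaves a stray "paddle_field: None" at the top level of the config.
    for standard_field, paddle_field in attribute_map.items():
        value = config.pop(standard_field, None) or config.pop(paddle_field, None)
        if value is not None:
            config[paddle_field] = value
    return config

# Legacy-style config: the real d_model lives inside init_args.
legacy = {
    "init_args": [{"d_model": 768, "init_class": "T5Model"}],
    "init_class": "T5ForConditionalGeneration",
}
converted = convert_to_legacy_config_sketch({"hidden_size": "d_model"}, legacy)

# The old code wrote a top-level "d_model: None" here, which later shadowed the
# nested value; with the fix nothing is written, so flatten_model_config can
# still lift the correct d_model out of init_args.
assert "d_model" not in converted
assert converted["init_args"][0]["d_model"] == 768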

return config


@@ -729,16 +733,6 @@ def from_pretrained(

config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)

-        # do standard config map: there are some old-school pretrained-config not refactored.
-        config_dict = convert_to_legacy_config(cls.attribute_map, config_dict)
-
-        config_dict = flatten_model_config(config_dict)
-        if "model_type" in config_dict and hasattr(cls, "model_type") and config_dict["model_type"] != cls.model_type:
-            logger.warning(
-                f"You are using a model of type {config_dict['model_type']} to instantiate a model of type "
-                f"{cls.model_type}. This is not supported for all configurations of models and can yield errors."
-            )

Comment on lines -732 to -741
Contributor Author

Moved convert_to_legacy_config and flatten_model_config into from_dict, because from_dict is the function that from_pretrained calls.
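
A rough sketch of the resulting call structure (hypothetical stand-ins, not the real PretrainedConfig class): because from_pretrained delegates to from_dict, doing the normalization inside from_dict covers both entry points.

from typing import Any, Dict

def _normalize(attribute_map: Dict[str, str], config: Dict[str, Any]) -> Dict[str, Any]:
    # Stand-in for convert_to_legacy_config + flatten_model_config.
    for standard_field, paddle_field in attribute_map.items():
        value = config.pop(standard_field, None) or config.pop(paddle_field, None)
        if value is not None:
            config[paddle_field] = value
    return config

class ConfigSketch:
    attribute_map = {"hidden_size": "d_model"}

    def __init__(self, **kwargs: Any) -> None:
        self.__dict__.update(kwargs)

    @classmethod
    def from_dict(cls, config_dict: Dict[str, Any], **kwargs: Any) -> "ConfigSketch":
        # The normalization now lives here, so every caller picks it up.
        config_dict = _normalize(cls.attribute_map, dict(config_dict))
        return cls(**config_dict)

    @classmethod
    def from_pretrained(cls, config_dict: Dict[str, Any], **kwargs: Any) -> "ConfigSketch":
        # from_pretrained only obtains the raw dict and then delegates to from_dict.
        return cls.from_dict(config_dict, **kwargs)

config = ConfigSketch.from_pretrained({"hidden_size": 1024})
assert config.d_model == 1024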

return cls.from_dict(config_dict, **kwargs)

@classmethod
@@ -859,12 +853,18 @@ def from_dict(cls, config_dict: Dict[str, Any], **kwargs) -> "PretrainedConfig":
[`PretrainedConfig`]: The configuration object instantiated from those parameters.
"""
return_unused_kwargs = kwargs.pop("return_unused_kwargs", False)
# Those arguments may be passed along for our internal telemetry.
# We remove them so they don't appear in `return_unused_kwargs`.

+        # convert local config to legacy config
+        # do standard config map: there are some old-school pretrained-config not refactored.
+        config_dict = convert_to_legacy_config(cls.attribute_map, config_dict)
+
+        config_dict = flatten_model_config(config_dict)
+
+        if "model_type" in config_dict and hasattr(cls, "model_type") and config_dict["model_type"] != cls.model_type:
+            logger.warning(
+                f"You are using a model of type {config_dict['model_type']} to instantiate a model of type "
+                f"{cls.model_type}. This is not supported for all configurations of models and can yield errors."
+            )

config = cls(**config_dict)

if hasattr(config, "pruned_heads"):