
bert-for-tf2 Source Code Reading (10): Structure Diagram of the Weight Parameters

2021-03-19 13:33:59  Views: 185  Source: Internet

Tags: bert, 768, 10, attention, layer, encoder, source code, output


The weight matrices read out of the BERT checkpoint are as follows:

{
'cls/seq_relationship/output_bias': [2],                      Unused weights                                                                    
'cls/predictions/transform/dense/kernel': [768, 768],         Unused weights
'cls/predictions/transform/dense/bias': [768],                Unused weights
'cls/predictions/transform/LayerNorm/gamma': [768],           Unused weights 
'cls/predictions/transform/LayerNorm/beta': [768],            Unused weights
'cls/predictions/output_bias': [30522],                       Unused weights


'cls/seq_relationship/output_weights': [2, 768],              Unused weights     
'bert/embeddings/token_type_embeddings': [2, 768],            Unused weights
'bert/embeddings/word_embeddings': [30522, 768],              1
'bert/embeddings/LayerNorm/beta': [768],                      4
'bert/pooler/dense/bias': [768],                              Unused weights
'bert/embeddings/position_embeddings': [512, 768],            2
'bert/embeddings/LayerNorm/gamma': [768],                     3
'bert/pooler/dense/kernel': [768, 768],                       Unused weights

(1.bert/embeddings/word_embeddings:[30522,768]
2.bert/embeddings/position_embeddings:[512,768]
3.bert/embeddings/LayerNorm/gamma:[768]
4.bert/embeddings/LayerNorm/beta: [768])


'bert/encoder/layer_0/attention/self/query/kernel': [768, 768],
'bert/encoder/layer_0/attention/self/query/bias': [768],
'bert/encoder/layer_0/attention/self/key/kernel': [768, 768],
'bert/encoder/layer_0/attention/self/key/bias': [768],
'bert/encoder/layer_0/attention/self/value/kernel': [768, 768],
'bert/encoder/layer_0/attention/self/value/bias': [768],
'bert/encoder/layer_0/attention/output/dense/kernel': [768, 768],
'bert/encoder/layer_0/attention/output/dense/bias': [768],
'bert/encoder/layer_0/intermediate/dense/kernel': [768, 3072],
'bert/encoder/layer_0/intermediate/dense/bias': [3072],
'bert/encoder/layer_0/output/dense/kernel': [3072, 768],
'bert/encoder/layer_0/output/dense/bias': [768],
'bert/encoder/layer_0/output/LayerNorm/gamma': [768],
'bert/encoder/layer_0/output/LayerNorm/beta': [768],
'bert/encoder/layer_0/attention/output/LayerNorm/gamma': [768],
'bert/encoder/layer_0/attention/output/LayerNorm/beta': [768],




'bert/encoder/layer_1/attention/self/value/bias': [768],
'bert/encoder/layer_1/output/dense/kernel': [3072, 768],
'bert/encoder/layer_1/attention/self/value/kernel': [768, 768],
'bert/encoder/layer_1/output/LayerNorm/beta': [768],
'bert/encoder/layer_1/intermediate/dense/bias': [3072],
'bert/encoder/layer_1/attention/output/dense/bias': [768],
'bert/encoder/layer_1/attention/output/LayerNorm/beta': [768],
'bert/encoder/layer_1/attention/self/query/kernel': [768, 768],
'bert/encoder/layer_1/attention/output/dense/kernel': [768, 768],
'bert/encoder/layer_1/attention/self/key/bias': [768],
'bert/encoder/layer_1/attention/output/LayerNorm/gamma': [768],
'bert/encoder/layer_1/attention/self/key/kernel': [768, 768],
'bert/encoder/layer_1/intermediate/dense/kernel': [768, 3072],
'bert/encoder/layer_1/output/dense/bias': [768],
'bert/encoder/layer_1/attention/self/query/bias': [768],
'bert/encoder/layer_1/output/LayerNorm/gamma': [768],





'bert/encoder/layer_2/attention/self/value/bias': [768],
'bert/encoder/layer_2/attention/self/query/bias': [768],   
'bert/encoder/layer_2/attention/output/LayerNorm/gamma': [768],
'bert/encoder/layer_2/attention/self/key/bias': [768],
'bert/encoder/layer_2/attention/output/dense/kernel': [768, 768],
'bert/encoder/layer_2/attention/self/key/kernel': [768, 768],
'bert/encoder/layer_2/attention/output/dense/bias': [768],
'bert/encoder/layer_2/attention/self/query/kernel': [768, 768],
'bert/encoder/layer_2/attention/self/value/kernel': [768, 768],
'bert/encoder/layer_2/intermediate/dense/bias': [3072],
'bert/encoder/layer_2/intermediate/dense/kernel': [768, 3072],
'bert/encoder/layer_2/output/LayerNorm/beta': [768],
'bert/encoder/layer_2/output/LayerNorm/gamma': [768],
'bert/encoder/layer_2/output/dense/bias': [768],
'bert/encoder/layer_2/output/dense/kernel': [3072, 768],
'bert/encoder/layer_2/attention/output/LayerNorm/beta': [768],




'bert/encoder/layer_3/attention/self/key/bias': [768],
'bert/encoder/layer_3/output/LayerNorm/gamma': [768],
'bert/encoder/layer_3/output/LayerNorm/beta': [768],
'bert/encoder/layer_3/attention/self/key/kernel': [768, 768],
'bert/encoder/layer_3/attention/output/LayerNorm/gamma': [768],
'bert/encoder/layer_3/output/dense/kernel': [3072, 768],
'bert/encoder/layer_3/attention/output/LayerNorm/beta': [768],
'bert/encoder/layer_3/attention/output/dense/bias': [768],
'bert/encoder/layer_3/attention/self/query/bias': [768],
'bert/encoder/layer_3/attention/self/query/kernel': [768, 768],
'bert/encoder/layer_3/attention/self/value/kernel': [768, 768],
'bert/encoder/layer_3/intermediate/dense/kernel': [768, 3072],
'bert/encoder/layer_3/output/dense/bias': [768],
'bert/encoder/layer_3/attention/self/value/bias': [768],
'bert/encoder/layer_3/attention/output/dense/kernel': [768, 768],
'bert/encoder/layer_3/intermediate/dense/bias': [3072],



'bert/encoder/layer_4/attention/output/LayerNorm/beta': [768],
'bert/encoder/layer_4/intermediate/dense/bias': [3072],
'bert/encoder/layer_4/attention/output/dense/kernel': [768, 768],
'bert/encoder/layer_4/output/dense/bias': [768],
'bert/encoder/layer_4/attention/output/LayerNorm/gamma': [768],
'bert/encoder/layer_4/attention/output/dense/bias': [768],
'bert/encoder/layer_4/attention/self/key/bias': [768],
'bert/encoder/layer_4/attention/self/key/kernel': [768, 768],
'bert/encoder/layer_4/attention/self/query/bias': [768],
'bert/encoder/layer_4/attention/self/query/kernel': [768, 768],
'bert/encoder/layer_4/attention/self/value/bias': [768],
'bert/encoder/layer_4/attention/self/value/kernel': [768, 768],
'bert/encoder/layer_4/output/LayerNorm/beta': [768],
'bert/encoder/layer_4/output/LayerNorm/gamma': [768],
'bert/encoder/layer_4/output/dense/kernel': [3072, 768],
'bert/encoder/layer_4/intermediate/dense/kernel': [768, 3072],






'bert/encoder/layer_5/output/dense/bias': [768], 
'bert/encoder/layer_5/attention/output/dense/kernel': [768, 768],
'bert/encoder/layer_5/attention/output/dense/bias': [768],
'bert/encoder/layer_5/attention/output/LayerNorm/gamma': [768],
'bert/encoder/layer_5/attention/self/value/bias': [768],
'bert/encoder/layer_5/attention/self/query/bias': [768],
'bert/encoder/layer_5/output/dense/kernel': [3072, 768],
'bert/encoder/layer_5/intermediate/dense/bias': [3072],
'bert/encoder/layer_5/attention/output/LayerNorm/beta': [768],
'bert/encoder/layer_5/attention/self/key/bias': [768],
'bert/encoder/layer_5/output/LayerNorm/beta': [768],
'bert/encoder/layer_5/attention/self/key/kernel': [768, 768],
'bert/encoder/layer_5/attention/self/query/kernel': [768, 768],
'bert/encoder/layer_5/attention/self/value/kernel': [768, 768],
'bert/encoder/layer_5/intermediate/dense/kernel': [768, 3072],
'bert/encoder/layer_5/output/LayerNorm/gamma': [768],





'bert/encoder/layer_6/attention/self/value/bias': [768],
'bert/encoder/layer_6/output/dense/kernel': [3072, 768],
'bert/encoder/layer_6/attention/output/dense/kernel': [768, 768],
'bert/encoder/layer_6/intermediate/dense/bias': [3072],
'bert/encoder/layer_6/attention/output/dense/bias': [768],
'bert/encoder/layer_6/output/dense/bias': [768],
'bert/encoder/layer_6/attention/output/LayerNorm/beta': [768],
'bert/encoder/layer_6/attention/output/LayerNorm/gamma': [768],
'bert/encoder/layer_6/attention/self/key/bias': [768],
'bert/encoder/layer_6/attention/self/key/kernel': [768, 768],
'bert/encoder/layer_6/attention/self/query/bias': [768],
'bert/encoder/layer_6/attention/self/query/kernel': [768, 768],
'bert/encoder/layer_6/attention/self/value/kernel': [768, 768],
'bert/encoder/layer_6/intermediate/dense/kernel': [768, 3072],
'bert/encoder/layer_6/output/LayerNorm/beta': [768],
'bert/encoder/layer_6/output/LayerNorm/gamma': [768],






'bert/encoder/layer_7/output/LayerNorm/beta': [768],
'bert/encoder/layer_7/output/dense/bias': [768],
'bert/encoder/layer_7/output/dense/kernel': [3072, 768],
'bert/encoder/layer_7/attention/self/query/kernel': [768, 768],
'bert/encoder/layer_7/attention/output/LayerNorm/beta': [768],
'bert/encoder/layer_7/output/LayerNorm/gamma': [768],
'bert/encoder/layer_7/attention/self/query/bias': [768],
'bert/encoder/layer_7/attention/output/dense/bias': [768],
'bert/encoder/layer_7/attention/self/value/kernel': [768, 768],
'bert/encoder/layer_7/intermediate/dense/bias': [3072],
'bert/encoder/layer_7/attention/output/LayerNorm/gamma': [768],
'bert/encoder/layer_7/attention/output/dense/kernel': [768, 768],
'bert/encoder/layer_7/attention/self/key/bias': [768],
'bert/encoder/layer_7/attention/self/key/kernel': [768, 768],
'bert/encoder/layer_7/attention/self/value/bias': [768],
'bert/encoder/layer_7/intermediate/dense/kernel': [768, 3072],




'bert/encoder/layer_8/output/LayerNorm/beta': [768], 
'bert/encoder/layer_8/intermediate/dense/kernel': [768, 3072], 
'bert/encoder/layer_8/intermediate/dense/bias': [3072], 
'bert/encoder/layer_8/attention/self/value/kernel': [768, 768], 
'bert/encoder/layer_8/attention/self/value/bias': [768], 
'bert/encoder/layer_8/attention/self/query/bias': [768],
'bert/encoder/layer_8/attention/self/query/kernel': [768, 768],
'bert/encoder/layer_8/attention/self/key/kernel': [768, 768],
'bert/encoder/layer_8/output/dense/kernel': [3072, 768],
'bert/encoder/layer_8/attention/output/LayerNorm/gamma': [768],
'bert/encoder/layer_8/attention/self/key/bias': [768],
'bert/encoder/layer_8/output/LayerNorm/gamma': [768],
'bert/encoder/layer_8/attention/output/dense/bias': [768],
'bert/encoder/layer_8/attention/output/LayerNorm/beta': [768],
'bert/encoder/layer_8/attention/output/dense/kernel': [768, 768],
'bert/encoder/layer_8/output/dense/bias': [768],




'bert/encoder/layer_9/output/dense/kernel': [3072, 768], 
'bert/encoder/layer_9/output/dense/bias': [768], 
'bert/encoder/layer_9/output/LayerNorm/gamma': [768], 
'bert/encoder/layer_9/intermediate/dense/kernel': [768, 3072], 
'bert/encoder/layer_9/intermediate/dense/bias': [3072], 
'bert/encoder/layer_9/attention/self/query/bias': [768], 
'bert/encoder/layer_9/attention/self/key/bias': [768], 
'bert/encoder/layer_9/attention/output/LayerNorm/beta': [768], 
'bert/encoder/layer_9/attention/self/value/bias': [768], 
'bert/encoder/layer_9/attention/self/query/kernel': [768, 768],
'bert/encoder/layer_9/output/LayerNorm/beta': [768],
'bert/encoder/layer_9/attention/output/dense/bias': [768],
'bert/encoder/layer_9/attention/output/LayerNorm/gamma': [768],
'bert/encoder/layer_9/attention/output/dense/kernel': [768, 768],
'bert/encoder/layer_9/attention/self/key/kernel': [768, 768],
'bert/encoder/layer_9/attention/self/value/kernel': [768, 768],




'bert/encoder/layer_10/output/dense/bias': [768],
'bert/encoder/layer_10/intermediate/dense/kernel': [768, 3072],
'bert/encoder/layer_10/attention/self/query/bias': [768],
'bert/encoder/layer_10/attention/self/key/bias': [768],
'bert/encoder/layer_10/attention/output/dense/bias': [768],
'bert/encoder/layer_10/output/LayerNorm/beta': [768],
'bert/encoder/layer_10/output/LayerNorm/gamma': [768],
'bert/encoder/layer_10/attention/output/LayerNorm/gamma': [768],
'bert/encoder/layer_10/output/dense/kernel': [3072, 768],
'bert/encoder/layer_10/attention/self/query/kernel': [768, 768],
'bert/encoder/layer_10/attention/self/key/kernel': [768, 768],
'bert/encoder/layer_10/intermediate/dense/bias': [3072],
'bert/encoder/layer_10/attention/self/value/bias': [768],
'bert/encoder/layer_10/attention/output/LayerNorm/beta': [768],
'bert/encoder/layer_10/attention/self/value/kernel': [768, 768],
'bert/encoder/layer_10/attention/output/dense/kernel': [768, 768], 




'bert/encoder/layer_11/attention/output/dense/kernel': [768, 768],    1
'bert/encoder/layer_11/attention/self/value/kernel': [768, 768],      2
'bert/encoder/layer_11/attention/self/query/bias': [768],             3
'bert/encoder/layer_11/attention/self/value/bias': [768],             4
'bert/encoder/layer_11/attention/self/key/kernel': [768, 768],        5
'bert/encoder/layer_11/output/LayerNorm/gamma': [768],                6
'bert/encoder/layer_11/attention/self/query/kernel': [768, 768],      7
'bert/encoder/layer_11/intermediate/dense/kernel': [768, 3072],       8
'bert/encoder/layer_11/attention/output/dense/bias': [768],           9
'bert/encoder/layer_11/output/dense/kernel': [3072, 768],             10
'bert/encoder/layer_11/output/LayerNorm/beta': [768],                 11
'bert/encoder/layer_11/attention/output/LayerNorm/gamma': [768],      12
'bert/encoder/layer_11/attention/output/LayerNorm/beta': [768],       13
'bert/encoder/layer_11/attention/self/key/bias': [768],               14
'bert/encoder/layer_11/output/dense/bias': [768],                     15
'bert/encoder/layer_11/intermediate/dense/bias': [3072],              16
}
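The listing above can be reproduced by enumerating the variables of the checkpoint and bucketing them by name. A minimal, stdlib-only sketch is below; in practice the names would come from TensorFlow, e.g. `tf.train.list_variables("bert_model.ckpt")`, which returns `(name, shape)` pairs, but here a few names are hard-coded so the grouping logic stands on its own.

```python
import re

# In practice, obtain the names from the checkpoint (assuming TensorFlow is
# installed): names = [n for n, _ in tf.train.list_variables("bert_model.ckpt")]
names = [
    "bert/embeddings/word_embeddings",
    "bert/encoder/layer_0/attention/self/query/kernel",
    "bert/encoder/layer_0/output/LayerNorm/gamma",
    "bert/encoder/layer_11/intermediate/dense/bias",
    "cls/predictions/output_bias",
    "bert/pooler/dense/kernel",
]

LAYER_RE = re.compile(r"^bert/encoder/layer_(\d+)/")

def group_by_layer(names):
    """Bucket weight names into per-encoder-layer groups, embeddings, and
    everything else (cls/* and pooler, i.e. the 'unused' weights above)."""
    groups = {}
    for name in names:
        m = LAYER_RE.match(name)
        if m:
            key = f"layer_{int(m.group(1))}"
        elif name.startswith("bert/embeddings/"):
            key = "embeddings"
        else:
            key = "other"
        groups.setdefault(key, []).append(name)
    return groups

groups = group_by_layer(names)
print(sorted(groups))  # ['embeddings', 'layer_0', 'layer_11', 'other']
```

Grouping this way makes it easy to verify that every encoder layer carries exactly the same 16 tensors seen in the listing.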

All of the weights above are read from bert_model.ckpt.data-00000-of-00001, bert_model.ckpt.index, and bert_model.ckpt.meta. The entries marked "Unused weights" are parameters that are not used after fine-tuning; all remaining entries are the parameters actually used after fine-tuning. The overall parameter structure is summarized in the diagram below:

[Figure: structure diagram of the full set of BERT model parameters]
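As a sanity check on the shapes in the listing, we can total the used parameters directly from them. The helper below is illustrative (the function name `encoder_layer_params` is mine, not from bert-for-tf2); it sums one encoder layer's 16 tensors and then the used embedding tensors.

```python
H, I = 768, 3072  # hidden size and intermediate (feed-forward) size of BERT-base

def encoder_layer_params(h=H, i=I):
    """Parameter count of one encoder layer, tensor by tensor as listed above."""
    attn = 4 * (h * h + h)           # Q/K/V projections + attention output dense
    attn_ln = 2 * h                  # attention/output/LayerNorm gamma + beta
    ffn = (h * i + i) + (i * h + h)  # intermediate dense + output dense
    out_ln = 2 * h                   # output/LayerNorm gamma + beta
    return attn + attn_ln + ffn + out_ln

per_layer = encoder_layer_params()
print(per_layer)  # 7087872 parameters in each of the 12 layers

# Used embedding weights: word + position embeddings + embedding LayerNorm
embeddings = 30522 * H + 512 * H + 2 * H
print(12 * per_layer + embeddings)  # 108890112, i.e. ~108.9M used parameters
```

Adding back the "unused" pooler and token-type tensors recovers the familiar ~110M total of BERT-base.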

Source: https://blog.csdn.net/znevegiveup1/article/details/115003642
