Generating High-Quality Tang Poetry with a Transformer

0. Overview

Tang poetry generation is a widely studied task in Chinese NLP, and generation quality has improved steadily from the traditional RNN and LSTM to attention-based models. Since the Transformer was introduced, many NLP deep-learning models have been rebuilt around it. So how does the Transformer perform on Tang poetry generation? This article walks through the basic structure of the Transformer and the steps of poetry generation with a concrete example. The framework used is TensorFlow 2.2.

1. Environment setup

import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import pandas as pd
import sklearn
import os
import sys
import time
import tensorflow as tf
from tensorflow import keras

import re
import jieba
import opencc
import io

1-1. Enabling on-demand GPU memory growth

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        # Currently, memory growth needs to be the same across GPUs
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
            logical_gpus = tf.config.experimental.list_logical_devices('GPU')
            print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        # Memory growth must be set before GPUs have been initialized
        print(e)

2. Loading the dataset

Convert traditional Chinese characters to simplified characters:

cc = opencc.OpenCC('t2s')
def preprocess_sentence_cn(w):
    # convert traditional characters to simplified characters
    w = cc.convert(w)
    # split into individual characters separated by single spaces
    w = ' '.join(list(w))
    w = re.sub(r'[" "]+', " ", w)
    # strip() already removes trailing whitespace, so a separate rstrip() is unnecessary
    w = w.strip()
    w = '<start> ' + w + ' <end>'
    return w

Build the dataset:

def create_dataset(path):
    lines = io.open(path,encoding='utf8').read().strip().split('\n')
   
    sentence_pairs = [[preprocess_sentence_cn(w) for w in line.split(' ')] for line in lines]
    return zip(*sentence_pairs)
train,targ=create_dataset('../data/poem5.txt')

Preview the data:

print(len(train),len(targ))
print(train[0],targ[0])

Output:

570159 570159
<start> 秦 川 雄 帝 宅 <end> <start> 函 谷 壮 皇 居 <end>

Build the bidirectional mapping between token ids and characters:

def tokenize(lang):
    tokenizer=keras.preprocessing.text.Tokenizer(filters='')
    tokenizer.fit_on_texts(lang)
    tensor=tokenizer.texts_to_sequences(lang)
    tensor=keras.preprocessing.sequence.pad_sequences(tensor,padding='post')
    return tensor,tokenizer
def load_dataset(inp_lang,targ_lang):
    input_tensor, inp_lang_tokenizer = tokenize(inp_lang)
    target_tensor, targ_lang_tokenizer = tokenize(targ_lang)
    return input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer

Test code:

input_tensor, target_tensor, inp_lang, targ_lang = load_dataset(train,targ)
for item in input_tensor[0]:
    print('%d ---> %s'%(item,inp_lang.index_word[item]))
print("===================================================")
for item in target_tensor[0]:
    print('%d ---> %s'%(item,targ_lang.index_word[item]))

Output:

1 ---> <start>
540 ---> 秦
411 ---> 川
853 ---> 雄
443 ---> 帝
755 ---> 宅
2 ---> <end>
===================================================
1 ---> <start>
2168 ---> 函
471 ---> 谷
813 ---> 壮
660 ---> 皇
205 ---> 居
2 ---> <end>

Compute the maximum sentence length:

def max_length(tensor):
    return max(len(t) for t in tensor)
max_length_targ, max_length_inp = max_length(target_tensor), max_length(input_tensor)

2-1 Splitting the dataset

from sklearn.model_selection import train_test_split
input_tensor_train, input_tensor_val, target_tensor_train, target_tensor_val = train_test_split(
    input_tensor, target_tensor, test_size=0.2)

2-2 Building the tf.data pipeline

# shuffle buffer size (the whole training set)
BUFFER_SIZE = len(input_tensor_train)
# number of examples per batch
BATCH_SIZE = 512
# number of batches per epoch
steps_per_epoch = len(input_tensor_train)//BATCH_SIZE
# vocabulary sizes; the tokenizer indexes from 1 and 0 is reserved for padding, so add 1
vocab_inp_size = len(inp_lang.word_index)+1
vocab_tar_size = len(targ_lang.word_index)+1
dataset = tf.data.Dataset.from_tensor_slices((input_tensor_train, target_tensor_train)).shuffle(BUFFER_SIZE)
dataset = dataset.cache()
dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)

2-3 Checking dataset shapes

example_input_batch, example_target_batch = next(iter(dataset))
example_input_batch.shape, example_target_batch.shape

Output:

(TensorShape([2048, 7]), TensorShape([2048, 7]))

3 Building the Transformer model

3-1 Positional encoding in detail

3-1-1 Design goals of positional encoding

  • It should output a unique encoding for each time step;
  • The distance between any two time steps should be the same across sentences of different lengths;
  • It should generalize effortlessly to longer sentences, and its values should be bounded;
  • It must be deterministic.

3-1-2 The positional encoding formula

PE_{(pos,2i)}=\sin\left(pos/10000^{2i/d_{model}}\right)

PE_{(pos,2i+1)}=\cos\left(pos/10000^{2i/d_{model}}\right)

  • i indexes the sine/cosine pairs and ranges over 0, 1, …, d_{model}/2 − 1
  • pos is the position of the token in the sequence, ranging from 0 to seq_len − 1
  • d_{model} is the dimensionality of the encoding; since it is later added to the token embedding in self-attention, it must equal embedding_dim = 256

3-1-3 Implementing positional encoding

def get_angles(pos, i, d_model):
    angle_rates = 1 / np.power(10000, (2 * (i//2)) / np.float32(d_model))
    return pos * angle_rates
def positional_encoding(position, d_model):
    angle_rads = get_angles(np.arange(position)[:, np.newaxis],
                          np.arange(d_model)[np.newaxis, :],
                          d_model)

    # apply sin to the even indices (2i) of the angle array
    angle_rads[:, 0::2] = np.sin(angle_rads[:, 0::2])

    # apply cos to the odd indices (2i+1)
    angle_rads[:, 1::2] = np.cos(angle_rads[:, 1::2])

    pos_encoding = angle_rads[np.newaxis, ...]

    return tf.cast(pos_encoding, dtype=tf.float32)
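
A quick shape check (purely illustrative), using d_model = 256 to match the embedding_dim stated above and the 7-token sequence length of this dataset:

pe = positional_encoding(7, 256)
print(pe.shape)  # (1, 7, 256)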

3-1-4 Why this particular encoding?

1 The encoding can express linear relationships between positions.
By the trigonometric angle-sum identities:
\begin{cases} \sin(\alpha+\beta)=\sin(\alpha)\cos(\beta)+\cos(\alpha)\sin(\beta) \\ \cos(\alpha+\beta)=\cos(\alpha)\cos(\beta)-\sin(\alpha)\sin(\beta) \end{cases}
Suppose the current position is pos with encoding PE_{pos}; the encoding at position pos + k can then be written as follows.
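
Written out (the original article shows this step only as an image; this is the standard derivation, with \omega_i = 1/10000^{2i/d_{model}} and the identities above):

PE_{(pos+k,2i)}=\sin((pos+k)\omega_i)=PE_{(pos,2i)}\cos(k\omega_i)+PE_{(pos,2i+1)}\sin(k\omega_i)

PE_{(pos+k,2i+1)}=\cos((pos+k)\omega_i)=PE_{(pos,2i+1)}\cos(k\omega_i)-PE_{(pos,2i)}\sin(k\omega_i)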

This shows that the encoding of position pos + k is a linear function of the encoding at position pos, with coefficients that depend only on the relative offset k.

2 The positional encoding is built from sine and cosine waves; given that these trigonometric functions are periodic, how can it represent positions unambiguously without collisions? The plots below look at individual channels.

poscode=positional_encoding(100,512).numpy()
poscode=poscode[0]
x=np.arange(100)
plt.figure(figsize=(20,100))
j=1
for i in np.arange(0,500,50):
    plt.subplot(100,1,j)
    j+=1
    plt.plot(x,poscode[:,i])
plt.show()

The resulting waveforms for the even indices (sine channels; figure omitted):

plt.figure(figsize=(20,100))
j=1
for i in np.arange(0,500,50):
    plt.subplot(100,1,j)
    j+=1
    plt.plot(x,poscode[:,i+1])
plt.show()

The resulting waveforms for the odd indices (cosine channels; figure omitted):

3 Observations from the plots above:

  • The encoding combines sine and cosine waves; as i increases, the frequency drops and each curve approaches a nearly linear ramp
  • Sine and cosine are complementary: where a sine channel barely changes, the corresponding cosine channel still varies and fills in the missing information
  • Although sine and cosine are individually periodic, the combination of many frequencies produces a unique encoding per position (this is the key point; a quick numerical check follows)
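
The uniqueness claim is easy to spot-check numerically; a minimal sketch using the poscode array computed above (the rounding is only there to avoid spurious floating-point differences):

# no two of the first 100 position vectors coincide, even though each channel is periodic
print(np.unique(np.round(poscode, 6), axis=0).shape)  # (100, 512): all rows are distinct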

3-2 Building the masks

3-2-1 Padding mask

Mask all the pad tokens in a batch of sequences, to ensure the model does not treat padding as input. The mask marks where the padding value 0 appears: it outputs 1 at those positions and 0 elsewhere.

def create_padding_mask(seq):
    seq = tf.cast(tf.math.equal(seq, 0), tf.float32)

    # add extra dimensions so the padding mask can be
    # broadcast onto the attention logits
    return seq[:, tf.newaxis, tf.newaxis, :]  # (batch_size, 1, 1, seq_len)
x = tf.constant([[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]])
create_padding_mask(x)

Output:

<tf.Tensor: shape=(3, 1, 1, 5), dtype=float32, numpy=
array([[[[0., 0., 1., 1., 0.]]],
       [[[0., 0., 0., 1., 1.]]],
       [[[1., 1., 1., 0., 0.]]]], dtype=float32)>

3-2-2 Look-ahead mask

The look-ahead mask hides the future tokens of a sequence. In other words, it marks the entries that must not be used, guaranteeing that each position can only see tokens that have already appeared.

def create_look_ahead_mask(size):
    mask = 1 - tf.linalg.band_part(tf.ones((size, size)), -1, 0)
    return mask  # (seq_len, seq_len)
x = tf.random.uniform((1, 3))
temp = create_look_ahead_mask(x.shape[1])
temp

Output:

<tf.Tensor: shape=(3, 3), dtype=float32, numpy=
array([[0., 1., 1.],
       [0., 0., 1.],
       [0., 0., 0.]], dtype=float32)>

3-3 Self-attention

3-3-1 Scaled dot-product attention

def scaled_dot_product_attention(q, k, v, mask):
    """Compute the attention weights.
    q, k, v must have matching leading dimensions.
    k, v must have a matching second-to-last dimension, i.e. seq_len_k = seq_len_v.
    The mask has different shapes depending on its type (padding or look-ahead),
    but it must be broadcastable for the addition below.

    Args:
    q: query, shape == (..., seq_len_q, depth)
    k: key, shape == (..., seq_len_k, depth)
    v: value, shape == (..., seq_len_v, depth_v)
    mask: float tensor broadcastable to
          (..., seq_len_q, seq_len_k). Defaults to None.

    Returns:
    output, attention_weights
    """
    matmul_qk = tf.matmul(q, k, transpose_b=True)  # (..., seq_len_q, seq_len_k)

    # scale matmul_qk
    dk = tf.cast(tf.shape(k)[-1], tf.float32)
    scaled_attention_logits = matmul_qk / tf.math.sqrt(dk)

    # add the mask to the scaled logits
    if mask is not None:
        scaled_attention_logits += (mask * -1e9)

    # softmax is normalized on the last axis (seq_len_k) so that the
    # scores sum to 1
    attention_weights = tf.nn.softmax(scaled_attention_logits, axis=-1)  # (..., seq_len_q, seq_len_k)

    output = tf.matmul(attention_weights, v)  # (..., seq_len_q, depth_v)

    return output, attention_weights

Test code for this function:

def print_out(q, k, v):
    temp_out, temp_attn = scaled_dot_product_attention(
      q, k, v, None)
    print ('Attention weights are:')
    print (temp_attn)
    print ('Output is:')
    print (temp_out)

This `query` matches the second `key`, so the second `value` is returned.

np.set_printoptions(suppress=True)

temp_k = tf.constant([[10,0,0],
                      [0,10,0],
                      [0,0,10],
                      [0,0,10]], dtype=tf.float32)  # (4, 3)

temp_v = tf.constant([[   1,0],
                      [  10,0],
                      [ 100,5],
                      [1000,6]], dtype=tf.float32)  # (4, 2)

# this query matches the second key,
# so the second value is returned
temp_q = tf.constant([[0, 10, 0]], dtype=tf.float32)  # (1, 3)
print_out(temp_q, temp_k, temp_v)

This query matches a repeated key (the third and the fourth), so the corresponding values are averaged.

temp_q = tf.constant([[0, 0, 10]], dtype=tf.float32)  # (1, 3)
print_out(temp_q, temp_k, temp_v)

3-3-2 Multi-head attention

Multi-head attention consists of four parts:

  • Linear layers, split into heads.
  • Scaled dot-product attention.
  • Concatenation of the heads.
  • A final linear layer.
class MultiHeadAttention(tf.keras.layers.Layer):
    def __init__(self, d_model, num_heads):
        super(MultiHeadAttention, self).__init__()
        self.num_heads = num_heads
        self.d_model = d_model

        assert d_model % self.num_heads == 0

        self.depth = d_model // self.num_heads

        self.wq = tf.keras.layers.Dense(d_model)
        self.wk = tf.keras.layers.Dense(d_model)
        self.wv = tf.keras.layers.Dense(d_model)

        self.dense = tf.keras.layers.Dense(d_model)
        
    def split_heads(self, x, batch_size):
        """分拆最后一个维度到 (num_heads, depth).
        转置结果使得形状为 (batch_size, num_heads, seq_len, depth)
        """
        x = tf.reshape(x, (batch_size, -1, self.num_heads, self.depth))
        return tf.transpose(x, perm=[0, 2, 1, 3])
    
    def call(self, v, k, q, mask):
        batch_size = tf.shape(q)[0]

        q = self.wq(q)  # (batch_size, seq_len, d_model)
        k = self.wk(k)  # (batch_size, seq_len, d_model)
        v = self.wv(v)  # (batch_size, seq_len, d_model)
        q = self.split_heads(q, batch_size)  # (batch_size, num_heads, seq_len_q, depth)
        k = self.split_heads(k, batch_size)  # (batch_size, num_heads, seq_len_k, depth)
        v = self.split_heads(v, batch_size)  # (batch_size, num_heads, seq_len_v, depth)
        # scaled_attention.shape == (batch_size, num_heads, seq_len_q, depth)
        # attention_weights.shape == (batch_size, num_heads, seq_len_q, seq_len_k)
        scaled_attention, attention_weights = scaled_dot_product_attention(
            q, k, v, mask)
        scaled_attention = tf.transpose(scaled_attention, perm=[0, 2, 1, 3])  # (batch_size, seq_len_q, num_heads, depth)

        concat_attention = tf.reshape(scaled_attention, 
                                      (batch_size, -1, self.d_model))  # (batch_size, seq_len_q, d_model)

        output = self.dense(concat_attention)  # (batch_size, seq_len_q, d_model)

        return output, attention_weights

Test code:

temp_mha = MultiHeadAttention(d_model=512, num_heads=8)
y = tf.random.uniform((1, 60, 512))  # (batch_size, encoder_sequence, d_model)
out, attn = temp_mha(y, k=y, q=y, mask=None)

3-4 Point-wise feed-forward network

def point_wise_feed_forward_network(d_model, dff):
    return tf.keras.Sequential([
      tf.keras.layers.Dense(dff, activation='relu'),  # (batch_size, seq_len, dff)
      tf.keras.layers.Dense(d_model)  # (batch_size, seq_len, d_model)
    ])

Test code:

sample_ffn = point_wise_feed_forward_network(512, 2048)
sample_ffn(tf.random.uniform((64, 50, 512))).shape

3-5 Encoder and decoder

  • The input sentence passes through N encoder layers, which generate an output for every token in the sequence.
  • The decoder attends to the encoder output as well as to its own input (self-attention) to predict the next token.

3-5-1 Encoder layer

Each encoder layer consists of the following sublayers:

  • Multi-head attention (with a padding mask)
  • A point-wise feed-forward network
class EncoderLayer(tf.keras.layers.Layer):
    def __init__(self,d_model,num_heads,dff,rate=0.1):
        super(EncoderLayer, self).__init__()
        self.mha = MultiHeadAttention(d_model,num_heads)
        self.ffn = point_wise_feed_forward_network(d_model,dff)
        self.layernorm1=tf.keras.layers.LayerNormalization(epsilon=1e-6)
        self.layernorm2=tf.keras.layers.LayerNormalization(epsilon=1e-6)
        self.dropout1=tf.keras.layers.Dropout(rate)
        self.dropout2=tf.keras.layers.Dropout(rate)
    def call(self, x, training, mask):
        attn_output, _ = self.mha(x, x, x, mask)  # (batch_size, input_seq_len, d_model)
        attn_output = self.dropout1(attn_output, training=training)
        out1 = self.layernorm1(x + attn_output)  # (batch_size, input_seq_len, d_model)
        ffn_output = self.ffn(out1)  # (batch_size, input_seq_len, d_model)
        ffn_output = self.dropout2(ffn_output, training=training)
        out2 = self.layernorm2(out1+ffn_output)
        return out2 # (batch_size, input_seq_len, d_model)
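
A quick shape test in the same style as the earlier checks (the sizes here are illustrative, not the ones used for training):

sample_encoder_layer = EncoderLayer(512, 8, 2048)
sample_encoder_layer_output = sample_encoder_layer(tf.random.uniform((64, 43, 512)), False, None)
print(sample_encoder_layer_output.shape)  # (64, 43, 512)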

3-5-2 Decoder layer

Each decoder layer consists of the following sublayers:

  • Masked multi-head attention (with a look-ahead mask and a padding mask)
  • Multi-head attention (with a padding mask): V (value) and K (key) receive the encoder output as input, while Q (query) receives the output of the masked multi-head attention sublayer
  • A point-wise feed-forward network
class DecoderLayer(tf.keras.layers.Layer):
    def __init__(self,d_model,num_heads,dff,rate=0.1):
        super(DecoderLayer, self).__init__()
        self.mha1=MultiHeadAttention(d_model,num_heads)
        self.mha2=MultiHeadAttention(d_model,num_heads)
        self.ffn = point_wise_feed_forward_network(d_model, dff)
        self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
        self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
        self.layernorm3 = tf.keras.layers.LayerNormalization(epsilon=1e-6)

        self.dropout1 = tf.keras.layers.Dropout(rate)
        self.dropout2 = tf.keras.layers.Dropout(rate)
        self.dropout3 = tf.keras.layers.Dropout(rate)
    def call(self, x, enc_output, training, look_ahead_mask, padding_mask):
        attn1, attn_weights_block1 = self.mha1(x, x, x, look_ahead_mask)  # (batch_size, target_seq_len, d_model)
        attn1 = self.dropout1(attn1, training=training)
        out1 = self.layernorm1(attn1 + x)

        attn2, attn_weights_block2 = self.mha2(enc_output, enc_output, out1, padding_mask)  # (batch_size, target_seq_len, d_model)
        attn2 = self.dropout2(attn2, training=training)
        out2 = self.layernorm2(attn2 + out1)  # (batch_size, target_seq_len, d_model)
        
        ffn_output=self.ffn(out2)
        ffn_output = self.dropout3(ffn_output, training=training)
        out3 = self.layernorm3(ffn_output + out2)
        return out3,attn_weights_block1,attn_weights_block2
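
And the corresponding shape test (illustrative sizes; sample_encoder_layer_output comes from the check above):

sample_decoder_layer = DecoderLayer(512, 8, 2048)
sample_decoder_layer_output, _, _ = sample_decoder_layer(
    tf.random.uniform((64, 50, 512)), sample_encoder_layer_output, False, None, None)
print(sample_decoder_layer_output.shape)  # (64, 50, 512)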

3-6 Encoder

The encoder consists of:

  • Input embedding
  • Positional encoding
  • N encoder layers
class Encoder(tf.keras.layers.Layer):
    def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size, maximum_position_encoding, rate=0.1):
        super(Encoder, self).__init__()
        self.d_model = d_model
        self.num_layers = num_layers
        self.embedding = tf.keras.layers.Embedding(input_vocab_size, d_model)
        self.pos_encoding = positional_encoding(maximum_position_encoding, 
                                            self.d_model)
        self.enc_layers = [EncoderLayer(d_model, num_heads, dff, rate) for _ in range(num_layers)]
        self.dropout = tf.keras.layers.Dropout(rate)
    def call(self, x, training, mask):
        seq_len = tf.shape(x)[1]
        x = self.embedding(x)
        # scale the embeddings by sqrt(d_model) so they are not swamped by the positional encoding
        x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
        x += self.pos_encoding[:, :seq_len, :]
        x = self.dropout(x, training=training)
    
        for i in range(self.num_layers):
            x = self.enc_layers[i](x, training, mask)

        return x  # (batch_size, input_seq_len, d_model)
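
Shape test for the full encoder (the vocabulary size and maximum position here are placeholders):

sample_encoder = Encoder(num_layers=2, d_model=512, num_heads=8, dff=2048,
                         input_vocab_size=8500, maximum_position_encoding=10000)
sample_encoder_output = sample_encoder(tf.random.uniform((64, 62), maxval=8500, dtype=tf.int64), False, None)
print(sample_encoder_output.shape)  # (64, 62, 512)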
        

3-7 Decoder

The decoder consists of:

  • Output embedding
  • Positional encoding
  • N decoder layers
class Decoder(tf.keras.layers.Layer):
    def __init__(self, num_layers, d_model, num_heads, dff, target_vocab_size,
               maximum_position_encoding, rate=0.1):
        super(Decoder, self).__init__()

        self.d_model = d_model
        self.num_layers = num_layers

        self.embedding = tf.keras.layers.Embedding(target_vocab_size, d_model)
        self.pos_encoding = positional_encoding(maximum_position_encoding, d_model)

        self.dec_layers = [DecoderLayer(d_model, num_heads, dff, rate) 
                           for _ in range(num_layers)]
        self.dropout = tf.keras.layers.Dropout(rate)
    
    def call(self, x, enc_output, training, look_ahead_mask, padding_mask):
        
        seq_len = tf.shape(x)[1]
        attention_weights = {}

        x = self.embedding(x)  # (batch_size, target_seq_len, d_model)
        x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
        x += self.pos_encoding[:, :seq_len, :]

        x = self.dropout(x, training=training)

        for i in range(self.num_layers):
            x, block1, block2 = self.dec_layers[i](x, enc_output, training,
                                                 look_ahead_mask, padding_mask)

            attention_weights['decoder_layer{}_block1'.format(i+1)] = block1
            attention_weights['decoder_layer{}_block2'.format(i+1)] = block2

        # x.shape == (batch_size, target_seq_len, d_model)
        return x, attention_weights
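
Shape test (placeholder sizes; sample_encoder_output comes from the encoder check above):

sample_decoder = Decoder(num_layers=2, d_model=512, num_heads=8, dff=2048,
                         target_vocab_size=8000, maximum_position_encoding=5000)
sample_decoder_output, attn = sample_decoder(tf.random.uniform((64, 26), maxval=8000, dtype=tf.int64),
                                             sample_encoder_output, False, None, None)
print(sample_decoder_output.shape, attn['decoder_layer2_block2'].shape)
# (64, 26, 512) (64, 8, 26, 62)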

3-8 Building the Transformer

class Transformer(tf.keras.Model):
    def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size, target_vocab_size, pe_input, pe_target, rate=0.1):
        super(Transformer, self).__init__()

        self.encoder = Encoder(num_layers, d_model, num_heads, dff, 
                               input_vocab_size, pe_input, rate)

        self.decoder = Decoder(num_layers, d_model, num_heads, dff, 
                               target_vocab_size, pe_target, rate)

        self.final_layer = tf.keras.layers.Dense(target_vocab_size)
    
    def call(self, inp, tar, training, enc_padding_mask, look_ahead_mask, dec_padding_mask):

        enc_output = self.encoder(inp, training, enc_padding_mask)  # (batch_size, inp_seq_len, d_model)

        # dec_output.shape == (batch_size, tar_seq_len, d_model)
        dec_output, attention_weights = self.decoder(
            tar, enc_output, training, look_ahead_mask, dec_padding_mask)

        final_output = self.final_layer(dec_output)  # (batch_size, tar_seq_len, target_vocab_size)

        return final_output, attention_weights
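
End-to-end shape test with placeholder vocabulary sizes:

sample_transformer = Transformer(num_layers=2, d_model=512, num_heads=8, dff=2048,
                                 input_vocab_size=8500, target_vocab_size=8000,
                                 pe_input=10000, pe_target=6000)
temp_input = tf.random.uniform((64, 38), maxval=8500, dtype=tf.int64)
temp_target = tf.random.uniform((64, 36), maxval=8000, dtype=tf.int64)
fn_out, _ = sample_transformer(temp_input, temp_target, False, None, None, None)
print(fn_out.shape)  # (64, 36, 8000)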

3-9 Optimizer

The learning-rate schedule is:

lrate = d_{model}^{-0.5} \cdot \min\left(step\_num^{-0.5},\ step\_num \cdot warmup\_steps^{-1.5}\right)

class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
    def __init__(self, d_model, warmup_steps=4000):
        super(CustomSchedule, self).__init__()

        self.d_model = d_model
        self.d_model = tf.cast(self.d_model, tf.float32)

        self.warmup_steps = warmup_steps
    
    def __call__(self, step):
        arg1 = tf.math.rsqrt(step)
        arg2 = step * (self.warmup_steps ** -1.5)

        return tf.math.rsqrt(self.d_model) * tf.math.minimum(arg1, arg2)
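
Before this schedule can be instantiated, the model hyperparameters are needed; the article never lists them, so the values below are assumptions (d_model follows the embedding_dim = 256 stated in section 3-1-2, the rest are typical small-Transformer settings), and the vocabulary sizes reuse the variables from section 2-2:

# assumed hyperparameters -- the original run may have used different values
num_layers = 4
d_model = 256
num_heads = 8
dff = 1024
dropout_rate = 0.1
input_vocab_size = vocab_inp_size      # from section 2-2
target_vocab_size = vocab_tar_size     # from section 2-2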

Testing the learning-rate schedule:

learning_rate = CustomSchedule(d_model)

optimizer = tf.keras.optimizers.Adam(learning_rate, beta_1=0.9, beta_2=0.98, 
                                     epsilon=1e-9)
temp_learning_rate_schedule = CustomSchedule(d_model)

plt.plot(temp_learning_rate_schedule(tf.range(40000, dtype=tf.float32)))
plt.ylabel("Learning Rate")
plt.xlabel("Train Step")

The learning-rate curve over the first 40,000 steps (figure omitted):

3-10 Loss and metrics

loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True, reduction='none')
def loss_function(real, pred):
    mask = tf.math.logical_not(tf.math.equal(real, 0))
    loss_ = loss_object(real, pred)

    mask = tf.cast(mask, dtype=loss_.dtype)
    loss_ *= mask

    return tf.reduce_mean(loss_)
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
    name='train_accuracy')

3-11 Training and checkpointing

transformer = Transformer(num_layers, d_model, num_heads, dff,
                          input_vocab_size, target_vocab_size, 
                          pe_input=input_vocab_size, 
                          pe_target=target_vocab_size,
                          rate=dropout_rate)
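
The section title mentions checkpointing, but the article itself contains no checkpoint code; a minimal sketch with tf.train.Checkpoint (the directory './checkpoints/poem' and max_to_keep=5 are assumptions):

ckpt = tf.train.Checkpoint(transformer=transformer, optimizer=optimizer)
ckpt_manager = tf.train.CheckpointManager(ckpt, './checkpoints/poem', max_to_keep=5)
if ckpt_manager.latest_checkpoint:
    ckpt.restore(ckpt_manager.latest_checkpoint)
    print('Latest checkpoint restored.')

Inside the training loop below, ckpt_manager.save() could then be called every few epochs to persist the weights.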

3-11-1 Creating the masks

def create_masks(inp, tar):
    # encoder padding mask
    enc_padding_mask = create_padding_mask(inp)

    # used in the second attention block of the decoder
    # to mask the encoder outputs
    dec_padding_mask = create_padding_mask(inp)

    # used in the first attention block of the decoder
    # to mask padding and future tokens in the decoder input
    look_ahead_mask = create_look_ahead_mask(tf.shape(tar)[1])
    dec_target_padding_mask = create_padding_mask(tar)
    combined_mask = tf.maximum(dec_target_padding_mask, look_ahead_mask)

    return enc_padding_mask, combined_mask, dec_padding_mask

Test code:

example_train,example_tar=next(iter(dataset))
enc_padding_mask, combined_mask, dec_padding_mask=create_masks(example_train,example_tar)
print(enc_padding_mask.shape,dec_padding_mask.shape,combined_mask.shape)

Output:

(2048, 1, 1, 7) (2048, 1, 1, 7) (2048, 1, 7, 7)

3-11-2 Training

The train_step function:

train_step_signature = [
    tf.TensorSpec(shape=(None, None), dtype=tf.int32),
    tf.TensorSpec(shape=(None, None), dtype=tf.int32),
]

@tf.function(input_signature=train_step_signature)
def train_step(inp,tar):
    tar_inp = tar[:, :-1]
    tar_real = tar[:, 1:]
    enc_padding_mask, combined_mask, dec_padding_mask = create_masks(inp, tar_inp)
    with tf.GradientTape() as tape:
        predictions, _ = transformer(inp, tar_inp, 
                                     True, 
                                     enc_padding_mask, 
                                     combined_mask, 
                                     dec_padding_mask)
        loss = loss_function(tar_real, predictions)
    gradients = tape.gradient(loss, transformer.trainable_variables)    
    optimizer.apply_gradients(zip(gradients, transformer.trainable_variables))

    train_loss(loss)
    train_accuracy(tar_real, predictions)

The training loop:

EPOCHS = 100
for epoch in range(EPOCHS):
    start = time.time()

    train_loss.reset_states()
    train_accuracy.reset_states()
  
  # inp -> the first line of a couplet, tar -> the second line
    for (batch, (inp, tar)) in enumerate(dataset):
        train_step(inp, tar)
    
        if batch % 50 == 0:
            print ('Epoch {} Batch {} Loss {:.4f} Accuracy {:.4f}'.format(epoch + 1, batch, train_loss.result(), train_accuracy.result()))
  
           
    print ('Epoch {} Loss {:.4f} Accuracy {:.4f}'.format(epoch + 1, 
                                        train_loss.result(), 
                                        train_accuracy.result()))

    print ('Time taken for 1 epoch: {} secs\n'.format(time.time() - start))

Typical training output:

Epoch 23 Batch 350 Loss 3.6551 Accuracy 0.3451
Epoch 23 Batch 400 Loss 3.6567 Accuracy 0.3450
Epoch 23 Batch 450 Loss 3.6564 Accuracy 0.3451
Epoch 23 Batch 500 Loss 3.6571 Accuracy 0.3450
Epoch 23 Batch 550 Loss 3.6562 Accuracy 0.3451
Epoch 23 Batch 600 Loss 3.6554 Accuracy 0.3453
Epoch 23 Batch 650 Loss 3.6552 Accuracy 0.3452
Epoch 23 Batch 700 Loss 3.6546 Accuracy 0.3453
Epoch 23 Batch 750 Loss 3.6547 Accuracy 0.3454
Epoch 23 Batch 800 Loss 3.6555 Accuracy 0.3453
Epoch 23 Batch 850 Loss 3.6553 Accuracy 0.3454
Epoch 23 Loss 3.6555 Accuracy 0.3455
Time taken for 1 epoch: 53.941662549972534 secs

Epoch 24 Batch 0 Loss 3.6395 Accuracy 0.3467

The evaluation (inference) function:

def evaluate(inp_sentence):
    
    encoder_input = tf.expand_dims(inp_sentence, 0)

    # 1 is the id of the <start> token in the target tokenizer (see the mapping in section 2)
    decoder_input = [1]
    output = tf.expand_dims(decoder_input, 0)
    
    for i in range(7):  # the target is at most 7 tokens: <start> + five characters + <end>
        enc_padding_mask, combined_mask, dec_padding_mask = create_masks(encoder_input, output)
  
        # predictions.shape == (batch_size, seq_len, vocab_size)
        predictions, attention_weights = transformer(encoder_input, 
                                                 output,
                                                 False,
                                                 enc_padding_mask,
                                                 combined_mask,
                                                 dec_padding_mask)
    
        # select the last token from the seq_len dimension
        predictions = predictions[: ,-1:, :]  # (batch_size, 1, vocab_size)

        predicted_id = tf.cast(tf.argmax(predictions, axis=-1), tf.int32)
    
        # 2 is the id of the <end> token; if it is predicted, return the result
        if predicted_id == 2:
              return tf.squeeze(output, axis=0), attention_weights
    
        # append predicted_id to the output, which is fed back into the decoder
        output = tf.concat([output, predicted_id], axis=-1)

    return tf.squeeze(output, axis=0), attention_weights

4. Results

import random
def convert(lang, tensor):
    result=[]
    for t in tensor:
        result.append(lang.index_word[t])
    return ''.join(result)
def chooce_val():
    max_len=len(input_tensor_val)
    rand_index=random.sample(range(max_len),100)
    val_train=input_tensor_val[rand_index]
    val_targ=target_tensor_val[rand_index]
    for _index in range(100):
        print('上一句')   # previous line (model input)
        print(convert(inp_lang,val_train[_index]))
        print('真实')      # ground-truth next line
        print(convert(targ_lang,val_targ[_index]))
        print('生成')      # generated next line
        output,att_weight=evaluate(val_train[_index])
        print(convert(targ_lang,output.numpy()))
        print('===============================================')

Sample output (上一句 = previous line, 真实 = ground truth, 生成 = generated):

上一句
<start>平生五大夫<end>
真实
<start>投老一秃翁<end>
生成
<start>一见一笑莞
===============================================
上一句
<start>青烟回野烧<end>
真实
<start>翠霭护晴岚<end>
生成
<start>白日照江山

===============================================
上一句
<start>长路与天接<end>
真实
<start>举足蹑星躔<end>
生成
<start>一家与山连
===============================================
上一句
<start>宴回银烛夜<end>
真实
<start>吟度玉关秋<end>
生成
<start>歌动玉楼春
===============================================
上一句
<start>千里芙蓉幕<end>
真实
<start>何由话所思<end>
生成
<start>相期一笑同
===============================================
上一句
<start>路晚逢僧少<end>
真实
<start>门寒过客稀<end>
生成
<start>山寒出寺多
===============================================
上一句
<start>黍苗侵野径<end>
真实
<start>桑椹污闲庭<end>
生成
<start>桑叶绕江村
===============================================
上一句
<start>里闾思长者<end>
真实
<start>门户托佳儿<end>
生成
<start>风俗笑生涯
===============================================
上一句
<start>茅亭亦疏豁<end>
真实
<start>凭槛看春耕<end>
生成
<start>石径亦深浅
===============================================
上一句
<start>哀鸣思战斗<end>
真实
<start>迥立向苍苍<end>
生成
<start>哀挽忆江干
===============================================
上一句
<start>遣愁聊觅句<end>
真实
<start>得句却愁生<end>
生成
<start>不必问何如

===============================================
上一句
<start>好是修行处<end>
真实
<start>师当住几年<end>
生成
<start>春来不可寻
===============================================
上一句
<start>安得如渔翁<end>
真实
<start>垂钓江之涘<end>
生成
<start>一觞共同醉
===============================================
上一句
<start>霜台欹冠豸<end>
真实
<start>赖许往来频<end>
生成
<start>月殿倒婵娟

===============================================
上一句
<start>我何惮行役<end>
真实
<start>沿洄领佳致<end>
生成
<start>一笑不可期
===============================================
上一句
<start>追招不隔日<end>
真实
<start>继践公之堂<end>
生成
<start>独坐独伤神

===============================================
上一句
<start>秋风倾菊酒<end>
真实
<start>霁景下蓬山<end>
生成
<start>秋月照梅花
===============================================
上一句
<start>天授睢坛荚<end>
真实
<start>风兴渭水英<end>
生成
<start>人分汉水花
===============================================
上一句
<start>世言楚使者<end>
真实
<start>乃是汉名卿<end>
生成
<start>不识天地心
===============================================
上一句
<start>结言本同心<end>
真实
<start>悲欢何未齐<end>
生成
<start>相期在云海
===============================================
上一句
<start>河东有贤守<end>
真实
<start>帝念不能已<end>
生成
<start>不见此时人
===============================================
上一句
<start>感时何倏忽<end>
真实
<start>抚旧应涕洟<end>
生成
<start>愁绪乱纷纷
===============================================
上一句
<start>志士本激烈<end>
真实
<start>况当离别情<end>
生成
<start>志士徒伤悲
===============================================
上一句
<start>幅巾朝食罢<end>
真实
<start>芒

Source: https://blog.csdn.net/amao1998/article/details/116061920
