TensorFlow Fast-Food Tutorial (12) - Writing Shakespeare's Plays with a Machine

Introduction: how to use high-level frameworks such as TFLearn and Keras to learn to automatically generate Shakespearean plays or Nietzsche-style philosophical prose

High-Level Frameworks: TFLearn and Keras

In the previous installment we looked at TensorFlow's own high-level API wrappers, which let us build a DNN classifier for MNIST handwriting recognition in just a few simple steps.

TensorFlow keeps advancing its Estimator API, but that is not the whole toolbox. Beyond TensorFlow's official APIs we have other powerful tools, notably TFLearn and Keras.

In this installment we put the arsenal on display and see what powerful functionality is wrapped up for us by TFLearn, a high-level framework built specifically for TensorFlow, and by Keras, which spans several backends including TensorFlow and CNTK.

Having a Machine Write Shakespeare's Plays

Earlier we briefly introduced the RNN, a powerful network for processing sequence data. A key advantage of RNNs over other networks is that, once they have learned from sequence data, they can generate new sequences on their own.
For example, learn the Three Hundred Tang Poems and it can write poetry; learn the Linux kernel source and it can write C code (though the output will mostly fail to compile).

Let's start with an example that writes Shakespearean drama automatically.
Before we look at the code, a few caveats. Deep learning is fairly demanding about data volume: for generative training like this, you generally need millions to tens of millions of training samples before the results get good. Training on just a few short stories is guaranteed to produce incoherent fiction; even a human cannot learn to write poetry from a handful of poems.
The other point is that as the training data grows, the demands on time and compute climb steeply.
Take training on Shakespeare's plays: the corpus is not especially large, a bit over 160,000 lines, yet on a CPU this is not an hour-or-two job; it is measured in days.
For the image and video examples later in this series, training on a CPU for months would not be surprising.

So how complicated is the code for this day-long training run? The answer: the core is barely a dozen lines, and with the data handling and test code included it comes to only about 50 lines.

from __future__ import absolute_import, division, print_function

import os
import pickle
from six.moves import urllib

import tflearn
from tflearn.data_utils import *

path = "shakespeare_input.txt"
char_idx_file = 'char_idx.pickle'

if not os.path.isfile(path):
    urllib.request.urlretrieve("https://raw.githubusercontent.com/tflearn/tflearn.github.io/master/resources/shakespeare_input.txt", path)

maxlen = 25

char_idx = None
if os.path.isfile(char_idx_file):
    print('Loading previous char_idx')
    char_idx = pickle.load(open(char_idx_file, 'rb'))

X, Y, char_idx = \
    textfile_to_semi_redundant_sequences(path, seq_maxlen=maxlen, redun_step=3,
                                         pre_defined_char_idx=char_idx)

pickle.dump(char_idx, open(char_idx_file,'wb'))

g = tflearn.input_data([None, maxlen, len(char_idx)])
g = tflearn.lstm(g, 512, return_seq=True)
g = tflearn.dropout(g, 0.5)
g = tflearn.lstm(g, 512, return_seq=True)
g = tflearn.dropout(g, 0.5)
g = tflearn.lstm(g, 512)
g = tflearn.dropout(g, 0.5)
g = tflearn.fully_connected(g, len(char_idx), activation='softmax')
g = tflearn.regression(g, optimizer='adam', loss='categorical_crossentropy',
                       learning_rate=0.001)

m = tflearn.SequenceGenerator(g, dictionary=char_idx,
                              seq_maxlen=maxlen,
                              clip_gradients=5.0,
                              checkpoint_path='model_shakespeare')

for i in range(50):
    seed = random_sequence_from_textfile(path, maxlen)
    m.fit(X, Y, validation_set=0.1, batch_size=128,
          n_epoch=1, run_id='shakespeare')
    print("-- TESTING...")
    print("-- Test with temperature of 1.0 --")
    print(m.generate(600, temperature=1.0, seq_seed=seed))
    print("-- Test with temperature of 0.5 --")
    print(m.generate(600, temperature=0.5, seq_seed=seed))

The example above uses the TFLearn framework, which can be installed with:

pip install tflearn


TFLearn is a high-level API framework developed specifically for TensorFlow.
The main benefit of the TFLearn API is readability. Take the core code we just saw:

g = tflearn.input_data([None, maxlen, len(char_idx)])
g = tflearn.lstm(g, 512, return_seq=True)
g = tflearn.dropout(g, 0.5)
g = tflearn.lstm(g, 512, return_seq=True)
g = tflearn.dropout(g, 0.5)
g = tflearn.lstm(g, 512)
g = tflearn.dropout(g, 0.5)
g = tflearn.fully_connected(g, len(char_idx), activation='softmax')
g = tflearn.regression(g, optimizer='adam', loss='categorical_crossentropy',
                       learning_rate=0.001)

m = tflearn.SequenceGenerator(g, dictionary=char_idx,
                              seq_maxlen=maxlen,
                              clip_gradients=5.0,
                              checkpoint_path='model_shakespeare')

The stack reads straight through: the input data, three LSTM layers interleaved with three dropout layers, and finally a fully connected layer with softmax.
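
For comparison, the same stack could be written in Keras roughly as follows. This is a sketch of my own, reusing maxlen and char_idx from the code above; it is not part of the TFLearn example. We will meet Keras properly later in this article.

from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense

model = Sequential()
# three stacked 512-unit LSTM layers, each followed by dropout
model.add(LSTM(512, return_sequences=True, input_shape=(maxlen, len(char_idx))))
model.add(Dropout(0.5))
model.add(LSTM(512, return_sequences=True))
model.add(Dropout(0.5))
model.add(LSTM(512))
model.add(Dropout(0.5))
# a fully connected softmax layer over the character vocabulary
model.add(Dense(len(char_idx), activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')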

Now let's look at the structure of a network that predicts survival probability on the Titanic:

# Build neural network
net = tflearn.input_data(shape=[None, 6])
net = tflearn.fully_connected(net, 32)
net = tflearn.fully_connected(net, 32)
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net)

# Define model
model = tflearn.DNN(net)
# Start training (apply gradient descent algorithm)
model.fit(data, labels, n_epoch=10, batch_size=16, show_metric=True)
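
If you want to run the Titanic example end to end, data and labels come from TFLearn's bundled Titanic dataset. Here is a minimal loading sketch following TFLearn's official quickstart; the ignored column indices (name and ticket) are taken from that example:

import numpy as np
from tflearn.datasets import titanic
from tflearn.data_utils import load_csv

# download the CSV; column 0 holds the survived/not-survived label
titanic.download_dataset('titanic_dataset.csv')
data, labels = load_csv('titanic_dataset.csv', target_column=0,
                        categorical_labels=True, n_classes=2)

def preprocess(rows, columns_to_ignore):
    # drop the text columns, then encode 'sex' as 0/1
    for col in sorted(columns_to_ignore, reverse=True):
        [row.pop(col) for row in rows]
    for row in rows:
        row[1] = 1. if row[1] == 'female' else 0.
    return np.array(rows, dtype=np.float32)

data = preprocess(data, [1, 6])  # ignore 'name' and 'ticket'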

Let's look at the play generated after the first epoch. It is certainly still rough at this stage, but the basic shape is there:

THAISA:
Why, sir, say if becel; sunthy alot but of
coos rytermelt, buy -
bived with wond I saTt fas,'? You and grigper.

FIENDANS:
By my wordhand!

KING RECENTEN:
Wish sterest expeun The siops so his fuurs,
And emour so, ane stamn.
she wealiwe muke britgie; I dafs tpichicon, bist,
Turch ose be fast wirpest neerenler.

NONTo:
So befac, sels at, Blove and rackity;
The senent stran spard: and, this not you so the wount
hor hould batil's toor wate
What if a poostit's of bust contot;
Whit twetemes, Game ifon I am
Ures the fast to been'd matter:
To and lause. Tiess her jittarss,
Let concertaet ar: and not!
Not fearle her g

Now the result after 10 epochs of training:

PEMBROKE:
There tell the elder pieres,
Would our pestilent shapeing sebaricity. So have partned in me, Project of Yorle
again, and then when you set man
make plash'd of her too sparent
upon this father be dangerous puny or house;
Born is now been left of himself,
This true compary nor no stretches, back that
Horses had hand or question!

POLIXENES:
I have unproach the strangest
padely carry neerful young Yir,
Or hope not fall-a a cause of banque.

JESSICA:
He that comes to find the just,
And eyes gold, substrovious;
Yea pity a god on a foul rioness, these tebles and purish new head meet again?

And after 20 epochs:

y prison,
Fatal and ominous children and the foot, it will
hear with you: it is my pace comprite
To come my soldiers, if I were dread,
Of breath as what I charge with I well;
Her palace and every tailor, the house of wondrous sweet mark!

STANLEY:
Take that spirit, thou hast
'no whore he did eyes, and what men damned, and
I had evils; by lap, or so,
But wholow'st thy report subject,
Had my rabble against thee;
And no rassians which he secure
of genslications; when I have move undertake-inward, into Bertounce;
Upon a shift, meet as we are. He beggars thing
Have for it will, but joy with the minute cannot whom we prarem
-- Test with temperature of 0.5 --
y prison,
Fatal and ominous rein here,
The princess have all to prince and the marsh of his company
To prove brother in the world,
And we the forest prove more than the heavens on the false report of the fools,
Depose the body of my wits.

DUKE SENIOR:
The night better appelled my part.

ANGELO:
Care you that may you understand your grace
I may speak of a point, and seems as in the heart
Who be deeds to show and sale for so
unhouses me of her soul, and the heart of them from the corder black to stand about up.

CLAUDIO:
The place of the world shall be married his love.

Starting Smaller: Generating City Names

Your Shakespeare model is presumably still training. Since we are waiting anyway, let's study the generation process through an even simpler example.
Again we take an official TFLearn example: read a list of major US city names and generate some new ones.

Take the cities beginning with Z as a sample:

Zachary
Zafra
Zag
Zahl
Zaleski
Zalma
Zama
Zanesfield
Zanesville
Zap
Zapata
Zarah
Zavalla
Zearing
Zebina
Zebulon
Zeeland
Zeigler
Zela
Zelienople
Zell
Zellwood
Zemple
Zena
Zenda
Zenith
Zephyr
Zephyr Cove
Zephyrhills
Zia Pueblo
Zillah
Zilwaukee
Zim
Zimmerman
Zinc
Zion
Zionsville
Zita
Zoar
Zolfo Springs
Zona
Zumbro Falls
Zumbrota
Zuni
Zurich
Zwingle
Zwolle

There are 20,580 cities in total. This trains much faster: on a pure CPU, one epoch takes about 5 to 6 minutes.

The code is below, cut from the same cloth as the Shakespeare example:

from __future__ import absolute_import, division, print_function

import io
import os
from six import moves
import ssl

import tflearn
from tflearn.data_utils import *

path = "US_Cities.txt"
if not os.path.isfile(path):
    # urlretrieve does not accept a context argument; disable certificate
    # verification globally for this download instead
    ssl._create_default_https_context = ssl._create_unverified_context
    moves.urllib.request.urlretrieve("https://raw.githubusercontent.com/tflearn/tflearn.github.io/master/resources/US_Cities.txt", path)

maxlen = 20

string_utf8 = io.open(path, "r", encoding='utf-8').read()
X, Y, char_idx = \
    string_to_semi_redundant_sequences(string_utf8, seq_maxlen=maxlen, redun_step=3)

g = tflearn.input_data(shape=[None, maxlen, len(char_idx)])
g = tflearn.lstm(g, 512, return_seq=True)
g = tflearn.dropout(g, 0.5)
g = tflearn.lstm(g, 512)
g = tflearn.dropout(g, 0.5)
g = tflearn.fully_connected(g, len(char_idx), activation='softmax')
g = tflearn.regression(g, optimizer='adam', loss='categorical_crossentropy',
                       learning_rate=0.001)

m = tflearn.SequenceGenerator(g, dictionary=char_idx,
                              seq_maxlen=maxlen,
                              clip_gradients=5.0,
                              checkpoint_path='model_us_cities')

for i in range(40):
    seed = random_sequence_from_string(string_utf8, maxlen)
    m.fit(X, Y, validation_set=0.1, batch_size=128,
          n_epoch=1, run_id='us_cities')
    print("-- TESTING...")
    print("-- Test with temperature of 1.2 --")
    print(m.generate(30, temperature=1.2, seq_seed=seed))
    print("-- Test with temperature of 1.0 --")
    print(m.generate(30, temperature=1.0, seq_seed=seed))
    print("-- Test with temperature of 0.5 --")
    print(m.generate(30, temperature=0.5, seq_seed=seed))

Here are the city names generated after the first epoch:

t and Shoot
Cuthbertd
Lettfrecv
El
Ceoneel Sutd
Sa

After the second epoch:

stle
Finchford
Finch Dasthond
madloogd
Wlaycoyarfw

After the third epoch:

averal
Cape Carteret
Acbiropa Heowar Sor Dittoy
Do

After the tenth epoch:

hoenchen
Schofield
Stcojos
Schabell
StcaKnerum Cri

After epoch 20, it seems to be getting somewhere:

Hill
Cherry Hills Village
Hillfood Pork
Hillbrook

After epoch 30, it regresses a little:

ckitat
Kline
Klondike
Klonsder
Klansburg
Dlandon
D

After epoch 40:

Branch
Villages of Ocite
Sidaydaton
Sidway
Siddade

After epoch 100:

Atlasburg
Atmautluak
Attion
Attul
Atta
Aque Creek

The blessing and the curse of tflearn.SequenceGenerator is that it encapsulates all the details, making it hard to see what happens behind the scenes.

The Temperature Parameter

In the results above we actually quoted only one temperature setting; the scripts print output at several temperatures. So what does this temperature mean?

Temperature controls how much the predicted probability distribution is reshaped before sampling. A high temperature, above 1, flattens the distribution, so each run can generate quite different (and less coherent) sentences. A temperature of exactly 1 leaves the model's distribution unchanged. A low temperature, below 1, sharpens the distribution toward the most likely characters, so the output becomes stable and conservative, possibly producing nearly the same sentence every time.
For something as romantic as writing poetry, we generally lean toward a higher temperature. Variation is what makes it fun, isn't it?
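
To make this concrete, here is a small numpy sketch of my own (mirroring the sample() helper shown near the end of this article): dividing the log-probabilities by the temperature and re-normalizing reshapes the distribution.

import numpy as np

def apply_temperature(preds, temperature):
    # divide log-probabilities by T, then re-normalize (a softmax)
    preds = np.log(np.asarray(preds, dtype='float64')) / temperature
    exp_preds = np.exp(preds)
    return exp_preds / np.sum(exp_preds)

p = [0.5, 0.3, 0.2]
print(apply_temperature(p, 0.5))  # sharper:   ~[0.66, 0.24, 0.11]
print(apply_temperature(p, 1.0))  # unchanged:  [0.5, 0.3, 0.2]
print(apply_temperature(p, 2.0))  # flatter:   ~[0.42, 0.32, 0.26]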

Below are the generated city names at three different temperatures:

-- Test with temperature of 1.2 --

Atlasburg
Atmautluak
Attion
Attul
Atta
Aque Creek
-- Test with temperature of 1.0 --

Atlasburg
Atmautluak
Attila
Attaville
Atteville
-- Test with temperature of 0.5 --

Atlasburg
Atmautluak
Attigua
Attinword
Attrove

A Cross-Backend High-Level API - Keras, and Generating Nietzsche-Style Prose

Keras is an API that runs on several backends, including TensorFlow and Microsoft's CNTK.
It can be installed with:

pip install keras

With TensorFlow already installed, Keras will pick TensorFlow as its backend.
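
If you need to check or force the backend, multi-backend Keras honors the KERAS_BACKEND environment variable, read before the first import. A small sketch, not part of the original example:

import os
os.environ['KERAS_BACKEND'] = 'tensorflow'  # must be set before importing keras

import keras
print(keras.backend.backend())  # name of the active backend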

Let's also look at Keras's text-generation example. The official example generates Nietzsche-style sentences.

The core is just six statements:

model = Sequential()
model.add(LSTM(128, input_shape=(maxlen, len(chars))))
model.add(Dense(len(chars)))
model.add(Activation('softmax'))

optimizer = RMSprop(lr=0.01)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)

The complete code is below; give it a run. If Nietzsche doesn't interest you, swap in another text. But note, as the script's docstring says: any text is fine as long as it has at least roughly 100k characters, and 1M or more is better.

'''Example script to generate text from Nietzsche's writings.
At least 20 epochs are required before the generated text
starts sounding coherent.
It is recommended to run this script on GPU, as recurrent
networks are quite computationally intensive.
If you try this script on new data, make sure your corpus
has at least ~100k characters. ~1M is better.
'''

from __future__ import print_function
from keras.callbacks import LambdaCallback
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import LSTM
from keras.optimizers import RMSprop
from keras.utils.data_utils import get_file
import numpy as np
import random
import sys
import io

path = get_file('nietzsche.txt', origin='https://s3.amazonaws.com/text-datasets/nietzsche.txt')
with io.open(path, encoding='utf-8') as f:
    text = f.read().lower()
print('corpus length:', len(text))

chars = sorted(list(set(text)))
print('total chars:', len(chars))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))

# cut the text in semi-redundant sequences of maxlen characters
maxlen = 40
step = 3
sentences = []
next_chars = []
for i in range(0, len(text) - maxlen, step):
    sentences.append(text[i: i + maxlen])
    next_chars.append(text[i + maxlen])
print('nb sequences:', len(sentences))

print('Vectorization...')
x = np.zeros((len(sentences), maxlen, len(chars)), dtype=bool)
y = np.zeros((len(sentences), len(chars)), dtype=bool)
for i, sentence in enumerate(sentences):
    for t, char in enumerate(sentence):
        x[i, t, char_indices[char]] = 1
    y[i, char_indices[next_chars[i]]] = 1


# build the model: a single LSTM
print('Build model...')
model = Sequential()
model.add(LSTM(128, input_shape=(maxlen, len(chars))))
model.add(Dense(len(chars)))
model.add(Activation('softmax'))

optimizer = RMSprop(lr=0.01)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)


def sample(preds, temperature=1.0):
    # helper function to sample an index from a probability array
    preds = np.asarray(preds).astype('float64')
    preds = np.log(preds) / temperature
    exp_preds = np.exp(preds)
    preds = exp_preds / np.sum(exp_preds)
    probas = np.random.multinomial(1, preds, 1)
    return np.argmax(probas)


def on_epoch_end(epoch, logs):
    # Function invoked at end of each epoch. Prints generated text.
    print()
    print('----- Generating text after Epoch: %d' % epoch)

    start_index = random.randint(0, len(text) - maxlen - 1)
    for diversity in [0.2, 0.5, 1.0, 1.2]:
        print('----- diversity:', diversity)

        generated = ''
        sentence = text[start_index: start_index + maxlen]
        generated += sentence
        print('----- Generating with seed: "' + sentence + '"')
        sys.stdout.write(generated)

        for i in range(400):
            x_pred = np.zeros((1, maxlen, len(chars)))
            for t, char in enumerate(sentence):
                x_pred[0, t, char_indices[char]] = 1.

            preds = model.predict(x_pred, verbose=0)[0]
            next_index = sample(preds, diversity)
            next_char = indices_char[next_index]

            generated += next_char
            sentence = sentence[1:] + next_char

            sys.stdout.write(next_char)
            sys.stdout.flush()
        print()

print_callback = LambdaCallback(on_epoch_end=on_epoch_end)

model.fit(x, y,
          batch_size=128,
          epochs=60,
          callbacks=[print_callback])

The Principle Behind Text Generation - It's Just Probability Prediction

TFLearn's wrappers hide the details too well, so let's consult the Keras code that implements the same functionality:

        for i in range(400):
            x_pred = np.zeros((1, maxlen, len(chars)))
            for t, char in enumerate(sentence):
                x_pred[0, t, char_indices[char]] = 1.

            preds = model.predict(x_pred, verbose=0)[0]
            next_index = sample(preds, diversity)
            next_char = indices_char[next_index]

            generated += next_char
            sentence = sentence[1:] + next_char

            sys.stdout.write(next_char)
            sys.stdout.flush()

As we can see, at its core this calls model.predict to estimate which character is most likely to appear next given the current sequence:

preds = model.predict(x_pred, verbose=0)[0]

And the sample code reveals the principle behind the temperature value:

def sample(preds, temperature=1.0):
    # helper function to sample an index from a probability array
    preds = np.asarray(preds).astype('float64')
    # divide the log-probabilities by the temperature:
    # T > 1 flattens the distribution, T < 1 sharpens it
    preds = np.log(preds) / temperature
    # softmax: turn the scaled log-probabilities back into probabilities
    exp_preds = np.exp(preds)
    preds = exp_preds / np.sum(exp_preds)
    # draw one sample from the reshaped distribution
    probas = np.random.multinomial(1, preds, 1)
    return np.argmax(probas)

Now that you understand the principles, try different texts and different temperature values, and enjoy your machine's creative journey!
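
One practical tip before you go (my addition, using Keras's standard save/load API): persist the trained model so you can keep generating later without retraining.

model.save('nietzsche_lstm.h5')  # saves architecture, weights and optimizer state (requires h5py)

from keras.models import load_model
model = load_model('nietzsche_lstm.h5')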

Previously in This Series

TensorFlow Fast-Food Tutorial (1) - Handwriting recognition in 30 lines of code: https://yq.aliyun.com/articles/582122
TensorFlow Fast-Food Tutorial (2) - Scalar operations: https://yq.aliyun.com/articles/582490
TensorFlow Fast-Food Tutorial (3) - Vectors: https://yq.aliyun.com/articles/584202
TensorFlow Fast-Food Tutorial (4) - Matrices: https://yq.aliyun.com/articles/584526
TensorFlow Fast-Food Tutorial (5) - Norms: https://yq.aliyun.com/articles/584896
TensorFlow Fast-Food Tutorial (6) - Matrix decomposition: https://yq.aliyun.com/articles/585599
TensorFlow Fast-Food Tutorial (7) - Gradient descent: https://yq.aliyun.com/articles/587350
TensorFlow Fast-Food Tutorial (8) - A brief history of deep learning: https://yq.aliyun.com/articles/588920
TensorFlow Fast-Food Tutorial (9) - Convolution: https://yq.aliyun.com/articles/590233
TensorFlow Fast-Food Tutorial (10) - Recurrent neural networks: https://yq.aliyun.com/articles/591118
TensorFlow Fast-Food Tutorial (11) - Can you get by just calling APIs without understanding machine learning?: https://yq.aliyun.com/articles/594258
