Learning Notes TF016: CNN Implementation, Dataset, TFRecord, Loading Images, Model, Training, Debugging

AlexNet (Alex Krizhevsky, winner of ILSVRC 2012) is well suited to image classification. Its layers are read from left to right and top to bottom; related layers are grouped together, and as height and width shrink, depth grows. The increase in depth reduces the amount of computation the network needs.

The training dataset is Stanford Dogs, from the Stanford computer vision site:
http://vision.stanford.edu/aditya86/ImageNetDogs/
Download and unpack the data into an imagenet-dogs directory in the same path as the model code. It contains images of 120 dog breeds; 80% are used for training and 20% for testing. A production model would also need to set aside some of the raw data for cross-validation. Each image is in JPEG format (RGB) and the sizes vary.

Converting the images to TFRecord files helps speed up training, simplifies matching images with their labels, and, because the images are split off separately, makes it possible to keep testing the model against checkpoint files. The conversion changes the color space to grayscale, resizes the images to a uniform size, and attaches the label to each image. This preprocessing is run only once, before training, and takes a fairly long time.

glob.glob enumerates the specified directory path, showing the dataset's file structure. The "*" wildcard performs fuzzy matching. The digits in each file name correspond to an ImageNet category's WordNet ID, and the ImageNet website can look up image details by WordNet ID:
http://www.image-net.org/synset?wnid=n02085620

Each file name is split into the breed and the corresponding file name, with the breed taken from the folder name. The images are grouped by breed; while enumerating each breed's images, 20% of them are moved into the testing set, and a check confirms that each breed's testing images make up at least 18% of all of that breed's images. The directories and images are organized into two dictionaries keyed by breed, each holding all of that breed's images. Organizing the classified images into dictionaries simplifies the later steps of selecting images by class and classifying them.

In the preprocessing stage, iterate over all the classified images in turn and open each file in the list. The TFRecord files are filled with the images from dataset, with the breed included; the dataset keys map to the file lists and their labels. record_location stores the TFRecord output path. While enumerating dataset, the current index is used to split the files: every 100 images, the training sample information is written to a new TFRecord file, which speeds up the writing process. Images that TensorFlow cannot recognize as JPEG are skipped with try/except. Converting to grayscale reduces computation and memory usage. tf.cast is used because the resized image is a floating-point tensor whose values have not yet been scaled into the [0, 1) range. Storing the label as a string is efficient; ideally it would be converted to an integer index or a one-hot encoded rank-1 tensor.

Open each image, convert it to grayscale, resize it, and add it to a TFRecord file. The tf.image.resize_images function resizes all images to the same size, but it ignores the aspect ratio, so there is some distortion. Cropping or padding with a border would preserve the aspect ratio.
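
A minimal sketch of the crop-or-pad alternative, assuming the grayscale_image tensor and the 250×151 target size used in the code at the end of these notes:

    # Center-crop or zero-pad to the target size so the image is not stretched.
    preserved_image = tf.image.resize_image_with_crop_or_pad(
        grayscale_image, target_height=250, target_width=151)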

Images are read back from the TFRecord files, loading a small number of images and labels at a time. Reshaping the images helps training and output visualization. The training images are loaded by matching every TFRecord file under the training-set directory. Each TFRecord file contains multiple images, but tf.parse_single_example extracts only a single example from the file. Batched operations can train on many images at once or on a single image, as long as there is enough system memory.

Converting the images to grayscale floating-point values in [0, 1) matches the input that convolution2d expects. In the convolution output, the first and last dimensions are unchanged while the two middle dimensions change. tf.contrib.layers.convolution2d creates the model's first layer. weights_initializer is set to produce normally distributed random values, filling the first set of filters with random numbers drawn from a normal distribution. The filters are marked trainable, so as information flows through the network the weights are adjusted to improve model accuracy.

max_pool downsamples the output. With ksize and strides of [1, 2, 2, 1], the convolution output shape is halved. The output shape shrinks without changing the number of filters (output channels) or the batch size; the reduction only concerns the image (filter) height and width. The second layer has more output channels, twice as many filters as the first. Several convolution and pooling layers keep reducing the input's height and width while increasing its depth. Many architectures use more than five convolution and pooling layers; training and debugging take longer, but the model can match more numerous and more complex patterns.

Every point of the image is fully connected to the output neurons. Since softmax operates on a fully connected layer, a rank-2 tensor is needed: the first dimension distinguishes the images, and the second is a rank-1 tensor per input. tf.reshape, with -1 as a hint to use all remaining dimensions, flattens the last pooling layer into one huge rank-1 tensor per image.

With the pooling layer flattened, a fully connected layer combines the network's current state with the prediction. weights_initializer accepts a callable, so a lambda expression returning a truncated normal distribution is used, which lets the distribution's standard deviation be specified. dropout reduces the importance of any single neuron in the model. tf.contrib.layers.fully_connected outputs a full connection between all the preceding layers and the classes used in training, associating every pixel with the classes. At each step the network transforms the input images through filters into smaller and smaller sizes, and the filters are matched against the labels. This reduces the computation needed to train and test the network and makes the output more general.
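
As a rough shape check, a plain-Python sketch of how the 250×151 input shrinks through the two convolution/pooling stages; it assumes SAME padding and the strides used in the code at the end, and reproduces the 38912 figure that appears in the fully connected layer's initializer:

    import math
    height, width = 250, 151
    height, width = math.ceil(height / 2.0), math.ceil(width / 2.0)  # conv1, stride 2 -> 125 x 76
    height, width = math.ceil(height / 2.0), math.ceil(width / 2.0)  # pool1, 2x2, stride 2 -> 63 x 38
    # conv2 uses stride 1, so height and width stay 63 x 38; depth grows to 64 channels.
    height, width = math.ceil(height / 2.0), math.ceil(width / 2.0)  # pool2, 2x2, stride 2 -> 32 x 19
    print(int(height) * int(width) * 64)  # 32 * 19 * 64 = 38912 flattened values per image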

The true labels of the training data and the model's predictions are fed into the training optimizer (which adjusts the weights of every layer) to compute the model loss. Over many iterations, each step improves the model's accuracy. Most classification functions (tf.nn.softmax) require numeric labels, so each label is converted to an integer representing its index in the list of all classes. tf.map_fn matches each label and returns its index in that class list: map builds the class list from the directory listing, and tf.map_fn applies the given function to tensors in the data flow graph, producing a rank-1 tensor that holds only each label's index in the full list of class labels. tf.nn.softmax then makes predictions with these indices.
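
The notes mention one-hot encoding as an alternative to integer indices. A hedged sketch of that variant, reusing the train_labels and final_fully_connected tensors from the code at the end (the code there keeps the sparse form instead):

    # Hypothetical variant: expand each integer label into a length-120 one-hot vector
    # and use the dense cross-entropy op instead of the sparse one.
    one_hot_labels = tf.one_hot(train_labels, depth=120)
    dense_loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(
            logits=final_fully_connected, labels=one_hot_labels))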

To debug the CNN, watch how the filters (convolution kernels) change from one round of iteration to the next. In a well-designed CNN, when the first convolution layer starts working its input weights are randomly initialized; the weights are activated through the images, so the activation function outputs (feature maps) are random too. Visualized, a feature map looks like the original image with static applied, the static coming from all the weights being randomly excited. Over many iterations the weights are adjusted to fit the training feedback and the filters settle down; once the network converges, the filters come to resemble distinct small patterns found in the images. tf.image_summary gives a simple view of the filters and feature maps after training: the image summary output in the data flow graph gives an overall picture of the filters in use and the feature maps for the input images. TensorDebugger can show the filters changing across iterations as a GIF animation.
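
A minimal sketch of such an image summary, assuming the conv2d_layer_one tensor and session from the code at the end; the summary tag and the ./logs directory are illustrative:

    # A rough look at the first convolution layer's activations: take one channel of
    # the feature map and log it as an image. tf.image_summary was renamed
    # tf.summary.image (and SummaryWriter became tf.summary.FileWriter) in later
    # TensorFlow releases.
    feature_map_summary = tf.image_summary(
        "conv1/feature_map_0", conv2d_layer_one[:, :, :, 0:1], max_images=3)
    summary_writer = tf.train.SummaryWriter("./logs")
    # During training: summary_writer.add_summary(sess.run(feature_map_summary), step)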

Text input is stored in a SparseTensor, most of whose components are zero. A CNN uses dense input: every value matters, and most of the input's components are nonzero.
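
For contrast, a tiny illustrative sketch of a sparse input next to its dense form (the values are made up):

    # A 3x4 matrix with only two nonzero entries, stored as indices plus values.
    sparse_input = tf.SparseTensor([[0, 0], [2, 3]], [1.0, 2.0], [3, 4])
    # Image batches, by contrast, are dense: every pixel value is stored explicitly.
    dense_input = tf.sparse_tensor_to_dense(sparse_input)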

 

    import tensorflow as tf
    import glob
    from itertools import groupby
    from collections import defaultdict
    sess = tf.InteractiveSession()
    image_filenames = glob.glob("./imagenet-dogs/n02*/*.jpg")
    image_filenames[0:2]
    training_dataset = defaultdict(list)
    testing_dataset = defaultdict(list)
    image_filename_with_breed = map(lambda filename: (filename.split("/")[2], filename), image_filenames)
    for dog_breed, breed_images in groupby(image_filename_with_breed, lambda x: x[0]):
        for i, breed_image in enumerate(breed_images):
            if i % 5 == 0:
                testing_dataset[dog_breed].append(breed_image[1])
            else:
                training_dataset[dog_breed].append(breed_image[1])
        breed_training_count = len(training_dataset[dog_breed])
        breed_testing_count = len(testing_dataset[dog_breed])
        breed_training_count_float = float(breed_training_count)
        breed_testing_count_float = float(breed_testing_count)
        assert round(breed_testing_count_float / (breed_training_count_float + breed_testing_count_float), 2) > 0.18, "Not enough testing images."
    print "training_dataset testing_dataset END ------------------------------------------------------"
    def write_records_file(dataset, record_location):
        writer = None
        current_index = 0
        for breed, images_filenames in dataset.items():
            for image_filename in images_filenames:
                if current_index % 100 == 0:
                    if writer:
                        writer.close()
                    record_filename = "{record_location}-{current_index}.tfrecords".format(
                        record_location=record_location,
                        current_index=current_index)
                    writer = tf.python_io.TFRecordWriter(record_filename)
                    print record_filename + "------------------------------------------------------" 
                current_index += 1
                image_file = tf.read_file(image_filename)
                try:
                    image = tf.image.decode_jpeg(image_file)
                except:
                    print(image_filename)
                    continue
                grayscale_image = tf.image.rgb_to_grayscale(image)
                resized_image = tf.image.resize_images(grayscale_image, [250, 151])
                image_bytes = sess.run(tf.cast(resized_image, tf.uint8)).tobytes()
                image_label = breed.encode("utf-8")
                example = tf.train.Example(features=tf.train.Features(feature={
                    'label': tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_label])),
                    'image': tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_bytes]))
                }))
                writer.write(example.SerializeToString())
        writer.close()
    write_records_file(testing_dataset, "./output/testing-images/testing-image")
    write_records_file(training_dataset, "./output/training-images/training-image")
    print "write_records_file testing_dataset training_dataset END------------------------------------------------------"
    filename_queue = tf.train.string_input_producer(
        tf.train.match_filenames_once("./output/training-images/*.tfrecords"))
    reader = tf.TFRecordReader()
    _, serialized = reader.read(filename_queue)
    features = tf.parse_single_example(
        serialized,
        features={
            'label': tf.FixedLenFeature([], tf.string),
            'image': tf.FixedLenFeature([], tf.string),
        })
    record_image = tf.decode_raw(features['image'], tf.uint8)
    image = tf.reshape(record_image, [250, 151, 1])
    label = tf.cast(features['label'], tf.string)
    min_after_dequeue = 10
    batch_size = 3
    capacity = min_after_dequeue + 3 * batch_size
    image_batch, label_batch = tf.train.shuffle_batch(
        [image, label], batch_size=batch_size, capacity=capacity, min_after_dequeue=min_after_dequeue)
    print "load image from TFRecord END------------------------------------------------------"
    float_image_batch = tf.image.convert_image_dtype(image_batch, tf.float32)
    conv2d_layer_one = tf.contrib.layers.convolution2d(
        float_image_batch,
        num_outputs=32,
        kernel_size=(5,5),
        activation_fn=tf.nn.relu,
        weights_initializer=tf.random_normal,
        stride=(2, 2),
        trainable=True)
    pool_layer_one = tf.nn.max_pool(conv2d_layer_one,
        ksize=[1, 2, 2, 1],
        strides=[1, 2, 2, 1],
        padding='SAME')
    conv2d_layer_one.get_shape(), pool_layer_one.get_shape()
    print "conv2d_layer_one pool_layer_one END------------------------------------------------------"
    conv2d_layer_two = tf.contrib.layers.convolution2d(
        pool_layer_one,
        num_outputs=64,
        kernel_size=(5,5),
        activation_fn=tf.nn.relu,
        weights_initializer=tf.random_normal,
        stride=(1, 1),
        trainable=True)
    pool_layer_two = tf.nn.max_pool(conv2d_layer_two,
        ksize=[1, 2, 2, 1],
        strides=[1, 2, 2, 1],
        padding='SAME')
    conv2d_layer_two.get_shape(), pool_layer_two.get_shape()
    print "conv2d_layer_two pool_layer_two END------------------------------------------------------"
    flattened_layer_two = tf.reshape(
        pool_layer_two,
        [
            batch_size,
            -1
        ])
    flattened_layer_two.get_shape()
    print "flattened_layer_two END------------------------------------------------------"
    hidden_layer_three = tf.contrib.layers.fully_connected(
        flattened_layer_two,
        512,
        weights_initializer=lambda i, dtype: tf.truncated_normal([38912, 512], stddev=0.1),
        activation_fn=tf.nn.relu
    )
    hidden_layer_three = tf.nn.dropout(hidden_layer_three, 0.1)
    final_fully_connected = tf.contrib.layers.fully_connected(
        hidden_layer_three,
        120,
        weights_initializer=lambda i, dtype: tf.truncated_normal([512, 120], stddev=0.1)
    )
    print "final_fully_connected END------------------------------------------------------"
    labels = list(map(lambda c: c.split("/")[-1], glob.glob("./imagenet-dogs/*")))
    train_labels = tf.map_fn(lambda l: tf.where(tf.equal(labels, l))[0,0:1][0], label_batch, dtype=tf.int64)
    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(
            logits=final_fully_connected, labels=train_labels))
    batch = tf.Variable(0)
    learning_rate = tf.train.exponential_decay(
        0.01,
        batch * 3,
        120,
        0.95,
        staircase=True)
    optimizer = tf.train.AdamOptimizer(
        learning_rate, 0.9).minimize(
        loss, global_step=batch)
    train_prediction = tf.nn.softmax(final_fully_connected)
    print "train_prediction END------------------------------------------------------"
    # Initialize variables and start the input-queue threads; a training loop that
    # repeatedly runs sess.run([optimizer, loss]) would go here before shutdown.
    sess.run(tf.initialize_all_variables())
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    filename_queue.close(cancel_pending_enqueues=True)
    coord.request_stop()
    coord.join(threads)
    print "END------------------------------------------------------"

 

References:
TensorFlow for Machine Intelligence (《面向机器智能的TensorFlow实践》)

You are welcome to add me on WeChat: qingxingfengzi
My WeChat public account: qingxingfengzigz
My wife Zhang Xingqing's WeChat public account: qingqingfeifangz
