Common TensorFlow Methods
Published: 2019-06-12


tf.Graph.as_default()

as_default(self)

Returns a context manager that makes this `Graph` the default graph.

This method should be used if you want to create multiple graphs in the same process. For convenience, a global default graph is provided, and all ops will be added to this graph if you do not create a new graph explicitly.

Use this method with the `with` keyword to specify that ops created within the scope of a block should be added to this graph.
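
For example, a minimal sketch of building ops on an explicitly created graph (assumes TensorFlow 1.x, where a `tf.Session` can be bound to a specific graph):

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
    # Ops created inside this block are added to `g`,
    # not to the process-wide default graph.
    c = tf.constant(5.0, name="c")

with tf.Session(graph=g) as sess:
    print(sess.run(c))  # 5.0
```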

 

tf.identity()

identity(input, name=None)

Return a tensor with the same shape and contents as the input tensor or value.

Args:

input: A `Tensor`.
name: A name for the operation (optional).

Returns:

A `Tensor`. Has the same type as `input`.
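
A quick sketch (TF 1.x style); a common use of `tf.identity` is simply to give a tensor an explicit name in the graph:

```python
import tensorflow as tf

x = tf.constant([1, 2, 3])
y = tf.identity(x, name="y")  # same shape and contents as x, new op named "y"

with tf.Session() as sess:
    print(sess.run(y))  # [1 2 3]
```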

 

tf.random_normal()

random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)

Outputs random values from a normal distribution.

Args:

shape: A 1-D integer Tensor or Python array. The shape of the output tensor.
mean: A 0-D Tensor or Python value of type `dtype`. The mean of the normal distribution.
stddev: A 0-D Tensor or Python value of type `dtype`. The standard deviation of the normal distribution.
dtype: The type of the output.
seed: A Python integer. Used to create a random seed for the distribution.
See [`set_random_seed`](../../api_docs/python/constant_op.md#set_random_seed) for behavior.
name: A name for the operation (optional).

Returns:

A tensor of the specified shape filled with random normal values.
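
A minimal sketch (TF 1.x); the values change on each run unless a seed is fixed:

```python
import tensorflow as tf

# 2x3 tensor drawn from N(mean=0.0, stddev=1.0); `seed` makes runs repeatable.
samples = tf.random_normal([2, 3], mean=0.0, stddev=1.0, seed=42)

with tf.Session() as sess:
    print(sess.run(samples))
```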

 

tf.nn.embedding_lookup()

embedding_lookup(params, ids, partition_strategy='mod', name=None, validate_indices=True, max_norm=None)

Looks up `ids` in a list of embedding tensors.

This function is used to perform parallel lookups on the list of tensors in `params`. It is a generalization of [`tf.gather()`](../../api_docs/python/array_ops.md#gather), where `params` is interpreted as a partitioning of a large embedding tensor. `params` may be a `PartitionedVariable` as returned by using `tf.get_variable()` with a partitioner.

If `len(params) > 1`, each element `id` of `ids` is partitioned between the elements of `params` according to the `partition_strategy`.

In all strategies, if the id space does not evenly divide the number of partitions, each of the first `(max_id + 1) % len(params)` partitions will be assigned one more id.

If `partition_strategy` is `"mod"`, we assign each id to partition `p = id % len(params)`.

For instance, 13 ids are split across 5 partitions as:

`[[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]]`

If `partition_strategy` is `"div"`, we assign ids to partitions in a contiguous manner. In this case, 13 ids are split across 5 partitions as:

`[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]`

The results of the lookup are concatenated into a dense tensor. The returned tensor has shape `shape(ids) + shape(params)[1:]`.

Args:

params: A single tensor representing the complete embedding tensor, or a list of P tensors all of the same shape except for the first dimension, representing sharded embedding tensors. Alternatively, a `PartitionedVariable`, created by partitioning along dimension 0. Each element must be appropriately sized for the given `partition_strategy`.

ids: A `Tensor` with type `int32` or `int64` containing the ids to be looked up in `params`.

partition_strategy: A string specifying the partitioning strategy, relevant if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default is `"mod"`.

name: A name for the operation (optional).
validate_indices: Whether or not to validate gather indices.
max_norm: If not None, embedding values are l2-normalized to the value of
max_norm.

Returns:

A `Tensor` with the same type as the tensors in `params`.

Raises:

ValueError: If `params` is empty.
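
A small sketch (TF 1.x) with a single unpartitioned embedding table; the table values and ids below are made up for illustration:

```python
import tensorflow as tf

# Toy embedding table: 5 ids, each mapped to a 3-dimensional vector.
params = tf.constant([[0.0, 0.0, 0.0],
                      [1.0, 1.0, 1.0],
                      [2.0, 2.0, 2.0],
                      [3.0, 3.0, 3.0],
                      [4.0, 4.0, 4.0]])
ids = tf.constant([[1, 3], [2, 0]])  # shape [2, 2]

# Result shape is shape(ids) + shape(params)[1:] = [2, 2, 3].
emb = tf.nn.embedding_lookup(params, ids)

with tf.Session() as sess:
    print(sess.run(emb))
```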

 

tf.nn.dynamic_rnn()

dynamic_rnn(cell, inputs, sequence_length=None, initial_state=None, dtype=None, parallel_iterations=None, swap_memory=False, time_major=False, scope=None)

Creates a recurrent neural network specified by RNNCell `cell`.

This function is functionally identical to the function `rnn` above, but performs fully dynamic unrolling of `inputs`.

Unlike `rnn`, the input `inputs` is not a Python list of `Tensors`, one for each frame. Instead, `inputs` may be a single `Tensor` where the maximum time is either the first or second dimension (see the parameter `time_major`). Alternatively, it may be a (possibly nested) tuple of Tensors, each of them having matching batch and time dimensions. The corresponding output is either a single `Tensor` having the same number of time steps and batch size, or a (possibly nested) tuple of such tensors, matching the nested structure of `cell.output_size`.

The parameter `sequence_length` is optional and is used to copy-through state and zero-out outputs when past a batch element's sequence length. So it's more for correctness than performance, unlike in rnn().

Args:

cell: An instance of RNNCell.
inputs: The RNN inputs.
If `time_major == False` (default), this must be a `Tensor` of shape `[batch_size, max_time, ...]`, or a nested tuple of such elements.
If `time_major == True`, this must be a `Tensor` of shape `[max_time, batch_size, ...]`, or a nested tuple of such elements.
This may also be a (possibly nested) tuple of Tensors satisfying this property. The first two dimensions must match across all the inputs, but otherwise the ranks and other shape components may differ. In this case, input to `cell` at each time-step will replicate the structure of these tuples, except for the time dimension (from which the time is taken).
The input to `cell` at each time step will be a `Tensor` or (possibly nested) tuple of Tensors each with dimensions `[batch_size, ...]`.
sequence_length: (optional) An int32/int64 vector sized `[batch_size]`.
initial_state: (optional) An initial state for the RNN.
If `cell.state_size` is an integer, this must be
a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`.
If `cell.state_size` is a tuple, this should be a tuple of
tensors having shapes `[batch_size, s] for s in cell.state_size`.
dtype: (optional) The data type for the initial state and expected output.
Required if initial_state is not provided or RNN state has a heterogeneous
dtype.
parallel_iterations: (Default: 32). The number of iterations to run in
parallel. Those operations which do not have any temporal dependency
and can be run in parallel, will be. This parameter trades off
time for space. Values >> 1 use more memory but take less time,
while smaller values use less memory but computations take longer.
swap_memory: Transparently swap the tensors produced in forward inference
but needed for back prop from GPU to CPU. This allows training RNNs
which would typically not fit on a single GPU, with very minimal (or no)
performance penalty.
time_major: The shape format of the `inputs` and `outputs` Tensors.
If true, these `Tensors` must be shaped `[max_time, batch_size, depth]`.
If false, these `Tensors` must be shaped `[batch_size, max_time, depth]`.
Using `time_major = True` is a bit more efficient because it avoids
transposes at the beginning and end of the RNN calculation. However,
most TensorFlow data is batch-major, so by default this function
accepts input and emits output in batch-major form.
scope: VariableScope for the created subgraph; defaults to "rnn".

Returns:

A pair (outputs, state) where:

outputs: The RNN output `Tensor`.
If time_major == False (default), this will be a `Tensor` shaped `[batch_size, max_time, cell.output_size]`.
If time_major == True, this will be a `Tensor` shaped `[max_time, batch_size, cell.output_size]`.
Note, if `cell.output_size` is a (possibly nested) tuple of integers or `TensorShape` objects, then `outputs` will be a tuple having the same structure as `cell.output_size`, containing Tensors having shapes corresponding to the shape data in `cell.output_size`.

state: The final state. If `cell.state_size` is an int, this will be shaped `[batch_size, cell.state_size]`. If it is a `TensorShape`, this will be shaped `[batch_size] + cell.state_size`. If it is a (possibly nested) tuple of ints or `TensorShape`, this will be a tuple having the corresponding shapes.

Raises:

TypeError: If `cell` is not an instance of RNNCell.
ValueError: If inputs is None or an empty list.
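
A minimal batch-major sketch (TF 1.x); the cell type and the sizes here are illustrative, not prescribed by the API:

```python
import tensorflow as tf

batch_size, max_time, depth, num_units = 4, 7, 8, 16

inputs = tf.random_normal([batch_size, max_time, depth])
seq_len = tf.constant([7, 5, 7, 3])  # valid length of each batch element

cell = tf.nn.rnn_cell.BasicLSTMCell(num_units)
# outputs: [batch_size, max_time, num_units]; state: the final LSTMStateTuple.
outputs, state = tf.nn.dynamic_rnn(cell, inputs,
                                   sequence_length=seq_len,
                                   dtype=tf.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(outputs).shape)  # (4, 7, 16)
```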

 

tf.reduce_max()

reduce_max(input_tensor, axis=None, keep_dims=False, name=None, reduction_indices=None)

Computes the maximum of elements across dimensions of a tensor.
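
A quick sketch; with `axis=None` the maximum is taken over all elements, and `keep_dims=True` keeps the reduced dimension with size 1:

```python
import tensorflow as tf

x = tf.constant([[1, 3], [2, 6]])

m_all = tf.reduce_max(x)                          # 6
m_col = tf.reduce_max(x, axis=0)                  # [2, 6]
m_row = tf.reduce_max(x, axis=1, keep_dims=True)  # [[3], [6]]

with tf.Session() as sess:
    print(sess.run([m_all, m_col, m_row]))
```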

 

tf.contrib.layers.embed_sequence()

embed_sequence(ids, vocab_size=None, embed_dim=None, unique=False, initializer=None, regularizer=None, trainable=True, scope=None, reuse=None)

Maps a sequence of symbols to a sequence of embeddings.
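
A minimal sketch (TF 1.x `tf.contrib`, so behavior may differ across versions); the vocabulary and embedding sizes are illustrative:

```python
import tensorflow as tf

# Batch of 2 sequences, each 4 symbol ids long.
ids = tf.constant([[1, 2, 3, 0], [4, 2, 0, 0]])

# Creates an embedding variable internally; result shape is
# [batch_size, sequence_length, embed_dim] = [2, 4, 8].
embedded = tf.contrib.layers.embed_sequence(ids, vocab_size=10, embed_dim=8)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(embedded).shape)  # (2, 4, 8)
```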

 

tf.strided_slice()

strided_slice(input_, begin, end, strides=None, begin_mask=0, end_mask=0, ellipsis_mask=0, new_axis_mask=0, shrink_axis_mask=0, var=None, name=None)

Extracts a strided slice from a tensor.

To a first order, this operation extracts a slice of size `end - begin` from a tensor `input`, starting at the location specified by `begin`. The slice continues by adding `stride` to the `begin` index until all dimensions are not less than `end`. Note that components of stride can be negative, which causes a reverse slice.

This operation can be thought of as an encoding of a numpy-style sliced range. Given a python slice input[<spec0>, <spec1>, ..., <specn>], this function will be called as follows.

`begin`, `end`, and `strides` will all be of length n. n is in general not the same dimensionality as `input`.
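
A small sketch showing the correspondence with a numpy-style slice; the tensor values are made up for illustration:

```python
import tensorflow as tf

t = tf.constant([[[1, 1, 1], [2, 2, 2]],
                 [[3, 3, 3], [4, 4, 4]],
                 [[5, 5, 5], [6, 6, 6]]])

# Roughly equivalent to the numpy-style slice t[1:2, 0:1, 0:3].
s = tf.strided_slice(t, begin=[1, 0, 0], end=[2, 1, 3], strides=[1, 1, 1])

with tf.Session() as sess:
    print(sess.run(s))  # [[[3 3 3]]]
```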

 

tf.fill()

fill(dims, value, name=None)

Creates a tensor filled with a scalar value.

This operation creates a tensor of shape `dims` and fills it with `value`.

For example:

```prettyprint

# Output tensor has shape [2, 3].
fill([2, 3], 9) ==> [[9, 9, 9]
                     [9, 9, 9]]
```

Args:

dims: A `Tensor` of type `int32`.
1-D. Represents the shape of the output tensor.
value: A `Tensor`. 0-D (scalar). Value to fill the returned tensor.

@compatibility(numpy)

Equivalent to np.full
@end_compatibility
name: A name for the operation (optional).

Returns:

A `Tensor`. Has the same type as `value`.
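
A minimal sketch mirroring the example above:

```python
import tensorflow as tf

# A [2, 3] tensor filled with the scalar 9, like np.full((2, 3), 9).
filled = tf.fill([2, 3], 9)

with tf.Session() as sess:
    print(sess.run(filled))
    # [[9 9 9]
    #  [9 9 9]]
```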

 

Reposted from: https://www.cnblogs.com/max-hu/p/7072426.html
