<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /><title>TensorFlow.Ops</title><link href="ocean.css" rel="stylesheet" type="text/css" title="Ocean" /><script src="haddock-util.js" type="text/javascript"></script><script type="text/javascript">//<![CDATA[
window.onload = function () {pageLoad();setSynopsis("mini_TensorFlow-Ops.html");};
//]]>
</script></head><body><div id="package-header"><ul class="links" id="page-menu"><li><a href="index.html">Contents</a></li><li><a href="doc-index.html">Index</a></li></ul><p class="caption">tensorflow-ops-0.1.0.0: Friendly layer around TensorFlow bindings.</p></div><div id="content"><div id="module-header"><table class="info"><tr><th>Safe Haskell</th><td>None</td></tr><tr><th>Language</th><td>Haskell2010</td></tr></table><p class="caption">TensorFlow.Ops</p></div><div id="description"><p class="caption">Description</p><div class="doc"><p>This module contains definitions for some built-in TensorFlow operations.</p><p>Note that certain, &quot;stateful&quot; ops like <code><a href="TensorFlow-Ops.html#v:variable">variable</a></code> and <code><a href="TensorFlow-Ops.html#v:assign">assign</a></code> return a
<code><a href="../tensorflow-0.1.0.0/TensorFlow-Build.html#t:Build">Build</a></code> action (e.g., <code>Build (Tensor Ref a)</code> instead of a pure value; the
returned <code><a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a></code>s are always rendered in the current <code><a href="../tensorflow-0.1.0.0/TensorFlow-Build.html#t:Build">Build</a></code> context. This
approach helps us avoid problems with inlining or common subexpression
elimination, by writing</p><pre>do
v &lt;- variable []
w &lt;- assign v 3
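-- variable and assign are stateful ops, so they return Build actions
-- and are bound with &lt;- here rather than used as pure values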
render $ w * w</pre><p>instead of</p><pre>let
v = variable []
w = assign v 3
in w * w</pre><p>since the latter could reasonably be transformed by the compiler into (or
vice versa)</p><pre>let
v = variable []
w = assign v 3
w' = assign v 3
in w * w'</pre><p>Ops should return a <code><a href="../tensorflow-0.1.0.0/TensorFlow-Build.html#t:Build">Build</a></code> action if their original <code>OpDef</code> marks them as
stateful, or if they take any Refs as input. (This mirrors the rules that
TensorFlow uses to avoid common subexpression elimination.)</p></div></div><div id="synopsis"><p id="control.syn" class="caption expander" onclick="toggleSection('syn')">Synopsis</p><ul id="section.syn" class="hide" onclick="toggleSection('syn')"><li class="src short"><a href="#v:add">add</a> :: (<a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> ((:) * (<a href="../base-4.8.2.0/Data-Complex.html#t:Complex">Complex</a> <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a>) ((:) * (<a href="../base-4.8.2.0/Data-Complex.html#t:Complex">Complex</a> <a href="../base-4.8.2.0/Prelude.html#t:Float">Float</a>) ((:) * <a href="../bytestring-0.10.6.0/Data-ByteString.html#t:ByteString">ByteString</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int16">Int16</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int64">Int64</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int8">Int8</a> ((:) * <a href="../base-4.8.2.0/Data-Word.html#t:Word16">Word16</a> ((:) * <a href="../base-4.8.2.0/Data-Word.html#t:Word8">Word8</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Float">Float</a> ([] *)))))))))))) t) =&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 t -&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v2 t -&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> t</li><li class="src short"><a href="#v:abs">abs</a> :: (<a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int64">Int64</a> ((:) * <a href="../base-4.8.2.0/Data-Word.html#t:Word16">Word16</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Float">Float</a> ([] *)))))) t) =&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 t -&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> t</li><li class="src short"><a href="#v:addN">addN</a> :: (<a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> ((:) * (<a href="../base-4.8.2.0/Data-Complex.html#t:Complex">Complex</a> <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a>) ((:) * (<a href="../base-4.8.2.0/Data-Complex.html#t:Complex">Complex</a> <a href="../base-4.8.2.0/Prelude.html#t:Float">Float</a>) ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int16">Int16</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int64">Int64</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int8">Int8</a> ((:) * <a href="../base-4.8.2.0/Data-Word.html#t:Word16">Word16</a> ((:) * <a href="../base-4.8.2.0/Data-Word.html#t:Word8">Word8</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Float">Float</a> ([] *))))))))))) t) =&gt; [<a 
href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 t] -&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> t</li><li class="src short"><a href="#v:argMax">argMax</a> :: (<a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> ((:) * (<a href="../base-4.8.2.0/Data-Complex.html#t:Complex">Complex</a> <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a>
<a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html">here</a></li></ul></div></div><div class="top"><p class="src"><a name="v:abs" class="def">abs</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: (<a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int64">Int64</a> ((:) * <a href="../base-4.8.2.0/Data-Word.html#t:Word16">Word16</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Float">Float</a> ([] *)))))) t)</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 t</td><td class="doc"><p><strong>x</strong></p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> t</td><td class="doc"><p><strong>y</strong></p></td></tr></table></div><div class="doc"><p>Computes the absolute value of a tensor.</p><p>Given a tensor <code>x</code>, this operation returns a tensor containing the absolute
value of each element in <code>x</code>. For example, if x is an input element and y is
an output element, this operation computes \(y = |x|\).</p></div></div><div class="top"><p class="src"><a name="v:addN" class="def">addN</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: (<a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> ((:) * (<a href="../base-4.8.2.0/Data-Complex.html#t:Complex">Complex</a> <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a>) ((:) * (<a href="../base-4.8.2.0/Data-Complex.html#t:Complex">Complex</a> <a href="../base-4.8.2.0/Prelude.html#t:Float">Float</a>) ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int16">Int16</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int64">Int64</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int8">Int8</a> ((:) * <a href="../base-4.8.2.0/Data-Word.html#t:Word16">Word16</a> ((:) * <a href="../base-4.8.2.0/Data-Word.html#t:Word8">Word8</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Float">Float</a> ([] *))))))))))) t)</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; [<a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 t]</td><td class="doc"><p><strong>inputs</strong>: Must all be the same size and shape.</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> t</td><td class="doc"><p><strong>sum</strong></p></td></tr></table></div><div class="doc"><p>Add all input tensors element wise.</p></div></div><div class="top"><p class="src"><a name="v:argMax" class="def">argMax</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: (<a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> ((:) * (<a href="../base-4.8.2.0/Data-Complex.html#t:Complex">Complex</a> <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a>) ((:) * (<a href="../base-4.8.2.0/Data-Complex.html#t:Complex">Complex</a> <a href="../base-4.8.2.0/Prelude.html#t:Float">Float</a>) ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int16">Int16</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int64">Int64</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int8">Int8</a> ((:) * <a href="../base-4.8.2.0/Data-Word.html#t:Word16">Word16</a> ((:) * <a href="../base-4.8.2.0/Data-Word.html#t:Word8">Word8</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Float">Float</a> ([] *))))))))))) t, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> tidx, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int64">Int64</a> ([] *))) tidx)</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 t</td><td class="doc"><p><strong>input</strong></p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v2 tidx</td><td 
class="doc"><p><strong>dimension</strong>: int32, 0 &lt;= dimension &lt; rank(input). Describes which dimension
of the input Tensor to reduce across. For vectors, use dimension = 0.</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> <a href="../base-4.8.2.0/Data-Int.html#t:Int64">Int64</a></td><td class="doc"><p><strong>output</strong></p></td></tr></table></div><div class="doc"><p>Returns the index with the largest value across dimensions of a tensor.</p></div></div><div class="top"><p class="src"><a name="v:assign" class="def">assign</a> :: <span class="keyword">forall</span> a v. <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> a =&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Ref">Ref</a> a -&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v a -&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Build.html#t:Build">Build</a> (<a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Ref">Ref</a> a)</p></div><div class="top"><p class="src"><a name="v:broadcastGradientArgs" class="def">broadcastGradientArgs</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: (<a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int64">Int64</a> ([] *))) t)</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 t</td><td class="doc"><p><strong>s0</strong></p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v2 t</td><td class="doc"><p><strong>s1</strong></p></td></tr><tr><td class="src">-&gt; (<a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> t, <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> t)</td><td class="doc"><p>(<strong>r0</strong>, <strong>r1</strong>)</p><ul><li><strong>r0</strong></li><li><strong>r1</strong></li></ul></td></tr></table></div><div class="doc"><p>Return the reduction indices for computing gradients of s0 op s1 with broadcast.</p><p>This is typically used by gradient computations for a broadcasting operation.</p></div></div><div class="top"><p class="src"><a name="v:cast" class="def">cast</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: (<a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> dstT, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> srcT)</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 srcT</td><td class="doc"><p><strong>x</strong></p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> dstT</td><td class="doc"><p><strong>y</strong></p></td></tr></table></div><div class="doc"><p>Cast x of type SrcT 
to y of DstT.</p></div></div><div class="top"><p class="src"><a name="v:concat" class="def">concat</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a></td><td class="doc"><p><strong>concat_dim</strong>: 0-D. The dimension along which to concatenate. Must be in the
range [0, rank(values)).</p></td></tr><tr><td class="src">-&gt; [<a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v2 t]</td><td class="doc"><p><strong>values</strong>: The <code>N</code> Tensors to concatenate. Their ranks and types must match,
and their sizes must match in all dimensions except <code>concat_dim</code>.</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> t</td><td class="doc"><p><strong>output</strong>: A <code><a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a></code> with the concatenation of values stacked along the
<code>concat_dim</code> dimension. This tensor's shape matches that of <code>values</code> except
in <code>concat_dim</code> where it has the sum of the sizes.</p></td></tr></table></div><div class="doc"><p>Concatenates tensors along one dimension.</p></div></div><div class="top"><p class="src"><a name="v:constant" class="def">constant</a> :: <span class="keyword">forall</span> a. <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> a =&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:Shape">Shape</a> -&gt; [a] -&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> a</p><div class="doc"><p>Create a constant tensor.</p><p>The values should be in row major order, e.g.,</p><p>element 0: index (0, ..., 0)
element 1: index (0, ..., 1)
...</p></div></div><div class="top"><p class="src"><a name="v:equal" class="def">equal</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: (<a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> ((:) * (<a href="../base-4.8.2.0/Data-Complex.html#t:Complex">Complex</a> <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a>) ((:) * (<a href="../base-4.8.2.0/Data-Complex.html#t:Complex">Complex</a> <a href="../base-4.8.2.0/Prelude.html#t:Float">Float</a>) ((:) * <a href="../base-4.8.2.0/Data-Bool.html#t:Bool">Bool</a> ((:) * <a href="../bytestring-0.10.6.0/Data-ByteString.html#t:ByteString">ByteString</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int16">Int16</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int64">Int64</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int8">Int8</a> ((:) * <a href="../base-4.8.2.0/Data-Word.html#t:Word16">Word16</a> ((:) * <a href="../base-4.8.2.0/Data-Word.html#t:Word8">Word8</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Float">Float</a> ([] *))))))))))))) t)</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 t</td><td class="doc"><p><strong>x</strong></p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v2 t</td><td class="doc"><p><strong>y</strong></p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> <a href="../base-4.8.2.0/Data-Bool.html#t:Bool">Bool</a></td><td class="doc"><p><strong>z</strong></p></td></tr></table></div><div class="doc"><p>Returns the truth value of (x == y) element-wise.</p><ul><li>NOTE*: <code>Equal</code> supports broadcasting. More about broadcasting
<a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html">here</a></li></ul></div></div><div class="top"><p class="src"><a name="v:expandDims" class="def">expandDims</a> :: <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t =&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 t -&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v2 <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a> -&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> t</p></div><div class="top"><p class="src"><a name="v:initializedVariable" class="def">initializedVariable</a> :: <span class="keyword">forall</span> a. <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> a =&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> a -&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Build.html#t:Build">Build</a> (<a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Ref">Ref</a> a)</p><div class="doc"><p>Creates a variable initialized to the given value.
Initialization happens next time session runs.</p></div></div><div class="top"><p class="src"><a name="v:zeroInitializedVariable" class="def">zeroInitializedVariable</a> :: (<a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> a, <a href="../base-4.8.2.0/Prelude.html#t:Num">Num</a> a) =&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:Shape">Shape</a> -&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Build.html#t:Build">Build</a> (<a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Ref">Ref</a> a)</p><div class="doc"><p>Creates a zero-initialized variable with the given shape.</p></div></div><div class="top"><p class="src"><a name="v:fill" class="def">fill</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a></td><td class="doc"><p><strong>dims</strong>: 1-D. Represents the shape of the output tensor.</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v2 t</td><td class="doc"><p><strong>value</strong>: 0-D (scalar). Value to fill the returned tensor.</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> t</td><td class="doc"><p><strong>output</strong></p></td></tr></table></div><div class="doc"><p>Creates a tensor filled with a scalar value.</p><p>This operation creates a tensor of shape <code>dims</code> and fills it with <code><a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#v:value">value</a></code>.</p><p>For example:</p><p>```prettyprint
# Output tensor has shape [2, 3].
fill([2, 3], 9) ==&gt; [[9, 9, 9]
[9, 9, 9]]
```</p></div></div><div class="top"><p class="src"><a name="v:oneHot" class="def">oneHot</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: (<a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> tI, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int64">Int64</a> ((:) * <a href="../base-4.8.2.0/Data-Word.html#t:Word8">Word8</a> ([] *)))) tI)</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 tI</td><td class="doc"><p><strong>indices</strong>: A tensor of indices.</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v2 <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a></td><td class="doc"><p><strong>depth</strong>: A scalar defining the depth of the one hot dimension.</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v3 t</td><td class="doc"><p><strong>on_value</strong>: A scalar defining the value to fill in output when `indices[j] = i`.</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v4 t</td><td class="doc"><p><strong>off_value</strong>: A scalar defining the value to fill in output when `indices[j] != i`.</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> t</td><td class="doc"><p><strong>output</strong>: The one-hot tensor.</p></td></tr></table></div><div class="doc"><p>Returns a one-hot tensor.</p><p>The locations represented by indices in <code>indices</code> take value <code>on_value</code>,
while all other locations take value <code>off_value</code>.</p><p>If the input <code>indices</code> is rank <code>N</code>, the output will have rank `N+1`.
The new axis is created at dimension <code>axis</code> (default: the new axis is
appended at the end).</p><p>If <code>indices</code> is a scalar the output shape will be a vector of length <code>depth</code>.</p><p>If <code>indices</code> is a vector of length <code>features</code>, the output shape will be:
```
features x depth if axis == -1
depth x features if axis == 0
```</p><p>If <code>indices</code> is a matrix (batch) with shape `[batch, features]`,
the output shape will be:
```
batch x features x depth if axis == -1
batch x depth x features if axis == 1
depth x batch x features if axis == 0
```</p><p>Examples
=========</p><p>Suppose that</p><p>```
indices = [0, 2, -1, 1]
depth = 3
on_value = 5.0
off_value = 0.0
axis = -1
```</p><p>Then output is `[4 x 3]`:</p><p>```output =
[5.0 0.0 0.0] // one_hot(0)
[0.0 0.0 5.0] // one_hot(2)
[0.0 0.0 0.0] // one_hot(-1)
[0.0 5.0 0.0] // one_hot(1)
```</p><p>Suppose that</p><p>```
indices = [0, 2, -1, 1]
depth = 3
on_value = 0.0
off_value = 3.0
axis = 0
```</p><p>Then output is `[3 x 4]`:</p><p>```output =
[0.0 3.0 3.0 3.0]
[3.0 3.0 3.0 0.0]
[3.0 0.0 3.0 3.0]
// ^ one_hot(0)
// ^ one_hot(2)
// ^ one_hot(-1)
// ^ one_hot(1)
```
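</p><p>(For illustration only: the first example above might be written with this binding roughly as below, a sketch that assumes the <code>constant</code> and <code>scalar</code> helpers from this module and an <code>Int32</code> index / <code>Float</code> value type.)</p><pre>-- Hypothetical sketch of the first example: indices [0, 2, -1, 1],
-- depth 3, on_value 5.0, off_value 0.0 (the axis attribute keeps its default).
oneHotExample :: Tensor Value Float
oneHotExample =
    oneHot (constant (Shape [4]) [0, 2, -1, 1 :: Int32])
           (scalar 3)   -- depth
           (scalar 5)   -- on_value
           (scalar 0)   -- off_value</pre><p>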
Suppose that</p><p>```
indices = [[0, 2], [1, -1]]
depth = 3
on_value = 1.0
off_value = 0.0
axis = -1
```</p><p>Then output is `[2 x 2 x 3]`:</p><p>```output =
[
[1.0, 0.0, 0.0] // one_hot(0)
[0.0, 0.0, 1.0] // one_hot(2)
][
[0.0, 1.0, 0.0] // one_hot(1)
[0.0, 0.0, 0.0] // one_hot(-1)
]```</p></div></div><div class="top"><p class="src"><a name="v:matMul" class="def">matMul</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: (<a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> ((:) * (<a href="../base-4.8.2.0/Data-Complex.html#t:Complex">Complex</a> <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a>) ((:) * (<a href="../base-4.8.2.0/Data-Complex.html#t:Complex">Complex</a> <a href="../base-4.8.2.0/Prelude.html#t:Float">Float</a>) ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a> ((:) * <a href="../base-4.8.2.0/Data-Word.html#t:Word16">Word16</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Float">Float</a> ([] *))))))) t)</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 t</td><td class="doc"><p><strong>a</strong></p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v2 t</td><td class="doc"><p><strong>b</strong></p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> t</td><td class="doc"><p><strong>product</strong></p></td></tr></table></div><div class="doc"><p>Multiply the matrix &quot;a&quot; by the matrix &quot;b&quot;.</p><p>The inputs must be two-dimensional matrices and the inner dimension of
&quot;a&quot; (after being transposed if transpose_a is true) must match the
outer dimension of &quot;b&quot; (after being transposed if transpose_b is
true).</p><ul><li>Note*: The default kernel implementation for MatMul on GPUs uses
cublas.</li></ul></div></div><div class="top"><p class="src"><a name="v:matTranspose" class="def">matTranspose</a> :: <span class="keyword">forall</span> a v. <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> a =&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v a -&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> a</p></div><div class="top"><p class="src"><a name="v:mean" class="def">mean</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: (<a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> ((:) * (<a href="../base-4.8.2.0/Data-Complex.html#t:Complex">Complex</a> <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a>) ((:) * (<a href="../base-4.8.2.0/Data-Complex.html#t:Complex">Complex</a> <a href="../base-4.8.2.0/Prelude.html#t:Float">Float</a>) ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int16">Int16</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int64">Int64</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int8">Int8</a> ((:) * <a href="../base-4.8.2.0/Data-Word.html#t:Word16">Word16</a> ((:) * <a href="../base-4.8.2.0/Data-Word.html#t:Word8">Word8</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Float">Float</a> ([] *))))))))))) t, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> tidx, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int64">Int64</a> ([] *))) tidx)</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 t</td><td class="doc"><p><strong>input</strong>: The tensor to reduce.</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v2 tidx</td><td class="doc"><p><strong>reduction_indices</strong>: The dimensions to reduce.</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> t</td><td class="doc"><p><strong>output</strong>: The reduced tensor.</p></td></tr></table></div><div class="doc"><p>Computes the mean of elements across dimensions of a tensor.</p><p>Reduces <code>input</code> along the dimensions given in <code>reduction_indices</code>. Unless
<code>keep_dims</code> is true, the rank of the tensor is reduced by 1 for each entry in
<code>reduction_indices</code>. If <code>keep_dims</code> is true, the reduced dimensions are
retained with length 1.</p></div></div><div class="top"><p class="src"><a name="v:mul" class="def">mul</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: (<a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> ((:) * (<a href="../base-4.8.2.0/Data-Complex.html#t:Complex">Complex</a> <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a>) ((:) * (<a href="../base-4.8.2.0/Data-Complex.html#t:Complex">Complex</a> <a href="../base-4.8.2.0/Prelude.html#t:Float">Float</a>) ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int16">Int16</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int64">Int64</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int8">Int8</a> ((:) * <a href="../base-4.8.2.0/Data-Word.html#t:Word16">Word16</a> ((:) * <a href="../base-4.8.2.0/Data-Word.html#t:Word8">Word8</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Float">Float</a> ([] *))))))))))) t)</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 t</td><td class="doc"><p><strong>x</strong></p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v2 t</td><td class="doc"><p><strong>y</strong></p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> t</td><td class="doc"><p><strong>z</strong></p></td></tr></table></div><div class="doc"><p>Returns x * y element-wise.</p><ul><li>NOTE*: <code>Mul</code> supports broadcasting. More about broadcasting
<a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html">here</a></li></ul></div></div><div class="top"><p class="src"><a name="v:neg" class="def">neg</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: (<a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> ((:) * (<a href="../base-4.8.2.0/Data-Complex.html#t:Complex">Complex</a> <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a>) ((:) * (<a href="../base-4.8.2.0/Data-Complex.html#t:Complex">Complex</a> <a href="../base-4.8.2.0/Prelude.html#t:Float">Float</a>) ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int64">Int64</a> ((:) * <a href="../base-4.8.2.0/Data-Word.html#t:Word16">Word16</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Float">Float</a> ([] *)))))))) t)</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 t</td><td class="doc"><p><strong>x</strong></p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> t</td><td class="doc"><p><strong>y</strong></p></td></tr></table></div><div class="doc"><p>Computes numerical negative value element-wise.</p><p>I.e., \(y = -x\).</p></div></div><div class="top"><p class="src"><a name="v:pack" class="def">pack</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; [<a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 t]</td><td class="doc"><p><strong>values</strong>: Must be of same shape and type.</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> t</td><td class="doc"><p><strong>output</strong>: The packed tensor.</p></td></tr></table></div><div class="doc"><p>Packs a list of <code>N</code> rank-<code>R</code> tensors into one rank-`(R+1)` tensor.</p><p>Packs the <code>N</code> tensors in <code>values</code> into a tensor with rank one higher than each
tensor in <code>values</code>, by packing them along the <code>axis</code> dimension.
Given a list of tensors of shape `(A, B, C)`;</p><p>if `axis == 0` then the <code>output</code> tensor will have the shape `(N, A, B, C)`.
if `axis == 1` then the <code>output</code> tensor will have the shape `(A, N, B, C)`.
Etc.</p><p>For example:</p><p>```prettyprint
# <code>x</code> is [1, 4]
# <code>y</code> is [2, 5]
# <code>z</code> is [3, 6]
pack([x, y, z]) =&gt; [[1, 4], [2, 5], [3, 6]] # Pack along first dim.
pack([x, y, z], axis=1) =&gt; [[1, 2, 3], [4, 5, 6]]
```</p><p>This is the opposite of <code><a href="../tensorflow-core-ops-0.1.0.0/TensorFlow-GenOps-Core.html#v:unpack">unpack</a></code>.</p></div></div><div class="top"><p class="src"><a name="v:placeholder" class="def">placeholder</a> :: <span class="keyword">forall</span> a. <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> a =&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:Shape">Shape</a> -&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Build.html#t:Build">Build</a> (<a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> a)</p></div><div class="top"><p class="src"><a name="v:range" class="def">range</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: (<a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> tidx, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int64">Int64</a> ([] *))) tidx)</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 tidx</td><td class="doc"><p><strong>start</strong>: 0-D (scalar). First entry in the sequence.</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v2 tidx</td><td class="doc"><p><strong>limit</strong>: 0-D (scalar). Upper limit of sequence, exclusive.</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v3 tidx</td><td class="doc"><p><strong>delta</strong>: 0-D (scalar). Optional. Default is 1. Number that increments <code>start</code>.</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> tidx</td><td class="doc"><p><strong>output</strong>: 1-D.</p></td></tr></table></div><div class="doc"><p>Creates a sequence of integers.</p><p>This operation creates a sequence of integers that begins at <code>start</code> and
extends by increments of <code>delta</code> up to but not including <code>limit</code>.</p><p>For example:</p><p>```
# <code>start</code> is 3
# <code>limit</code> is 18
# <code>delta</code> is 3
tf.range(start, limit, delta) ==&gt; [3, 6, 9, 12, 15]
```</p></div></div><div class="top"><p class="src"><a name="v:reducedShape" class="def">reducedShape</a> :: (<a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> `[<a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a>, <a href="../base-4.8.2.0/Data-Int.html#t:Int64">Int64</a>]` t1, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> `[<a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a>, <a href="../base-4.8.2.0/Data-Int.html#t:Int64">Int64</a>]` t2) =&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 t1 -&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v2 t2 -&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a></p><div class="doc"><p>Helper function for reduction ops (translation of math_ops.reduced_shape).</p></div></div><div class="top"><p class="src"><a name="v:relu" class="def">relu</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: (<a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int16">Int16</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int64">Int64</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int8">Int8</a> ((:) * <a href="../base-4.8.2.0/Data-Word.html#t:Word16">Word16</a> ((:) * <a href="../base-4.8.2.0/Data-Word.html#t:Word8">Word8</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Float">Float</a> ([] *))))))))) t)</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 t</td><td class="doc"><p><strong>features</strong></p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> t</td><td class="doc"><p><strong>activations</strong></p></td></tr></table></div><div class="doc"><p>Computes rectified linear: `max(features, 0)`.</p></div></div><div class="top"><p class="src"><a name="v:reluGrad" class="def">reluGrad</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: (<a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int16">Int16</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int64">Int64</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int8">Int8</a> ((:) * <a href="../base-4.8.2.0/Data-Word.html#t:Word16">Word16</a> ((:) * <a href="../base-4.8.2.0/Data-Word.html#t:Word8">Word8</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Float">Float</a> ([] *))))))))) t)</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 t</td><td class="doc"><p><strong>gradients</strong>: The backpropagated gradients to the corresponding Relu 
operation.</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v2 t</td><td class="doc"><p><strong>features</strong>: The features passed as input to the corresponding Relu operation, OR
the outputs of that operation (both work equivalently).</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> t</td><td class="doc"><p><strong>backprops</strong>: `gradients * (features &gt; 0)`.</p></td></tr></table></div><div class="doc"><p>Computes rectified linear gradients for a Relu operation.</p></div></div><div class="top"><p class="src"><a name="v:reshape" class="def">reshape</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: (<a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> tshape, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int64">Int64</a> ([] *))) tshape)</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 t</td><td class="doc"><p><strong>tensor</strong></p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v2 tshape</td><td class="doc"><p><strong>shape</strong>: Defines the shape of the output tensor.</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> t</td><td class="doc"><p><strong>output</strong></p></td></tr></table></div><div class="doc"><p>Reshapes a tensor.</p><p>Given <code>tensor</code>, this operation returns a tensor that has the same values
as <code>tensor</code> with shape <code><a href="../tensorflow-core-ops-0.1.0.0/TensorFlow-GenOps-Core.html#v:shape">shape</a></code>.</p><p>If one component of <code><a href="../tensorflow-core-ops-0.1.0.0/TensorFlow-GenOps-Core.html#v:shape">shape</a></code> is the special value -1, the size of that dimension
is computed so that the total size remains constant. In particular, a <code><a href="../tensorflow-core-ops-0.1.0.0/TensorFlow-GenOps-Core.html#v:shape">shape</a></code>
of `[-1]` flattens into 1-D. At most one component of <code><a href="../tensorflow-core-ops-0.1.0.0/TensorFlow-GenOps-Core.html#v:shape">shape</a></code> can be -1.</p><p>If <code><a href="../tensorflow-core-ops-0.1.0.0/TensorFlow-GenOps-Core.html#v:shape">shape</a></code> is 1-D or higher, then the operation returns a tensor with shape
<code><a href="../tensorflow-core-ops-0.1.0.0/TensorFlow-GenOps-Core.html#v:shape">shape</a></code> filled with the values of <code>tensor</code>. In this case, the number of elements
implied by <code><a href="../tensorflow-core-ops-0.1.0.0/TensorFlow-GenOps-Core.html#v:shape">shape</a></code> must be the same as the number of elements in <code>tensor</code>.</p><p>For example:</p><p>```prettyprint
# tensor <code>t</code> is [1, 2, 3, 4, 5, 6, 7, 8, 9]
# tensor <code>t</code> has shape [9]
reshape(t, [3, 3]) ==&gt; [[1, 2, 3],
[4, 5, 6],
[7, 8, 9]]</p><p># tensor <code>t</code> is [[[1, 1], [2, 2]],
# [[3, 3], [4, 4]]]
# tensor <code>t</code> has shape [2, 2, 2]
reshape(t, [2, 4]) ==&gt; [[1, 1, 2, 2],
[3, 3, 4, 4]]</p><p># tensor <code>t</code> is [[[1, 1, 1],
# [2, 2, 2]],
# [[3, 3, 3],
# [4, 4, 4]],
# [[5, 5, 5],
# [6, 6, 6]]]
# tensor <code>t</code> has shape [3, 2, 3]
# pass '[-1]' to flatten <code>t</code>
reshape(t, [-1]) ==&gt; [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6]</p><p># -1 can also be used to infer the shape</p><p># -1 is inferred to be 9:
reshape(t, [2, -1]) ==&gt; [[1, 1, 1, 2, 2, 2, 3, 3, 3],
[4, 4, 4, 5, 5, 5, 6, 6, 6]]
# -1 is inferred to be 2:
reshape(t, [-1, 9]) ==&gt; [[1, 1, 1, 2, 2, 2, 3, 3, 3],
[4, 4, 4, 5, 5, 5, 6, 6, 6]]
# -1 is inferred to be 3:
reshape(t, [2, -1, 3]) ==&gt; [[[1, 1, 1],
[2, 2, 2],
[3, 3, 3]],
[[4, 4, 4],
[5, 5, 5],
[6, 6, 6]]]</p><p># tensor <code>t</code> is [7]
# shape `[]` reshapes to a scalar
reshape(t, []) ==&gt; 7
```</p></div></div><div class="top"><p class="src"><a name="v:restore" class="def">restore</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> a</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; <a href="../bytestring-0.10.6.0/Data-ByteString.html#t:ByteString">ByteString</a></td><td class="doc"><p>File path.</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Ref">Ref</a> a</td><td class="doc"><p>Tensor to restore.</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Build.html#t:Build">Build</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Output.html#t:ControlNode">ControlNode</a></td><td class="doc empty">&nbsp;</td></tr></table></div><div class="doc"><p>Restore a tensor's value from a checkpoint file.</p></div></div><div class="top"><p class="src"><a name="v:restoreFromName" class="def">restoreFromName</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> a</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; <a href="../bytestring-0.10.6.0/Data-ByteString.html#t:ByteString">ByteString</a></td><td class="doc"><p>File path.</p></td></tr><tr><td class="src">-&gt; <a href="../bytestring-0.10.6.0/Data-ByteString.html#t:ByteString">ByteString</a></td><td class="doc"><p>Tensor name override.</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Ref">Ref</a> a</td><td class="doc"><p>Tensor to restore.</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Build.html#t:Build">Build</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Output.html#t:ControlNode">ControlNode</a></td><td class="doc empty">&nbsp;</td></tr></table></div><div class="doc"><p>Restore a tensor's value from a checkpoint file.</p><p>This version allows restoring from a checkpoint file that uses a different
tensor name than the variable.</p></div></div><div class="top"><p class="src"><a name="v:save" class="def">save</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> a</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; <a href="../bytestring-0.10.6.0/Data-ByteString.html#t:ByteString">ByteString</a></td><td class="doc"><p>File path.</p></td></tr><tr><td class="src">-&gt; [<a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v a]</td><td class="doc"><p>Tensors to save.</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Build.html#t:Build">Build</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Output.html#t:ControlNode">ControlNode</a></td><td class="doc empty">&nbsp;</td></tr></table></div></div><div class="top"><p class="src"><a name="v:scalar" class="def">scalar</a> :: <span class="keyword">forall</span> a. <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> a =&gt; a -&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> a</p><div class="doc"><p>Create a constant scalar.</p></div></div><div class="top"><p class="src"><a name="v:shape" class="def">shape</a> :: <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t =&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 t -&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a></p></div><div class="top"><p class="src"><a name="v:sign" class="def">sign</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: (<a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> ((:) * (<a href="../base-4.8.2.0/Data-Complex.html#t:Complex">Complex</a> <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a>) ((:) * (<a href="../base-4.8.2.0/Data-Complex.html#t:Complex">Complex</a> <a href="../base-4.8.2.0/Prelude.html#t:Float">Float</a>) ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int64">Int64</a> ((:) * <a href="../base-4.8.2.0/Data-Word.html#t:Word16">Word16</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Float">Float</a> ([] *)))))))) t)</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 t</td><td class="doc"><p><strong>x</strong></p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> t</td><td class="doc"><p><strong>y</strong></p></td></tr></table></div><div class="doc"><p>Returns an element-wise indication of the sign of a number.</p><p>`y = sign(x) = -1` if `x &lt; 0`; 0 if `x == 0`; 1 if `x &gt; 0`.</p><p>For complex numbers, `y = sign(x) = x / |x|` if `x != 0`, otherwise `y = 0`.</p></div></div><div class="top"><p class="src"><a name="v:size" class="def">size</a></p><div class="subs arguments"><p 
class="caption">Arguments</p><table><tr><td class="src">:: (<a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> out_type, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int64">Int64</a> ([] *))) out_type)</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; <a href="../tensorflow-0.1.0.0/TensorFlow
<code>input</code>.</p><p>For example:</p><p>```prettyprint
# <code>t</code> is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
size(t) ==&gt; 12
```</p></div></div><div class="top"><p class="src"><a name="v:softmax" class="def">softmax</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: (<a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> ((:) * <a href="../base-4.8.2.0/Data-Word.html#t:Word16">Word16</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Float">Float</a> ([] *)))) t)</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 t</td><td class="doc"><p><strong>logits</strong>: 2-D with shape `[batch_size, num_classes]`.</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> t</td><td class="doc"><p><strong>softmax</strong>: Same shape as <code>logits</code>.</p></td></tr></table></div><div class="doc"><p>Computes softmax activations.</p><p>For each batch <code>i</code> and class <code>j</code> we have</p><p>softmax[i, j] = exp(logits[i, j]) / sum_j(exp(logits[i, j]))</p></div></div><div class="top"><p class="src"><a name="v:softmaxCrossEntropyWithLogits" class="def">softmaxCrossEntropyWithLogits</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: (<a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> ((:) * <a href="../base-4.8.2.0/Data-Word.html#t:Word16">Word16</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Float">Float</a> ([] *)))) t)</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 t</td><td class="doc"><p><strong>features</strong>: batch_size x num_classes matrix</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v2 t</td><td class="doc"><p><strong>labels</strong>: batch_size x num_classes matrix
The caller must ensure that each batch of labels represents a valid
probability distribution.</p></td></tr><tr><td class="src">-&gt; (<a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> t, <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> t)</td><td class="doc"><p>(<strong>loss</strong>, <strong>backprop</strong>)</p><ul><li><strong>loss</strong>: Per example loss (batch_size vector).</li><li><strong>backprop</strong>: backpropagated gradients (batch_size x num_classes matrix).</li></ul></td></tr></table></div><div class="doc"><p>Computes softmax cross entropy cost and gradients to backpropagate.</p><p>Inputs are the logits, not probabilities.</p></div></div><div class="top"><p class="src"><a name="v:sparseToDense" class="def">sparseToDense</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: (<a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> tindices, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int64">Int64</a> ([] *))) tindices)</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 tindices</td><td class="doc"><p><strong>sparse_indices</strong>: 0-D, 1-D, or 2-D. `sparse_indices[i]` contains the complete
index where `sparse_values[i]` will be placed.</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v2 tindices</td><td class="doc"><p><strong>output_shape</strong>: 1-D. Shape of the dense output tensor.</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v3 t</td><td class="doc"><p><strong>sparse_values</strong>: 1-D. Values corresponding to each row of <code>sparse_indices</code>,
or a scalar value to be used for all sparse indices.</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v4 t</td><td class="doc"><p><strong>default_value</strong>: Scalar value to set for indices not specified in
<code>sparse_indices</code>.</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> t</td><td class="doc"><p><strong>dense</strong>: Dense output tensor of shape <code>output_shape</code>.</p></td></tr></table></div><div class="doc"><p>Converts a sparse representation into a dense tensor.</p><p>Builds an array <code>dense</code> with shape <code>output_shape</code> such that</p><pre># If sparse_indices is scalar
dense[i] = (i == sparse_indices ? sparse_values : default_value)

# If sparse_indices is a vector, then for each i
dense[sparse_indices[i]] = sparse_values[i]

# If sparse_indices is an n by d matrix, then for each i in [0, n)
dense[sparse_indices[i][0], ..., sparse_indices[i][d-1]] = sparse_values[i]</pre><p>All other values in <code>dense</code> are set to <code>default_value</code>. If <code>sparse_values</code> is a
scalar, all sparse indices are set to this single value.</p><p>Indices should be sorted in lexicographic order, and indices must not
contain any repeats. If <code>validate_indices</code> is true, these properties
are checked during execution.</p></div></div><div class="top"><p class="src"><a name="v:sub" class="def">sub</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: (<a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> ((:) * (<a href="../base-4.8.2.0/Data-Complex.html#t:Complex">Complex</a> <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a>) ((:) * (<a href="../base-4.8.2.0/Data-Complex.html#t:Complex">Complex</a> <a href="../base-4.8.2.0/Prelude.html#t:Float">Float</a>) ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int64">Int64</a> ((:) * <a href="../base-4.8.2.0/Data-Word.html#t:Word16">Word16</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Float">Float</a> ([] *)))))))) t)</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 t</td><td class="doc"><p><strong>x</strong></p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v2 t</td><td class="doc"><p><strong>y</strong></p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> t</td><td class="doc"><p><strong>z</strong></p></td></tr></table></div><div class="doc"><p>Returns x - y element-wise.</p><ul><li>NOTE*: <code>Sub</code> supports broadcasting. More about broadcasting
<a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html">here</a></li></ul></div></div><div class="top"><p class="src"><a name="v:sum" class="def">sum</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: (<a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> ((:) * (<a href="../base-4.8.2.0/Data-Complex.html#t:Complex">Complex</a> <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a>) ((:) * (<a href="../base-4.8.2.0/Data-Complex.html#t:Complex">Complex</a> <a href="../base-4.8.2.0/Prelude.html#t:Float">Float</a>) ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int16">Int16</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int64">Int64</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int8">Int8</a> ((:) * <a href="../base-4.8.2.0/Data-Word.html#t:Word16">Word16</a> ((:) * <a href="../base-4.8.2.0/Data-Word.html#t:Word8">Word8</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Float">Float</a> ([] *))))))))))) t, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> tidx, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int64">Int64</a> ([] *))) tidx)</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 t</td><td class="doc"><p><strong>input</strong>: The tensor to reduce.</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v2 tidx</td><td class="doc"><p><strong>reduction_indices</strong>: The dimensions to reduce.</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> t</td><td class="doc"><p><strong>output</strong>: The reduced tensor.</p></td></tr></table></div><div class="doc"><p>Computes the sum of elements across dimensions of a tensor.</p><p>Reduces <code>input</code> along the dimensions given in <code>reduction_indices</code>. Unless
<code>keep_dims</code> is true, the rank of the tensor is reduced by 1 for each entry in
<code>reduction_indices</code>. If <code>keep_dims</code> is true, the reduced dimensions are
retained with length 1.</p></div></div><div class="top"><p class="src"><a name="v:topK" class="def">topK</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: (<a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int16">Int16</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int64">Int64</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int8">Int8</a> ((:) * <a href="../base-4.8.2.0/Data-Word.html#t:Word16">Word16</a> ((:) * <a href="../base-4.8.2.0/Data-Word.html#t:Word8">Word8</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Double">Double</a> ((:) * <a href="../base-4.8.2.0/Prelude.html#t:Float">Float</a> ([] *))))))))) t)</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; <a href="../base-4.8.2.0/Data-Int.html#t:Int64">Int64</a></td><td class="doc"><p><strong>k</strong>: Number of top elements to look for along the last dimension (along each
row for matrices).</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 t</td><td class="doc"><p><strong>input</strong>: 1-D or higher with last dimension at least <code>k</code>.</p></td></tr><tr><td class="src">-&gt; (<a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> t, <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a>)</td><td class="doc"><p>(<strong>values</strong>, <strong>indices</strong>)</p><ul><li><strong>values</strong>: The <code>k</code> largest elements along each last dimensional slice.</li><li><strong>indices</strong>: The indices of <code>values</code> within the last dimension of <code>input</code>.</li></ul></td></tr></table></div><div class="doc"><p>Finds values and indices of the <code>k</code> largest elements for the last dimension.</p><p>If the input is a vector (rank-1), finds the <code>k</code> largest entries in the vector
and outputs their values and indices as vectors. Thus `values[j]` is the
<code>j</code>-th largest entry in <code>input</code>, and its index is `indices[j]`.</p><p>For matrices (resp. higher rank input), computes the top <code>k</code> entries in each
row (resp. vector along the last dimension). Thus,</p><p>values.shape = indices.shape = input.shape[:-1] + [k]</p><p>If two elements are equal, the lower-index element appears first.</p><p>If <code>k</code> varies dynamically, use <code>TopKV2</code> below.</p></div></div><div class="top"><p class="src"><a name="v:transpose" class="def">transpose</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: (<a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> tperm, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:OneOf">OneOf</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int32">Int32</a> ((:) * <a href="../base-4.8.2.0/Data-Int.html#t:Int64">Int64</a> ([] *))) tperm)</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 t</td><td class="doc"><p><strong>x</strong></p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v2 tperm</td><td class="doc"><p><strong>perm</strong></p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> t</td><td class="doc"><p><strong>y</strong></p></td></tr></table></div><div class="doc"><p>Shuffle dimensions of x according to a permutation.</p><p>The output <code>y</code> has the same rank as <code>x</code>. The shapes of <code>x</code> and <code>y</code> satisfy:
`y.shape[i] == x.shape[perm[i]] for i in [0, 1, ..., rank(x) - 1]`</p></div></div><div class="top"><p class="src"><a name="v:truncatedNormal" class="def">truncatedNormal</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> a</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v <a href="../base-4.8.2.0/Data-Int.html#t:Int64">Int64</a></td><td class="doc"><p>Shape.</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Build.html#t:Build">Build</a> (<a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> a)</td><td class="doc empty">&nbsp;</td></tr></table></div></div><div class="top"><p class="src"><a name="v:variable" class="def">variable</a> :: <span class="keyword">forall</span> a. <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> a =&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:Shape">Shape</a> -&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Build.html#t:Build">Build</a> (<a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Ref">Ref</a> a)</p><div class="doc"><p>Create a new, uninitialized stateful Tensor of the given shape.</p></div></div><div class="top"><p class="src"><a name="v:vector" class="def">vector</a> :: <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> a =&gt; [a] -&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> a</p><div class="doc"><p>Create a constant vector.</p></div></div><div class="top"><p class="src"><a name="v:zeros" class="def">zeros</a> :: <span class="keyword">forall</span> a. (<a href="../base-4.8.2.0/Prelude.html#t:Num">Num</a> a, <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> a) =&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:Shape">Shape</a> -&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> a</p></div><div class="top"><p class="src"><a name="v:zerosLike" class="def">zerosLike</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: <a href="../tensorflow-0.1.0.0/TensorFlow-Types.html#t:TensorType">TensorType</a> t</td><td class="doc empty">&nbsp;</td></tr><tr><td class="src">=&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> v1 t</td><td class="doc"><p><strong>x</strong>: a tensor of type T.</p></td></tr><tr><td class="src">-&gt; <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Tensor">Tensor</a> <a href="../tensorflow-0.1.0.0/TensorFlow-Tensor.html#t:Value">Value</a> t</td><td class="doc"><p><strong>y</strong>: a tensor of the same shape and type as x but filled with zeros.</p></td></tr></table></div><div class="doc"><p>Returns a tensor of zeros with the same shape and type as x.</p></div></div></div></div><div id="footer"><p>Produced by <a href="http://www.haskell.org/haddock/">Haddock</a> version 2.16.1</p></div></body></html>