Mirror of https://github.com/tensorflow/haskell.git (synced 2024-11-23 11:29:43 +01:00)
Commit 6b19e54722: Update README to refer to 2.3.0-gpu. Remove old package documentation from haddock directory.
13 lines
No EOL
6.3 KiB
HTML
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /><meta name="viewport" content="width=device-width, initial-scale=1" /><title>TensorFlow.EmbeddingOps</title><link href="linuwial.css" rel="stylesheet" type="text/css" title="Linuwial" /><link rel="stylesheet" type="text/css" href="quick-jump.css" /><link rel="stylesheet" type="text/css" href="https://fonts.googleapis.com/css?family=PT+Sans:400,400i,700" /><script src="haddock-bundle.min.js" async="async" type="text/javascript"></script><script type="text/x-mathjax-config">MathJax.Hub.Config({ tex2jax: { processClass: "mathjax", ignoreClass: ".*" } });</script><script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script></head><body><div id="package-header"><span class="caption">tensorflow-ops-0.3.0.0: Friendly layer around TensorFlow bindings.</span><ul class="links" id="page-menu"><li><a href="src/TensorFlow.EmbeddingOps.html">Source</a></li><li><a href="index.html">Contents</a></li><li><a href="doc-index.html">Index</a></li></ul></div><div id="content"><div id="module-header"><table class="info"><tr><th>Safe Haskell</th><td>None</td></tr><tr><th>Language</th><td>Haskell2010</td></tr></table><p class="caption">TensorFlow.EmbeddingOps</p></div><div id="description"><p class="caption">Description</p><div class="doc"><p>Parallel lookups on the list of tensors.</p></div></div><div id="synopsis"><details id="syn"><summary>Synopsis</summary><ul class="details-toggle" data-details-id="syn"><li class="src short"><a href="#v:embeddingLookup">embeddingLookup</a> :: <span class="keyword">forall</span> a b v1 v2 m. 
(<a href="../tensorflow-0.3.0.0/TensorFlow-Build.html#t:MonadBuild" title="TensorFlow.Build">MonadBuild</a> m, <a href="../tensorflow-0.3.0.0/TensorFlow-Tensor.html#t:Rendered" title="TensorFlow.Tensor">Rendered</a> (<a href="../tensorflow-0.3.0.0/TensorFlow-Tensor.html#t:Tensor" title="TensorFlow.Tensor">Tensor</a> v1), <a href="../tensorflow-0.3.0.0/TensorFlow-Types.html#t:TensorType" title="TensorFlow.Types">TensorType</a> a, <a href="../tensorflow-0.3.0.0/TensorFlow-Types.html#t:OneOf" title="TensorFlow.Types">OneOf</a> '[<a href="../base-4.13.0.0/Data-Int.html#t:Int64" title="Data.Int">Int64</a>, <a href="../base-4.13.0.0/Data-Int.html#t:Int32" title="Data.Int">Int32</a>] b, <a href="../base-4.13.0.0/Prelude.html#t:Num" title="Prelude">Num</a> b) => [<a href="../tensorflow-0.3.0.0/TensorFlow-Tensor.html#t:Tensor" title="TensorFlow.Tensor">Tensor</a> v1 a] -> <a href="../tensorflow-0.3.0.0/TensorFlow-Tensor.html#t:Tensor" title="TensorFlow.Tensor">Tensor</a> v2 b -> m (<a href="../tensorflow-0.3.0.0/TensorFlow-Tensor.html#t:Tensor" title="TensorFlow.Tensor">Tensor</a> <a href="../tensorflow-0.3.0.0/TensorFlow-Tensor.html#t:Value" title="TensorFlow.Tensor">Value</a> a)</li></ul></details></div><div id="interface"><h1>Documentation</h1><div class="top"><p class="src"><a id="v:embeddingLookup" class="def">embeddingLookup</a> <a href="src/TensorFlow.EmbeddingOps.html#embeddingLookup" class="link">Source</a> <a href="#v:embeddingLookup" class="selflink">#</a></p><div class="subs arguments"><p class="caption">Arguments</p><table><tr><td class="src">:: <span class="keyword">forall</span> a b v1 v2 m. 
(<a href="../tensorflow-0.3.0.0/TensorFlow-Build.html#t:MonadBuild" title="TensorFlow.Build">MonadBuild</a> m, <a href="../tensorflow-0.3.0.0/TensorFlow-Tensor.html#t:Rendered" title="TensorFlow.Tensor">Rendered</a> (<a href="../tensorflow-0.3.0.0/TensorFlow-Tensor.html#t:Tensor" title="TensorFlow.Tensor">Tensor</a> v1), <a href="../tensorflow-0.3.0.0/TensorFlow-Types.html#t:TensorType" title="TensorFlow.Types">TensorType</a> a, <a href="../tensorflow-0.3.0.0/TensorFlow-Types.html#t:OneOf" title="TensorFlow.Types">OneOf</a> '[<a href="../base-4.13.0.0/Data-Int.html#t:Int64" title="Data.Int">Int64</a>, <a href="../base-4.13.0.0/Data-Int.html#t:Int32" title="Data.Int">Int32</a>] b, <a href="../base-4.13.0.0/Prelude.html#t:Num" title="Prelude">Num</a> b)</td><td class="doc empty"> </td></tr><tr><td class="src">=> [<a href="../tensorflow-0.3.0.0/TensorFlow-Tensor.html#t:Tensor" title="TensorFlow.Tensor">Tensor</a> v1 a]</td><td class="doc"><p>A list of tensors which can be concatenated along
dimension 0. Each <code><a href="../tensorflow-0.3.0.0/TensorFlow-Tensor.html#t:Tensor" title="TensorFlow.Tensor">Tensor</a></code> must be appropriately
sized for the <code><a href="../base-4.13.0.0/Prelude.html#v:mod" title="Prelude">mod</a></code> partition strategy.</p></td></tr><tr><td class="src">-> <a href="../tensorflow-0.3.0.0/TensorFlow-Tensor.html#t:Tensor" title="TensorFlow.Tensor">Tensor</a> v2 b</td><td class="doc"><p>A <code><a href="../tensorflow-0.3.0.0/TensorFlow-Tensor.html#t:Tensor" title="TensorFlow.Tensor">Tensor</a></code> with type <code>int32</code> or <code>int64</code>
containing the ids to be looked up in <code>params</code>.
The ids are required to have fewer than 2^31
entries.</p></td></tr><tr><td class="src">-> m (<a href="../tensorflow-0.3.0.0/TensorFlow-Tensor.html#t:Tensor" title="TensorFlow.Tensor">Tensor</a> <a href="../tensorflow-0.3.0.0/TensorFlow-Tensor.html#t:Value" title="TensorFlow.Tensor">Value</a> a)</td><td class="doc"><p>A dense tensor with shape `shape(ids) + shape(params)[1:]`.</p></td></tr></table></div><div class="doc"><p>Looks up <code>ids</code> in a list of embedding tensors.</p><p>This function is used to perform parallel lookups on the list of
tensors in <code>params</code>. It is a generalization of <code><a href="TF.html#v:gather" title="TF">gather</a></code>, where
<code>params</code> is interpreted as a partition of a larger embedding
tensor.</p><p>The partition strategy is "mod": each id is assigned to partition
`p = id % len(params)`. For instance,
13 ids are split across 5 partitions as:
`[[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]]`</p><p>The results of the lookup are concatenated into a dense
tensor. The returned tensor has shape `shape(ids) + shape(params)[1:]`.</p></div></div></div></div><div id="footer"><p>Produced by <a href="http://www.haskell.org/haddock/">Haddock</a> version 2.23.0</p></div></body></html>