Commit Graph

9 Commits

Mike Sperber 568c9b6f03 Update to current proto-lens packages. (#258)
2020-05-21 13:36:52 -07:00
Christian Berentsen 61e58fd33f Use proto-lens* == 0.3.* (#212)
* Include more *_Fields modules
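
For illustration, a hedged sketch of what the *_Fields split looks like on the consumer side with proto-lens 0.3 (module and lens names assumed from how proto-lens generates them, not copied from this repo):

    -- Field lenses now come from the generated *_Fields companion module.
    import Proto.Tensorflow.Core.Framework.Graph        (GraphDef)
    import Proto.Tensorflow.Core.Framework.Graph_Fields (node)
    import Lens.Family2 ((^.))

    nodeCount :: GraphDef -> Int
    nodeCount g = length (g ^. node)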
2018-09-04 10:44:52 -07:00
Judah Jacobson 42f4fc647e Add resource-based variable ops. (#98)
The main difference between these and the `Ref`-based ops is the explicit
`readValue` op.  I'm not sure how this should interact with gradients
and save/restore, so I'm keeping it as a separate module for now.  Once we
figure out the details, we can merge it into `TensorFlow.Ops` and replace
all uses of the old `Ref`-based ops.  (That would also fix #92.)

Also replaces our special case newtype `ResourceHandle` to
`Tensor Value ResourceHandle`, where `ResourceHandle` is the TF proto
corresponding to `DT_RESOURCE`.
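
A minimal usage sketch, written against the names this separate module eventually settled on (today's `TensorFlow.Variable`); the exact module and signatures at the time of this commit may have differed:

    import qualified Data.Vector as V
    import qualified TensorFlow.Core as TF
    import qualified TensorFlow.Ops as Ops
    import qualified TensorFlow.Variable as Var

    -- Create a resource-backed variable and read it via an explicit readValue op.
    resourceDemo :: TF.Session (V.Vector Float)
    resourceDemo = do
        v <- Var.initializedVariable (Ops.vector [1, 2, 3 :: Float])
        -- readValue produces an ordinary tensor; render it before fetching.
        TF.run =<< TF.render (Var.readValue v)

    main :: IO ()
    main = TF.runSession resourceDemo >>= print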
2017-04-16 09:24:02 -07:00
Judah Jacobson d62c614695 Distinguish between "rendered" and "unrendered" Tensors. (#88)

There are now three types of `Tensor`:

- `Tensor Value a`: rendered value
- `Tensor Ref a`: rendered reference
- `Tensor Build a` : unrendered value

The extra bookkeeping makes it easier to track (and enforce) which tensors are
rendered or not.  For examples where this has been confusing in the past, see

With this change, pure ops look similar to before, returning `Tensor Build`
instead of `Tensor Value`.  "Stateful" (monadic) ops are unchanged.  For
example:

    add :: OneOf [..] t => Tensor v'1 t -> Tensor v'2 t -> Tensor Build t
    assign :: (MonadBuild m, TensorType t)
           => Tensor Ref t -> Tensor v'2 t -> m (Tensor Ref t)

The `gradients` function now requires that the variables over which it's
differentiating are pre-rendered:

    gradients :: (..., Rendered v2) => Tensor v1 a -> [Tensor v2 a]
              -> m [Tensor Value a]

(`Rendered v2` means that `v2` is either a `Ref` or a `Value`.)

Additionally, the implementation of `gradients` now takes care to render every
intermediate value when performing the reverse accumulation.  I suspect this
fixes an exponential blowup for complicated expressions.
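
A small sketch of the distinction in use (assuming the `render`, `vector`, and `add` names from the current tree; treat the details as illustrative):

    import qualified Data.Vector as V
    import qualified TensorFlow.Core as TF
    import qualified TensorFlow.Ops as Ops

    renderDemo :: TF.Session (V.Vector Float)
    renderDemo = do
        let x = Ops.vector [1, 2, 3 :: Float]   -- Tensor Build Float: unrendered
        xv <- TF.render x                       -- Tensor Value Float: added to the graph once
        -- Pure ops accept any tensor kind and return Tensor Build again.
        TF.run =<< TF.render (xv `Ops.add` xv)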
2017-04-06 15:10:33 -07:00
Judah Jacobson 9209dfc4c4 Support lists of tensors in ops. (#79)
Adds a new type `ListOf` which wraps a heterogeneous list; for example,
`ListOf (Tensor Value) '[Int32, Float]` represents a list of two
elements: a tensor of int32s and a tensor of floats.

Also changes the `Queue2` type (which supported pairs of tensors) to
`Queue` (which supports arbitrary lists).
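
A hedged sketch of the new type (constructor names taken from how `ListOf` ended up in `TensorFlow.Types`; treat the details as illustrative):

    {-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeOperators #-}

    -- A heterogeneous list indexed by the list of element types.
    data ListOf (f :: * -> *) (as :: [*]) where
        Nil  :: ListOf f '[]
        (:/) :: f a -> ListOf f as -> ListOf f (a ': as)
    infixr 5 :/

    -- For example, a value of type ListOf (Tensor Value) '[Int32, Float]
    -- is built as:  intTensor :/ floatTensor :/ Nil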
2017-03-17 13:53:19 -07:00
Judah Jacobson 0c8d41250a Remove the type parameter from ResourceHandle. (#76)
This change allows us to reenable the rest of the ResourceHandle ops, and
future-proofs us against more being added.  It removes the custom logic that
assumed there was a "dtype" attribute to guess the type parameter
(which wasn't true in general).

When we switch to ResourceHandle (e.g., for queues and variables), we can add
parameters to the wrapper types like "Queue" on a case-by-case basis.
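
As a hedged sketch of the wrapper idea (the real `Queue` lives in `TensorFlow.Queue`; the import location and exact shape below are assumed for illustration):

    {-# LANGUAGE DataKinds, KindSignatures #-}

    -- The location of the untyped handle is assumed here; it moved between modules.
    import TensorFlow.Types (ResourceHandle)

    -- The element types live on the wrapper, not on the handle itself:
    -- e.g. Queue '[Int64, Float] is a queue of (Int64, Float) pairs.
    newtype Queue (as :: [*]) = Queue ResourceHandle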
2017-02-21 19:38:26 -08:00
Judah Jacobson db75350969 Support type attributes that aren't used by an input/output. (#51)
We should treat such attributes as regular `DataType` values rather than type
parameters; otherwise we'll get ambiguous types.  As with other attributes,
they can either be set by default or passed in as an explicit argument to the op.

Allows us to reenable a couple more ops.
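
A hand-reconstructed sketch of what this looks like in the generated bindings (not copied from `TensorFlow.GenOps.Core`; an op like `hashTable`, whose `key_dtype`/`value_dtype` attrs aren't tied to any input or output tensor, is a plausible instance):

    -- Reconstructed sketch: standalone type attrs arrive as plain DataType values.
    import qualified Data.ByteString as B
    import Proto.Tensorflow.Core.Framework.Types (DataType)
    import TensorFlow.Build (MonadBuild)
    import TensorFlow.Tensor (Ref, Tensor)

    hashTable :: MonadBuild m
              => DataType              -- key_dtype, passed as an ordinary value
              -> DataType              -- value_dtype, likewise
              -> m (Tensor Ref B.ByteString)
    hashTable = undefined  -- signature sketch only; the generated body builds the op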
2016-12-15 11:52:48 -08:00
Judah Jacobson cec666e135 Fix Ref and Build semantics for generated code. (#37)
Also:
- Make TensorFlow.Ops.{variable,assign} be the Core generated versions.
- Make ops take "Shape" as mandatory input.
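
A brief usage sketch of the mandatory `Shape` argument (written against the present-day `TensorFlow.Ops` names, so some types shown postdate this commit):

    import qualified TensorFlow.Core as TF
    import qualified TensorFlow.Ops as Ops

    -- Create a 2x2 variable and assign it an all-zeros initial value.
    initWeights :: TF.MonadBuild m => m (TF.Tensor TF.Ref Float)
    initWeights = do
        w <- Ops.variable (TF.Shape [2, 2])           -- Shape is a mandatory argument
        Ops.assign w (Ops.zeros (TF.Shape [2, 2]))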
2016-11-21 10:19:15 -08:00
Judah Jacobson a277c7ddb3 Refactor OpGen. (#36)
Also fixes op lists when the same attribute specifies the length of
both an input and an output.  I added a test of "shapeN", which
previously failed with the following error:

    ERROR: Ran out of counts in toResult. Likely misuse of buildListOp.
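
A hedged example of an op list in action (the `shapeN` signature is paraphrased from the generated `TensorFlow.GenOps.Core` and may differ in detail):

    import Data.Int (Int32)
    import qualified Data.Vector as V
    import qualified TensorFlow.Core as TF
    import qualified TensorFlow.GenOps.Core as CoreOps
    import qualified TensorFlow.Ops as Ops

    -- One attribute (N) gives the length of both the input and the output list.
    shapesDemo :: TF.Session [V.Vector Int32]
    shapesDemo = do
        let xs = [Ops.vector [1, 2, 3 :: Float], Ops.zeros (TF.Shape [2, 2])]
        rendered <- mapM TF.render (CoreOps.shapeN xs :: [TF.Tensor TF.Build Int32])
        mapM TF.run rendered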
2016-11-20 10:00:22 -08:00