* TensorFlow 2.3.0 building and passing tests.
* Added `einsum` and a test.
* Added `ByteString` as a possible argument to a function.
* Added support for more data types for Adam (sketched after this list).
* Moved to a later LTS snapshot on Stackage.
* Added a wrapper module for convolution functions.
* Updated the CI build to use a later version of Stack.
* Removed a deprecated import in GradientTest.
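As a hedged sketch of what the Adam entry touches: training goes through
`TensorFlow.Minimize` (`minimizeWith`, `adam`) and `TensorFlow.Variable`. The
exact set of data types `adam` now supports isn't spelled out in this
changelog, so treat the constraints below as illustrative:

```haskell
import Control.Monad (replicateM_)
import Control.Monad.IO.Class (liftIO)
import qualified TensorFlow.Core as TF
import qualified TensorFlow.Minimize as TF
import qualified TensorFlow.Ops as TF hiding (initializedVariable)
import qualified TensorFlow.Variable as TF

main :: IO ()
main = TF.runSession $ do
    -- Fit a single weight w to minimize (w - 5)^2.
    w <- TF.initializedVariable (TF.scalar (0 :: Float))
    let d    = TF.readValue w - TF.scalar 5
        loss = d * d
    trainStep <- TF.minimizeWith TF.adam loss [w]
    replicateM_ 1000 (TF.run_ trainStep)
    result <- TF.run (TF.readValue w)
    liftIO $ print (result :: TF.Scalar Float)  -- should approach 5.0
```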
- Merge tensorflow-nn and tensorflow-queue into tensorflow-ops.
  They don't add extra dependencies and each contains only a single module,
  so I don't think it's worth separating them at the package level.
- Remove google-shim in favor of direct use of test-framework.
- Add LICENSE files for all packages.
- Add descriptions for packages that were missing one.
- Work around google/proto-lens#69 by symlinking third_party into
tensorflow-proto.
The main difference between these and the `Ref`-based ops is the explicit
`readValue` op. I'm not sure how this should interact with gradients
and save/restore, so I'm keeping it as a separate module for now. Once we
figure out the details, we can merge it into `TensorFlow.Ops` and replace
all uses of the old `Ref`-based ops. (That would also fix #92.)
Also replaces our special-case newtype `ResourceHandle` with
`Tensor Value ResourceHandle`, where `ResourceHandle` is the TF proto
corresponding to `DT_RESOURCE`.
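A minimal sketch of the new workflow. The module and function names
(`TensorFlow.Variable`, `initializedVariable`, `readValue`) are the
resource-based API this change introduces, but treat the exact signatures as
illustrative:

```haskell
import Control.Monad.IO.Class (liftIO)
import qualified Data.Vector as V
import qualified TensorFlow.Core as TF
import qualified TensorFlow.Ops as TF hiding (initializedVariable)
import qualified TensorFlow.Variable as TF

main :: IO ()
main = TF.runSession $ do
    -- Create a resource variable holding [3.0].
    v <- TF.initializedVariable (TF.vector [3 :: Float])
    -- Unlike the Ref-based ops, reading is an explicit readValue op
    -- that yields an ordinary tensor.
    result <- TF.run (TF.readValue v)
    liftIO $ print (result :: V.Vector Float)
```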
Distinguish between "rendered" and "unrendered" Tensors.
There are now three types of `Tensor`:
- `Tensor Value a`: rendered value
- `Tensor Ref a`: rendered reference
- `Tensor Build a`: unrendered value
The extra bookkeeping makes it easier to track (and enforce) which tensors are
rendered or not. For examples where this has been confusing in the past, see
With this change, pure ops look similar to before, returning `Tensor Build`
instead of `Tensor Value`. "Stateful" (monadic) ops are unchanged. For
example:

    add :: OneOf [..] t => Tensor v'1 t -> Tensor v'2 t -> Tensor Build t

    assign :: (MonadBuild m, TensorType t)
           => Tensor Ref t -> Tensor v'2 t -> m (Tensor Ref t)
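To make the distinction concrete, a small sketch using the existing `vector`,
`add`, and `render` functions (the values are arbitrary):

```haskell
import qualified TensorFlow.Core as TF
import qualified TensorFlow.Ops as TF

-- Combining pure ops stays in Build; rendering pins the expression
-- to a concrete graph node, yielding a Tensor Value.
example :: TF.Session (TF.Tensor TF.Value Float)
example = do
    let a = TF.vector [1, 2 :: Float]   -- Tensor Build Float (unrendered)
        b = TF.vector [3, 4 :: Float]
        c = a `TF.add` b                -- still Tensor Build Float
    TF.render c                         -- Tensor Value Float
```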
The `gradients` function now requires that the variables over which it's
differentiating are pre-rendered:

    gradients :: (..., Rendered v2)
              => Tensor v1 a -> [Tensor v2 a] -> m [Tensor Value a]
(`Rendered v2` means that `v2` is either a `Ref` or a `Value`.)
Additionally, the implementation of `gradients` now takes care to render every
intermediate value when performing the reverse accumulation. I suspect this
fixes an exponential blowup for complicated expressions.
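A hedged usage sketch of the pre-rendering requirement (the expected output is
noted in a comment):

```haskell
import Control.Monad.IO.Class (liftIO)
import qualified Data.Vector as V
import qualified TensorFlow.Core as TF
import qualified TensorFlow.Gradient as TF
import qualified TensorFlow.Ops as TF

main :: IO ()
main = TF.runSession $ do
    -- The tensor we differentiate with respect to must be pre-rendered.
    x <- TF.render $ TF.vector [3 :: Float]
    let y = x `TF.add` x
    [dydx] <- TF.gradients y [x]
    result <- TF.run dydx
    liftIO $ print (result :: V.Vector Float)  -- d(x+x)/dx = [2.0]
```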
Also removes all the ghc-8-specific logic in the .cabal files.
ghc-8 has issues with deeply nested tuples of constraints. We can
work around them by:
- Changing `TensorTypes` to a regular class. This required enabling
  `FlexibleContexts`. (But we'll probably need it anyway when we support
  heterogeneous tensor lists.)
- Specializing `NoneOf` for long type lists (a sketch of the idea follows
  below).
For more details, see: https://ghc.haskell.org/trac/ghc/ticket/12175.
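The specialization idea, as a self-contained illustration. These are not the
real tensorflow definitions, just a sketch of recursive vs. unrolled
constraint families:

```haskell
{-# LANGUAGE ConstraintKinds, DataKinds, TypeFamilies, TypeOperators #-}
import Data.Kind (Constraint, Type)
import Data.Type.Equality (type (==))

-- A single "a is not t" constraint.
type family NotEq (a :: Type) (t :: Type) :: Constraint where
  NotEq a t = (a == t) ~ 'False

-- Recursive form: each element adds another layer of constraint tuples,
-- which is what ghc-8's solver struggled with on long lists.
type family NoneOfRec (ts :: [Type]) (a :: Type) :: Constraint where
  NoneOfRec '[]       a = ()
  NoneOfRec (t ': ts) a = (NotEq a t, NoneOfRec ts a)

-- Specialized form for a fixed length: one flat tuple, no nesting.
type family NoneOf3 t1 t2 t3 (a :: Type) :: Constraint where
  NoneOf3 t1 t2 t3 a = (NotEq a t1, NotEq a t2, NotEq a t3)
```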
Also added 'directory' to tensorflow-core-ops' dependencies since it's used
in the Setup script.
One more step towards fixing #38.
Two issues:
- The definition of `\\` was missing parentheses. It was probably a bug
  that this used to work in ghc-7.10.
- Set `-fconstraint-solver-iterations=0` to work around
  https://ghc.haskell.org/trac/ghc/ticket/12175. It looks like we can
  trigger that bug when defining a sufficiently complicated op. Specifically,
  our type shenanigans (`OneOf`) along with lens setters (for `OpDef`) seem
  to confuse GHC. (A per-module form of the flag is sketched after this
  list.)
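For reference, the same flag can also be set per module with a pragma (the
module name here is hypothetical):

```haskell
-- 0 disables GHC's constraint-solver iteration limit entirely.
{-# OPTIONS_GHC -fconstraint-solver-iterations=0 #-}
module TensorFlow.ComplicatedOps where  -- hypothetical module
```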
Still TODO: automate testing of different ghc versions to prevent a regression.
* Fix for embedding gradient calculation
  - Passes vectors instead of scalars to `slice`
  - Converts `numRows` to a scalar
  - Adds a `toScalar` utility function
  - Minor change to the test case so that it actually works
* Added a library for testing helper functions
* Added a `flatSlice` function