* Tensorflow 2.3.0 building and passing tests.
* Added einsum and a test (a usage sketch follows this list).
* Added ByteString as a possible argument to a function.
* Support more data types for Adam.
* Moved to a later version of the LTS on Stackage.
* Added a wrapper module for convolution functions.
* Updated the CI build to use a later version of stack.
* Removed a deprecated import in GradientTest.
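As a rough illustration of the einsum addition, here's a minimal sketch,
assuming the generated wrapper lives in `TensorFlow.GenOps.Core` and takes
the equation as a ByteString (the module and signature here are assumptions,
not confirmed API):

    {-# LANGUAGE OverloadedStrings #-}

    import Control.Monad.IO.Class (liftIO)
    import qualified Data.Vector as V
    import TensorFlow.Core (Shape (..), run, runSession)
    import qualified TensorFlow.GenOps.Core as CoreOps
    import qualified TensorFlow.Ops as Ops

    -- 2x2 matrix product written as an einsum equation. The equation
    -- attribute is passed as a ByteString.
    main :: IO ()
    main = runSession $ do
        let a = Ops.constant (Shape [2, 2]) [1, 2, 3, 4 :: Float]
            b = Ops.constant (Shape [2, 2]) [5, 6, 7, 8 :: Float]
        result <- run (CoreOps.einsum "ij,jk->ik" [a, b])
        liftIO $ print (result :: V.Vector Float)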
Required by #187.
The version we were using is old enough that it doesn't work with the
latest Stackage LTS. haskellstack.org says:

    There is also a Ubuntu package for Ubuntu 16.10 and up, but the
    distribution's Stack version lags behind, ...
So, instead of asking them to update it, it's probably better to
download the tarball of the version we want.
Somehow updating stack surfaced a new pedantic warning in GradientTest,
so I've fixed that as well.
All of the changes other than s/1.3/1.7/ are because:
* There are new tensorflow datatypes
* Some ops have looser types (e.g. fill now accepts both int64 and int32)
* There are more ops of type "func"
* Fix initialized variables for tensorflow 1.7
This is needed to support tensorflow 1.7. The trick of initializing a
variable with `Shape []` and then overriding the shape by assigning an
initial value no longer works. It seems that we need to explicitly set
the `unknown_rank` bit in the shape proto.
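For reference, here's a minimal sketch of the workaround, assuming the
proto-lens lens generated for this field is named `unknownRank` (the exact
module and lens names may differ):

    import Data.Default.Class (def)
    import Lens.Family2 ((&), (.~))
    import Proto.Tensorflow.Core.Framework.TensorShape
        (TensorShapeProto, unknownRank)

    -- A shape proto that explicitly marks the rank as unknown, instead of
    -- relying on an empty `dim` list (which TF 1.7 reads as rank 0).
    unknownShape :: TensorShapeProto
    unknownShape = def & unknownRank .~ True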
I thought about switching opgen to use `Maybe Shape` when an op requires
a shape attribute, but that will cause a lot of API churn, so I chose to
hold off for now and just do a spot fix to unblock 1.7.
- Merge tensorflow-nn and tensorflow-queue into tensorflow-ops.
They don't add extra dependencies and each contain a single module, so I
don't think it's worth separating them at the package level.
- Remove google-shim in favor of direct use of test-framework.
- Add LICENSE files for all packages.
- Add descriptions for packages that were missing one.
- Work around google/proto-lens#69 by symlinking third_party into
tensorflow-proto.
The number of iterations was reduced from 1000 to 300 during review, but that
turned out to be too low and the test now fails about 20% of the time.
After changing it back to 1000, the test succeeded in 50 out of 50 runs.
It would be better to avoid the copy when it's not necessary, but
that will require more involved changes to the internal API. (For example,
Fetchable might need to allow IO or ST actions.)
The main difference between these and the `Ref`-based ops is the explicit
`readValue` op. I'm not sure how this should interact with gradients
and save/restore, so I'm keeping it as a separate module for now. Once we
figure out the details, we can merge it into `TensorFlow.Ops` and replace
all uses of the old `Ref`-based ops. (That would also fix #92.)
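As a rough sketch of the intended usage, assuming the new module is
`TensorFlow.Variable` and exports `initializedVariable` and `readValue`
(provisional names until the details are settled):

    import Control.Monad.IO.Class (liftIO)
    import qualified Data.Vector as V
    import TensorFlow.Core (run, runSession)
    import qualified TensorFlow.Ops as Ops
    import qualified TensorFlow.Variable as Var

    main :: IO ()
    main = runSession $ do
        -- A resource-based variable; reading it back is an explicit
        -- `readValue` op rather than an implicit Ref dereference.
        w <- Var.initializedVariable (Ops.vector [1, 2, 3 :: Float])
        result <- run (Var.readValue w)
        liftIO $ print (result :: V.Vector Float)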
Also replaces our special-case newtype `ResourceHandle` with
`Tensor Value ResourceHandle`, where `ResourceHandle` is the TF proto
corresponding to `DT_RESOURCE`.
Distinguish between "rendered" and "unrendered" Tensors.
There are now three types of `Tensor`:
- `Tensor Value a`: rendered value
- `Tensor Ref a`: rendered reference
- `Tensor Build a`: unrendered value
The extra bookkeeping makes it easier to track (and enforce) which tensors are
rendered or not. For examples where this has been confusing in the past, see
With this change, pure ops look similar to before, returning `Tensor Build`
instead of `Tensor Value`. "Stateful" (monadic) ops are unchanged. For
example:
    add :: OneOf [..] t => Tensor v'1 t -> Tensor v'2 t -> Tensor Build t

    assign :: (MonadBuild m, TensorType t)
           => Tensor Ref t -> Tensor v'2 t -> m (Tensor Ref t)
The `gradients` function now requires that the variables over which it's
differentiating are pre-rendered:
    gradients :: (..., Rendered v2) => Tensor v1 a -> [Tensor v2 a]
              -> m [Tensor Value a]
(`Rendered v2` means that `v2` is either a `Ref` or a `Value`.)
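As a concrete sketch (using `render` from `TensorFlow.Core` to pre-render
the variable being differentiated against):

    import Control.Monad.IO.Class (liftIO)
    import qualified Data.Vector as V
    import TensorFlow.Core (render, run, runSession)
    import TensorFlow.Gradient (gradients)
    import qualified TensorFlow.Ops as Ops

    main :: IO ()
    main = runSession $ do
        -- `vector` produces an unrendered `Tensor Build`; `render` pins it
        -- to a concrete node so `gradients` can differentiate against it.
        x <- render (Ops.vector [1, 2, 3 :: Float])
        let y = Ops.mul x x
        [dx] <- gradients y [x]  -- dy/dx = 2*x
        result <- run dx
        liftIO $ print (result :: V.Vector Float)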
Additionally, the implementation of `gradients` now takes care to render every
intermediate value when performing the reverse accumulation. I suspect this
fixes an exponential blowup for complicated expressions.