Commit Graph

69 Commits

Author SHA1 Message Date
Mike Sperber 568c9b6f03
Update to current proto-lens packages. (#258) 2020-05-21 13:36:52 -07:00
Mike Sperber 0f322b2e06
Fix MonadFail-related errors to support ghc 8.8 2020-04-13 16:48:43 -07:00
Greg Steuck 580917a731
Switch to LTS 14 and bump versions to 0.2.0.1 (#248)
* Use newer stack and protoc in Dockerfiles
2019-09-08 12:45:24 -07:00
Yorick 7e1d92c6c2 Bump version constraints for proto-lens-* (#247) 2019-09-02 18:19:07 -07:00
rschlotterbeck d741c3ee59 Add gradient for batchMatMul (#246) 2019-07-08 13:41:35 -04:00
rschlotterbeck c811037cb9 Add gradient for sigmoid (#245) 2019-07-07 20:18:02 -04:00
Christian Berentsen 1fbd5d41dd Add gradients for DepthwiseConv2dNative (#240) 2019-04-22 00:46:27 -04:00
Christian Berentsen 4a2e46ba57 Make 'mean' doubly differentiable (#241)
Use stopGradient on shape computations
Add opGrad for StopGradient
2019-04-22 00:46:01 -04:00
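
A minimal sketch (not from this commit) of what "doubly differentiable" enables, assuming `reduceMean`, `mul`, and `vector` from TensorFlow.Ops and `gradients` from TensorFlow.Gradient: the output of one `gradients` call can itself be differentiated.

    import qualified Data.Vector as V
    import qualified TensorFlow.Core as TF
    import qualified TensorFlow.Gradient as TF
    import qualified TensorFlow.Ops as TF

    -- d²y/dx² for y = mean (x · x); the first gradients call yields
    -- rendered tensors that can be fed back into gradients.
    secondDerivative :: TF.Session (V.Vector Float)
    secondDerivative = do
      x <- TF.render $ TF.vector [1, 2, 3 :: Float]
      let y = TF.reduceMean (x `TF.mul` x)
      [dx]  <- TF.gradients y [x]
      [ddx] <- TF.gradients dx [x]
      TF.run ddx
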
Daniel YU 7316062c10 upgrade to ghc 8.6.4 (#237) 2019-04-11 19:27:15 -07:00
Christian Berentsen c0f87dc0bc Avoid computing gradients for incidental nodes (#238) 2019-04-11 14:17:19 -04:00
Christian Berentsen 96f1c88327 Add gradient for ResizeBilinear (#239) 2019-04-08 13:43:17 -04:00
erikabor 3cfd96ef08 Add gradient for slice function (#234) 2019-03-26 16:30:50 -04:00
erikabor 666dce94bd Add gradient for sqrt function (#236) 2019-03-18 21:08:08 -04:00
Rik e4acd69574 Support gradients of pad, squeeze, spaceToBatchND, and batchToSpaceND (#226) 2018-11-27 14:17:32 -05:00
Rik 95c6b6f277 Added support for ExpandDims gradient. (#224) 2018-11-20 21:45:31 -05:00
Rik 915015018c Added support for tanh activation function (#223) 2018-11-14 12:08:05 -05:00
Christian Berentsen 61e58fd33f Use proto-lens* == 0.3.* (#212)
* Include more *_Fields modules
2018-09-04 10:44:52 -07:00
fkm3 85bf0bb12c
Bump all of the versions to 0.2.0.0 (#202) 2018-08-02 22:07:30 -04:00
fkm3 baa501b262
Use newer version of stack in CI (#189)
Required by #187.

The version we were using is old enough that it doesn't work with the
latest stackage LTS. haskellstack.org says

    There is also a Ubuntu package for Ubuntu 16.10 and up, but the
    distribution's Stack version lags behind, ...

So, instead of asking them to update it, it's probably better to
download the tar of the version we want.

Somehow updating stack surfaced a new pedantic warning in GradientTest,
so I've fixed that as well.
2018-05-15 23:19:15 -04:00
fkm3 1e2dca8701
Update to tensorflow 1.7 (#185)
All of the changes other than s/1.3/1.7/ are because:

* There are new tensorflow datatypes
* Some ops have looser types (e.g. fill now accepts both int64 and int32)
* There are more ops of type "func"
2018-04-17 12:24:31 -04:00
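
For instance, the looser typing mentioned above, sketched against the generated signature `fill :: OneOf '[Int32, Int64] index_type => Tensor v'1 index_type -> Tensor v'2 t -> Tensor Build t` in TensorFlow.GenOps.Core (an assumption about the generated code, not text from this commit):

    import Data.Int (Int32, Int64)
    import qualified TensorFlow.Core as TF
    import qualified TensorFlow.GenOps.Core as TF (fill)
    import qualified TensorFlow.Ops as TF (scalar, vector)

    -- Both now typecheck; previously fill's dims argument was Int32 only.
    zeros32 :: TF.Tensor TF.Build Float
    zeros32 = TF.fill (TF.vector [2, 3 :: Int32]) (TF.scalar 0)

    zeros64 :: TF.Tensor TF.Build Float
    zeros64 = TF.fill (TF.vector [2, 3 :: Int64]) (TF.scalar 0)
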
fkm3 e35211d49b Fix initialized variables for tensorflow 1.7 (#184)
* Fix initialized variables for tensorflow 1.7

This is needed to support tensorflow 1.7. The trick of initializing a
variable with `Shape []` and then overriding the shape by assigning an
initial value no longer works. It seems that we need to explicitly flip
the unknown_rank bit in the shape proto.

I thought about switching opgen to use `Maybe Shape` when an op requires
a shape attribute, but that would cause a lot of API churn, so I chose to
hold off for now and just do a spot fix to unblock 1.7.
2018-04-16 07:48:05 -07:00
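
A hedged sketch of the shape-proto detail described above, written against current proto-lens conventions (`defMessage` and the generated `unknownRank` field lens are assumptions about the generated API, not code from this commit):

    import Data.ProtoLens (defMessage)
    import Lens.Family2 ((&), (.~))
    import Proto.Tensorflow.Core.Framework.TensorShape (TensorShapeProto)
    import Proto.Tensorflow.Core.Framework.TensorShape_Fields (unknownRank)

    -- An explicitly unknown-rank shape; an empty dim list by itself
    -- means rank 0, which tensorflow 1.7 no longer lets a later
    -- assignment override.
    unknownShape :: TensorShapeProto
    unknownShape = defMessage & unknownRank .~ True
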
Christian Berentsen 2dcc921f6e Gradient of Conv2DBackpropInput (#155) 2017-10-15 11:49:44 -07:00
Nathan T.A. Lewis d79a919efa Fix a small typo in the warning message of numOutputs (#151) 2017-08-24 17:34:22 -04:00
Jonathan Kochems 79d8d7edea Adding gradient for Concat (#144) 2017-07-29 23:29:33 -04:00
fkm3 cac45d1cd6 Delete inaccurate comments 2017-07-25 09:38:00 -04:00
Christian Berentsen bebc4aa7d9 Add gradient of 'maximum' and 'gradForBinaryCwise'
The `maximum` gradient uses `gradForBinaryCwise`, which may be useful for other
binary componentwise op gradients.
2017-07-25 00:14:23 -04:00
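
A sketch of the new gradient in use (assuming `maximum` is the generated op in TensorFlow.GenOps.Core): the gradient of an elementwise maximum flows to whichever input supplied the larger value.

    import qualified Data.Vector as V
    import qualified TensorFlow.Core as TF
    import qualified TensorFlow.GenOps.Core as TFC (maximum)
    import qualified TensorFlow.Gradient as TF
    import qualified TensorFlow.Ops as TF

    maximumGrad :: TF.Session (V.Vector Float)
    maximumGrad = do
      x <- TF.render $ TF.vector [1, 4 :: Float]
      y <- TF.render $ TF.vector [3, 2 :: Float]
      [dx] <- TF.gradients (TFC.maximum x y) [x]
      TF.run dx  -- expect [0, 1]: x wins only in the second position
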
Christian Berentsen ea30577264 Gradient for AddN 2017-07-25 00:06:10 -04:00
Christian Berentsen 4ab9cb9cf2 Moved reduceMean to Ops (#136) 2017-06-20 20:50:46 -07:00
fkm3 0603a6987b Add Minimize module with gradient descent and adam implementations (#125) 2017-05-25 19:19:22 -07:00
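
A hedged usage sketch for the new module, assuming it exposes `minimizeWith` and `gradientDescent` as in the current API:

    import qualified TensorFlow.Core as TF
    import qualified TensorFlow.Minimize as TF
    import qualified TensorFlow.Ops as TF (scalar)
    import qualified TensorFlow.Variable as TF

    -- One gradient-descent step on a scalar parameter w, for loss w².
    step :: TF.Session ()
    step = do
      w <- TF.initializedVariable (TF.scalar (3 :: Float))
      let loss = TF.readValue w * TF.readValue w
      trainStep <- TF.minimizeWith (TF.gradientDescent 0.01) loss [w]
      TF.run_ trainStep
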
fkm3 a86d424cac Add initializedValue function for Variable (#124) 2017-05-20 21:42:45 -07:00
fkm3 b86945f008 Support Variable in TensorFlow.Gradient and use in mnist example (#116) 2017-05-17 13:20:51 -07:00
fkm3 ddb4fe4f90 Add ToTensor class 2017-05-16 23:05:33 -07:00
fkm3 0f04e5a50d Expand Rendered class to support ResourceHandle wrappers like Variable
This allows functions like `feed`, `colocateWith`, and (in a later commit)
`gradients` to work with `Variable`.
2017-05-15 19:55:34 -07:00
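
For example, once the later `gradients` change landed, differentiating with respect to a `Variable` looks roughly like this (a sketch against the current API, not code from this commit):

    import qualified TensorFlow.Core as TF
    import qualified TensorFlow.Gradient as TF
    import qualified TensorFlow.Ops as TF (scalar)
    import qualified TensorFlow.Variable as TF

    -- d(w²)/dw at w = 3; `gradients` accepts the Variable directly.
    gradWrtVariable :: TF.Session Float
    gradWrtVariable = do
      w <- TF.initializedVariable (TF.scalar (3 :: Float))
      let loss = TF.readValue w * TF.readValue w
      [dw] <- TF.gradients loss [w]
      TF.unScalar <$> TF.run dw  -- expect 6
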
Judah Jacobson 64971c876a Consolidate some packages. (#111)
- Merge tensorflow-nn and tensorflow-queue into tensorflow-ops.
  They don't add extra dependencies and each contain a single module, so I
  don't think it's worth separating them at the package level.
- Remove google-shim in favor of direct use of test-framework.
2017-05-10 15:26:03 -07:00
Judah Jacobson 0fa719b701 Fix .cabal files so 'stack check' passes. (#110)
- Add LICENSE files for all packages.
- Add descriptions for packages that were missing one.
- Work around google/proto-lens#69 by symlinking third_party into
  tensorflow-proto.
2017-05-10 11:37:00 -07:00
Judah Jacobson ff5f1cccf4 Increase the number of iterations for MatrixTest. (#107)
The number of iterations was reduced from 1000 to 300 during review, but that
turned out to be too low and the test now fails about 20% of the time.
After changing it back to 1000, the test succeeded in 50 out of 50 runs.
2017-05-09 09:54:09 -07:00
Judah Jacobson a64af5076a Work around #92 by always copying TensorData when fetching.
It would be better to avoid the copy when it's not necessary, but
that will require more involved changes to the internal API.  (For example,
Fetchable might need to allow IO or ST actions.)
2017-05-09 00:10:29 -07:00
Jarl Christian Berentsen 37e3c9b084 Whitespace 2017-05-05 16:49:27 -07:00
Jarl Christian Berentsen d153d0aded Fixed matMul gradients for transposed arguments 2017-05-05 16:49:27 -07:00
Jarl Christian Berentsen 51014a015c Implemented TileGrad
Some notes about static shape inference
2017-05-05 16:49:27 -07:00
Jarl Christian Berentsen 97b4bb5bab Added reduceSum to Ops 2017-05-05 16:49:27 -07:00
Christian Berentsen eca4ff8981 Implemented ReluGradGrad and FillGrad (#102)
Added testReluGrad, testReluGradGrad and testFillGrad
2017-04-30 11:18:06 -07:00
Chris Mckinlay 09c792b84c added matrix factorization test (#101) 2017-04-27 17:05:34 -07:00
Judah Jacobson 51c883684b Clarify the behavior of readValue in a comment. (#99)
Also add a unit test corresponding to that comment's example code.
2017-04-16 15:31:26 -07:00
Judah Jacobson 42f4fc647e Add resource-based variable ops. (#98)
The main difference between these and the `Ref`-based ops is the explicit
`readValue` op.  I'm not sure how this should interact with gradients
and save/restore, so I'm keeping it as a separate module for now.  Once we
figure out the details, we can merge it into `TensorFlow.Ops` and replace
all uses of the old `Ref`-based ops.  (That would also fix #92.)

Also replaces our special-case newtype `ResourceHandle` with
`Tensor Value ResourceHandle`, where `ResourceHandle` is the TF proto
corresponding to `DT_RESOURCE`.
2017-04-16 09:24:02 -07:00
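
A sketch of the explicit read in practice, written against today's TensorFlow.Variable API (`assign` returning a `ControlNode` is an assumption from the current module, not text from this commit):

    import qualified TensorFlow.Core as TF
    import qualified TensorFlow.Ops as TF (scalar)
    import qualified TensorFlow.Variable as TF

    -- Each readValue is a separate op, so reads are ordered relative to
    -- assignments only when sequenced explicitly, e.g. by separate runs.
    readThenAssign :: TF.Session (Float, Float)
    readThenAssign = do
      v <- TF.initializedVariable (TF.scalar (1 :: Float))
      before <- TF.run (TF.readValue v)      -- reads 1
      TF.run_ =<< TF.assign v (TF.scalar 2)  -- then writes 2
      after <- TF.run (TF.readValue v)       -- a fresh read op: reads 2
      return (TF.unScalar before, TF.unScalar after)
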
Christian Berentsen 21b723d542 Adapt to lts-8.6 and use proto-lens-0.2.0.1 (#97) 2017-04-11 14:09:01 -07:00
Judah Jacobson d62c614695 Distinguish between "rendered" and "unrendered" Tensors. (#88)
Distinguish between "rendered" and "unrendered" Tensors.

There are now three types of `Tensor`:

- `Tensor Value a`: rendered value
- `Tensor Ref a`: rendered reference
- `Tensor Build a` : unrendered value

The extra bookkeeping makes it easier to track (and enforce) which tensors are
rendered or not.  For examples where this has been confusing in the past, see

With this change, pure ops look similar to before, returning `Tensor Build`
instead of `Tensor Value`.  "Stateful" (monadic) ops are unchanged.  For
example:

    add :: OneOf [..] t => Tensor v'1 t -> Tensor v'2 t -> Tensor Build t
    assign :: (MonadBuild m, TensorType t)
           => Tensor Ref t -> Tensor v'2 t -> m (Tensor Ref t)

The `gradients` function now requires that the variables over which it's
differentiating are pre-rendered:

    gradients :: (..., Rendered v2) => Tensor v1 a -> [Tensor v2 a]
              -> m [Tensor Value a]

(`Rendered v2` means that `v2` is either a `Ref` or a `Value`.)

Additionally, the implementation of `gradients` now takes care to render every
intermediate value when performing the reverse accumulation.  I suspect this
fixes an exponential blowup for complicated expressions.
2017-04-06 15:10:33 -07:00
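
For instance, a pure expression is now a `Tensor Build` and must be rendered before use with `feed` or `gradients` (a minimal sketch, assuming `add` and `vector` from TensorFlow.Ops):

    import qualified TensorFlow.Core as TF
    import qualified TensorFlow.Ops as TF

    -- `add` returns an unrendered Tensor Build; `render` fixes it into
    -- the graph as a Tensor Value.
    renderedSum :: TF.Session (TF.Tensor TF.Value Float)
    renderedSum = TF.render (TF.vector [1, 2] `TF.add` TF.vector [3, 4])
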
Judah Jacobson a11a417ad5 Add another test of CSE and feeds. (#87)
As a follow-up to #86, check that our CSE isn't so aggressive that it prevents
feeding pure ops that have distinct names.
2017-03-23 12:58:40 -07:00
Judah Jacobson fdbfd050f8 Prevent CSE of placeholder ops. (#86)
The bug was introduced in #84.
2017-03-22 22:47:42 -07:00
Judah Jacobson c99a23b6a7 Add versions of each op that take optional params as an extra arg. (#84)
Each op `foo :: ...` now has a corresponding `foo' :: OpParams -> ...`
which lets you set optional attributes.  `OpParams` is currently a type alias for
`OpDef -> OpDef`.  In the future we should consider more type safety, e.g.,
using type-level strings and OverloadedLabels for optional attributes.

I used it to replace a few manual `buildOp`s in our code with the code-generated
ops, now that it's easier to set attributes.  I also removed `tensorAttr` and
`named` since it's now possible to set those op attributes directly.

Although this clutters up the API a bit, I think it's simpler than using type
classes to implement optional arguments (as in, for example, `Text.Printf`) --
especially in terms of type inference with the rest of the library.
2017-03-20 18:16:38 -07:00
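
For example, setting an optional attribute on the generated `matMul'` (a sketch; `opAttr` from TensorFlow.Build and the generated primed op are the assumed pieces):

    {-# LANGUAGE OverloadedStrings #-}
    import Lens.Family2 ((.~))
    import qualified TensorFlow.Build as TF (opAttr)
    import qualified TensorFlow.Core as TF
    import qualified TensorFlow.GenOps.Core as TF (matMul')

    -- Multiply a by the transpose of b, via the transpose_b attribute.
    matMulT :: TF.Tensor TF.Build Float
            -> TF.Tensor TF.Build Float
            -> TF.Tensor TF.Build Float
    matMulT = TF.matMul' (TF.opAttr "transpose_b" .~ True)
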