mirror of https://github.com/tensorflow/haskell.git synced 2024-11-27 05:19:45 +01:00
Commit graph

30 commits

Author SHA1 Message Date
jcmartin
c66c912c32
TensorFlow 2.3.0 Support (#267)
* TensorFlow 2.3.0 building and passing tests.
* Added einsum and test.
* Added ByteString as a possible argument to a function.
* Support more data types for Adam.
* Move to a later LTS version on Stackage.
* Added a wrapper module for convolution functions.
* Update CI build to use a later version of Stack.
* Removed a deprecated import in GradientTest.
2020-11-06 11:32:21 -08:00
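
The einsum support added above can be driven like any other generated op. A minimal sketch, assuming the generated `einsum` in TensorFlow.GenOps.Core takes the equation string as its first argument (the wrapper module added in this commit may expose a different interface):

    {-# LANGUAGE OverloadedStrings #-}
    import qualified Data.Vector as V
    import qualified TensorFlow.Core as TF
    import qualified TensorFlow.GenOps.Core as C (einsum)
    import qualified TensorFlow.Ops as TF

    -- Matrix multiplication expressed as an einsum equation.
    main :: IO ()
    main = do
        result <- TF.runSession $ do
            let a = TF.constant (TF.Shape [2, 2]) [1, 2, 3, 4 :: Float]
                b = TF.constant (TF.Shape [2, 2]) [5, 6, 7, 8 :: Float]
            TF.run =<< TF.render (C.einsum "ij,jk->ik" [a, b])
        print (result :: V.Vector Float)
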
Mike Sperber
568c9b6f03
Update to current proto-lens packages. (#258) 2020-05-21 13:36:52 -07:00
Mike Sperber
0f322b2e06
Fix MonadFail-related errors to support GHC 8.8 2020-04-13 16:48:43 -07:00
rschlotterbeck
d741c3ee59 Add gradient for batchMatMul (#246) 2019-07-08 13:41:35 -04:00
rschlotterbeck
c811037cb9 Add gradient for sigmoid (#245) 2019-07-07 20:18:02 -04:00
Christian Berentsen
1fbd5d41dd Add gradients for DepthwiseConv2dNative (#240) 2019-04-22 00:46:27 -04:00
Christian Berentsen
4a2e46ba57 Make 'mean' doubly differentiable (#241)
Use stopGradient on shape computations
Add opGrad for StopGradient
2019-04-22 00:46:01 -04:00
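
'mean' being doubly differentiable means `gradients` can be applied to its own output. A minimal sketch, assuming `reduceMean`, `mul`, and `vector` from tensorflow-ops and `gradients` from TensorFlow.Gradient:

    import qualified Data.Vector as V
    import qualified TensorFlow.Core as TF
    import qualified TensorFlow.Gradient as TF
    import qualified TensorFlow.Ops as TF

    main :: IO ()
    main = do
        result <- TF.runSession $ do
            x <- TF.render $ TF.vector [1, 2, 3 :: Float]
            let y = TF.reduceMean (x `TF.mul` x)
            [dy]  <- TF.gradients y [x]   -- first derivative: 2x/n
            -- Differentiating the gradient graph itself is what this
            -- commit makes possible for Mean:
            [ddy] <- TF.gradients dy [x]
            TF.run ddy
        print (result :: V.Vector Float)
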
Daniel YU
7316062c10 Upgrade to GHC 8.6.4 (#237) 2019-04-11 19:27:15 -07:00
Christian Berentsen
c0f87dc0bc Avoid computing gradients for incidental nodes (#238) 2019-04-11 14:17:19 -04:00
Christian Berentsen
96f1c88327 Add gradient for ResizeBilinear (#239) 2019-04-08 13:43:17 -04:00
erikabor
3cfd96ef08 Add gradient for slice function (#234) 2019-03-26 16:30:50 -04:00
erikabor
666dce94bd Add gradient for sqrt function (#236) 2019-03-18 21:08:08 -04:00
Rik
e4acd69574 Support gradients of pad, squeeze, spaceToBatchND, and batchToSpaceND (#226) 2018-11-27 14:17:32 -05:00
Rik
95c6b6f277 Added support for ExpandDims gradient. (#224) 2018-11-20 21:45:31 -05:00
Rik
915015018c Added support for tanh activation function (#223) 2018-11-14 12:08:05 -05:00
Christian Berentsen
61e58fd33f Use proto-lens* == 0.3.* (#212)
* Include more *_Fields modules
2018-09-04 10:44:52 -07:00
fkm3
baa501b262
Use newer version of stack in CI (#189)
Required by #187.

The version we were using is old enough that it doesn't work with the
latest Stackage LTS. haskellstack.org says

    There is also a Ubuntu package for Ubuntu 16.10 and up, but the
    distribution's Stack version lags behind, ...

So, instead of asking them to update it, it's probably better to
download the tar of the version we want.

Somehow updating stack surfaced a new pedantic warning in GradientTest,
so I've fixed that as well.
2018-05-15 23:19:15 -04:00
Christian Berentsen
2dcc921f6e Gradient of Conv2DBackpropInput (#155) 2017-10-15 11:49:44 -07:00
Jonathan Kochems
79d8d7edea Adding gradient for Concat (#144) 2017-07-29 23:29:33 -04:00
Christian Berentsen
bebc4aa7d9 Add gradient of 'maximum' and 'gradForBinaryCwise'
`maximum` gradient uses `gradForBinaryCwise`, which may be useful for other
binary componentwise op gradients.
2017-07-25 00:14:23 -04:00
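
Conceptually, a binary componentwise gradient splits the incoming gradient between the two inputs. A sketch of that idea for `maximum` (not the library's actual `gradForBinaryCwise`, which additionally reduces over broadcast dimensions):

    import qualified TensorFlow.Core as TF
    import qualified TensorFlow.GenOps.Core as C
    import qualified TensorFlow.Ops as TF

    -- Route the incoming gradient dz to whichever input attained the max.
    maximumGradSketch :: TF.Tensor v Float    -- incoming gradient dz
                      -> TF.Tensor v Float    -- input x
                      -> TF.Tensor v Float    -- input y
                      -> (TF.Tensor TF.Build Float, TF.Tensor TF.Build Float)
    maximumGradSketch dz x y =
        let xWins = C.cast (C.greaterEqual x y)   -- 1.0 where x >= y, else 0.0
        in ( dz `C.mul` xWins
           , dz `C.mul` (TF.scalar 1 `C.sub` xWins) )
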
Christian Berentsen
ea30577264 Gradient for AddN 2017-07-25 00:06:10 -04:00
fkm3
b86945f008 Support Variable in TensorFlow.Gradient and use in mnist example (#116) 2017-05-17 13:20:51 -07:00
Judah Jacobson
64971c876a Consolidate some packages. (#111)
- Merge tensorflow-nn and tensorflow-queue into tensorflow-ops.
  They don't add extra dependencies and each contain a single module, so I
  don't think it's worth separating them at the package level.
- Remove google-shim in favor of direct use of test-framework.
2017-05-10 15:26:03 -07:00
Jarl Christian Berentsen
d153d0aded Fixed matMul gradients for transposed arguments 2017-05-05 16:49:27 -07:00
Jarl Christian Berentsen
51014a015c Implemented TileGrad
Some notes about static shape inference
2017-05-05 16:49:27 -07:00
Christian Berentsen
eca4ff8981 Implemented ReluGradGrad and FillGrad (#102)
Added testReluGrad, testReluGradGrad and testFillGrad
2017-04-30 11:18:06 -07:00
Judah Jacobson
d62c614695 Distinguish between "rendered" and "unrendered" Tensors. (#88)

There are now three types of `Tensor`:

- `Tensor Value a`: rendered value
- `Tensor Ref a`: rendered reference
- `Tensor Build a`: unrendered value

The extra bookkeeping makes it easier to track (and enforce) which tensors are
rendered or not.  For examples where this has been confusing in the past, see

With this change, pure ops look similar to before, returning `Tensor Build`
instead of `Tensor Value`.  "Stateful" (monadic) ops are unchanged.  For
example:

    add :: OneOf [..] t => Tensor v'1 t -> Tensor v'2 t -> Tensor Build t
    assign :: (MonadBuild m, TensorType t)
           => Tensor Ref t -> Tensor v'2 t -> m (Tensor Ref t)

The `gradients` function now requires that the variables over which it's
differentiating are pre-rendered:

    gradients :: (..., Rendered v2) => Tensor v1 a -> [Tensor v2 a]
              -> m [Tensor Value a]

(`Rendered v2` means that `v2` is either a `Ref` or a `Value`.)

Additionally, the implementation of `gradients` now takes care to render every
intermediate value when performing the reverse accumulation.  I suspect this
fixes an exponential blowup for complicated expressions.
2017-04-06 15:10:33 -07:00
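
A minimal sketch of the distinction in practice, assuming `render` from tensorflow-core and `vector`/`scalar`/`add` from tensorflow-ops:

    import qualified TensorFlow.Core as TF
    import qualified TensorFlow.Ops as TF

    example :: TF.Session (TF.Tensor TF.Value Float)
    example = do
        -- Pure ops yield an unrendered 'Tensor Build'...
        let t = TF.vector [1, 2 :: Float] `TF.add` TF.scalar 3
        -- ...and rendering pins it to a concrete graph node, yielding the
        -- 'Tensor Value' that e.g. 'gradients' now requires of its variables.
        TF.render t
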
Judah Jacobson
2c5c879037 Introduce a MonadBuild class, and remove buildAnd. (#83)
This change adds a class that both `Build` and `Session` are instances of:

    class MonadBuild m where
        build :: Build a -> m a

All stateful ops (generated and manually written) now have a signature that returns
an instance of `MonadBuild` (rather than just `Build`).  For example:

    assign_ :: (MonadBuild m, TensorType t)
            => Tensor Ref t -> Tensor v t -> m (Tensor Ref t)

This lets us remove a bunch of spurious calls to `build` in user code.  It also
lets us replace the pattern `buildAnd run foo` with the simpler pattern `foo >>= run`
(or `run =<< foo`, which is sometimes nicer when foo is a complicated expression).

I went ahead and deleted `buildAnd` altogether since it seems to lead to
confusion; in particular a few tests had `buildAnd run . pure`, which is
actually equivalent to just `run`.
2017-03-18 12:08:53 -07:00
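
A minimal sketch of the new pattern, assuming `assign` and `vector` from tensorflow-ops and the generated `variable` op from TensorFlow.GenOps.Core:

    import qualified Data.Vector as V
    import qualified TensorFlow.Core as TF
    import qualified TensorFlow.GenOps.Core as C (variable)
    import qualified TensorFlow.Ops as TF

    main :: IO ()
    main = do
        result <- TF.runSession $ do
            -- Stateful ops now run in any MonadBuild (here: Session), so the
            -- old `buildAnd run foo` pattern becomes `foo >>= run`:
            v <- C.variable (TF.Shape [2])
            w <- TF.assign v (TF.vector [1, 2 :: Float])
            TF.run w
        print (result :: V.Vector Float)
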
fkm3
cc08520dc7 Fix gradient calculation for min and max (#48) 2016-12-12 09:47:02 -08:00
Greg Steuck
67690d1499 Initial commit 2016-10-24 19:26:42 +00:00