Mirror of https://github.com/tensorflow/haskell.git
Commit graph

21 commits

Bart Schuurmans
e68a414999 Set cabal-version to 2.0 everywhere 2023-01-23 17:17:47 -08:00
jcmartin
4d9d315fbd Update Package Version Numbers (#268) 2020-11-11 09:21:35 -08:00
jcmartin
c66c912c32 Tensorflow 2.3.0 Support (#267)
* Tensorflow 2.3.0 building and passing tests.
* Added einsum and test.
* Added ByteString as a possible argument to a function.
* Support more data types for Adam.
* Move to a later LTS version on Stackage.
* Update the CI build to use a later version of stack.
* Added a wrapper module for convolution functions.
* Removed a deprecated import in GradientTest.
2020-11-06 11:32:21 -08:00
Mike Sperber
568c9b6f03 Update to current proto-lens packages. (#258) 2020-05-21 13:36:52 -07:00
Yorick
7e1d92c6c2 Bump version constraints for proto-lens-* (#247) 2019-09-02 18:19:07 -07:00
Daniel YU
7316062c10 upgrade to ghc 8.6.4 (#237) 2019-04-11 19:27:15 -07:00
Christian Berentsen
61e58fd33f Use proto-lens* == 0.3.* (#212)
* Include more *_Fields modules
2018-09-04 10:44:52 -07:00
fkm3
85bf0bb12c Bump all of the versions to 0.2.0.0 (#202) 2018-08-02 22:07:30 -04:00
fkm3
7720af0afd Update to tensorflow 1.3 (#161)
* Tested on Linux without Docker.
* Couldn't get the Nix build to work, so I just updated the URL and hash.
* Did not test the macOS build.

The MNIST change was necessary because the argmax output type is now polymorphic.
2017-10-19 13:41:55 -04:00
Christian Berentsen
4ab9cb9cf2 Moved reduceMean to Ops (#136) 2017-06-20 20:50:46 -07:00
fkm3
0603a6987b Add Minimize module with gradient descent and adam implementations (#125) 2017-05-25 19:19:22 -07:00
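For a sense of the API this commit adds, here is a hedged sketch of driving the optimizer. The module and function names (`TensorFlow.Minimize`, `minimizeWith`, `gradientDescent`, `readValue`) are taken from the present-day tensorflow-ops package and are assumed rather than quoted from this commit:

    import Control.Monad (replicateM_)
    import Control.Monad.IO.Class (liftIO)
    import qualified Data.Vector as V
    import qualified TensorFlow.Core as TF
    import qualified TensorFlow.Minimize as TF
    import qualified TensorFlow.Ops as TF hiding (initializedVariable)
    import qualified TensorFlow.Variable as TF

    main :: IO ()
    main = TF.runSession $ do
        -- Fit a single scalar weight to minimize (w - 5)^2.
        w <- TF.initializedVariable (TF.scalar (0 :: Float))
        let d    = TF.readValue w `TF.sub` TF.scalar 5
            loss = d `TF.mul` d
        step <- TF.minimizeWith (TF.gradientDescent 0.1) loss [w]
        replicateM_ 100 (TF.run_ step)  -- take 100 gradient steps
        final <- TF.run (TF.readValue w)
        liftIO $ print (final :: V.Vector Float)  -- should approach [5.0]

Swapping `TF.gradientDescent 0.1` for `TF.adam` would, assuming the current exports, exercise the Adam implementation named in the title.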
fkm3
b86945f008 Support Variable in TensorFlow.Gradient and use in mnist example (#116) 2017-05-17 13:20:51 -07:00
Judah Jacobson
64971c876a Consolidate some packages. (#111)
- Merge tensorflow-nn and tensorflow-queue into tensorflow-ops.
  They don't add extra dependencies and each contain a single module, so I
  don't think it's worth separating them at the package level.
- Remove google-shim in favor of direct use of test-framework.
2017-05-10 15:26:03 -07:00
Judah Jacobson
0fa719b701 Fix .cabal files so 'stack check' passes. (#110)
- Add LICENSE files for all packages.
- Add descriptions for packages that were missing one.
- Work around google/proto-lens#69 by symlinking third_party into
  tensorflow-proto.
2017-05-10 11:37:00 -07:00
Christian Berentsen
21b723d542 Adapt to lts-8.6 and use proto-lens-0.2.0.1 (#97) 2017-04-11 14:09:01 -07:00
Judah Jacobson
d62c614695 Distinguish between "rendered" and "unrendered" Tensors. (#88)

There are now three types of `Tensor`:

- `Tensor Value a`: rendered value
- `Tensor Ref a`: rendered reference
- `Tensor Build a`: unrendered value

The extra bookkeeping makes it easier to track (and enforce) which tensors are
rendered or not; in the past this distinction has been a recurring source of confusion.

With this change, pure ops look similar to before, returning `Tensor Build`
instead of `Tensor Value`.  "Stateful" (monadic) ops are unchanged.  For
example:

    add :: OneOf [..] t => Tensor v'1 t -> Tensor v'2 t -> Tensor Build t
    assign :: (MonadBuild m, TensorType t)
           => Tensor Ref t -> Tensor v'2 t -> m (Tensor Ref t)

The `gradients` function now requires that the variables over which it's
differentiating are pre-rendered:

    gradients :: (..., Rendered v2) => Tensor v1 a -> [Tensor v2 a]
              -> m [Tensor Value a]

(`Rendered v2` means that `v2` is either a `Ref` or a `Value`.)

Additionally, the implementation of `gradients` now takes care to render every
intermediate value when performing the reverse accumulation.  I suspect this
fixes an exponential blowup for complicated expressions.
2017-04-06 15:10:33 -07:00
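A small, hedged sketch of how the distinction plays out in user code, assuming the current TensorFlow.Core and TensorFlow.Ops exports (`vector`, `add`, `render`, `run`):

    import Control.Monad.IO.Class (liftIO)
    import qualified Data.Vector as V
    import qualified TensorFlow.Core as TF
    import qualified TensorFlow.Ops as TF

    main :: IO ()
    main = TF.runSession $ do
        let a = TF.vector [1, 2, 3 :: Float]  -- Tensor Build Float: unrendered
            b = a `TF.add` a                  -- pure op, still unrendered
        b' <- TF.render b                     -- Tensor Value Float: rendered once
        result <- TF.run b'                   -- later uses of b' reuse that graph node
        liftIO $ print (result :: V.Vector Float)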
Judah Jacobson
2c5c879037 Introduce a MonadBuild class, and remove buildAnd. (#83)
This change adds a class that both `Build` and `Session` are instances of:

    class MonadBuild m where
        build :: Build a -> m a

All stateful ops (generated and manually written) now return their result in any
`MonadBuild` monad (rather than in `Build` specifically).  For example:

    assign_ :: (MonadBuild m, TensorType t)
            => Tensor Ref t -> Tensor v t -> m (Tensor Ref t)

This lets us remove a bunch of spurious calls to `build` in user code.  It also
lets us replace the pattern `buildAnd run foo` with the simpler pattern `foo >>= run`
(or `run =<< foo`, which is sometimes nicer when foo is a complicated expression).

I went ahead and deleted `buildAnd` altogether since it seems to lead to
confusion; in particular a few tests had `buildAnd run . pure` which is
actually equivalent to just `run`.
2017-03-18 12:08:53 -07:00
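Concretely, a hedged sketch of the simplified pattern, with names assumed from the current TensorFlow.Core and TensorFlow.Ops (here `initializedVariable` is the `Tensor Ref` version from TensorFlow.Ops):

    import qualified Data.Vector as V
    import qualified TensorFlow.Core as TF
    import qualified TensorFlow.Ops as TF

    main :: IO ()
    main = do
        -- `initializedVariable` runs in any MonadBuild (Session included),
        -- so the old `buildAnd run foo` becomes just `run =<< foo`.
        result <- TF.runSession $
            TF.run =<< TF.initializedVariable (TF.scalar (3 :: Float))
        print (result :: V.Vector Float)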
fkm3
f170df9d13 Support fetching storable vectors + use them in benchmark (#50)
In addition, you can now fetch TensorData directly. This might be useful in
scenarios where you feed the result of a computation back in, as with an RNN.

Before:

benchmarking feedFetch/4 byte
time                 83.31 μs   (81.88 μs .. 84.75 μs)
                     0.997 R²   (0.994 R² .. 0.998 R²)
mean                 87.32 μs   (86.06 μs .. 88.83 μs)
std dev              4.580 μs   (3.698 μs .. 5.567 μs)
variance introduced by outliers: 55% (severely inflated)

benchmarking feedFetch/4 KiB
time                 114.9 μs   (111.5 μs .. 118.2 μs)
                     0.996 R²   (0.994 R² .. 0.998 R²)
mean                 117.3 μs   (116.2 μs .. 118.6 μs)
std dev              3.877 μs   (3.058 μs .. 5.565 μs)
variance introduced by outliers: 31% (moderately inflated)

benchmarking feedFetch/4 MiB
time                 109.0 ms   (107.9 ms .. 110.7 ms)
                     1.000 R²   (0.999 R² .. 1.000 R²)
mean                 108.6 ms   (108.2 ms .. 109.2 ms)
std dev              740.2 μs   (353.2 μs .. 1.186 ms)

After:

benchmarking feedFetch/4 byte
time                 82.92 μs   (80.55 μs .. 85.24 μs)
                     0.996 R²   (0.993 R² .. 0.998 R²)
mean                 83.58 μs   (82.34 μs .. 84.89 μs)
std dev              4.327 μs   (3.664 μs .. 5.375 μs)
variance introduced by outliers: 54% (severely inflated)

benchmarking feedFetch/4 KiB
time                 85.69 μs   (83.81 μs .. 87.30 μs)
                     0.997 R²   (0.996 R² .. 0.999 R²)
mean                 86.99 μs   (86.11 μs .. 88.15 μs)
std dev              3.608 μs   (2.854 μs .. 5.273 μs)
variance introduced by outliers: 43% (moderately inflated)

benchmarking feedFetch/4 MiB
time                 1.582 ms   (1.509 ms .. 1.677 ms)
                     0.970 R²   (0.936 R² .. 0.993 R²)
mean                 1.645 ms   (1.554 ms .. 1.981 ms)
std dev              490.6 μs   (138.9 μs .. 1.067 ms)
variance introduced by outliers: 97% (severely inflated)
2016-12-14 18:53:06 -08:00
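A hedged sketch of both additions, fetching the same tensor as a storable vector and as raw TensorData in one run. Tuple fetching and `decodeTensorData` are assumed from the current TensorFlow.Core API:

    import qualified Data.Vector.Storable as S
    import qualified TensorFlow.Core as TF
    import qualified TensorFlow.Ops as TF

    main :: IO ()
    main = do
        (v, td) <- TF.runSession $ do
            let t = TF.vector [1, 2, 3 :: Float]
            -- One run, two fetch targets: a storable vector and raw TensorData.
            TF.run (t, t)
        print (v :: S.Vector Float)
        -- The raw TensorData could be fed back into another graph with TF.feed;
        -- here we just decode it to show it round-trips.
        print (TF.decodeTensorData (td :: TF.TensorData Float) :: S.Vector Float)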
Greg Steuck
2b5e41ffeb Make code --pedantic (#35)
* Enforce pedantic build mode in CI.
* Our imports had drifted far from where they should be.
2016-11-18 10:42:02 -08:00
fkm3
03a3a6d086 Misc MNIST example cleanup (#9)
* Use the native oneHot op in the example code; it didn't exist when this was originally written.
* Misc cleanup in MNIST example

- Use unspecified dimension for batch size in model. This simplifies the
  code for the test set.
- Move error rate calculation into model.
2016-10-26 11:14:38 -07:00
Greg Steuck
67690d1499 Initial commit 2016-10-24 19:26:42 +00:00