* Put the LD_LIBRARY_PATH setting back into the Linux `nix-shell`
...as we need it for `ghci` workflows inside the shell(s).
* Add (failing) test case to check MetadataMap ordering
* Remove SortedList value-component from MetadataMap
...which fixes the failing test case introduced by `85a2d13`.
This is a potentially breaking change that warrants a library rev bump.
I'm not sure what the original reason was for the sorted-list component of
`MetadataMap` (i.e., header values), but that implementation choice makes it
impossible to recover the "last provided" header value associated with a
duplicate key. That is, it violates this requirement from the
[spec](https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md):
```
Custom-Metadata header order is not guaranteed to be preserved except for values
with duplicate header names.
```
I'm guessing that the original motivation might have been to ensure that the
`Eq` instance was not sensitive to the ordering of values for duplicate keys.
I think we can drop the existing `Eq` assumption about order-insensitive values
for duplicate keys (there is order sensitivity after all); if we end up
discovering a common use case for order-insensitive equality on values, we
should address that via a utility function rather than via the type's `Eq`
instance.
So, this commit changes the value component of the `MetadataMap` type to be a
list of `ByteString` values instead of `SortedList ByteString`, and removes the
`sorted-list` package as a dependency, as it has no other uses in the library.
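As a rough illustration, here is a minimal sketch of the shape the type takes
after this change; the constructor and helper names are illustrative, not
necessarily the library's exact definitions:
```
import qualified Data.ByteString as BS
import qualified Data.Map.Strict as M

-- The value component is now a plain list, so the insertion order of values
-- for duplicate keys is preserved and the "last provided" value is
-- recoverable.
newtype MetadataMap = MetadataMap
  { unMap :: M.Map BS.ByteString [BS.ByteString] }
  deriving (Eq, Show)

-- Later duplicates are appended, keeping values in the order they were
-- provided.
insertMD :: BS.ByteString -> BS.ByteString -> MetadataMap -> MetadataMap
insertMD k v (MetadataMap m) = MetadataMap (M.insertWith (flip (++)) k [v] m)
```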
Note that this commit is not claiming we are now spec-compliant w.r.t. header
treatment after this change. In particular (and at least),
1. We do not yet support base64-encoded binary data via the special `-bin` key
suffix.
2. As far as I am aware, we do not (yet) treat comma-separated header values
the same as duplicate header keys, one per value.
3. As far as I am aware, we do not (yet) perform any validation of header names
or whitespace handling per the request grammar in the spec.
* Extend Arbitrary MetadataMap to explicitly encode key duplication
Duplicate keys were allowed by the previous implementation, but this commit
makes key duplication more explicit and more frequent.
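For context, here is a hedged sketch of one way a generator could encode
explicit key duplication; the generator name and character ranges are made up
for illustration, and the real `Arbitrary` instance may differ:
```
import Control.Monad (forM)
import qualified Data.ByteString.Char8 as BC
import Test.QuickCheck

-- For each generated key, emit several values so duplicate keys show up
-- explicitly and often, rather than only by chance.
genDuplicatedPairs :: Gen [(BC.ByteString, BC.ByteString)]
genDuplicatedPairs = do
  keys <- listOf1 (BC.pack <$> listOf1 (elements ['a' .. 'z']))
  fmap concat . forM keys $ \k -> do
    n  <- choose (1, 4 :: Int)
    vs <- vectorOf n (BC.pack <$> listOf (elements ['a' .. 'z']))
    pure [ (k, v) | v <- vs ]
```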
* Add metadata map ordering QC prop
* Drop qualified use of `@?=` since it's so common in this module
* Extend checkMetadataOrdering to check instance Eq MetadataMap
...and use the appropriate bracketing wrapper.
* Relocate MetadataMap type to its own module
* Add some helper functions for MetadataMap lookup; documentation
* Extend testMetadataOrdering w/ use of lookup{All,Last}
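Building on the list-based representation sketched earlier, hypothetical
implementations of the two lookup helpers might look like this (the real ones
in the library may differ in name or signature):
```
-- All values for a key, oldest first.
lookupAll :: BS.ByteString -> MetadataMap -> Maybe [BS.ByteString]
lookupAll k (MetadataMap m) = M.lookup k m

-- The most recently provided value for a key, if any.
lookupLast :: BS.ByteString -> MetadataMap -> Maybe BS.ByteString
lookupLast k m = lookupAll k m >>= \vs -> case vs of
  [] -> Nothing
  _  -> Just (last vs)
```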
* Bump grpc-haskell{,-core} -> 0.2.0
* Remove unnecessary self-dependency
* Regenerate core/default.nix and build -core via callCabal2nix
* grpc-haskell-core: -Wall -Werror and fix warnings
* grpc-haskell: -Wall -Werror and fix warnings
* Update documentation
* Remove LD_LIBRARY_PATH settings from usesGRPC
...as they no longer seem to be needed.
* Remove dead code
* Remove core/default.nix
...as suggested by @evanrelf.
35163c3 introduced a new use of `mask` which makes the server
process uninterruptible while waiting for a new incoming request.
This change fixes that by surrounding the logic that waits for a
new request with `unmask`. This new `unmask` should still
respect the finalization guarantees of the surrounding masked
code.
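A minimal sketch of the pattern described here, with made-up names
(`serverLoop`, `waitForRequest`, `handleRequest`) standing in for the actual
server internals:
```
import Control.Exception (mask)

-- The loop runs masked so any cleanup registered here cannot be cut off, but
-- the blocking wait itself runs under `unmask` so the thread stays
-- interruptible while idle.
serverLoop :: IO req -> (req -> IO ()) -> IO ()
serverLoop waitForRequest handleRequest =
  mask $ \unmask -> do
    r <- unmask waitForRequest
    handleRequest r
    serverLoop waitForRequest handleRequest
```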
A bunch of files have been missing from the tarballs created by `cabal
sdist`. I’ve changed the nix config to check for this and also found
some examples that I forgot to update in a previous PR (sorry about
that).
Previously, grpc-haskell used a lot of code of the form
```
do x <- acquireResource
f x `finally` releaseResource x
```
This is not safe: the thread can be killed by an asynchronous exception after
acquiring the resource but before `finally` installs the exception handler. We
have seen various gRPC assertion errors and crashes on shutdown when this
window was hit.
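One way to close this window is `bracket` (or a bracketing wrapper built on
it), which installs the release action atomically with acquisition under an
asynchronous-exception mask. A minimal sketch, with `safeUse` as an
illustrative name:
```
import Control.Exception (bracket)

-- There is no window in which the resource is held without its release
-- action being registered.
safeUse :: IO r -> (r -> IO ()) -> (r -> IO a) -> IO a
safeUse acquireResource releaseResource f =
  bracket acquireResource releaseResource f
```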
Note that even though we can now build grpc-haskell and grpc-haskell-core
with modern tasty, the environment in which we built those test programs
did not support actually running all of them successfully, due to the need to test
generated code in the context of the appropriate libraries. We do not yet
know whether test programs built with new versions of tasty would succeed
in the appropriate environment. In principle this could be discovered, but
the work involved is far from trivial, and therefore we defer it to another
time. Tests built with the old tasty still succeed.