* [Docx Parser] Move style-parsing-specific code to a new module
* [Docx Writer] Re-use Readers.Docx.Parse.Styles for StyleMap
* [Docx Writer] Move Readers.Docx.StyleMap to Writers.Docx.StyleMap
It's never used outside of writer code, so it makes more sense to scope it under the writers.
Most of this is due to @vijayphoenix (#5704), but it
needed some revisions to integrate with current
master, and to use the released HsYAML.
Closes #5704.
* pandoc.cabal: remove conditionals for ghc < 8.0. Support for GHC 7.10 has been dropped.
* pandoc.cabal: compile with `-Wcpp-undef` when possible
* pandoc.cabal: compile with `-fhide-source-paths` if possible
+ Remove Text.Pandoc.Pretty; use doclayout instead. [API change]
+ Text.Pandoc.Writers.Shared: remove metaToJSON, metaToJSON'
[API change].
+ Text.Pandoc.Writers.Shared: modify `addVariablesToContext`,
`defField`, `setField`, `getField`, `resetField` to work with
Context rather than JSON values. [API change]
+ Text.Pandoc.Writers.Shared: export new function `endsWithPlain` [API
change].
+ Use new templates and doclayout in writers.
+ Use Doc-based templates in all writers.
+ Adjust three tests for minor template rendering differences.
+ Added indentation to body in docbook4, docbook5 templates.
The main impact of this change is better reflowing of content
interpolated into templates. Previously, interpolated variables
were rendered independently and interpolated as strings, which could lead
to overly long lines. Now variables are interpolated as Doc values,
which may include breaking spaces, and reflowing occurs
after template interpolation rather than before.
Changed optMetadataFile from `Maybe FilePath` to `[FilePath]`. This allows
for multiple YAML metadata files to be added. The new default value has
been changed from `Nothing` to `[]`.
To account for this change in `Text.Pandoc.App`, `metaDataFromFile` now
operates on two `mapM` calls (for `readFileLazy` and `yamlToMeta`) and a fold.
Added a test (command/5700.md) which tests this functionality and
updated MANUAL.txt, as per the contributing guidelines.
With the current behavior, using `foldr1 (<>)`, values in files
specified first will be used over those in later files. (If the reverse
of this behavior is preferred, it can be changed by replacing `foldr1`
with `foldl1`.)
Lua filters must be able to traverse sequences of AST elements and to
replace elements by splicing sequences back in their place. Special
`Walkable` instances can be used for this; those are provided in a new
module `Text.Pandoc.Lua.Walk`.
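For instance, a filter function can return a list of elements, and that
list is spliced into the surrounding sequence in place of the original
element. A minimal, purely illustrative sketch:

    -- Replace each horizontal rule with an explanatory paragraph
    -- followed by the rule; the returned list is spliced back into
    -- the surrounding block sequence.
    function HorizontalRule (elem)
      return {pandoc.Para {pandoc.Str 'Section break:'}, elem}
    end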
* Require recent doctemplates. It is more flexible and
supports partials.
* Changed type of writerTemplate to Maybe Template instead
of Maybe String.
* Remove code from the LaTeX, Docbook, and JATS writers that looked in
the template for strings to determine whether it is a book or an
article, or whether csquotes is used. This was always kludgy and
unreliable. To use csquotes for LaTeX, set `csquotes` in your
variables or metadata. It is no longer sufficient to put
`\usepackage{csquotes}` in your template or header includes.
To specify a book style, use the `documentclass` variable or
`--top-level-division`.
* Change template code to use new API for doctemplates.
Return value is now Text rather than being polymorphic.
This makes room for upcoming removal of the TemplateTarget
class from doctemplates.
Other code modified accordingly, and should compile with
both current and upcoming version of doctemplates.
A new function `pandoc.mediabag.items` was added to Lua module
pandoc.mediabag. This allows users to lazily iterate over all media bag
items, loading items into Lua one-by-one. Example:

    for filename, mime_type, content in pandoc.mediabag.items() do
      -- use media bag item.
    end

This is a convenient alternative to using `mediabag.list` in combination
with `mediabag.lookup`.
Version specifiers like `PANDOC_VERSION` and `PANDOC_API_VERSION` are
turned into `Version` objects. The objects simplify version-appropriate
comparisons while maintaining backward-compatibility.
A function `pandoc.types.Version` is added as part of the newly
introduced module `pandoc.types`, allowing users to create version
objects in scripts.
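For example, a filter can check that it is running under a sufficiently
recent pandoc (the version string below is only a placeholder):

    -- Compare the running pandoc version against a required minimum.
    if PANDOC_VERSION < pandoc.types.Version '2.7.3' then
      error('this filter requires a newer pandoc')
    end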
This makes use of tasty-lua, a package to write tests in Lua
and integrate the results into Tasty output. Test output becomes
more informative: individual tests and test groups become visible
in test output. Failures are reported with helpful error messages.
The `system` Lua module provides utility functions to interact with the
operating system and the file system. E.g.

    print(pandoc.system.get_current_directory())

or

    pandoc.system.with_temporary_directory('tikz', function (dir)
      -- write and compile a TikZ file with pdflatex
    end)

Instead of `$HOME/.pandoc`, the default user data directory is
now `$XDG_DATA_HOME/pandoc`, where `XDG_DATA_HOME` defaults to
`$HOME/.local/share` but can be overridden by setting the environment
variable.
If this directory is missing, then `$HOME/.pandoc` is searched
instead, for backwards compatibility. However, we recommend
moving local pandoc data files from `$HOME/.pandoc` to
`$HOME/.local/share/pandoc`.
On Windows the default user data directory remains the same.
Closes #3582.
[API change]
* Depend on ipynb library.
* Add `ipynb` as input and output format.
* Added Text.Pandoc.Readers.Ipynb (supports both nbformat v3 and v4).
* Added Text.Pandoc.Writers.Ipynb (supports nbformat v4).
* Added ipynb readers and writers to T.P.Readers,
T.P.Writers, and T.P.Extensions. Register the
file extension .ipynb for this format.
* Add `PandocIpynbDecodingError` constructor to Text.Pandoc.Error.Error.
* Note: there is no template for ipynb.
The only thing we gained from the custom build was
automatic installation of the man page when using
'cabal install'. But custom builds cause problems,
e.g., with cross-compilation.
Installation of the man page is better handled by packagers.
Note to packagers (e.g. Debian): it may be necessary
to add a step installing the man page with the next
release.
The new Opt module has only a few dependencies. This is important for
compile times during development, as modules containing Template Haskell
are recompiled whenever a (transitive) dependency changes.
Disabling the flag will cause derivation of ToJSON and FromJSON
instances via GHC Generics instead of Template Haskell. The flag is
enabled by default, as deriving via Generics can be slow (see #4083).
* Lua: allow access to pandoc state
Lua filters and custom writers now have read-only access to most fields
of pandoc's internal state via the global variable `PANDOC_STATE`.
* Lua: allow iterating through fields of PANDOC_STATE
* Lua filters doc: describe CommonState
* Lua filters doc: mention global variable PANDOC_STATE
* Lua: add access to logs
Log messages can currently only be printed, but not decomposed.
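A small sketch of how a filter might use this; the field names follow
CommonState and are meant only as illustrations:

    function Pandoc (doc)
      -- print the input files pandoc was called with
      for _, filename in ipairs(PANDOC_STATE.input_files) do
        print(filename)
      end
      -- log messages can be printed, but not decomposed
      for _, message in ipairs(PANDOC_STATE.log) do
        print(message)
      end
    end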
These functions live in an unexported module and are used in both the man
and ms writers.
Moved groffEscape out of Text.Pandoc.Writers.Shared [this cancels the
earlier API change that added it, which came after the last release].
This fixes strong/code combination on man (should be `\f[CB]` not
`\f[BC]`), mentioned in #4973.
Updated tests.
Closes #4975.
A proper Lua traceback is added if either loading of a file or execution
of a filter function fails. This should be of help to authors of Lua
filters who need to debug their code.
Now the `write*` functions for Docbook, HTML, ICML, JATS,
Man, Ms, OPML are sensitive to `writerPreferAscii`. Previously
the to-ascii translation was done in Text.Pandoc.App, and
thus not available to those using the writer functions
directly.
In addition, the LaTeX writer is now sensitive to
`writerPreferAscii` and to `--ascii`. 100% ASCII
output can't be guaranteed, but the writer will use
commands like `\"{a}` and `\l` whenever possible,
to avoid emitting a non-ASCII character.
A new unexported module, Text.Pandoc.Groff, has been
added to store functions used in the different groff-based
writers.
This collects some of the general-purpose code from the LaTeX
reader, with the aim of making the module smaller. (We've been
having out-of-memory issues compiling this module on CI.)
Removed `--latexmathml`, `--gladtex`, `--mimetex`, `--jsmath`, `-m`,
`--asciimathml` options.
Removed `JsMath`, `LaTeXMathML`, and `GladTeX` constructors from
`Text.Pandoc.Options.HTMLMathMethod` [API change].
Removed unneeded data file LaTeXMathML.js and updated tests.
Bumped version to 2.2.
This file wasn't used in the production of documents. It's supposed to
be a thumbnail of the current document, and we can't actually produce
that ourselves. It turns out that the file contains a nonfree ICC
color calibration file, so the best thing to do would be to remove it
altogether.
Fixes: #4588
`language` is now consistently Haskell2010, and `other-extensions`
is consistently NoImplicitPrelude. Everything else is to be specified
in the module header as needed.
There is very little pptx-specific in these tests, so we abstract out
the basic testing function so it can be used for docx as well. This
should allow us to catch some errors in the docx writer that slipped
by the roundtrip testing.
Previously we had tested certain properties of the output PowerPoint
slides. Corruption, though, comes as the result of a number of
interrelated issues in the output pptx archive. This is a new
approach, which compares the output of the Powerpoint writer with
files that we know to (a) not be corrupt, and (b) to show the desired
output behavior (details below). This commit introduces three tests
using the new framework. More will follow.
The test procedure: given a native file and a pptx file, we generate a
pptx archive from the native file, and then test:
1. Whether the same files are in the two archives
2. Whether each of the contained xml files is the same. (We skip time
entries in `docProps/core.xml`, since these are derived from IO. We
just check to make sure that they're there in the same way in both
files.)
3. Whether each of the media files is the same.
Note that steps 2 and 3, though they compare multiple files, are one
test each, since the number of files depends on the input file (if
there is a failure, it will only report the first failed file
comparison in the test failure).
We introduce a new module, Text.Pandoc.Readers.Docx.Fields, which
contains a simple parsec parser. At the moment, only simple hyperlink
fields are accepted, but that can be extended in the future.
There are two steps in the conversion: a conversion from pandoc to a
Presentation datatype modeling pptx, and a conversion from
Presentation to a pptx archive. The two steps were sharing the same
state and environment, and the code was getting a bit
spaghetti-ish. This splits the conversion into two separate
modules: T.P.W.Powerpoint.Presentation, which defines the
Presentation datatype and goes Pandoc->Presentation,
and T.P.W.Powerpoint.Output, which goes Presentation->Archive.
Text.Pandoc.Writers.Powerpoint is now a thin wrapper around the two modules.
If you use a custom syntax definition that refers to a syntax
you haven't loaded, pandoc will now complain when it is highlighting
the text, rather than at the start.
This saves a huge performance hit from the `missingIncludes` check.
Closes #4226.
This version fixes a bug that made it difficult to handle failures while
getting lists or a Map from Lua. A bug in pandoc, which made it
necessary to always pass a tag when using MetaList or MetaBlock, is
fixed as a result. Using the pandoc module's constructor functions for
these values is now optional (though still recommended).
* Previously we ran all lua filters before JSON filters.
* Now we run filters in the order they are presented on the
command line, whether lua or JSON.
* The type of `applyFilters` has changed (incompatible API change).
* `applyLuaFilters` has been removed (incompatible API change).
* Bump version to 2.1.
See #4196.
This is the beginning of a test suite for the powerpoint
writer. Initial tests are for the number of slides.
Note that at the moment it does not test against corruption in
Microsoft PowerPoint; it just tests that certain outcomes work as
expected. More tests will be added.
This test framework uses the PandocPure monad introduced with Pandoc 2.0.
The level of headers in included files can be shifted to a higher level
by specifying a minimum header level via the `:minlevel` parameter. E.g.
`#+include: "tour.org" :minlevel 1` will shift the headers in tour.org
such that the topmost headers become level 1 headers.
Fixes: #4154
The org reader test file had grown large, to the point that editor
performance was negatively affected in some cases. The tests are spread
over multiple submodules, and re-combined into a tasty TestTree in the
main org reader test file.
The same init file (`data/init`) that is used to set up the Lua
interpreter for Lua filters is also used to set up the interpreter for
custom Lua writers.
Support writing <fig> and <table-wrap> elements with <title> and
<caption> inside them by using Divs with class set to one of
fig, table-wrap, or caption. The title is included as a Heading,
so the constraint on where Heading can occur is also relaxed.
Also leaves out empty alt attributes on links.
This fixes a bug in 2.0.4, whereby pandoc could not
read the theme files generated with `--print-highlight-style`.
It also fixes some CSS issues involving line numbers.
Highlighted code blocks are now enclosed in a div with class
sourceCode.
Highlighting CSS no longer sets a generic color for pre
and code; we only set these for class `sourceCode`.
This will close #4133 and #4128.
The file `init.lua` is used to initialize the Lua interpreter which is
used in Lua filters. This gives users the option to require libraries
which they want to use in all of their filters, and to extend default
modules.
The integration with Lua's package/module system is improved: A
pandoc-specific package searcher is prepended to the searchers in
`package.searchers`. The modules `pandoc` and `pandoc.mediabag` can now
be loaded via `require`.
The List module is automatically loaded, but not assigned to a global
variable. It can be included in filters by calling `List = require
'List'`.
Lists of blocks, lists of inlines, and lists of classes are now given
`List` as a metatable, making working with them more convenient. E.g.,
it is now possible to concatenate lists of inlines using Lua's
concatenation operator `..` (requires at least one of the operands to
have `List` as a metatable):

    function Emph (emph)
      local s = {pandoc.Space(), pandoc.Str 'emphasized'}
      return pandoc.Span(emph.content .. s)
    end

Closes: #4081
The `text` module is preloaded in Lua. The module contains some UTF-8
aware string functions, implemented in Haskell. The module is loaded on
request only, e.g.:

    text = require 'text'

    function Str (s)
      s.text = text.upper(s.text)
      return s
    end
Refactored some code from Text.Pandoc.Lua.PandocModule
into new internal module Text.Pandoc.Lua.Filter.
Add `walk_inline` and `walk_block` to the pandoc Lua module.
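A sketch of how `walk_block` could be used from a filter (illustrative
only):

    -- Upper-case every Str inside a Div, leaving other blocks untouched.
    function Div (div)
      return pandoc.walk_block(div, {
        Str = function (s) return pandoc.Str(string.upper(s.text)) end
      })
    end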
The line identifiers are built using the code block's identifier
as a prefix. If the code block has null identifier, we use
"cb1", "cb2", etc.
Closes #4031.
This prevents the problem with extra space around highlighted
code blocks (closes #3996).
Note that we no longer put an enclosing div around highlighted
code blocks. The pre is the outer element, just as for unhighlighted
blocks.
Previously `\include` wouldn't work if the included file
contained, e.g., a begin without a matching end.
We've changed the Tok type so that it stores a full SourcePos,
rather than just a line and column. So tokens keep track
of the file they came from. This allows us to use a simpler
method for includes, which doesn't require parsing the included
document as a whole.
Closes #3971.
Now all the guts of openURL have been put into openURL from
Class. openURL is now sensitive to stRequestHeaders in CommonState
and will add these custom headers when making a request.
It no longer looks at the USER_AGENT environment variable,
since you can now set the `User-Agent` header directly.
We now use the default.latex template for both latex and beamer.
It contains conditionals for the beamer-specific things.
`pandoc -D beamer` will return this template.
Stack instances for common data types are now provided by hslua. The
instance for Either was useful only for a very specific case; the
function that was using the `ToLuaStack Either` instance was rewritten
to work without it.
Closes: #3805
We assume that comments are defined as parsed by the
docx reader:

    I want <span class="comment-start" id="0" author="Jesse Rosenthal"
    date="2016-05-09T16:13:00Z">I left a comment.</span>some text to
    have a comment <span class="comment-end" id="0"></span>on it.

We assume also that the id attributes are unique and properly
matched between comment-start and comment-end.
Closes #2994.
* readDataFile, readDefaultDataFile, getReferenceDocx,
getReferenceODT have been removed from Shared and
moved into Class. They are now defined in terms of
PandocMonad primitives, rather than being primitive
methods of the class.
* toLang has been moved from BCP47 to Class.
* NoTranslation and CouldNotLoadTranslations have
been added to LogMessage.
* New module, Text.Pandoc.Translations, exporting
Term, Translations, readTranslations.
* New functions in Class: translateTerm, setTranslations.
Note that nothing is loaded from data files until
translateTerm is used; setTranslations just sets the
language to be used.
* Added two translation data files in data/translations.
* LaTeX reader: Support `\setmainlanguage` or `\setdefaultlanguage`
(polyglossia) and `\figurename`.
In addition to `-Wall`:
`-Wincomplete-uni-patterns -Wincomplete-record-updates -Wredundant-constraints -Wcompat -Wnoncanonical-monad-instances -Wnoncanonical-monadfail-instances`
We no longer have a separate readGFM and writeGFM;
instead, we'll use readCommonMark and writeCommonMark
with githubExtensions.
It remains to implement these extensions conditionally.
Closes #3841.
This uses bindings to GitHub's fork of cmark, so it should parse
gfm exactly as GitHub does (excepting certain postprocessing
steps, involving notifications, emojis, etc.).
* Added Text.Pandoc.Readers.GFM (exporting readGFM)
* Added Text.Pandoc.Writers.GFM (exporting writeGFM)
* Added `gfm` as input and output format.
Note that tables are currently always rendered as HTML
in the writer; this can be improved when CMarkGFM supports
tables in output.
Added TikiWiki reader, including tests and documentation.
It's probably not *complete*, but it works pretty well and handles all
the basics (and some not-so-basics).
This rewrite is primarily motivated by the need to
get macros working properly. A side benefit is that the
reader is significantly faster (27s -> 19s in one
benchmark, and there is a lot of room for further
optimization).
We now tokenize the input text, then parse the token stream.
Macros modify the token stream, so they should now be effective
in any context, including math. Thus, we no longer need the clunky
macro processing capacities of texmath.
A custom state LaTeXState is used instead of ParserState.
This, plus the tokenization, will require some rewriting
of the exported functions rawLaTeXInline, inlineCommand,
rawLaTeXBlock.
* Added Text.Pandoc.Readers.LaTeX.Types (new exported module).
Exports Macro, Tok, TokType, Line, Column. [API change]
* Text.Pandoc.Parsing: adjusted type of `insertIncludedFile`
so it can be used with token parser.
* Removed old texmath macro stuff from Parsing.
Use Macro from Text.Pandoc.Readers.LaTeX.Types instead.
* Removed texmath macro material from Markdown reader.
* Changed types for Text.Pandoc.Readers.LaTeX's
rawLaTeXInline and rawLaTeXBlock. (Both now return a String,
and they are polymorphic in state.)
* Added orgMacros field to OrgState. [API change]
* Removed readerApplyMacros from ReaderOptions.
Now we just check the `latex_macros` reader extension.
* Allow `\newcommand\foo{blah}` without braces.
Fixes #1390.
Fixes #2118.
Fixes #3236.
Fixes #3779.
Fixes #934.
Fixes #982.
* New module Text.Pandoc.Readers.Vimwiki, exporting readVimwiki [API change].
* New input format `vimwiki`.
* New data file, `data/vimwiki.css`, for displaying the HTML produced by this reader and pandoc's HTML writer in the style of vimwiki's own HTML export.
Support for the `#+INCLUDE:` file inclusion mechanism was added.
Recognized include types are *example*, *export*, *src*, and normal org
file inclusion. Advanced features like line numbers and level selection
are not implemented yet.
Closes: #3510
Supporting two completely different libraries for fetching
from URLs makes it difficult to trap errors, because of
different error types expected from the libraries.
There's no clear reason not to build with these https-capable
libraries.
Writer helper functions were defined in the top-level Text.Pandoc
module. These functions are moved to the Writers submodule to enable
reuse in other submodules.
Reader helper functions were defined in the top-level Text.Pandoc
module. These functions are moved to the Readers submodule to enable
reuse in other submodules.
The Lua filter and custom Lua writer systems defined very similar
StackValue instances for strings and tuples. These instance definitions
are extracted to a separate module to enable sharing.
These are caught (and lead to exit) in pandoc.hs, but
other uses of Text.Pandoc.App may want to recover in another
way.
Added PandocAppError to PandocError (API change).
This is a stopgap: later we should have a separate constructor
for each type of error.
Also fixed uses of 'exit' in Shared.readDataFile, and
removed 'err' from Shared (API change).
Finally, removed the dependency on extensible-exceptions.
See #3548.
Plain text readers are exposed to lua scripts via the `pandoc.reader`
submodule, which is further subdivided by format. Converting e.g. a
markdown string into a pandoc document is possible from within lua:

    doc = pandoc.reader.markdown.read_doc("Hello, World!")

A `read_block` convenience function is provided for all formats; it
still parses the whole string but returns only the first block as the
result.
Custom reader options are not supported yet, default options are used
for all parsing operations.
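Based on the description above, parsing a single block might look like
this (sketch only):

    -- parse a markdown snippet; read_block keeps only the first block
    local block = pandoc.reader.markdown.read_block('Hello, *World*!')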
I think Template Haskell is robust enough now across platforms
that this will work.
Motivation: file-embed gives us better dependency tracking: if a data
file changes, ghc/stack/cabal know to recompile the Data module.
This also removes hsb2hs as a build dependency.
The 0.5.0 release of hslua fixes problems with lua C modules on linux.
The signature of the `loadstring` function changed, so a compatibility
wrapper is introduced to allow both 0.4.* and 0.5.* versions to be used.
* New module: Text.Pandoc.Writers.Ms.
* New template: default.ms.
* The writer uses texmath's new eqn writer to convert math
to eqn format, so a ms file produced with this writer
should be processed with `groff -ms -e` if it contains
math.
* Add `--lua-filter` option. This works like `--filter` but takes pathnames of special lua filters and uses the lua interpreter baked into pandoc, so that no external interpreter is needed. Note that lua filters are all applied after regular filters, regardless of their position on the command line.
* Add Text.Pandoc.Lua, exporting `runLuaFilter`. Add `pandoc.lua` to data files.
* Add private module Text.Pandoc.Lua.PandocModule to supply the default lua module.
* Add Tests.Lua to tests.
* Add data/pandoc.lua, the lua module pandoc imports when processing its lua filters.
* Document in MANUAL.txt.
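For reference, a minimal Lua filter is just a set of functions named
after AST elements; the file name and invocation below are illustrative:

    -- smallcaps.lua: render all emphasized text as small caps.
    -- Run with: pandoc --lua-filter smallcaps.lua input.md -o out.html
    function Emph (elem)
      return pandoc.SmallCaps(elem.content)
    end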