They weren't ever being used, and nobody really uses the return value of the printf family of functions anyway. They return how many bytes were actually printed, but if that's less than you expected, there's not much you can do about it. Also, the vlog_* functions were computing it inaccurately, because I only set the return value from the vprintf() call even though other print functions are called as well. Regardless, there's no reason to have a return value here.
The hilariously-named WIN32_LEAN_AND_MEAN slims down the number of header files pulled in by the already-massive `windows.h`. I know people say Moore's law and precompiled headers and all that (well, we don't use precompiled headers), but they kind of forget about virtual machines, and there's no reason not to define this and trim the header count anyway.
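For reference, all it takes is defining the macro before the include (or on the command line via the build system):

    /* Must be defined before the include to have any effect. */
    #define WIN32_LEAN_AND_MEAN
    #include <windows.h>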
This started when I saw the warning that GetVersionExW was deprecated,
then looked it up and found StackOverflow answers saying that you should
basically detect the feature directly instead of checking the version,
which makes sense to me. Then I found that I could probably detect color
support by using GetConsoleMode and GetStdHandle. But then I asked
myself what the point was unless you could get color output directly in
the terminal, which it seems like you really can't if your app is a GUI
app. (I have no idea why Windows makes this pointless distinction
between console and GUI apps...)
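For reference, the detection I had in mind would have looked roughly like this (standard Win32 console API; a sketch, not code that ever shipped):

    #define WIN32_LEAN_AND_MEAN
    #include <windows.h>

    static BOOL console_supports_color(void)
    {
        HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);
        DWORD mode;
        if (out == INVALID_HANDLE_VALUE || !GetConsoleMode(out, &mode))
        {
            /* No attached console -- which is where a GUI app always ends up. */
            return FALSE;
        }
        /* Asking for VT processing only succeeds on consoles that can do ANSI color. */
        return SetConsoleMode(out, mode | ENABLE_VIRTUAL_TERMINAL_PROCESSING);
    }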
I tested Command Prompt, PowerShell, Windows Terminal (which is just
PowerShell again), and even Git Bash (MINGW64), but none of them will
ever give the console output of a GUI app such as VVVVVV. The closest I
got is that Git Bash doesn't seem to detach the process, but it still produces no output.
At this point I feel like it's not worth keeping this code around if it didn't even work in the first place, so I'm removing it. People can
always enable color by using the -forcecolor command-line argument
anyway.
I'm fine with putting the release version in a header file, thus requiring every file that includes it to be recompiled whenever it changes, simply because it's not supposed to change that often.
The SDL_arraysize is necessary because sometimes we'll have subreleases
(e.g. 2.4.1, 2.4.2, 2.4.3), and who knows, maybe we'll get to 2.10
someday.
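To illustrate the idea (a sketch with made-up names, not the actual header contents):

    #include <SDL.h>
    #include <stdio.h>

    /* Two components today, but a subrelease would make this {2, 4, 1}, etc. */
    static const int RELEASE_VERSION[] = {2, 4};

    static void print_release_version(void)
    {
        size_t i;
        for (i = 0; i < SDL_arraysize(RELEASE_VERSION); i++)
        {
            printf(i == 0 ? "%d" : ".%d", RELEASE_VERSION[i]);
        }
    }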
This reworks how the commit hash and date are compiled so that if they're changed (and they change often), only one source file needs to be recompiled to update them everywhere in the game, no matter how many source files use the hash or date.
The commit hash and date are now declared in InterimVersion.h (and they need `extern "C"` guards, because otherwise linking fails on MSVC, because MSVC is stupid).
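Roughly, the header now looks like this (identifier names here are illustrative):

    /* InterimVersion.h */
    #ifdef __cplusplus
    extern "C" {
    #endif

    extern const char* INTERIM_COMMIT;
    extern const char* COMMIT_DATE;

    #ifdef __cplusplus
    } /* extern "C" */
    #endif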
To do this, what now happens is that upon every rebuild,
InterimVersion.in.c is processed to create InterimVersion.out.c, then
InterimVersion.out.c is compiled into its own static library that is
then linked with VVVVVV.
(Why didn't I just simply add it to the list of VVVVVV source files?
Well, doing it _now_ does nothing because at that point the horse is
already out of the barn, and the VVVVVV executable has already been
declared, so I can't modify its sources. And I can't do it before
either, because we depend on the VVVVVV executable existing to do the
interim version logic. I could probably work around this by cleverly
moving around lines, but that'd separate related logic from each other.)
And yes, the naming convention has changed. Not only did I rename
Version to InterimVersion (to clearly differentiate it from
ReleaseVersion, which I'll be adding later), I also named the files
InterimVersion.in.c and InterimVersion.out.c instead of
InterimVersion.c.in and InterimVersion.c.out. I needed to put the file
extension on the end because otherwise CMake wouldn't recognize what
kind of language it is, and I mean like yeah duh of course it doesn't,
my text editor doesn't recognize it either.
I thought all of these were removed earlier, but apparently not. Anyways, add_definitions is bad because it pollutes the definitions of every single target; we should be using target_compile_definitions instead.
I missed this because, to check for all instances of 2.0.22, I did `rg -F '2.0.22'`. But ripgrep doesn't search in hidden directories by default, so the actual command to run is `rg -F. '2.0.22'`, which makes it search hidden directories too.
This updates all references to SDL 2.0.22 to SDL 2.24.0, including the
Docker container that I maintain for Linux CI.
Be warned, this release of SDL updates the versioning scheme to be less dumb. The previous version was 2.0.22; this release is 2.24.0 (so the last number can be properly used for patch-level releases).
This option is enabled by default and, if the compiler supports it, replaces the absolute paths of all source files with relative paths in the compiled binary. Of course, this isn't needed if you compile with all paths removed anyways (e.g. in Release mode).
The purpose is to help make builds reproducible and to remove any
potentially sensitive information about the user or the user's system
from the compiled binary.
Both Clang and GCC support -fdebug-prefix-map, -fmacro-prefix-map, and
-ffile-prefix-map. In particular, -ffile-prefix-map is just a flag that
does both -fdebug-prefix-map and -fmacro-prefix-map.
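To illustrate what the macro half of this does (a hypothetical path, not anyone's real build directory):

    #include <stdio.h>

    int main(void)
    {
        /* Built with -fmacro-prefix-map=/home/someone/VVVVVV=., this prints a
           project-relative path (e.g. "./test.c") instead of the absolute
           "/home/someone/VVVVVV/test.c". -fdebug-prefix-map does the same
           remapping for the paths stored in the debug info. */
        printf("%s\n", __FILE__);
        return 0;
    }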
According to https://reproducible-builds.org/docs/build-path/ ,
-fdebug-prefix-map is available in all GCC versions but only available
starting from Clang 3.8, and -fmacro-prefix-map and -ffile-prefix-map
are available since GCC 8 and Clang 10. So we check the compiler version and use whichever of these flags the compiler supports.
This does make debugging a bit more annoying, but there are a couple
ways to rectify this. Either disable it with
-DREMOVE_ABSOLUTE_PATHS=OFF, or add a `.gdbinit` that consists of
set substitute-path . ../..
so that `.` is considered to be `../..`. Of course, if you need to,
replace `../..` with the actual source directory path (in my case it's
`../../..` because I place my build folders in another subdirectory to
have multiple build folders in one directory).
This doesn't need to be a global `.gdbinit`; it can be a directory-specific `.gdbinit` (similar to how `.gitignore`s can also be directory-specific). But then you need to add `add-auto-load-safe-path` to your global `.gdbinit` so it will load any directory-specific `.gdbinit`s.
The above is for GDB; I don't know what (if anything) needs to be done for LLDB, since I don't use LLDB.
Fixes #889.
Whereas all `SDL_assert`s will go away when compiling with optimization
flags and all plain `assert` calls (used in PhysFS) will go away when
compiling in Release mode, FAudio has a bunch of debug stuff that needs
to be explicitly disabled with its own `FAUDIO`-prefixed flag.
To do this in Release mode, we need to use generator expressions for dumb CMake reasons. Basically, checking the CMAKE_BUILD_TYPE variable will not work for certain generators (Ninja, Visual Studio) because they only specify the build type at build time, not at generation/configuration time.
This is so flags that apply globally (i.e. to the game and all the static libraries it's compiled with), such as /MT on MSVC, can be put in a list. Along with putting all the static libraries in a list, this removes the need to repeat each flag for each static library; we can just use a foreach loop instead.
(These global compile flags of course don't cover the places where we meddle with CMAKE_C_FLAGS and CMAKE_CXX_FLAGS directly, because we need to do that in order to make sure the C and C++ standards are set properly.)
This fixes a regression where the game ignored the number of frames you held down a direction if you released the direction during death.
Previously, the game only checked the number of frames you held down a direction if you were able to control the player. If you weren't able to control the player (e.g. during the death animation), then the number of frames it counted didn't change. This also meant that if you were holding a direction before you died, but released it during death, the game wouldn't zero out the number of frames you had held it.
This behavior was useful because it meant you could keep the
deceleration momentum that you normally get by holding a direction for 5
frames just by holding a direction for less than 5 frames after dying,
if you had the rest of the hold frames before you died. This behavior is
what's used in https://tasvideos.org/7575S at around frame 7200.
Unfortunately, #609 made the direction-hold processing happen even if the player didn't have control, meaning it would zero the hold frames during the death animation in the TAS, thus desyncing it when, after Viridian respawns, it performs the maneuver that relied on the extra momentum.
The solution here is to just add the check back in again.
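In rough shape, the restored logic is this (a sketch with made-up names; the real variables live in the input handling):

    /* Sketch: only track the direction hold while the player can actually move. */
    static void update_direction_hold(bool has_control, bool pressing_left, bool pressing_right, int* frames_held)
    {
        if (!has_control)
        {
            /* The restored check: during the death animation the counter is left
               alone, preserving the pre-death momentum the TAS depends on. */
            return;
        }

        if (pressing_left || pressing_right)
        {
            (*frames_held)++;
        }
        else
        {
            *frames_held = 0;
        }
    }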
Fixes #887.
While fixing #885, I noticed that I had a bunch of
`special/stdin.vvvvvv` entries saved in my `levelstats.vvv`. At once I
knew that the dumb `special/stdin` hack that actually checks if the
filename passed is `special/stdin` was to blame.
When STDIN playtesting was first merged, I knew in the back of my mind that it was a bit of a dumb hack, but I didn't know it would have consequences like showing up in `levelstats.vvv`. For now, I'll just
have to patch it, but hopefully in the future I'll remove the dumb hack
entirely. Commenting both instances of the dumb hack with instructions
to grep for it should help maintainers out.
This is useful to investigate any TAS desync/reproducibility issues
relating to RNG, because even though I specifically separated the
Gravitron RNG away from other RNG and made it not dependent on the
system libc rand() function, there are still apparently some differences in RNG execution between systems, resulting in TASVideos submission 7575 ( https://tasvideos.org/7575S ) not syncing for anyone except the author.
It seems that SDL_GetTicks(), which is used to seed the xoshiro RNG, is not reliably consistent between systems, so in the future I will probably replace it with a counter that is incremented each frame from game startup, which should be more reliable.
This fixes the following warnings:
desktop_version/src/Music.cpp: At global scope:
desktop_version/src/Music.cpp:240:23: warning: non-static data member initializers only available with ‘-std=c++11’ or ‘-std=gnu++11’ [-Wc++11-extensions]
240 | Uint8 *wav_buffer = NULL;
| ^
desktop_version/src/Music.cpp:414:32: warning: non-static data member initializers only available with ‘-std=c++11’ or ‘-std=gnu++11’ [-Wc++11-extensions]
414 | Uint8* decoded_buf_playing = NULL;
| ^
desktop_version/src/Music.cpp:415:32: warning: non-static data member initializers only available with ‘-std=c++11’ or ‘-std=gnu++11’ [-Wc++11-extensions]
415 | Uint8* decoded_buf_reserve = NULL;
| ^
desktop_version/src/Music.cpp:416:21: warning: non-static data member initializers only available with ‘-std=c++11’ or ‘-std=gnu++11’ [-Wc++11-extensions]
416 | Uint8* read_buf = NULL;
| ^
These warnings are because the non-static data members (i.e. data members that will be different for each instance of a class) are being initialized at the same time as they're declared, which determines their value when the class instance is created. However, this is a C++11 feature, and we don't use C++11. Luckily, we don't need it here, and it's in fact redundant, because we already zero out the class instance in its constructor using SDL_zerop(). Therefore, we should remove these initializers to stay compliant with C++03.
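In other words, the declarations just lose their initializers, since the constructor already does the zeroing. A sketch (names approximate; the real code is in Music.cpp):

    #include <SDL.h>

    struct MusicTrack
    {
        MusicTrack(void)
        {
            SDL_zerop(this); /* zeroes every member, making per-member "= NULL" redundant */
        }

        Uint8* wav_buffer; /* no in-class "= NULL" -- that's the C++11 extension GCC warns about */
        /* ... */
    };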
Instead, for any number that isn't in the list of number words, just return the regular Arabic numeral representation (i.e. just convert it to a string). It's better than having "Lots" or "???", neither of which really tells you what the number actually is.
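A sketch of the fallback (hypothetical names; the real lookup lives in the game's helper code):

    #include <SDL.h>
    #include <string>

    static const char* const number_word_table[] = {
        "Zero", "One", "Two" /* ... */
    };

    static std::string number_words(int n)
    {
        if (n >= 0 && n < (int) SDL_arraysize(number_word_table))
        {
            return number_word_table[n];
        }

        /* Not in the list: fall back to plain digits, e.g. 215 -> "215". */
        char digits[16];
        SDL_snprintf(digits, sizeof(digits), "%d", n);
        return digits;
    }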
This flag makes it so the MSVC runtime libraries are statically linked.
This avoids requiring Windows users to have these libraries installed.
Apparently /MT stands for "MultiThreaded", and there's a bit of a
history there where originally by default you could only have a
single-threaded library, and then the multi-threaded flags were added in
later.
First I tried doing target_compile_options on VVVVVV, but then got a
linker error. Then I tried doing add_compile_options because I figured
/MT had to be applied everywhere, and it seemed to work, but it still
linked to the runtime libraries. Apparently it was being overridden.
Then I tried target_compile_options again but this time did it to
everything, and that linked correctly and also removed the runtime
dependency. I would've tried using the MSVC_RUNTIME_LIBRARY property
- along with the CMP0091 policy - but those were only introduced in
CMake 3.15.
You can verify that a binary is built without these runtime dependencies by installing LLVM and running llvm-readobj --needed-libs path/to/binary. This is the
output for a binary with runtime dependencies:
infoteddy@fedorarune ~/d llvm-readobj --needed-libs VVVVVV.exe
File: VVVVVV.exe
Format: COFF-i386
Arch: i386
AddressSize: 32bit
NeededLibraries [
ADVAPI32.dll
KERNEL32.dll
MSVCP140.dll
SDL2.dll
SHELL32.dll
USER32.dll
VCRUNTIME140.dll
api-ms-win-crt-heap-l1-1-0.dll
api-ms-win-crt-locale-l1-1-0.dll
api-ms-win-crt-math-l1-1-0.dll
api-ms-win-crt-runtime-l1-1-0.dll
api-ms-win-crt-stdio-l1-1-0.dll
api-ms-win-crt-string-l1-1-0.dll
api-ms-win-crt-time-l1-1-0.dll
api-ms-win-crt-utility-l1-1-0.dll
]
And this is the output for a binary with those dependencies statically linked in:
infoteddy@fedorarune ~/d llvm-readobj --needed-libs VVVVVV.exe
File: VVVVVV.exe
Format: COFF-i386
Arch: i386
AddressSize: 32bit
NeededLibraries [
ADVAPI32.dll
KERNEL32.dll
SDL2.dll
SHELL32.dll
USER32.dll
]
As already described in cc61194bed, as
well as Ved's commits from the last almost two weeks, starting VVVVVV
from Ved for playtesting could be made a lot faster by "preloading" the
game - letting it do all its asset loading in the background without
creating a window - and then waiting until the level is passed in via
stdin. There's only one problem left with this approach: VVVVVV currently expects the starting position to be passed via command line arguments, and the starting position isn't known yet at the time we'd like to start VVVVVV.
Therefore, this commit allows passing the starting position via the
level XML, instead of via arguments.
The extra XML looks like this, and is added next to the <Data> tag:
<Playtest>
<playx>214</playx>
<playy>112</playy>
<playrx>100</playrx>
<playry>100</playry>
<playgc>0</playgc>
<playmusic>4</playmusic>
</Playtest>
This is handled similarly to how the equivalent arguments are handled:
when the level metadata is loaded for CLI playtesting, we also try to
find this tag, and if it exists, it sets the same variables that the
arguments would have otherwise set.
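A sketch of that lookup with TinyXML2 (which the game already uses for level XML); names here are illustrative:

    #include <tinyxml2.h>

    /* `root` is the level document's top-level element. */
    static void load_playtest_position(tinyxml2::XMLElement* root, int* x, int* y)
    {
        tinyxml2::XMLElement* playtest = root->FirstChildElement("Playtest");
        if (playtest == NULL)
        {
            return; /* no tag: fall back to whatever the arguments (if any) set */
        }

        tinyxml2::XMLElement* el;
        if ((el = playtest->FirstChildElement("playx")) != NULL)
        {
            *x = el->IntText(0); /* same variable -playx would have set */
        }
        if ((el = playtest->FirstChildElement("playy")) != NULL)
        {
            *y = el->IntText(0);
        }
        /* ...and likewise for playrx, playry, playgc and playmusic. */
    }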
There's always been a bit of an inconsistency in the game where enabling invincibility made spikes solid for everything, so enemies and moving platforms would treat spikes as solid and bounce off of them.
This fixes that by adding an `invincible` parameter to the collision functions, so the functions will only treat spikes as solid if that parameter is true, and the parameter is only true when the function is called for a humanoid entity while invincibility mode is enabled.
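The shape of the change, roughly (function and helper names here are made up):

    /* Hypothetical helpers standing in for the game's real tile queries. */
    bool is_spike(int tile);
    bool is_wall(int tile);

    /* A collision helper now takes the caller's invincibility status. */
    static bool tile_is_solid(int tile, bool invincible)
    {
        if (is_spike(tile))
        {
            /* Spikes only act solid for an invincible humanoid; enemies and
               platforms always pass false here, so they sail through. */
            return invincible;
        }
        return is_wall(tile);
    }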
Also, for clarity, `spikecollide` is renamed to `towerspikecollide`
because it's only used for tower spikes. And as a small optimization,
`checktowerspikes` returns early if invincibility mode is enabled.
This is a minor optimization to streamline the experience of Ved
playtesting. Previously, the user would have to wait for all the assets
to load when launching playtesting (most of the time, I suspect, is
taken up by loading music from a vvvvvvmusic blob). With this
optimization, however, the game can be launched in the background and
its assets can be loaded, while it blocks on STDIN input. During this
time, the user in Ved will be choosing where to start playtesting. After Ved provides STDIN input, the window is created and appears instantly.
This also fixes a related issue in which providing an invalid
playtesting level name would result in a brief window flash that gets
instantly destroyed. With this, if the level is invalid then no window
is ever shown at all.
Probably should have done this earlier in 2.3, but better late than
never.
This makes it easier for third-party programs like Ved to detect what
version of the game this is.
Slightly quick-n-dirty for now; I'll de-duplicate the version number later, and add the commit hash and date if applicable.
Doesn't hurt to keep things up to date. (The other submodules, UTF-CPP
and LodePNG, haven't been updated since then.) In particular, PhysFS has
worked around a bug with Windows Explorer, so people should no longer
have issues modifying their data.zip with Explorer and then being unable
to have assets in their game (as reported in icculus/physfs#24 ).
Without this, going in-game and opening the map while graphics are missing will result in a segfault. This is because even if an image doesn't exist, it's still pushed into the `images` std::vector as a NULL pointer. And it segfaults because we dereference it (to get things like its width and height). In contrast, passing the pointer to SDL is fine, because SDL always checks for NULL first.
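The fix is essentially a guard like this before the dereference (a sketch; the real code is in the map rendering):

    #include <SDL.h>
    #include <vector>

    extern std::vector<SDL_Surface*> images; /* may now hold NULL entries */

    static void draw_map_image(int t)
    {
        SDL_Surface* image = images[t];
        if (image == NULL)
        {
            return; /* missing graphics file: nothing to draw, no image->w / image->h to read */
        }
        /* ...the normal drawing path, which was already NULL-safe on the SDL side. */
    }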
There are three different places where we call PHYSFS_openRead. This commit makes sure all of them print a statement upon failure, along with the PhysFS reason for the failure, and assigns a log level to each print as follows (a sketch of the failure path comes after the list):
- FILESYSTEM_loadFileToMemory: Debug print (previously no print existed in the first place), because some files (such as font.txt) may or may not be needed, and if one isn't needed, there's no reason to print and worry the user. The game will error anyway if a critical file like a graphics file is missing.
- FILESYSTEM_loadBinaryBlob: Debug print (previously an info print), because the file sometimes isn't needed, as with mmmmmm.vvv. I remember one user being worried that the game printed "Unable to open file mmmmmm.vvv" when that file isn't critical, unlike vvvvvvmusic.vvv (and that file is assumed to exist if data.zip exists anyways). Though maybe we should move to loose-leaf files to save on memory usage (and so we don't have to use special tools to modify a binary blob)...
- FILESYSTEM_loadZip: Error print. If we're calling this function, we
really do expect the zip to be loaded, and if it fails because we
can't open the file in the first place, then it would be good to know
why.
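The failure path in each of the three callers now looks roughly like this, with the log level varying as listed above (a sketch; vlog_* is the game's own logging):

    #include <physfs.h>
    #include "Vlogging.h" /* the game's vlog_* declarations (assumed header name) */

    static bool open_for_read(const char* name, PHYSFS_File** handle_out)
    {
        PHYSFS_File* handle = PHYSFS_openRead(name);
        if (handle == NULL)
        {
            /* FILESYSTEM_loadZip would use vlog_error here instead. */
            vlog_debug(
                "Could not open %s: %s",
                name, PHYSFS_getErrorByCode(PHYSFS_getLastErrorCode())
            );
            return false;
        }

        *handle_out = handle;
        return true;
    }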
This commit adds a new string formatting system to replace uses of
`SDL_snprintf` and string concatenation.
Making our own string formatting system has been briefly discussed
during the review of the localization branch, and on the VVVVVV
Discord. It's inspired by Python's format strings, but simpler.
This is primarily to benefit localization - strings will be easier to
understand (`Now using %s Tileset` → `Now using {area} Tileset`,
`"%s remain"` → `"{n_crewmates|wordy} remain"`), translators can change
the word order for their language's grammar (`%1$s` is a POSIX
extension), and this system is also less error-prone (making the format
string not align with the actual arguments won't result in a crash or
UB).
It also integrates our needs better: in particular, the "wordy" numbers no longer need a `help.number_words(n).c_str()` at the call site, translators can opt in and out of wordy numbers per string, and this should also make it easier to solve #859.
This commit adds the formatting system itself, and changes one
`SDL_snprintf` in the code to use it as a small demo (the rest should
probably be done in the localization branch to avoid more unneeded
work).
The system is described in full detail in VFormat.h and in the pull
request description.
2.0.22 just released 40 minutes ago.
This also updates the `Dockerfile` to use the URL from the GitHub
releases page, instead of SDL's servers.
I've also pushed a new Docker container to
`ghcr.io/infoteddy/vvvvvv-build`.
This removes the magic numbers previously used for controlling the fade
mode, which are really not readable at all unless you already know what
they mean.
0: FADE_NONE
1: FADE_FULLY_BLACK
2: FADE_START_FADEOUT
3: FADE_FADING_OUT
4: FADE_START_FADEIN
5: FADE_FADING_IN
There is also the macro FADEMODE_IS_FADING, for when the intention is only to check whether the game is currently fading; that intent wasn't clearly conveyed previously.
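The enum and macro look roughly like this (the macro body shown is one plausible definition; the real one is in the source):

    enum fademodes
    {
        FADE_NONE,          /* 0 */
        FADE_FULLY_BLACK,   /* 1 */
        FADE_START_FADEOUT, /* 2 */
        FADE_FADING_OUT,    /* 3 */
        FADE_START_FADEIN,  /* 4 */
        FADE_FADING_IN      /* 5 */
    };

    #define FADEMODE_IS_FADING(mode) \
        ((mode) == FADE_FADING_OUT || (mode) == FADE_FADING_IN)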
I also took the opportunity to clean up the style of any lines I touched. This included rewriting if-else chains into switch-cases, turning one-liner if statements into proper blocks, fixing up comments, and even commenting the `fademode == FADE_NONE` check on the tower spike checks (it was previously undocumented why that check was there, but I think I know why).
As for type safety, we already get some by changing the variable types to the enum. Assignment is prohibited without a cast. But, apparently, comparison against plain integers is perfectly legal and won't even give so much as a warning. To work around this and make absolutely sure all existing comparisons now use the enum, I temporarily changed it to an `enum class`, which is a C++11 feature that makes such comparisons illegal. Unfortunately, it also scopes the values inside the enum's name, so I had to temporarily define macros to keep the existing code working. I also had to temporarily up the standard in CMakeLists.txt to get it to compile. But after all that was done, I found the rest of the places where a comparison to an integer was used, and fixed them.
Previously, it was copy-pasted and slightly different, when really, they ought to both be the exact same code.
It kind of pains me that the room name, glitch name, and hidden name don't own their own memory, but that's to be addressed later.
What's a bit annoying is that the `temp` variable used in `teleporterrender` also ends up being reused later in the function. In this case, I opted to just redeclare it where it's used anyway, to make things clearer.
Apart from `teleporterrender` no longer calling `map.area` or caring
about `map.custommode`, it also no longer cares about
`graphics.fademode` being 0. I could never actually get this condition
to be false in practice, and I have absolutely no idea why it's there.
I'm guessing it could be some weird edge case rendering issue if the
screen is fully black? But I wouldn't know how to trigger that, and
anyway it should probably be fixed elsewhere. So I'm just going to
remove that conditional.
It's quite rare, though possible, that during finalstretch you could see
a glitchy tileset that looked like this:
https://i.imgur.com/V7cYKDW.png
This happened because final_mapcol, the variable that controls which
color of finalstretch is rendered, could end up being 7. Normally, it's
in the range of 1..6, which perfectly correlates with the Warp Zone
tilesets in tiles2.png, and the higher the number the farther back in
the tileset it goes from the gray Warp Zone tileset. However, if it's 7,
then it'll start grabbing tiles from the Ship plus some unused blank
tiles, which does not look pretty in the slightest.
This happened because it's possible, though exceedingly unlikely, for fRandom(), a function which returns a float between 0..1, to return exactly 1. fRandom() calls rand(), which returns a result between 0 and RAND_MAX, and divides it by RAND_MAX. RAND_MAX is implementation-defined, but required to be at least 32767, and on most systems it's 2147483647. Even taking the value of 32767, that means there's a 0.003%
chance that you could get this glitchy tileset when the game cycled the
color in finalstretch. But of course, playing the game for long periods
of time will eventually increase this chance - cycling the color 1,000
times (around 17 minutes of playing) will result in the chance being 3%.
Then, since the finalstretch color cycling logic calls fRandom(), multiplies by 6, and adds 1, it was possible for fRandom() to return exactly 1, get multiplied up to 6, and have 1 added, resulting in final_mapcol being 7.
To fix this, just decrement the multiplier so fRandom() is multiplied by 5 instead of 6. Now the only numbers that calculation can produce are 1..6.
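For reference, fRandom() is essentially the following, and the fix is the one-character change in the multiplier (a sketch; the real lines are in the finalstretch logic):

    #include <stdlib.h>

    /* Can return exactly 1.0f, when rand() happens to return RAND_MAX. */
    static float fRandom(void)
    {
        return rand() / (float) RAND_MAX;
    }

    extern int final_mapcol; /* 1..6 selects a Warp Zone tileset */

    static void cycle_final_color(void)
    {
        final_mapcol = 1 + (int) (fRandom() * 5); /* was "* 6"; fRandom() == 1.0f now yields 6, not 7 */
    }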
This fixes a limitation where the level filename had to be exactly the same as the name of the zip, because the game used the name of the level to identify the zip from which to load assets; this also made it impossible to use assets for more than one level in a zip.
Instead, we just look up where the level came from, so we can always
load its assets regardless of its filename.
Additionally, the zip structure checks can go away too, simplifying the
code further.
This WOULD be a huge breaking change, if it weren't for the fact that no
one uses them. Which is why I'm removing them, to simplify the code.
I asked on the VVVVVV Discord whether anyone used them or was even aware of them, and basically the answer was no. I went on Distractionware and no one uses them there either. And why would they, when they'd have to distribute the level .vvvvvv file separately? Better to just distribute everything in one zip. And it's quite obscure that you have to suffix the file with .data.zip anyway.
During review of #869, I looked at this part of the codebase again. I
have no idea how or why, but during the course of 2.4 this whole area
just became a mess.
The issues I fixed (in no particular order):
- De-duplicating the copy-pasted code that loads from the binary blobs
- Making sure SDL_RWFromConstMem is used over SDL_RWFromMem wherever
possible
- Adding checks to make sure the index from the binary blob is valid (it's possible for it to not exist)
- Adding checks to make sure we gracefully handle SDL_RWFromConstMem/PHYSFSRWOPS_openRead returning NULL (see the sketch after this list)
- Moving the pointer asterisk to the type instead of the name :)
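The NULL handling mentioned above boils down to a guard like this (a sketch; the message wording is illustrative):

    #include <SDL.h>
    #include "Vlogging.h" /* the game's vlog_* declarations (assumed header name) */

    static SDL_RWops* rwops_from_blob(const void* mem, int size)
    {
        SDL_RWops* rw = SDL_RWFromConstMem(mem, size);
        if (rw == NULL)
        {
            /* Previously this would have been passed along unchecked. */
            vlog_error("Unable to read from memory: %s", SDL_GetError());
        }
        return rw;
    }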
So, it turns out we weren't quite done fighting CMake yet...
To accommodate #869 (and actually also #272), the C standard was raised
from C90 to C99. This turned out to require a bit of a fight with the
CentOS CI's CMake version to get it to set the flags we wanted (and to
not overwrite them later). Eventually the fix was to move the block
that sets the standards to later in the file, which was done in
24353a54bb.
As it turns out, if your CMake is at least 3.1.3 and `CMAKE_<LANG>_STANDARD` is used instead of the workaround, the standard setting now has an effect on the third-party libraries, but not on VVVVVV itself. The cause is (probably) the phrase "if it is set when a target is created" in the CMake documentation - the `CMAKE_<LANG>_STANDARD` values have to come before the VVVVVV target is defined. In other words, the compiler's default C/C++ standard will be used for VVVVVV, probably something like C17 and C++17, as I can confirm with `__cplusplus` and `__STDC_VERSION__` on my recent-enough CMake. If I force the pre-3.1.3 workaround to be used, everything is compiled with C99/C++98 as expected, and the `-fno-exceptions` `-fno-rtti` flags appear everywhere regardless of version.
So my fix is to make the CMakeLists a little less complex by simplifying away `CMAKE_<LANG>_STANDARD` and `CMAKE_<LANG>_EXTENSIONS`, and always using the workaround regardless of CMake version. There's nothing wrong with the workaround; the same thing is also done for `-fno-exceptions` `-fno-rtti`, and it's good to have a less complicated CMakeLists that doesn't do different and unexpected things for different versions.
The previous commit f6d7a214f8 broke CI because the workaround also ended up breaking the PhysFS build, which was previously relying on compiler extensions to compile.
Since #869 is going to require C99 anyways, I might as well just up the standard now. That way the PR won't have to fight it too.
Previously, if the user had a CMake version below 3.1.3, we told them to set `-std=` themselves.
However, we are going to move to C99 soon (because of FAudio, see #869), and CentOS 7's CMake is too old to set `-std=` automatically, so the build ends up defaulting to C90. This is bad because it fails the CI.
To work around this, we set `-std=` ourselves, but first we have to clear any existing `-std=` flag in C_FLAGS or CXX_FLAGS. Amusingly enough, MSVC does not have `/std:` switches for either C90 or C++98, so there we just get to do nothing.