This adds color support to the output of the console on Windows. Now if
you're using Windows 10 build 1511 or later (I think it's build 1511
anyway; they added more VT sequence support in later versions), you will
see colors by default. This isn't due to Windows helping in any way; this
commit has to specifically enable it with SetConsoleMode(), because
Windows won't turn on color support by default unless the application
asks for it. (Or unless it's enabled in the registry, but having to go
through the registry to enable basic shit like that is completely
fucking stupid.)
I tested this in my Windows 10 virtual machine and it's completely
working.
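For reference, the call boils down to something like this (a minimal
sketch, not the exact code in the commit; the Win32 calls and the
ENABLE_VIRTUAL_TERMINAL_PROCESSING flag are standard):

#define WIN32_LEAN_AND_MEAN
#include <windows.h>

/* Sketch: opt in to VT escape sequence processing on the stdout console.
 * Returns 1 on success, 0 if there's no console or the OS doesn't support it. */
static int enable_vt_output(void)
{
    const HANDLE console = GetStdHandle(STD_OUTPUT_HANDLE);
    DWORD mode;

    if (console == NULL || console == INVALID_HANDLE_VALUE)
    {
        return 0;
    }
    if (!GetConsoleMode(console, &mode))
    {
        return 0;
    }
    return SetConsoleMode(console, mode | ENABLE_VIRTUAL_TERMINAL_PROCESSING) != 0;
}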
Previously, we were using `color_enabled` to mean both that the color
was supported and that it was enabled by the user (which it is enabled
by default). But this logic doesn't work well if the color check
function is called again and ends up enabling color after the user
disabled it. To fix this, just separate the two into a `color_supported`
variable and a `color_enabled` variable, so the user's setting and the
result of the support check no longer share one flag. Both of them are
checked before printing color, of course.
This adds the `-console` command-line option (for Win32 only) so the
game can spawn an attached console window which will contain all console
output.
This is to make it easier for people to debug on Windows systems.
Otherwise, the only way to get console output would be to either compile
the application as a console app (i.e. switch the subsystem to console)
- which is undesirable for regular users, since a console would then
always be spawned even when unwanted - or launch the game with shell
arguments that redirect output to a file.
As a result, color checking support is factored out of vlog_init() into
its own function, even though we don't support colors on Windows.
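Roughly, the Win32 side of -console looks like this (a sketch under the
assumption that the C stdio streams just need to be repointed at the new
console; the real code may do more):

#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <stdio.h>

/* Sketch: create a console for a GUI-subsystem app and attach the standard
 * streams to it so printf/vlog output actually shows up somewhere. */
static void spawn_console(void)
{
    if (!AllocConsole())
    {
        return; /* e.g. a console already exists */
    }
    freopen("CONOUT$", "w", stdout);
    freopen("CONOUT$", "w", stderr);
    freopen("CONIN$", "r", stdin);
}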
Using SDL_GetTicks() to seed the Gravitron RNG caused many
reproducibility issues while syncing https://tasvideos.org/7575S . To
fix this, add a frame counter, which is a number that is incremented
every frame and never resets, and use it instead.
For anyone who needs to switch back to SDL_GetTicks() for old TASes, the
-seed-use-sdl-getticks command-line option is provided.
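The frame-counter idea is roughly the following (variable and function
names here are made up for illustration):

#include <SDL.h>

/* Sketch: a counter that ticks once per frame from startup and never
 * resets, used to seed the Gravitron RNG instead of SDL_GetTicks(). */
static Uint32 frame_counter = 0;

void tick_frame(void)
{
    frame_counter++; /* called exactly once per game frame */
}

Uint32 gravitron_seed(const bool use_sdl_getticks)
{
    /* -seed-use-sdl-getticks keeps the old behavior for old TASes */
    return use_sdl_getticks ? SDL_GetTicks() : frame_counter;
}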
In its previous location, it would only print the value of `s` after it
had been mutated by `splitmix32` four times, and `s` isn't used after
that, so the print wasn't very useful.
Mixing code and declarations here is fine because, as of a few months
ago, we compile with C99; if we ever need to compile with C90, it's
trivial to add braces around the declarations.
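For illustration, this is all the C90 fallback would amount to
(hypothetical code):

#include <stdio.h>

int main(void)
{
    /* C99 allows a declaration to follow a statement: */
    puts("working");
    int count = 3;
    printf("%d\n", count);

    /* Under C90, the same code only needs an enclosing block: */
    puts("working");
    {
        int wrapped = 3;
        printf("%d\n", wrapped);
    }
    return 0;
}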
If a music track has a loop comment with a negative value, ignore all
comments of the track. This is just to prevent any weirdness from
happening as it's safer to just let the track loop improperly. Also log
to the console to let users know.
This is the same thing that SDL_mixer does now:
libsdl-org/SDL_mixer@e819489459
This commit happened as a result of discussion on the VVVVVV Discord
server about SDL_mixer 2.0.4 behavior with weird loop comment values
(e.g. octal input with leading zeroes). This is simply updating the code
to be in line with what newer versions of SDL_mixer do.
Just in case it happens. Comments aren't really important to the game
(at worst a track will just loop in the wrong place) so it's fine to
carry on here and ignore all comments if this happens.
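The check described above amounts to something like this (a sketch with
made-up names; vlog_warn is the game's logging function, and SDL_strtol
is SDL's strtol wrapper):

#include <SDL.h>
#include "Vlogging.h" /* the game's vlog_* declarations (assumed path) */

/* Sketch: parse one numeric loop comment; a negative value means we give up
 * on comments for the whole track and just let it loop improperly. */
static bool parse_loop_comment(const char* text, long* value, bool* ignore_all_comments)
{
    char* end = NULL;
    const long parsed = SDL_strtol(text, &end, 10);

    if (end == text)
    {
        return false; /* not a number; skip just this comment */
    }
    if (parsed < 0)
    {
        vlog_warn("Ignoring all comments: negative loop value %ld", parsed);
        *ignore_all_comments = true;
        return false;
    }
    *value = parsed;
    return true;
}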
This does the following:
- Const-qualify variables if they are not modified
- Place each statement on its own line
- Place the asterisk with the type instead of the variable name
- Combine declarations and initializations where possible
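A hypothetical before/after of what those rules look like in practice:

#include <stdio.h>

static int value = 42;

int main(void)
{
    /* Before: several declarations on one line, asterisk with the name,
     * initialization separated from the declaration, no const. */
    int *p, *q;
    p = &value;
    q = p;

    /* After: one statement per line, asterisk with the type, declaration
     * combined with initialization, const since it's never modified. */
    int* const r = &value;

    printf("%d %d %d\n", *p, *q, *r);
    return 0;
}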
VVVVVV uses submodules now, so you need to know how to initialize them.
I'm explicitly not including `git clone --recurse-submodules`. Usage of
submodules in git projects is kinda rare in my experience, so people
are used to doing simple clones, and that instruction would just result
in people being annoyed thinking they have to delete the repo they
already cloned, and clone it again except slightly differently.
It also doesn't help you if you need submodules that aren't in the
master branch (for example, if you clone my fork recursively and then
checkout the localization branch, you won't have C-HashMap and you'll
need the update command anyway). And you also need it whenever VVVVVV
updates its submodules. So teaching people just the update command is
better.
They weren't ever being used, and nobody really ever uses the return
value from the printf family of functions anyway. They return how many
bytes were actually printed, but if that's less than you expected,
there's not much you can do about it. Also, the vlog_* functions were
computing them inaccurately, because I only set the return value to the
return value of vprintf even though other print functions are also being
called - but regardless, there's no reason to have a return value here
anyway.
The hilariously-named WIN32_LEAN_AND_MEAN slims down the number of
header files included in the already-massive `windows.h`. I know people
say Moore's law and precompiled headers and all that (well, we don't use
precompiled headers), but they kinda forgot about virtual machines, and
there's no reason not to define this and slim down the number of headers
anyway.
This started when I saw the warning that GetVersionExW was deprecated,
then looked it up and found StackOverflow answers saying that you should
basically detect the feature directly instead of checking the version,
which makes sense to me. Then I found that I could probably detect color
support by using GetConsoleMode and GetStdHandle. But then I asked
myself what the point was unless you could get color output directly in
the terminal, which it seems like you really can't if your app is a GUI
app. (I have no idea why Windows makes this pointless distinction
between console and GUI apps...)
I tested Command Prompt, PowerShell, Windows Terminal (which is just
PowerShell again), and even Git Bash (MINGW64), but none of them will
ever give the console output of a GUI app such as VVVVVV. The closest I
got is that Git Bash doesn't seem to detach the process, but even then
it simply produces no output.
At this point I feel like it's not worth keeping this code around if it
didn't even work in the first place, so I'm removing it. People can
always enable color with the -forcecolor command-line argument anyway.
I'm fine with putting the release version in a header file, thus
requiring every file that includes it to be recompiled whenever it
changes, simply because it's not supposed to change that often.
The SDL_arraysize is necessary because sometimes we'll have subreleases
(e.g. 2.4.1, 2.4.2, 2.4.3), and who knows, maybe we'll get to 2.10
someday.
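In other words, something along these lines (the macro name and version
string are made up for illustration):

#include <SDL.h>

/* ReleaseVersion.h (sketch): */
#define RELEASE_VERSION "2.4"

/* Sizing from the string itself means "2.4.1" or even "2.10.0" keeps
 * working without touching any other code: */
static const char release_version[] = RELEASE_VERSION;
static const size_t release_version_len = SDL_arraysize(release_version) - 1; /* minus the NUL */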
This reworks how the commit hash and date are compiled so that if
they're changed (and they're changed often), only one source file needs
to be recompiled in order to update it everywhere in the game, no matter
how many source files use the hash or date.
The commit hash and date are now declared in InterimVersion.h (and they
need `extern "C"` guards, because otherwise it results in a link failure
on MSVC, because MSVC is stupid).
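A sketch of what the header ends up looking like (the exact identifier
names may differ):

/* InterimVersion.h (sketch) */
#ifndef INTERIMVERSION_H
#define INTERIMVERSION_H

#ifdef __cplusplus
extern "C"
{
#endif

/* Defined in InterimVersion.out.c, regenerated on every rebuild. */
extern const char* INTERIM_COMMIT_HASH;
extern const char* INTERIM_COMMIT_DATE;

#ifdef __cplusplus
}
#endif

#endif /* INTERIMVERSION_H */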
To do this, what now happens is that upon every rebuild,
InterimVersion.in.c is processed to create InterimVersion.out.c, then
InterimVersion.out.c is compiled into its own static library that is
then linked with VVVVVV.
(Why didn't I just simply add it to the list of VVVVVV source files?
Well, doing it _now_ does nothing because at that point the horse is
already out of the barn, and the VVVVVV executable has already been
declared, so I can't modify its sources. And I can't do it before
either, because we depend on the VVVVVV executable existing to do the
interim version logic. I could probably work around this by cleverly
moving around lines, but that'd separate related logic from each other.)
And yes, the naming convention has changed. Not only did I rename
Version to InterimVersion (to clearly differentiate it from
ReleaseVersion, which I'll be adding later), I also named the files
InterimVersion.in.c and InterimVersion.out.c instead of
InterimVersion.c.in and InterimVersion.c.out. I needed to put the file
extension at the end because otherwise CMake wouldn't recognize what
kind of language it is - and, I mean, yeah, duh, of course it doesn't;
my text editor doesn't recognize it either.
I thought all of these were removed earlier, but apparently not. Anyways,
add_definitions is bad because it pollutes the definitions of every
single target; we should be using target_compile_definitions instead.
This updates all references to SDL 2.0.22 to SDL 2.24.0, including the
Docker container that I maintain for Linux CI.
Be warned: this release of SDL updates the versioning scheme to be less
dumb. The previous version was 2.0.22; this release is 2.24.0 (so the
last number can be properly used for patch-level releases).
This option is enabled by default and, if the compiler supports it,
replaces all absolute source directory paths in the compiled binary with
relative paths. Of course, this isn't needed if you compile with all
paths removed anyway (e.g. in Release mode).
The purpose is to help make builds reproducible and to remove any
potentially sensitive information about the user or the user's system
from the compiled binary.
Both Clang and GCC support -fdebug-prefix-map, -fmacro-prefix-map, and
-ffile-prefix-map. In particular, -ffile-prefix-map is just a flag that
does both -fdebug-prefix-map and -fmacro-prefix-map.
According to https://reproducible-builds.org/docs/build-path/ ,
-fdebug-prefix-map is available in all GCC versions but only available
starting from Clang 3.8, and -fmacro-prefix-map and -ffile-prefix-map
are available since GCC 8 and Clang 10. So we check the compiler version
and use whichever of these flags the compiler supports.
This does make debugging a bit more annoying, but there are a couple
ways to rectify this. Either disable it with
-DREMOVE_ABSOLUTE_PATHS=OFF, or add a `.gdbinit` that consists of
set substitute-path . ../..
so that `.` is considered to be `../..`. Of course, if you need to,
replace `../..` with the actual source directory path (in my case it's
`../../..` because I place my build folders in another subdirectory to
have multiple build folders in one directory).
This doesn't need to be a global `.gdbinit`, it can be in a
directory-specific `.gdbinit` (similar to how `.gitignore`s can also be
directory-specific). But then you need to add `add-auto-load-safe-path`
to your `.gdbinit` to load any directory-specific `.gdbinit`s.
The above is for GDB; I don't know what (if anything) needs to be done
for LLDB; I don't use LLDB.
Fixes #889.
Whereas all `SDL_assert`s will go away when compiling with optimization
flags and all plain `assert` calls (used in PhysFS) will go away when
compiling in Release mode, FAudio has a bunch of debug stuff that needs
to be explicitly disabled with its own `FAUDIO`-prefixed flag.
To do this in Release mode, we need to use generator expressions for
dumb CMake reasons. Basically, checking the CMAKE_BUILD_TYPE variable
will not work for certain generators (Ninja, Visual Studio), because
they only specify the build type at build time, not at
generation/configuration time.
This is so flags that apply globally (i.e. to the game and all static
libraries it's compiled with), such as /MT on MSVC, can be put in a
list. Along with putting all static libraries in a list, this removes
the need to repeat each flag for each static library; we can just use a
foreach loop instead.
(Global compile flags of course don't apply to us meddling with
CMAKE_C_FLAGS and CMAKE_CXX_FLAGS directly, because we need to do that
in order to make sure the C and C++ standards are set properly.)
This fixes a regression where the game ignored the number of frames you
held down a direction if you released the direction during death.
Previously, the game only checked the number of frames you held down a
direction if you were able to control the player. If you weren't able to
control the player (e.g. during the death animation), then the number of
frames it counted didn't change. This also meant that if you were
holding a direction before you died, but released it during death, the
game wouldn't zero out the number of frames you held it.
This behavior was useful because it meant you could keep the
deceleration momentum that you normally get by holding a direction for 5
frames just by holding a direction for less than 5 frames after dying,
if you had the rest of the hold frames before you died. This behavior is
what's used in https://tasvideos.org/7575S at around frame 7200.
Unfortunately, #609 made it so that the direction hold processing
happened even if the player didn't have control, meaning it would zero
the hold frames during the death animation in the TAS, thus desyncing it
when, after Viridian respawns, it performs the maneuver that relied on
the extra momentum.
The solution here is to just add the check back in again.
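The resulting logic is roughly this (a sketch with made-up names; the
real code works on the game's own input and death state):

/* Sketch: the hold-frame counter is only updated while the player has
 * control, so dying (and releasing the direction) no longer zeroes it. */
static int hold_frames = 0;

static void update_direction_hold(const bool has_control, const bool holding_direction)
{
    if (!has_control)
    {
        return; /* the regression was falling through and zeroing hold_frames here */
    }
    if (holding_direction)
    {
        if (hold_frames < 5)
        {
            hold_frames++;
        }
    }
    else
    {
        hold_frames = 0;
    }
}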
Fixes #887.
While fixing #885, I noticed that I had a bunch of
`special/stdin.vvvvvv` entries saved in my `levelstats.vvv`. At once I
knew that the dumb `special/stdin` hack that actually checks if the
filename passed is `special/stdin` was to blame.
When STDIN playtesting was first merged, I knew in the back of my mind
that it was a bit of a dumb hack, but I didn't know it would have
consequences like showing up in `levelstats.vvv`. For now, I'll just
have to patch it, but hopefully in the future I'll remove the dumb hack
entirely. Commenting both instances of the dumb hack with instructions
to grep for it should help maintainers out.
This is useful to investigate any TAS desync/reproducibility issues
relating to RNG, because even though I specifically separated the
Gravitron RNG away from other RNG and made it not dependent on the
system libc rand() function, there are still apparently some differences
in RNG execution between systems, resulting in TASVideos submission 7575
( https://tasvideos.org/7575S ) not syncing for anyone except the
author.
It seems that SDL_GetTicks(), which is used to seed the xoshiro RNG, is
not reliably consistent between systems, so in the future I will
probably replace it with a counter that is incremented each frame
starting from game startup, which is probably better.
This fixes the following warnings:
desktop_version/src/Music.cpp: At global scope:
desktop_version/src/Music.cpp:240:23: warning: non-static data member initializers only available with ‘-std=c++11’ or ‘-std=gnu++11’ [-Wc++11-extensions]
240 | Uint8 *wav_buffer = NULL;
| ^
desktop_version/src/Music.cpp:414:32: warning: non-static data member initializers only available with ‘-std=c++11’ or ‘-std=gnu++11’ [-Wc++11-extensions]
414 | Uint8* decoded_buf_playing = NULL;
| ^
desktop_version/src/Music.cpp:415:32: warning: non-static data member initializers only available with ‘-std=c++11’ or ‘-std=gnu++11’ [-Wc++11-extensions]
415 | Uint8* decoded_buf_reserve = NULL;
| ^
desktop_version/src/Music.cpp:416:21: warning: non-static data member initializers only available with ‘-std=c++11’ or ‘-std=gnu++11’ [-Wc++11-extensions]
416 | Uint8* read_buf = NULL;
| ^
These warnings are because the non-static data members (i.e. data
members that will be different for each instance of a class) are being
initialized at the same time as they're being declared, which means
that's what their value will be when the class instance is initialized.
However, this is a C++11-only feature, and we don't use C++11. Luckily,
we don't need it here; it's in fact redundant, because we already zero
out the class instance in its constructor using SDL_zerop(). Therefore,
we should remove these initializers to restore compliance with C++03.
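The pattern that makes those initializers redundant looks like this
(simplified; the class name and member grouping here are illustrative,
with the members taken from the warnings above):

#include <SDL.h>

class Track
{
public:
    Track(void)
    {
        /* Zeroes every member of the instance, so per-member `= NULL`
         * initializers at the declarations (a C++11 feature) aren't needed. */
        SDL_zerop(this);
    }

    Uint8* wav_buffer;
    Uint8* decoded_buf_playing;
    Uint8* decoded_buf_reserve;
    Uint8* read_buf;
};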
Instead, for any number that isn't in the list of number words, just
return the regular Arabic numerical representation (i.e. just convert it
to a string). It's better than having "Lots" or "???", neither of which
really tells you what the number actually is.
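A sketch of the fallback (the word table and helper name here are made
up; the real function and its list of number words live in the game's
code):

#include <SDL.h>
#include <string>

/* Sketch: numbers without a word form fall back to plain digits instead of
 * "Lots" or "???". */
static std::string number_words(const int n)
{
    static const char* const words[] = {
        "Zero", "One", "Two", "Three", "Four", "Five"
        /* ...the real table goes much further... */
    };

    if (n >= 0 && n < (int) SDL_arraysize(words))
    {
        return words[n];
    }

    char buffer[16];
    SDL_snprintf(buffer, sizeof(buffer), "%d", n);
    return buffer;
}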
This flag makes it so the MSVC runtime libraries are statically linked,
which avoids requiring Windows users to have these libraries installed.
Apparently /MT stands for "MultiThreaded"; there's a bit of history
there, where originally the default was a single-threaded library, and
the multi-threaded flags were added later.
First I tried doing target_compile_options on VVVVVV, but then got a
linker error. Then I tried doing add_compile_options because I figured
/MT had to be applied everywhere, and it seemed to work, but it still
linked to the runtime libraries. Apparently it was being overridden.
Then I tried target_compile_options again but this time did it to
everything, and that linked correctly and also removed the runtime
dependency. I would've tried using the MSVC_RUNTIME_LIBRARY property
- along with the CMP0091 policy - but those were only introduced in
CMake 3.15.
You can verify that a binary is built without the runtime dependencies
by installing LLVM and running llvm-readobj --needed-libs
path/to/binary. This is the output for a binary with runtime
dependencies:
infoteddy@fedorarune ~/d llvm-readobj --needed-libs VVVVVV.exe
File: VVVVVV.exe
Format: COFF-i386
Arch: i386
AddressSize: 32bit
NeededLibraries [
ADVAPI32.dll
KERNEL32.dll
MSVCP140.dll
SDL2.dll
SHELL32.dll
USER32.dll
VCRUNTIME140.dll
api-ms-win-crt-heap-l1-1-0.dll
api-ms-win-crt-locale-l1-1-0.dll
api-ms-win-crt-math-l1-1-0.dll
api-ms-win-crt-runtime-l1-1-0.dll
api-ms-win-crt-stdio-l1-1-0.dll
api-ms-win-crt-string-l1-1-0.dll
api-ms-win-crt-time-l1-1-0.dll
api-ms-win-crt-utility-l1-1-0.dll
]
And this is the output for a binary with those dependencies having been
statically linked in:
infoteddy@fedorarune ~/d llvm-readobj --needed-libs VVVVVV.exe
File: VVVVVV.exe
Format: COFF-i386
Arch: i386
AddressSize: 32bit
NeededLibraries [
ADVAPI32.dll
KERNEL32.dll
SDL2.dll
SHELL32.dll
USER32.dll
]
As already described in cc61194bed, as
well as Ved's commits from the last almost two weeks, starting VVVVVV
from Ved for playtesting could be made a lot faster by "preloading" the
game - letting it do all its asset loading in the background without
creating a window - and then waiting until the level is passed in via
stdin. There's only one problem left with this approach: VVVVVV
currently expects the starting position to be passed via command line
arguments, and that position isn't known yet at the time we'd like to
start VVVVVV.
Therefore, this commit allows passing the starting position via the
level XML, instead of via arguments.
The extra XML looks like this, and is added next to the <Data> tag:
<Playtest>
    <playx>214</playx>
    <playy>112</playy>
    <playrx>100</playrx>
    <playry>100</playry>
    <playgc>0</playgc>
    <playmusic>4</playmusic>
</Playtest>
This is handled similarly to how the equivalent arguments are handled:
when the level metadata is loaded for CLI playtesting, we also try to
find this tag, and if it exists, it sets the same variables that the
arguments would have otherwise set.
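Roughly how such a tag can be read with tinyxml2 (a sketch; the game's
actual parsing helpers and variable names differ, and `parent` here is
assumed to be the element that contains <Data>):

#include <tinyxml2.h>

/* Sketch: look for an optional <Playtest> element and, if present, read the
 * same values that the command-line arguments would otherwise have set. */
static void read_playtest_tag(tinyxml2::XMLElement* parent, int* x, int* y, int* rx, int* ry, int* gc, int* music)
{
    tinyxml2::XMLElement* playtest = parent->FirstChildElement("Playtest");
    if (playtest == NULL)
    {
        return; /* no tag; fall back to the arguments */
    }

    tinyxml2::XMLElement* elem;
    if ((elem = playtest->FirstChildElement("playx")) != NULL) elem->QueryIntText(x);
    if ((elem = playtest->FirstChildElement("playy")) != NULL) elem->QueryIntText(y);
    if ((elem = playtest->FirstChildElement("playrx")) != NULL) elem->QueryIntText(rx);
    if ((elem = playtest->FirstChildElement("playry")) != NULL) elem->QueryIntText(ry);
    if ((elem = playtest->FirstChildElement("playgc")) != NULL) elem->QueryIntText(gc);
    if ((elem = playtest->FirstChildElement("playmusic")) != NULL) elem->QueryIntText(music);
}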
There's always been a bit of an inconsistency in the game where enabling
invincibility would make spikes so solid that even enemies and moving
platforms would treat them as solid and bounce off of them.
This fixes that by adding an `invincible` parameter to collision
functions, so the functions will only treat spikes as solid if that
parameter is true, and the parameter passed will only be true if it's
called for an entity that is a humanoid and invincibility mode is
enabled.
Also, for clarity, `spikecollide` is renamed to `towerspikecollide`
because it's only used for tower spikes. And as a small optimization,
`checktowerspikes` returns early if invincibility mode is enabled.
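In effect, the collision checks gain a parameter along these lines (a
simplified sketch, not the actual function signatures in the game):

/* Hypothetical tile queries standing in for the game's real collision map. */
static bool is_solid_wall(const int x, const int y) { (void) x; (void) y; return false; }
static bool is_spike(const int x, const int y) { (void) x; (void) y; return false; }

/* Sketch: spikes only count as solid when checking an invincible humanoid;
 * enemies and moving platforms always pass invincible = false, so they no
 * longer bounce off spikes. */
static bool collide_at(const int x, const int y, const bool invincible)
{
    if (is_solid_wall(x, y))
    {
        return true;
    }
    if (is_spike(x, y))
    {
        return invincible;
    }
    return false;
}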
This is a minor optimization to streamline the experience of Ved
playtesting. Previously, the user would have to wait for all the assets
to load when launching playtesting (most of the time, I suspect, is
taken up by loading music from a vvvvvvmusic blob). With this
optimization, however, the game can be launched in the background and
its assets can be loaded, while it blocks on STDIN input. During this
time, the user in Ved will be choosing where to start playtesting. Once
Ved provides the STDIN input, the window is created and appears
instantaneously.
This also fixes a related issue in which providing an invalid
playtesting level name would result in a brief window flash that gets
instantly destroyed. With this, if the level is invalid then no window
is ever shown at all.
Probably should have done this earlier in 2.3, but better late than
never.
This makes it easier for third-party programs like Ved to detect what
version of the game this is.
Slightly quick-n-dirty for now, I'll de-duplicate the version number
later, and add commit hash and date if applicable.
Without this, going in-game and opening the map with missing graphics
will result in a segfault. This is because even if an image doesn't
exist, it's still pushed into the `images` std::vector as a NULL
pointer. And it segfaults because we dereference it (to get things like
its width and height). In contrast, passing the pointers to SDL is fine,
because SDL always checks for NULL first.
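The guard amounts to something like this (a sketch, assuming the images
are stored as SDL_Surface pointers; names are made up):

#include <SDL.h>

/* Sketch: a missing custom image is still pushed into the `images` vector
 * as a NULL pointer, so check before touching its fields. */
static void draw_map_image(SDL_Surface* image)
{
    if (image == NULL)
    {
        return; /* drawing nothing is better than dereferencing NULL */
    }

    const int w = image->w; /* safe now */
    const int h = image->h;
    /* ...use w and h to position and blit the image as before... */
    (void) w;
    (void) h;
}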
There are three different places where we call PHYSFS_openRead. This
commit makes sure all of them print a statement upon failure along with
the PhysFS reason for failure, and assigns the log level of each print
as follows:
- FILESYSTEM_loadFileToMemory: Debug print (previously no print existed
  in the first place), because some files (such as font.txt) may or may
  not be needed, and if one isn't needed there's no reason to print and
  worry the user. The game will error anyway if a critical file like a
  graphics file is missing.
- FILESYSTEM_loadBinaryBlob: Debug print (previously an info print),
  because sometimes the blob isn't needed, such as mmmmmm.vvv. I remember
  one user being worried that the game printed "Unable to open file
  mmmmmm.vvv" when it's not critical, unlike vvvvvvmusic.vvv (and that
  file is assumed to exist if data.zip exists anyway). Though maybe we
  should move to loose-leaf files to save on memory usage (and so we
  don't have to use special tools to modify a binary blob)...
- FILESYSTEM_loadZip: Error print. If we're calling this function, we
really do expect the zip to be loaded, and if it fails because we
can't open the file in the first place, then it would be good to know
why.
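The failure path at each of these call sites looks roughly like this (a
sketch; vlog_* are the game's logging functions, and the exact messages
differ per call site):

#include <physfs.h>
#include "Vlogging.h" /* the game's vlog_* declarations (assumed path) */

/* Sketch: report why PHYSFS_openRead failed, at a severity appropriate to
 * the call site (debug for optional files, error for the zip itself). */
static PHYSFS_File* open_read_logged(const char* path, const bool critical)
{
    PHYSFS_File* handle = PHYSFS_openRead(path);
    if (handle == NULL)
    {
        const char* reason = PHYSFS_getErrorByCode(PHYSFS_getLastErrorCode());
        if (critical)
        {
            vlog_error("Could not open %s: %s", path, reason);
        }
        else
        {
            vlog_debug("Could not open %s: %s", path, reason);
        }
    }
    return handle;
}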