This makes it so room names are no longer pointers to someone else's
memory; instead, to set them, you use `mapclass::setroomname`. If the
string is short enough to fit in a static, no-alloc buffer, then it gets
copied there. Otherwise, the string is duplicated into a new heap
allocation, and the new pointer is used instead.
This makes it possible for room names to contain arbitrary data whose
origin is temporary (e.g. from a script command that could be added in
the future).
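For illustration, the pattern looks something like this (just a sketch;
the buffer size and variable names here are my own placeholders, not the
actual code):

```cpp
#include <SDL_stdinc.h>

/* Sketch only; buffer size and names are placeholders. */
static char roomname_static[64];   /* no-alloc buffer for short names */
static char* roomname_heap = NULL; /* owned heap copy for long names */
static const char* roomname = "";

static void setroomname(const char* name)
{
    SDL_free(roomname_heap); /* release any previous heap copy */
    roomname_heap = NULL;

    if (SDL_strlen(name) < sizeof(roomname_static))
    {
        SDL_strlcpy(roomname_static, name, sizeof(roomname_static));
        roomname = roomname_static;
    }
    else
    {
        roomname_heap = SDL_strdup(name);
        roomname = roomname_heap;
    }
}
```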
This replaces all calls to SDL_free with a new macro, VVV_free, that
nulls the pointer afterwards. This mitigates any use-after-frees and
also completely eliminates double-frees. The same is done for any
function to free specific objects such as SDL_FreeSurface, with the
VVV_freefunc macro.
No exceptions for any of these calls, even if the pointer is discarded
or zeroed afterwards anyway. Better safe than sorry.
This is a macro rather than a function that takes a pointer-to-pointer,
because such a function would have type issues: you'd have to cast e.g.
an `SDL_Surface**` to `void**` to call it, and that's just not safe.
Even though SDL_free and other SDL functions already check for NULL, the
macro has a NULL check for other functions that don't. For example,
FAudioVoice_DestroyVoice does not check for NULL.
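Going by that description, the macros presumably look something like
this (a sketch, not the exact definitions):

```cpp
/* Sketch; the exact definitions may differ. */
#define VVV_freefunc(func, pointer) \
    do \
    { \
        if (pointer != NULL) \
        { \
            func(pointer); \
            pointer = NULL; \
        } \
    } while (0)

#define VVV_free(pointer) VVV_freefunc(SDL_free, pointer)
```

Usage is then e.g. `VVV_free(buffer);` or
`VVV_freefunc(SDL_FreeSurface, surface);`.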
FILESYSTEM_freeMemory has been axed in favor of VVV_free, because it
functionally did the same thing, just restricted to `unsigned char*`.
Trinket and teleporter legends would be drawn even if they were out of
bounds. Trinket legends in particular were easy to get out of bounds:
just place a trinket in a custom level, then resize the map so it no
longer includes the trinket's room.
Now there are checks so legends won't be drawn if they are out of
bounds. This is in line with the fact that, since 2.3, a trinket that
exists outside of the map in a custom level doesn't count towards the
number of trinkets.
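The check amounts to something like this (a sketch; the variable names
are my placeholders):

```cpp
/* Sketch: skip legends whose room lies outside the map bounds. */
if (roomx < 0 || roomy < 0 || roomx >= map.getwidth() || roomy >= map.getheight())
{
    continue; /* out-of-bounds trinket/teleporter gets no legend */
}
```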
It's becoming pretty clear that the size of the map is important enough
to be queried a lot, but each time it's something like `map.custommode ?
map.customwidth : 20` and `map.custommode ? map.customheight : 20` which
is not ideal because of copy-pasting.
Furthermore, even `map.customwidth` and `map.customheight` are just
duplicates of `cl.mapwidth` and `cl.mapheight`, which are only set in
`customlevelclass::generatecustomminimap`. This is a bit annoying if you
want to, say, add checks that depend on the width and height of the
custom map in `mapclass::initcustommapdata`, but `map.customwidth` and
`map.customheight` are out of date because `generatecustomminimap`
hasn't been called yet. And doing the ternary there requires a `#ifndef
NO_CUSTOM_LEVELS` to reference `cl.mapwidth` and `cl.mapheight` which is
just awful.
So I'm axing `map.customwidth` and `map.customheight`, and I'm axing all
the ternaries that are duplicating the source of truth in
`MapRenderData`. Instead, there will just be one function to call for
the width and height, `mapclass::getwidth` and `mapclass::getheight`,
and everyone can simply call those without needing to do ternaries or
duplication.
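A sketch of what the accessors look like (assuming the main game's
fixed 20×20 map):

```cpp
int mapclass::getwidth(void)
{
#ifndef NO_CUSTOM_LEVELS
    if (custommode)
    {
        return cl.mapwidth; /* the single source of truth */
    }
#endif
    return 20; /* the main game's map is always 20 by 20 */
}

/* mapclass::getheight is identical, but with cl.mapheight. */
```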
The existing code was allergic to putting spaces between tokens, and had
some minor code duplication that I took the time to clean up as well.
The logic should be easier to follow now that the for-loops are no
longer duplicated for each of the map zoom levels.
I tested this by temporarily disabling map fog entirely and going
through a couple different custom levels to compare their minimap with
the existing code. These were A New Dimension and 333333 for 1x, Golden
Spiral and VVVV 4k for 2x, and VVVVVV is NP-Hard for 4x. There was no
difference in the output, not even a single pixel.
This adds color support to the output of the console on Windows. Now if
you're using Windows 10 version 1511 or later (I think it's 1511
anyway; they added more VT sequence support in later versions), you will
see colors by default. This isn't due to Windows helping in any way;
this commit has to specifically enable it with SetConsoleMode(), because
Windows won't enable color support unless we opt in. (Or unless it's
enabled in the registry, but having to go through the registry to enable
basic shit like that is completely fucking stupid.)
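The enabling boils down to something like this (a sketch, with error
handling trimmed):

```cpp
#include <windows.h>

static void enable_vt_colors(void)
{
    const HANDLE console = GetStdHandle(STD_OUTPUT_HANDLE);
    DWORD mode;
    if (console != INVALID_HANDLE_VALUE && GetConsoleMode(console, &mode))
    {
        /* Opt in to VT escape sequences; Windows leaves this off by default. */
        SetConsoleMode(console, mode | ENABLE_VIRTUAL_TERMINAL_PROCESSING);
    }
}
```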
I tested this in my Windows 10 virtual machine and it's completely
working.
Previously, we were using `color_enabled` to mean both that color was
supported and that it was enabled by the user (it's enabled by default).
But this logic doesn't work well if the color check function is called
again and ends up re-enabling color after the user disabled it. To fix
this, just separate the two: the support check sets a `color_supported`
variable, while the user controls a separate `color_enabled` variable.
Check both of them in order to print color, of course.
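In other words, something like this (a sketch):

```cpp
#include <SDL_stdinc.h>

static SDL_bool color_supported = SDL_FALSE; /* set by the capability check */
static SDL_bool color_enabled = SDL_TRUE;    /* user preference, on by default */

static SDL_bool color_active(void)
{
    /* Re-running the check updates support without overriding the user. */
    return (SDL_bool) (color_supported && color_enabled);
}
```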
This adds the `-console` command-line option (for Win32 only) so the
game can spawn an attached console window which will contain all console
output.
This is to make it easier for people to debug on Windows systems.
Otherwise, the only ways to get console output would be to either
compile the application as a console app (i.e. switch the subsystem to
console), which is undesirable for regular users because a console would
then always be spawned even when unwanted, or to launch the game with
shell redirection so the output goes to a file.
As a result, color checking support is factored out of vlog_init() into
its own function, even though we don't support colors on Windows.
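For reference, spawning a console from a GUI app looks roughly like this
(a sketch; the actual commit may differ in the details):

```cpp
#include <windows.h>
#include <stdio.h>

static void spawn_console(void)
{
    if (AllocConsole()) /* create a console for this GUI process */
    {
        /* Point the standard streams at the new console. */
        freopen("CONOUT$", "w", stdout);
        freopen("CONOUT$", "w", stderr);
        freopen("CONIN$", "r", stdin);
    }
}
```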
Using SDL_GetTicks() to seed the Gravitron RNG caused many
reproducibility issues while syncing https://tasvideos.org/7575S . To
fix this, add a frame counter, which is a number that is incremented
every frame and never resets, and use it instead.
If someone needs to switch back to SDL_GetTicks() for old TASes, then
provide the -seed-use-sdl-getticks command-line option for them.
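The seeding then looks something like this (a sketch; the names are my
placeholders):

```cpp
#include <SDL.h>

static Uint32 frames_since_start = 0; /* incremented every frame, never reset */
static SDL_bool seed_use_sdl_getticks = SDL_FALSE; /* set by the option above */

static Uint32 get_gravitron_seed(void)
{
    return seed_use_sdl_getticks ? SDL_GetTicks() : frames_since_start;
}
```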
In its previous location, it would only print the value of `s` after it
had been mutated by `splitmix32` four times, and `s` doesn't get used
after that, so the print wasn't very useful.
Mixing code and declarations here is fine because, as of a few months
ago, we compile with C99; if we ever need to compile with C90, it's
trivial to add braces around the declarations.
If a music track has a loop comment with a negative value, ignore all
comments of the track. This is just to prevent any weirdness from
happening as it's safer to just let the track loop improperly. Also log
to the console to let users know.
This is the same thing that SDL_mixer does now:
libsdl-org/SDL_mixer@e819489459
This commit happened as a result of discussion on the VVVVVV Discord
server about SDL_mixer 2.0.4 behavior with weird loop comment values
(e.g. octal input with leading zeroes). This is simply updating the code
to be in line with what newer versions of SDL_mixer do.
Just in case it happens. Comments aren't really important to the game
(at worst a track will just loop in the wrong place) so it's fine to
carry on here and ignore all comments if this happens.
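The policy is roughly this (a sketch; the parsing details are my
assumptions):

```cpp
#include <SDL_stdinc.h>

/* Sketch: one negative loop-comment value poisons all of the track's
 * comments; the caller then proceeds with no loop points at all. */
static SDL_bool parse_loop_comment(const char* comment_value, long* out)
{
    const long value = SDL_strtol(comment_value, NULL, 0);
    if (value < 0)
    {
        return SDL_FALSE; /* caller logs and ignores all comments */
    }
    *out = value;
    return SDL_TRUE;
}
```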
This does the following:
- Const-qualify variables if they are not modified
- Place each statement on its own separate line
- Place the asterisk with the type instead of the variable name
- Combine declarations and initializations where possible
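A small before-and-after, for illustration:

```cpp
#include <SDL.h>

/* Before:
SDL_Surface *temp=SDL_CreateRGBSurface(0,320,240,32,0,0,0,0);int x;x=2;
*/

/* After: */
SDL_Surface* temp = SDL_CreateRGBSurface(0, 320, 240, 32, 0, 0, 0, 0);
const int x = 2;
```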
VVVVVV uses submodules now, so you need to know how to initialize them.
I'm explicitly not including `git clone --recurse-submodules`. Usage of
submodules in git projects is kinda rare in my experience, so people
are used to doing simple clones, and that instruction would just result
in people being annoyed thinking they have to delete the repo they
already cloned, and clone it again except slightly differently.
It also doesn't help you if you need submodules that aren't in the
master branch (for example, if you clone my fork recursively and then
check out the localization branch, you won't have C-HashMap and you'll
need the update command anyway). And you also need it whenever VVVVVV
updates its submodules. So teaching people just the update command
(`git submodule update --init`) is better.
They weren't ever being used, and nobody really ever uses the return
value from the printf family of functions anyway. They return how many
bytes were actually printed, but if it's less than you expected, there's
not much you can really do about it. Also, the vlog_* functions were
computing them inaccurately: I only set the return value to the return
value of vprintf, even when other print functions were being called too.
But regardless, there's no reason to have a return value here anyway.
The hilariously-named WIN32_LEAN_AND_MEAN slims down the number of
header files included by the already-massive `windows.h`. I know people
say Moore's law and precompiled headers and all that (well, we don't use
precompiled headers), but they kinda forgot about virtual machines, and
there's no reason not to define this and slim down the number of headers
anyway.
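Usage is simply:

```cpp
/* Must be defined before windows.h is included to have any effect. */
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
```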
This started when I saw the warning that GetVersionExW was deprecated,
then looked it up and found StackOverflow answers saying that you should
basically detect the feature directly instead of checking the version,
which makes sense to me. Then I found that I could probably detect color
support by using GetConsoleMode and GetStdHandle. But then I asked
myself what the point was unless you could get color output directly in
the terminal, which it seems like you really can't if your app is a GUI
app. (I have no idea why Windows makes this pointless distinction
between console and GUI apps...)
I tested Command Prompt, PowerShell, Windows Terminal (which is just
PowerShell again), and even Git Bash (MINGW64), but none of them will
ever give the console output of a GUI app such as VVVVVV. The closest I
got is that Git Bash doesn't seem to detach the process, but it will
simply produce no output.
At this point I feel like it's not worth keeping this code around if it
didn't even work in the first place, so I'm removing it. People can
always enable color by using the -forcecolor command-line argument
anyway.
I'm fine with putting the release version in a header file, thus
necessitating a recompile of every file that includes it whenever it's
changed, simply because it's not supposed to change that often.
The SDL_arraysize is necessary because sometimes we'll have subreleases
(e.g. 2.4.1, 2.4.2, 2.4.3), and who knows, maybe we'll get to 2.10
someday.
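A sketch of the idea (the names here are my placeholders, not the
actual header):

```cpp
#include <SDL_stdinc.h>

/* ReleaseVersion.h, roughly: */
#define RELEASE_VERSION "2.4"

/* Size buffers with SDL_arraysize instead of a hardcoded length, so that
 * nothing breaks when the string grows to "2.4.1" or even "2.10". */
static char release_version[SDL_arraysize(RELEASE_VERSION)] = RELEASE_VERSION;
```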
This reworks how the commit hash and date are compiled so that if
they're changed (and they're changed often), only one source file needs
to be recompiled in order to update it everywhere in the game, no matter
how many source files use the hash or date.
The commit hash and date are now declared in InterimVersion.h (and they
need `extern "C"` guards, because otherwise it results in a link failure
on MSVC, because MSVC is stupid).
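So InterimVersion.h presumably looks something like this (the
identifier names are my placeholders):

```cpp
/* InterimVersion.h (sketch) */
#ifdef __cplusplus
extern "C" {
#endif

extern const char* INTERIM_COMMIT; /* defined in InterimVersion.out.c */
extern const char* COMMIT_DATE;

#ifdef __cplusplus
}
#endif
```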
To do this, what now happens is that upon every rebuild,
InterimVersion.in.c is processed to create InterimVersion.out.c, then
InterimVersion.out.c is compiled into its own static library that is
then linked with VVVVVV.
(Why didn't I just simply add it to the list of VVVVVV source files?
Well, doing it _now_ does nothing because at that point the horse is
already out of the barn, and the VVVVVV executable has already been
declared, so I can't modify its sources. And I can't do it before
either, because we depend on the VVVVVV executable existing to do the
interim version logic. I could probably work around this by cleverly
moving around lines, but that'd separate related logic from each other.)
And yes, the naming convention has changed. Not only did I rename
Version to InterimVersion (to clearly differentiate it from
ReleaseVersion, which I'll be adding later), I also named the files
InterimVersion.in.c and InterimVersion.out.c instead of
InterimVersion.c.in and InterimVersion.c.out. I needed to put the file
extension on the end because otherwise CMake wouldn't recognize what
kind of language it is, and I mean like yeah duh of course it doesn't,
my text editor doesn't recognize it either.
I thought all of these were removed earlier but apparently not. Anyways,
add_definitions is bad because it pollutes the definitions of every
single target, we should be using target_compile_definitions instead.
This updates all references to SDL 2.0.22 to SDL 2.24.0, including the
Docker container that I maintain for Linux CI.
Be warned, this release of SDL updates the versioning scheme to be less
dumb. The previous version is 2.0.22, this release is 2.24.0 (so the
last number can be properly used for patch-level version releases).
This option is enabled by default and will replace absolute paths of all
source directory file paths with relative paths in the compiled binary,
if the compiler supports it. Of course, this isn't needed if you compile
with all paths removed anyways (e.g. in Release mode).
The purpose is to help make builds reproducible and to remove any
potentially sensitive information about the user or the user's system
from the compiled binary.
Both Clang and GCC support -fdebug-prefix-map, -fmacro-prefix-map, and
-ffile-prefix-map; each takes the form
-f<kind>-prefix-map=<old path>=<new path>. In particular,
-ffile-prefix-map is just a flag that does both -fdebug-prefix-map and
-fmacro-prefix-map.
According to https://reproducible-builds.org/docs/build-path/ ,
-fdebug-prefix-map is available in all GCC versions but only available
starting from Clang 3.8, and -fmacro-prefix-map and -ffile-prefix-map
are available since GCC 8 and Clang 10. So we check the compiler version
and use the available flags depending on if the compiler supports it or
not.
This does make debugging a bit more annoying, but there are a couple
ways to rectify this. Either disable it with
-DREMOVE_ABSOLUTE_PATHS=OFF, or add a `.gdbinit` that consists of
`set substitute-path . ../..` so that `.` is considered to be `../..`.
Of course, if you need to,
replace `../..` with the actual source directory path (in my case it's
`../../..` because I place my build folders in another subdirectory to
have multiple build folders in one directory).
This doesn't need to be a global `.gdbinit`, it can be in a
directory-specific `.gdbinit` (similar to how `.gitignore`s can also be
directory-specific). But then you need to add `add-auto-load-safe-path`
to your `.gdbinit` to load any directory-specific `.gdbinit`s.
The above is for GDB; I don't know what (if anything) needs to be done
for LLDB; I don't use LLDB.
Fixes #889.
Whereas all `SDL_assert`s will go away when compiling with optimization
flags and all plain `assert` calls (used in PhysFS) will go away when
compiling in Release mode, FAudio has a bunch of debug stuff that needs
to be explicitly disabled with its own `FAUDIO`-prefixed flag.
To do this in Release mode, we need to use generator expressions, for
dumb CMake reasons. Basically, checking the CMAKE_BUILD_TYPE variable
will not work for certain generators (Ninja, Visual Studio) because they
only specify the build type at build time, not at
generation/configuration time.
This is so flags that apply globally (i.e. to the game and all static
libraries it's compiled with), such as /MT on MSVC, can be put in a
list, and along with putting all static libraries in a list, we remove
the need for each flag to be repeated for each static library and we can
just use a foreach loop instead.
(Global compile flags of course don't apply to us meddling with
CMAKE_C_FLAGS and CMAKE_CXX_FLAGS directly, because we need to do that
in order to make sure the C and C++ standards are set properly.)
This fixes a regression where the game ignored the number of frames you
held down a direction if you released the direction during death.
Previously, the game only checked the number of frames you held down a
direction if you were able to control the player. If you weren't able to
control the player (e.g. during the death animation), then the number of
frames it counted didn't change. This also meant that if you were
holding a direction before you died, but released it during death, the
game wouldn't zero out the number of frames you held it.
This behavior was useful because it meant you could keep the
deceleration momentum that you normally get by holding a direction for 5
frames just by holding a direction for less than 5 frames after dying,
if you had the rest of the hold frames before you died. This behavior is
what's used in https://tasvideos.org/7575S at around frame 7200.
Unfortunately, #609 made it so that the direction hold processing
happened even if the player didn't have control, meaning that it would
zero the hold frames during the death animation in the TAS, thus
desyncing it when, after Viridian respawns, it performed the maneuver
that relied on the extra momentum.
The solution here is to just add the check back in again.
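A sketch of the restored logic (the variable names are my placeholders):

```cpp
static bool player_has_control;
static bool pressing_left;
static bool pressing_right;
static int hold_frames;

static void update_hold_frames(void)
{
    if (player_has_control)
    {
        if (pressing_left || pressing_right)
        {
            hold_frames++; /* 5 held frames grant deceleration momentum */
        }
        else
        {
            hold_frames = 0;
        }
    }
    /* No control (e.g. during death): hold_frames is left untouched,
     * preserving the pre-#609 behavior the TAS relies on. */
}
```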
Fixes #887.
While fixing #885, I noticed that I had a bunch of
`special/stdin.vvvvvv` entries saved in my `levelstats.vvv`. At once I
knew that the dumb `special/stdin` hack that actually checks if the
filename passed is `special/stdin` was to blame.
When STDIN playtesting was first merged, I knew in the back of my mind
that it was a bit of a dumb hack, but I didn't know it would have
consequences like showing up in `levelstats.vvv`. For now, I'll just
have to patch it, but hopefully in the future I'll remove the dumb hack
entirely. Commenting both instances of the dumb hack with instructions
to grep for it should help maintainers out.
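The patch is essentially a guard like this at both sites (a sketch; the
function name here is a hypothetical stand-in):

```cpp
#include <SDL_stdinc.h>

void update_level_stats(const char* filename); /* hypothetical stand-in */

static void maybe_update_level_stats(const char* filename)
{
    /* Don't let the STDIN pseudo-level leak into levelstats.vvv.
     * grep for "special/stdin" to find both instances of this hack! */
    if (SDL_strcmp(filename, "special/stdin") != 0)
    {
        update_level_stats(filename);
    }
}
```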
This is useful to investigate any TAS desync/reproducibility issues
relating to RNG, because even though I specifically separated the
Gravitron RNG away from other RNG and made it not dependent on the
system libc rand() function, there are still apparently some differences
in RNG execution between systems, resulting in TASVideos submission 7575
( https://tasvideos.org/7575S ) not syncing for anyone except the
author.
It seems that SDL_GetTicks(), which is used to seed the xoshiro RNG, is
not reliably consistent between systems, so in the future I will
probably replace it with a counter that is incremented each frame
starting from game startup, which is probably better.
This fixes the following warnings:

```
desktop_version/src/Music.cpp: At global scope:
desktop_version/src/Music.cpp:240:23: warning: non-static data member initializers only available with ‘-std=c++11’ or ‘-std=gnu++11’ [-Wc++11-extensions]
  240 |     Uint8 *wav_buffer = NULL;
      |                       ^
desktop_version/src/Music.cpp:414:32: warning: non-static data member initializers only available with ‘-std=c++11’ or ‘-std=gnu++11’ [-Wc++11-extensions]
  414 |     Uint8* decoded_buf_playing = NULL;
      |                                ^
desktop_version/src/Music.cpp:415:32: warning: non-static data member initializers only available with ‘-std=c++11’ or ‘-std=gnu++11’ [-Wc++11-extensions]
  415 |     Uint8* decoded_buf_reserve = NULL;
      |                                ^
desktop_version/src/Music.cpp:416:21: warning: non-static data member initializers only available with ‘-std=c++11’ or ‘-std=gnu++11’ [-Wc++11-extensions]
  416 |     Uint8* read_buf = NULL;
      |                     ^
```
These warnings happen because the non-static data members (i.e. data
members that will be different for each instance of a class) are being
initialized at the same time as they're being declared, which means
that's what their value will be when the class instance is initialized.
However, this is a C++11-only feature, and we don't use C++11. Luckily,
we don't need to do this, and it's in fact redundant, because we already
zero out the class instance in its constructor using SDL_zerop().
Therefore, we should remove these initializers to restore compliance
with C++03.
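Simplified, the fix is just:

```cpp
#include <SDL.h>

struct musicclass
{
    musicclass(void)
    {
        SDL_zerop(this); /* already zeroes every member, C++03-compatibly */
    }

    Uint8* wav_buffer; /* was: Uint8* wav_buffer = NULL; (a C++11 NSDMI) */
};
```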
Instead, for any number that isn't in the list of number words, just
return the regular Arabic numeral representation (i.e. just convert it
to a string). It's better than having "Lots" or "???", neither of which
really tells you what the number actually is.
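A sketch of the fallback (the array and function names are my
placeholders):

```cpp
#include <SDL_stdinc.h>

static const char* const number_words[] = {
    "Zero", "One", "Two", /* ...continuing up to the last listed word */
};

static const char* number_to_words(int n, char* buffer, size_t buffer_size)
{
    if (n >= 0 && n < (int) SDL_arraysize(number_words))
    {
        return number_words[n]; /* a word from the list */
    }
    /* Not in the list: plain numerals, e.g. 112 -> "112",
     * instead of "Lots" or "???". */
    SDL_snprintf(buffer, buffer_size, "%d", n);
    return buffer;
}
```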