While refactoring scriptclass::startgamemode, I noticed that these
variables weren't being reset. Fortunately, this doesn't seem to have
affected anything because they get overwritten one way or another in
startgamemode. But it's good just to be defensive and reset them anyway.
They are not reset in 2.2 or 2.0 glitchrunner mode, because the credits
warp relies on dying while exiting to the menu, and that means the
checkpoint position needs to be carried through.
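Purely for illustration, the shape of such a guarded reset is roughly the
following; the variable names and the glitchrunner check are assumptions,
not the actual code:

```cpp
/* Illustration only - names are guesses, not the real reset code. */
if (!GlitchrunnerMode_less_than_or_equal(Glitchrunner2_2))
{
    /* In 2.0/2.2 glitchrunner mode, the credits warp needs the checkpoint
     * position to survive exiting to the menu, so leave it alone there. */
    game.savex = 0;
    game.savey = 0;
    game.saverx = 0;
    game.savery = 0;
}
```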
This is to indicate when a code path is absolutely, for certain, 100%
unreachable. It's useful as the default case of a switch that is
definitely exhaustive because it's nested inside the case of another
switch (where the default case exists only to suppress compiler warnings
about the switch not being exhaustive), which is a situation that comes
up in my scriptclass::startgamemode refactor.
It does this by deliberately invoking undefined behavior: either a
compiler builtin that does exactly that, or, as a fallback, a noreturn
function that returns. (Undefined behavior is only undefined behavior if
the code path is actually executed; otherwise every NULL check would be
useless, because the pointer could be dereferenced while NULL in some
other code path.)
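A minimal sketch of such a macro and of the nested-switch use case; the
macro name, the fallback, and the enums here are placeholders rather than
whatever the codebase actually uses:

```cpp
#include <SDL.h>

#if defined(__GNUC__) || defined(__clang__)
#define UNREACHABLE() __builtin_unreachable()
#elif defined(_MSC_VER)
#define UNREACHABLE() __assume(0)
#else
/* Fallback: calling a noreturn function that actually returns is
 * undefined behavior in its own right. */
SDL_NORETURN static void unreachable_fallback(void) {}
#define UNREACHABLE() unreachable_fallback()
#endif

enum Shape { SHAPE_LETTER, SHAPE_DIGIT };
enum Letter { LETTER_A, LETTER_B };

static const char* describe(enum Shape shape, enum Letter letter)
{
    switch (shape)
    {
    case SHAPE_LETTER:
        /* The outer case guarantees `letter` is valid here, so this inner
         * switch is exhaustive; the default only silences warnings. */
        switch (letter)
        {
        case LETTER_A:
            return "the letter A";
        case LETTER_B:
            return "the letter B";
        default:
            UNREACHABLE();
        }
        break;
    case SHAPE_DIGIT:
        return "a digit";
    }
    return "";
}
```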
I have no idea why they were floats in the first place. They are
coordinates, and coordinates are integer positions in this game. They
get converted to float only to be explicitly `static_cast`ed back to
ints in `testwallsy`.
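For illustration only (the real signature and call sites may differ), the
change amounts to this kind of before/after:

```cpp
static bool check_collision(int t, int x, int y); /* hypothetical helper */

/* Before: the coordinates arrive as floats only to be cast straight back. */
static bool testwallsy_before(int t, float tx, float ty)
{
    const int x = static_cast<int>(tx);
    const int y = static_cast<int>(ty);
    return check_collision(t, x, y);
}

/* After: take ints directly, since coordinates are integer positions. */
static bool testwallsy_after(int t, int tx, int ty)
{
    return check_collision(t, tx, ty);
}
```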
This variable passes along the rule of the entity, which is an int. No
idea why it was converted to a float. Thankfully this is only used for
an unused block type, so it doesn't really matter.
The Android version just got a much-needed update to fix some resolution issues on devices with cutouts.
It turns out the mobile source was actually pretty out of date - like 3 versions out of date! This commit brings it up to date.
All the changes have just been about keeping the game running on modern devices, though. The biggest change was adding the Starling library to the project, which made the game GPU-powered and sped the whole thing up.
This makes it so room names are no longer pointers to someone else's
memory, and instead to set them you use `mapclass::setroomname`. If the
string is short enough to fit in a static, no-alloc buffer, then it gets
copied there. Otherwise, a new heap allocation is made that duplicates
the string, and the new pointer is used instead.
This makes it possible for room names to contain arbitrary data whose
origin is temporary (e.g. from a script command that could be added in
the future).
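A rough sketch of the scheme; the member names and buffer size here are
assumptions, not the exact implementation:

```cpp
#include <SDL.h>

class mapclass
{
public:
    const char* roomname;

    void setroomname(const char* name)
    {
        /* Free any previous heap copy first (SDL_free(NULL) is safe). */
        SDL_free(roomname_heap);
        roomname_heap = NULL;

        if (SDL_strlcpy(roomname_static, name, sizeof(roomname_static))
            < sizeof(roomname_static))
        {
            /* Short enough to fit in the static, no-alloc buffer. */
            roomname = roomname_static;
        }
        else
        {
            /* Too long: duplicate onto the heap and point at that. */
            roomname_heap = SDL_strdup(name);
            roomname = roomname_heap;
        }
    }

private:
    char roomname_static[256]; /* size is a guess */
    char* roomname_heap;       /* assumed to start out as NULL */
};
```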
This replaces all calls to SDL_free with a new macro, VVV_free, that
nulls the pointer afterwards. This mitigates any use-after-frees and
also completely eliminates double-frees. The same is done for any
function to free specific objects such as SDL_FreeSurface, with the
VVV_freefunc macro.
No exceptions for any of these calls, even if the pointer is discarded
or zeroed afterwards anyway. Better safe than sorry.
This is a macro rather than a function that takes in a
pointer-to-pointer because such a function would have type issues that
require casting and that's just not safe.
Even though SDL_free and other SDL functions already check for NULL, the
macro has a NULL check for other functions that don't. For example,
FAudioVoice_DestroyVoice does not check for NULL.
FILESYSTEM_freeMemory has been axed in favor of VVV_free, because it did
the same thing, just restricted to `unsigned char*`.
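A sketch of the two macros as described above; the real definitions may
differ in detail:

```cpp
#include <SDL.h>

#define VVV_freefunc(func, ptr) \
    do \
    { \
        if (ptr != NULL) \
        { \
            func(ptr); \
        } \
        ptr = NULL; \
    } \
    while (0)

#define VVV_free(ptr) VVV_freefunc(SDL_free, ptr)

static void example(void)
{
    char* copy = SDL_strdup("room name");
    SDL_Surface* surface = SDL_CreateRGBSurface(0, 320, 240, 32, 0, 0, 0, 0);

    VVV_free(copy);                         /* copy is NULL afterwards */
    VVV_freefunc(SDL_FreeSurface, surface); /* and so is surface */
}
```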
Trinket and teleporter legends would be drawn even if they were out of
bounds. Trinket legends in particular were easy to trigger, because you can
just place a trinket in a custom level and resize the map to not include
the room of the trinket.
Now, checks have been added so they won't be drawn if they are out of
bounds. This is in line with the fact that, since 2.3, if a trinket
exists outside of the map in custom levels, it won't count towards the
number of trinkets.
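Conceptually, the added guard is just a bounds test of the legend's room
against the map size, something like this (purely illustrative, not the
actual rendering code):

```cpp
static bool legend_in_bounds(int rx, int ry, int mapwidth, int mapheight)
{
    /* Legends whose room falls outside the map are simply skipped. */
    return rx >= 0 && ry >= 0 && rx < mapwidth && ry < mapheight;
}
```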
It's becoming pretty clear that the size of the map is important enough
to be queried a lot, but each time it's something like `map.custommode ?
map.customwidth : 20` and `map.custommode ? map.customheight : 20` which
is not ideal because of copy-pasting.
Furthermore, even `map.customwidth` and `map.customheight` are just
duplicates of `cl.mapwidth` and `cl.mapheight`, which are only set in
`customlevelclass::generatecustomminimap`. This is a bit annoying if you
want to, say, add checks that depend on the width and height of the
custom map in `mapclass::initcustommapdata`, but `map.customwidth` and
`map.customheight` are out of date because `generatecustomminimap`
hasn't been called yet. And doing the ternary there requires an `#ifndef
NO_CUSTOM_LEVELS` guard to reference `cl.mapwidth` and `cl.mapheight`,
which is just awful.
So I'm axing `map.customwidth` and `map.customheight`, and I'm axing all
the ternaries that are duplicating the source of truth in
`MapRenderData`. Instead, there will just be one function each for the
width and the height, `mapclass::getwidth` and `mapclass::getheight`,
and everyone can simply call those without needing to do ternaries or
duplication.
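A minimal sketch of what the two accessors boil down to, reusing the names
from above (`custommode`, `cl.mapwidth`, `cl.mapheight`, and the 20-room
default); the actual implementation may differ:

```cpp
int mapclass::getwidth(void)
{
#ifndef NO_CUSTOM_LEVELS
    if (custommode)
    {
        return cl.mapwidth;
    }
#endif
    return 20; /* the main game's map is always 20x20 */
}

int mapclass::getheight(void)
{
#ifndef NO_CUSTOM_LEVELS
    if (custommode)
    {
        return cl.mapheight;
    }
#endif
    return 20;
}
```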
The existing code was allergic to putting spaces between tokens, and had
some minor code duplication that I took the time to clean up as well.
The logic should be easier to follow now that the for-loops are no
longer duplicated for each of the map zoom levels.
I tested this by temporarily disabling map fog entirely and going
through a couple of different custom levels to compare their minimaps
against the existing code's output. These were A New Dimension and 333333 for 1x, Golden
Spiral and VVVV 4k for 2x, and VVVVVV is NP-Hard for 4x. There was no
difference in the output, not even a single pixel.
This adds color support to the output of the console on Windows. Now if
you're using Windows 10 version 1511 or later (I think it's 1511 anyway;
they added more VT sequence support in later versions), you will
see colors by default. This isn't due to Windows helping in any way;
this commit has to specifically enable it with SetConsoleMode(), because
Windows doesn't turn color support on by itself. (Or it can be enabled in
the registry, but having to go through the registry to enable basic shit
like that is completely fucking stupid.)
I tested this in my Windows 10 virtual machine and it works.
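A sketch of what the enabling step involves on Windows (trimmed down; the
real code may differ):

```cpp
#include <windows.h>

static int try_enable_vt_colors(void)
{
    HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);
    DWORD mode;

    if (out == NULL || out == INVALID_HANDLE_VALUE)
    {
        return 0;
    }
    if (!GetConsoleMode(out, &mode))
    {
        return 0;
    }

    /* Without this flag, the console ignores ANSI color escape sequences. */
    return SetConsoleMode(out, mode | ENABLE_VIRTUAL_TERMINAL_PROCESSING) != 0;
}
```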
Previously, we were using `color_enabled` to mean both that color was
supported and that it was enabled by the user (it is enabled by default).
But this logic doesn't work well if the color check function is called
again and ends up enabling color after the user disabled it. To fix this,
just split it into two variables, `color_supported` and `color_enabled`:
the color check controls one and the user setting controls the other, so
they no longer stomp on each other. Check both of them in order to print
color, of course.
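In sketch form (the mapping of names to roles here follows my reading of
the above, so treat it as an assumption):

```cpp
static int color_supported = 0; /* set by the (re-runnable) color check */
static int color_enabled = 1;   /* the user's setting, on by default */

static int use_color(void)
{
    /* Print color only when it's supported AND the user wants it. */
    return color_supported && color_enabled;
}
```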
This adds the `-console` command-line option (for Win32 only) so the
game can spawn an attached console window which will contain all console
output.
This is to make it easier for people to debug on Windows systems.
Otherwise, the only ways to get console output would be to either compile
the application as a console app (i.e. switch the subsystem to console)
- which is undesirable for regular users, since a console would then
always spawn even when unwanted - or to launch the game with shell
redirection so the output goes to a file.
As a result, color checking support is factored out of vlog_init() into
its own function, even though we don't support colors on Windows.
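On Win32, spawning and attaching the console boils down to something like
this (a sketch; error handling and the integration with the logging code
are omitted):

```cpp
#include <windows.h>
#include <stdio.h>

static void attach_console(void)
{
    if (AllocConsole())
    {
        /* Point the standard streams at the newly created console. */
        freopen("CONOUT$", "w", stdout);
        freopen("CONOUT$", "w", stderr);
        freopen("CONIN$", "r", stdin);
    }
}
```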
Using SDL_GetTicks() to seed the Gravitron RNG caused many
reproducibility issues while syncing https://tasvideos.org/7575S . To
fix this, add a frame counter, which is a number that is incremented
every frame and never resets, and use it instead.
If someone needs to switch back to SDL_GetTicks() for old TASes, then
provide the -seed-use-sdl-getticks command-line option for them.
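In sketch form (variable and flag names are taken from the text or
assumed, not necessarily the exact code):

```cpp
#include <SDL.h>

static Uint32 frame_counter = 0;
static SDL_bool seed_use_sdl_getticks = SDL_FALSE; /* -seed-use-sdl-getticks */

static void on_frame(void)
{
    frame_counter++; /* incremented every frame, never reset */
}

static Uint32 gravitron_rng_seed(void)
{
    return seed_use_sdl_getticks ? SDL_GetTicks() : frame_counter;
}
```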
In its previous location, it would only print the value of `s` after it
had already been mutated by `splitmix32` four times, and `s` isn't used
after that point, so the print wasn't very useful.
Mixing declarations and code here is fine: as of a few months ago we
compile with C99, and if we ever need to compile as C90 again, it's
trivial to add braces around the declarations.
If a music track has a loop comment with a negative value, ignore all
comments of the track. This is to prevent any weirdness from happening;
it's safer to just let the track loop improperly. Also log to the console
to let users know.
This is the same thing that SDL_mixer does now:
libsdl-org/SDL_mixer@e819489459
This commit happened as a result of discussion on the VVVVVV Discord
server about SDL_mixer 2.0.4 behavior with weird loop comment values
(e.g. octal input with leading zeroes). This is simply updating the code
to be in line with what newer versions of SDL_mixer do.
Just in case it happens. Comments aren't really important to the game (at
worst, a track will just loop in the wrong place), so it's fine to carry
on here and simply ignore all the comments.
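A sketch of the negative-value guard described above (the function name
and logging are illustrative; the real check lives in the track-loading
path):

```cpp
#include <SDL.h>

/* Returns SDL_FALSE if the comment value is invalid, in which case the
 * caller ignores ALL comments of the track and lets it loop improperly. */
static SDL_bool parse_loop_value(const char* text, long* value)
{
    char* end;
    const long parsed = SDL_strtol(text, &end, 10); /* base 10: no octal weirdness */

    if (parsed < 0)
    {
        SDL_Log("Negative loop point %ld; ignoring all comments of this track", parsed);
        return SDL_FALSE;
    }

    *value = parsed;
    return SDL_TRUE;
}
```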
This does the following:
- Const-qualify variables if they are not modified
- Place each statement on its own line
- Place the asterisk with the type instead of the variable name
- Combine declarations and initializations where possible
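A toy before/after illustrating those changes (not actual code from the
repo):

```cpp
#include <SDL.h>

/* Before: */
static int before(char *s) { int n, m; n = (int) SDL_strlen(s); m = n * 2; return m; }

/* After: asterisk with the type, one statement per line, declarations
 * combined with their initializations, and const where nothing changes. */
static int after(char* s)
{
    const int n = (int) SDL_strlen(s);
    const int m = n * 2;
    return m;
}
```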
VVVVVV uses submodules now, so you need to know how to initialize them.
I'm explicitly not including `git clone --recurse-submodules`. Usage of
submodules in git projects is kinda rare in my experience, so people are
used to doing simple clones, and that instruction would just leave people
annoyed, thinking they have to delete the repo they already cloned and
clone it again slightly differently.
It also doesn't help you if you need submodules that aren't in the
master branch (for example, if you clone my fork recursively and then
check out the localization branch, you won't have C-HashMap and you'll
need the update command anyway). And you also need it whenever VVVVVV
updates its submodules. So teaching people just the update command is
better.
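For reference, the update command in question is the standard one (check
the repo's own docs for the authoritative instructions):

```
git submodule update --init --recursive
```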