When recreating the same menu, there's basically no reason to reset the
currently-selected menu option. (Also, no need to worry about indexing
out of bounds or anything - the number gets checked while iterating over
all menu options; it's never used to actually index anything. At worst
there might be a 1-frame flicker as the bounds code in gameinput() kicks
in, but that shouldn't happen anyways.)
Zip files that have been successfully mounted in editorclass::loadZips()
will now be ignored when the game does its second pass over the levels
directory. Otherwise, this would produce a superfluous error message,
because the game would attempt to parse the zip file as a level file
(when it's not a level file and is in fact a binary file).
This returns whether the given file is mounted or not. 2.3 added level zip
support, so whenever the game loads level metadata, it will mount any
zip files in the levels directory; this function can be used to check if
any of those files have been mounted, and ignore them if so.
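A minimal sketch of what such a check can look like, assuming it just asks
PhysFS via PHYSFS_getMountPoint() (the real function may be implemented
differently):

```c
#include <physfs.h>
#include <stdbool.h>

/* Hypothetical helper: PHYSFS_getMountPoint() returns NULL (and sets an
 * error) if the given archive hasn't been mounted. */
static bool is_mounted(const char* filename)
{
    return PHYSFS_getMountPoint(filename) != NULL;
}
```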
Otherwise, this would produce a superfluous warning message in the
console. Directories are now ignored and the game never attempts to open
them; so now any warning messages that do get printed are about genuine
files that something has genuinely gone wrong with.
Well, there's still a warning message printed if there's a symlink to a
directory; this is rarer, but it's still a false positive.
This function will be used to differentiate files from directories.
Or at least that was the hope. Symlink support was added in 2.3, but it
doesn't seem like PHYSFS_stat() lets you follow the symlink to check if
what it points to is itself a file or directory. And there doesn't seem
to be any function to follow the symlink yourself...
So for now, this function considers symlinks to directories to be files.
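A minimal sketch of the check, assuming the helper just wraps
PHYSFS_stat() and treats anything that isn't PHYSFS_FILETYPE_DIRECTORY as
a file:

```c
#include <physfs.h>
#include <stdbool.h>

/* Hypothetical helper: symlinks report PHYSFS_FILETYPE_SYMLINK, so a
 * symlink to a directory falls through and counts as a file here. */
static bool is_directory(const char* filename)
{
    PHYSFS_Stat statbuf;
    if (!PHYSFS_stat(filename, &statbuf))
    {
        return false; /* stat failed; treat it as "not a directory" */
    }
    return statbuf.filetype == PHYSFS_FILETYPE_DIRECTORY;
}
```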
PHYSFS_readBytes() returns a PHYSFS_sint64, but we forcefully shove it
into a 32-bit signed integer.
Fixing the type of this doesn't have any immediate consequences, but
it's good for the future in case we want to use the return value for
files bigger than 2 gigabytes; it doesn't harm us in any way, and it's
just better housekeeping.
PHYSFS_fileLength() returns -1 if the file size can't be determined. I'm
going to set it to 0 instead, because that seems to be better-behaved
for consumers.
Take lodepng_decode24() or lodepng_decode32(), for example - from a
quick glance at the source, it only takes in a size_t (an unsigned
integer) for the filesize, and one of the first things it does is malloc
with the given filesize. If the -1 turns into SIZE_MAX and LodePNG
attempts to allocate that many bytes... well, I don't know of any
systems that have 18 exabytes of memory. So that seems pretty bad.
The function returns a PHYSFS_sint64, but we forcefully shove it into a
PHYSFS_uint32. This means we throw away all the negative numbers, which
is bad because the function returns -1 if the size of the file can't be
determined; plus, we also throw away 32 bits of information, reducing
our range of supported file sizes from 9 exabytes to 4 gigabytes.
File size support is only as good as the weakest link, and it looks
like one of the consumers of FILESYSTEM_loadFileToMemory(),
SDL_RWFromConstMem(), only takes in a signed 32-bit integer of size;
however, I would still like to do at least the bare minimum to support
as many file sizes as we can, and changing types around is one of those
bare minimums.
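A minimal sketch of the type fix and the -1 clamp described above (the
surrounding helper is hypothetical):

```c
#include <physfs.h>

/* Keep the full 64-bit length, and clamp PhysFS's -1 "size unknown"
 * sentinel to 0 so consumers like LodePNG never see a huge unsigned
 * value. */
static PHYSFS_sint64 file_length_or_zero(PHYSFS_File* handle)
{
    PHYSFS_sint64 length = PHYSFS_fileLength(handle);
    if (length < 0)
    {
        length = 0;
    }
    return length;
}
```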
I had misread this line in #629 and thought that it was just clearing
the entire surface, when really it was filling the surface with opaque
black. ClearSurface() would instead make it transparent, which would
mean when it got drawn, it would get drawn against blue, and not black.
Whoops.
These float attributes are assigned to, and then never read again. The
coordinate systems of blocks are a bit of a mess - some use xp/yp, some
use xp/yp and rect.x/rect.y - but I can confidently say that these are
never used, because it compiles fine if I remove the attributes from the
class, plus remove all assignments to them.
Screen.cpp wasn't explicitly including SDL.h, instead relying on
Screen.h to include it.
It was also relying on SDL.h to include stdio.h on Linux, which breaks
because SDL.h doesn't include stdio.h on Windows. So stdio.h is now
explicitly included as well.
stdlib.h is not used in this file.
After reasoning about it for a bit, there's no reason for these checks
to be here. `zip_normal` will either be
/home/infoteddy/.local/share/VVVVVV/levels if the asset directory is a
directory, or levels/levelname.zip if the asset directory is inside the
same zip as the level is. I don't see how they could ever be data.zip.
My guess is because of the VCE bug where it messed up its search path,
and before that bug was fixed, it had to be worked around by explicitly
blacklisting data.zip here. When the assets mounting stuff
was ported from VCE to vanilla, vanilla didn't have the problem, and so
this data.zip blacklisting stuff was unnecessary.
Either way, I see no reason for this, so I'm going to remove it.
There is no need to use heap-allocated strings here, so I've refactored
them out. I've also cleaned up both of the functions a bit, because the
line spacing of the previous version was completely non-existent, brace
style was same-line instead of next-line, and the variable names were a
bit misleading (in FILESYSTEM_mountassets(), there is a `zippath` AND a
`zip_path`, which are two completely different variables).
Also, FILESYSTEM_mount() now prints an error message and bails if
PHYSFS_getRealDir() returns NULL, whereas it didn't do that before.
The function is literally just an alias for PHYSFS_exists(), which does
not exclusively check for directories. Plus, the function is also used
to check if a non-directory file exists. Why is this function named
"directoryExists"?!
The info message when a .data.zip file is mounted is now differentiated
from the message when an actual directory is mounted (the .data.zip
message specifies ".data.zip").
The error message for an error occurring when loading or mounting a .zip
is now capitalized.
The "Custom asset directory does not exist" now uses puts(), because
there's no need to use printf() here.
I don't know why this is here; it's unused. I don't know why the
compiler doesn't warn about this being unused either - maybe it's
secretly being used? That also means I'm not sure if the compiler is
optimizing this away or not. Anyway, this is getting removed.
The STL here cannot be completely eliminated (because the custom entity
object uses std::string), but at least we can avoid unnecessarily making
std::strings until the very end.
There's not really any reason for this function to use heap-allocated
strings. So I've refactored it to not do that.
I would've used SDL_strrstr(), if it existed. It does not appear to
exist. But that's okay.
PhysFS by default just uses system malloc(), realloc(), and free(); it
provides a way to change them, with a struct named PHYSFS_Allocator and
a function named PHYSFS_setAllocator().
According to PhysFS docs, this function should be called before
PHYSFS_init(), which is why this allocator stuff is handled in
FileSystemUtils.cpp.
Also, I've had to make two "bridge" functions, because PHYSFS_Allocator
wants pointers to functions taking in `PHYSFS_uint64`s, not `size_t`s.
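A minimal sketch of those bridge functions, assuming the replacement
allocator forwards to SDL's allocation functions (as the rest of this
changeset does elsewhere):

```c
#include <SDL.h>
#include <physfs.h>

/* PHYSFS_Allocator wants callbacks taking PHYSFS_uint64, not size_t,
 * hence these bridges. */
static void* bridged_malloc(PHYSFS_uint64 size)
{
    return SDL_malloc((size_t) size);
}

static void* bridged_realloc(void* ptr, PHYSFS_uint64 size)
{
    return SDL_realloc(ptr, (size_t) size);
}

/* Called before PHYSFS_init(), per the PhysFS docs. */
static void set_allocator(void)
{
    static PHYSFS_Allocator allocator; /* Init/Deinit stay NULL */
    allocator.Malloc = bridged_malloc;
    allocator.Realloc = bridged_realloc;
    allocator.Free = SDL_free;
    PHYSFS_setAllocator(&allocator);
}
```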
ClearSurface() is less verbose than doing it the old way, and also
conveys intent clearer. Plus, some of these FillRect()s had hardcoded
width and height values, whereas ClearSurface() doesn't - meaning this
change also has better future-proofing, in case the widths and heights
of the surfaces involved change in the future.
When you pass NULL in for the SDL_Rect* parameter to SDL_FillRect(), SDL
will automatically fill the entire surface with that color. There's no
need for us to create the SDL_Rect ourselves.
This is a function that does what it says - it clears the given surface.
This just means doing a FillRect(), but it's better to use this function
because it conveys intent better.
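A minimal sketch of what ClearSurface() boils down to (the real helper
may differ slightly):

```c
#include <SDL.h>

/* Clear the whole surface to transparent black; a NULL rect makes
 * SDL_FillRect() cover the entire surface. */
static void ClearSurface(SDL_Surface* surface)
{
    const Uint32 color = SDL_MapRGBA(surface->format, 0, 0, 0, 0);
    SDL_FillRect(surface, NULL, color);
}
```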
This is pretty old commented-out code from earlier versions of the game;
it's no longer useful, and is just distracting. If we need it, we can
always refer back to this commit (but I sincerely doubt that we'll need
it).
Apparently in C, if you have `void test();`, it's completely okay to do
`test(2);`. The function will take in the argument, but just discard it
and throw it away. It's like a trash can, and a rude one at that. If you
declare it like `void test(void);`, this is prevented.
This is not a problem in C++ - doing `void test();` and `test(2);` is
guaranteed to result in a compile error (this also means that right now,
at least in all `.cpp` files, nobody is ever calling a void parameter
function with arguments and having their arguments be thrown away).
However, we may not be using C++ in the future, so I just want to lay
down the precedent that if a function takes in no arguments, you must
explicitly declare it as such.
I would've added `-Wstrict-prototypes`, but it produces an annoying
warning message saying it doesn't work in C++ mode if you're compiling
in C++ mode. So it can be added later.
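A small C example of the pitfall (the functions here are hypothetical,
purely for illustration):

```c
#include <stdio.h>

/* An empty parameter list in C means "unspecified parameters", so any
 * call compiles and extra arguments are silently discarded. */
void unspecified_args() { puts("called anyway"); }

/* Declaring (void) makes a call with arguments a compile error. */
void no_args(void) { puts("called properly"); }

int main(void)
{
    unspecified_args(2); /* compiles in C; the 2 goes in the trash can */
    no_args();           /* no_args(2) would not compile */
    return 0;
}
```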
One of these days, I need to get around to running Include What You Use
on this codebase. Until then, while I was working on #624, I noticed
these; I'm removing them now.
The recently released SDL 2.0.14 adds a native function for opening URIs
from the host system, superseding the OS-specific implementations of
FILESYSTEM_openDirectory.
This fixes a regression where moving platforms had no collision, because
their width and height would be maintained, but their type would be -1.
(Also because I didn't test enough.)
In #565, I decided to set blocks' types to -1 when disabling them, to be
a bit safer in case there was some code that used block types but not
their width and heights. However, this means that when blocks get
disabled and re-created in the platform update loops, their types get
set to -1, which effectively also disables their collision.
In the end, I'll just have to compromise and remove setting blocks to
type -1. Because in a better world, we shouldn't be destroying and
creating blocks constantly just to move some platforms - however, fixing
such a fundamental problem is beyond the scope of at least 2.3 (there's
also the fact that this problem also results in some bugs that are a
part of compatibility, whether we like it or not). So I'll just remove
the -1.
next_split_s() could potentially commit out-of-bounds indexing if the
amount of source data was bigger than the destination buffer.
This is because the size of the source data passed in includes the null
terminator, so if 1 byte is not subtracted from it, then after it passes
through the VVV_min(), it will index 1 past the end of the passed buffer
when null-terminating it.
In contrast, the other argument of the VVV_min() does not need 1
subtracted from it, because that length does not include a null
terminator (next_split() returns the length of the substring, after all;
not the length of the substring plus 1).
(The VVV_min() here also shortens the range of values to the size of an
int, but we'll probably make size_t versions anyway; plus who really
cares about supporting massively-sized buffers bigger than 2 billion
bytes in length? That just doesn't make sense.)
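In code, the principle behind the fix looks roughly like this
(hypothetical signature; `size` includes room for the null terminator,
and is assumed to be at least 1):

```c
#include <SDL.h>

static void copy_substring(
    char* buffer, const size_t size,
    const char* substring, const size_t substring_len
) {
    /* Clamp to size - 1 so the '\0' below stays inside the buffer. */
    const size_t amount =
        (substring_len < size - 1) ? substring_len : size - 1;
    SDL_memcpy(buffer, substring, amount);
    buffer[amount] = '\0';
}
```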
If PHYSFS_enumerate() isn't successful, we now print that it wasn't
successful, and print the PhysFS error message. (We should get that
logging thing going sometime...)
Note that level dir listing still uses plenty of STL (including the end
product - the `LevelMetaData` struct - which, for the purposes of 2.3,
is okay enough (2.4 should remove STL usage entirely)); it's just that
the initial act of iterating over the levels directory no longer takes
four or SIX(!!!) heap allocations (not counting reallocations and other
heap allocations this patch does not remove), and no longer does any
data marshalling.
Like text splitting, and binary blob extra indice grabbing, the current
approach that FILESYSTEM_getLevelDirFileNames() uses is a temporary
std::vector of std::strings as a middleman to store all the filenames,
and the game iterates over that std::vector to grab each level metadata.
Except, it's even worse in this case, because PHYSFS_enumerateFiles()
ALREADY does a heap allocation. Oh, and
FILESYSTEM_getLevelDirFileNames() gets called two or three times. Yeah,
let me explain:
1. FILESYSTEM_getLevelDirFileNames() calls PHYSFS_enumerateFiles().
2. PHYSFS_enumerateFiles() allocates an array of pointers to arrays of
chars on the heap. For each filename, it will:
a. Allocate an array of chars for the filename.
b. Reallocate the array of pointers to add the pointer to the above
char array.
(In this step, it also inserts the filename in alphabetically -
without any further allocations, as far as I know - but this is a
COMPLETELY unnecessary step, because we are going to sort the list
of levels by ourselves via the metadata title in the end anyways.)
3. FILESYSTEM_getLevelDirFileNames() iterates over the PhysFS list, and
allocates an std::vector on the heap to shove the list into. Then,
for each filename, it will:
a. Allocate an std::string, initialized to "levels/".
b. Append the filename to the std::string above. This will most
likely require a re-allocation.
c. Duplicate the std::string - which requires allocating more memory
again - to put it into the std::vector.
(Compared to the PhysFS list above, the std::vector does less
reallocations; it however will still end up reallocating a certain
amount of times in the end.)
4. FILESYSTEM_getLevelDirFileNames() will free the PhysFS list.
5. Then to get the std::vector<std::string> back to the caller, we end
up having to reallocate the std::vector again - reallocating every
single std::string inside it, too - to give it back to the caller.
And to top it all off, FILESYSTEM_getLevelDirFileNames() is guaranteed
to either be called two times, or three times. This is because
editorclass::getDirectoryData() will call editorclass::loadZips(), which
will unconditionally call FILESYSTEM_getLevelDirFileNames(), then call
it AGAIN if a zip was found. Then once the function returns,
getDirectoryData() will still unconditionally call
FILESYSTEM_getLevelDirFileNames(). This smells like someone bolting
something on without regard for the whole picture of the system, but
whatever; I can clean up their mess just fine.
So, what do I do about this? Well, just like I did with text splitting
and binary blob extras, make the final for-loop - the one that does the
actual metadata parsing - more immediate.
So how do I do that? Well, PhysFS has a function named
PHYSFS_enumerate(). PHYSFS_enumerateFiles(), in fact, uses this function
internally, and is basically just a wrapper with some allocation and
alphabetization.
PHYSFS_enumerate() takes in a pointer to a function, which it will call
for every single entry that it iterates over. It also lets you pass in
another arbitrary pointer that it leaves alone, which I use to pass
through a function pointer that is the actual callback.
So to clarify, there are two callbacks - one callback is passed through
into another callback that gets passed through to PHYSFS_enumerate().
The callback that gets passed to PHYSFS_enumerate() is always the same,
but the callback that gets passed through the callback can be different
(if you look at the calling code, you can see that one caller passes
through a normal level metadata callback; the other passes through a zip
file callback).
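A minimal sketch of that two-callback pattern (the names and exact
plumbing are hypothetical; the real code passes more state through):

```c
#include <SDL.h>
#include <physfs.h>

typedef void (*file_callback)(const char* full_path);

/* This is the callback handed to PHYSFS_enumerate(); the "arbitrary
 * pointer" is the real per-file callback, passed straight through. */
static PHYSFS_EnumerateCallbackResult enumerate_callback(
    void* data, const char* origdir, const char* filename
) {
    char full_path[256];
    file_callback callback = (file_callback) data;

    SDL_snprintf(full_path, sizeof(full_path), "%s/%s", origdir, filename);
    callback(full_path);

    return PHYSFS_ENUM_OK;
}

static void enumerate_levels(file_callback callback)
{
    PHYSFS_enumerate("levels", enumerate_callback, (void*) callback);
}
```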
Furthermore, I've also cleaned it up so that if editorclass::loadZips()
finds a zip file, it won't iterate over all the files in the levels
directory a third time. Instead, the level directory only gets iterated
over twice - once to check for zips, and another to load every level
plus all zips; the second time is when all the heap allocations happen.
And with that, level list loading now uses less STL templated stuff and
far fewer heap allocations.
Also, ed.directoryList basically has no reason to exist other than being
a temporary std::vector, so I've removed it. This further decreases
memory usage, depending on how many levels you have in your levels
folder (I know that I usually have a lot and don't really ever clean it
up, lol).
Lastly, in the callback passed to PhysFS, `builtLocation` is actually no
longer hardcoded to just the `levels` directory, since instead we now
use the `origdir` variable that PhysFS passes us. So that's good, too.
If PHYSFS_mountHandle() failed to mount a zip file, we would print
PhysFS's error message straight, without any surrounding context. This
seems a little weird, and doesn't maximize understanding for readers;
I've made it so now the error message is "Could not mount <zip file>:
<PhysFS error>".
When Ethan added PhysFS to the game, he put in a hardcoded check (marked
with a FIXME) that explicitly removed all filenames that were "data"
returned by PHYSFS_enumerateFiles(). Apparently this was due to a weird
bug with the function putting in "data" strings in its output in PhysFS
2.0.3; however, the game now uses PhysFS 3.0.2, and I could not
reproduce this bug on my system. (I also tested, and this also
straight-up ignores legitimate level filenames that just happen to be
"data" (without the .vvvvvv extension).)
After talking with Ethan in Discord DMs, I asked if we could remove this
check, and he said that we could. So I'm doing it now.
Just like I refactored text splitting to no longer use std::vectors,
std::strings, or temporary heap allocations, decreasing memory usage and
improving performance; there's no reason to use a temporary
heap-allocated std::vector to grab all extra binary blob indices, when
instead the iteration can just be more immediate.
Instead, what I've done is replaced binaryBlob::getExtra() with
binaryBlob::nextExtra(), which takes in a pointer to an index variable,
and will increment the index variable until it reaches an extra track.
After the caller processes the extra track, it is the caller's
responsibility to increment the variable again before passing it back to
nextExtra().
This avoids all heap allocations and brings down the memory usage of
processing extra tracks.
This patch restores some 2.2 behavior, fixing a regression caused by the
refactor of properly using std::vectors.
In 2.2, the game allocated 200 items in obj.entities, but used a system
where each entity had an `active` attribute to signify if the entity
actually existed or not. When dealing with entities, you would have to
check this `active` flag, or else you'd be dealing with an entity that
didn't actually exist. (By the way, what I'm saying applies to blocks
and obj.blocks as well, except for some small differing details like the
game allocating 500 block slots versus obj.entities's 200.)
As a consequence, the game had to use a separate tracking variable,
obj.nentity, because obj.entities.size() would just report 200, instead
of the actual amount of entities. Needless to say, having to check for
`active` and use `obj.nentity` is a bit error-prone, and it's messier
than simply using the std::vector the way it was intended. Also, this
resulted in a hard limit of 200 entities, which custom level makers ran
into surprisingly quite often.
2.3 comes along, and removes the whole system. Now, std::vectors are
properly being used, and obj.entities.size() reports the actual number
of entities in the vector; you no longer have to check for `active` when
dealing with entities of any sort.
But there was one previous behavior of 2.2 that this system kind of
forgets about - namely, the ability to have holes in between entities.
You see, when an entity got disabled in 2.2 (which just meant turning
its `active` off), the indices of all other entities stayed the same;
the indice of the entity that got disabled stays there as a hole in the
array. But when an entity gets removed in 2.3 (previous to this patch),
the indices of every entity afterwards in the array get shifted down by
one. std::vector isn't really meant to be able to contain holes.
Do the indices of entities and blocks matter? Yes; they determine the
order in which entities and blocks get evaluated (the highest indice
gets evaluated first), and I had to fix some block evaluation order
stuff in previous PRs.
And in the case of entities, they matter hugely when using the
recently-discovered Arbitrary Entity Manipulation glitch (where crewmate
script commands are used on arbitrary entities by setting the `i`
attribute of `scriptclass` and passing invalid crewmate identifiers to
the commands). If you use Arbitrary Entity Manipulation after destroying
some entities, there is a chance that your script won't work between 2.2
and 2.3.
The indices also still determine the rendering order of entities
(highest indice gets drawn first, which means lowest indice gets drawn
in front of other entities). As an example: let's say we have the player
at 0, a gravity line at 1, and a checkpoint at 2; then we destroy the
gravity line and create a crewmate (let's do Violet).
If we're able to have holes, then after removing the gravity line, none
of the other indices shift. Then Violet will be created at indice 1, and
will be drawn in front of the checkpoint.
But if we can't have holes, then removing the gravity line results in
the indice of the checkpoint shifting down to indice 1. Then Violet is
created at indice 2, and gets drawn behind the checkpoint! This is a
clear illustration of changing the behavior that existed in 2.2.
However, I also don't want to go back to the `active` system of having
to check an attribute before operating on an entity. So... what do we
do to restore the holes?
Well, we don't need to have an `active` attribute, or modify any
existing code that operates on entities. Instead, we can just set the
attributes of the entities so that they naturally get ignored by
everything that comes into contact with it. For entities, we set their
invis to true, and their size, type, and rule to -1 (the game never uses
a size, type, or rule of -1 anywhere); for blocks, we set their type to
-1, and their width and height to 0.
obj.entities.size() will no longer necessarily equal the amount of
entities in the room; rather, it will be the amount of entity SLOTS that
have been allocated. But nothing that uses obj.entities.size() needs to
actually know the amount of entities; it's mostly used for iterating
over every entity in the vector.
Excess entity slots get cleaned up upon every call of
mapclass::gotoroom(), which will now deallocate entity slots starting
from the end until it hits a player, at which point it will switch to
disabling entity slots instead of removing them entirely.
The entclass::clear() and blockclass::clear() functions have been
restored because we need to call their initialization functions when
reusing a block/entity slot; it's possible to create an entity with an
invalid type number (it creates a glitchy Viridian), and without calling
the initialization function again, it would simply not create anything.
After this patch is applied, entity and block indices will be restored
to how they behaved in 2.2.
Just like is_positive_num(), an empty string is not a number.
I've also decided to unroll iteration 0 of the loop here so readability
is improved; this happens to also knock out the whole "accepting empty
string" thing, too.
To account for empty strings, we simply have to special-case them.
Simple as that.
This was also a problem with the previous std::string implementation of
this function; regardless, this is fixed now.
The current way "arrays" from XML files are loaded (before this commit
is applied) goes something like this:
1. Read the buffer of the contents of the tag using TinyXML-2.
2. Allocate a buffer on the heap of the same size, and copy the
existing buffer to it. (This is what the statement `std::string
TextString = pText;` does.)
3. For each delimiter in the heap-allocated buffer...
a. Allocate another buffer on the heap, and copy the characters from
the previous delimiter to the delimiter you just hit.
b. Then allocate the buffer AGAIN, to copy it into an std::vector.
4. Then re-allocate every single buffer YET AGAIN, because you need to
make a copy of the std::vector in split() to return it to the caller.
As you can see, the existing way uses a lot of memory allocations and
data marshalling, just to split some text.
The problem here is mostly making a temporary std::vector of split text,
before doing any actual useful work (most likely, putting it into an
array or ANOTHER std::vector - if the latter, then that's yet another
memory allocation on top of the memory allocation you already did; this
memory allocation is unavoidable, unlike the ones mentioned earlier,
which should be removed).
So I noticed that since we're iterating over the entire string once
(just to shove its contents into a temporary std::vector), and then
basically iterating over it again - why can't the whole thing just be
more immediate, and just be iterated over once?
So that's what I've done here. I've axed the split() function (both of
them, actually), and made next_split() and next_split_s().
next_split() will take an existing string and a starting index, and it
will find the next occurrence of the given delimiter in the string. Once
it does so, it will return the length from the previous starting index,
and modify your starting index as well. The price for immediateness is
that you're expected to keep the previous starting index around in order
to be able to use the function; updating it after each iteration is also
your responsibility.
(By the way, next_split() doesn't use SDL_strchr(), because we can't get
the length of the substring for the last substring. We could handle this
special case specifically, but it'd be uglier; it also introduces
iterating over the last substring twice, when we only need to do it
once.)
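A minimal sketch of the iteration idea, with a hypothetical signature
(the real next_split() differs in its exact parameters):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Report the substring starting at *start (its offset and length), then
 * advance *start past the delimiter for the next call. Returns false
 * once the previous call consumed the final substring. */
static bool split_next(
    const char* str, const char delim,
    size_t* start, size_t* out_pos, size_t* out_len
) {
    size_t i;

    if (*start == (size_t) -1)
    {
        return false;
    }

    i = *start;
    *out_pos = i;
    while (str[i] != '\0' && str[i] != delim)
    {
        i++;
    }
    *out_len = i - *out_pos;

    /* Step over the delimiter, or mark that this was the last piece. */
    *start = (str[i] == delim) ? i + 1 : (size_t) -1;
    return true;
}

int main(void)
{
    const char* csv = "12,34,56";
    size_t start = 0;
    size_t pos;
    size_t len;

    /* No heap allocations: each substring is just an offset and a length
     * into the original buffer. */
    while (split_next(csv, ',', &start, &pos, &len))
    {
        printf("substring at %zu, length %zu\n", pos, len);
    }
    return 0;
}
```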
next_split_s() does the same thing as next_split(), except it will copy
the resulting substring into a buffer that you provide (along with its
size). Useful if you don't particularly care about the length of the
substring.
All callers have been updated accordingly. This new system does not make
ANY heap allocations at all; at worst, it allocates a temporary buffer
on the stack, but that's only if you use next_split_s(); plus, it'd be a
fixed-size buffer, and stack allocations are negligible anyway.
This improves performance when loading any sort of XML file, especially
loading custom levels - which, on my system at least, I can noticeably
tell (there's less of a freeze when I load in to a custom level with
lots of scripts). It also decreases memory usage, because the heap isn't
being used just to iterate over some delimiters when XML files are
loaded.
These comments were probably remnants of some late-night coding session
or something. Anyway, they're not needed; there's nothing to do with SDL
here, and the "Init" is obvious because the function is a constructor.
Contents and scripts should be reset in editorclass::reset(); there's no
reason to reset them again right before you load them from an XML file
in editorclass::load().
Additionally, the resets now consistently use SDL_zeroa() (for contents)
and scriptclass::clearcustom() (for scripts).
I'm partial to slash-asterisk-style comments, so I'll use those here.
Also, having a space after the start of comments is good. I've also
removed the "Add the script if we have a preceding header" comments
since it can be inferred by reading the surrounding code.
Instead of checking the length() of an std::string, just check if
pText[0] is equal to '\0'.
This will have to be done anyway, because I'm going to get rid of the
std::string allocation here, and I noticed this inefficiency in the
indentation, so I'm going to remove it.
The actual unindent will be done in the next commit.
This now means every XML array load is done with common, de-duplicated
code. The only exceptions to this are a few special cases; the majority
of cases are a simple matter of reading an array of integers and putting
it into another array.
Seems like the only reason I hadn't caught the <customlevelscore> tag
until now was because I was focused on de-duplicating all the array
loads in Game::loadstats() and below, forgetting about
Game::loadcustomlevelstats().
In order to be able to use the LOAD_ARRAY() and LOAD_ARRAY_RENAME()
macros in Game::loadcustomlevelstats(), they have to be moved to earlier
in the file.
Even if split() didn't use the STL, using this function here is a bit
unnecessary, because a simple SDL_strchr() suffices. Refactoring split()
to not use the STL will break this caller anyway, so I might as well
just refactor this to not use split() in the first place.
This refactor also properly checks if the inputs are valid integers. And
since split() is no longer used, it also rejects inputs ending with a
trailing comma as being invalid, too; this didn't happen previously.
It's intentional that I used is_number() here instead of
is_positive_num(), thus accepting negative numbers; in the future it
might be possible to have negative room coordinates.
Valgrind reported this.
The error here is that the buffer here is only guaranteed to be
initialized up until (and including) the null-terminator, by
SDL_snprintf(). Iterating over the entire allocated buffer is bad and I
should feel bad as the girl who wrote this code; doing that reads
uninitialized memory and passes it to SDL_tolower().
As a bonus, the iterator increment is now a preincrement instead of a
postincrement.
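A minimal sketch of the fixed iteration (the real code operates on a
different buffer, but the principle is the same):

```c
#include <SDL.h>

/* Only walk up to the '\0' that SDL_snprintf() wrote, rather than over
 * the whole allocation, so no uninitialized bytes are read. */
static void lowercase_in_place(char* str)
{
    char* p;
    for (p = str; *p != '\0'; ++p)
    {
        *p = (char) SDL_tolower((unsigned char) *p);
    }
}
```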
This fixes memory leaking every single time a file gets loaded(!) when
the list of custom levels gets loaded(!!!), which Valgrind reports. This
memory leak is completely my bad; 2.2 properly frees the loaded file,
and VCE uses an std::unique_ptr - which I decided to ignore and not
think about why it would be there.
It's safe to do this free after uMem gets copied into std::string;
although, in the future, I *am* thinking about refactoring this function
(and the tag finder function) to not use std::strings, and I'll have to
be careful to make sure that the memory management with the file is
correct when I do so.
This makes the freesrc argument of Mix_LoadMUS_RW() 1 instead of 0. If
the argument is nonzero, then the passed SDL_RWops will be automatically
freed when m_music is freed, too.
I don't know why this was 0 before. Setting it to 1 fixes a memory leak
that Valgrind reports (which turns into an actual leak every time custom
assets are mounted or unmounted).
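A minimal sketch of the call with ownership handed over (the surrounding
function is hypothetical):

```c
#include <SDL.h>
#include <SDL_mixer.h>

static Mix_Music* load_music_from_memory(const void* mem, const int size)
{
    SDL_RWops* rw = SDL_RWFromConstMem(mem, size);
    if (rw == NULL)
    {
        return NULL;
    }
    /* freesrc = 1: SDL_mixer frees the SDL_RWops along with the music,
     * so the RWops no longer leaks. */
    return Mix_LoadMUS_RW(rw, 1);
}
```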
This adds a check that the pointer passed to
FILESYSTEM_loadFileToMemory() isn't NULL, and if it is, just returns
early in the function, instead of continuing later and producing a
different, slightly-misleading error message.
Previously, it was guarded behind a check for the length, which is... I
guess still perfectly fine behavior, but there's no reason to have a
length check here; FILESYSTEM_freeMemory() uses SDL_free(), which does a
check that the pointer passed is non-NULL (the pointer that is passed
here, despite not being initialized upon declaration, is guaranteed to
be initialized by FILESYSTEM_loadFileToMemory() anyway, so).
Following Ethan's example of bailing (calling VVV_exit()) if
binaryBlob::unPackBinary() couldn't allocate memory, I've searched
through and found every SDL_malloc(), then made sure that if it returned
NULL, the caller would bail (because you can't do much when you're out
of memory).
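A minimal sketch of the pattern, using a hypothetical wrapper (the real
code checks at each call site and bails via VVV_exit()):

```c
#include <SDL.h>
#include <stdlib.h>

static void* malloc_or_bail(const size_t size)
{
    void* mem = SDL_malloc(size);
    if (mem == NULL)
    {
        /* Out of memory; nothing useful can be done, so bail. */
        exit(1);
    }
    return mem;
}
```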
There should probably be an error message printed when the process is
out of memory, but unPackBinary() doesn't print an error message for
being out of memory, so this can probably be added later. (Also we don't
really have a logging system, I'd like to have something like that added
in first before adding more messages.)
Also, this doesn't account for any allocators used by STL stuff, but
we're working on removing the STL, and allocation failure just results
in an abort anyway, so there's not really a problem there.
Wow, there are a lot of these. All of these exit paths now use
VVV_exit() instead, which attempts to save unlock.vvv and settings.vvv,
and also frees all resources so Valgrind is happy. This is a good thing,
because previously unlock.vvv/settings.vvv wouldn't be written to if we
decided to bail for a given reason.
It should be between the include of the corresponding header file for
the source file (Script.h) and the includes of other local header files
(the files that are specific to this codebase only); this is the
convention that includes in all other source files follow.
However, it seems like I misplaced this, so I'm fixing it now.
This is just a function that calls the cleanup() in main.cpp, as well as
calls exit().
I would have liked to use SDL_ExitProcess() here, because that function
has ifdefs for different runtime environments. But alas, it's an
internal function and isn't exported. Ah well; exit() seems to be fine
anyway.
If there's a resource that doesn't otherwise need to be cleaned up and
is still alive upon program shutdown, then it should go in cleanup().
This cleans up Screen, GraphicsResources, Graphics buffers, Graphics
tiles, and musicclass audio upon program shutdown.
Even though we technically don't NEED to clean these resources up
ourselves
(the kernel is going to get rid of all of it anyway, else it'd be a
security problem), I'm doing this because otherwise Valgrind will
complain about these, and then it'd be difficult to see which memory
leaks are real and which are just "well this isn't really a leak but you
haven't freed this thing when the process exited, and that's technically
what a memory leak is".
These are all resources whose cleanup functions can be safely called
even if they haven't initialized anything yet.
This isn't a memory leak (not even Valgrind complains), because it gets
properly cleaned up in GraphicsResources::destroy(). Still, it's memory
that is just laying around not being used, and in the name of
deallocating things as soon as you no longer need them, we should
deallocate the base tilesheet images after we split all of them into
tiles.
This reduces the memory cost of all tilesheet images by half, since we
were essentially keeping around duplicates for nothing; this doesn't
really have much of an impact with conventional tilesheet sizes, since
they're usually small enough, but since 2.3 allowed for tilesheet images
of any size, this is a pretty big deal for really big tilesheet images.
It's okay to do this, even though they also get freed in
GraphicsResources::destroy(), because SDL_FreeSurface() does a NULL
check on the pointer passed to it, and we set the pointer to NULL after
freeing the surfaces.
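A minimal sketch of the free-and-null pattern being relied on here:

```c
#include <SDL.h>

/* SDL_FreeSurface() ignores NULL, and nulling the pointer afterwards
 * makes the later cleanup in GraphicsResources::destroy() a harmless
 * no-op. */
static void free_tilesheet(SDL_Surface** image)
{
    SDL_FreeSurface(*image);
    *image = NULL;
}
```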
A quick glance at PhysFS source code will show that PhysFS will bail if
PHYSFS_deinit() is called while it's not initialized.
"Bail" here just means setting an error code and returning early, so
it's not that bad. Still, it's the principle of the thing, and I just
want to ensure that FILESYSTEM_deinit() can be safely called no matter
if the filesystem hasn't initialized yet; having an error set by PhysFS
kind of taints the whole safety thing, even if it does nothing wrong,
no?
(although, speaking of which, we should be handling all errors by
PhysFS, but that's for later...)
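A minimal sketch of the guard (the real FILESYSTEM_deinit() does more
than this):

```c
#include <physfs.h>

static void deinit_if_initialized(void)
{
    /* Only deinit if PhysFS actually initialized, so no PhysFS error
     * gets set by calling PHYSFS_deinit() too early. */
    if (PHYSFS_isInit())
    {
        PHYSFS_deinit();
    }
}
```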
These FIXME comments are still correct about code duplication, but
they're incorrect about where exactly the original code is after the
original code got moved around. So I've fixed them to refer to the
correct locations.
We really should get around to de-duplicating the code mentioned in
these comments...
Since musicWriteBlob is a temporary object that gets destroyed at the
end of musicclass::init(), in order to make sure we don't leak memory
and lose all the pointers to the blocks we just allocated in
musicWriteBlob, we need to call its clear() method after writing
BinaryMusic.vvv.
musicReadBlob was used for both MMMMMM and PPPPPP soundtracks. This
causes a memory leak if you have mmmmmm.vvv installed, because the
pointers holding each allocated block of MMMMMM would be lost when
PPPPPP got loaded. Valgrind complains about this memory leak.
This is in contrast to 2.2 and previous behavior, where musicReadBlob
was only a temporary object instead of being held in musicclass.
However, this wasn't really a memory leak (moreso something that just
didn't get cleaned up when closing the game), but it did get turned into
a leak when per-level assets mounting and unmounting got introduced in
2.3 (loading a level with custom assets after starting the game with an
mmmmmm.vvv, or exiting out of a level that had an mmmmmm.vvv, would
cause the game to leak memory). Leo recognized this, and moved
musicReadBlob onto musicclass in a separate 2.3 PR, but either he didn't
think about what was happening here too closely, or he didn't use
Valgrind, because he forgot about the memory leak caused by re-using the
same binaryBlob for PPPPPP and MMMMMM.
So instead, just use two different binaryBlob objects for MMMMMM and
PPPPPP. That way, no memory leaks happen.
I'm going to introduce another binaryBlob object in to the mix, and I
want to be able to re-use an existing FOREACH_TRACK #define without
having to copy-paste it again. So, TRACK_NAMES now takes in a blob
parameter, which will be passed to the temporary FOREACH_TRACK #define.
This removes the music cleanup code from musicclass::init(), and
requires that we also call destroy() in Graphics::reloadresources().
This is because we'll need to re-use the musicclass cleanup code
elsewhere, and we don't want to copy-paste the cleanup code. Or at
least, I don't (but I'm not a game dev, game devs copy-paste all the
friggin' time).
It doesn't feel quite right leaving all the buffer creation code in
main(), even though it's perfectly okay to do so and it doesn't result
in any memory mismanagement that Valgrind can report; so I'm factoring
all of it out to a separate function, Graphics::create_buffers().
As a bonus, we no longer have to keep qualifying with `graphics.` in the
buffer creation code, which is nice.
These destroy all the buffers that are created on the Graphics class.
Since these buffers can't be created at the same time as the rest of
Graphics is (due to the fact that they require knowing the pixel format
of the game screen), they can't be destroyed at the same time as the
rest of Graphics is, either.
This is a very complicated way of zeroing out grphx (instead of using
SDL_zero()), which itself is completely unnecessary because grphx.init()
gets called immediately afterwards anyway.
It should be next-line brace, not same-line brace. Even in a codebase
that uses same-line braces everywhere, I still prefer having next-line
braces inside functions (because they're at the top level, and you can't
nest them). But regardless, this should still be next-line brace like
(most of) the rest of the codebase.
The function previously conditionally freed a m_memblocks pointer if its
corresponding m_headers was valid. This makes me slightly worried about
the possibility that memory would be allocated, but the header would
still be marked as invalid.
I don't see how that could happen, but it's better to be safe than
sorry. SDL_free() does a guaranteed NULL pointer check (like most SDL
functions), so it's okay to pass NULL pointers to it.
Just to be sure, I'm also zeroing m_memblocks and m_headers after
freeing everything in the function.
MSVC complains about these, doesn't seem like GCC does. These can be
safely removed because they're unreachable, and they always follow a
case-switch or similar that has a default case which this code is a
duplicate of anyway. (Unless it isn't, in which case all the better to
remove it, because otherwise it looks misleading or confusing to casual
glances at the code.)
find_tag() would commit out-of-bounds indexing if someone made a level
file with malformed XML entity encodings in the metadata tags.
This would happen if the end of the string followed immediately after an
ampersand and hash, or if there wasn't a semicolon ending an XML entity.
Valgrind complains about these, so I've fixed it.
This fixes a bug where "12" gets properly evaluated as 12, but "148"
gets evaluated as 1408. It's because `place` gets multiplied by `radix`
again, so `retval` gets multiplied by 100 instead of 10.
There's no reason to have a `place` variable, so I've removed it
entirely. This simplifies the function a little bit.
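A minimal sketch of the corrected accumulation, without the `place`
variable (this is a hypothetical stand-in, not the real function):

```c
/* "148" -> 148: multiply the running value by the radix exactly once per
 * digit, then add the new digit. */
static int digits_to_int(const char* str)
{
    int retval = 0;
    const int radix = 10;
    const char* p;

    for (p = str; *p >= '0' && *p <= '9'; ++p)
    {
        retval = (retval * radix) + (*p - '0');
    }
    return retval;
}
```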
The previous person who wrote this (a girl named Misa) clearly didn't
understand the reason why you couldn't compare line[line.length()-1]
directly to a string literal. It's because the former is a char, and the
latter is a pointer to a char. Both are ints, so it compiles fine, but
it doesn't do what you want it to.
Why not just make the latter a char instead of a string literal? Well,
because you can, but also I clearly didn't think this through earlier,
so that's why I didn't do it in the first place.
But this is fixed now.
This avoids an unnecessary copy of the input std::vector, since we don't
need to modify it for anything. This cuts down on unnecessary memory
operations.
Apart from the std::string, this function no longer uses the STL.
ss_toi() is a simple function - it converts the input into an int,
taking as many digits as possible until it reaches a non-digit
character, at which point it stops. It's trivial to implement this
without the STL.
I could've used Int() here, but that would've required copying the
string to a temporary buffer to insert a null-terminator (we can't just
use a pointer-and-length data type either, the string functions don't
operate like that - one disadvantage of C strings!). Instead, I decided
to implement my own conversion to int here, because I don't think the
way we humans write our Arabic numerals is going to change anytime soon.
Also, the std::string input is now passed by const reference, instead of
making a copy - cutting down on unnecessary memory operations.
I personally like putting the asterisk with the type, because despite
the language parsing the asterisk as a part of the name, the pointer
part is clearly a part of the return type of the function. I've also
put constness here, to indicate that the input won't be modified inside
the function.
This comment indicates that the function is used by
UtilityClass::GCString(). Which is unnecessary, because the reader can
trivially search for usages of GCChar in the file itself (the 'static'
preceding the function should be a good enough hint) - and if there
aren't any, then the reader will know the function is unused, whereas if
they read the comment, they would have been under the assumption that it
wasn't used. (There might also be a compiler warning about it being
unused,
which would be more confusing if the comment was still there.)
Point is, comments can get outdated, so removing the comment here makes
the code more self-documenting.
This is a re-do of 942217f871 (#509), but
with a more conservative fix that only resets the player's newxp and
newyp when they respawn from a checkpoint or spawn in to the map.
Unlike the previous patch, if the player were to suddenly collide with a
conveyor or horizontally-moving platform during gameplay, their
y-position would revert back to the intended next y-position of the
previous frame. But this is the same behavior as before, I haven't ever
seen such a contrived situation come up, and this behavior is probably
more preferable for gameplay than actually going to the conveyor, so
it's fine.
I also decided to reset newxp here, and not just newyp, because while
resetting newyp seems to be enough, it's safer to also reset newxp (and
so future readers won't question why only newyp is reset but not newxp).
I tested this and it once again fixes the death loop issue from earlier,
while also still allowing for that Trench Warfare trick to be possible
(I tested it with the libTAS movie I mentioned in #606; it syncs fine).
There are no other known regressions resulting from this fix
(hopefully).
This reverts commit 942217f871.
This fix (of a regression of a fix) has a regression where immediately
flipping off of horizontally-moving platforms or conveyors will no
longer provide you with a "boost" given certain vertical pixel
alignments.
The regression that this fix fixed will be fixed another way.
Fixes #606.
This works on macOS, Wayland, and a few more esoteric platforms. X11
doesn't have a concept of DPI-awareness. Note that with this flag
SDL_GetWindowSize() isn't guaranteed to return the actual window size.
Retextured checkpoints have always been in the game, but clicking on
them in the editor would lead to them losing their retextured-ness. So,
checkpoints should be left alone if their p1 isn't either 0 or 1. Also,
they don't show up properly in the editor, so that's fixed, too.
Retextured and flipped terminals were added in 2.3, and show up properly
in-game, but don't properly show up in the editor, either. So now they
show up in the editor. Additionally, clicking on them will flip the
terminal as well, but only if its p1 is 0 or 1, just like checkpoints
now do.
This call to Makebfont() always existed, but ever since 2.3's per-level
custom assets were added, graphics.reloadresources() also calls
graphics.Makebfont(), meaning this call is unnecessary. Calling it twice
results in graphics.bfont and graphics.flipbfont having twice the number
of elements, until custom assets get mounted (or unmounted, technically).
This does the same thing as the last commit, but for No Death Mode
instead of Time Trials. Whenever you die in No Death Mode, or complete
it, all the relevant variables get copied to variables prefixed with
'ndmresult' that never get reset by script.hardreset(), and these
variables are what titlerender() uses, instead of the "live" ones.
This makes it so when a Time Trial gets completed, all the relevant
variables get copied onto variables prefixed with 'timetrialresult',
which never get reset by script.hardreset(). Then titlerender() will use
those variables accordingly.
There are multiple different exit paths to the main menu. In 2.2, they
all had a bunch of copy-pasted code. In 2.3 currently, most of them use
game.quittomenu(), but there are some stragglers that still use
hand-copied code.
This is a bit of a problem, because all exit paths should consistently
have FILESYSTEM_unmountassets(), as part of the 2.3 feature of per-level
custom assets. Furthermore, most (but not all) of the paths call
script.hardreset() too, and some of the stragglers don't. So there could
be something persisting through to the title screen (like a really long
flash/shake timer) that could only persist if exiting to the title
screen through those paths.
But, actually, it seems like there's a good reason for some of those to
not call script.hardreset() - namely, dying or completing No Death Mode
and completing a Time Trial presents some information onscreen that
would get reset by script.hardreset(), so I'll fix that in a later
commit.
So what I've done for this commit is found every exit path that didn't
already use game.quittomenu(), and made them use game.quittomenu(). As
well, some of them had special handling that existed on top of them
already having a corresponding entry in game.quittomenu() (but the path
would take the special handling because it never did game.quittomenu()),
so I removed that special handling as well (e.g. exiting from a custom
level used returntomenu(Menu::levellist) when quittomenu() already had
that same returntomenu()).
The menu that exiting from the level editor returns to is now handled in
game.quittomenu() as well, where the map.custommode branch now also
checks for map.custommodeforreal. Unfortunately, it seems like entering
the level editor doesn't properly initialize map.custommode, so entering
the level editor now initializes map.custommode, too.
I've also taken the music.play(6) out of game.quittomenu(), because not
all exit paths immediately play Presenting VVVVVV, so all exit paths
that DO immediately play Presenting VVVVVV now have music.play(6)
special-cased for them, which is fine enough for me.
Here is the list of all exit paths to the menu:
- Exiting through the pause menu (without glitchrunner mode)
- Exiting through the pause menu (with glitchrunner mode)
- Completing a custom level
- Completing a Time Trial
- Dying in No Death Mode
- Completing No Death Mode
- Completing an Intermission replay
- Exiting from the level editor
- Completing the main game
Comments in general don't get verified by the compiler, but
commented-out code is even worse. Especially since this looks to be
outdated code.
As always, if we need some of this code, then we can just look back in
the Git history.
While compiling in release mode, GCC warns about these two potentially
being used uninitialized further down. The only way this could happen is
if the case-switches below didn't match up with a case, which would
require the game to be in an invalid state (and have invalid values for
rcol and spcol), but it's better to be safe than sorry.
During 2.3 development, there's been a gradual shift to using SDL stdlib
functions instead of libc functions, but there are still some libc
functions (or the same libc function but from the STL) in the code.
Well, this patch replaces all the rest of them in one fell swoop.
SDL's stdlib can replace most of these, but its SDL_min() and SDL_max()
are inadequate - they aren't really functions, they're more like macros
with a nasty penchant for double-evaluation. So I just made my own
VVV_min() and VVV_max() functions and placed them in Maths.h instead,
then replaced all the previous usages of min(), max(), std::min(),
std::max(), SDL_min(), and SDL_max() with VVV_min() and VVV_max().
Additionally, there's no SDL_isxdigit(), so I just implemented my own
VVV_isxdigit().
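A minimal sketch of the difference (the macro here is illustrative, and
VVV_min()'s exact definition may differ):

```c
/* A macro in this style evaluates its arguments twice, so something like
 * DOUBLE_EVAL_MIN(i++, limit) can bump i twice. */
#define DOUBLE_EVAL_MIN(a, b) (((a) < (b)) ? (a) : (b))

/* A plain function evaluates each argument exactly once. */
static int VVV_min(const int a, const int b)
{
    return (a < b) ? a : b;
}
```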
SDL has SDL_malloc() and SDL_free(), but they have some refcounting
built in to them, so in order to use them with LodePNG, I have to
replace the malloc() and free() that LodePNG uses. Which isn't too hard,
I did it in a new file called ThirdPartyDeps.c, and LodePNG is now
compiled with the LODEPNG_NO_COMPILE_ALLOCATORS definition.
Lastly, I also refactored the awful strcpy() and strcat() usages in
PLATFORM_migrateSaveData() to use SDL_snprintf() instead. I know save
migration is getting axed in 2.4, but it still bothers me to have
something like that in the codebase otherwise.
Without further ado, here is the full list of functions that the
codebase now uses:
- SDL_strlcpy() instead of strcpy()
- SDL_strlcat() instead of strcat()
- SDL_snprintf() instead of sprintf(), strcpy(), or strcat() (see above)
- VVV_min() instead of min(), std::min(), or SDL_min()
- VVV_max() instead of max(), std::max(), or SDL_max()
- VVV_isxdigit() instead of isxdigit()
- SDL_strcmp() instead of strcmp()
- SDL_strcasecmp() instead of strcasecmp() or Win32 strcmpi()
- SDL_strstr() instead of strstr()
- SDL_strlen() instead of strlen()
- SDL_sscanf() instead of sscanf()
- SDL_getenv() instead of getenv()
- SDL_malloc() instead of malloc() (replacing in LodePNG as well)
- SDL_free() instead of free() (replacing in LodePNG as well)
This patch de-duplicates the tool drawing code a bit in the menu that
gets brought up when you press Space in the level editor, as well as
fixes several bugs related to the fact that the original author(s) of
the code decided to copy-paste everything. (It was most likely Terry,
judging by the distinct lack of whitespace between tokens in the code.)
There are two "pages" of tools that get shown when you open the tool
menu, according to your currently-selected tool.
1. On the first page, your currently-selected tool gets a brighter
outline. However, on the second page, the code to draw the outline over
your currently-selected tool is missing. So I've fixed that.
2. On the first page, the glyph indicator next to the tool icon also
gets brighter when you have that tool selected. However, on the
second page, the code that drew the brighter-colored indicator got
run before the code that drew the normal-colored indicator, so this
was never shown. This is also fixed.
3. The glyph indicator of the gravity line tool didn't get brighter when
you had it selected, due to its special-cased copy-pasted code
drawing its brighter color before drawing its normal color. This has
also been fixed.
4. Lastly, the tool menu no longer draws the brighter-colored glyphs on
top of the normal-colored glyphs. Instead, the menu will simply draw
the brighter-colored glyphs and will not draw the normal-colored
glyphs in the first place. This is because double-drawing text like
this will look bad if the user has a custom font.png that has
translucent pixels, like I do.
All of these bugs have been fixed by paying off the technical debt of
copy-pasting code.
This variable seems to have been intended to make sure
game.savestatsandsettings() was called at the end of the frame, or make
sure that it didn't get called more than once per frame. I don't see any
frame ordering-related reason why it needs to be called specifically at
the end of the frame (the function doesn't modify any state), so it's
more plausible that it was added to make sure it didn't get called more
than once per frame.
However, upon further analysis, none of the code paths where
game.savemystats is used ever calls or sets game.savemystats more than
once, and a majority of the code directly calls
game.savestatsandsettings() anyway, so there's no reason for this
variable to exist. If we ever need to make sure it doesn't get called
more than once, and there's no way to change the code paths around to
prevent it otherwise, we can use the defer callbacks system that I added
to #535, when it gets merged.
These variables basically serve no purpose. map.customx and map.customy
are clearly never used. map.finalx and map.finaly, on the other hand,
are basically always game.roomx and game.roomy respectively if
map.finalmode is on, and if it's off, then they don't matter.
Also, there are some weird and redundant variable assignments going on
with these; most notably in map.gotoroom(), where rx/ry (local
variables) get assigned to finalx/finaly, then finalx/finaly get
assigned to game.roomx/game.roomy, then finalx/finaly get assigned to
rx/ry. If finalx/finaly made a difference, then there'd be no need to
assign finalx/finaly back to rx/ry. So it makes the code clearer to
remove these weird bits of code.
2.3's per-level assets feature also added a hotkey to reload the custom
assets of the level you're currently editing in the editor, so you
wouldn't have to re-load the level yourself. This hotkey is F9;
however, it hasn't been documented in the hotkey list brought up by
pressing Shift, until now.
This patch cleans up unnecessary exports from header files (there were
only a few), as well as adds the static keyword to all symbols that
aren't exported and are specific to a file. This helps the linker out in
not doing any unnecessary work, speeding it up and avoiding silent
symbol conflicts (otherwise two symbols with the same name (and
type/signature in C++) would be quietly resolved by the linker as okay).
Line clipping and second-frame edge-flipping have been broken since #539
was merged (d910c5118d). The cause of this
is moving the onground/onroof code around.
A proper loop order fix is going to come once #535 gets finalized and
merged, so this is a stopgap measure just to make sure people don't
report that line clipping or second-frame edge-flipping are broken in
current builds of 2.3.
There is a certain ordering of which corners you click on to place enemy
and platform boundaries, and script boxes. You must first click on the
top-left corner, then click on the bottom-right corner. The visual box
that is drawn after you've first clicked on the top-left corner clearly
shows this intended way of doing things.
However, it seems like despite the visuals, the game didn't properly
prevent you from clicking on the corners in the wrong way. If you placed
it from top-right to bottom-left, or bottom-left to top-right, then the
game would place the boxes accordingly, and they would have a weird
shape where two of its opposite sides would just be missing. But,
placing them from bottom-right to top-left is prevented accordingly.
The bug comes down to a simple use of "or", instead of the correct
"and". This isn't the first time the wrong conjunction was used in a
conditional... (8260bb2696, #136)
Since the code block that the if-statement guards is the code that will
execute if the corners placed were correct, the if-statement thus should
be written in the positive case and use a more restrictive "and",
instead of the negative case, which would use a looser "or". There
are less cases that are correct than cases which are incorrect - in this
case, there is only 1 correct way of doing things (top-left to
bottom-right), compared to 3 incorrect ways of doing things (top-right
to bottom-left, bottom-left to top-right, bottom-right to top-left) - so
when thinking of positive cases, you should be using "and".
Or, you can always just test it. This bug has been in the game since
2.0, so it seems like no one ever tested that incorrect input was
actually rejected.
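To make the fix concrete, here's a hedged sketch of the shape of the
check - the variable names are made up and this is not the verbatim
editor code:

    /* Only accept the box when the second click is both to the right
       of AND below the first click. */
    if (second_x > first_x && second_y > first_y)
    {
        /* place the enemy/platform bounds or script box */
    }

With || instead of &&, clicking top-right then bottom-left (or
bottom-left then top-right) satisfies one of the two comparisons and
still slips through; only bottom-right to top-left fails both, which
matches the behavior described above.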
Ever since 2.0, the colors of some of the Time Trial trophies in the
Secret Lab don't correspond to the crewmate of the given level. The
trophy for the Tower uses Victoria's color, and the Lab trophy uses
Vermilion's color. The Space Station 2 trophy uses Viridian's color, and
the Final Level trophy uses Vitellary's color.
This doesn't appear to be intentional, and it would be odd if it was,
since this game matches the colors everywhere else (each zone on the map
is colored with their respective crewmate in mind, for instance). Also,
the Lab trophy has the sad expression, which is Victoria's trait - it
would be weird if this was intended for Vermilion instead.
But the biggest piece of evidence that this was unintentional is the
corresponding comment for each color in Graphics::setcol(). It mislabels
yellow as cyan, cyan as yellow, blue as red, and red as blue.
To fix this, I simply have to set the correct color for each trophy in
case 25 of entityclass::createentity(). I could fix it in
Graphics::setcol() itself, but custom levels might depend on those
certain colors being the way they are, so it's a safer bet to just fix
it in the trophy creation case itself.
The diff of this might look weird. Even though all I'm doing is changing
some value assignments around, it looks like the "patience" algorithm
thinks I'm moving a whole case of the trophy switch-case around.
So... it looks like being able to switch through tilesets backwards has
been in 2.3 for a while; I guess no one uses 2.3 or the level editor
that much, because it seems like it's been broken the whole time.
If you were on the Space Station tileset (tileset 0), pressing Shift+F1
would keep you on the Space Station tileset instead of switching to the
Ship (tileset 4).
It looks like the problem here was mixing size_t and int together - so
the modulus operation promoted the left-hand side to size_t, which is
unsigned, so the -1 turned into SIZE_MAX, which is 18446744073709551615
on my system. You'll note that that ends in a 5, so the number is
divisible by 5, meaning taking it modulo 5 leads to 0. So the tileset
would be kept at 0.
At least unsigned integer underflow/overflow is properly defined, so
there's no UB here. Just careless type mixing going on.
The solution is to make the modulus an int instead of a size_t. This
introduces an implicit conversion, but I don't care because my compiler
doesn't warn about it, and implicit conversion warnings ought to be
disabled on MSVC anyway.
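Here's a minimal standalone program sketching the promotion problem
(the variable names are made up; this is not the actual editor code):

    #include <stdio.h>
    #include <stddef.h>

    int main(void)
    {
        const size_t numtilesets = 5;
        int tileset = 0;

        /* Broken: the int -1 is converted to size_t for the modulus,
           becoming SIZE_MAX, and SIZE_MAX % 5 == 0, so we stay on 0. */
        size_t broken = (tileset - 1) % numtilesets;

        /* One way to fix it: keep the arithmetic in int and wrap the
           negative result back into range. */
        int fixed = ((tileset - 1) % (int) numtilesets + (int) numtilesets)
            % (int) numtilesets;

        printf("broken: %zu, fixed: %d\n", broken, fixed);
        return 0;
    }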
This fixes a bug where if you entered a tower before watching the
credits sequence, the credits sequence would have mismatched text and
background colors.
This bug happened because entering a tower modified the r/g/b attributes
of mapclass, and updated graphics.towerbg, without updating
graphics.titlebg too. Then gamecompleterender() uses the r/g/b
attributes of mapclass.
The solution is to put the r/g/b attributes on TowerBG instead. That
way, entering a tower will only modify the r/g/b attributes used to
render towers, and won't affect the r/g/b attributes used to render the
credits sequence.
Additionally, I also de-duplicated the case-switch that updated the
r/g/b attributes based off of the current colstate, because it got
copy-pasted twice, leading to three instances of one piece of code.
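Sketch of the data movement, with an illustrative field list (not the
exact struct definition):

    /* Before: colors lived on mapclass, shared by towers and the
       credits background alike. After: each background carries its
       own colors. */
    struct TowerBG
    {
        int colstate;
        int r;
        int g;
        int b;
        /* ...scroll position, buffer, and other background state... */
    };

Since graphics.towerbg and graphics.titlebg are separate TowerBG
instances, entering a tower only touches towerbg's r/g/b, and the
credits keep their own colors.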
This fixes a bug where if you completed a custom level during
command-line playtesting, when returning to the title screen, the
background would be red and the text would be white.
This is because playtesting skips over the code path of pressing ACTION
to start the game and advance to the title screen, and the code path of
that ACTION press specifically initializes the title screen colors to
cyan.
This is also caused by the fact that completing a custom level doesn't
call map.nexttowercolour(), but my guess is that the intent there was
that the player would select a custom level, complete it, and return to
the title screen on the same screen with the same colors, so I decided
not to add a map.nexttowercolour() there.
Instead, I've moved the cyan color initialization to main(), so that it
is always executed no matter what, and doesn't require you to take a
specific code path to do it.
With commit 48313169b6 (PR #453),
AllyTally added a single-case patch for a regression, instead of fixing
it at its root cause.
In fact, that commit only fixes the music if Presenting VVVVVV is
playing while exiting to the menu, not if you enter a level that plays
Presenting VVVVVV - so it only fixes it going one way and not the other
- and it doesn't fix all the other cases where this could happen,
either.
It doesn't, say, fix the case where you are exited to the menu
automatically after collecting the last crewmate in the level (or if the
level calls gamestate 1013 itself), which is what happens in my MIRA-VIU
TAS video at the end, and which I noted in the description of that video
( https://www.youtube.com/watch?v=OYQO4ePbYW4&t=111 ).
So, the problem here is that when musicclass::play() is called, it sees
that currentsong is the same as its input, and decides that since the
music is already playing, it shouldn't play the music again. Thus, the
music fades out, and we get silence instead of the music playing again.
But I said this was a regression. Why didn't this happen in 2.2? Well,
it's because of the fact that 2.2 sets currentsong to -1 (no music
playing at all) immediately when starting a fadeout, and not when the
fadeout completes (commit facb079b35,
PR #316). As you can imagine, this discrepancy could lead to bugs, given
that the game would think that music wasn't playing when in actuality it
was, but fixing this bug could also break code that expected this wrong
behavior. And in this case, it has.
So, to properly fix the root cause of this instead of naïvely patching
every single case that comes up, musicclass::play() will now disregard
the "input is the same as currentsong" early-out if the music is
currently fading out.
This reverts commit 48313169b6, "Don't
fade music out when returning to the menu if it's Presenting VVVVVV".
This commit is being reverted because it is only a single-case patch -
that is, it fixes only a single symptom of the bug, and not its
underlying cause.
This is also another conditional that the rest of the function is
nested inside. Furthermore, in order to not repeat ourselves, I've also
decided to unconditionally assign t to currentsong, because if t is -1,
currentsong gets set to -1 anyway - so it's the same thing, but it's
much easier to see and think about.
This removes an indentation level, and makes it easier to reason about
the function since you essentially now view it as the function returning
right there.
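A rough, simplified sketch of the resulting shape of the function - the
Mix_FadingMusic() call is shown merely as one possible way to detect a
fadeout, not necessarily how the game actually does it:

    void musicclass::play(int t)
    {
        /* Only treat "already playing" as a no-op if the music is NOT
           fading out; a fading track needs to be restarted. */
        if (t == currentsong && Mix_FadingMusic() != MIX_FADING_OUT)
        {
            return;
        }

        currentsong = t; /* unconditional; covers the t == -1 case too */

        if (t == -1)
        {
            return;
        }

        /* ...actually start playing track t... */
    }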
This prevents issues when calling std::abs with a float on some older
compilers. While it would normally be promoted to an int, std::abs is
special due to being overloaded despite being a C function. This can
cause errors due to the compiler being unable to find a float overload.
SDL_abs doesn't have this problem, since it's a normal C function.
When gamemode(teleporter) gets run in a script, it brings up a read-only
version of the teleporter screen, intended only for displaying rooms on
the minimap.
However, ever since 2.3 allowed bringing up the map screen during
cutscenes (in order to prevent softlocks), bringing up the map screen
during this mode would (1) do an unnecessary animation of suddenly
switching back to the game and bringing up the menu screen again (even
though the menu screen has already been brought up), and (2) would let
you close the menu entirely and go back to GAMEMODE, thus
unintentionally closing the teleporter screen and kind of ruining the
cutscene.
To fix this, when you bring up the map screen, it will instead instantly
transition to the map screen. And when you bring it down, it will also
instantly transition back to the teleporter screen.
But that's not all. The previous behavior was actually kind of a nice
failsafe, in that if you somehow got stuck in a state where a script ran
gamemode(teleporter), but stopped running before it could take you out
of that mode by running gamemode(game), then you could return to
GAMEMODE yourself by bringing up the map screen and then bringing it
back down. So I've made sure to keep that failsafe behavior, only as
long as there isn't a script running.
When bringing up the map screen, the game does a small menu animation
where the menu comes in from the bottom. The code to calculate the menu
offset is copy-pasted everywhere, so I thought I'd de-duplicate it to
make my life easier when working with it. I also included the
game.gamestate assignment in the de-duplicated function, so it would be
easier for a future bugfix.
At the same time, I'm also removing all the BlitSurfaceStandard()s that
copied menubuffer to backBuffer. The red flag is that this blit happened
for every single entry point to MAPMODE and TELEPORTERMODE, except for
the script command gamemode(teleporter). Pressing Enter to bring up the
map screen, pressing Enter to quit the Super Gravitron, pressing Esc to
bring up the pause screen, and pressing Enter to bring up the teleporter
screen all do this blit, so if this blit was there to fix a bug, then
there's a bug with using the script command gamemode(teleporter)... but,
as far as I can tell, there isn't.
That's because the blit basically does nothing. All the blit does is
copy menubuffer onto backBuffer. Then the next thing that happens is
that either maprender() or teleporterrender() will be called, and the
first thing that those functions will always do is fill backBuffer with
solid black, completely overriding the previous blit. So that's why
removing this blit won't have any effect, and it can be safely removed
for code clarity.
When I did #569, I forgot that taking out the code path that set the
player's invis to false meant that the player would still be invisible
upon loading back in to the game if they exited the game while
invisible. Taking out that code path also meant that if game.lifeseq was
nonzero, it wouldn't be reset properly, either. So this fixes those
things.
When I did #567, I didn't test it. And I should have tested it, because
it made the player invisible. This is because map.resetplayer() also
sets the invis attribute of the player to true, and I had only undone it
setting game.lifeseq to 10.
So instead, I'll just add a flag to map.resetplayer() that by default
doesn't set game.lifeseq or the player's invis attribute. And I tested
it this time, and it works fine. I tested both respawning after death
and exiting to the menu and loading in the game again.
When exiting a level, music.init() gets called again, and every time it
gets called after the first, it will free all the music tracks.
To do so, it calls Mix_FreeMusic(). Unfortunately, if there is music
fading, Mix_FreeMusic() will call SDL_Delay(), which will result in
annoying no-draw frames. Meaning, the screen freezes and doesn't draw
anything, and has to wait a bit before continuing.
Here's the relevant piece of code from SDL2_mixer's music.c:
    if (music == music_playing) {
        /* Wait for any fade out to finish */
        while (music->fading == MIX_FADING_OUT) {
            Mix_UnlockAudio();
            SDL_Delay(100);
            Mix_LockAudio();
        }
        if (music == music_playing) {
            music_internal_halt();
        }
    }
This is especially annoying if you're a TASer, because no-draw frames in
a libTAS movie aren't guaranteed to last for a consistent number of
frames when you change your inputs around.
After this patch, as long as your computer can unmount and re-mount
assets fast enough (it doesn't seem like mine can, unfortunately), then
you won't get any freezes when exiting a level that has custom assets.
(This freeze didn't happen when loading a level because the title screen
music fadeout upon pressing ACTION had enough time to fully complete
before the level got loaded.)
There's a small inconsistency where the first time you load in to the
game, game.lifeseq is at 0, but when you exit to the menu and load in
again, game.lifeseq becomes 10. This is visible as Viridian blinking
when loading in only after the first time you load in, and this also
means that after the first time you load in, you also have to wait 5
frames before being able to move Viridian.
The reason for this inconsistency is because on the first time you load
in to the game, there are no entities loaded in obj.entities yet, so the
game creates a player entity, and doesn't mess with game.lifeseq. When
you exit and then load in for the second time, obj.entities contains at
least one entity (all the entities from the room you just exited out of
- map.gotoroom() hasn't even been called yet, so it doesn't even check
for only the player entity), so the game calls map.resetplayer()
instead, and map.resetplayer() sets game.lifeseq to 10.
There's even an inconsistency to this inconsistency - when you die in No
Death Mode, all entities will be removed from obj.entities, so the next
time you load in to the game, game.lifeseq will be 0.
This inconsistency is pretty minor in the grand scheme of things, but
it still bothers me, so I'm going to fix it.
Since INTERIM_COMMIT is a char array whose size we know for sure at
compile time, and which we also know is an array (instead of being a
pointer), we can take the SDL_arraysize() of it. However, unlike
SDL_strlen(), SDL_arraysize() counts the null terminator too, so we'll
have to subtract it ourselves. But at least we are guaranteed to get a
constant value at compile time, unlike if we used SDL_strlen(), which
would repeatedly evaluate a constant value at runtime.
To my knowledge, there's no equivalent SDL_arraysize() for constant
strings, and a quick `rg` (ripgrep) for "sizeof" in the SDL include/
folder doesn't show anything like that. So we'll just have to use
SDL_arraysize() - 1 and deal with it.
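For reference, SDL_arraysize() is just the classic sizeof division, so
the arithmetic ends up as something like this (the local variable name
is illustrative):

    /* #define SDL_arraysize(array) (sizeof(array)/sizeof(array[0])) */
    const int interim_commit_len = (int) SDL_arraysize(INTERIM_COMMIT) - 1;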
The previous implementation of showing the commit hash on the title
screen used a preprocessor definition added at CMake time to pass the
hash and date. This was passed for every file compiled, so if the date
or hash changed, then every file would be recompiled. This is especially
annoying if you're working on the game and switching branches all the
time - the game has at least 50 source files to recompile!
To fix this, we'll switch to using a generated file, named
Version.h.out, that only gets included by the necessary files (which
there is only one of - Render.cpp). It will be autogenerated by CMake
(by using CONFIGURE_FILE(), which takes a templated file and does a
find-and-replace on it, not unlike C macros), and since there's only one
file that includes it, only one file will need to be recompiled when it
changes.
And also, to prevent Version.h.out from being a required file, it will
only be included if necessary (i.e. OFFICIAL_BUILD is off). Since the C
preprocessor can't ignore non-existent include files and will always
error on them, I wrapped the #include in an #ifdef VERSION_H_OUT_EXISTS,
and CMake will add the VERSION_H_OUT_EXISTS define when generating
Version.h.out. The wrapper is named Version.h, so any file that
#includes the commit hash and date should #include Version.h instead of
Version.h.out.
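A hedged sketch of what the Version.h wrapper could look like, going off
the description above (not guaranteed to match the actual file
verbatim):

    /* Version.h */
    #ifdef VERSION_H_OUT_EXISTS
    #include "Version.h.out" /* generated by CMake via CONFIGURE_FILE() */
    #endif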
As an added bonus, I've also made it so CMake will print "This is
interim commit [HASH] (committed [DATE])" at configure time if the game
is going to be compiled with an interim commit hash.
Now, there is also the issue that a commit hash change will only be
noticed in the first place if CMake needs to be re-run for anything, but
that's a less severe issue than requiring recompilation of 50(!) or so
files.
While working on #535, I noticed this bug.
When going to Graphic Options or Game Options from the pause menu,
kludge_ingametemp was intended to save the current menu stack frame
BEFORE either of those menus got created. However, it was actually
assigned afterwards, meaning kludge_ingametemp would always be either
Menu::graphicoptions or Menu::options.
This meant that the returntomenu() in returntopausemenu() would always
attempt to return to the current in-game menu, and seeing as it's the
same menu, would re-create the menu, instead of returning to the
previous menu before it.
This patch also fixes a potential source of a trivial memory leak, if
someone were to keep entering and exiting Graphic Options or Game
Options from the pause menu. It would keep piling up duplicate Graphic
Options or Game Options stack frames, which wouldn't get removed until
you exited to the menu properly, at which point returntomenu() does
remove them again - so this doesn't seem like that serious of an issue,
but it's still good to fix.
While I was working on #535, I noticed that all the call sites of
script.run() have the exact same code, namely:
    if (script.running)
    {
        script.run();
    }
I figured, why not move the script.running check into the function
itself? That way, we won't have to duplicate the check every single
time, and we don't risk forgetting to add the check and causing a bug
because of that.
The check was already duplicated once since 2.0 (it's used in both
GAMEMODE and TELEPORTERMODE), and with the fix of the two-frame delay in
2.3, it's now duplicated twice, leading to THREE instances of this check
in the code, when there should be only one.
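The change itself is tiny - roughly this (a sketch, not the verbatim
function):

    void scriptclass::run(void)
    {
        if (!running)
        {
            return;
        }
        /* ...the rest of the script-running logic... */
    }

and every call site can then be a bare script.run().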
To fix this bug, all we have to do is just pass the existing
ScreenSettings* that we have in loadstats() to savestats(), and in
loadsettings() to savesettings().
Fixes #556. Depends on #558.
Another step to fix the bug #556 is to allow Game::savestats() to accept
a pointer to an existing ScreenSettings struct. This entails refactoring
Game::savesettings() and Game::serializesettings() to accept the
pointer as well, along with adding Screen::GetSettings() so the
settings of the current Screen can be easily grabbed.
In order to be able to fix the bug #556, I'm planning on adding
ScreenSettings* to the settings.vvv write function. However, that
entails adding another argument to Game::savesettings(), which is going
to be really messy given the default argument of Game::savestats().
That, combined with the fact that the code comment at the site of the
implementation of Game::savestats() is wrong (!!!), leads me to believe
that using default function arguments here isn't worth it.
Instead, what I've done is made it so callers are explicit about whether
or not they're calling savestats(), savesettings(), or both at the same
time. If they are calling both at the same time, then they will be using
a new function named savestatsandsettings().
In short, these are the interface changes:
* bool Game::savestats(bool) has been removed
* bool Game::savestatsandsettings() has been added
* void Game::savestats_menu() has been renamed to
void Game::savestatsandsettings_menu()
* All previous callers of bool Game::savestats() are now using bool
Game::savestatsandsettings()
* The one caller of bool Game::savestats(bool) is now using bool
Game::savestats()
While there already exists an option to skip the fake loading screen
entirely (without requiring an ACTION press), there are several reasons
for including this option as well:
* So people upgrading from 2.2 won't have to sit through the fake
loading screen the first time they open 2.3.
* So if people are too lazy to use the existing option, they can use
this one instead.
* So tool-assisted speedruns (TASes) of this game can skip the fake
loading screen without requiring an existing settings.vvv beforehand.
This last one is the biggest reason for me, since I'm not sure what
TASVideos.org rules are regarding existing save files, but with this
change nobody has to worry about their rules and can safely just
press ACTION to skip the fake loading screen automatically.
`success = success && savesettings();` is now changed to
`success &= savesettings();`. The &= is bitwise and doesn't
short-circuit (C++ doesn't have a &&= for completeness), but that
shouldn't matter here.
Changing settings would most of the time attempt to save unlock.vvv and
now also settings.vvv, but there would be no feedback whether the files
have been saved successfully or not. Now, if saving fails when changing
settings in the menu, a warning message will be shown. The setting will
still be applied of course, but the user will be informed it couldn't
be saved. This message can be silenced until the game is restarted,
because I can imagine this could get very annoying when someone already
knows their settings aren't writeable.
Also, some options didn't even save settings in the first place. These
are turning off invincibility, and by coincidence precisely all the
options in the advanced options menu. I made sure these options now do
so.
As part of fixing #464, I'll need to move these pieces of code around
easily. In #220 I just kind of shoved them awkwardly into whatever fixed
function would be called last in the gamestate loop, which I shouldn't
have done, as I've now had to make formal fixed-render functions anyway.
These fixed functions need to be called directly before a render
function, and since I'm fixing the order to put render functions in
their proper place, I need to be able to move them around easily -
making them function calls instead of inlining them makes them easier to
manipulate.
Today, I saw a video posted by Chelito on the VVVVVV speedrunning
Discord where he died inside a gravitron square over and over after the
Gravitron in Intermission 2 ended.
https://cdn.discordapp.com/attachments/234787368522088448/779074864660480020/What.mp4
This is caused by the fact that after the Gravitron ends, the game no
longer considers you to be inside swnmode, so it won't move the enemies
offscreen when you die.
To fix this, after the Gravitron ends, the game will switch to swngame
8, where it will keep you inside swnmode until all gravitron squares are
offscreen. This means that gravitron squares that are onscreen after the
Gravitron ends will be moved offscreen if you die, preventing the above
death loop from happening.
PR #468 made it so you can use the menus while in a cutscene, in order
to counteract softlocks. However, this has resulted in more
unintentional behavior:
- `gamemode(teleporter)` breaks when opening the ENTER menu (Misa
mentioned this)
- The player can now interrupt shakes and walks, and have their timers
run out before resuming the cutscene
- After completing the game, the player can warp to the ship while a
dialogue is active, and prevent themselves from advancing text (plus
it's always rude to just teleport away while someone's talking)
- The player can peek at the map before hidecoordinates is run, and can
also peek at what the game does with missing/rescued crewmates during
cutscenes
This commit fixes the latter two issues. While a script is running,
only the SAVE tab is now available. Therefore the player can still get
themselves out of softlocks as intended, but they can't do things like
looking at the map or teleporting away during a cutscene.
It wasn't a direct duplicate of key.sensitivity, but it was still
basically the same thing. Although to be fair, at least the case-switch
conversion didn't get duplicated everywhere unlike game.slowdown.
So now key.sensitivity functions the same as game.controllerSensitivity,
and it only gets converted to its actual value whenever a joystick input
happens in key.Poll(), unlike previously where it got converted every
single frame on the title screen (there was even a comment that said
"TODO bit wasteful doing this every poll").
game.gameframerate seems to exist for converting the value of
game.slowdown into an actual timestep value, when really the timestep
value should just use game.slowdown directly with a fast lookup table.
Otherwise, there's a bunch of duplicated game.slowdown case-switches
everywhere, which adds up to a large, annoying pile should the values be
changed in the future. But now the duplicate variable has been removed,
and with it, all the copy-pasted case-switches.
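Roughly, the de-duplicated conversion can live in one place, something
like this (a sketch; the exact millisecond values shown are
illustrative):

    /* Convert game.slowdown into a timestep in milliseconds, in
       exactly one place. */
    static int get_timestep(const int slowdown)
    {
        switch (slowdown)
        {
        case 30: return 34;
        case 24: return 41;
        case 18: return 55;
        case 12: return 83;
        default: return 34;
        }
    }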
Also, the game speed text rendering in Menu::accessibility and
Menu::setslowdown has been factored out to a function and de-duplicated
as well.
There were some duplicate Screen configuration variables that were on
Game, when there didn't need to be.
- game.fullScreenEffect_badSignal is a duplicate of
graphics.screenbuffer->badSignalEffect
- game.fullscreen is a duplicate of !graphics.screenbuffer->isWindowed
- game.stretchMode is a duplicate of graphics.screenbuffer->stretchMode
- game.useLinearFilter is a duplicate of
graphics.screenbuffer->isFiltered
These duplicate variables have been removed now.
I wrapped the handling of the ScreenSettings struct in main() in its
own block scope, so the local doesn't live for the entirety of main()
(which is the entirety of the program).
Apparently, the amount of digits in a commit hash that git will output
varies depending on how many objects are in the repository that the hash
gets pulled from. The more objects, the more digits needed to avoid a
hash collision.
Sources:
https://stackoverflow.com/q/18134627/#comment26560283_18134919
https://stackoverflow.com/a/21015031/
So that means we'll have to dynamically account for the length of the
commit hash in order to get it properly right-aligned with the rest of
the text.
For some reason, music.usingmmmmmm automatically gets set to true in
musicclass::init(). I assume this was because it would get re-assigned
by game.usingmmmmmm in the game startup code anyway, but now that
musicclass::init() can be called more than once, this variable will just
get set to true when it shouldn't be, causing a confusing desync just
like the one I described in my previous commit, where you would have
PPPPPP or MMMMMM on the title screen, but closing the game and
re-launching it would play the other soundtrack instead.
Again, these duplicate variables should be removed, but that's going to
be a separate patch. In the meantime, I'm removing this variable
assignment.
musicclass::init() re-initializes every attribute of musicclass
unnecessarily, when initialization should be put in a constructor
instead. This is bad, because music.init() gets called whenever we enter
and exit a custom level that has assets.
Otherwise, this would result in a bug where music.usingmmmmmm would be
reset, causing you to revert to PPPPPP on the title screen whenever you
enter a level with MMMMMM selected and exit it.
This also causes a confusing desync between game.usingmmmmmm and
music.usingmmmmmm since the values of the two variables are now
different (these duplicate variables should probably be removed, too,
and a lot of other duplicate variables like these exist, too, which are
a real headache). Which means despite MMMMMM playing on the title
screen, exiting the game and re-launching it will play PPPPPP instead.
What's more, going to game options and switching to PPPPPP will play
PPPPPP, but afterwards closing the game and re-launching it will play
MMMMMM. Again, having duplicate variables is very bad, and should
probably be fixed, but that's a separate patch.
...you die and the platform's x-coordinate is to the left of x=152.
Which means if you die and the platform isn't completely clear of the
space of its adjacent disappearing platform.
The block needs to be updated accordingly with calls to
obj.nocollisionat() and obj.moveblockto(), else the block will simply be
left behind and the platform will no longer have any collision. This is
in contrast to 2.2 behavior, where the platform would simply
unconditionally create a new block, which would actually end up with a
duplicate block since the previous block didn't get cleaned up, but this
didn't cause any problems because the room was carefully designed so you
would never be able to touch that previous block after you died and
respawned at the checkpoint. But it's still there.
I also added comments to document what this kludge code did, because
otherwise it would be mysterious to readers who are unfamiliar with it.
Fixes #543.
What's the difference between a slash sign and a percent sign? Well, a
percent sign is just a slash sign with two extra oranges in between, but
those two oranges make a huuuuge difference...
I can't really make the filter update only every timestep, because it's
per-pixel and operates on deltaframes too, so it TECHNICALLY runs faster
in over-30-FPS mode than not. That said, it's not really noticeable;
the filter doesn't look bad for updating more often or anything.
However, I can at least interpolate the scrolling, so it's smooth in
over-30-FPS mode.
Originally this function was made because it needed to be exported to
gameinput(), but this piece of code is actually also used in
gamecompletelogic(). So it's good to de-duplicate it here, too.
Assigning these variables is now wholly unnecessary ever since #522 got
merged, and in fact setting graphics.backgrounddrawn to false actually
causes the warp background to "skip" when the map screen gets closed. So
this fixes that bug, too.
This kludge variable was used to re-set the warp background after coming
back from the in-game settings menus. But since #522 got merged, this
has no longer been necessary.
This check is clearly meant for destroying the factory clouds in the
room "Level Complete!" in the main game, but it covers all rooms on row
8, instead of only (13,8). Adding an x-room check restricts this
behavior to only (13,8).
Trinket9 reported that this weird behavior of destroying specifically
above y-position 60 was undesirable, since they were creating an enemy
with this `behave` in a room on row 8 and it kept disappearing
instantly.
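As a hedged sketch of the restricted check - assuming the usual
convention where room coordinates carry a +100 offset, so (13,8) is
roomx 113 / roomy 108, and simplifying the surrounding condition and
variable names:

    /* Old: destroyed the cloud in every room on row 8. */
    /* if (game.roomy == 108 && yp < 60) */

    /* New: restricted to "Level Complete!" only. */
    if (game.roomx == 113 && game.roomy == 108 && yp < 60)
    {
        /* destroy the factory cloud */
    }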
First, the variable has been inverted, because it's bad practice to have
booleans with negative names. Secondly, instead of magically setting a
signaling variable when calling fadeout(), fadeout() now has a parameter
to set the quick_fade attribute, which is much cleaner than doing the
magical assignment that could potentially look unrelated.
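Sketch of the new interface described above (only the shape is shown;
the body is elided):

    void musicclass::fadeout(const bool quick_fade_)
    {
        quick_fade = quick_fade_;
        /* ...start the actual music fadeout as before... */
    }

Callers that previously set the old, negatively-named flag before
calling fadeout() now just pass the flag in directly.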