mirror of https://github.com/TerryCavanagh/VVVVVV.git
synced 2025-01-08 18:09:45 +01:00
Fix potential out-of-bounds in next_split_s() wrt buffer length
next_split_s() could potentially index out of bounds if the amount of source data was bigger than the destination buffer. This is because the size of the source data passed in includes the null terminator, so if 1 byte is not subtracted from it, then after it passes through the VVV_min(), the function will index 1 past the end of the passed buffer when null-terminating it.

In contrast, the other argument of the VVV_min() does not need 1 subtracted from it, because that length does not include a null terminator (next_split() returns the length of the substring, after all; not the length of the substring plus 1).

(The VVV_min() here also shortens the range of values to the size of an int, but we'll probably make size_t versions anyway; plus, who really cares about supporting massively-sized buffers bigger than 2 billion bytes in length? That just doesn't make sense.)
This commit is contained in:
parent 72b8afcf7d
commit cd5408e396

1 changed file with 1 addition and 1 deletion
@@ -122,7 +122,7 @@ bool next_split_s(
     /* Using SDL_strlcpy() here results in calling SDL_strlen() */
     /* on the whole string, which results in a visible freeze */
     /* if it's a very large string */
-    const size_t length = VVV_min(buffer_size, len);
+    const size_t length = VVV_min(buffer_size - 1, len);
     SDL_memcpy(buffer, &str[prev_start], length);
     buffer[length] = '\0';
 }