Expanded the regionParser tests
TCA166 committed Jul 19, 2023
1 parent 1c24d8c commit 2f6c7ae
Showing 5 changed files with 72 additions and 14 deletions.
8 changes: 4 additions & 4 deletions README.md
@@ -135,14 +135,14 @@ These may prove to be useful should you want to parse Minecraft save files or ge
Feel free to use these as libraries in your projects; just make sure to read the license before you do so.
The documentation for functions in these libraries should be mainly in header files, and I will gladly expand it should there be a need, so just let me know.

- regionParser
- regionParser
This library provides three functions for parsing region files.
You can either extract an entire chunk from the given region file, or extract all of them at once (a short usage sketch follows this list).
- chunkParser
- chunkParser
This library utilizes the cNBT library to extract information about blocks from Minecraft NBTs
- model
- model
This library is capable of generating wavefront 3D models
- generator
- generator
This library utilizes chunkParser and model to provide a simple interface with which you can quickly generate a 3D model from a Minecraft save file
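
A minimal usage sketch, assuming the `getChunks` signature, the `chunk` struct fields, and the `coordsToIndex`/`chunkIsNull` helpers from regionParser.h; it is illustrative only, not the documented API:

```c
#include <stdio.h>
#include <stdlib.h>
#include "regionParser.h"

int main(void){
    //open a region file in binary mode
    FILE* regionFile = fopen("r.0.0.mca", "rb");
    if(regionFile == NULL){
        return EXIT_FAILURE;
    }
    //extract every chunk in the region at once
    chunk* chunks = getChunks(regionFile);
    fclose(regionFile);
    //look a chunk up by its in-region coordinates (0-31 on each axis)
    chunk first = chunks[coordsToIndex(0, 0)];
    if(!chunkIsNull(first)){
        printf("Chunk 0,0 holds %d bytes of compressed data\n", (int)first.byteLength);
    }
    free(chunks); //assuming the returned array is heap allocated
    return EXIT_SUCCESS;
}
```

Alternatively, `extractChunk(regionDirPath, x, z)` locates and opens the right region file for you and returns a single chunk.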

## License
20 changes: 13 additions & 7 deletions regionParser.c
@@ -11,8 +11,6 @@

#define getRegion(coord) coord>>5

#define coordsToOffset(x, z) 4 * ((x & 31) + (z & 31) * 32)

int handleFirstSegment(chunk* output, FILE* regionFile, char* regionFileName){
//so the numbers are stored as big endian AND as int24
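//(presumably decoded as bytes[0] << 16 | bytes[1] << 8 | bytes[2])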
byte bytes[3];
@@ -108,25 +106,33 @@ chunk* getChunks(FILE* regionFile){
for(int i = 0; i < chunkN; i++){
chunk newChunk;
if(handleFirstSegment(&newChunk, regionFile, "region file") != 0){
parsingError("region file", "first segment");
if(chunkIsNull(newChunk)){
newChunk.byteLength = 0;
newChunk.data = NULL;
newChunk.sectorCount = 0;
newChunk.timestamp = 0;
}
else{
parsingError("region file", "first segment");
}
}
chunks[i] = newChunk;
}
//Then there's an equally long section made up of 1024 int32 timestamps
for(int i = 0; i < chunkN; i++){
if(handleSecondSegment(&chunks[i], regionFile) != 0){
if(handleSecondSegment(&chunks[i], regionFile) != 0 && !chunkIsNull(chunks[i])){
parsingError("region file", "second segment");
}
}
//Then there's encoded chunk data in n*4096 byte long chunks
//Each of these chunks is made up of a single int32 field that contains the length of the compressed data that follows it
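//(so each entry presumably looks like [int32 length][1 byte compression type][length - 1 bytes of compressed data], padded to a 4096 byte boundary)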
//foreach chunk we have extracted so far
for(int i = 0; i < chunkN; i++){
if(chunks[i].offset != 0){ //if the chunk isn't NULL
if(!chunkIsNull(chunks[i])){ //if the chunk isn't NULL
/*
At one point I did in fact attempt to move this function to linear file parsing instead of the jumping thing we have now
Two major issues.
One: less stability. A single error that may be caused by corrupted data throws the entire algorythm off which is exactly what happened during testing
One: less stability. A single error that may be caused by corrupted data throws the entire algorithm off which is exactly what happened during testing
Two: Sometimes the header data about a chunk would be straight up wrong? chunkLen would be greater than the suggested cap, or fill less than 1000 bytes but have allocated three segments
All of this chicanery made me simply give up and opt for this clearly more stable and safer option
*/
@@ -177,7 +183,7 @@ chunk extractChunk(char* regionDirPath, int x, int z){
}
chunk ourChunk = getChunk(x, z, regionFile, filename);
//I could use errno here, or rework getChunk so that it returns an errno value, but I like it this way better
if(!ourChunk.offset && !ourChunk.sectorCount){
if(chunkIsNull(ourChunk)){
parsingError(filename, "parsing of the first segment; chunk isn't saved")
}
else if(ourChunk.offset == -1){
7 changes: 7 additions & 0 deletions regionParser.h
@@ -11,6 +11,13 @@
#define Zlib 2
#define Uncompressed 3

//If the chunk offset is 0 and the sector count is 0, the chunk is considered not generated and empty, so basically NULL
#define chunkIsNull(chunk) (chunk.offset == 0 && chunk.sectorCount == 0)

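//Converts in-region chunk coordinates (each taken modulo 32) to an index into the 1024 entry header table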
#define coordsToIndex(x, z) ((x & 31) + (z & 31) * 32)

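//Each header entry is 4 bytes long, so this yields the byte offset of a chunk's entry within its header table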
#define coordsToOffset(x, z) 4 * coordsToIndex(x, z)

//Unsigned char
typedef unsigned char byte;

13 changes: 13 additions & 0 deletions tests/README.md
@@ -0,0 +1,13 @@
# Tests

This is where the actual tests for the libraries are located.
They are coded in C, of course; however, in order to simplify the usage of [Check](https://libcheck.github.io/check/), I chose to use their checkmk utility and its C dialect.
Writing tests is already tedious enough; I don't have to make it harder on myself, now do I?
I don't have much experience writing tests, so all suggestions are welcome.

## Hashes for test files

Since I use files in some tests, here are the SHA256 hashes for those files:

- 8583b9137dc65f5e04e36f19dd247d1159d519617028b4a3f4b3ddf02ecd8627 r.0.0.mca
- e9be2818875684b5e0802b49dcfc0ef27a4c1966d179a262bc1b8a5a1a8e780a 0.0.nbt
38 changes: 35 additions & 3 deletions tests/regionParser.check
@@ -3,6 +3,8 @@

#include "../regionParser.h"
#include <string.h> //for memcmp used in the equality test below, assuming it is not already pulled in indirectly


//SHA256 of testRegion: c2105d33e5ecae63139f2f2b24abbff229b1f4c52c777cef707fc85f5a745694
#define getTestRegion \
FILE* testFile = fopen("r.0.0.mca", "rb"); \
if(testFile == NULL) testFile = fopen("tests/r.0.0.mca", "rb");
@@ -12,12 +14,42 @@
#test getValidChunkTest
getTestRegion
chunk mockChunk = getChunk(0, 0, testFile, NULL);
ck_assert_msg(mockChunk.offset != 0 && mockChunk.sectorCount != 0, "Offset and sector count was 0");
ck_assert_msg(!chunkIsNull(mockChunk), "Offset and sector count were 0");
ck_assert_msg(mockChunk.byteLength > 0, "byteLength was not greater than 0");
fclose(testFile);

#test getInvalidChunkTest
getTestRegion
chunk mockChunk = getChunk(31, 31, testFile, NULL);
ck_assert_msg(mockChunk.offset == 0 && mockChunk.sectorCount == 0, "Offset and sector count wasn't 0");
fclose(testFile);
ck_assert_msg(chunkIsNull(mockChunk), "Offset and sector count weren't 0");
fclose(testFile);

#test getAllChunksTest
getTestRegion
chunk* mockChunks = getChunks(testFile);
chunk hopefullyNullChunk = mockChunks[coordsToIndex(4, 15)];
ck_assert_msg(chunkIsNull(hopefullyNullChunk), "Chunk that should be NULL actually wasn't");
chunk hopefullyChunk = mockChunks[coordsToIndex(0, 0)];
ck_assert_msg(!chunkIsNull(hopefullyChunk), "Chunk that shouldn't be NULL actually is");
//We check if any chunk has had its data extracted
short b = 0;
for(int i = 0; i < chunkN; i++){
    if(mockChunks[i].byteLength > 0){
        b = 1;
        break;
    }
}
ck_assert_msg(b, "All chunks are data free");
fclose(testFile);

#test equal
getTestRegion
chunk* mockChunks = getChunks(testFile);
chunk hopefullyChunk = mockChunks[coordsToIndex(1, 1)];
chunk ourChunk = getChunk(1, 1, testFile, NULL);
ck_assert_msg(hopefullyChunk.offset == ourChunk.offset, "1,1 Chunks have different offset");
ck_assert_msg(hopefullyChunk.sectorCount == ourChunk.sectorCount, "1,1 Chunks have different sectorCounts");
ck_assert_msg(hopefullyChunk.timestamp == ourChunk.timestamp, "1,1 Chunks have different timestamp");
ck_assert_msg(hopefullyChunk.byteLength == ourChunk.byteLength, "1,1 Chunks have different byteLength");
ck_assert_msg(hopefullyChunk.compression == ourChunk.compression, "1,1 Chunks have different compression");
ck_assert_msg(memcmp(hopefullyChunk.data, ourChunk.data, hopefullyChunk.byteLength) == 0, "1,1 Chunks have different data");
fclose(testFile);
