The actual bottleneck seems to be sending the vertex data to the GPU driver, presumably due to a memory transfer. In particular, chunk updates may have been affected by a change in rendering to use triangles instead of quads, meaning that each block face now takes 1.5 times more data and time to render. Quads were deprecated in OpenGL 3, so only triangles can be used (previously the GPU driver internally converted quads into triangles, but my own experience shows that this had nearly no impact on performance). Even versions as old as 1.6.4 have a flag in the code that can be set to use triangles, which makes chunk updates about 1.5 times slower, which tells me why it was disabled by default.

MC-219639 Performance loss after using OpenGL 3.2 core profile
MC-164123 Poor FPS performance with new rendering engine

Otherwise, there have been many reports of performance issues in 1.18 and other recent versions, some of which have been closed as "invalid", much as they closed similar issues dating back to 1.8.

That said, this only affects chunk loading from disk; in my experience, slow chunk loading on the client is caused by slow rendering. The chunks are loaded but not visible, best seen by being able to stand in them or see mobs wandering in them; chunk updates (in F3) will also be low. But everybody, even Optifine, just calls it chunk loading, even if that isn't technically correct (you could perhaps call it loading chunks into GPU memory).

You need to "optimize" the world, which will upgrade all chunks to the latest format. The format has changed hugely over recent versions, especially in 1.13, which is when they added the option, and the complexity of the changes makes upgrading much slower. Otherwise, chunks are upgraded on-the-fly as they are loaded; there is advice to open a world in each update so it can be upgraded incrementally, but this will only update loaded chunks.
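The 1.5x figure comes straight from the vertex counts: a quad face sends 4 vertices, but in a triangle-only pipeline (OpenGL 3.2 core removed GL_QUADS) the same face becomes two triangles sharing an edge, so 6 vertices, with two corners duplicated. A minimal sketch, with illustrative names rather than Minecraft's actual renderer code:

```java
// Why a triangle-only pipeline sends 1.5x the vertex data per block face.
public class FaceVertexCost {
    // Hypothetical layout: position(3) + color(3) + uv(2) floats per vertex.
    static final int FLOATS_PER_VERTEX = 8;

    // GL_QUADS path: one face = 4 vertices, copied straight into the buffer.
    static float[] buildQuad(float[][] corners) {
        float[] out = new float[4 * FLOATS_PER_VERTEX];
        for (int i = 0; i < 4; i++)
            System.arraycopy(corners[i], 0, out, i * FLOATS_PER_VERTEX, FLOATS_PER_VERTEX);
        return out;
    }

    // GL_TRIANGLES path: the same face as two triangles (0-1-2 and 0-2-3),
    // so 6 vertices; corners 0 and 2 are sent twice.
    static float[] buildTriangles(float[][] corners) {
        int[] order = {0, 1, 2, 0, 2, 3};
        float[] out = new float[6 * FLOATS_PER_VERTEX];
        for (int i = 0; i < 6; i++)
            System.arraycopy(corners[order[i]], 0, out, i * FLOATS_PER_VERTEX, FLOATS_PER_VERTEX);
        return out;
    }

    public static void main(String[] args) {
        float[][] corners = new float[4][FLOATS_PER_VERTEX];
        double ratio = (double) buildTriangles(corners).length / buildQuad(corners).length;
        System.out.println(ratio); // prints 1.5
    }
}
```

Note that an element (index) buffer can avoid the duplication, sending 4 unique vertices plus 6 small indices per face, which is the usual way to get quad-like data sizes back under the core profile.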