
Optimize file system for parallel reads on an HDD

I have a secondary 16 TB drive that has to serve 6 spacemesh nodes, each of which reads its own 512 GB folder (so ~3 TB of data in total) at the same time, as fast as possible. At the moment I get 230 MB/s when a single node reads sequentially.
Each folder contains 4 GB part files, and that is all that is on the HDD.
The OS is Ubuntu 22.04.
Now I am trying to format the drive with an ext4 file system using a 50 MB block size, to stop the HDD from seeking all the time once all 6 nodes start reading in parallel.
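For reference, mke2fs rejects block sizes above the page size (normally 4 KiB), so a literal 50 MB block size is not possible with ext4; the closest ext4 feature I know of is bigalloc, which allocates space in larger clusters so each file's data stays contiguous. A sketch of what I mean, assuming the drive is `/dev/sdX` (placeholder, destructive command):

```shell
# -b (block size) is capped at the page size, so 50M is rejected.
# bigalloc allocates in larger clusters (-C) on top of 4K blocks,
# which keeps each file's extents contiguous on disk.
sudo mkfs.ext4 -O bigalloc -b 4096 -C 16M /dev/sdX   # /dev/sdX is a placeholder
```

The cluster size must be a power-of-two multiple of the block size; I am not sure how large it can go on a given kernel, so treat 16M as an illustrative value.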

What I need is for the OS to read each file in the largest segments it can before it seeks (to avoid seeking as much as possible). I can't tweak the nodes to run one after another at the moment, and replotting is not an option.
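From what I understand, the kernel knob closest to "read a large segment before seeking elsewhere" is the per-device readahead, which can be raised without reformatting. A sketch, again assuming the drive is `/dev/sdX` (placeholder):

```shell
# blockdev counts readahead in 512-byte sectors:
# 131072 sectors * 512 B = 64 MiB pulled in per stream before the head moves.
sudo blockdev --setra 131072 /dev/sdX
sudo blockdev --getra /dev/sdX               # verify the new value

# Equivalent sysfs knob, expressed in KiB (65536 KiB = 64 MiB):
echo 65536 | sudo tee /sys/block/sdX/queue/read_ahead_kb
```

This setting does not persist across reboots unless applied via a udev rule or startup script.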

So what I am trying to achieve would look like: read 50 MB of POS1 file block 1, then 50 MB of POS2 file block 1, and so on.
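If I could change the readers, the access pattern I mean would look roughly like the sketch below (Python, with a hypothetical `round_robin_read` helper; the real spacemesh nodes read independently and I cannot change them):

```python
def round_robin_read(paths, chunk_size=50 * 1024 * 1024):
    """Read several files as interleaved streams, one large chunk per
    file per turn, so the disk head only moves between big chunks."""
    handles = [open(p, "rb") for p in paths]
    totals = [0] * len(paths)           # bytes read per file
    live = set(range(len(paths)))       # indices of files not yet exhausted
    while live:
        for i in list(live):
            chunk = handles[i].read(chunk_size)  # one large sequential read
            if not chunk:               # EOF: drop this stream
                handles[i].close()
                live.discard(i)
            else:
                totals[i] += len(chunk)
    return totals
```

The open question is whether the kernel's I/O scheduler will actually honor this ordering, or reorder/split the requests and seek mid-chunk anyway.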
I must say I have no idea whether, at the OS level, the kernel will try to read block 1 and block 2 at the same time and make the HDD seek in the middle of a block read.