
Conversation

@JTischbein
Contributor

Implements Direct I/O (uncached) file reading on Linux to improve model loading performance by bypassing the page cache. This is especially beneficial for large model files.

While mmap is fast when loading the same model multiple times, uncached reads give consistent load times bounded by the disk's sequential read speed. On DGX Spark, loading GPT-OSS-120B-MXFP4 with mmap takes ~110s on the first load and ~67s on subsequent loads; with these changes it consistently takes ~10.5s. The speedup depends on the model size, the disk read speed and, for repeated loads, on the available RAM.

I would propose making uncached reads the default; Windows already has async uncached I/O (PR)
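For context, here is a minimal standalone sketch of what uncached sequential reading looks like on Linux. This is illustrative only, not the code in this PR; the chunk size and alignment are assumptions.

```cpp
// Illustrative sketch: sequential file read with O_DIRECT (uncached I/O).
// Build with g++ on Linux (g++ defines _GNU_SOURCE, which exposes O_DIRECT).
// O_DIRECT requires the buffer, file offset and read size to be aligned to
// the logical block size (typically 512 or 4096 bytes).
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <cstdlib>

int main(int argc, char ** argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open(O_DIRECT)"); return 1; }

    const size_t align = 4096;      // assumed logical block size
    const size_t chunk = 16u << 20; // 16 MiB aligned chunks
    void * buf = nullptr;
    if (posix_memalign(&buf, align, chunk) != 0) { close(fd); return 1; }

    size_t total = 0;
    for (;;) {
        ssize_t n = read(fd, buf, chunk); // bypasses the page cache
        if (n <= 0) break;                // 0 = EOF, <0 = error (not handled here)
        total += (size_t) n;
    }
    printf("read %zu bytes uncached\n", total);

    free(buf);
    close(fd);
    return 0;
}
```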

Member

@ggerganov ggerganov left a comment


This results in a huge load speedup on DGX Spark and also, at the end of the program, leaves the memory reported as free instead of buff/cache.

Currently, the implementation is gated behind defined(__linux__). Is this functionality generally supported across all Linux platforms? If I am reading this correctly, it boils down to open() supporting O_DIRECT.

Also, do we expect this change to also have effect on non-DGX Spark systems?

@lemmi

lemmi commented Dec 14, 2025

On my Strix Halo machine with btrfs, this is strictly worse than master or mmap: mmap shows the highest throughput while loading the model (~6 GByte/s), master is around 3 GByte/s, and this patch is at 2 GByte/s.

@ehoogeveen-medweb

IIRC, with Strix Halo and ROCm/HIP, loading a model into memory reserved for the GPU using mmap has a major performance issue, hanging essentially indefinitely for larger models. Since reserving memory for the GPU also means having less RAM available to the CPU, it would be great if Direct I/O avoids that issue, as it would make ROCm/HIP more viable for larger models. Vulkan doesn't have this problem.

@JTischbein
Contributor Author

@ggerganov I have added a fallback open() in case O_DIRECT is not available. O_DIRECT has been supported on Linux since kernel 2.4.10.
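A minimal sketch of the fallback idea (the function name and structure are illustrative, not the exact code in this PR):

```cpp
// Try O_DIRECT first and fall back to a regular buffered open() if the
// kernel or filesystem rejects it (e.g. EINVAL or EOPNOTSUPP).
#include <fcntl.h>

static int open_model_file(const char * path, bool & using_direct_io) {
#if defined(__linux__) && defined(O_DIRECT)
    int fd = open(path, O_RDONLY | O_DIRECT);
    if (fd >= 0) {
        using_direct_io = true;
        return fd;
    }
#endif
    using_direct_io = false;
    return open(path, O_RDONLY);
}
```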

In my tests the first (cold) load time improved on every system configuration (PCIe 4.0/PCIe 5.0 SSD, RTX 5080/5090). On the second load mmap=true was faster again, but only when the model fit into VRAM. Overall, with a fast disk, the uncached load time also comes close to the cached load with mmap=true.

@lemmi Which disk are you using? And I assume the 6 GB/s load with mmap=true was loading from cache, not a first cold load? The difference between std::fread and read is odd; I will look into it.

@lemmi

lemmi commented Dec 15, 2025

So, I ran a bunch of tests with the Vulkan backend to compare #18012 and #18047 against master.

  1. GGML_VK_DISABLE_HOST_VISIBLE_VIDMEM=1 /usr/bin/time -v build/bin/llama-cli -m ../models/gpt-oss-120b-GGUF/gpt-oss-120b-mxfp4.gguf -p "bla" -n 0 --single-turn --mmap
  2. GGML_VK_DISABLE_HOST_VISIBLE_VIDMEM=1 /usr/bin/time -v build/bin/llama-cli -m ../models/gpt-oss-120b-GGUF/gpt-oss-120b-mxfp4.gguf -p "bla" -n 0 --single-turn --no-mmap
  3. /usr/bin/time -v build/bin/llama-cli -m ../models/gpt-oss-120b-GGUF/gpt-oss-120b-mxfp4.gguf -p "bla" -n 0 --single-turn --no-mmap

Minisforum MS-S1 Max (AMD RYZEN AI MAX+ 395 w/ Radeon 8060S)
2x WD_BLACK SN850X HS 2000GB, RAID 0, btrfs
Kernel 6.18.0_1

| Configuration | master (d6a1e18) | #18012 | #18047 | #18012 + #18047 |
| --- | --- | --- | --- | --- |
| GGML_VK_DISABLE_HOST_VISIBLE_VIDMEM=1 + --mmap | 0:43.37 | 0:30.31 | 0:31.02 | 0:37.36 |
| GGML_VK_DISABLE_HOST_VISIBLE_VIDMEM=1 + --no-mmap | 0:17.53 | 0:36.17 | Error (out-of-memory) | 0:17.98 |
| --no-mmap | 0:17.57 | 0:36.42 | Error (out-of-memory) | 0:18.08 |

| ddrescue | avg GB/s | time |
| --- | --- | --- |
| buffered | 6.3 | 0:09.57 |
| direct I/O | 1.9 | 31.72 |

The --mmap case is weird: it starts out at >6 GB/s, then there is a short pause, and then, depending on the PR, it looks like the whole model is read again at 2-4 GB/s.
With --no-mmap, throughput is also always below 4 GB/s, and using direct I/O is the worst case. Direct I/O is a little tricky on a CoW filesystem, so maybe it's not a very optimized path.
(Ideally Vulkan could just use mmapped files, but I have no idea whether that's possible.)

(EDIT: of course I was a good boy and ran echo 3 > /proc/sys/vm/drop_caches between tests)

@JTischbein
Contributor Author

Thank you for testing this, @lemmi! Looking at your numbers, it seems like read() with #18012 + #18047 is falling back to buffered I/O, leading to performance similar to master with fread(). I have implemented a filesystem check to decide whether to use read() with O_DIRECT or fopen()/fread(). Does it still make sense when read() and fread() perform equally?
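For illustration, one way such a filesystem check could look, assuming btrfs is treated as a case where buffered fread() is preferred. The magic constant and the heuristic itself are assumptions, not necessarily what the PR does.

```cpp
// Decide whether O_DIRECT is likely to help based on the filesystem type.
#include <sys/vfs.h>
#include <linux/magic.h>

static bool prefer_direct_io(const char * path) {
    struct statfs sfs;
    if (statfs(path, &sfs) != 0) {
        return false; // cannot tell, stay on buffered fopen()/fread()
    }
    // Copy-on-write filesystems like btrfs showed lower throughput with
    // O_DIRECT in the tests above, so keep buffered I/O there.
    if (sfs.f_type == BTRFS_SUPER_MAGIC) {
        return false;
    }
    return true;
}
```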

Is --mmap on a warm start quicker than --no-mmap on your machine, @lemmi?

bool kv_unified = false; // enable unified KV cache

bool input_prefix_bos = false; // prefix BOS to user inputs, preceding input_prefix
bool use_mmap = true; // use mmap for faster loads
Member


Changing this to false by default results in a huge slowdown on macOS with default arguments:

time ./bin/llama-completion -m ../models/gpt-oss-120b/ggml-model-mxfp4.gguf -p "hello" -n 1 -no-cnv

# master
real	0m4.648s

# PR
real	0m17.957s

Not sure what the best way to handle this is. If we keep it true, Linux users won't get the benefit of Direct I/O by default; if we switch it to false, Mac users will take the hit.

Contributor Author


Would it be OK to set mmap depending on the platform?

Member


We don't have such a precedent at the moment for any of the parameters in common, so I would say it's not ideal.

Contributor Author


On an M4 Pro with GPT-OSS-20B, a cold load takes 4.168s with --no-mmap and 6.3s with --mmap. A warm load, however, takes 2.1s with --mmap (--no-mmap is still ~4.1s).

Measured using time ./llama-cli -m /Users/jtischbein/Documents/models/openai_gpt-oss-20b-MXFP4.gguf --no-mmap -p "bla" -n 0 --single-turn, with the filesystem cache cleared using purge.

So the cold load is still faster with --no-mmap, but unfortunately not by as much as on Linux.

Member


We can do the following:

  • Add new CLI argument --direct-io, -dio
  • Description: "Use DirectIO if available. Takes precedence over --mmap"
  • Keep use_mmap == true and use_direct_io == true
  • On Mac, the internal implementation will determine that DIO is not available, so it will fall back to mmap

We might want to do this in a separate PR, as it would require changes to the libllama API. This PR should keep use_mmap == true by default.
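A hypothetical sketch of the precedence described above (struct and function names are illustrative, not the final libllama API):

```cpp
// Direct I/O takes precedence over mmap when the platform supports it;
// otherwise fall back to mmap, and finally to plain buffered reads.
struct load_params {
    bool use_mmap      = true;
    bool use_direct_io = true;
};

enum class load_method { direct_io, memory_map, buffered };

static load_method choose_load_method(const load_params & p, bool direct_io_supported) {
    if (p.use_direct_io && direct_io_supported) return load_method::direct_io;
    if (p.use_mmap)                             return load_method::memory_map;
    return load_method::buffered;
}
```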

Contributor Author


Sounds good

@JTischbein
Contributor Author

The commit removes the branching in llama-model-loader.cpp and reduces code duplication in llama-mmap.cpp. Direct I/O is now easier to integrate on Windows and Mac.

Member

@ggerganov ggerganov left a comment


Let's restore use_mmap to true and we can merge.

@JTischbein
Contributor Author

I will file a follow-up PR that implements the use_direct_io argument.

@ggerganov ggerganov merged commit 4d4f4ca into ggml-org:master Dec 18, 2025
71 checks passed