With only a handful of articles under my belt, I’m still growing into my confidence as a writer. I’m obviously nailing it when it comes to titles, though, right? It will all make sense in a moment.

As I was writing my tip about inotifywait(1), I decided to show how different editors behave when saving a file. “I use vim btw”™, so it didn’t take long to try it there.
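
The watch itself was nothing fancy, something like a plain inotifywait(1) in monitor mode on the target directory:

# -m keeps watching (and printing events) instead of exiting after the first one
inotifywait -m /path/to/watched/dir/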

I executed vim foo +wq and this is what inotifywait(1) printed:

/path/to/watched/dir/ CLOSE_WRITE,CLOSE 4913
/path/to/watched/dir/ DELETE 4913
/path/to/watched/dir/ MOVED_FROM foo
/path/to/watched/dir/ CLOSE_WRITE,CLOSE foo

That’s weird. Vim creates a file named 4913, deletes it, and only then gets around to rewriting foo. Why 4913, though? I tried it several times on multiple machines, and it was always that same number. You know how, whenever some vim mystery comes up, everybody tells you to use :help? Yeah, no luck there this time.

git clone https://github.com/vim/vim.git && rg 4913 vim

Two meaningful results.

vim/runtime/doc/version7.txt: On MS-Windows sometimes files with number 4913 or higher are left behind.

😅

Fine. One meaningful result:

// Check if we can create a file and set the owner/group to
// the ones from the original file.
// First find a file name that doesn't exist yet (use some
// arbitrary numbers).
STRCPY(IObuff, fname);
fd = -1;
for (i = 4913; ; i += 123)
{
    sprintf((char *)gettail(IObuff), "%d", i);
    if (mch_lstat((char *)IObuff, &st) < 0)
    {
        fd = mch_open((char *)IObuff,
                O_CREAT|O_WRONLY|O_EXCL|O_NOFOLLOW, perm);
        if (fd < 0 && errno == EEXIST)
            // If the same file name is created by another
            // process between lstat() and open(), find another
            // name.
            continue;
        break;
    }
}
if (fd < 0)	// can't write in directory
    backup_copy = TRUE;
else [..]

This is part of a (haunting) function more than 6,000 lines long called buf_write(). This particular block is only reached when vim is about to make a backup before overwriting (with :set backup, or simply the default-on 'writebackup') and the file being saved already exists. The idea is fairly simple: make sure the changes can be written to some other file inside the target directory, so that the result can be moved over the target file and the latter gets updated atomically. No one would ever observe a partial change of state.
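
To make the atomicity part concrete, here is a minimal shell sketch of the same write-elsewhere-then-rename idea. It is not vim’s actual code path, and the files/foo target plus the temp-name template are placeholders of mine:

# create the temporary file in the *same* directory as the target (placeholder names)
tmp=$(mktemp files/.foo.XXXXXX)
printf 'new content\n' > "$tmp"
# mv within one filesystem is a rename(2): anyone reading files/foo sees either
# the old content or the new one, never a half-written file
mv "$tmp" files/foo

The same-directory detail matters: across filesystems mv falls back to copy-and-delete, which is no longer atomic.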

But why 4913? It reminded me of that meme about the random function that always returns 4, because it was the result of a perfectly fair dice roll. No idea; the constant appeared in the 7th commit after the migration to git, back in version 7. If anybody knows, please reach out.

The purpose of adding 123 until the end of time is clear, though: pick a “random” file name that doesn’t exist yet in the target directory. Choosing 123 in particular is probably just because Bram had to write some number. Looping indefinitely isn’t really a bad idea in this case either. I mean, who in their right mind would create thousands of files whose names are just numbers, starting from 4913 and going up in steps of 123?

Well, me, apparently.

Let’s see how hard we can make it for vim. A single write over an already existing file doesn’t take long, even on pretty old hardware (a Xeon E5-2690):

$ time vim files/foo +wq

real    0m0.035s
user    0m0.015s
sys     0m0.012s

Let’s see what happens when 4913 exists:

$ touch files/4913 && time vim files/foo +wq

real    0m0.034s
user    0m0.021s
sys     0m0.004s

Pretty much the same. What’s one more open(2), after all? inotifywait(1) reports:

CLOSE_WRITE,CLOSE 4913
CLOSE_WRITE,CLOSE 5036
DELETE 5036
MOVED_FROM foo
CLOSE_WRITE,CLOSE foo

How about we make it work just a bit harder:

$ n=4913
$ for i in {1..4096}; do n=$(($n+123)); touch files/$n; done
$ time vim files/foo +wq

real    0m0.041s
user    0m0.021s
sys     0m0.011s

Still far quicker than anybody would notice, in an already insane scenario. And yet…

$ for i in {1..50000}; do n=$(($n+123)); touch files/$n; done
$ time vim files/foo +wq

real    0m0.134s
user    0m0.020s
sys     0m0.098s

Now we’re talking. I even saw the terminal flicker while redrawing the window before the :wq could finish. Testing with anything more than that goes way beyond any even remotely possible scenario.

Having thousands of files in a single directory is something you never want in the real world anyway. Some filesystems have a hard limit on directory entries by design, and even modern ones that don’t will make you pay in performance when working with such a directory.

Y tho?

Why am I telling you all this? Because I like digging deeper into such quirky details. The next time you stumble upon something peculiar, remember: there might just be an amusing or enlightening story waiting to be uncovered. Or maybe not, but you’ll learn something anyway.