Uwe Ohse


LFS - my opinion about it

Unix file sizes have been limited to 31 bits (a signed 32-bit off_t) for ages. This obviously isn't enough for some applications. I'm not saying they are misdesigned, and I'm not saying they are well-designed: I'm just stating a fact here.

Extending this without breaking compatibility is hard.


LFS extends the ABI (application binary interface) by a number of functions. For each function X (pread, readdir, ftruncate, lseek, open and so on) it adds a function X64, and it adds an off64_t type alongside off_t.
A compile-time switch determines which functions are used: a set of preprocessor symbols, _LARGEFILE_SOURCE, _LARGEFILE64_SOURCE and _FILE_OFFSET_BITS.
Nice idea.

But it doesn't work perfectly. Imagine what happens if two source files are compiled with different compiler flags, but the object files are linked together. Let's use these simple demonstration sources:

/* first file: compiled WITHOUT any LFS flags, so it sees the 32-bit struct stat */
#include <sys/types.h>
#include <sys/stat.h>
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
extern void z2(int fd, struct stat *st);
int main(int ac, char **av)
{
	struct stat s;
	int fd = open(av[1], O_RDONLY);
	if (fd == -1) _exit(3);
	z2(fd, &s);	/* z2 fills s, but using the 64-bit layout */
	printf("size32 is %llu\n", (unsigned long long) s.st_size);
	return 0;
}
/* second file: compiled WITH 64-bit file offsets */
#define _FILE_OFFSET_BITS 64	/* must come before any system header */
#include <unistd.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>
void z2(int fd, struct stat *st)
{
	/* on a 32-bit system this writes a 64-bit struct stat into the
	   caller's smaller 32-bit struct */
	if (-1 == fstat(fd, st)) _exit(1);
	printf("size64 is %llu\n", (unsigned long long) st->st_size);
}
The result, on a 32-bit platform with a 4 GB file, is this:
size64 is 4294967296
size32 is 0

One might argue that this must not be done: obviously right.
One might argue this can't happen: wrong. It can. Imagine a complicated source package incorporating other packages. Let's assume one of them is a library. Are you sure that, even if the top-level package uses some magic to determine whether or not to set the three definitions, all the lower-level packages use the same magic? What happens if the lower-level library uses the autoconf macro AC_SYS_LARGEFILE, but one or more of its callers don't?
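For the record, the autoconf side looks roughly like this. A sketch of a hypothetical configure.ac (the package name is made up; AC_SYS_LARGEFILE arranges for _FILE_OFFSET_BITS to be defined in config.h where necessary):

```autoconf
dnl configure.ac sketch for a hypothetical package "mylib"
AC_INIT([mylib], [1.0])
AC_PROG_CC
AC_SYS_LARGEFILE        dnl defines _FILE_OFFSET_BITS=64 in config.h if needed
AC_CONFIG_HEADERS([config.h])
AC_OUTPUT
```

The catch: every source file must include config.h before any system header, or the definition has no effect. A caller that skips the check, or includes config.h too late, silently gets the 32-bit types.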

This kind of trouble is going to be with us for a very long time (along with the more obvious problem that not all installed software uses large file support at all). It is hard to detect: you obviously need to test with a file of more than 2 GB, and you'll have to test every combination ... I can't imagine that even one GNU/Linux distributor does this with all the packages included in a release.

So I expect these problems, which are likely to result in mysteriously corrupted large files, to show up for years. Yessir, _years_. Many years.

The OpenBSD solution

This solution may be implemented on other operating systems, too, but I first noticed it on OpenBSD.

OpenBSD has just one interface, and it is 64-bit clean: OpenBSD simply doesn't support a 32-bit file size type. This was easy to do, since OpenBSD is relatively young and the developers knew about the problem early.

Drawback: software using a long instead of off_t for file offsets will break. But such software is likely to break with LFS, too. There is no way around that, apart from making long 64 bits wide, which is likely to hurt performance badly.

A better solution for GNU/Linux

would have been to give up on libc6 and switch to a new C library version, which could then do the same thing OpenBSD does. Unfortunately this is likely to annoy people, since experience seems to have shown that a switch to a new version of the C library is painful. I don't subscribe to that point of view in this case, since the C library itself wouldn't change its behaviour. And distributors could take care of the remaining problems by providing libraries compiled against the new C library.
A switch like this is likely to be easy, as the C library wouldn't change the behaviour of well-behaved programs. Other programs may break, but those are broken anyway as long as they still use the 32-bit interface, since they cannot be used reliably, which is what I call broken.

The linker could make sure that no clashes happen. And it can do this, as shown by all the symbol versioning tricks already done with it.

One thing is sure: switching to a new C library would be the clean way, especially since you can see what a binary does by looking at which libraries it is linked against (ldd). It hurts a bit during the switch-over phase, since one would have to keep a second version of all libraries around, for some time at least. But that is not too painful.


I said "would have been" ... is it really too late now? I think it is, as, once again, the GNU/Linux community has chosen to prefer compatibility concerns over long-term problems.