xfs filesystem and g94



 Dear CCL,
 Sorry for the slightly off-topic question, but it concerns
 running Gaussian 94 on SGI machines that have XFS filesystems.
 This is really an SGI problem rather than a Gaussian one, but
 maybe someone here can help me.
 The problem is that the new 64-bit XFS filesystem allows
 files "with holes" (sparse files). This happens when one opens a
 file for direct access with a known number of records: if one
 writes only the first and the last record, the system reports the
 file as if it were completely filled. `ls -l`, or the `stat` and
 `fstat` system calls (or their 64-bit equivalents), then show a
 file size of e.g. 1 GB, while in fact the file occupies only two
 blocks on disk, so `du -k -a file` reports e.g. 1 KB instead of 1 GB.
 This does not matter unless one (1) splits files between disks, or
 (2) runs e.g. MP2 calculations that use the MAXDISK keyword.
 In case (1), with e.g.
 %rwf=/tmp/1.rwf,400Mb,/temp/2.rwf,-1
 where /tmp is a local XFS filesystem with 400 MB free and /temp is a
 slow NFS filesystem, g94 takes only 50 MB from /tmp and puts the
 remaining 550 MB of integrals on the slow /temp, instead of the much
 better split of 400 MB on /tmp and 200 MB on /temp. Of course
 `ls -l /tmp/1.rwf` shows 400 MB, but the file really holds only
 50 MB. We have seen the curious situation of three files, each of
 3 GB apparent size, on a 3 GB disk (in fact each was smaller than
 200 MB).
 The behaviour in case (2) is similar.
 Does anyone have a solution for this?
 Thanks in advance
 G.Bakalarski