CCL:G: distribution of memory and disk in a G03 parallel job

Hi Pablo!
I was running some calculations on a cluster with a parallel version of GAUSSIAN03, and I think my jobs were big: around 58 first- and second-row atoms plus one transition-metal atom.
The memory and processor requirements for this kind of job are large, but the disk requirements were never as huge as you say; the biggest output file was 11.715 KB, and I suppose the .chk was a little bigger. So, while the output files are created on the same disk, the calculation runs on several processors, and it is possible that each node of the cluster used for the calculation contributes its own memory until the memory requirements are satisfied. I used 96 MW for the calculations. So I think you don't need that much disk space.
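For reference, in Gaussian 03 the memory and parallelism for a job are requested through Link 0 commands at the top of the input file. A minimal sketch (the values, filename, and route line here are purely illustrative, not a recommendation for your system):

```
%Mem=96MW              ! memory request, in megawords (as in my runs)
%NProcShared=4         ! shared-memory processors on one node
%Chk=complex.chk       ! checkpoint file
# B3LYP/LanL2DZ SP

Single point on a hypothetical transition-metal complex

0 1
 ...coordinates...
```

For runs spanning several nodes, G03 uses Linda, requested with the %LindaWorkers (or %NProcLinda) Link 0 command instead of, or in addition to, %NProcShared.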
Best regards from Colombia.
Diego Gómez.
"Pablo Echenique" <> wrote:

Sent to CCL by: "Pablo Echenique" [echenique.p]^[]
Dear CCLers,

Excuse my newbieness, but I want to know whether or not the memory and, especially, the disk requirements of a job are distributed among the machines if I launch it in parallel rather than serially.

Coming down to an example: say I want to perform a CCSD single-point energy calculation that requires 200 GB of disk space to run. If I launch the job on a machine with 100 GB of space available for the scratch folder, the job dies. But what if I launch the same job in parallel on, say, 4 machines with 60 GB of available space each? Will the disk requirements be split so that the job ends successfully, or will it die?
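The arithmetic behind the question can be made explicit. Whether the job fits depends entirely on whether the scratch files are replicated on every node or split evenly among them; a back-of-the-envelope sketch (all numbers taken from the example above, the even-split assumption is the open question):

```python
# Disk the hypothetical CCSD single point needs, and the cluster layout
total_disk_gb = 200       # total scratch the job requires
nodes = 4                 # machines used for the parallel run
per_node_scratch_gb = 60  # free scratch space on each machine

# Case 1: scratch is NOT distributed -- every node needs the full amount
need_if_replicated = total_disk_gb

# Case 2: scratch IS split evenly across the nodes (ideal case)
need_if_split = total_disk_gb / nodes

print(need_if_replicated <= per_node_scratch_gb)  # False: 200 GB > 60 GB, job dies
print(need_if_split <= per_node_scratch_gb)       # True: 50 GB per node fits
```

So 4 x 60 GB = 240 GB is enough in aggregate, but only if the program actually distributes its scratch files.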

And what about RAM?

Thank you very much in advance for your help and best regards from Spain,

Pablo Echenique

Pablo Echenique

Instituto de Biocomputación y
Física de los Sistemas Complejos (BIFI)

Departamento de Física Teórica
Universidad de Zaragoza
Pedro Cerbuna 12, 50009 Zaragoza

Tel.: +34 976761260
Fax: +34 976761264
