Sent to CCL by: "Pablo Echenique" [echenique.p]^[gmail.com]
Excuse my newbie question, but I would like to know whether the memory and, especially, the disk requirements of a job are distributed among the machines if I launch it in parallel rather than in serial.
To give a concrete example, say I want to perform a CCSD single-point energy calculation that requires 200 GB of disk space to run. If I launch the job on a machine with 100 GB of space available for the scratch folder, the job dies. But what if I instead launch the same job in parallel across 4 machines with 60 GB of available space each? Will the disk requirements be split so that the job finishes successfully, or will it still die?
And what about RAM?
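Just to make my arithmetic explicit, here is a minimal sketch of the check I have in mind, assuming (and this is exactly my question) that the parallel code splits its scratch files evenly across the nodes; the function name is my own invention, not from any package:

```python
def scratch_fits(total_gb, nodes, per_node_free_gb):
    """Return True if an evenly split scratch requirement fits on each node.

    Assumes the parallel code distributes scratch uniformly, which may
    or may not be what the program actually does.
    """
    per_node_need = total_gb / nodes
    return per_node_need <= per_node_free_gb

# Serial: 200 GB of scratch on one machine with 100 GB free -> does not fit.
print(scratch_fits(200, 1, 100))  # False

# Parallel: 200 GB / 4 nodes = 50 GB each, on nodes with 60 GB free -> fits,
# IF the code really splits the scratch this way.
print(scratch_fits(200, 4, 60))   # True
```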
Thank you very much in advance for your help and best regards from Spain,
Instituto de Biocomputación y
Física de los Sistemas Complejos (BIFI)
Departamento de Física Teórica
Universidad de Zaragoza
Pedro Cerbuna 12, 50009 Zaragoza
Tel.: +34 976761260
Fax: +34 976761264