From owner-chemistry at ccl.net Tue Dec 11 12:01:00 2007
From: "DIEGOI GOMEZ darkego21+/-yahoo.com"
To: CCL
Subject: CCL:G: distribution of memory and disk in a G03 parallel job
Date: Tue, 11 Dec 2007 08:59:00 -0800 (PST)

Sent to CCL by: DIEGOI GOMEZ [darkego21 .. yahoo.com]
Hi Pablo!
I was running some calculations on a cluster with a parallel version of Gaussian 03. I think my jobs were fairly big: around 58 first- and second-row atoms plus one transition-metal atom.
 
The memory and processor requirements for this kind of job are big, but the disk requirements were never as huge as you say; the biggest output file was about 11,715 KB, and I suppose the .chk file was a little bigger. The output files are created on the same disk, while the calculation itself runs on several processors, and it is possible that each node of the cluster used for the calculation contributes its own memory until the memory requirement is satisfied. I used 96 MW for the calculations. So I don't think you need that much disk space.
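For reference, the Link 0 header for that kind of shared-memory parallel job looks something like the sketch below; the processor count, checkpoint name and route line are placeholders rather than my exact input, and the text after each "!" is only annotation for this message (96 MW means 96 million 8-byte words, roughly 768 MB):

   %Mem=96MW               ! dynamic memory for the job: 96 million 8-byte words, about 768 MB
   %NProcShared=4          ! shared-memory parallelism within one node (placeholder count)
   %Chk=job.chk            ! checkpoint file (placeholder name)
   #P B3LYP/6-31G(d) Opt   ! placeholder route section, not my actual job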
 
Best regards from Colombia.
 
Diego Gómez.
   
"Pablo Echenique echenique.p:gmail.com" <owner-chemistry##ccl.net> wrote:

Sent to CCL by: "Pablo Echenique" [echenique.p]^[gmail.com]
Dear CCLers,

Excuse me for my newbieness, but I would like to know whether the memory and, especially, the disk requirements of a job are distributed among the machines if I launch it in parallel rather than serially.

To give an example: say I want to perform a CCSD single-point energy calculation that requires 200 GB of disk space to run. If I launch the job on a machine with 100 GB of free space in the scratch folder, the job dies. But what if I instead launch the same job in parallel across 4 machines with 60 GB of free space each? Will the disk requirements be split, so that the job finishes successfully, or will it still die?

And what about RAM?
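To make the question concrete, the Link 0 header I have in mind is sketched below; the Linda worker count, the memory value, and the %RWF split over two hypothetical local partitions (/scratch1 and /scratch2) are only my reading of the Gaussian 03 manual, not something I have tested, and the text after each "!" is just annotation:

   %NProcLinda=4       ! request 4 Linda workers, one per machine (the machine list goes to the Linda startup, not into this file)
   %NProcShared=2      ! shared-memory processes within each machine (placeholder)
   %Mem=96MW           ! memory available to each worker (placeholder value)
   %RWF=/scratch1/job.rwf,60GB,/scratch2/job.rwf,-1   ! split the read-write file: up to 60 GB on /scratch1, the rest on /scratch2
   #P CCSD/6-311G(d,p) ! placeholder route line

As far as I can tell from the manual, %RWF only splits the read-write file over disks visible to the machine that writes it; whether that file (or the memory) is shared out automatically over the 4 machines is exactly what I would like to know.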

Thank you very much in advance for your help and best regards from Spain,

Pablo Echenique

--
Pablo Echenique

Instituto de Biocomputación y
Física de los Sistemas Complejos (BIFI)

Departamento de Física Teórica
Universidad de Zaragoza
Pedro Cerbuna 12, 50009 Zaragoza
Spain

Tel.: +34 976761260
Fax: +34 976761264

echenique.p_-_gmail.com
http://www.pabloechenique.com





