From rafapa@obelix.cica.es Mon Nov 2 10:16:37 1992
Date: Mon, 2 Nov 1992 10:16:37 UTC+0100
From: rafapa@obelix.cica.es
Subject: GAUSSIAN and ECPs
To: chemistry@ccl.net

The problem that Jan Hrusak pointed out with the Durand & Barthelat ECPs and Gaussian is due to a line in link 301, subroutine prnpot, that says:

   if (negn*(-999).lt.1) negn=0

where negn = (2 - nval(k)) and nval is the power of R. This negn variable is only used for printing, so you can use powers greater than 2 in Gaussian to perform single-point calculations. Unfortunately there is a problem with the gradients in link l705 when you use powers of 4 or greater. I reported this problem to M. Frisch a few weeks ago, so they know about it. Hope this helps.

By the way, could you send me an example of how you introduce the ECP in the input? [A rough layout sketch appears at the end of this digest.]

Rafael R. Pappalardo
Dept. of Physical Chemistry
Univ. of Seville (SPAIN)
e-mail: rafapa@obelix.cica.es

From theresa@si.fi.ameslab.gov Mon Nov 2 04:25:35 1992
From: theresa@si.fi.ameslab.gov (Theresa Windus)
Subject: ECPs in GAMESS
To: chemistry@ccl.net
Date: Mon, 2 Nov 92 10:25:35 CST

Fellow Netters:

GAMESS (the US version, now at Iowa State) currently has analytic energies and first derivatives and numerical second derivatives for ECP calculations. The numerical second derivatives (as has been mentioned previously) can take a large amount of time for large molecules. One solution to this problem, now implemented in GAMESS, is to run the calculation in parallel. GAMESS can execute in parallel on many different parallel machines (ones with at least 8 MB of memory), including an Ethernet network of Unix machines. We are very excited about the results we have seen using networks of Unix machines for parallel execution. If you have more questions about this, please feel free to contact either me or Mike Schmidt (mike@si.fi.ameslab.gov).

Theresa Windus
Department of Chemistry
Iowa State University
Ames, IA 50011
e-mail: theresa@si.fi.ameslab.gov

From CUNDARIT@MEMSTVX1.bitnet Mon Nov 2 12:10:00 1992
Date: Mon, 2 Nov 92 17:10 CDT
From: CUNDARIT%MEMSTVX1.BITNET@OHSTVMA.ACS.OHIO-STATE.EDU
Subject: ECPs, transition metals, and parallel computing
To: chemistry@ccl.net

Hi,

I hope the netters won't think I'm "hyping" GAMESS, but Theresa's e-mail has sparked me to inquire about something that has been percolating in my mind since Doug Smith's e-mail a while back on parallel computing. I vaguely recall that last year we had some discussion on parallel computational chemistry. If not, this could be the time.

We have been lucky to have access to the iPSC/860 at Oak Ridge through a collaboration between the Computational Chemistry Group at MSU and the Joint Institute for Computational Sciences located at U.T.-Knoxville. The speed of the machine is enough to keep even impatient, untenured assistant professors from complaining. We have looked at both transition metal and lanthanide catalyst systems and compared identical jobs on the iPSC/860 versus the Cray Y-MP at the San Diego Supercomputer Center. I don't have an example handy which entails using ECPs to calculate 2nd derivatives numerically, but the rough conclusion is the same - "4 to 8 nodes give Cray-like speed!" The table below shows some very promising timings for two sample calculations. One is a 44-basis-function RHF calculation of the nonlinear optical properties of water and the other is a geometry optimization of LuCl2H, a catalyst model, using effective core potentials.
Timings in seconds:

                           Water      LuCl2H
   iPSC/860
      1 node                          608.12
      2 nodes                         322.48
      4 nodes            1777.33      179.21
      8 nodes             958.37      112.76  <<<<<<<<<
     16 nodes             516.19       79.77
     32 nodes             291.31       59.26
     64 nodes             197.82       51.41
   Cray Y-MP8/864                     144.22  <<<<<<<<
   DECstation 3100                   7743.02

For the two comparisons the times given are total CPU times. Of particular interest to us is the comparison between the Cray Y-MP8/864 at the San Diego Supercomputer Center and the iPSC/860.

Several questions for discussion:

1) What other parallel "goodies" are out there? I thought I read somewhere that Gaussian has a parallel version out or soon to be released. Ditto for HONDO. What sorts of machines are these ported to? I can imagine that a "hot" field like this would have new options almost daily.

2) Are there parallel programs that can do MP2 and/or MCSCF? Perhaps other correlated wavefunctions? Are folks working on these?

3) Do any of these iPSC/860 "hypercubes" or related machines come with more than 8 MB/node? From my point of view, this would be the advance that would make it feasible to look at realistic models of experimental systems.

Tom Cundari and Henry Kurtz
Computational Chemistry Group
Department of Chemistry
Memphis State University
Memphis, TN 38152
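As a quick check on the "4 to 8 nodes give Cray-like speed" claim, here is a minimal Fortran 77 sketch (not part of the original posting) that uses only the LuCl2H timings quoted in the table above and prints the speedup relative to the 1-node run together with the corresponding parallel efficiency (speedup divided by node count):

      PROGRAM SPEEDP
C     Illustrative only: LuCl2H iPSC/860 timings copied from the
C     table in the preceding message.
      INTEGER NP(7), I
      REAL T(7), S, E
      DATA NP /1, 2, 4, 8, 16, 32, 64/
      DATA T /608.12, 322.48, 179.21, 112.76, 79.77, 59.26, 51.41/
      DO 10 I = 1, 7
C        Speedup relative to the 1-node run, and per-node efficiency
         S = T(1) / T(I)
         E = S / REAL(NP(I))
         WRITE (*, 100) NP(I), S, E
   10 CONTINUE
  100 FORMAT (I3, ' nodes: speedup ', F6.2, ', efficiency ', F5.2)
      END

On these numbers the 8-node run comes out about 5.4 times faster than the 1-node run (efficiency roughly 0.67) and already beats the quoted Cray Y-MP time of 144.22 s for the same job.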
From jle@world.std.com Mon Nov 2 18:41:51 1992
Date: Mon, 2 Nov 1992 23:41:51 -0500
From: jle@world.std.com (Joe M Leonard)
To: chemistry@ccl.net
Subject: MPP vs. workstations?

As I recall the parallel discussion last year...

1) I think most/all of the Comp Chem codes run on shared-memory MIMD machines, such as SGI's Power series. DOE Lab codes, however, probably run on (far?) more interesting platforms... There are efforts underway with several vendors to get codes ported to their platform(s), which is the real problem with ALL novel architectures and (small) vendors - if you don't have the application codes, you just can't crack the market...

2) A significant problem slowing the spread of MPP machines is the relative lack of tools to assist/facilitate the port. Vector machines have had 8-10 years to get things down, and the preprocessors seem applicable to (super)scalar workstations as well. Parallel machines are new, and look like they'll require FAR more effort to harness their power - it's almost easier to design from scratch than to port (again, DOE folks can probably comment at length re: this).

3) Fortran 77 might be a limiting factor, as the MPP machines seem to look for various flavors of newer Fortrans (F90, Fortran D, etc.). There were also several discussions re: language selection and programmer/scientist/grad student training that were pertinent to this. Object-oriented techniques and vendor/multi-vendor tools will play a role here.

4) Nobody's really addressed the opportunity of clustered workstations as an alternative to shared-memory MIMD machines. I've seen several groups working on this problem, and Dr. Luthi at ETH has extended it to clustered Crays, if you can call a worldwide net a cluster...

5) Many folks have inquired whether there's a win with parallel codes vs. running multiple jobs, one per processor. It seems that if there is ONE job that must get done ASAP, MPP is the only way to go, but for production codes at commercial sites it's a real tradeoff between run time and throughput.

6) The recent Cray announcement (MPP with DEC Alpha) is merely another example that the "attack of the killer micros" ended several years ago, with the micros basically killing everything in sight. It's a great time to be a software developer, but it's tough living on the hardware side. This is a real problem, because who will enable us to use the new machines if there's nobody on the inside with application-specific expertise?

Joe Leonard
jle@world.std.com

P.S. There are several parallel QM codes - our SPARTAN package, GAUSSIAN 92 (both thanks to Roberto!), GAMESS-UK (Martyn Guest), ...
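Coming back to the request in the first message for an example of how an ECP is introduced in the Gaussian input: the skeleton below shows the general shape of a single-point job using a general ("Gen") basis together with Pseudo=Read, with the ECP read from the input stream. It is only a layout sketch - the HCl geometry, every basis exponent and coefficient, and all of the ECP powers of R, exponents and coefficients are placeholder numbers, not a real (Durand-Barthelat or any other) parameter set - and the keyword spelling and section ordering should be checked against the manual for the Gaussian version actually in use.

# HF/Gen Pseudo=Read

HCl single point with an ECP on Cl - layout sketch only, placeholder numbers

0 1
Cl   0.000000   0.000000   0.000000
H    0.000000   0.000000   1.280000

Cl 0
SP   2   1.00
     2.500000     0.400000     0.300000
     0.600000     0.700000     0.800000
****
H 0
S   2   1.00
     1.300000     0.300000
     0.230000     0.800000
****

Cl 0
CL-ECP   2   10
d potential
  1
1     3.000000     -4.000000
s-d potential
  2
0     5.000000      3.000000
2     1.500000     10.000000
p-d potential
  1
2     2.000000      6.000000

In each line of the ECP blocks the first integer is the power of R (the nval that Rafael's message refers to), followed by the Gaussian exponent and the coefficient; the "2  10" on the CL-ECP card gives the highest angular momentum of the potential and the number of core electrons replaced.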