From chemistry-request@ccl.net Tue Dec 3 01:22:53 1991
Date: Tue, 3 Dec 91 16:34:25 EST
From: Dr. David A. Winkler
Subject: Homology modelling with Sybyl/Biosym
To: chemistry@ccl.net
Status: RO

I am a new subscriber to the mailing list and wonder whether anyone has had
experience with homology modelling using SYBYL and the Biosym software.
We have SYBYL and need to buy Biopolymer and Composer. The price for the
Biosym software is much higher, and there would need to be a good reason to
buy it instead of the SYBYL modules.

Dave Winkler

From chemistry-request@ccl.net Tue Dec 3 01:24:34 1991
Date: Tue, 3 Dec 91 16:39:10 EST
From: Dr. David A. Winkler
Subject: Neural networks for QSAR studies
To: chemistry@ccl.net
Status: RO

Can anyone recommend neural network software for Mac/Vax/Iris which would
be suitable for QSAR work (e.g. Andrea et al., J. Med. Chem., 1991, 34,
2824-36)? SGI list the 'Neural Network Toolbox' in their Applications
Directory, but I can't get Accurate Automation Corp (Chattanooga) to
respond to my faxes.

Dave Winkler
CSIRO Division of Chemicals and Polymers
Clayton, Australia

From chemistry-request@ccl.net Tue Dec 3 04:06:38 1991
Date: Tue, 3 Dec 91 09:38:17 +0100
From: martin@biokth.sunet.se (Martin Norin, Dept. Biochem., Royal Inst.
      Technol., Stockholm, tel. +46-8-7907512, e-mail: martin@physchem.kth.se)
To: "chemistry@ccl.net"@kth.sunet.se
Subject: RE: Homology modelling with Sybyl/Biosym
Status: RO

Hello Dave,

We have run Sybyl/Composer and done some extensive testing of the program.
We were very confident with both the results of the modelling and the
architecture of the program. I can strongly recommend the Composer program
for protein modelling, and we have decided to purchase it (maybe it should
be noted that we already have the Sybyl/Biopolymer).

We have been using the Sybyl program for protein modelling and organic
chemistry applications for some years and are satisfied with its
performance. I find the Sybyl/Biopolymer package useful for modelling 3-D
structures of proteins and DNA/RNA. I cannot judge other packages such as
Biosym's, since I have not used them extensively. One thing that might be
missing in Sybyl at present is a really good interface to protein sequence
databases (Swissprot etc.) for multiple alignments and cluster analysis.
Here I am not the right person to recommend any program, since I am not
experienced in the field.

In the Composer test we modelled two proteins: one whose structure is known
(chymotrypsin) and one unknown (E. coli beta-lactamase).

1) Chymotrypsin: In this case we discarded all chymotrypsin structures from
the database (otherwise the task would have been too easy). The final
structure was very near the crystallographic one.

2) E. coli beta-lactamase: In this case the structure of the protein has
not yet been determined, so we cannot compare the model with a "real"
structure. However, the active site of the predicted structure has all the
features that a class-A lactamase should have. We also did some further
simulations to explore the properties of the active site. These confirmed
that "the model was able to do catalysis".

If you want, I can snail-mail a summary of the lactamase modelling we did.
Just give me your address.

Greetings,
martin
---------------------------------------------------------------------------
Martin Norin            tel: +46-8-7907512
Dept. Biochem.          fax: +46-8-7231890
Royal Inst. Technol.
e-mail (bitnet): martin@physchem.kth.se
100 44 Stockholm
Sweden
---------------------------------------------------------------------------

From chemistry-request@ccl.net Tue Dec 3 07:53:47 1991
Date: Tue, 3 Dec 91 07:49:16 -0500
From: jle@world.std.com (Joe M Leonard)
To: chemistry@ccl.net
Subject: QSAR software
Status: RO

I am interested in learning about what QSAR software is available for
workstations, and how I can get the glossies with such information. I'd
prefer software that's commercially available, but am interested in
non-commercial software as well. If there's a large response, I'll
summarize for the net.

Thanks in advance,
Joe Leonard
jle@world.std.com

From chemistry-request@ccl.net Tue Dec 3 15:25:29 1991
Date: 3 Dec 91 15:15 LCL
From: PA13808%UTKVM1.BITNET@OHSTVMA.ACS.OHIO-STATE.EDU
To: CHEMISTRY@ccl.net
Subject: BITNET mail follows
Status: RO

One of our faculty would like the e-mail address of Dr. Leo Radom,
Australian National University. Can anyone out there provide it?

John E. Bloor (PA13808 at utkvm1.utk.edu) (phone messages to 615-974-3427)

From jkl@ccl.net Tue Dec 3 17:21:22 1991
To: chemistry@ccl.net
Subject: Massively Parallel Machines
Date: Tue, 03 Dec 91 17:21:03 EST
From: jkl@ccl.net
Status: RO

Joe Leonard submitted a digest of parallelization efforts and I am
forwarding it to the list (jkl@ccl.net). It is long, but worth reading
every bit of it. Some of the messages appeared previously on the list.

Thank you, Joe.

Jan
jkl@ccl.net
==================================================================

> From ross@zeno.mmwb.ucsf.EDU Thu Nov 14 18:16:01 1991
> Date: Thu, 14 Nov 91 15:07:37 PST
> From: ross@zeno.mmwb.ucsf.EDU (Bill Ross)
> To: jle@world.std.com
> Subject: Re: mpp chem codes
>
> >From Supercomputing '91 Advance Program (Nov 18-22, Albuquerque):
>     Tuesday 1:30-5:00 pm
>     Parallel Algs for MD & MC
>     Bruce Boghosian, Thinking Machines
>
> An earlier request of mine:
>
> >From chemistry-request@ccl.net Sat Jul 6 15:29:02 1991
> >
> > I am collecting information on parallel processing in molecular
> > mechanics/dynamics simulation: algorithms, anecdotes and references.
> > All architectures are of interest, including shared memory (e.g. Cray,
> > Iris), SIMD (e.g. Connection Machine, MasPar) and MIMD (e.g. hypercube).
>
> Bill Ross
>
> >From d3e353@minerva.mmwb.ucsf.edu Tue Jul 23 09:40:44 1991
> Subject: Re: parallel mol mech
>
> Bill Ross,
>
> Last week Pacific Northwest Lab and Argonne National Lab jointly
> sponsored a Workshop on Parallel Computing in Chemical Physics. We
> invited about 30 people to give talks about their current work,
> including several on parallel MD. We also held topical discussion
> groups, and one specifically concerned parallel MD. The proceedings of
> the workshop are being collected and will be published in early spring
> - a long time for you to wait, but actually a short publishing
> turnaround.
>
> The answers to your questions are not simple, because the optimal
> implementation of a parallel MD method is determined by the kind of
> system being studied, as well as the target parallel machine. In
> general, successful codes have been written which distribute the
> nonbond list, which pass around a distributed list of atoms (systolic
> loop), and which partition space (for a large number of particles).
> You might be interested in a paper by William Smith, Comp. Phys.
> Comm., 62 (1991) p. 229-248, which reviews several approaches.
>
> Ray Bair
> Molecular Science Software Group
> Molecular Science Research Center
> Pacific Northwest Laboratory
> email: d3e353@pnlg.pnl.gov
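As an aside on the space-partitioning approach Ray Bair mentions, a minimal
link-cell sketch follows. It is purely illustrative and is not taken from any
code discussed in this digest; the Lennard-Jones pair potential, the cubic
periodic box, the cutoff and all function names are assumptions made for the
example. Atoms are binned into cells at least one cutoff wide, so each atom
interacts only with atoms in its own and neighbouring cells; a parallel code
would give each processor a block of cells and exchange only the boundary
("halo") atoms.

    # Illustrative link-cell (space-partitioning) force evaluation.
    # Serial sketch only; not code from any package mentioned in this digest.
    import numpy as np

    def lj_forces_link_cell(pos, box, rcut, epsilon=1.0, sigma=1.0):
        """Lennard-Jones forces via a link-cell search in a cubic periodic box."""
        ncell = max(1, int(box // rcut))          # cells per side, each >= rcut wide
        width = box / ncell
        cells = {}                                # cell index tuple -> atom indices
        for i, r in enumerate(pos):
            key = tuple(np.floor(r / width).astype(int) % ncell)
            cells.setdefault(key, []).append(i)
        forces = np.zeros_like(pos)
        for (cx, cy, cz), atoms in cells.items():
            # neighbouring cells (a set, so very small boxes do not double count)
            nbrs = {((cx + dx) % ncell, (cy + dy) % ncell, (cz + dz) % ncell)
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)}
            for nbr in nbrs:
                for i in atoms:
                    for j in cells.get(nbr, []):
                        if j <= i:                # count each pair once
                            continue
                        rij = pos[i] - pos[j]
                        rij -= box * np.round(rij / box)   # minimum image convention
                        r2 = float(rij @ rij)
                        if r2 < rcut * rcut:
                            sr6 = (sigma * sigma / r2) ** 3
                            f = 24.0 * epsilon * (2.0 * sr6 * sr6 - sr6) / r2
                            forces[i] += f * rij
                            forces[j] -= f * rij
        return forces

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        pos = rng.uniform(0.0, 10.0, size=(200, 3))   # 200 atoms in a 10 x 10 x 10 box
        print("net force (should be ~0):",
              lj_forces_link_cell(pos, box=10.0, rcut=2.5).sum(axis=0))

The same cell binning underlies several of the link-cell references that
appear in the list later in this digest.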
> >From jones@think.com Tue Sep 3 06:20:25 1991
> Subject: Parallel Mol Mech
>
> Paul Bash, ex-Karplus postdoc (and ex-UCSF grad student!), has worked
> with TMC people on the core of CHARMM (non-bonded interactions) - that
> has had two incarnations. The first was written in *LISP with Bernie
> Murray of TMC and handled all interactions with no cutoff. There is a
> technical report available on that which I can send you. The second
> generation was written with Alex Vasilevsky in CM Fortran and included
> a cutoff - to the best of my knowledge this was never written up. Bash
> is now at FSU - email address bash@next1.sb.fsu.edu. I do not think
> this code is available. Klaus Schulten at the Beckman Institute at the
> U of Illinois has also written the core of CHARMM. This version is in
> the language C* and handles all interactions. Klaus's group are very
> experienced on the CM and write sophisticated code. Eli Lilly are
> going to be using his code. He could tell you more about availability
> - there are all sorts of restrictions as regards Polygen ... I don't
> have an email address for Klaus, but could find it if need be. He has
> several papers on this work and similar work using transputers.
>
> Martin Karplus has a postdoc in his lab working with Bernie Murray
> from TMC on a formal port of the entire CHARMM package - this project
> has just started.
>
> George Phillips' group at Rice has started a port of XPLOR, beginning
> with the crystallographic components - FFTs, electron density
> calculations. Axel Brunger and I are 'consulting' on this project.
>
> Other molecular dynamics ...
>
> Various groups are working on systems like gases or collections of
> small molecules - Bruce Boghosian and Pablo Tamayo at TMC
> (bmb@think.com, tamayo@think.com) could point you in the right
> directions there, if that is where your interest lies. These usually
> involve more sophisticated algorithms but study much simpler systems.
>
> Robert Jones                    jones@think.com
> Thinking Machines Corporation   245 First Street   Cambridge MA 02142
>
> >From heller@lisboa.ks.uiuc.edu Sat Sep 14 10:34:46 1991
>
> @inproceedings{SCHU91,
>   author={Klaus Schulten},
>   title={Computational Biology on Massively Parallel Machines},
>   booktitle={Proceedings of the International Conference on Parallel
>              Computation, Salzburg, Austria},
>   publisher={Springer, New York},
>   year=1991,
>   note={in press}
> }
>
> @article{RAPA88,
>   author={D. C. Rapaport},
>   title={Large-scale molecular dynamics simulation using vector and
>          parallel computers},
>   journal={Computer Physics Reports},
>   volume=9,
>   pages={1--53},
>   year=1988
> }
>
> >From chemistry-request@ccl.net Mon Sep 16 08:23:42 1991
> From: Florian Mueller-Plathe
> Subject: Parallel MD
>
> Hi there,
>
> I received about a dozen notes reading something like: "I am not the
> person who originally requested information about parallel MD, but I
> would appreciate it if you could send me a copy of your list of
> references as well." Well, here it is. I am posting it to the list.
> Please have mercy and do not send in more requests! The list contains
> most of the references on parallelisation I know. It is probably still
> incomplete. Send in your favourite refs and I will add them to the
> list. There are also a few references on vectorisation, which are in
> the list for historical reasons.
> Cheers,
> Florian Mueller-Plathe (fmp@igc.ethz.ch)
>
> %A A Windemuth
> %A K Schulten
> %T Molecular dynamics simulation on the Connection Machine
> %J Molec. Simul.
> %V 5
> %P 353-361
> %D 1991
>
> %A D Fincham
> %T Parallel computers and molecular simulation
> %J Molec. Simul.
> %V 1
> %P 1-45
> %D 1987
>
> %A H Heller
> %A H Grubmueller
> %A K Schulten
> %T Molecular dynamics simulation on a parallel computer
> %J Molec. Simul.
> %V 5
> %P 133-165
> %D 1990
>
> %A D Fincham
> %A B J Ralston
> %T Molecular dynamics simulation using the Cray-1 vector processor
> %J Comput. Phys. Commun.
> %V 23
> %P 127-134
> %D 1981
>
> %A R Vogelsang
> %A M Schoen
> %A C Hoheisel
> %T Vectorisation of molecular dynamics Fortran programs using the Cyber 205 vecr
> %J Comput. Phys. Commun.
> %V 30
> %P 235-241
> %D 1983
>
> %A F Sullivan
> %A R D Mountain
> %A J O'Connel
> %T J. Comput. Phys.
> %J J. Comput. Phys.
> %V 5
> %P 272-279
> %D 1984
>
> %A S Brode
> %A R Ahlrichs
> %T An optimised MD program for the vector computer cyber 205
> %J Comput. Phys. Commun.
> %V 42
> %P 51-57
> %D 1986
>
> %A W F van Gunsteren
> %A H J C Berendsen
> %A F Colonna
> %A D Perahia
> %A J P Hollenberg
> %A D Lellouch
> %T On searching neighbours in computer simulations of macromolecular systems
> %J J. Comput. Chem.
> %V 5
> %P 272-279
> %D 1984
>
> %A J Boris
> %T A vectorized neighbours algorithm of order N using a monotonic logical grid
> %J J. Comput. Phys.
> %V 66
> %P 1-20
> %D 1986
>
> %A O Teleman
> %A B Jonsson
> %T Vectorizing a general-purpose molecular dynamics simulation program
> %J J. Comput. Chem.
> %V 7
> %P 58-66
> %D 1986
>
> %A G S Grest
> %A B Duenweg
> %A K Kremer
> %T Vectorized link-cell all Fortran code for molecular dynamics simulations for
> %J Comput. Phys. Commun.
> %V 55
> %P 269-285
> %D 1989
>
> %A T W Clark
> %A J A McCammon
> %T Parallelization of a molecular dynamics non-bonded force algorithm for MIMD
> %J Computers and Chemistry
> %V 14
> %P 219-224
> %D 1990
>
> %A R D Skeel
> %T Macromolecular dynamics on a shared-memory multiprocessor
> %J J. Comp. Chem.
> %V 12
> %P 175-179
> %D 1991
>
> %A M R S Pinches
> %A D J Tildesley
> %A W Smith
> %T Large scale molecular dynamics on parallel computers using the link-cell alg
> %J Molec. Simul.
> %V 6
> %P 51-87
> %D 1991
>
> %A F Bruge
> %A V Martorana
> %A S L Fornili
> %T Concurrent molecular dynamics simulation of ST2 water on a Transputer array
> %J Molec. Simul.
> %V 1
> %P 309-320
> %D 1988
>
> %A A R C Raine
> %A D Fincham
> %A W Smith
> %T Systolic loop methods for molecular dynamics simulation using multiple Trans
> %J Comput. Phys. Commun.
> %V 55
> %P 13-30
> %D 1989
>
> %A H G Petersen
> %A J W Perram
> %T Molecular dynamics on transputer arrays I. Algorithm design, programming iss
> %J Mol. Phys.
> %V 67
> %P 849-860
> %D 1989
>
> %A S J Zara
> %A D Nicholson
> %T Grand canonical ensemble Monte Carlo simulation on a Transputer Array
> %J Molec. Simul.
> %V 5
> %P 245-261
> %D 1990
>
> %A W Smith
> %T Molecular Dynamics on hypercube parallel computers
> %J Comput. Phys. Commun.
> %V 62
> %P 229-248
> %D 1991
>
> %A C J Craven
> %A G S Pawley
> %T Molecular dynamics of rigid polyatomic molecules on transputer arrays
> %J Comput. Phys. Commun.
> %V 62
> %P 169-178
> %D 1991
>
> %A F Mueller-Plathe
> %T Parallelising a molecular dynamics algorithm on a multi-processor workstation
> %J Comput. Phys. Commun.
> %V 61
> %P 285-293
> %D 1990
>
> %A F Mueller-Plathe
> %A D Brown
> %T Multi-colour algorithms in molecular simulation: vectorisation and parallel
> %J Comput. Phys. Commun.
> %V 64
> %P 7-14
> %D 1991
>
> %A M Schoen
> %T Structure of a simple molecular dynamics Fortran program optimized for Cray s
> %J Comput. Phys. Commun.
> %V 52
> %P 175-185
> %D 1989
>
> %A S Gupta
> %T Vectorization of molecular dynamics simulation for fluids of nonspherical mo
> %J Comput. Phys. Commun.
> %V 48
> %P 197-206
> %D 1988
>
> %A T P Straatsma
> %A J A McCammon
> %T ARGOS, a vectorized general molecular dynamics program
> %J J. Comp. Chem.
> %V 11
> %P 943-951
> %D 1990
>
> %A Z A Rycerz
> %T Acceleration of molecular dynamics simulation of order N with neighbour list
> %J Comput. Phys. Commun.
> %V 60
> %P 297-303
> %D 1990
>
> %A F Bruge
> %A S L Fornili
> %T A distributed dynamic load balancer and its implementation on multi-transput
> %J Comput. Phys. Commun.
> %V 60
> %P 39-45
> %D 1990
>
> %A F Bruge
> %A S L Fornili
> %T Concurrent molecular dynamics simulation of spinodal phase transition on tra
> %J Comput. Phys. Commun.
> %V 60
> %P 31-38
> %D 1990
>
> %A L J Alvarez
> %A A Alavi
> %A J I Siepmann
> %T A vectorisable algorithm for calculating three-body interactions
> %J Comput. Phys. Commun.
> %V 62
> %P 179-186
> %D 1991
>
> %A D C Rapaport
> %T Multi-million particle molecular dynamics I. Design considerations for vectog
> %J Comput. Phys. Commun.
> %V 62
> %P 198-216
> %D 1991
>
> %A D C Rapaport
> %T Multi-million particle molecular dynamics II. Design considerations for par
> %J Comput. Phys. Commun.
> %V 62
> %P 217-228
> %D 1991
>
> %A K Refson
> %A G S Pawley
> %T Correlations in the plastic crystal phase of n-butane
> %J Comput. Phys. Commun.
> %V 62
> %P 279-288
> %D 1991
>
> %A S Chynoweth
> %A U C Klomp
> %A L E Scales
> %T Simulation of organic liquids using pseudo-pairwise interatomic forces on a y
> %J Comput. Phys. Commun.
> %V 62
> %P 297-306
> %D 1991
>
> %A D Fincham
> %A P J Mitchell
> %T Multicomputer molecular dynamics simulation using distributed neighbour lists
> %J Molec. Simul.
> %V (submitted)
> %O also Daresbury Preprint DL/SCI/P730T, 1990
>
> %A V Yip
> %A R Elber
> %T Calculations of a list of neighbors in molecular dynamics simulations
> %J J. Comp. Chem.
> %V 10
> %P 921-927
> %D 1989
>
> %A D W Noid
> %A B G Sumpter
> %A B Wunderlich
> %A G A Pfeffer
> %T Molecular dynamics simulations of polymers: methods for optimal Fortran prog
> %J J. Comp. Chem.
> %V 11
> %P 236-241
> %D 1990
>
> %A D C Rapaport
> %T Large-scale molecular dynamics simulation using vector and parallel computers
> %J Comput. Phys. Rep.
> %V 9
> %P 1-54
> %D 1988
>
> %A A R C Raine
> %T Systolic Loop Methods for Molecular Dynamics Simulation, Generalised for Mac
> %J Mol. Sim.
> %V 7
> %P 59-69
> %D 1991
>
> %A H Grubmueller
> %A H Heller
> %A A Windemuth
> %A K Schulten
> %T Generalized Verlet Algorithm for Efficient Molecular Dynamics Simulations with Long-range Interactions
> %J Mol. Sim.
> %V 6
> %P 121-142
> %D 1991
>
> %A S J Zara
> %A D Nicholson
> %T Grand Canonical Ensemble Monte Carlo Simulation on a Transputer Array
> %J Mol. Sim.
> %V 5
> %P 245-261
> %D 1990
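Several entries in the list above (Raine, Fincham and Smith; Raine 1991), as
well as Ray Bair's note earlier in the digest, refer to the systolic-loop force
decomposition. The sketch below is only an illustration of the idea, emulated
serially in Python; the 1/r^2 pair force, the block count and all names are
inventions for the example, not code from any package listed. Each (simulated)
processor keeps a home block of atoms while travelling copies of the blocks are
shifted around a ring; after one full circulation every processor has
accumulated the complete force on its own atoms. A real implementation would
hold each block on its own node, do the shifts with message passing, and
exploit Newton's third law to halve the pair work.

    # Illustrative systolic-loop force decomposition (serial emulation only).
    import numpy as np

    def pair_force(ri, rj):
        """Force on atom i due to atom j for a simple 1/r^2 repulsion (placeholder)."""
        rij = ri - rj
        r2 = float(rij @ rij) + 1e-12
        return rij / r2 ** 1.5

    def systolic_forces(pos, nproc):
        n = len(pos)
        blocks = np.array_split(np.arange(n), nproc)      # home atoms of each "processor"
        travelling = [pos[b].copy() for b in blocks]      # travelling coordinate packets
        owners = list(range(nproc))                       # which block each packet came from
        forces = np.zeros_like(pos)
        for step in range(nproc):                         # step 0 = interaction with own packet
            for p in range(nproc):
                home, guest = blocks[p], travelling[p]
                same_block = (owners[p] == p)
                for a, i in enumerate(home):
                    for b in range(len(guest)):
                        if same_block and b == a:
                            continue                      # skip self-interaction
                        forces[i] += pair_force(pos[i], guest[b])
            # shift every travelling packet to the next processor around the ring
            travelling = [travelling[-1]] + travelling[:-1]
            owners = [owners[-1]] + owners[:-1]
        return forces

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        pos = rng.normal(size=(40, 3))
        f_sys = systolic_forces(pos, nproc=4)
        f_ref = np.zeros_like(pos)                        # brute-force O(N^2) check
        for i in range(len(pos)):
            for j in range(len(pos)):
                if i != j:
                    f_ref[i] += pair_force(pos[i], pos[j])
        print("max deviation from O(N^2) reference:", np.abs(f_sys - f_ref).max())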
> >From colvin@lll-crg.llnl.gov Thu Nov 14 19:46:25 1991
> Date: Thu, 14 Nov 91 16:38:26 -0800
> From: colvin@lll-crg.llnl.gov (Mike Colvin)
> To: jle@world.std.com
> Subject: Parallel QC programs
> Status: R
>
> Joe,
>
> As you surmised, there are a number of groups working on massively
> parallel QC codes. Unfortunately, it is very difficult to "port"
> programs efficiently to massively parallel computers, since radical
> restructuring is usually necessary. I am part of a group at Sandia
> National Labs that is writing from scratch a parallel SCF/SCF
> gradient/MP2 program for the nCUBE 2 and iPSC/860 (and, with luck, all
> similar architectures). I know that there are a number of other
> efforts going on at Argonne, PNL, and Daresbury. In fact, there was a
> conference this past summer on using massively parallel computers for
> QC; a future issue of Theoretica Chimica Acta will contain the
> proceedings of this conference. Developing a good parallel matrix
> diagonalizer has been one of the big difficulties. We have one that
> works fairly well provided # processors < 2 * # basis functions. Rik
> Littlefield at PNL has been working extensively on this problem and
> surely has a much better routine.
>
> --Mike Colvin
> mecolv@sandia.llnl.gov
>
> >From cushing@cse.ogi.edu Thu Nov 14 20:45:34 1991
> Date: Thu, 14 Nov 91 17:34:41 -0800
> From: Judy Bayard Cushing
> To: jle@world.std.com
> Subject: Re: mpp chem codes
>
> I don't know if anybody has ported chemistry codes (such as GAUSSIAN,
> AMBER, etc.) to massively parallel machines, BUT, for my own research,
> I am very interested. I'm pulling together a database design which
> would span codes such as Gaussian and GAMESS, and perhaps offer
> support to parallel versions of those codes.
>
> So, I'd be delighted to find someone doing this work!
>
> (Please fill me in on responses to your query!)
>
> Thanks,
> Judy Cushing
>
> Message 7:
> >From bernhold@qtp.ufl.edu Thu Nov 14 21:18:17 1991
> From: bernhold@qtp.ufl.edu
> To: jle@world.std.com (Joe M Leonard)
> Subject: Re: mpp chem codes
>
> I don't know of anyone porting codes off hand. You might look up a
> book by Stephen Wilson. There was a three-volume series, the title of
> which I don't remember. The third one has the subtitle "Concurrent
> Computation in Chemical Calculations". It includes a paper from
> Schaefer's group about doing the two-electron transformation on MPP,
> I think.
>
> I'd be curious to see a summary of what you find out. My personal
> opinion is that MPP doesn't seem terribly well suited to electronic
> structure calculations, but I don't have anything solid to back that
> up.
>
> David Bernholdt              bernhold@qtp.ufl.edu
> Quantum Theory Project       bernhold@ufpine.bitnet
> University of Florida
> Gainesville, FL 32611
> 904/392-6365
>
> >From rbw@msc.edu Fri Nov 15 11:00:40 1991
> Date: Fri, 15 Nov 91 09:52:39 -0600
> From: rbw@msc.edu (Richard Walsh)
> To: jle@world.std.com
> Subject: Re: mpp chem codes
>
> Here is a reference that might be of interest:
>
>     The implementation of ab initio quantum chemistry calculations
>     on transputers
>     Cooper and Hillier
>     Journal of Computer Aided Molecular Design, 5 (1991) 171-185
>
> This is a report of some work done with GAMESS and transputers in the
> UK. Not exactly massively parallel, but it gets at some of the same
> issues.
>
> I would be interested in anything else that you receive.
> Sincerely,
>
> Richard Walsh
> Minnesota Supercomputer Center
> 612-626-1510
> rbw@uh.msc.edu
>
> Message 4:
> >From whfink@ucdavis.edu Fri Nov 15 13:00:44 1991
> Date: 15 Nov 91 09:37:11 PST
> To: jle@world.std.com
> Subject: chemistry codes on massively parallel machines
>
> Dear Joe:
>
> I have been involved in working on the BBN TCN-2000, a
> butterfly-switched design with 128 nodes of Motorola 8000's, which is
> the machine that Lawrence Livermore Laboratory has been using as an
> experimental prototype of a massively parallel machine in their
> Massively Parallel Computing Initiative. I have ported the ARGOS
> integral routines from the COLUMBUS package onto that machine. That is
> not very interesting from a pragmatic perspective, since who wants a
> simple list of integrals, but it has given me some experience and
> understanding of what one needs to think about in going parallel. It
> is also the most obviously parallelizable part of quantum chemistry
> codes. Mike Colvin at Sandia has done a good deal of parallelization
> and has a working parallel direct SCF. Since Sandia is operated by
> AT&T he is not at liberty to share that code, but he has code he wrote
> for his thesis work with Fritz Schaefer that illustrates the
> organization of the work very nicely. That code was written for a
> hypercube, so it is really a toy.
>
> I am presently working on making a parallel direct SCF using the
> ingredients of the COLUMBUS package. Ron Shepard recently announced on
> the comp chem list the availability of parallelized or parallelizable
> versions of the COLUMBUS package for use on a distributed network.
> Rick Kendall at Battelle Northwest has also been working on some
> parallelization, and of course Michel Dupuis is the first pioneer in
> that he worked with Clementi's LCAP five years ago already. Working
> with the massively parallel one-bit computers does not look attractive
> to me, although I may be too old and tired, so my perspective is
> jaundiced. The Ames, Iowa lab of DOE also has some parallel projects
> underway, but I don't know any of the details. Bob Harrison at Argonne
> also announced on the comp chem list the availability of his
> message-passing routines for going parallel over a network. My
> experience with parallel machines is that simple porting is not going
> to do you any good. The algorithms need to be reexamined and set up to
> go parallel manually. It will be many moons before compilers will be
> clever enough to globally analyze code and do automated
> parallelization, except for the most trivial of local operations.
>
> Best regards,
> Bill Fink
> WHFINK@UCDAVIS
>
> Message 6:
> >From EWING@JCVAXA.JCU.EDU Fri Nov 15 16:01:03 1991
> From: EWING@JCVAXA.JCU.EDU
> Date: Fri, 15 Nov 1991 15:52 EDT
> Subject: mpp quantum chemistry codes
> To: jle@world.std.com
>
> Ron Shepard at Argonne has been parallelizing parts of the COLUMBUS
> package. I don't know any details on this. A good person to contact
> for more info is Prof. Isaiah Shavitt at Ohio State
> (shavitt@mps.ohio-state.edu).
>
> Hope this helps.
>
> -----------------------------------------------------------------------
> David W. Ewing               Internet: ewing@jcvaxa.jcu.edu
> Department of Chemistry      Bitnet:   ewing@jcuvax
> John Carroll University      Voice:    (216) 397-4742 or 397-4241
> Cleveland, OH 44118 USA      Fax:      (216) 397-4256
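Mike Colvin's message above notes that a good parallel matrix diagonalizer has
been one of the big difficulties, with a routine that works provided
# processors < 2 * # basis functions. One classic family with that flavour is
Jacobi diagonalization: the pivots of a sweep can be grouped into rounds of
disjoint (p, q) pairs, roughly n/2 rotations per round, each of which could be
handled by a different processor. The sketch below only illustrates that
structure, serially and inefficiently; it is not the Sandia or PNL routine, and
the round-robin schedule, function names and test matrix are assumptions made
for the example.

    # Illustrative Jacobi diagonalization with a round-robin pivot schedule.
    # A sketch only; none of the codes discussed in this digest are implied.
    import numpy as np

    def round_robin_rounds(n):
        """Yield rounds of disjoint index pairs covering all pairs of 0..n-1."""
        idx = list(range(n)) + ([None] if n % 2 else [])   # pad to even length
        m = len(idx)
        for _ in range(m - 1):
            pairs = [(idx[k], idx[m - 1 - k]) for k in range(m // 2)
                     if idx[k] is not None and idx[m - 1 - k] is not None]
            yield [(min(p, q), max(p, q)) for p, q in pairs]
            idx = [idx[0]] + [idx[-1]] + idx[1:-1]          # rotate all but the first

    def jacobi_eigenvalues(a, sweeps=20, tol=1e-12):
        a = np.array(a, dtype=float, copy=True)
        n = a.shape[0]
        for _ in range(sweeps):
            if np.sqrt(np.sum(np.tril(a, -1) ** 2)) < tol:  # off-diagonal norm
                break
            for round_pairs in round_robin_rounds(n):
                # The pivots in this round act on disjoint index pairs; a parallel
                # code would compute all the angles from the same matrix and apply
                # the rotations concurrently (here they are applied one by one).
                for p, q in round_pairs:
                    if abs(a[p, q]) < 1e-300:
                        continue
                    theta = 0.5 * np.arctan2(2.0 * a[p, q], a[q, q] - a[p, p])
                    c, s = np.cos(theta), np.sin(theta)
                    rot = np.eye(n)
                    rot[p, p] = c; rot[q, q] = c
                    rot[p, q] = s; rot[q, p] = -s
                    a = rot.T @ a @ rot                     # zeroes a[p, q]
        return np.sort(np.diag(a))

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        m = rng.normal(size=(6, 6))
        sym = 0.5 * (m + m.T)
        print("Jacobi :", jacobi_eigenvalues(sym))
        print("LAPACK :", np.sort(np.linalg.eigvalsh(sym)))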
Cramer" To: chemistry@ccl.net Subject: G90 idiosyncracy (or user idiocy -- one never knows . . .) Status: RO O net, Here is a curious "bug" (whether mine or the program's is unclear) in Gaussian 90H which perhaps can be addressed by the Gaussian powers that be. The following is the .dat file for an UMP2 optimization on PF5(-) which has C4v symmetry and is a doublet. The geometry and force constants are from an analytical freq calculation at the UHF/6-31g* level. Run is on a Stardent 3000. Using either scf=direct or not (as below), using nosym or not (as below), reading all data from checkpoint file or by explicit input (as below) has no effect, the same error always occurs in link 906. Alas, the error message itself is not particularly illuminating to me. I thank in advance all advisors and trouble-shooters! Chris Cramer ( cjcramer@crdec.apgea.army.mil or mf12131@msc.edu ) INPUT: %mem=1000000 %chk=/scratch/cjcramer/pf5.chk # ump2/6-31g* opt=(ef,noeigentest) phosphinyl -1 2 P F,1,r1 F,1,r2,2,ba F,1,r2,2,ba,3,90.,0 F,1,r2,2,ba,3,180.,0 F,1,r2,2,ba,3,270.,0 r1=1.57 1 0.362653 r2=1.66 1 1.300499 ba=90.8 1 3.691106 SELECTED OUTPUT: 1Entering Gaussian System, Link 0=g90 Input=pf5a.dat Output=pf5a.out Initial command: /opt/g90/l1.exe of initial guess= 0.7547 Alpha deviation from unit magnitude is 5.88D-15 for orbital 73. Alpha deviation from orthogonality is 1.07D-14 for orbitals 63 59. Beta deviation from unit magnitude is 9.21D-15 for orbital 63. Beta deviation from orthogonality is 7.90D-15 for orbitals 39 35. Using DIIS extrapolation. UHF open shell SCF: Requested convergence on density matrix=1.00D-08 within 64 cycles. Unsorted integral processing. Two-electron integral symmetry used by symmetrizing Fock matrices. IEnd= 282941 IEndB= 282941 NGot= 1000000 MDV= 979203 LenX= 717059 SCF DONE: E(UHF) = -838.050763492 A.U. AFTER 16 CYCLES CONVG = 0.4158D-09 -V/T = 2.0018 S**2 = 0.7551 KE= 8.365486838613D+02 PE=-2.799033733053D+03 EE= 7.334032921720D+02 Annihilation of the first spin contaminant: S**2 BEFORE ANNIHILATION 0.7551, AFTER 0.7500 Range of M.O.s used for correlation: 1 94 NBasis= 94 NAE= 31 NBE= 30 NFC= 0 NFV= 0 NROrb= 94 NOA= 31 NOB= 30 NVA= 63 NVB= 64 FulOut=F Deriv=T AODrv=T MMem= 0 MDisk= 31 NDisk1= 1104547 NDisk= 35176459 MDiskD= 31 NOA= 31 W3Min= 1661168 MinDsk= 893885 NDisk2= 2632672 NBas6D= 94 NBas2D= 4633 NTT= 4465 NDExt= 435502 LenExt= 500000 MDV= 1000000 MDiskM= 110 Disk-based method using N**3 memory for 31 occupieds at a time. Estimated scratch disk usage= 35176459 words. Actual scratch disk usage= 34558579 words. JobTyp=2 Pass 1: I= 1 to 31. Logic failure, MDV= 815913 NBuf= -7 but FulOut=F. Error termination in Lnk1e. 
From chemistry-request@ccl.net Wed Dec 4 17:52:07 1991
Date: Wed, 4 Dec 91 16:45:24 EST
From: mitchell@bdrc.bd.com (Michael Mitchell)
Subject: E-mail address for Peter Goodford, GRIN/GRID folks?
To: chemistry@ccl.net
Status: RO

Does anyone have an e-mail address for Peter Goodford or the GRIN/GRID
folks? Thanks.

Mike Mitchell
mitchell@bdrc.bd.com