From FR042008@YSUB.YSU.EDU Mon Jun 7 06:48:15 1993
Message-Id: <199306071448.AA02026@oscsunb.ccl.net>
Date: Mon, 07 Jun 93 10:48:15 EDT
From: "Janet Del Bene"
To: chemistry@ccl.net
Subject: Computational Chemistry and Theoretical Organic Chemistry Symposia

Two symposia in the area of computational quantum chemistry have been
organized for the 25th Central Regional ACS Meeting, which will be held in
Pittsburgh on October 4 and 5, 1993.  The speakers and topics for the two
symposia are listed below.  Registration information will appear in a future
issue of C&EN.  Further information about the symposia may be obtained by
contacting the organizers.

Computational Chemistry:        K. D. Jordan
                                e-mail: JORDAN@CPWPSCA
                                voice:  412-624-8690
                                fax:    412-624-5008

Theoretical Organic Chemistry:  Janet E. Del Bene
                                e-mail: FR042008@YSUB
                                voice:  216-742-3466
                                fax:    216-742-1579

----------------------------------------------------------------------------

                        COMPUTATIONAL CHEMISTRY
       A Symposium Presented at the 25th Central Regional Meeting
                   of the American Chemical Society
                       Pittsburgh, Pennsylvania

Monday, October 4, 1993

Afternoon session                                  K. D. Jordan, presiding

Computational Chemistry and Glass
     Michael Teter

Long-range Coupling of Local Donor/Acceptor Sites
     Marshall D. Newton

Exciton Scattering and the Nonlinear Optical Properties of Polyacetylene
     David Yaron

Theoretical Spectroscopy of Semiconductor Clusters
     M. V. Ramakrishna

The Quantum Potential Distribution Theorem and Chemical Potentials of
Quantum Fluids
     Thomas Beck

A Lattice Field Theory Approach to the Energetics of Suspensions of
Charged Macroions
     Rob Coalson

-----------------------------------------------------------------------
-----------------------------------------------------------------------

                     THEORETICAL ORGANIC CHEMISTRY
       A Symposium Presented at the 25th Central Regional Meeting
                   of the American Chemical Society
                       Pittsburgh, Pennsylvania

Tuesday, October 5, 1993                     Janet E. Del Bene, presiding

Morning session

A Comparison of Conventional and Density-Functional Molecular Orbital
Methods
     John A. Pople

Progress in Optimizing Equilibrium Structures and in Searching for
Transition States
     H. Bernard Schlegel

Recent Developments for Theoretical Organic Chemistry
     Michel Dupuis

Electron Correlation Effects on Structures and Stabilities of Hydrocarbons
and Carbocations
     Isaiah Shavitt

Solvation Models Combining a Reaction Field with Approximations to Explicit
Solvent
     Michael J. Frisch

Afternoon session

Electronic Structure of C28 and U@C28
     Russell M. Pitzer

Electronic Coupling Through Alkane Bridges
     K. D. Jordan

Hyperconjugation as Seen in (e,2e) and EPR Experiments
     Ernest R. Davidson

Transition Structures of Pericyclic Reactions
     K. N. Houk

Potential Energy Surfaces of Chemical Reactions of Organometallic Compounds
     Keiji Morokuma


From wiedeman@altair.acs.uci.edu Mon Jun 7 02:19:17 1993
To: chemistry@ccl.net
Subject: Fortran to C
Date: Mon, 07 Jun 93 09:19:17 -0700
Message-Id: <13377.739469957@altair.acs.uci.edu>
From: Lyle Wiedeman

David Gay at AT&T Bell Labs has distributed a Fortran to C converter
("f2c").  When I got a copy (a year ago), the instructions were these:
------------------------------------------------------------
FTP: All the material described above is now available by ftp from
research.att.com (login: netlib; Password: your E-mail address; cd f2c).
------------------------------------------------------------
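For anyone who has not seen f2c output before, here is a rough sketch of the
kind of C it generates (illustrative only -- the exact output depends on the
f2c version and options, and the tiny routine below is an invented example,
not part of the f2c distribution):

   /* Roughly what f2c emits (shown in ANSI form) for a three-line Fortran
    * subroutine that adds two double precision arguments and stores the sum
    * in a third:
    *       subroutine addtwo(a, b, c)
    *       double precision a, b, c
    *       c = a + b
    *       end
    * Fortran passes arguments by reference, so they become pointers, and
    * the routine name acquires a trailing underscore.                     */
   #include "f2c.h"   /* typedefs such as doublereal; shipped with f2c */

   /* Subroutine */ int addtwo_(doublereal *a, doublereal *b, doublereal *c)
   {
       *c = *a + *b;
       return 0;
   }

The generated C is then compiled with an ordinary C compiler and linked
against the runtime library that comes with the f2c distribution.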
Herein the disclaimer:
------------------------------------------------------------
f2c is a Fortran to C converter under development by

    David Gay (AT&T Bell Labs)
    Stu Feldman (Bellcore)
    Mark Maimone (Carnegie-Mellon University)
    Norm Schryer (AT&T Bell Labs)

Please send bug reports to dmg@research.att.com or uunet!research!dmg.

AT&T and Bellcore disclaim all warranties with regard to this software,
including all implied warranties of merchantability and fitness.  In no
event shall AT&T or Bellcore be liable for any special, indirect or
consequential damages or any damages whatsoever resulting from loss of use,
data or profits, whether in an action of contract, negligence or other
tortious action, arising out of or in connection with the use or
performance of this software.
------------------------------------------------------------

Lyle Wiedeman                   Office of Academic Computing
wiedeman@uci.edu                Univ. Calif. Irvine
wiedeman@UCI.BITNET             Irvine, CA 92717
(714) 856-8718                  FAX (714) 725-2069


From tom@sgih.roc.wayne.edu Mon Jun 7 09:41:22 1993
Date: Mon, 7 Jun 93 13:41:22 -0400
From: tom@sgih.roc.wayne.edu (Tom Wiese)
Message-Id: <9306071741.AA19977@sgih.roc.wayne.edu>
To: CHEMISTRY@ccl.net
Subject: ProLogP by Compu Drug ?

Hello,

Does anybody out there know about a program called Pro Log P by Compu Drug
USA Inc.?  I would like to get hold of Compu Drug and find out more about a
program (Pro Log P) that we have on a PC and that our group inherited.  I
can't reach them at the number listed.

Thanks,

Tom Wiese
Department of Biochemistry
Wayne State University
Detroit, MI
tom@sgih.roc.wayne.edu


From nmeier@nirvana.imo.physik.uni-muenchen.de Mon Jun 7 14:01:14 1993
From: Christoph.Niedermeier@physik.uni-muenchen.de
Message-Id: <9306071707.AA22813@hegel.imo.physik.uni-muenchen.de>
Subject: Summary of hierarchical multipole methods
To: chemistry@ccl.net (Chemistry mailing list)
Date: Mon, 7 Jun 93 19:07:44 MET DST

Dear netters:

In my original posting, I asked for comments/references on hierarchical
methods for efficiently computing long-range electrostatic interactions
(mainly for protein simulations):

> Hi everybody,
>
> I'm working as a PhD student in the field of MD simulations of proteins
> with special interest in electrostatic interactions.  Currently I am
> developing a method for efficient computation of long-range electrostatic
> interactions in MD simulations of proteins.
>
> The most efficient methods existing so far are, to my knowledge,
> hierarchical multipole algorithms which scale with O(N log N) (N being the
> number of atoms in the system).  A special variant of this type of
> algorithm, the so-called Fast Multipole Method (FMM) (Greengard & Rokhlin),
> even scales with O(N).
>
> My question to the list is the following:
> Has anybody worked with this type of algorithm and/or knows of other
> people who did?  If so, could you please give your experiences with /
> opinions on these methods and, if available, supply a list of references?
>
> I will post a summary of responses to the list.
>
> Thank you very much,
>
> Chris

I got quite a few responses, which showed that there are a whole bunch of
people interested in this field.  Therefore, I will try to give a short
overview of the variety of methods and algorithms.  This overview will
mainly consist of information taken from the responses, because I have not
yet studied the literature myself.
A lot of people gave references, which I consider a valuable source of
information in themselves because some of them include an abstract.  I will
append a complete (and unique) list of references at the end of this
posting.

In my summary I will refer to statements and contributions of various
people.  Because someone might be interested in direct communication with
some of them, I will append a list of participants at the end of the
summary which contains e-mail addresses, phone numbers, etc., as far as I
know them.

Now for the summary:

As mentioned by Roger E. Critchlow Jr., conventional MD codes treat long-
range electrostatic interactions by truncation at a certain cut-off radius.
This is partly due to uncertainties in the dielectric `constant', i.e. the
screening behaviour of the system.  However, it is also due to practical
considerations, because the computational effort for evaluating all
pairwise electrostatic interactions increases very rapidly with the size of
the system (order O(N^2)).  This problem is less critical for Van der Waals
interactions, because those decrease rapidly with increasing distance and
can therefore be neglected at large distances.

Efficient algorithms have been developed which take into account all long-
range contributions but avoid the rapid increase in computational effort.
As I understand it, the first algorithm of this kind was developed by
Barnes & Hut (see references) for N-body problems in astrophysics (such as
stellar dynamics).  The algorithm uses a hierarchy of cubic grids to
subdivide the system in a tree-like manner.  In each cubic cell a multipole
expansion of the Coulombic or gravitational interactions is performed.
These multipole expansions are used to calculate the interactions of all
particles with each other with an efficiency of order O(N log(N)).

Greengard & Rokhlin (see references) developed the so-called Fast Multipole
Algorithm (FMA) or Fast Multipole Method (FMM), respectively.  This method
uses a sophisticated scheme of computing the multipole expansions and
gathering up interaction contributions which reduces the effort to order
O(N).  This algorithm exists in a variety of implementations on single-
processor and vector machines as well as massively parallel machines.  I
will mention just a few:

- Steve Lustig extended the FMA to Morse and Yukawa potentials and
  implemented it on Connection Machines CM-200 and CM-5.

- Bill Goddard (see references) and his group at Caltech developed the Cell
  Multipole Method, a variant of the FMA which also includes London-type
  interactions in the hierarchical evaluation scheme.

- Francisco Figueirido inserted an implementation of the Barnes-Hut
  algorithm into the MD package IMPACT.

- Andreas Windemuth (see references) inserted an implementation of the FMA
  into his MD simulation package MD; a parallel version running on a
  Connection Machine CM-5 also exists.  He also cooperated with the group
  of

- John Board (see references), who is working on implementations of the
  Fast Multipole Algorithm on different parallel platforms.

- Mike Lee implemented the FMA as a Fortran subroutine which will be made
  available in the public domain.

Michael Schaefer pointed out that the FMA starts to become more efficient
than direct evaluation of electrostatic interactions when the system size
exceeds 2000 particles.
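As a rough illustration of the tree-code idea sketched above (not taken
from any of the codes mentioned; the data structure and names are invented
for the example, and the real FMM additionally translates expansions
between cells to reach O(N)), the following C fragment shows the two basic
ingredients: low-order multipole moments stored per cubic cell, and an
opening criterion size/distance < theta that decides whether a cell's
moments may be used in place of an explicit sum over its particles.

   /* bh_sketch.c -- minimal illustration of hierarchical (Barnes-Hut-style)
    * evaluation of an electrostatic potential.  Invented example code, not
    * any of the production programs discussed above.  Gaussian-type units
    * (the factor 1/(4 pi eps0) is omitted).                               */
   #include <stdio.h>
   #include <math.h>

   #define MAXP 64                 /* particles stored per leaf cell */

   typedef struct Cell {
       double cx, cy, cz, size;    /* cell center and edge length        */
       double q;                   /* monopole moment (total charge)     */
       double px, py, pz;          /* dipole moment about the center     */
       int    n;                   /* number of particles (leaves only)  */
       double x[MAXP], y[MAXP], z[MAXP], chg[MAXP];
       struct Cell *child[8];      /* all NULL for a leaf                */
   } Cell;

   /* Exact potential at (fx,fy,fz) from the particles of a leaf cell. */
   static double leaf_potential(const Cell *c, double fx, double fy, double fz)
   {
       double phi = 0.0;
       for (int i = 0; i < c->n; i++) {
           double dx = fx - c->x[i], dy = fy - c->y[i], dz = fz - c->z[i];
           phi += c->chg[i] / sqrt(dx*dx + dy*dy + dz*dz);
       }
       return phi;
   }

   /* Fill in the monopole and dipole moments of a leaf cell. */
   static void leaf_moments(Cell *c)
   {
       c->q = c->px = c->py = c->pz = 0.0;
       for (int i = 0; i < c->n; i++) {
           c->q  += c->chg[i];
           c->px += c->chg[i] * (c->x[i] - c->cx);
           c->py += c->chg[i] * (c->y[i] - c->cy);
           c->pz += c->chg[i] * (c->z[i] - c->cz);
       }
   }

   /* Hierarchical evaluation: if the cell is well separated from the field
    * point (size/distance < theta), use its moments; otherwise descend into
    * the children, or sum a leaf exactly.                                 */
   static double tree_potential(const Cell *c, double fx, double fy, double fz,
                                double theta)
   {
       double dx = fx - c->cx, dy = fy - c->cy, dz = fz - c->cz;
       double r  = sqrt(dx*dx + dy*dy + dz*dz);

       if (r > 0.0 && c->size / r < theta)          /* far field         */
           return c->q / r + (c->px*dx + c->py*dy + c->pz*dz) / (r*r*r);
       if (c->child[0] == NULL)                     /* leaf: exact sum   */
           return leaf_potential(c, fx, fy, fz);
       double phi = 0.0;                            /* descend           */
       for (int k = 0; k < 8; k++)
           if (c->child[k] != NULL)
               phi += tree_potential(c->child[k], fx, fy, fz, theta);
       return phi;
   }

   int main(void)
   {
       /* One cubic leaf cell of edge 1 centered at the origin, holding four
        * point charges; compare the exact potential at a distant point with
        * the monopole+dipole approximation used by the tree traversal.    */
       Cell leaf = {0};
       double xs[4] = {-0.30,  0.20,  0.40, -0.10};
       double ys[4] = { 0.10, -0.40,  0.30,  0.20};
       double zs[4] = { 0.20,  0.10, -0.30, -0.20};
       double qs[4] = { 1.00, -1.00,  0.50,  0.50};
       leaf.size = 1.0;
       leaf.n = 4;
       for (int i = 0; i < 4; i++) {
           leaf.x[i] = xs[i]; leaf.y[i] = ys[i];
           leaf.z[i] = zs[i]; leaf.chg[i] = qs[i];
       }
       leaf_moments(&leaf);

       double exact  = leaf_potential(&leaf, 5.0, 1.0, -2.0);
       double approx = tree_potential(&leaf, 5.0, 1.0, -2.0, 0.5);
       printf("exact %.6f   monopole+dipole %.6f\n", exact, approx);
       return 0;
   }

In a real code the cells form an octree built over the whole system, the
moments of a parent are accumulated from those of its children, and theta
(or the expansion order) controls the trade-off between speed and accuracy;
the FMM adds multipole-to-local translations on top of this.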
In my own work in this field I try to reduce the computational effort
further by sacrificing the highest accuracy of forces and energies.  We use
a very simple electrostatic description of a protein which considers only
charges and dipoles.  The method starts from so-called structural
(chemical) groups, which are charged or dipolar, and builds up a hierarchy
of interacting objects from these structural groups by structure-adapted
partitioning of the protein.  The error in the electrostatic forces is
about 1%, while for a system of 2000 particles the algorithm is faster by a
factor of seven than direct evaluation of all interactions.

My personal opinion is that the very high accuracy obtained by the FMA is
not necessary because of the uncertainties in partial charges and
dielectric screening.  To capture the basic effects, an accuracy of about
1% may be sufficient for MD simulations.  Tests of this hypothesis are
under way.
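For reference, the standard monopole-plus-dipole truncation that such a
charge/dipole description corresponds to is the following textbook result
(quoted here only for orientation; the exact bookkeeping in the method
above may differ).  For a structural group with charges $q_i$ at positions
$\vec{r}_i$ and expansion center $\vec{R}_0$, the potential at a distant
point $\vec{r}$ is, in Gaussian-type units,

   $\phi(\vec{r}) \approx Q/R + (\vec{p}\cdot\vec{R})/R^{3}$,
   where $Q = \sum_i q_i$, $\vec{p} = \sum_i q_i(\vec{r}_i - \vec{R}_0)$,
   $\vec{R} = \vec{r} - \vec{R}_0$, and $R = |\vec{R}|$.

The first neglected (quadrupole) term falls off as $1/R^{3}$, so for a
group of spatial extent $a$ the relative truncation error scales roughly as
$(a/R)^{2}$, i.e. about 1% once $R$ is roughly ten times the group size.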
I want to thank all the people who contributed comments/references to this
discussion, and I hope it was useful for some of you.

PARTICIPANTS
============

The following people participated in the discussion (random order, e-mail
address in parentheses):

Steve Lustig (lustigsr@esvax.dnet.dupont.com)
    Polymer Physics, Central Science Division
    E.I. du Pont de Nemours & Co, Inc.
    Experimental Station, Route 141
    Wilmington, DE 19880-0356
    (302) 695-3899

Andreas Windemuth (windemut@cumbne.bioc.columbia.edu)
    Columbia University

Michael Schaefer (schaefer@tammy.harvard.edu)
    Harvard University

Roger E. Critchlow Jr. (rec@arris.com)
    ARRIS Pharmaceutical Corporation, South San Francisco, CA
    415.737.1650, 415.737.8590 (fax)

Bruce Bush (Bruce_Bush@merck.com)
    Merck & Co., Inc., Rahway NJ 07065 USA

Teerakiat Kerdcharoen (?)

Graham Hurst (hurst@hyper.com)
    Hypercube Inc, 7-419 Phillip St, Waterloo, Ont, Canada N2L 3X2
    (519) 725-4040

Michael A. Lee (?)

Tom Simonson (simonson@zinfandel.u-strasbg.fr)

John Nicholas (jb_nicholas@pnl.gov or d3g359@rahman.pnl.gov)
    Pacific Northwest Laboratory, Richland, WA 99352, USA

Dr. Robert Q. Topper, PRA (topper@haydn.chm.uri.edu)
    Department of Chemistry, University of Rhode Island
    Kingston, RI 02881 USA
    (401) 792-2597 [office], (401) 792-5072 [FAX]

Alan M. Mathiowetz (amm@kodak.com)
    Sterling Winthrop, Inc.

Francisco Figueirido (figuei@lutece.rutgers.edu)

Thomas C. Bishop (bishop@lisboa.ks.uiuc.edu)
    Theoretical Biophysics, Beckman Institute, University of Illinois
    405 N Mathews, Urbana, IL 61801
    Tel: (217) 244-1851

Christoph Niedermeier (Christoph.Niedermeier@Physik.Uni-Muenchen.DE)
    Theoretische Biophysik, Institut fuer medizinische Optik
    Ludwigs-Maximilian-Universitaet Muenchen
    Theresienstrasse 37, 80333 Muenchen, Germany
    phone: ++49-89/2394-4580, fax: ++49-89/2805248

REFERENCES
==========

The references given here are collected from the responses of different
people and from my own BibTeX database.  I did not try to bring them into
any particular order.  Most of them are in BibTeX style, but I did not make
the cite keys unique.  However, the references themselves should be unique.
Some of the references include an abstract which might be useful to
potential readers.

@article{Greengard87a,
  author  = {L. Greengard and V. Rokhlin},
  title   = {A Fast Algorithm for Particle Simulations},
  journal = {J.\ Comp.\ Phys.},
  volume  = {73},
  pages   = {325-348},
  year    = {1987},
}

@techreport{Greengard87b,
  author      = {L. Greengard and V. Rokhlin},
  title       = {Rapid Evaluation of Potential Fields in Three Dimensions},
  institution = {Yale University, Department of Computer Science},
  address     = {New Haven},
  number      = {RR-515},
  type        = {Research Report},
  year        = {1987},
}

@techreport{Greengard88,
  author      = {L. Greengard and V. Rokhlin},
  title       = {On the Efficient Implementation of the Fast Multipole
                 Algorithm},
  institution = {Yale University, Department of Computer Science},
  number      = {RR-602},
  type        = {Research Report},
  month       = feb,
  year        = 1988,
  keywords    = {fast multipole method, n-body problem, efficient
                 implementation of the translation operator of the multipole
                 expansion},
}

@article{Greengard89,
  author  = {L. Greengard and V. Rokhlin},
  title   = {On the Evaluation of Electrostatic Interactions in Molecular
             Modeling},
  journal = {Chem. Scripta},
  volume  = {29A},
  pages   = {139-144},
  year    = {1989},
}

@article{Saito92,
  author   = {M. Saito},
  title    = {Molecular Dynamics Simulations of Proteins in Water without
              the Truncation of Long-Range Coulomb Interactions},
  journal  = {Molecular Simulation},
  volume   = 8,
  pages    = {321-333},
  year     = 1992,
  keywords = {FMM, hierarchical multipole algorithm, molecular dynamics,
              protein in water},
}

@article{Kuwajima88,
  author  = {S. Kuwajima and A. Warshel},
  title   = {The Extended Ewald Method: A General Treatment of Long-Range
             Electrostatic Interactions in Microscopic Simulations},
  journal = {J.\ Chem.\ Phys.},
  volume  = {89},
  pages   = {3751-3759},
  year    = {1988},
}

@article{appel,
  author   = "Andrew W. Appel",
  title    = "An efficient program for many body simulations",
  journal  = "SIAM J. Sci. Stat. Comput.",
  volume   = "6",
  pages    = "85--103",
  year     = "1985",
  abstract = "The simulation of $N$ particles interacting in a gravitational
    force field is useful in astrophysics, but such simulations become
    costly for large $N$.  Representing the universe as a tree structure
    with the particles at the leaves and internal nodes labeled with the
    centers of mass of their descendants allows several simultaneous
    attacks on the computation time required by the problem.  These
    approaches range from algorithmic changes (replacing an $O(N^2)$
    algorithm with an algorithm whose time-complexity is believed to be
    $O(N\log N)$) to data structure modifications, code-tuning, and
    hardware modifications.  The changes reduced the running time of a
    large problem ($N=10000$) by a factor of four hundred.  This paper
    describes both the particular program and the methodology underlying
    such speedups.",
}

@article{barnes:hut,
  author   = "Josh Barnes and Piet Hut",
  title    = "A hierarchical ${O}({N}\log {N})$ force-calculation algorithm",
  journal  = "Nature",
  volume   = "324",
  pages    = "446--449",
  year     = "1986",
  abstract = "Until recently the gravitational $N$-body problem has been
    modelled numerically either by direct integration, in which the
    computation needed increases as $N^2$, or by an iterative potential
    method in which the number of operations grows as $N\,\log N$.  Here we
    describe a novel method of directly calculating the forces on $N$
    bodies that grows only as $N\,\log N$.  The technique uses a
    tree-structured hierarchical subdivision of space into cubic cells,
    each of which is recursively divided into eight subcells whenever more
    than one particle is found to occupy the same cell.  This tree is
    constructed anew at every time step, avoiding ambiguity and tangling.
    Advantages over potential-solving codes are: accurate local
    interactions; freedom from geometrical assumptions and restrictions;
    and applicability to a wide class of systems, including
    (proto-)planetary, stellar, galactic and cosmological ones.  Advantages
    over previous hierarchical tree-codes include simplicity and the
    possibility of rigorous analysis of error.
    Although we concentrate here on stellar dynamical applications, our
    techniques of efficiently handling a large number of long-range
    interactions and concentrating computational effort where most needed
    have potential applications in other areas of astrophysics as well.",
}

@phdthesis{draghicescu,
  author   = "Draghicescu, Cristina I.",
  title    = "Efficient Algorithms for Particle Methods",
  school   = "The Pennsylvania State University",
  year     = "1991",
  abstract = "A fast algorithm is presented, which reduces the amount of
    work necessary for computing pairwise interactions in a system of $n$
    particles from $O(n^2)$ to $O(n(\log n)^p)$, where $p$ depends on the
    problem in question.  Error and work estimates are given. \par
    I illustrate its application to the approximation of the Euler
    equations in fluid dynamical simulations using the point vortex method.
    The algorithm can be applied for both two- and three-dimensional
    simulations; in the first case I show that, with a proper choice of
    parameters, the accuracy and stability of the direct method are
    preserved. \par
    Also discussed is the application of the algorithm to the problem of
    evaluating interactions in molecular simulations.  A slightly modified
    version can be used to reduce the complexity of the integral equation
    method for boundary value problems.  I implemented the algorithm for
    such a problem and provide the numerical results.  On a SUN 4 the
    algorithm reduces the CPU time required for a calculation with 500,000
    points from a month to 15 minutes and is three times faster than the
    direct method for as few as 128 particles.",
}

@phdthesis{salmon,
  author   = "John K. Salmon",
  title    = "Parallel Hierarchical ${N}$-body Methods",
  school   = "California Institute of Technology",
  year     = "1991",
  abstract = "Recent algorithmic advances utilizing hierarchical data
    structures have resulted in a dramatic reduction in the time required
    for computer simulation of $N$-body systems with long-range
    interactions.  Computations which required $O(N^2)$ operations can now
    be done in $O(N\,\log N)$ or $O(N)$.  We review these tree methods and
    find that they may be distinguished based on a few simple features.
    \par
    The Barnes-Hut (BH) algorithm has received a great deal of attention,
    and is the subject of the remainder of the dissertation.  We present a
    generalization of the BH tree and analyze the statistical properties of
    such trees in detail.  We also consider the expected number of
    operations entailed by an execution of the BH algorithm.  We find an
    optimal number for $m$, the maximum number of bodies in a terminal
    cell, and confirm that the number of operations is $O(N\,\log N)$, even
    if the distribution of bodies is not uniform. \par
    The mathematical basis of all hierarchical methods is the multipole
    approximation.  We discuss multipole approximations, for the case of
    arbitrary, spherically symmetric, and Newtonian Green's functions.  We
    describe methods for computing multipoles and evaluating multipole
    approximations in each of these cases, emphasizing the tradeoff between
    generality and algorithmic complexity. \par
    $N$-body simulations in computational astrophysics can require $10^6$
    or even more bodies.  Algorithmic advances are not sufficient, in and
    of themselves, to make computations of this size feasible.  Parallel
    computation offers, {\em a priori\/}, the necessary computational power
    in terms of speed and memory.  We show how the BH algorithm can be
    adapted to execute in parallel.  We use orthogonal recursive bisection
    to partition space.
    The logical communication structure that emerges is that of a
    hypercube.  A local version of the BH tree is constructed in each
    processor by iteratively exchanging data along each edge of the logical
    hypercube.  We obtain speedups in excess of 380 on a 512-processor
    system for simulations of galaxy mergers with 180,000 bodies.  We
    analyze the performance of the parallel version of the algorithm and
    find that the overhead is due primarily to interprocessor
    synchronization delays and redundant computation.  Communication is not
    a significant factor.",
}

@article{SL:JStatPhys:91,
  author   = "K. E. Schmidt and Michael A. Lee",
  title    = "Implementing the Fast Multipole Method in Three Dimensions",
  journal  = "J. Stat.\ Phys.{}",
  volume   = "63",
  pages    = "1223--1235",
  year     = "1991",
  abstract = "The Rokhlin-Greengard fast multipole algorithm for evaluating
    Coulomb and multipole potentials has been implemented and analyzed in
    three dimensions.  The implementation is presented for bounded charged
    systems and systems with periodic boundary conditions.  The results
    include timings and error characterizations.",
}

@article{Hernquist:JCP:87,
  author   = "Lars Hernquist",
  title    = "Vectorization of Tree Traversals",
  journal  = "J. Comp. Phys.",
  volume   = "87",
  pages    = "137--147",
  year     = "1990",
  abstract = "A simple method for vectorizing tree searches, which operates
    by processing all relevant nodes at the same depth in the tree
    simultaneously, is described.  This procedure appears to be general,
    assuming that gather-scatter operations are vectorizable, but is most
    efficient if the traversals proceed monotonically from the root to the
    leaves, or {\em vice versa\/}.  Particular application is made to the
    hierarchical tree approach for computing the self-consistent
    interaction of $N$ bodies.  It is demonstrated that full vectorization
    of the requisite tree searches is feasible, resulting in a factor
    $\approx$ 4--5 improvement in cpu efficiency in the traversals on a
    CRAY X-MP.  The overall gain in the case of the Barnes-Hut tree code
    algorithm is a factor $\approx$ 2--3, implying a net speed-up of
    $\approx$ 400--500 on a CRAY X-MP over a VAX 11/780 or SUN 3/50.",
}

@article{Makino:JCP:87,
  author   = "Junichiro Makino",
  title    = "Vectorization of a Treecode",
  journal  = "J. Comp. Phys.",
  volume   = "87",
  pages    = "148--160",
  year     = "1990",
  abstract = "Vectorized algorithms for the force calculation and tree
    construction in the Barnes-Hut tree algorithm are described.  The basic
    idea for the vectorization of the force calculation is to vectorize the
    tree traversal across particles, so that all particles in the system
    traverse the tree simultaneously.  The tree construction algorithm also
    makes use of the fact that particles can be treated in parallel.  Thus
    these algorithms take advantage of the internal parallelism in the
    $N$-body system and the tree algorithm most effectively.  As a natural
    result, these algorithms can be used on a wide range of vector/parallel
    architectures, including current supercomputers and highly parallel
    architectures such as the Connection Machine.  The vectorized code runs
    about five times faster than the non-vector code on a Cyber 205 for an
    $N$-body system with $N=8192$.",
}

@article{Barnes:JCP:87,
  author   = "Joshua E. Barnes",
  title    = "A Modified Tree Code: Don't Laugh; It Runs",
  journal  = "J. Comp. Phys.",
  volume   = "87",
  pages    = "161--170",
  year     = "1990",
  abstract = "I describe a modification of the Barnes-Hut tree algorithm
    together with a series of numerical tests of this method.
    The basic idea is to improve the performance of the code on heavily
    vector-oriented machines such as the Cyber 205 by exploiting the fact
    that nearby particles tend to have very similar interaction lists.  By
    building an interaction list good everywhere within a cell containing a
    modest number of particles and reusing this interaction list for each
    particle in the cell in turn, the balance of computation can be shifted
    from recursive descent to force summation.  Instead of vectorizing tree
    descent, this scheme simply avoids it in favor of force summation,
    which is quite easy to vectorize.  A welcome side-effect of this
    modification is that the force calculation, which now treats a larger
    fraction of the local interactions exactly, is significantly more
    accurate than the unmodified method.",
}

@article{Makino:JCP:88,
  author   = "Junichiro Makino",
  title    = "Comparison of Two Different Tree Algorithms",
  journal  = "J. Comp. Phys.",
  volume   = "88",
  pages    = "393--408",
  year     = "1990",
  abstract = "The efficiency of two different algorithms of hierarchical
    force calculation is discussed.  Both algorithms utilize the tree
    structure to reduce the cost of the force calculation from $O(N^2)$ to
    $O(N\log N)$.  The only difference lies in the method of the
    construction of the tree.  One algorithm uses the oct-tree, which is
    the recursive division of a cube into eight subcubes.  The other method
    makes the tree by repeatedly replacing a mutually nearest pair in the
    system by a super-particle.  Numerical experiments showed that the cost
    of the force calculation using these two schemes is quite similar for
    the same relative accuracy of the obtained force.  The construction of
    the mutual-nearest-neighbor tree is more expensive than the
    construction of the oct-tree roughly by a factor of 10.  On
    conventional mainframes this difference is not important because the
    cost of the tree construction is only a small fraction of the total
    calculation cost.  On vector processors, the oct-tree scheme is
    currently faster because the tree construction is relatively more
    expensive on the vector processors.",
}

@book{greengardThesis,
  author    = "Leslie Greengard",
  title     = "The Rapid Evaluation of Potential Fields in Particle Systems",
  publisher = "MIT Press",
  address   = "Cambridge, MA",
  year      = "1988",
}

@inproceedings{jab:ASME,
  author    = "J. A. {Board, Jr.} and R. R. Batchelor and
               J. F. {Leathrum, Jr.}",
  title     = "High Performance Implementations of the Fast Multipole
               Algorithm",
  booktitle = "Symposium on Parallel and Vector Computation in Heat
               Transfer, Proc. 1990 AIAA/ASME Thermophysics and Heat
               Transfer Conference",
  year      = 1990,
}

@inproceedings{jab:NATUG3,
  author    = "J. A. {Board, Jr.} and J. F. {Leathrum, Jr.}",
  title     = "The Fast Multipole Algorithm on Transputer Networks",
  editor    = "Alan S. Wagner",
  booktitle = "Proceedings, Third North American Transputer Users Group
               Meeting, April 1990",
  publisher = "IOS Press",
  address   = "Washington, DC",
  year      = 1990,
}

@inproceedings{jab:WOTUG,
  author    = "J. F. {Leathrum, Jr.} and J. A. {Board, Jr.}",
  title     = "Parallelization of the Fast Multipole Algorithm using the
               {B012} Transputer Network",
  booktitle = "Transputing '91",
  publisher = "IOS Press",
  address   = "Washington, DC",
  year      = 1991,
}

@article{GRAPE,
  author  = "D. Sugimoto and others",
  title   = " ",
  journal = "Nature",
  volume  = 345,
  pages   = "33",
  year    = 1990,
}

@article{Delft,
  author  = "D. J. Auerbach and W. Paul and A. F. Bakkers",
  title   = "A special purpose computer for molecular dynamics: motivation,
             design, and application",
  journal = "J. Phys. Chem.",
  volume  = 91,
  pages   = "4881",
  year    = 1987,
}
Chem.", Year=1987, Volume=91, Pages="4881", } @Article{HierarchyTimesteps, Author="H. Grubmuller and H. Heller and A. Windemuth and K. Schulten", Title="Generalized Verlet algorithm for efficient molecular dynamics", Journal="Mol. Sim.", Year=1991, Volume=6, Pages="121", } @incollection{GreengardParallel, Author="L. Greengard and W. Gropp", Title="A parallel version of the fast multipole algorithm", Editor="G. Rodrigue", Booktitle="Parallel Processing for Scientific Computing", Publisher="SIAM", Address="Philadelphia", Year=1989, Pages="213-222", } @incollection{ICASE, Author="James F. {Leathrum, Jr.} and John A. {Board, Jr.}", Title="Mapping the adaptive fast multipole algorithm onto MIMD systems", Editor="P. Mehrotra and J. Saltz and R. Voight", Booktitle="Unstructured Scientific Computation on Scalable Multiprocessors", Publisher="MIT Press", Address="Cambridge, MA", Pages="161-178", Year=1992, } @Article{VectorFMA, Author="K. E. Schmidt and Michael A. Lee", Title="Implementing the Fast Multipole Method in Three Dimensions", Journal="J. Stat. Phys.", Volume=63, Pages=1220, Year=1991, } @Article{ZhaoArticle, Author="F. Zhao and S. Lennart Johnsson", Title="The Parallel Multipole Method on the Connection Machine", Journal="SIAM J. Sci. Stat. Comp.", Year=1991, Volume=12, Pages=1420, } @article{GRAPE2, Author="T. Ito and T. Ebisuzaki and J. Makino and D. Sugimoto", Title="A Special-Purpose Computer for Gravitational Many-Body Systems: GRAPE-2", Journal="Publ. Astron. Soc. Japan", Volume=43, Pages="547-555", Year=1991, } @Article{CPL, Author="J. A. {Board, Jr.} and J. W. Causey and J. F. {Leathrum, Jr.}, A. Windemuth and K. Schulten", Title="Accelerated Molecular Dynamics Simulation with the Fast Multipole Algorithm", Journal="Chem. Phys. Lett.", Year=1992, Volume=198, Pages="89", } @Article{ReifFaster, Author="Reif and Tate", Title="", Journal="", Year="", Volume="", Pages="", } @Article{jab:HPCCBME, Author="John A. {Board, Jr.}, Title="Grand Challenges in Biomedical Computing", Journal="Crit. Rev. Biomed. Eng.", Volume=20, Pages=1, Year=1992, } Non-BibTeX References: ====================== Ding, H.-Q. Karasawa, N., and Goddard, W.A. III. "Atomic level simulations on a million particles - the Cell Multipole Method for coulomb and London nonbond interactions." J. Chem. Phys. 97:4309-4315, 1992. Ding, H.-Q., Karasawa, N., Goddard, W.A. III. "The reduced cell multipole method for Coulomb interactions in periodic systems with million-atom unit cells." Chem. Phys. Lett. 196:6-10, 1992. -- Christoph Niedermeier -- Theoretische Biophysik -- Institut fuer medizinische Optik -- Ludwigs-Maximilian-Universitaet Muenchen -- __o Theresienstrasse 37 -- 8000 Muenchen 2 -- Germany _`\<,_ phone: ++49-89/2394-4580, fax: ++49-89/2805248 (_)/ (_) email: Christoph.Niedermeier@Physik.Uni-Muenchen.DE ~~~~~~~~~~~ From kmoore@ncsc.org Mon Jun 7 10:00:52 1993 Date: Mon, 7 Jun 93 14:00:52 EDT From: Kevin Moore Message-Id: <9306071800.AA09309@duck.ncsc.org> To: chemistry@ccl.net Subject: Modelling distillation process... A friend of mine has asked if there are any packages written that allow one to model different packing materials, pressure and flow rates and related for distillations. I figure there is, but I have no idea what they would be or how well they work. Is anyone aware of programs that will do this? Thanks. ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ || Kevin Moore North Carolina Supercomputing Center || || Scientific Support Analyst 3021 Cornwallis Rd. 
||  (919) 248-1179                Research Triangle Park, NC 27709          ||
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


From markz@chem.duke.edu Mon Jun 7 08:45:21 1993
Date: Mon, 7 Jun 93 08:45:12 -0400
From: markz@chem.duke.edu (Mark A. Zottola)
Message-Id: <9306071245.AA11416@dna.chem.duke.edu>
To: CHEMISTRY@ccl.net
Subject: Analysis Tools

I am looking for a set of public domain analysis tools for examining DNA
structures.

Thanks!
-mark


From nmdl@rlmtc.ENET.hcc.com Mon Jun 7 09:59:44 1993
From: nmdl@rlmtc.enet.hcc.com
Message-Id: <9306071356.AA04007@gatekeeper.hcc.com>
Date: Mon, 7 Jun 93 09:59:39 EDT
To: "chemistry@ccl.net"@inet.enet.hcc.com
Subject: pc X servers

There has been quite a bit of buzz about X servers for the Mac, but what
about the PC?  I know there are quite a few products out there, but I'd
like to poll people's experience.  I'd also like to ask whether there have
been any benchmarks on performance, features, and compatibility.  Also,
what about the software that often sits on top of the X server, such as GL
or PHIGS?

Thanks,
Sol Jacobson
Hoechst Celanese


From chen@agouron.agouron.com Mon Jun 7 17:24:52 1993
Date: Mon, 7 Jun 93 14:25:01 -0700
From: chen@tbone.agouron.com (Chris Chen)
Message-Id: <9306072125.AA06568@agouron>
To: chemistry-request@ccl.net
Subject: Trajectories from CHARMM to AMBER

Dear netters:

I am looking for software to translate a CHARMM-format MD trajectory file
into AMBER format.  Does anybody know anything about that?  Any help would
be highly appreciated.

Chris

Chirong Chen
Agouron Pharmaceutical, Inc.

"Upon a mountain height, far from the sea
"I found a shell,
"And to my listening ear the lonely thing
"Ever a song of ocean seemed to sing
"Ever a tale of ocean seemed to tell"

Office Tel: (619) 622-3000    Fax: (619) 622-7999
E-mail: chen@agouron.com