From owner-chemistry@ccl.net Sat Oct 1 00:24:01 2005
From: "Jinming zhou fit_tone{:}yahoo.com"
To: CCL
Subject: CCL: W:LIE free software
Date: Fri, 30 Sep 2005 19:31:08 -0700 (PDT)

Sent to CCL by: Jinming zhou [fit_tone . yahoo.com]

Dear Sandro
 
The software Q would do the job well. It was developed by the Aqvist group, and it is free. You can go to this link to find and download it. http://xray.bmc.uu.se/~aqwww/q/default.html
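
In case it helps, the LIE estimate itself is just a weighted difference of average ligand-surroundings interaction energies taken from the bound and free simulations. Here is a minimal Python sketch of that formula only (this is not Q's implementation, and the coefficient defaults are just commonly quoted literature starting values, not the defaults of any particular package):

    # Sketch of the LIE estimate itself; not Q's implementation.
    # dG_bind ~ alpha*(<V_vdw>_bound - <V_vdw>_free)
    #         + beta *(<V_el>_bound  - <V_el>_free) + gamma
    def lie_binding_free_energy(vdw_bound, vdw_free, el_bound, el_free,
                                alpha=0.18, beta=0.50, gamma=0.0):
        # The V's are ensemble-averaged ligand-surroundings interaction
        # energies from MD of the complex and of the free, solvated ligand.
        # alpha/beta/gamma are empirical; the defaults above are only
        # commonly quoted starting values and should be fit or taken from
        # the literature for your class of ligands.
        return (alpha * (vdw_bound - vdw_free)
                + beta * (el_bound - el_free)
                + gamma)

    # Example with made-up averages (kcal/mol):
    print(lie_binding_free_energy(-35.2, -28.1, -48.0, -41.5))
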
 
Good luck!

"CCL: Sandro Cosconati sa.cosco() virgilio.it" <owner-chemistry() ccl.net> wrote:

Sent to CCL by: "Sandro Cosconati" [sa.cosco _ virgilio.it]


Dear All,
I was wondering if any of you could give me some information about software (preferably free of charge) that implements the LIE (Linear Interaction Energy) method in order to predict the free energy of binding of given complexes. Thanks in advance.
Best regards
Sandro







Jinming Zhou
Tel: +86-021-54925277
Email: zhoujm() mail.sioc.ac.cn
CCL (Computer Chemistry Lab)
Shanghai Institute of Organic Chemistry
Chinese Academy of Sciences
Shanghai 200032, China


From owner-chemistry@ccl.net Sat Oct 1 00:59:01 2005
From: "David John Giesen david.giesen * kodak.com"
To: CCL
Subject: CCL: W:CCL: Re: NMR calculation

Sent to CCL by: "David John Giesen" [david.giesen]^[kodak.com]

Hi - If you are interested in 13C NMR shifts, you may find the paper below to be interesting:

Giesen, D. J.; Zumbulyadis, N. "A Hybrid Quantum Mechanical and Empirical Model for the Prediction of Isotropic 13C Shielding Constants of Organic Molecules," Phys. Chem. Chem. Phys. 4, 5498 (2002).

It shows how to obtain useful NMR results from B3LYP calculations with small basis sets, and also compares the performance of HF, MP2 and several DFT methods using a variety of basis sets.
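
If you just want to try a shielding calculation, the input is easy to generate. Here is a minimal sketch (the B3LYP/6-31G(d) NMR=GIAO route line is only a generic GIAO example, not the basis-set recipe from the paper, and methane is just a placeholder molecule):

    # Sketch: write a minimal Gaussian input file for a GIAO shielding run.
    # The route line is a generic example, not the recipe from the paper
    # above, and the methane geometry is only a placeholder.
    lines = [
        "%chk=methane_nmr.chk",
        "#P B3LYP/6-31G(d) NMR=GIAO",
        "",
        "13C shielding test job: methane",
        "",
        "0 1",
        "C   0.000000   0.000000   0.000000",
        "H   0.629118   0.629118   0.629118",
        "H  -0.629118  -0.629118   0.629118",
        "H  -0.629118   0.629118  -0.629118",
        "H   0.629118  -0.629118  -0.629118",
        "",  # Gaussian inputs end with a blank line
    ]
    with open("methane_nmr.gjf", "w") as f:
        f.write("\n".join(lines) + "\n")

The isotropic shieldings in the output are then converted to chemical shifts by referencing against a standard (e.g. TMS) computed at the same level of theory.
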
Dave Giesen

> Sent to CCL by: Dhurairaj Senthilnathan [zenthil03:-:yahoo.co.in]
>
> Dear Sir,
> I want to calculate NMR for organic molecules computationally. Which is
> the best theoretical way to calculate NMR for an organic molecule?
> Please reply to me.
> Regards,
> Senthilnathan
>
> ________________________________________________________________________
>
> Mr. D. SENTHILNATHAN,
> Research Scholar,
> School of Chemistry,
> Bharathidasan University,
> TIRUCHIRAPPALLI - 620 024,
> TamilNadu, INDIA


From owner-chemistry@ccl.net Sat Oct 1 01:34:01 2005
From: "Perry E. Metzger perry===piermont.com"
To: CCL
Subject: CCL: W:hardware for computational chemistry calculations
Date: Fri, 30 Sep 2005 23:47:44 -0400

Sent to CCL by: "Perry E. Metzger" [perry*|*piermont.com]

"Eric Bennett ericb-,-pobox.com" writes:

> Perry Metzger writes:
>> A strong recommendation though that I'll bring up here because it is
>> vaguely OS related -- do NOT use more threads than processors in your
>> app if you know what is good for you. Thread context switching is NOT
>> instant, and you do not want to burn up good computation cycles on
>> useless thread switching.
>
> Somewhat relevant to this: I have seen about a 25% throughput
> increase in my MM calculations when using hyperthreading, running
> four processes on a 2 CPU Xeon machine with hyperthreading on, as
> compared to two processes with hyperthreading off. In the special
> case of hyperthreading sometimes you can benefit.

Hyperthreading is an entirely different thing -- unfortunate that the terms have a common word in them. An Intel processor with hyperthreading can do something useful while it is waiting on other things that are blocked some of the time -- it is somewhat like having 1.25 processors instead of one. In that case, for selected apps, you want to treat one processor as though it were two and have two threads running. This is still an instance of my rule, though -- you just treat a hyperthreaded processor as though it were more than one processor.

In my comment, I'm referring to the more general case -- you don't want to incur context-switch penalties inside your program if you can help it. Event dispatch costs about as much as a procedure call, but thread switches take tens to hundreds of times longer. If you can help it, use threads ONLY to exploit the parallelism of the multiple processors on your machine, and not for things like I/O multiplexing.
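
To make the sizing rule concrete, here is a minimal sketch (in Python the parallel workers are processes rather than threads, but the rule being illustrated -- one worker per processor, however many tasks you have -- is the same):

    # Sketch: one worker per processor, no matter how many tasks there are.
    import os
    from multiprocessing import Pool

    def crunch(n):
        # stand-in for a CPU-bound piece of work
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        tasks = [200000] * 64             # plenty of tasks...
        workers = os.cpu_count() or 1     # ...but only as many workers as CPUs
        with Pool(processes=workers) as pool:
            results = pool.map(crunch, tasks)
        print(len(results), "tasks finished with", workers, "workers")
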
> Having enough RAM is always the most important thing. If you don't
> have enough memory to hold your software and its working data set in
> RAM, that will for certain be the limiting factor in your speed.
>
> 15,000 RPM drives are only available with SCSI interfaces; the SATA
> drives, even with their higher data density, don't have performance
> specs that match up (15K SCSI gets you max sustained transfers of
> around 90 MB/sec). So if you are doing something disk-intensive like
> large QM calculations, there are still people who will buy SCSI. QM
> jobs can end up writing over 10 GB of scratch files. For MM apps like
> dynamics the disk speed is not critical.

Let's say you have a computation that is I/O bound on access to a 10G file. Right now, an additional 10G of DRAM will cost ~$1200. The lowest-price 15,000 RPM drives you can buy are ~$210, plus you need a decent SCSI controller, which can be another $200, so call it $410. If you want to stripe a couple of drives, the price goes up more.

So the question becomes: does having enough memory to hold the whole scratch file in the buffer cache speed up your app enough to justify the marginal $1000 cost? That depends on how I/O bound you are. If you are only lightly I/O bound, the answer is not clear. If you are very I/O bound, the added RAM (versus a fast disk) will essentially eliminate your I/O time, switching you to being compute bound. This can sometimes increase your speed enough that you can use a fraction of the number of computers. So, if you are really strongly I/O bound -- that is, if your CPU is idle most of the time because it is waiting for the disk -- the answer is a clear "yes". Say you're only using 30% of the CPU -- eliminating the I/O bottleneck with RAM is worth two more expensive computers to you, because you'll suddenly be using 100% of the machine.
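
Here is that arithmetic as a small sketch (every price and utilization figure in it is an illustrative placeholder, not a measurement):

    # Back-of-the-envelope: extra RAM versus a fast SCSI disk.
    # Every number here is an illustrative placeholder.
    cpu_busy_fraction = 0.30       # CPU doing useful work 30% of the time
    ram_premium = 1200.0 - 410.0   # extra cost of the RAM over the disk option
    box_cost = 800.0               # cost of one additional compute node

    # If the whole scratch file fits in the buffer cache, assume the I/O wait
    # essentially disappears and the job becomes compute bound.
    speedup = 1.0 / cpu_busy_fraction    # ~3.3x for a 30%-busy CPU
    extra_boxes = speedup - 1.0          # throughput gained, measured in "boxes"

    print(f"speedup ~{speedup:.1f}x, equivalent to {extra_boxes:.1f} extra boxes"
          f" (~${extra_boxes * box_cost:.0f}) for a ${ram_premium:.0f} RAM premium")
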
Some years ago, I remember when it first became obvious that for some servers I was dealing with, buying 4G of memory so the entire set of files in the working set would fit in RAM meant that one machine could perform something like five or ten times better than boxes with even very fast disks. That was a giant win -- effectively the extra couple of G of RAM meant we didn't need four other computers.

However, as the degree of I/O bottleneck goes down, the equation shifts. If you're only idle say 15% of the time, the economics become fuzzier. You have to do the calculation pretty carefully, but you may find that you're better off without the RAM if you are only waiting slightly for the disk. If your working set is, say, 40G, there is no way to fit enough memory into the box, and you just have to bite the bullet (or buy a really big honking RAID array). All such calculations are economics, in the end.

By the way, if your scratch file access is not random, your working set may be smaller than you think, and you may be able to get most of the effect with less memory, which of course again shifts the economics of the calculation. Of course, if your working set even slightly exceeds RAM, you totally lose because you're constantly waiting for I/O. Testing to determine your true working set size can be very important. Knowing how to tune your OS so that you get maximum cache hits is also critical, and a bit of an esoteric skill, but one that is very important to pick up.

--
Perry E. Metzger		perry[a]piermont.com


From owner-chemistry@ccl.net Sat Oct 1 02:09:00 2005
From: "Perry E. Metzger perry^piermont.com"
To: CCL
Subject: CCL: W:hardware for computational chemistry calculations
Date: Sat, 01 Oct 2005 00:09:46 -0400

Sent to CCL by: "Perry E. Metzger" [perry]![piermont.com]

"Igor Filippov Contr igorf[]helix.nih.gov" writes:

> Sent to CCL by: "Igor Filippov [Contr]" [igorf|*|helix.nih.gov]
> Excellent advice. I would agree with everything, except for the "buy
> Dell" part. If you've bought Dell (and probably any other named brand
> PC) - be prepared that it won't be upgradable,

Well, do the economics. It isn't worth upgrading machines any more. Generally after a couple of years you need to replace effectively everything in order to upgrade.

My usual advice to an organization is to assume that machines should be cycled out very fast. You are better off (if you're doing high-end computation) buying fewer, cheaper and slower machines and replacing them all every year than buying more ultra-fast boxes every three or four. Why? The fastest possible boxes are usually very poor price/performance -- you can pay 50% more for only a few percent extra oomph. If you're doing parallel computation anyway, you really only care about cycles per second across the cluster per dollar, not cycles per second per box.

If you set an annual budget, and buy and replace portions of your compute cluster every year (or turn last year's compute cluster into the "lower end" cluster every year until you run out of room/power and get rid of them), you will never have as fast a cluster as your neighbor in year one, but he'll be way envious of you in year three when his machines are maybe a factor of 6 slower than yours. I suggest that organizations budget what they are willing to spend annually on their compute cluster and go for the best price/performance boxes each year in the quantity they can afford. Next year's machines will do twice as well for you, and you will be able to buy them next year instead of waiting three years.

Of course, doing the math yourself is advisable. The exact time frames, the difference in price/performance between the top and not-quite-top-end machines, etc. shift very fast. My only point is: keep in mind that computers are not forever, and Moore's Law is an exponential curve, so given that you're going to replace them eventually anyway, try to treat them more disposably.
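
If you want to see the effect of the exponential in isolation, here is a toy sketch (fixed budget, price/performance assumed to double every year, the top-end price premium and everything else ignored -- all numbers are illustrative):

    # Toy model: fixed budget, price/performance assumed to double yearly.
    # All numbers are illustrative; only the shape of the curve matters.
    def total_capacity(years, refresh_every, budget_per_year):
        capacity = 0.0
        for year in range(years):
            perf_per_dollar = 2.0 ** year            # the assumed doubling
            if year % refresh_every == 0:
                # spend the accumulated budget on this year's best
                # price/performance boxes, keeping the old ones around
                capacity += refresh_every * budget_per_year * perf_per_dollar
        return capacity

    # Same three-year spend either way: 100 units per year vs. 300 up front.
    print("buy every year    :", total_capacity(3, 1, 100))   # 700
    print("buy once, keep 3 y:", total_capacity(3, 3, 100))   # 300
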
> and after this particular model goes out of fashion it won't be
> repairable either.

Dell does pretty well on repairs during their couple-year repair period. After that, you should probably replace the box if you're using it for compute-bound work.

> Their cases are a nightmare to disassemble

As I said, though, if I were buying more than two or three boxes (or if I were buying two or three high-end AMD64 boxes) I wouldn't buy Dell any more. If you only have that many boxes, however, you might as well let the Dell people service the machine -- you can't afford to keep enough spare parts on hand yourself anyway. I have to say, though, I have a bunch of Dells and their 1U and 2U boxes seem just fine to me in terms of ease of disassembly. Two thumbscrews and the lid just comes up. If you're talking about non-rack-mountable cases, well, I don't buy those for servers or compute farms. Maybe they are a nightmare, but I don't buy 'em so I don't see 'em.

> I would suggest buying things from a small-time vendor that:
> a) uses only non-proprietary components - i.e. case, power supply, etc.
> that you can get at any CompUSA or BestBuy if you ever need to replace
> them.

The principle is reasonable, but you'll never get power supplies etc. that will fit in rack-mount cases at CompUSA. They simply don't sell them. I usually get stuff like that online from NewEgg.

> This way you'll get exactly what you designed yourself and you'll be
> able to repair/upgrade/modify it by using a simple Philips screwdriver
> and parts that you can get at any computer store. And it will be cheaper
> too - I'm more than a little surprised about the talk about "cheap
> prices at Dell" - go to pricewatch.com you can find tonnes of places
> with better deals, where you don't have to pay for the "Dell" sticker on
> your computer case.

Last year, I was looking for a pair of new 1U servers. I got a couple from Dell for $700 each. None of the white-box guys would sell me identically configured machines for less than $900. Now, I didn't buy the memory from Dell -- they squeeze you for all they can on that (how do you think they sell the base systems so cheap?), and so I always buy from Crucial instead -- but the boxes themselves were unbeatable. Right now they're selling deskside boxes with amazing configs for like $300, which for a poor chem department that could not afford better would be a steal.

BTW, keep in mind, I do *NOT* advise buying Dell if you're getting anything in quantity or anything "serious", and keep in mind that they do NOT do AMD64 stuff, so if you need better than bottom end, you can't buy from them anyway. Mostly I recommend them to people who need a good $300 or $800 box, can't do the maintenance work themselves and don't want the fuss.

--
Perry E. Metzger		perry..piermont.com


From owner-chemistry@ccl.net Sat Oct 1 08:58:00 2005
From: "Eugen Leitl eugen**leitl.org"
To: CCL
Subject: CCL: W:hardware for computational chemistry calculations
Date: Sat, 1 Oct 2005 14:48:59 +0200

Sent to CCL by: Eugen Leitl [eugen|-|leitl.org]

On Sat, Oct 01, 2005 at 12:09:46AM -0400, Perry E. Metzger perry^piermont.com wrote:

> BTW, keep in mind, I do *NOT* advise buying Dell if you're getting
> anything in quantity or anything "serious" and keep in mind that they
> do NOT do AMD64 stuff, so if you need better than bottom end, you
> can't buy from them anyway. Mostly I recommend them to people who need
> a good $300 or $800 box, can't do the maintenance work themselves and
> don't want the fuss.

I recently had to look for an affordable 1U rackmount AMD64 system, and came across http://www.sun.com/servers/entry/x2100/ which starts at $745 (~640 EUR at basic rebate) in a very basic configuration. The price curve goes up quite rapidly with Sun components, so it would make sense to buy the basic chassis and upgrade with off-the-shelf components -- notice that even the most basic system will take 4 GByte of unregistered (ECC or non-ECC) memory and the new dual-core Opterons, which is quite important for most chemical codes.

The reviews were quite positive: http://www.anandtech.com/printarticle.aspx?i=2530 (I haven't had personal experience with those yet, as our distributor will only begin shipping them in November).

--
Eugen* Leitl leitl
______________________________________________________________
ICBM: 48.07100, 11.36820            http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE


From owner-chemistry@ccl.net Sat Oct 1 10:22:00 2005
From: "Lubos Vrbka shnek#,#tiscali.cz"
To: CCL
Subject: CCL: Periodic boundary conditions
Date: Sat, 01 Oct 2005 09:33:09 +0200

Sent to CCL by: Lubos Vrbka [shnek_-_tiscali.cz]

David Santos Carballal spectrum_dav*_*operamail.com wrote:

> Sent to CCL by: "David Santos Carballal" [spectrum_dav[*]operamail.com]
> Dear CCL members:
> I am doing some molecular dynamics in TINKER. I made a box of solvent
> (water) with the program xyzedit. In the keyword file I invoke the
> keywords: rattle (responsible for the SHAKE-type constraints on the O-H
> distances and angles in water); a-axis (to define the lattice length);
> octahedron (to simulate a system closer to a sphere); and cutoff (to
> evaluate the interactions in a region smaller than half the dimension
> of the solvent box).
> My system has 100 water molecules. First of all I did a conjugate-
> gradient minimization to 0.01. After that I ran dynamics with the
> Beeman integration method at constant temperature (298 K) and pressure
> (1 atm). When the dynamics begins, the temperature stabilizes very
> quickly but the pressure is very high, around 4000 atm. The pressure
> falls to 1 atm after 12000 steps, but the lattice length grows
> enormously: the initial lattice length was 14 angstroms and the final
> one is 200 angstroms, with one consequence -- the density falls to
> 0.0010 g/cc, which disagrees with the density of water. I use the MM3
> force field.
> My question is: am I using periodic boundary conditions correctly?
> Why these results?

If your initial system is OK and looks the way it should, then the first thing I can think of: after the minimization, do a constant-volume run first, before switching to constant pressure, to get a somewhat relaxed system. This might help a bit...

--
Lubos _/./_


From owner-chemistry@ccl.net Sat Oct 1 10:57:01 2005
From: "Alejandro Pedro Ayala ale.p.ayala]-[googlemail.com"
To: CCL
Subject: CCL: PED analysis
Date: Sat, 1 Oct 2005 06:57:27 -0700

Sent to CCL by: Alejandro Pedro Ayala [ale.p.ayala##googlemail.com]

Hi,

I am working on vibrational spectra simulations of a couple of molecules using Gaussian03, and I am trying to perform some potential energy distribution (PED) analysis using these results. Besides GAR2PED, what are the options for performing these calculations?
Thanks in advance
Alejandro


From owner-chemistry@ccl.net Sat Oct 1 11:34:00 2005
From: "chandra verma chandra*|*bii.a-star.edu.sg"
To: CCL
Subject: CCL: W:zn parameters

Sent to CCL by: "chandra verma" [chandra,;,bii.a-star.edu.sg]

Does anyone have parameters for Zn interactions with the amino acid histidine (I need point charges and the bond, angle, and dihedral force constants/equilibrium values for the covalent linkage model) for the CHARMM force field?

Thanks,
Chandra


From owner-chemistry@ccl.net Sat Oct 1 12:08:00 2005
From: "Perry E. Metzger perry.:.piermont.com"
To: CCL
Subject: CCL: W:hardware for computational chemistry calculations
Date: Sat, 01 Oct 2005 10:49:52 -0400

Sent to CCL by: "Perry E. Metzger" [perry(~)piermont.com]

"Perry E. Metzger perry===piermont.com" writes:

> Let's say you have a computation that is I/O bound on access to a 10G
> file. Right now, an additional 10G of DRAM will cost ~$1200.

Er, I'm not being quite 100% careful there. The price can end up being twice that or more depending on speed and configuration, and only rare AMD64 server motherboards will allow you to put enough in. The principle, though, is important (and of course, every year the prices will be half what they were and the densities twice what they were, so the principle will continue to apply).

The way you should figure this out is: if I can keep my entire working set of files in RAM (something the buffer cache mechanisms can easily do on Linux or NetBSD or FreeBSD), will it speed things up so much that I need fewer machines, so that the price tradeoff for a "crazy amount of memory" is actually not crazy at all? With time, for most problems, the needle points more and more often towards "get more RAM" than "get a faster disk". The obvious exceptions are cases where getting things onto disk is mandatory (database servers), and where your working set is so large (right now past, say, 16G, soon much larger) that it is literally not practical to get a machine with that much memory. Below that size, you owe it to yourself to do the calculation...

Perry


From owner-chemistry@ccl.net Sat Oct 1 14:05:00 2005
From: "benne278 benne278]![umn.edu"
To: CCL
Subject: CCL: W:hardware for computational chemistry calculations
Date: Sat, 01 Oct 2005 11:59:19 CDT

Sent to CCL by: benne278 [benne278===umn.edu]

On 30 Sep 2005, Perry E. Metzger perry===piermont.com wrote:

> In my comment, I'm referring to the more general case

I know. That is why I called hyperthreading a "special case". There are also some drawbacks to hyperthreading, but they are not particularly relevant to this thread.

> So the question becomes: does having enough memory to hold the whole
> scratch file in the buffer cache speed up your app enough to justify
> the marginal $1000 cost? That depends on how I/O bound you are.

I'm not trying to provide a full decision tree here. In fact, I'm deliberately trying to avoid that because it will lead to another tedious thread, and the original poster didn't provide enough information anyway. If the OP's quantum calculations use 30 GB of scratch, that will likely change the answer versus if they are only using 2 GB of scratch. The simple question was whether SCSI ever has an advantage, and the simple answer is that yes, sometimes it does, even if it isn't often. Therefore, it should be considered, even if you ultimately end up deciding the money is better spent on something else.

If I might make a suggestion for people who post performance questions to the list, it might help if they offered up short (~60 min?) test jobs. I bet there are a few people on the list who would be willing to run a short example job on their hardware for you, and this will give you an answer that is probably a lot more helpful than somebody listing all the many variables that interact to determine performance. You might even find out that changing the code you are using would be a bigger boost than buying faster hardware.
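
A portable way to report such a number is just to wrap the job and print the wall-clock time; a trivial sketch (the script name is a stand-in for whatever actually runs your benchmark, and the Unix time(1) command of course does the same thing with no wrapper at all):

    # Sketch: run a test job and report its wall-clock time.
    # "./run_test_job.sh" is a stand-in for whatever runs your benchmark.
    import subprocess
    import time

    start = time.perf_counter()
    subprocess.run(["./run_test_job.sh"], check=True)
    elapsed = time.perf_counter() - start
    print(f"wall-clock time: {elapsed / 60.0:.1f} minutes")
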
-Eric


From owner-chemistry@ccl.net Sat Oct 1 14:40:00 2005
From: "Bill Ross ross ~ cgl.ucsf.edu"
To: CCL
Subject: CCL: W:hardware for computational chemistry calculations
Date: Sat, 1 Oct 2005 11:02:21 -0700 (PDT)

Sent to CCL by: Bill Ross [ross|-|cgl.ucsf.edu]

> With time, for most problems, the needle points more and more often
> towards "get more RAM" than "get a faster disk". The obvious exceptions
> are cases where getting things onto disk is mandatory (database
> servers), and where your working set is so large (right now past, say,
> 16G, soon much larger) that it is literally not practical to get a
> machine with that much memory. Below that size, you owe it to yourself
> to do the calculation...

I wonder if it's worth factoring in the power cost -- I don't know whether a faster disk costs more power for the speedup than the increased memory would.

Bill Ross


From owner-chemistry@ccl.net Sat Oct 1 16:40:00 2005
From: "Konstantin Kudin konstantin_kudin*o*yahoo.com"
To: CCL
Subject: CCL: PED analysis
Date: Sat, 1 Oct 2005 13:20:32 -0700 (PDT)

Sent to CCL by: Konstantin Kudin [konstantin_kudin[-]yahoo.com]

Here is a summary for a similar question asked a few years ago:
http://kekule.osc.edu/cgi-bin/ccl/message.cgi?2000+10+24+002

Personally, I have used REDONG from QCPE in the past.

Konstantin

--- "Alejandro Pedro Ayala ale.p.ayala]-[googlemail.com" wrote:

> Sent to CCL by: Alejandro Pedro Ayala [ale.p.ayala##googlemail.com]
> Hi,
> I am working on vibrational spectra simulations of a couple of
> molecules using Gaussian03, and I am trying to perform some potential
> energy distribution (PED) analysis using these results. Besides
> GAR2PED, what are the options for performing these calculations?
> Thanks in advance
> Alejandro
From owner-chemistry@ccl.net Sat Oct 1 17:14:00 2005
From: "Bill Ross ross^_^cgl.ucsf.edu"
To: CCL
Subject: CCL: W:hardware for computational chemistry calculations
Date: Sat, 1 Oct 2005 10:44:02 -0700 (PDT)

Sent to CCL by: Bill Ross [ross],[cgl.ucsf.edu]

> From what I can tell, at the moment (and this could change at any
> time), the price/performance of AMD 64 equipment ("Opteron" is just a
> model of AMD 64) is by far the best. I know people who do a lot of
> number crunching who are buying AMD 64 whiteboxes by the fleet, and
> putting them in clusters. ...

Vis-à-vis buying Dell or more generic PCs, I've been working at Sun recently, so have noted some excitement over the price/performance of the new commodity-class Opteron server line (e.g. $745 for the most minimal system, the X2100 with no hard drive, the slowest CPU, and 512 MB). The 'coolest' thing to my mind is that the X4100 (starting at $2,195) runs at <50% of the power needed by the equivalent Dell server. Here's the president of Sun bragging on his blog (http://blogs.sun.com/jonathan):

  50% more performance
  63% less electricity consumption
  1/4 the physical size, at 1/3 the price.
  ... Space and power matter, and we now lead the planet in responsible
  computing.

These machines can even run Windows. Based on what I've seen of the product development cycle at Sun, I wouldn't be surprised if the boxes were easy to open up and work on, though clearly they are more engineered than Dell, so I'm curious how hands-on folks will feel. Here's an Anandtech review of the X2100: http://anandtech.com/systems/showdoc.aspx?i=2530

I'm a programmer (former Amber developer), and haven't played with a big variety of systems for a while, but this Opteron line sounds very promising for building clusters. Another cool thing I've seen (and used) is the DTrace facility built into Solaris 10 -- free kernel tracing. It looks like the open-source version of Solaris 10 isn't fully equipped to compete with Linux yet, since key parts are apparently still getting intellectual-property cleanup (like the package system for installing software), but it'll be interesting to watch it as it emerges into the open-source world.

Bill Ross


From owner-chemistry@ccl.net Sat Oct 1 17:49:01 2005
From: "Evgeniy Gromov Evgeniy.Gromov-x-tc.pci.uni-heidelberg.de"
To: CCL
Subject: CCL: Amber calculation with Gaussian
Date: Sat, 01 Oct 2005 20:17:11 +0200

Sent to CCL by: Evgeniy Gromov [Evgeniy.Gromov^tc.pci.uni-heidelberg.de]

Dear All,

I have a question related to Amber-type (molecular mechanics) calculations using Gaussian03. Does anybody know of some (free) software (other than GaussView) that allows one to specify automatically all the information necessary for an Amber input (i.e. the atom type/hybridization, partial charge, etc. for all atoms) in the format required by Gaussian?

Thanks.
Best, Evgeniy

--
_______________________________________
Dr. Evgeniy Gromov
Theoretische Chemie
Physikalisch-Chemisches Institut
Im Neuenheimer Feld 229
D-69120 Heidelberg
Germany
Telefon: +49/(0)6221/545263
Fax: +49/(0)6221/545221
E-mail: evgeniy~!~tc.pci.uni-heidelberg.de
_______________________________________
From owner-chemistry@ccl.net Sat Oct 1 19:35:00 2005
From: "Perry E. Metzger perry~!~piermont.com"
To: CCL
Subject: CCL: W:hardware for computational chemistry calculations
Date: Sat, 01 Oct 2005 19:30:47 -0400

Sent to CCL by: "Perry E. Metzger" [perry .. piermont.com]

"Bill Ross ross ~ cgl.ucsf.edu" writes:

>> With time, for most problems, the needle points more and more often
>> towards "get more RAM" than "get a faster disk". The obvious exceptions
>> are cases where getting things onto disk is mandatory (database
>> servers), and where your working set is so large (right now past, say,
>> 16G, soon much larger) that it is literally not practical to get a
>> machine with that much memory. Below that size, you owe it to yourself
>> to do the calculation...
>
> I wonder if it's worth factoring in the power cost -- I don't know
> whether a faster disk costs more power for the speedup than the
> increased memory would.

You have reminded me of yet something else I neglected to mention. Clusters eat power, and once they've eaten the power, they turn it into heat, which takes more power and equipment to remove. You're very right to mention power considerations -- even beyond the raw money involved in paying for the electricity, it costs money to modify a machine room to power big clusters, and it costs money to bring in sufficient air conditioning to remove the resultant heat, so the power budget becomes a serious consideration at times. One good thing is that if you overbuild once, machine rooms can often last a whole lot longer than the computers in them, but organizations planning for their first cluster often get seriously shocked by discoveries that play out like "what do you mean, there isn't enough power in the building!?"...

Luckily, with time, the power needed per unit of computation tends to go down, but unluckily, we get greedier and greedier about the amount of computation we want to do. We also have this looming problem that in another 10 or 20 years we're going to get very near the kT limit -- for those that don't know, there is a minimum amount of energy dictated by the laws of thermodynamics for each irreversible computational operation, and although we're still orders of magnitude above that limit, it is already clearly visible on the horizon...
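
For the curious, that bound (Landauer's limit: kT ln 2 of energy for every irreversibly erased bit at temperature T) is easy to put a number on; a quick sketch, with an entirely illustrative operation rate:

    # Sketch: the Landauer bound, k*T*ln(2) joules per irreversibly erased bit.
    import math

    k_B = 1.380649e-23         # Boltzmann constant, J/K
    T = 300.0                  # roughly room temperature, K
    joules_per_bit = k_B * T * math.log(2)

    bit_ops_per_second = 1e15  # purely illustrative erase rate
    print(f"{joules_per_bit:.2e} J per bit erased; "
          f"{joules_per_bit * bit_ops_per_second:.2e} W at "
          f"{bit_ops_per_second:.0e} bit erasures per second")

The bound works out to microwatts at that rate, while real hardware dissipates watts to kilowatts -- which is the "orders of magnitude" of headroom in question.
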
Perry


From owner-chemistry@ccl.net Sat Oct 1 21:53:00 2005
From: "John Hearns john.hearns##streamline-computing.com"
To: CCL
Subject: CCL: W:hardware for computational chemistry calculations
Date: Sat, 01 Oct 2005 23:48:10 +0100

Sent to CCL by: John Hearns [john.hearns]~[streamline-computing.com]

On Sat, 2005-10-01 at 10:44 -0700, Bill Ross ross^_^cgl.ucsf.edu wrote:

> Vis-à-vis buying Dell or more generic PCs, I've been working at Sun
> recently, so have noted some excitement over the price/performance
> of the new commodity-class Opteron server line (e.g. $745 for the most
> minimal system, the X2100 with no hard drive, the slowest CPU, and
> 512 MB). The 'coolest' thing to my mind is that the X4100 (starting at
> $2,195) runs at <50% of the power needed by the equivalent Dell server.
> Here's the president of Sun bragging on his blog:
> http://blogs.sun.com/jonathan

We're quite excited by them. I'd say the X2100 is more suited to Monte Carlo 'farm' type applications and, as you say, the beefier X4100 to computational chemistry applications. The V20 line was well engineered and reliable, and we have no doubt these will follow suit.