
Molecular Dynamics and Monte Carlo

The true picture of a molecular system is far from the static, idealized image provided by molecular mechanics. The atoms and groups in molecules are in constant motion, and, especially in the case of biological macromolecules, these movements are concerted (correlated) and may be essential for biological function. The direct proof of this behavior is provided by X-ray and NMR experiments. Thermodynamic properties of molecules, especially when they form a complex or are immersed in a solvent, cannot be derived from the harmonic approximation, which inherently assumes small amplitude motions around a symmetric minimum. While molecular mechanics and the simulation methods described below share the same potential energy function, they differ conceptually. In molecular mechanics, the information is derived from a single geometry of the molecule. Simulation methods require thousands to millions of geometries to produce meaningful results. The biological action of molecules frequently involves large amplitude motions (biochemists often call them ``conformational transitions'' or ``changes''), resulting in abrupt changes in the geometry of the protein/ligand molecule as well as rearrangement of solvent molecules. The thermodynamic and energetic parameters of such systems can only be derived by realistic and explicit simulation, i.e., by leaving the potential energy minimum and probing the energy surface of the molecular system. Molecular simulations are a recent addition to molecular modeling systems and are still in active stages of development. They are also more complex conceptually than molecular mechanics. Recent monographs on this topic contain not only the physical and mathematical background necessary to use the methods but also point to their applications: Brooks et al. (1988), McCammon & Harvey (1987), and van Gunsteren & Weiner (1989).

There are essentially two approaches to performing molecular simulations: the stochastic and the deterministic. The stochastic approach, called Monte Carlo, is based on exploring the energy surface by randomly probing the geometry of the molecular system (or, in the language of statistical mechanics, its configuration space). It is essentially composed of the following steps:

  1. Specify the initial coordinates of atoms, $\mathbf{r}_0$.
  2. Generate new coordinates $\mathbf{r}_a$ by changing the initial coordinates at random.
  3. Compute the transition probability W(0,a).
  4. Generate a uniform random number R in the range [0,1].
  5. If W(0,a) < R, then take the old coordinates as the new coordinates and go to step 2.
  6. Otherwise accept the new coordinates and go to step 2.

The most popular realization of the Monte Carlo method (though not the only one!) for molecular systems is the Metropolis method (a short code sketch follows the list):

  1. Specify the initial atom coordinates (e.g., from molecular mechanics geometry optimization).
  2. Select some atom i at random and move it by a random displacement: $\Delta x_i$, $\Delta y_i$, and $\Delta z_i$.
  3. Calculate the change of potential energy, $\Delta V$, corresponding to this displacement.
  4. If $\Delta V \leq 0$, accept the new coordinates and go to step 2.
  5. Otherwise, if $\Delta V > 0$, select a random number R in the range [0,1] and:
    A.
    if $R \leq e^{-\Delta V/kT}$, accept the new coordinates and go to step 2,
    B.
    if $R > e^{-\Delta V/kT}$, keep the original coordinates and go to step 2.
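
A minimal sketch of this scheme in Python follows. The potential energy routine `potential` and the maximum displacement are assumptions chosen for illustration; any function returning the energy of a configuration would serve.

    import math
    import random

    def metropolis_step(coords, potential, kT, max_disp=0.1):
        # coords: mutable list of [x, y, z] atom positions; kT in energy units
        V_old = potential(coords)
        i = random.randrange(len(coords))             # step 2: pick an atom at random
        saved = list(coords[i])
        coords[i] = [c + random.uniform(-max_disp, max_disp) for c in saved]
        dV = potential(coords) - V_old                # step 3: energy change
        if dV <= 0.0:                                 # step 4: downhill move, accept
            return True
        if random.random() <= math.exp(-dV / kT):     # step 5A: uphill, accept with Boltzmann probability
            return True
        coords[i] = saved                             # step 5B: reject, restore old position
        return False

Note that after a rejected move the old configuration is counted again when averages are accumulated.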

Note that the iterations are independent of one another (i.e., the system does not contain any ``memory''). Reverting to the previous state is just as probable as choosing any other state. This condition has to be satisfied if the system is to behave as a Markov process, for which the methods of calculating ensemble (i.e., statistical) averages are known. Another important condition to be satisfied is the continuity of the potential energy function, in order for the system to be ergodic. In this context, ergodicity means that any state of the system can be reached from any other state. A clearly written introduction to these concepts is given by Heermann (1990).

As a result of a stochastic simulation, a large number of configurations (geometries) is accumulated, and the potential energy function is calculated for each of them. These data are used to calculate thermodynamic properties of the system. The basic statistical mechanics formulae will be given later in this section. The Monte Carlo method is accepted more by physicists than by chemists, probably because MC is not a deterministic method and does not offer the time evolution of the system in a form suitable for viewing. This does not mean, however, that molecular dynamics is better for deriving the thermodynamic properties of systems. In fact, many chemical problems in statistical mechanics are approached more efficiently with MC, and some (e.g., simulations of polymer chains on a lattice) can only be done efficiently with MC. Also, for Markov chains, there are efficient methods for deriving time-related quantities such as relaxation times. Currently, the stronghold of MC in chemistry is in the area of simulations of liquids and solvation processes.

The deterministic approach, called Molecular Dynamics (MD), actually simulates the time evolution of the molecular system and provides us with the actual trajectory of the system. The information generated from simulation methods can in principle be used to fully characterize the thermodynamic state of the system. In practice, the simulations are interrupted long before there is enough information to derive absolute values of thermodynamic functions; however, the differences between thermodynamic functions corresponding to different states of the system are usually computed quite reliably.

In molecular dynamics, the evolution of the molecular system is studied as a series of snapshots taken at close time intervals (usually of the order of femtoseconds, 1 fs = $10^{-15}$ s). For large molecular systems the computational complexity is enormous, and supercomputers or special attached processors have to be used to perform simulations spanning long enough periods of time to be meaningful. Typical simulations of small proteins, including the surrounding solvent, cover the range of tens to hundreds of picoseconds (1 ps = $10^{-12}$ s), i.e., they incorporate thousands of elementary time steps.

Based on the potential energy function V, we can find the components, $F_{x_i}$, of the force $\mathbf{F}_i$ acting on an atom as:

$$F_{x_i} = -\frac{\partial V}{\partial x_i}$$

This force results in an acceleration $\mathbf{a}_i$ according to Newton's equation of motion:

$$\mathbf{F}_i = m_i \mathbf{a}_i = m_i \frac{d^2 \mathbf{r}_i}{dt^2}$$

By knowing the acceleration, we can calculate the velocity of an atom at the next time step. From atom positions, velocities, and accelerations at any moment in time, we can calculate atom positions and velocities at the next time step. Integrating these elementary steps yields the trajectory of the system for any desired time range. There are efficient methods for integrating these elementary steps, with the Verlet and leapfrog algorithms being the most commonly used. For the details of these algorithms, interested readers may consult Hermans (1985) and the references therein.
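
As an illustration of these elementary steps, here is a minimal velocity Verlet integrator in Python. The force routine `forces` and the use of numpy arrays are assumptions for this sketch; production MD codes add neighbor lists, constraints, and temperature control.

    import numpy as np

    def velocity_verlet(r, v, m, forces, dt, n_steps):
        # r, v: (N, 3) arrays of positions and velocities, updated in place;
        # m: (N,) array of masses; forces(r) -> (N, 3) array; dt: time step
        f = forces(r)
        trajectory = []
        for _ in range(n_steps):
            v += 0.5 * dt * f / m[:, None]    # half-step velocity update
            r += dt * v                       # full-step position update
            f = forces(r)                     # forces at the new positions
            v += 0.5 * dt * f / m[:, None]    # second half-step velocity update
            trajectory.append(r.copy())       # store a snapshot of the trajectory
        return trajectory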

To start the MD simulation we need an initial set of atom positions (i.e., the geometry) and atom velocities. In practice, an acceptable starting state of the system is achieved by ``equilibration'' and ``heating'' runs prior to the ``production'' run. The initial positions of atoms are most often taken from a prior geometry optimization with molecular mechanics. Formally, such positions correspond to the absolute zero temperature. The velocities are assigned randomly to each atom from the Maxwell distribution for some low temperature (say 20 K). The random assignment does not allocate correct velocities, and the system is not at thermodynamic equilibrium. To approach the equilibrium, an ``equilibration'' run is performed and the total kinetic energy (or temperature) of the system is monitored until it is constant. The velocities are then rescaled to correspond to some higher temperature, i.e., the heating is performed. Then the next equilibration run follows. The absolute temperature, T, and atom velocities are related through the mean kinetic energy of the system:

$$\langle E_{kin} \rangle = \sum_{i=1}^{N} \frac{m_i \langle v_i^2 \rangle}{2} = \frac{3}{2} N k T$$

where N denotes the number of atoms in the system, $m_i$ represents the mass of the i-th atom, and k is the Boltzmann constant. By multiplying all velocities by $\sqrt{T_{new}/T_{old}}$ we can effectively ``heat'' the system. Heating can also be realized by immersing the system in a ``heat bath'' which stochastically (i.e., randomly) accelerates the atoms of the molecular system. These cycles are repeated until the desired temperature is achieved, and at this point a ``production'' run can commence. In actual software, the ``heating'' and ``equilibration'' stages can be introduced in a more efficient way by assigning velocities in such a way that ``hot spots'' (i.e., spots in which the neighboring atoms are assigned high velocities) are avoided.
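
A sketch of this rescaling step in Python, directly using the relation above. The units are an assumption (kcal/mol for energy, K for temperature, per-mole masses, so k becomes the gas constant R):

    import math

    K_B = 0.0019872   # Boltzmann (gas) constant in kcal/(mol K), an assumed unit choice

    def rescale_to_temperature(v, m, T_target):
        # v: list of [vx, vy, vz] velocities; m: list of atom masses
        n_atoms = len(v)
        e_kin = 0.5 * sum(m[i] * sum(c * c for c in v[i]) for i in range(n_atoms))
        T_current = 2.0 * e_kin / (3.0 * n_atoms * K_B)   # from <E_kin> = (3/2) N k T
        scale = math.sqrt(T_target / T_current)           # the factor sqrt(T_new / T_old)
        return [[c * scale for c in vi] for vi in v]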

Molecular dynamics for larger molecules, or systems in which solvent molecules are explicitly taken into account, is a computationally intensive task even for the most powerful supercomputers, and approximations are frequently made. The most popular is the SHAKE method, which in effect freezes vibrations along covalent bonds. This method is sometimes also applied to valence angles. The major advantage of this method is not the removal of a number of degrees of freedom (i.e., independent variables) from the system, but the elimination of the high frequency vibrations corresponding to ``hard'' bond stretching interactions. In simulations of biological molecules, these modes are usually of least interest, but their removal allows us to increase the size of the time step, and in effect achieve a longer time range for simulations.

Figure 6.24: Periodic boundary conditions in molecular dynamics.

Another approximation is the united-atom approach, where hydrogen atoms which do not participate in hydrogen bonding are lumped into a heavier atom to form a pseudo-atom of larger size and mass (e.g., a CH$_3$ group).

Even supercomputers have their limitations, and there is always some practical limit on the size (i.e., number of atoms) of the simulated system. For situations involving solvent, the small volume of the box in which the macromolecule and solvent are contained introduces undesirable boundary effects. In fact, the results may sometimes depend more on the size and shape of the box than on the molecules involved. To circumvent this limited box size difficulty, periodic boundary conditions are used. This idea is represented in Fig. 6.24. In this approach, the original box containing a solute and solvent molecules is surrounded with identical images of itself, i.e., the positions and velocities of corresponding particles in all of the boxes are identical. The common approach is to use a cubic or rectangular parallelepiped box, but other shapes are also possible (e.g., a truncated octahedron). By using this approach, we obtain what is in effect an infinitely sized system. A particle (usually a solvent molecule) which escapes the box on the right side enters it on the left side, due to periodicity. Since MD simulations are usually performed as an NVE (microcanonical) ensemble (i.e., at constant number of particles, constant volume, and constant total energy) or an NVT (canonical) ensemble, the volume of the boxes does not change during simulation, and the constancy in the number of particles is enforced by the periodicity of the lattice. There are also techniques for performing simulations in the NPT (isothermal-isobaric) and NPH (isobaric-isoenthalpic) ensembles, where the pressure constancy during simulation is achieved by shrinking or expanding the box. The constant temperature is usually maintained by ``coupling the system to a heat bath'', i.e., by adding dissipative forces (usually Langevin, friction-type forces) to the atoms of the system, which as a consequence affects their velocities.

However, each approximation has its price. In the case of periodic boundary conditions we are actually simulating a crystal composed of boxes with ideally correlated atom movements. Longer simulations will be contaminated with these artificially correlated motions. The maximum length of the simulation, before artifacts start to show up, can be estimated by considering the speed of sound in water ($\approx$15 Å/ps at normal conditions). This means that for a cubic cell with a side of 60 Å, simulations longer than 4 ps will incorporate artifacts due to the presence of images.
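
The two basic operations implied by periodic boundary conditions are easy to express in code. A minimal sketch for a cubic box; the box length `box` and the coordinate conventions are assumptions:

    def wrap(x, box):
        # Fold a coordinate back into the primary box [0, box):
        # a particle leaving on one side re-enters on the opposite side.
        return x % box

    def minimum_image(dx, box):
        # Replace a coordinate difference by its nearest periodic image,
        # so that interactions are computed with the closest copy of a particle.
        return dx - box * round(dx / box)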

Figure 6.25: Stochastic boundary conditions in molecular dynamics.

In the other popular approach, stochastic boundary conditions allow us to reduce the size of the system by partitioning it into essentially two portions: a reaction zone and a reservoir region. The reaction zone is that portion of the system which we want to study, while the reservoir region contains the portion which is inert and uninteresting. For example, in the case of an enzyme, the reaction zone should include the proximity of the active center, i.e., the portions of protein, substrate, and molecules of solvent adjacent to the active center. The reservoir region is excluded from the molecular dynamics calculations and is replaced by random forces whose mean corresponds to the temperature and pressure in the system. The reaction zone is then subdivided into a reaction region and a buffer region. The stochastic forces are applied only to atoms of the buffer region; in other words, the buffer region acts as a ``heat buffer''. There are several other approximations whose descriptions can be found in the molecular dynamics monographs quoted above.

The straightforward result of a molecular dynamics simulation, a movie showing changing atom positions as a function of time, contains a wealth of information in itself. Viewing it may shed light on the molecular mechanisms behind the biological function of the system under study.

Molecular dynamics, contrary to energy minimization in molecular mechanics, is able to climb over small energy barriers and can drive the system towards a deeper energy minimum. The use of molecular dynamics allows probing of the potential energy surface for deeper minima in the vicinity of the starting geometry. This is exploited in the simulated annealing method, where the molecular system is first heated to an artificially high temperature, and then snapshots of the high-temperature trajectory are taken as starting states for cooling runs. The cooled states obtained from hot frames frequently correspond to much deeper potential energy minima than the original structure taken for dynamics. Molecular dynamics is also particularly suitable for imposing geometrical constraints on the system. Such constraints may be available from experimental results. For example, NOE (nuclear Overhauser effect) constraints from NMR, electron density maps from X-ray, etc., may be incorporated as templates for molecular geometry. Molecular dynamics will try to satisfy these artificially imposed constraints by jumping over shallow potential energy minima -- behavior which is not possible in molecular mechanics.

Since molecular dynamics is a very active field of research, new approaches are reported frequently. To facilitate the incorporation of new techniques and approaches, many molecular dynamics software packages are organized in a modular fashion. Modules communicate their data and results via disk files. In this way, versatility and flexibility of the software are achieved, and new modules can easily be inserted between the old ones without the need to modify existing computer code.

Statistical mechanics concepts can supply even more information from molecular dynamics runs, since they furnish prescriptions for calculating the macroscopic properties of the system (e.g., thermodynamic functions like free energy and entropy changes, heats of reaction, rate constants, etc.) from statistical averages over the elementary steps of the molecular trajectory. They also allow us to uncover correlations between the motions of groups or fragments of the system, as well as provide various distribution functions. Essentially, all expressions derived below apply to both MD and MC, with the only difference being that the ``time step'' in MD should be changed to a ``new configuration'' for MC. MD was chosen here for illustration, since it is more often applied to chemical problems.
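
In practice, the ensemble average of a property is estimated as a plain time average over the stored snapshots of an equilibrated run, invoking the ergodicity discussed earlier. A minimal sketch; the property function `A` is an assumption:

    def ensemble_average(snapshots, A):
        # snapshots: configurations from an equilibrated MD run (or MC configurations);
        # A(r) -> value of the property for a single configuration.
        return sum(A(r) for r in snapshots) / len(snapshots)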

Three quantities form a foundation for these calculations: the partition function Z, the Boltzmann probability functional P, and the ensemble average, $\langle A \rangle$, of a quantity A:

$$Z = \int e^{-H(\mathbf{p},\mathbf{q})/kT}\, d\mathbf{p}\, d\mathbf{q}$$
$$P(\mathbf{p},\mathbf{q}) = \frac{e^{-H(\mathbf{p},\mathbf{q})/kT}}{Z}$$
$$\langle A \rangle = \int A(\mathbf{p},\mathbf{q})\, P(\mathbf{p},\mathbf{q})\, d\mathbf{p}\, d\mathbf{q}$$

where $\mathbf{p}$ and $\mathbf{q}$ are all generalized momenta and coordinates of the system (e.g., angular and linear velocities and internal coordinates); $H(\mathbf{p},\mathbf{q})$ is the Hamiltonian for the system, k denotes the Boltzmann constant ($k = 1.38 \times 10^{-23}$ J/K), and T is the temperature. The integration extends over the whole range accessible to the momenta and coordinates. For the isochoric-isothermal system of constant N, V, and T, the Helmholtz free energy, A = U - TS = G - PV (U, G, S, and T are the internal energy, Gibbs free energy, entropy, and absolute temperature, respectively), for the system of unit volume is given as:

$$A = -kT \ln Z$$

The above equation simplifies significantly when Cartesian coordinates and Cartesian momenta are used, since the kinetic energy part of the total energy represented by the Hamiltonian $H$ in the expression for Z reduces to a constant depending only upon the size, temperature, and volume of the system:

$$Z = \mathrm{const} \int e^{-V(\mathbf{r})/kT}\, d\mathbf{r}$$

where the potential energy function $V(\mathbf{r})$ depends only on the Cartesian coordinates of atoms, and the integral $\int e^{-V(\mathbf{r})/kT}\, d\mathbf{r}$ is called a classical configurational partition function, or a configurational integral. Hence, the free energy, A, can in principle be calculated from the integral of the potential energy function, $V(\mathbf{r})$. In practice, however, calculation of the absolute value of the free energy is not possible, except for very simple molecules or systems in which the motion of atoms is very restricted (e.g., crystals). Note that Cartesian coordinates span the range $(-\infty, +\infty)$, and for systems which are diffusional in nature (i.e., solutions, complexes, etc.) atoms can essentially occupy any position in space, and the contributions from scattered locations to the value of the partition function are not negligible.

The situation is more optimistic with differences in the free energy between two closely related states, since it can be assumed that contributions from points far apart in the configurational space will be similar, and hence will cancel each other in this case. The free energy difference, $\Delta A = A_1 - A_0$, between two states 0 and 1 is:

$$\Delta A = A_1 - A_0 = -kT \ln \frac{Z_1}{Z_0} = -kT \ln \frac{\int e^{-V_1(\mathbf{r})/kT}\, d\mathbf{r}}{\int e^{-V_0(\mathbf{r})/kT}\, d\mathbf{r}}$$

By multiplying the numerator of the fraction $Z_1/Z_0$ by unity written as:

$$1 = e^{V_0(\mathbf{r})/kT}\, e^{-V_0(\mathbf{r})/kT}$$

the ratio becomes:

$$\frac{Z_1}{Z_0} = \frac{\int e^{-[V_1(\mathbf{r}) - V_0(\mathbf{r})]/kT}\, e^{-V_0(\mathbf{r})/kT}\, d\mathbf{r}}{\int e^{-V_0(\mathbf{r})/kT}\, d\mathbf{r}}$$

or, after including the integral from the denominator into the integral in the numerator:

$$\Delta A = -kT \ln \left\langle e^{-[V_1(\mathbf{r}) - V_0(\mathbf{r})]/kT} \right\rangle_0 \qquad (6.50)$$

i.e., the ensemble average of the exponential of the difference between the potential energy functions for states 0 and 1, calculated at coordinates obtained for state 0. To clarify the importance of this statement, let us assume that we have two closely related states which are described by the potential functions $V_0$ and $V_1$, respectively. We perform the molecular dynamics simulation using the function for state 0, i.e., $V_0$. At each time step, $t_k$, we take the current coordinates $\mathbf{r}_k$ obtained for system 0 and calculate the value not only of $V_0(\mathbf{r}_k)$ but also of $V_1(\mathbf{r}_k)$, i.e., we take atom coordinates derived for one state and calculate the potential energy function for this state and for the other state. Then the difference between these two values is calculated, and the exponential of this difference taken, i.e., $e^{-[V_1(\mathbf{r}_k) - V_0(\mathbf{r}_k)]/kT}$. We also calculate $e^{-V_0(\mathbf{r}_k)/kT}$, which is needed for the Boltzmann factor. At the end of the calculations we combine the values from all n time steps to calculate the ensemble average in eq. 6.50 as:

$$\left\langle e^{-(V_1 - V_0)/kT} \right\rangle_0 \approx \frac{\sum_{k=1}^{n} e^{-[V_1(\mathbf{r}_k) - V_0(\mathbf{r}_k)]/kT}\; e^{-V_0(\mathbf{r}_k)/kT}}{\sum_{k=1}^{n} e^{-V_0(\mathbf{r}_k)/kT}}$$

The above formula is given here only for illustration, since in practical computations this averaging is done differently for the sake of efficiency. As was mentioned earlier, the difference between states 0 and 1 should be very small (not more than a few kcal/mol), i.e., a perturbation. For this reason the method is called free energy perturbation. If the difference between states 0 and 1 is larger, the process has to be divided into smaller steps. In practice, the change from state 0 to 1 is represented as a function of a suitably chosen coupling parameter $\lambda$, such that $\lambda = 0$ for state 0 and $\lambda = 1$ for state 1. This parameter is incorporated into the potential energy function to allow a smooth transition between the two states. For example, if a fluorine atom in a C--F bond is being changed to a chlorine, the bond stretching term in the potential energy function appears as:

$$V_{str}(r,\lambda) = \left[ (1-\lambda)\, k_{CF} + \lambda\, k_{CCl} \right] \left[ r - (1-\lambda)\, r_{CF} - \lambda\, r_{CCl} \right]^2 \qquad (6.52)$$

where $k_{CF}$ and $k_{CCl}$ are the stretching parameters for the C--F and C--Cl bonds, respectively, while $r_{CF}$ and $r_{CCl}$ denote the optimal lengths of these bonds. Similar expressions exist for all terms in the potential energy function and include provisions for creating/annihilating atoms by converting them from/to dummy atoms. Now, the path $\lambda = 0 \rightarrow 1$ can easily be split into any number of substates spaced closely enough to satisfy the small perturbation requirement. Assuming that the process was divided into n equal subprocesses, we have $\lambda_{i+1} = \lambda_i + \Delta\lambda$, where $\Delta\lambda = 1/n$. However, in actual calculations the division is usually not uniform, to account for the fact that the most rapid changes in free energy often occur at the beginning and at the end of the path. The simulation is run for each value of $\lambda$, providing the free energy changes between substates i and i+1 as:

$$\Delta A_{i \rightarrow i+1} = -kT \ln \left\langle e^{-[V(\lambda_{i+1}) - V(\lambda_i)]/kT} \right\rangle_{\lambda_i}$$

i.e., the coordinates calculated at the substate $\lambda_i$ are used to calculate the potential energy difference, $\Delta V$, between states $\lambda_i$ and $\lambda_{i+1}$. Then the total change of free energy is calculated as a sum of partial contributions:

$$\Delta A = \sum_{i=0}^{n-1} \Delta A_{i \rightarrow i+1}$$
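
As an illustration, here is a minimal Python sketch of this windowed free energy perturbation scheme. The names are assumptions chosen for illustration: `sample_equilibrium(lam, n)` stands for an equilibrium MD (or MC) run at a given $\lambda$, and `V(r, lam)` for the $\lambda$-coupled potential energy function. Within each window the average is taken as a plain mean over Boltzmann-sampled snapshots, the form used in practice rather than the illustrative weighted sum above.

    import math

    def fep_window(snapshots, V, lam_i, lam_next, kT):
        # Zwanzig estimate of Delta A between two adjacent lambda substates (eq. 6.50):
        # coordinates sampled at lam_i, energies evaluated at both lambdas.
        acc = 0.0
        for r in snapshots:
            acc += math.exp(-(V(r, lam_next) - V(r, lam_i)) / kT)
        return -kT * math.log(acc / len(snapshots))

    def fep_total(lambdas, V, sample_equilibrium, kT, n_snapshots=1000):
        # Sum the per-window free energy changes over the whole 0 -> 1 path.
        dA = 0.0
        for lam_i, lam_next in zip(lambdas, lambdas[1:]):
            snapshots = sample_equilibrium(lam_i, n_snapshots)  # assumed sampler
            dA += fep_window(snapshots, V, lam_i, lam_next, kT)
        return dA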

The most important conclusion from the above is that the conversion between the two states does not have to be conducted along a physically meaningful path. Obviously, atoms cannot be converted stepwise into one another, since they are made of elementary particles. However, basic thermodynamics, namely Hess's law of constant heat summation, ensures that the difference in free energy depends only upon the initial and final states. Any continuous path which starts at state 0 and ends at state 1, even if it is a completely fictitious one, can be used to calculate this value. There are methods other than free energy perturbation for calculating the difference in free energies between states. The most accepted one is the thermodynamic integration method, where the derivative of the free energy A with respect to the coupling parameter $\lambda$ is calculated:

$$\Delta A = \int_0^1 \frac{\partial A}{\partial \lambda}\, d\lambda$$

It can be proven that:

$$\frac{\partial A}{\partial \lambda} = \left\langle \frac{\partial V}{\partial \lambda} \right\rangle_{\lambda}$$

in other words, the derivative of the free energy with respect to $\lambda$ is equal to the ensemble average of the derivative of the potential energy with respect to $\lambda$, and hence:

$$\Delta A = \int_0^1 \left\langle \frac{\partial V}{\partial \lambda} \right\rangle_{\lambda}\, d\lambda$$

In most cases, the calculations are run in both directions, i.e., $\lambda: 0 \rightarrow 1$ and $\lambda: 1 \rightarrow 0$, to provide the value of the hysteresis, i.e., the precision of the integration process. Large differences between $\Delta A_{0 \rightarrow 1}$ and $\Delta A_{1 \rightarrow 0}$ indicate that either the simulation was not run long enough for each of the substates, or that the number of substates is too small. While checking the hysteresis provides a measure of the precision of the calculations, it is not a proof of the accuracy of the results. Errors in estimating the free energy changes may result from an inadequate energy function and, in most cases, from inadequate probing of the energy surface of the system. Molecular dynamics, similar to molecular mechanics, can also suffer from the ``local minimum syndrome''. If the atom positions corresponding to the energy minimum of state 0 are substantially different from those for state 1, the practical limitations on the length of the computer simulation may not allow the system to explore the region of configuration space corresponding to the true minimum for state 1. For this reason a set of independent simulations, each starting from different atom positions, should be performed to assess the adequacy of the sampling of the configuration space for both states.
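
A minimal sketch of the thermodynamic integration formula above, assuming the per-$\lambda$ ensemble averages $\langle \partial V / \partial \lambda \rangle$ have already been collected from equilibrium runs; the trapezoidal rule used here is one simple choice of quadrature:

    def thermodynamic_integration(lambdas, dVdl_means):
        # Trapezoidal estimate of Delta A = integral of <dV/dlambda> d(lambda).
        # lambdas: increasing coupling parameter values from 0 to 1;
        # dVdl_means: the ensemble average <dV/dlambda> at each lambda.
        dA = 0.0
        for i in range(len(lambdas) - 1):
            width = lambdas[i + 1] - lambdas[i]
            dA += 0.5 * width * (dVdl_means[i] + dVdl_means[i + 1])
        return dA

    # Hysteresis check: integrate the averages collected in the reverse
    # direction as well; a large difference signals insufficient sampling.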

The methods above allow one, in principle, to calculate the free energies of solvation, binding, etc. In the case of free energies of solvation, for example, one solvent molecule is converted to a solute molecule in a stepwise fashion, by creating and annihilating atoms or changing their types as illustrated by eq. 6.52, which in effect creates the solute molecule de novo. Binding energies could in principle be calculated in a similar way, by ``creating'' the ligand molecule in the enzyme cavity. In practice, these calculations are very difficult, since they involve a massive displacement of solvent, i.e., the initial and final states differ dramatically. However, in drug design we are in most cases interested not in the actual free energies of solvation or binding, but in the differences between these parameters for different molecules, i.e., in values of $\Delta\Delta A$ (or $\Delta\Delta G$ if an isothermal-isobaric system is being considered). The quantitative method of finding these parameters is often called the thermodynamic cycle approach and is becoming a routine procedure for finding the difference in the free energy of binding for two different ligands, or the influence of a mutation in the macromolecular receptor or enzyme on binding. As an illustration, let us assume that we want to find the difference in the free energy of binding, $\Delta\Delta G_{bind}$, between two molecules $L_1$ and $L_2$ binding to a protein P. Note that $\Delta\Delta G_{bind}$ is simply related to the ratio of the equilibrium binding constants:

$$\Delta\Delta G_{bind} = \Delta G_{bind}(L_2) - \Delta G_{bind}(L_1) = -RT \ln \frac{K_2}{K_1}$$

The thermodynamic cycle for this process is represented in Fig. 6.26.

Figure 6.26: The thermodynamic cycle for two molecules $L_1$ and $L_2$ binding to the same protein P and the corresponding free energy changes $\Delta G_1 \ldots \Delta G_4$.

The difference between the free energies of binding for molecules $L_1$ and $L_2$ is $\Delta\Delta G_{bind} = \Delta G_2 - \Delta G_1$, where $\Delta G_1$ and $\Delta G_2$ are the binding legs of the cycle, and $\Delta G_3$ and $\Delta G_4$ correspond to the ``mutation'' of $L_1$ into $L_2$ in solution and in the protein complex, respectively. Computation of the binding free energies $\Delta G_1$ and $\Delta G_2$ is extremely difficult, since it involves bringing a molecule from bulk solution to the active site on the protein. This requires the removal of water molecules from the active site and very long simulation times to reequilibrate the solvent and the protein. Though such calculations are possible in principle, they are still very prone to error and tremendously expensive. However, from Hess's law of constant heat summation, the sum of all changes in free energy around a closed thermodynamic cycle must be zero, i.e., $\Delta G_1 + \Delta G_4 - \Delta G_2 - \Delta G_3 = 0$. Hence:

$$\Delta\Delta G_{bind} = \Delta G_2 - \Delta G_1 = \Delta G_4 - \Delta G_3$$

Since eq. 6.50 requires that the potential energy function for state 1 be calculated at the coordinates of state 0, the number of atoms and bonds in both molecules must be equal. The processes corresponding to $\Delta G_3$ and $\Delta G_4$ involve annihilation/formation of atoms and cannot occur in biological systems; however, they are much easier to calculate. These pseudo-reactions are realized computationally by changing the types of atoms and groups. If the molecules are of similar size, the solvent molecules need not diffuse out of the active site. If new atoms are needed, dummy atoms are converted stepwise into real atoms. On the other hand, if molecule $L_2$ has fewer atoms than molecule $L_1$, atoms are annihilated by converting them to dummy atoms. As an example, consider two molecules which differ by the presence of a methyl group. ``Mutating'' the hydrogen atom into a methyl group is illustrated in Fig. 6.27.

Figure 6.27: Conversion of a hydrogen atom into a methyl group. Note that the number of atoms and bonds is conserved during the conversion.

A molecular dynamics simulation is first performed to calculate $\Delta G_3$, i.e., the free energy change corresponding to ``mutating'' molecule $L_1$ into $L_2$ in solution. In this simulation the protein molecule is not present in the system, since it is inert in this reaction (i.e., it ``cancels'' on both sides of the chemical equation). The second simulation involves the calculation of $\Delta G_4$, i.e., molecule $L_1$ is converted to molecule $L_2$ inside the active site of the protein P. The $\Delta\Delta G_{bind}$ is then calculated as $\Delta G_4 - \Delta G_3$.
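
To make the bookkeeping concrete, a tiny worked example. The numerical values are purely hypothetical, chosen only to illustrate the arithmetic and the relation to the binding constant ratio from the equation above:

    import math

    R = 0.0019872                # gas constant in kcal/(mol K)
    T = 298.15                   # temperature in K

    dG3 = -1.0                   # hypothetical: L1 -> L2 "mutation" in solution, kcal/mol
    dG4 = -3.0                   # hypothetical: the same mutation inside the protein

    ddG_bind = dG4 - dG3         # = -2.0 kcal/mol; L2 binds more tightly than L1
    K2_over_K1 = math.exp(-ddG_bind / (R * T))   # ratio of binding constants, about 29
    print(ddG_bind, K2_over_K1)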

It cannot be stressed strongly enough that both of these free energy changes, $\Delta G_3$ and $\Delta G_4$, are non-physical. $\Delta G_3$ is not the difference between the free energies of solvation of molecules $L_1$ and $L_2$ unless the ``molecules'' are mononuclear. Calculation of the difference in solvation energies would also require a dynamics simulation for the molecules in a vacuum, to account for the differences in the interaction energy and the entropy of free and solvated molecules. Similarly, $\Delta G_4$ alone is not the difference between the free energies of binding, since it contains no information about the solvent interaction with the part of the molecule which is submerged in the active site. The differences in the solvent-solute interactions between ligands may comprise a major portion of their relative affinity for the protein.

In those instances where we know the structure of the receptor protein, the thermodynamic cycle method is a very promising tool for drug design, since it allows us to calculate the differences between the free energies of binding for molecules which have not even been synthesized yet! Likewise, it allows us to assess the impact of variations in protein structure (e.g., a mutant versus the wild type protein) on binding, i.e., to perform genetic engineering on the computer. This is why X-ray crystallography and NMR studies of proteins, combined with molecular simulation techniques, form a framework for the drug design of the future, where prospective drug molecules will first be inferred from theory, and only later synthesized. Moreover, intensive research on deducing protein tertiary structure from knowledge of its primary sequence is bound to result in new possibilities for folding proteins in the computer. With the astonishing advances in the power of computers and progress in methodology, there is no doubt that the importance of these techniques for rational drug design will increase steadily.

