Introduction to Biological Systems and Soft Condensed Matter


Lecture notes for the course given by Prof. David Andelman, Tel-Aviv University 2009

Original author: Guy Cohen, Tel-Aviv University


Introduction and Preliminaries

We will make several assumptions throughout the course:

  1. The physics in question is generally in the classical regime: quantum effects are negligible, $\hbar\to0$.
  2. Materials are "soft": quantitatively, this implies that all relevant energy scales are of the order of $k_BT$.
  3. Condensed matter physics deals with systems composed of $O(10^{23})$ particles, and statistical mechanics applies. We are always interested in a reduced description, in terms of continuum mechanics and elasticity, hydrodynamics, macroscopic electrodynamics and so on.

We begin with an example from Chaikin & Lubensky, the story of an H$_2$O molecule. This molecule is bound together by a chemical bond of around $20k_BT$ at room temperature, which is not easily broken under normal circumstances. What happens when we put $10^{23}$ water molecules in a container? First of all, with such large numbers we can safely discuss phases of matter, namely

$$\text{solid (amorphous/crystalline ice)}\;\leftrightarrow\;\text{fluid (water)}\;\leftrightarrow\;\text{gas (steam)}.$$

Gas is typical of low density, high temperature and low pressure. It is generally prone to changes in shape and volume, homogeneous, isotropic, weakly interacting and insulating. This is the least ordered form of matter relevant to our scenario, and relatively easy to treat since order parameters are small. The liquid phase is typical of intermediate temperatures. It flows but is not very compressible. It is homogeneous, isotropic, dense and strongly interacting. Its response to external forces depends on the rate of its deformation. Liquids are hard to treat theoretically, as their intermediate properties make simple approximations less effective. The solid is a dense ordered phase with low entropy and strong interactions. It is anisotropic and does not flow; it strongly resists compression, and its response to forces depends on the amount of deformation they cause (elastic).

Transitions between these phases occur at specific values of the thermodynamic parameters (see diagram (1)). First-order transitions (where the volume/density "jumps" at the transition, with no jump in pressure/temperature) occur on the coexistence lines; at the critical liquid/gas point, a second-order phase transition occurs; at the triple point, all three phases (solid/liquid/gas) coexist.

The systems we are interested in are characterized by several kinds of interactions between their constituent molecules: for example, Coulomb interactions of the form $q_1q_2/r$ when charged particles are present, fixed dipole interactions of the form $\mathbf{p}_1\cdot\mathbf{p}_2/r^3$ when permanent dipoles exist, and almost always induced dipole/van der Waals interactions of the form $\Delta\mathbf{p}_1\cdot\Delta\mathbf{p}_2/r^6$. At close range we also have the "hard core" or steric repulsion, sometimes modeled by a $1/r^{12}$ potential. Simulations often use the so-called 12-6 Lennard-Jones potential

$$U(r)=4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12}-\left(\frac{\sigma}{r}\right)^{6}\right]$$

(as pictured in (2)), which with appropriate parameters correctly describes both condensation and crystallization in some cases. Starting from a classical Hamiltonian such as $H=\sum_i\frac{\mathbf{p}_i^2}{2m}+V_{\mathrm{vdW}}$, we can predict all three phases of matter and the transitions between them. In biological systems, this simple picture does not suffice: the basic consideration behind this is that of effects which occur at different scales, from the nanometric scale through the mesoscopic and up to the macroscopic scale. Biological systems are mesoscopic in nature, and their properties cannot be described correctly when a coarse-graining is performed without accurately accounting for mesoscopic properties.
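The 12-6 Lennard-Jones form above is easy to explore numerically. A minimal sketch (in arbitrary units where $\varepsilon=\sigma=1$) confirms the familiar minimum at $r=2^{1/6}\sigma$ with depth $-\varepsilon$ and the zero crossing at $r=\sigma$:

```python
import math

def lennard_jones(r, eps=1.0, sigma=1.0):
    """12-6 Lennard-Jones: U(r) = 4 eps [(sigma/r)^12 - (sigma/r)^6]."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

r_min = 2.0 ** (1.0 / 6.0)   # the minimum sits at r = 2^(1/6) sigma
print(lennard_jones(r_min))  # depth is -eps
print(lennard_jones(1.0))    # U crosses zero at r = sigma
```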

A few examples follow:

Liquid crystals

The most basic assumption we need in order to model liquid crystals is that isotropy at the molecular level is broken: molecules are represented by rods rather than spheres. Such a description was suggested by Onsager and others, and leads to three phases as shown in (3).

Polymers

When molecules are interconnected at mesoscopic ranges, new phases and properties are encountered.

Soap/beer foam

This kind of substance is approximately 95% gas, with the remainder water – yet it behaves like a weak solid as long as its deformations are small. This is because a tight formation of ordered cells separated by thin liquid films is formed, and in order for the material to change shape the cells must be rearranged. This need for restructuring is the cause of such systems' solid-like resistance to change.

Structured fluids

Polymers or macromolecules in liquid state, liquid crystals, emulsions and colloidal solutions and gels display complex visco-elastic behavior as a result of mesoscopic super-structures within them.

Soft 2D membranes

Interfaces between fluids have interesting properties: they act as a 2D liquid within the interface, yet respond elastically to any bending of the surface. Surfactant molecules will spontaneously form membranes within the same fluid, which also have these properties at appropriate temperatures. Surfactants in solution also form lamellar structures - multilayered structures in which the basic units are the membranes rather than single molecules.

Polymers

Books: Doi, de Gennes, Rubinstein, Doi & Edwards.

Introduction

Brief history

Natural polymers like rubber have been known since the dawn of history, but were not understood. The first artificial polymer was made in 1905. Staudinger was the first to understand that polymers are formed by molecular chains, and he is considered the father of synthetic polymers. Most polymers are made by the petrochemical industry. Nylon was born in 1940. Various uses and unique properties (light, strong, thermally insulating; available in many different forms from strings and sheets to bulk; cheap, easy to process, shape and mass-produce...) have made them very attractive commercially. Later on, some leading scientists were Kuhn and Flory in chemistry (30's to 70's) and Stockmayer in physical chemistry (50's and 60's). The famous modern theory of polymers was first formulated by P.G. de Gennes and Sam Edwards.

What is a polymer?

Material composed of chains, having a repeating basic unit (monomer). Connections between monomers are made by chemical (covalent) bonds, and are strong at room temperature.

$$[A]_N\equiv\underbrace{A\!-\!A\!-\!A-\cdots-A}_{N\ \text{times}}.$$

$N$ is the polymerization index.

Polymerization is also the name of the process by which polymers are synthesized, which involves a chain reaction where a reactive site exists at the end of the chain. Some chemical reactions increase the chain length by one unit, while simultaneously moving the reactive site to the new end:

$$[A]_N+[A]_1\longrightarrow[A]_{N+1}.$$

There also exist condensation processes, by which chains unite:

$$[A]_N+[A]_M\longrightarrow[A]_{N+M},$$

where

$N,M\gg1$

. A briefer notation, dropping the name of the monomer, is

$$(N)+(M)\longrightarrow(N+M).$$

Consider the example of hydrocarbon polymers, where we have a monomer which is C$_2$H$_4$ (check this...). As a larger number of such units is joined together into polyethylene molecules, the material composed of these molecules changes drastically in nature:

N | phase | type of material
1-4 | gas | flammable gas
5-15 | thin liquid | liquid fuel/organic solvents
16-25 | thick liquid | motor oil
20-50 | soft solid | wax, paraffin
~1000 | hard solid | plastic

Types of polymer structures

Polymers can exist in different topologies, which affect the macroscopic properties of the material they form (see (4)):

  • Linear chains (this is the simplest case, which we will be discussing).
  • Rings (chains connected at the ends).
  • Stars (several chain arms connected at a central point).
  • Tree (connected stars).
  • Comb (one main chain with side chains branching out).
  • Dendrimer (ordered branching structure).

Polymer phases of matter

Depending on the environment and larger-scale structure, polymers can exist in many states:

  • Gas of isolated chains (not very relevant).
  • In solution (water or organic solvents). In dilute solutions, polymer chains float freely like gas molecules, but their length alters their behavior.
  • In a liquid state of chains (called a melt).
  • In solid state (plastic) – crystals, poly-crystals, amorphous/glassy materials.
  • Liquid crystal formed by polymer chains (polymeric liquid crystal, or PLC).
  • Gels and rubber: networks of chains tied together.

Ideal Polymer Chains in Solution

Some basic models of polymer chains

The simplest model of an ideal polymer chain is the freely jointed chain (FJC), where each monomer performs a completely independent random rotation. Here, at equilibrium, the end-to-end length of the chain is $R_0\sim\ell N^{1/2}=L^{1/2}\ell^{1/2}$, where $L=N\ell$ is the contour length and $\ell$ the monomer length.

A slightly more realistic model is the freely rotating chain (FRC), where monomers are locked at some chemically meaningful bond angle

ϑ

and rotate freely around it via the torsional angle

φ

. Here,

$$R_0^2\simeq L\,\ell_{\mathrm{eff}}\;\left(\text{so }R_0\sim N^{1/2}\right),\qquad \ell_{\mathrm{eff}}=\ell\,\frac{1+\cos\vartheta}{1-\cos\vartheta}.$$

Note that for $\cos\vartheta=0$ we find $\ell_{\mathrm{eff}}=\ell$, and this is identical to the FJC. For very small $\vartheta$, we can expand the cosine and obtain

$$\ell_{\mathrm{eff}}\xrightarrow{\vartheta\to0}\frac{4\ell}{\vartheta^2},$$

which diverges. This is the rigid rod limit (to be discussed later in detail).

A second possible improvement is the hindered rotation (HR) model. Here the angles φi have a minimum-energy value, and are taken from an uncorrelated Boltzmann distribution with some

potential

V(φi)

. This gives

$$R_0^2\simeq L\ell_{\mathrm{eff}}\sim N,\qquad \ell_{\mathrm{eff}}=\ell\left(\frac{1+\cos\vartheta}{1-\cos\vartheta}\right)\left(\frac{1+\langle\cos\varphi\rangle}{1-\langle\cos\varphi\rangle}\right).$$

Another option is called the rotational isomeric state model. Here, a finite number of angles are possible for each monomer junction and the state of the full chain is given in terms of these. Correlations are also taken into account and the solution is numeric, but aside from a complicated $\ell_{\mathrm{eff}}$ this is still an ideal chain with $R_0^2\simeq L\ell_{\mathrm{eff}}\sim N$.

Calculating the end-to-end radius

For the polymer chain of (5), obviously we will always have $\langle\mathbf{R}_N\rangle=0$. The variance, however, is generally not zero: using $\mathbf{R}_N=\sum_{i=1}^N\mathbf{r}_i$,

$$\langle\mathbf{R}_N^2\rangle=\sum_{i,j=1}^N\langle\mathbf{r}_i\cdot\mathbf{r}_j\rangle=\sum_{i,j=1}^N\ell^2\langle\cos\vartheta_{ij}\rangle.$$

FJC

In the freely jointed chain (FJC) model, there are neither correlations between different sites nor restrictions on the rotational angles. We therefore have $\langle\cos\vartheta_{ij}\rangle=\frac{1}{\ell^2}\langle\mathbf{r}_i\cdot\mathbf{r}_j\rangle=\delta_{ij}$, and

$$\langle\mathbf{R}_N^2\rangle=\sum_{ij}\ell^2\delta_{ij}=N\ell^2=L\ell.$$

Therefore, $R_0\equiv\sqrt{\langle\mathbf{R}_N^2\rangle}=\ell N^{1/2}$.
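The $\langle\mathbf{R}_N^2\rangle=N\ell^2$ result can be checked by direct sampling. A minimal Monte-Carlo sketch (the chain length and sample count below are arbitrary choices) sums random unit vectors:

```python
import math
import random

def fjc_end_to_end_sq(N, ell=1.0):
    """End-to-end distance squared of one freely jointed chain:
    the sum of N independent, uniformly oriented steps of length ell."""
    x = y = z = 0.0
    for _ in range(N):
        # a uniform random direction from normalized Gaussian components
        gx, gy, gz = random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1)
        norm = math.sqrt(gx * gx + gy * gy + gz * gz)
        x += ell * gx / norm
        y += ell * gy / norm
        z += ell * gz / norm
    return x * x + y * y + z * z

random.seed(0)
N, samples = 100, 2000
mean_R2 = sum(fjc_end_to_end_sq(N) for _ in range(samples)) / samples
print(mean_R2 / N)  # ratio to N * ell^2; should be close to 1
```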

FRC

In the freely rotating chain model, the bond angles are held constant at angles $\vartheta_i$ while the torsion angles $\varphi_i$ are taken from a uniform distribution between $-\pi$ and $\pi$. This introduces some correlation between the angles: since (for one definition of the $\varphi_i$)

$$\mathbf{r}_{i+1}=\cos\vartheta_i\,\mathbf{r}_i+\sin\vartheta_i\left(\sin\varphi_i\,\hat{\mathbf{y}}\times\mathbf{r}_i+\cos\varphi_i\,\hat{\mathbf{x}}\times\mathbf{r}_i\right),$$

and since the $\varphi_i$ are independent, any averaging over a sine or cosine of one or more of them gives zero; only the $\varphi_i$-independent terms survive, and by recursion this correlation has the simple form

$$\langle\mathbf{r}_i\cdot\mathbf{r}_j\rangle=\ell^2(\cos\vartheta)^{|i-j|}.$$

The end-to-end radius is

$$R_0^2=\sum_{i,j=1}^N\langle\mathbf{r}_i\cdot\mathbf{r}_j\rangle=\sum_{i=1}^N\langle\mathbf{r}_i^2\rangle+\ell^2\sum_{i=1}^N\sum_{j=1}^{i-1}(\cos\vartheta)^{\overbrace{i-j}^{k}}+\ell^2\sum_{i=1}^N\sum_{j=i+1}^{N}(\cos\vartheta)^{\overbrace{j-i}^{k}}=N\ell^2+\ell^2\sum_{i=1}^N\left[\sum_{k=1}^{i-1}(\cos\vartheta)^k+\sum_{k=1}^{N-i}(\cos\vartheta)^k\right].$$

At large $N$ we can approximate the two sums in $k$ by the infinite series $\sum_{k=1}^{\infty}(\cos\vartheta)^k=\frac{\cos\vartheta}{1-\cos\vartheta}$, giving

$$R_0^2\approx N\ell^2+2\ell^2\sum_{i=1}^N\frac{\cos\vartheta}{1-\cos\vartheta}=N\ell^2\,\frac{1+\cos\vartheta}{1-\cos\vartheta}.$$

To extract the Kuhn length $\ell_{\mathrm{eff}}$ from this expression, we rewrite it in the following way:

$$R_0^2=N\ell\underbrace{\left(\frac{1+\cos\vartheta}{1-\cos\vartheta}\right)\ell}_{\ell_{\mathrm{eff}}}=N\ell\,\ell_{\mathrm{eff}}=L\ell_{\mathrm{eff}},$$
$$\ell_{\mathrm{eff}}=\ell\,\frac{1+\cos\vartheta}{1-\cos\vartheta}.$$

To go back from this to the FJC limit, we would consider a chain with a random distribution of $\vartheta$ angles such that $\langle\cos\vartheta\rangle=0$.
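The large-$N$ form can be compared against the exact double sum $\ell^2\sum_{ij}(\cos\vartheta)^{|i-j|}$. A short numerical sketch (the bond angle below is an arbitrary choice) shows the two agree up to a $1/N$ correction:

```python
import math

def frc_R0_sq_exact(N, theta, ell=1.0):
    """Exact double sum: R0^2 = ell^2 * sum_{i,j} (cos theta)^|i-j|."""
    c = math.cos(theta)
    return ell ** 2 * sum(c ** abs(i - j) for i in range(N) for j in range(N))

def frc_R0_sq_largeN(N, theta, ell=1.0):
    """Large-N result: R0^2 = N ell^2 (1 + cos)/(1 - cos) = N ell * ell_eff."""
    c = math.cos(theta)
    return N * ell ** 2 * (1 + c) / (1 - c)

N, theta = 500, math.radians(68)  # arbitrary bond angle
ratio = frc_R0_sq_exact(N, theta) / frc_R0_sq_largeN(N, theta)
print(ratio)  # approaches 1 as N grows
```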

Gyration radius

Consider once again the polymer chain of (5). Define:

$$R_g^2=\frac{1}{N}\sum_{i=1}^N\left\langle(\mathbf{R}_i-\mathbf{R}_{CM})^2\right\rangle\equiv\frac{1}{N}\sum_{i=1}^N\langle\mathbf{R}_i'^2\rangle.$$

The primed coordinate system is centered on the center of mass, such that $\sum_i\mathbf{R}_i'=0$. Now, it is easier to work with the following expression:

$$\frac{1}{2N^2}\sum_{ij}\left\langle(\mathbf{R}_i-\mathbf{R}_j)^2\right\rangle=\frac{1}{2N^2}\sum_{ij}\left\langle\mathbf{R}_i'^2+\mathbf{R}_j'^2-2\mathbf{R}_i'\cdot\mathbf{R}_j'\right\rangle=\frac{2N}{2N^2}\sum_i\langle\mathbf{R}_i'^2\rangle-\frac{1}{N^2}\Big\langle\underbrace{\Big(\sum_i\mathbf{R}_i'\Big)}_{=0}\cdot\underbrace{\Big(\sum_j\mathbf{R}_j'\Big)}_{=0}\Big\rangle=R_g^2.$$

We will calculate $R_g$ for a long FJC. For $N\gg1$ we can replace the sums with integrals, obtaining

$$R_g^2=\frac{1}{2N^2}\sum_{ij}\underbrace{\left\langle(\mathbf{R}_i-\mathbf{R}_j)^2\right\rangle}_{|i-j|\ell^2}=\frac{1}{2N^2}\int_0^N\!du\int_0^N\!dv\,\ell^2|u-v|=\frac{2\ell^2}{2N^2}\int_0^N\!du\int_0^u\!dv\,(u-v)=\frac{\ell^2}{N^2}\int_0^N\!du\,\frac{u^2}{2}=\frac{1}{6}N\ell^2.$$

This gives the gyration radius for an FJC:

$$R_g^2=\frac{1}{6}N\ell^2.$$
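The continuum result can be checked against the discrete double sum $\frac{1}{2N^2}\sum_{ij}|i-j|\ell^2$; a minimal sketch:

```python
def rg_sq_discrete(N, ell=1.0):
    """Rg^2 = (1 / 2N^2) * sum_{i,j} |i - j| * ell^2 for an ideal chain."""
    s = sum(abs(i - j) for i in range(N) for j in range(N))
    return s * ell ** 2 / (2 * N ** 2)

N = 1000
ratio = rg_sq_discrete(N) / (N / 6.0)
print(ratio)  # 1 - 1/N^2, essentially 1 for large N
```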

Polymers and Gaussian distributions

An ideal chain is a Gaussian chain, in the sense that the end-to-end radius is taken from a Gaussian distribution. We will see two proofs of this.

Random walk proof

One way to show this (see Rubinstein, de Gennes) is to begin with a random walk. In one dimension, if we begin at $x=0$ and at each time step $i$ move left or right with steps $x_i=\pm\ell$, then the final displacement $x=\sum_ix_i$ is

$$x=(N_+-N_-)\ell,\qquad N=N_++N_-.$$

We define $Z_N(x)$ as the number of configurations of $N$ steps with a final displacement of $x$; $P_N(x)$ is the associated normalized probability.

$$Z_N(x)=\frac{N!}{N_+!\,N_-!}\;\xrightarrow{N\gg1}\;C_N\,e^{-\frac{x^2}{2\langle x^2\rangle}},\qquad \langle x^2\rangle=N\ell^2.$$

In fact, for $N\to\infty$ the central limit theorem tells us that $x=\sum_ix_i$ will have a Gaussian distribution for any distribution of the $x_i$. This can be extended to $d$ dimensions with a displacement $\mathbf{R}=\sum_i\mathbf{x}_i$:

$$Z_N^d=(C_N)^d\exp\left\{-\frac{dR^2}{2\langle R^2\rangle}\right\},\qquad \langle R^2\rangle=N\ell^2.$$

To find the normalization constant $C$ we must integrate over all dimensions:

$$1=\int d\mathbf{R}\,Z_N(\mathbf{R})=\left(\int dx\,Z_N(x)\right)^d=\left(C_N\sqrt{\frac{2\pi N\ell^2}{d}}\right)^d,$$
$$P_N^d(\mathbf{R})=\left(\frac{d}{2\pi N\ell^2}\right)^{d/2}\exp\left\{-\frac{dR^2}{2N\ell^2}\right\}.$$

Some notes:

  • An ideal chain can now be redefined as one such that $P_N^d(R)$ is Gaussian in any dimension $d\ge1$.
  • This is also true for a long chain with local interactions only, such that $R_0^2=N\ell\,\ell_{\mathrm{eff}}=L\ell_{\mathrm{eff}}\sim N$.
  • The probability of being in a spherical shell with radius $\mathbf{R}$ is $4\pi R^2P_N(R)$.
  • The chance of returning to the origin is $P_N(R=0)=\left(\frac{d}{2\pi\ell^2N}\right)^{d/2}\propto\left(\frac{1}{N}\right)^{d/2}\sim N^{-\gamma}$; $\gamma=\frac{d}{2}$ is typical of an ideal chain.
  • For any dimension $d\ge1$, $R_0=\sqrt{\langle R^2\rangle}\sim N^{1/2}$.

Formal proof

Another way to show this follows, which is also extensible to other distributions of the $\{\mathbf{r}_i\}$.

In general, we can write

$$P_N(\mathbf{R})=\int d\mathbf{r}_1d\mathbf{r}_2\cdots d\mathbf{r}_N\,\Psi(\mathbf{r}_1,\dots,\mathbf{r}_N)\,\delta\Big(\mathbf{R}-\sum_{i=1}^N\mathbf{r}_i\Big).$$

In the absence of correlations, we can factorize $\Psi$:

$$\Psi(\mathbf{r}_1,\dots,\mathbf{r}_N)=\psi(\mathbf{r}_1)\cdots\psi(\mathbf{r}_N).$$

For example, for a freely jointed chain $\psi(\mathbf{r}_i)=\alpha\,\delta(|\mathbf{r}_i|-\ell)$. The normalization constant is found from $\int\psi(r_i)\,4\pi r_i^2dr_i=4\pi\alpha\ell^2=1$, giving

$$\psi(\mathbf{r}_i)=\frac{1}{4\pi\ell^2}\delta(|\mathbf{r}_i|-\ell).$$

We can replace the delta function with $\delta(\mathbf{r})=\frac{1}{(2\pi)^3}\int d\mathbf{k}\,e^{i\mathbf{k}\cdot\mathbf{r}}$, leaving us with

$$P_N(\mathbf{R})=\frac{1}{(2\pi)^3}\int d\mathbf{k}\,e^{-i\mathbf{k}\cdot\mathbf{R}}\int d\mathbf{r}_1\cdots d\mathbf{r}_N\prod_i\left[e^{i\mathbf{k}\cdot\mathbf{r}_i}\psi(\mathbf{r}_i)\right]=\frac{1}{(2\pi)^3}\int d\mathbf{k}\,e^{-i\mathbf{k}\cdot\mathbf{R}}\left[\int d\mathbf{r}\,e^{i\mathbf{k}\cdot\mathbf{r}}\psi(\mathbf{r})\right]^N.$$

In spherical coordinates,

$$\int d\mathbf{r}\,e^{i\mathbf{k}\cdot\mathbf{r}}\psi(\mathbf{r})=\int r^2dr\,d\vartheta\,d\varphi\,\sin\vartheta\,e^{ikr\cos\vartheta}\,\frac{1}{4\pi\ell^2}\delta(r-\ell)\overset{\alpha=\cos\vartheta}{=}\frac{1}{2}\int_{-1}^{1}d\alpha\,e^{ik\ell\alpha}=\frac{\sin k\ell}{k\ell},$$

which gives

$$P_N(\mathbf{R})=\frac{1}{(2\pi)^3}\int d\mathbf{k}\,e^{-i\mathbf{k}\cdot\mathbf{R}}\left(\frac{\sin k\ell}{k\ell}\right)^N.$$

We are left with the task of evaluating the integral. This can be done analytically with the Laplace method for large $N$, since the largest contribution is around $k=0$: we can approximate $\left(\frac{\sin k\ell}{k\ell}\right)^N$ by $\left(1-\frac{(k\ell)^2}{6}+\dots\right)^N\approx e^{-\frac{(k\ell)^2N}{6}}$.

The integral is then

$$P_N(\mathbf{R})=\frac{1}{(2\pi)^3}\int d\mathbf{k}\,e^{-i\mathbf{k}\cdot\mathbf{R}}e^{-\frac{k^2\ell^2N}{6}}=\frac{1}{(2\pi)^3}\prod_{\alpha=1}^{3}\int dk_\alpha\exp\left(-ik_\alpha R_\alpha-\frac{N\ell^2k_\alpha^2}{6}\right)=\left(\frac{3}{2\pi N\ell^2}\right)^{3/2}\exp\left\{-\frac{3R^2}{2N\ell^2}\right\}.$$

This is, of course, the same Gaussian form we obtained from the random walk (we have done the special case of $d=3$, but once again this process can be repeated for a general dimension $d\ge1$).

03/26/2009

Rigid and Semi-Rigid Polymer Chains in Solution

Worm-like chain

In considering the $\vartheta\to0$ limit of the freely rotating chain, we have seen that $\ell_{\mathrm{eff}}\sim\ell\vartheta^{-2}$ diverges. This is of course unphysical, and this limit is actually important for many interesting cases of stiff chains (for instance, DNA). If we take the $N\to\infty$ limit along with $\vartheta\to0$ and start over, we can make the following change of variables:

$$\langle\mathbf{r}_i\cdot\mathbf{r}_j\rangle=\ell^2\langle\cos\vartheta_{ij}\rangle=\ell^2(\cos\vartheta)^{|i-j|}=\ell^2\exp\left[-\frac{|i-j|\ell}{\ell_p}\right],$$

which defines the persistence length $\ell_p$. For the FRC model,

$$\ell_p=-\frac{\ell}{\ln\cos\vartheta}.$$

This is a useful concept in general, however: it defines the typical length scale over which correlations between chain angles die out, and is therefore an expression of the chain's rigidity.

At small $\vartheta$ we can expand the logarithm to get

$$\ln\cos\vartheta\approx\ln\left(1-\frac{\vartheta^2}{2}\right)\approx-\frac{\vartheta^2}{2},$$
$$\ell_p\approx\frac{2\ell}{\vartheta^2}.$$

Taking the continuum limit carefully then requires us to consider $N\to\infty$ and $\ell\to0$ such that $R_{\max}=N\ell\cos\frac{\vartheta}{2}\approx N\ell$ is constant. Now, we can calculate the end-to-end length $R_0^2=\langle R_N^2\rangle$ in the continuum limit using the new form for the correlations:

$$R_0^2=\ell^2\sum_{ij}(\cos\vartheta)^{|i-j|}=\ell^2\sum_{ij}\exp\left\{-\frac{|i-j|\ell}{\ell_p}\right\}\approx\int_0^{R_m}\!du\int_0^{R_m}\!dv\,\exp\left\{-\frac{|u-v|}{\ell_p}\right\}.$$

To simplify the calculation, we can define the dimensionless variables $u'=u/\ell_p$, $v'=v/\ell_p$ and $R_m'=R_m/\ell_p$. With these replacements,

$$\frac{R_0^2}{\ell_p^2}=\int_0^{R_m'}\!du'\int_0^{R_m'}\!dv'\,e^{-|u'-v'|}=\int_0^{R_m'}\!du'\,e^{-u'}\!\int_0^{u'}\!dv'\,e^{v'}+\int_0^{R_m'}\!du'\,e^{u'}\!\int_{u'}^{R_m'}\!dv'\,e^{-v'}=2R_m'-2\left(1-e^{-R_m'}\right).$$

The final result (known as the Kratky-Porod worm-like chain, or WLC) is

$$R_0^2=\langle R_N^2\rangle=2\ell_pR_{\max}-2\ell_p^2\left(1-e^{-R_{\max}/\ell_p}\right).$$

Importantly, it does not depend on $\vartheta$, $\ell$ or $N$ separately, but only on the physically transparent persistence length and contour length.

We will consider the two limits where one parameter is much larger than the other. First, for $\ell_p\gg R_{\max}$ we encounter the rigid rod limit: we can expand the previous expression into

$$R_0^2=2\ell_pR_{\max}-2\ell_p^2\left(1-1+\frac{R_{\max}}{\ell_p}-\frac{1}{2}\left(\frac{R_{\max}}{\ell_p}\right)^2+\dots\right)=R_{\max}^2+O\left(\frac{R_{\max}^3}{\ell_p}\right),\qquad R_0\sim N.$$

The fact that $R_0\sim N$ rather than $R_0\sim N^{1/2}$ is a result of the long-range correlations we have introduced, and is an indication that in this regime the material is in an essentially different phase. Somewhere between the ideal chain and the rigid rod, a crossover regime must exist.


For $\ell_p\ll R_{\max}$ we can neglect the exponential, obtaining

$$R_0^2\approx2\ell_pR_{\max},\qquad \ell_p\approx\frac{2\ell}{\vartheta^2},\qquad R_{\max}\approx N\ell.$$

This therefore returns us to the ideal chain limit, with a Kuhn length $\ell_{\mathrm{eff}}=2\ell_p$. The crossover phenomenon we discussed occurs on the chain itself here, as we observe correlations between its pieces at differing length scales: at small scales ($\ll\ell_p$) it behaves like a rigid rod, while at long scales we have an uncorrelated random walk. An interesting example is a DNA chain, which can be described by a worm-like chain with $\ell_p\approx500\,\text{Å}$ and $R_{\max}\approx10\,\mu\text{m}\gg\ell_p$: it will therefore typically cover a radius of $R_0\approx7000\,\text{Å}$.

Free Energy of the Ideal Chain and Entropic Springs

We have calculated distributions of $\mathbf{R}$ for Gaussian chains with $N$ components, $Z_N(\mathbf{R})$. Let's consider the entropy of such chains:

$$S_N(\mathbf{R})=k_B\ln Z_N(\mathbf{R}),\qquad P_N(\mathbf{R})=\frac{Z_N(\mathbf{R})}{\int d\mathbf{R}\,Z_N(\mathbf{R})}=\left(\frac{3}{2\pi N\ell^2}\right)^{3/2}\exp\left(-\frac{3R^2}{2N\ell^2}\right).$$

The logarithm of $Z_N(\mathbf{R})$ is the same as that of $P_N(\mathbf{R})$, aside from a term which does not depend on $\mathbf{R}$. Therefore,

$$S_N(\mathbf{R})=k_B\ln P_N(\mathbf{R})+k_B\ln\left(\int Z_N(\mathbf{R})\,d\mathbf{R}\right)=S_N(0)-\frac{3}{2}k_B\frac{R^2}{N\ell^2}.$$

The free energy is

$$F_N(\mathbf{R})=U_N(\mathbf{R})-TS_N(\mathbf{R})=\frac{3}{2}k_BT\frac{R^2}{N\ell^2}+\underbrace{F_N(0)}_{U_N(0)-TS_N(0)}$$

since $U_N(\mathbf{R})=U_N(0)$ for an ideal chain.

What does $F_N(\mathbf{R})$ mean? It represents the energy needed to stretch the polymer, and this energy is $\propto R^2$ like a harmonic spring ($U=\frac{1}{2}kx^2$) with $k=\frac{3k_BT}{N\ell^2}\propto\frac{T}{N}$. Note that the polymer becomes less elastic (more rigid) as the temperature increases, unlike most solids. This is a physical result and can be verified experimentally: for instance, the spring constant of rubber (which is made of networks of polymer chains) increases linearly with temperature. Consider an experiment where instead of holding the chain at constant length, we apply a perturbatively weak force $\pm\mathbf{f}$ to its ends and measure its average length. We can perform a Legendre transform between distance and force: from equality of forces along the direction in which they are applied,

$$f_x=\frac{\partial F_N}{\partial R_x}=\frac{\partial}{\partial R_x}\left(\frac{3k_BT}{2N\ell^2}R^2\right)=\underbrace{\frac{3k_BT}{N\ell^2}}_{k}R_x,\qquad \mathbf{f}=k\mathbf{R}.$$

To be in this linear response ($\mathbf{f}\propto\mathbf{R}$) region, we must demand that $R\sim|\mathbf{R}_0|\ll R_{\max}=N\ell$, and to stress this we can write

$$\mathbf{f}=\frac{3k_BT}{\ell}\,\frac{\mathbf{R}}{R_{\max}}.$$

Numerically, with a nanometric $\ell$ and at room temperature, the forces should be in the picoNewton range to meet this requirement. A more rigorous treatment which works at arbitrary forces can be carried out by considering an FJC with oppositely charged ($\pm q$) ends in an electric field $\mathbf{E}\parallel\hat{\mathbf{z}}$. The chain's sites are at $\mathbf{r}_i$, with $\mathbf{R}\equiv\mathbf{R}_N-\mathbf{R}_0$. The potential is

$$U_{\mathrm{elec}}=+q\mathbf{E}\cdot\mathbf{R}_0-q\mathbf{E}\cdot\mathbf{R}_N=-\mathbf{f}\cdot\mathbf{R},\qquad \mathbf{f}=q\mathbf{E}.$$

Since $\mathbf{R}=\sum_i\mathbf{r}_i$, we can write the potential as

$$U_{\mathrm{elec}}=-q\mathbf{E}\cdot\mathbf{R}=-q\mathbf{E}\cdot\Big(\sum_i\mathbf{r}_i\Big)=-f\ell\sum_i\cos\vartheta_i,$$

with $\cos\vartheta_i=\hat{\mathbf{z}}\cdot\hat{\mathbf{r}}_i$. The partition function is

$$Z_N(\mathbf{f})=\int d\mathbf{r}_1\cdots d\mathbf{r}_N\,\Psi(\{\mathbf{r}_i\})\,e^{-\beta U_{\mathrm{elec}}(\{\mathbf{r}_i\})}=\mathrm{Tr}\left[\Psi\,e^{-\beta U_{\mathrm{elec}}}\right].$$

The function $\Psi$ is separable into a product of functions $\psi(\mathbf{r}_i)=\frac{1}{4\pi\ell^2}\delta(|\mathbf{r}_i|-\ell)$. Now,

$$\exp\left\{-\frac{U_{\mathrm{elec}}}{k_BT}\right\}=\exp\left\{\frac{f\ell}{k_BT}\sum_i\cos\vartheta_i\right\}.$$

In spherical coordinates $\mathbf{r}_i=(r_i,\vartheta_i,\varphi_i)$ we can solve the integral:

$$Z_N(\mathbf{f})=\left[\int_0^\infty dr\,\frac{r^2}{4\pi\ell^2}\delta(r-\ell)\right]^N\left[\int_0^{2\pi}d\varphi\right]^N\prod_i\int_0^\pi d\vartheta_i\sin\vartheta_i\,e^{\frac{f\ell}{k_BT}\cos\vartheta_i}\overset{x=\cos\vartheta}{=}\left(\frac{1}{4\pi}\right)^N(2\pi)^N\left[\int_{-1}^1dx\,e^{\frac{f\ell}{k_BT}x}\right]^N=\frac{1}{2^N}\left[\frac{2k_BT}{f\ell}\sinh\left(\frac{f\ell}{k_BT}\right)\right]^N=\left[\frac{k_BT}{f\ell}\sinh\left(\frac{f\ell}{k_BT}\right)\right]^N.$$

The Gibbs free energy (Gibbs, because the external force is fixed) is then

$$G_N(\mathbf{f})=-k_BT\ln Z_N(\mathbf{f})=-k_BTN\ln\left[\sinh\left(\frac{f\ell}{k_BT}\right)\right]+k_BTN\ln\left(\frac{f\ell}{k_BT}\right),$$

and the average extension is

$$\langle R_f\rangle=-\frac{\partial G_N(f)}{\partial f}=k_BTN\,\frac{\ell}{k_BT}\coth\Big(\underbrace{\frac{f\ell}{k_BT}}_{\alpha}\Big)-k_BTN\,\frac{1}{f}=N\ell\left[\coth\alpha-\frac{1}{\alpha}\right]\equiv N\ell\,\mathcal{L}(\alpha).$$

The Langevin function $\mathcal{L}(\alpha)=\coth\alpha-\frac{1}{\alpha}$ is also typical of spin magnetization in external magnetic fields and of dipoles in electric fields at finite temperatures.

04/02/2009
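A quick sketch of $\mathcal{L}(\alpha)$ checks its two limits: for weak forces $\mathcal{L}(\alpha)\approx\alpha/3$, which recovers the linear entropic-spring response $\langle R_f\rangle\approx\frac{N\ell^2}{3k_BT}f$, while for strong forces the extension saturates at $N\ell$:

```python
import math

def langevin(a):
    """Langevin function L(a) = coth(a) - 1/a."""
    return 1.0 / math.tanh(a) - 1.0 / a

# Weak force: L(a) ~ a/3, the linear entropic-spring regime
print(langevin(1e-3) / (1e-3 / 3.0))  # close to 1
# Strong force: L(a) -> 1, i.e. <R> -> N ell (full extension)
print(langevin(50.0))                 # approaches 1
```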

Polymers and Fractal Curves

Introduction to fractals

Book: B. Mandelbrot.

A fractal is an object with fractal dimensionality $D_f$, also called the Hausdorff dimension. This implies a new definition of dimensionality, which we will discuss. Consider a sphere of radius $R$. It is considered three-dimensional because it has $V=\frac{4\pi}{3}R^3$ and $M=\rho V\propto R^D$ for $D=3$. A plane has by the same reasoning $M\propto R^D$ for $D=2$, and is therefore a 2D object. Fractals are mathematical objects such that by the same sort of calculation they will have $M\propto R^{D_f}$, for a $D_f$ which is not necessarily an integer number (this definition is due to Hausdorff). One example is the Koch curve (see (7)): in each of its iterations, we decrease the length of a segment by a factor of 3 and decrease its mass by a factor of 4. We will therefore have

$$M_2=\frac{1}{4}M_1=A(r_2)^{D_f}=A\left(\frac{1}{3}r_1\right)^{D_f},\qquad M_1=A(r_1)^{D_f}\;\Rightarrow\;\frac{1}{4}=\left(\frac{1}{3}\right)^{D_f}\;\Rightarrow\;D_f=\frac{\ln4}{\ln3}\approx1.26,$$

so $1<D_f<2$. Note that a fractal's "real" length is infinite, and its approximations will depend on the resolution. The structure exhibits self-similarity: namely, on different length scales it will look the same. This can be seen in the Koch snowflake: at any magnification, a part of the curve looks similar to the whole curve. There's a very nice animation of this in Wikipedia. The total length of the curve depends on the ruler used to measure it: the actual length at iteration $n$ is $L_0\left(\frac{4}{3}\right)^n$.

Another definition for the fractal dimension is

$$D_f=1-\frac{\ln(L/L_0)}{\ln(\ell/\ell_0)}=\frac{\ln4}{\ln3}.$$
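The ruler-based definition can be checked directly from the iteration rule $L_n=L_0(4/3)^n$, $r_n=r_0(1/3)^n$ (a minimal sketch with $L_0=r_0=1$):

```python
import math

# Koch curve: at iteration n the ruler is r_n = (1/3)^n and the measured
# length is L_n = (4/3)^n (taking L_0 = r_0 = 1).
n = 10
Ln, rn = (4.0 / 3.0) ** n, (1.0 / 3.0) ** n
Df = 1.0 - math.log(Ln) / math.log(rn)
print(Df)  # ln 4 / ln 3, about 1.26
```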

Linking fractals to polymers

Consider the ideal Gaussian chain again. It has $R_0^2=N\ell^2\propto N$. Since $N$ is proportional to the mass, we have an object with a fractal dimension of 2, no matter what the dimensionality of the actual space is. We can say that a polymer in $d$-space fills only $D_f\le d$ dimensions of the space it occupies, where $D_f$ is 2 for an ideal Gaussian polymer. Flory has shown that in some cases a non-ideal polymer can also have $D_f<2$, in particular when a self-avoiding walk (SAW) is accounted for. The SAW, as opposed to the Gaussian walk (GW), is the defining property of a physical rather than ideal polymer, and gives a fractal dimension of $D_f\approx1.66$. A collapsed polymer has $D_f=3$ and fills space completely. Note that two polymers with fractal dimensions $D_f$ and $D_f^*$ do not "feel" each other statistically if $D_f+D_f^*<d$.

Polymers, Path Integrals and Green's Functions

Books: Doi & Edwards, F. Wiegel, or Feynman & Hibbs.

Local Gaussian chain model and the continuum limit

This model is also known as LGC. We start from an FJC in 3D where $\Psi=\prod_i\psi(\mathbf{r}_i)$ and $\psi(\mathbf{r}_i)=\frac{1}{4\pi\ell^2}\delta(|\mathbf{r}_i|-\ell)$. By the central limit theorem, $\mathbf{R}=\sum_i\mathbf{r}_i$ will always be taken from a Gaussian distribution when the number of monomers is large (whatever the form of $\psi$, as long as it is symmetrical around zero such that $\langle\mathbf{r}_i\rangle=0$):

$$P_N(\mathbf{R})=\left(\frac{3}{2\pi N\ell^2}\right)^{3/2}\exp\left(-\frac{3R^2}{2N\ell^2}\right).$$

In the LGC approximation we exchange the rigid rods for Gaussian springs with $\langle\mathbf{r}_i\rangle=0$ and $\langle\mathbf{r}_i^2\rangle=\ell^2$, by setting

$$\psi(\mathbf{r}_i)=\left(\frac{3}{2\pi\ell^2}\right)^{3/2}\exp\left(-\frac{3r_i^2}{2\ell^2}\right).$$

We can then obtain for the full probability distribution

$$\Psi(\{\mathbf{r}_i\})=\prod_i\psi(\mathbf{r}_i)=\left(\frac{3}{2\pi\ell^2}\right)^{3N/2}\exp\left(-\sum_{i=1}^N\frac{3(\mathbf{R}_i-\mathbf{R}_{i-1})^2}{2\ell^2}\right),$$

where $\mathbf{r}_i=\mathbf{R}_i-\mathbf{R}_{i-1}$. $\Psi$ describes $N$ harmonic springs with $k=\frac{3k_BT}{\ell^2}$ connected in series:

$$U_0(\{\mathbf{R}_i\})=\frac{3k_BT}{2\ell^2}\sum_{i=1}^N(\mathbf{R}_i-\mathbf{R}_{i-1})^2,\qquad \Psi\propto e^{-\frac{U_0}{k_BT}}.$$

An exact property of the Gaussian distributions we have been using is that a sub-chain of $m-n$ monomers (such as the sub-chain starting at index $n$ and ending at $m$) will also have a Gaussian distribution of the end-to-end length:

$$P(\mathbf{R}_m-\mathbf{R}_n,m-n)=\left(\frac{3}{2\pi|m-n|\ell^2}\right)^{3/2}\exp\left(-\frac{3(\mathbf{R}_m-\mathbf{R}_n)^2}{2|m-n|\ell^2}\right),\qquad \left\langle(\mathbf{R}_m-\mathbf{R}_n)^2\right\rangle=\ell^2|m-n|.$$

In the continuum limit, we will get Wiener distributions: the correct way to calculate the limit is to take $N\to\infty$ and $\ell\to0$ with $N\ell=L$ remaining constant. The length along the chain up to site $n$ is then described by $s=n\ell$, $0\le s\le L$. In this limit we can also substitute derivatives, $\frac{\partial\mathbf{R}}{\partial s}=\frac{1}{\ell}\frac{\partial\mathbf{R}}{\partial n}$, for the finite differences $\mathbf{R}_i-\mathbf{R}_{i-1}$, such that

$$\sum_{i=1}^N\frac{1}{\ell^2}(\mathbf{R}_i-\mathbf{R}_{i-1})^2\to\frac{1}{\ell}\int_0^Lds\left(\frac{\partial\mathbf{R}}{\partial s}\right)^2=\frac{1}{\ell^2}\int_0^Ndn\left(\frac{\partial\mathbf{R}(n)}{\partial n}\right)^2,$$
$$\Psi(\{\mathbf{R}_i\})\to\mathrm{const.}\times\exp\left\{-\frac{3}{2\ell^2}\int_0^N\left(\frac{\partial\mathbf{R}(n)}{\partial n}\right)^2dn\right\}.$$

If we add an external spatial potential $U(\mathbf{R}_i)$ (which is single-body), its contribution to the Boltzmann factor is a factor of

$$\exp\left\{-\frac{1}{k_BT}\sum_{i=1}^NU(\mathbf{R}_i)\right\}\to\exp\left\{-\frac{1}{k_BT}\int_0^NU(\mathbf{R}(n))\,dn\right\}.$$

04/23/2009

Functional path integrals and the continuum distribution function

Books: F. Wiegel, Doi & Edwards.

Consider what happens when we hold the ends of a chain defined by $\{\mathbf{R}_i\}$ in place, such that $\mathbf{R}_0=\mathbf{R}'$ and $\mathbf{R}_N=\mathbf{R}$. We can calculate the probability of this configuration from

$$P_N(\mathbf{R}_0,\mathbf{R}_N)\propto\int\prod_{i=1}^{N-1}d\mathbf{R}_i\,\Psi(\{\mathbf{R}_i\}).$$

In the continuum limit, a chain configuration translates into a function $\mathbf{R}(n)$, and the product of integrals can be taken as a path integral according to $\int\prod_{i=1}^{N-1}d\mathbf{R}_i\to\int\mathcal{D}\mathbf{R}(n)$. The probability of each configuration satisfying our constraint is a functional of $\mathbf{R}(n)$. The partition function is:

$$Z_N(\mathbf{R},\mathbf{R}')=\int_{\mathbf{R}(0)=\mathbf{R}'}^{\mathbf{R}(N)=\mathbf{R}}\mathcal{D}\mathbf{R}(n)\exp\left\{-\frac{3}{2\ell^2}\int_0^N\left(\frac{\partial\mathbf{R}(n)}{\partial n}\right)^2dn-\frac{1}{k_BT}\int_0^NU(\mathbf{R}(n))\,dn\right\},$$

and we can normalize it to obtain a probability distribution function, given in terms of this path integral:

$$P_N(\mathbf{R},\mathbf{R}')=\frac{Z_N(\mathbf{R},\mathbf{R}')}{\int Z_N(\mathbf{R},\mathbf{R}')\,d\mathbf{R}\,d\mathbf{R}'}.$$

We now introduce the Green's function $G(\mathbf{R},\mathbf{R}';N)$, which as we will soon see describes the evolution from $\mathbf{R}'$ to $\mathbf{R}$ in $N$ steps. We define it as:

$$G(\mathbf{R},\mathbf{R}';N)\equiv\frac{\displaystyle\int_{\mathbf{R}(0)=\mathbf{R}'}^{\mathbf{R}(N)=\mathbf{R}}\mathcal{D}\mathbf{R}(n)\exp\left\{-\frac{3}{2\ell^2}\int_0^N\left(\frac{\partial\mathbf{R}}{\partial n}\right)^2dn-\frac{1}{k_BT}\int_0^NU(\mathbf{R}(n))\,dn\right\}}{\displaystyle\int d\mathbf{R}\,d\mathbf{R}'\int_{\mathbf{R}(0)=\mathbf{R}'}^{\mathbf{R}(N)=\mathbf{R}}\mathcal{D}\mathbf{R}(n)\exp\left\{-\frac{3}{2\ell^2}\int_0^N\left(\frac{\partial\mathbf{R}}{\partial n}\right)^2dn\right\}}.$$

Note that while the numerator is proportional to the probability $P_N$, the denominator does not include the external potential.

G has several important properties:

  1. It is equal to the exact probability $P_N$ for Gaussian chains in the absence of an external potential.
  2. If we consider that the chain might be divided into one sub-chain between step 0 and $i$ and a second sub-chain from step $i$ to step $N$, then
    $$G(\mathbf{R},\mathbf{R}';N)=\int d\mathbf{R}''\,G(\mathbf{R},\mathbf{R}'';N-i)\,G(\mathbf{R}'',\mathbf{R}';i).$$
    We can use this property to compute expectation values of observables. If we have some function of a specific monomer, $A(\mathbf{R}_i)$, for instance:
    $$\langle A(\mathbf{R}_i)\rangle=\frac{\int d\mathbf{R}_N\,d\mathbf{R}_0\,d\mathbf{R}_i\,G(\mathbf{R}_N,\mathbf{R}_i;N-i)\,A(\mathbf{R}_i)\,G(\mathbf{R}_i,\mathbf{R}_0;i)}{\int d\mathbf{R}_N\,d\mathbf{R}_0\,G(\mathbf{R}_N,\mathbf{R}_0;N)}.$$
  3. The Green's function is the solution of the differential equation (see proof in Doi & Edwards and in the homework):
    $$\left[\frac{\partial}{\partial N}-\frac{\ell^2}{6}\frac{\partial^2}{\partial\mathbf{R}^2}+\frac{U(\mathbf{R})}{k_BT}\right]G(\mathbf{R},\mathbf{R}';N)=\delta(\mathbf{R}-\mathbf{R}')\delta(N).$$
  4. The Green's function is defined as 0 for $N<0$, and is equal to $\delta(\mathbf{R}-\mathbf{R}')$ when $N\to0$, in order to satisfy the boundary conditions.
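In the free case, the composition property reduces to the statement that convolving Gaussian propagators adds their variances. A one-dimensional numerical sketch (with an arbitrary per-step variance `s2`) checks this on a grid:

```python
import math

def g(x, n, s2=1.0):
    """Free 1D Green's function: a Gaussian of variance n * s2 after n steps."""
    return math.exp(-x * x / (2 * n * s2)) / math.sqrt(2 * math.pi * n * s2)

# Composition rule: G(x; N) = integral dy G(x - y; N - i) G(y; i)
N, i, x = 20, 7, 1.5
dy = 0.01
conv = sum(g(x - k * dy, N - i) * g(k * dy, i) for k in range(-3000, 3001)) * dy
print(conv, g(x, N))  # the two values agree
```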

Relationship to quantum mechanics

This equation for $N>0$, $\mathbf{R}\neq\mathbf{R}'$ is very similar in form to the Schrödinger equation. To see this, we can rewrite it as:

$$\Bigg[\frac{\partial}{\partial N}-\underbrace{\left(\frac{\ell^2}{6}\frac{\partial^2}{\partial\mathbf{R}^2}-\frac{U(\mathbf{R})}{k_BT}\right)}_{\mathcal{L}}\Bigg]G(\mathbf{R},\mathbf{R}';N)=\left[\frac{\partial}{\partial N}-\mathcal{L}\right]G(\mathbf{R},\mathbf{R}';N)=0.$$

If we make the replacements $N\to\frac{it}{\hbar}$, $\mathcal{L}\to-\mathcal{H}$ and $\frac{\ell^2}{6}\to\frac{\hbar^2}{2m}$, this is identical to $i\hbar\frac{\partial}{\partial t}G=\left[-\frac{\hbar^2}{2m}\nabla^2+V(\mathbf{R})\right]G=\mathcal{H}G$. Like the quantum Hamiltonian, the Hermitian operator $\mathcal{L}$ has eigenfunctions such that $\mathcal{L}\varphi_k=-E_k\varphi_k$, which according to Sturm-Liouville theory span the solution space ($\sum_k\varphi_k^*(\mathbf{r})\varphi_k(\mathbf{r}')=\delta(\mathbf{r}-\mathbf{r}')$) and can be orthonormalized ($\int\varphi_k^*\varphi_m\,d\mathbf{r}=\delta_{km}$).

The solution of the non-homogeneous problem is therefore

$$G(\mathbf{R},\mathbf{R}';N)=\sum_k\varphi_k^*(\mathbf{R}')\varphi_k(\mathbf{R})\,e^{-NE_k},$$

where the $\varphi_k$ are solutions of the homogeneous equation $(\mathcal{L}+E_n)\varphi_n=0$.

Example: a polymer chain in a box of dimensions $L_x\times L_y\times L_z$. The potential $U$ is 0 within the box and $\infty$ on the edges. The boundary conditions are $G(\mathbf{R},\mathbf{R}';N)=0$ if $\mathbf{R}$ or $\mathbf{R}'$ are on the boundary. The function is also separable in Cartesian coordinates:

$$G(\mathbf{R},\mathbf{R}';N)=\prod_{i=1}^3g_i(R_i,R_i';N).$$

Let's solve for $g_1\equiv g_x$ (the other $g$ functions are similar):

$$\left(\frac{\partial}{\partial N}-\frac{\ell^2}{6}\frac{\partial^2}{\partial R_1^2}\right)u(R_1,N)=0.$$

If we separate variables again with the ansatz $u(R_1,N)=\varphi(R_1)e^{-EN}$, we obtain

$$E\varphi+\frac{\ell^2}{6}\varphi''=0,\qquad \varphi(R_1)=A\sin k_1R_1+B\cos k_1R_1,\qquad E=\frac{\ell^2k_1^2}{6}.$$

With the boundary conditions,

$$\varphi(0)=0\;\Rightarrow\;B=0,\qquad \varphi(L_x)=0\;\Rightarrow\;k_1=\frac{n\pi}{L_x}.$$

This gives an expression for the energies and eigenfunctions:

$$E_n=\frac{\ell^2\pi^2}{6L_x^2}n^2=E_1n^2,\qquad \varphi_n(R_1)=\sqrt{\frac{2}{L_x}}\sin\left(\frac{n\pi}{L_x}R_1\right),\qquad u_n(R_1,N)=\sqrt{\frac{2}{L_x}}\sin\left(\frac{n\pi}{L_x}R_1\right)e^{-E_nN}.$$

The Green's function can finally be written as

$$g_1(R_1,R_1';N)=\sum_{n=1}^\infty\frac{2}{L_x}\sin\left(\frac{n\pi}{L_x}R_1\right)\sin\left(\frac{n\pi}{L_x}R_1'\right)e^{-NE_n}.$$

Since, with the Cartesian symmetry of the box, the partition function $Z=\prod_{i=1}^3Z_i$ is also separable, and using

$$\int_0^{L_x}\sin\frac{n\pi}{L_x}x\,dx=\begin{cases}\frac{2L_x}{n\pi}&n=1,3,5,\dots\ \text{(odd)},\\[2pt]0&n=2,4,6,\dots\ \text{(even)},\end{cases}$$

we can calculate

$$Z_x=\int dx\,dx'\,g_1(x,x';N)=\frac{2}{L_x}\sum_{n=1,3,5\dots}\left(\frac{2L_x}{n\pi}\right)^2\exp\left(-\frac{N\ell^2\pi^2n^2}{6L_x^2}\right)=\frac{8L_x}{\pi^2}\sum_{n=1,3,5\dots}\frac{1}{n^2}e^{-n^2E_1N}.$$

We can now go on to calculate $F=-k_BT\ln Z_xZ_yZ_z$, and we can for instance calculate the pressure on the box edges in the $x$ direction:

$$P_x=-\frac{1}{L_yL_z}\frac{\partial F}{\partial L_x}.$$

Two limiting cases can be done analytically: first, if the box is much larger than the polymer, $L_i\gg\ell\sqrt{N}$, then

$$\frac{1}{n^2}\exp\left\{-\frac{\pi^2\ell^2N}{6L_x^2}n^2\right\}\approx\frac{1}{n^2},\qquad \sum_{n=1,3,5\dots}\frac{1}{n^2}=\frac{\pi^2}{8}\;\Rightarrow\;Z_x\approx L_x,$$
$$P_x=-\frac{1}{L_yL_z}\frac{\partial}{\partial L_x}\left(-k_BT\ln Z_xZ_yZ_z\right)=\frac{k_BT}{L_yL_z}\frac{1}{Z_x}\frac{\partial Z_x}{\partial L_x}=\frac{k_BT}{V}.$$

This is equivalent to a dilute gas of polymers (done here for a single chain). In the opposite limit, $L_i\ll\ell\sqrt{N}$, the polymer should be "squeezed". The Gaussian approximation will be no good if we squeeze too hard, but at least for some intermediate regime we can neglect all but the first term in the series:

$$Z_x=\frac{8L_x}{\pi^2}\sum_{n=1,3,5\dots}\frac{1}{n^2}e^{-\frac{\pi^2\ell^2n^2}{6L_x^2}N}\approx\frac{8L_x}{\pi^2}e^{-\frac{\pi^2\ell^2}{6L_x^2}N},$$
$$P_x=\frac{k_BT}{L_yL_z}\frac{\partial\ln Z_x}{\partial L_x}=\frac{k_BT}{L_yL_z}\left\{\frac{1}{L_x}+\frac{\pi^2\ell^2N}{3L_x^3}\right\}=\frac{k_BT}{V}\left\{1+\frac{\pi^2\ell^2N}{3L_x^2}\right\}.$$

There is a large extra pressure caused by the "squeezing" of the chain and the corresponding loss of its entropy.
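Both limits can be checked by summing the series for $Z_x$ directly; a minimal sketch (with arbitrary values $N=1000$, $\ell=1$):

```python
import math

def Zx(Lx, N=1000, ell=1.0, nmax=199):
    """Z_x = (8 Lx / pi^2) * sum over odd n of exp(-n^2 E1 N) / n^2,
    with E1 = pi^2 ell^2 / (6 Lx^2)."""
    E1 = math.pi ** 2 * ell ** 2 / (6 * Lx ** 2)
    s = sum(math.exp(-n * n * E1 * N) / (n * n) for n in range(1, nmax + 1, 2))
    return 8 * Lx / math.pi ** 2 * s

# Wide box, Lx >> ell*sqrt(N): Zx -> Lx, giving the ideal-gas pressure kT/V
print(Zx(1e4) / 1e4)  # close to 1
# Narrow box: only the n = 1 term survives (dominant ground state)
Lx = 5.0
ground = 8 * Lx / math.pi ** 2 * math.exp(-math.pi ** 2 * 1000 / (6 * Lx ** 2))
print(Zx(Lx) / ground)  # close to 1
```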

04/30/2009

The same formalism can be used to treat polymers near a wall or in a well near a wall, for instance (see the homework for details). In the well case, as in the similar quantum problem, we will have bound states for $T<T_c$ (where the critical temperature is defined by a critical value of $\beta_cV_0=\frac{V_0}{k_BT_c}$, which describes the condition for the potential well to be "deep" enough to contain a bound state).

Dominant ground state

Note that since

$$G=\sum_{n=0}^\infty\varphi_n(x)\varphi_n^*(x')\,e^{-NE_n},$$

where $N$ is positive and the $E_n$ are real and ordered (assuming no degeneracy, $E_0<E_1<E_2<\dots$), at large $N$ we can neglect all but the leading terms (smallest energies) and

$$G\approx\varphi_0(x)\varphi_0^*(x')\,e^{-NE_0}+\varphi_1(x)\varphi_1^*(x')\,e^{-NE_1}+\dots$$

This is possible because the exponential is decaying rather than oscillating, as it is in the quantum mechanical case. Taking only the first term in this series is called the dominant ground state approximation.

Polymers in Good Solutions and Self-Avoiding Walks

Virial expansion

So far, in treating Gaussian chains, we have neglected any long-ranged interactions. However, polymers in solution cannot self-intersect, and this introduces interactions $V(\mathbf{R}_i-\mathbf{R}_j)$ into the picture which are local in real space, but long-ranged in terms of the contour spacing – that is, they are not limited to small $|i-j|$. The importance of this effect depends on dimensionality: it is easy to imagine that intersections in 2D are more effective in restricting a polymer's shape than intersections in 3D.

The interaction potential $V(\mathbf{r})$ can in general have both attractive and repulsive parts, and depends on the detailed properties of the solvent. If we consider it to be due to a long-ranged attractive van der Waals interaction and a short-ranged repulsive hard-core interaction, it might be modeled by a 12-6 Lennard-Jones potential. To treat interactions perturbatively within statistical mechanics, we can use a virial expansion (this is a statistical-mechanical expansion in powers of the density, useful for systematic perturbative corrections to non-interacting calculations when one wants to include many-body interactions). The second virial coefficient is

$$v_2=\int d^3r\left[1-e^{-\frac{V(r)}{k_BT}}\right].$$

To make the calculation easy, consider a potential even simpler than the 12-6 Lennard-Jones:

$$V(\mathbf{r})=\begin{cases}\infty&r<\sigma,\\-\varepsilon&\sigma<r<2\sigma,\\0&r>2\sigma.\end{cases}$$

This gives

$$v_2=\int_{r<\sigma}d^3r\left[1-0\right]+\int_{\sigma<r<2\sigma}d^3r\left[1-e^{\frac{\varepsilon}{k_BT}}\right]=\underbrace{\frac{4\pi}{3}\sigma^3}_{V_0}+\frac{4\pi}{3}\left[(2\sigma)^3-\sigma^3\right]\left(1-e^{\beta\varepsilon}\right)=8V_0-7V_0\,e^{\beta\varepsilon}.$$

This can be positive (signifying net repulsion between the particles) at $k_BT>\varepsilon/\ln\frac{8}{7}$ or negative (signifying attraction) for $k_BT<\varepsilon/\ln\frac{8}{7}$. While the details of this calculation depend on our choice and parametrization of the potential, in general we will have some special temperature known as the $\vartheta$ temperature (in our case $k_B\vartheta=\varepsilon/\ln\frac{8}{7}$) where

$$v_2(\vartheta)=0.$$

This allows us to define a good solvent: such a solvent must have $T>\vartheta$ at our working temperature. This assures us (within the second virial approximation, at least) that the interactions are repulsive and (as can be shown separately) the chain is swollen. A bad solvent, for which $T<\vartheta$, will have attractive interactions, resulting in collapse. A solvent for which $T=\vartheta$ is called a $\vartheta$ solvent, and returns us to a Gaussian chain unless the next virial coefficient is taken.
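For the square-well model above, the $\vartheta$ temperature can be confirmed numerically (a minimal sketch in units where $\varepsilon=\sigma=1$):

```python
import math

def v2(kT, eps=1.0, sigma=1.0):
    """Second virial coefficient of the square well:
    v2 = 8 V0 - 7 V0 exp(eps / kT), with V0 = (4 pi / 3) sigma^3."""
    V0 = 4 * math.pi / 3 * sigma ** 3
    return 8 * V0 - 7 * V0 * math.exp(eps / kT)

kT_theta = 1.0 / math.log(8.0 / 7.0)  # predicted k_B * theta = eps / ln(8/7)
print(v2(kT_theta))                   # vanishes at the theta temperature
print(v2(2 * kT_theta) > 0)           # T > theta: net repulsion (good solvent)
print(v2(0.5 * kT_theta) < 0)         # T < theta: net attraction (bad solvent)
```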

Lattice model

A common numerical treatment for this kind of system is to draw the polymer on a grid and make Monte-Carlo runs, where steps must be self-avoiding and their probability is taken from a thermal distribution while maintaining detailed balance. This gives in 3D $R_N\sim N^\nu$, where $\nu\approx0.588$.

Renormalization group

A connection between SAWs and critical phenomena was made by de Gennes in the 1970s. Some of the similarities are summarized in the table below. Using renormalization group methods, de Gennes showed by analogy

to a certain spin model that

$\nu(\epsilon)=\frac12+\frac{1}{16}\epsilon+\frac{15}{512}\epsilon^2+O(\epsilon^3),\qquad \epsilon\equiv4-d.$

This gives in 3D ($\epsilon=1$) a result very close to the SAW value: $\nu_{RG}=\frac12+\frac{1}{16}+\frac{15}{512}+O(\epsilon^3)\approx0.592$.

Polymers | Magnetic systems
$N\to\infty$; $\frac1N\ll1$ is the small parameter. | $T\to T_c$; $\frac{T-T_c}{T_c}$ is the small parameter.
$R_g\sim N^{\nu}=(1/N)^{-\nu}$. | Correlation length $\xi=\xi_0|T-T_c|^{-\nu}$, with critical exponent $\nu$.
Gaussian chains (no self-avoidance). | Mean-field theory.
$\nu(d=3)\neq\nu_{\mathrm{Gaussian}}=\frac12$. | $\nu(d=3)\neq\nu_{\mathrm{MFT}}$.
For $d\geq d_u=4$, $\nu(d)=\nu_{\mathrm{Gaussian}}$. | MFT is accurate for $d\geq d_u$ (Ising model: $d_u=4$).

Flory model

This is a very crude model which gives surprisingly good results. We write the free energy as $F_{tot}(R)=F_{int}+F_{ent}$. For the entropic part we take the expression for an ideal chain: $S_N(R)=-\frac{d}{2}k_B\frac{R^2}{N\ell^2}+S_N(0)$, $F_{ent}=-TS_N$. For the interaction, we use the second virial

coefficient:

$\frac{F_{int}(R)}{k_BT}=\frac12v_2\int[c(r)]^2d^3r.$

Here c(r) is the local monomer density, whose average value is $\bar c=\frac NV\sim\frac{N}{R^d}$.

If we neglect local fluctuations in

c

, then

$\int[c(r)]^2d^3r\approx V\bar c^2=R^d\left(\frac{N}{R^d}\right)^2=\frac{N^2}{R^d},\qquad \frac{F_{int}}{k_BT}\approx\frac12v_2\frac{N^2}{R^d}.$

The total free energy is then

$\frac{F_{tot}}{k_BT}\approx\frac{d}{2}\frac{R^2}{N\ell^2}+\frac12v_2\frac{N^2}{R^d}.$

The free parameter here is R, but we do not know how it relates

to

N

. For constant

N

the minimum is at

$R_F=\left(\frac{v_2\ell^2}{2}\right)^{\frac{1}{d+2}}N^{\frac{3}{d+2}},$

which gives the Flory exponent

$\nu_F=\frac{3}{d+2}.$

This exponent is exact in 1, 2 and 4 dimensions, and gives a very good approximation ($\nu_F=0.6$) in 3 dimensions, but it fails completely above 4 dimensions. For a numerical example, consider a polymer of $N=10^5$ monomers, each of length $\ell\approx5$ Å.

From the expressions above,

$R=\begin{cases}1600\,\text{Å} & \text{GW},\\ 5000\,\text{Å} & \text{Flory},\\ 4400\,\text{Å} & \text{SAW}.\end{cases}$

This difference is large enough to be experimentally detectable by the scattering techniques to be explained next.
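The three estimates follow directly from the scaling laws $R\sim\ell N^\nu$ with the exponents quoted above (a minimal sketch, in Å):

```python
# Chain size estimates for N = 1e5 monomers of length l = 5 Angstrom
l, N = 5.0, 1e5

R_gw    = l * N**0.5      # Gaussian walk, nu = 1/2    -> ~1600 A
R_flory = l * N**0.6      # Flory, nu_F = 3/5          -> 5000 A
R_saw   = l * N**0.588    # SAW, nu ~ 0.588            -> ~4400 A
```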

The reason the Flory method provides such good results turns out to be a lucky cancellation between two mistakes, both of which are off by orders of magnitude: the entropy is overestimated and the correlations are underestimated. This is discussed in detail in the standard textbooks.

Field Theory of SAW

Books: Doi & Edwards, Wiegel

The seminal 1965 article of S.F. Edwards was the first application of field-theoretic methods to the physics of polymers. To insert interactions into the Wiener distribution, we take the sum over two-body interactions $\frac12\sum_{i\neq j}V(\mathbf R_i-\mathbf R_j)$ to the continuum limit $\frac12\int_0^Ndn\int_0^Ndm\,V(\mathbf R(m)-\mathbf R(n))$.

This formalism is rather complicated and not much can be done by hand. One possible simplification is to consider an excluded-volume (or self-exclusion) interaction of Dirac delta function form, which prevents

two monomers from occupying the same point in space:

$V(\mathbf R_i-\mathbf R_j)=k_BT\,v_2\,\delta(\mathbf R_i-\mathbf R_j).$

The advantage of this is that a simple form is obtained in which only the second virial coefficient v2 is taken into account. The

expression for the distribution is then

$\Psi[\{\mathbf R_n\}]\propto\exp\left\{-\frac{3}{2\ell^2}\int_0^Ndn\left(\frac{\partial\mathbf R(n)}{\partial n}\right)^2-\frac{v_2}{2}\int_0^Ndn\int_0^Ndm\,\delta(\mathbf R_m-\mathbf R_n)\right\}.$

With expressions of this sort, one can apply standard field-theory/many-body methods to evaluate the Green's function and calculate observables. This is more advanced and we will not be going into it. 05/07/2009

Scattering and Polymer Solutions

The form factor

Materials can be probed by scattering experiments, and for dilute polymer solutions this is one way to learn about the polymers within them. Laser scattering requires relatively little equipment and can be done in any lab, while x-ray scattering (SAXS) requires a synchrotron and neutron scattering (SANS) requires a nuclear reactor. We will discuss structural properties on the scale of chains rather than individual monomers, which means relatively small wavenumbers. It will also soon be clear that small angles are of interest. If we assume that the individual monomers act as point scatterers (see (8)) and consider a process which scatters an incoming wave with wave vector $\mathbf k_i$ into $\mathbf k_f$, we can define a scattering angle $\vartheta$ and a scattering wave vector $\mathbf k=\mathbf k_f-\mathbf k_i$ (which becomes smaller in magnitude as the angle $\vartheta$ becomes smaller). We then measure scattered waves at some outgoing angle for some incoming angle, as illustrated in (9). Since in fact many chain scatterers are involved, we should take an ensemble average over the chain configurations (which should be incoherent between chains, since the chains are far apart compared with the typical coherence length scale). All this is discussed in more detail below.

Within a chain, scattering is mostly coherent, such that the scattered wavefunction is $\Psi=\sum_{i=1}^Na_ie^{i\mathbf k\cdot\mathbf R_i}$. The intensity or power is proportional to $I=|\Psi|^2=\sum_{i,j=1}^Na_ia_j^*e^{i\mathbf k\cdot(\mathbf R_i-\mathbf R_j)}$.

If we specialize to homogeneous chains where

ai=a

, then

$I=|a|^2\sum_{i,j=1}^Ne^{i\mathbf k\cdot(\mathbf R_i-\mathbf R_j)}.$

This expression is suitable for a single static chain in a specific configuration {𝐑i}. For an ensemble of chains in solution, we average over all chain configurations incoherently,

defining the structure factor

S(k)

:

$I\propto\langle|\Psi|^2\rangle,\qquad S(k)\equiv\frac{\langle|\Psi(k)|^2\rangle}{\langle|\Psi(0)|^2\rangle}.$

The normalization is with respect to the unscattered wave at $k=0$: $|\Psi(0)|^2=a^2N^2$. Note that in an isotropic system, like the system of chain molecules in a solvent, the structure factor must depend only on the magnitude of $\mathbf k$.

Inserting the expression for $\langle|\Psi|^2\rangle$ into the above equation gives

$S(k)=\frac{1}{N^2}\sum_{i,j=1}^N\left\langle e^{i\mathbf k\cdot(\mathbf R_i-\mathbf R_j)}\right\rangle.$

We now switch to spherical coordinates with $\hat{\mathbf z}$ parallel to $\mathbf k$, with the added notation $\mathbf R_{ij}=\mathbf R_i-\mathbf R_j$. Since in these coordinates $\mathbf k\cdot\mathbf R_{ij}=kR_{ij}\cos\vartheta$,

we can write

eiπͺ𝐑ij=14π02πdφ0πdϑsinϑeikRijcosϑ=1211dxeikRijx=sin(kRij)kRij,
S(k)=1N2ijsin(kRij)kRijconfigurations.

The gyration radius and small angle scattering

For small k (which at least in the elastic case implies small ϑ), we can expand the above expression for S(k) in powers

of

kRij

to obtain

$S(k)\approx\frac{1}{N^2}\sum_{ij}\left\langle1-\frac{(kR_{ij})^2}{3!}\right\rangle=1-\frac{k^2}{6N^2}\sum_{ij}\langle R_{ij}^2\rangle=1-\frac13k^2R_g^2.$

The last equality is due to the fact that $R_g^2=\frac{1}{2N^2}\sum_{ij}\langle R_{ij}^2\rangle$. If the scattering is elastic, $|\mathbf k_i|=|\mathbf k_f|=\frac{2\pi}{\lambda}$

and

$k=|\mathbf k_f-\mathbf k_i|=\sqrt{k_i^2+k_f^2-2\mathbf k_i\cdot\mathbf k_f}=|\mathbf k_i|\sqrt{2-2\cos\vartheta}=\frac{2\pi}{\lambda}\,2\sin\frac{\vartheta}{2}.$

With this expression for k in terms of the angle ϑ,

the structure factor is then

$S(k)\approx1-\frac13k^2R_g^2=1-\frac{16\pi^2}{3}\frac{\sin^2\frac{\vartheta}{2}}{\lambda^2}R_g^2.$

From an experimental point of view, we can plot S as a function of $k^2\propto\sin^2\frac{\vartheta}{2}$ and determine the polymer's gyration radius $R_g$ from the slope.

The approximation we have made is good when $kR_g\propto\frac{\sin(\vartheta/2)}{\lambda}R_g\ll1$, and this determines the range of angles that should be taken into account: we must have $\sin\frac{\vartheta}{2}\approx\frac{\vartheta}{2}\ll\frac{\lambda}{R_g}$. For laser scattering usually $\lambda\approx500\,\mathrm{nm}$ (about enough to measure $R_g$), while for neutron scattering $\lambda\approx0.3\,\mathrm{nm}$ (meaning we must take only very small angles into account to measure $R_g$, but also allowing more detailed information about correlations within the chain to be collected).
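The small-angle procedure can be tested on simulated Gaussian chains: compute S(k) from the orientation-averaged double sum and recover $R_g^2\approx N\ell^2/6$ from the slope (a numerical sketch; the chain length, step length and sampling parameters are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, ell, n_conf = 100, 1.0, 200
# Gaussian chains: monomer positions are cumulative sums of Gaussian steps
steps = rng.normal(scale=ell / np.sqrt(3), size=(n_conf, N, 3))
R = np.cumsum(steps, axis=1)

def S_of_k(k):
    """S(k) = (1/N^2) sum_ij <sin(k R_ij)/(k R_ij)>, averaged over configurations."""
    Rij = np.linalg.norm(R[:, :, None, :] - R[:, None, :, :], axis=-1)
    x = k * Rij
    safe = np.where(x > 0, x, 1.0)
    sinc = np.where(x > 0, np.sin(x) / safe, 1.0)   # sin(x)/x -> 1 at x = 0
    return sinc.mean(axis=0).sum() / N**2

# small-angle estimate: S(k) ~ 1 - k^2 Rg^2 / 3, so the slope gives Rg^2
k_small = 0.05
Rg2_est = 3.0 * (1.0 - S_of_k(k_small)) / k_small**2
```

The estimate agrees with the ideal-chain value $N\ell^2/6$ up to statistical noise and the neglected higher orders in $kR_g$.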

Debye scattering function

Around 1947, Debye gave an exact result (the Debye function )

for Gaussian chains:

$\left\langle e^{i\mathbf k\cdot(\mathbf R_i-\mathbf R_j)}\right\rangle=e^{-\frac{k^2\ell^2}{6}|i-j|},\qquad S_D(k)=\frac{2}{(k^2R_g^2)^2}\left(k^2R_g^2-1+e^{-k^2R_g^2}\right),\qquad x\equiv k^2R_g^2,$
$S_D(x)=\frac{2}{x^2}\left(x-1+e^{-x}\right).$

In the limit $x\ll1$ we can expand $S_D(x)$ around $x=0$, recovering the $k\to0$ limit we encountered earlier. For $x\gg1$, $S_D(x)=\frac{2}{x^2}(x-1+e^{-x})\approx\frac2x$.
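Both limits of the Debye function are easy to verify numerically (a minimal sketch):

```python
import math

def S_debye(x):
    """Debye scattering function S_D(x) = (2/x^2)*(x - 1 + exp(-x)), x = k^2 Rg^2."""
    return 2.0 / x**2 * (x - 1.0 + math.exp(-x))

# small x: S_D ~ 1 - x/3 (the small-angle expansion above); large x: S_D ~ 2/x
```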

This also works very well for non-Gaussian chains in non-dilute solutions, where a small percentage of the chains is replaced by isotopic variants. This gives an effectively dilute solution of isotopic chains, which can be distinguished from the rest, and these chains are effectively Gaussian for reasons we will mention later. An example from Rubinstein is neutron scattering from PMMA, as done by R. Kirste et al. (1975), which fits very nicely to the Debye function with $R_g\approx130$ Å. In general, however, a SAW in a dilute solution modifies the tail of the Debye function, since $S(k)\sim k^{-D_f}$ with $D_f=\frac53$ for a SAW.

The structure factor and monomer correlations

Consider the full distribution function of the distances 𝐑ij=𝐑i𝐑j.

This is related to the correlation function for monomer

i

:

$g_i(\mathbf r)=\frac1N\sum_{j=1}^N\left\langle\delta(\mathbf r-\mathbf R_{ij})\right\rangle.$

This function is evaluated by fixing a certain monomer i and counting which other monomers are at a distance 𝐫 from it, averaging over all chain configurations. If we now average over all monomers

1iN

, we obtain

$g(\mathbf r)=\langle g_i(\mathbf r)\rangle_i=\frac{1}{N^2}\sum_{ij}\left\langle\delta(\mathbf r-\mathbf R_{ij})\right\rangle.$

Fourier transforming it,

$\int d\mathbf r\,g(\mathbf r)e^{i\mathbf k\cdot\mathbf r}=\frac{1}{N^2}\sum_{ij}\left\langle e^{i\mathbf k\cdot\mathbf R_{ij}}\right\rangle=S(k).$

The fact that the structure factor is the Fourier transform of the scatterer-density correlation function is, of course, not unique to the case of polymers. At large k, it can be shown (homework) that if $S(k)\sim k^{-D_f}$ then $g(r)\sim\frac{1}{r^{d-D_f}}$. We can therefore determine the fractal dimension of the chain from the large-k tail of the structure factor (see table).

Model (d, Df) g(r) S(k)
3D GW (3, 2) $\frac1r$ $\left(\frac1k\right)^2$
3D SAW (3, 5/3) $\left(\frac1r\right)^{4/3}$ $\left(\frac1k\right)^{5/3}$
3D collapsed chain (3, 3) $\left(\frac1r\right)^0\ln r$ $\left(\frac1k\right)^3$

Polymer Solutions

Dilute and semi-dilute solutions

Up to this point, we have considered only independent chains in dilute solutions. We have also discussed the quality of solvents and the $\vartheta$ temperature. Now, we consider multiple chains in a good solvent (good because we do not want them in a collapsed state). The concentration of monomers c is defined as the number of monomers (from all chains) per unit volume. A solution is dilute if the typical distance between chains is larger than $R_g$, and semi-dilute if it is smaller than $R_g$. Between these limits, the concentration passes through a crossover value $c^*$ at which the inter-chain distance is equal to the typical chain size $R_g$.

We can calculate

c*

by calculating the concentration of monomers within a single chain and equating it to the average monomer concentration:

$c^*\simeq\frac{N}{R_g^d}=\frac{N^{1-\nu d}}{\ell^d}.$

For instance, for a 3D SAW $d=3$ and $\nu=\frac35$, such that $c^*\ell^3=N^{-4/5}$. We can also work in terms of the volume fraction $\phi=\ell^3c$. At the overlap concentration this turns out to be very small (for $N=10^6$ it is about $10^{-5}$, and for $N=10^3$ it is about $0.4\%$). 05/14/2009
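The overlap volume fraction is a one-line computation (a minimal sketch using the Flory exponent $\nu=3/5$):

```python
def phi_star(N, nu=3/5, d=3):
    """Overlap volume fraction phi* = c* l^3 = N**(1 - nu*d) for a chain of N monomers."""
    return N ** (1 - nu * d)

# phi* decreases with chain length: long chains overlap at tiny volume fractions
```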

Free energy of mixing

If we have a mixture of two components ($N_A$ units of A and $N_B$ units of B on a lattice model with cell length $\ell$), such that $N_A+N_B=N$ is the total number of cells, we can define the relative volume fractions $\phi\equiv\phi_A=\frac{N_A}{N_A+N_B}$ and $\phi_B=1-\phi$. The free energy of mixing (in the simple isotropic

case) is then

Fmix=UmixTSmix.

From a combinatorical argument and with the help of the Stirling series,

$S_{mix}=k_B\ln\frac{(N_A+N_B)!}{N_A!\,N_B!},\qquad \frac{N_A}{N_A+N_B}=\phi,\qquad \frac{N_B}{N_A+N_B}=1-\phi.$

The average entropy of mixing per cell is therefore

$-\frac{S_{mix}}{k_B(N_A+N_B)}=-\frac{\sigma_{mix}}{k_B}=\phi\ln\phi+(1-\phi)\ln(1-\phi).$

We now consider interactions UAA, UBB and UAB between nearest neighbors of the two species. The specific form of the interaction depends on the coordination number z, or the number of nearest neighbors per grid point: for instance, on a square 2D grid z=4.

The mixing interaction energy can be written as

$U_{mix}=N_{AB}U_{AB}+N_{AA}U_{AA}+N_{BB}U_{BB},$

where the Nij count the number of boundaries of the different types within the system. In the mean field approximation , we

can evaluate them by neglecting local variations in density:

$N_{AB}=N_Az\frac{N_B}{N_A+N_B}=z(N_A+N_B)\phi(1-\phi),\qquad N_{BB}=\frac z2(N_A+N_B)(1-\phi)^2,\qquad N_{AA}=\frac z2(N_A+N_B)\phi^2.$

The interaction energy per-particle due to mixing is then

$u_{mix}=\frac{U_{mix}}{N_A+N_B}=z\phi(1-\phi)U_{AB}+\frac z2(1-\phi)^2U_{BB}+\frac z2\phi^2U_{AA},$

and we will subtract from it the enthalpy of the "pure" system,

where the components are unmixed:

$\frac{U_0}{N_A+N_B}=(1-\phi)\frac z2U_{BB}+\phi\frac z2U_{AA}.$

The difference between these two quantities it the change in enthalpy

per unit cell due to mixing:

$\Delta u_{mix}=\frac{U_{mix}-U_0}{N_A+N_B}=z\left(U_{AB}-\frac12U_{AA}-\frac12U_{BB}\right)\phi(1-\phi)\equiv k_BT\,\chi\,\phi(1-\phi),$
$\chi=\frac{z}{k_BT}\left[U_{AB}-\frac{U_{AA}+U_{BB}}{2}\right].$

The sign of the Flory parameter $\chi$ determines whether the minimum of the energy is at the center or at the edges of the parabola in $\phi$.

$\Delta f_{mix}=\frac{\Delta F_{mix}}{N_A+N_B}=\Delta u_{mix}-T\sigma_{mix},$
$\Delta f_{mix}=k_BT\chi\phi(1-\phi)+k_BT\phi\ln\phi+k_BT(1-\phi)\ln(1-\phi).$

This is the MFT approximation for the free energy of mixing.

The Flory-Huggins model for polymer solutions

This is based on work mostly done by Huggins around 1942. The basic idea is to consider a lattice like the one shown in (11), with chains (inhabiting N=5 blocks in the example) in a solvent (which can also be a set of chains, but in the example the number of blocks per solvent unit is S=1). The energy of mixing $U_{mix}$ is approximately unchanged in going from the molecule-solvent system to this polymer-solvent system, at least within the MFT approximation. We can therefore set $\phi=\frac{N_P}{N_P+N_S}$ ($N_P$ is the number of monomers and $N_S$ the number of solvent units; $N_P/N$ is the number of chains) and use the previous expressions for $\Delta u_{mix}$ and $\chi$. The fact that we have chains rather than individual monomers is of crucial importance when we calculate the entropy, though: chains have more constraints and therefore a lower entropy than isolated monomers. We will make an approximation (correct to leading order for $N\gg1$) based on the assumption that the chains are rigid objects which can only be translated, rather than also rotated and conformationally rearranged, around their center of mass. This gives, making the Stirling approximation

as before,

$S_{mix}=k_B\ln\frac{(N_P+N_S)!}{N_S!\,(N_P/N)!},\qquad -\frac{\sigma_{mix}}{k_B}\approx\frac{\phi}{N}\ln\phi+(1-\phi)\ln(1-\phi)+\alpha\phi,\qquad \alpha=\frac1N\ln\left(N_S+\frac{N_P}{N}\right)-\ln(N_S+N_P)+1-\frac1N.$

If we neglect the term linear in

ϕ

, which we will later show is of no importance, these two expressions lead to the Flory-Huggins free energy of mixing:

$f(\phi,T)=\frac{\Delta f_{mix}}{k_BT}=\chi\phi(1-\phi)+\frac{\phi}{N}\ln\phi+(1-\phi)\ln(1-\phi).$

Compared to our previous expression, we see that the only difference is in the division of the second term by N.
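The Flory-Huggins free energy is simple to explore numerically (a minimal sketch; the function is per lattice site, in units of $k_BT$):

```python
import math

def f_FH(phi, chi, N=1):
    """Flory-Huggins free energy of mixing per lattice site, in units of kB*T."""
    return chi * phi * (1 - phi) + phi / N * math.log(phi) \
           + (1 - phi) * math.log(1 - phi)
```

For N = 1 the expression is symmetric under $\phi\to1-\phi$ (the simple mixture of the previous section); for large N the polymer's mixing entropy is suppressed by the factor 1/N, which is why demixing sets in at much smaller $\chi$.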

Polymer/solvent phase transitions

A system composed of a polymer immersed in a solvent can be in a uniform phase (corresponding to a good solvent) or separated into two distinct phases (a bad solvent). Qualitatively, this depends on $\chi$: the entropic contribution to the free energy from $\sigma_{mix}$ always prefers mixing, but the preference of $\Delta u_{mix}$ depends on the sign of $\chi$. Phase separation is only possible if $\chi>0$, because otherwise the total change in energy due to mixing is always negative. When discussing the Helmholtz free energy, $\phi$ is the degree of freedom; however, in the physical case of interest the overall composition is fixed, and we must perform a Legendre transformation, or in other words introduce a Lagrange multiplier to impose the constraint that the average of $\phi$ equals a prescribed value $\Phi$. We therefore

define

$g=f-\mu\phi,$

and after g is minimized, $\mu$ is determined so as to maintain our constraint (it turns out that $\mu$ is the difference between the chemical potentials of the polymer and the solvent). When g has multiple minima ($\frac{dg}{d\phi}=\frac{df}{d\phi}-\mu=0$ for more than one $\phi$), a phase transition can exist.

If g has only one minimum at $\phi$, then we must have $f'(\phi)=\mu$. If g has two minima, a first-order phase transition exists when the free energy g at the two minima is the same. This amounts

to a common tangent construction condition for

f

(see 12):

$\begin{cases}f(\phi_1)-\mu\phi_1=f(\phi_2)-\mu\phi_2,\\ f'(\phi_1)=f'(\phi_2)=\mu.\end{cases}$

This requires $f(\phi_1)-f(\phi_2)=\mu(\phi_1-\phi_2)$. The two formulations (in terms of g and f) are of course identical. The common tangent actually describes the free energy f of a mixed-phase system (having a volume $v_1$ at concentration $\phi_1$ and a volume $v_2$ at concentration $\phi_2$, such that $\Phi=\frac{v_1\phi_1+v_2\phi_2}{v_1+v_2}$). When $\phi_1<\phi<\phi_2$ this line is always lower than the concave profile of the uniform system with concentration $\phi$, and therefore the mixed-phase system must be the stable state.

Note that any additional term to f which is linear in ϕ will only produce a shift in μ, and not qualitatively change the phase

diagram. This is because

$f\to f+\alpha\phi\;\Rightarrow\;g\to(f+\alpha\phi)-\mu\phi=f-(\mu-\alpha)\phi.$

Returning to the Flory-Huggins mixing energy, for $\chi>0$ we can see that f has two minima and the system can therefore separate into two phases. For $\chi<0$ only one minimum exists, and therefore only one phase. Generalizing beyond the Flory-Huggins model, at any temperature T there exists some $\chi(T)$, and often a dependence $\chi(T)=A+\frac BT$ works well experimentally (we found a pure $\frac1T$ dependence by assuming that the interactions are independent of temperature). For every T or $\chi$ in the two-phase region, we can find $\phi_1$ and $\phi_2$ from the procedure above. This produces a phase diagram similar to (13), where the $\phi(T)$ curve is known as the binodal or demixing curve.

The phase diagram (13) includes a few more details. One is the critical point $(T_c,\phi_c)$ or $(\chi_c,\phi_c)$, beyond which two coexisting phases can no longer exist. Another is the spinodal curve, lying within the demixing curve and defined by $f''=0$; it marks the transition between metastability and instability (within the spinodal curve, phase separation occurs spontaneously, while between the spinodal and binodal curves it requires some initial nucleation). The spinodal curve is usually quite close to the binodal curve, and since it can be found analytically it provides a useful estimate:

$f(\phi)=\chi\phi(1-\phi)+\frac{\phi}{N}\ln\phi+(1-\phi)\ln(1-\phi),\qquad f''(\phi)=-2\chi+\frac{1}{N\phi}+\frac{1}{1-\phi},\qquad f''(\phi)=0\;\Rightarrow\;2\chi_s=\frac{1}{1-\phi}+\frac{1}{N\phi}.$

The endpoint of the spinodal curve is also the endpoint of the binodal curve; moreover, this endpoint is the same for the $\chi(\phi)$ and $T(\phi)$ curves. We can find it from

$0=\left.\frac{\partial\chi_s}{\partial\phi}\right|_c=\frac12\left[-\frac{1}{N\phi_c^2}+\frac{1}{(1-\phi_c)^2}\right]\;\Rightarrow\;\phi_c=\frac{1}{1+\sqrt N}.$

Inserting this into the equation for

χs

gives

$\chi_c=\frac12\left(1+\frac{1}{\sqrt N}\right)^2.$

There is a great deal to expand on here. Chapter 4 in Rubinstein is a good place to start.
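The spinodal and critical-point formulas can be cross-checked numerically (a minimal sketch):

```python
import math

def chi_spinodal(phi, N):
    """Spinodal line of the Flory-Huggins model: 2*chi_s = 1/(1 - phi) + 1/(N*phi)."""
    return 0.5 * (1.0 / (1.0 - phi) + 1.0 / (N * phi))

def critical_point(N):
    """Critical point (phi_c, chi_c) of the Flory-Huggins free energy."""
    phi_c = 1.0 / (1.0 + math.sqrt(N))
    chi_c = 0.5 * (1.0 + 1.0 / math.sqrt(N)) ** 2
    return phi_c, chi_c
```

For N = 1 this gives $(\phi_c,\chi_c)=(\frac12,2)$, the symmetric-mixture values; for large N, $\phi_c\to0$ and $\chi_c\to\frac12$, so long chains demix very easily.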

Surfaces, Interfaces and Membranes

Introduction and Motivation

We will differentiate between several types of surfaces:

  • An outer surface (or boundary) between a liquid phase and a solid wall. Such a surface need not be in thermal equilibrium and exists under external constraints.
  • An interface between two phases in equilibrium with each other, like the A/B liquid mixture that was studied earlier.
  • Membranes, which have a molecular thickness and are in equilibrium with the surrounding water.

We will first discuss flat interfaces, and then extend the discussion to curved and fluctuating interfaces.

Flat Surfaces

Ginzburg-Landau formalism

The simplest kind of non-homogeneous system one can imagine may be described by the variation in some order parameter or concentration as a function of a single spatial direction, ϕ(z). For instance, if we have a gas at z+ and a liquid at z, there will be some crossover regime between them. This kind of physics can be treated with a Ginzburg-Landau formalism, which can be derived from the continuum limit of a lattice gas/Ising model.

If every cell i (with size ) is parametrized by a discrete

spin variable $S_{i}$ such that

$S_i=\begin{cases}1 & \text{cell } i \text{ contains molecule A},\\ 0 & \text{cell } i \text{ contains molecule B},\end{cases}$

we may write the Hamiltonian as

$\mathcal H=\frac12\sum_{i\neq j}J_{ij}S_i(1-S_j).$

The {Jij} are the interaction constants between

cells. Note that

$J_{ij}S_i(1-S_j)+J_{ji}S_j(1-S_i)=\begin{cases}0 & S_i=S_j,\\ J_{ij} & (S_i=1,\,S_j=0)\ \text{or}\ (S_i=0,\,S_j=1).\end{cases}$

The partition function is

$Z=\mathrm{Tr}_{\{S_i\}}e^{-\beta\mathcal H},$

with $F=-k_BT\ln Z$. We can now formulate a mean-field theory for this model (by neglecting correlations: $\langle S_iS_j\rangle\approx\langle S_i\rangle\langle S_j\rangle$) in cases of spatial inhomogeneities (presence of walls and interfaces). The full development is left as an exercise; the result assumes local thermal equilibrium, $\phi_i=\langle S_i\rangle$,

and gives

$F_0=\frac12\sum_{i\neq j}J_{ij}\phi_i(1-\phi_j)+k_BT\sum_i\left[\phi_i\ln\phi_i+(1-\phi_i)\ln(1-\phi_i)\right].$

Separating this $F_{0}$ into internal energy and entropy,

$U=\frac12\sum_{i\neq j}J_{ij}\phi_i(1-\phi_j),$
$-TS=k_BT\sum_i\left[\phi_i\ln\phi_i+(1-\phi_i)\ln(1-\phi_i)\right].$

In the continuum limit $\phi_i\to\phi(\mathbf r)$ and $\sum_i\to\int\frac{d^3r}{\ell^3}$, and

neglecting long-range interactions, we can perform a Taylor expansion:

$\phi_i-\phi_j=\phi(\mathbf r_i)-\phi(\mathbf r_i+\boldsymbol\ell)\approx-\boldsymbol\ell\cdot\nabla\phi|_{\mathbf r_i},$
$J_{ij}\phi_i(1-\phi_j)=\frac12J_{ij}\left[(\phi_i-\phi_j)^2-\phi_i^2-\phi_j^2+2\phi_i\right]\;\Rightarrow\;\sum_{i\neq j}J_{ij}\phi_i(1-\phi_j)\approx\frac14\int\frac{d^3r}{\ell^3}\,zJ\left(-\phi^2-\phi^2+2\phi\right)+\frac14\int\frac{d^3r}{\ell^3}\sum_jJ_{ij}\left(\boldsymbol\ell_j\cdot\nabla\phi\right)^2.$

$z$ is the coordination number.

$U=\frac12\int\frac{d^3r}{\ell^3}\left[zJ\phi(1-\phi)\right]+\frac14\int\frac{d^3r}{\ell^3}\,J\ell^2(\nabla\phi)^2.$

Adding the continuum limit entropy,

$F=\int d^3r\left[f_0(\phi)+\frac B2(\nabla\phi)^2\right],\qquad f_0=\frac{k_BT}{\ell^3}\left[\chi\phi(1-\phi)+\phi\ln\phi+(1-\phi)\ln(1-\phi)\right],\qquad B\equiv\frac{J}{2\ell},\qquad \chi\equiv\frac{zJ}{2k_BT}.$

We can find the profile $\phi(\mathbf r)$ at equilibrium by minimizing the free energy functional $F[\phi(\mathbf r)]$ with respect to $\phi(\mathbf r)$ and taking external constraints into account. Normally, $B>0$ and the minimum of F is homogeneous except near surfaces and interfaces. If $\phi(-\infty)=\phi(+\infty)=\phi_A$, the minimal solution $\phi(\mathbf r)$ is a constant

and we will have a single homogeneous phase. On the other hand, if

$\begin{cases}\phi(-\infty)=\phi_A,\\ \phi(+\infty)=\phi_B,\end{cases}$

and we are in the two-phase region of the $(\chi,\phi)$ plane, then a 1D profile must exist that solves the Euler-Lagrange equation, and becomes approximately homogeneous far from the center of the interface.

1D profile at an interface

Quite independently of the previous treatment and the microscopic model, the free energy can be written as a functional of an order

parameter and its gradients:

$F=\int d\mathbf r\left\{f_0(\phi(\mathbf r))+\frac B2\left[\nabla\phi(\mathbf r)\right]^2\right\}.$

Since $(\nabla\phi)^2>0$, for $B>0$ the system avoids strong local fluctuations, and smooth states have smaller energies. A uniform state is therefore preferred, and if the system is not allowed to become fully uniform, then regions of different phases form in equilibrium with each other. This is shown in (16), and can also be described by a tangent construction of the type illustrated in (12). In the two-phase example above, due to the symmetry of f under $\phi\to1-\phi$, the critical point is clearly at $\phi_c=\frac12$. We will make a Taylor expansion of f

around $\phi_{c}$:

$\phi=\phi_c+\psi=\frac12+\psi.$

Due to the same symmetry in ϕ, an expansion of f in ψ should contain only even powers. Performing this expansion gives the

result

$f_0=\frac{k_BT}{\ell^3}\left[2\psi^2+\frac43\psi^4-\ln2+\chi\left(\frac14-\psi^2\right)\right]=\frac{k_BT}{\ell^3}\left[(2-\chi)\psi^2+\frac43\psi^4\right]+\mathrm{const}.$

In general the $\frac43$ will be replaced by some positive numerical factor $\frac\gamma4$. To obtain the correct critical behavior (note that $\chi_c=\chi(T_c)=2$) we assume a linear dependence of the form $2-\chi=\frac\alpha2(T-T_c)$,

and minimize

$F=\frac{k_BT}{\ell^3}\int\left[\frac\alpha2(T-T_c)\psi^2+\frac\gamma4\psi^4\right]d\mathbf r+\frac B2\int(\nabla\psi)^2\,d\mathbf r.$

The above expansion of the inhomogeneous free energy is called the Ginzburg-Landau (GL) model or expansion. By applying a variational

principle on this free energy, we obtain the Euler-Lagrange (EL) equations:

$\frac{\delta F}{\delta\psi(\mathbf r)}=0\;\Rightarrow\;\frac{\partial f}{\partial\psi}-\nabla\cdot\frac{\partial f}{\partial(\nabla\psi)}=0,$ where f is the free energy density.

06/09/2009

Here

$\nabla\cdot\frac{\partial f}{\partial(\nabla\psi)}=\sum_i\frac{\partial}{\partial r_i}\frac{\partial f}{\partial(\partial_i\psi)},\qquad \partial_i\psi\equiv\frac{\partial\psi}{\partial r_i}.$

In particular, $\nabla\cdot\frac{\partial}{\partial(\nabla\psi)}\left[\frac B2(\nabla\psi)^2\right]=\nabla\cdot(B\nabla\psi)=B\nabla^2\psi$ and $\frac{\partial f_0}{\partial\psi}=\alpha(T-T_c)\psi+\gamma\psi^3$.

The EL equation is therefore

$-\alpha(T_c-T)\psi+\gamma\psi^3-B\nabla^2\psi=0.$

This is the well-known Ginzburg-Landau (GL) equation. For T>Tc the only homogeneous (bulk) solution (arrived at by

neglecting the Laplacian term) is

$\alpha(T-T_c)\psi+\gamma\psi^3=0\;\Rightarrow\;\psi=0.$

In the other case when T<Tc, the system has two homogeneous

solutions

$\psi=\pm\sqrt{\frac{\alpha(T_c-T)}{\gamma}}\equiv\pm\psi_b.$

If we do not neglect the derivative but assume a 1D profile with $\psi(\pm\infty)=\pm\psi_b$

and $\psi^{\prime}\left(\pm\infty\right)=$0, we must solve the equation

$B\psi''(z)+\alpha(T_c-T)\psi(z)-\gamma\psi^3(z)=0.$

The exact solution of the GL model is

$\psi(z)=\psi_b\tanh\left(\frac z\xi\right),$
$\xi^2=\frac{2B}{\alpha(T_c-T)}.$

We have introduced the correlation length $\xi$, which sets the width of the meniscus (the layer in which the phases are mixed). As a matter of fact, $\xi$ is also the correlation length in the sense $\langle\psi(a)\psi(r+a)\rangle\sim\exp(-r/\xi)$. The dependence $\xi\propto(T_c-T)^{-1/2}$ is the mean-field result, with exponent $\nu=\frac12$. In general, $\xi\propto(T_c-T)^{-\nu}$. We also have for the order parameter $\psi\propto(T_c-T)^{\beta}$, where in MFT $\beta=\frac12$.
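That the tanh profile solves the GL equation can be verified by finite differences (a numerical sketch; the parameter values are arbitrary, with $a=\alpha(T_c-T)$ and width $\xi^2=2B/a$ as quoted above):

```python
import math

B, a, g = 0.9, 1.3, 0.7        # arbitrary positive parameters; a = alpha*(Tc - T)
psi_b = math.sqrt(a / g)       # bulk order parameter
xi = math.sqrt(2.0 * B / a)    # interface width

def psi(z):
    return psi_b * math.tanh(z / xi)

def gl_residual(z, h=1e-4):
    """B*psi'' + a*psi - g*psi^3, which should vanish along the profile.
    The second derivative is taken by a central difference."""
    psi2 = (psi(z + h) - 2.0 * psi(z) + psi(z - h)) / h**2
    return B * psi2 + a * psi(z) - g * psi(z)**3
```

The residual is zero (up to finite-difference error) at any z, confirming the exact solution.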

Surface energy and surface tension

Surface energy is the excess of energy in the system with respect to the bulk. Surface tension σ is defined as the surface energy per unit area. Therefore, in our case of two phases separated

by a meniscus, $\sigma$ can be calculated using

$\sigma\cdot\mathrm{Area}=\Delta F=F[\psi(\mathbf r)]-\left[\frac12Vf_0(\psi_b)+\frac12Vf_0(-\psi_b)\right].$

Here, we have subtracted the bulk energy of the two separate phases from the energy of the full system including the interface. Note that in equilibrium, by definition, $f_0(\psi_b)=f_0(-\psi_b)$. With the 1D dependence we are treating, then, $\psi(\mathbf r)=\psi(z)$

and

$\sigma=\frac{1}{\mathrm{Area}}\int dx\,dy\int_{-\infty}^{\infty}dz\left[\frac B2[\psi'(z)]^2+f_0(\psi(z))-f_0(\psi_b)\right]=\int_{-\infty}^{\infty}dz\left[\frac B2[\psi'(z)]^2+f_0(\psi)-f_0(\psi_b)\right].$

Unlike $F[\psi]$, which is an extensive quantity (a single number scaling with the size of the system), $\sigma$ is a geometry-independent parameter with units of energy per unit area.

The first term above is reminiscent of kinetic energy and the second of potential energy. An analogy to the classical mechanics of a point particle exists, as detailed in the following table:

$z$ ↔ $t$ (time)
$\psi(z)$ ↔ $x(t)$ (position)
$\frac B2(\psi'(z))^2$ ↔ $\frac12m\dot x^2$ (kinetic energy)
$-f_0(\psi)$ ↔ $V(x)$ (potential energy)
$-f_0(\psi_b)$ ↔ $E$ (total energy)

With this analogy in mind, we can derive an expression similar to energy conservation in mechanics. From applying the variational principle

to $f_{0}$ we obtain

$\frac{\partial f_0}{\partial\psi}=\frac{d}{dz}\frac{\partial}{\partial\psi'}\left[\frac B2(\psi')^2\right]=B\psi''.$

Multiplying this by $\psi^{\prime}$ gives

$\psi'\frac{\partial f_0}{\partial\psi}=B\psi'\psi''=\frac B2\frac{d}{dz}\left[(\psi')^2\right].$

Integrating over $z$ from $-\infty$ to $+\infty$,

$\int_{-\infty}^{z}\frac{df_0}{d\psi}\frac{d\psi}{dz}\,dz=\int_{-\infty}^{z}\frac{df_0}{dz}\,dz=\frac B2\int_{-\infty}^{z}\frac{d}{dz}(\psi')^2\,dz\;\Rightarrow\;f_0(\psi)-f_0(\psi_b)=\frac B2\left\{[\psi'(z)]^2-\underbrace{[\psi'(-\infty)]^2}_{=0}\right\},$
$f_0(\psi)-f_0(\psi_b)=\frac12B(\psi')^2.$

The last term disappears due to the boundary condition at $z\to-\infty$, where $\psi\to-\psi_b=\mathrm{const.}$ and therefore $\psi'=0$. The analogy between this equation and the law of conservation of mechanical

energy can be stressed by writing it as

$\underbrace{\frac12B(\psi')^2}_{\text{kin. En.}}-\underbrace{f_0(\psi)}_{\text{Pot. En.}}=\underbrace{-f_0(\psi_b)}_{\text{Tot. En.}}.$

Returning to the surface tension, we can use this conservation law

to rewrite it in the simpler form

$\sigma=\int_{-\infty}^{\infty}\Big[\frac B2(\psi'(z))^2+\underbrace{f_0(\psi)-f_0(\psi_b)}_{=\frac B2(\psi')^2}\Big]dz=\int_{-\infty}^{\infty}B\,(\psi'(z))^2\,dz.$

An estimate may be obtained from

$\sigma=B\int_{-\infty}^{\infty}(\psi'(z))^2\,dz\approx B\left(\frac{\psi_b}{\xi}\right)^2\xi,$

or

$\sigma\approx B\,\psi_b^2\,\xi^{-1}.$

The exact expression for σ may be obtained from the exact GL form that we have derived for ψ. In any case, the temperature

dependence of $\sigma$ is of the form

$\sigma\sim B\,\psi_b^2\,\xi^{-1}\sim\sqrt{T_c-T}\cdot\frac{\alpha(T_c-T)}{\gamma}\sim(T_c-T)^{3/2}.$

If we insert the general exponent dependences of $\xi$ and $\psi_b$ into the equation, we see that the exponent for the surface energy as a function of $(T_c-T)$ is $2\beta+\nu$. This discussion can be extended to systems which do not have symmetry between $\phi$ and $1-\phi$, such as a liquid/gas system with two densities $n_\ell$ and $n_g$. Without proof, we will state that

within the GL formalism it can be shown that

$f_0(n)-\frac12\left[f_0(n_g)+f_0(n_\ell)\right]=c\,(n-n_g)^2(n-n_\ell)^2.$

The surface energy will be

$\Delta F=\int d\mathbf r\left[\frac12B(\nabla n)^2+f_0(n)-\frac12f_0(n_g)-\frac12f_0(n_\ell)\right].$

For a profile in the $z$ direction,

$\sigma=\frac{\Delta F}{\mathrm{Area}}=\int_{-\infty}^{\infty}dz\left[\frac B2(n')^2+c\,[n(z)-n_g]^2[n(z)-n_\ell]^2\right].$

After variation, one obtains for the two coexisting phases with $n_{\ell}>n>n_{g}$

$\ln\frac{n_\ell-n}{n-n_g}=\frac z\xi,\qquad \xi^{-1}=(n_\ell-n_g)\sqrt{\frac{2c}{B}},$

with $n_\ell-n_g\propto(T_c-T)^{1/2}$ and $\xi\propto(T_c-T)^{-1/2}$.

The density profile interpolates smoothly between the two phases:

$n(z)=\frac{n_\ell}{1+e^{z/\xi}}+\frac{n_g}{1+e^{-z/\xi}}.$
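One can check directly that this profile interpolates between the bulk densities and satisfies the implicit solution above (a minimal sketch, with arbitrary values for $n_\ell$, $n_g$ and $\xi$):

```python
import math

n_l, n_g, xi = 1.0, 0.1, 2.0    # arbitrary liquid/gas densities and interface width

def n(z):
    """Profile interpolating from n_l at z -> -infinity to n_g at z -> +infinity."""
    return n_l / (1.0 + math.exp(z / xi)) + n_g / (1.0 + math.exp(-z / xi))

def implicit_relation(z):
    """ln[(n_l - n)/(n - n_g)], which should equal z/xi along the profile."""
    return math.log((n_l - n(z)) / (n(z) - n_g))
```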

A few generalizations:

  • Surfactants or surface-active materials: this includes soap, detergent, biological membranes composed of biological amphiphiles called phospholipids, and more. What they have in common is that they are formed of molecules with charged or polarized "heads" connected to long hydrocarbon "tails". These molecules are called amphiphilic, since the tails are hydrophobic and the heads hydrophilic. This causes them to accumulate at interfaces between water and air, and to reduce the surface tension (by a factor of ∼2-3):
    $\sigma(\text{with soap})=\sigma_{\text{air/water}}-\Delta\sigma(\phi_s),$

where $\phi_s$ is the surface concentration of the soap molecules.

  • Emulsions: drops of oil in water (or water in oil), stabilized by some sort of emulsifier (which is also a surfactant). Some common examples are milk and mayonnaise.


  • Detergency of soap: while soap reduces surface tension between oil and water, it does not create a phase where oil and water are mixed on a molecular level. Rather, micrometric oil droplets are formed in the aqueous solution. The process of cleaning is the process where oily dirt is solubilized in the aqueous solution and is washed away from the object we clean.

06/11/2009

Curved Surfaces

Review of differential geometry

\begin{description}

Books: the book by Safran has a short introduction, which will be followed here. The one by Visconti is more thorough and oriented towards other physics problems, such as relativity. There is also a multi-authored book on the subject edited by David Nelson, and a mathematical book on the theory of manifolds by Spivak.

Curves

In order to discuss surfaces and curves which exhibit local curvature, we will need to introduce a few mathematical concepts. A brief introduction follows. A parametric curve $\mathbf R(u)$ is a set of vectors along some contour in space, expressed as a function of the parameter u, which may vary, for example, from 0 to the length L of the curve. The differential length element ds

along the curve can be expressed by

$ds=|\mathbf R(u+du)-\mathbf R(u)|\approx\left|\frac{\partial\mathbf R}{\partial u}\right|du,\qquad \frac{ds}{du}=\left|\frac{\partial\mathbf R}{\partial u}\right|.$

A tangent vector $\hat{\mathbf{t}}$ can be found from

$\hat{\mathbf t}=\frac{d\mathbf R}{ds}=\frac{d\mathbf R}{du}\Big/\frac{ds}{du}.$

Note that from the magnitude of this expression, $\hat{\mathbf t}$ is always a unit vector. It is tangent to the curve because it is proportional to $\Delta\mathbf R=\mathbf R(u+du)-\mathbf R(u)$.

With these definitions, we can define curvature as one extra

derivative:

$\frac{d\hat{\mathbf t}}{ds}=\frac{d^2\mathbf R}{ds^2}\equiv\kappa\,\hat{\mathbf n}_c.$

The unit vector $\hat{\mathbf n}_c$ is a vector perpendicular to $\hat{\mathbf t}$ (this is easy to show by taking $\frac{d}{ds}\hat{\mathbf t}^2=2\hat{\mathbf t}\cdot\frac{d\hat{\mathbf t}}{ds}=\frac{d}{ds}(1)=0$),

and we can also write

κ=|d𝐭^ds|.

It is also useful to define the local radius of curvature $R=\kappa^{-1}$. Some intuition can be gained from an analogy with the kinematics of a point particle moving without friction along a curve in space, if u is replaced by the time t. The tangent and curvature vectors can then be related to the velocity and acceleration, respectively.
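The definition can be checked numerically: for a circle of radius a, the curvature must come out as $\kappa=1/a$ (a finite-difference sketch):

```python
import numpy as np

def curvature(R, u, h=1e-5):
    """kappa = |d t_hat / ds| for a parametric curve R(u), via central differences."""
    def t_hat(w):
        dR = (R(w + h) - R(w - h)) / (2.0 * h)   # dR/du
        return dR / np.linalg.norm(dR)
    ds_du = np.linalg.norm((R(u + h) - R(u - h)) / (2.0 * h))
    dt_du = (t_hat(u + h) - t_hat(u - h)) / (2.0 * h)
    return np.linalg.norm(dt_du) / ds_du

# circle of radius a in the xy plane: kappa = 1/a everywhere
a = 2.5
circle = lambda u: np.array([a * np.cos(u), a * np.sin(u), 0.0])
```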

Surfaces

Similarly to curves, a parametric surface $\mathbf r=\mathbf R(u,v)$ in space can be defined as a function of two parameters. There are

three scalar functions contained in this explicit definition:

$\begin{cases}x=f(u,v),\\ y=g(u,v),\\ z=h(u,v).\end{cases}$

Note that it is also possible to represent surfaces implicitly as

F(x,y,z)=0,

where other than its zeros F is arbitrary.

Any explicit definition requires some particular choice of u and

$v$. For instance, one choice (called the Monge representation) is

$\begin{cases}x=f(u,v)=u,\\ y=g(u,v)=v,\\ z=h(u,v)=h(x,y).\end{cases}$

In vector notation,

𝐫=(x,y,h(x,y)).

This works only if there is a single z value for each choice of x and y, and is very convenient for surfaces which are almost flat. Another common choice useful for nearly spherical surfaces is the spherical representation, where u=ϑ and v=φ.

In spherical coordinates, this is

𝐫=(r,ϑ,φ)=(R(ϑ,φ),ϑ,φ).

We can define two tangent vectors $\mathbf r_u$ and $\mathbf r_v$ at every point on the surface (in general not orthogonal to each other). The unit vector normal to the surface is $\hat{\mathbf n}=\frac{\mathbf r_u\times\mathbf r_v}{|\mathbf r_u\times\mathbf r_v|}$. It is easy to find the unit normal from the implicit representation, and one can usually find an implicit representation: for instance, starting from Monge, $F=z-h(x,y)=0$. On the surface, $F=0$

implies

$dF=F(\mathbf r+d\mathbf r)-F(\mathbf r)=d\mathbf r\cdot\nabla F=0.$

The vector d𝐫 can be any vector tangent to the surface, and therefore F must be proportional to

the normal vector:

$\hat{\mathbf n}=\frac{\nabla F}{|\nabla F|}.$

Metric of a curved surface

A surface has been defined as an ensemble of points {𝐫(u,v)} embedded in 3-dimensional space. In order to measure length along such a surface, we must integrate along a differential length element

within it:

$d\mathbf r=\mathbf r_udu+\mathbf r_vdv,\qquad ds=|d\mathbf r|,$
$ds^2=d\mathbf r\cdot d\mathbf r=(\mathbf r_udu+\mathbf r_vdv)\cdot(\mathbf r_udu+\mathbf r_vdv)=\underbrace{\mathbf r_u^2}_{E}(du)^2+2\underbrace{\mathbf r_u\cdot\mathbf r_v}_{F}du\,dv+\underbrace{\mathbf r_v^2}_{G}(dv)^2.$

The metric is defined as

$g\equiv EG-F^2.$

It is positive definite since

$g=EG-F^2=(\mathbf r_u\cdot\mathbf r_u)(\mathbf r_v\cdot\mathbf r_v)-(\mathbf r_u\cdot\mathbf r_v)^2=(\mathbf r_u\times\mathbf r_v)^2>0.$

The surface element can be expressed in terms of the metric:

$dA=|\mathbf r_udu\times\mathbf r_vdv|=|\mathbf r_u\times\mathbf r_v|\,du\,dv=\sqrt g\,du\,dv.$

We illustrate this in the Monge representation as an example. Here,

$\mathbf r_u=\mathbf r_x=\frac{\partial}{\partial x}(x,y,h(x,y))=(1,0,h_x),\qquad \mathbf r_v=\mathbf r_y=(0,1,h_y).$

The surface element is

dA=gdxdy,

with the metric

$\mathbf r_u\times\mathbf r_v=\begin{vmatrix}\hat{\mathbf x}&\hat{\mathbf y}&\hat{\mathbf z}\\1&0&h_x\\0&1&h_y\end{vmatrix}=(-h_x,-h_y,1),\qquad g=(\mathbf r_u\times\mathbf r_v)^2=1+h_x^2+h_y^2.$

The length element is

$ds^2=r_x^2dx^2+2\mathbf r_x\cdot\mathbf r_y\,dx\,dy+r_y^2dy^2=(1+h_x^2)dx^2+2h_xh_y\,dx\,dy+(1+h_y^2)dy^2,$

and therefore we have in the Monge representation

$ds=\sqrt{(1+h_x^2)dx^2+2h_xh_y\,dx\,dy+(1+h_y^2)dy^2},$
$dA=\sqrt{1+h_x^2+h_y^2}\,dx\,dy.$
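As a quick numerical check of the area element, integrate $\sqrt g$ for a tilted plane $z=0.3x+0.4y$ over the unit square; the exact area is $\sqrt{1+0.3^2+0.4^2}=\sqrt{1.25}$ (a sketch):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)
y = np.linspace(0.0, 1.0, 101)
h = 0.3 * x[:, None] + 0.4 * y[None, :]     # tilted plane z = h(x, y)

hx, hy = np.gradient(h, x, y)               # partial derivatives h_x, h_y
sqrt_g = np.sqrt(1.0 + hx**2 + hy**2)       # Monge area element
area = sqrt_g.mean() * 1.0                  # integrand is constant on a unit square
```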

In the implicit representation, one can begin the same process by

writing the surface element in terms of the volume element:

$dA=\delta^{(3)}(\mathbf r-\mathbf r_s(x,y,z))\,d^3r,$

using the 3D Dirac delta function δ(3)(𝐫).

A general property of the Dirac delta is that

$\delta(f(x)-a)=\frac{\delta(x-f^{-1}(a))}{\left|\frac{\partial f}{\partial x}\right|_{x=f^{-1}(a)}},$

where $f^{-1}$ is the inverse function such that $f(x)=a\Leftrightarrow f^{-1}(a)=x$. In terms of the function F such that the surface is defined by

$F=0$, we can use this property to write

$\delta^{(3)}(\mathbf r-\mathbf r_s(F))=\delta(F)\,|\nabla F|,$

or

$dA=\delta(F)\,|\nabla F|\,d^3r.$

Returning to the implicit version of the Monge representation,

$\nabla F=(-h_x,-h_y,1),\qquad |\nabla F|=\sqrt{1+h_x^2+h_y^2},$
$dA=\int\delta(z-h(x,y))\,dz\,\sqrt{1+h_x^2+h_y^2}\,dx\,dy.$

Curvature of surfaces

So far we have discussed first-order differential expressions and the area element. These have to do with properties like the surface energy, $df=\sigma\,dA$. Curvature is a second-order property, useful in discussing deformations and fluctuations. Consider a curve $\mathbf r(s)$ with $0<s<L$ on a surface parametrized by u and v. On the curve, $u=u(s)$ and $v=v(s)$. If $\hat{\mathbf n}$ is the vector normal

to the surface, the local curvature (of the curve) is

$\kappa=\frac{d^2\mathbf r(s)}{ds^2}\cdot\hat{\mathbf n}=\mathbf r''(s)\cdot\hat{\mathbf n}.$

The first derivative is

$\frac{d\mathbf r}{ds}=\mathbf r_u\frac{du}{ds}+\mathbf r_v\frac{dv}{ds}=\mathbf r_uu'+\mathbf r_vv',$

and the second derivative

$\frac{d^2\mathbf r}{ds^2}=\frac{d}{ds}\left(\mathbf r_uu'+\mathbf r_vv'\right)=\mathbf r_{uu}(u')^2+\mathbf r_{vv}(v')^2+2\mathbf r_{uv}u'v'+\mathbf r_uu''+\mathbf r_vv''.$

Since 𝐧^ is perpendicular to 𝐫u and

$\mathbf{r}_{v}$, we are left with

$\kappa=L(u')^2+N(v')^2+2Mu'v',$
$L=\hat{\mathbf n}\cdot\mathbf r_{uu},\qquad N=\hat{\mathbf n}\cdot\mathbf r_{vv},\qquad M=\hat{\mathbf n}\cdot\mathbf r_{uv}.$

(some missing formulas...)

$L=-\hat{\mathbf n}_u\cdot\mathbf r_u,\qquad N=-\hat{\mathbf n}_v\cdot\mathbf r_v,\qquad M=-\hat{\mathbf n}_v\cdot\mathbf r_u=-\hat{\mathbf n}_u\cdot\mathbf r_v.$

We finally obtain

$\kappa=-\frac{(\hat{\mathbf n}_u\cdot\mathbf r_u)\,du^2+(\hat{\mathbf n}_v\cdot\mathbf r_u+\hat{\mathbf n}_u\cdot\mathbf r_v)\,du\,dv+(\hat{\mathbf n}_v\cdot\mathbf r_v)\,dv^2}{(ds)^2},$

and with d𝐫=𝐫udu+𝐫vdv

and $\mathrm{d}\hat{\mathbf{n}}=\hat{\mathbf{n}}_{u}\mathrm{d}u+\hat{\mathbf{n}}_{v}\mathrm{d}v$,

$\kappa=-\frac{d\hat{\mathbf n}\cdot d\mathbf r}{d\mathbf r\cdot d\mathbf r}=-\frac{d\hat{\mathbf n}\cdot d\mathbf r}{ds^2}.$

(missing diagram...)

\paragraph{Curvature tensor}

Since $\mathrm{d}\mathbf{r}\cdot\mathbf{\hat{n}}=0$,

$$\hat{\mathbf{n}}+\mathrm{d}\hat{\mathbf{n}}=\hat{\mathbf{n}}\left(\mathbf{r}+\mathrm{d}\mathbf{r}\right)=\hat{\mathbf{n}}\left(\mathbf{r}\right)+\mathrm{d}\mathbf{r}\cdot\nabla\hat{\mathbf{n}},$$

or

$$\mathrm{d}\hat{\mathbf{n}}=\mathrm{d}\mathbf{r}\cdot\nabla\hat{\mathbf{n}}\equiv\mathrm{d}\mathbf{r}\cdot Q.$$

The quantity

$$Q=\nabla\hat{\mathbf{n}},\qquad Q_{ij}=\left(\nabla\hat{\mathbf{n}}\right)_{ij}=\frac{\partial}{\partial r_i}\left(\hat{\mathbf{n}}\right)_j,$$

is a second rank tensor or dyadic.

Now, we can write κ with \begin{minipage}[t]{1\columnwidth} \begin{shaded} (some missing formulas...)\end{shaded}

\end{minipage}

$$\kappa=-\frac{\mathrm{d}\hat{\mathbf{n}}\cdot\mathrm{d}\mathbf{r}}{\mathrm{d}\mathbf{r}\cdot\mathrm{d}\mathbf{r}}=-\frac{\mathrm{d}\mathbf{r}\cdot Q\cdot\mathrm{d}\mathbf{r}}{\mathrm{d}\mathbf{r}\cdot\mathrm{d}\mathbf{r}}=-\frac{\mathrm{d}\mathbf{r}\cdot Q\cdot\mathrm{d}\mathbf{r}}{\mathrm{d}s^2},$$

or

$$\kappa=-\mathbf{r}'(s)\cdot Q\cdot\mathbf{r}'(s)=-\sum_{ij}r_i'Q_{ij}r_j',$$

where $\mathbf{r}'(s)=\frac{\mathrm{d}\mathbf{r}}{\mathrm{d}s}$.

This can be used for the case of an implicitly defined surface where

$\hat{\mathbf{n}}=\frac{\triangledown F}{\left|\triangledown F\right|}$:

$$Q_{ij}=\left[\nabla\left(\frac{\nabla F}{\left|\nabla F\right|}\right)\right]_{ij}=\partial_i\left(\frac{\partial_jF}{\left|\nabla F\right|}\right)=\frac{\partial_i\partial_jF}{\left|\nabla F\right|}-\frac{\left(\partial_jF\right)\partial_i\left|\nabla F\right|}{\left|\nabla F\right|^2}.$$

Using $\partial_{i}\left|\triangledown F\right|=\partial_{i}\sqrt{\left(\partial_{x}F\right)^{2}+\left(\partial_{y}F\right)^{2}+\left(\partial_{z}F\right)^{2}}$,

$$Q_{ij}=\frac{1}{\left|\nabla F\right|}\left[\partial_i\partial_jF-\frac{\partial_i\left|\nabla F\right|\,\partial_jF}{\left|\nabla F\right|}\right].$$

06/16/2009

\paragraph{The curvature tensor and its invariants}

The dyadic matrix $Q=\nabla\hat{\mathbf{n}}$ has eigenvalues $\lambda_i$, a trace $\mathrm{Tr}Q=\sum_i\lambda_i$ and a determinant $\mathrm{Det}Q=\left|Q\right|=\prod_i\lambda_i$, which are invariant under similarity transformations $\tilde{Q}=UQU^{-1}$. The sum of the principal minors $\sum_iM_{ii}$ is also invariant:

to see this, consider the characteristic polynomial

$$P(\lambda)=\mathrm{Det}\left(Q-\lambda I\right)=\mathrm{Det}\left(U^{-1}U\right)\mathrm{Det}\left(Q-\lambda I\right)=\mathrm{Det}\left(U^{-1}\right)\mathrm{Det}\left(Q-\lambda I\right)\mathrm{Det}\left(U\right)=\mathrm{Det}\left(U^{-1}QU-\lambda U^{-1}IU\right)=\mathrm{Det}\left(\tilde{Q}-\lambda I\right).$$

Here I is the unit matrix. Expanding P(λ) in

powers of $\lambda$,

$$P(\lambda)=\left|Q\right|-\lambda\sum_iM_{ii}+\lambda^2\sum_iQ_{ii}-\lambda^3.$$

We can clearly identify the coefficients of the polynomial $P(\lambda)$ as the 3 invariants. One eigenvalue is always equal to zero (as an exercise, show this in the implicit representation). If we choose $\lambda_3=0$, we are reduced to two nontrivial invariants: $\mathrm{Tr}Q=\lambda_1+\lambda_2$ and $\sum_iM_{ii}=\lambda_1\lambda_2$ (as $\mathrm{Det}(Q)=0$). These invariants are called the mean curvature $H$ and the

Gaussian curvature $K$:

$$H\equiv\frac{1}{2}\left(\lambda_1+\lambda_2\right),\qquad K\equiv\lambda_1\lambda_2.$$

For example, in the implicit representation we can write

$$\hat{\mathbf{n}}=\frac{\nabla F}{\left|\nabla F\right|},\qquad N\equiv\left|\nabla F\right|,\qquad\mathrm{Tr}Q=\frac{1}{N}\sum_i\left[\frac{N_iF_i}{N}-F_{ii}\right],$$

where

$$N_i=\frac{\partial N}{\partial r_i},\qquad F_{ij}=\frac{\partial^2F}{\partial r_i\partial r_j}.$$

Note that since, for instance,

$$N_i=\partial_i\sqrt{F_x^2+F_y^2+F_z^2}=\frac{\partial_i\left(F_jF_j\right)}{2\sqrt{F_x^2+F_y^2+F_z^2}}=\frac{1}{N}\sum_jF_jF_{ij},$$

with a few more steps we can show (another exercise) that

$$H=\frac{1}{2N^3}\left[2F_xF_yF_{xy}-F_{xx}\left(F_y^2+F_z^2\right)+2\text{ cyclic permutations}\right],$$
$$K=\frac{1}{N^4}\left[F_{xx}F_{yy}F_z^2-F_{xy}^2F_z^2+2F_{xz}F_x\left(F_yF_{yz}-F_zF_{yy}\right)+2\text{ cyclic permutations}\right],$$

where by cyclic permutations we mean permuting the axes: $x\to y\to z\to x$. In the case of the Monge representation, where $F=z-h(x,y)$, $H$ and $K$ take a simpler form:

$$N=\sqrt{1+h_x^2+h_y^2},\qquad h_i\equiv\partial_ih,\qquad F_x=-h_x,\quad F_y=-h_y,\quad F_z=1.$$

One can then show that

$$H=\frac{1}{2\left(1+h_x^2+h_y^2\right)^{3/2}}\left[\left(1+h_y^2\right)h_{xx}+\left(1+h_x^2\right)h_{yy}-2h_xh_yh_{xy}\right],$$
$$K=\frac{h_{xx}h_{yy}-h_{xy}^2}{\left(1+h_x^2+h_y^2\right)^2}.$$
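The Monge-representation formula for $K$ can be checked symbolically on a shape whose curvature we know. The following sketch (not part of the notes) verifies that the upper hemisphere $h=\sqrt{R^2-x^2-y^2}$ has Gaussian curvature $1/R^2$:

```python
# Symbolic check of K = (h_xx h_yy - h_xy^2)/(1 + h_x^2 + h_y^2)^2
# on a sphere of radius R, where K must equal 1/R^2.
import sympy as sp

x, y, R = sp.symbols('x y R', positive=True)
h = sp.sqrt(R**2 - x**2 - y**2)          # upper hemisphere in Monge form
hx, hy = sp.diff(h, x), sp.diff(h, y)
hxx, hyy, hxy = sp.diff(h, x, 2), sp.diff(h, y, 2), sp.diff(h, x, y)

K = (hxx * hyy - hxy**2) / (1 + hx**2 + hy**2)**2

# evaluate exactly at a sample point inside the domain (x^2 + y^2 < R^2)
val = K.subs({x: sp.Rational(3, 10), y: sp.Rational(1, 5), R: 2})
assert sp.simplify(val - sp.Rational(1, 4)) == 0   # 1/R^2 with R = 2
```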

Small disturbances of planar surfaces

To treat nearly flat surfaces, one can use the Monge representation to expand a Taylor series around a completely flat surface in derivatives

of $h\left(x,y\right)$:

$$N=\sqrt{1+h_x^2+h_y^2}\approx1+\frac{1}{2}\left(h_x^2+h_y^2\right),$$

or equivalently

$$N=\left|\nabla F\right|\approx1+\frac{1}{2}\left(\nabla h\right)^2.$$

From similar arguments, one can show that

$$H\approx\frac{1}{2}\left(h_{xx}+h_{yy}\right)=\frac{1}{2}\nabla^2h,\qquad K\approx h_{xx}h_{yy}-\left(h_{xy}\right)^2.$$

In the general parametric representation,

$$\kappa=\frac{L\left(\mathrm{d}u\right)^2+2M\,\mathrm{d}u\,\mathrm{d}v+N\left(\mathrm{d}v\right)^2}{E\left(\mathrm{d}u\right)^2+2F\,\mathrm{d}u\,\mathrm{d}v+G\left(\mathrm{d}v\right)^2},$$

with $L=\hat{\mathbf{n}}\cdot\mathbf{r}_{uu}$, $M=\hat{\mathbf{n}}\cdot\mathbf{r}_{uv}$ and $N=\hat{\mathbf{n}}\cdot\mathbf{r}_{vv}$, and where $E=\mathbf{r}_u^2$, $F=\mathbf{r}_u\cdot\mathbf{r}_v$ and $G=\mathbf{r}_v^2$ are the coefficients of the first fundamental form. Picking a unit vector $\hat{\mathbf{a}}=l\mathbf{r}_u+m\mathbf{r}_v$ in the tangent plane, the curvature in the direction of $\hat{\mathbf{a}}$

is given by

$$\kappa=Ll^2+2Mlm+Nm^2.$$

The parameters $l$ and $m$ must obey

$$\left|\hat{\mathbf{a}}\right|=1\quad\Rightarrow\quad\hat{\mathbf{a}}\cdot\hat{\mathbf{a}}=El^2+2Flm+Gm^2=1.$$

In investigating κ(l,m) as a function of the direction of 𝐚^, we can find its extrema with the constraint

$a=1$ by adding a Lagrange multiplier:

$$\tilde{\kappa}=\kappa(l,m)-\lambda\left(El^2+2Flm+Gm^2\right),\qquad\frac{\partial\tilde{\kappa}}{\partial l}=\frac{\partial\tilde{\kappa}}{\partial m}=0.$$

The solution takes the form of a quadratic equation

$$\kappa^2\left(EG-F^2\right)-\kappa\left(EN+GL-2FM\right)+\left(LN-M^2\right)=0,$$

which has 2 roots: $\kappa_a$ and $\kappa_b$. This extremum-finding process defines the principal directions, which (we state without proof) are always perpendicular to each other.

The two invariants are then

$$H=\frac{1}{2}\left(\kappa_a+\kappa_b\right)=\frac{EN+GL-2FM}{2\left(EG-F^2\right)},\qquad K=\kappa_a\kappa_b=\frac{LN-M^2}{EG-F^2}.$$

Consider a few cases in terms of the radii of curvature $r_a=1/\kappa_a$ and $r_b=1/\kappa_b$:

  1. If at some point both radii are positive, then κa, κb, H and K are all positive. The surface is convex around the point, as in (17a).
  2. If both are negative, then H<0 and K>0. The surface is concave around the point, as in (17b).
  3. If the two have opposing signs, K is negative and one is at a saddle point of the surface, as in (17c).
  4. The special surface having $H=\left(\kappa_a+\kappa_b\right)/2=0$ at every point is called a minimal surface (or Schwarz surface, after the 19th-century mathematician who studied them in detail). These surfaces have a saddle at every point, as one curvature is always positive and the other negative: $\kappa_a=-\kappa_b$. Hence their Gaussian curvature is always negative: $K<0$.
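Case 3 above can be illustrated with a few lines of code (not part of the notes; the height function is an arbitrary example). At a critical point of $h$ the small-gradient formulas apply exactly, and the principal curvatures are the eigenvalues of the Hessian of $h$:

```python
# At the origin of the saddle h(x, y) = (x^2 - y^2)/2 the principal
# curvatures are +1 and -1, so H = 0 (a minimal-surface-like point)
# and K = -1 < 0, as expected for a saddle.
import numpy as np

hess = np.array([[1.0, 0.0],      # Hessian of h at the origin
                 [0.0, -1.0]])
ka, kb = np.linalg.eigvalsh(hess)  # principal curvatures at this point
H, K = 0.5 * (ka + kb), ka * kb
assert np.isclose(H, 0.0) and np.isclose(K, -1.0)
```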

We will use the principal directions to describe a local paraboloid

expansion of a nearly flat surface. In general,

$$z=z_0+\frac{1}{2}L\left(u-u_0\right)^2+\frac{1}{2}N\left(v-v_0\right)^2.$$

In the Monge representation,

$$z=z_0+\frac{1}{2}\kappa_a\left(x-x_0\right)^2+\frac{1}{2}\kappa_b\left(y-y_0\right)^2.$$

Free energy of soft surfaces

\begin{description}

[{Book:}] Landau & Lifshitz's book on Elasticity has a chapter on the elasticity of hard (solid) shells. There is also a book by Boal on the elasticity and mechanics of fluid membranes. Safran's book shows how the parameters we describe can be derived from a microscopic model where the lipid (surfactant) molecules are modeled as beads connected by various springs. \end{description} Consider a liquid surface or fluid membrane: as such a surface curves,

its free energy varies. Phenomenologically,

$$\Delta F=\sigma\int\mathrm{d}A+2k\int\left(H-C_0\right)^2\mathrm{d}A+\bar{k}\int K\,\mathrm{d}A.$$

All the integrals are taken over the surface. The fact that the above expression models a fluid membrane is related to the fact that we do not account for any lateral shear forces. Molecules composing the fluid membrane are free to flow inside the membrane, but they resist elastic deformations such as bending. The first term describes the contribution of surface tension, which is proportional to the total surface area. The geometric quantities $H$ and $K$ are the mean and Gaussian curvatures we have already encountered. The coefficients $k$ and $\bar{k}$ (with units of energy) depend, like $\sigma$, on the material properties in question. The spontaneous curvature $C_0$ is also a material property: it defines a certain preferred angle (perhaps due to the shape of the surfactant molecules), and its sign depends on the preferred direction of curvature. See (18) for an illustration. Unless there is an active process that causes an asymmetry in the lipid composition of the two leaflets, the bilayer will have the same lipid composition on the inside and outside, and therefore has in total $C_0=0$. Usually for fluid membranes, $k$ and $\bar{k}$ range from $1\,k_BT$ to $40\,k_BT$.

One example is a sphere of radius $R$, where:

$$H=\frac{1}{2}\left(R_1^{-1}+R_2^{-1}\right)=R^{-1},\qquad K=R^{-2}.$$

This gives

$$\Delta F\Big|_{\mathrm{sphere}}=\sigma\int\mathrm{d}A+2k\int\left(H-C_0\right)^2\mathrm{d}A+\bar{k}\int K\,\mathrm{d}A=4\pi R^2\sigma+2k\left(\frac{1}{R}-C_0\right)^2 4\pi R^2+4\pi\bar{k}=4\pi R^2\sigma+8\pi k\left(1-C_0R\right)^2+4\pi\bar{k}.$$

The interesting fact that the surface integral over the Gaussian curvature $K$ gives a constant value of $4\pi$ – independent of the radius $R$ of the sphere – has a deep meaning. It is related to the famous Gauss-Bonnet theorem, which is stated here without further details: according to this theorem, the integral over the Gaussian curvature is a topological invariant of the surface, whose value is equal to $4\pi(1-g)$, where $g$ is the genus of the surface. A sphere or any closed object with no "holes" has $g=0$ and an integrated Gaussian curvature of $4\pi$, while a torus (or "donut") with one hole has $g=1$ and hence a zero integrated Gaussian curvature.

A second example is an infinite cylinder with radius $R$. Here, $\kappa_a=1/R$

and $\kappa_{b}=$0. The free energy per unit length is

$$\frac{\Delta F}{L}=2\pi R\sigma+2k\left(\frac{1}{2R}-C_0\right)^2 2\pi R+\bar{k}\underbrace{\int\frac{1}{R}\cdot0\,\mathrm{d}A}_{=0}=2\pi R\sigma+\frac{\pi k}{R}\left(1-2C_0R\right)^2.$$

An even simpler example is the infinite plane, where $\kappa_a=\kappa_b=0\Rightarrow K=H=0$.

This yields

$$\frac{\Delta F}{\mathrm{Area}}=\sigma+2kC_0^2.$$
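The three special cases above can be evaluated numerically. The following sketch (not part of the notes; all parameter values are invented for illustration, in units where $k_BT=1$) computes the free energies of the sphere, the cylinder per unit length, and the plane per unit area:

```python
# Evaluating the phenomenological (Helfrich-type) free energy for the
# three example geometries; sigma, k, kbar, C0 and R are assumed values.
import numpy as np

sigma, k, kbar, C0 = 1.0, 10.0, -1.0, 0.0
R = 5.0

# Sphere: Delta F = 4 pi R^2 sigma + 8 pi k (1 - C0 R)^2 + 4 pi kbar
F_sphere = 4 * np.pi * R**2 * sigma + 8 * np.pi * k * (1 - C0 * R)**2 + 4 * np.pi * kbar

# Infinite cylinder, per unit length (the Gaussian term vanishes, K = 0)
F_cyl_per_L = 2 * np.pi * R * sigma + (np.pi * k / R) * (1 - 2 * C0 * R)**2

# Infinite plane, per unit area
F_plane_per_A = sigma + 2 * k * C0**2

# With C0 = 0 the sphere's curvature energy is independent of R (Gauss-Bonnet):
assert np.isclose(F_sphere - 4 * np.pi * R**2 * sigma, 8 * np.pi * k + 4 * np.pi * kbar)
```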

06/18/2009

Thermal fluctuations of a plane

\begin{description}

[{Book:}] Safran's book. \end{description} To second order in derivatives of h in the Monge representation

for $\bar{k}=$0,

$$\Delta F=\frac{\sigma}{2}\int\left(\nabla h\right)^2\mathrm{d}x\,\mathrm{d}y+\frac{2k}{4}\int\left(\nabla^2h\right)^2\mathrm{d}x\,\mathrm{d}y.$$

The minimum of energy is obtained for a flat surface. Going to a Fourier

transformed form, we have

hπͺ=(12π)2d2rh(𝐫)eiπͺ𝐫,h(𝐫)=(12π)2d2qhπͺeiπͺ𝐫,(2h)2q4hπͺhπͺ,(h)2q2hπͺhπͺ.

This gives for the free energy in terms of the normal surface modes

$\left\{ h_{q}\right\} $:

ΔF=σ2(qx2+qy2)hπͺhπͺd2q+k2(qx2+qy2)2hπͺhπͺd2q.

With $h(\mathbf{r})$ real, we know that $h_{-\mathbf{q}}=h_{\mathbf{q}}^*$, or $h_{\mathbf{q}}h_{-\mathbf{q}}=\left|h_{\mathbf{q}}\right|^2$. From the classical equipartition theorem we can estimate the equilibrium

energy for the average of this quantity:

kBT2=12σq2hπͺhπͺ+12kq4hπͺhπͺ|hπͺ|2=kBTσq2+kq4.

It is now useful to define the new length scale $\xi\equiv\sqrt{k/\sigma}$, and examine the limits $q\xi\ll1$ and $q\xi\gg1$. In the $q\xi\ll1$ limit, one obtains a surface dominated by surface

tension. Consider the real space thermal correlation function

$$\left\langle h(\mathbf{r})h(\mathbf{r}')\right\rangle=\frac{1}{\left(2\pi\right)^2}\int\!\!\int e^{i\mathbf{q}\cdot\mathbf{r}}e^{i\mathbf{q}'\cdot\mathbf{r}'}\underbrace{\left\langle h_{\mathbf{q}}h_{\mathbf{q}'}\right\rangle}_{\propto\,\delta^{(2)}\left(\mathbf{q}+\mathbf{q}'\right)}\mathrm{d}^2q\,\mathrm{d}^2q'=\frac{1}{\left(2\pi\right)^2}\int\mathrm{d}^2q\,\frac{k_BT}{\sigma q^2}.$$

Since this integral diverges at both large and small $q$, to obtain a physically meaningful result we must introduce cutoffs on the range of $q$: $q_{min}=2\pi/L$, where $L$ is the linear dimension of the system, and $q_{max}=2\pi/a$, where $a\approx3\,$Å is the typical molecular size. This gives an example of a famous result from the 1930s, known as the Landau-Peierls instability

for 2-dimensional systems and the lack of an ordered phase at $T>$0:

$$\left\langle h^2(\mathbf{r})\right\rangle=\frac{k_BT}{2\pi\sigma}\ln\frac{L}{a}.$$

Since the logarithmic divergence is very weak, it turns out that the thermal fluctuations are two or three Angstroms in size for a water surface of macroscopic (a few millimeters or centimeters) dimension. These thermal fluctuations are not easy to measure, because the signal should come only from the water molecules at the water surface. In the 1980s they were measured for the first time for water surfaces at room temperature, using a powerful synchrotron X-ray source. The technique employs scattering at very low angles (called grazing incidence) from the water surface, and obtains the intensity of the scattered X-rays as a function of $q$. This quantity is proportional to $\left\langle\left|h_q\right|^2\right\rangle$. In the opposite limit, where $q\xi\gg1$, the membrane is dominated

by its elastic energy, and $\sigma\ll kq^{2}$ can be neglected:

|hπͺ|2=kBTσq2+kq4kBTkq4q4.

The divergence at small q is much larger here than in the first

case, and

$$\left\langle h^2\right\rangle=\frac{k_BT}{2\pi k}\int_{q_{min}}^{q_{max}}\frac{1}{q^3}\,\mathrm{d}q=\frac{k_BT}{4\pi k}\left[-\underbrace{\left(\frac{1}{q_{max}}\right)^2}_{\approx\,0}+\left(\frac{1}{q_{min}}\right)^2\right],$$

and

$$\left\langle h^2\right\rangle=\frac{k_BT}{2\left(2\pi\right)^2}\frac{L^2}{k}\quad\Rightarrow\quad\sqrt{\left\langle h^2\right\rangle}\propto L.$$

In such membranes, which are dominated by elasticity, the fluctuations increase linearly with membrane size. For a membrane around $1\,$cm in length, a typical amplitude is in the $\mu$m range. Another interesting observation is that $\left\langle h^2\right\rangle\propto T/k$. For small $k$ (flexible membranes), as well as for higher temperatures, the fluctuations become larger. This is valid as long as the condition of the elasticity-dominated case, $\sigma\ll kq^2$, remains satisfied. Also, recall that the large membrane fluctuations come from small $q$, i.e. large wavelengths, and not from small wiggles associated with the motion of individual molecules.
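Plugging numbers into the tension-dominated (Landau-Peierls) result above is instructive. This order-of-magnitude sketch (not a calculation from the notes; the water parameters are standard assumed values) reproduces the Angstrom-scale roughness quoted for a macroscopic water surface:

```python
# <h^2> = kBT/(2 pi sigma) * ln(L/a) for a centimeter-sized water surface:
# the thermal roughness comes out at the Angstrom scale.
import numpy as np

kBT = 4.1e-21          # J, room temperature
sigma = 0.072          # N/m, surface tension of water (assumed value)
L, a = 1e-2, 3e-10     # m: system size and molecular cutoff

h2 = kBT / (2 * np.pi * sigma) * np.log(L / a)
amplitude = np.sqrt(h2)
assert 1e-10 < amplitude < 1e-9   # a few Angstroms
```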

Rayleigh instability

Due to surface tension, a cylinder of liquid created in air (or surrounded by another immiscible liquid) is unstable and will break into spherical droplets. Let us consider the following model: a cylinder of length $L$ and much smaller radius $R_0\ll L$, which contains an incompressible liquid with a total volume of $V=\pi LR_0^2$. For simplicity, we will consider perturbations which preserve the body-of-revolution symmetry around the main axis and the cylinder length $L$, such that only the local radius $\rho(z)$ along the cylinder's axis may vary. Expanding $\rho$ in normal modes

then gives

$$\rho(z)=R+\sum_{q\neq0}\rho_qe^{iqz}.$$

The mode amplitudes are $\rho_q=\frac{1}{L}\int\rho(z)e^{-iqz}\mathrm{d}z$.

Note that

$$\left\langle\rho(z)\right\rangle=\frac{1}{L}\int_0^L\rho(z)\,\mathrm{d}z=R,$$

with R depending on R0. This dependence can be found from

the constant volume constraint

$$V=\pi R_0^2L=\int_0^L\pi\rho^2(z)\,\mathrm{d}z=\pi R^2L+\pi\sum_{q,q'}\rho_q\rho_{q'}\underbrace{\left(\int_0^Le^{i\left(q+q'\right)z}\mathrm{d}z\right)}_{=L\delta_{q,-q'}}\quad\Rightarrow\quad R^2L=R_0^2L-L\sum_{q\neq0}\rho_q\rho_{-q}.$$

This is exact, but for small perturbations we can expand the root

and obtain

$$R\approx R_0\left(1-\frac{1}{2R_0^2}\sum_{q\neq0}\left|\rho_q\right|^2\right).$$

The surface energy of the distorted cylinder will be

$$F_s=\sigma\cdot\mathrm{Area}=\sigma\int_0^L2\pi\rho(z)\sqrt{1+\left(\frac{\partial\rho}{\partial z}\right)^2}\,\mathrm{d}z.$$

(We have used the expression for the surface area of a body of revolution with axial symmetry.) Expanding all quantities up to second order in $\rho_q$ gives

$$F_s\approx2\pi\sigma\int_0^L\rho(z)\left(1+\frac{1}{2}\rho_z^2\right)\mathrm{d}z=2\pi\sigma LR_0-\frac{\pi\sigma L}{R_0}\sum_{q\neq0}\left|\rho_q\right|^2+\pi\sigma LR_0\sum_{q\neq0}q^2\left|\rho_q\right|^2.$$

Finally,

$$\Delta F=F_s-F_s^0=\frac{\pi\sigma L}{R_0}\sum_{q\neq0}\left|\rho_q\right|^2\left(q^2R_0^2-1\right).$$

The conclusion is that modes having $q^2R_0^2<1$ will reduce the original cylinder free energy $F_s^0$. This is the onset of an instability, called the Rayleigh instability of a liquid cylinder: the cylinder will spontaneously develop undulations of wavelength $\lambda=2\pi/q>2\pi R_0$, which grow and eventually break up the cylinder into spherical droplets of a size set by $R_0$. Note that if we go back to the planar surface by taking the limit $R_0\to\infty$, no such instability occurs, since the planar geometry has the lowest surface area with respect to any fluctuating surface.
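The instability criterion can be checked with a few lines of code (an illustration, not from the notes; the numerical prefactors and mode amplitude are arbitrary, since only the sign of $q^2R_0^2-1$ matters):

```python
# Sign check of the Rayleigh instability: the per-mode free energy change
# is proportional to (q^2 R0^2 - 1), so modes with q R0 < 1 lower the energy.
import numpy as np

sigma, L, R0 = 1.0, 100.0, 1.0

def dF_mode(q, rho_q_sq=1e-4):
    # quadratic-order contribution of a single undulation mode
    return np.pi * sigma * L / R0 * rho_q_sq * (q**2 * R0**2 - 1)

assert dF_mode(0.5 / R0) < 0   # long wavelength: grows spontaneously
assert dF_mode(2.0 / R0) > 0   # short wavelength: costs energy
```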

Student Projects

Polymer Dynamics

\noindent \begin{center} {\huge Physical Models in Biological Systems and Soft Matter}

{\huge Final Course Project}

{\huge A Guided Tour to the Essence of Polymer Dynamics}

{\large Submitted by: Shlomi Reuveni} \par\end{center}

\newpage{} \tableofcontents{} \newpage{}

What is this document all about?

This paper is submitted as a final project in the course "Physical Models in Biological Systems and Soft Matter". In writing this document I aimed at achieving two goals. The first was getting to know a little better a subject that I found interesting and that was not covered during the course. As an interesting by-product, I also profoundly improved my knowledge of diffusion, a subject I was already superficially acquainted with. The second goal was to provide an accessible exposition of the subject of polymer dynamics, aimed mainly at advanced undergraduate students who are curious about the subject and would like an easy start. This is why this document is titled "A Guided Tour to the Essence of Polymer Dynamics", and why it is written in the form of questions and answers.

The saying goes: "There are two ways by which one can really master a subject: research and teaching". I felt that the effort I have put into making this document readable for advanced undergraduate students taught me more than I would have learned by passive reading. I have tried hard to make this document as self-contained and self-explanatory as possible, and therefore hope that it will be of some help to you, the curious student. So, if you wonder "What do you mean by polymer dynamics?" and "How can this subject be of any interest to me?", please read on. \newpage{} \section{O.K, sum it up in a few lines so I can decide if I want to go on reading!}

What's a polymer?

A polymer is a large molecule (macro-molecule) composed of repeating structural units (monomers) typically connected by covalent chemical bonds. Due to the extraordinary range of properties accessible in polymeric materials, they have come to play an essential and ubiquitous role in everyday life – from plastics and elastomers on the one hand to natural biopolymers such as DNA and proteins that are essential for life on the other. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.5]{Single_Polymer_Chains_AFM} \par\end{centering} \caption{Appearance of real linear polymer chains as recorded using an atomic force microscope on surface under liquid medium. Chain contour length for this polymer is 204nm; thickness is 0.4nm. Taken from: Y. Roiter and S. Minko, AFM Single Molecule Experiments at the Solid-Liquid Interface: In Situ Conformation of Adsorbed Flexible Polyelectrolyte Chains, Journal of the American Chemical Society, vol. 127, iss. 45, pp. 15688-15689 (2005) }

\end{figure}

What's polymer dynamics?

Like every other molecule, a polymer is affected by the thermal motion of the surrounding molecules. It is this thermal agitation that causes a flexible polymer to move about in the solution while constantly changing its shape. This motion is referred to as polymer dynamics.

\begin{figure}[H] \begin{centering} \includegraphics[scale=0.5]{Motion} \par\end{centering}

\caption{Photographs of DNA polymers in aqueous solution taken by fluorescence microscopy. There is a 1-second interval between successive frames. The motion is clearly visible. Taken from: Introduction to Polymer Physics, M. Doi Translated by H. See, Clarendon Press, 30 November 1995.}

\end{figure}

What can I find in the rest of this document?

If you have ever wondered how one can understand the motion of a polymer, and what physical properties emanate from the dynamics of these materials, you should read on. We will start with the building block: the dynamics of a single particle in solution. We will then gradually build on this, presenting two models for polymer dynamics. Experimental observations will also be discussed as we confront our models with reality.

\newpage{}

\section{I knew there must be some preliminaries, can you keep it short and to the point? }

\subsection{Why do you bore me with this? why can't I skip directly to section 4?}

If you are familiar with concepts such as diffusion, the Einstein relation and Brownian motion, you will find this section easier to read. If you are also familiar with the mathematics behind these concepts, the Smoluchowski and Langevin equations, you can skip directly to section 4. In order to understand polymer dynamics we have to start from something more basic. A polymer can be thought of as a long chain of particles (the monomers); the particles are connected to one another and hence interact. It would be wise to first try to understand the dynamics of a single particle, and only then take these interactions into account. The dynamics of a single particle lies at the heart of this section.

\subsection{Can't say I know much about any of the stuff you mentioned above but first thing is first, what is diffusion?}

Molecular diffusion, often called simply diffusion, is a net transport of molecules from a region of higher concentration to one of lower concentration by random molecular motion. The result of diffusion is a gradual mixing of material. In a phase with uniform temperature, absent external net forces acting on the particles, the diffusion process will eventually result in complete mixing, i.e. a state of equilibrium. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.55]{Diffusion_(1)}

~

\includegraphics[scale=0.14]{cell_diffusion_ink_India} \par\end{centering}

\centering{}\caption{Top: Schematic representation of mixing of two substances by diffusion. Bottom: Ink diffusing in water.}

\end{figure}

How can we mathematically treat diffusion?

As mentioned above, diffusion is basically the movement of molecules from an area of high concentration to an area of lower concentration. For simplicity we will consider one-dimensional diffusion. Let $c(x,t)$ be the concentration at position $x$ and time $t$. A phenomenological description of diffusion is given by Fick's law:

$$j(x,t)=-D\frac{\partial c(x,t)}{\partial x}.$$

In words: if the concentration is not uniform, there will be a flux of matter proportional to the gradient of the concentration. The proportionality constant is called the diffusion constant and is denoted by $D$; its units are $\mathrm{length}^2/\mathrm{time}$. The minus sign takes care of the fact that the flow is from the higher concentration region to the lower concentration region.

Where is this flux coming from?

Its microscopic origin is the random thermal motion of the particles. The average velocity of each particle is zero, and there is an equal probability for each particle to have a velocity directed to the right or to the left. However, if the concentration is not uniform, the number of particles which happen to flow from the higher concentration region to the lower concentration region is larger than the number of particles flowing in the other direction, simply because there are more particles there. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.5]{\string"23-07-2009 22-19-57\string".eps} \par\end{centering} \centering{}\caption{Microscopic explanation for Fick's law. Suppose that the particle concentration c(x) is not uniform. If the particles move randomly as shown by the arrows, there is a net flux of particles flowing from the higher concentration region to the lower concentration region. Here the diffusion constant of the particle, which determines the average length of the arrows, is assumed to be constant. }

\end{figure}

How do we go on?

We now write an equation for the conservation of matter. The change in the number of particles located in the interval $[x,x+\Delta x]$ from time $t$ to time $t+\Delta t$ is given by the number of particles coming/going from the left minus the number of particles coming/going from the right:

$$N(t+\Delta t)-N(t)\approx\left[c\left(x+\tfrac{\Delta x}{2},t+\Delta t\right)-c\left(x+\tfrac{\Delta x}{2},t\right)\right]\Delta x\approx\left[j\left(x,t+\tfrac{\Delta t}{2}\right)-j\left(x+\Delta x,t+\tfrac{\Delta t}{2}\right)\right]\Delta t,$$

or:

$$\frac{c\left(x+\tfrac{\Delta x}{2},t+\Delta t\right)-c\left(x+\tfrac{\Delta x}{2},t\right)}{\Delta t}=\frac{j\left(x,t+\tfrac{\Delta t}{2}\right)-j\left(x+\Delta x,t+\tfrac{\Delta t}{2}\right)}{\Delta x}.$$

Taking the limits $\Delta t,\Delta x\to0$, and assuming continuity and differentiability of the concentration and the flux, we obtain:

$$\frac{\partial c(x,t)}{\partial t}=-\frac{\partial j(x,t)}{\partial x}.$$

Substituting the expression for the flux gives the well known diffusion

equation:

$$\frac{\partial c(x,t)}{\partial t}=D\frac{\partial^2c(x,t)}{\partial x^2}.$$
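One can verify symbolically (a quick check, not part of the original text) that the spreading Gaussian, the fundamental solution that will reappear later in this document, satisfies this equation:

```python
# Check that c(x,t) = exp(-x^2/(4 D t))/sqrt(4 pi D t) solves c_t = D c_xx.
import sympy as sp

x, t, D = sp.symbols('x t D', positive=True)
c = sp.exp(-x**2 / (4 * D * t)) / sp.sqrt(4 * sp.pi * D * t)
residual = sp.diff(c, t) - D * sp.diff(c, x, 2)
assert sp.simplify(residual) == 0
```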

\subsubsection{What happens if the particles are under the influence of some kind of a potential U(x)?}

If this happens Fick's law must be modified since the potential U(x)

exerts a force:

$$F=-\frac{\partial U}{\partial x}$$

on the particle and gives a nonzero average velocity $v$. If the force is weak, there is a linear relation between force and velocity,

given by:

$$v=\frac{F}{\zeta}=-\frac{1}{\zeta}\frac{\partial U}{\partial x}.$$

The constant $\zeta$ is called the friction constant, and its inverse $1/\zeta$ is called the mobility.

\subsubsection{How come the velocity doesn't grow indefinitely? there is a constant force!}

Correct, but it is not the only force acting on the particle. There are also friction and random forces exerted by other particles, and hence, like a feather falling under its own weight, the particle reaches a finite average velocity.

O.K, and what do we do now?

We will obtain the Smoluchowski equation that takes the potential into account, but first we will obtain an important relation between the diffusion constant, the temperature and the friction constant. The average velocity of the particle gives an additional flux $cv$,

so that the total flux is:

$$j(x,t)=-D\frac{\partial c(x,t)}{\partial x}-\frac{c}{\zeta}\frac{\partial U}{\partial x}.$$

An important relation is obtained from this equation. As you may recall from statistical mechanics, in equilibrium the concentration is given

by the Boltzmann distribution:

$$c_{eq}(x,t)\propto\exp\left(-U(x)/k_BT\right),$$

for which the flux must vanish and hence:

$$-D\frac{\partial c_{eq}(x,t)}{\partial x}-\frac{c_{eq}}{\zeta}\frac{\partial U}{\partial x}=0.$$

Substituting for $c_{eq}(x,t)$ we get:

$$\frac{Dc_{eq}}{k_BT}\frac{\partial U}{\partial x}-\frac{c_{eq}}{\zeta}\frac{\partial U}{\partial x}=c_{eq}\frac{\partial U}{\partial x}\left[\frac{D}{k_BT}-\frac{1}{\zeta}\right]=0.$$

Since this is true for every $x$ it follows that:

$$D=\frac{k_BT}{\zeta}.$$

This relation is called the Einstein relation. It states that the diffusion constant, which characterizes the thermal motion, is related to the friction constant, which specifies the response to an external force. The Einstein relation is a special case of a general theorem called the fluctuation-dissipation theorem, which states that spontaneous thermal fluctuations are related to the characteristics of the system's response to an external field. \subsubsection{And the Smoluchowski equation is obtained by plugging the "new" flux into the continuity equation, right?} Exactly right! Using the Einstein relation, we rewrite the flux as:

$$j(x,t)=-\frac{1}{\zeta}\left[k_BT\frac{\partial c(x,t)}{\partial x}+c\frac{\partial U}{\partial x}\right].$$

Substituting this into the continuity equation we get the Smoluchowski

equation:

$$\frac{\partial c(x,t)}{\partial t}=-\frac{\partial j(x,t)}{\partial x}=\frac{\partial}{\partial x}\frac{1}{\zeta}\left[k_BT\frac{\partial c(x,t)}{\partial x}+c\frac{\partial U}{\partial x}\right],$$

which serves as a phenomenological description of diffusion under the influence of an external potential. Although we have derived the above equation for the concentration c(x,t) the same equation will also hold for the probability distribution function Ψ(x,t) that a particular particle is found at position x at time t. This is true since the distinction between c(x,t) and Ψ(x,t) is, for non-interacting particles, only the fact that Ψ(x,t) is normalized. The evolution equation for the probability Ψ(x,t)

is hence written as:

$$\frac{\partial\Psi(x,t)}{\partial t}=\frac{\partial}{\partial x}\frac{1}{\zeta}\left[k_BT\frac{\partial\Psi(x,t)}{\partial x}+\Psi(x,t)\frac{\partial U}{\partial x}\right],$$

which will also be termed the Smoluchowski equation.
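As a numeric aside (not in the original text): combining the Einstein relation derived above with Stokes' friction $\zeta=6\pi\eta R$ for a sphere — a standard hydrodynamic result that is not derived here — gives the familiar diffusion constant of a micron-sized bead in water. All parameter values below are assumed, typical room-temperature numbers:

```python
# Einstein relation D = kBT / zeta with Stokes drag zeta = 6 pi eta R:
# a 1-micron-radius bead in water diffuses at a few tenths of um^2/s.
import numpy as np

kBT = 4.1e-21       # J, room temperature
eta = 1.0e-3        # Pa s, viscosity of water (assumed)
R = 1.0e-6          # m, bead radius (assumed)

zeta = 6 * np.pi * eta * R
D = kBT / zeta
assert 1e-13 < D < 1e-12   # m^2/s
```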

What's Brownian motion?

Brownian motion (named after the Scottish botanist Robert Brown) is the seemingly random movement of particles suspended in a fluid (i.e. a liquid or gas), or the mathematical model used to describe such random movements. Its discovery is traditionally credited to Brown in 1827. It is believed that Brown was studying pollen particles floating in water under the microscope, and observed small particles within the vacuoles of the pollen grains executing a jittery motion. By repeating the experiment with particles of dust, he was able to rule out that the motion was due to the pollen particles being 'alive', although the origin of the motion was yet to be explained. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.5]{PerrinPlot2} \par\end{centering} \caption{Three tracings of the motion of colloidal particles of radius 0.53\textmu{}m, as seen under the microscope, are displayed. Successive positions every 30 seconds are joined by straight line segments (the mesh size is 3.2\textmu{}m). Reproduced from the book of Jean Baptiste Perrin, Les Atomes, Perrin, 1914, p. 115.} \end{figure}

And the mathematical treatment?

\subsubsection{Before we start I have to say that it seems awfully similar to diffusion, what's new?} You are right! These are opposite sides of the same coin. However, the approach we take here is microscopic rather than macroscopic. Instead of starting from a macroscopic quantity, the concentration, we will start from the equation of motion for a single particle in

solution, Newton's second law:

$$m\frac{\mathrm{d}^2x}{\mathrm{d}t^2}=-\zeta\frac{\mathrm{d}x}{\mathrm{d}t}-\frac{\partial U}{\partial x}+g(t).$$

Here the first term on the right hand side is the friction force which is assumed to take a standard form of being opposite in direction and proportional to the velocity. The second term is the force exerted as a consequence of the external potential and the third term is a random force that represents the sum of the forces due to collisions with surrounding particles. Let us now rewrite this equation in the

form:

$$\frac{m}{\zeta}\frac{\mathrm{d}^2x}{\mathrm{d}t^2}+\frac{\mathrm{d}x}{\mathrm{d}t}=-\frac{1}{\zeta}\frac{\partial U}{\partial x}+\frac{g(t)}{\zeta}\equiv-\frac{1}{\zeta}\frac{\partial U}{\partial x}+g(t),$$

where we have redefined $g(t)\equiv g(t)/\zeta$. Our next step is an approximation: treating very small, lightweight particles, we will drop the inertial term $\frac{m}{\zeta}\frac{\mathrm{d}^2x}{\mathrm{d}t^2}$,

assuming it is negligible and obtain:

$$\frac{\mathrm{d}x}{\mathrm{d}t}=-\frac{1}{\zeta}\frac{\partial U}{\partial x}+g(t).$$

We will refer to this equation as the Langevin equation. It describes the motion of a single Brownian particle; solving it, one can (in principle) obtain a trajectory of such a particle.

I don't understand why you throw away the inertial term, please explain!

This can be further explained by the following example. Consider a particle immersed in some solvent, moving under the influence of a constant external force $F=-\frac{\partial U}{\partial x}$. Let us

denote the velocity:

$$v=\frac{\mathrm{d}x}{\mathrm{d}t};$$

the equation of motion for $v$ is given by:

$$\frac{m}{\zeta}\frac{\mathrm{d}v}{\mathrm{d}t}+v=\frac{F}{\zeta}+g(t).$$

For simplicity, let us factor out the random force by taking an ensemble average (to avoid the subtleties of taking the time average) of both sides of the equation, obtaining an equation for the average velocity:

$$\frac{\mathrm{d}\left\langle v\right\rangle}{\mathrm{d}t}+\frac{\zeta}{m}\left\langle v\right\rangle=\frac{F}{m}.$$

Multiplying both sides by $e^{t\zeta/m}$ and integrating from zero to $t$, we are able to solve for $\left\langle v\right\rangle$:

$$\left\langle v\right\rangle=e^{-t\zeta/m}\int_0^t\frac{F}{m}\,e^{t'\zeta/m}\mathrm{d}t'=\frac{F}{\zeta}\left[1-e^{-t\zeta/m}\right].$$

Here we have assumed that the particle was at rest at time zero. We see that the velocity approaches its asymptotic value $F/\zeta$ exponentially fast, with a characteristic relaxation time $\tau=m/\zeta$. Had we dropped the inertial term in the first place,

we would have simply gotten:

$$\left\langle v\right\rangle=\frac{F}{\zeta},$$

i.e. an immediate response to the force. It is now clear that if the relaxation time $\tau=m/\zeta$ is small, dropping the inertial term is a good approximation! In the case of small particles (atoms, molecules, colloidal particles, etc.) immersed in a liquid, the relaxation time $\tau$ is indeed very small, supporting the validity of our approximation. \subsubsection{If these are opposite sides of the same coin, how does the Langevin equation relate to the Smoluchowski equation?} As mentioned earlier, since we don't know the exact time dependence of $g(t)$, we will treat it as a random force. The freedom in choosing the distribution of $g(t)$ is very large; here, however, we will limit ourselves to a model which is equivalent to the Smoluchowski equation. \subsubsection{The Langevin equation gives us trajectories, the Smoluchowski equation gives us a probability distribution for the position; how can they be equivalent?} Excellent question! Examining many trajectories, one can generate the probability distribution for the position. For example, starting a particle from a given origin and following its trajectory up to some time $t$, one can record the position $x(t)$. Repeating the process many, many times will yield many different $x(t)$. Creating a histogram, one can generate an empirical probability distribution for the position at time $t$. One can show that if the probability distribution of $g(t)$ is assumed to be Gaussian and is characterized

by:

$$\left\langle g(t)\right\rangle=0,\qquad\left\langle g(t)g(t')\right\rangle=\frac{2k_BT}{\zeta}\delta(t-t'),$$

then the distribution of $x(t)$ determined by the Langevin equation satisfies the Smoluchowski equation. In other words, if $g(t)$ is a Gaussian random variable with zero mean and correlations $\frac{2k_BT}{\zeta}\delta(t-t')$, so that $g(t)$ and $g(t')$ are independent for $t\neq t'$, then the above statement holds. \subsubsection{I still don't understand, can you demonstrate on a simple special case?} Yes! Consider the Brownian motion of a free particle (no external

potential) for which the Langevin equation reads:

$$\frac{\mathrm{d}x}{\mathrm{d}t}=g(t).$$

If the particle is at x0 at time t=0, its position at time

$t$ is given by:

$$x(t)=x_0+\int_0^tg(t')\,\mathrm{d}t'.$$

From the above we deduce that $x(t)-x_0$ is a linear combination of independent Gaussian random variables. We now recall that a sum of independent Gaussian random variables is itself a Gaussian random variable, and hence the probability distribution of $x(t)$ may be written

as:

$$\Psi(x,t)=\frac{1}{\sqrt{2\pi B}}\exp\left[-\frac{\left(x-A\right)^2}{2B}\right],$$

where:

$$A=\left\langle x(t)\right\rangle,\qquad B=\left\langle\left(x(t)-A\right)^2\right\rangle.$$

The mean is calculated from:

$$A=\left\langle x(t)\right\rangle=x_0+\int_0^t\left\langle g(t')\right\rangle\mathrm{d}t'=x_0+\int_0^t0\,\mathrm{d}t'=x_0.$$

For the variance we have:

$$B=\left\langle\left(\int_0^tg(t')\,\mathrm{d}t'\right)\left(\int_0^tg(t'')\,\mathrm{d}t''\right)\right\rangle=\int_0^t\int_0^t\left\langle g(t')g(t'')\right\rangle\mathrm{d}t'\,\mathrm{d}t'',$$

hence:

$$B=\int_0^t\int_0^t\frac{2k_BT}{\zeta}\delta\left(t'-t''\right)\mathrm{d}t'\,\mathrm{d}t''=\frac{2k_BT}{\zeta}t=2Dt,$$

and thus:

$$\Psi(x,t)=\frac{1}{\sqrt{4\pi Dt}}\exp\left[-\frac{\left(x-x_0\right)^2}{4Dt}\right],$$

which is exactly (check by direct differentiation) the solution for

the Smoluchowski equation:

$$\frac{\partial\Psi(x,t)}{\partial t}=D\frac{\partial^2\Psi(x,t)}{\partial x^2}.$$

In other words, both equations result in the same probability distribution for x(t). An important conclusion is that the mean square displacement of a Brownian particle from the origin is given by 2Dt and is hence linear in time.
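The equivalence can also be tested numerically. Below is a minimal Euler-Maruyama discretization of the free-particle Langevin equation (a sketch, not from the original text; step size and ensemble size are arbitrary choices), verifying the conclusion $\left\langle\left(x(t)-x_0\right)^2\right\rangle=2Dt$:

```python
# Euler-Maruyama simulation of dx/dt = g(t) with <g(t)g(t')> = 2D delta(t-t'):
# each step adds a Gaussian increment of variance 2 D dt.
import numpy as np

rng = np.random.default_rng(0)
D, dt, nsteps, nwalkers = 1.0, 1e-3, 1000, 20000

x = np.zeros(nwalkers)                    # all walkers start at x0 = 0
for _ in range(nsteps):
    x += np.sqrt(2 * D * dt) * rng.standard_normal(nwalkers)

t = nsteps * dt
msd = np.mean(x**2)                       # ensemble mean square displacement
assert abs(msd - 2 * D * t) < 0.1 * 2 * D * t   # within 10% of 2Dt
```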

O.K, I think we covered everything! anything else?

We are almost done but in order to complete our analysis we need to analyze one more problem, the Brownian motion of a harmonic oscillator.

\subsubsection{Why do we have to do this? how come we always have to talk about the harmonic oscillator?}

The harmonic oscillator is a simple system that serves as a prototype for problems we will solve later on. Treating it here will ease things for us later.

I understand, please go on.

Consider a Brownian particle moving under the following potential:

$$U(x)=\frac{1}{2}kx^2.$$

The equation of motion for this particle is given by:

$$\frac{\mathrm{d}x(t)}{\mathrm{d}t}=-\frac{k}{\zeta}x(t)+g(t).$$

In order to get a formal solution for $x(t)$, we multiply both sides by $e^{\frac{k}{\zeta}t}$ and do some algebra:

$$\frac{\mathrm{d}x(t)}{\mathrm{d}t}e^{\frac{k}{\zeta}t}+\frac{k}{\zeta}x(t)e^{\frac{k}{\zeta}t}=\frac{\mathrm{d}}{\mathrm{d}t}\left[x(t)e^{\frac{k}{\zeta}t}\right]=g(t)e^{\frac{k}{\zeta}t}.$$

We now integrate from $-\infty$ to $t$ and get:

$$\left[x(t)e^{\frac{k}{\zeta}t}\right]_{t=-\infty}^{t}=\int_{-\infty}^tg(t')\,e^{\frac{k}{\zeta}t'}\mathrm{d}t'.$$

Assuming the following boundary condition:

x(t)ekζt→0 as t→−∞

We conclude that:

x(t)=tg(t)ekζ(tt)dt

It is also possible to solve under the initial condition x(t=0)=x0,

in that case:

[x(t)ekζt]t=0t=t=0tg(t)ekζtdt

and we have

x(t)=x0ekζt+0tg(t)ekζ(tt)dt

O.K., but g(t) is a random variable and hence so is x(t), which doesn't tell me much... Can we calculate some moments? Start with the case of the particle that has been with us since t=−∞.

First we note that for the mean position we have:

A(t)=<x(t)>=t<g(t)>ekζ(tt)dt=0

and the mean position is hence zero. We now aim at finding an expression for the mean square displacement from the origin <(x(t)−x(0))2>; the variance of x(t) will be obtained as a by-product. We start

with the time correlation function of $x(t)$:

<x(t)x(0)>=tdt10dt2ekζ(t1+t2t)<g(t1)g(t2)>

Recalling that:

<g(t)g(t)>=2Dδ(tt)=2kBTζδ(tt)

we get:

<x(t)x(0)>=0dt1ekζ(2t1t)2kBTζ=[ekζ(2t1t)kBTk]t1=t1=0=kBTkekζt

Here we assumed that t>0 and used the fact that <g(t1)g(t2)>=0 when t1>0 and t2<0. Similarly, if t<0 we get:

<x(t)x(0)>=tdt1ekζ(2t1t)2kBTζ=[ekζ(2t1t)kBTk]t1=t1=t=kBTkekζt

We may hence conclude that:

<x(t)x(0)>=kBTkekζ|t|

Letting t=0 we get

<x(0)x(0)>=<x2>=kBTk

which coincides with the known result obtained from statistical mechanics with the use of the Boltzmann distribution ψeq∝exp(−kx2/2kBT).

We will now show that this is also the variance:

{\displaystyle B=<(x(t)-A(t))^{2}>=<x(t)x(t)>=\overset{t}{\underset{-\infty}{\int}}\overset{t}{\underset{-\infty}{\int}}<g(t')g(t'')>e^{\frac{k}{\zeta}\left(t'+t''-2t\right)}dt'dt''}

and hence:

B=t2kBTζekζ(2t2t)dt=kBTk
The mean square displacement <(x(t)x(0))2> can now be easily

calculated:

<(x(t)x(0))2>=<x(t)2>+<x(0)2>2<x(t)x(0)>=2[<x(t)2><x(t)x(0)>]

and hence:

<(x(t)x(0))2>=2kBTk[1ekζ|t|]

Here, unlike the case of free diffusion, for long times the mean square displacement is bounded by 2kBT/k. The bound is approached exponentially fast with a characteristic relaxation time τ=ζ/k. Considering the opposite limit |t|→0 (very

short times) we have (to first order):

<(x(t)x(0))2>=2kBTk[11+kζ|t|]=2D|t|

Indeed, in this limit the particle has yet to "feel" the harmonic potential and we expect regular diffusion.

I think that since x(t) is a linear sum of Gaussian random variables, and hence Gaussian itself, we can also write an expression for its probability distribution. Am I right?

Yes you are! We already found the mean and variance, and hence the probability distribution for x(t) is:

Ψ(x,t)=12πBexp[(xA)22B]=12πkBTkexp[kx22kBT]

which is exactly the Boltzmann distribution. We could have guessed that this would be so, since we have given the particle an infinite amount of time to equilibrate in the potential well.

Let's proceed to the case of the particle that started at x0!

First we note that for the mean position we have:

A(t)=<x(t)>=x0ekζt+t<g(t)>ekζ(tt)dt=x0ekζt

the mean position depends on time and exponentially decays towards

zero. For the variance we have:

{\displaystyle B=<(x(t)-A(t))^{2}>=\overset{t}{\underset{0}{\int}}\overset{t}{\underset{0}{\int}}<g(t')g(t'')>e^{\frac{k}{\zeta}\left(t'+t''-2t\right)}dt'dt''}

and hence:

B=0t2kBTζekζ(2t2t)dt=kBTk[1e2kζt]

Here again the variance exponentially decays towards the equilibrium variance. The probability distribution is Gaussian again and we have:


Ψ(x,t)=12πBexp[(xA)22B]=12πkBTk[1e2kζt]exp[k(xx0ekζt)22kBT[1e2kζt]]

which for short times t≪τ=ζ/k is the same as

free diffusion:

Ψ(x,t)=14πDtexp[(xx0)24Dt]

and for long times gives the Boltzmann distribution:

Ψ(x,t)=12πkBTkexp[kx22kBT]
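These limits can also be verified with a short simulation. The sketch below (not from the original notes; all numbers are assumed) integrates the Langevin equation of the Brownian harmonic oscillator for an ensemble started at x0 and checks that, after many relaxation times, the mean decays to zero and the variance reaches the Boltzmann value kBT/k:

```python
import numpy as np

# Sketch of the Brownian harmonic oscillator (assumed parameters):
# dx/dt = -(k/zeta) x + g(t), integrated with Euler-Maruyama for an
# ensemble started at x0, to check the approach to Boltzmann statistics.
rng = np.random.default_rng(1)
kBT, k, zeta = 1.0, 2.0, 1.0
D = kBT / zeta
tau = zeta / k                 # relaxation time
dt, n_steps, n_particles = 1e-3, 5000, 20000
x0 = 3.0
x = np.full(n_particles, x0)
for _ in range(n_steps):
    x += -(k / zeta) * x * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_particles)
t = n_steps * dt               # t = 10*tau, long enough to equilibrate
mean, var = x.mean(), x.var()
# predictions: <x(t)> = x0*exp(-t/tau) ~ 0 and <x^2> -> kBT/k
print(mean, var, kBT / k)
```

Running the same loop for a short time t≪τ instead reproduces the free-diffusion behavior discussed above.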


The Bead-Spring (Rouse) Model for Polymer Dynamics

Give me the simplest model for polymer dynamics!

A polymer is a chain of monomers linked to one another by covalent bonds. It is natural to represent a polymer by a set of beads connected to one another by springs, and to model the dynamics of the polymer by the Brownian motion of these beads. Such a model was first proposed by Rouse in the 1950s and has served as the basis for the description of polymer dynamics in dilute solutions.

Figure: A pictorial description of the Rouse model.

But now the beads are connected! How do we take that into account?

Let {R→1,...,R→N} be the positions of the beads. If we assume the beads experience a drag force proportional to their velocity as they move through the solvent, then for each bead we can

write the following Langevin equation:

dR→n(t)dt=−1ζnUR→n+g→n(t)

Here ζn is the friction coefficient of the nth bead and from now on we will assume that the beads are all alike and hence ζ=ζn for every n. The random force g→n(t)

is Gaussian with the following characteristics:

{<gαn(t)>=0n,α=x,y,z<gαn(t)gβm(t)>=2kBTζδnmδαβδ(tt)n,m,α,β=x,y,z

i.e. the random forces acting on different beads and/or in perpendicular directions and/or at different times are independent.

And the potential U? Harmonic as always?

Indeed, having harmonic springs connecting the beads, we will take

it as:

U=k2n=2N(R→n(t)R→n1(t))2

In this model the Langevin equation becomes a linear equation for

R→n(t); for the internal beads we have:

{dR→n(t)dt=kζ(R→n+1(t)+R→n−1(t)−2R→n(t))+g→n(t)n=2,...,N−1

and for the beads at each end we have:

{dR→n(t)dt=kζ(R→2(t)−R→1(t))+g→1(t)n=1dR→n(t)dt=kζ(R→N−1(t)−R→N(t))+g→N(t)n=N

In order to unify the treatment we define two additional hypothetical

beads R→0 and R→N+1 as:

{R→0≡R→1,R→N+1≡R→N

under this definition the Langevin equation for beads n=1,2,...,N

is given by:

{dR→n(t)dt=kζ(R→n+1(t)+R→n−1(t)−2R→n(t))+g→n(t)n=1,...,N
How do we proceed?

In order to proceed it is convenient to assume that the beads are continuously distributed along the polymer chain. We first recall

that in the continuum limit:

R→n+1(t)+R→n1(t)2R→n(t)2R→(n,t)n2

Letting n be a continuous variable, and writing R→n(t)

as R→(n,t), the Langevin equation takes the form:

R→(n,t)t=kζ2R→(n,t)n2+g→(n,t)

The definitions we made regarding the additional hypothetical beads R→0 and R→N+1 now turn into the following boundary

conditions:

{R→(n,t)n|n=0=0R→(n,t)n|n=N=0

I don't know how to solve this one, can we bring it to a form of something we have solved before?

Yes we can, as a first step we define normal coordinates by the following

transformation:

{X→p(t)=1N0Ncos(pπnN)R→(n,t)dnp=0,1,2,...

whose inverse is given by:

R→(n,t)=X→0(t)+2p=1cos(pπnN)X→p(t)

Defining new coordinates (call them what you will) is one thing, but the inverse must be defined such that it takes you back to the original coordinates! Is this truly the correct inverse?

We verify this by direct substitution:

X→p(t)=1N0Ncos(pπnN)[X→0(t)+2p′=1cos(p′πnN)X→p′(t)]dn

The first term gives:

X→0(t)δp,0={X→0(t)pπsin(pπnN)|n=0n=N=0p=1,2,3,..X→0(t)p=0

Using the trigonometric identity:

cos(A)cos(B)=12[cos(AB)+cos(A+B)]

the second term is written as:

{1Np′=1X→p′(t)0N[cos((p+p′)πnN)+cos((p−p′)πnN)]dnp=1,2,3,..p′=12X→p′(t)p′πsin(p′πnN)|n=0n=N=0p=0

which gives:

X→p(t)(1−δp,0)={p′=1X→p′(t)δp,p′=X→pp=1,2,3,..0p=0

We conclude that:

X→p(t)=X→0(t)δp,0+X→p(t)(1−δp,0)=X→p(t)

which proves that the inverse transformation is defined correctly.
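The same verification can be done numerically. The sketch below (all choices assumed: a midpoint discretization of the contour variable n, under which the cosines are exactly orthogonal, and an arbitrary test conformation built from a few modes) applies the transformation and its inverse and confirms that the original conformation is recovered:

```python
import numpy as np

# Sketch: discretize n at the midpoints n_j = j + 1/2, j = 0..N-1 (assumed),
# so that (1/N) sum_j cos(p pi n_j/N) cos(q pi n_j/N) = delta_pq / 2 for p,q >= 1.
N = 64
n = np.arange(N) + 0.5
# a test conformation (one Cartesian component) built from modes p = 0, 3, 7
R = 1.5 + 2.0 * np.cos(3 * np.pi * n / N) - 1.0 * np.cos(7 * np.pi * n / N)
p = np.arange(N)
C = np.cos(np.pi * np.outer(p, n) / N)     # C[p, j] = cos(p pi n_j / N)
X = C @ R / N                              # X_p = (1/N) sum_j cos(p pi n_j/N) R_j
R_back = X[0] + 2.0 * (C[1:].T @ X[1:])    # the inverse transformation
err = np.max(np.abs(R - R_back))
print(X[0], X[3], X[7], err)               # 1.5, 1.0, -0.5, err ~ machine precision
```

Note that the mode-p amplitude comes out as half the cosine coefficient of R, exactly compensating the factor 2 in the inverse transformation.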

How does this new set of coordinates help us?

We will now show that the equations of motion for the normal coordinates X→p(t) are the equations of motion for an infinite set of uncoupled Brownian harmonic oscillators. Since we have already treated the problem of a Brownian harmonic oscillator, this will ease our lives considerably. We start by applying 1N0Ncos(pπnN)dn

to both sides of the Langevin equation for R→(n,t):

1N0Ncos(pπnN)R→(n,t)tdn=1N0Ncos(pπnN)kζ2R→(n,t)n2dn+1N0Ncos(pπnN)g→(n,t)dn

The left hand side term is identified as:

1N0Ncos(pπnN)R→(n,t)tdn=t[1N0Ncos(pπnN)R→(n,t)dn]=dX→p(t)dt

The first term on the right hand side gives:

cos(pπnN)NkζR→(n,t)n|n=0n=N+0NpπN2sin(pπnN)kζR→(n,t)ndn

by integration by parts. Invoking the boundary condition for R→(n,t)n

the first term drops, another round of integration by parts gives:

pπN2sin(pπnN)kζR→(n,t)|n=0n=N−0N(pπ)2N3cos(pπnN)kζR→(n,t)dn

Here the sine kills the first term and the second term is identified

as:

−p2π2kX→p(t)N2ζ=−kpζpX→p(t)

where we have defined:

{kp=2p2π2kNp=0,1,2,3...ζ0=Nζp=0ζp=2Nζp=1,2,3...

We are left with the second term on the right hand side of the original

equation which we deal with by defining the random forces:

{g→p(t)=1N0Ncos(pπnN)g→(n,t)dnp=0,1,2,3...

Which are characterized by zero mean:

{<gαp(t)>=1N0Ncos(pπnN)<gα(n,t)>dn=0p,α=x,y,z

And by:

{<gαp(t)gβp(t)>=2kBTδαβδppδ(tt)ζpp,p,α,β=x,y,z

since:

<gαp(t)gβp(t)>=1N20N0Ncos(pπnN)cos(pπnN)<gα(n,t)gβ(n,t)>dndn

and use of the trigonometric identity:

cos(A)cos(B)=12[cos(AB)+cos(A+B)]

gives:

<gαp(t)gβp(t)>=kBTδαβδ(tt)ζN20N[cos((p+p)πnN)+cos((pp)πnN)]dn

which yields the result after performing the integration. This means that the random forces with different values of p and/or acting in perpendicular directions and/or acting at different times are independent. The equations of motion for the normal coordinates X→p(t)

are given by:

dX→p(t)dt=−kpζpX→p(t)+g→p(t)

and since the random forces are independent of each other, the motions of the X→p's are also independent of each other. These are the equations of motion for an infinite set of uncoupled Brownian harmonic oscillators, each with a force constant kp and friction constant ζp of its own. We have gone from one partial differential equation (which we don't know how to solve directly) for R→(n,t) to an infinite set of uncoupled ordinary differential equations (of a type we are already familiar with) for the normal coordinates X→p(t).

Great, we can now do some analysis!
What can we say about the motion of the center of mass?

Using the results of section 3 we will now calculate two time correlation functions that will help us in the near future. We first note that since k0=0, X→0 is actually performing free diffusion

and hence:

<(X0(t)X0(0))α(X0(t)X0(0))β>=<(X0(t)<X0(t)>)α(X0(t)<X0(t)>)β>=δαβ2kBTNζt

On the other hand, the time correlation function for X→p(t)

(p>0) is the one for a Brownian harmonic oscillator and hence:

<Xpα(t)Xpβ(0)>=δαβδppkBTkpet/τp

where the relaxation time τp is given by:

τp=ζpkp=N2ζp2π2k=τ1p2

A conclusion from the previous result is that:

<(Xpα(t)Xpβ(0))2>=δαβδpp2kBTkp[1et/τp]

We are now ready to calculate some real features of the Brownian motion of a polymer. We start with the motion of the center of mass, the

position of the center of mass:

R→cm(t)=1N0NR→(n,t)dnX→0(t)

is the same as the normal coordinate X→0(t). The mean square

displacement of the center of mass is hence given by:

<(R→cm(t)−R→cm(0))2>=3<(X0(t)−X0(0))α2>=6kBTNζt≡6DGt

where the diffusion constant DG is given by:

DG=kBTNζ

and we note that it is inversely proportional to the number of monomers.
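This prediction is easy to test directly on the discrete bead-spring equations. The sketch below (all parameters assumed; one Cartesian component only, with the hypothetical end beads implemented as free-end boundary terms) integrates an ensemble of Rouse chains and compares the center-of-mass mean square displacement with 2DGt:

```python
import numpy as np

# Sketch (assumed parameters): integrate the discrete Rouse equations for an
# ensemble of chains and check that the center of mass diffuses with
# D_G = kBT/(N*zeta), i.e. <(R_cm(t)-R_cm(0))^2> = 2*D_G*t per component.
rng = np.random.default_rng(2)
kBT, k, zeta = 1.0, 1.0, 1.0
N, n_chains = 10, 5000
dt, n_steps = 1e-2, 1000
R = np.zeros((n_chains, N))               # one Cartesian component per bead
for _ in range(n_steps):
    lap = np.empty_like(R)
    lap[:, 1:-1] = R[:, 2:] + R[:, :-2] - 2.0 * R[:, 1:-1]
    lap[:, 0] = R[:, 1] - R[:, 0]         # end beads: R_0 = R_1, R_{N+1} = R_N
    lap[:, -1] = R[:, -2] - R[:, -1]
    R += (k / zeta) * lap * dt \
         + np.sqrt(2.0 * kBT * dt / zeta) * rng.standard_normal(R.shape)
t = n_steps * dt
msd_cm = np.mean(R.mean(axis=1) ** 2)     # chains start with R_cm = 0
DG = kBT / (N * zeta)
print(msd_cm, 2.0 * DG * t)
```

The spring forces cancel in the sum over beads, so the center of mass feels only the averaged random force; the simulation simply makes this cancellation visible.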

What can we say about rotational motion?

To characterize the rotational motion of the polymer molecule as a whole, let us consider the time correlation function <P→(t)P→(0)> of the end-to-end vector P→. Using normal coordinates, P→(t)

can be written as:

P→(t)≡R→(N,t)−R→(0,t)=2p=1[cos(pπ)−1]X→p(t)

which results in:

P→(t)=−4p:oddintegersX→p(t)

We therefore conclude that:

<P→(t)P→(0)>=16p:oddintegers<X→p(t)X→p(0)>=16p:oddintegers3kBTkpet/τp

This time correlation function is a summation over many terms with different relaxation times. We will now see that for large enough times this infinite sum is well approximated by the first term. We

rewrite the correlation function as:

<P→(t)P→(0)>=24NkBTπ2ket/τ1[1+p:oddintegers>11p2et[1τp1τ1]]

but since:

[1τp1τ1][1τ31τ1]>0

we have:

et[1τp1τ1]et[1τ31τ1]

We also know that:

p:oddintegers>11p2=π281

and hence the second term in the parentheses is bounded by an exponentially

decaying function and moreover it is never larger than 1/4:

p:oddintegers>11p2et[1τp1τ1][π281]et[1τ31τ1]<14

We conclude that the second term may be neglected for large times

and the correlation function is approximated to be:

<P→(t)P→(0)>≃24NkBTπ2ket/τ1

which decays exponentially with a single relaxation time τ1. The relaxation time τ1 is called the rotational relaxation

time; it is also denoted τr and is given by:

τr=N2ζπ2k

What can we say about the motion of one specific bead?

We now turn to study the internal motion of a polymer chain focusing

on the mean square displacement of the n-th monomer:

ϕ(n,t)≡<[R→(n,t)−R→(n,0)]2>

Direct substitution for R→(n,t) and R→(n,0) gives:

ϕ(n,t)=<[X→0(t)−X→0(0)+2p=1cos(pπnN)(X→p(t)−X→p(0))]2>

utilizing the correlation functions we have obtained above all the

cross terms vanish and we are left with:

ϕ(n,t)=6DGt+p=124kBTkpcos(pπnN)2[1etp2/τr]

Let us examine this expression in two limits. For t≫τr:

ϕ(n,t)6DGt+p=112kBTp2π2kcos(pπnN)2

The second term is a constant that doesn't depend on time (it is easily seen that the infinite sum converges) and hence ϕ(n,t) is linear in t in this limit. For large enough times the displacement of the nth monomer is determined by the diffusion constant of the center of mass, as the monomer drifts along with the polymer as a whole. On the other hand, for t≪τr, the motion of the segments reflects the internal motion due to the many modes of vibration. In this limit we may approximate by replacing the summation with an integration and cos(pπnN)2 by its average value 1/2:

ϕ(n,t)=6kBTNζt+p=06NkBTp2π2k[1etp2/τr]dp=6kBTNζt+I

Doing the integral by parts we get:

I=p=06NkBTπ2k[etp2/τr1]d1p=6NkBTpπ2k[etp2/τr1]|0+p=012tNkBTτrπ2ketp2/τrdp

The first term vanishes (basic calculus) and the second term is transformed

into a Gaussian integral which gives:

I=6tNkBTτrπ2kπτrt=6NkBTτrπ3/2kt

We can now write ϕ(n,t) as:

ϕ(n,t)=6NkBTπ2k[tτr+πtτr]tτr6NkBTπ3/2ktτr

and observe that in this limit the mean square displacement of the nth monomer increases like √t, i.e. in a sub-diffusive manner.
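The crossover between the two limits can be verified by plain arithmetic. The sketch below (unit parameters assumed) evaluates the mode sum for φ(n,t) directly, deep in the t≪τr regime, and compares it with the square-root asymptotic form derived above:

```python
import numpy as np

# Sketch: evaluate phi(n,t) = 6*DG*t + sum_p [6*N*kBT/(p^2 pi^2 k)]*(1 - e^{-t p^2/tau_r})
# (with cos^2 already replaced by its average 1/2, as in the text) and compare with
# the short-time form 6*DG*t + 6*N*kBT/(pi^{3/2} k)*sqrt(t/tau_r). Units assumed.
kBT = k = zeta = 1.0
N = 1000
tau_r = N**2 * zeta / (np.pi**2 * k)
DG = kBT / (N * zeta)
t = 1e-3 * tau_r                          # deep in the t << tau_r regime
p = np.arange(1, 200001, dtype=float)
phi = 6.0 * DG * t + np.sum(6.0 * N * kBT / (p**2 * np.pi**2 * k)
                            * (1.0 - np.exp(-t * p**2 / tau_r)))
phi_asym = 6.0 * DG * t + 6.0 * N * kBT / (np.pi**1.5 * k) * np.sqrt(t / tau_r)
print(phi, phi_asym)                      # should agree to about one percent
```

The residual difference is the sum-versus-integral error, which shrinks as t/τr decreases.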

How does the Rouse model stand in comparison to experimental results?

Unfortunately, not as well as one might have hoped. The Rouse model may seem to be a very natural way to describe the Brownian motion of a polymer chain, but the conclusions drawn from it do not agree with the experimental results. As we saw above, for the

Rouse model:

{τr=N2ζπ2k∝N2∝M2DG=kBTNζ∝1N∝1M

where M is the molecular weight of the polymer. Experimentally

however, the following dependencies were measured:

{τr∝M3νDG∝M−ν

Here, the exponent ν is the one used to express the dependence of the radius of gyration RG on the molecular weight:

RGMν

The value of ν is determined by the nature of the interaction between the solvent and the polymer: in a good solvent ν≃3/5, while in the Θ state ν=1/2.

The reason for the discrepancy between experiments and the Rouse model is that in the latter we have assumed that the average velocity of a particular bead is determined only by the external force acting on it, and is independent of the motion of the other beads. In reality, however, the motion of one bead is influenced by the motion of the surrounding beads through the medium of the solvent. For example, if one bead moves, the solvent surrounding it will also move, and as a result other beads will be dragged along. This type of interaction, transmitted by the motion of the solvent, is called the hydrodynamic interaction. We will discuss a model taking this interaction into account in the next section.

Figure: The hydrodynamic interaction. If bead n moves under the action of the force F→n, a flow is created in the surrounding fluid, which causes the other beads to move.

The Zimm Model for Polymer Dynamics

\subsection{So we need a model that will take into account hydrodynamics interactions, but how do we do that?} In the Rouse model we have assumed the average velocity of a particular bead is determined only by the external force acting on it, and is independent of the motion of the other beads. This assumption led

to the following Langevin equation:

v→n(t)=dR→n(t)dt=−1ζnUR→n+g→n(t)=F→nζn+g→n(t)

In order to take into account hydrodynamic interaction we can generalize this assumption. Denoting the forces acting on the beads by F→n(n=1,2,..) , we assume that there is a linear relationship between these forces and the average velocity <v→n(t)> and so the following

holds:

<v→n(t)>=mHnmF→m

Here Hnm is a 3×3 matrix, the nm component of H. It is now our task to calculate Hnm and write the appropriate Langevin equation. This can be done using hydrodynamics and some approximations;

the result of the calculation gives:

Hnm={I/ζn=m[r^nmr^nm+I]8πηrnmn≠m

where η is the viscosity of the liquid, I is the 3×3 identity matrix, r→nm≡R→n−R→m and r^nm is a unit vector in the direction of r→nm. The appropriate Langevin equation is given by (taking the same potential

U as in the Rouse model):

dR→n(t)dt=kmHnm(R→m+1(t)+R→m1(t)2R→m(t))+g→n(t)

and the random force g→n(t) is Gaussian with the following

characteristics:

{<gαn(t)>=0n,α=x,y,z<gαn(t)gβm(t)>=2kBT(Hnm)αβδ(tt)n,m,α,β=x,y,z

The Langevin equation we got seems complicated, it is not even linear in R→n(t)! I guess there is an approximation coming my way, am I right?

Since Hnm depends on R→n(t), the Langevin equation we got is not linear in R→n(t) and hence tremendously hard to solve. Zimm's idea was to replace Hnm (the factor causing the non-linearity) by its equilibrium average value <Hnm>eq; this is called the preaveraging approximation. In general the equilibrium value of Hnm depends on the interactions between the solvent and the polymer and hence will have a different value in good/medium/bad solvents. Here we will concentrate on a special state of a polymer in solution, mentioned earlier, called the Θ state. For a polymer in Θ conditions, the vector r→nm is characterized by a Gaussian distribution with zero mean and a variance of |n−m|b2. Here b is the distance between two adjacent monomers and it follows

that the probability density function for r→nm is:

P(r→nm)=(32π|n−m|b2)3/2exp(−3r→nm22|n−m|b2)

Since Hnm is a function only of r→nm we can calculate

<Hnm>eq (for n≠m) as follows:

<Hnm>eq=d3r→P(r→)Hnm=dr→(32π|n−m|b2)3/2exp(−3r22|n−m|b2)[r^r^+I]8πηr

Noting that in spherical coordinates:

{\displaystyle \hat{r}\hat{r}=\left[\begin{array}{ccc}\sin^{2}\theta\cos^{2}\phi & \sin^{2}\theta\cos\phi\sin\phi & \cos\theta\sin\theta\cos\phi\\ \sin^{2}\theta\cos\phi\sin\phi & \sin^{2}\theta\sin^{2}\phi & \cos\theta\sin\theta\sin\phi\\ \cos\theta\sin\theta\cos\phi & \cos\theta\sin\theta\sin\phi & \cos^{2}\theta\end{array}\right]}

We have:

{\displaystyle \overset{\pi}{\underset{0}{\int}}\sin\theta\, d\theta\overset{2\pi}{\underset{0}{\int}}d\phi\,\hat{r}\hat{r}=\left[\begin{array}{ccc}\frac{4\pi}{3} & 0 & 0\\ 0 & \frac{4\pi}{3} & 0\\ 0 & 0 & \frac{4\pi}{3}\end{array}\right]=\frac{4\pi}{3}I}

and hence:

<Hnm>eq=d3r→P(r→)Hnm=04πr2dr(32π|n−m|b2)3/2exp(−3r22|n−m|b2)43I8πηr

The integral is calculated in a straightforward way; defining t=r2

we have:

0I3η(32π|nm|b2)3/2exp(3t2|nm|b2)dt=I3η(32π|nm|b2)3/22|nm|b23

and hence:

<Hnm>eq=h(nm)I

where we have defined:

h(n−m)≡1ηb(16π3|n−m|)1/2

Substituting this result into our Langevin equation and re-writing

it in the continuum limit we get:

R→(n,t)t=k0Ndmh(nm)2R→(m,t)2m+g→(n,t)

where the random force g→(n,t) is Gaussian with the following

characteristics:

{<gα(n,t)>=0n,α=x,y,z<gα(n,t)gβ(m,t)>=2kBTh(nm)δ(tt)n,m,α,β=x,y,z

Note that h(n−m) depends only on |n−m|, and we have indeed linearized our equation as promised.
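The preaveraging result can be checked by Monte Carlo. The sketch below (assumed parameters) samples r→nm from the Gaussian distribution of the Θ state and averages the mobility tensor directly; by isotropy the average must come out proportional to the identity, with the coefficient h(n−m) derived above:

```python
import numpy as np

# Sketch (assumed parameters): sample r_nm from the Theta-state Gaussian
# (each Cartesian component has variance |n-m| b^2 / 3) and average the tensor
# (r^ r^ + I)/(8 pi eta r). The result should equal h(n-m) * I with
# h(n-m) = (1/(eta*b)) * (1/(6 pi^3 |n-m|))^{1/2}.
rng = np.random.default_rng(3)
eta, b, nm = 1.0, 1.0, 5          # nm plays the role of |n - m|
sigma = np.sqrt(nm * b**2 / 3.0)
r = sigma * rng.standard_normal((200000, 3))
rlen = np.linalg.norm(r, axis=1)
rhat = r / rlen[:, None]
w = 1.0 / (8.0 * np.pi * eta * rlen)
H = (rhat[:, :, None] * rhat[:, None, :] * w[:, None, None]).mean(axis=0) \
    + np.eye(3) * w.mean()
h_mc = np.trace(H) / 3.0
h_exact = (1.0 / (eta * b)) * np.sqrt(1.0 / (6.0 * np.pi**3 * nm))
print(h_mc, h_exact)              # should agree to well under a percent
```

The off-diagonal entries of the averaged tensor vanish within statistical error, confirming that the preaveraged mobility is isotropic.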

Seems familiar, shall we try normal coordinates again?

Yes, we will once again use the normal coordinates defined for the Rouse model. We start by applying 1N0Ncos(pπnN)dn

to both sides of the Langevin equation for R→(n,t):

1N0Ncos(pπnN)R→(n,t)tdn=1N0Ncos(pπnN)[k0Ndmh(n−m)2R→(m,t)2m]dn+1N0Ncos(pπnN)g→(n,t)dn

The left hand side term is identified as:

1N0Ncos(pπnN)R→(n,t)tdn=t[1N0Ncos(pπnN)R→(n,t)dn]=dX→p(t)dt

The first term on the right hand side gives:

1N0Ncos(pπnN)[k0Ndmh(n−m)2[X→0(t)+2q=1cos(qπmN)X→q(t)]2m]dn

which yields:

1N0Ncos(pπnN)[2k0Ndmh(n−m)q=1(qπN)2cos(qπmN)X→q(t)]dn

and with some additional algebra we get:

q=12kq2π2NX→q(t)[1N20N0Ncos(pπnN)cos(qπmN)h(n−m)dmdn]

Defining:

{kq=2q2π2kNq=0,1,2,3...hpq=1N20N0Ncos(pπnN)cos(qπmN)h(n−m)dmdnp,q=0,1,2,3...

this term can be written as:

q=1hpqkqX→q(t)
But this doesn't decouple the equations! Another approximation?

Indeed, we will approximate by neglecting all the off-diagonal terms. The reasoning goes as follows: we first note that setting m−n=l

and noting that h(n−m)=h(m−n) we can write hpq as:

hpq=1N20NdnnNncos(pπnN)cos(qπ(n+l)N)h(l)dl

we now use a trigonometric identity:

cos(A+B)=cos(A)cos(B)sin(A)sin(B)

to get:

hpq=1N20Ndn[cos(pπnN)cos(qπnN)nNncos(qπlN)h(l)dlcos(pπnN)sin(qπnN)nNnsin(qπlN)h(l)dl]

For large q, the two inner integrals rapidly approach the following

integrals:

{cos(qπlN)h(l)dl=N3π3qηbsin(qπlN)h(l)dl=0

With this substitution hpq becomes:

hpq=1N20Ncos(pπnN)cos(qπnN)N3π3qηbdn

and after using the trigonometric identity:

cos(A)cos(B)=12[cos(AB)+cos(A+B)]


{hpq=12N3/23π3qηb0Ncos(πn(pq)N)+cos(πn(p+q)N)dn=δpq12Nπ3pηbp>0

If q is small our approximation is still fair, but for the case q=0 it is invalid and this case deserves special attention. The

careful reader may have noticed that the sum:

q=1hpqkqX→q(t)

starts from q=1 and it may seem that a discussion regarding q=0 is pointless. We will nevertheless require this case (q=0) later

on, and so we calculate it directly:

hp0=1N20Ncos(pπnN)dn0N1ηb(16π3|nm|)1/2dm

The inner integral gives:

0n1ηb(16π3(nm))1/2dm+nN1ηb(16π3(mn))1/2dm

which results in:

2ηb(nm6π3)1/2|0n+2ηb(mn6π3)1/2|nN=2ηb(n6π3)1/2+2ηb(Nn6π3)1/2

Substituting this into the expression for hp0 gives:

hp0=1N20Ncos(pπnN)2ηb(n6π3)1/2dn+1N20Ncos(pπ−pπtN)2ηb(t6π3)1/2dt

where we have changed variables to t=N−n. It is now easy to see

that for odd p: hp0=0, while for even p we get:

hp0=4ηbN2(16π3)1/20Nncos(pπnN)dn for even p

For p=0 this gives:

h00=83ηb(16Nπ3)1/2

while for even p>0, the integral may be re-expressed in terms of the Fresnel integral S(x)=0xsin(t2)dt

to give:

0Nncos(pπnN)=2N3/22πp3/2S(2p)

and we see that:

|2N3/22πp3/2S(2p)|N3/2πp

concluding that for p>0:

{|hp0|≤4ηbπp(16Nπ3)1/2evenp>0hp0=0oddp>0

We see that for large N, hp0 is small and also decays with p. We will hence neglect hp0 for p>0 and keep only the diagonal term h00.

O.K, what about the random forces?

We are left with the second term on the right hand side of the original

equation which we deal with by defining the random forces:

{g→p(t)=1N0Ncos(pπnN)g→(n,t)dnp=0,1,2,3...

Which are characterized by zero mean:

{<gαp(t)>=1N0Ncos(pπnN)<gα(n,t)>dn=0p,α=x,y,z

And by:

{<gαp(t)gβp(t)>=2kBTδαβδ(tt)hppp,p,α,β=x,y,z

since:

<gαp(t)gβp(t)>=1N20N0Ncos(pπnN)cos(pπnN)<gα(n,t)gβ(n,t)>dndn

gives:

<gαp(t)gβp(t)>=2kBTδαβδ(tt)N20N0Ncos(pπnN)cos(pπnN)h(nn)dndn

which yields the result by the definition of hpp′. This means that the random forces with different values of p (recall that hpp′∝δpp′) and/or acting in perpendicular directions and/or acting at different times are independent.

That was a bit long, could you please sum up the main result?

The main result is that we have found the equations of motion for the normal coordinates X→p(t) and that they are given by:

dX→p(t)dt=−kpζpX→p(t)+g→p(t)

with

{kp=2p2π2kNp=0,1,2,3...ζ0=1/h00=3ηb8(6Nπ3)1/2p=0ζp=1/hpp=12Nπ3pηbp=1,2,3...

and since the random forces are independent of each other, the motions of the X→p's are also independent of each other. These are the equations of motion for an infinite set of uncoupled Brownian harmonic oscillators, each with a force constant kp and friction coefficient ζp of its own. We have once again gone from one partial differential equation (which we don't know how to solve directly) for R→(n,t) to an infinite set of uncoupled ordinary differential equations (of a type we are already familiar with) for the normal coordinates X→p(t).

Great! This is very similar to what we got for the Rouse model, are we going to repeat the same type of analysis?

Since the equation for the normal modes is the same as that for the Rouse model, we can immediately write the expressions for the diffusion constant of the center of mass and the rotational relaxation time

using the results of the previous section:

{DG=kBTζ0=8kBT3ηb(6Nπ3)1/2τr=ζ1k1=3πηbkN3/2
How does the Zimm model stand in comparison to experimental results?

As can be seen, DG and τr depend on the molecular weight M as follows (recall that M∝N):

{DG∝M−1/2τr∝M3/2

The dependence of these quantities on the molecular weight agrees with experiments performed on solutions in the Θ state. Furthermore,

the relaxation times of the normal modes are:

τp=ζpkp=τrp3/2

and hence for short times (t≪τr) the average mean square displacement of the n-th monomer is given by:

ϕ(n,t)2Nb2π201exp(tp3/2/τr)p2dp

integration by parts gives:

ϕ(n,t)2Nb2π2[1+exp(tp3/2/τr)p]|0+3Ntb2τrπ20exp(tp3/2/τr)pdp

The first term drops (elementary calculus), the second term is treated

by a change of variable x=tp3/2/τr:

ϕ(n,t)3Ntb2τrπ202τrexp(x)3t[τrxt]2/3dx=2Nb2Γ(1/3)π2[tτr]2/3

where we have identified the gamma function Γ(x)=0tx1etdt. The relation ϕ(n,t)∝t2/3 has been confirmed by analysis of the Brownian motion of DNA molecules.

Figure: The average mean square displacement of the terminal segment of a DNA molecule (solid line), observed by fluorescence microscopy. The dashed line is calculated from the theory of Zimm. The graph is plotted on a log-log scale; on this type of plot the slope of a line corresponds to the exponent α in the relation ϕ(n,t)∝tα. The fact that the lines are parallel supports the prediction α=2/3. Taken from: J. Polym. Sci., 30, 779, Fig. 5.
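The integral behind the t^{2/3} law can itself be confirmed by direct numerical quadrature. The sketch below (the value of a, which plays the role of t/τr, is assumed) checks that ∫0∞(1−exp(−a p^{3/2}))/p² dp = Γ(1/3)·a^{2/3}:

```python
import numpy as np
from math import gamma

# Sketch: numerically confirm the integral used above,
#   int_0^inf (1 - exp(-a p^{3/2})) / p^2 dp = Gamma(1/3) * a^{2/3},
# which is what produces the t^{2/3} law. 'a' plays the role of t/tau_r.
a = 0.01
p = np.logspace(-8, 6, 200001)            # log-spaced grid for the improper integral
f = (1.0 - np.exp(-a * p**1.5)) / p**2
integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(p))   # trapezoid rule
exact = gamma(1.0 / 3.0) * a ** (2.0 / 3.0)
print(integral, exact)                    # should agree to a few parts in 10^4
```

The integrand behaves like a/√p near the origin and like 1/p² at infinity, so the truncated log-spaced grid captures essentially all of the integral.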

I Have More Questions, Where can I get Answers?

1. Introduction to Polymer Physics, Chapters 4 & 5, M. Doi, translated by H. See, Clarendon Press, 1995.
2. The Theory of Polymer Dynamics, Chapters 3–5, M. Doi and S. F. Edwards, Clarendon Press, 1988.
3. Polymer Physics, Chapter 8, Michael Rubinstein and Ralph H. Colby, Oxford University Press, 2003.
