
Physics 125c Course Notes
Density Matrix Formalism: Solutions to Problems
040520 Frank Porter

1 Exercises

1. Show that any linear operator in an n-dimensional Euclidean space may be expressed as an n-term dyad. Show that this may be extended to an infinite-dimensional Euclidean space.

Solution: Consider operator A in n-dimensional Euclidean space, which may be expressed as a matrix in a given basis:

A = Σ_{i,j=1}^{n} aij |ei〉〈ej| = Σ_i |ei〉 ( Σ_j aij 〈ej| ). (1)

This is in the form of an n-term dyad A = Σ_{i=1}^{n} |αi〉〈βi|, with |αi〉 = |ei〉 and 〈βi| = Σ_{j=1}^{n} aij 〈ej|.

An arbitrary vector in an infinite-dimensional Euclidean space may be expanded in a countable basis according to:

|α〉 = Σ_{i=1}^{∞} αi |ei〉. (2)

Another way to say this is that the basis is “complete”, with “completeness relation”:

I = Σ_i |ei〉〈ei|. (3)

An arbitrary linear operator can thus be defined in terms of its actions on the basis vectors:

A = IAI = Σ_{i,j} |ei〉〈ei| A |ej〉〈ej|. (4)

The remainder proceeds as for the finite-dimensional case.


2. Suppose we have a system with total angular momentum 1. Pick a basis corresponding to the three eigenvectors of the z-component of angular momentum, Jz, with eigenvalues +1, 0, −1, respectively. We are given an ensemble described by density matrix:

ρ = (1/4) [ 2  1  1
            1  1  0
            1  0  1 ].

(a) Is ρ a permissible density matrix? Give your reasoning. For the remainder of this problem, assume that it is permissible. Does it describe a pure or mixed state? Give your reasoning.

Solution: Clearly ρ is hermitian. It is also trace one. This is almost sufficient for ρ to be a valid density matrix. We can see this by noting that, given a hermitian matrix, we can make a transformation of basis to one in which ρ is diagonal. Such a transformation preserves the trace. In this diagonal basis, ρ is of the form:

ρ = a|e1〉〈e1| + b|e2〉〈e2| + c|e3〉〈e3|,
where a, b, c are real numbers such that a + b + c = 1. This is clearly in the form of a density operator. Another way of arguing this is to consider the n-term dyad representation for a hermitian matrix.

However, we must also have that ρ is positive, in the sense that a, b, c cannot be negative. Otherwise, we would interpret some probabilities as negative. There are various ways to check this. For example, we can check that the expectation value of ρ with respect to any state is not negative. Thus, let an arbitrary state be |ψ〉 = (α, β, γ). Then

〈ψ|ρ|ψ〉 = (1/4) [ 2|α|² + |β|² + |γ|² + 2Re(α*β) + 2Re(α*γ) ]. (5)

This quantity can never be negative, by virtue of the relation:

|x|² + |y|² + 2Re(x*y) = |x + y|² ≥ 0. (6)

Therefore ρ is a valid density operator.

To determine whether ρ is a pure or mixed state, we consider:

Tr(ρ²) = (1/16)(6 + 2 + 2) = 5/8.


This is not equal to one, so ρ is a mixed state. Alternatively, one can show explicitly that ρ² ≠ ρ.
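As a quick numerical cross-check of these statements (a minimal sketch, assuming Python with numpy; the variable names are illustrative):

import numpy as np

# Density matrix of problem 2 in the Jz eigenbasis (+1, 0, -1).
rho = np.array([[2, 1, 1],
                [1, 1, 0],
                [1, 0, 1]]) / 4.0

print(np.allclose(rho, rho.conj().T))      # hermitian: True
print(np.isclose(np.trace(rho), 1.0))      # unit trace: True
print(np.linalg.eigvalsh(rho))             # all eigenvalues nonnegative
print(np.trace(rho @ rho))                 # 0.625 = 5/8 < 1, so a mixed state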

(b) Given the ensemble described by ρ, what is the average value of Jz?

Solution: We are working in a diagonal basis for Jz:

Jz = [ 1  0   0
       0  0   0
       0  0  −1 ].

The average value of Jz is:

〈Jz〉 = Tr(ρJz) = (1/4)(2 + 0 − 1) = 1/4.

(c) What is the spread (standard deviation) in measured values of Jz?

Answer: We’ll need the average value of Jz² for this:

〈Jz²〉 = Tr(ρJz²) = (1/4)(2 + 0 + 1) = 3/4.

Then:

ΔJz = √( 〈Jz²〉 − 〈Jz〉² ) = √11 / 4.
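These traces are easily confirmed numerically; the following sketch (same assumptions as above) reproduces 〈Jz〉 = 1/4 and ΔJz = √11/4 ≈ 0.829:

import numpy as np

rho = np.array([[2, 1, 1], [1, 1, 0], [1, 0, 1]]) / 4.0
Jz = np.diag([1.0, 0.0, -1.0])

mean_Jz = np.trace(rho @ Jz).real          # 0.25
mean_Jz2 = np.trace(rho @ Jz @ Jz).real    # 0.75
spread = np.sqrt(mean_Jz2 - mean_Jz**2)    # sqrt(11)/4 ~ 0.829
print(mean_Jz, mean_Jz2, spread)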

3. Prove the first theorem in section ??.

Solution: The theorem we wish to prove is:

Theorem: Let P1, P2 be two primitive Hermitian idempotents (i.e., rays, or pure states, with P† = P, P² = P, and Tr P = 1). Then:

1 ≥ Tr(P1P2) ≥ 0. (7)

If Tr(P1P2) = 1, then P2 = P1. If Tr(P1P2) = 0, then P1P2 = 0 (vectors in ray 1 are orthogonal to vectors in ray 2).

First, suppose P1 = P2. Then Tr(P1P2) = Tr(P1²) = Tr(P1) = 1. If P1P2 = 0, then Tr(P1P2) = Tr(0) = 0.


More generally, expand P1 and P2 with respect to an orthonormal basis {|ei〉}:

P1 = Σ_{i,j} aij |ei〉〈ej| (8)
P2 = Σ_{i,j} bij |ei〉〈ej| (9)
P1P2 = Σ_{i,j,k} aik bkj |ei〉〈ej|. (10)

We know from the discussion on pages 11 and 12 in the Density Matrix note that we can work in a basis in which aij = δij δi1. In this basis,

P1P2 = |e1〉 Σ_i b1i 〈ei|. (11)

The trace, which is invariant under choice of basis, is

Tr(P1P2) = b11. (12)

We are almost there, but we still need to show that 0 ≤ b11 ≤ 1. A simple way to see this is to notice that P2 is the outer product of a unit vector with itself, P2 = |ψ〉〈ψ|, hence b11 = |〈e1|ψ〉|² ≥ 0, with b11 = 0 if and only if P1P2 = 0 (since b1i = bi1 = 0 for all i if b11 = 0). Likewise b11 = |〈e1|ψ〉|² ≤ 1, with b11 < 1 if P1 ≠ P2, and b11 = 1 only if |ψ〉 is proportional to |e1〉, i.e., P2 = P1.
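The bounds may also be spot-checked numerically with randomly generated rays; a small sketch (the helper function below is illustrative, not part of the notes):

import numpy as np

def random_ray(n, rng):
    """Return a rank-one projector |psi><psi| for a random unit vector."""
    psi = rng.normal(size=n) + 1j * rng.normal(size=n)
    psi /= np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

rng = np.random.default_rng(0)
for _ in range(1000):
    P1, P2 = random_ray(4, rng), random_ray(4, rng)
    t = np.trace(P1 @ P2).real
    assert -1e-12 <= t <= 1 + 1e-12
print("0 <= Tr(P1 P2) <= 1 held for all samples")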

4. Prove the von Neumann mixing theorem.

Solution: The mixing theorem states that, given two distinct ensembles ρ1 ≠ ρ2, a number 0 < θ < 1, and a mixed ensemble ρ = θρ1 + (1 − θ)ρ2, then

s(ρ) > θs(ρ1) + (1 − θ)s(ρ2). (13)

Let us begin by proving the following:

Lemma: Let
x = Σ_{i=1}^{n} λi xi, (14)
where 1 > xi > 0, 0 < λi < 1 for all i, and Σ_{i=1}^{n} λi = 1. Then
−x ln x > −Σ_{i=1}^{n} λi xi ln xi. (15)


Proof: This follows because −x ln x is a concave function of x. Its second derivative is
d²/dx² (−x ln x) = d/dx (−ln x − 1) = −1/x. (16)
For 1 > x > 0 this is always negative; −x ln x is a concave function for 1 > x > 0. Hence, any point on a straight line between two points on the curve of this function lies below the curve. The lemma is for a linear combination of n points on the curve of −x ln x. Here, x is a weighted average of points xi. The function −x ln x evaluated at this weighted-average point is to be compared with the weighted average of the values of the function at the n points x1, x2, . . . , xn. Again, the function evaluated at the linear combination is a point on the curve, and the weighted average of the function over the n points must lie below that point on the curve. The region of possible values of the weighted average of the function is the polygon joining neighboring points on the curve, and the first and last points. See Fig. 1.

Now we must see how our present problem can be put in the form where this lemma may be applied. Consider the spectral decompositions of ρ1, ρ2:

ρ1 = Σ_i ai Pi = Σ_i ai |ei〉〈ei| (17)
ρ2 = Σ_i bi Qi = Σ_i bi |fi〉〈fi|, (18)

where the decompositions have been “padded” with complete sets of one-dimensional projections. That is, some of the ai’s and bi’s may be zero. The idea is that the sets {|ei〉} and {|fi〉} form complete orthonormal bases. Note that we cannot have Pi = Qi in general.

Then we have:

ρ = θρ1 + (1 − θ)ρ2 (19)
  = Σ_i [ θ ai |ei〉〈ei| + (1 − θ) bi |fi〉〈fi| ] (20)
  = Σ_i ci |gi〉〈gi|, (21)


Figure 1: Illustration showing that the weighted average of a concave function is smaller than the function evaluated at the weighted-average point. The allowed values of the ordered pairs (〈x〉, 〈f(x)〉) lie in the polygon.

where we have defined another complete orthonormal basis, {|gi〉}, corresponding to a basis in which ρ is diagonal.

We may expand the {|ei〉} and {|fi〉} bases in terms of the {|gi〉} basis. For example, let

|ei〉 = Σ_j Aij |gj〉, (22)

where A = {Aij} is a unitary matrix. The inverse transformation is

|gi〉 = Σ_j A*ji |ej〉. (23)

Also,

〈ei| = ( Σ_j Aij |gj〉 )† = Σ_j A*ij 〈gj|, (24)


and hence,
|ei〉〈ei| = Σ_j Σ_k Aij |gj〉〈gk| A*ik. (25)

Similarly, we define matrix B such that

|fi〉〈fi| = Σ_j Σ_k Bij |gj〉〈gk| B*ik. (26)

Substituting Eqns. 25 and 26 into Eqn. 20:

ρ = Σ_i Σ_{j,k} [ θ ai Aij A*ik + (1 − θ) bi Bij B*ik ] |gj〉〈gk|. (27)

Thus, the numbers cℓ are:

cℓ = 〈gℓ|ρ|gℓ〉
   = Σ_i Σ_{j,k} [ θ ai Aij A*ik + (1 − θ) bi Bij B*ik ] δℓj δℓk (28)
   = Σ_i [ θ |Aiℓ|² ai + (1 − θ) |Biℓ|² bi ]. (29)

The entropy for density matrix ρ is:

s(ρ) = −Σ_i ci ln ci (30)
     = −Σ_i { Σ_j [ θ|Aji|² aj + (1 − θ)|Bji|² bj ] } ln { Σ_j [ θ|Aji|² aj + (1 − θ)|Bji|² bj ] }.

Note that ci is of the form

ci = Σ_j ( λ^(a)_ij aj + λ^(b)_ij bj ), (31)

where

λ^(a)_ij ≡ θ|Aji|² (32)
λ^(b)_ij ≡ (1 − θ)|Bji|². (33)


Furthermore,
Σ_j ( λ^(a)_ij + λ^(b)_ij ) = 1. (34)

Thus, according to the lemma (some of the ci’s might be zero; there is an equality, 0 = 0, in such cases),

−ci ln ci > −Σ_j [ λ^(a)_ij aj ln aj + λ^(b)_ij bj ln bj ]. (35)

Finally, we sum the above inequality over i:

s(ρ) = −Σ_i ci ln ci
     > −Σ_i Σ_j [ λ^(a)_ij aj ln aj + λ^(b)_ij bj ln bj ]
     = −Σ_j [ θ aj ln aj + (1 − θ) bj ln bj ] (36)
     = θ s(ρ1) + (1 − θ) s(ρ2), (37)
where the last two steps use Σ_i λ^(a)_ij = θ and Σ_i λ^(b)_ij = 1 − θ, which follow from the unitarity of A and B.

This completes the proof.
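The theorem can be illustrated (though of course not proved) numerically with randomly generated density matrices; a minimal sketch, assuming numpy:

import numpy as np

def random_density_matrix(n, rng):
    """Random positive, unit-trace matrix: A A^dagger, normalized."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

def entropy(rho):
    """von Neumann entropy s = -Tr(rho ln rho), dropping zero eigenvalues."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

rng = np.random.default_rng(1)
rho1, rho2 = random_density_matrix(3, rng), random_density_matrix(3, rng)
for theta in (0.1, 0.4, 0.7, 0.9):
    mix = theta * rho1 + (1 - theta) * rho2
    lhs = entropy(mix)
    rhs = theta * entropy(rho1) + (1 - theta) * entropy(rho2)
    assert lhs > rhs
print("s(mixture) exceeded the mixed entropies in every case")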

5. Show that an arbitrary linear operator on a product space H = H1 ⊗ H2 may be expressed as a linear combination of operators of the form Z = X ⊗ Y.

Solution: We are given an arbitrary linear operator A on H = H1 ⊗ H2. We wish to show that there exists a decomposition of the form:

A = Σ_i Ai Zi = Σ_i Ai Xi ⊗ Yi, (38)

where Xi are operators on H1 and Yi are operators on H2.

Let {fi : i = 1, 2, . . .} be an orthonormal basis in H1 and {gi : i = 1, 2, . . .} be an orthonormal basis in H2. Then we may obtain an orthonormal basis for H composed of vectors of the form:

eij = fi ⊗ gj, i = 1, 2, . . . ; j = 1, 2, . . . . (39)

It is readily checked that {eij} is, in fact, an orthonormal basis for H.


Expand A with respect to basis {eij}:

A = Σ_{i,j,m,n} A_{ij,mn} |eij〉〈emn| (40)
  = Σ_{i,j,m,n} A_{ij,mn} |fi〉 ⊗ |gj〉 〈fm| ⊗ 〈gn| (41)
  = Σ_{i,j,m,n} A_{ij,mn} |fi〉〈fm| ⊗ |gj〉〈gn| (42)
  = Σ_k Ak Xk ⊗ Yk, (43)

where k is a relabeling for i, j, m, n.

The only step above which requires further comment is setting:

|fi〉 ⊗ |gj〉〈fm| ⊗ 〈gn| = |fi〉〈fm| ⊗ |gj〉〈gn|. (44)

One way to check this is as follows. Pick our bases to be in the form:

(fi)k = δik (45)
(gj)ℓ = δjℓ. (46)

Then
( |fi〉 ⊗ |gj〉〈fm| ⊗ 〈gn| )_{kℓ,pq} = δik δjℓ δmp δnq, (47)
and
( |fi〉〈fm| ⊗ |gj〉〈gn| )_{kℓ,pq} = δik δjℓ δmp δnq. (48)
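The identification in Eq. 44 is just the Kronecker-product rule for matrix representations; a small numerical sketch (the dimensions and the operator A below are arbitrary choices) reconstructs an operator on H1 ⊗ H2 from terms of the form X ⊗ Y:

import numpy as np

d1, d2 = 2, 3                             # dim(H1), dim(H2)
rng = np.random.default_rng(2)
A = rng.normal(size=(d1 * d2, d1 * d2))   # arbitrary operator on H1 (x) H2

# Rebuild A as a sum of X (x) Y terms, one for each (i, m, j, n).
rebuilt = np.zeros_like(A)
for i in range(d1):
    for m in range(d1):
        X = np.zeros((d1, d1)); X[i, m] = 1.0          # |f_i><f_m|
        for j in range(d2):
            for n in range(d2):
                Y = np.zeros((d2, d2)); Y[j, n] = 1.0  # |g_j><g_n|
                coeff = A[i * d2 + j, m * d2 + n]      # A_{ij,mn}
                rebuilt += coeff * np.kron(X, Y)

print(np.allclose(A, rebuilt))            # True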

6. Let us try to improve our understanding of the discussions on the density matrix formalism, and the connections with “information” or “entropy” that we have made. Thus, we consider a simple “two-state” system. Let ρ be any general density matrix operating on the two-dimensional Hilbert space of this system.

(a) Calculate the entropy, s = −Tr(ρ ln ρ), corresponding to this density matrix. Express your result in terms of a single real parameter. Make sure the interpretation of this parameter is clear, as well as its range.

Solution: Density matrix ρ is Hermitian, hence diagonal in some basis. Work in such a basis. In this basis, ρ has the form:

ρ = [ θ      0
      0  1 − θ ], (49)


where 0 ≤ θ ≤ 1 is the probability that the system is in state 1. We have a pure state if and only if either θ = 1 or θ = 0.

The entropy is

s = −θ ln θ − (1 − θ) ln(1 − θ). (50)

(b) Make a graph of the entropy as a function of the parameter. What is the entropy for a pure state? Interpret your graph in terms of knowledge about a system taken from an ensemble with density matrix ρ.

Solution:

Figure 2: The entropy as a function of θ.

The entropy for a pure state, with θ = 1 or θ = 0, is zero. The entropy increases as the state becomes “less pure”, reaching its maximum when the probability of being in either state is 1/2, reflecting minimal “knowledge” about the state.
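The curve of Fig. 2 is easy to reproduce; the sketch below evaluates Eq. 50 on a grid and confirms that the maximum value, ln 2 ≈ 0.693, occurs at θ = 1/2:

import numpy as np

theta = np.linspace(1e-6, 1 - 1e-6, 1001)
s = -theta * np.log(theta) - (1 - theta) * np.log(1 - theta)

print(theta[np.argmax(s)])   # ~0.5
print(s.max())               # ~0.693 = ln 2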

(c) Consider a system with ensemble ρ a mixture of two ensembles ρ1, ρ2:

ρ = θρ1 + (1 − θ)ρ2, 0 ≤ θ ≤ 1 (51)

As an example, suppose

ρ1 = (1/2) [ 1  0
             0  1 ],
and
ρ2 = (1/2) [ 1  1
             1  1 ], (52)


in some basis. Prove that von Neumann’s mixing theorem holds for this example:

s(ρ) ≥ θs(ρ1) + (1 − θ)s(ρ2), (53)

with equality iff θ = 0 or θ = 1.

Solution: The entropy of ensemble 1 is:

s(ρ1) = −Tr ρ1 ln ρ1 = −(1/2) ln(1/2) − (1/2) ln(1/2) = ln 2 ≈ 0.6931.

It may be noticed that ρ2² = ρ2, hence ensemble 2 is a pure state, with entropy s(ρ2) = 0. Next, we need the entropy of the combined ensemble:

ρ = θρ1 + (1 − θ)ρ2 = (1/2) [   1      1 − θ
                              1 − θ      1   ]. (55)

To compute the entropy, it is convenient to determine the eigenvalues; they are 1 − θ/2 and θ/2. Note that they are in the range from zero to one, as they must be. The entropy is

s(ρ) = −(1 − θ/2) ln(1 − θ/2) − (θ/2) ln(θ/2). (56)

We must compare s(ρ) with

θs(ρ1) + (1 − θ)s(ρ2) = θ ln 2. (57)

It is readily checked that equality holds for θ = 1 or θ = 0. For the case 0 < θ < 1, take the difference of the two expressions:

s(ρ) − [θs(ρ1) + (1 − θ)s(ρ2)] = −(1 − θ/2) ln(1 − θ/2) − (θ/2) ln(θ/2) − θ ln 2
                               = −ln [ (1 − θ/2)^(1−θ/2) (θ/2)^(θ/2) 2^θ ]. (58)

This must be larger than zero if the mixing theorem is correct. This is equivalent to asking whether

(1 − θ/2)^(1−θ/2) (θ/2)^(θ/2) 2^θ (59)


is less than 1. This expression may be rewritten as
(1 − θ/2)^(1−θ/2) (2θ)^(θ/2). (60)

It must be less than one. To check, let’s find its maximum value, by setting its derivative with respect to θ equal to 0:

0 = d/dθ [ (1 − θ/2)^(1−θ/2) (2θ)^(θ/2) ]
  = d/dθ exp[ (1 − θ/2) ln(1 − θ/2) + (θ/2) ln(2θ) ].
Since the exponential factor is positive, this requires
0 = −(1/2) ln(1 − θ/2) − 1/2 + (1/2) ln(2θ) + 1/2 = (1/2) [ ln(2θ) − ln(1 − θ/2) ]. (61)

Thus, the maximum occurs at θ = 2/5. At this value of θ, s(ρ) = 0.500, and θs(ρ1) + (1 − θ)s(ρ2) = (2/5) ln 2 = 0.277. The theorem holds.
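A direct numerical check of this example (a sketch that simply diagonalizes the mixture rather than using the closed form):

import numpy as np

rho1 = np.eye(2) / 2
rho2 = np.ones((2, 2)) / 2

def entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

for theta in np.linspace(0.05, 0.95, 19):
    mix = theta * rho1 + (1 - theta) * rho2
    assert entropy(mix) > theta * np.log(2)
print("s(rho) > theta*ln(2) for all 0 < theta < 1 tested")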

7. Consider an N-dimensional Hilbert space. We define the real vector space, O, of Hermitian operators on this Hilbert space. We define a scalar product on this vector space according to:

(x, y) = Tr(xy), ∀x, y ∈ O. (62)

Consider a basis {B} of orthonormal operators in O. The set of density operators is a subset of this vector space, and we may expand an arbitrary density matrix as:

ρ = Σ_i Bi Tr(Bi ρ) = Σ_i Bi 〈Bi〉ρ. (63)

By measuring the average values for the basis operators, we can thus determine the expansion coefficients for ρ.

(a) How many such measurements are required to completely determine ρ?

Solution: The question is, how many independent basis operators are there in O?


An arbitrary N × N complex matrix is described by 2N² real parameters. The requirement of Hermiticity provides the independent constraint equations:
Re(Hij) = Re(Hji), i < j (64)
Im(Hij) = −Im(Hji), i ≤ j. (65)

This is N + 2[N(N − 1)/2] = N² equations. Thus, O is an N²-dimensional vector space. But to completely determine the density matrix, we have one further constraint, that Tr ρ = 1. Thus, it takes N² − 1 measurements to completely determine ρ.
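For N = 2 this counting gives N² − 1 = 3: taking the identity and the Pauli matrices (each divided by √2) as an orthonormal operator basis, the three averages 〈σx〉, 〈σy〉, 〈σz〉 determine ρ. A sketch of the reconstruction in Eq. 63, with an arbitrary test state:

import numpy as np

# Orthonormal Hermitian basis for 2x2 operators under (x, y) = Tr(xy).
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [b / np.sqrt(2) for b in (I2, sx, sy, sz)]

rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])   # test density matrix

# Reconstruct rho from the averages Tr(B_i rho); the identity coefficient
# is fixed by Tr(rho) = 1, so only the 3 Pauli averages must be measured.
rebuilt = sum(B * np.trace(B @ rho) for B in basis)
print(np.allclose(rho, rebuilt))   # True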

(b) If ρ is known to be a pure state, how many measurements are required?

Solution: We note that a complex vector in N dimensions is completely specified by 2N real parameters. However, one parameter is an arbitrary phase, and another parameter is eaten by the normalization constraint. Thus, it takes 2(N − 1) parameters to completely specify a pure state.

If ρ is a pure state, then ρ² = ρ. How many additional constraints over the result in part (a) does this imply? Let’s try to get a more intuitive understanding by attacking this issue from a slightly different perspective. Ask, instead, how many parameters it takes to build an arbitrary density matrix as a mixture of pure states. Our response will be to add pure states into the mixture one at a time, counting parameters as we go, until we cannot add any more.

It takes 2(N − 1) parameters to define the first pure state in our mixture. The second pure state must be a distinct state. That is, it must be drawn from an (N − 1)-dimensional subspace. Thus the second pure state requires 2(N − 2) parameters to define. There will also be another parameter required to specify the relative probabilities of the first and second state, but we’ll count up these probabilities later. The third pure state requires 2(N − 3) parameters, and so forth, stopping at 2 · 1 parameters for the (N − 1)st pure state. Thus, it takes

2 Σ_{k=1}^{N−1} k = N(N − 1) (66)


parameters to define all the pure states in the arbitrary mixture. There can be a total of N pure states making up a mixture (the Nth one required no additional parameters in the count we just made). It takes N − 1 parameters to specify the relative probabilities of these N components in the mixture. Thus, the total number of parameters required is:
N(N − 1) + (N − 1) = N² − 1. (67)

Notice that this is just the result we obtained in part (a).

8. Two scientists (they happen to be twins, named “Oivil” and “Livio”, but never mind. . . ) decide to do the following experiment: They set up a light source, which emits two photons at a time, back-to-back in the laboratory frame. The ensemble is given by:

ρ = (1/2)( |LL〉〈LL| + |RR〉〈RR| ), (68)

where “L” refers to left-handed polarization, and “R” refers to right-handed polarization. Thus, |LR〉 would refer to a state in which photon number 1 (defined as the photon which is aimed at scientist Oivil, say) is left-handed, and photon number 2 (the photon aimed at scientist Livio) is right-handed.

These scientists (one of whom is of a diabolical bent) decide to play a game with Nature: Oivil (of course) stays in the lab, while Livio treks to a point a light-year away. The light source is turned on and emits two photons, one directed toward each scientist. Oivil soon measures the polarization of his photon; it is left-handed. He quickly makes a note that his brother is going to see a left-handed photon, sometime after next Christmas.

Christmas has come and gone, and finally Livio sees his photon, and measures its polarization. He sends a message back to his brother Oivil, who learns in yet another year what he knew all along: Livio’s photon was left-handed.

Oivil then has a sneaky idea. He secretly changes the apparatus, without telling his forlorn brother. Now the ensemble is:

ρ = (1/2)( |LL〉 + |RR〉 )( 〈LL| + 〈RR| ). (69)


He causes another pair of photons to be emitted with this new apparatus, and repeats the experiment. The result is identical to the first experiment.

(a) Was Oivil just lucky, or will he get the right answer every time, for each apparatus? Demonstrate your answer explicitly, in the density matrix formalism.

Solution: Yup, he’ll get it right, every time, in either case. Let’s first define a basis so that we can see how it all works with explicit matrices:

|LL〉 = (1, 0, 0, 0)^T, |LR〉 = (0, 1, 0, 0)^T, |RL〉 = (0, 0, 1, 0)^T, |RR〉 = (0, 0, 0, 1)^T. (70)

In this basis the density matrix for the first apparatus is:

ρ = (1/2)( |LL〉〈LL| + |RR〉〈RR| )
  = (1/2) (1, 0, 0, 0)^T (1, 0, 0, 0) + (1/2) (0, 0, 0, 1)^T (0, 0, 0, 1)
  = (1/2) [ 1  0  0  0
            0  0  0  0
            0  0  0  0
            0  0  0  1 ]. (71)

Since Tr(ρ²) = 1/2, we know that this is a mixed state.

Now, Oivil observes that his photon is left-handed. His left-handed projection operator is

PL = [ 1  0  0  0
       0  1  0  0
       0  0  0  0
       0  0  0  0 ], (72)

so once he has made his measurement, the state has “collapsed” to:


PL ρ PL = (1/2) [ 1  0  0  0
                  0  0  0  0
                  0  0  0  0
                  0  0  0  0 ]. (73)

This corresponds to a pure |LL〉 state, hence Livio will observe left-handed polarization.

For the second apparatus, the density matrix is

ρ = (1/2)( |LL〉 + |RR〉 )( 〈LL| + 〈RR| )
  = (1/2) [ 1  0  0  1
            0  0  0  0
            0  0  0  0
            1  0  0  1 ]. (74)

Since Tr(ρ²) = 1, we know that this is a pure state. Applying the left-handed projection for Oivil’s photon, we again obtain:

PL ρ PL = (1/2) [ 1  0  0  0
                  0  0  0  0
                  0  0  0  0
                  0  0  0  0 ]. (75)

Again, Livio will observe left-handed polarization.

(b) What is the probability that Livio will observe a left-handed photon, or a right-handed photon, for each apparatus? Is there a problem with causality here? How can Oivil know what Livio is going to see, long before he sees it? Discuss! Feel free to modify the experiment to illustrate any points you wish to make.

Solution: Livio’s left-handed projection operator is

PL(Livio) = [ 1  0  0  0
              0  0  0  0
              0  0  1  0
              0  0  0  0 ]. (76)

The probability that Livio will observe a left-handed photon for the first apparatus is:

〈PL(Livio)〉 = Tr [PL(Livio)ρ] = 1/2. (77)


The same result is obtained for the second apparatus.
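The explicit matrices above are simple to encode; the sketch below rebuilds both ensembles, checks Tr(ρ²), and evaluates Livio’s left-handed probability (1/2 in both cases), using the basis ordering of Eq. 70:

import numpy as np

LL, LR, RL, RR = np.eye(4)            # basis |LL>, |LR>, |RL>, |RR>

rho_mixed = 0.5 * (np.outer(LL, LL) + np.outer(RR, RR))       # Eq. 71
psi = (LL + RR) / np.sqrt(2)
rho_pure = np.outer(psi, psi)                                 # Eq. 74

PL_oivil = np.diag([1.0, 1.0, 0.0, 0.0])    # photon 1 left-handed
PL_livio = np.diag([1.0, 0.0, 1.0, 0.0])    # photon 2 left-handed

for name, rho in (("mixed", rho_mixed), ("pure", rho_pure)):
    purity = np.trace(rho @ rho)
    p_livio_L = np.trace(PL_livio @ rho)
    print(name, purity, p_livio_L)    # purities 0.5 and 1.0; both give 0.5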

Here is my take on the philosophical issue (beware!):

If causality means propagation of information faster than the speed of light, then the answer is “no”, causality is not violated. Oivil has not propagated any information to Livio at superluminal velocities. Livio made his own observation on the state of the system. Notice that the statistics of Livio’s observations are unaltered; independent of what Oivil does, he will still see left-handed photons 50% of the time. If this were not the case, then there would be a problem, since Oivil could exploit this to propagate a message to Livio long after the photons are emitted.

However, people widely interpret (and write flashy headlines about) this sort of effect as a kind of “action at a distance”: By measuring the state of his photon, Oivil instantly “kicks” Livio’s far-off photon into a particular state (without being usable for the propagation of information, since Oivil can’t tell Livio about it any faster than the speed of light). Note that this philosophical dilemma is not silly: The wave function for Livio’s photon has both left- and right-handed components; how could a measurement of Oivil’s photon “pick” which component Livio will see? Because of this, quantum mechanics is often labelled “non-local”.

On the other hand, this philosophical perspective may be avoided (ignored): It may be suggested that it doesn’t make sense to talk this way about the “wave function” of Livio’s photon, since the specification of the wave function involves also Oivil’s photon. Oivil is merely making a measurement of the state of the two-photon system, by observing the polarization of one of the photons, and knowing the coherence of the system. He doesn’t need to make two measurements to know both polarizations; they are completely correlated. Nothing is causing anything else to happen at faster than light speed. We might take the (deterministic?) point of view that it was already determined at production which polarization Livio would see for a particular photon – we just don’t know what it will be unless Oivil makes his measurement. There appears to be no way of falsifying this point of view, as stated. However, taking this point of view leads to the further philosophical question of how the pre-determined information is encoded.


Is the photon propagating towards Livio somehow carrying the information that it is going to be measured as left-handed? This conclusion seems hard to avoid. It leads to the notion of “hidden variables”, and there are theories of this sort, which are testable.

We know that our quantum mechanical foundations are compatible with special relativity, hence with the notion of causality that implies.

As Feynman remarked several years ago in a seminar I arranged concerning EPR, the substantive question to be asking is, “Do the predictions of quantum mechanics agree with experiment?”. So far the answer is a resounding “yes”. Indeed, we often rely heavily on this quantum coherence in carrying out other research activities. Current experiments to measure CP violation in B0 decays crucially depend on it, for example.

9. Let us consider the application of the density matrix formalism to the problem of a spin-1/2 particle (such as an electron) in a static external magnetic field. In general, a particle with spin may carry a magnetic moment, oriented along the spin direction (by symmetry). For spin-1/2, we have that the magnetic moment (operator) is thus of the form:

μ = (1/2) γ σ, (78)

where σ are the Pauli matrices, the 1/2 is by convention, and γ is a constant, giving the strength of the moment, called the gyromagnetic ratio. The term in the Hamiltonian for such a magnetic moment in an external magnetic field B is just:

H = −μ · B. (79)

Our spin-1/2 particle may have some spin orientation, or “polarization vector”, given by:

P = 〈σ〉. (80)

Drawing from our classical intuition, we might expect that in the external magnetic field the polarization vector will exhibit a precession about the field direction. Let us investigate this.


Recall that the expectation value of an operator may be computed from the density matrix according to:

〈A〉 = Tr(ρA). (81)

Furthermore, recall that the time evolution of the density matrix is given by:

i ∂ρ/∂t = [H(t), ρ(t)]. (82)

What is the time evolution, dP/dt, of the polarization vector? Express your answer as simply as you can (more credit will be given for right answers that are more physically transparent than for right answers which are not). Note that we make no assumption concerning the purity of the state.

Solution: Let us consider the ith component of the polarization:

i dPi/dt = i d〈σi〉/dt (83)
         = i ∂/∂t Tr(ρσi) (84)
         = i Tr( (∂ρ/∂t) σi ) (85)
         = Tr( [H, ρ] σi ) (86)
         = Tr( [σi, H] ρ ) (87)
         = −(γ/2) Σ_{j=1}^{3} Bj Tr( [σi, σj] ρ ). (88)

To proceed further, we need the density matrix for a state with polarization P. Since ρ is hermitian, it must be of the form:

ρ = a(1 + b · σ). (89)

But its trace must be one, so a = 1/2. Finally, to get the right polarization vector, we must have b = P.

Thus, we have

i dPi/dt = −(γ/4) Σ_{j=1}^{3} Bj { Tr[σi, σj] + Σ_{k=1}^{3} Pk Tr( [σi, σj] σk ) }. (90)


Now [σi, σj] = 2iεijk σk, which is traceless. Further, Tr( [σi, σj] σk ) = 4iεijk. This gives the result:

dPi/dt = −γ Σ_{j=1}^{3} Σ_{k=1}^{3} εijk Bj Pk. (91)

This may be re-expressed in the vector form:

dP/dt = γ P × B. (92)
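As a consistency check of Eq. 92, one may evolve ρ = (1 + P · σ)/2 numerically under Eq. 82 and compare P(t) = Tr(ρσ) with the classical precession; a sketch (the values of γ, B, and the time step are arbitrary choices):

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = np.array([sx, sy, sz])

gamma, B = 1.3, np.array([0.0, 0.0, 2.0])          # field along z
H = -0.5 * gamma * np.einsum("i,ijk->jk", B, sigma)

P0 = np.array([1.0, 0.0, 0.0])                     # initial polarization along x
rho = 0.5 * (np.eye(2) + np.einsum("i,ijk->jk", P0, sigma))

t, dt = 2.0, 1e-4
for _ in range(int(t / dt)):                       # i d(rho)/dt = [H, rho]
    rho = rho + dt * (-1j) * (H @ rho - rho @ H)

P_t = np.real([np.trace(rho @ s) for s in sigma])
omega = gamma * B[2]                               # precession angular frequency
P_expected = np.array([np.cos(omega * t), -np.sin(omega * t), 0.0])
print(P_t, P_expected)                             # agree to O(dt)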

10. Let us consider a system of N spin-1/2 particles (see the previous problem) per unit volume in thermal equilibrium, in our external magnetic field B. Recall that the canonical distribution is:

ρ = e^(−H/T) / Z, (93)

with partition function:

Z = Tr( e^(−H/T) ). (94)

Such a system of particles will tend to orient along the magnetic field, resulting in a bulk magnetization (having units of magnetic moment per unit volume), M.

(a) Give an expression for this magnetization (don’t work too hard to evaluate).

Solution: Let us orient our coordinate system so that the z-axis is along the magnetic field direction. Then Mx = 0, My = 0, and:

Mz = N (γ/2) 〈σz〉 (95)
   = (Nγ/2Z) Tr[ e^(−H/T) σz ], (96)

where H = −γBzσz/2.

(b) What is the magnetization in the high-temperature limit, to lowest non-trivial order (this I want you to evaluate as completely as you can!)?


Solution: In the high temperature limit, we’ll discard terms of order higher than 1/T in the expansion of the exponential: e^(−H/T) ≈ 1 − H/T = 1 + γBzσz/2T. Thus,

Mz = (Nγ/2Z) Tr[ (1 + γBzσz/2T) σz ] (97)
   = Nγ²Bz / 2ZT. (98)

Furthermore,

Z = Tr e^(−H/T) (99)
  = 2 + O(1/T²). (100)

And we have the result:

Mz = Nγ²Bz / 4T. (101)

This is referred to as the “Curie Law” (for magnetization of a system of spin-1/2 particles).
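For a single spin the exact average is 〈σz〉 = tanh(γBz/2T), so Mz = (Nγ/2) tanh(γBz/2T); the Curie form above is its leading behavior at high temperature. A quick numerical comparison (a sketch with arbitrary values of γ and Bz, and N = 1):

import numpy as np

N, gamma, Bz = 1.0, 1.5, 0.7

def Mz_exact(T):
    # Single spin-1/2: <sigma_z> = tanh(gamma*Bz / 2T)
    return 0.5 * N * gamma * np.tanh(gamma * Bz / (2 * T))

def Mz_curie(T):
    return N * gamma**2 * Bz / (4 * T)        # Eq. 101

for T in (1.0, 5.0, 50.0):
    print(T, Mz_exact(T), Mz_curie(T))        # the two converge as T grows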
