


These are the solutions for Chapter 2: Introduction to Quantum Mechanics.

I decided not to solve problems 2.1–2.50, since they are mostly linear algebra, but I might come back to them later. Problems 2.72–2.81 are missing as well, at least for now.


2.51

We have to verify that $H^{\dagger}H = \mathbb{I}$.

We know that $H = \frac{1}{\sqrt 2}\left(\begin{smallmatrix}1 & 1\newline 1 & -1\end{smallmatrix}\right)$, so that $H^\dagger = \frac{1}{\sqrt 2}\left(\begin{smallmatrix}1 & 1\newline 1 & -1\end{smallmatrix}\right)^\dagger = H$.

Then $$\begin{align*} H^\dagger H = H^2 &= \frac{1}{2}\begin{pmatrix} 1&1\newline 1&-1 \end{pmatrix} \begin{pmatrix} 1&1\newline 1&-1\end{pmatrix}\newline &= \frac{1}{2}\begin{pmatrix} 2 & 0\newline 0 & 2\end{pmatrix} = \Bbb{I}. \end{align*}$$
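
If you want to double-check this numerically, here is a minimal sketch (assuming Python with numpy, which is of course not part of the book's material):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

# H is real and symmetric, so H^dagger = H; verify H^dagger H = I
print(np.allclose(H.conj().T @ H, np.eye(2)))  # True
```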

2.52

Thanks to the previous exercise, we know that $H^\dagger H = \Bbb{I}$, and since $H^\dagger H = H^2$, then $H^2 = \Bbb{I}$.

2.53

We solve $$ \begin{align*} \det(H-\lambda\Bbb I) &= \det \begin{pmatrix} \frac{1}{\sqrt 2}-\lambda & \frac{1}{\sqrt 2}\newline \frac{1}{\sqrt 2}&-\frac{1}{\sqrt 2}-\lambda \end{pmatrix} = \lambda^2 - 1 = 0\newline \Rightarrow \lambda &= \pm 1. \end{align*} $$

For $\lambda_+ = 1$ and $\ket{\lambda_+} = \left(\begin{smallmatrix}\alpha\newline\beta\end{smallmatrix}\right)$ we have the eigenvector condition $H\ket{\lambda_+} = \lambda_+\ket{\lambda_+} $. We get $$ \begin{matrix} \Rightarrow & \frac{1}{\sqrt 2}\alpha + \frac{1}{\sqrt 2}\beta = \alpha & \text{and} & \frac{1}{\sqrt 2}\alpha - \frac{1}{\sqrt 2}\beta = \beta. \end{matrix} $$ Solving this system yields $\ket{\lambda_+} = \begin{pmatrix}1\newline \sqrt2 -1\end{pmatrix}$ before normalisation.

For $\lambda_{-} = -1$ we can do an analogous analysis yielding $\ket{\lambda_-} = \begin{pmatrix}1\newline -\sqrt2 -1\end{pmatrix}$ before normalisation.
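
As a quick sanity check (again a sketch assuming numpy), the eigenvalues and the unnormalised eigenvectors above can be verified directly:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# numpy returns the eigenvalues of a Hermitian matrix in ascending order
print(np.linalg.eigh(H)[0])  # approximately [-1, 1]

# Unnormalised eigenvectors found above
plus = np.array([1, np.sqrt(2) - 1])
minus = np.array([1, -np.sqrt(2) - 1])
print(np.allclose(H @ plus, plus), np.allclose(H @ minus, -minus))  # True True
```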

2.54

Suppose $A$ and $B$ are both Hermitian and they commute. Since they are Hermitian and commute, they are simultaneously diagonalisable: $$\begin{matrix} A = \sum_i a_i \ket i\bra i & \text{and} & B = \sum_i b_i \ket i\bra i, \end{matrix} $$ where {$\ket i$} is a common orthonormal eigenbasis of the two. We note that the sum $A + B = \sum_i (a_i + b_i)\ket i\bra i$ is Hermitian too (and thus diagonalisable).

Exponentiating $A$ and $B$ we have $$\begin{matrix} \exp A = \sum_i e^{a_i} \ket i\bra i & \text{and} & \exp B = \sum_i e^{b_i} \ket i\bra i, \end{matrix} $$ so that $$\begin{align*} \exp A\exp B &= \sum_{ij} e^{a_i}e^{b_j}\ket i\braket{i\vert j}\bra j\hspace{0.5cm} (\braket{i\vert j} = \delta_{ij})\newline &= \sum_i e^{a_i+b_i}\ket i\bra i, \end{align*} $$ which is just $\exp(A+B)$. We can verify that $\exp B\exp A$ gives the same result.
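
A small numerical illustration of the result (a sketch assuming numpy and scipy; the non-commuting pair at the end is only there to show that commutativity really is needed):

```python
import numpy as np
from scipy.linalg import expm

# Two commuting Hermitian matrices: both diagonal in the same basis
A = np.diag([0.3, -1.2])
B = np.diag([2.0, 0.7])
print(np.allclose(expm(A) @ expm(B), expm(A + B)))  # True

# For non-commuting Hermitian matrices the identity generally fails
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
print(np.allclose(expm(X) @ expm(Z), expm(X + Z)))  # False
```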

2.55

We know that $$ U(t_1,t_2) \equiv \exp\left[\frac{-i\cal{H}(t_2-t_1)}{\hbar}\right], $$ where the Hamiltonian $\cal H$ is a Hermitian operator. Using its spectral decomposition it has the form $$ U(t_1,t_2) = \sum_E \exp\left[\frac{-iE(t_2-t_1)}{\hbar}\right]\ket E\bra E, $$ so that one of the unitarity conditions becomes $$ \begin{align*} U(t_1,t_2)^\dagger U(t_1,t_2) &= \left(\sum_E \exp\left[\frac{-iE(t_2-t_1)}{\hbar}\right]\ket E\bra E\right)^\dagger\left(\sum_{E'} \exp\left[\frac{-iE'(t_2-t_1)}{\hbar}\right]\ket{E'}\bra {E'}\right)\newline &= \sum_{E,E'}\exp\left(\frac{iE(t_2-t_1)}{\hbar}\right)\exp\left(\frac{-iE'(t_2-t_1)}{\hbar}\right)\ket{E}\braket {E\vert E'}\bra {E'}\newline &= \sum_{E} \exp\left[\frac{iE(t_2-t_1)}{\hbar}+\frac{-iE(t_2-t_1)}{\hbar}\right]\ket E\bra E\newline &= \sum_E\ket E \bra E = \Bbb I. \end{align*} $$ One can follow exactly the same procedure to show that $U(t_1,t_2)U(t_1,t_2)^\dagger = \Bbb I$. Therefore $U(t_1,t_2)$ is unitary.

2.56

Since unitary $U$ satisfies $UU^\dagger = \Bbb I$, it is a normal operator and has a spectral decomposition $U = \sum_j u_j\ket j\bra j$, where $u_j$ are its corresponding eigenvalues. We also know that because $U$ is unitary, its eigenvalues $u_j$ have the form $e^{i\theta_j}$ for values $\theta_j \in [0,2\pi)$. This means that $$ \begin{align*} -i\log U &= -i\sum_j\log(u_j)\ket j\bra j\newline &= -i\sum_j\log(e^{i\theta_j})\ket j\bra j \newline &= \sum_j\theta_j\ket j\bra j. \end{align*} $$ Since $\theta_j\in\Bbb R$, the operator $K \coloneqq \sum_j\theta_j\ket j\bra j$ is diagonal in this basis and satisfies $K = K^\dagger$, which proves Hermiticity.
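
A numerical sketch of the statement (assuming numpy and scipy; scipy's `logm` computes a principal matrix logarithm):

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(0)

# Build a random unitary U = exp(iA) from a random Hermitian A
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = (M + M.conj().T) / 2
U = expm(1j * A)

K = -1j * logm(U)                     # K = -i log U
print(np.allclose(K, K.conj().T))     # True: K is Hermitian
print(np.allclose(U, expm(1j * K)))   # True: U = exp(iK)
```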

2.57

We have two sets of measurement operators {$M_m$} and {$L_l$}. Suppose we have a system in an initial state $\ket\psi$ and we first perform the measurement {$L_l$}, obtaining the outcome $l$. This means that the state of the system is now $$\ket{\psi_l} = \frac{L_l\ket\psi}{\sqrt{\bra\psi L_l^\dagger L_l\ket\psi}}. $$ If we now perform the measurement {$M_m$} and obtain the outcome $m$, the state of the system becomes

$$ \begin{align*} \ket{\psi_{m|l}} &= \frac{M_m}{\sqrt{\bra{\psi_l} M_m^\dagger M_m\ket{\psi_l}}}\ket{\psi_l}\newline &= \frac{M_m}{\sqrt{\bra{\psi_l} M_m^\dagger M_m\ket{\psi_l}}}\frac{L_l\ket{\psi}}{\sqrt{\bra\psi L_l^\dagger L_l\ket\psi}} \newline &=\frac{M_m L_l}{\sqrt{\bra{\psi} L_l^\dagger M_m^\dagger M_m L_l\ket{\psi}}}\ket\psi. \end{align*} $$ We can now define the measurement operators {$N_n$} $\equiv$ {$M_m L_l$}, such that every value $n\in\Bbb N$ corresponds to a unique pair $m,l$. Then, if we applied this new set of measurements to the initial state $\ket\psi$, the system would be in the state $$ \begin{align*} \ket{\psi_n} &= \frac{M_m L_l}{\sqrt{\bra{\psi} L_l^\dagger M_m^\dagger M_m L_l\ket{\psi}}}\ket\psi\newline &= \ket{\psi_{m|l}}. \end{align*} $$

We can verify that {$N_n$} indeed constitutes a measurement set: $$ \begin{align*} \sum_n N_n^\dagger N_n &= \sum_{m,l} L_l^\dagger M_m^\dagger M_m L_l\newline &= \sum_l L_l^\dagger \left(\sum_m M_m^\dagger M_m\right)L_l\newline &= \sum_l L_l^\dagger L_l = \Bbb I. \end{align*} $$
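
A concrete check of the completeness relation for the combined operators, using two projective single-qubit measurements as an example (a sketch assuming numpy; the particular choice of {$M_m$} and {$L_l$} is mine, not the book's):

```python
import numpy as np

s = 1 / np.sqrt(2)
# Measurement in the computational basis: {|0><0|, |1><1|}
M = [np.array([[1., 0.], [0., 0.]]), np.array([[0., 0.], [0., 1.]])]
# Measurement in the |+>, |-> basis
plus, minus = np.array([s, s]), np.array([s, -s])
L = [np.outer(plus, plus), np.outer(minus, minus)]

# Combined operators N_{ml} = M_m L_l still satisfy the completeness relation
N = [Mm @ Ll for Mm in M for Ll in L]
print(np.allclose(sum(Nn.conj().T @ Nn for Nn in N), np.eye(2)))  # True
```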

2.58

Since the system is already in an eigenstate of the observable $M$, $M\ket\psi = m\ket\psi$. Then $$ \begin{align*} \langle M \rangle &= \bra\psi M \ket\psi = m\newline \langle M^2 \rangle &= \bra\psi M^2 \ket\psi = m\bra\psi M \ket\psi = m^2, \end{align*} $$ so that $\Delta(M) = \sqrt{\langle M^2 \rangle - \langle M \rangle^2} = \sqrt{m^2-m^2} = 0$.
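
For instance, with $M = Z$ and $\ket\psi = \ket 0$ (an eigenstate with $m = 1$), a quick numerical check (assuming numpy) gives zero standard deviation:

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]])
zero = np.array([1, 0])            # |0> is an eigenstate of Z with eigenvalue m = 1

mean = zero @ Z @ zero             # <Z>   = 1
mean_sq = zero @ (Z @ Z) @ zero    # <Z^2> = 1
print(np.sqrt(mean_sq - mean**2))  # 0.0: no spread in the measurement outcomes
```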

2.59

We know that $X\ket x = \ket{x\oplus 1}$. Thus:

  1. $\langle X\rangle = \bra{0}X\ket{0} = \braket{0\vert 1} = 0$,
  2. $\langle X^2\rangle = \bra{0}X^2\ket{0} = \bra{0}X\ket{1} = \braket{0\vert 0} = 1$,
  3. $\Delta(X) = \sqrt{\langle X^2\rangle-\langle X\rangle^2} = 1$.

2.60

Suppose $\vec v$ is a real three-dimensional unit vector. Then $$ \begin{align*} \vec v \cdot \vec\sigma &= v_1\begin{pmatrix}0&1\newline 1&0\end{pmatrix}+v_2\begin{pmatrix}0&-i\newline i&0\end{pmatrix}+v_3\begin{pmatrix}1&0\newline 0&-1\end{pmatrix}\newline &= \begin{pmatrix}v_3 & v_1-iv_2\newline v_1+iv_2 & -v_3\end{pmatrix}. \end{align*} $$ Solving $$ \begin{align*} \det(\vec v \cdot \vec\sigma-\lambda\Bbb I) &= \det \begin{pmatrix} v_3-\lambda & v_1-iv_2\newline v_1+iv_2 & -v_3-\lambda \end{pmatrix} = 0\newline \Rightarrow \lambda^2 &- |\vec v|^2 = 0,\newline \therefore \lambda &= \pm|\vec v| = \pm 1. \end{align*} $$ To find the projectors we first need to find the associated eigenvectors, since $P_{\pm} = \ket{\lambda_\pm}\bra{\lambda_\pm}$.

For $\lambda_+ = 1$ we have the following system of linear equations (solving for $\ket{\lambda_+} = \left(\begin{smallmatrix}\alpha\newline\beta\end{smallmatrix}\right)$): $$ \begin{align*} (v_3-1)\alpha + (v_1-iv_2)\beta &= 0\newline (v_1+iv_2)\alpha-(v_3+1)\beta &= 0, \end{align*} $$ yielding $\ket{\lambda_+} =\cfrac{1}{\sqrt{2-2v_3}}\begin{pmatrix}-v_1+iv_2\newline v_3-1\end{pmatrix}$.

We then find that: $$ \begin{align*} P_{+} = \ket{\lambda_+}\bra{\lambda_+} &= \cfrac{1}{2(1-v_3)}\begin{pmatrix}-v_1+iv_2\newline v_3-1\end{pmatrix}\begin{pmatrix}-v_1-iv_2 & v_3-1\end{pmatrix}\newline &= \cfrac{1}{2}\left[\cfrac{1}{1-v_3}\right]\begin{pmatrix}v_1^2+v_2^2 & (-v_1+iv_2)(v_3-1)\newline -(v_1+iv_2)(v_3-1)&(v_3-1)^2\end{pmatrix}, \end{align*} $$ and using the fact that $\vert{\vec v}\vert^2 = v_1^2+v_2^2+v_3^2 = 1$, we get $$ \begin{align*} \ket{\lambda_+}\bra{\lambda_+} &= \cfrac{1}{2}\begin{pmatrix}1+v_3 & v_1-iv_2\newline v_1+iv_2 & 1-v_3\end{pmatrix} = \frac{1}{2}\left(\Bbb I + \vec v\cdot\vec\sigma\right). \end{align*} $$ An analogous procedure can be followed for the other eigenvalue $\lambda_- = -1$.
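
The projector formula can also be checked numerically for a random unit vector (a sketch assuming numpy):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2)

rng = np.random.default_rng(1)
v = rng.standard_normal(3)
v /= np.linalg.norm(v)             # random real unit vector

vs = v[0] * X + v[1] * Y + v[2] * Z
P_plus = (I + vs) / 2
P_minus = (I - vs) / 2

print(np.allclose(P_plus @ P_plus, P_plus))   # True: P_+ is a projector
print(np.allclose(vs @ P_plus, P_plus))       # True: projects onto the +1 eigenspace
print(np.allclose(vs @ P_minus, -P_minus))    # True: projects onto the -1 eigenspace
```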

2.61

The probability of measuring $m = 1$ given the state $\ket 0$ corresponds to $$ \begin{align*} p(1) &= \bra{0}P_+\ket 0\newline &= \frac{1}{2}\bra 0(\Bbb I + \vec v\cdot\vec\sigma)\ket 0\newline &= \frac{1}{2} \braket{0\vert0}+ \frac{1}{2}\bra 0(v_1X+v_2Y+v_3Z)\ket 0\newline &= \frac{1}{2} + \frac{1}{2}\left[v_1\braket{0\vert 1}+iv_2\braket{0\vert 1} + v_3\braket{0\vert 0}\right]\newline &= \frac{1}{2}(1+v_3). \end{align*} $$ If the outcome $m=1$ is obtained, the post-measurement state is $P_+\ket 0/\sqrt{p(1)} = \ket{\lambda_+}\braket{\lambda_+\vert 0}/\sqrt{p(1)}$, which is just $\ket{\lambda_+}$ up to a global phase.

2.62

Suppose we have a measurement that can be described by the measurement operators {$M_m$}. Its POVM elements are defined by $E_m = M_m^\dagger M_m$.

If the measurement operators coincide with the POVM elements then $M_m = M_m^\dagger M_m$ for all $m$. Thanks to exercise 2.25, we know that $M_m^\dagger M_m$ is positive, so $M_m$ is positive as well, and exercise 2.24 tells us that $M_m$ is therefore Hermitian. Then $$ M_m = M_m^\dagger M_m = M_m^2, $$ which means that $M_m$ is a projector (exercise 2.16).

2.63

Thanks to the left polar decomposition of $M_m$ we know there exist unitary $U$ and positive $J$ such that $$M_m = UJ,$$ with $J = \sqrt{ M_m^\dagger M_m} = \sqrt{E_m}$, which was what we wanted to show.

2.65

Trivially answered with the sign-state basis $$\ket \pm = \cfrac{1}{\sqrt{2}}\left(\ket 0 \pm \ket 1\right):$$ in this basis the two given states are simply $\ket +$ and $\ket -$, which are not the same up to a relative phase shift.

2.66

For a system in the state $\ket{\Phi^+}=\cfrac{1}{\sqrt{2}}(\ket{00} + \ket {11})$:

$$ \begin{align*} \langle X\otimes Z \rangle &= \bra{\Phi^+}X\otimes Z\ket{\Phi^+}\newline &= \cfrac{1}{2}(\bra{00}+\bra{11})X\otimes Z(\ket{00}+\ket{11})\newline &=\cfrac{1}{2}(\bra{00}X\otimes Z\ket{00}+\bra{00}X\otimes Z\ket{11}+\bra{11}X\otimes Z\ket{00}+\bra{11}X\otimes Z\ket{11}). \end{align*} $$ We know the effects of $X$ and $Z$ on the computational basis: $$ \begin{matrix} X\ket{x} = \ket{x\oplus 1}, & Z\ket x = (-1)^x\ket x. \end{matrix} $$ Applying this to the previous equation we obtain $$ \begin{align*} \langle X\otimes Z \rangle &=\cfrac{1}{2}(\bra{00}\ket{10}-\bra{00}\ket{01}+\bra{11}\ket{10}-\bra{11}\ket{01})\newline &= 0. \end{align*} $$
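
The same expectation value can be computed numerically in a couple of lines (assuming numpy):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)    # (|00> + |11>)/sqrt(2)
print(phi_plus @ np.kron(X, Z) @ phi_plus)        # 0.0
```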

2.68

Let us have the two-qubit state $\ket{\Phi^+}=\cfrac{1}{\sqrt{2}}(\ket{00} + \ket {11})$. If $\ket{\Phi^+}$ were a product state, then there would exist complex numbers $a_0, a_1, b_0$ and $b_1$ for which: $$ \begin{align*} \ket{\Phi^+} &= \ket a\ket b\newline &= (a_0\ket 0 + a_1\ket 1)(b_0\ket 0 + b_1\ket 1)\newline &= a_0 b_0\ket{00}+a_0 b_1\ket{01}+a_1 b_0\ket{10}+a_1 b_1\ket{11}. \end{align*} $$ Because of linear independence, this would mean that $$ \begin{matrix} a_0 b_0 = 1/\sqrt{2}, & & a_0 b_1 = 0, \newline a_1 b_0 = 0 & \text{and} & a_1 b_1 = 1/\sqrt{2}, \end{matrix} $$ which are impossible conditions to satisfy simultaneously. Thus, $\ket{\Phi^+}$ can't possibly be a product state.
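
Another way to see this (a side note, not the argument the book asks for): if we arrange the amplitudes of a two-qubit state into a $2\times 2$ matrix $C$ with $\ket\psi = \sum_{ij} C_{ij}\ket{ij}$, then $\ket\psi$ is a product state exactly when $C_{ij} = a_i b_j$, i.e. when $C$ has rank one. A numerical sketch assuming numpy:

```python
import numpy as np

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
# Amplitude matrix C with |psi> = sum_ij C[i, j] |ij>; rank 2 means not a product state
print(np.linalg.matrix_rank(phi_plus.reshape(2, 2)))   # 2

product = np.kron([1, 1], [1, 0]) / np.sqrt(2)  # (|0> + |1>)/sqrt(2) tensor |0>
print(np.linalg.matrix_rank(product.reshape(2, 2)))    # 1
```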

2.69

Given the Bell states $$ \begin{align*} \ket{\Phi^\pm} &= \cfrac{1}{\sqrt{2}}(\ket{00} \pm \ket {11})\hspace{0.5cm}\text{and}\newline \ket{\Psi^\pm} &= \cfrac{1}{\sqrt{2}}(\ket{01} \pm \ket {10}), \end{align*} $$ it is straightforward to see that we can express the computational basis of the two-qubit space as follows $$ \begin{align*} \ket{00} &= \cfrac{1}{\sqrt{2}}(\ket{\Phi^+}+\ket{\Phi^-}),\newline \ket{11} &= \cfrac{1}{\sqrt{2}}(\ket{\Phi^+}-\ket{\Phi^-}),\newline \ket{01} &= \cfrac{1}{\sqrt{2}}(\ket{\Psi^+}+\ket{\Psi^-}),\newline \ket{10} &= \cfrac{1}{\sqrt{2}}(\ket{\Psi^+}-\ket{\Psi^-}). \end{align*} $$ Since we can generate the computational basis from the Bell states, and since the Bell states are mutually orthonormal, they too form a basis for the two-qubit space.

2.70

a)

We define a positive operator $E$ acting on Alice’s qubit as $E_A$ (any operator acting on Bob’s qubit will similarly have the subscript $B$). Since $E_A$ is a positive operator $\Rightarrow E_A$ is Hermitian $\Rightarrow$ it has a spectral decomposition. Then $$ \begin{align*} E_A\otimes \Bbb{I}_B &= \sum_{i}e_i\ket{i}\bra{i}\otimes\sum_{j}\ket{j}\bra{j}, \hspace{0.5cm}e_i \geq 0. \end{align*} $$ Then for the $\ket{\Phi^+}$ state we have $$ \begin{align*} \bra{\Phi^+}E_A\otimes\Bbb I_B\ket{\Phi^+} =& \cfrac{1}{2}(\bra{00}E_A\otimes\Bbb I_B\ket{00}+\bra{00}E_A\otimes\Bbb I_B\ket{11}\newline &+\bra{11}E_A\otimes\Bbb I_B\ket{00}+\bra{11}E_A\otimes\Bbb I_B\ket{11})\newline =&\cfrac{1}{2}(\bra{0}\sum_{i}e_i\ket{i}\braket{i\vert 0}+\bra{1}\sum_{i}e_i\ket{i}\braket{i\vert 1})\newline =& \cfrac{1}{2}(e_0+e_1) = \cfrac{1}{2}\text{tr}(E_A). \end{align*} $$ For $\ket{\Phi^-}$ we get something very similar: $$ \begin{align*} \bra{\Phi^-}E_A\otimes\Bbb I_B\ket{\Phi^-} =& \cfrac{1}{2}(\bra{00}E_A\otimes\Bbb I_B\ket{00}-\bra{00}E_A\otimes\Bbb I_B\ket{11}\newline &-\bra{11}E_A\otimes\Bbb I_B\ket{00}+\bra{11}E_A\otimes\Bbb I_B\ket{11})\newline =&\cfrac{1}{2}(\bra{0}\sum_{i}e_i\ket{i}\braket{i\vert 0}+\bra{1}\sum_{i}e_i\ket{i}\braket{i\vert 1})\newline =& \cfrac{1}{2}(e_0+e_1) = \cfrac{1}{2}\text{tr}(E_A), \end{align*} $$ and showing the same for $\ket{\Psi^\pm}$ is straightforward.
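
A numerical check of part a) with a random positive operator (a sketch assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(2)

# Random positive operator E = M^dagger M on Alice's qubit
M = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
E = M.conj().T @ M

s = 1 / np.sqrt(2)
bell = [np.array([1, 0, 0, 1]) * s,    # |Phi+>
        np.array([1, 0, 0, -1]) * s,   # |Phi->
        np.array([0, 1, 1, 0]) * s,    # |Psi+>
        np.array([0, 1, -1, 0]) * s]   # |Psi->

EI = np.kron(E, np.eye(2))             # E acts on Alice's qubit, identity on Bob's
print([round((b.conj() @ EI @ b).real, 6) for b in bell])  # four equal numbers...
print(round(np.trace(E).real / 2, 6))                      # ...all equal to tr(E)/2
```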

b)

If a malevolent third party Eve wants to eavesdrop on Alice's qubit, she must perform a measurement $M_m$ on qubit $A$ (formally the measurement $(M_m)_A\otimes\Bbb I_B$). The probability of obtaining the outcome $m$ is $$ \begin{align*} p(m) &= \bra\psi [(M_m)_A\otimes\Bbb I_B]^\dagger[(M_m)_A\otimes\Bbb I_B]\ket\psi\newline &= \bra\psi[(M_m)_A^\dagger(M_m)_A\otimes\Bbb I_B]\ket\psi, \end{align*} $$ and since $(M_m)_A^\dagger(M_m)_A$ is a positive operator, part a) tells us that this probability is the same no matter which of the four Bell states $\ket\psi$ is. Eve's measurement statistics therefore carry no information about which state was sent, so she cannot infer anything from this eavesdropping.

2.71

Let $\rho$ be a density operator. Since all density operators are positive (and thus Hermitian), $\rho$ has a spectral decomposition $\rho = \sum_{i} p_{i} \ket{i}\bra{i}$ with $p_{i} \geq 0$. Then $$ \begin{align*} \rho^2 &= \sum_{i} p_{i} \ket{i}\bra{i}\sum_{j} p_{j} \ket{j}\bra{j}\newline &= \sum_{i,j}p_i p_j \ket{i}\braket{i\vert j}\bra{j}\newline &= \sum_i p_i^2\ket{i}\bra{i}, \end{align*} $$ so that $\text{tr}(\rho^2) = \sum_i p_i^2$. Since the $p_i$ are non-negative and sum to $\text{tr}(\rho) = 1$, each $p_i \in [0,1]$, so $p_i^2 \leq p_i$ and $$\text{tr}(\rho^2) = \sum_i p_i^2 \leq \sum_i p_i = \text{tr}(\rho) = 1.$$

If we have a pure state, then $\rho = \ket\psi\bra\psi$, where $\ket\psi$ is an eigenvector of $\rho$, and $\text{tr}(\rho^2) = \text{tr}(\ket\psi\braket{\psi\vert\psi}\bra\psi) = \text{tr}(\ket\psi\bra\psi) = 1$.

Conversely, $\text{tr}(\rho^2) = 1$ means $\sum_i p_i^2 = \sum_i p_i$, which can only hold if a single element in the sum over $i$ survives (with corresponding eigenvalue $p_i = 1$) while all others vanish, i.e. if $\rho$ is a pure state.
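
Both cases can be illustrated numerically with a random mixed state and a random pure state (a sketch assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(3)

# Random mixed state: A A^dagger is positive, so normalising its trace gives a density operator
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
rho_mixed = A @ A.conj().T
rho_mixed /= np.trace(rho_mixed)

# Pure state: rho = |psi><psi| for a normalised |psi>
psi = rng.standard_normal(2) + 1j * rng.standard_normal(2)
psi /= np.linalg.norm(psi)
rho_pure = np.outer(psi, psi.conj())

print(np.trace(rho_mixed @ rho_mixed).real)  # strictly less than 1 (generically)
print(np.trace(rho_pure @ rho_pure).real)    # 1.0
```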