Phys 512 Problem Set 3

1) Before we start the main part of this problem set, let’s warm up with
a linear least-squares problem. Look at the file dish_zenith.txt. This contains
photogrammetry data for a prototype telescope dish. Photogrammetry attempts
to reconstruct surfaces by working out the 3-dimensional positions of targets
from many pictures (as an aside, the algorithms behind photogrammetry are
another fun least-squares-type problem, but beyond the scope of this class).
The end result is that dish_zenith.txt contains the (x,y,z) positions in mm of a
few hundred targets placed on the dish. The ideal telescope dish should be a
rotationally symmetric paraboloid. We will try to measure the shape of that
paraboloid, and see how well we did.

a) Helpfully, I have oriented the points in the file so that the dish is pointing
in the +z direction (in the general problem, you would have to fit for the direction
the dish is pointing in as well, but we will skip that here). For a rotationally
symmetric paraboloid, we know that
z − z0 = a[(x − x0)^2 + (y − y0)^2]
and we need to solve for x0, y0, z0, and a. While at first glance this problem may
appear non-linear, show that we can pick a new set of parameters that make
the problem linear. What are these new parameters, and how do they relate to
the old ones?
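
(One way to see that, sketched here rather than derived in full: expanding the
square gives

    z = a(x^2 + y^2) + b x + c y + d,
    with b = −2a x0, c = −2a y0, d = z0 + a(x0^2 + y0^2),

which is linear in the new parameters (a, b, c, d); the original parameters can
then be recovered as x0 = −b/(2a), y0 = −c/(2a), z0 = d − (b^2 + c^2)/(4a).)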

b) Carry out the fit. What are your best-fit parameters?
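
A rough sketch of what such a linear fit might look like, assuming the
linearization from part a) and a plain three-column x, y, z text file (this is
an illustration, not the posted solution):

    import numpy as np

    # load the (x, y, z) target positions in mm
    x, y, z = np.loadtxt('dish_zenith.txt').T

    # linear model from part a): z = a*(x^2 + y^2) + b*x + c*y + d
    A = np.column_stack([x**2 + y**2, x, y, np.ones_like(x)])
    pars, *_ = np.linalg.lstsq(A, z, rcond=None)
    a, b, c, d = pars

    # map back to the paraboloid parameters
    x0, y0 = -b / (2 * a), -c / (2 * a)
    z0 = d - (b**2 + c**2) / (4 * a)
    print('a =', a, 'x0 =', x0, 'y0 =', y0, 'z0 =', z0)
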
c) Estimate the noise in the data, and from that, estimate the uncertainty in
a. Our target focal length was 1.5 metres. What did we actually get, and what
is the error bar? In case all facets of conic sections are not at your immediate
recall, a parabola that goes through (0, 0) can be written as y = x^2/(4f), where
f is the focal length. When calculating the error bar for the focal length, feel
free to approximate using a first-order Taylor expansion.
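
Continuing the sketch from part b) (same hypothetical variable names), one way
to estimate the noise and propagate it to the focal length:

    # estimate the per-point noise from the scatter of the residuals
    resid = z - A @ pars
    sigma = np.std(resid)

    # parameter covariance for the linear fit, assuming uncorrelated noise
    cov = sigma**2 * np.linalg.inv(A.T @ A)
    sig_a = np.sqrt(cov[0, 0])

    # comparing z - z0 = a*r^2 with y = x^2/(4f) gives f = 1/(4a),
    # so to first order sigma_f = sigma_a / (4 a^2)
    f = 1 / (4 * a)              # in mm, since the data are in mm
    sig_f = sig_a / (4 * a**2)
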
BONUS: Of course, we have just assumed that the dish is circularly symmetric. In real life, we’d obviously need to check that. The leading-order correction
would give us a dish that looked like z = a x'^2 + b y'^2 if the vertex (bottom) of the
dish was at (0, 0, 0) and our coordinate system was aligned with the principal
axes of the dish. We won’t usually have the benefit of being aligned like that;
instead we’ll usually be rotated by some (unknown) angle θ, so our observed
coordinates x, y will be related to the original coordinates x', y' by a rotation:
x = cos(θ)x' + sin(θ)y' and y = −sin(θ)x' + cos(θ)y'. Find the focal lengths
of the two principal axes (and don’t forget we can still have arbitrary offsets
x0, y0, z0). Is the dish round?
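
(One possible route, sketched as a suggestion rather than the required method:
substituting the inverse rotation into z = a x'^2 + b y'^2 gives a model of the
form z = A x^2 + B y^2 + C xy plus linear and constant terms, which is still
linear in its coefficients; the principal curvatures a and b are then the
eigenvalues of the 2×2 matrix [[A, C/2], [C/2, B]], and each focal length
follows from f = 1/(4·curvature).)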

For the bulk of this problem set, we will use the power spectrum of the
Cosmic Microwave Background (CMB) to constrain the basic cosmological parameters of the universe. The parameters we will measure are the Hubble constant, the density of regular baryonic matter, the density of dark matter, the
amplitude and tilt of the initial power spectrum of fluctuations set in the very
early universe, and the Thomson scattering optical depth between us and the
CMB. In this exercise, we will only use intensity data, which does a poor job of
constraining the optical depth.

For the data, we will use the WMAP satellite 9-year data release. (The
Planck satellite has new and better data, but its greater sensitivity means it is
more complicated to use). The data can be found at https://lambda.gsfc.nasa.gov/.
Browse down to WMAP data products, and go to the TT power spectra link.
We want the combined (not binned) version of the spectrum. This gives the
measured variance of the sky as a function of multipole l. WMAP does not
measure the monopole, and the dipole is set by the motion of the Earth/Milky
Way relative to the CMB reference frame. So, the spectrum starts with the
quadrupole (l = 2). The first column is the multipole index, the second is the
measured power spectrum, and the third is the error in that. For simplicity,
we will treat the errors as Gaussian and uncorrelated, though that is not quite
accurate. The final two columns break down the error into the instrument noise
part and the “cosmic variance” part, due to the fact that we only have a finite
number of modes in the sky to measure. These columns can safely be ignored.
Further description, including plots, can be found in the WMAP 9-year result
paper https://arxiv.org/pdf/1212.5226.pdf.
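
As a minimal sketch of reading those columns with numpy (the file name below is
a placeholder for whatever the downloaded combined TT spectrum file is called):

    import numpy as np

    # columns: l, measured power, error, (instrument noise, cosmic variance)
    wmap = np.loadtxt('wmap_tt_combined.txt')   # placeholder file name
    ell = wmap[:, 0]     # multipole, starting at l = 2
    spec = wmap[:, 1]    # measured power spectrum
    errs = wmap[:, 2]    # errors, treated as Gaussian and uncorrelated
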

You’ll also need to be able to calculate model power spectra as a function
of input parameters. You can get the source code for CAMB from Antony
Lewis’s github page: https://github.com/cmbant. There’s a short tutorial online at https://camb.readthedocs.io/en/latest/CAMBdemo.html as well. Note
that CAMB returns the power spectrum starting with the monopole, so you
may need to manually remove the first two entries. You might want to try e.g.
“pip3 install camb”, which worked for me (but you may have to install a Fortran
compiler first; gfortran is open source and freely available).
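
A rough sketch of pulling a TT spectrum out of the CAMB Python package,
following the pattern in the CAMBdemo notebook (the parameter values are just
the ones quoted in question 2; check the call signatures against the CAMB
documentation):

    import camb

    pars = camb.CAMBparams()
    pars.set_cosmology(H0=65, ombh2=0.02, omch2=0.1, tau=0.05)
    pars.InitPower.set_params(As=2e-9, ns=0.96)
    pars.set_for_lmax(2500, lens_potential_accuracy=0)

    results = camb.get_results(pars)
    powers = results.get_cmb_power_spectra(pars, CMB_unit='muK')
    tt = powers['total'][:, 0]   # TT column; rows start at the monopole (l = 0)
    tt = tt[2:]                  # drop monopole and dipole to match WMAP's l = 2 start
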

To help you out, I have posted a sample script that calculates the power
spectrum from CAMB, reads in the WMAP data, and plots them on top of
each other for one guess for the cosmological parameters.
2) Using Gaussian, uncorrelated errors, what do you get for χ^2 for the model
in my example script, where the Hubble constant H0 = 65 km/s/Mpc, the physical
baryon density ωb h^2 = 0.02, the cold dark matter density ωc h^2 = 0.1, the
optical depth τ = 0.05, the primordial amplitude of fluctuations is
As = 2 × 10^−9, and the slope of the primordial power law is 0.96 (where 1
would be scale-invariant).

The baryon/dark matter densities are defined relative to the critical density
required to close the universe, scaled by h^2 where h ≡ H0/100 ∼ 0.7. Note that
the universe is assumed to be spatially flat (for reasons too long to justify here),
so the dark energy density relative to critical for these parameters would be
1 − (ωb h^2 + ωc h^2)/h^2 = 71.6% for the model assumed here. (You may want to
play around plotting different models as you change parameters to get a sense
for how the CMB depends on them.) If everything has gone well, you should
get something around 1588 (please give a few extra digits) for χ^2 for this model.
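
For what it’s worth, with the data and model arrays from the sketches above,
the χ^2 in question is just (again assuming the hypothetical variable names
used earlier):

    # Gaussian, uncorrelated errors: chi^2 = sum of squared, normalized residuals
    model = tt[:len(spec)]                      # truncate model to the data's l range
    chisq = np.sum(((spec - model) / errs)**2)
    print('chi^2 =', chisq)
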

3) Keeping the optical depth fixed at 0.05, write a Newton’s method/Levenberg-
Marquardt minimizer and use it to find the best-fit values for the other
parameters, and their errors. What are they? If you were to keep the same set of
parameters but now float τ, what would you expect the new errors to be? Note
that CAMB does not provide derivatives with respect to parameters, so you’ll
have to come up with something for that. Please also provide a plot showing
why we should believe your derivative estimates.
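
Since the derivative scheme is left open, here is one common option sketched
out: two-sided finite differences of the model spectrum with respect to each
parameter. Here get_spectrum is a hypothetical wrapper returning the model TT
spectrum for a parameter vector, and the step sizes dpars would need tuning
(which is presumably what the requested plot is meant to justify):

    import numpy as np

    def num_derivs(get_spectrum, pars, dpars):
        """Two-sided finite-difference derivatives of the model spectrum."""
        derivs = []
        for i in range(len(pars)):
            up = np.array(pars, dtype=float)
            dn = np.array(pars, dtype=float)
            up[i] += dpars[i]
            dn[i] -= dpars[i]
            derivs.append((get_spectrum(up) - get_spectrum(dn)) / (2 * dpars[i]))
        return np.array(derivs).T   # shape (n_ell, n_params)

With a derivative matrix A and residuals r = data − model, a Gauss-Newton or
Levenberg-Marquardt step is δm = (A^T N^−1 A + λ diag)^−1 A^T N^−1 r, and
(A^T N^−1 A)^−1 at the best fit gives an estimate of the parameter covariance
used for the error bars.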

4) Now write a Markov-chain Monte Carlo where you fit the basic 6 parameters, including τ . However, note that we know the optical depth can’t be
negative, so you should reject any steps that try to sample a negative τ . What
are your parameter limits now? Please also present an argument as to why you
think your chains are converged. As a reminder, you can draw samples of correlated data from a covariance matrix with r = np.linalg.cholesky(mat); d =
np.dot(r, np.random.randn(r.shape[0])). You will want to use the covariance
matrix from part 3) when drawing samples for the MCMC.
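
A bare-bones Metropolis loop consistent with that hint might look like the
sketch below; chifun, the starting parameters, and the proposal covariance are
all assumed to come from the earlier parts, and the index of τ in the parameter
vector (itau) is hypothetical:

    import numpy as np

    def run_chain(chifun, start, cov, nstep=5000, itau=3):
        """Simple Metropolis sampler; chifun returns chi^2 for a parameter vector."""
        r = np.linalg.cholesky(cov)
        pars = np.array(start, dtype=float)
        chi = chifun(pars)
        chain, chivec = [], []
        for _ in range(nstep):
            trial = pars + np.dot(r, np.random.randn(r.shape[0]))
            if trial[itau] >= 0:                 # reject steps with negative tau
                chi_trial = chifun(trial)
                if np.random.rand() < np.exp(0.5 * (chi - chi_trial)):
                    pars, chi = trial, chi_trial
            chain.append(pars.copy())
            chivec.append(chi)
        return np.array(chain), np.array(chivec)
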

5) The Planck satellite has independently measured the CMB sky, and finds
that the optical depth is 0.0544 ± 0.0073. Run a chain where you add this in
as a prior on the value of τ . What are your new parameter values/constraints?

You can also take your chain from part 4) and importance sample it (weighting
by the Planck τ prior). How do those results compare to the full chain results?
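
One way to do that importance sampling, sketched assuming the part-4 chain is
an (nstep × nparam) array and τ sits in column itau:

    import numpy as np

    # Gaussian Planck prior on tau: 0.0544 +/- 0.0073
    weight = np.exp(-0.5 * ((chain[:, itau] - 0.0544) / 0.0073)**2)
    weight /= np.sum(weight)

    # weighted means and standard deviations of each parameter
    means = np.sum(weight[:, None] * chain, axis=0)
    stds = np.sqrt(np.sum(weight[:, None] * (chain - means)**2, axis=0))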