Musings on $\pi$ Day

1. The Ubiquity of $\pi$
     or, Life, the Universe, and Everything: A Simple Statement of Fact

$\pi$ is everywhere you look. It is even the case that there is $\pi$ in the sky. We need $\pi$ in order to live and function. These three observations are fundamental to the way our universe is put together.

2. The Value of $\pi$
     or, How Much is that Round Thing in the Window?

(with apologies to Patti Page) So, for fun, let’s calculate $\pi$ using Ramanujan’s famous infinite series formula and check the error against a clever arbitrary-precision algorithm, based on the Chudnovsky brothers’ improvement on Ramanujan’s series, that is correct to as many digits as we care to specify. While we’re at it, we’ll include a straight-up implementation of the Chudnovsky brothers’ series approximation, too.

Ramanujan’s formula (see also here, and here):

\begin{equation}
\dfrac{1}{\pi} = \dfrac{2\sqrt 2}{9801}
\sum_{n=0}^{\infty} \dfrac{\left(4 n\right)!}{\left(n!\right)^4}
\dfrac{1103 + 26390\,n}{396^{4n}} \label{eq:ram}
\end{equation}

As mentioned here, the Chudnovsky brothers derived a Ramanujan-like formula that converges considerably faster(!) than Ramanujan’s original:

\begin{equation}
\dfrac{1}{\pi} = \dfrac{1}{53360\sqrt{640320}}
\sum_{n=0}^{\infty} \left(-1\right)^n
\dfrac{\left(6 n\right)!}{\left(n!\right)^3\left(3 n\right)!}
\dfrac{13591409 + 545140134\,n}{640320^{3n}} \label{eq:chud}
\end{equation}

We can take advantage of Python’s decimal module for arbitrary-precision arithmetic, carrying as many digits as we might want when calculating each term of the series. Doing so, we find the following errors after each successive iteration of the two series (note the exponents!):

      Ramanujan    Chudnovsky
 n    Rpi(n)-pi     Cpi(n)-pi
--  -----------  ------------
 0     7.642E-8    -5.903E-14
 1    6.395E-16     3.078E-28
 2    5.682E-24    -1.721E-42
 3    5.239E-32         1E-56
 4    4.944E-40    -5.959E-71
 5    4.741E-48     3.609E-85
 6    4.599E-56    -2.212E-99
 7      4.5E-64    1.368E-113
 8    4.433E-72   -8.515E-128
 9    4.391E-80    5.331E-142
10     4.37E-88   -3.353E-156
11    4.364E-96    2.117E-170
12   4.372E-104   -1.341E-184
13   4.393E-112    8.513E-199
14   4.424E-120    -5.42E-213
[Plot: $\pi$ series approximation error vs. number of series terms $n$]

As we can see, Ramanujan’s formula, eq. \eqref{eq:ram}, gives eight orders of improvement (i.e., eight more digits of accuracy) per successive iteration, while the Chudnovsky formula, eq. \eqref{eq:chud}, yields fourteen orders of precision per iteration!

To illustrate, after fifteen Chudnovsky series terms, the difference between the series approximation and the actual value of $\pi$ is:

-0.00000 00000 00000 00000 00000 00000 00000 00000 00000 00000
00000 00000 00000 00000 00000 00000 00000 00000 00000 00000
00000 00000 00000 00000 00000 00000 00000 00000 00000 00000
00000 00000 00000 00000 00000 00000 00000 00000 00000 00000
00000 00000 00542...

Even just the first Chudnovsky term by itself (or just the first two Ramanujan terms) gives $\pi$ to almost machine precision ($2^{-52}\approx 2.22\!\times\!10^{-16}$) on a 64-bit computer.
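
To see this, here is a minimal sketch (mine, not part of the code in section 4 below) that evaluates just the $n=0$ Chudnovsky term in ordinary 64-bit floats; the closed form follows directly from eq. \eqref{eq:chud}:

import math

# n = 0 term only: 1/pi ~ 13591409/(53360*sqrt(640320)),
# so pi ~ 53360*sqrt(640320)/13591409
pi0 = 53360*math.sqrt(640320)/13591409
print(pi0 - math.pi)   # roughly -6e-14, already close to double precision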

For another perspective (thanks for the idea, Daniel Greenspan), let’s calculate (roughly!) the total number of atoms in the universe. As you might imagine, this will be a big number. We’ll break it down into two parts.

First, how many stars are in the universe? This is a number we can estimate from observations of galaxies and the amount of light that they emit. Since we can determine the distances to galaxies, we can convert the light we receive from each into a luminosity and add it all up. Modern astronomical estimates for the equivalent number of solar-mass stars in our universe, based on the light we detect coupled with the distances to the galaxies emitting it, all come in at around

\begin{equation}
N_{stars} \approx 2\!\times\!10^{23} \label{eq:Nstars}
\end{equation}

This is equivalent to the mass of the visible universe divided by the mass of the Sun.

The amount of baryonic—that is, visible, or what we think of as “normal”—matter in the universe is only a small fraction of the total mass of the universe. Our universe, based on several different kinds of observations, is $68.3\%$ dark energy, $26.8\%$ dark matter, and $4.9\%$ ordinary matter. But that’s another story. We’ll just stick to the ordinary matter that we can detect via the electromagnetic radiation it emits.

Second, how many atoms are in a star the mass of our Sun? Now, the Sun has a measured mass of $M_{\odot} = 1.9884\!\times\!10^{30}$ kg (†) and is composed of about $74.9\%$ hydrogen and $23.8\%$ helium by mass (‡). For this exercise, we will assume that the mass contributions of the elements besides hydrogen and helium are negligible. The mass of a hydrogen atom is $1.00784$ amu, and the mass of a helium atom is $4.002602$ amu. One amu (atomic mass unit) is $1.66053904\!\times\!10^{-27}$ kg. The approximate number of atoms in the Sun, $N_{\odot}$, is then

\begin{equation}
N_{\odot} \approx
\dfrac{0.749 M_{\odot}}{1.00784\mathrm{amu}} +
\dfrac{0.238 M_{\odot}}{4.002602\mathrm{amu}}
\approx 9.6 \!\times\!10^{56} \mathrm{atoms} \label{eq:Nsuns}
\end{equation}

Hence, combining \eqref{eq:Nstars} and \eqref{eq:Nsuns}, the number of atoms in the universe, $N_{universe}$, is, roughly,

\begin{equation}
N_{universe} \approx N_{stars}\cdot N_{\odot}
\approx 1.9\!\times\!10^{80} \mathrm{atoms}
\end{equation}
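
For the skeptical, here is a quick Python check of that arithmetic, plugging in the numbers quoted above (a sketch; all inputs are the rounded figures from the text):

M_sun   = 1.9884e30        # solar mass, kg
amu     = 1.66053904e-27   # atomic mass unit, kg
N_sun   = 0.749*M_sun/(1.00784*amu) + 0.238*M_sun/(4.002602*amu)
N_stars = 2e23             # equivalent solar-mass stars
print('{:.2g} atoms per Sun'.format(N_sun))                  # ~9.6e+56
print('{:.2g} atoms in the universe'.format(N_sun*N_stars))  # ~1.9e+80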

Notice from the table above that the error of the Chudnovsky series after only the first six terms is about one part in $2.8\!\times\!10^{84}$—a number that is several orders of magnitude larger than the number of atoms in the entire universe!

3. A Short Introduction to Astrophysics
    or, Is there Really $\pi$ in the Sky?
    or, Here are the Footnotes

You might not have seen this coming. But here it is, wherein we demonstrate that, indeed, there is $\pi$ in the sky.

(†) We can determine the mass of the Sun by measuring the motions of the planets and asteroids in our Solar System, and then using Newton’s Law of Gravity. As Kepler discovered from Tycho Brahe’s meticulous observations, and Newton proved mathematically after he invented calculus and then turned his attention to the Moon’s motion, the orbital period $P$ of a body of mass $m$ and its mean distance $a$ from the Sun with mass $M_{\odot}$ are related by

\begin{equation}
P^2 = \dfrac{4 {\pi}^2}{G\left(M_{\odot}+m\right)} a^3
\end{equation}

Look at that: $\pi$ is in this equation that describes what we see in the sky.
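
For a concrete check, here is a minimal sketch that solves this relation for $M_{\odot}$ (neglecting $m$) using Earth’s orbit as the test body; the orbital values $a$ and $P$ below are my inputs, not numbers from the text:

import math

# Kepler's third law solved for the Sun's mass, neglecting the planet's mass m.
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
a = 1.496e11     # Earth-Sun mean distance, m
P = 3.156e7      # one year, s
M_sun = 4*math.pi**2*a**3/(G*P**2)
print('{:.4g} kg'.format(M_sun))   # roughly 1.99e+30 kg, the value quoted above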

(‡) We can determine the relative abundances of the elements that make up the Sun (and almost any star) by measuring, with a spectroscope, the amount of radiation absorbed by those elements in the atmosphere of the Sun (called the photosphere). Every element has its own discrete spectral signature in the form of absorption lines at specific sets of wavelengths. The amount of radiation absorbed by an element, relative to the other elements present, and in combination with the measured temperature, luminosity, and mass of a star, tells us what fraction of the star’s photosphere consists of that element. (We also need to know the distance to the star, but that’s a long story.)

Stars are, roughly speaking (i.e., ignoring the radiation absorbed by the elements in their photospheres), black body radiators. This means we can relate their luminosity (total radiated energy per unit time) to their radius $R$ and their effective surface temperature, $T_{eff}$. Simply put, the luminosity is the surface area of the star ($4\pi R^2$) times the amount of radiation emitted per unit surface area of the star:

\begin{equation}
L = 4\pi R^2 \sigma T_{eff}^4 \label{eq:L}
\end{equation}

where $\sigma = \dfrac{2\pi^5 k^4}{15 c^2 h^3}$ is the Stefan-Boltzmann constant, $k=1.38064852\!\times\!10^{-23}$ joules per kelvin ($J\cdot K^{-1}$) is the Boltzmann constant, $c$ is the speed of light in vacuum, and $h=6.62607015\!\times\!10^{-34}\,J\cdot s$ is the Planck constant from quantum mechanics. Eq. \eqref{eq:L} is a consequence of the physics of black body radiation.
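
As a quick numerical sanity check of that expression for $\sigma$, here is a minimal sketch (the speed of light $c$ is the only input I have added; $k$ and $h$ are the values quoted above):

import math

# Stefan-Boltzmann constant from sigma = 2 pi^5 k^4 / (15 c^2 h^3)
k = 1.38064852e-23   # Boltzmann constant, J/K
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s
sigma = 2*math.pi**5*k**4/(15*c**2*h**3)
print('{:.4g} W m^-2 K^-4'.format(sigma))   # about 5.67e-08, the familiar value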

Look at that: $\pi$ is integral to these relations that describe what we see in the sky, too.

(Don’t ask about the quantum mechanics connection. You can go down that rabbit hole by following the provided links. Quantum mechanics hurts my head.)

4. Full Disclosure
     or, So This is Where That Came From

The Python code that produces the Ramanujan and Chudnovsky results (table and plot) is:


import os
import decimal
from decimal import Decimal as D

import numpy as np

from utils import mutils
from utils import mplot as plt

prec = 300  # Set the number of digits of precision
            # for calculations.
decimal.getcontext().prec = prec 

def dfac(n):
    """ Arbitrary digits factorial. """
    m = D('1')
    for k in range(1,n+1):
        m *= k
    return m

def Rpi(n):
    """
    Calculate pi using n iterations of Ramanujan's
    formula.
    """
    s = D('0')
    for k in range(n+1):
        facterm = dfac(4*k)/dfac(k)**4
        num = D('1103') + D('26390')*k
        den = D('396')**(4*k)
        s += facterm*num/den
    s *= D('8').sqrt()/D('9801')
    return 1/s

def Cpi(n):
    """
    Calculate pi using n iterations of the Chudnovsky
    brothers' Ramanujan-like formula.
    """
    s = D('0')
    for k in range(n+1):
        facterm = dfac(6*k)/(dfac(k)**3*dfac(3*k))
        num = D('13591409') + D('545140134')*k
        den = D('640320')**(3*k)
        s += D('-1')**k*facterm*num/den
    s *= D('1')/(D('53360')*D('640320').sqrt())
    return 1/s

# Print a table of the error of n iterations
# of Ramanujan's formula.
print('     Ramanujan   Chudnovsky')
print(' n   Rpi(n)-pi    Cpi(n)-pi')
print('--  ----------  -----------')
fmt = '{:2d}  {:>10s}  {:>11s}'
c4  = decimal.Context(prec=4)
rerrs = []
cerrs = []
for n in range(15):
    exact_pi = D(mutils.pi_chudnovsky(prec))
    errR = Rpi(n) - exact_pi*D('1e-{:d}'.format(prec))
    errC = Cpi(n) - exact_pi*D('1e-{:d}'.format(prec))
    normerrR = errR.normalize(c4)
    normerrC = errC.normalize(c4)
    print(fmt.format(n, str(normerrR), str(normerrC)))
    rerrs.append(float(errR))
    cerrs.append(float(errC))

fig = plt.figure(figsize=(8.2, 5))
xlab = [r'$\mathrm{number\ of\ series\ terms}\ n$', 12]
ylab = [r'$\mid f(n) - \pi \mid$', 12]
pt = [r'$\mathrm{\pi\ series\ approximation\ error}$',
      14]
labs = [[r'$f(n) = \mathrm{Ramanujan}$', 10],
        r'$f(n) = \mathrm{Chudnovsky}$']
xticks = np.arange(15)
yticks = np.array([0.1**k for k in range(0, 240, 30)])
ylim = (1e-220, 1e-1)
plt.lineplot([np.array(rerrs), abs(np.array(cerrs))],
             np.arange(15), ['k-', 'r-'], [1, 1], 
             ylim=ylim, logy=True, xlab=xlab,
             ylab=ylab, xticks=xticks, yticks=yticks,
             doxticks='bottom', doyticklabels='both', 
             dolegend=True, labels=labs, plottitle=pt)
fname = (os.environ['PYTHONPATH'] +
         '/misc/Ramanujan pi.jpg')
plt.savefig(fname, dpi=300)


Thor’s Day Morning Mathematical Musings

Have you had your caffeine injection yet? Well, then, here are three puzzles (with answers, but the answers are not helpful!):

  1. Can you completely mix a mug of coffee, such that, at every point inside the mug, the coffee at that point is different after stirring from before stirring? Go get a cup of joe (or tea), stir it, and see what you think.
    Answer: no. There will always be at least one point of the coffee that ends up exactly where it started once the liquid has settled, no matter how vigorously you stir it. It is mathematically impossible for there to be no such point inside the mug.
  2. Do there exist on the surface of the Earth, at any given time, two antipodal points that have exactly the same surface temperature?

    Answer: yes. What about two antipodal points that have exactly the same barometric pressure? Also yes. Two antipodal points that have exactly the same surface temperature and exactly the same barometric pressure? Yet again, yes. This is mathematically inescapable.‌

    [Figure: antipodal points on a sphere]
    At any time there exists a continuous curve on the Earth’s surface on which every point has an antipodal point that also lies on the curve and that has the same temperature. There is a different continuous curve on which antipodal points have the same pressure. And the two curves must intersect, since both encircle the globe, each separating it into two pieces. So that means there must be, at any time, at least one pair of antipodal points somewhere on the surface of the Earth that have the same temperature and the same pressure.

    You’ve probably surmised by now—you drank that cup of joe, right?—that this is true not just for temperature and pressure but for any two continuously variable parameters (such as temperature, pressure, humidity, wind speed, solar and terrestrial radiation, cloud ceiling, particulate density, atmospheric composition, and so on). You would be correct.
  3. Think of a multi-digit positive integer. Any such number will do—for example, $76.$ Now add up its digits and subtract that sum from the original number. $76\,- (7+6) = 63.$ Now apply this algorithm to the new number: $63\,- (6+3) = 54.$ Keep doing this until the resulting number has shrunk to just one digit. $54\,- (5+4) = 45$, $\dots, 18\,- (1+8) = 9.$

    Ta da! (Yes, really.) No matter your starting number (as long as it has more than one digit), you will always end up at $9$. (The reason: a number and the sum of its digits leave the same remainder on division by nine, so each subtraction produces a positive multiple of nine, and the only single-digit place such a sequence can stop is $9$ itself.)


    Here is a quick and dirty python program that performs this task for any positive integer, returning the end result (which had better be nine!) and the number of iterations it took to get there:

    def digi9(n):
        count = 0
        while True:
            k = sum(int(d) for d in str(n))  # sum of the digits of n
            m = n - k
            if len(str(m)) == 1:
                return m, count+1
            n = m
            count += 1

    Let’s consider an example:

    >>> digi9(72459075)
     (9, 2191634)

    Starting with the randomly chosen number $72,459,075$, over two million iterations later we indeed end at $\dots, 27\,-(2+7) = 18,$ $18\,- (1+8) = 9.$

How are the answers to these little puzzles so? Welcome to the world of fixed point theorems! In mathematics, a fixed point of a map is a member of a set that the map sends back to itself. The set can be anything—the set of integers, a Euclidean line, surface, or volume, etc. This concept has wide application and profound consequences in many branches of mathematics. The above puzzles are examples of fixed points in their respective sets. Put that in your mug and stir it!

Now go get some more coffee.

Show Me!

Suppose we have a continuous function $f(x)$ such that $f(x) \in [a,b]~~\forall~x \in [a,b]$. That is, the function maps its domain back into itself. Then $f(x)$ has a fixed point $f(c) = c$ somewhere in the closed interval $a \le c \le b$.

Why? Well, it must be true that

\begin{equation}f(a) \ge a~~~ \mathrm{and} ~~~f(b) \le b \label{condition}\end{equation}

The intermediate value theorem says that if a function $f(x)$ is continuous on a closed interval $[a,b]$, then, for any value $c$ between $f(a)$ and $f(b)$, there must exist at least one value $x_0 \in [a,b]$ such that $f(x_0) = c.$

Since the range of our function is restricted to its domain, $f([a,b]) \subseteq [a,b]$, we have from eq. \eqref{condition} that $f(a)-a \ge 0$ and $f(b)-b \le 0.$ If we define $g(x) \equiv f(x)-x$, this is $g(a) \ge 0 \ge g(b).$ Since $g$ is continuous, by the intermediate value theorem there must exist a value $c \in [a,b]$ such that $g(c) = 0$. Hence, there must exist at least one fixed point, $f(c) = c.$

This—or, rather, its generalization to any Euclidean space—is essentially a statement of the Brouwer fixed point theorem:

Every continuous function from a closed ball of a Euclidean space into itself has a fixed point.
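
As a concrete numerical illustration (the choice of $f(x) = \cos x$ on $[0,1]$ is mine, not the post’s), simple iteration walks right into the fixed point the theorem guarantees:

import math

# cos maps [0, 1] into itself, so it must have a fixed point cos(c) = c.
x = 0.5
for _ in range(100):
    x = math.cos(x)
print(x)   # about 0.739085, the fixed point of cos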

Legend has it that Brouwer was led to his theorem by pondering the surface of a cup of coffee upon stirring in a lump of sugar. (That someone would debase a good cup of coffee with sugar is a wholly different issue.)

Vsauce has an interesting video about fixed points, from which I stole the three examples above:

 

Is Trump’s Lead Significant?

[Figure: Snapshot of polling results among Republican voters over the past three months]
At the moment, The Donald leads nationally among Republicans, with 29.8% favorability. Roughly 30% of polled Republicans currently favor Trump over the rest of the Republican Field of Clowns. People argue that 30 percent is not terribly impressive. Are they right?

You have to interpret more carefully than that. Roughly 30% of polled Republicans prefer Trump over the others. That last bit is important: that many other Clowns are vying for the prize matters in the interpretation of Trump’s 29.8 percent.

Since there are fifteen Clowns in this poll, an even distribution of favorability would be 6.7% per Clown. So Trump’s 29.8% is a pretty big outlier. How big? The mean of this favorability distribution is $\mu = 6.1$%, pretty close to the 6.7% expectation. The standard deviation of this distribution of Clown favorability ratings is $\sigma = 7.4$%. Trump’s $p = 29.8$% therefore is a $\Delta = \dfrac{\left|p - \mu\right|}{\sigma} = 3.2$-sigma outlier, which is statistically significant. What this means is that the chance of that being just a statistical fluke (i.e., the likelihood that a random draw from a Gaussian distribution with $\mu = 6.1$% and $\sigma = 7.4$% would land at least that far from the mean) is $1 - \mathrm{erf} \left(\dfrac{\Delta}{\sqrt{2}}\right) = 0.0014 = 0.14$ percent.

In the physical sciences, a result lying three or more standard deviations away from the null hypothesis value is the typical bar for publishable significance. $\mathrm{erf}$ is the error function:

$$\mathrm{erf}(z) = \dfrac{1}{\sqrt{\pi}} \int_{-z}^z e^{-t^2} dt$$

and is the probability of a random variate lying between $-z$ and $+z$ in a normal distribution with zero mean and variance $\frac 12$ (standard deviation $1/\sqrt{2}$). Now, the 0.14% result above would hold if the favorability distribution were a normal (i.e., Gaussian) distribution, which it certainly is not. But the conclusions should correspond closely enough to reality to use as an approximate guide.

The next candidate down is Carson at 16.0%, and Bush is third at 8.3%. Carson is only 1.3 sigma out from the mean (Bush: $0.3\,\sigma$), which means the likelihood of a rating at least that far from the mean arising by random chance is 18 percent (Bush: 77%).
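
For the curious, here is how those numbers come out of Python’s math.erf (a quick sketch using the same poll figures and the same Gaussian approximation):

import math

mu, sigma = 6.1, 7.4   # mean and standard deviation of the favorability distribution
for name, p in [('Trump', 29.8), ('Carson', 16.0), ('Bush', 8.3)]:
    delta = abs(p - mu)/sigma
    chance = 1 - math.erf(delta/math.sqrt(2))
    print('{:7s}  {:.1f} sigma  {:.2%}'.format(name, delta, chance))
# prints roughly: Trump 3.2 sigma 0.14%, Carson 1.3 sigma 18%, Bush 0.3 sigma 77%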

Conclusion: Trump’s and Carson’s leads above the rest of this particular Republican Field of Clowns are currently significant, while for the rest it’s a coin toss in terms of preference — even for Bush.

Update 9/10: Numbers and graphic updated from original to reflect values current as of 10 September.

math test

Here’s how to get MathJax up and running for your blog: part I, part II. The three tests below are text lifted from elsewhere.

Test 1

Consider first what we shall call the direct geometry case, in which we use only the zenith angle $z$ and bypass the geocentric angle $\theta$. The length of side $\overline{CM}$ follows from the right triangle $\widehat{CMP}$:

$$\begin{equation}\begin{array}[b]{ccl}\left(R+H\right)^{2} & = & \left(D\sin z\right)^{2}+\left(R+h+D\cos z\right)^{2}\\ \\ & = & D^{2}+\left(R+h\right)^{2}+2\left(R+h\right)D\cos z\end{array}\label{eq:R+H-test}\end{equation}$$

or

\begin{equation}D^{2}+2\left(R+h\right)D\cos z-\left[\left(R+H\right)^{2}-\left(R+h\right)^{2}\right]=0\label{eq:D eqn-test}\end{equation}

with solution

\begin{equation}\begin{array}[b]{ccl}D & = & -\left(R+h\right)\cos z\pm\sqrt{\left(R+h\right)^{2}\cos^{2}z+\left[\left(R+H\right)^{2}-\left(R+h\right)^{2}\right]}\\ \\& = & \left(R+h\right)\left(\sqrt{\cos^{2}z+\dfrac{\left(R+H\right)^{2}-\left(R+h\right)^{2}}{\left(R+h\right)^{2}}}-\cos z\right)\end{array}\label{eq:D soln quadratic ugly-test}\end{equation}

where the geometry of the problem requires the positive root. For convenience, define

\begin{equation}\epsilon\equiv\dfrac{H}{R}\quad\mathrm{and}\quad\xi\equiv\dfrac{h}{R}\label{eq:eps and xsi defs-test}\end{equation}

Then we can write eq. \eqref{eq:D soln quadratic ugly-test} as

\begin{equation}D=\left(R+h\right)\left(\sqrt{\cos^{2}z+\left(\dfrac{1+\epsilon}{1+\xi}\right)^{2}-1}-\cos z\right)\label{eq:D soln quadratic-test}\end{equation}

Eq. \eqref{eq:D soln quadratic-test} has the disadvantage of subtraction of two nearly equal numbers.

Test 2

We would like to know the radius $\bar{r}$ of the center of mass of a grid cell of inner radius $r_{1}$ and outer radius $r_{2}$. In polar coordinates $\left(r,\theta\right)$ an infinitesimal area element is $dA=r\,dr\,d\theta$, so

\begin{equation}\bar{r}=\frac{1}{\Delta A}\intop_{0}^{\Delta\theta}\intop_{r_{1}}^{r_{2}}r\,dA=\frac{1}{\Delta A}\intop_{0}^{\Delta\theta}\intop_{r_{1}}^{r_{2}}r^{2}dr\,d\theta\label{eq: area-weighted r integral-test}\end{equation}

where $\Delta A=\frac{\Delta\theta}{2\pi}\cdot\pi\left(r_{2}^{2}-r_{1}^{2}\right)$.

Thus,

\begin{equation}\Delta A=\frac{\Delta\theta}{2}\left(r_{2}^{2}-r_{1}^{2}\right)\label{eq: cell area-test}\end{equation}

and

\begin{equation}\bar{r}=\frac{1}{3}\frac{\Delta\theta}{\Delta A}\left(r_{2}^{3}-r_{1}^{3}\right)=\frac{2}{3}\frac{r_{2}^{2}+r_{1}r_{2}+r_{1}^{2}}{r_{1}+r_{2}}\label{eq: area-weighted r-test}\end{equation}

[…]

Thus, we have the bootstrapping scheme

\begin{equation}\begin{array}{rclcrcl}\bar{r}_{0} & = & \dfrac{2}{3\Delta^{2}}\left(r_{2,0}^{3}-r_{1,0}^{3}\right) & & r_{2,0} & = & \sqrt{r_{1,0}^{2}+\Delta^{2}}\\& \vdots & & & & \vdots\\\bar{r}_{k} & = & \dfrac{2}{3\Delta^{2}}\left(r_{2,\,k}^{3}-r_{2,\,k-1}^{3}\right) & & r_{2,\,k} & = & \sqrt{r_{2,\,k-1}^{2}+\Delta^{2}}\\& \vdots & & & & \vdots\\\bar{r}_{N_{r}-1} & = & \dfrac{2}{3\Delta^{2}}\left(r_{2,\,N_{r}-1}^{3}-r_{2,\,N_{r}-2}^{3}\right) & & r_{2,\,N_{r}-1} & = & \sqrt{r_{2,\,N_{r}-2}^{2}+\Delta^{2}}\end{array}\label{eq: bootstrap scheme}\end{equation}

where, again, we start with $r_{1,0}=r_{min}$ .

Test 3

Now, $-\widehat{z}\times{\left(\widehat{z}\times\overrightarrow{r}\right)}=\overrightarrow{r}-{\left(\widehat{z}\cdot\overrightarrow{r}\right)}\widehat{z}$, so

\begin{equation}\overrightarrow{r}^{\prime\prime}+2\widehat{z}\times\overrightarrow{r}^{\prime}=\frac{1}{{1+e_{p}\mathrm{cos}\mathrm{\theta}}}{\left(\overrightarrow{r}+\overrightarrow{\nabla}U\right)}-{\left(\widehat{z}\cdot\overrightarrow{r}\right)}\widehat{z}\label{}\end{equation}

Define a new effective potential

\begin{equation}\mathrm{\Omega}=\frac{1}{2}r^{2}+U=\frac{1}{2}r^{2}+\frac{{1-\mathrm{\mu}}}{r_{1}}+\frac{\mathrm{\mu}}{r_{2}}\label{EQUATION.5d0b51dc-3a17-4d57-95ed-8e8768257778}\end{equation}

where

\begin{equation}r_{1}=\sqrt{{{\left(x+\mathrm{\mu}\right)}^{2}+y^{2}+z^{2}}}\hspace{2em}r_{2}=\sqrt{{{\left(x-1+\mathrm{\mu}\right)}^{2}+y^{2}+z^{2}}}\label{EQUATION.10d1bacb-a0cf-4bdc-8b6d-c72d845b975b}\end{equation}

Then we find the satisfying result

\begin{equation}\overrightarrow{r}^{\prime\prime}+2\widehat{z}\times\overrightarrow{r}^{\prime}+{\left(\widehat{z}\cdot\overrightarrow{r}\right)}\widehat{z}=\frac{1}{{1+e_{p}\mathrm{cos}\mathrm{\theta}}}\overrightarrow{\nabla}\mathrm{\Omega}\label{EQUATION.7aeaeb03-1226-46ab-815a-4b28e71a84a5}\end{equation}

The individual components of \eqref{EQUATION.7aeaeb03-1226-46ab-815a-4b28e71a84a5} are

\begin{equation}\begin{aligned}x^{\prime\prime}-2y^{\prime} & =\frac{1}{{1+e_{p}\mathrm{cos}\mathrm{\theta}}}\frac{{\partial\mathrm{\Omega}}}{{\partial x}}\\y^{\prime\prime}+2x^{\prime} & =\frac{1}{{1+e_{p}\mathrm{cos}\mathrm{\theta}}}\frac{{\partial\mathrm{\Omega}}}{{\partial y}}\\z^{\prime\prime}+z\hspace{0.9em} & =\frac{1}{{1+e_{p}\mathrm{cos}\mathrm{\theta}}}\frac{{\partial\mathrm{\Omega}}}{{\partial z}}\end{aligned}\label{}\end{equation}

where

\begin{equation}\begin{array}{rcl}\overrightarrow{\nabla}\mathrm{\Omega} & = & \left[\begin{matrix}x-\dfrac{1-\mathrm{\mu}}{r_{1}^{3}}\left(x+\mathrm{\mu}\right)-\dfrac{\mathrm{\mu}}{r_{2}^{3}}\left(x-1+\mathrm{\mu}\right)\\y\left(1-\dfrac{1-\mathrm{\mu}}{r_{1}^{3}}-\dfrac{\mathrm{\mu}}{r_{2}^{3}}\right)\\z\left(1-\dfrac{1-\mathrm{\mu}}{r_{1}^{3}}-\dfrac{\mathrm{\mu}}{r_{2}^{3}}\right)\end{matrix}\right]\\ \\& = & \left(1-\dfrac{1-\mathrm{\mu}}{r_{1}^{3}}-\dfrac{\mu}{r_{2}^{3}}\right)\overrightarrow{r}-\mathrm{\mu}\left(1-\mathrm{\mu}\right)\left(\dfrac{1}{r_{1}^{3}}-\dfrac{1}{r_{2}^{3}}\right)\widehat{x}\end{array}\label{}\end{equation}

How I Do MathJax II. Example

To render equations in a WordPress blog, you have several options. The most aesthetically pleasing is MathJax. An earlier post tells you how to install MathJax for your WordPress site. This second post shows a few pointers by way of an example (you’ll probably want to view the page source, then search for “For example”). Here are a few more usage examples.

How to Do Math in a Blog Post

If you’ve installed MathJax in your site, then in a blog post you can trigger the loading of MathJax by putting the plugin’s [mathjax] shortcode at the top of your post. It will not show up in your readers’ browsers.

That’s it! You can write your post now.

What I usually do, if the document has a lot of equations, is to compose the post in the quasi-WYSIWYG LaTeX editor, LyX. You can, of course, use whatever writing tool you like. When you’re happy with how your article looks, then copy the text to the clipboard. (With LyX, open up the source pane (View→Source Pane) and select the text.) Paste to your WordPress post editor.

You now have to make one change to the pasted text: remove the line breaks inside AMS environments

\begin{...} ... \end{...}

For example,

\begin{equation}
\begin{array}[b]{ccl}
D & = & -\left(R+h\right)\cos z\pm\sqrt{\left(R+h\right)^{2}\cos^{2}z+\left[\left(R+H\right)^{2}-\left(R+h\right)^{2}\right]}\\
\\
& = & \left(R+h\right)\left(\sqrt{\cos^{2}z+\dfrac{\left(R+H\right)^{2}-\left(R+h\right)^{2}}{\left(R+h\right)^{2}}}-\cos z\right)
\end{array}\label{eq:D soln quadratic ugly}
\end{equation}

becomes

\begin{equation}\begin{array}[b]{ccl}D & = & -\left(R+h\right)\cos z\pm\sqrt{\left(R+h\right)^{2}\cos^{2}z+\left[\left(R+H\right)^{2}-\left(R+h\right)^{2}\right]}\\\\& = & \left(R+h\right)\left(\sqrt{\cos^{2}z+\dfrac{\left(R+H\right)^{2}-\left(R+h\right)^{2}}{\left(R+h\right)^{2}}}-\cos z\right)\end{array}\label{eq:D soln quadratic ugly-how}\end{equation}

Here’s how to refer to the above equation. Write, for example,

eq. \eqref{eq:D soln quadratic ugly}

which renders as eq. \eqref{eq:D soln quadratic ugly-how}.

How I Do MathJax I. Installation

I use equations. To enable equations in a WordPress blog, there are several options. The most comprehensive—and aesthetically pleasing—is to use MathJax. This post tells you how to install MathJax for your WordPress site. A second post has a few pointers. Here are a few usage examples.

1. Edit default.js

I do not use the MathJax CDN since occasionally their site has problems. When that happens, your math stops working and your pages containing math become ugly. So I download MathJax to my WordPress install. Rather than futz with <script> tags in my site’s header, I edit the default configuration file to my liking. Thus:

  • Download MathJax: go to https://github.com/mathjax/MathJax/, click on Releases, and download the latest version.
  • Unpack the archive file to your hard drive.
  • Edit default.js in the config directory. My preferences:
    • You’ll probably want to add to your extensions, something like:
      extensions: ["tex2jax.js", "TeX/AMSsymbols.js", "TeX/AMSmath.js"]
    • Scroll down and set messageStyle to your liking (I changed mine to messageStyle: "simple").
    • Scroll down to menuSettings and change these to your liking (I set zoom: "Hover").
    • In the tex2jax section that immediately follows:
      • Under inlineMath uncomment the line with inline delimiters ['$','$']. This enables normal LaTeX inline delimiters. You’ll have to escape actual dollar signs with \\\$.
      • processEscapes: true
      • preview: "[math]"
    • Scroll down to the TeX section.
      • Under equationNumbers, set autoNumber: "AMS".
      • Fiddle with whatever else there catches your fancy.
    • Fiddle with whatever else catches your fancy.
  • Finally, upload your entire MathJax directory to your WordPress site, something like http://yourdomain/mathjax/.

2. Get the WordPress plugin.

Next, get the MathJax-LaTeX plugin and set the settings. The easiest way is to go to your blog administration Dashboard→Plugins→Add New, and type mathjax in the search box. My plugin settings (Dashboard→Settings→MathJax-LaTeX) are

  • Force Load = unchecked
  • Default [latex] syntax attribute = inline (this seems to have no effect with my configuration)
  • Use wp-latex syntax? = unchecked
  • Use MathJax CDN Service? = unchecked
  • Custom MathJax location? = http://yourdomain/mathjax/MathJax.js
  • MathJax Configuration = default

Do not forget to click the Save Changes button!