Orthogonal
How would electromagnetism in our own universe be different, if the photon had mass? In the 1930s, the Romanian physicist Alexandru Proca generalised Maxwell’s equations to develop a theory of massive particles producing a force analogous to electromagnetism, in groundbreaking work to explain the weak nuclear force. Proca doesn’t seem to be as well-known as he should be, but his results were mentioned by Wolfgang Pauli in his 1946 Nobel Prize lecture. As you might guess from the connection with the weak force, giving the force-carrying particle rest mass diminishes its range. If photons were heavy in our universe, the Coulomb potential would experience an exponential fall-off with distance.
But as we’ve seen, the Riemannian Coulomb potential doesn’t suffer from exponential decay; instead, it undergoes oscillations across space. The change from Lorentzian to Riemannian geometry makes all the difference.
To obtain the Riemannian version of Proca’s equation, we start with the Riemannian Vector Wave equation with a source term, j, which we call the four-current, plus the transverse condition that we impose on any vector wave A in order to rule out solutions that are scalar waves in disguise.
∂_{x}^{2}A + ∂_{y}^{2}A + ∂_{z}^{2}A + ∂_{t}^{2}A + ω_{m}^{2} A + j  =  0  (RVWS) 
∂_{x} A^{x} + ∂_{y} A^{y} + ∂_{z} A^{z} + ∂_{t} A^{t}  =  0  (Transverse) 
One nice result we can get immediately from this pair of equations is:
∂_{x} j^{x} + ∂_{y} j^{y} + ∂_{z} j^{z} + ∂_{t} j^{t}  =  0  (1) 
which follows from the transverse condition, and the fact that the four-current j is equal to a linear combination of A and its derivatives. This amounts to a statement of conservation of charge: the rate at which the density of charge is increasing over time at some point, ∂_{t} j^{t}, is the opposite of the divergence of the current density, ∂_{x} j^{x} + ∂_{y} j^{y} + ∂_{z} j^{z}, which describes the net amount of charge flowing out of a small region around that point.
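As a sanity check, we can verify equation (1) numerically for a transverse plane-wave solution of (RVWS). The following Python sketch uses illustrative values for ω_{m}, the wave vector k and the polarisation ε (chosen so that k · ε = 0); none of these values come from the notes themselves.

```python
import math

# Sketch: check conservation of charge, equation (1), for a transverse
# plane-wave solution of (RVWS).  The wave vector k and polarisation eps
# are illustrative choices satisfying the transverse condition k . eps = 0.
w_m = 1.0
k   = (0.3, -0.5, 0.2, 0.7)   # components (t, x, y, z), Euclidean metric
eps = (0.5, 0.3, 0.0, 0.0)    # k . eps = 0.3*0.5 - 0.5*0.3 = 0

k2 = sum(c * c for c in k)

def j(x):
    # j = -(sum of second derivatives + w_m^2) A for A = eps cos(k . x)
    phase = sum(kc * xc for kc, xc in zip(k, x))
    return [(k2 - w_m**2) * e * math.cos(phase) for e in eps]

def four_divergence(f, x, h=1e-5):
    total = 0.0
    for a in range(4):
        xp = list(x); xm = list(x)
        xp[a] += h; xm[a] -= h
        total += (f(xp)[a] - f(xm)[a]) / (2 * h)
    return total

x0 = [0.9, -0.2, 1.1, 0.4]
assert abs(four_divergence(j, x0)) < 1e-6   # charge is conserved
```

Changing eps to break the transverse condition makes the divergence non-zero, as equation (1) predicts.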
We previously noted that there are big problems with the energy-momentum four-vector when it’s computed by different observers, because there’s no objective means to decide which way it should point along an object’s world line. The four-current doesn’t suffer from that problem, because it’s defined as j = ρ u where ρ is the charge density in the charged material’s rest frame, and if we swap the sign of u we also swap the sign of ρ, since a time-reversed positive charge looks negative, and vice versa. (Of course the assignment of the labels “positive” and “negative” to charges is just a matter of convention, but that’s a choice that can be made globally, once and for all.)
Just as in ordinary electromagnetism, we define the electromagnetic field, F, in terms of A:
F_{ab}  =  ∂_{a} A_{b} – ∂_{b} A_{a}  (2) 
The quantities A_{a} here are the components of the dual vector corresponding to the vector A. It’s a good idea to keep track of this distinction, though in orthonormal rectangular coordinates in Riemannian space, components of vectors, such as A^{a}, and components of the corresponding dual vectors, such as A_{a}, are identical. In Lorentzian spacetime, that’s almost true, but not quite; we have A_{x} = A^{x}, A_{y} = A^{y} and A_{z} = A^{z}, but A_{t} = –A^{t}.
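The bookkeeping in the last paragraph is easy to mechanise. This Python sketch (with an arbitrary sample vector) lowers an index with a diagonal metric, showing that the components are unchanged in the Riemannian case, while A_{t} picks up a sign in the Lorentzian (– + + +) case:

```python
# Sketch: raising/lowering the index of a four-vector in orthonormal
# rectangular coordinates.  The metric is the identity in the Riemannian
# case and diag(-1, 1, 1, 1) in the Lorentzian (- + + +) case.
riemannian = (1, 1, 1, 1)    # metric diagonal, component order (t, x, y, z)
lorentzian = (-1, 1, 1, 1)

def lower(metric, v):
    # A_a = g_ab A^b; with a diagonal metric this is component-wise
    return tuple(g * c for g, c in zip(metric, v))

A_up = (2.0, 3.0, 5.0, 7.0)                              # (A^t, A^x, A^y, A^z)
assert lower(riemannian, A_up) == A_up                   # identical components
assert lower(lorentzian, A_up) == (-2.0, 3.0, 5.0, 7.0)  # A_t = -A^t
```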
Suppose we pick three coordinates, and call them a, b and c. Then simply as a matter of the definition of F, and the fact that derivatives commute (that is, ∂_{a} ∂_{b} = ∂_{b} ∂_{a}), we have:
∂_{a} F_{bc} + ∂_{b} F_{ca} + ∂_{c} F_{ab}  =  ∂_{a} (∂_{b} A_{c} – ∂_{c} A_{b}) + ∂_{b} (∂_{c} A_{a} – ∂_{a} A_{c}) + ∂_{c} (∂_{a} A_{b} – ∂_{b} A_{a})  
=  0  (3) 
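Because equation (3) holds for any four-potential whatsoever, it can be checked numerically without solving anything. In this Python sketch we pick an arbitrary quadratic A (for which central differences are exact, up to rounding) and confirm that the cyclic sum vanishes for every choice of indices:

```python
# Sketch: verify identity (3) numerically for an arbitrary (here quadratic)
# four-potential A, using central differences, which are exact for quadratics.
h = 1e-3

def A(x):
    t, X, y, z = x
    return (t*X + y*y, 3*X*z, t*t - y*z, X*y + 2*z*t)   # (A_t, A_x, A_y, A_z)

def dA(x, a):              # derivative of all components of A along x^a
    xp = list(x); xm = list(x)
    xp[a] += h; xm[a] -= h
    return [(p - m) / (2*h) for p, m in zip(A(xp), A(xm))]

def F(x):                  # F_ab = d_a A_b - d_b A_a
    d = [dA(x, a) for a in range(4)]
    return [[d[a][b] - d[b][a] for b in range(4)] for a in range(4)]

def dF(x, a):              # derivative of the matrix F along x^a
    xp = list(x); xm = list(x)
    xp[a] += h; xm[a] -= h
    Fp, Fm = F(xp), F(xm)
    return [[(Fp[i][j] - Fm[i][j]) / (2*h) for j in range(4)] for i in range(4)]

x0 = [0.4, -1.2, 0.7, 2.0]
dFs = [dF(x0, a) for a in range(4)]
for a in range(4):
    for b in range(4):
        for c in range(4):
            s = dFs[a][b][c] + dFs[b][c][a] + dFs[c][a][b]
            assert abs(s) < 1e-6
```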
It also follows from the definition of F that:
∂_{b} F^{ab}  =  ∂_{b} (∂^{a} A^{b} – ∂^{b} A^{a})  
=  ∂^{a} (∂_{b} A^{b}) – ∂_{b} ∂^{b} A^{a}  
=  –∂_{b} ∂^{b} A^{a}  (4) 
where we’re using the Einstein Summation Convention, and ∂_{b} A^{b} vanishes by the transverse condition.
Inserting the result (4) into the Riemannian Vector Wave equation with a source term, (RVWS), we get the Riemannian Proca Equation. We also have equation (3) which follows solely from the definition of F, and is consequently shared between the Riemannian and Lorentzian versions of electromagnetism. Maxwell’s Equations in their four-dimensional form are shown for comparison. In this and everything that follows, we are choosing units for the Lorentzian equations where the speed of light is 1, and the permittivity of the vacuum, ε_{0}, is 1. We are also using a (– + + +) signature for the Lorentzian metric, as opposed to the (+ – – –) signature used in some literature.
Riemannian Proca Equation  
∂_{b} F^{ab} – ω_{m}^{2} A^{a} – j^{a}  =  0  (Riemannian) 
∂_{a} F_{bc} + ∂_{b} F_{ca} + ∂_{c} F_{ab}  =  0  (Common) 
Maxwell’s Equations  
∂_{b} F^{ab} – j^{a}  =  0  (Lorentzian) 
∂_{a} F_{bc} + ∂_{b} F_{ca} + ∂_{c} F_{ab}  =  0  (Common) 
The four-dimensional equations for Riemannian electromagnetism are concise, but to see clearly what’s going on in a variety of situations, and compare them with the Lorentzian equivalents, it will help to give three-dimensional versions, where instead of talking about the electromagnetic field F we describe everything in terms of two three-dimensional vector fields: an electric field E and a magnetic field B.
We start with some definitions. The components of the electric field E are taken to be the components of the electromagnetic field F with the same spatial index first and t as the second index, while each component of the magnetic field B is the component of F whose indices are the other two spatial directions, preserving the cyclic order xyz.
Electric Field, E  
(E_{x}, E_{y}, E_{z})  =  (F_{xt}, F_{yt}, F_{zt})  (Common)  
Magnetic Field, B  
(B_{x}, B_{y}, B_{z})  =  (F_{yz}, F_{zx}, F_{xy})  (Common)  
Electromagnetic Field, F  
Using (– + + +) signature for Lorentzian metric  
F_{ab}  = 
  0       –E_{x}   –E_{y}   –E_{z} 
  E_{x}    0        B_{z}   –B_{y} 
  E_{y}   –B_{z}    0        B_{x} 
  E_{z}    B_{y}   –B_{x}    0 
(Common) 
Note that in the matrix above, the first index on F refers to the row, the second to the column, and the t components are shown in the first row and column. So for example, F_{xt} is the entry in the first column of the second row.
We define the electric scalar potential, φ, to be the opposite of the time component of the (dual vector) four-potential A, and the three-dimensional magnetic vector potential, A_{(3)}, to consist of the remaining part of A.
Electric potential, φ  
φ  =  –A_{t}  (Common) 
Magnetic Potential, A_{(3)}  
(A_{(3) x}, A_{(3) y}, A_{(3) z})  =  (A_{x}, A_{y}, A_{z})  (Common) 
And finally we define the charge density, ρ, and the three-dimensional current density, j_{(3)}, in terms of the four-current vector j.
Charge Density, ρ  
ρ  =  j^{t}  (Common) 
Current Density, j_{(3)}  
(j_{(3)}^{x}, j_{(3)}^{y}, j_{(3)}^{z})  =  (j^{x}, j^{y}, j^{z})  (Common) 
These definitions — as we’ve given them, with upper and lower indices exactly like this — are the same regardless of whether we’re doing Riemannian or Lorentzian physics. But if you want to make comparisons with the literature on the Lorentzian version, remember that raising or lowering a t index will produce the opposite of the original quantity. Also, note that some of the literature uses a (+ – – –) signature for the Lorentzian metric, whereas the Lorentzian formulas here use (– + + +).
Along with the definition of F in terms of the four-potential A via equation (2), these definitions let us describe the electric field as the opposite of the gradient of its potential φ minus the time rate of change of the magnetic potential, and the magnetic field as the curl of its potential, A_{(3)}. Again, this is just as in conventional electromagnetism.
Fields From Potentials  
E  =  –∇φ – ∂_{t} A_{(3)}  (Common) 
B  =  ∇×A_{(3)}  (Common) 
Next, consider the four-dimensional force f on a particle with charge q and four-velocity u:
f  =  q F u  (5) 
The four-force is the rate of change with respect to proper time τ of the particle’s energy-momentum vector, P, so this is equivalent to:
∂_{τ} P  =  q F u  (6) 
Now, the spatial part of P is just the three-dimensional momentum p, whereas the spatial part of the four-velocity u isn’t quite the ordinary velocity v. The ordinary velocity describes the particle’s rate of change of spatial coordinates with respect to coordinate time, t, whereas the spatial part of u gives rates of change with respect to proper time, τ — so the spatial part of u is (dt/dτ) v. However, we can absorb that factor of (dt/dτ) by switching to a rate of change of p with respect to coordinate time.
What we end up with is known as the Lorentz force law. Again this is common to the Riemannian and Lorentzian versions (although the effects of relativistic motion on the particle’s momentum p are of course not the same).
Lorentz Force Law  
∂_{t} p  =  q (E + v × B)  (Common) 
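As a concrete illustration, here is a Python sketch that integrates the (non-relativistic) Lorentz force law for a unit charge and mass in a uniform magnetic field, using a standard fourth-order Runge–Kutta step; the velocity simply rotates at the cyclotron frequency q|B|/m, matching the analytic solution. All the numerical values are illustrative.

```python
import math

# Sketch: integrate dv/dt = q (E + v x B) / m with q = m = 1, E = 0 and a
# uniform field B = (0, 0, 2); the speed is constant and v rotates at
# angular frequency q|B|/m = 2.
B = (0.0, 0.0, 2.0)

def accel(v):
    # v x B, written out component by component
    return (v[1]*B[2] - v[2]*B[1],
            v[2]*B[0] - v[0]*B[2],
            v[0]*B[1] - v[1]*B[0])

def rk4_step(v, dt):
    k1 = accel(v)
    k2 = accel(tuple(vi + 0.5*dt*ki for vi, ki in zip(v, k1)))
    k3 = accel(tuple(vi + 0.5*dt*ki for vi, ki in zip(v, k2)))
    k4 = accel(tuple(vi + dt*ki for vi, ki in zip(v, k3)))
    return tuple(vi + dt*(a + 2*b + 2*c + d)/6
                 for vi, a, b, c, d in zip(v, k1, k2, k3, k4))

v = (1.0, 0.0, 0.0)
dt, steps = 1e-3, 1000          # integrate to t = 1
for _ in range(steps):
    v = rk4_step(v, dt)

# analytic solution: v(t) = (cos 2t, -sin 2t, 0)
assert abs(v[0] - math.cos(2.0)) < 1e-8
assert abs(v[1] + math.sin(2.0)) < 1e-8
```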
Next, we translate the conditions forced upon F by its definition, equation (3), into the consequences for E and B. When we take a, b, c in equation (3) to be x, y and z it tells us that the divergence of B must be zero. This is known as Gauss’s Law For Magnetism, and states that there are no magnetic monopoles (which is true in conventional electromagnetism, though of course there are speculative theories where such monopoles do exist).
When we take a, b, c in equation (3) to be t and two spatial coordinates — for each of the three pairs of spatial coordinates — that tells us that the sum of the curl of E and the time rate of change of B is zero. This result is known as the Maxwell-Faraday Equation, and describes the way an electric field is created when a magnetic field is varying in time.
Gauss’s Law For Magnetism  
∇ · B  =  0  (Common) 
Maxwell-Faraday Equation  
∇ × E + ∂_{t} B  =  0  (Common) 
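Both of these identities are automatic consequences of writing the fields in terms of potentials, so they can be checked for any φ and A_{(3)} whatsoever. This Python sketch picks arbitrary quadratic potentials, builds E and B by central differences, and confirms that ∇ · B = 0 and ∇ × E + ∂_{t} B = 0:

```python
# Sketch: starting from arbitrary (here quadratic) potentials phi and A3,
# build E = -grad(phi) - dA3/dt and B = curl(A3) by central differences,
# then confirm that div B = 0 and curl E + dB/dt = 0 hold automatically.
h = 1e-3

def phi(t, x, y, z): return x*y + t*z
def A3(t, x, y, z):  return (y*y, z*z + t*y, x*x)

def partial(f, p, a):              # central difference along coordinate a
    pp = list(p); pm = list(p)
    pp[a] += h; pm[a] -= h
    fp, fm = f(*pp), f(*pm)
    if isinstance(fp, tuple):
        return [(u - v) / (2*h) for u, v in zip(fp, fm)]
    return (fp - fm) / (2*h)

def E(p):
    gp = [partial(phi, p, a) for a in (1, 2, 3)]       # spatial gradient
    dtA = partial(A3, p, 0)                            # time derivative
    return tuple(-g - d for g, d in zip(gp, dtA))

def B(p):
    dA = [partial(A3, p, a) for a in (1, 2, 3)]        # dA[i][j] = d_i A3_j
    return (dA[1][2] - dA[2][1], dA[2][0] - dA[0][2], dA[0][1] - dA[1][0])

def div(F, p):
    dF = [partial(lambda *q: F(q), p, a) for a in (1, 2, 3)]
    return dF[0][0] + dF[1][1] + dF[2][2]

def curl(F, p):
    dF = [partial(lambda *q: F(q), p, a) for a in (1, 2, 3)]
    return (dF[1][2] - dF[2][1], dF[2][0] - dF[0][2], dF[0][1] - dF[1][0])

p0 = (0.3, 1.2, -0.7, 0.5)
assert abs(div(B, p0)) < 1e-6                          # Gauss's law for magnetism
dtB = partial(lambda *q: B(q), p0, 0)
for c_E, c_B in zip(curl(E, p0), dtB):
    assert abs(c_E + c_B) < 1e-6                       # Maxwell-Faraday
```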
Finally we come to the differences between Riemannian and Lorentzian electromagnetism, which arise from replacing the equation of Maxwell’s that involves the source of the field with the Riemannian Proca equation.
When we set the index a in the Riemannian Proca or Maxwell equation to t, we get two versions of Gauss’s Law, which in the Maxwell case tells us that lines of electric flux only begin and end on charges. In the Riemannian Proca case, this no longer holds: flux lines appear out of the vacuum, with the electric potential acting just like a charge density in that respect.
Gauss’s Law  
∇ · E  =  ω_{m}^{2} φ – ρ  (Riemannian) 
∇ · E  =  ρ  (Lorentzian) 
When we set the index a in the Riemannian Proca or Maxwell equation to each of the spatial coordinates, we get two versions of the Ampère-Maxwell Law, which describes the creation of a magnetic field by a current, by a changing electric field — and, in the Riemannian case, directly by the magnetic vector potential.
Ampère-Maxwell Law  
∇ × B + ∂_{t} E  =  ω_{m}^{2} A_{(3)} + j_{(3)}  (Riemannian) 
∇ × B – ∂_{t} E  =  j_{(3)}  (Lorentzian) 
We discussed the Riemannian Coulomb potential on the main page of the notes on electromagnetism. We now have the tools to derive that potential.
What we are interested in is the field around a point charge, q, which is motionless in our coordinates. The situation is unchanging in time and perfectly radially symmetric in space, so it’s really a onedimensional problem where everything is a function of the distance, r, from the charge.
The curious twist that Riemannian electromagnetism brings to this is that lines of electric flux — which in Lorentzian electromagnetism always start and end on charges — can now terminate in the middle of the vacuum, an effect that depends on the potential. In the diagram on the right, the arrows indicate the direction of the electric field — but what’s drawn here are flux lines, not vectors: the strength of the field is indicated by how closely packed the lines are, not their length.
The main mathematical difficulty, shared with the Lorentzian case, is the fact that we have an infinite density of charge at the location of the particle. The way around that is to work with an integral over space of the charge density; integrating over a region that includes the particle will yield a finite value of q. But our equations are all differential equations, so first we have to convert one of them to a suitable form. To do that, we make use of the divergence theorem, a result in pure mathematics which says that for any vector field E and any region of space, the integral of the dot product of E with the outward normal to the surface of the region is equal to the volume integral of the divergence of E:
∫_{Surface} E · n  =  ∫_{Volume} ∇ · E  (7) 
We choose the vector field E to be the electric field, and we choose the region of integration to be a sphere of radius r around our point charge. We expect the electric potential φ to be a function only of r, and from the radial symmetry of the problem we expect the electric field to point radially towards or away from the charge. Given that the problem is static, we can express E in terms of the electric potential alone, as:
E(r)  =  –∇φ(r)  
=  –φ'(r) e_{r}  (8) 
where e_{r} is a unit vector pointing away from the charge. The Riemannian version of Gauss’s Law gives us ∇ · E as a function of the charge density ρ and the electric potential φ. We make use of that, along with (8), in the integrals of (7) applied to our spherical region around the charge. We also use the fact that any volume integral of ρ over a region containing the charge simply yields q. After dividing both sides by 4 π, we end up with:
–r^{2} φ'(r)  =  ω_{m}^{2} ∫_{0}^{r} φ(s) s^{2} ds – q / (4 π)  (9) 
Now, we make an educated guess on the basis of our experience of conventional electrostatics that things will be simpler if we write φ in terms of a new function, f, divided by r:
φ(r)  =  f(r) / r  (10) 
φ'(r)  =  f '(r) / r – f(r) / r^{2}  (11) 
In terms of the new function f, equation (9) becomes (12); we evaluate (12) at r = 0 to get (12a):
f(r) – r f '(r)  =  ω_{m}^{2} ∫_{0}^{r} f(s) s ds – q / (4 π)  (12) 
f(0)  =  – q / (4 π)  (12a) 
The derivative of (12) with respect to r gives us (13), and some simple rearrangement gives us (14):
– r f ''(r)  =  ω_{m}^{2} f(r) r  (13) 
f ''(r) + ω_{m}^{2} f(r)  =  0  (14) 
Equation (14) is a very wellknown differential equation, whose general realvalued solution is:
f(r)  =  C_{1} cos(ω_{m} r) + C_{2} sin(ω_{m} r)  (15) 
Equation (12a) tells us that C_{1} = – q / (4 π).
What about C_{2}? Any value we choose for C_{2} will yield a valid solution to the problem, but since this term has nothing to do with the point charge, q, we set C_{2} to zero. As in conventional electromagnetism, the most general solution to a problem often includes some form of radiation that’s merely passing through the region of interest — in this case, radially symmetric radiation that happens to be motionless in the rest frame of the charge.
So we have derived the Riemannian Coulomb potential. We also give the corresponding electric field, E = –∇φ, below.
Coulomb potential  
φ(r)  =  –[q / (4 π r)] cos(ω_{m} r)  (Riemannian) 
φ(r)  =  q / (4 π r)  (Lorentzian) 
Coulomb field  
E(r)  =  –[q / (4 π r^{2})] [cos(ω_{m} r) + ω_{m} r sin(ω_{m} r)] e_{r}  (Riemannian) 
E(r)  =  q / (4 π r^{2}) e_{r}  (Lorentzian) 
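We can confirm numerically that the Riemannian Coulomb potential satisfies the integral form of Gauss’s Law, equation (9). This Python sketch (in illustrative units with ω_{m} = 1 and q = 4π, so that q / (4 π) = 1) compares the two sides of (9) using Simpson’s rule and a central-difference derivative:

```python
import math

# Sketch: check that the Riemannian Coulomb potential satisfies equation (9).
# Illustrative units: w_m = 1 and q = 4 pi, so q / (4 pi) = 1.
w_m, q = 1.0, 4 * math.pi

def phi(r):
    # Riemannian Coulomb potential
    return -q * math.cos(w_m * r) / (4 * math.pi * r)

def simpson(f, a, b, n=2000):   # composite Simpson's rule, n even
    s = f(a) + f(b)
    step = (b - a) / n
    for i in range(1, n):
        s += f(a + i * step) * (4 if i % 2 else 2)
    return s * step / 3

r, h = 2.3, 1e-5
lhs = -r * r * (phi(r + h) - phi(r - h)) / (2 * h)
rhs = w_m**2 * simpson(lambda s: phi(s) * s * s, 1e-9, r) - q / (4 * math.pi)
assert abs(lhs - rhs) < 1e-5    # the two sides of (9) agree
```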
The Coulomb potential for a single, motionless point charge allows us, in principle, to find the electric field of any static distribution of charge, simply by integrating over the source of the field. However, it will be useful to have an even more fundamental solution to the equations of Riemannian electrodynamics: one that is associated with an instantaneous “blip” of charge that comes into existence at a certain event in four-space, and then immediately vanishes. Obviously that behaviour violates conservation of charge, but by integrating the solution over the world lines of any number of charges with complete histories, a solution that respects conservation of charge can be found.
A fundamental solution like this is known as a Green’s function.
We begin by looking for rotationally symmetric four-dimensional solutions to the Riemannian Scalar Wave Equation, with no source term. This is the Helmholtz equation in four dimensions, and when we impose four-dimensional rotational symmetry we get an ordinary differential equation for a function G of a single variable s, the distance in four-space from the origin:
G''(s) + (3 / s) G'(s) + ω_{m}^{2} G(s)  =  0  (16) 
The general solution to this equation is:
G(s)  =  C_{1} J_{1}(ω_{m} s) / s + C_{2} Y_{1}(ω_{m} s) / s  (17) 
where J_{1} and Y_{1} are Bessel functions of the first and second kind. Although this is a solution to the sourceless equation for s>0, the Bessel function Y_{1} goes to minus infinity as s approaches zero, which suggests the kind of singular behaviour we would expect for a Green’s function associated with a point charge.
We can explicitly integrate G for a motionless point charge along its entire world line with the help of a change of variable from t, the time coordinate along the world line, to s = √(r^{2} + t^{2}), the four-space distance from an event on that world line to an event a spatial distance r from the point charge. Using t = √(s^{2} – r^{2}) and dt = (s/t) ds, we have:
∫_{–∞}^{∞} G(√(r^{2} + t^{2})) dt  =  2 ∫_{r}^{∞} G(s) (s / √(s^{2} – r^{2})) ds  
=  2 ∫_{r}^{∞} [C_{1} J_{1}(ω_{m} s) + C_{2} Y_{1}(ω_{m} s)] / √(s^{2} – r^{2}) ds  
=  2 [C_{1} sin(ω_{m} r) – C_{2} cos(ω_{m} r)] / (ω_{m} r)  (18) 
We can match this with the Riemannian Coulomb potential of a point particle with charge q by setting C_{1} = 0 and C_{2} = q ω_{m} / (8 π).
We’ve done this calculation for a scalar potential, φ, but the result will be most useful if we express it in terms of four-vectors. In those terms, each infinitesimal segment of a particle’s world line makes a contribution to the four-potential A that is parallel to the particle’s four-velocity u. We add a minus sign because φ = –A_{t}.
Green’s function  
Particle has charge q. Its world line y(τ) is parameterised by proper time τ. Its four-velocity u(τ) = ∂_{τ}y(τ). The four-potential A is evaluated at event x.  
dA(x)  =  –u(τ) [q ω_{m} / (8 π)] Y_{1}(ω_{m} |x – y(τ)|) / |x – y(τ)| dτ  (Riemannian) 
We won’t give the Lorentzian equivalent here, as it would require a substantial detour to explain all the details and differences. We’ll just note that what’s known as the Liénard-Wiechert potential at a given event depends only on the location and four-velocity of the charge on the intersection of its world line with the past light cone of the event where we’re evaluating A. In other words, as you might expect, in Lorentzian physics A is only affected by information about the particle propagating from the past, at the speed of light.
The Riemannian Green’s function we’ve given here makes no distinction between the past and the future. That will be fine for problems in electrostatics and magnetostatics, but we need to keep in mind that if it’s applied to situations where electromagnetic waves are generated, it will produce solutions containing both incoming and outgoing waves.
An electric dipole consists of two point charges of equal strength, one positive and one negative, which are held a fixed distance apart. If the charges are close to each other, then they’ll tend to cancel each other’s Coulomb potential, but there will be a characteristic dipole field remaining.
We can simplify the way we think about the shape of this field by studying the limiting case where the two charges are moved ever closer to each other, while the strength of each charge increases. If we define a vector p, the dipole moment, to be the displacement vector pointing from the negative charge to the positive charge multiplied by the (positive) strength of the charge, then we take the limit where p remains constant and finite, but the separation goes to zero while the strength of each charge goes to infinity.
The easiest way to obtain this limit is by taking the derivative of the Coulomb potential along the opposite direction to the chosen dipole moment. The resulting potential is shown in the diagram on the right, and the formulas for the potential and electric field are given in the table below. Here r is a threedimensional vector from the location of the dipole to the point where we’re evaluating the field, and r is its magnitude.
Electric Dipole potential  
φ(r)  =  –[p · r / (4 π r^{3})] [cos(ω_{m} r) + ω_{m} r sin(ω_{m} r)]  (Riemannian) 
φ(r)  =  p · r / (4 π r^{3})  (Lorentzian) 
Electric Dipole field  
E(r)  =  – [(3 (p · r) r – r^{2} p) / (4 π r^{5})] [cos(ω_{m} r) + ω_{m} r sin(ω_{m} r)] + [ω_{m}^{2} (p · r) r / (4 π r^{3})] cos(ω_{m} r) 
(Riemannian) 
E(r)  =  (3 (p · r) r – r^{2} p) / (4 π r^{5})  (Lorentzian) 
If you experienced a sense of déjà vu at the sight of the Riemannian dipole potential, then you’ve probably seen a very similar drawing for the potential of an oscillating dipole in conventional electromagnetism. The static Riemannian dipole’s field is, in fact, precisely the same as the spatial part of the standing wave that can be constructed in conventional electromagnetism by summing incoming and outgoing radiation associated with an oscillating dipole.
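The limiting procedure described above is also easy to test: the dipole potential should be minus the derivative of the Coulomb potential along the dipole moment. This Python sketch (with illustrative values q = 1, ω_{m} = 1, p along the z-axis, and an arbitrary evaluation point) compares a finite-difference directional derivative with the formula in the table:

```python
import math

# Sketch: the Riemannian dipole potential should equal minus the derivative
# of the Coulomb potential along the dipole moment p.  Here q = 1, w_m = 1,
# and p = (0, 0, 1), so the directional derivative is just -d/dz.
w_m = 1.0
p = (0.0, 0.0, 1.0)

def coulomb(x, y, z):
    r = math.sqrt(x*x + y*y + z*z)
    return -math.cos(w_m * r) / (4 * math.pi * r)

def dipole(x, y, z):
    # -(p . r / (4 pi r^3)) [cos(w_m r) + w_m r sin(w_m r)]
    r = math.sqrt(x*x + y*y + z*z)
    p_dot_r = p[0]*x + p[1]*y + p[2]*z
    return (-p_dot_r / (4 * math.pi * r**3)
            * (math.cos(w_m * r) + w_m * r * math.sin(w_m * r)))

pt = (0.7, -0.4, 1.3)
h = 1e-6
numeric = -(coulomb(pt[0], pt[1], pt[2] + h)
            - coulomb(pt[0], pt[1], pt[2] - h)) / (2 * h)
assert abs(numeric - dipole(*pt)) < 1e-6
```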
Suppose we have a total charge of Q distributed uniformly over a spherical shell of radius R. It’s a well-known result in Lorentzian electromagnetism that the potential outside the sphere is exactly the same as that due to a point charge at the centre of the sphere, while in the interior the potential is constant. However, in Riemannian electromagnetism the result is quite different! Either by explicitly integrating the contributions from across the surface, or by using the appropriate form of Gauss’s Law, we get the following:
Uniformly charged spherical shell  
Shell of radius R, total charge Q  
Uniformly charged spherical shell, potential  
φ(r)  =  –[Q / (4 π r)] [sin(ω_{m} R) / (ω_{m} R)] cos(ω_{m} r)   (r ≥ R) 
φ(r)  =  –[Q / (4 π r)] [cos(ω_{m} R) / (ω_{m} R)] sin(ω_{m} r)   (r ≤ R) 
(Riemannian)  
φ(r)  =  Q / (4 π r)   (r ≥ R) 
φ(r)  =  Q / (4 π R)   (r ≤ R) 
(Lorentzian)  
Uniformly charged spherical shell, field  
E(r)  =  –[Q / (4 π r^{2})] [sin(ω_{m} R) / (ω_{m} R)] [cos(ω_{m} r) + ω_{m} r sin(ω_{m} r)] e_{r}   (r ≥ R) 
E(r)  =  –[Q / (4 π r^{2})] [cos(ω_{m} R) / (ω_{m} R)] [sin(ω_{m} r) – ω_{m} r cos(ω_{m} r)] e_{r}   (r ≤ R) 
(Riemannian)  
E(r)  =  [Q / (4 π r^{2})] e_{r}   (r ≥ R) 
E(r)  =  0   (r ≤ R) 
(Lorentzian) 
In the Riemannian case, the exterior potential for the shell is that of a point charge multiplied by a factor of sin(ω_{m} R) / (ω_{m} R), while the interior potential has the roles of r and R exchanged.
Though the interior potential is generally not constant, for certain values of R either the interior or exterior potential will be zero. When ω_{m} R is an odd multiple of π/2, or equivalently, when R is an odd multiple of one quarter the minimum wavelength of light, λ_{min}, the interior potential will be zero. When ω_{m} R is a multiple of π, or equivalently, when R is a multiple of half λ_{min}, the exterior potential will be zero.
Of course these exact cancellations are very sensitive to the precise geometry of the charge distribution. In general, though, the exterior potential will be substantially diminished compared to that of a point charge.
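The factor of sin(ω_{m} R) / (ω_{m} R) in the exterior potential can be confirmed by brute force, integrating the Riemannian point-charge potential over the surface of the shell. A Python sketch, with illustrative values ω_{m} = Q = R = 1 and an exterior field point:

```python
import math

# Sketch: numerically integrate the Riemannian point-charge potential over a
# uniformly charged shell (radius R, total charge Q) and compare with the
# point-charge potential scaled by sin(w_m R) / (w_m R).
w_m, Q, R = 1.0, 1.0, 1.0
r = 2.5                        # exterior field point, r > R

def point_phi(d, q):
    return -q * math.cos(w_m * d) / (4 * math.pi * d)

# integrate over rings of the shell at polar angle theta (Simpson's rule);
# the charge per unit theta on a ring is (Q/2) sin(theta)
n = 2000
s = 0.0
for i in range(n + 1):
    theta = math.pi * i / n
    d = math.sqrt(R*R + r*r - 2*R*r*math.cos(theta))
    ring_charge = Q * math.sin(theta) / 2
    weight = 1 if i in (0, n) else (4 if i % 2 else 2)
    s += weight * point_phi(d, ring_charge)
shell_phi = s * (math.pi / n) / 3

expected = point_phi(r, Q) * math.sin(w_m * R) / (w_m * R)
assert abs(shell_phi - expected) < 1e-6
```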
We can integrate our results for spherical shells to obtain the potential and electric field due to a charge Q uniformly distributed throughout a solid sphere.
Uniformly charged solid sphere  
Sphere of radius R, total charge Q  
Uniformly charged solid sphere, potential  
φ(r)  =  –[Q / (4 π r)] cos(ω_{m} r) · 3 [sin(ω_{m} R) – ω_{m} R cos(ω_{m} R)] / (ω_{m}^{3} R^{3})   (r ≥ R) 
φ(r)  =  3 Q / (4 π ω_{m}^{2} R^{3}) – [3 Q / (4 π r)] sin(ω_{m} r) [cos(ω_{m} R) + ω_{m} R sin(ω_{m} R)] / (ω_{m}^{3} R^{3})   (r ≤ R) 
(Riemannian)  
φ(r)  =  Q / (4 π r)   (r ≥ R) 
φ(r)  =  Q (3 R^{2} – r^{2}) / (8 π R^{3})   (r ≤ R) 
(Lorentzian)  
Uniformly charged solid sphere, field  
E(r)  =  –[Q / (4 π r^{2})] [cos(ω_{m} r) + ω_{m} r sin(ω_{m} r)] · 3 [sin(ω_{m} R) – ω_{m} R cos(ω_{m} R)] / (ω_{m}^{3} R^{3}) e_{r}   (r ≥ R) 
E(r)  =  –[3 Q / (4 π r^{2})] [sin(ω_{m} r) – ω_{m} r cos(ω_{m} r)] [cos(ω_{m} R) + ω_{m} R sin(ω_{m} R)] / (ω_{m}^{3} R^{3}) e_{r}   (r ≤ R) 
(Riemannian)  
E(r)  =  [Q / (4 π r^{2})] e_{r}   (r ≥ R) 
E(r)  =  [Q r / (4 π R^{3})] e_{r}   (r ≤ R) 
(Lorentzian) 
In the Lorentzian case, as with a spherical shell, the potential and field outside the solid sphere are simply those of a point charge concentrated at the centre of the sphere. Inside the solid sphere, the field is that due to whatever part of the sphere lies closer to the centre than you are, so it increases linearly with the distance from the centre, while the potential is quadratic in the distance from centre.
In the Riemannian case, the exterior potential and field are those of a point charge multiplied by a factor depending on the size of the sphere:
3 [sin(ω_{m} R) – ω_{m} R cos(ω_{m} R)] / [ω_{m}^{3} R^{3}]
This factor oscillates with R, and has its first zero at R ≈ 0.715 λ_{min}.
The interior potential consists of a flat term that depends on R but doesn’t oscillate, plus a term that’s oscillatory in both R and r. The oscillating part can be made zero by the right choice of R, leaving the potential flat throughout the sphere, with the first zero at R ≈ 0.445 λ_{min}.
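The two first zeros quoted above are roots of simple transcendental equations: the exterior factor vanishes when sin x = x cos x (with x = ω_{m} R), and the oscillating part of the interior potential when cos x + x sin x = 0. A Python sketch locating them by bisection:

```python
import math

# Sketch: locate the first zeros quoted above by bisection.  The exterior
# factor vanishes when sin(x) - x cos(x) = 0 (x = w_m R), and the interior
# oscillating term when cos(x) + x sin(x) = 0.
def bisect(f, a, b, iters=60):
    fa = f(a)
    for _ in range(iters):
        m = 0.5 * (a + b)
        if (f(m) > 0) == (fa > 0):
            a, fa = m, f(m)
        else:
            b = m
    return 0.5 * (a + b)

x_ext = bisect(lambda x: math.sin(x) - x * math.cos(x), 4.0, 4.7)
x_int = bisect(lambda x: math.cos(x) + x * math.sin(x), 2.0, 3.0)

# convert to radii in units of lambda_min = 2 pi / w_m
print(round(x_ext / (2 * math.pi), 3))   # 0.715
print(round(x_int / (2 * math.pi), 3))   # 0.445
```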
Suppose we have two concentric charged shells, bearing equal and opposite charges. This setup constitutes a charged capacitor. Real-world capacitors in electronic circuits are usually much more complex than this, but this simple geometry will allow us to make some exact calculations that demonstrate how capacitance works in the Riemannian universe.
Spherical capacitor  
Inner shell of radius R_{1}, total charge –Q Outer shell of radius R_{2}, total charge +Q  
Spherical capacitor, potential  
φ(r)  =  [Q / (4 π ω_{m} r)] sin(ω_{m} r) [cos(ω_{m} R_{1}) / R_{1} – cos(ω_{m} R_{2}) / R_{2}]   (r ≤ R_{1}) 
φ(r)  =  [Q / (4 π ω_{m} r)] [cos(ω_{m} r) sin(ω_{m} R_{1}) / R_{1} – sin(ω_{m} r) cos(ω_{m} R_{2}) / R_{2}]   (R_{1} ≤ r ≤ R_{2}) 
φ(r)  =  [Q / (4 π ω_{m} r)] cos(ω_{m} r) [sin(ω_{m} R_{1}) / R_{1} – sin(ω_{m} R_{2}) / R_{2}]   (r ≥ R_{2}) 
(Riemannian)  
φ(r)  =  [Q / (4 π)] (1 / R_{2} – 1 / R_{1})   (r ≤ R_{1}) 
φ(r)  =  [Q / (4 π)] (1 / R_{2} – 1 / r)   (R_{1} ≤ r ≤ R_{2}) 
φ(r)  =  0   (r ≥ R_{2}) 
(Lorentzian)  
Spherical capacitor, field  
E(r)  =  –[Q / (4 π ω_{m} r^{2})] [ω_{m} r cos(ω_{m} r) – sin(ω_{m} r)] [cos(ω_{m} R_{1}) / R_{1} – cos(ω_{m} R_{2}) / R_{2}] e_{r}   (r ≤ R_{1}) 
E(r)  =  [Q / (4 π ω_{m} r^{2})] { [cos(ω_{m} r) + ω_{m} r sin(ω_{m} r)] sin(ω_{m} R_{1}) / R_{1} + [ω_{m} r cos(ω_{m} r) – sin(ω_{m} r)] cos(ω_{m} R_{2}) / R_{2} } e_{r}   (R_{1} ≤ r ≤ R_{2}) 
E(r)  =  [Q / (4 π ω_{m} r^{2})] [cos(ω_{m} r) + ω_{m} r sin(ω_{m} r)] [sin(ω_{m} R_{1}) / R_{1} – sin(ω_{m} R_{2}) / R_{2}] e_{r}   (r ≥ R_{2}) 
(Riemannian)  
E(r)  =  0   (r ≤ R_{1}) 
E(r)  =  –[Q / (4 π r^{2})] e_{r}   (R_{1} ≤ r ≤ R_{2}) 
E(r)  =  0   (r ≥ R_{2}) 
(Lorentzian) 
Spherical capacitor, capacitance  
C  =  8 π ω_{m} R_{1}^{2} R_{2}^{2} /
[4 R_{1}R_{2} sin(ω_{m} R_{1}) cos(ω_{m} R_{2}) – R_{1}^{2} sin(2 ω_{m} R_{2}) – R_{2}^{2} sin(2 ω_{m} R_{1})] 
(Riemannian)  
C  =  (4 π R_{1}R_{2}) / (R_{2}–R_{1})  (Lorentzian) 
In the Lorentzian case, the potential will always rise from a negative value on the inner shell to zero on the outer shell, and the voltage across the device is defined as a positive value:
V_{Lorentzian} = φ(R_{2}) – φ(R_{1}) = Q (R_{2}–R_{1}) / (4 π R_{1}R_{2})
The constant of proportionality between the total positive charge and the voltage difference is known as the capacitance of the device, C.
C_{Lorentzian} = Q / V_{Lorentzian} = (4 π R_{1}R_{2}) / (R_{2}–R_{1})
In the Riemannian case, the voltage difference between the shells will still be proportional to the total charge, and we can define the capacitance in the same way, but the formula (given in the table above) is quite a bit more complex, being sensitive to the length scale set by the minimum wavelength of light. In principle the Riemannian capacitance can be either positive or negative, and even infinite. Infinite capacitance means you can pour as much charge as you want into the device without building up a voltage between the shells themselves, though the electric field will still increase. Negative capacitance implies that the shell with an excess of positive charge is at a lower potential than the shell with an excess of negative charge, so given a connection between the two, the positive shell will draw in yet more positive charge. When you short-circuit an ordinary capacitor, it discharges; when you short-circuit a capacitor with negative capacitance, it increases its charge.
Clearly this could lead to a runaway process, and there’s nothing in our (highly simplified) analysis to indicate when it would come to an end. But in a more detailed model of a circuit with a negative capacitor that included the properties of all the materials involved, there would eventually be complications that cut short the buildup of charge. Similarly, the mere fact that the Riemannian Coulomb potential allows situations in which like charges attract seems to threaten the possibility that all the positive charge in the universe could end up clumped together in one place — but that scenario neglects quantum-mechanical effects that put limits on the agglomeration of identical charged particles.
It’s also important to note that the situation we’ve studied is an idealisation where the shells are perfectly smooth and their charge evenly distributed, on a scale much smaller than the minimum wavelength of light. Any bumps of a greater size than that will produce a device with a mixture of positive and negative capacitance, leading to the kind of cancellations that moderate all electrostatic phenomena in the Riemannian universe.
Furthermore, this whole analysis assumes that any changes in the charge and voltage occur very slowly. We treat capacitors in alternating-current circuits in a later section.
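The transition between positive and negative capacitance is easy to exhibit numerically from the capacitance formula in the table. In this Python sketch (with ω_{m} = 1 and illustrative radii) the denominator changes sign as the outer radius grows, so somewhere in between the capacitance passes through infinity:

```python
import math

# Sketch: evaluate the Riemannian capacitance formula from the table above
# (w_m = 1, illustrative radii).  A sign change of the denominator between
# two outer radii means the capacitance is infinite somewhere in between,
# and negative on one side.
def denom(R1, R2, w=1.0):
    return (4 * R1 * R2 * math.sin(w * R1) * math.cos(w * R2)
            - R1**2 * math.sin(2 * w * R2)
            - R2**2 * math.sin(2 * w * R1))

def capacitance(R1, R2, w=1.0):
    return 8 * math.pi * w * R1**2 * R2**2 / denom(R1, R2, w)

print(capacitance(2.0, 4.0), capacitance(2.0, 6.0))   # opposite signs
```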
Suppose we have a steady current I running through a long, thin, straight wire. The Riemannian version of the Ampère-Maxwell Law gives us the curl of the magnetic field, ∇ × B, as a function of the current density and the three-dimensional magnetic potential, A_{(3)}. But because we want to think of the current as being concentrated along an infinitesimally thin wire, it’s convenient to convert this law to an integral form, by means of the Kelvin-Stokes theorem, which relates an integral of the curl of a vector field over a surface to a line integral around the boundary of that surface:
∫_{Surface} (∇ × B) · n  =  ∫_{Boundary} B · t  (19) 
Here n is a unit normal to the surface, and t is a unit tangent to the curve that forms the boundary of the surface, running counterclockwise around the surface when viewed from “above” if our choice for n defines what we mean by “up”.
If we choose as our surface a disk of radius r centred on the wire and perpendicular to it, by symmetry we expect the magnetic potential A_{(3)} to point parallel to the wire and to be a function only of the distance r from the wire. If we choose to have the wire run along the zaxis, we have:
A_{(3)}(r)  =  A(r) e_{z}  (20) 
B(r)  =  ∇ × A_{(3)}(r)  
=  ∂_{y} A(r) e_{x} – ∂_{x} A(r) e_{y}  
=  A'(r) [ (y / r) e_{x} – (x / r) e_{y}]  
=  –A'(r) e_{φ}  (21) 
where e_{φ} is a unit vector field that points counterclockwise around the wire. If we then apply the Kelvin-Stokes theorem, equation (19), and the Ampère-Maxwell Law, we get:
I + 2 π ω_{m}^{2} ∫_{0}^{r} A(s) s ds  =  –2 π r A'(r)  (22) 
Dividing through by 2 π, taking the derivative of this with respect to r, and rearranging slightly we have:
A''(r) + A'(r) / r + ω_{m}^{2} A(r)  =  0  (23) 
The general solution to this differential equation is:
A(r)  =  C_{1} J_{0}(ω_{m} r) + C_{2} Y_{0}(ω_{m} r)  (24) 
where J_{0} and Y_{0} are Bessel functions of the first and second kind. The derivatives of these Bessel functions give us:
A'(r)  =  –ω_{m} [C_{1} J_{1}(ω_{m} r) + C_{2} Y_{1}(ω_{m} r)]  (25) 
Now, given this result, the limit as r→0 of the right-hand side of equation (22) is:
lim_{r→0} (–2 π r A'(r))  =  –4 C_{2}  (26) 
while the same limit of the left-hand side of equation (22) is simply the current, I. So we have C_{2} = –I/4. This leaves C_{1} undetermined, but as with our derivation of the Coulomb potential, we take the C_{1} term to be a motionless radiation field coming in from the past that has nothing to do with the current I.
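Equations (23) and (25) can be checked directly from the power series of the Bessel functions. A Python sketch (with ω_{m} = 1 and an arbitrary sample radius):

```python
import math

# Sketch: check that the power series for the Bessel function J0 satisfies
# equation (23) with w_m = 1, and that J0' = -J1, which gives equation (25).
def J0(x):
    term, s = 1.0, 1.0
    for k in range(1, 30):
        term *= -(x * x / 4) / (k * k)
        s += term
    return s

def J1(x):
    term, s = x / 2, x / 2
    for k in range(1, 30):
        term *= -(x * x / 4) / (k * (k + 1))
        s += term
    return s

r, h = 1.5, 1e-4
d1 = (J0(r + h) - J0(r - h)) / (2 * h)              # J0'(r)
d2 = (J0(r + h) - 2 * J0(r) + J0(r - h)) / (h * h)  # J0''(r)
assert abs(d2 + d1 / r + J0(r)) < 1e-6              # equation (23)
assert abs(d1 + J1(r)) < 1e-6                       # J0' = -J1
```

The same recurrences generate Y₀ and Y₁ only with extra logarithmic terms, so for those we have simply relied on the standard identities.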
Linear current magnetic potential  
A_{(3)}(r)  =  –[I / 4] Y_{0}(ω_{m} r) e_{z}  (Riemannian) 
A_{(3)}(r)  =  –[I / (2 π)] log(r) e_{z}  (Lorentzian) 
Linear current magnetic field  
B(r)  =  –[I ω_{m} / 4] Y_{1}(ω_{m} r) e_{φ}  (Riemannian) 
B(r)  =  [I / (2 π r)] e_{φ}  (Lorentzian) 
The Bessel functions are oscillatory, so the magnetic field around the current reverses direction on a similar length scale to the reversals of the electric field around a point charge.
Because the magnetic field has the same direction very close to the current in both the Lorentzian and Riemannian cases, and because the Lorentz force law is also the same in both cases, in theory two sufficiently close (and narrow) wires with currents running in parallel will experience an attractive force. However, as with the electrostatic force the spatial oscillation of the field will lead to significant cancellations over any objects whose width exceeds the wavelength of the oscillation.
In this example we can once again see a link between static Riemannian solutions and the spatial part of oscillating Lorentzian solutions. The Riemannian field around the current is the same as the spatial part of the standing wave around an oscillating current in conventional electromagnetism. Of course an oscillating current in the real world is usually associated with a purely outgoing wave, but in the presence of an incoming wave of the same strength a standing wave will be produced with exactly this form.
In conventional magnetostatics, the Biot-Savart Law gives the magnetic field produced by a steady current I flowing along a thin wire:
B  =  [I / (4 π)] ∫ t × r / r^{3} dl  (27) 
Here the variable of integration, l, is the length along the wire, r is a threedimensional displacement vector from an element of the wire to the point where the field B is being evaluated, and t is a unit tangent vector to the wire.
We will obtain the Riemannian equivalent by making use of the Riemannian Green’s function we derived earlier.
Each element of the wire of length dl will be taken to contain both moving and stationary charges of magnitude dq = ρ dl, where ρ is the linear charge density in the wire. The moving charges will contribute dq u dτ to the Green’s function integral — where u is the four-velocity of each moving charge in this element of wire — but we know that the time component of this vector will be cancelled exactly by an opposite amount of stationary charge present in the wire, which is assumed to be electrically neutral overall. The spatial part of u dτ is just v dt, where v is the ordinary velocity of the moving charges and t is the coordinate time in a frame in which the wire is stationary. And since the current I flowing through the wire is ρ v — or in vector terms, I t = ρ v, where t is a unit tangent vector to the wire — we have:
(dq u dτ)_{net}  =  ρ dl v dt  
=  I t dt dl  (28) 
We can then integrate the Green’s function over t with I t dl as a constant; the integral is the same as that which we used to obtain the Coulomb potential from the Green’s function. Not surprisingly, then, the magnetic potential we get from this integral looks just like a Coulomb potential, and the magnetic field we get by taking the curl of it has the same magnitude (but not direction) as the Coulomb electric field.
Biot-Savart Law for magnetic potential  
A_{(3)}(r)  =  [I / (4 π)] ∫ t cos(ω_{m} r) / r dl  (Riemannian) 
A_{(3)}(r)  =  [I / (4 π)] ∫ t / r dl  (Lorentzian) 
Biot-Savart Law for magnetic field  
B(r)  =  [I / (4 π)] ∫ [cos(ω_{m} r) + ω_{m} r sin(ω_{m} r)] t × r / r^{3} dl  (Riemannian) 
B(r)  =  [I / (4 π)] ∫ t × r / r^{3} dl  (Lorentzian) 
An explicit integral of the magnetic potential around an infinite straight wire using the Biot-Savart Law gives a result in agreement with the formula we obtained previously.
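As a quick numerical sanity check, at least in the Lorentzian case, we can integrate the Biot-Savart Law along a long straight wire and compare the result with the I / (2 π r) field quoted above. This is a minimal Python sketch; the wire length, step count and the particular values of I and d are arbitrary choices, not anything from the text:

```python
import math

def biot_savart_straight_wire(I, d, half_length=2000.0, n=200000):
    """Numerically integrate the Lorentzian Biot-Savart law for a wire along
    the z-axis, evaluating B at distance d from the wire."""
    # Field point at (d, 0, 0); wire element at (0, 0, z) with tangent e_z.
    # Then t x r = (0, d, 0) and |r| = sqrt(d^2 + z^2), so only B_y is nonzero.
    dz = 2 * half_length / n
    B_y = 0.0
    for i in range(n):
        z = -half_length + (i + 0.5) * dz   # midpoint rule
        B_y += d / (d * d + z * z) ** 1.5 * dz
    return I / (4 * math.pi) * B_y

I, d = 2.0, 0.5
numeric = biot_savart_straight_wire(I, d)
exact = I / (2 * math.pi * d)   # the Lorentzian field around a linear current
assert abs(numeric - exact) / exact < 1e-3
```

Truncating the wire at a finite length only costs an error of order d / (half_length)², which is far below the quadrature tolerance used here.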
A magnetic dipole is a system that produces a certain kind of simple, highly symmetrical magnetic field. A small loop of circulating current, or a charged particle with quantum-mechanical spin, are examples of this, but many systems that possess more complicated fields will look like magnetic dipoles from a distance.
For a loop of current, the magnetic moment, which we’ll call μ, is defined as a vector normal to the loop whose magnitude is the product of the area of the loop and the strength of the circulating current. The convention is that the current circulates in the direction of the fingers of the right hand when the thumb is aligned with the magnetic moment vector. The pure dipole field can be taken either as the dominant term (that is, the term that drops off least slowly with distance) in the field from a finite loop, or as the field in the limiting case when the area of the loop shrinks to zero while the current goes to infinity, with the product of the two remaining finite.
In Lorentzian electromagnetism, it turns out that the magnetic field of a magnetic dipole takes precisely the same mathematical form as the electric field of an electric dipole. However, that’s impossible in the Riemannian case, because the magnetic field B must satisfy ∇ · B = 0 everywhere — which is to say that lines of magnetic flux form unbroken loops — but that isn’t true of the Riemannian electric field even in a vacuum, and the electric dipole field has lines of electric flux starting and ending far from the dipole itself.
We can use the Biot-Savart Law to find the magnetic dipole potential in the limiting case of a small current loop. As with the electric dipole, we take the derivative of an appropriate quantity to obtain the limit. In this case, we integrate — over half the current loop — the sum of the contribution from an element of the loop and the element directly opposite it, where the current will be flowing in the opposite direction. In the limit of a small loop, that sum is just the directional derivative across the loop of 1/r or cos(ω_{m} r)/r, evaluated at the centre of the loop, then multiplied by the diameter of the loop and the tangent vector to the loop.
Magnetic Dipole potential  
μ is magnetic dipole moment  
A_{(3)}(r)  =  [μ × r / (4 π r^{3})] [cos(ω_{m} r) + ω_{m} r sin(ω_{m} r)]  (Riemannian) 
A_{(3)}(r)  =  μ × r / (4 π r^{3})  (Lorentzian) 
Magnetic Dipole field  
B(r)  =  [(3 (μ · r) r – r^{2} μ) / (4 π r^{5})] [cos(ω_{m} r) + ω_{m} r sin(ω_{m} r)] – [ω_{m}^{2} (μ × r) × r / (4 π r^{3})] cos(ω_{m} r) 
(Riemannian) 
B(r)  =  (3 (μ · r) r – r^{2} μ) / (4 π r^{5})  (Lorentzian) 
In Lorentzian electromagnetism, although not all materials can be magnetised, the conditions that allow large numbers of magnetic dipoles (generally, the spins of electrons) to combine to produce a much stronger field are not all that stringent. So long as the magnetic moment vectors of a collection of dipoles are parallel, all their contributions to the external magnetic field will reinforce each other. But because the Riemannian magnetic dipole field switches directions on a very small length scale, in any collection of dipoles there will be a huge amount of cancellation between their fields — and the combined field will again have the same kind of spatial oscillations. In the Riemannian universe, there can be no equivalent of our permanent magnets with fields that sustain a force in a single direction over a long distance.
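Both properties of the Riemannian dipole field are easy to probe numerically. The Python sketch below (units chosen so that ω_m = 1, with the dipole moment along the z-axis; the sample points are arbitrary) checks via finite differences that the field from the table is divergence-free, as it must be since it is the curl of a potential, and that its direction reverses as we move away from the dipole:

```python
import math

OMEGA_M = 1.0            # units chosen so the maximum angular frequency is 1
MU = (0.0, 0.0, 1.0)     # dipole moment along z

def riemannian_dipole_B(p):
    """Riemannian magnetic dipole field from the table above, with mu = e_z."""
    x, y, z = p
    r2 = x * x + y * y + z * z
    r = math.sqrt(r2)
    f = math.cos(OMEGA_M * r) + OMEGA_M * r * math.sin(OMEGA_M * r)
    mdotr = z                                # mu . r
    # first term: (3 (mu.r) r - r^2 mu) f / (4 pi r^5)
    c1 = f / (4 * math.pi * r2 * r2 * r)
    B = [c1 * (3 * mdotr * x - r2 * MU[0]),
         c1 * (3 * mdotr * y - r2 * MU[1]),
         c1 * (3 * mdotr * z - r2 * MU[2])]
    # second term: -omega_m^2 ((mu x r) x r) cos(omega_m r) / (4 pi r^3);
    # with mu = e_z: (mu x r) x r = (x z, y z, -(x^2 + y^2))
    c2 = -OMEGA_M ** 2 * math.cos(OMEGA_M * r) / (4 * math.pi * r2 * r)
    B[0] += c2 * x * z
    B[1] += c2 * y * z
    B[2] += c2 * (-(x * x + y * y))
    return B

def divergence(field, p, h=1e-5):
    """Central-difference divergence of a vector field at point p."""
    div = 0.0
    for i in range(3):
        step = [0.0, 0.0, 0.0]; step[i] = h
        plus = field([p[j] + step[j] for j in range(3)])
        minus = field([p[j] - step[j] for j in range(3)])
        div += (plus[i] - minus[i]) / (2 * h)
    return div

# div B = 0 holds even though the field oscillates in space
p = (0.7, 0.3, 0.5)
Bmag = math.sqrt(sum(b * b for b in riemannian_dipole_B(p)))
assert abs(divergence(riemannian_dipole_B, p)) < 1e-4 * Bmag * OMEGA_M

# on the axis, B_z is proportional to cos(w_m z) + w_m z sin(w_m z),
# which changes sign as z grows
assert riemannian_dipole_B((0.0, 0.0, 1.0))[2] > 0
assert riemannian_dipole_B((0.0, 0.0, 3.0))[2] < 0
```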
A solenoid is a helical coil of wire. We will approximate the field inside and outside the coil when there is a steady current flowing through it, assuming that the solenoid is so long that we can neglect precisely what happens at the ends. In effect, what we will analyse is an infinitely long solenoid, which is easier to deal with than a finite one because we can approximate it as having both translational symmetry along its axis and rotational symmetry around the axis.
The most general solution for the magnetic potential and magnetic field with this kind of cylindrical symmetry, and with the magnetic field pointing along the z-axis, is:
A_{(3)}(r)  =  [a J_{1}(ω_{m} r) + b Y_{1}(ω_{m} r)] e_{φ}  (29a) 
B(r)  =  [a ω_{m} J_{0}(ω_{m} r) + b ω_{m} Y_{0}(ω_{m} r)] e_{z}  (29b) 
However, we need to allow the solutions to be different inside and outside the coil, so we will have four coefficients, a_{int}, b_{int}, a_{ext} and b_{ext}, to find. The need for the solution to be finite at r = 0 means b_{int} = 0, and we require A_{(3)} to be continuous at r = R, the radius of the coil. We get a third relationship by applying Ampère’s Law to a thin vertical rectangle that encloses the current flowing through the n windings along a unit height of the solenoid; this tells us that the difference between the B field immediately inside and outside the coil is equal to that current.
Obtaining a fourth equation to completely fix the solution takes a bit more work. It’s not hard to integrate the contribution to A_{(3)} from the Biot-Savart Law along a vertical strip of the coil, but then a precise expression for the integral around the coil is intractable. But we can obtain a first-order Taylor series, in r, for the contribution to A_{(3)} at a point a small distance from the centre of the coil, and then integrate that around the entire coil. Matching that Taylor series to an equivalent Taylor series obtained from our general solution gives us the value of a_{int}, and then we can solve the other equations to determine all the coefficients. It turns out that a_{ext} = 0, so we have a single term in both the interior and exterior solutions.
In conventional electromagnetism, the magnetic field outside an infinite solenoid is zero, but that is not generally true in the Riemannian case.
Long solenoid  
Solenoid has radius R, current I and n windings per unit length. Axis of solenoid coincides with the z-axis.  
Long solenoid, magnetic potential  
A_{(3)}(r)  =  –[n I π R / 2] Y_{1}(ω_{m} R) J_{1}(ω_{m} r) e_{φ}  (r < R) 
A_{(3)}(r)  =  –[n I π R / 2] J_{1}(ω_{m} R) Y_{1}(ω_{m} r) e_{φ}  (r > R)  (Riemannian)  
A_{(3)}(r)  =  [n I r / 2] e_{φ}  (r < R) 
A_{(3)}(r)  =  [n I R^{2} / (2 r)] e_{φ}  (r > R)  (Lorentzian)  
Long solenoid, magnetic field  
B(r)  =  –[n I π R ω_{m} / 2] Y_{1}(ω_{m} R) J_{0}(ω_{m} r) e_{z}  (r < R) 
B(r)  =  –[n I π R ω_{m} / 2] J_{1}(ω_{m} R) Y_{0}(ω_{m} r) e_{z}  (r > R)  (Riemannian)  
B(r)  =  n I e_{z}  (r < R) 
B(r)  =  0  (r > R)  (Lorentzian)  
Long solenoid, total magnetic flux within coil  
Φ  =  –n I π^{2} R^{2} J_{1}(ω_{m} R) Y_{1}(ω_{m} R)  (Riemannian)  
Φ  =  n I π R^{2}  (Lorentzian)  
Long solenoid, inductance  
For solenoid of length l.  
L  =  –n^{2} π^{2} R^{2} l J_{1}(ω_{m} R) Y_{1}(ω_{m} R)  (Riemannian)  
L  =  n^{2} π R^{2} l  (Lorentzian) 
In the table above, we’ve included the total magnetic flux that threads through the solenoid; this is the area integral of the magnetic field B over a crosssection perpendicular to the axis.
If the current flowing through the solenoid starts changing, then so will the magnetic field, so via the Maxwell-Faraday Law an electric field will develop, with a curl proportional to the time rate of change of the magnetic field. Then by the Kelvin-Stokes theorem, the integral of the electric field around any loop that encloses that changing magnetic field will be proportional to the integral over the area of the loop of the rate of change of the magnetic field. But that area integral is just the time rate of change of the total magnetic flux through the loop. So each loop enclosing a changing quantity of flux will have an electromotive force around it that is proportional to the rate of change of flux. In fact, the constant of proportionality is simply minus 1.
EMF = –dΦ/dt
Applying this argument to the coils that constitute our solenoid, if the current flowing through the solenoid changes then a voltage will be produced across the leads of the solenoid that is proportional to the current’s rate of change. The opposite of the constant of proportionality is known as the inductance of the solenoid, L.
EMF = –L dI/dt
The Riemannian inductance for a solenoid of length l (and hence with a total of nl coils) is:
L_{Riemannian} = n l Φ / I = –n^{2} π^{2} R^{2} l J_{1}(ω_{m} R) Y_{1}(ω_{m} R)
while the Lorentzian value is:
L_{Lorentzian} = n l Φ / I = n^{2} π R^{2} l
The product of Bessel functions in the Riemannian inductance can be either positive or negative, allowing an inductance of either sign. Negative inductance, like negative capacitance, can lead to runaway effects: an increase in the current through a negative inductor will produce a voltage that drives the current even higher, until damage to the materials or other effects put a brake on the current’s growth.
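Both points can be illustrated with a short, self-contained Python sketch. The Bessel functions are computed from their standard integral representations so that no external library is needed, and the circuit in the second part is an assumed toy model: an inductor in series with a resistance R_res and no driving voltage, so that L dI/dt + R_res I = 0. All parameter values are arbitrary illustrations:

```python
import math

def _simpson(f, a, b, n=4000):
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def bessel_j(n, x):
    # Abramowitz & Stegun 9.1.21
    return _simpson(lambda t: math.cos(n * t - x * math.sin(t)), 0.0, math.pi) / math.pi

def bessel_y(n, x):
    # Abramowitz & Stegun 9.1.22, truncating the infinite integral where
    # its integrand has become negligible
    osc = _simpson(lambda t: math.sin(x * math.sin(t) - n * t), 0.0, math.pi)
    tail = _simpson(lambda t: (math.exp(n * t) + (-1) ** n * math.exp(-n * t))
                    * math.exp(-x * math.sinh(t)), 0.0, 8.0)
    return (osc - tail) / math.pi

def riemannian_inductance(n_w, R, length, omega_m):
    """L = -n^2 pi^2 R^2 l J1(w_m R) Y1(w_m R), from the table above."""
    return (-n_w ** 2 * math.pi ** 2 * R ** 2 * length
            * bessel_j(1, omega_m * R) * bessel_y(1, omega_m * R))

# J1 and Y1 have opposite signs at w_m R = 1, giving positive inductance,
# but the same sign at w_m R = 3, giving negative inductance
assert riemannian_inductance(100, 1.0, 1.0, 1.0) > 0
assert riemannian_inductance(100, 3.0, 1.0, 1.0) < 0

# Toy runaway model (an assumption for illustration): an inductor in series
# with a resistance and no source obeys L dI/dt + R_res I = 0.
def simulate_current(L, R_res, I0=1.0, dt=1e-4, steps=10000):
    I = I0
    for _ in range(steps):
        I += dt * (-R_res / L) * I   # Euler step
    return I

assert simulate_current(L=+0.5, R_res=1.0) < 1.0   # positive L: current decays
assert simulate_current(L=-0.5, R_res=1.0) > 5.0   # negative L: current runs away
```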
But as with the capacitor, our model here is highly idealised. The difference in the geometry of the coil between a positive and negative inductor is about one minimum wavelength of light, so if the wire in the coil is thicker than that, or deviates from a perfect circle by more than that distance, the solenoid will effectively consist of both positive and negative inductors — leading, as usual, to a significant degree of cancellation between the two.
What’s more, all our formulas here assume a situation that can be approximated as a steady current. We treat solenoids carrying an alternating current in a later section.
Runaway effects of the kind we see in systems with negative capacitance or inductance would clearly violate conservation of energy in our own universe, but in the Riemannian universe, where the energy associated with matter (including the electromagnetic field) has the opposite sense to kinetic and potential energy, it’s trickier to follow exactly what’s going on. We need to be able to quantify the energy stored in, and transported by, the electromagnetic field. But in order to do this, first we need to take a short detour into a Lagrangian treatment of Riemannian electromagnetism.
The Lagrangian for a field theory such as electromagnetism is a quantity L that is a function of the field and its derivatives, whose integral over a region of four-space is stationary under variations of the field, when the field satisfies the appropriate equations. If we integrate L to obtain what’s known as the action, S:
S(A_{k}) = ∫ L(A_{k})
then when A satisfies the field equations, S should be, to first order, unchanged by any small variation in A, just like a function of an ordinary variable at a local maximum or minimum.
If the Lagrangian is expressed as a function of the field components A_{k} and their derivatives ∂_{j} A_{k}, then — so long as the field vanishes on the boundary of the region of integration, or there are cyclic boundary conditions — the requirement for the action to be stationary is equivalent to the Euler-Lagrange equations:
∂_{j} [ ∂L/∂(∂_{j} A_{k}) ] = ∂L/∂A_{k}
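To make "stationary" concrete, here is a one-dimensional toy analogue (not the field Lagrangian itself, just an assumed illustration): for L(q, q') = ½ q'² − ½ ω² q², the Euler-Lagrange equation is q'' = −ω² q, solved by q(t) = cos(ωt). Evaluating a discretised action on that solution, a small perturbation that vanishes at the endpoints changes the action only at second order:

```python
import math

OMEGA, T, N = 2.0, 1.0, 4000
H = T / N

def action(eps):
    """Midpoint-rule action of q(t) = cos(wt) + eps sin(pi t / T); the
    perturbation vanishes at t = 0 and t = T, as the variational
    principle requires."""
    def q(t):
        return math.cos(OMEGA * t) + eps * math.sin(math.pi * t / T)
    S = 0.0
    for i in range(N):
        tm = (i + 0.5) * H
        qdot = (q(tm + H / 2) - q(tm - H / 2)) / H
        S += (0.5 * qdot ** 2 - 0.5 * OMEGA ** 2 * q(tm) ** 2) * H
    return S

eps = 0.1
first = (action(eps) - action(-eps)) / (2 * eps)       # first variation
second = action(eps) + action(-eps) - 2 * action(0.0)  # quadratic part
assert abs(first) < 1e-4    # vanishes on a solution (up to discretisation error)
assert second > 1e-3        # the action is stationary, not constant, in eps
```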
We will define the Riemannian Proca Lagrangian, L_{RP}, in two parts: a field Lagrangian, L_{field}, and an interaction term, L_{inter}. Below we also give the Lorentzian equivalents.^{[1]}
Riemannian Proca Lagrangian  
L_{field}  =  ¼ F_{ij} F^{ij} – ½ ω_{m}^{2} A_{a} A^{a}  
=  ½ (B^{2} + E^{2}) – ½ ω_{m}^{2} (A_{(3)}^{2}+φ^{2})  
L_{inter}  =  –A_{k} j^{k}  
=  –A_{(3)} · j_{(3)} + φ ρ  
L_{RP}  =  L_{field} + L_{inter}  (Riemannian) 
Maxwell Lagrangian  
L_{field}  =  –¼ F_{ij} F^{ij}  
=  –½ (B^{2} – E^{2})  
L_{inter}  =  A_{k} j^{k}  
=  A_{(3)} · j_{(3)} – φ ρ  
L_{Maxwell}  =  L_{field} + L_{inter}  (Lorentzian) 
The Euler-Lagrange equations for the full Lagrangians correspond to the Riemannian Proca equation or Maxwell’s equation, respectively.
We can find the stress-energy tensor for the Riemannian electromagnetic field, which we will call T, by means of the formula^{[2]}:
Stress-Energy Tensor From Field Lagrangian  
T_{ab}  =  –L_{field} g_{ab} + 2 ∂L_{field}/∂g^{ab}  (Riemannian) 
T_{ab}  =  L_{field} g_{ab} – 2 ∂L_{field}/∂g^{ab}  (Lorentzian) 
Here g_{ab} and g^{ab} are components of the metric tensor for four-space, with either two lower or two upper indices. In orthonormal coordinates, the matrices of these components are just the 4×4 identity matrix — that is, 1 when a=b and 0 otherwise. But if we think of the components of the dual vector version of our four-potential field, A_{k}, as the fundamental variables for the Lagrangian, then every time we raise an index to get something like the term A_{a} A^{a}, we’re making use of g^{ab} (using the Einstein Summation Convention):
A_{a} A^{a} = A_{a} (g^{ab} A_{b})
So if we view the Lagrangian as a function of the components A_{k} of the fourpotential and the components g^{ab} of the metric tensor, the derivative in terms of the metric, evaluated at the actual metric, gives us the second term in the stressenergy tensor.
It would be too much of a detour to explain in any detail why this construction works, but it ultimately fits in with the way Einstein’s equation for gravity — which relates a tensor derived from the metric to the stress-energy tensor of any matter present — can itself be derived from an appropriate Lagrangian. The crucial point is that the complete stress-energy tensor constructed this way (one that includes all matter) will have zero divergence, which means energy and momentum will be conserved.
We will express the result of this calculation in terms of the electromagnetic field F and the four-potential A.
Riemannian Electromagnetic Stress-Energy Tensor  
T_{ab}  =  –L_{field} g_{ab} + F_{ac} F_{b}^{c} – ω_{m}^{2} A_{a} A_{b}  
Lorentzian Electromagnetic Stress-Energy Tensor  
T_{ab}  =  L_{field} g_{ab} + F_{ac} F_{b}^{c}  
The divergence of T for the electromagnetic field alone is not zero when j is not zero. Rather, we have:
∂_{b} T^{ab} + F^{a}_{c} j^{c} = 0
The second term corresponds to the density of the four-force acting on the current, which in turn will be the divergence of the charged matter’s own stress-energy tensor. So the sum of the stress-energy tensors for both the electromagnetic field and the matter on which it acts will have zero divergence.
The stress-energy tensors can look a bit intimidating, but for now let’s ignore the terms that lie beyond the first row and column, which describe pressure and shear stress. The terms we’re interested in are T_{tt}, which gives the energy density u in the electromagnetic field, and the vector S = (T ^{tx}, T ^{ty}, T ^{tz}), known as the Poynting vector, which describes the rate of energy flow across a unit area. (Note that we have to raise a t index to get the Poynting vector, which changes the sign in the Lorentzian case).
Electromagnetic energy density  
u  =  [E^{2}–B^{2} + ω_{m}^{2}(A_{(3)}^{2}–φ^{2})]/2  (Riemannian) 
u  =  [E^{2}+B^{2}]/2  (Lorentzian) 
Poynting vector  
S  =  B × E + ω_{m}^{2} φ A_{(3)}  (Riemannian) 
S  =  E × B  (Lorentzian) 
Let’s look at the energy density and flow in a few simple examples.
For a plane wave, we have the description in four-space:
A(x) = A_{0} sin(k · x)
F(x) = (k ∧ A_{0}) cos(k · x)
where k = ω_{m} and A_{0} · k = 0. From this, we can compute the stressenergy tensor:
T_{ab} = L_{field} g_{ab} + F_{ac} F_{b}^{c} – ω_{m}^{2} A_{a} A_{b}
T = A_{0}^{2} k ⊗ k cos(k · x)^{2} + ω_{m}^{2} [A_{0} ⊗ A_{0} – (A_{0}^{2} / 2) I_{4}] cos(2 k · x)
If we average T over one cycle, cos(2 k · x) becomes zero while cos(k · x)^{2} becomes 1/2, so we have:
<T> = ½ A_{0}^{2} k ⊗ k
That’s just the stress-energy tensor we’d expect of a uniform cloud of matter with a four-velocity u = k/ω_{m} and a mass-energy density (in its rest frame) of ½ A_{0}^{2} ω_{m}^{2}. If we define u that way, and also define a unit vector a_{0} = A_{0}/|A_{0}|, we can write the stress-energy tensor as:
T = A_{0}^{2} ω_{m}^{2} [u ⊗ u cos(k · x)^{2} + (a_{0} ⊗ a_{0} – ½ I_{4}) cos(2 k · x)]
Suppose the light has an angular time frequency of ω = k_{t} = ω_{m} u_{t}. Then the energy density u (not to be confused with the fourvelocity u or any of its components) is:
u = T_{tt} = A_{0}^{2} [ω^{2} cos(k · x)^{2} + ω_{m}^{2} (a_{0, t}^{2} – ½) cos(2 k · x)]
= ½ A_{0}^{2} [ω^{2} + (ω^{2} + (2 a_{0, t}^{2} – 1) ω_{m}^{2}) cos(2 k · x) ]
Clearly there are values for ω and a_{0, t} such that the energy density will be negative some of the time: for example, if a_{0, t} = 0 and ω < ω_{m} / √2. But the average energy density over any cycle will still be positive:
<u> = ½ A_{0}^{2} ω^{2}
We can see from <T> that the same kind of average of the Poynting vector S will be parallel to the spatial projection of the propagation vector k, which in turn is parallel to the ordinary velocity v that corresponds to the four-velocity u = k/ω_{m}. Specifically:
<S> = ½ A_{0}^{2} ω^{2} v
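Both features of the plane-wave energy density are easy to confirm numerically from the expression above: with a_{0,t} = 0 and ω below ω_m/√2 the density dips below zero at some phases, yet its average over a cycle is exactly ½ A_0² ω². A small Python sketch, with arbitrary parameter values:

```python
import math

A0, OMEGA_M = 1.0, 1.0
OMEGA = 0.5        # below omega_m / sqrt(2)
A0T = 0.0          # polarisation chosen purely spatial, a_{0,t} = 0

def energy_density(phase):
    """u as a function of the phase k.x, from the expression above."""
    return 0.5 * A0 ** 2 * (OMEGA ** 2
            + (OMEGA ** 2 + (2 * A0T ** 2 - 1) * OMEGA_M ** 2) * math.cos(2 * phase))

samples = [energy_density(2 * math.pi * i / 1000) for i in range(1000)]
assert min(samples) < 0                                       # dips negative
assert abs(sum(samples) / 1000 - 0.5 * A0 ** 2 * OMEGA ** 2) < 1e-12   # <u> = half A0^2 w^2
```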
We can apply our formula for the energy density in an electric field to the spherical capacitor that we analysed earlier. In the Lorentzian case, the electric field is zero outside the capacitor, and the energy density depends only on the field, so we can get a finite answer from a straightforward integration.
In the Riemannian case, the situation is a bit trickier. The potential and the electric field extend beyond the capacitor, and the energy density computed from them is nonzero, out to infinity. The energy contained within a sphere of a given radius S >> R_{2} is cyclic in S, and the peak-to-peak distance of these cycles does not grow smaller with distance, so the integral to infinity is undefined. But we can get a sensible finite answer by setting the cyclic part to zero and taking the asymptotic value of the remainder.
Spherical capacitor  
Inner sphere of radius R_{1} and outer sphere of radius R_{2} (R_{1} < R_{2}) carry charges of equal magnitude Q and opposite sign.  
Spherical capacitor, capacitance  
C  =  8 π ω_{m} R_{1}^{2} R_{2}^{2} /
[4 R_{1}R_{2} sin(ω_{m} R_{1}) cos(ω_{m} R_{2}) – R_{1}^{2} sin(2 ω_{m} R_{2}) – R_{2}^{2} sin(2 ω_{m} R_{1})] 
(Riemannian)  
C  =  (4 π R_{1}R_{2}) / (R_{2}–R_{1})  (Lorentzian)  
Spherical capacitor, energy density in electric field  
u(r)  =  [E(r)^{2} – ω_{m}^{2} φ(r)^{2}] / 2, taken separately in each of the regions r < R_{1}, R_{1} < r < R_{2} and r > R_{2}  (Riemannian)  
u(r)  =  Q^{2} / (32 π^{2} r^{4}) for R_{1} < r < R_{2}, and zero elsewhere  (Lorentzian)  
Spherical capacitor, total energy in electric field  
<U>  =  [Q^{2} / (16 π ω_{m} R_{1}^{2} R_{2}^{2}) ][R_{1}^{2} sin(2 ω_{m} R_{2})+R_{2}^{2} sin(2 ω_{m} R_{1}) –4 R_{1}R_{2} sin(ω_{m} R_{1}) cos(ω_{m} R_{2})]  
=  –Q^{2} / (2 C)  (Riemannian)  
U  =  ∫_{R1}^{R2} 4 π r^{2} u(r) dr  
=  [Q^{2} / (8 π)] [1/R_{1} – 1/R_{2}]  
=  Q^{2} / (2 C)  (Lorentzian) 
The answers we get in both the Riemannian and Lorentzian cases are compatible with the potential energy that we expect for the capacitor, if we integrate the energy required to charge it up from zero charge to a total charge of Q:
Potential energy = ∫_{0}^{Q} V(q) dq = ∫_{0}^{Q} (q/C) dq = Q^{2} / (2 C)
In the Lorentzian case, this is exactly the energy stored in the electric field. In the Riemannian case, it’s the opposite! The reason, of course, is that potential energy in the Riemannian universe has the opposite sense to electromagnetic field energy.
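In fact the Riemannian table entries are algebraically consistent with each other: evaluating C from the capacitance formula and <U> from the field-energy formula at any values of R_1, R_2 and ω_m gives <U> = –Q²/(2C) to machine precision. A Python sketch with arbitrarily chosen values:

```python
import math

def riemannian_capacitance(R1, R2, wm):
    """Spherical-capacitor capacitance, from the earlier table."""
    D = (4 * R1 * R2 * math.sin(wm * R1) * math.cos(wm * R2)
         - R1 ** 2 * math.sin(2 * wm * R2) - R2 ** 2 * math.sin(2 * wm * R1))
    return 8 * math.pi * wm * R1 ** 2 * R2 ** 2 / D

def riemannian_field_energy(Q, R1, R2, wm):
    """<U> for the field, from the table above."""
    return (Q ** 2 / (16 * math.pi * wm * R1 ** 2 * R2 ** 2)
            * (R1 ** 2 * math.sin(2 * wm * R2) + R2 ** 2 * math.sin(2 * wm * R1)
               - 4 * R1 * R2 * math.sin(wm * R1) * math.cos(wm * R2)))

Q, R1, R2, wm = 1.0, 1.0, 2.0, 1.0
C = riemannian_capacitance(R1, R2, wm)
U_field = riemannian_field_energy(Q, R1, R2, wm)
U_potential = Q ** 2 / (2 * C)        # energy needed to charge the capacitor
# equal in size, opposite in sign:
assert abs(U_field + U_potential) < 1e-9
```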
The calculations for the energy stored in a solenoid follow the same general pattern as those for the capacitor. In the Lorentzian case, there is a constant magnetic field over a finite volume, making the total energy in the field very easy to compute.
In the Riemannian case, we can’t neglect the field outside the solenoid, and the integral over an infinite region doesn’t converge, but if we integrate out to a radius S the total energy enclosed cycles between maxima and minima that, in the limit of large S, approach fixed values. In the table below, we use an asymptotic expression for a product of Bessel functions of S in terms of a cosine function. The average value over a cycle of this cosine term (which we can easily find, just by setting that term to zero) then gives a result that accords with the energy from the inductance.
Long solenoid  
Solenoid has radius R, length l, current I and n windings per unit length.  
Long solenoid, inductance  
L  =  –n^{2} π^{2} R^{2} l J_{1}(ω_{m} R) Y_{1}(ω_{m} R)  (Riemannian)  
L  =  n^{2} π R^{2} l  (Lorentzian)  
Long solenoid, energy density in magnetic field  
u(r)  =  [n^{2} I^{2} π^{2} R^{2} ω_{m}^{2} / 8] Y_{1}(ω_{m} R)^{2} [J_{1}(ω_{m} r)^{2} – J_{0}(ω_{m} r)^{2}]  (r < R) 
u(r)  =  [n^{2} I^{2} π^{2} R^{2} ω_{m}^{2} / 8] J_{1}(ω_{m} R)^{2} [Y_{1}(ω_{m} r)^{2} – Y_{0}(ω_{m} r)^{2}]  (r > R)  (Riemannian)  
u(r)  =  ½ n^{2} I^{2}  (r < R) 
u(r)  =  0  (r > R)  (Lorentzian)  
Long solenoid, total energy in magnetic field  
<U>  =  –1/4 n^{2} I^{2} π^{3} R^{3} l ω_{m} [Y_{1}(ω_{m} R)^{2} J_{0}(ω_{m} R) J_{1}(ω_{m} R) – J_{1}(ω_{m} R)^{2} [Y_{0}(ω_{m} R) Y_{1}(ω_{m} R) – (S/R) Y_{0}(ω_{m} S) Y_{1}(ω_{m} S)] ]  
≈  –1/4 n^{2} I^{2} π^{3} R^{3} l ω_{m} [Y_{1}(ω_{m} R)^{2} J_{0}(ω_{m} R) J_{1}(ω_{m} R) – J_{1}(ω_{m} R)^{2} [Y_{0}(ω_{m} R) Y_{1}(ω_{m} R) – cos(2 ω_{m} S) / (π ω_{m} R)] ]  
=  ½ n^{2} I^{2} π^{2} R^{2} l J_{1}(ω_{m} R) Y_{1}(ω_{m} R)  
=  –½ L I^{2}  (Riemannian)  
U  =  π R^{2} l u(0)  
=  ½ n^{2} I^{2} π R^{2} l  
=  ½ L I^{2}  (Lorentzian) 
For an inductor, the potential energy is found by computing the work we need to do to bring the current up from zero to some final steady value I. As we change the current from i to i+di in a time dt, we move a charge i dt against a voltage V = L di/dt. So we have:
Potential energy = ∫_{0}^{I} V(t) i dt = ∫_{0}^{I} L (di/dt) i dt = ½ L I^{2}
As we’d expect, the potential energy computed this way agrees with the total energy in the magnetic field in the Lorentzian case, but is the opposite of the energy in the magnetic field in the Riemannian case.
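The ½ L I² result is independent of how the current is ramped up, since ∫ L (di/dt) i dt = ∫ L i di regardless of the shape of i(t). A short numerical sketch with two arbitrarily chosen ramp shapes:

```python
def ramp_work(L, I_final, ramp, T=1.0, n=20000):
    """Work done against the inductor's EMF while the current follows ramp(t)
    from 0 at t = 0 to I_final at t = T; V = L di/dt at each step."""
    work, dt = 0.0, T / n
    for j in range(n):
        t = j * dt
        i0, i1 = ramp(t), ramp(t + dt)
        di_dt = (i1 - i0) / dt
        work += L * di_dt * 0.5 * (i0 + i1) * dt   # trapezoid in i
    return work

L, I_final = 0.3, 2.0
linear = ramp_work(L, I_final, lambda t: I_final * t)
quadratic = ramp_work(L, I_final, lambda t: I_final * t * t)
expected = 0.5 * L * I_final ** 2
assert abs(linear - expected) < 1e-4
assert abs(quadratic - expected) < 1e-4
```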
Suppose we have a magnetostatic solution of the Riemannian Proca equation, with a four-potential A_{MS} and a source four-current j_{MS}. What we mean by “magnetostatic” is that both A_{MS} and j_{MS} are unchanging in time, and that the fields are solely magnetic, A_{MS}^{t} = 0. We’ve looked at three such solutions: a steady linear current, a magnetic dipole, and a solenoid with a steady current.
Now suppose we take that solution and in place of ω_{m}, the maximum angular frequency of Riemannian light, we substitute a smaller value k, giving us A_{MS, k} and j_{MS, k}, which satisfy the equation:
∂_{x}^{2}A_{MS, k} + ∂_{y}^{2}A_{MS, k} + ∂_{z}^{2}A_{MS, k} + k^{2} A_{MS, k} + j_{MS, k} = 0
We then form an oscillating solution:
A = A_{MS, k} cos(ωt)
j = j_{MS, k} cos(ωt)
with an angular time frequency of ω, such that:
k^{2} + ω^{2} = ω_{m}^{2}
The new A and j will satisfy the RVWS equation:
∂_{x}^{2}A + ∂_{y}^{2}A + ∂_{z}^{2}A + ∂_{t}^{2}A + ω_{m}^{2} A + j
= cos(ωt) [∂_{x}^{2}A_{MS, k} + ∂_{y}^{2}A_{MS, k} + ∂_{z}^{2}A_{MS, k} – ω^{2} A_{MS, k} + ω_{m}^{2} A_{MS, k} + j_{MS, k}]
= cos(ωt) [∂_{x}^{2}A_{MS, k} + ∂_{y}^{2}A_{MS, k} + ∂_{z}^{2}A_{MS, k} + k^{2} A_{MS, k} + j_{MS, k}]
= 0
What about the transverse condition? Our magnetostatic solution satisfies that, with no time component:
∂_{x} A_{MS, k}^{x} + ∂_{y} A_{MS, k}^{y} + ∂_{z} A_{MS, k}^{z} = 0
After multiplying A_{MS, k} by cos(ωt) this will still be true, and of course we still have A^{t}=0. So our new oscillatory solution is a genuine solution of the Riemannian Proca equation.
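A minimal numerical check of this construction, restricted to one spatial dimension purely to keep the sketch short: a static profile satisfying A'' + k² A = 0 away from sources, multiplied by cos(ωt) with k² + ω² = ω_m², satisfies the Riemannian vector wave equation. The residual is computed by central differences:

```python
import math

OMEGA_M = 1.0
K = 0.8
OMEGA = math.sqrt(OMEGA_M ** 2 - K ** 2)   # k^2 + omega^2 = omega_m^2

def A(x, t):
    """1-D analogue: a static profile solving A'' + k^2 A = 0 away from
    sources, multiplied by cos(wt)."""
    return math.cos(K * x) * math.cos(OMEGA * t)

def rvw_residual(x, t, h=1e-3):
    """dx^2 A + dt^2 A + omega_m^2 A, via central differences
    (source-free region, so this should vanish)."""
    d2x = (A(x + h, t) - 2 * A(x, t) + A(x - h, t)) / h ** 2
    d2t = (A(x, t + h) - 2 * A(x, t) + A(x, t - h)) / h ** 2
    return d2x + d2t + OMEGA_M ** 2 * A(x, t)

for x, t in [(0.3, 0.1), (1.7, 2.5), (5.0, 0.9)]:
    assert abs(rvw_residual(x, t)) < 1e-5
```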
In all of the above, we could just as well have used sin(ωt) rather than cos(ωt). It also makes no difference whether we use the t direction in this construction, or any other direction in four-space along which the solution is unchanging and the four-potential’s component is zero.
We can get the same kind of oscillating Lorentzian solution from our original magnetostatic Riemannian solution by a very similar process. In Lorentzian electromagnetism, the four-potential doesn’t appear in Maxwell’s equations, and its only physical significance comes through the electromagnetic field F. But different four-potentials A can give rise to exactly the same F, so we’re free to make certain kinds of changes to A without changing the physics; this is known as gauge freedom. One convenient approach to gauge freedom is to choose an extra condition that A must satisfy, and there are various choices that make the calculations easier in various contexts. One such choice is known as the Lorenz gauge condition — that’s “Lorenz” not “Lorentz”, they’re two completely different people! — which requires:
∂_{x} A^{x} + ∂_{y} A^{y} + ∂_{z} A^{z} + ∂_{t} A^{t} = 0
This is a Lorentzian version of the transverse condition that we impose on every Riemannian vector wave. So the connections between the two kinds of electromagnetism become much clearer if we do our Lorentzian electromagnetism in Lorenz gauge, where Maxwell’s equations are equivalent to the following equations for the four-potential:
Maxwell’s Equations for Four-Potential in Lorenz Gauge  
∂_{x}^{2}A + ∂_{y}^{2}A + ∂_{z}^{2}A – ∂_{t}^{2}A + j  =  0  (LVWS) 
∂_{x} A^{x} + ∂_{y} A^{y} + ∂_{z} A^{z} + ∂_{t} A^{t}  =  0  (Lorenz) 
If we take our original Riemannian magnetostatic solution, A_{MS}, for a four-current j_{MS}, we can get an oscillating Lorentzian solution as follows. We substitute any frequency ω for ω_{m}, to obtain A_{MS, ω} and j_{MS, ω}, then we multiply them by cos(ωt):
A_{L} = A_{MS, ω} cos(ωt)
j_{L} = j_{MS, ω} cos(ωt)
These functions will then satisfy the Lorentzian vector wave equation with source (LVWS):
∂_{x}^{2}A_{L} + ∂_{y}^{2}A_{L} + ∂_{z}^{2}A_{L} – ∂_{t}^{2}A_{L} + j_{L}
= cos(ωt) [∂_{x}^{2}A_{MS, ω} + ∂_{y}^{2}A_{MS, ω} + ∂_{z}^{2}A_{MS, ω} + ω^{2} A_{MS, ω} + j_{MS, ω}]
= 0
Since there is no time component to either four-potential, the fact that A_{MS, ω} meets the transverse condition is enough for A_{L} to meet the Lorenz condition.
If we apply the method we have just described to the four-potential for a steady current through a linear conductor, we obtain the solution for an oscillating standing wave field around a linear conductor carrying an alternating current.
Linear Alternating Current Standing Wave Solution  
Current I_{0} cos(ωt) runs along the z-axis. For the Riemannian solution, k^{2} + ω^{2} = ω_{m}^{2}  
Linear AC magnetic potential  
A_{(3)}(r)  =  –[I_{0} / 4] Y_{0}(kr) cos(ωt) e_{z}  (Riemannian) 
A_{(3)}(r)  =  –[I_{0} / 4] Y_{0}(ωr) cos(ωt) e_{z}  (Lorentzian) 
Linear AC, magnetic and electric fields  
B(r)  =  –[I_{0} k / 4] Y_{1}(kr) cos(ωt) e_{φ}  
E(r)  =  –[I_{0} ω / 4] Y_{0}(kr) sin(ωt) e_{z}  (Riemannian) 
B(r)  =  –[I_{0} ω / 4] Y_{1}(ωr) cos(ωt) e_{φ}  
E(r)  =  –[I_{0} ω / 4] Y_{0}(ωr) sin(ωt) e_{z}  (Lorentzian) 
A standing wave solution has a fixed form in space and simply oscillates in time. This is the kind of wave we’d expect if the wire was sitting in a cylindrical cavity. But what if we want a travelling wave solution instead? A standing wave can be formed as the sum or difference of ingoing and outgoing travelling waves, and conversely the ingoing and outgoing waves can be recovered as the sum or difference of those standing waves, so if we can find a second standing wave solution, we should be able to construct the travelling waves.
For the second standing wave solution, we go back to our original calculation for a linear current, and use the sourceless solution that is completely independent of the strength of the current. This amounts to changing the Bessel function Y_{0} into J_{0} in our potential above. If we also make the new solution 90 degrees out of phase with the original, by changing the cos(ωt) factor to sin(ωt), then add the two solutions together, we end up with an outgoing travelling wave. Since the second solution that we’ve added is sourceless, there’s no need to change the current; this is simply the wave around the same wire with the same current, under different boundary conditions.
For the Lorentzian case, we need to subtract the second solution, not add it, in order to get an outgoing wave.
Linear Alternating Current Outgoing Travelling Wave Solution  
Current I_{0} cos(ωt) runs along the z-axis. For the Riemannian solution, k^{2} + ω^{2} = ω_{m}^{2}  
Linear AC magnetic potential  
A_{(3)}(r)  =  –[I_{0} / 4] [Y_{0}(kr) cos(ωt) + J_{0}(kr) sin(ωt)] e_{z}  (Riemannian) 
A_{(3)}(r)  =  –[I_{0} / 4] [Y_{0}(ωr) cos(ωt) – J_{0}(ωr) sin(ωt)] e_{z}  (Lorentzian) 
Linear AC, magnetic and electric fields  
B(r)  =  –[I_{0} k / 4] [Y_{1}(kr) cos(ωt) + J_{1}(kr) sin(ωt)] e_{φ}  
E(r)  =  –[I_{0} ω / 4] [Y_{0}(kr) sin(ωt) – J_{0}(kr) cos(ωt) ] e_{z}  (Riemannian) 
B(r)  =  –[I_{0} ω / 4] [Y_{1}(ωr) cos(ωt) – J_{1}(ωr) sin(ωt)] e_{φ}  
E(r)  =  –[I_{0} ω / 4] [Y_{0}(ωr) sin(ωt) + J_{0}(ωr) cos(ωt) ] e_{z}  (Lorentzian) 
Linear AC, Poynting vector  
S(r)  =  [I_{0}^{2} k ω / 16] [Y_{1}(kr) cos(ωt) + J_{1}(kr) sin(ωt)] [Y_{0}(kr) sin(ωt) – J_{0}(kr) cos(ωt) ] e_{r} 
(Riemannian) 
S(r)  =  –[I_{0}^{2} ω^{2} / 16] [Y_{1}(ωr) cos(ωt) – J_{1}(ωr) sin(ωt)] [Y_{0}(ωr) sin(ωt) + J_{0}(ωr) cos(ωt) ] e_{r} 
(Lorentzian) 
<S(r)>  =  [I_{0}^{2} ω / (16 π r)] e_{r}  (Common) 
Linear AC, average power radiated (per unit length of wire)  
<P>  =  I_{0}^{2} ω / 8  (Common) 
We can see most clearly that these are outgoing travelling waves from <S(r)>, the Poynting vector averaged over one time cycle, where it’s an obviously positive value times the unit vector pointing radially out from the wire.
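We can also confirm <S(r)> directly: time-averaging the Riemannian Poynting vector from the table and comparing it with I_0² ω / (16 π r) comes down to the Wronskian identity J_1(x) Y_0(x) − J_0(x) Y_1(x) = 2/(πx). A self-contained Python sketch, with the Bessel functions computed from their integral representations and all parameter values chosen arbitrarily:

```python
import math

def _simpson(f, a, b, n=4000):
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def bessel_j(n, x):
    # Abramowitz & Stegun 9.1.21
    return _simpson(lambda t: math.cos(n * t - x * math.sin(t)), 0.0, math.pi) / math.pi

def bessel_y(n, x):
    # Abramowitz & Stegun 9.1.22, with the infinite integral truncated
    osc = _simpson(lambda t: math.sin(x * math.sin(t) - n * t), 0.0, math.pi)
    tail = _simpson(lambda t: (math.exp(n * t) + (-1) ** n * math.exp(-n * t))
                    * math.exp(-x * math.sinh(t)), 0.0, 8.0)
    return (osc - tail) / math.pi

I0, OMEGA_M, OMEGA, r = 1.0, 1.0, 0.6, 2.0
K = math.sqrt(OMEGA_M ** 2 - OMEGA ** 2)   # k^2 + omega^2 = omega_m^2
kr = K * r
J0, J1 = bessel_j(0, kr), bessel_j(1, kr)
Y0, Y1 = bessel_y(0, kr), bessel_y(1, kr)

def S_radial(t):
    """Radial component of the Riemannian Poynting vector from the table."""
    wt = OMEGA * t
    return (I0 ** 2 * K * OMEGA / 16
            * (Y1 * math.cos(wt) + J1 * math.sin(wt))
            * (Y0 * math.sin(wt) - J0 * math.cos(wt)))

period = 2 * math.pi / OMEGA
mean = sum(S_radial(i * period / 400) for i in range(400)) / 400
expected = I0 ** 2 * OMEGA / (16 * math.pi * r)
assert abs(mean - expected) / expected < 1e-4
```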
It might seem a bit puzzling that in the Riemannian case the angular spatial frequency k vanishes from the final results; after all, we expect the speed of these waves to be k / ω, and the density of energy flow to be that speed times the energy density. But it turns out that the energy density is inversely proportional to k, which is not hard to see when you look at the four-potential for large r, which is inversely proportional to the square root of k thanks to the asymptotic expansion of the Bessel functions:
A(r) ≈ I_{0} cos(kr+ωt+π/4) / [2 √(2 π kr)] e_{z}
The analysis of energy flow in a plane wave we carried out previously then gives the same average Poynting vector from this plane wave as we derived from the precise solution.
In the Lorentzian case, the power being radiated means that work must be done to maintain the current at a fixed amplitude. In the Riemannian case, work in the conventional sense must be done by the current, to keep it from growing larger! Strange as this is, it’s exactly what we’d expect, given that energy in the electromagnetic field will have the opposite sense to kinetic and potential energy.
We’ll use the same method to construct the field for an oscillating magnetic dipole, based on our previous result for a static dipole. We won’t show either of the standing wave solutions, we’ll skip straight to the outgoing travelling wave.
Oscillating Magnetic Dipole Outgoing Travelling Wave Solution  
Magnetic dipole moment is μ cos(ωt). For the Riemannian solution, k^{2} + ω^{2} = ω_{m}^{2}  
Oscillating Magnetic Dipole potential  
A_{(3)}(r)  =  [μ × r / (4 π r^{3})] [cos(kr+ωt) + kr sin(kr+ωt)]  (Riemannian) 
A_{(3)}(r)  =  [μ × r / (4 π r^{3})] [cos(ω(r–t)) + ωr sin(ω(r–t))]  (Lorentzian) 
Oscillating Magnetic Dipole, magnetic and electric fields  
B(r)  =  [(3 (μ · r) r – r^{2} μ) / (4 π r^{5})] [cos(kr+ωt) + kr sin(kr+ωt)] – [k^{2} (μ × r) × r / (4 π r^{3})] cos(kr+ωt) 

E(r)  =  [ω μ × r / (4 π r^{3})] [sin(kr+ωt) – kr cos(kr+ωt)]  (Riemannian) 
B(r)  =  [(3 (μ · r) r – r^{2} μ) / (4 π r^{5})] [cos(ω(r–t)) + ωr sin(ω(r–t))] – [ω^{2} (μ × r) × r / (4 π r^{3})] cos(ω(r–t)) 

E(r)  =  [ω μ × r / (4 π r^{3})] [ωr cos(ω(r–t)) – sin(ω(r–t))]  (Lorentzian) 
Oscillating Magnetic Dipole Poynting vector averaged over one cycle  
<S(r)>  =  [ k^{3} ω ((μ · μ) – (μ · e_{r})^{2}) / (32 π^{2} r^{2})] e_{r}  (Riemannian) 
<S(r)>  =  [ ω^{4} ((μ · μ) – (μ · e_{r})^{2}) / (32 π^{2} r^{2})] e_{r}  (Lorentzian) 
Oscillating Magnetic Dipole Total power averaged over one cycle  
<P>  =  k^{3} ω (μ · μ) / (12 π)  (Riemannian) 
<P>  =  ω^{4} (μ · μ) / (12 π)  (Lorentzian) 
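As a sanity check, we can integrate the cycle-averaged radial Poynting component over a sphere numerically and compare the result with the closed-form total power. A minimal sketch in Python, with k, ω and μ chosen arbitrarily (here so that k² + ω² = 1):

```python
import math

# Cycle-averaged radial Poynting component of the Riemannian
# oscillating magnetic dipole, from the table above.
k, w, mu = 0.8, 0.6, 2.5   # arbitrary illustrative values with k**2 + w**2 = 1

def S_r(theta, r):
    # <S> . e_r = k^3 w (mu.mu - (mu.e_r)^2) / (32 pi^2 r^2), with mu.e_r = mu cos(theta)
    return k**3 * w * (mu**2 - (mu*math.cos(theta))**2) / (32*math.pi**2*r**2)

# integrate over a sphere of radius r (midpoint rule in theta)
r, N = 10.0, 20000
P = sum(S_r((i + 0.5)*math.pi/N, r) * 2*math.pi*math.sin((i + 0.5)*math.pi/N) * r*r
        for i in range(N)) * (math.pi/N)

P_exact = k**3 * w * mu**2 / (12*math.pi)   # closed-form total power from the table
```

The r-dependence cancels, as it must for a steady outgoing energy flux.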
If we look at the asymptotic form of the Riemannian four-potential for large r, we have:
A(r) ≈ [ k sin(kr+ωt) / (4 π r) ] μ × e_{r}
The polarisation is always transverse, with the four-potential pointing around the dipole axis. The magnitude is greatest perpendicular to the dipole, and drops off to zero on the axis itself. The angular distribution of the radiated power is precisely the same as in the Lorentzian case.
In the Riemannian case, the energy density averaged over a cycle is proportional to k^{2} ω^{2}. Multiplied by the speed of the wave, k / ω, that gives the k^{3} ω frequency dependence for the power that we see in the table, and plotted on the right.
We can apply the same method to adapt our magnetostatic description of a solenoid carrying a steady current to one carrying an alternating current. To get a source-free magnetostatic solution, we change the factor of Y_{1} in the exterior part of the steady-current solenoid solution to J_{1}, and continue the same solution all the way in to the z-axis. Combining the two standing wave solutions gives us an outgoing travelling wave solution.
Long solenoid (AC) Outgoing Travelling Wave Solution  
Solenoid has radius R, current I_{0} cos(ωt) and n windings per unit length. Axis of solenoid coincides with the z-axis. For the Riemannian solution, k^{2} + ω^{2} = ω_{m}^{2}  
Long solenoid (AC), magnetic potential  
A_{(3)}(r)  =  [expression missing]  (Riemannian) 
A_{(3)}(r)  =  [expression missing]  (Lorentzian) 
Long solenoid (AC), magnetic and electric fields  
B(r)  =  [expression missing] 
E(r)  =  [expression missing]  (Riemannian) 
B(r)  =  [expression missing] 
E(r)  =  [expression missing]  (Lorentzian) 
Long solenoid (AC) Poynting vector averaged over one cycle  
<S(r)>  =  [expression missing]  (Riemannian) 
<S(r)>  =  [expression missing]  (Lorentzian) 
Long solenoid (AC) Total radiated power averaged over one cycle, for a coil of length l  
<P_{Radiated}>  =  ½ π^{2} I_{0}^{2} l n^{2} R^{2} ω J_{1}(kR)^{2}  (Riemannian) 
<P_{Radiated}>  =  ½ π^{2} I_{0}^{2} l n^{2} R^{2} ω J_{1}(ωR)^{2}  (Lorentzian) 
Long solenoid (AC) Total magnetic flux within coil  
Φ  =  –π^{2} I_{0} n R^{2} J_{1}(kR) [J_{1}(kR) sin(ωt)+Y_{1}(kR) cos(ωt)]  (Riemannian)  
Φ  =  π^{2} I_{0} n R^{2} J_{1}(ωR) [J_{1}(ωR) sin(ωt)–Y_{1}(ωR) cos(ωt)]  (Lorentzian)  
Long solenoid (AC) Voltage across a coil of length l  
V  =  π^{2} I_{0} l n^{2} R^{2} ω J_{1}(kR) [Y_{1}(kR) sin(ωt) – J_{1}(kR) cos(ωt)]  (Riemannian) 
V  =  π^{2} I_{0} l n^{2} R^{2} ω J_{1}(ωR) [Y_{1}(ωR) sin(ωt) + J_{1}(ωR) cos(ωt)]  (Lorentzian) 
Long solenoid (AC) Average electrical power expended on a coil of length l  
<P_{Electrical}>  =  –½ π^{2} I_{0}^{2} l n^{2} R^{2} ω J_{1}(kR)^{2}  (Riemannian) 
<P_{Electrical}>  =  ½ π^{2} I_{0}^{2} l n^{2} R^{2} ω J_{1}(ωR)^{2}  (Lorentzian) 
The first interesting feature of the Riemannian solution is that the spatial angular frequency k now sets the scale for the geometry of the solenoid, in place of ω_{m} when the current is unchanging. While the direct-current behaviour of a solenoid would be extremely sensitive to any imperfections comparable to the minimum wavelength of light — and a realistic device might have a wire whose width spanned several wavelengths, so that the whole structure would include a series of negative and positive inductances that largely cancelled each other out — we now have the possibility of a much larger wavelength, and a system that’s both free of cancellations and less sensitive to the precise shape of the coil.
When we treated the DC solenoid, we noted that it could possess either a positive or negative inductance, and hence it could either oppose or assist changes in current flow. However, in an AC context that distinction is less important; what matters is the power expended over a full cycle, and it’s guaranteed by our choice of an outgoing wave that the Riemannian solenoid will act as a source of electrical power, while the Lorentzian equivalent will require an expenditure of power.
The graph on the right shows the current, voltage and power for the Riemannian and Lorentzian case, for three sizes of coil. Here J_{1} and Y_{1} are abbreviations for J_{1}(kR) and Y_{1}(kR) in the Riemannian case or J_{1}(ωR) and Y_{1}(ωR) in the Lorentzian case. The sign convention we’re using for the voltage here is such that an ordinary resistor would have a voltage exactly in phase with the current, so the power computed as the product VI is electrical energy being dissipated.
In the Lorentzian case, the voltage is never more than 90 degrees out of phase with the current. In the low-frequency DC limit, J_{1}(ωR) Y_{1}(ωR) ≈ –1/π to first order, and the voltage leads the current by exactly 90 degrees. If we think of an inductor at least a few millimetres across, carrying AC frequencies in the kilohertz range or less, the wavelength is vastly larger than the size of the solenoid, so that “limiting case” is actually a good approximation for a lot of common AC circuits. As the frequency becomes higher, though, Y_{1}(ωR) eventually becomes zero, putting the voltage in phase with the current, and then positive, so that the voltage lags the current. But whatever the values of J_{1}(ωR) and Y_{1}(ωR), the average power dissipated over each cycle is always either positive or zero.
In the Riemannian case, the voltage is never less than 90 degrees out of phase with the current. The DC limit involves the maximum possible spatial frequency and behaviour that’s highly sensitive to the coil’s geometry. It’s only in the high (time) frequency limit that the wavelength becomes large, J_{1}(kR) Y_{1}(kR) ≈ –1/π, and the voltage leads the current by 90 degrees. But at all frequencies and coil sizes, the average power dissipated is negative or zero – because any field energy radiated away must be accompanied, in the Riemannian case, by an increase in conventional energy.
The trick we used to get oscillating solutions from magnetostatic ones won’t work quite so easily for electrostatic solutions. If we take a pure electrostatic potential, φ_{ES}, adapt it to a new constant, k rather than ω_{m}, and multiply it by cos(ωt), then it will solve the RVWS for a source equal to the original charge density multiplied by cos(ωt), where as always we have k^{2} + ω^{2} = ω_{m}^{2}. But it won’t satisfy the transverse condition, because the time component of the four-potential, which is now –φ_{ES, k} cos(ωt), has a non-zero time derivative, but there are no spatial components to the four-potential with derivatives of their own that can make the divergence sum to zero.
In the case of a static electric dipole, though, there’s a fairly easy trick to get around this. The static dipole potential is just the opposite of the spatial derivative of the Coulomb potential along the dipole axis, say the z-axis. So if we make the z-component of the four-potential equal to the Coulomb potential (also adapted for the constant k rather than ω_{m}) times ω sin(ωt), its spatial derivative in the z direction will cancel out the time derivative of the four-potential’s time component, satisfying the transverse condition. The extra term will also satisfy the RVWS with a source modified in the same way, and charge will automatically be conserved. Specifically, this adds a point-like oscillating current to the source, ninety degrees out of phase from the oscillations in the strength of the dipole.
As before, we can build two standing waves with this approach, and then combine them to get an outgoing travelling wave. And as before, we can adapt the method to get Lorentzian solutions as well.
Oscillating Electric Dipole Outgoing Travelling Wave Solution  
Electric dipole moment is p cos(ωt). For the Riemannian solution, k^{2} + ω^{2} = ω_{m}^{2}  
Oscillating Electric Dipole potentials  
φ(r)  =  –[p · r / (4 π r^{3})] [cos(kr+ωt) + kr sin(kr+ωt)]  
A_{(3)}(r)  =  –[p ω / (4 π r)] sin(kr+ωt)  (Riemannian) 
φ(r)  =  [p · r / (4 π r^{3})] [cos(ω(r–t)) + ωr sin(ω(r–t))]  
A_{(3)}(r)  =  [p ω / (4 π r)] sin(ω(r–t))  (Lorentzian) 
Oscillating Electric Dipole, electric and magnetic fields  
E(r)  =  –[(3 (p · r) r – r^{2} p) / (4 π r^{5})] [cos(kr+ωt) + kr sin(kr+ωt)] + [(k^{2} (p · r) r + ω^{2} r^{2} p) / (4 π r^{3})] cos(kr+ωt) 

B(r)  =  [ω p × r / (4 π r^{3})] [kr cos(kr+ωt) – sin(kr+ωt)]  (Riemannian) 
E(r)  =  [(3 (p · r) r – r^{2} p) / (4 π r^{5})] [cos(ω(r–t)) + ωr sin(ω(r–t))] – [ω^{2} (p × r) × r / (4 π r^{3})] cos(ω(r–t)) 

B(r)  =  [ω p × r / (4 π r^{3})] [sin(ω(r–t)) – ωr cos(ω(r–t))]  (Lorentzian) 
Oscillating Electric Dipole Poynting vector averaged over one cycle  
<S(r)>  =  [ (k ω^{3} (p · p) + k^{3} ω (p · e_{r})^{2}) / (32 π^{2} r^{2})] e_{r}  (Riemannian) 
<S(r)>  =  [ ω^{4} ((p · p) – (p · e_{r})^{2}) / (32 π^{2} r^{2})] e_{r}  (Lorentzian) 
Oscillating Electric Dipole Total power averaged over one cycle  
<P>  =  (k^{3} ω + 3 k ω^{3}) (p · p) / (24 π)  (Riemannian) 
<P>  =  ω^{4} (p · p) / (12 π)  (Lorentzian) 
The Riemannian solution here has a somewhat different power-frequency relationship than the oscillating magnetic dipole. It also provides the first explicit source we’ve seen of longitudinally polarised waves.
We can write the Riemannian four-potential for large r as:
A(r) ≈ [sin(kr+ωt) / (4 π r)] [k (p · e_{r}) e_{t} – ω p]
We can split this into transverse and longitudinal parts:
A_{T}(r) ≈ [sin(kr+ωt) / (4 π r)] ω [(p · e_{r}) e_{r} – p]
A_{L}(r) ≈ [sin(kr+ωt) / (4 π r)] (p · e_{r}) [k e_{t} – ω e_{r}]
The transverse part has no time component and is orthogonal to e_{r}, the direction in space in which the wave is propagating. Using our analysis of energy in plane waves, if we write θ for the angle between the dipole vector and the direction of the wave in space, the local energy density in the transverse and longitudinal modes, averaged over one time cycle, is:
<u_{T}(r)> ≈ [1 / (32 π^{2} r^{2})] (p · p) ω^{4} sin^{2}(θ)
<u_{L}(r)> ≈ [1 / (32 π^{2} r^{2})] (p · p) (ω^{2}+k^{2}) ω^{2} cos^{2}(θ)
This shows us that the transverse waves are strongest perpendicular to the dipole axis, dropping to zero on the axis itself, while the longitudinal waves have the opposite pattern: strongest on the axis, dropping to zero perpendicular to it. The angular distribution of energy for the transverse waves matches that of the Lorentzian case.
If we multiply these energy densities by the speed of the wave, k / ω, and integrate over the whole sphere, we get the total power in each form:
<P_{T}> = k ω^{3} (p · p) / (12 π)
<P_{L}> = (k ω^{3} + k^{3} ω) (p · p) / (24 π)
Of course these two values add up to give the total power radiated, shown in the table.
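It’s easy to confirm that bookkeeping numerically. A quick sketch, with arbitrary illustrative values for k, ω and p, checking that <P_{T}> and <P_{L}> really do sum to the total power in the table:

```python
import math

# Transverse, longitudinal and total radiated power of the Riemannian
# oscillating electric dipole, using the expressions from the text.
k, w, p = 0.8, 0.6, 1.7    # arbitrary illustrative values

P_T = k * w**3 * p**2 / (12*math.pi)
P_L = (k*w**3 + k**3*w) * p**2 / (24*math.pi)
P_total = (k**3*w + 3*k*w**3) * p**2 / (24*math.pi)   # total power from the table
```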
To look at the behaviour of our spherical capacitor with an alternating current charging and discharging the two shells, we need to fit the current somewhere into the picture. The only way we can do this without breaking the spherical symmetry is by having a symmetrically distributed current run directly from shell to shell, through the gap between them. To do this literally would obviously disrupt the functioning of the capacitor, but we can treat this model as an approximation to an arrangement where a large number of flat-plate capacitors are being charged and discharged through wires that run beside, but not actually within, the devices themselves. Arranging a large number of such circuits so they all fan out around a central point will produce fields very similar to those in our model.
We find the Riemannian four-potential due to the oscillating charge on the spheres by modifying the electrostatic solution, substituting k for ω_{m} and multiplying by cos(ωt). Then we add in a four-potential for the current flowing back and forth between the shells, which we can find first as a magnetostatic solution with the Biot–Savart Law, and then convert to an oscillatory solution with our standard method. Charge is conserved, since the current we’ve added accounts for the oscillating charge on the shells, and so the four-potential obeys the transverse condition and gives us a valid solution. Then as usual, we need to combine this with a sourceless standing wave to get the outgoing travelling wave solution.
Because the four-potential is radially symmetrical, there is no magnetic field. In the Lorentzian case, that means there can be no radiation, while in the Riemannian case there is purely longitudinal radiation.
Spherical capacitor (AC) Outgoing Travelling Wave Solution  
Inner shell of radius R_{1}, total charge –Q_{0} cos(ωt). Outer shell of radius R_{2}, total charge +Q_{0} cos(ωt). For the Riemannian solution, k^{2} + ω^{2} = ω_{m}^{2}  
Spherical capacitor (AC), potentials  
φ(r)  =  [expression missing] 
A_{(3)}(r)  =  [expression missing]  (Riemannian) 
φ(r)  =  [expression missing] 
A_{(3)}(r)  =  0  (Lorentzian) 
Spherical capacitor (AC), electric field  
E(r)  =  [expression missing]  (Riemannian) 
E(r)  =  [expression missing]  (Lorentzian) 
Spherical capacitor (AC) Poynting vector averaged over one cycle  
<S(r)>  =  [expression missing]  (Riemannian) 
<S(r)>  =  0  (Lorentzian)  
Spherical capacitor (AC) Total radiated power averaged over one cycle  
<P_{Radiated}>  =  [Q_{0}^{2} ω_{m}^{2} ω / (8 π k^{3} R_{1}^{2} R_{2}^{2})] [(R_{2} sin(kR_{1})–R_{1} sin(kR_{2}))^{2}]  (Riemannian) 
<P_{Radiated}>  =  0  (Lorentzian)  
Spherical capacitor (AC), voltage between shells  
V  =  [Q_{0} / (4 π k^{3} R_{1}^{2} R_{2}^{2})] [ ω_{m}^{2} sin(ωt) (R_{2} sin(kR_{1})–R_{1} sin(kR_{2}))^{2} – cos(ωt) (ω_{m}^{2} (R_{2}^{2} sin(kR_{1}) cos(kR_{1})+R_{1} cos(kR_{2}) (R_{1} sin(kR_{2})–2 R_{2} sin(kR_{1}))) + ω^{2} kR_{1}R_{2} (R_{1}–R_{2})) ]  (Riemannian) 
V  =  Q_{0} (R_{2}–R_{1}) cos(ωt) / (4 π R_{1} R_{2})  (Lorentzian)  
Spherical capacitor (AC) Average electrical power expended  
<P_{Electrical}>  =  –[Q_{0}^{2} ω_{m}^{2} ω / (8 π k^{3} R_{1}^{2} R_{2}^{2})] [(R_{2} sin(kR_{1})–R_{1} sin(kR_{2}))^{2}]  (Riemannian) 
<P_{Electrical}>  =  0  (Lorentzian) 
As usual, the electrical power expended in the Riemannian case is the opposite of the total power radiated, so work needs to be extracted from the circuit to keep the peak amplitude of the oscillating current unchanged. The exact relationship between the voltage/current phase difference and the frequency of the oscillations will be complicated, but the fact that there is always a negative (or at worst, zero) power expenditure by the circuit means that the voltage will always be at least 90 degrees out of phase with the current.
In the Lorentzian case, because the geometry prevents any radiative loss, the voltage and current will always be precisely 90 degrees out of phase.
In basic circuit theory as applied in our own universe, it’s usually assumed that capacitors and inductors have fixed values of capacitance and inductance that are independent of the frequency of the current passing through them. This is a reasonable assumption, because in the Lorentzian universe moderate time frequencies correspond to wavelengths much larger than the dimensions of typical electronic components.
But that’s not to say that the behaviour of a circuit containing these devices is itself independent of frequency. For a capacitor, what the capacitance C fixes is the ratio between the charge stored in the device and the voltage across the plates, but when we look at the relationship between voltage and current, rather than charge, the frequency of the current enters into the relationship through the derivative of the oscillating charge. In what follows, we will write Q_{0}, I_{0} and V_{0} for the amplitude of an oscillating charge, current or voltage whose instantaneous value follows a harmonic wave.
Q = Q_{0} sin(ωt)
V = Q / C = [Q_{0} / C] sin(ωt)
I = dQ/dt = [ω Q_{0}] cos(ωt)
V_{0} = I_{0} / (ωC)
With an inductor, L fixes the ratio between voltage and the rate of change of current, so we have:
I = I_{0} cos(ωt)
dI/dt = [–ω I_{0}] sin(ωt)
V = L dI/dt = [–Lω I_{0}] sin(ωt)
V_{0} = (Lω) I_{0}
If we define the capacitative reactance X_{C} and inductive reactance X_{L} as follows:
X_{C} = 1 / (ωC)
X_{L} = ωL
then X_{C} and X_{L} play a role analogous to resistance, with:
V_{0} = I_{0} R, for a resistor
V_{0} = I_{0} X_{C}, for a capacitor
V_{0} = I_{0} X_{L}, for an inductor.
However, the instantaneous values of the voltages are different in these three cases: for a resistor the voltage is in phase with the current, for a capacitor the voltage lags the current by 90 degrees (it’s a positive multiple of sine, if the current is a cosine), and for an inductor the voltage leads the current by 90 degrees (it’s a negative multiple of sine, if the current is a cosine). This means that if all three devices are connected in series, and so the same current is flowing through all of them, the voltage across the capacitor will be 180 degrees out of phase with that across the inductor, which is to say it will precisely oppose it. So the net reactance:
X = X_{L} – X_{C}
dictates the combined voltage for those two components, 90 degrees out of phase with the current.
Next, we define the impedance, which includes the effect of resistance, and the overall phase difference φ:
Z = √(R^{2} + X^{2})
φ = arctan(X / R)
cos(φ) = R / Z
sin(φ) = X / Z
These two quantities let us describe the combined voltage across the three devices, the capacitor, inductor and resistor wired in series:
V = R I_{0} cos(ωt) – X I_{0} sin(ωt)
= I_{0} Z [cos(φ) cos(ωt) – sin(φ) sin(ωt)]
= I_{0} Z cos(ωt+φ)
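This trigonometric identity is easy to verify numerically. A small sketch, with arbitrary illustrative values for R, X and I_{0}:

```python
import math

# Check the phasor identity V = I0 Z cos(wt + phi) against
# V = R I0 cos(wt) - X I0 sin(wt), sampled over one full cycle.
R, X, I0, w = 75.0, -40.0, 0.3, 2*math.pi*1000   # arbitrary illustrative values
Z = math.hypot(R, X)            # sqrt(R^2 + X^2)
phi = math.atan2(X, R)          # arctan(X/R), in the right quadrant for R > 0

max_err = 0.0
for n in range(100):
    t = n / (100 * 1000.0)      # 100 samples spanning one cycle of the 1 kHz wave
    lhs = R*I0*math.cos(w*t) - X*I0*math.sin(w*t)
    rhs = I0*Z*math.cos(w*t + phi)
    max_err = max(max_err, abs(lhs - rhs))
```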
Clearly the impedance will be at a minimum when X is zero. If we call the angular frequency when this happens ω_{res}, then:
X = X_{L} – X_{C} = 0
ω_{res}L – 1 / (ω_{res}C) = 0
ω_{res} = 1 / √(LC)
At the resonant frequency ω_{res}, the inductance and capacitance cancel each other exactly, and the amplitude of the current, I_{0}, hits a peak that is determined solely by the resistance.
To give an example, suppose we have a solenoid 10 cm long, 5 cm in radius, and with one turn every millimetre. In SI units, its DC inductance will be 0.987 millihenries.
In series with this we add a spherical capacitor, with inner radius 5 cm and outer radius 5.01 cm. Its DC capacitance will be 2.787 nanofarads.
We connect the solenoid and the capacitor in series, along with a 1000 ohm resistor. Our formula gives us the angular frequency of the resonance, which corresponds to an ordinary frequency of ν = 95.96 kilohertz.
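These figures can be checked directly. A short sketch in SI units, assuming the usual textbook expressions L = μ₀ n² π r² l for a long solenoid and C = 4π ε₀ R₁R₂/(R₂ − R₁) for a spherical capacitor:

```python
import math

mu0 = 4e-7*math.pi            # vacuum permeability, SI
eps0 = 8.8541878128e-12       # vacuum permittivity, SI

# Solenoid: length 0.10 m, radius 0.05 m, n = 1000 turns per metre
n, length, radius = 1000.0, 0.10, 0.05
L_ind = mu0 * n**2 * math.pi * radius**2 * length      # long-solenoid formula

# Spherical capacitor: inner radius 0.05 m, outer radius 0.0501 m
R1, R2 = 0.05, 0.0501
C = 4*math.pi*eps0 * R1*R2/(R2 - R1)

# Resonant frequency of the series circuit
nu = 1.0/(2*math.pi*math.sqrt(L_ind*C))
```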
The plot shows the current that will flow through these three components for a given voltage as the frequency is varied; the frequency scale is logarithmic, and the vertical axis has been normalised so that the current through the 1000 ohm resistor alone would give a value of 1. Just as we’d expect, there’s a peak around 10^{5} Hertz. There will be other resonances that aren’t accounted for in the approximation where we treat the inductance and capacitance as frequencyindependent, but they won’t appear until the GHz range, where the wavelength starts to approach the dimensions of the solenoid.
Now, suppose that instead of connecting our three components to an oscillating voltage, we charge up the capacitor to a charge of Q_{i} and then just close the circuit, allowing current to flow through it. What happens?
The sum of all the voltages around the circuit must be zero:
V_{C} + V_{L} + V_{R} = 0
Q / C + L dI/dt + R I = 0
Q / C + L d^{2}Q/dt^{2} + R dQ/dt = 0
d^{2}Q/dt^{2} + 2 β dQ/dt + ω_{res}^{2} Q = 0
where we have defined β=R/(2L). Given Q=Q_{i} and dQ/dt = 0 at t=0, and assuming β < ω_{res}, this differential equation has the solution:
Q(t) = Q_{i} exp(–βt) [cos(√(ω_{res}^{2} – β^{2}) t) + (β / √(ω_{res}^{2} – β^{2})) sin(√(ω_{res}^{2} – β^{2}) t)]
This describes an oscillating function undergoing an exponential decay. The frequency of the oscillations will be less than the resonant frequency ω_{res} at which the circuit responds with the least impedance to a driving voltage, though as the resistance is reduced the oscillations will approach that frequency.
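We can verify that this closed-form Q(t) really satisfies the differential equation, using the component values from the example above (for which β < ω_res, so the circuit is underdamped). A sketch using finite-difference derivatives:

```python
import math

# Check that Q(t) solves Q'' + 2 beta Q' + w_res^2 Q = 0
# for the example component values from the text.
L_ind, C, R = 0.987e-3, 2.787e-9, 1000.0
w_res = 1.0/math.sqrt(L_ind*C)
beta = R/(2*L_ind)
wd = math.sqrt(w_res**2 - beta**2)   # real, since beta < w_res here (underdamped)
Qi = 1e-9                            # initial charge (arbitrary)

def Q(t):
    return Qi*math.exp(-beta*t)*(math.cos(wd*t) + (beta/wd)*math.sin(wd*t))

# central-difference check of the differential equation at a few times
h = 1e-9
max_resid = 0.0
for t in (0.0, 2e-6, 1e-5):
    d1 = (Q(t+h) - Q(t-h))/(2*h)
    d2 = (Q(t+h) - 2*Q(t) + Q(t-h))/(h*h)
    max_resid = max(max_resid, abs(d2 + 2*beta*d1 + w_res**2*Q(t)))
```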
The concepts we discussed in the previous section can be adapted to analogous situations in the Riemannian universe, but there are some very significant changes. The first is that it will very rarely be reasonable to assume that L and C themselves are independent of ω. In the Riemannian universe wavelengths are at their minimum for static fields, and only become larger with increasing time frequencies. The increase in wavelength comes late, and then occurs very abruptly; the wavelength isn’t double the minimum until ω = 0.866 ω_{m}, hits ten times the minimum at ω = 0.995 ω_{m}, and a hundred times at ω = 0.99995 ω_{m}. So an inductor or capacitor in a circuit operating at any but the very highest frequencies will have a current-voltage relationship dictated by the interaction of the field’s wavelength with the geometry of the component, and hence dependent on the frequency in a far more complex fashion than the reactance-frequency formulas we’ve given above for the Lorentzian case.
Furthermore, the electromagnetic radiation from Riemannian inductors and capacitors will give them a significant frequencydependent negative resistance. This puts a frequencydependent term into the resistance part of the impedance and the phase difference:
X = X_{L} – X_{C}
R_{tot} = R – R_{rad}
Z = √(R_{tot}^{2} + X^{2})
φ = arctan(X / R_{tot})
where everything here but the ordinary resistance R is now frequencydependent (and even that is a simplifying assumption). So we can no longer guarantee that the frequency at which X = 0 will give us the minimum impedance, Z.
Nevertheless, these extra complications mean that even a very simple circuit can have interesting behaviour. Suppose we have a solenoid, identical to the one we described in the previous section: 10 cm long, 5 cm in radius, and with one turn every millimetre. To apply Riemannian physics to it, we will assume a value for ω_{m} of 2 π × 10^{15} Hz.
The reactance and resistance of the solenoid are plotted in the diagram on the right. Note that all the wavelengths here correspond to time frequencies extremely close to ω_{m}. Because the reactance crosses zero for the solenoid alone, there is no need to add a capacitor to the circuit; if we wired up this solenoid with an ordinary resistor that balanced the solenoid’s negative resistance at the longest of the wavelengths where its reactance was zero, a closed circuit containing just those two components would resonate at that wavelength, in principle sustaining the current indefinitely. The solenoid would emit electromagnetic waves, bringing ordinary energy into the circuit, and the resistor would turn that energy into heat. This would not violate any of the laws of thermodynamics: energy is conserved, because electromagnetic field energy has the opposite sense to thermal/kinetic energy, and entropy increases, because of the radiation produced.
Amazingly enough, there is even a degree of stability built into the behaviour of this extremely simple circuit. If the current began to increase exponentially, that would entail its frequency spreading out, and although the resonance point isn’t quite at the wavelength of minimum resistance, the difference in time frequency here is so tiny that it would only take a very small growth constant in the exponential to spread out the frequency sufficiently to lower the rate at which the solenoid was feeding energy into the circuit. Of course the same effect would exacerbate the damping of the current if it began to drop, so it would require an additional regulatory mechanism (such as a nonlinear resistor, with a resistance that increased at higher currents) to keep the circuit harvesting energy at a constant rate.
So far, everything we’ve said about electromagnetism has been expressed in terms of Cartesian coordinates in flat space (or in the Lorentzian case, flat spacetime). But since we don’t actually expect the Riemannian universe to be perfectly flat, any more than our own universe, it will be helpful to understand how the equations can be reformulated to work in curved space. This will have the added benefit of allowing us to deal easily with non-Cartesian coordinates in flat space.
If you haven’t done any calculations in curved spacetime before, the quick summary that follows might be bewildering. For a much gentler introduction, try this article on the basics of general relativity.
In general-relativistic Lorentzian physics, when converting an equation from flat spacetime to curved spacetime, the rule of thumb is to convert partial derivatives to covariant derivatives. When we take a derivative of a vector field in flat space, we are implicitly treating vectors at different points as belonging to the same vector space; if we say a vector field has zero derivative, and hence is constant, that claim really only makes sense if we can take a vector at point A and compare it with another vector at point B. But on the curved surface of the Earth, say, how do we compare the vector space of possible velocities across the ground in London with the same kind of vector space in Nairobi? Even if we step away from the Earth’s surface and think of these vectors as three-dimensional, that doesn’t let us match up all the velocities at one location with velocities at another — and in a curved universe, we can’t “step away” at all.
The resolution involves supplementing the idea of a derivative with a geometrical structure known as the Levi-Civita connection, which gives us a notion of parallel transport of vectors along a curve: that is, if we travel along a curve, we can “carry” a vector from the start of the curve along with us, keeping it “parallel” with its original direction, according to the connection. The Levi-Civita connection has the virtue of being compatible with the metric: the metric defines a dot product on curved space, and the Levi-Civita connection lets you parallel-transport two vectors while preserving their dot product. The covariant derivative is then the derivative of a vector field relative to the Levi-Civita connection: parallel transport along a curve provides the standard of an “unchanging” vector, and the covariant derivative measures any deviation from that standard.
To make this concrete, suppose we have a vector field v on a curved space, with components in some coordinate basis of v^{b}. Then the covariant derivative of this vector field in one of the coordinate directions, a, is given by:
∇_{a} v^{b} = ∂_{a} v^{b} + Γ^{b}_{ca} v^{c}
where Γ is the Levi-Civita connection, telling us how to correct the partial derivative to produce a derivative that respects parallel transport. If g_{ab} and g^{ab} are the components of the metric in our coordinate system, the Levi-Civita connection Γ has components (often referred to as Christoffel symbols):
Γ^{b}_{ca} = ½ g^{bk} [ ∂_{a}g_{kc} + ∂_{c}g_{ka} – ∂_{k}g_{ca}]
Note that Γ is symmetric in its last two indices: Γ^{b}_{ca} = Γ^{b}_{ac}.
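To make this less abstract, here is a small numerical sketch that evaluates the Christoffel symbols for a familiar curved space, the unit 2-sphere with coordinates (θ, φ), by applying the formula above with finite-difference derivatives of the metric, and checks two of the well-known closed forms (Γ^θ_φφ = −sin θ cos θ and Γ^φ_θφ = cot θ) along with the symmetry in the last two indices:

```python
import math

# Metric of a sphere of radius a, coordinates x = (theta, phi).
a = 1.0

def g(x):
    th = x[0]
    return [[a*a, 0.0], [0.0, (a*math.sin(th))**2]]

def g_inv(x):
    th = x[0]
    return [[1.0/(a*a), 0.0], [0.0, 1.0/((a*math.sin(th))**2)]]

def dg(x, k, h=1e-6):
    # central-difference partial derivative of the metric in coordinate direction k
    xp, xm = list(x), list(x)
    xp[k] += h; xm[k] -= h
    gp, gm = g(xp), g(xm)
    return [[(gp[i][j] - gm[i][j])/(2*h) for j in range(2)] for i in range(2)]

def christoffel(x):
    # Gamma^b_ca = (1/2) g^{bk} (d_a g_{kc} + d_c g_{ka} - d_k g_{ca})
    d = [dg(x, k) for k in range(2)]
    gi = g_inv(x)
    return [[[0.5*sum(gi[b][k]*(d[a][k][c] + d[c][k][a] - d[k][c][a])
                      for k in range(2))
              for a in range(2)] for c in range(2)] for b in range(2)]

x = [0.7, 0.3]
G = christoffel(x)    # indexed as G[b][c][a] = Gamma^b_ca
th = x[0]
```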
We can extend the idea of parallel transport from vectors to any kind of tensor. For example, if we parallel transport the vectors v and w from point A to point B with the Levi-Civita connection, obtaining v' and w' at B, then parallel transport of rank (2,0) tensors from A to B is defined so that v ⊗ w at A becomes v' ⊗ w' at B. For dual vectors, we require that if a dual vector α at point A has α(v) = c, parallel transport of α from A to B yields α' such that α'(v') = c.
These requirements give us the following formulas for the covariant derivatives of the kind of tensors we’ll need:
∇_{a} A_{b} = ∂_{a} A_{b} – Γ^{h}_{ba} A_{h}
∇_{a} F_{bc} = ∂_{a}F_{bc} – Γ^{h}_{ba} F_{hc} – Γ^{h}_{ca} F_{bh}
∇_{a} F^{bc} = ∂_{a}F^{bc} + Γ^{b}_{ha} F^{hc} + Γ^{c}_{ha} F^{bh}
Applying the second of these equations to the metric, and making use of the definition of Γ, gives us ∇_{a} g_{bc} = 0. Essentially our definition of Γ has been chosen to get this result: Γ is the connection with respect to which the metric itself is judged to be constant.
If we replace the partial derivatives in our equations of electromagnetism with covariant derivatives, we obtain the following:
Riemannian Proca Equation in Curved Space  
∇_{b} F^{ab} – ω_{m}^{2} A^{a} – j^{a}  =  0  (Riemannian) 
∂_{a} F_{bc} + ∂_{b} F_{ca} + ∂_{c} F_{ab}  =  0  (Common) 
Maxwell’s Equations in Curved Spacetime  
∇_{b} F^{ab} – j^{a}  =  0  (Lorentzian) 
∂_{a} F_{bc} + ∂_{b} F_{ca} + ∂_{c} F_{ab}  =  0  (Common) 
Why are there still partial derivatives rather than covariant derivatives in the common equation shared by Riemannian and Lorentzian electromagnetism? If we write out the equation with covariant derivatives and use the fact that F_{bc} is antisymmetric while Γ is symmetric in its last two indices, all the correction terms cancel each other out, and we’re left with just the partial derivatives.
In the relationship between the electromagnetic field F and the four-potential A, the correction terms for the covariant derivative again cancel out.
Field From FourPotential  
F_{ab}  =  ∇_{a} A_{b} – ∇_{b} A_{a}  
=  ∂_{a} A_{b} – ∂_{b} A_{a}  (Common) 
It follows that the common equation in the Riemannian Proca and the Maxwell Equations will again be satisfied merely by defining F in terms of A this way, since nothing has changed and exactly the same partial derivatives appear as in the flat spacetime case.
Now, the next step is where things get a little tricky. In flat space or spacetime, partial derivatives commute: if you take two derivatives, it doesn’t matter which order you do it in. This is not the case for covariant derivatives in curved space, and indeed the whole idea of curvature is tied up with the fact that covariant derivatives don’t commute.
Suppose we take the covariant derivative of a vector field v along two different coordinate directions, indexed by a and b, in both orders. The difference between the two is given by:
∇_{a} ∇_{b} v – ∇_{b} ∇_{a} v = R^{h}_{cab} v^{c} e_{h}
where e_{h} is the basis vector in the coordinate direction indexed by h, and the four-index tensor R is what’s known as the Riemann curvature tensor (named, of course, after the same Georg Friedrich Bernhard Riemann we’ve been referring to all along, though this tensor is just as useful in Lorentzian curved spacetime as in Riemannian curved space). By explicitly calculating the covariant derivatives in terms of the Levi-Civita connection, we can express the components of the Riemann curvature tensor as:
R^{h}_{cab} = ∂_{a} Γ^{h}_{cb} – ∂_{b} Γ^{h}_{ca} + Γ^{h}_{ka} Γ^{k}_{bc} – Γ^{h}_{kb} Γ^{k}_{ca}
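Continuing the 2-sphere example, we can plug the sphere’s closed-form Christoffel symbols into this formula (with finite differences for the ∂Γ terms) and recover the known curvature component R^θ_φθφ = sin²θ of the unit sphere. A minimal sketch:

```python
import math

# Closed-form Christoffel symbols of the unit 2-sphere, Gamma^h_ca,
# with coordinate indices 0 = theta, 1 = phi.
def Gamma(h, c, a, x):
    th = x[0]
    if (h, c, a) == (0, 1, 1):
        return -math.sin(th)*math.cos(th)          # Gamma^theta_phiphi
    if (h, c, a) in ((1, 0, 1), (1, 1, 0)):
        return math.cos(th)/math.sin(th)           # Gamma^phi_thetaphi = Gamma^phi_phitheta
    return 0.0

def dGamma(h, c, b, x, k, eps=1e-6):
    # central-difference partial derivative of Gamma in coordinate direction k
    xp, xm = list(x), list(x)
    xp[k] += eps; xm[k] -= eps
    return (Gamma(h, c, b, xp) - Gamma(h, c, b, xm))/(2*eps)

def riemann(h, c, a, b, x):
    # R^h_cab = d_a Gamma^h_cb - d_b Gamma^h_ca
    #           + Gamma^h_ka Gamma^k_bc - Gamma^h_kb Gamma^k_ca
    val = dGamma(h, c, b, x, a) - dGamma(h, c, a, x, b)
    for k in range(2):
        val += Gamma(h, k, a, x)*Gamma(k, b, c, x) - Gamma(h, k, b, x)*Gamma(k, c, a, x)
    return val

x = [0.9, 0.2]
R_component = riemann(0, 1, 0, 1, x)   # R^theta_phithetaphi, expected sin^2(theta)
```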
Now, suppose the four-potential A satisfies a covariant-derivative version of the transverse condition or the Lorenz gauge condition:
∇_{b} A^{b} = 0
where as usual we’re using the Einstein Summation Convention on repeated indices. Then the expression ∇_{b} ∇_{a} A^{b} would be zero if covariant derivatives commuted ... but they don’t commute, so instead we have:
∇_{b} ∇_{a} A^{b} = ∇_{b} ∇_{a} A^{b} – ∇_{a} ∇_{b} A^{b} = R^{b}_{cba} A^{c} = R_{ca} A^{c}
where the two-index tensor R, known as the Ricci curvature tensor, is found by “contracting” the Riemann curvature tensor, that is, summing over two of its indices.
If we make use of this result to evaluate ∇_{b} F^{ab} — which appears in both the Riemannian Proca equation and Maxwell’s Equations — in terms of the four-potential A, we get:
∇_{b} F^{ab}
= g^{αa} g^{βb} ∇_{b} F_{αβ}
= g^{αa} g^{βb} ∇_{b} (∇_{α} A_{β} – ∇_{β} A_{α})
= g^{αa} g^{βb} (∇_{b} ∇_{α} A_{β} – ∇_{b} ∇_{β} A_{α})
= g^{αa} (∇_{b} ∇_{α} A^{b} – ∇_{b} ∇^{b} A_{α})
= g^{αa} (R_{cα} A^{c} – ∇_{b} ∇^{b} A_{α})
= R_{c}^{a} A^{c} – ∇_{b} ∇^{b} A^{a}
We can now express everything in terms of the four-potential A:
Riemannian Vector Wave Equations in Curved Space  
∇_{b} ∇^{b} A^{a} – R_{c}^{a} A^{c} + ω_{m}^{2} A^{a} + j^{a}  =  0  (RVWS) 
∇_{c} A^{c}  =  0  (Transverse) 
Maxwell’s Equations for Four-Potential in Lorenz Gauge in Curved Spacetime  
∇_{b} ∇^{b} A^{a} – R_{c}^{a} A^{c} + j^{a}  =  0  (LVWS) 
∇_{c} A^{c}  =  0  (Lorenz) 
While we’re on the subject of wave equations in curved space, we can also give a modified scalar wave equation. A covariant derivative of a scalar is just the partial derivative in the same direction, and the gradient of a scalar can be defined without reference to the metric or the Levi-Civita connection. However, the sum of the second derivatives in all the coordinate directions that appears in the Riemannian Scalar Wave equation will only be independent of the coordinate system if we compute it, in the general case, as the divergence of the gradient, using the covariant derivative:
(grad A)_{j} = ∂_{j} A
div grad A = g^{ij} (∂_{i} ∂_{j} A – Γ^{k}_{ji} ∂_{k} A)
This operation, which is a generalisation of the Laplacian, is known as the Laplace-Beltrami operator. When the metric only has components on the diagonal, which is true for many coordinate systems, it’s very easy to compute the determinant of the metric as the product of those diagonal entries. If we write g for the absolute value of the determinant of the metric, it can be shown that an alternative expression for the Laplace-Beltrami operator is:
div grad A = (1/√g) ∂_{i} [(√g) g^{ij} ∂_{j} A]
which is easier to use than going to the trouble of computing the Christoffel symbols. Even if you haven’t encountered this equation before, if you stare at it long enough you’ll probably recognise it as lying behind the formulas you’ve seen for the Laplacian in spherical or cylindrical coordinates.
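As a quick check that the √g expression really is shorthand for the divergence of the gradient, here is a sympy sketch comparing it with the familiar Laplacian in polar coordinates on the plane — a two-dimensional stand-in, chosen purely for brevity:

```python
import sympy as sp

r, th = sp.symbols('r th', positive=True)   # polar coordinates on the plane
coords = [r, th]
g = sp.Matrix([[1, 0], [0, r**2]])
ginv = g.inv()
sqrtg = sp.sqrt(g.det())
A = sp.Function('A')(r, th)

# div grad A = (1/sqrt(g)) d_i [ sqrt(g) g^{ij} d_j A ]
lb = sum(sp.diff(sqrtg * ginv[i, j] * sp.diff(A, coords[j]), coords[i])
         for i in range(2) for j in range(2)) / sqrtg

# The familiar Laplacian in polar coordinates, for comparison
polar = sp.diff(A, r, 2) + sp.diff(A, r)/r + sp.diff(A, th, 2)/r**2

print(sp.simplify(lb - polar))   # 0
```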
Riemannian Scalar Wave Equation With Source, in Curved Space  
div grad A + ω_{m}^{2} A + j  =  0  (RSWS) 
Lorentzian Scalar Wave Equation With Source, in Curved Spacetime  
div grad A + j  =  0  (LSWS) 
Further reading: Sections 16.3 and 22.4 of Gravitation by Charles Misner, Kip Thorne and John Wheeler, W.H. Freeman, New York, 1973.
As we mentioned when first discussing the Riemannian wave equations, there is a serious problem with these equations: they allow for solutions that have an angular frequency higher than ω_{m} in one direction, along with exponential change in another. The nice, well-behaved plane waves from which we derived the Riemannian Scalar Wave Equation have the sum of the squares of their frequencies in the four dimensions equal to a fixed total, ν_{max}^{2}, and so none of those individual frequencies can exceed ν_{max}, but the equation itself can’t rule out solutions with an exponential factor, such as cos(kx) exp(αt), which will satisfy the RSW equation so long as k^{2} – α^{2} = ω_{m}^{2}.
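It’s easy to confirm this directly; the following sympy sketch applies the scalar wave operator (with the y and z dependence suppressed) to cos(kx) exp(αt), imposing the stated condition:

```python
import sympy as sp

x, t, k, alpha = sp.symbols('x t k alpha')
omega_m_sq = k**2 - alpha**2          # impose the condition k^2 - alpha^2 = omega_m^2
A = sp.cos(k*x) * sp.exp(alpha*t)

# Scalar wave operator, with the y and z dependence suppressed
residual = sp.diff(A, x, 2) + sp.diff(A, t, 2) + omega_m_sq * A
print(sp.simplify(residual))   # 0
```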
If you’ve read the first volume of Orthogonal you’ll know how these exponential solutions can be avoided. If you haven’t read the book but have read this far into these notes despite the spoiler warnings, this is your last chance to decide not to read on.
If the Riemannian universe is finite but has no boundary, the requirement that solutions of the wave equations are continuous, and possess continuous derivatives, will rule out solutions with an exponential factor. While a cyclic function can, by its very nature, join up smoothly with itself when followed around a closed curve, an exponential function can’t do that. (Things become a bit more subtle when we go from a free wave in the vacuum to a field with a source, and we’ll look at some examples of that in the following sections.)
So far we’ve mostly been treating the Riemannian universe as an infinite, perfectly flat four-space, while noting that this is just an approximation, akin to the useful approximation of the Lorentzian universe as flat Minkowski spacetime. In the same spirit, we can look at two idealised models of the Riemannian universe which are finite, but which still make simplifying assumptions about the curvature. In one of these models, the 4-torus, the Riemannian universe remains perfectly flat. In the other model, the 4-sphere, the universe has a constant, positive curvature.
Suppose we take a region of flat four-space in the shape of a rectangular hyperprism. We put coordinates (x, y, z, t) on this region that range from –L^{x}/2 to L^{x}/2, –L^{y}/2 to L^{y}/2, –L^{z}/2 to L^{z}/2 and –L^{t}/2 to L^{t}/2. Then we declare that all eight of the three-dimensional hyperfaces of this hyperprism are “glued” to the opposite face. For example, all points (x, y, z, –L^{t}/2) are identified with the corresponding points (x, y, z, L^{t}/2). This is the four-dimensional equivalent of taking a rectangle in the plane and identifying its opposite edges to make a torus.
We should stress, though, that the whole four-space remains perfectly flat; we are not “rolling up” the hyperprism in any higher-dimensional space, we are just decreeing that this model of the Riemannian universe is finite in all directions, and that its topology takes the form we have described, which is known as a 4-torus. Our choice of topology doesn’t require the curvature of the four-space to be zero everywhere, but it certainly allows it.
In what follows, we will call this model universe T^{4}. We will take it as given that the whole four-space is flat, and that we’ve chosen coordinates like those described above. There is, of course, nothing physically special about the choice of origin or the points where the coordinates jump from L^{i}/2 to –L^{i}/2, and any solution to the equations of Riemannian physics that we find using our original coordinates will still be valid if we translate everything by an arbitrary displacement vector. However, the boundary conditions imposed by the shape of the T^{4} universe are not rotationally symmetrical, so if we take a solution and apply an arbitrary rotation, it will no longer satisfy those boundary conditions.
Any sufficiently well-behaved scalar function A(x, y, z, t) on T^{4} can be written as a Fourier series:
A(x, y, z, t) = Σ_{i, j, k, l} a_{i, j, k, l} f_{i, j, k, l}(x, y, z, t)
where the sum is over all integer values (positive, negative and zero) for i, j, k, l, and:
f_{i, j, k, l}(x, y, z, t) = f_{i}(x / L^{x}) f_{j}(y / L^{y}) f_{k}(z / L^{z}) f_{l}(t / L^{t})
f_{n}(u) = sin(2 π n u), n > 0
f_{n}(u) = cos(2 π n u), n < 0
f_{0}(u) = 1/√2
We will refer to the functions f_{i, j, k, l} as the Fourier basis functions for T^{4}. With the integral over T^{4} as the inner product between functions:
<f, g> = ∫_{T4} fg
the different basis functions are orthogonal to each other, and they all have the same squared norm: V / 16, where V = L^{x} L^{y} L^{z} L^{t} is the total 4volume of T^{4}.
Each basis function is a standing wave that undergoes i, j, k and l cycles, respectively, in the x, y, z and t directions, around the entire width of the universe. Given a function A(x, y, z, t), we can explicitly compute the Fourier coefficients a_{i, j, k, l} as follows:
a_{i, j, k, l} = (16 / V) ∫_{T4} f_{i, j, k, l}(x, y, z, t) A(x, y, z, t)
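Here is a small numerical illustration, in Python, of how these coefficients are recovered from the orthogonality of the basis. For brevity it uses the one-dimensional analogue of the basis functions; the interval length, test function and sample count are arbitrary choices for the example:

```python
import numpy as np

L = 2.0                  # arbitrary interval length
N = 20000
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = L / N

def f(n, u):
    # One-dimensional analogue of the Fourier basis functions on T^4
    if n > 0:
        return np.sin(2*np.pi*n*u)
    if n < 0:
        return np.cos(2*np.pi*abs(n)*u)
    return np.full_like(u, 1/np.sqrt(2))

# A test function with known coefficients...
A = 0.5*f(0, x/L) + 3.0*f(2, x/L) - 1.25*f(-3, x/L)

# ...recovered as (2/L) times the inner product with each basis function,
# since in one dimension each basis function has squared norm L/2
recovered = {n: (2/L)*np.sum(f(n, x/L)*A)*dx for n in [0, 2, -3, 1]}
print(recovered)   # ≈ {0: 0.5, 2: 3.0, -3: -1.25, 1: 0.0}
```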
Now we’d like to know which, if any, of the f_{i, j, k, l} satisfy the sourceless Riemannian Scalar Wave equation. Applying that differential equation to a Fourier basis function, we get the algebraic equation:
(i / L^{x})^{2} + (j / L^{y})^{2} + (k / L^{z})^{2} + (l / L^{t})^{2} = ν_{max}^{2}
where ν_{max} = ω_{m} / (2 π). If the L^{i} and ν_{max} are just randomly chosen numbers, no integer values for i, j, k, l will satisfy this equation. So we have two possibilities to consider: the generic case, where there are no solutions to the sourceless RSW equation, and the special case, where the L^{i} and ν_{max} have values that allow some solutions to exist.
To give an example of the special case, suppose all the L^{i} = 1, and ν_{max} = √90. Then any integers i, j, k, l whose sum of squares is 90 will provide a Fourier basis function f_{i, j, k, l} that satisfies the RSW equation. There are 1872 such quadruples of integers, if we count all the permutations and choices of positive and negative signs, but they can all be derived from these nine equations:
0^{2}+0^{2}+3^{2}+9^{2} = 90
0^{2}+1^{2}+5^{2}+8^{2} = 90
0^{2}+4^{2}+5^{2}+7^{2} = 90
1^{2}+2^{2}+2^{2}+9^{2} = 90
1^{2}+2^{2}+6^{2}+7^{2} = 90
1^{2}+3^{2}+4^{2}+8^{2} = 90
2^{2}+5^{2}+5^{2}+6^{2} = 90
3^{2}+3^{2}+6^{2}+6^{2} = 90
3^{2}+4^{2}+4^{2}+7^{2} = 90
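The count of 1872 quadruples is easy to confirm by brute force in Python:

```python
from itertools import product

# Count ordered quadruples (i, j, k, l) of integers whose squares sum to 90;
# each index can be at most 9 in magnitude, since 10^2 > 90
count = sum(1 for q in product(range(-9, 10), repeat=4)
            if sum(n*n for n in q) == 90)
print(count)   # 1872
```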
In a Riemannian universe where the ratio between the size of the universe and the minimum wavelength of light was comparable to, say, the size of our observable universe measured in wavelengths of far ultraviolet light, or about 10^{34}, the number of solutions for suitable choices of L^{i} and ν_{max} would be extremely large. We won’t go into the number theory involved in counting the solutions (see Mathworld’s Sum of Squares function page for a taste of that), but it’s intuitively plausible that on a cosmic scale the number of discrete solutions could easily be so large as to appear continuous. In other words, although sourceless plane waves in such a Riemannian universe could only have a finite number of specific propagation vectors, the actual choices would be so numerous as to look like a continuum that included all directions.
Since the sourceless solutions are all built from a finite number of Fourier basis functions, they will be smooth and finite everywhere. None of their directional frequencies can exceed ν_{max}, and they could equally well be written as a superposition of a finite number of plane waves, which is how we originally envisioned constructing general solutions to the wave equation.
What can we say about solutions to the scalar wave equation with a source, which we’ll call H?
∂_{x}^{2}A(x) + ∂_{y}^{2}A(x) + ∂_{z}^{2}A(x) + ∂_{t}^{2}A(x) + ω_{m}^{2} A(x) + H(x)  =  0  (RSWS) 
If we Fourierexpand both the function A with coefficients a and the scalar source H with coefficients h, we have:
a_{i, j, k, l} [(i / L^{x})^{2} + (j / L^{y})^{2} + (k / L^{z})^{2} + (l / L^{t})^{2} – ν_{max}^{2}] = h_{i, j, k, l} / (4 π^{2})
For those values of i, j, k, l that satisfy the sourceless equation — making the expression in square brackets zero — the source’s Fourier coefficient h_{i, j, k, l} must be zero in order for a solution to exist at all, while a_{i, j, k, l} can be chosen freely. For all other values, we solve the equation above to obtain:
a_{i, j, k, l} = h_{i, j, k, l} / [(4 π^{2}) ((i / L^{x})^{2} + (j / L^{y})^{2} + (k / L^{z})^{2} + (l / L^{t})^{2} – ν_{max}^{2})]
So the source will determine all the coefficients that do not correspond to sourceless solutions, and then we’re free to add any additional, sourceless solution we wish.
For generic values of L^{i} and ν_{max}, none of the Fourier basis functions will solve the sourceless Riemannian Scalar Wave equation. In this case, there are no Fourier components of the source that are required to be zero, and we can always use:
a_{i, j, k, l} = h_{i, j, k, l} / [(4 π^{2}) ((i / L^{x})^{2} + (j / L^{y})^{2} + (k / L^{z})^{2} + (l / L^{t})^{2} – ν_{max}^{2})]
to obtain a solution, assuming the Fourier series converges.
To give a very simple example, suppose we have a motionless planar sheet of unit charge density that bisects the T^{4} Riemannian universe, lying in the yz-plane. The source for the time component of the four-potential is then a one-dimensional Dirac delta function in the x coordinate. Since everything will be a function of x alone, we will drop the other three dimensions from the Fourier coefficient subscripts and integrals, and we’ll simply write L for L^{x}.
The nonzero Fourier coefficients of the source are then:
h_{0} = (√2)/L
h_{i} = 2/L, i < 0
This precise source will only be possible if ν_{max} L is not an integer, so we’ll assume that’s the case. The nonzero Fourier coefficients of the solution for the time component of the four-potential are then:
a_{0} = –1 / [2(√2) π^{2} L ν_{max}^{2}]
a_{i} = L / [2 π^{2} (i^{2} – L^{2} ν_{max}^{2})], i < 0
Rather than attempting to explicitly sum the Fourier series, we will find the solution by another method. By using the symmetry of the problem and the Riemannian version of Gauss’s Law, we can easily establish that the fourpotential associated with a unit planar charge when there are no boundary conditions imposed is:
A_{t, src} = –sin(ω_{m} |x|) / (2 ω_{m})
But there is also a sourceless solution with the same symmetry that we’re free to add in any multiple we wish:
A_{t, nsrc} = cos(ω_{m} x) / (2 ω_{m})
Both functions are even in x (i.e. they have the same value at ±x for all x), so any solution will be continuous at x=±L/2. But an even function has opposite derivatives at ±x, so the solution can only meet itself at x=±L/2 smoothly if the derivative there is zero. By adjusting the constant C in the general solution A_{t, src} + C A_{t, nsrc} we can ensure a derivative of zero at x=±L/2. The result simplifies to:
A_{t, bc} = –cos(π ν_{max} (L – 2|x|)) / [4 π ν_{max} sin(π ν_{max} L)]
The Fourier coefficients of A_{t, bc} are precisely those we’ve already written above, so the two methods are in agreement. What this solution describes is a phase shift in the potential that allows it to wrap around the universe smoothly, while still having just the right discontinuity on the planar charge to satisfy Gauss’s Law there.
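The agreement between the two methods can also be checked numerically. The following Python sketch sums a large number of terms of the Fourier series and compares the result with the closed form; the values of L, ν_{max} and the test point are arbitrary choices for the example, and |x| is written explicitly to make the solution’s even symmetry clear:

```python
import numpy as np

# Arbitrary parameters: nu_max * L must not be an integer
L, nu = 1.0, 1.3
x = 0.37            # arbitrary test point

# Partial sum of the Fourier series, using the coefficients a_0 and a_i above
m = np.arange(1, 200001)
A_series = ((-1/(2*np.sqrt(2)*np.pi**2*L*nu**2)) * (1/np.sqrt(2))
            + np.sum(L/(2*np.pi**2*(m**2 - (L*nu)**2)) * np.cos(2*np.pi*m*x/L)))

# Closed form from Gauss's Law plus the boundary conditions (even in x)
A_closed = -np.cos(np.pi*nu*(L - 2*abs(x))) / (4*np.pi*nu*np.sin(np.pi*nu*L))

print(A_series, A_closed)   # the two agree to within the series truncation error
```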
Suppose we have a motionless line of unit charge density located on the z-axis of the T^{4} Riemannian universe. The source for the time component of the four-potential will be a Dirac delta function in the x and y coordinates. We’ll drop the z and t coordinates from the Fourier coefficients, and for simplicity we’ll assume L^{x} = L^{y} = L. The nonzero Fourier coefficients of the source are:
h_{0, 0} = 2 / L^{2}
h_{i, 0} = h_{0, i} = (2√2) / L^{2}, i < 0
h_{i, j} = 4 / L^{2}, i, j < 0
This source will only be possible if L^{2} ν_{max}^{2} is not a sum of squares of two integers, so we’ll assume that it’s not. The nonzero Fourier coefficients of the solution for the time component of the four-potential are:
a_{0, 0} = –1 / [2 π^{2} L^{2} ν_{max}^{2}]
a_{i, 0} = a_{0, i} = 1 / [(√2) π^{2} (i^{2} – L^{2} ν_{max}^{2})], i < 0
a_{i, j} = 1 / [π^{2} (i^{2} + j^{2} – L^{2} ν_{max}^{2})], i, j < 0
It’s possible to explicitly evaluate the sum over one index and reduce the Fourier series to a sum over the other index. We can’t get a closed form for the whole expression, but halving the number of indices makes the result much easier to work with numerically.
A_{t} = Σ_{j=0}^{∞} f_{–j}(0) β_{j}(x / L) f_{–j}(y / L)
β_{j}(u) = cosh(α_{j} (1 – 2|u|)) / [2 α_{j} sinh(α_{j})]
α_{j} = π √(j^{2} – L^{2} ν_{max}^{2})
Note that α_{j} will be imaginary at first — until j / L exceeds ν_{max} — and while it’s imaginary, the functions β_{j}(u) will be oscillatory, since the cosh of an imaginary number ix is simply the cosine of x.
Once α_{j} is real, the β_{j}(u) decrease monotonically from a positive maximum at u = 0 to a minimum (also positive) at u = 1/2, which corresponds to the point half a universe away from the source. The drop isn’t literally an exponential decay — since exponential decay never flattens out to a minimum — but it’s very similar. So these nonoscillatory terms decay rapidly with distance from the source.
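A short Python sketch makes the transition visible: using complex arithmetic, one formula for β_{j} covers both regimes, oscillating while α_{j} is imaginary and decaying monotonically once it’s real. The parameter values here are arbitrary:

```python
import numpy as np

L, nu = 1.0, 4.5     # arbitrary; alpha_j is imaginary for j <= 4 with these values

def beta(j, u):
    # alpha_j = pi * sqrt(j^2 - L^2 nu_max^2); taking the complex square root
    # lets one formula cover both regimes, since cosh(i x) = cos(x)
    alpha = np.pi * np.sqrt(complex(j*j - (L*nu)**2))
    return (np.cosh(alpha*(1 - 2*abs(u))) / (2*alpha*np.sinh(alpha))).real

u = np.linspace(0, 0.5, 6)
print([round(beta(2, v), 4) for v in u])    # oscillates: alpha_2 is imaginary
print([round(beta(10, v), 4) for v in u])   # decays monotonically: alpha_10 is real
```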
The diagrams on the right show the contours of zero potential in a plane perpendicular to the line of charge, demonstrating how the shape of the field is distorted by the boundary conditions. [Since nonzero contours aren’t shown, there is no information here about the field strength — the contours’ spacing is basically just the wavelength.] The top image shows the entire universe, for a choice of parameters where L is just a few wavelengths, and the effect is very pronounced. The bottom image shows a region of the same size (in wavelengths), but in this case it is only a small portion of a universe that is a thousand times wider, and the field is already beginning to grow more radially symmetrical close to the charge. So, although it’s interesting to see how the field loses radial symmetry in order to satisfy the boundary conditions, in a realistically-sized universe — at least 10^{30} or so wavelengths wide — these effects aren’t likely to be empirically detectable.
The original motivation for introducing these boundary conditions was to avoid exponential blow-ups in high-frequency waves. We’ve seen that if sourceless waves can exist at all in the T^{4} universe, then they are guaranteed not to exceed the notional maximum frequency that appears in the wave equation. So an obvious question to ask is: what happens in the T^{4} universe if we have some kind of source that oscillates at a frequency greater than the maximum?
The simplest kind of source to analyse is a linear alternating current. If the current runs along the z-axis of the T^{4} Riemannian universe, and oscillates with a frequency l_{AC} / L^{t} for some integer l_{AC}, then both the source and the solution will share a single Fourier component in the time direction and we can factor that out and deal with the spatial dependence of the solution in an almost identical fashion to the previous problem. The difference is that a constant term, l_{AC}^{2}, will be added to the sum of squared indices, which previously had only the term j^{2}. As before, for the sake of simplicity we’ll assume that the universe has the same width, L, in all directions (including our chosen time direction). We then have:
A_{z} = cos(2 π l_{AC} t / L) Σ_{j=0}^{∞} f_{–j}(0) β_{j}(x / L) f_{–j}(y / L)
β_{j}(u) = cosh(α_{j} (1 – 2|u|)) / [2 α_{j} sinh(α_{j})]
α_{j} = π √(j^{2} + l_{AC}^{2} – L^{2} ν_{max}^{2})
If the frequency of the current’s oscillations, l_{AC} / L, exceeds ν_{max}, the expression inside the square root in the definition of α_{j} will always be positive, so α_{j} will be real for all j. As we discussed in the previous section, when α_{j} is real the functions β_{j}(u) drop away in a manner very similar to exponential decay, while flattening out to reach a derivative of zero halfway across the universe.
We can’t produce a closed expression for the infinite sum over j, but the diagram shows the sum of a large number of terms. It’s apparent that a high-frequency source will be accompanied by a field that is only significant very close to the source itself, dropping off far more rapidly with distance than the radiation field around a linear alternating current with a frequency less than ν_{max}.
In our universe, where light in a vacuum is governed by a Lorentzian wave equation, if we know both the value and the time derivative of the electromagnetic field throughout a region R of space at some instant in time, t_{0}, we can predict the value of the field some way into the future. Of course, electromagnetic waves can always enter the region from the sides, so as time moves on from t_{0} the region where we can make predictions will shrink at the speed of light, but in principle there will be a certain, definite portion of spacetime where our initial data lets us predict what the field will be.
This kind of data — the value of a function and its time derivative, throughout a region of space at a particular moment — is known as Cauchy data. It’s in the nature of Lorentzian wave equations — which are second-order hyperbolic differential equations — that we can use Cauchy data to obtain their solution some way into the future.
Another example of a hyperbolic equation where we can make use of Cauchy data would be the wave equation for small displacements of an elastic string. Suppose the string is finite and anchored at both ends. If we know the displacement of the string and the time derivative of the displacement, along the entire string at some instant in time, then in principle we can predict the entire future of the string’s motion. What’s more, even if our knowledge was limited to just part of the string, since the waves it carried would have a certain maximum speed, c_{max}, we could still confidently make predictions about a region of the string that gradually shrank down from the portion about which we had data, with the ends being nibbled away at the rate c_{max}.
In contrast to this, the Riemannian wave equations are elliptic differential equations. To solve an elliptic differential equation in some region, we usually need data about the value of the solution on the entire boundary of the region. Examples of elliptic differential equations in our own universe involve regions of space, rather than of spacetime. For instance, the equilibrium temperature reached in a solid material obeys an elliptic differential equation — Laplace’s equation — and to determine the temperature throughout some region of the material, we generally need to know the temperature on the entire boundary of that region. Being told the temperature on, say, one face of an iron cube — along with the temperature’s derivative in the direction pointing into the cube from that face, giving us Cauchy data — is not a reliable way to compute the temperature throughout the cube.
For example, suppose the opposite face of the cube to the one where we have data is covered in a pattern of closely spaced stripes of alternating high and low temperature. Our data might then describe an extremely weak, washedout version of those stripes. The progression of temperature from our face to the opposite face will involve an exponential rise in the temperature difference, which will amplify enormously any imprecision in our data, to the point where just having our washedout stripes and their derivative provides a very poor guide to the exact values the temperature reaches on the other face. But if instead we were supplied with the temperature on every face of the cube, interpolating the temperature distribution within the region that satisfied Laplace’s equation would be a much more reliable process.
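A sympy sketch of the striped-cube example makes the amplification quantitative: the mode sin(kx) cosh(ky) satisfies Laplace’s equation, and its growth factor cosh(k) across a unit cube shows how violently fine stripes amplify any error in Cauchy data. The stripe counts here are arbitrary:

```python
import sympy as sp

x, y, k = sp.symbols('x y k')
T = sp.sin(k*x) * sp.cosh(k*y)

# This mode satisfies Laplace's equation, so it's a legitimate steady temperature field
print(sp.simplify(sp.diff(T, x, 2) + sp.diff(T, y, 2)))   # 0

# Stripes of unit amplitude on the face y = 0 grow to amplitude cosh(k) on the
# face y = 1; with k = pi * (number of stripes), fine stripes amplify enormously
for stripes in [1, 5, 20]:
    print(stripes, sp.N(sp.cosh(sp.pi*stripes), 4))
```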
In an infinite Riemannian universe, the problem of making predictions for the Riemannian wave equation from Cauchy data would be as difficult as trying to compute the temperature in a cube from data on just one face. Given that the equation is elliptic, we might conclude that we could only make postdictions about its solutions: gathering data about both the initial values of the field in some region of space and the final values after some interval of time had passed, along with data about what happened during that interval on the borders of the region, and then using all that information on the boundary of the relevant portion of four-space to compute the time course of the field in the region’s interior, after the fact. Such a situation would allow the laws of physics to be tested, but it would make it very hard to anticipate and prepare for the future.
But in a finite Riemannian universe such as T^{4}, the situation isn’t so bad. For sourceless waves in T^{4}, there are only a finite number of Fourier basis functions that can contribute to the total wave, so if we are able to determine the coefficients for all of them, we will know the entire history of the wave. If we include a source — which itself ought to obey an equation of the same general form — then the problem becomes more complex, but the principle is the same.
For simplicity, let’s work with a sourceless scalar wave. Suppose we know the value and the time derivative of the wave, throughout all of space at one moment in time. We will choose coordinates so that the moment of time for which we have data is t=0, and of course “time” can be any of the four directions in which the torus can be circumnavigated.
Suppose some Fourier basis function f_{i, j, k, l} satisfies the sourceless wave equation. If l ≤ 0, then the time-dependent factor of this function, f_{l}(t / L^{t}), will be a cosine or a constant function, and hence nonzero at t=0, so we can identify the coefficient of f_{i, j, k, l} simply by performing a three-dimensional Fourier analysis of our data for t=0. If l > 0, the time-dependent factor will be a sine, so it will be zero at t=0. But its time derivative will be nonzero at t=0, and so we can identify its coefficient from a Fourier analysis of the time derivative data we have for t=0. So between the data and its time derivative, we can compute the coefficient of every basis function that contributes to a sourceless wave, which will allow us to compute the value of that wave at any time, future or past.
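In miniature, with a single allowed frequency, the procedure looks like this (a Python sketch, with arbitrary coefficients standing in for the unknown wave):

```python
import numpy as np

# Arbitrary stand-ins: one allowed mode, with unknown cosine and sine coefficients
L, l = 1.0, 3
p, q = 0.8, -0.45
w = 2*np.pi*l/L

A  = lambda t: p*np.cos(w*t) + q*np.sin(w*t)        # the wave itself
dA = lambda t: -p*w*np.sin(w*t) + q*w*np.cos(w*t)   # its time derivative

# Cauchy data at t = 0 identifies both coefficients:
p_rec = A(0.0)        # the cosine factor is nonzero at t = 0
q_rec = dA(0.0) / w   # the sine factor shows up only in the derivative

# ...and those coefficients predict the wave at any other time
t = 0.37
print(np.isclose(p_rec*np.cos(w*t) + q_rec*np.sin(w*t), A(t)))   # True
```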
Now, of course it’s absurd to expect anyone in the Riemannian universe to have information about the electromagnetic field across the entire universe. But then, when we make predictions in our own universe about what will happen over the next five minutes, we never have perfect information about our surroundings out to a distance of five light-minutes (about 90 million kilometres). Yet we’ve managed to test scientific theories, and to predict the future well enough to survive, so far. The fact is, we live in a sufficiently orderly and calm time and place that we can usually assume that the most important sources of electromagnetic radiation around us are nearby objects like the sun, whose behaviour is well-known and fairly predictable. That the laws of physics allow sudden, massive inflows of radiation from unknown sources that would take us completely by surprise hasn’t ruined our ability to do science or plan for the future.
In his classic paper “Is ‘the Theory of Everything’ Merely the Ultimate Ensemble Theory?”, Max Tegmark suggests that the elliptic partial differential equations governing a universe with no timelike dimensions would render it impossible to make predictions, and hence very difficult for what he calls “self-aware substructures” to function effectively. But in a finite Riemannian universe, if there are regions where the local environment is relatively calm and orderly — the kind of conditions that our own evolution and thriving have relied upon — then the strict need for Cauchy data spanning the whole universe in order to make predictions will no more be the determining factor governing what life can achieve than the strict need in our own universe to have Cauchy data for a region 90 million kilometres in radius in order to know what will happen in the next five minutes.
The boundary of a solid hypersphere in five-dimensional space is a finite, borderless four-dimensional space known as the 4-sphere, or S^{4}. A 4-sphere need not be embedded in any higher-dimensional space, and it need not have uniform curvature, but for the sake of simplicity we’ll consider a Riemannian universe with this topology that does have all the geometric properties that a 4-sphere embedded in flat five-dimensional space would inherit from that space. If we take the radius of the hypersphere to be R, that fixes the total 4-volume at:
V = (8/3) π^{2} R^{4}
and fixes the maximum length of any geodesic within the 4sphere to 2 π R. The Ricci scalar curvature — which measures the degree to which the volume of a solid ball within the space grows less rapidly with increasing radius than it would in Euclidean space — is 12 / R^{2} at every point.
The nice thing about S^{4} as a model universe is that it is more symmetrical than T^{4}. If we look at the symmetries of S^{4} that leave a point fixed, they are exactly the same group, O(4), as applies in Euclidean four-space. And in place of translations of Euclidean space, we simply extend the group to O(5).
The cost of this is that we have to deal with a curved four-space. Unlike T^{4}, it’s impossible for a space with the topology of S^{4} to be perfectly flat everywhere. Why? The Euler characteristic of S^{n} for even n is always 2 (this can be proved quite simply by counting the parts of a hypercube). The Generalised Gauss-Bonnet Theorem equates an integral of a function relating to the curvature of the space to the Euler characteristic, and if the curvature were zero, that integral would be zero — contradicting the known value of the Euler characteristic.
We can put a form of polar coordinates on S^{4}, with four angular coordinates:
0 ≤ ξ ≤ π
0 ≤ ψ ≤ π
0 ≤ θ ≤ π
0 ≤ φ ≤ 2π
which parameterise a point on the 4sphere of radius R in flat fivedimensional space as:
(R cos(ξ), R sin(ξ) cos(ψ), R sin(ξ) sin(ψ) cos(θ), R sin(ξ) sin(ψ) sin(θ) cos(φ), R sin(ξ) sin(ψ) sin(θ) sin(φ))
In terms of these coordinates, the metric is diagonal, with nonzero components:
g_{ξξ} = R^{2}
g_{ψψ} = R^{2} sin(ξ)^{2}
g_{θθ} = R^{2} sin(ξ)^{2} sin(ψ)^{2}
g_{φφ} = R^{2} sin(ξ)^{2} sin(ψ)^{2} sin(θ)^{2}
giving us the square root of the determinant of the metric as:
√g = R^{4} sin(ξ)^{3} sin(ψ)^{2} sin(θ)
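Integrating this volume factor over the full coordinate ranges recovers the total 4-volume quoted earlier; here is a sympy check:

```python
import sympy as sp

xi, psi, theta, phi, R = sp.symbols('xi psi theta phi R', positive=True)

# The volume factor sqrt(g) for the round 4-sphere in these polar coordinates
sqrtg = R**4 * sp.sin(xi)**3 * sp.sin(psi)**2 * sp.sin(theta)

V = sp.integrate(sqrtg, (xi, 0, sp.pi), (psi, 0, sp.pi),
                 (theta, 0, sp.pi), (phi, 0, 2*sp.pi))
print(sp.simplify(V))   # 8*pi**2*R**4/3
```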
In much the same fashion as a scalar function on T^{4} can be written as a Fourier series, a well-behaved scalar function on S^{4} can be expanded as a sum of four-dimensional spherical harmonics:
Y_{j, k, l}^{m}(ξ, ψ, θ, φ) = Φ_{m}(φ) Θ^{m}_{j}(θ) Ψ^{j}_{k}(ψ) Ξ^{k}_{l}(ξ) / R^{4}
Φ_{m}(φ) = sin(mφ)/√π, m > 0
Φ_{m}(φ) = cos(mφ)/√π, m < 0
Φ_{0}(φ) = 1/√(2π)
Θ^{m}_{j}(θ) = √[(j+½) (j–m)! / (j+m)!] P^{m}_{j}(cos θ)
Ψ^{j}_{k}(ψ) = √[(k+1) (k+j+1)! / (k–j)!] P^{–j–½}_{k+½}(cos ψ) / [√sin(ψ)]
Ξ^{k}_{l}(ξ) = √[(l+3/2) (l–k)! / (l+k+2)!] P^{k+1}_{l+1}(cos ξ) / sin(ξ)
Here P is an associated Legendre function of the first kind. The indices m, j, k, l are integers, with the following constraints:
0 ≤ |m| ≤ j ≤ k ≤ l
The function Φ_{m}(φ) has a simple trigonometric form, but what do the functions of the other three coordinates look like? They all follow much the same pattern: when their upper index is at its highest possible value, they range from zero when the coordinate is zero, to a single maximum or minimum at π/2, then back to zero when the coordinate reaches π. As the value of that index drops, they gain one more extremum as they go from zero back to zero. When the index reaches zero, the count of extrema is incremented as always, but this time the function is no longer zero at the endpoints of its range.
The total number of these four-dimensional spherical harmonics, for a given l, can be found from the constraints on the other indices to be:
N(l)=(l+1)(l+2)(2l+3) / 6
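This count is easy to verify in Python by enumerating the index tuples directly, allowing both signs of m (since the sine and cosine factors in φ give distinct harmonics):

```python
# Direct enumeration of the harmonics' index tuples: |m| <= j <= k <= l
def count(l):
    return sum(2*j + 1                # m runs over -j, ..., j
               for k in range(l + 1)
               for j in range(k + 1))

def N(l):
    return (l + 1) * (l + 2) * (2*l + 3) // 6

for l in range(5):
    print(l, count(l), N(l))   # the two counts agree
```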
All the spherical harmonics with different indices are orthogonal to each other, i.e. their products integrated over S^{4}, weighted by the volume √g, are zero. We’ve included factors here that also ensure that the integral of each harmonic squared is one.
Which spherical harmonics satisfy the sourceless Riemannian Scalar Wave Equation on the sphere, which we derived in the section on curved space? It’s not hard to show that the spherical harmonics are eigenfunctions of the LaplaceBeltrami operator, with:
div grad Y_{j, k, l}^{m} = [–l(l+3) / R^{2}] Y_{j, k, l}^{m}
So if ω_{m}^{2} R^{2} is equal to l(l+3) for some non-negative integer l, then all N(l) spherical harmonics for that value of l will satisfy the sourceless equation. If ω_{m}^{2} R^{2} is not of that form, then there will be no sourceless solutions. So we have a situation very similar to that on T^{4}, where generic values for the maximum frequency and the size of the universe will not permit sourceless solutions, but if the geometry permits sourceless solutions to exist at all, they will be constructed from a finite number of modes. Here, we can count the modes very easily, without worrying about any of the number-theoretic subtleties required to do so for T^{4}. Since there are N(l) modes if ω_{m}^{2} R^{2} = l(l+3), for large l we have:
l ≈ ω_{m} R
N(l) ≈ N(ω_{m} R) ≈ (ω_{m} R)^{3} / 3
This will be a very large number, of course, in any universe whose scale is even roughly comparable to our own observable universe. But in fact, the symmetry of S^{4} means that if there are sourceless solutions at all, there are solutions that look locally like plane waves with literally any propagation vector, rather than a large but discrete set of choices. For an observer located at ξ=ψ=θ=π/2, and any value for their φ coordinate, consider the harmonic Y_{l, l, l}^{l}, where l here is the specific integer such that l(l+3) = ω_{m}^{2} R^{2}. The observer will be at a very wide, flat extremum for the functions of ξ, ψ and θ, while the function will vary in the φ direction as cos(l φ) ≈ cos(ω_{m} R φ), which will look locally just like a plane wave of the kind we’ve described for Euclidean four-space. But for any choice of the observer’s location and any choice of propagation vector, we can simply pick coordinates that meet the conditions we’ve described, and construct the same solution in those coordinates.
If we consider the RSW equation with a source H, and we write the spherical harmonic coefficients of the source as h_{j, k, l}^{m} and those of the solution A as a_{j, k, l}^{m}, we have:
a_{j, k, l}^{m} [l(l+3) – ω_{m}^{2} R^{2}] = h_{j, k, l}^{m} R^{2}
If there is an l such that l(l+3) = ω_{m}^{2} R^{2}, the source cannot contain any spherical harmonics with that value for l, and the solution is free to contain those harmonics in any amounts. For other values of l, the source’s coefficient fixes the solution’s coefficient:
a_{j, k, l}^{m} = h_{j, k, l}^{m} R^{2} / [l(l+3) – ω_{m}^{2} R^{2}]
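In code, the mode-by-mode relation above is a one-liner; here is a minimal sketch (the explicit guard for the resonant case l(l+3) = ω_{m}^{2} R^{2} is a hypothetical convenience added here, not anything from the text):

```python
def solution_coefficient(h, l, wm2R2, R=1.0):
    """Coefficient a_{j,k,l}^{m} of the solution to the RSW equation on
    S^4, given the source coefficient h_{j,k,l}^{m}.  wm2R2 is the
    dimensionless quantity (omega_m R)^2.  Raises for a resonant mode,
    where the source must have no component with this l."""
    denom = l * (l + 3) - wm2R2
    if denom == 0:
        raise ValueError("resonant mode: source must not contain l = %d" % l)
    return h * R**2 / denom

# e.g. with (omega_m R)^2 = 5 (not of the form l(l+3)), a unit source
# coefficient at l = 0 gives a = 1 / (0 - 5) = -0.2
print(solution_coefficient(1.0, 0, 5.0))  # -0.2
```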
Suppose we have a point-like, delta-function blip of source on S^{4}. What is the solution associated with that source? We’d expect it to look a bit like the Green’s function we previously found for a momentary blip of charge on Euclidean four-space, which was proportional to Y_{1}(ω_{m} s) / s, where Y_{1} is a Bessel function of the second kind.
To keep things simple, we’ll confine ourselves to the scalar wave equation. If we place the source at a pole of our coordinate system, where ξ=ψ=θ=0 and φ is undefined (just as longitude is undefined at the Earth’s north and south poles), then the only spherical harmonic coefficients that will be nonzero will have m = j = k = 0, since all other values make the harmonic Y_{j, k, l}^{m} equal to zero at the pole. The nonzero coefficients are then:
h_{0, 0, l}^{0} = – √[(l+1)(l+2)(2l+3)] / (4 π)
a_{0, 0, l}^{0} = – R^{2} √[(l+1)(l+2)(2l+3)] / [(4 π) (l(l+3) – ω_{m}^{2} R^{2})]
We are assuming that there is no integer l such that l(l+3) = ω_{m}^{2} R^{2}. The solution is:
A(ξ) = –R^{2} Σ_{l=0}^{∞} (2l+3) P^{1}_{l+1}(cos ξ) / [(8 π^{2}) (l(l+3) – ω_{m}^{2} R^{2}) sin(ξ)]
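A numerical approximation to this sum is straightforward to set up. The sketch below computes the associated Legendre functions P^{1}_{n} by the standard three-term recurrence, assuming the Condon-Shortley sign convention (if the document’s convention for P^{1} omits that phase, the overall sign flips), and truncates the sum at some l_{max}. Since the terms only fall off like l^{–1/2}, convergence is slow, so this gives only a rough picture:

```python
import math

def assoc_legendre_P1(nmax, x):
    """P^1_n(x) for n = 0..nmax (Condon-Shortley convention), via the
    recurrence (n-1) P^1_n = (2n-1) x P^1_{n-1} - n P^1_{n-2}."""
    s = math.sqrt(1.0 - x * x)
    P = [0.0, -s]                  # P^1_0 = 0, P^1_1 = -sqrt(1-x^2)
    for n in range(2, nmax + 1):
        P.append(((2 * n - 1) * x * P[n - 1] - n * P[n - 2]) / (n - 1))
    return P

def A(xi, wm2R2, R=1.0, lmax=400):
    """Truncation of the sum above: the field of a point source at the
    pole of S^4, evaluated at hyperspherical angle xi.  Assumes wm2R2
    is not of the form l(l+3)."""
    P = assoc_legendre_P1(lmax + 1, math.cos(xi))
    total = 0.0
    for l in range(lmax + 1):
        total += (2 * l + 3) * P[l + 1] / (l * (l + 3) - wm2R2)
    return -R**2 * total / (8 * math.pi**2 * math.sin(xi))

print(A(1.0, 5.0))  # field at xi = 1 for (omega_m R)^2 = 5
```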
The diagram on the right shows a numerical approximation to this sum. The function behaves much as we’d expect for the first half of its domain, oscillating and declining with distance from the source, but then undergoes a disconcerting resurgence as it approaches the opposite pole. This is an artifact of the symmetry of S^{4}; in a less homogeneous space with the same topology the effect would be much less prominent.
It’s worth noting that even with this perfect symmetry, the field at the antipodal point is finite, unlike that at the source itself. The sum for ξ=π can be computed explicitly:
A(π) = –R^{2} (ω_{m}^{2} R^{2} + 2) / (16 π cos((π/2) √[4 ω_{m}^{2} R^{2} + 9]))
The cosine in the denominator could only be zero if 4 ω_{m}^{2} R^{2} + 9 were the square of an odd integer, which is equivalent to ω_{m}^{2} R^{2} being of the form l(l+3) for some integer l; since we are assuming that is not the case, the field here will be finite.
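Evaluating this closed form for a sample value, say ω_{m}^{2} R^{2} = 5 (which is not of the form l(l+3)), confirms a finite, nonzero result:

```python
import math

def A_antipode(wm2R2, R=1.0):
    """Closed form for the field at xi = pi, the point antipodal to the
    source, from the formula above.  wm2R2 is (omega_m R)^2."""
    c = math.cos((math.pi / 2) * math.sqrt(4 * wm2R2 + 9))
    return -R**2 * (wm2R2 + 2) / (16 * math.pi * c)

# cos((pi/2) sqrt(29)) is nonzero, since sqrt(29) is not an odd integer;
# equivalently, 5 is not of the form l(l+3).
print(A_antipode(5.0))
```

By contrast, a resonant value such as ω_{m}^{2} R^{2} = 10 = 2·5 makes 4 ω_{m}^{2} R^{2} + 9 = 49 a square of an odd integer, and the cosine in the denominator vanishes.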
For the T^{4} universe, we found that if we had Cauchy data across the width of the universe at a single instant of time (where the time axis could be any of the four coordinates that wrapped around the 4-torus), we could determine the values of the finite number of coefficients of a free wave, and thus reconstruct its history for all time.
For S^{4}, we can do the same thing with Cauchy data on any “great 3-sphere”, i.e. any 3-sphere of radius R. Assuming the geometry allows sourceless waves, all the spherical harmonics Y_{j, k, l}^{m}(ξ, ψ, θ, φ) that satisfy the sourceless wave equation will share the same value of l. We choose a coordinate system in which ξ=π/2 on the 3-sphere for which we have data. For those harmonics that reach a maximum or minimum at ξ=π/2, we can find their coefficients from the field’s value on the 3-sphere, while those harmonics that are zero there will have maxima or minima in their derivatives in the ξ direction, and we can find their coefficients from the field’s derivative. So from Cauchy data on the 3-sphere, we can reconstruct the entire history of the solution.
What if we have data on a smaller 3-sphere, which we could describe as a hypersurface ξ=ξ_{0} for some ξ_{0} < π/2? So long as we actually know the value of ξ_{0}, the factors Ξ^{k}_{l}(ξ) and ∂_{ξ}Ξ^{k}_{l}(ξ) will be known quantities on the 3-sphere (and they will never both be zero at once), so in principle we should always be able to compute all the coefficients of the solution.
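Recovering each coefficient is essentially a small linear solve: the field’s value supplies one linear equation and its ξ-derivative another. As a toy one-dimensional analogue (this is not the actual Ξ^{k}_{l} factor, just an illustrative stand-in of the form a cos(ω ξ) + b sin(ω ξ)):

```python
import math

def reconstruct(value, deriv, omega, xi0):
    """Toy analogue of Cauchy reconstruction: recover (a, b) in
    f(xi) = a cos(omega xi) + b sin(omega xi) from f(xi0) and f'(xi0).
    The 2x2 system
        [  c      s  ] [a]   [value]
        [ -w s   w c ] [b] = [deriv]
    has determinant omega, so it is never singular."""
    c, s = math.cos(omega * xi0), math.sin(omega * xi0)
    a = (value * omega * c - deriv * s) / omega
    b = (value * omega * s + deriv * c) / omega
    return a, b

# Recover a = 2, b = -1 from data at a single point xi0 = 0.7, omega = 5
omega, xi0, a, b = 5.0, 0.7, 2.0, -1.0
f = a * math.cos(omega * xi0) + b * math.sin(omega * xi0)
df = -a * omega * math.sin(omega * xi0) + b * omega * math.cos(omega * xi0)
print(reconstruct(f, df, omega, xi0))  # ~ (2.0, -1.0)
```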
This leads to the curious observation that in principle we could reconstruct the entire solution from Cauchy data on even the smallest 3-sphere. After all, such a 3-sphere is a boundary of two finite regions: its interior as normally construed, and also the rest of the S^{4} universe, just as the Arctic Circle is a boundary for the region around the north pole and also for the remainder of the Earth’s surface. But in practice, for ξ much less than π/2 the values of Ξ^{k}_{l}(ξ) become extremely small compared to the values at π/2, and the peaks of the other factors in the harmonics become increasingly closely spaced, to the point where extrapolating outwards from a small 3-sphere to the whole universe would demand a prohibitive degree of accuracy in the data.
[1] Classical Electrodynamics by John David Jackson, John Wiley & Sons, 1999. Section 12.7 gives the Lagrangian for ordinary electromagnetism, and Section 12.8 gives a Lagrangian for Lorentzian Proca electrodynamics. (Note that Jackson uses different units than those we’ve adopted, and also a (+ – – –) signature for the Lorentzian metric, so it takes some care to compare these formulas.)
[2] Gravitation by Charles Misner, Kip Thorne and John Wheeler, W.H. Freeman, New York, 1973. Section 21.3.