Covectors in the Dual Space. This sounds like an alien tribe living in a parallel universe hitherto unknown to humans.
In his lectures on General Relativity, Prof. Frederic Schuller says:
Now comes a much-feared topic: Dual vector space. And it’s totally unclear why this is such a feared topic!
A vector feels familiar: three numbers tacked on a line segment with head and tail – an arrow floating about in the three-dimensional space we seem to know. A covector seems to belong in a different universe: a linear map that assigns a number to a vector. Covectors form a vector space in their own right – the dual space.
Envisage parallel planes, infinitely stretched out. Project a vector onto the plane(s), cast its shadow. Measure the length of the shadow; it is a number. If two vectors are added, the sizes of their shadows add up. If the vector is stretched by a factor, the shadow will stretch in proportion. This is the definition of a linear map. Each set of parallel planes represents a covector.
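A few lines of Python (an illustrative sketch, not from the lecture – the particular numbers are arbitrary) make the linearity concrete: the shadow of a sum is the sum of the shadows, and stretching the vector stretches the shadow in proportion.

```python
import numpy as np

# A covector, pictured as a stack of parallel planes, is just a linear map
# from vectors to numbers. Its components here are chosen arbitrarily.
omega = np.array([2.0, -1.0, 0.5])

def covector(v):
    """Apply the covector to a vector: measure the 'shadow'."""
    return float(omega @ v)

u = np.array([1.0, 2.0, 3.0])
w = np.array([0.0, -1.0, 4.0])

# Linearity: shadows of a sum add up, and scaling the vector scales the shadow.
assert np.isclose(covector(u + w), covector(u) + covector(w))
assert np.isclose(covector(3.0 * u), 3.0 * covector(u))
```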
In a two-dimensional world, covectors don’t look like planes, but like vectors. They can help determine the components of a vector along two given directions.
You could draw a parallelogram with sides parallel to the axes, and calculate the length of each side in units of the two basis vectors. No covectors involved. You obtain the same numbers via an alternative construction: Draw a line perpendicular to each axis – a dual axis. The line perpendicular to axis 1 shall be the dual of axis 2. Project each basis vector onto its dual axis – the projections are the elements of the dual basis. Project the vector to be examined onto each dual axis. Measure the projection in units of the dual basis; by elementary geometry, the numbers are the same as before.
The second method invokes covectors, as linear maps. Covectors act on vectors: they eat a vector and spit out a number. The projection measured in units of the projected basis vector is that number.
In an orthonormal co-ordinate system, you would not need a separate set of axes to determine a vector’s components by projections. Axis 1 is perpendicular to axis 2, so each dual axis seems to coincide with the other axis.
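The geometric construction above has a simple numerical counterpart (a sketch with an arbitrarily chosen non-orthogonal basis): the dual basis covectors are the rows of the inverse of the matrix whose columns are the basis vectors, and applying them to a vector recovers its components.

```python
import numpy as np

# Two non-orthogonal basis vectors in the plane (chosen arbitrarily here).
e1 = np.array([1.0, 0.0])
e2 = np.array([1.0, 1.0])
B = np.column_stack([e1, e2])

# The dual basis covectors are the rows of the inverse matrix:
# row i applied to basis vector j gives 1 if i == j, else 0.
dual = np.linalg.inv(B)

v = 3.0 * e1 + 2.0 * e2     # a vector with known components (3, 2)
components = dual @ v       # each dual row 'eats' v and spits out a number
# components == [3.0, 2.0]: the dual basis recovers the components of v
```

For an orthonormal basis, B is the identity and each dual covector coincides with the corresponding basis vector – which is why the separate dual axes go unnoticed in ordinary Cartesian co-ordinates.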
I found the two-dimensional example in an unlikely place, in an antique (1960s) series of volumes on theoretical physics, written by the late Wilhelm Macke. After mechanics had finally been laid out as a field theory, with Gauss’ theorems, Green’s functions, and partial differential equations, there comes an unexpected down-to-earth chapter on statics – parallelograms of forces and constructions using virtual ropes. The parallelogram of forces is explained in terms of covariant and contravariant vectors, as if this should turn into an introduction to tensor analysis. You hear a theoretical physicist translating pragmatic rules of engineering back into his language.
A lengthy epilogue, with more details.
Arrows floating in orthonormal co-ordinate systems can be dangerous metaphors which break down not only in non-Euclidean curved spaces, but also when introducing, say, polar co-ordinates. A vector is an abstract object that exists irrespective of a basis. Schuller quotes the British proverb: A gentleman only chooses a basis when he must.
In a three-dimensional space (a space that at least locally looks like R3), a vector can be represented as the sum of basis vectors e1, e2, and e3 with components (numbers) v1, v2, and v3. Both subscripts and superscripts are indices.
v = v1e1 + v2e2 + v3e3
The dual space is made up of linear maps that send vectors to numbers. One option to build a basis for the dual space is to demand that each covector in the dual basis should send each basis vector either to zero or one: Dual basis covector e1 sends basis vector e1 to 1, but e2 and e3 to 0. e1 as a function of e1 is 1: e1(e1) = 1, but e1(e2) = 0.
An arbitrary covector ω can also be represented in its basis as
ω = ω1e1 + ω2e2 + ω3e3
ω acts on a vector v:
ω(v) = [ ω1e1 + ω2e2 + ω3e3 ]( v1e1 + v2e2 + v3e3 )
The round brackets indicate that the covector ω is a function, and the vector v is the function’s argument. But you could see this action as a sort of product of vector and covector.
Because covectors are linear functions, this is equivalent to the sum of 9 terms, 6 of which are zero:
ω1 v1 e1(e1) + ω1 v2 e1(e2) + ω1 v3 e1(e3) + ω2 v1 e2(e1) + …
= ω1 v1 + ω2 v2 + ω3 v3
If covector e1 acts on an arbitrary vector, it will extract exactly the first component of the vector, v1 – as in this case ω2 and ω3 are zero, and ω1 is one. The three covectors in the dual basis extract the co-ordinates v1, v2, v3, of the vector in the original basis.
The ‘product’ between vector and covector looks like the familiar scalar product. Any such scalar-product-like operation can be used to define covectors. In his Principles of Quantum Mechanics, Paul Dirac gives a generic introduction to the relationship between vector space and dual space. He refers to the bra and ket (co)vectors in quantum mechanics:
Suppose we have a number Φ which is a function of a ket vector |A>, i.e. to each ket vector |A> there corresponds one number Φ, and suppose further that the function is a linear one, which means that the number corresponding to |A> + |A’> is the sum of the numbers corresponding to |A> and |A’>, and the number corresponding to c|A> is c times the number corresponding to |A>, c being any numerical factor. Then the number Φ corresponding to any |A> may be looked upon as the scalar product of that |A> with some new vector, there being one of these new vectors for each linear function of the ket vectors |A>.
These new vectors are the bra vectors. Taking a scalar product results in a number, calculated from both a ket and a bra vector. But when the bra vector is given, one slot of two in this product is filled, and the remaining structure becomes a function that assigns a number to the vector that will fill the empty slot. A linear function acting on a vector is a covector.
< [Given bra covector] * [Given ket vector] > = A Number
< [Given bra covector] * [Open slot for ket vector] > = Function of a ket vector
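The ‘filled slot’ picture translates naturally into code as partial application (a sketch using a real inner product in place of Dirac’s complex one; the vectors are arbitrary):

```python
import numpy as np

# A 'ket' is a column vector; a 'bra' arises by fixing one slot of the
# inner product, leaving behind a linear function that waits for a ket.
def bra(b):
    """Fix the bra slot; return the linear function of a ket that remains."""
    return lambda ket: float(np.conj(b) @ ket)

A = np.array([1.0, 2.0])
phi = bra(np.array([3.0, -1.0]))  # one slot filled: now a covector
phi(A)                            # both slots filled: a number, 3*1 - 1*2 = 1
```

Fixing the bra yields exactly the structure described above: a linear function acting on a vector – a covector.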