Grim Reaper Does a Back-of-the-Envelope Calculation

I have a secondary super-villain identity. People on Google+ called me:
Elke the Ripper or Master of the Scythe.

[FAQ] No, I didn’t lose a bet. We don’t have a lawn-mower by choice. Yes, we tried the alternatives, including a reel lawn-mower. Yes, I really enjoy doing this.

It is utterly exhausting – there is no other outdoor activity in summer that leaves me with the feeling of really having achieved something!

So I was curious if Grim Reaper the Physicist can express this level of exhaustion in numbers.

Just holding a scythe with arms stretched out would not count as ‘work’. Yet I believe that in this case it is the acceleration required to bring the scythe to proper speed that matters; so I will focus on work in terms of physics.

In order to keep this simple, I assume that the weight of the scythe is a few kilos (say: 5kg) concentrated at the end of a weightless pole of 1,5m length. All the kinetic energy is concentrated in this ‘point mass’.

But how fast does a blade need to move in order to cut grass? Or from experience: How fast do I move the scythe?

One sweep with the scythe takes a fraction of second – probably 0,5s. The blade traverses an arc of about 2m.

Thus the average speed is: 2m / 0,5s = 4m/s

However, using this speed in further calculations does not make much sense: The scythe has two handles that allow for exerting a torque – the energy goes into acceleration of the scythe.

If an object with mass m is accelerated from a velocity of zero to a peak velocity vmax, the kinetic energy acquired is calculated from the maximum velocity: m·vmax²/2. How exactly the velocity has changed with time does not matter – this is just conservation of energy.

But what is the peak velocity?

For comparison: How fast do lawn-mower blades spin?

This page says: at 3600 revolutions per minute when not under load, dropping to about 3000 when under load. How fast would I have to move the scythe to achieve the same?

Velocity of a rotating body is angular velocity times radius. Angular velocity is 2Pi – a full circle – times the frequency, that is revolutions per time. The radius is the length of the pole that I use as a simplified model.

So the scythe on par with a lawn-mower would need to move at:
2Pi * (3000 rev./minute) / (60 seconds/minute) * 1,5m = 471m/s

This would result in the following energy per arc swept. I use only SI units, so the resulting energy is in Joule:

Energy needed to accelerate to 471m/s: 5kg * (471m/s)² / 2 = 555.000J = 555kJ

I am assuming that this energy is just consumed (dissipated) to cut the grass; the grass brings the scythe to halt, and it is decelerated to 0m/s again.

Using your typical food-related units:
1 kilocalorie is 4,18kJ, so this amounts to about 133kcal (!!)
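The back-of-the-envelope numbers above can be replayed in a few lines of Python – a sketch using only my assumptions from above (5kg point mass, 1,5m pole, 3000 rpm under load):

```python
import math

m = 5.0      # scythe modeled as a point mass [kg]
r = 1.5      # length of the assumed weightless pole [m]
rpm = 3000   # lawn-mower blade speed under load [revolutions per minute]

omega = 2 * math.pi * rpm / 60   # angular velocity [rad/s]
v_max = omega * r                # peak speed of the blade [m/s]
energy = m * v_max**2 / 2        # kinetic energy per arc swept [J]

print(round(v_max))          # about 471 m/s
print(round(energy / 1000))  # about 555 kJ
print(round(energy / 4184))  # about 133 kcal
```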

That sounds way too much already: Googling typical energy consumption for various activities I learn that easy work in the garden needs about 100-150 kilocalories per half an hour!

If scything were that ‘efficient’ I would put into practice what we always joke about: Offer outdoor management trainings to stressed out IT managers who want to connect with their true selves again through hard work and/or work-out most efficiently. So they would pay us for the option to scythe our grass.

But before I crank down the hypothetical velocity again, I calculate the energy demand per half an hour:

I feel exhausted after half an hour of scything. I pause a few seconds before the next sweep – so one sweep plus pause takes, say, 10s on average. In reality it is probably more like:

scythe…1s…scythe…1s…scythe…1s….scythe…1s….scythe…longer break, gasping for air, sharpen the scythe.

I assume a break of 9,5s on average to make the calculation simpler. So this is 1 arc swept per 10 seconds, 6 arcs per minute, and 180 per half an hour. After half an hour I need to take a longer break.

So using that lawn-mower-style speed this would result in:

Energy per half an hour if I were a lawn-mower: 133kcal * 180 = 23.940kcal

… about five times the daily energy demands of a human being!

Velocity enters the equation quadratically. Assuming now that my peak scything speed is really only a tenth of the speed of a lawn-mower – 47m/s, which is still about 10 times my average speed calculated at the beginning – this would result in one hundredth of the energy.

A bit more realistic energy per half an hour of scything is then: 239kcal

Just for comparison – to get a feeling for those numbers: Average acceleration is the maximum velocity divided by the time needed to reach it. Thus 47m/s would result in:

Average acceleration: (47m/s) / (0,5s) = 94m/s²

A fast car accelerates to 100km/h within 3 seconds, at (100/3,6)m/s / 3s = 9m/s²

So my assumed scythe’s acceleration is about 10 times a Ferrari’s!
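The same comparison in code – again just with the assumed numbers from above:

```python
v_scythe = 47.0            # assumed peak scythe speed [m/s]
t_sweep = 0.5              # time to reach it during one sweep [s]
a_scythe = v_scythe / t_sweep

v_car = 100 / 3.6          # 100 km/h converted to m/s
t_car = 3.0                # a fast car: 0 to 100 km/h in 3 s
a_car = v_car / t_car

print(a_scythe)                 # 94 m/s²
print(round(a_car, 1))          # about 9.3 m/s²
print(round(a_scythe / a_car))  # roughly a factor of 10
```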

Now I would need a high-speed camera, determine speed exactly and find a way to calculate actual energy needed for cutting.

Is there some conclusion?

This was just playful guesswork, but the general line of reasoning and cross-checking of orders of magnitude outlined here is not much different from when I try to get my simulations of our heat pump system right – based on unknown parameters, such as the effect of radiation, the heat conduction of the ground, and the impact of convection in the water tank. The art is not so much in getting numbers exactly right but in determining which parameters matter at all and how sensitive the solution is to a variation of those. In this case it would be crucial to determine peak speed more exactly.

In physics you can say the same thing in different ways – choosing one way over the other can make the problem less complex. As in this case, using total energy is often easier than trying to figure out the evolution of forces or torques with time.

The two images above were taken in early spring – when the ‘lawn’ / meadow was actually still growing significantly. Since we do not water it, the relentless Pannonian sun has already started to turn it into a mixture of green and brown patches.

This is how the lawn looks now, one week after the latest scything. This is not intended to be beautiful – I wanted to add a realistic picture as I had been asked about the ‘quality’ compared to a lawn-mower. Result: Good enough for me!

Scything: One week after


How to Introduce Special Relativity (Historical Detour)

I am just reading the volume titled Waves in my favorite series of ancient textbooks on Theoretical Physics by German physics professor Wilhelm Macke. I tried to resist the urge to write about seemingly random fields of physics, and probably weird ways of presenting them – but I can’t resist any longer.

There are different ways to introduce special relativity. Typically, the Michelson-Morley experiment is presented first, as our last attempt in a futile quest to determine the absolute speed in relation to “ether”. In order to explain its results we have to accept the fact that the speed of light is the same in any inertial frame. This is weird and non-intuitive: We probably can’t help but compare a ray of light to a bunch of bullets or a fast train – whose velocity relative to us does change with our velocity. We can outrun a train but we can’t outrun light.

Michelson–Morley experiment

The Michelson–Morley experiment: If light travels in a system – think: space ship – that moves at velocity v with respect to absolute space, the resulting velocity should depend on the angle between the system’s velocity and the absolute velocity. Just in the same way as the observed relative velocity of a train becomes zero if we manage to ride beside it in a car driving at the same speed as the train. But this experiment shows – via the non-detected interference of beams of allegedly varying velocities – that we must not calculate relative velocities for beams of light. (Wikimedia)

Yet not accepting it would lead to even weirder consequences: After all, the theory of electromagnetism had always been relativistically invariant. The speed of light shows up as a constant in the related equations, which explain perfectly how waves of light behave.

I think the most straightforward way to introduce special relativity is to start from its core ideas (only) – the constant speed of light and the equivalence of frames of reference. This is the simplicity and beauty of symmetry. No need to start with trains and lightning bolts, as Matthew Rave explained so well. For the more visually inclined there is an ingenious and nearly purely graphical way, called k-calculus (which is however seldom taught, AFAIK – I had stumbled upon it once in a German book on relativity).

From the first principles all the weirdness of length contraction and time dilation follows naturally.

But is there a way to understand it a bit better though?

Macke also starts from the Michelson-Morley experiment – and he adds the fact that it can be “explained” by Lorentz’ contraction hypothesis: Allowing for direction-dependent velocities – as in “ether theory” – but adding the odd fact that rulers contract in the direction of the unobservable absolute motion makes the differences in the paths the rays of light traverse go away. It also “explains” time dilation if you consider your typical light clock and factor in the contraction of lengths:

Light clock

The classical light clock: Light travels between two mirrors. When it hits a mirror it “ticks”. If the clock moves relative to an observer the path to be traversed between ticks appears to be longer. Thus measurement of time is tied to measurement of spatial distances.

However, length contraction could be sort of justified by tracing it back to the electromagnetic underpinnings of stuff we use in the lab. And it is the theory of electromagnetism where the weird constant speed of light sneaks in.

Contraction can be visualized by noting that rulers and clocks are ultimately made from atoms, ions or molecules, whose positions are determined by electromagnetic forces. The perfect sphere of the electrostatic potential around a point charge would be turned into an ellipsoid if the charge starts moving – hence the contraction. You could hypothesize that only “electromagnetic stuff” might be subject to contraction and that there might be “mechanical stuff” that would allow for measuring true time and spatial dimensions.

Thus the new weird equations about contracting rulers and slowing time are introduced as statements about electromagnetic stuff only. We use them to calculate back and forth between lengths and times displayed on clocks that suffer from the shortcomings of electromagnetic matter. The true values for x,y,z,t are still there, but finally inaccessible as any matter is electromagnetic.

Yes, this explanation is messy as you mix underlying – but not accessible – direction-dependent velocities with the contraction postulate added on top. This approach misses the underlying simplicity of the symmetry in nature. It is a historical approach, probably trying to do justice to the mechanical thought experiments involving trains and clocks that Einstein had also used (and that could be traced back to his childhood spent basically in the electrical engineering company run by his father and uncle, according to this biography).

What I found fascinating though is that you get consistent equations assuming the following:

  • There are true co-ordinates we can never measure; for those Galilean Transformations remain valid, that is: Time is the same in all inertial frames and distances just differ by time times the speed of the frame of reference.
  • There are “apparent” or “electromagnetic” co-ordinates that follow Lorentz Transformations – of which length contraction and time dilation are consequences.
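To see numerically what the second set of transformations achieves, here is a minimal sketch (my own toy example, not Macke’s): applying the Lorentz Transformation to an event on a light ray shows that every inertial observer measures the same speed of light.

```python
import math

c = 299_792_458.0   # speed of light [m/s]

def lorentz(x, t, v):
    """Transform event (x, t) into a frame moving at velocity v."""
    gamma = 1 / math.sqrt(1 - (v / c) ** 2)
    return gamma * (x - v * t), gamma * (t - v * x / c**2)

# An event on a light ray emitted from the origin: x = c*t
t_event = 1.0
x_event = c * t_event

for v in (0.5 * c, 0.9 * c, -0.99 * c):
    x_p, t_p = lorentz(x_event, t_event, v)
    print(x_p / t_p / c)   # always 1 (up to rounding): light moves at c
```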

To make these sets of transformations consistent you have to take into account that you cannot synchronize clocks in different locations if you don’t know the true velocity of the frame of reference. Synchronization is done by placing an emitter of light right in the middle of the two clocks to be synchronized, sending signals to both clocks. This is correct only if the emitter is at rest with respect to both clocks. But we cannot determine when it is at rest because we never know the true velocity.

What you can do is to assume that one frame of reference is absolutely at rest, thus implying that (true) time is independent of spatial dimensions, and the other frame of reference moving in relation to it suffers from the problem of clock synchronization – thus in this frame true time depends on the spatial co-ordinates used in that frame.

The final result is the same when you eliminate the so-called true co-ordinates from the equations.

I don’t claim it’s the best way to explain special relativity – I just found it interesting, as it tries to take the merely hypothetical nature of 4D spacetime as far as possible while giving results in line with experiments.

And now explaining the really important stuff – and another historical detour in its own right

Yes, I changed the layout. My old theme, Garland, had been deprecated, but I am nostalgic – so here is a screenshot, courtesy of visitors who will read this in 200 years: this blog using theme Garland – from March 2012 to February 2014 – with minor modifications made to colors and stylesheet in 2013.

I had checked it with an iPhone simulator – and it wasn’t simply too big or just “not responsive”; the top menu bar and the boundaries of divs looked scrambled. Thus I decided the days of Garland and the three-column layout were over.

Now you can read my 2.000-word posts on your mobile devices – something I guess everybody has eagerly anticipated.

And I have just moved another nearly 1.000 words of meta-philosophizing on the value of learning such stuff (theory of relativity, not WordPress) from this post to another draft.

Non-Linear Art. (Should Actually Be: Random Thoughts on Fluid Dynamics)

In my favorite ancient classical mechanics textbook I found an unexpected statement. I think 1960s textbooks weren’t expected to be garnished with geek humor or philosophical references as much as seems to be the default today – therefore Feynman’s books were so refreshing.

Natural phenomena featured by visual artists are typically those described by non-linear differential equations. Those equations allow for the playful interactions of clouds and water waves of ever-changing shapes.

So fluid dynamics is more appealing to the artist than boring electromagnetic waves.

Grimshaw, John Atkinson - In Peril - 1879

Is there an easy way to explain this without too much math? Most likely not but I try anyway.

I try to zoom in on a small piece of material, an incredibly small cube of water in a flow at a certain point of time. I imagine this cube as decorated by color. This cube will change its shape quickly and turn into some irregular shape – there are forces pulling and pushing – e.g. gravity.

This transformation is governed by two principles:

  • First, mass cannot vanish. This is classical physics, no need to consider the generation of new particles from the energy of collisions. Mass is conserved locally, that is if some material suddenly shows up at some point in space, it had to have been travelling to that point from adjacent places.
  • Second, Newton’s law is at play: Forces are equal to the change in momentum per unit of time. If we know the force acting at time t and point (x,y,z), we know how much the momentum will change in a short period of time.

Typically any course in classical mechanics starts from point particles such as cannon balls or planets – masses that happen to be concentrated in a single point in space. Knowing the force at a point of time at the position of the ball we know the acceleration and we can calculate the velocity in the next moment of time.

This also holds for our colored little cube of fluid – but we usually don’t follow decorated lumps of mass individually. The behavior of the fluid is described perfectly if we know the mass density and the velocity at any point of time and space. Think little arrows attached to each point in space, probably changing with time, too.

Aerodynamics of model car

Digesting that difference between a particle’s trajectory and an anonymous velocity field is a big conceptual leap, in my view. Sometimes I wonder if it would be better not to learn about the point approach in the first place because it is so hard to unlearn later. Point particle mechanics is included as a special case in fluid mechanics – the flowing cannon ball is represented by a field that has a non-zero value only at positions equivalent to the trajectory. Using the field-style description we would say that part of the cannon ball vanishes behind it and re-appears “before” it, along the trajectory.

Pushing the cube also moves it to another place where the velocity field differs. Properties of that very decorated little cube can change at the spot where it is – this is called an explicit dependence on time. But it can also change indirectly because parts of it are moved with the flow. It changes with time due to moving in space over a certain distance. That distance is again governed by the velocity – distance is velocity times period of time.

Thus for one spatial dimension the change of velocity dv associated with dt elapsed is also related to a spatial shift dx = vdt. Starting from a mean velocity of our decorated cube v(x,t) we end up with v(x + vdt, t+dt) after dt has elapsed and the cube has been moved by vdt. For the cannon ball we could have described this simply as v(t + dt) as v was not a field.
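This bookkeeping can be checked numerically for a toy velocity field of my own choosing – the change seen while following the cube equals the local change plus the convective term v·∂v/∂x:

```python
import math

def v(x, t):
    return math.sin(x - 0.5 * t)   # an assumed toy velocity field

x0, t0, dt = 1.0, 2.0, 1e-6
v0 = v(x0, t0)

# Change seen while moving with the little cube by v0*dt during dt:
total = (v(x0 + v0 * dt, t0 + dt) - v0) / dt

# Local change plus convective term, via central finite differences:
h = 1e-6
dv_dt = (v(x0, t0 + h) - v(x0, t0 - h)) / (2 * h)
dv_dx = (v(x0 + h, t0) - v(x0 - h, t0)) / (2 * h)

print(total, dv_dt + v0 * dv_dx)   # the two numbers agree
```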

And this is where non-linearity sneaks in: The indirect contribution via moving with the flow, also called convective acceleration, is quadratic in v – the spatial change of v is multiplied by v again. If you then allow for friction you get even more nasty non-linearities in the parts of the Navier-Stokes equations describing the forces.

My point here is that even if we neglect dissipation (describing what is tongue-in-cheek called dry water) there is already non-linearity. The canonical example for wavy motions – water waves – is actually rather difficult to describe due to that, and you need to resort to considering small fluctuations of the water surface even if you start from the simplest assumptions.

The tube

From ElKement: On The Relation Of Jurassic Park and Alien Jelly Flowing Through Hyperspace

I am still working on a more self-explanatory update to my previous physics post … trying to explain that multi-dimensional hyperspace is really a space of all potential states a single system might exhibit – a space of possibilities, and not one of those infamous multi-dimensional worlds that might really be ‘out there’ according to string theorists. In the meantime, please enjoy mathematician Joseph Nebus’ additions to my post, which include a down-to-earth example.


I’m frightfully late on following up on this, but ElKement has another entry in the series regarding quantum field theory, this one engagingly titled “On The Relation Of Jurassic Park and Alien Jelly Flowing Through Hyperspace”. The objective is to introduce the concept of phase space, a way of looking at physics problems that marks maybe the biggest thing one really needs to understand if one wants to be not just a physics major (or, for many parts of the field, a mathematics major) but a grad student.

As an undergraduate, it’s easy to get all sorts of problems in which, to pick an example, one models a damped harmonic oscillator. A good example of this is how one models the way a car bounces up and down after it goes over a bump, when the shock absorbers are working. You as a student are given some physical properties —…

View original post 606 more words

On the Relation of Jurassic Park and Alien Jelly Flowing through Hyperspace

Yes, this is a serious physics post – no. 3 in my series on Quantum Field Theory.

I promised to explain what Quantization is. I will also argue – again – that classical mechanics is unjustly associated with pictures like this:

Steampunk wall clock (Wikimedia)

… although it is more like this:

Timelines in Back to the Future | By TheHYPO [CC-BY-SA-3.0 or GFDL], via Wikimedia Commons

This shows the timelines in Back to the Future – in case you haven’t recognized it immediately.

What I am trying to say here is – again – is so-called classical theory is as geeky, as weird, and as fascinating as quantum physics.

Experts: In case I get carried away by my metaphors – please see the bottom of this post for technical jargon and what I actually try to do here.

Get a New Perspective: Phase Space

I am using my favorite simple example: A point-shaped mass connected to a massless spring, or a pendulum, oscillating forever – not subject to friction.

The speed of the mass is zero when the motion changes from ‘upward’ to ‘downward’. It is maximum when the pendulum reaches the point of minimum height. Everything oscillates: Kinetic energy is transferred to potential energy and back. Position, velocity and acceleration all follow wavy sine or cosine functions.

For purely aesthetic reasons I could also plot the velocity versus position:

Simple Harmonic Motion Orbit | By Mazemaster (Own work) [Public domain], via Wikimedia Commons

From a mathematical perspective this is similar to creating those beautiful Lissajous curves: Connecting a signal representing position to the x input of an oscilloscope and the velocity signal to the y input results in a circle or an ellipse:

Lissajous curves | User Fiducial, Wikimedia

This picture of the spring’s or pendulum’s motion is called a phase portrait in phase space. Actually we use momentum, that is: velocity times mass, but this is a technicality.

The phase portrait is a way of depicting what a physical system does or can do – in a picture that allows for quick assessment.
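For the spring this quick assessment can be made concrete: every (position, momentum) point of the oscillation lies on one and the same ellipse of constant energy. A small sketch with arbitrarily chosen mass, spring constant and amplitude:

```python
import math

m, k, A = 1.0, 4.0, 0.5    # mass, spring constant, amplitude (arbitrary)
omega = math.sqrt(k / m)
E = k * A**2 / 2           # total energy – it sets the size of the ellipse

for i in range(8):
    t = i * (2 * math.pi / omega) / 8
    x = A * math.cos(omega * t)               # position
    p = -m * A * omega * math.sin(omega * t)  # momentum = mass * velocity
    print(p**2 / (2 * m) + k * x**2 / 2)      # always E: the same ellipse
```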

Non-Dull Phase Portraits

Real-life oscillating systems do not follow simple cycles. The so-called Van der Pol oscillator is a model system subject to damping. It is also non-linear because the force of friction depends on the position squared and the velocity. Non-linearity is not uncommon; also the friction an airplane or car ‘feels’ in the air is proportional to the velocity squared.

The stronger this non-linear interaction is (the parameter mu in the figure below), the more the phase portrait deviates from the circular shape:

Van der Pol’s equation phase portrait | By Krishnavedala (Own work) [CC-BY-SA-3.0 or GFDL], via Wikimedia Commons

Searching for this image I have learned from Wikipedia that the Van der Pol oscillator is used as a model in biology – here the physical quantity considered is not a position but the action potential of a neuron (the electrical voltage across the cell’s membrane).
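A minimal simulation of the Van der Pol oscillator (simple Euler stepping; mu, the step size and the starting point are my assumptions) shows what the phase portraits above suggest: even a tiny initial nudge makes the trajectory spiral out onto a limit cycle with an amplitude of about 2:

```python
mu, dt = 2.0, 1e-3
x, v = 0.01, 0.0                   # start very close to the unstable rest point

amplitude = 0.0
for step in range(200_000):        # 200 units of time
    a = mu * (1 - x * x) * v - x   # Van der Pol: x'' = mu*(1 - x²)*x' - x
    x, v = x + v * dt, v + a * dt
    if step > 150_000:             # measure after the transients have died out
        amplitude = max(amplitude, abs(x))

print(amplitude)   # close to 2, no matter how small the initial nudge
```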

Thus plotting the rate of change of a quantity we can measure versus the quantity itself makes sense for diverse kinds of systems. This is not limited to natural sciences – you could also determine the phase portrait of an economic system!

Addicts of popular culture memes might have guessed already which phase portrait needs to be depicted in this post:

Reconnecting to Popular Science

Chaos Theory has become popular via the elaborations of Dr. Ian Malcolm (Jeff Goldblum) in the movie Jurassic Park. Chaotic systems exhibit phase portraits that are called Strange Attractors. An attractor is the set of points in phase space a system ‘gravitates’ to if you leave it to itself.

There is no attractor for the simple spring: This system will trace out a specific circle in phase space forever – the bigger the initial push on the spring, the larger the circle.

The most popular strange attractor is probably the Lorenz Attractor. It was initially associated with physical properties characteristic of temperature and the flow of air in the earth’s atmosphere, but it can be re-interpreted as a system modeling chaotic phenomena in lasers.

It might be apocryphal but I have been told that it is not the infamous flap of the butterfly’s wing that gave the related effect its name, but rather the shape of the three-dimensional attractor:

Lorenz system r28 s10 b2-6666 | By Computed in Fractint by Wikimol [Public domain], via Wikimedia Commons
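The sensitivity that made the butterfly famous is easy to demonstrate: integrate the Lorenz equations (the parameters from the figure caption; simple Euler stepping is my shortcut) for two starting points that differ by a billionth:

```python
def lorenz_step(state, dt=1e-3, sigma=10.0, r=28.0, beta=8 / 3):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 20.0)
b = (1.0 + 1e-9, 1.0, 20.0)   # initial difference: one billionth

for _ in range(30_000):       # 30 units of time
    a, b = lorenz_step(a), lorenz_step(b)

separation = sum(abs(p - q) for p, q in zip(a, b))
print(separation)   # macroscopic by now – a tiny cause, a huge effect
```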

We had Jurassic Park – here comes the jelly!

A single point-particle on a spring can move only along a line – it has a single degree of freedom. You need just a two-dimensional plane to plot its velocity over position.

Allowing for motion in three-dimensional space means we need to add additional dimensions: The motion is fully characterized by the (x,y,z) positions in 3D space plus the 3 components of velocity. Actually, this three-dimensional vector is called velocity – its size is called speed.

Thus we already need 6 dimensions in phase space to describe the motion of an idealized point-shaped particle. Now throw in an additional point-particle: We need 12 numbers to track both particles – hence 12 dimensions in phase space.

Why can’t the two particles simply use the same space?(*) Both particles still live in the same 3D space, and they could also inhabit the same 6D phase space. The 12D representation has an advantage though: The whole system is represented by a single dot, which makes our lives easier if we contemplate different systems at once.

Now consider a system consisting of zillions of individual particles. Consider 1 cubic meter of air containing about 10²⁵ molecules. Viewing these particles in a Newtonian, classical way means tracking their individual positions and velocities. In a pre-quantum-mechanical deterministic assessment of the world you know the past and the future by calculating these particles’ trajectories from their positions and velocities at a certain point of time.

Of course this is not doable and leads to practical non-determinism due to calculation errors piling up and amplifying. This is a 10²⁵-body problem, much much much more difficult than the three-body problem.

Fortunately we don’t really need all those numbers in detail – useful properties of a gas such as the temperature constitute gross statistical averages of the individual particles’ properties. Thus we want to get a feeling how the phase portrait develops ‘on average’, not looking too meticulously at every dot.

The full-blown phase space of the system of all molecules in a cubic meter of air has about 10²⁶ dimensions – 6 for each of the 10²⁵ particles (physicists don’t care about a factor of 6 versus a factor of 10). Each state of the system is sort of a snapshot of what the system really does at a point of time. It is a vector in 10²⁶-dimensional space – a looooong ordered collection of numbers, but nonetheless conceptually not different from the familiar 3D ‘arrow-vector’.

Since we are interested in averages and probabilities we don’t watch a single point in phase space. We don’t follow a particular system.

We rather imagine an enormous number of different systems under different conditions.

Considering the gas in the cubic vessel this means: We imagine molecule 1 being at the center and very fast, whereas molecule 10 is slow and in the upper right corner, and molecule 666 is in the lower left corner and has medium speed. Now extend this description to 10²⁵ particles.

But we know something about all of these configurations: There is a maximum x, y and z the particles can have – the phase portrait is limited by these maximum dimensions, just as the circle representing the spring was. The particles have all kinds of speeds in all kinds of directions, but there is a most probable speed related to temperature.

The collection of the states of all possible systems occupies a patch in 10²⁶-dimensional phase space.

This patch gradually peters out at the edges in the velocities’ directions.

Now let’s allow the vessel to grow: The patch will become bigger in the spatial dimensions as particles can have any position in the larger cube. Since the temperature will decrease due to the expansion, the mean velocity will decrease – assuming the cube is insulated.

The time evolution of the system (of these systems, each representing a possible configuration) is represented by this hyper-dimensional patch transforming and morphing. Since we consider so many different states – otherwise probabilities don’t make sense – we don’t see the granular nature due to individual points: it’s like a piece of jelly moving and transforming:

Precisely defined initial configurations of systems have a tendency to get mangled and smeared out. Note again that each point in the jelly is not equivalent to a molecule of gas – it is a point in an abstract configuration space with a huge number of dimensions. We can only make it accessible via projections into our 3D world or onto a 2D plane.

The analogy to jelly or honey or any fluid is more apt than it may seem

The temporal evolution in this hyperspace is indeed governed by equations that are amazingly similar to those governing an incompressible liquid – such as water. There is continuity and locality: Hyper-jelly can’t get lost or be created. Any increase of hyper-jelly in a tiny volume of phase space can only be attributed to jelly flowing into this volume from adjacent little volumes.

In summary: Classical mechanical systems comprising many degrees of freedom – that is: many components that have freedom to move in a different way than other parts of the system – can be best viewed in the multi-dimensional space whose dimensions are (something like) positions and (something like) the related momenta.

Can it get more geeky than that in quantum theory?

Finally: Quantization

I said in the previous post that quantization of fields or waves is like turning down intensity in order to bring out the particle-like rippled nature of that wave. In the same way you could say that you add blurry waviness to idealized point-shaped particles.

Another way is to consider the loss of information via Heisenberg’s Uncertainty Principle: You cannot know both the position and the momentum of a particle or a classical wave exactly at the same time. By the way, this is why we picked momenta and not velocities to generate phase space.

You calculate the positions and momenta of the small little volumes that constitute those flowing and crawling patches of jelly at a point of time from the positions and momenta at the point of time before. That’s the essence of Newtonian mechanics (and conservation of matter) applied to fluids.

Doing numerical calculation in hydrodynamics you think of jelly as divided into small little flexible cubes – you divide it mentally using a grid, and you apply a mathematical operation that creates the new state of this digitized jelly from the old one.

Since we are still discussing a classical world we do know positions and momenta with certainty. This translates to stating (in math) that it does not matter if you do calculations involving positions first or momenta first.

There are different ways of carrying out the steps in these calculations because you could do them one way or the other – they are commutative.

Calculating something in this respect is similar to asking nature for a property or measuring that quantity.

Thus when we apply a quantum viewpoint and quantize a classical system, calculating momentum first and position second – or doing it the other way around – will yield different results.

The quantum way of handling the system of those 10²⁵ particles looks the same as the classical equations at first glance. The difference is in the rules for carrying out calculations involving positions and momenta – so-called conjugate variables.
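To make ‘the order of the calculation matters’ concrete: in quantum mechanics positions and momenta become operators – matrices, for instance. A sketch using the standard harmonic-oscillator ladder-operator matrices (a basis truncated to 8 states and units with ħ = 1 are my choices):

```python
import numpy as np

n = 8                                         # truncated basis size
a = np.diag(np.sqrt(np.arange(1.0, n)), k=1)  # annihilation operator matrix
x = (a + a.T) / np.sqrt(2)                    # position operator
p = (a - a.T) / (1j * np.sqrt(2))             # momentum operator

commutator = x @ p - p @ x                    # x*p minus p*x: not zero!
print(np.diag(commutator)[:4])                # i (= i*hbar) on the diagonal
```

The last diagonal element deviates only because the basis was cut off; in the infinite-dimensional space the commutator is exactly i·ħ times the identity.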

Thus quantization means you take the classical equations of motion and give the mathematical symbols a new meaning and impose new, restricting rules.

I probably could just have stated that without going off on those tangents.

However, any system of interest in the real world is not composed of isolated particles. We live in a world of those enormous phase spaces.

In addition, working with large abstract spaces like this is at the heart of quantum field theory: We start with something spread out in space – a field with infinite degrees of freedom. Considering different state vectors in these quantum systems is considering all possible configurations of this field at every point in space!

(*) This was a question asked on G+. I edited the post to incorporate the answer.


Expert information:

I have taken a detour through statistical mechanics: introducing the Liouville equation as an equation of continuity in a multi-dimensional phase space. The operations mentioned – related to positions or momenta – are the replacement of time derivatives via Hamilton’s equations. I resisted the temptation to mention the hyper-planes of constant energy. Replacing the Poisson bracket in classical mechanics with the commutator in quantum mechanics turns the Liouville equation into its quantum counterpart, also called the Von Neumann equation.
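For reference, the pair of equations alluded to here, classical and quantum side by side:

```latex
% Liouville equation: the phase-space density rho evolves via the
% Poisson bracket with the Hamiltonian H
\frac{\partial \rho}{\partial t} = \{H, \rho\}
% Von Neumann equation: the Poisson bracket is replaced by the
% commutator (divided by i hbar); rho is now the density operator
i\hbar\,\frac{\partial \hat{\rho}}{\partial t} = [\hat{H}, \hat{\rho}]
```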

I know that a discussion about the true nature of temperature is opening a can of worms. We should describe temperature as the width of a distribution rather than as an average, as a beam of molecules all travelling in the same direction at the same speed would have a temperature of zero Kelvin – not an option due to zero point energy.

The Lorenz equations have been applied to the electrical fields in lasers by Haken – here is a related paper. I did not go into the difference between the phase portrait of a system, showing its time evolution, and the attractor, which is the system’s final state. I also didn’t stress that what is shown is a three-dimensional image of the Lorenz attractor, and in this case the ‘velocities’ are not depicted. You could say it is the 3D projection of the 6D phase portrait. I basically wanted to demonstrate – using catchy images, admittedly – that representations in phase space allow for a quick assessment of a system.

I also tried to introduce the notion of a state vector in classical terms, not jumping to bras and kets in the quantum world as if a state vector does not have a classical counterpart.

I have picked an example of a system undergoing a change in temperature (non-stationary – not the example you would start with in statistical thermodynamics) and swept all considerations on ergodicity and related meaningful time evolutions of systems in phase space under the rug.

May the Force Field Be with You: Primer on Quantum Mechanics and Why We Need Quantum Field Theory

As Feynman explains so eloquently – and yet in a refreshingly down-to-earth way – understanding and learning physics works like this: There are no true axioms, you can start from anywhere. Your physics knowledge is like a messy landscape, built from different interconnected islands of insights. You will not memorize them all, but you need to recapture how to get from one island to another – how to connect the dots.

The beauty of theoretical physics is in jumping from dot to dot in different ways – and in pondering on the seemingly different ‘philosophical’ worldviews that different routes may provide.

This is the second post in my series about Quantum Field Theory, and I try to give a brief overview of the concept of a field in general, and of why we need QFT to complement or replace Quantum Mechanics. I cannot avoid reiterating some of that often quoted wave-particle paraphernalia in order to set the stage.

From sharp linguistic analysis we might conclude that it is the notion of Field that distinguishes Quantum Field Theory from mere Quantum Theory.

I start with an example everybody uses: a so-called temperature field, which is simply: a temperature – a value, a number – attached to every point in space. An animation of monthly mean surface air temperature could be called the temporal evolution of the temperature field:

Monthly Mean Temperature

Solar energy is absorbed at the earth’s surface. In summer the net energy flow is directed from the air to the ground, in winter the energy stored in the soil is flowing to the surface again. Temperature waves are slowly propagating perpendicular to the surface of the earth.

The gradual evolution of temperature is dictated by the fact that heat flows from the hotter to the colder regions. When you deposit a lump of heat underground – Feynman once used an atomic bomb to illustrate this point – you start with a temperature field consisting of a sharp maximum, a peak, located in a region the size of the bomb. Wait for some minutes and this peak will peter out. Heat will flow outward, the temperature will rise in the outer regions and decrease in the center:

Diffluence of a bucket of heat, governed by the Heat Transfer Equation

Modelling the temperature field (as I did – in relation to a specific source of heat placed underground) requires solving the Heat Transfer Equation, which is the mathy equivalent of the previous paragraph. The temperature is calculated step by step numerically: The temperature at a certain point in space determines the flow of heat nearby – the heat transferred changes the temperature – the temperature in the next minute determines the flow – and on and on.
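That stepwise recipe can be sketched in a few lines of code – a hypothetical toy model, not the actual simulation referred to above: a one-dimensional grid, a sharp peak of temperature in the middle, and heat flowing between neighbouring cells only:

```python
# Toy model (illustrative only): explicit finite-difference solution of
# the 1D heat equation. 'a' bundles thermal diffusivity, time step and
# grid spacing (a = alpha*dt/dx^2); the scheme is stable for a <= 0.5.

def diffuse(T, a=0.2, steps=200):
    """Advance the temperature field T step by step; boundaries held fixed."""
    T = list(T)
    for _ in range(steps):
        # The change at each point depends only on its nearest
        # neighbours - heat flows from hotter to colder regions.
        T = [T[0]] + [
            T[i] + a * (T[i - 1] - 2 * T[i] + T[i + 1])
            for i in range(1, len(T) - 1)
        ] + [T[-1]]
    return T

T0 = [0.0] * 21
T0[10] = 100.0          # a 'lump of heat' deposited in the centre
T = diffuse(T0)         # the peak peters out, the surroundings warm up
```

After a few hundred steps the sharp peak has flattened into a broad bump, just as described above.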

This mundane example should tell us something about a fundamental principle – an idea that explains why fields of a more abstract variety are so important in physics: Locality.

It would not violate the principle of the conservation of energy if a bucket of heat suddenly disappeared in one place and appeared in another, separated from the first one by a light year. Intuitively we know that this is not going to happen: Any disturbance or ripple is transported by impacting something nearby.

All sorts of field equations reflect locality, and ‘unfortunately’ this is the reason why all fundamental equations in physics require calculus. Those equations describe in a formal way how small changes in time and small variations in space affect each other. Consider the way a sudden displacement traverses a rope:

Propagation of a wave

Sound waves travelling through air are governed by local field equations. So are light rays or X-rays – electromagnetic waves – travelling through empty space. The term wave is really a specific instance of the more generic field.

An electromagnetic wave can be generated by shaking an electrical charge. The disturbance is a local variation in the electrical field which gives rise to a changing magnetic field, which in turn gives rise to a disturbance in the electrical field …


Electromagnetic fields are more interesting than temperature fields: Temperature, after all, is not fundamental – it can be traced back to wiggling of atoms. Sound waves are equivalent to periodic changes of pressure and velocity in a gas.

Quantum Field Theory, however, should finally cover fundamental phenomena. QFT tries to explain tangible matter only in terms of ethereal fields, no less. It does not make sense to ask what these fields actually are.

I have picked light waves deliberately because those are fundamental. For historical reasons we are rather familiar with the wavy nature of light – such as the colorful patterns we see on our CDs, whose grooves act as a diffraction grating:

Michael Faraday had introduced the concept of fields in electromagnetism, mathematically fleshed out by James C. Maxwell. Depending on the experiment (that is: on the way you prod nature to give an answer to a specifically framed question) light may behave more like a particle, a little bullet, the photon – as stipulated by Einstein.

In Compton Scattering a photon partially transfers energy when colliding with an electron: The change in the photon’s frequency corresponds to its loss in energy. Based on the angle between the trajectories of the electron and the photon, energy and momentum transfer can be calculated – using the same reasoning that can be applied to colliding billiard balls.
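Working out that billiard-ball energy and momentum balance yields the well-known Compton shift formula – the increase in the photon’s wavelength depends only on the scattering angle:

```latex
% Compton shift: theta is the photon's scattering angle;
% h/(m_e c) ~ 2.43 pm is the electron's Compton wavelength
\lambda' - \lambda = \frac{h}{m_e c}\left(1 - \cos\theta\right)
```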

Compton Effect

We tend to consider electrons fundamental particles. But they give proof of their wave-like properties when beams of accelerated electrons are utilized in analyzing the microstructure of materials. In transmission electron microscopy diffraction patterns are generated that allow for identification of the underlying crystal lattice:

A complete quantum description of an electron or a photon does contain both the wave and particle aspects. Diffraction patterns like this can be interpreted as highlighting the regions where the probabilities to encounter a particle are maximum.

Schrödinger has given the world that famous equation named after him that allows for calculating those probabilities. It is his equation that lets us imagine point-shaped particles as blurred wave packets:

Schrödinger’s equation explains all of chemistry: It allows for calculating the shape of electrons’ orbitals. It explains the size of the hydrogen atom and it explains why electrons can inhabit stable ‘orbits’ at all – in contrast to the older picture of the orbiting point charge that would lose energy all the time and finally fall into the nucleus.
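For the hydrogen atom this is the time-independent Schrödinger equation for the electron in the Coulomb potential; its ground state solution sets the size and the energy scale of the atom:

```latex
% Time-independent Schroedinger equation for the hydrogen electron
-\frac{\hbar^2}{2 m_e}\,\nabla^2\psi \;-\; \frac{e^2}{4\pi\varepsilon_0 r}\,\psi \;=\; E\,\psi
% Ground state: Bohr radius and binding energy
a_0 = \frac{4\pi\varepsilon_0\hbar^2}{m_e e^2} \approx 0.53\;\text{\AA},
\qquad E_1 \approx -13.6\;\text{eV}
```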

This so-called quantum mechanical picture does not explain essential phenomena though:

  • Pauli’s exclusion principle explains why matter is extended in space – particles need to be put into different orbitals, different little volumes in space. But it is a rule you put in by hand, phenomenologically!
  • Schrödinger’s equation describes single particles as blurry probability waves, but it still makes sense to call these the equivalents of well-defined single particles. This does not make sense anymore if we take into account special relativity.

Heisenberg’s uncertainty principle – a consequence of Schrödinger’s equation – dictates that we cannot know both position and momentum, or both energy and time, of a particle exactly. For a very short period of time conservation of energy can be violated, which means the energy associated with ‘a particle’ is allowed to fluctuate.

As per the most famous formula in the world, energy is equivalent to mass. When the energy of ‘a particle’ fluctuates wildly, virtual particles – whose energy is roughly equal to the allowed fluctuations – can pop into existence intermittently.
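In symbols: the energy-time uncertainty relation bounds how long an energy fluctuation – and thus a virtual particle of mass m – may persist:

```latex
% Energy-time uncertainty relation
\Delta E \,\Delta t \gtrsim \frac{\hbar}{2}
% A fluctuation of size Delta E = m c^2 can last roughly
\Delta t \sim \frac{\hbar}{2\,m c^2}
```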

However, in order to make quantum mechanics compatible with special relativity it was not sufficient to tweak Schrödinger’s equation just a bit.

Relativistically correct Quantum Field Theory is rather based on the concept of an underlying field pervading space. Particles are just ripples in this ur-stuff – I owe that metaphor to Frank Wilczek. A different field is attributed to each variety of fundamental particles.

You need to take a quantum leap… It takes some mathematical rules to move from the classical description of the world to the quantum one – a procedure called quantization. Using a very crude analogy, quantization is like making a beam of light dimmer and dimmer until it reveals its granular nature – turning the wavy ray of light into a cascade of photonic bullets.

In QFT you start from a classical field that should represent particles and then apply the machinery of quantization to that field (which is called second quantization, although you do not quantize twice). Amazingly, the electron’s spin and Pauli’s principle are a natural consequence if you do it right. Paul Dirac‘s achievement in crafting the first relativistically correct equation for the electron cannot be overstated.

I found these fields the most difficult concepts to digest, but probably for technical reasons:

Historically – and this includes some of those old text books I am so fond of – candidate versions of alleged quantum mechanical wave equations have been tested to no avail, such as the Klein-Gordon equation. However, this equation turned out to make sense later – when re-interpreted as a classical field equation that still needs to be quantized.

It is hard to make sense of those fields intuitively. However, there is one field we are already familiar with: Photons are ripples arising from the electromagnetic field. Maxwell’s equations describing these fields were already compatible with special relativity – they predate the theory of relativity, and the speed of light shows up as a natural constant. No tweaks required!

I will work hard to turn the math of quantization into comprehensible explanations, risking epic failure. For now I hand over to MinutePhysics for an illustration of the correspondence of particles and fields:

Disclaimer – Bonus Track:

In this series I do not attempt to cover the latest research on unified field theories, quantum gravity and the like. But just as I started crafting this article – writing about locality – that article on an allegedly simple way to replace field theoretical calculations went viral. The principle of locality may not hold anymore when things get really interesting – in the regime of tiny local dimensions and high energies.

From ElKement: Space Balls, Baywatch, and the Geekiness of Classical Mechanics

This is self-serving, but I can’t resist reblogging Joseph Nebus’ endorsement of my posts on Quantum Field Theory. Joseph is running a great blog on mathematics, and he manages to explain math in an accessible and entertaining way. I hope I will be able to do the same to theoretical physics!


Over on Elkement’s blog, Theory and Practice of Trying To Combine Just Anything, is the start of a new series about quantum field theory. Elke Stangl is trying a pretty impressive trick here in trying to describe a pretty advanced field without resorting to the piles of equations that maybe are needed to be precise, but which also fill the page with piles of equations.

The first entry is about classical mechanics, and contrasting the familiar way that it gets introduced to people — the whole force-equals-mass-times-acceleration bit — and an alternate description, based on what’s called the Principle of Least Action. This alternate description is as good as the familiar old Newton’s Laws in describing what’s going on, but it also makes a host of powerful new mathematical tools available. So when you get into serious physics work you tend to shift over to that model; and, if you…
