Using Social Media in Bursts. Is. Just. Normal.

I saw lots of turkey pictures last week, and this reminded me of an anniversary: When I saw those the last time, I had just started using Twitter, Google+ and Facebook.

So a review is overdue, and I also owe an update to my Time-Out from social networks this summer. (If you don’t have time to read further – the headline says it all.)

I am not at all an internet denier. Actually, I crafted my first website in 1997 and have pseudo-blogged since 2002. I made these pages – not blogs in the technical sense, but content-wise – the subject of last year’s Website Resurrection Project.

There were two reasons for my denial of modern interactive platforms, both of them weird:

  1. Territory Anxiety: It made me uncomfortable to have my own site entangled with somebody else’s via comments, reshares and the like. I prefer platforms that allow me to make them mine. Facebook and Google+ require you to ‘fill in a form’ and put you at the mercy of their designers.
  2. Always-On and Traceability: For many years my job was concerned with firefighting – an inherent feature of working with digital certificates, whose end of validity is embedded cryptographically. I considered it odd if panicking clients saw me sharing geeky memes while they were waiting for my more substantial responses. Notifications by corporate online communication tools conditioned me to loathe any piece of technology that tried to start a conversation via flashing pop-ups.

These two reasons haven’t been invalidated completely – I think I just care less. Social media is an ongoing experiment in communications.

I am using social media in the following way: (This is not at all advice for using social media properly, but an observation.)

  • If I use a network, I want to use it actively. I don’t use any network as a mere announcement channel – such as tweeting new blog postings and nothing else – and I don’t use automation. I don’t replicate all content on different networks, or at least there should be enough non-overlap. Each network has its own culture, target group and style of conversation.

A detailed analysis of the unique culture of each network may be the subject of a future post. But I cannot resist sharing my recently started collection of articles on the characteristics of the most hated, most analyzed network:

How to overcome facebook status anxiety
7 Ways to Be Insufferable on Facebook
Does Facebook CAUSE narcissism?

I became a Google+ fan, actually.

  • The only ‘strategic tool’ I use is a simple text file I paste interesting URLs to – in case I stumble upon too many interesting things, which would result in quite a spammy tsunami of posts or tweets. This is in line with my life-long denial of sophisticated time-management tools and methodologies such as Getting Things Done (which is less down-to-earth than it sounds). I don’t believe in the idea of getting mundane things out of your head to free up capacity for the real thing. I want to keep appointments, tasks, the really important items on the to-do list, and things to be posted in my mind.
  • Using social networks must not feel like work – like having to submit your entries to the time-tracking tool. I have often said that my so-called business blog, Facebook site and Google+ site can hardly be recognized as such. (Remember, I said this is not perfect marketing advice.)
  • I don’t care about the alleged ideal time for posting, or about posting regularly. It is all about game theory: What if everybody adhered to that grand advice that you should, say, tweet funny stuff in the afternoon or business stuff on Tuesday morning? My social media engagement is burst-like, and I think this is natural. This is maybe the most important result of my time-out experiment:
  • Irregularity is key. It is human and normal. I don’t plan to take every summer off from social media. I will rather allow for breaks of arbitrary length whenever I feel like it.

And I have found scientific confirmation in this paper: The origin of bursts and heavy tails in human dynamics by Albert-László Barabási, a renowned researcher on network dynamics.

The abstract reads (highlights mine):

The dynamics of many social, technological and economic phenomena are driven by individual human actions, turning the quantitative understanding of human behaviour into a central question of modern science. Current models of human dynamics, used from risk assessment to communications, assume that human actions are randomly distributed in time and thus well approximated by Poisson processes. In contrast, there is increasing evidence that the timing of many human activities, ranging from communication to entertainment and work patterns, follow non-Poisson statistics, characterized by bursts of rapidly occurring events separated by long periods of inactivity. Here I show that the bursty nature of human behaviour is a consequence of a decision-based queuing process: when individuals execute tasks based on some perceived priority, the timing of the tasks will be heavy tailed, with most tasks being rapidly executed, whereas a few experience very long waiting times. In contrast, random or priority blind execution is well approximated by uniform inter-event statistics.

Poisson statistics is used to describe, for example, radioactive decay. I have now learned that it can also be applied to traffic flow or queues of calls in a call center – basically queues handled by unbiased recipients. The probability of measuring a certain time between two consecutive decays or phone calls taken decreases exponentially with the time elapsed. Thus very long waiting times are extremely unlikely.
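
For illustration – a quick simulation of mine, not from the paper (the rate and the number of steps are arbitrary choices): events that occur independently with a constant probability per time step produce exponentially distributed waiting times, so long gaps are exponentially suppressed.

```python
import math
import random

random.seed(42)

# Events occur independently with constant probability per time step --
# think of unbiased phone calls arriving at a call center.
rate = 0.1            # average number of events per time step
steps = 200_000
events = [t for t in range(steps) if random.random() < rate]

# Waiting times between consecutive events
waits = [b - a for a, b in zip(events, events[1:])]
mean_wait = sum(waits) / len(waits)
print(f"mean waiting time: {mean_wait:.1f} (expected ~ 1/rate = {1 / rate:.0f})")

# For a Poisson process, P(wait > t) = exp(-rate * t):
# long waits are exponentially unlikely.
for t in (10, 50, 100):
    empirical = sum(w > t for w in waits) / len(waits)
    print(f"P(wait > {t:3d}): measured {empirical:.5f}, exp(-rate*t) = {math.exp(-rate * t):.5f}")
```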

The exponential dependence is another way to view the probably familiar exponential law of decay – by finding the probability of no decay in a certain time via the percentage of not yet decayed atoms. Richard Feynman gives the derivation here for collisions of molecules in a gas.


Radioactive decay – the number of non-decayed nuclei over time for different decay rates (half-lives). This could also be read as the probability for a specific nucleus not to decay for a certain time (Wikimedia)

Thus plotting probability over measured inter-e-mail time should give you a straight line in a log-linear plot.

However, the distribution of the time interval between e-mails has empirically been determined to follow a power law, which can quickly be identified by a straight line in a log-log plot: In this case the probability for a certain time interval goes approximately with 1 over the time elapsed (power of minus 1).

Power-law distribution, showing the yellow heavy or fat tail. This function goes to zero much slower than the exponential function.

A power function allows for much higher probabilities for very long waiting times (‘Fat tails’).
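
To get a feeling for what ‘much higher’ means, just plug in some numbers (a quick sketch of mine; the rate in the exponential is set to 1, so both functions are compared as pure functional forms):

```python
import math

# Exponential tail exp(-t) versus a power-law tail 1/t: at large waiting
# times the power law dominates by an enormous factor.
for t in (10, 100, 1000):
    print(f"t = {t:4d}:  exp(-t) = {math.exp(-t):9.3e}   1/t = {1 / t:9.3e}")
```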

Such patterns were also found…

…in the timing of job submissions on a supercomputer, directory listing and file transfers (FTP request) initiated by individual users, or the timing of printing jobs submitted by users were also reported to display non-Poisson features. Similar patterns emerge in economic transactions, describing the time interval distributions between individual trades in currency futures. Finally, heavy-tailed distributions characterize entertainment-related events, such as the time intervals between consecutive online games played by the same user.

We so-called knowledge workers process our task lists, e-mails, or other kinds of queued-up input neither in First-In-First-Out style (FIFO) nor randomly, but assign priorities in this way:

…high-priority tasks will be executed soon after their addition to the list, whereas low-priority items will have to wait until all higher-priority tasks are cleared, forcing them to stay on the list for considerable time intervals. Below, I show that this selection mechanism, practiced by humans on a daily basis, is the probable source of the fat tails observed in human-initiated processes.

Barabási’s model is perfectly in line with what I had observed in deadline-driven environments all the time. When your manager pings you – you will jump through any hoop presented to you, provided it has been tagged as super-urgent:

This simple model ignores the possibility that the agent occasionally selects a low-priority item for execution before all higher-priority items are done – common, for example, for tasks with deadlines.

It gets even better, as this model is even more suited to dealing with competing tasks – such as your manager pinging you while you also ought to respond to that urgent Facebook post:

Although I have illustrated the queuing process for e-mails, in general the model is better suited to capture the competition between different kinds of activities an individual is engaged in; that is, the switching between various work, entertainment and communication events. Indeed, most data sets displaying heavy-tailed inter-event times in a specific activity reflect the outcome of the competition between tasks of different nature.

Poisson processes and the resulting exponential distribution are due to the fact that events occur truly at random: The number of particles emitted due to radioactive decay or the number of requests served by a web server is proportional to the time interval multiplied by a constant. This constant is characteristic of the system: an average rate of decay or the average number of customers calling. Call center agents just process calls in FIFO mode.

Power-law behavior, on the other hand, is the result of assigning different priorities to tasks using a distribution function. Agents are biased.
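
Barabási’s queuing model is simple enough to simulate in a few lines. The sketch below is my own rough implementation (list length, step count and the probabilities are arbitrary choices): a short list of tasks with random priorities; with probability p the highest-priority task is executed, otherwise a random one. For p near 1 a few unlucky tasks wait extremely long – the heavy tail – while priority-blind execution (p = 0) produces only short waits.

```python
import random

random.seed(1)

def simulate(p, list_len=2, steps=200_000):
    """Decision-based queuing: execute the highest-priority task with
    probability p, a randomly picked one otherwise; the executed task is
    replaced by a new one with random priority."""
    tasks = [[random.random(), 1] for _ in range(list_len)]  # [priority, wait]
    waits = []
    for _ in range(steps):
        if random.random() < p:
            i = max(range(list_len), key=lambda k: tasks[k][0])
        else:
            i = random.randrange(list_len)
        waits.append(tasks[i][1])
        for task in tasks:
            task[1] += 1                  # everybody else keeps waiting
        tasks[i] = [random.random(), 1]   # a fresh task enters the list
    return waits

for p in (0.0, 0.99999):
    waits = simulate(p)
    print(f"p = {p}: mean wait {sum(waits) / len(waits):6.1f}, longest wait {max(waits)}")
```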

Barabási is very cautious in stating the universal validity of the power law. He also discusses refinements of the model, such as taking into account the size of an e-mail message and the required processing time, and he emphasizes the dependence of the calculated probability on the details of the priorities of tasks. Yet the so-called fat tails in the probabilities of task execution seem to be a universal feature, irrespective of the details of the distribution function.

He has also shown that these bursty patterns are not tied to modern technology and e-mail clients: Darwin and Einstein prioritized their replies to letters in the same way that people rate their e-mails today.

Considering a normal (typically crazy) working day you may have wondered why you could model that without taking into account other things that need to be done in addition to responding to e-mail. And indeed Barabási stresses the role of different competing tasks:

Finally, heavy tails have been observed in the foraging patterns of birds as well, raising the intriguing possibility that animals also use some evolutionarily encoded priority-based queuing mechanisms to decide between competing tasks, such as caring for offspring, gathering food, or fighting off predators.

Thus we might even be evolutionarily hard-wired to process challenging tasks in this way.

I am asking myself: Is this the reason why automated posts on social media feel staged to me? Why I find very regular blogging / posting intervals artificial? Why I don’t like the advice (by social media professionals) that you need to prepare posts in advance for the time you will be on vacation? What happens next – programming the automation to act in a bursty fashion?

I had planned to connect my Time-Out experience with Barabási’s Bursts for a long time. But this burst of writing it down may finally have been triggered by this conversation on an earlier post of mine.

I enjoyed Barabási’s popular-science book Linked: How Everything Is Connected to Everything Else and What It Means for Business, Science, and Everyday Life on the dynamics of scale-free networks.

There is also a popular version related to his research on bursts: Bursts: The Hidden Patterns Behind Everything We Do, from Your E-mail to Bloody Crusades. Bursts is a fascinating book as well, and Barabási illustrates the underlying theories using very diverse examples. But you had better be interested in history in its own right and not read the book for the science/modelling part only. Reading Bursts for the first time, I came to similar conclusions as this reviewer. It is probably one of those books you should read more than once, re-calibrating your expectations.

Further reading: Website of Barabási’s research lab.


So-called scale-free networks. The distribution of the number of connections per node also follows a power-law. Scale-free networks are characterized by ongoing growth and ‘winner-take-all’ behavior (Wikimedia, user Keiichiro Ono)

Mastering Geometry is a Lost Art

I am trying to learn Quantum Field Theory the hard way: Alone and from text books. But there is something harder than the abstract math of advanced quantum physics:

You can aim at comprehending ancient texts on physics.

If you are an accomplished physicist, chemist or engineer – try to understand Sadi Carnot’s reasoning that was later called the effective discovery of the Second Law of Thermodynamics.

At Carnotcycle’s excellent blog on classical thermodynamics you can delve into thinking about well-known modern concepts in a new – or better: an old – way. I found this article on the dawn of entropy a difficult read, even though we can recognize some familiar symbols and concepts such as circular processes, and despite – or because of – the fact that I was, at the time of reading it, a heavy consumer of engineering thermodynamics textbooks. You have to translate now-unused notions such as heat received and the expansive power into their modern counterparts. It is like reading a text in a foreign language by deciphering every single word instead of having developed a feeling for the language.

Stephen Hawking once published an anthology of the original works of the scientific giants of the past millennium: Copernicus, Galileo, Kepler, Newton and Einstein: On the Shoulders of Giants. So just in case you googled for Hawkins – don’t expect your typical Hawking pop-sci bestseller with lots of artistic illustrations. This book is humbling. I found the so-called geometrical proofs most difficult and unfamiliar to follow. Actually, it was my difficulties in (not) taming that Pesky Triangle that motivated me to reflect on geometrical proofs.

I am used to proofs stacked upon proofs until you get to the real thing. In analysis lectures you get used to starting by proving that 1+1=2 (literally) before you learn about derivatives and slopes. However, Newton and his predecessor giants talk geometry all the way! I have learned a different language. Einstein is most familiar in the way he tackles problems, though his physics is in principle the most non-intuitive.

This review is titled Now We Know why Geometry is Called the Queen of the Sciences and the reviewer perfectly nails it:

It is simply astounding how much mileage Copernicus, Galileo, Kepler, Newton, and Einstein got out of ordinary Euclidean geometry. In fact, it could be argued that Newton (along with Leibnitz) were forced to invent the calculus, otherwise they too presumably would have remained content to stick to Euclidean geometry.

Science writer Margaret Wertheim gives an account of a 20th century giant trying to recapture Isaac Newton’s original discovery of the law of gravitation in her book Physics on the Fringe. (The main topic of the book is outsider physicists’ theories; I have blogged about the book at length here.)

This giant was Richard Feynman.

Today the gravitational force, the gravitational potential and the related acceleration of objects in gravitational fields are presented by means of calculus: The potential is often illustrated by a rubber membrane model – the steeper the membrane, the higher the force. (However, this is not a geometrical proof – it is an illustration of the underlying calculus.)


Model of the gravitational potential. An object trapped in these wells moves along similar trajectories as bodies in a gravitational field. Depending on initial conditions (initial position and velocity) you end up with elliptical, parabolic or hyperbolic orbits. (Wikimedia, Invent2HelpAll)

(Today) you start from the equation of motion for an object under the action of a force that weakens with the inverse square of the distance between two massive objects, and out pops Kepler’s law about elliptical orbits. It takes some pages of derivation, and you need to recognize conic sections in formulas – but nothing too difficult for an undergraduate student of science.
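
You can even let the computer confirm the result without any conic-section gymnastics. The sketch below is my own toy calculation (GM set to 1, arbitrary units, eccentricity chosen at will): it integrates the equation of motion for an inverse-square force with the velocity-Verlet scheme and checks the defining property of an ellipse – the sum of the distances to the two foci stays constant along the orbit.

```python
import math

GM = 1.0
dt = 1e-4

def accel(x, y):
    """Inverse-square acceleration toward the sun at the origin."""
    r3 = (x * x + y * y) ** 1.5
    return -GM * x / r3, -GM * y / r3

# Start at the perihelion of an orbit with semi-latus rectum p = 1 and
# eccentricity e = 0.5: r = p/(1+e), tangential speed v = h/r with h = sqrt(GM*p).
e = 0.5
x, y = 1.0 / (1.0 + e), 0.0
vx, vy = 0.0, math.sqrt(GM) * (1.0 + e)
ax, ay = accel(x, y)

points = []
for step in range(200_000):                   # roughly two orbital periods
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay  # velocity-Verlet integration
    x += dt * vx; y += dt * vy
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
    if step % 1000 == 0:
        points.append((x, y))

# Ellipse check: the sun at the origin is one focus; the second focus sits at
# (-2*a*e, 0) with semi-major axis a = p/(1 - e^2); the focal-distance sum is 2a.
a = 1.0 / (1.0 - e * e)
sums = [math.hypot(px, py) + math.hypot(px + 2 * a * e, py) for px, py in points]
print(f"2a = {2 * a:.4f}; focal-distance sum ranges from {min(sums):.4f} to {max(sums):.4f}")
```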

Newton actually had to invent calculus while tinkering with the law of gravitation. In order to convince his peers he needed to use the geometrical language and the mental framework common back then. He uses all kinds of intricate theorems about triangles and intersecting lines (;-)) in order to say what we say today using the concise shortcuts of derivatives and differentials.

Wertheim states:

Feynman wasn’t doing this to advance the state of physics. He was doing it to experience the pleasure of building a law of the universe from scratch.

Feynman said to his students:

“For your entertainment and interest I want you to ride in a buggy for its elegance instead of a fancy automobile.”

But he underestimated the daunting nature of this task:

In the preparatory notes Feynman made for his lecture, he wrote: “Simple things have simple demonstrations.” Then, tellingly, he crossed out the second “simple” and replaced it with “elementary.” For it turns out there is nothing simple about Newton’s proof. Although it uses only rudimentary mathematical tools, it is a masterpiece of intricacy. So arcane is Newton’s proof that Feynman could not understand it.

Given the headache that even Copernicus’ original proofs in On the Shoulders of Giants gave me, I can attest to:

… in the age of calculus, physicists no longer learn much Euclidean geometry, which, like stonemasonry, has become something of a dying art.

Richard Feynman finally made up his own version of a geometrical proof to fully master Newton’s ideas, and Feynman’s version covered a hundred typewritten pages, according to Wertheim.

Everybody who indulges gleefully in wooden technical prose and takes pride in plowing through mathematical ideas can relate to this:

For a man who would soon be granted the highest honor in science, it was a DIY triumph whose only value was the pride and joy that derive from being able to say, “I did it!”

Richard Feynman gave a lecture on the motion of the planets in 1964 that was later called his Lost Lecture. In this lecture he presented his version of the geometrical proof, which was simpler than Newton’s.

The proof presented in the lecture has been turned into a series of videos by Youtube user Gary Rubinstein. Feynman’s original lecture was 40 minutes long and confusing, according to Rubinstein – who turned it into 8 video chunks of 10 minutes each.

The rest of the post is concerned with what I believe social media experts call curating. I am just trying to give an overview of the episodes of this video lecture. So my summaries most likely do not make a lot of sense if you don’t watch the videos. But even if you don’t watch them you might get an impression of what a geometrical proof actually is.

In Part I (embedded also below) Kepler’s laws are briefly introduced. The characteristic properties of an ellipse are shown – in the way used by gardeners to create an ellipse with a cord and a pencil. An ellipse can also be created within a circle by starting from a random point, connecting it to the circumference and creating the perpendicular bisector:
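
This construction is easy to verify numerically – a sketch of my own (the circle’s radius R and the interior point P are arbitrary choices): the point on each radius CQ picked out by the perpendicular bisector of PQ satisfies |EC| + |EP| = |EC| + |EQ| = R, which is exactly the gardener’s cord condition for an ellipse with foci C and P.

```python
import math

# Circle with center C at the origin and radius R, plus a fixed interior point P.
R = 1.0
P = (0.4, 0.2)   # arbitrary interior point

def construct(theta):
    """For the circumference point Q at angle theta, intersect the
    perpendicular bisector of PQ with the radius CQ: solve |t*Q - P| = |t*Q - Q|."""
    qx, qy = R * math.cos(theta), R * math.sin(theta)
    t = (R * R - (P[0] ** 2 + P[1] ** 2)) / (2.0 * (R * R - (qx * P[0] + qy * P[1])))
    return t * qx, t * qy

# Every constructed point E lies on an ellipse with foci C and P:
# the 'cord length' |EC| + |EP| equals R for all of them.
for k in range(5):
    ex, ey = construct(2 * math.pi * k / 5)
    s = math.hypot(ex, ey) + math.hypot(ex - P[0], ey - P[1])
    print(f"point {k}: |EC| + |EP| = {s:.6f}")
```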

Part II starts with emphasizing that the bisector is actually a tangent to the ellipse (this will become an important ingredient in the proof later). Then Rubinstein switches to physics and shows how a planet effectively ‘falls into the sun’ according to Newton, that is, a deviation due to gravity is superimposed on its otherwise straight-lined motion.

Part III shows in detail why the triangles swept out by the radius vector need to stay the same. The way Newton defined the size of the force in terms of a parallelogram attached to the otherwise undisturbed path (no inverse square law mentioned yet!) gives rise to constant areas of the triangles – no matter what the size of the force is!
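
This remarkable fact – equal areas for any central force whatsoever – can be checked in a few lines (my own sketch; the starting values are arbitrary, and the impulse strengths are deliberately random):

```python
import random

random.seed(7)

# Newton's discrete construction: the planet moves in straight segments;
# after each equal time step it receives an impulse directed toward the sun
# (at the origin). The impulse magnitude is random on purpose -- the
# equal-area property must hold for ANY central force.
r = [2.0, 0.0]
v = [0.0, 0.6]
dt = 0.1

def cross(a, b):
    return a[0] * b[1] - a[1] * b[0]

areas = []
for _ in range(50):
    r_new = [r[0] + v[0] * dt, r[1] + v[1] * dt]
    areas.append(0.5 * abs(cross(r, r_new)))        # triangle (sun, r, r_new)
    dist = (r_new[0] ** 2 + r_new[1] ** 2) ** 0.5
    kick = random.uniform(0.0, 0.5)                 # arbitrary central impulse
    v = [v[0] - kick * r_new[0] / dist, v[1] - kick * r_new[1] / dist]
    r = r_new

print(f"smallest area {min(areas):.6f}, largest area {max(areas):.6f}")
```

The two printed areas agree: the impulse, being parallel to the radius vector, contributes nothing to the cross product that measures the swept-out area.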

In Part IV the inverse square law is introduced – the changing force is associated with one side of the parallelogram denoting the deviation from force-free motion. Feynman has now introduced the velocity as distance over time, which is equal to the size of the tangential line segments over the areas of the triangles. He created a separate ‘velocity polygon’ of segments denoting velocities. Both polygons – for distances and for velocities – look elliptical at first glance, though the velocity polygon seems more circular (we will learn later that it has to be a circle).

In Part V Rubinstein expounds the geometrical equivalent of the change in velocity being proportional to 1 over the radius squared times the time elapsed, with the time elapsed being equivalent to the size of the triangles (I silently translate back to dv = dt times acceleration). Now Feynman said that he was confused by Newton’s proof of the resulting polygon being an ellipse – and he proposed a different proof:
Newton started from what Rubinstein calls the sun ‘pulsing’ at the same intervals, that is: replacing the smooth path by a polygon, resulting in triangles of equal size swept out by the radius vector but a changing velocity. Feynman instead divided the spatial trajectory into parts to which triangles of varying area are attached. These triangles are made up of radius vectors all at the same angles to each other. On trying to relate these triangles to each other by scaling them, he needs to consider that the area of a triangle scales with the square of its height. This also holds for non-similar triangles having one angle in common.

Part VI: Since ‘Feynman’s triangles’ have one angle in common, their respective areas scale with the squares of the heights of their equivalent isosceles triangles – basically with the distance of the planet to the sun. The force is proportional to one over distance squared, and the time is proportional to distance squared (as per the scaling law for these triangles). Thus the change in velocity – being the product of both – is constant! This is what Rubinstein calls Feynman’s big insight. But not only are the changes in velocity constant, so are the angles between adjacent line segments denoting those changes. Thus the changes in velocities make up a regular polygon (which turns into a circle in the limiting case).
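
Both claims can be verified with the standard textbook formulas for the velocity along a Kepler orbit (my own sketch; GM and the semi-latus rectum are set to 1, the eccentricity is an arbitrary choice): stepping the planet through equal angles at the sun, every |Δv| comes out the same, and the velocity points all lie on one circle.

```python
import math

# Kepler orbit r = p/(1 + e*cos(theta)); standard radial and tangential
# velocity components v_r = (GM/h) e sin(theta), v_t = (GM/h)(1 + e cos(theta)),
# with angular momentum h = sqrt(GM*p). Units chosen so GM = p = 1.
GM, p, e = 1.0, 1.0, 0.5
h = math.sqrt(GM * p)

def velocity(theta):
    v_r = (GM / h) * e * math.sin(theta)
    v_t = (GM / h) * (1.0 + e * math.cos(theta))
    # rotate (v_r, v_t) from the local frame into fixed x-y coordinates
    return (v_r * math.cos(theta) - v_t * math.sin(theta),
            v_r * math.sin(theta) + v_t * math.cos(theta))

# Equal-angle steps at the sun:
n = 36
vs = [velocity(2 * math.pi * k / n) for k in range(n + 1)]

# Feynman's insight: all velocity changes have the same magnitude ...
dvs = [math.hypot(b[0] - a[0], b[1] - a[1]) for a, b in zip(vs, vs[1:])]
# ... and the velocity points lie on a circle (center (0, GM*e/h), radius GM/h).
radii = [math.hypot(vx, vy - GM * e / h) for vx, vy in vs]

print(f"|dv| min/max: {min(dvs):.6f}/{max(dvs):.6f}")
print(f"hodograph radius min/max: {min(radii):.6f}/{max(radii):.6f} (GM/h = {GM / h:.6f})")
```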

Part VII: The point used to build up the velocity polygon by attaching the velocity line segments to it is not the center of the polygon. If you draw connections from the center to the endpoints, the angle corresponds to the angle the planet has travelled in space. The animation of the continuous motion of the planet in space – travelling along its elliptical orbit – is put side-by-side with the corresponding velocity diagram. Then Feynman relates the two diagrams, actually merges them, in order to track down the position of the planet using the clues given by the velocity diagram.

In Part VIII (embedded also below) Rubinstein finally shows why the planet traverses an elliptical orbit. The way the position of the planet has finally been found in Part VII is equivalent to the insights into the properties of an ellipse found at the beginning of this tutorial. The planet needs to be on the ‘ray’, the direction determined by the velocity diagram. But it also needs to be on the perpendicular bisector of the velocity segment – as the force causes a change in velocity perpendicular to the previous velocity segment, and the velocity needs to correspond to a tangent to the path.

Revisiting the Enigma of the Intersecting Lines and That Pesky Triangle

Chances are I made a fool of myself when trying to solve an intriguing math/physics puzzle described in this post.

I wanted to create a German version but found it needs a revision. I will just give you my stream of consciousness as I cannot make it worse anyway.

The puzzle is presented as a ‘physics puzzle’ but I think its enigmatic nature is described better if stated in purely mathematical terms:

Consider three lines in a flat plane, not parallel to each other and not intersecting in a single point. Their mutual intersection points are the corners of a triangle.

Assuming that the probability to find an arbitrary point on either side of each line is 50% – what is the probability to find a point within the triangle?

I had proposed a solution of 1/7. My earlier line of reasoning was this:

The three lines divide the full area into 7 parts – the triangle in the center and 6 sections adjacent to the triangle: Each of these parts is located either ‘to the left’ or ‘to the right’ of each line, called the ‘+’ and ‘-’ parts.


Proposed solution in post published in Feb. 2013: The body is divided into 7 parts; the center of mass being located in either with equal probability. (Image Credits: Mine)

There are 8 possible combinations of + and – signs, but note that the inverse of the symbols assigned to the triangle is missing: (-+-). Digression: It would be there if we painted these lines on a ball instead of a flat plane – then each line would close on itself in a circle and there would be 8 equivalent triangles. The combination missing here would correspond to the triangle opposite to the distinguished and singular triangle in our flat plane.
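
The seven regions and the missing sign pattern can be demonstrated by brute force (a sketch of mine; the line coefficients are an arbitrary example of three lines in general position): classify lots of random points by their side of each line and collect the distinct sign triples – exactly 7 of the 8 combinations show up.

```python
import random

random.seed(3)

# Three example lines a*x + b*y + c = 0, not parallel and not concurrent:
lines = [(1.0, 0.3, -0.2), (-0.4, 1.0, 0.5), (0.8, -1.0, 0.1)]

def signs(x, y):
    """Sign triple of a point: on which side of each line does it lie?"""
    return tuple('+' if a * x + b * y + c > 0 else '-' for a, b, c in lines)

# Sample random points and collect the distinct sign patterns:
patterns = {signs(random.uniform(-20, 20), random.uniform(-20, 20))
            for _ in range(100_000)}

print(f"{len(patterns)} of 8 possible sign patterns occur:")
for pt in sorted(patterns):
    print(''.join(pt))
```

The one pattern that never occurs is the inverse of the triangle’s pattern, just as argued above.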

I had assumed that the 7 areas are equivalent based on ‘symmetry’ – each area being positioned on either side of one of the three lines – and assuming that the condition given (50%) is not physical anyway. A physical probability would vary with distance from the line – imagine something like a symmetrical Gaussian distribution function centered around each line. Then the triangle would approximately correspond to the area of highest probability (where the peaks of the three Gaussians overlap most).

Do you spot the flaw?


The lower half contains 4 different parts (1x triangle, 2x open trapezoid, 1x open wedge), and the upper half contains two open wedges and one open trapezoid. Probabilities should add up to 50% in each half though.

We can do two cross-checks:

1) The sum of the probabilities of all parts should add up to 1 – OK, as 7 x 1/7 is 1. But:

2) the sum of probabilities of all pieces on either side of a line should add up to 0,5! This was the assumption after all.

Probabilities don’t add up correctly if I assign the same probability to each of the 7 pieces – it is 4/7 for the lower half and 3/7 for the upper half.

So I need to amend my theory and rethink the probability assigned to different kinds of areas (I guess mathematicians have a better term for ‘kinds of areas’ – more like ‘topologically equivalent’ or something.).

We spot three distinct shapes:

  • A triangle formed by the three lines.
  • Three ‘open wedges’ formed by two lines – e.g. part (- – -) in the lower half.
  • Three ‘open trapezoids’ formed by three lines, e.g. part (+++) in the upper half.

I am assuming now that probabilities assigned to all wedges are the same and those assigned to trapezoids are the same. I am aware of the fact that this will not work out if we consider a limiting case: Assume the angle between two of the three lines gets smaller and smaller – this will result in one very small wedge (between the red and the blue line) and two ‘wedges’ which are nearly equivalent to a quarter of the total area:


In the limiting case of the blue and red lines coalescing we would end up with four quarters, and you would find an arbitrary point with a probability of 25% in either quarter.

In the video Quantum Boffin has asked for the probability of the triangle – which can be any triangle, of arbitrary shape and size – and he states that there is a definitive answer. Therefore I think the details of size and shape of the other areas do not matter either, and the 50% assumption is somewhat unphysical.

As there are three distinct types of shapes, I need three equations to calculate all three probabilities.

Notation in the following: p… probability. T…triangle, W…wedge, Z…trapezoid. p(T) denotes the probability to find a point in the triangle.

The sum of all probabilities to find the point in either of the 7 pieces must be 1, and we have 3 wedges and 3 trapezoids:
i) p(T) + 3p(W) +3p(Z) = 1

We need 50% on either side of a line.

There is one Z and 2W on one side…
ii) p(Z) + 2p(W) = 0,5

…and T, 2Z and one W on the other side:
iii) p(T) + 2p(Z) + p(W) = 0,5

Now the sum of ii) and iii) is just i), so these equations are not independent. We need one more piece of information to solve for p(T), p(W), and p(Z)!

And here is my great educated guess: You have to make another assumption and this has to be based on a limiting case. How else could we make an assumption for an arbitrary shape?

I played with different ones, such as letting iv) p(W) = 0,25, motivated by the limiting case of a nearly right angle. Interestingly, you obtain a self-consistent solution. Just plugging in and solving you get: p(T)=0,25 and p(Z)=0. Cross-checking, you see immediately that this is consistent with the assumptions – probabilities sum up to 50%: You either have two Ws, or one W and the T, in one half of the plane.

Assigning 0 to the trapezoid does not seem physical though. We can do better.
So what about assigning equal probabilities to Z and W? iv) p(Z) = p(W)?

I don’t need to do the algebra to see that p(T) has to be zero, as you would have 3 equivalent pieces on each side, but the triangle can only be located on one side.

This assumption is in line with the limiting case of a really infinite plane. The triangle has finite size compared to 6 other infinite areas.
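
The little linear system can also be cranked through with exact fractions – a quick sketch of mine confirming both solutions discussed above (equations ii) and iii) as derived in this post, with either version of assumption iv) plugged in):

```python
from fractions import Fraction

# Constraints from the post:
#   i)   p(T) + 3p(W) + 3p(Z) = 1      (sum of ii) and iii), not independent)
#   ii)  p(Z) + 2p(W)         = 1/2
#   iii) p(T) + 2p(Z) + p(W)  = 1/2
# plus one extra assumption iv) to close the system.

def solve(pW=None, equal_WZ=False):
    if equal_WZ:                       # iv) p(Z) = p(W): ii) gives 3p(W) = 1/2
        pW = Fraction(1, 2) / 3
    pW = Fraction(pW)
    pZ = Fraction(1, 2) - 2 * pW       # from ii)
    pT = Fraction(1, 2) - 2 * pZ - pW  # from iii)
    return pT, pW, pZ

for label, (pT, pW, pZ) in [
    ("iv) p(W) = 1/4:", solve(pW=Fraction(1, 4))),
    ("iv) p(Z) = p(W):", solve(equal_WZ=True)),
]:
    assert pT + 3 * pW + 3 * pZ == 1   # cross-check against i)
    print(label, f"p(T) = {pT}, p(W) = {pW}, p(Z) = {pZ}")
```

The first assumption reproduces p(T)=1/4 with p(Z)=0; the second gives p(T)=0 with p(W)=p(Z)=1/6.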

I change my proposal to: The probability to find an arbitrary point in the triangle is zero – given the probability to find it on either side of each line is 50% and given that the area is infinite.

Again I’d like to stress that I consider this a math puzzle as the 50% assumption does not make sense without considering a spatial variation of probability (probability density, actually).

Addition as per November 21:

Based on the ingenious proposal by Jacques Pienaar in the comments, I am adding a sketch highlighting his idea.

Theoretically, the center of mass would correspond to the intersection of the 3 “perfect” solid lines. Now allow for some “measurement error” and add additional lines denoting the deviations. I depicted the “left” and “right” lines as dashed and dotted, respectively.

Now take a break, get a coffee, and look at the position of the true center of mass with respect to the triangles made up by the dashed and dotted lines:

Since we have 3 colors and either a dashed or a dotted line for each, there are 8 distinct triangles. I tried to make the angles and distances as random as possible, so I think Jacques’ proof does not depend on the details of the configuration or the probability distribution function (yet beware limiting cases such as parallel lines). The intersection of the solid lines is within 2 of the 8 triangles – hence a probability of 2/8 = 1/4.

I was intrigued by an odd coincidence, as I had played with 0,25, too (see above) – but based on the assumption of a probability of 0,25 for the wedges/corners, which, by cranking the algebra or just cross-checking the 50% criterion, results in p(Triangle)=0,25, too, and in p(Trapezoid)=0.

Looking hard at this new figure introduced by Jacques I see something closely related, but unfortunately a new puzzle as well: The true center of mass is in exactly two of eight trapezoids built from dashed or dotted lines. So I am tempted to state:


But it is difficult to make a statement on the corners or wedges, as any two intersecting lines cut the plane into 4 parts and any point is found in one of them. I was tempted to pick p(W) = 0, but this would result in p(Triangle) = -0.25.

So this was probably not the last update or the last post related to the enigma of the intersecting lines.

Breaking News on Search Term Poetry (Good, Bad, Ugly)

This is a break from quantum physics – you deserve it! Based on your questions – no un-follows so far, fortunately – I will follow up with another attempt to pick the right metaphors for 10^26-dimensional spaces and tons of enormous vectors.

But there is more (enigmatic) to this universe than quantum field theory – such as search terms submitted by our blogs’ visitors.

Samir Chopra, philosophy professor and published author, has said it very well in his freshly pressed post on search terms “The Peculiar Allure of Blog Search Terms” which also features this poem of mine.

I quote from his post:

Most of all though, search terms are a glimpse of the hive mind of the ‘Net: a peek at the bubbling activity of the teeming millions that interact with it on a daily basis, seeking entertainment, amusement, edification, gratification, employment.  They make visible the anxiety of the questions that torment some and the curiosity–sometimes prurient, sometimes not–that drives others; they remind us of the many different functions that this gigantic interconnected network of networks and protocols plays in our lives, of the indispensability it has acquired.

This quote would certainly improve the quality of my future search terms a lot…

But Google is going to spoil our playful crafting of poetry from blog statistics: Since 2011, Google’s search results pages have been delivered via SSL (https) if you were logged on to a Google app such as Google+. Now result pages are encrypted even for anonymous users, according to this article.

This has been called a reaction to the NSA spying on us. I tend to agree with the following:

We also can’t help but think that, because Google is encrypting search activity for everything but ad clicks, this is a move to get more people using Google AdWords.

From an internet standards’ perspective this is fine: RFC 2616 – which defines HTTP/1.1 – states that

Clients SHOULD NOT include a Referer header field in a (non-secure) HTTP request if the referring page was transferred with a secure protocol.

… which means that if you click on a link displayed on a page whose address starts with https, the target website will not know / log where you came from.
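
The rule can be sketched as a little decision function (my own illustration of the RFC’s SHOULD NOT, not code from any actual browser; the addresses are made up):

```python
from urllib.parse import urlsplit

# Sketch of the Referer rule: a conforming client should not leak the
# referring page's address when moving from a secure (https) page to a
# non-secure (http) one.
def should_send_referer(referring_url, target_url):
    ref_secure = urlsplit(referring_url).scheme == "https"
    target_secure = urlsplit(target_url).scheme == "https"
    return not (ref_secure and not target_secure)

# Google's encrypted results page -> a plain-http blog: no Referer,
# hence no search term shows up in the blog's statistics.
print(should_send_referer("https://www.google.com/search", "http://myblog.example.com/"))  # False
print(should_send_referer("http://some.example.com/", "http://myblog.example.com/"))       # True
```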

What does it mean for search term poets? Will this history of my meteoric rise to fame as a search term / spam poet suddenly come to an end?

Here are some ideas of mine:

Use other stuff such as spam, comments, or error messages – as I did in some poems already. I have even poeti-cized my own blog posts’ titles and a full post of mine. But search terms from unknown visitors have been the most inspirational snippets from the virtual scrapyard.

Use Google / Bing Webmaster Tools’ output instead of WP Stats – I have done this, too. The poem quoted by Samir Chopra was one of those peppered with terms from Webmaster Tools. This is a violation of my own rules, though a documented one.

In order to set up your blog with these tools you need to prove ownership by adding a meta tag in your WP blog’s settings. The downside: At any point in time you can only access the search terms submitted within the last month, so you would need to save them every day, and Google only gives you the number of clicks if it was higher than 10. Otherwise (<10) it might just have been an ‘impression’ – your blog showing up in search results without users clicking your link.

More ideas?

The Matrix | Jamie Zawinski [Attribution], via Wikimedia Commons

engineering and art meets
steampunk icons electrical panel
interactive floor tetris
geeky fascination
back to the future

(Snippet from my most recent search term poem)

On the Relation of Jurassic Park and Alien Jelly Flowing through Hyperspace

Yes, this is a serious physics post – no. 3 in my series on Quantum Field Theory.

I promised to explain what Quantization is. I will also argue – again – that classical mechanics is unjustly associated with pictures like this:

Steampunk wall clock (Wikimedia)

… although it is more like this:

Timelines in Back to the Future | By TheHYPO [CC-BY-SA-3.0 or GFDL], via Wikimedia Commons

This shows the timelines in Back to the Future – in case you haven’t recognized it immediately.

What I am trying to say here – again – is: so-called classical theory is as geeky, as weird, and as fascinating as quantum physics.

Experts: In case I get carried away by my metaphors – please see the bottom of this post for technical jargon and what I actually try to do here.

Get a New Perspective: Phase Space

I am using my favorite simple example: A point-shaped mass connected to a massless spring, or a pendulum, oscillating forever – not subject to friction.

The speed of the mass is zero when the motion changes from ‘upward’ to ‘downward’. It is maximum when the pendulum reaches the point of minimum height. Everything oscillates: Kinetic energy is transferred to potential energy and back. Position, velocity and acceleration all follow wavy sine or cosine functions.

For purely aesthetic reasons I could also plot the velocity versus position:

Simple Harmonic Motion Orbit | By Mazemaster (Own work) [Public domain], via Wikimedia Commons

From a mathematical perspective this is similar to creating those beautiful Lissajous curves: Connecting a signal representing position to the x input of an oscilloscope and the velocity signal to the y input results in a circle or an ellipse:

Lissajous curves | User Fiducial, Wikimedia

This picture of the spring’s or pendulum’s motion is called a phase portrait in phase space. Actually we use momentum, that is: velocity times mass, but this is a technicality.

The phase portrait is a way of depicting what a physical system does or can do – in a picture that allows for quick assessment.
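
For the spring this is easy to verify numerically (a little sketch; amplitude and frequency are arbitrary picks of mine):

```python
import math

# Harmonic oscillator: x(t) = A*cos(w*t), v(t) = -A*w*sin(w*t).
# Plotting v/w versus x, the phase-space point stays on a circle of
# radius A - the circle/ellipse seen on the oscilloscope.
A, w = 1.5, 2.0
for k in range(100):
    t = 0.05 * k
    x = A * math.cos(w * t)
    v = -A * w * math.sin(w * t)
    r = math.hypot(x, v / w)     # distance from the origin in phase space
    assert abs(r - A) < 1e-12
print("phase-space orbit is a circle of radius", A)
```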

Non-Dull Phase Portraits

Real-life oscillating systems do not follow simple cycles. The so-called Van der Pol oscillator is a model system subject to damping. It is also non-linear because the force of friction depends on the position squared and the velocity. Non-linearity is not uncommon; also the friction an airplane or car ‘feels’ in the air is proportional to the velocity squared.

The stronger this non-linear interaction is (the parameter mu in the figure below), the more the phase portrait deviates from the circular shape:

Van der Pol equation phase portrait | By Krishnavedala (Own work) [CC-BY-SA-3.0 or GFDL], via Wikimedia Commons

Searching for this image I have learned from Wikipedia that the Van der Pol oscillator is used as a model in biology – here the physical quantity considered is not a position but the action potential of a neuron (the electrical voltage across the cell’s membrane).
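
A minimal numerical sketch (fixed-step Runge-Kutta, with my own parameter picks) shows the hallmark of the Van der Pol oscillator: whatever the small initial displacement, the trajectory settles onto a limit cycle with an amplitude of about 2:

```python
# Van der Pol oscillator: x'' - mu*(1 - x^2)*x' + x = 0,
# integrated with a simple fixed-step Runge-Kutta 4 scheme.
def vdp_step(x, v, mu, dt):
    def f(x, v):
        return v, mu * (1.0 - x * x) * v - x
    k1x, k1v = f(x, v)
    k2x, k2v = f(x + dt/2*k1x, v + dt/2*k1v)
    k3x, k3v = f(x + dt/2*k2x, v + dt/2*k2v)
    k4x, k4v = f(x + dt*k3x, v + dt*k3v)
    return (x + dt/6*(k1x + 2*k2x + 2*k3x + k4x),
            v + dt/6*(k1v + 2*k2v + 2*k3v + k4v))

x, v, mu, dt = 0.1, 0.0, 2.0, 0.01
xs = []
for i in range(20_000):
    x, v = vdp_step(x, v, mu, dt)
    if i > 10_000:               # discard the initial transient
        xs.append(x)
amplitude = max(abs(q) for q in xs)
print(amplitude)   # close to 2
```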

Thus plotting the rate of change of a quantity we can measure versus the quantity itself makes sense for diverse kinds of systems. This is not limited to the natural sciences – you could also determine the phase portrait of an economic system!

Addicts of popular culture memes might have guessed already which phase portrait needs to be depicted in this post:

Reconnecting to Popular Science

Chaos Theory has become popular via the elaborations of Dr. Ian Malcolm (Jeff Goldblum) in the movie Jurassic Park. Chaotic systems exhibit phase portraits that are called Strange Attractors. An attractor is the set of points in phase space a system ‘gravitates’ to if you leave it to itself.

There is no attractor for the simple spring: This system will trace out a specific circle in phase space forever – a larger one the bigger the initial push on the spring is.

The most popular strange attractor is probably the Lorenz Attractor. It was initially associated with physical quantities characteristic of temperature and the flow of air in the earth’s atmosphere, but it can be re-interpreted as a system modeling chaotic phenomena in lasers.

It might be apocryphal but I have been told that it is not the infamous flap of the butterfly’s wing that gave the related effect its name, but rather the shape of the three-dimensional attractor:

Lorenz system r28 s10 b2-6666 | By Computed in Fractint by Wikimol [Public domain], via Wikimedia Commons
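
You don’t need special software to watch the butterfly emerge – a crude Euler integration of the three Lorenz equations (with the classic parameters; counting ‘wing’ visits is just my own way of summarizing the trajectory) already shows the jumps between the two lobes:

```python
# Lorenz system with the classic parameters sigma=10, r=28, b=8/3,
# integrated with a simple Euler scheme.
sigma, r, b = 10.0, 28.0, 8.0 / 3.0
x, y, z = 1.0, 1.0, 1.0
dt = 0.001
wings = set()
for i in range(200_000):
    dx = sigma * (y - x)
    dy = x * (r - z) - y
    dz = x * y - b * z
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    if i > 50_000:
        wings.add(x > 0)   # which 'wing' of the butterfly are we on?
print(sorted(wings))        # [False, True]: both wings are visited
```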

We had Jurassic Park – here comes the jelly!

A single point-particle on a spring can move only along a line – it has a single degree of freedom. You need just a two-dimensional plane to plot its velocity over position.

Allowing for motion in three-dimensional space means we need to add additional dimensions: The motion is fully characterized by the (x,y,z) positions in 3D space plus the 3 components of velocity. Actually, this three-dimensional vector is called velocity – its size is called speed.

Thus we already need 6 dimensions in phase space to describe the motion of an idealized point-shaped particle. Now throw in an additional point-particle: We need 12 numbers to track both particles – hence 12 dimensions in phase space.

Why can’t the two particles simply use the same space? (*) Both particles still live in the same 3D space, and they could also inhabit the same 6D phase space. The 12D representation has an advantage though: The whole system is represented by a single dot, which makes our lives easier if we contemplate different systems at once.

Now consider a system consisting of zillions of individual particles, say 1 cubic meter of air containing about 10^25 molecules. Viewing these particles in a Newtonian, classical way means tracking their individual positions and velocities. In a pre-quantum-mechanical, deterministic assessment of the world you know the past and the future by calculating these particles’ trajectories from their positions and velocities at a certain point of time.

Of course this is not doable, and it leads to practical non-determinism due to calculation errors piling up and amplifying. This is a 10^25-body problem, much much much more difficult than the three-body problem.

Fortunately we don’t really need all those numbers in detail – useful properties of a gas such as the temperature constitute gross statistical averages of the individual particles’ properties. Thus we want to get a feeling how the phase portrait develops ‘on average’, not looking too meticulously at every dot.

The full-blown phase space of the system of all molecules in a cubic meter of air has about 10^26 dimensions – 6 for each of the 10^25 particles (physicists don’t care about a factor of 6 versus a factor of 10). Each state of the system is sort of a snapshot of what the system really does at a point of time. It is a vector in 10^26-dimensional space – a looooong ordered collection of numbers, but nonetheless conceptually not different from the familiar 3D ‘arrow-vector’.

Since we are interested in averages and probabilities we don’t watch a single point in phase space. We don’t follow a particular system.

We rather imagine an enormous number of different systems under different conditions.

Considering the gas in the cubic vessel this means: We imagine molecule 1 being at the center and very fast, whereas molecule 10 is slow and in the upper right corner, and molecule 666 is in the lower left corner and has medium speed. Now extend this description to 10^25 particles.

But we know something about all of these configurations: There is a maximum x, y and z position particles can have – the phase portrait is limited by these maximum dimensions, as the circle representing the spring was. The particles have all kinds of speeds in all kinds of directions, but there is a most probable speed, related to temperature.
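
That ‘most probable speed’ is a concrete number: for a Maxwell-Boltzmann gas it is v_p = sqrt(2kT/m). A back-of-the-envelope calculation for nitrogen at room temperature (my choice of example):

```python
import math

# Most probable speed of a Maxwell-Boltzmann gas: v_p = sqrt(2*k*T/m)
k = 1.380649e-23              # Boltzmann constant in J/K
m = 28 * 1.66053906660e-27    # mass of an N2 molecule in kg
T = 293.0                     # room temperature in K
v_p = math.sqrt(2 * k * T / m)
print(round(v_p), "m/s")      # roughly 400 m/s
```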

The collection of the states of all possible systems occupies a patch in 10^26-dimensional phase space.

This patch gradually peters out at the edges in the velocities’ directions.

Now let’s allow the vessel to grow: The patch will become bigger in the spatial dimensions as particles can have any position in the larger cube. Since the temperature will decrease due to the expansion, the mean velocity will decrease as well – assuming the cube is insulated.

The time evolution of this ensemble of systems – each point representing one possible system – is represented by this hyper-dimensional patch transforming and morphing. Since we consider so many different states – otherwise probabilities don’t make sense – we don’t see the granular nature of the individual points: it’s like a piece of jelly moving and transforming:

Precisely defined initial configurations of systems have a tendency to get mangled and smeared out. Note again that each point in the jelly is not equivalent to a molecule of gas, but is a point in an abstract configuration space with a huge number of dimensions. We can only make it accessible via projections into our 3D world or onto a 2D plane.

The analogy to jelly or honey or any fluid is more apt than it may seem:

The temporal evolution in this hyperspace is indeed governed by equations that are amazingly similar to those governing an incompressible liquid – such as water. There is continuity and locality: Hyper-jelly can’t get lost or be created. Any increase of hyper-jelly in a tiny volume of phase space can only be attributed to jelly flowing into this volume from adjacent little volumes.
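
This incompressibility can be checked directly for the simple spring (a sketch: the exact solution of the oscillator, with unit mass and a frequency picked by me, moves a small square patch of initial conditions around in phase space):

```python
import math

# Liouville in action for the harmonic oscillator (unit mass, w = 2):
# move a small square patch of initial conditions (x, p) with the
# exact solution and check that its phase-space area is unchanged.
w = 2.0

def evolve(x, p, t):
    return (x * math.cos(w*t) + (p / w) * math.sin(w*t),
            -x * w * math.sin(w*t) + p * math.cos(w*t))

square = [(1.0, 0.0), (1.1, 0.0), (1.1, 0.1), (1.0, 0.1)]
moved = [evolve(x, p, 0.7) for x, p in square]

def area(poly):   # shoelace formula
    n = len(poly)
    return abs(sum(poly[i][0] * poly[(i+1) % n][1]
                   - poly[(i+1) % n][0] * poly[i][1]
                   for i in range(n))) / 2

print(round(area(square), 6), round(area(moved), 6))  # 0.01 0.01
```

The patch is sheared into a parallelogram, but its area stays exactly 0.01 – the jelly deforms, yet it is incompressible.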

In summary: Classical mechanical systems comprising many degrees of freedom – that is: many components that have the freedom to move in a different way than other parts of the system – are best viewed in the multi-dimensional space whose dimensions are (something like) positions and (something like) the related momenta.

Can it get more geeky than that in quantum theory?

Finally: Quantization

I said in the previous post that quantization of fields or waves is like turning down intensity in order to bring out the particle-like rippled nature of that wave. In the same way you could say that you add blurry waviness to idealized point-shaped particles.

Another way is to consider the loss of information via Heisenberg’s Uncertainty Principle: You cannot know both the position and the momentum of a particle or a classical wave exactly at the same time. By the way, this is why we picked momenta and not velocities to generate phase space.

You calculate the positions and momenta of the small volumes that constitute the flowing and crawling patches of jelly at one point of time from the positions and momenta at the point of time before. That’s the essence of Newtonian mechanics (and conservation of matter) applied to fluids.

Doing numerical calculations in hydrodynamics you think of the jelly as divided into small flexible cubes – you divide it mentally using a grid – and you apply a mathematical operation that creates the new state of this digitized jelly from the old one.

Since we are still discussing a classical world we do know positions and momenta with certainty. This translates to stating (in math) that it does not matter if you do the calculations involving positions first or those involving momenta first.

There are different ways of carrying out the steps in these calculations because you could do them one way or the other – the operations are commutative.

Calculating something in this respect is similar to asking nature for a property or measuring that quantity.

Thus when we apply a quantum viewpoint and quantize a classical system, calculating momentum first and position second, or doing it the other way around, will yield different results.

The quantum way of handling the system of those 10^25 particles looks the same as the classical equations at first glance. The difference is in the rules for carrying out calculations involving positions and momenta – so-called conjugate variables.

Thus quantization means you take the classical equations of motion and give the mathematical symbols a new meaning and impose new, restricting rules.
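
A toy illustration of these new rules, with two small matrices standing in for the conjugate variables (these are not the real, infinite-dimensional position and momentum operators – just a demonstration that for matrices, unlike for classical numbers, the order of operations can matter):

```python
# Non-commuting 'measurements', sketched with two 2x2 matrices:
# applying X then P differs from P then X, i.e. [X, P] != 0.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# toy stand-ins for conjugate variables (Pauli matrices, for illustration)
X = [[0, 1], [1, 0]]
P = [[1, 0], [0, -1]]
XP = matmul(X, P)
PX = matmul(P, X)
print(XP == PX)   # False: the order matters
```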

I probably could just have stated that without going off on those tangents.

However, no system of interest in the real world is composed of just a few isolated particles. We live in a world of those enormous phase spaces.

In addition, working with large abstract spaces like this is at the heart of quantum field theory: We start with something spread out in space – a field with an infinite number of degrees of freedom. Considering different state vectors in these quantum systems is considering all possible configurations of this field at every point in space!

(*) This was a question asked on G+. I edited the post to incorporate the answer.


Expert information:

I have taken a detour through statistical mechanics: introducing the Liouville equation as an equation of continuity in a multi-dimensional phase space. The operations mentioned – related to positions and momenta – are the replacement of time derivatives via Hamilton’s equations. I resisted the temptation to mention the hyper-planes of constant energy. Replacing the Poisson bracket in classical mechanics with the commutator in quantum mechanics turns the Liouville equation into its quantum counterpart, also called the Von Neumann equation.

I know that a discussion about the true nature of temperature opens a can of worms. We should describe temperature as the width of a distribution rather than an average, as a beam of molecules all travelling in the same direction at the same speed has a temperature of zero Kelvin – not an option due to zero-point energy.

The Lorenz equations have been applied to the electrical fields in lasers by Haken – here is a related paper. I did not go into the difference between the phase portrait of a system, showing its time evolution, and the attractor, which is the system’s final state. I also didn’t stress that this is a three-dimensional image of the Lorenz attractor, in which the ‘velocities’ are not depicted. You could say it is the 3D projection of the 6D phase portrait. I basically wanted to demonstrate – using catchy images, admittedly – that representations in phase space allow for a quick assessment of a system.

I also tried to introduce the notion of a state vector in classical terms, not jumping to bras and kets in the quantum world as if a state vector does not have a classical counterpart.

I have picked an example of a system undergoing a change in temperature (non-stationary – not the example you would start with in statistical thermodynamics) and swept all considerations on ergodicity and related meaningful time evolutions of systems in phase space under the rug.

The Science of Search Term Poetry

In the break after the second session on Quantum Field Theory I am showing off light edutainment with a scientific touch.

Every quarter I save the search terms as displayed in WordPress Stats for highly sophisticated statistical, psychological and linguistic analysis. That is, I do create a Search Term Poem.

Rules are as follows:

  • Every line of the poem including titles is copied from the search terms as displayed in Stats. Search terms must not be used in more than one poem.
  • Editing is not permitted; typos must not be corrected.
  • Words must not be cut out from the middle of search strings. Truncating words at the beginning or the end is permitted.
  • Different search terms or fragments cut out from different terms must not be concatenated. There is a bijective correspondence of lines in the poem and search terms.

The focus of this blog has changed back to physics – and so have the search terms. Searchers’ usual questions about rodents in the microwave could also be classified as sciencey (biology and electrical engineering), considering the way the pitied rodent died nearly one year ago.

I am throwing in some images from Wikimedia that represent my mental connections to the ‘stanzas’ nearly perfectly. I wonder if my so-called artwork is now considered to build upon those images and should be licensed under Creative Commons, too.

humiliating poetry
poems on fear
poems that have the word danger in them
best crowd sourcing site to publish poetry

technology is the theory and practice of science
cyber nightmares
“extended self”

Borg dockingstation

myth and magic
butterfly effect
microwave a rodent
travel in past by falling asleep
forward feed and backward feed

Intermittent Lorenz Attractor - Chaoscope

outsider physics
“new physical evidence on the axis of rotation of the earth”
physics on the fringe: smoke rings
is theoretical physics crackpot science
quantum resurrection

Smoke Rings (4446627539)

engineering and art meets
steampunk icons electrical panel
interactive floor tetris
geeky fascination
back to the future


trivia about physics
where’s centre of mass in world?
if we plumb the air throw an object, what is the direction of the coriolis force
toilet flush rotating too strong

Hurricane isabel and coriolis force

at large in particular
is “befire we know it” a cliché
why is existential questions are avoided in most conversations?

what is sleek weak geek in a poetic term
unemployed philosopher

who invented the org chart?

dangers of social networking
fun and action are the rule here

As I don’t pay for the No Ads Upgrade: Occasionally, some of your visitors may see an advertisement here. This image may add to the experience of the poem.

Space Balls, Baywatch and the Geekiness of Classical Mechanics

This is the first post in my series about Quantum Field Theory. What a let-down: I will just discuss classical mechanics.

There is quantum mechanics, and in contrast there is good old classical, Newtonian mechanics. The latter is a limiting case of the former. So there is some correspondence between the two, and there are rules that let you formulate the quantum laws from the classical laws.

But what are those classical laws?

Chances are high that classical mechanics reminds you of pulleys and levers, calculating torques of screws and Newton’s law F = ma: Force is equal to mass times acceleration.

I argue that classical dynamics is most underrated in terms of geek-factor and philosophical appeal.

[Space Balls]

The following picture might have been ingrained in your brain: A force is tugging at a physical object, such as earth’s gravity attracting a little ball travelling in space. Now the ball moves – it falls. Actually, the moon also falls, in a sense, when it is orbiting the earth.

Newton's cannon ball.

Cannon ball and gravity. If the initial velocity is too small the ball traverses a parabola and eventually reaches the ground (A, B). If the ball is just given the right momentum, it will fall forever and orbit the earth (C). If the velocity is too high, the ball will escape the gravitational field (E). (Wikimedia). Now I said it – ‘field’! – although I tried hard to avoid it in this post.

When bodies move, their positions change. The strength of the gravitational force depends on the distance from the mass causing it, thus the force felt by the moving ball changes. This is why the three-body problem is hard: You need a computer to calculate the forces three or more planets exert on each other at every point of time.

So this is the traditional mental picture associated with classical mechanics. It follows these incremental calculations:
Force acts – things move – configuration changes – force depends on configuration – force changes.

In order to get this going you need to know the configuration at the beginning – the positions and the velocities of all planets involved.

So in summary we need:

  • the dependence of the force on the position of the masses.
  • the initial conditions – positions and velocities.
  • Newton’s law.
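
The incremental loop above – force acts, things move, force changes – can be sketched in a few lines (units with G = 1 and all the numbers are arbitrary picks of mine):

```python
import math

# 'Force acts - things move - force changes' for gravitating point
# masses, in units with G = 1. Each body is [mass, x, y, vx, vy].
def step(bodies, dt):
    forces = []
    for i, (mi, xi, yi, *_) in enumerate(bodies):
        fx = fy = 0.0
        for j, (mj, xj, yj, *_) in enumerate(bodies):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            r3 = (dx * dx + dy * dy) ** 1.5
            fx += mi * mj * dx / r3      # force depends on configuration
            fy += mi * mj * dy / r3
        forces.append((fx, fy))
    for (fx, fy), b in zip(forces, bodies):
        m = b[0]
        b[3] += fx / m * dt; b[4] += fy / m * dt   # force acts on velocity
        b[1] += b[3] * dt;   b[2] += b[4] * dt     # things move

# heavy central mass plus a light 'planet' on a circular orbit,
# with the circular-orbit speed v = sqrt(M / r)
bodies = [[1000.0, 0.0, 0.0, 0.0, 0.0],
          [1.0, 100.0, 0.0, 0.0, math.sqrt(1000.0 / 100.0)]]
for _ in range(10_000):
    step(bodies, 0.01)
r = math.hypot(bodies[1][1] - bodies[0][1], bodies[1][2] - bodies[0][2])
print(round(r))   # stays close to the initial radius of 100
```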

But there is an alternative description of classical dynamics, offering an alternative philosophy of mechanics so to speak. The description is mathematically equivalent, yet it feels unfamiliar.

In this case we trade the knowledge of positions and velocities for fixing the positions at a start time and an end time. Consider it a sort of game: You know where the planets are at time t1 and at time t2. Now figure out how they have moved / will move between t1 and t2. Instead of the force we consider another, probably more mysterious, property.

It is called the action. The action has the dimension of energy times time, and – like the force – it carries all information about the system.

The action is calculated by integrating… but I am reluctant to describe exactly how. The action (or its field-y counterparts) will be considered the basic description of a system – something that is given, in the way forces had been considered given in the traditional picture. The important thing is: You attach a number to each imaginable trajectory, to each possible history.

The trajectory a particle traverses in the time slot t1–t2 is determined by the Principle of Least Action (which ‘replaces’ Newton’s law): The action of the system is minimal for the actual trajectory. Any deviation – such as a planet travelling in strange loops – would increase the action.

Principle of Least Action.

Principle of least action. Given: The positions of the particle at start time t1 and end time t2. Calculated: The path the particle traverses – by testing all possible paths and calculating their associated actions. Near the optimum (red) path the action hardly varies (Wikimedia).
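
You can watch the principle at work numerically: discretize paths of a mass falling in gravity (Lagrangian L = m/2 · v² − m·g·x) between fixed endpoints and compare the action of the true parabolic path with deliberately deformed ones (a sketch; the sine-shaped bump is my own choice of deformation):

```python
import math

# Action of a discretized path of a falling mass:
# S = sum over steps of (m/2 * v^2 - m*g*x) * dt
m, g, T, N = 1.0, 9.81, 1.0, 1000
dt = T / N

def action(path):
    S = 0.0
    for i in range(N):
        v = (path[i+1] - path[i]) / dt     # velocity on this step
        x = (path[i+1] + path[i]) / 2      # midpoint position
        S += (0.5 * m * v * v - m * g * x) * dt
    return S

# the true trajectory x(t) = -g/2 * t^2, endpoints held fixed
true_path = [-0.5 * g * (i * dt)**2 for i in range(N + 1)]

# deformed paths: add a bump that vanishes at both endpoints
for eps in (0.0, 0.1, 0.5):
    bumped = [x + eps * math.sin(math.pi * i / N)
              for i, x in enumerate(true_path)]
    print(eps, round(action(bumped), 4))
# the smallest action belongs to eps = 0 - any deviation increases it
```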

This probably sounds awkward – why would you describe nature like this?
(Of course one answer is: this description will turn out useful in the long run – considering fields in 4D space-time. But this answer is not very helpful right now.)

That type of logic is useful in other fields of physics, too: A related principle lets you calculate the trajectory of a beam of light: Given the start point and the end point of a beam, light picks the path that is traversed in minimum time (this rule is called Fermat’s principle).

This is obvious for a straight laser beam in empty space. But Fermat’s principle allows for picking the correct path in less intuitive scenarios, such as: What happens at the interface between different materials, say air and glass? Light is faster in air than in glass, thus it makes sense to add a kink to the path and utilize the air as much as possible.


Richard Feynman used the following example: Imagine you walk on the beach and hear a swimmer crying for help. Since this is a 1960s textbook, the swimmer is a beautiful girl. In order to reach her you have to 1) run some meters on the sandy beach and 2) swim some meters in the sea. You do an intuitive calculation about the ideal point to enter the water, since you can run faster than you can swim: “By using a little more intelligence we would realize that it would be advantageous to travel a little greater distance on land in order to decrease the distance in the water, because we go so much slower in the water.” (Source: Feynman’s Lectures Vol. 1 – available online since a few days ago!)

Refraction at the interface between air and water.

Refraction at the interface between air and water (Wikimedia). The trajectory of the beam has a kink thus the pole appears kinked.
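
The lifeguard’s ‘intuitive calculation’ can be made explicit: minimize the travel time over all possible crossing points, and Snell’s law n1·sin(angle1) = n2·sin(angle2) pops out (a sketch with made-up geometry, using a crude grid search instead of proper calculus):

```python
import math

# Fermat's principle as minimization: travel from (0, 1) in air to
# (1, -1) in water, crossing the interface y = 0 at some point (x, 0).
# Travel time is proportional to n * path length in each medium.
n1, n2 = 1.0, 1.33        # refractive indices of air and water

def travel_time(x):
    return n1 * math.hypot(x, 1.0) + n2 * math.hypot(1.0 - x, 1.0)

# crude grid search for the fastest crossing point
best_x = min((i / 100_000 for i in range(100_001)), key=travel_time)

# check against Snell's law: n1*sin(angle1) = n2*sin(angle2)
sin1 = best_x / math.hypot(best_x, 1.0)
sin2 = (1.0 - best_x) / math.hypot(1.0 - best_x, 1.0)
print(abs(n1 * sin1 - n2 * sin2) < 1e-3)   # True
```

The fastest crossing point lies closer to the swimmer than the straight-line crossing would – more running, less swimming.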

Those laws are called variational principles: You consider all possible paths, and the path taken is indicated by an extremum, in these cases: a minimum.

Near a minimum, stuff does not vary much – the first-order derivative is zero at a minimum. Thus on varying the paths a bit you actually feel when you are close to the minimum – in the way you, as a car driver, would feel the bottom of a valley (it can only go up from here).

Doesn’t this description add a touch of spooky multiverses to classical mechanics already? It seems as if nature has a plan, or as if we view anything that has ever happened or will ever happen from a vantage point outside of space-time.

Things get interesting when masses or charges become smeared out in space – when there is some small ‘infinitesimal’ mass at every point in space. Or generally: When something happens at every point in space. Instead of a point particle that can move in three different directions – three degrees of freedom in physics lingo – we need to deal with an infinite number of degrees of freedom.

Then we are entering the world of fields that I will cover in the next post.

Related posts: Are We All Newtonians? | Sniffing the Path (On the Fascination of Classical Mechanics)