My Data Kraken – a Shapeshifter

I wonder if Data Kraken is only used by German speakers who translate our hackneyed Datenkrake – is it a word like eigenvector?

Anyway, I need this animal metaphor, even though this post is not about Facebook or Google. It’s about my personal Data Kraken – which is a true shapeshifter, like all octopuses are:

(… because they are spineless, but I don’t want to over-interpret the metaphor…)

Data Kraken’s shapeshifting ability is a blessing, given the ongoing challenges:

When the Chief Engineer is fighting with other intimidating life-forms in our habitat, he focuses on survival first and foremost … and sometimes he forgets to inform the Chief Science Officer about fundamental changes to our landscape of sensors. Then Data Kraken has to be trained again to detect whether the heat pump is on or off in a specific timeslot: Use the signal sent from the control unit to the heat pump? Or the one to the brine pump? Or better, use brine flow and temperature difference?

It might seem like a dull and tedious exercise to calculate ‘averages’ and other performance indicators that require only very simple arithmetic. But with the exception of room or ambient temperature, most of the ‘averages’ only make sense if some condition is met, like: The heating water inlet temperature should only be averaged when the heating circuit pump is on. And the temperature of the cold water, when the same floor loops are used for cooling in summer, must not be included in this average ‘heating water temperature’. Above all, false sensor readings – like 0, NULL, or whatever value (like 999) a vendor chooses to indicate an error – have to be excluded. And sometimes I rediscover eternal truths, like the ratio of averages not being equal to the average of ratios.
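
A minimal sketch of such conditional averaging – in Python rather than in my actual scripts, with made-up field names and a made-up vendor error code:

```python
# False readings: 0, NULL and the (hypothetical) vendor error code 999 are excluded.
INVALID = {0, None, 999}

def heating_water_average(readings):
    """Average inlet temperature over slots where the circuit pump was on."""
    valid = [temp for (_, pump_on, temp) in readings
             if pump_on and temp not in INVALID]
    return sum(valid) / len(valid) if valid else None

readings = [
    ("2016-12-24 00:00", True, 32.1),
    ("2016-12-24 00:15", True, 999),    # vendor error code -> excluded
    ("2016-12-24 00:30", False, 21.0),  # pump off -> excluded
    ("2016-12-24 00:45", True, 31.7),
]
print(heating_water_average(readings))  # 31.9

# ... and one of those eternal truths:
heat, power = [4.0, 1.0], [2.0, 1.0]
print(sum(heat) / sum(power))                       # ratio of averages: ~1.67
print(sum(h / p for h, p in zip(heat, power)) / 2)  # average of ratios: 1.5
```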

The Chief Engineer is tinkering with new sensors all the time: In parallel to using the old & robust analog sensor for measuring the water level in the tank…

Level sensor: The old way

… a multitude of level sensors was evaluated …

Level sensors: The precursors

… until finally Mr. Bubble won the casting …

Mr. Bubble’s measuring tube

… and the surface level is now measured via the pressure, which increases linearly with depth. For the Big Data Department this means adding some new fields to the Kraken database, calculating new averages … and smoothly transitioning from the volume of ice calculated from ruler readings to the new values.

Change is the only constant in the universe, to paraphrase Heraclitus [*]. Sensors morph in purpose: The heating circuit formerly known (to the control unit) as the radiator circuit became the new wall heating circuit, and the radiator circuit was virtually reborn as a new circuit.

I am guilty of adding new tentacles all the time, too – herding a zoo of meters added in 2015, each of them contributing a new log file with data taken at different points in time, at different intervals. This year I let Kraken put tentacles into the heat pump:

Data Kraken: Tentacles in the heat pump!

But the most challenging data source to integrate is the most unassuming one: the small list of data that the Chief Engineer had recorded manually until recently (until the advent of Miss Pi CAN Sniffer and Mr. Bubble). The reason: He had refused to take data at exactly 00:00:00 every single day, so I learned things I never wanted to know about SQL to deal with the odd time intervals.

To be fair, the Chief Engineer has been dedicated to data recording! He never shunned true challenges, like a legendary white-out in our garden, at a time when measuring ground temperatures was not yet automated:

The challenge

White Out

Long-term readers of this blog know that ‘elkement’ stands for a combination of nerd and luddite, so I try to merge a dinosaur scripting approach with real-world global AI Data Krakens’ wildest dreams: I wrote scripts that create scripts that create scripts [[[…]]], all based on a small proto-Kraken – a nice-to-use documentation database containing the history of sensors and calculations.
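
A toy version of that idea, to make it less abstract: a proto-Kraken metadata table drives the generation of the actual statements. All names here are hypothetical placeholders, not my real schema:

```python
# Scripts that create scripts: per-sensor SQL is generated from a small
# documentation table describing the history of sensors.
sensors = [
    {"field": "temp_brine_in",  "valid_from": "2015-01-01", "error": 999},
    {"field": "level_pressure", "valid_from": "2016-11-20", "error": 999},
]

template = ("SELECT AVG({field}) FROM log "
            "WHERE {field} <> {error} AND ts >= '{valid_from}';")

for sensor in sensors:
    print(template.format(**sensor))
```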

The mutated Kraken is able to eat all kinds of log files, including clients’ ones, and above all, it can be cloned easily.

I’ve added all the images and anecdotes to justify why an unpretentious user interface like the following is my true Christmas present to myself – ‘easily clickable’ calculated performance data for days, months, years, and heating seasons.

Data Kraken: UI

… and diagrams that can be changed automatically, by selecting interesting parameters and time frames:

Excel for visualization of measurement data

The major overhaul of Data Kraken turned out to be prescient, as a seemingly innocuous firmware upgrade not only changed log file naming conventions and the publication schedule but also shuffled all the fields in the log files. My Data Kraken has to be capable of rebuilding the SQL database from scratch, based on a documentation of those ever-changing fields and the raw log files.

_________________________________

[*] It was hard to find the true original quote, as the internet is cluttered with change management coaches using it, and Heraclitus speaks to us only through secondary sources. But anyway, what this philosophy website says about Heraclitus applies very well to my Data Kraken:

The exact interpretation of these doctrines is controversial, as is the inference often drawn from this theory that in the world as Heraclitus conceives it contradictory propositions must be true.

In my world, I also need to deal with intriguing ambiguity!

Alien Energy

I am sure it protects us not only from lightning but also from alien attacks and EMP guns …

So I wrote about our lightning protection, installed together with our photovoltaic generator. Now our PV generator has been operational for 11 months, and we have encountered one alien attack – albeit by beneficial aliens.

The Sunny Baseline

This is the electrical output power of our generator – oriented partly south-east, partly south-west – for some selected, nearly perfectly cloudless days last year. Even in darkest winter you could fit the 2kW peak that a water cooker or the heat pump needs under the curve at noon. On a really sunny day we can heat our hot water once, but we cannot provide enough energy for room heating (monthly statistics here).

PV power over time: Sunny days 2015

Alien Spikes and an Extended Alien Attack

I was intrigued by very high and narrow spikes of output power immediately after clouds had passed by:

PV power over time, data points taken every few seconds.

There are two possible explanations: 1) an increase in solar cell efficiency as the panels cool off while shadowed, or 2) ‘focusing’ (refraction) of radiation by the edges of nearby clouds.

Such 4kW peaks lasting only a few seconds are not uncommon, but typically they do not show up in our standard logging, which comprises 5-minute averages.

There was one notable exception this February: Power surged to more than 4kW, significantly higher than the output on other sunny days in February. Actually, it was higher than the output on the best sunny day ever, last May 11, and as high as the peaks at summer solstice (aliens are green, of course):

PV power over time: Alien Energy on Feb 11, 2016

Temperature effect and/or ‘focusing’?

On the alien attack day it was cloudy, and the night had been warmer than on the sunny reference day, February 6. At about 11:30 the sun broke through the clouds, hitting rather cool panels:

PV power over time: February 2016 - Output Power and Ambient Temperature

On that day, the sun lingered right at the edge of clouds for some time, and global radiation was likely higher than expected due to the focusing effect.

Global Radiation over time: February 2016

The jump in global radiation at 11:30 is clearly visible in our measurements. But in addition, the panels had been heated up before by the peak in solar radiation, and air temperature had risen, too. So the different effects cannot be disentangled easily.

Power drops by 0,44% of the rated power per °C of increase in panel temperature. Our generator is rated at 4,77kW, so power decreases by about 21W per °C of panel temperature.

At 11:30 power was 1,3kW higher than on the normal reference day – the theoretical equivalent of a panel temperature decrease of 62°C. So I think I can safely attribute the initial surge in output power to the unusual peak in global radiation only.
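
The arithmetic behind these two paragraphs, as a quick sanity check (all numbers from the text):

```python
coefficient = 0.0044        # power loss per °C of panel temperature (0,44%)
p_rated = 4770              # rated generator power in W
w_per_deg = coefficient * p_rated
print(w_per_deg)            # ≈ 21 W/°C

surplus = 1300              # surplus power at 11:30 in W
print(surplus / w_per_deg)  # ≈ 62 °C of equivalent panel cooling – implausible
```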

At 12:30, output power on the normal sunny day was 300W lower than on the alien day. This can partly be attributed to the lower input radiation, and partly to a higher ambient temperature.

But panel temperature has a simple, linear relationship with ambient temperature only if input radiation changes slowly. The sun might be blocked for a very short period – shorter than our standard logging interval of 90s for radiation – so that the surface of the panels cools off intermittently. It is an interesting optimization problem: Just the right combination of blocking periods and sunny periods would maximize overall output.

Re-visiting data from last hot August to add more dubious numbers

Panels’ performance was lower for higher ambient air temperatures …

PV power over time: August 2015 - Output Power and Ambient Temperature

… while global radiation over time was about the same. Actually the enveloping curve was the same, and there were even negative spikes at noon despite the better PV performance:

Global Radiation over time: August 2015

The difference in peak power was about 750W. The panel temperature difference to account for that would have to be about 36°C. This is three times the measured difference in ambient temperature of 39°C – 27°C = 12°C. Is this plausible?

PV planners use a worst-case panel temperature of 75°C – for worst-case hot days like August 12, 2015.

The Normal Operating Cell Temperature (NOCT) of panels is about 46°C. ‘Normal’ conditions are: 20°C ambient air, 800W/m2 of solar radiation, and free-standing panels. One panel has an area of about 1,61m2; our generator with 18 panels has 29m2, so 800W/m2 translates to 23kW of input radiation. Since the efficiency of solar panels is about 16%, this input generates about 3,7kW of output power – about the average of the peak values of the two days in August. Our panels are attached to the roof rather than free-standing – which is expected to result in a temperature increase of 10°C.
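
A quick check of this estimate, with the numbers as stated above:

```python
area = 18 * 1.61                 # total panel area in m2
irradiance = 800                 # 'normal' solar radiation in W/m2
efficiency = 0.16
p_in = area * irradiance         # radiation hitting the panels in W
print(p_in / 1000)               # ≈ 23 kW
print(p_in * efficiency / 1000)  # ≈ 3,7 kW electrical output
```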

So we had been close to normal conditions at noon, radiation-wise. If we had been able to crank ambient temperature down to 20°C in August, panel temperature would have been about 46°C + 10°C = 56°C.

I am boldly interpolating now, in order to estimate panel temperature on the ‘colder’ day in August:

Air temperature    Panel temperature    Comment
20°C               56°C                 Normal operating conditions, plus the typical temperature increase for well-vented rooftop panels.
27°C               63°C                 August 1: measured ambient temperature; solar cell temperature interpolated.
39°C               75°C                 August 12: measured ambient temperature; panel temperature is an estimate for the worst case.

Under perfectly stable conditions the panel temperatures would have differed by 12°C, resulting in a difference of only ~250W (12°C × 21W/°C).

Even considering higher panel temperatures on the hotter day, or a non-linear relationship between air temperature and panel temperature, will not easily give you the ~36°C of temperature difference required to explain the observed difference of 750W.

I think we see aliens at work again:

At about 10:45, global radiation for the cooler day, August 1, starts to fluctuate – most likely even more wildly than we see with the 90s logging interval. Before 10:45, the difference in output power between the two days is actually more like 200-300W – in line with my haphazard estimate for steady-state conditions.

Then at noon the ‘focusing’ effect could have kicked in, and panel surface temperature might have fluctuated between the 27°C air temperature minimum and the estimated 63°C. Together, these two effects could account for the required additional increase of a few 100W.

Since ‘focusing’ is actually refraction by particles in the thinned-out edges of clouds, I wonder if the effect could also be caused by barely visible variations in the density of mist in the sky – I remember the hot period in August 2015 as sweltering and a bit hazy rather than partly cloudy.

I think it is likely that both beneficial effects – temperature and ‘focusing’ – will always be observed in unison. On February 11 I had the chance to see the effect of focusing only (or traces of an alien spaceship that just exited a worm-hole) for about half an hour.

Wormhole travel as envisioned by Les Bossinas for NASA

________________________________

Further reading:

On temperature dependence of PV output power – from an awesome resource on photovoltaics:

On the ‘focusing’ effect:

  • Can You Get More than 100% Solar Energy?
    Note especially this comment – describing refraction, and pointing out that refraction of light can ‘focus’ light that would otherwise have been scattered back into space. The commentator also proposes different mechanisms for short spikes in power versus the increase of power during extended periods (such as I observed on February 11).
  • Edge-of-Cloud Effect

Source for the 10°C higher temperature of rooftop panels versus free-standing ones: German link, p.3: Ambient air + 20°C versus air + 30°C

Temperature Waves and Geothermal Energy

Nearly all of the renewable energy exploited today is, in a sense, solar energy. Photovoltaic cells convert solar radiation into electricity, solar thermal collectors heat water, and plants need solar power for photosynthesis – for ‘creating biomass’. The motion of water and air is influenced not only by the fictitious forces caused by the earth’s rotation, but also by the temperature gradients imposed by the distribution of solar energy.

Geothermal heat pumps with ground loops near the surface actually use solar energy, too – deposited in summer and stored for winter. That’s why I think ‘geothermal heat pump’ is a bit of a misnomer.

3-ton Slinky Loop

Collector (heat exchanger) for brine-water heat pumps.

Within the first ~10 meters below the surface, temperature fluctuates throughout the year; at 10m the temperature remains about constant and equal to 10-15°C for the whole year.

Only at greater depths can the flow of ‘real’ geothermal energy be spotted: In the top layer of the earth’s crust the temperature rises about linearly, at about 3°C (3K) per 100m. The details depend on geological peculiarities; the gradient can be higher in active regions. This is the energy utilized by geothermal power plants delivering electricity and/or heat.

Temperature schematic of inner Earth

Geothermal gradient adapted from Boehler, R. (1996). Melting temperature of the Earth’s mantle and core: Earth’s thermal structure. Annual Review of Earth and Planetary Sciences, 24(1), 15–40. (Wikimedia, user Bkilli1). Geothermal power plants use boreholes a few kilometers deep.

This geothermal energy originates from radioactive decays and from the violent past of the primordial earth: when the kinetic energy of celestial objects colliding with each other turned into heat.

The flow of geothermal energy per area directed to the surface that is associated with this gradient is about 65 mW/m2 on continents:

Earth heat flow

Global map of the flow of heat, in mW/m2, from Earth’s interior to the surface. Davies, J. H., & Davies, D. R. (2010). Earth’s surface heat flux. Solid Earth, 1(1), 5-24. (Wikimedia user Bkilli1)

Some comparisons:

  • It is small compared to the energy from the sun: In middle Europe, the sun provides about 1.000 kWh per m2 and year, thus 1.000.000Wh / 8.760h ≈ 114W/m2 on average.
  • It is also much lower than the rule-of-thumb power of ‘flat’ ground loop collectors – about 20W/m2.
  • The total ‘cooling power’ of the earth is several 10^10 kW: Were the energy not replenished by radioactive decay, the earth would lose a seemingly impressive 10^14 kWh per year, yet this would result in a temperature drop of only ~10^-7 °C per year. (This is just a back-of-the-envelope check of orders of magnitude, based on the earth’s mass and surface area – see the sketch below, and the links at the bottom for detailed values.)
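
A back-of-the-envelope sketch of that last bullet – rounded standard values for the earth’s radius and mass, a rough average specific heat of rock, and the continental heat flow applied to the whole surface, which is fine for checking orders of magnitude:

```python
import math

flux = 0.065                  # geothermal heat flow in W/m2
r_earth = 6.371e6             # radius of the earth in m
area = 4 * math.pi * r_earth**2

p_total = flux * area         # total heat flow in W
print(p_total / 1e3)          # ≈ 3e10 kW – 'several 10^10 kW'

e_year = p_total * 8760       # energy lost per year in Wh
print(e_year / 1e3)           # ≈ 3e14 kWh

m_earth = 5.97e24             # mass of the earth in kg
c_rock = 1000                 # rough specific heat in J/(kg K)
dT = e_year * 3600 / (m_earth * c_rock)
print(dT)                     # ≈ 2e-7 °C per year
```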

The constant temperature at 10m depth – in the ‘neutral zone’ – is about the same as the average temperature of the earth (averaged over one year and over the surface of the earth): about 14°C. I will show below that this is not a coincidence: The temperature right below the fluctuating temperature wave ‘driven’ by the sun has to be equal to the average value at the surface. It is misleading to attribute the temperature at 10m depth to the ‘hot inner earth’ only.

In this post I am toying with theoretical calculations, but in order not to scare readers off too much, I show the figures first and add the derivation as an appendix. My goal is to compare these results with our measurements, in order to cross-check the assumptions for the thermal properties of ground that I use in numerical simulations of our heat pump system (which I need for modeling, e.g., the expected maximum volume of ice).

I start with this:

  1. The surface temperature varies periodically over a year; I use the maximum, minimum and average temperatures from our measurements (corrected a bit for the recent mild seasons). These are daily averages, as I am not interested in the temperature changes between day and night.
  2. A constant geothermal flow of 65 mW/m2 is superimposed on that.
  3. The slow transport of solar energy into the ground is governed by a thermal property of the ground called thermal diffusivity. It describes ‘how quickly’ a deposited lump of heat will spread; its unit is area per time. I use an assumption for this number, based on values for soil in the literature.

I determine the temperature as a function of depth and time by solving the differential equation that governs heat conduction. This equation tells us how a spatial distribution of heat energy, or ‘temperature field’, slowly evolves with time, given the temperature at the boundary of the part of space in question – in this case, the surface of the earth. Fortunately, the yearly oscillation of air temperature is about the simplest boundary condition one could have, so the solution can be calculated analytically.
Another nice feature of the underlying equation is that it allows for adding up different solutions: I can just add the effect of the real geothermal flow of energy to the fluctuations caused by solar energy.

The result is a ‘damped temperature wave’; the temperature varies periodically with time and space: The spatial maximum of temperature moves from the surface to a point below and back. In summer (beginning of August) the measured temperature is maximum at the surface, but in autumn the maximum is found some meters below – heat then flows back from the ground to the surface:

Temperature wave propagating through ground near the surface of the earth.

Calculated ground temperature, based on measurements of the yearly variation of the temperature at the surface and an assumption of the thermal properties of ground. Calculated for typical middle European maximum and minimum temperatures.

This figure is in line with the images shown in every textbook on geothermal energy. Since the wave is symmetrical about the yearly average, the temperature at about 10m depth, where the wave has ‘run out’, has to be equal to the yearly average at the surface. The wave does not get much of a chance to oscillate, as it is damped down within the first period: the decay length is much shorter than the wavelength.

The geothermal flow just adds a small distortion, an asymmetry of the ‘wave’. It is seen only when switching to a larger scale.

Temperature wave propagating through ground near the surface of the earth - larger scale.

Some data as in previous plot, just extended to greater depths. The geothermal gradient is about 3°C/100m, the detailed value being calculated from the value of thermal conductivity also used to model the fluctuations.

Now varying time instead of space: The greater the depth, the more time it takes for the ground to reach its maximum temperature. The lag of the maximum temperature is proportional to depth: For a difference in depth of 1m it is less than a month.

Temperature wave: Temporal evolution

Temporal change of ground temperature at different depths. The wave is damped, but otherwise simply ‘moving into the earth’ at a constant speed.

Measuring the time difference between the maxima at different depths lets us determine the ‘speed of propagation’ of this wave – its wavelength divided by its period. Actually, the speed depends in a simple way on the thermal diffusivity and the period, as I show below.

This gives me an opportunity to cross-check my assumption for the diffusivity: I need to compare the calculations with the experimentally determined delay of the maximum. We measure ground temperature at different depths, below our ice/water tank but also in undisturbed ground:

Temperature wave - experimental results

Temperature measured with Pt1000 sensors – comparing ground temperature at different depths, and the related ‘lag’. Indicated by vertical dotted lines, the approximate positions of maxima and minima. The lag is about 10-15 days.

The lag derived from the figure is of the same order as the lag derived from the calculation, and thus in accordance with my assumed thermal diffusivity: At 70cm depth, the temperature peak is delayed by about two weeks.

___________________________________________________

Appendix: Calculations and background.

I am trying to give an outline of my solution, plus some ‘motivation’ for where the differential equation comes from.

Heat transfer is governed by the same type of equation that also describes the diffusion of gas molecules and similar phenomena. Something lumped together in space slowly peters out, and spatial irregularities are flattened. Or: The temporal change – the first derivative with respect to time – is ‘driven’ by a spatial curvature, the second derivative with respect to space.

\frac{\partial T}{\partial t} = D\frac{\partial^{2} T}{\partial x^{2}}

This is the heat transfer equation for a region of space that does not contain any sources or sinks of heat – places where heat energy would be created from ‘nothing’ or would vanish, like in an underground nuclear reaction (or the freezing of ice). All we know about the material is encapsulated in the constant D, the thermal diffusivity.

The equation is based on local conservation of energy: The energy stored in a small volume of space can only change if something is created or removed within that volume (‘sources’) or if it flows out of the volume through its surface. This is a very general principle, applicable to almost anything in physics. Without sources or sinks, this translates to:

\frac{\partial [energy\,density]}{\partial t} = -\frac{\partial \overrightarrow{[energy\,flow]}}{\partial x}

The energy density [J/m3] stored in a volume of material by heating it up from some start temperature is proportional to temperature, the proportionality factors being the mass density ρ [kg/m3] and the specific heat cp [J/kgK] of this material. The energy flow per area [W/m2] is typically nearly proportional to the temperature gradient, the constant of proportionality being the heat conductivity κ [W/mK]. The gradient is the first-order derivative in space, so inserting all this we end up with the second derivative in space.

All three characteristic constants of the heat conducting material can be combined into one – the diffusivity mentioned before:

D = \frac{\kappa }{\varrho \, c_{p} }

So changes in more than one of these parameters can compensate for each other; for example, low density can compensate for low conductivity. I hinted at this when writing about heat conduction in our gigantic ice cube: Ice has a higher conductivity and a lower specific heat than water, and thus a much higher diffusivity.
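
A small sketch with rounded textbook values (near 0°C) illustrates the compensation:

```python
def diffusivity(kappa, rho, cp):
    """D = kappa / (rho * cp), with kappa in W/(m K), rho in kg/m3,
    cp in J/(kg K); result in m2/s."""
    return kappa / (rho * cp)

d_ice   = diffusivity(2.2, 917, 2100)    # ≈ 1.1e-6 m2/s
d_water = diffusivity(0.6, 1000, 4200)   # ≈ 1.4e-7 m2/s
print(d_ice / d_water)                   # ice spreads heat ~8 times 'faster'
```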

I am considering a vast area of ground irradiated by the sun, so heat conduction is effectively one-dimensional and temperature changes only along the axis perpendicular to the surface. At the surface, the temperature varies periodically throughout the year; t=0 is to be associated with the beginning of August – our experimentally determined maximum – and the minimum is observed at the beginning of February.

This assumption is just the boundary condition needed to solve the partial differential equation. The real ‘wavy’ variation of temperature is close to a sine wave, which also makes the calculation very easy. As a physicist I have been trained to use a complex exponential function rather than sine or cosine, keeping in mind that only the real part describes the real world. This is a legitimate choice, thanks to the linearity of the differential equation:

T(t,x=0) = T_{0} e^{i\omega t}

with ω being the angular frequency corresponding to one year (2π/ω = 1 year).

It oscillates about 0, with an amplitude of T0 – half the difference between maximum and minimum temperature. But after all, the definition of 0°C is arbitrary and – again thanks to linearity – we can use this solution and just add a constant function to shift it to the desired value. A constant function changes neither with space nor with time and thus solves the equation trivially.

If you had more complicated sources or sinks, you would represent those mathematically as a composition of simpler ‘sources’, for each of which you can find a quick solution, and then add up the solutions – again thanks to linearity. We are lucky that our boundary condition consists of just one such simple harmonic wave, so we guess at the solution for all of space by adding a spatial wave to the temporal one.

So this is the ansatz – an educated guess for a function that we hope will solve the differential equation:

T(t,x) = T_{0} e^{i\omega t + \beta x}

It’s the temperature at the surface, multiplied by an exponential function; x is positive and increases with depth. β is some number we don’t know yet. For x=0 the ansatz equals the boundary temperature. If β were a real, negative number, temperature would decrease exponentially with depth.

The ansatz is inserted into the heat equation, and every differentiation with respect to either space or time just yields a factor; then the exponential function can be cancelled from the heat transfer equation. We end up with a constraint for the factor β:

i\omega = D\beta^{2}

Taking the square root of the complex number yields two solutions:

\beta=\pm \sqrt{\frac{\omega}{2D}}(1+i)

β has a real and an imaginary part: Using it in T(x,t) the real part corresponds to exponential ‘decay’ while the imaginary part is an oscillation (similar to the temporal one).

Both the real and the imaginary part of this function solve the equation (as does any linear combination). So we take the real part and insert β – only the solution with the negative sign makes sense, as the other one would describe a temperature increasing to infinity with depth:

T(t,x) = Re \left(T_{0}e^{i\omega t} e^{-\sqrt{\frac{\omega}{2D}}(1+i)x}\right)

The expression in the exponent has to be dimensionless, so we can express the combination of constants as characteristic lengths, inserting the definition ω = 2π/τ:

T(t,x) = T_{0}\, e^{-\frac{x}{l}}\cos\left(2\pi\left(\frac{t}{\tau}-\frac{x}{\lambda}\right)\right)

The two lengths are:

  • the wavelength of the oscillation \lambda = \sqrt{4\pi D\tau }
  • and the attenuation length  l = \frac{\lambda}{2\pi} = \sqrt{\frac{D\tau}{\pi}}

The ratio of those two lengths does not depend on the properties of the material, and the attenuation length is always much shorter than the wavelength. That’s why hardly a single period is visible in the plots.

The plots have been created with these parameters:

  • Heat conductivity κ = 0,0019 kW/mK
  • Density ρ = 2000 kg/m3
  • Specific heat cp = 1,3 kJ/kgK
  • Period τ = 1 year = 8760 hours

Thus:

  • Diffusivity D = 0,002631 m2/h
  • Wavelength λ = 17 m
  • Attenuation length l = 2,7 m
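
These numbers – and the lag of the temperature maximum discussed below – can be reproduced with a few lines:

```python
import math

kappa = 1.9    # heat conductivity in W/(m K), i.e. 0,0019 kW/mK
rho = 2000     # density in kg/m3
cp = 1300      # specific heat in J/(kg K), i.e. 1,3 kJ/kgK
tau = 8760     # period: one year in hours

D = kappa / (rho * cp) * 3600                  # diffusivity in m2/h
wavelength = math.sqrt(4 * math.pi * D * tau)  # in m
attenuation = wavelength / (2 * math.pi)       # in m
print(D, wavelength, attenuation)              # ≈ 0.00263 m2/h, 17 m, 2.7 m

v = wavelength / tau                           # speed of the wave in m/h
print(0.7 / v / 24)                            # lag at 0,7 m depth: ≈ 15 days
```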

The wave (any wave) propagates with a speed v equal to wavelength over period: v = λ/τ.

v = \frac{\lambda}{\tau} = \frac{\sqrt{4\pi D\tau}}{\tau} = \sqrt{\frac{4\pi D}{\tau}}

The speed depends only on the period and the diffusivity.

The maximum of the temperature as observed at a certain depth x is delayed by a time equal to x over v. Cross-checking our measurements of the temperatures T(30cm) and T(100cm), I would thus expect a delay of 0,7m / (17m/8760h) = 360 h = 15 days, which is approximately in agreement with experiments (comparing orders of magnitude). Note one thing though: Only the square root of D enters the calculation, so any error I make in the assumptions for D will be generously reduced.

I have not yet included the geothermal linear temperature gradient in the calculation. Again we are grateful for linearity: A linear – zero-curvature – temperature profile that does not change with time is also a trivial solution of the equation that can be added to our special exponential solution.

So the full solution shown in the plot is the sum of:

  • The damped oscillation (oscillating about 0°C)
  • Plus a constant denoting the true yearly average temperature
  • Plus a linear increase with depth – the geothermal gradient – the linear correction being 0 at the surface to meet the boundary condition.

If there were no geothermal gradient (thus no flow from beneath), the temperature at infinite depth (practically, at 20m) would be the same as the average temperature at the surface.

Daily changes could be taken into account by adding yet another solution that satisfies an amended boundary condition: daily fluctuations of temperature superimposed on the yearly oscillations. The derivation would be exactly the same, just with a period different by a factor of 365. Since the characteristic lengths go with the square root of the period, yearly and daily lengths differ only by a factor of about 19.

________________________________________

Further reading:

Intro to geothermal energy:

A quick intro to geothermal energy.
Where does geothermal energy come from?

Geothermal gradient and energy of the earth:

Earth’s heat energy budget
Geothermal gradient
Radius and mass of earth

These borehole data, plotted on a single scale, show the gradient plus the disturbed surface region, with not much of a neutral zone in between.

Theory of Heat Conduction

Heat Transfer Equation on Wikipedia
Textbook on Heat Conduction, available on archive.org in different formats.

I have followed the derivation of temperature waves given in my favorite German physics book on thermodynamics and statistics, by my late theoretical physics professor Wilhelm Macke. This page quotes the classic on heat conduction by Carslaw and Jaeger, plus the main results for the characteristic lengths.

The Impact of Ambient Temperature on the Output Power of Solar Panels

I have noticed the impact of traversing clouds on solar power output: Immediately after a cloud has passed, power surges to a record value. This can be attributed to the focusing effect of the surrounding clouds and/or cooling of the panels. Comparing data for cloudless days in May and June, I noticed a degradation of power – most likely due to higher ambient temperatures in June.

We had a record-breaking summer here, so I wondered if I could prove this effect using data taken on extremely hot days. There is no sensor on the roof to measure temperature and radiation directly at the panels, but we log data every 90 seconds for:

  • Ambient air temperature
  • Global radiation on a vertical plane, at the position of the solar thermal collector used with the heat pump system.

I was looking for the following:

  • Two (nearly) cloudless days, in order to rule out the impact of shadowing at different times of the day.
  • These days should not be separated by too many other days, to rule out the effect of the changing daily path of the sun.
  • Ideally, air temperature should be very different on these days, but global radiation should be the same.

I found such days: August 1 and August 12, 2015:

Daily PV output energies and ambient temperatures in August 2015

Daily output of the photovoltaic generator (4,77 kW peak), compared to average and maximum air temperatures and to the global radiation on a vertical plane. Dotted vertical lines indicate three days nearly without clouds.

August 12 was a record-breaking day with a maximum temperature of 39,5°C. August 1 was one of the ‘cool’ but still perfectly sunny days in August. The ‘cold’ day resulted in a much higher PV output, despite similar input in terms of radiation. For cross-checking I have also included August 30: still fairly hot, but showing a rather high PV output, at a slightly higher input energy.

August 2015 in detail:

Daily PV output energies and ambient temperatures in August 2015 – details

Same data as previous plot, zoomed in on August. Dotted lines indicate the days compared in more detail.

Overlaying the detailed curves for temperature and power output over time for the three interesting days:

PV power and ambient temperature over time

Detailed logging of ambient air temperature and output power of the photovoltaic generator on three nearly cloudless days in August 2015.

The three curves are stacked ‘in reverse order’:

The higher the ambient air temperature, the lower the output power.

Note that the effect of temperature can more than compensate for the actually higher radiation for the middle curve (August 30).

I have used global radiation on a vertical plane as an indicator of radiation, not claiming that it is related in a simple way to the radiation that would be measured on the roof – or on a horizontal plane, as is usually done. We measure radiation at the position of our ribbed pipe collector that serves as a heat source for the heat pump; the sensor is oriented vertically so that it matches the orientation of that collector and allows us to use these data as input for our simulations of the performance of the heat pump system.

Our house casts a shadow on the solar collector and this sensor in the afternoon; therefore the data show a cut-off in the afternoon:

Global radiation on solar collector, vertical plane, August 2015

Global radiation in W per square meter on a vertical plane, measured at the position of the solar collector. The collector is installed on the ground, fence-like, behind the house, about north-east of it.

Yet if you compare two cloudless days on which the sun traversed about the same path (days close in the calendar), you can conclude that solar radiation everywhere – including at the position on the roof – was the same, provided these oddly shaped curves are alike.

This plot shows that the curves for these two days that differed a lot in output and temperature, August 1 and 12, were really similar. Actually, the cooler day with the higher PV output, August 1, even showed the lower solar radiation, due to some spikes. Since the PV inverter only logs every 5 minutes whereas our system’s monitoring logs every 1,5 minutes, those spikes might have been averaged out in the PV power curves. August 30 clearly showed higher radiation, which can account for the higher output energy. But – as shown above – the higher solar power could not compensate for the higher ambient temperature.

___________________________

Logging setup:

An Efficiency Greater Than 1?

No, my next project is not building a Perpetuum Mobile.

Sometimes I mull over definitions of performance indicators. It seems straightforward that the efficiency of a wood log or oil burner is smaller than 1: Due to various losses and incomplete combustion, you will never be able to turn the full caloric value into useful heat.

Our solar panels have an ‘efficiency’ or power ratio of about 16,5%. So only 16,5% of the solar energy is converted to electrical energy, which does not seem a lot. However, that number is meaningless without economic context, as solar energy is free. Higher efficiency allows for much smaller panels. If efficiency were only 1% but panels were incredibly cheap and I had ample roof space, I might not care, though.

The coefficient of performance (COP) of a heat pump is 4-5, which sometimes leaves you with a weird feeling: Electrical power is ‘multiplied’ by a factor always greater than one. Is that based on crackpottery?

Heat pump.

Our heat pump. (5 connections: 2x heat source – brine; 3x heating water – hot water / heating supply, plus the joint return.)

Actually, we are cheating here when considering the ‘input’ – in contrast to the way we view photovoltaic panels: If 1 kW of electrical power is magically converted to 4 kW of heating power, the remaining 3 kW are provided by a cold or lukewarm heat source. Since those are (economically) free, they don’t count. But you might still wonder why the number is so much higher than 1.

My favorite answer:

There is an absolute minimum temperature, and our typical refrigerators and heat pumps operate well above it.

The efficiency of thermodynamic machines is most often explained by starting with an ideal process using an ideal substance – a perfect gas serving as a refrigerant that runs in a closed circuit. (For more details see the pointers in the Further Reading section below.) The gas is expanded at a low temperature. This low temperature is constant, as heat is transferred from the heat source to the gas. At a higher temperature the gas is compressed and releases heat. The heat released is the sum of the heat taken in at the lower temperature plus the electrical energy fed into the compressor – so there is no violation of energy conservation. In order to ‘jump’ from the lower to the higher temperature, the gas is compressed – by a compressor run on electrical power – without exchanging heat with the environment. This process repeats itself again and again, and with every cycle the same heat energy is released at the higher temperature.

In defining the coefficient of performance the energy from the heat source is omitted, in contrast to the electrical energy:

COP = \frac {\text{Heat released at higher temperature per cycle}}{\text{Electrical energy fed into the compressor per cycle}}

The efficiency of a heat pump is the inverse of the efficiency of an ideal engine – the same machine, running in reverse. The engine has an efficiency lower than 1 as expected. Just as the ambient energy fed into the heat pump is ‘free’, the related heat released by the engine to the environment is useless and thus not included in the engine’s ‘output’.

One of Austria’s last coal power plants – Kraftwerk Voitsberg, retired in 2006 (Florian Probst, Wikimedia). Thermodynamically, this is like a heat pump running in reverse. That’s why I don’t like it when a heat pump is said to ‘work like a refrigerator, just in reverse’ (hinting at: the useful heat provided by the heat pump is equivalent to the waste heat of the refrigerator). If you ran the cycle backwards, a heat pump would become a sort of steam power plant.

The calculation (see below) results in a simple expression, as the efficiency only depends on temperatures. Naming the higher temperature (heating water) T1 and the temperature of the heat source (‘environment’, our water tank for example) T2:

COP = \frac {T_1}{T_1-T_2}

The important thing here is that temperatures have to be entered as absolute values: 0°C is equal to 273,15 Kelvin, so for a typical heat pump and floor loops the numerator is about 308 K (35°C), whereas the denominator is the difference between both temperature levels – 35°C and 0°C, thus 35 K. So the theoretical COP is as high as 8,8 (see also the sketch after the examples below)!

Two silly examples:

  • If the heat pump operated close to absolute zero – say, pumping heat from 5 K to 40 K – the COP would only be
    40 / 35 = 1,14.
  • On the other hand, using the sun as a heat source (6000 K), the COP would be
    6035 / 35 = 172.
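
The same as a one-function sketch, with temperatures in Kelvin:

```python
def cop_ideal(t_hot, t_cold):
    """Ideal (Carnot) heating COP; both temperatures in Kelvin."""
    return t_hot / (t_hot - t_cold)

print(cop_ideal(35 + 273.15, 0 + 273.15))  # floor heating from 0°C: ≈ 8.8
print(cop_ideal(40, 5))                    # near absolute zero: ≈ 1.14
print(cop_ideal(6035, 6000))               # sun as heat source: ≈ 172
```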

So, as heat pump owners we are lucky to live in an environment that is rather hot compared to absolute zero – on a planet where temperatures don’t vary that much from place to place, compared to how far away we are from absolute zero.

__________________________

Further reading:

Richard Feynman often used unusual approaches and new perspectives when explaining the basics in his legendary Physics Lectures. He introduces (potential) energy at the very beginning of the course, drawing on Carnot’s argument – even before he defines force, acceleration, velocity etc. (!) When deriving the efficiency of an ideal thermodynamic engine many chapters later, he pictures a funny machine made from rubber bands, but otherwise follows the classical arguments:

Chapter 44 of Feynman’s Physics Lectures Vol 1, The Laws of Thermodynamics.

For an ideal gas, heat energies and mechanical energies are calculated for the four steps of Carnot’s ideal process – based on the Ideal Gas Law. The result is the much more universal efficiency given above. There can’t be any better machine, as combining an ideal engine with an ideal heat pump / refrigerator (the same type of machine running in reverse) would violate the second law of thermodynamics – stated as a principle: Heat cannot flow from a colder to a warmer body and be turned into mechanical energy, with the rest of the system staying the same.

Pressure over volume for Carnot’s process, when using the machine as an engine (run counter-clockwise, it describes a heat pump): AB: Expansion at constant high temperature. BC: Expansion without heat exchange (cooling). CD: Compression at constant low temperature. DA: Compression without heat exchange (gas heats up). (Image: Kara98, Wikimedia.)

Feynman stated several times in his lectures that he did not want to teach the history of physics, and he downplayed the importance of learning about the history of science a bit (though it seems he was well versed in it – as, e.g., his efforts to follow Newton’s geometrical proof of Kepler’s Laws showed). For the historical background of the evolution of Carnot’s ideas and his legacy, see the definitive resource on classical thermodynamics and its history – Peter Mander’s blog carnotcycle.wordpress.com:

What had once puzzled me is why we accidentally latched onto such a universal law, using just the Ideal Gas Law. The reason is that the Gas Law has the absolute temperature already built in. Historically, it took quite a while until pressure, volume and temperature had been combined in a single equation – see Peter Mander’s excellent article on the historical background of this equation.

Having explained Carnot’s cycle and efficiency, every course in thermodynamics reveals a deeper insight: The efficiency of an ideal engine can actually be used as a starting point for defining a new scale of temperature.

Temperature scale according to Kelvin (William Thomson)

Carnot engines with different efficiencies due to different lower temperatures. If one of the temperatures is declared the reference temperature, the other can be determined by / defined by the efficiency of the ideal machine (Image: Olivier Cleynen, Wikimedia.)

However, according to the following paper, Carnot did not rigorously prove that his ideal cycle would be the optimum one. But it can be done, applying variational principles – optimizing the process for maximum work done or maximum efficiency:

Carnot Theory: Derivation and Extension, paper by Liqiu Wang

Random Thoughts on Temperature and Intuition in Thermodynamics

Recently we felt a disturbance of the Force: It has been demonstrated that the absolute temperature of a real system can be pushed to negative values.

The interesting underlying question is: What is temperature really? Temperature seems to be an intuitive everyday concept, yet the explanations of ‘negative temperatures’ prove that it is not.

Actually, atoms have not really been ‘chilled to negative temperatures’. I pick two explanations of this experiment that I found particularly helpful – and entertaining:

As Matt points out, the issue is simply that, formally, temperature is a relationship between energy and entropy, and you can do some weird things to entropy and energy and get the formal definition of temperature to come out negative.

Aatish manages to convey the fact that temperature is inversely proportional to the slope of the entropy-versus-energy curve, using compelling analogies from economics. The trick is to find meaningful economic terms that are related in a way similar to the obscure physical properties you want to explain. MinutePhysics did something similar in explaining fundamental forces (I cannot resist this digression):

I once worked in laser physics, so Matt’s explanation involving a two-level system speaks to me. It avoids touching on entropy, and thus avoids using the mysterious term entropy to explain mysterious temperature.

You can calculate the probabilities of the populations of these two states from temperature – or vice versa. If you manage to tweak the populations by some science-fiction-like method (creating non-equilibrium states), you can end up with a distribution that formally results in negative temperatures if you run the math backwards.
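
A toy version of running that math backwards for a two-level system – the energy gap is a made-up number:

```python
import math

k_B = 1.381e-23   # Boltzmann constant in J/K
dE = 1e-20        # energy gap of the two-level system in J (made up)

def temperature(n_upper, n_lower):
    """Invert the Boltzmann factor n_upper/n_lower = exp(-dE/(k_B*T))."""
    return -dE / (k_B * math.log(n_upper / n_lower))

print(temperature(0.2, 0.8))  # normal population: T ≈ +522 K
print(temperature(0.8, 0.2))  # inverted population: T ≈ -522 K (!)
```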

[In order to allow for tagging this post with Physics in a Nutshell I need to state that the nutshell part ends here.]

But how come that ‘temperature’ ever became such an abstract concept?

From a very pragmatic perspective, focussed on macroscopic, everyday phenomena, temperature is what we measure with thermometers – that is: calculated from the change of the volume of gases or liquids.

You do not need any explanation of what temperature or even entropy really is if you want to design efficient machines, such as turbines.

As a physics PhD working towards an MSc in energy engineering, I have found lectures in Engineering Thermodynamics eye-opening:

As a physicist I had been trained to focus on fundamental explanations: What is entropy really? How do we explain physical properties microscopically? That is: by calculating statistical averages of the properties of zillions of gas molecules, or by imagining an abstract ‘hyperspace’ whose number of dimensions is proportional to the number of particles. The system as such moves through this abstract space as time passes.

In engineering thermodynamics, the question What is entropy? was answered by: Consider it a property that can be calculated (and used to evaluate machines and processes).

Rankine cycle with reheat

Temperature-entropy diagram for steam. The red line represents a process called the Rankine cycle: A turbine delivers mechanical energy while temperature and pressure of the steam decrease.

New terms in science have been introduced for fundamental conceptual reasons and/or because they come in handy in calculations. In my view, enthalpy belongs to the second class, as it makes descriptions of gases and fluids flowing through apparatuses more straightforward.

Entropy is different, even though it, too, can be reduced to its practical aspects: Entropy has been introduced in order to tame heat and irreversibility.

Richard Feynman stated (in Vol. I of his Physics Lectures, published 1963) that research in engineering has contributed twice to the foundations of physics: the first time when Sadi Carnot formulated the Second Law of Thermodynamics (which can be stated in terms of an ever-increasing entropy), and the second time when Shannon founded information theory – using the term entropy in a new way. So musing about entropy and temperature is where hands-on engineering meets the secrets of the universe.

I tend to say that temperature has never been that understandable and familiar in the first place:

Investigations of the behavior of ideal gases (fortunately air, even moist air, behaves like an ideal gas) have revealed that there needs to be an absolute zero of temperature – at which the volume of an ideal gas would approach zero.

When Clausius coined the term Entropy in 1850 (*), he was searching for a function that allows one to depict any process in a diagram such as the figure above, in a sense.
(*) Edit 1 – Jan. 31: Thanks to a true historian of science.

Heat is a vague term – it only exists ‘in transit’: Heat is exchanged, but you cannot assign a certain amount of heat to a state. Clausius searched for a function that could be used to denote one specific state in such a map of states, and he came up with a beautiful and simple relationship: The differential change in heat is equal to the change in entropy times the absolute temperature! So temperature entered the mathematical formulation of the laws of thermodynamics via something really non-intuitive done with differentials.
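
In modern notation, for a reversible exchange of heat:

\mathrm{d}Q_{rev} = T \, \mathrm{d}S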

Entropy really seems to be the more fundamental property. You could actually start from the Second Law and define temperature in terms of the efficiency of perfect machines that are just limited by the fact that entropy can only increase (or that heat always needs to flow from the hotter to the colder object):

Stirling motor – converting heat drawn from a hot gas to mechanical energy. Its optimum efficiency would be similar to that of Carnot’s theoretical machine.

The more we learn about the microscopic underpinnings of laws that had been introduced phenomenologically before, the less intuitive the explanations become. It does not help to try to circumvent entropy by considering what each of the particles in the system does. We think of temperature as some average over velocities (squared). But a single particle travelling its path through empty space would not have a temperature, and neither would any directed motion of a beam of particles contribute to it. So temperature is better defined via the spread of the velocity distribution – the deviation from the mean.

Even if we consider simple gas molecules, we could define different types of temperature: There is a kinetic temperature calculated from velocities. In the long run – when equilibrium has been reached – the other degrees of freedom (such as rotations) exhibit the same temperature. But when a gas is heated up, heat is transferred via collisions: First the kinetic temperature rises, then the energy is transferred to rotations. You could calculate a temperature from rotations, and for a while this temperature would differ from the kinetic temperature.

So temperature is a property derived from what an incredible number of single particles do. It is a statistical property, and it only makes sense when the system has had enough time to reach equilibrium. As soon as we push the microscopic constituents of the system away from their equilibrium behaviour, we get strange results for temperature – such as negative values.

_______________________
Further reading:
This post was also inspired by some interesting discussions on LinkedIn a while ago – on the second law and the nature of temperature.
(*) Edit 2 – Feb. 2: Though Clausius is known as the creator of the term entropy, the concept as such has been developed earlier by Rankine.