Network Sniffing for Everyone – Getting to Know Your Things (As in Internet of Things)

Simple Sniffing without Hubs or Port Mirroring for the Curious Windows User
[Jump to instructions and skip intro]

Your science-fiction-style new refrigerator might go online to download the latest offers or order more pizza after checking your calendar and noticing that you have to finish a nerdy project soon.

It may depend on your geekiness or faith in things or their vendors, but I absolutely need to know more about the details of this traffic. How does the device authenticate to the external partner? Is the connection encrypted? Does the refrigerator company spy on me? Launch the secret camera and mic on the handle?

In contrast to what the typical hacker movie might imply, you cannot simply sniff all the traffic on a network, even if you have physical access to all the wiring.

In the old days, that was easier. Computers were connected using coaxial cables:

10base2 T-piece

Communications protocols are designed to deal with any device talking to any other device on the network at any time – there are mechanisms to sort out collisions. When computers want to talk to each other, they use (logical) IP addresses that need to get translated to physical device (MAC) addresses. Every node in the network can store the physical addresses of its peers in the local subnet. If it does not already know the MAC address of the recipient of a message, it shouts out a broadcast message to everybody and learns MAC addresses from the replies. But packets intended for one recipient are still visible to every other party!
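The resolution logic described above can be sketched in a few lines of Python – a toy model for illustration only, with made-up addresses, not a real ARP implementation:

```python
# Toy model of ARP on a shared medium (coax or hub): every node keeps a
# local cache of IP -> MAC mappings and "broadcasts" a query when a
# mapping is missing. All addresses below are made up.

network = {  # who actually owns which IP on this segment
    "192.168.0.10": "aa:bb:cc:00:00:10",
    "192.168.0.20": "aa:bb:cc:00:00:20",
}

arp_cache = {}  # this node's learned IP -> MAC table

def resolve(ip):
    """Return the MAC for an IP; 'broadcast' a who-has query if unknown."""
    if ip not in arp_cache:
        # In reality this is an ARP request to ff:ff:ff:ff:ff:ff that every
        # node on the segment sees; the owner of the IP replies with its MAC.
        arp_cache[ip] = network[ip]
    return arp_cache[ip]
```

Once learned, the mapping stays in the cache – but on a shared medium the query and all subsequent packets were visible to everybody anyway.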

A hub does (did) basically the same thing as coaxial cables, only the wiring was different. My very first ‘office network’ more than 15 years ago was based on a small hub that I have unfortunately disposed of.

Nowadays even the cheapest internet router uses a switch – it looks similar but works differently:

A switch minimizes traffic and collisions by memorizing the MAC addresses associated with different ports (‘jacks’). If a notebook wants to talk to the local server, the packet is sent from the notebook to the switch, which forwards it to the port the server is connected to. Another curious employee’s laptop could not see that traffic.
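The learning behaviour of a switch can be sketched as a toy model – hypothetical port numbers and MAC labels, just to show why the curious laptop sees nothing:

```python
# Toy model of a learning switch: it memorizes which MAC address was seen
# on which port and forwards unicast frames only to that port; frames for
# unknown destinations are flooded to all other ports.

class Switch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}  # mac -> port

    def forward(self, in_port, src_mac, dst_mac):
        """Return the list of ports the frame is sent out on."""
        self.mac_table[src_mac] = in_port  # learn where the sender lives
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]  # unicast: exactly one port
        return [p for p in self.ports if p != in_port]  # flood to the rest
```

After one exchange in each direction, traffic between two hosts no longer reaches any third port – which is exactly what spoils naive sniffing.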

This is fine from the perspective of avoiding collisions and performance but a bad thing if you absolutely want to know what’s going on.

I could not resist using the clichéd example of the refrigerator, but there are really more and more interesting devices that make outbound connections – or even effectively facilitate inbound ones – so that you can connect to your thing from the internet.

Using a typical internet connection and router, a device on the internet cannot make an unsolicited inbound connection unless you open up respective ports on your router. Your internet provider may prevent this: Either you don’t have access to your router at all, or your router’s external internet address is still not a public one.

In order to work around this nuisance, some devices may open a permanent outbound connection to a central rendezvous server. As soon as somebody wants to connect to the device behind your router, the server utilizes this existing connection that is technically an outbound one from the perspective of the device.
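A minimal in-memory sketch of this rendezvous pattern – class names and the message flow are hypothetical; real products keep a persistent outbound TCP connection and relay traffic over it:

```python
# Sketch: the device registers with a central server over a connection it
# opened itself (outbound from NAT's perspective); an external client then
# reaches the device by reusing that existing channel.

class RendezvousServer:
    def __init__(self):
        self.device_channels = {}  # device_id -> delivery callback

    def register(self, device_id, deliver):
        # Initiated by the device: everything that follows rides on this
        # technically-outbound connection.
        self.device_channels[device_id] = deliver

    def connect_to_device(self, device_id, message):
        # An external client asks the server to forward traffic.
        return self.device_channels[device_id](message)

class Device:
    def __init__(self, device_id, server):
        self.inbox = []
        server.register(device_id, self.receive)  # the outbound registration

    def receive(self, message):
        self.inbox.append(message)
        return "ack from device"
```

No port forwarding is ever configured on the router – the device dialed out, so replies simply flow back over the established connection.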

Remote support tools such as TeamViewer use technologies like that to allow helping users behind firewalls. Internet routers do that, too: D-Link calls their respective series Cloud Routers (and stylish those things have become, haven’t they?).

How to: Set up your Windows laptop as a sniffer-router

If you want to sniff traffic from a blackbox-like device trying to access a server on the internet, you would need a hub – which is very hard to get these days; you may find some expensive used ones on eBay. Another option is to use a switch that supports port mirroring: all traffic on the network is replicated to a specific port, and connecting your sniffer computer to that port you could inspect all the packets.

But I was asking myself for the fun of it:

Is there a rather simple method a normal Windows user could use though – requiring only minimal investment and hacker skills?

My proposed solution is to force the interesting traffic to go through your computer – that is turning this machine into a router. A router connects two distinct subnets; so the computer needs two network interfaces. Nearly every laptop has an ethernet RJ45 jack and wireless LAN – so these are our two NICs!

I am assuming that the thing to be investigated has wired rather than wireless LAN, so we want…

  • … the WLAN adapter to connect to your existing home WLAN and then the internet.
  • … the LAN jack to connect to a private network segment for your thing. The thing will finally access the internet through a cascade of two routers.

Routing is done via a rarely used Windows feature that experts will mock – but it does the job and is built in: so-called Internet Connection Sharing.

Additional hardware required: A crossover cable: The private network segment has just a single host – our thing. (Or you could use another switch for the private subnet – but I am going for the simplest solution here.)

Software required: Some sniffer such as the free software Wireshark.

That’s the intended network setup (using 192.168.0.x as a typical internal LAN subnet):

|    Thing    |       |      Laptop Router      |      |Internet Router
|     LAN     |-cross-|     LAN     |    WLAN   |-WLAN-|Internal LAN
|192.168.137.2|       |192.168.137.1|192.168.0.2|      |192.168.0.1
  • Locate the collection of network adapters, in Windows 7 this is under
    Control Panel
    –Network and Internet
    —-View Network Status and Tasks
    ——Change Adapter Settings
  • In the Properties of the WLAN adapter click the Sharing tab and check the option Allow other network users to connect through this computer’s Internet connection.
  • In the drop-down menu all other network adapters except the one to be shared should be visible – select the one representing the RJ45 jack, usually called Local Area Connection.

Internet Connection Sharing

  • Connect the RJ45 jack of the chatty thing (usually tagged LAN) to the LAN jack of your laptop with the crossover cable.
  • If it uses DHCP (most devices do), it will be assigned an IP address in the 192.168.137.x network. If it doesn’t and needs a fixed IP address, you should configure it for an address in this network with x other than 1. The router-computer will be assigned 192.168.137.1 and is the DHCP server, DNS server, and the default gateway.
  • Start Wireshark, click Capture…, Interfaces, locate the LAN adapter with IP address 192.168.137.1, and click Start.
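The addressing in this setup can be sanity-checked with Python’s ipaddress module (192.168.137.0/24 is the subnet Windows Internet Connection Sharing uses by default):

```python
import ipaddress

# The private segment created by Internet Connection Sharing:
ics_subnet = ipaddress.ip_network("192.168.137.0/24")
gateway = ipaddress.ip_address("192.168.137.1")  # the laptop-router
thing = ipaddress.ip_address("192.168.137.2")    # the device under test

# Both addresses live in the same private (RFC 1918) subnet,
# so the thing's traffic must pass through the laptop to get anywhere.
assert gateway in ics_subnet and thing in ics_subnet
assert ics_subnet.is_private
print(f"{thing} reaches the internet via {gateway}")
```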

Now you see all the packets this device may send to the internet.

I use an innocuous example now:

On connecting my Samsung Blu-ray player, I see some interesting traffic:

Samsung bluray, packets

The box gets an IP address via DHCP (only last packet shown – acknowledgement of the address), then tries to find the MAC address for the router-computer 192.168.137.1 – a Dell laptop – as it needs to consult the DNS service there and ask for the IP address corresponding to an update server whose name is obviously hard-coded. It receives a reply, and the – fortunately non-encrypted – communication with the first internet-based address is initiated.

Follow TCP stream shows more nicely what is going on:

Samsung bluray player wants to update

The player sends an HTTP GET to the script liveupdate.jsp, appending the model, version number, and location in the European Union. Since the player is behind two routers – that is, NAT devices – Samsung sees this coming from my Austrian IP address.
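Such a request can be imitated in a few lines of Python – the host name, path prefix, and parameter names below are made up for illustration; the real ones are hard-coded in the player’s firmware:

```python
from urllib.parse import urlencode

# Hypothetical reconstruction of a firmware update check over plain HTTP.
params = {"model": "BD-XXXX", "fwversion": "1.0", "region": "EU"}
request = (
    f"GET /servlet/liveupdate.jsp?{urlencode(params)} HTTP/1.1\r\n"
    "Host: liveupdate.example.com\r\n"  # made-up update server name
    "\r\n"
)
print(request)
```

Because nothing is encrypted, exactly this string is what shows up in Wireshark’s Follow TCP stream view.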

The final reply is a page reading [NO UPDATE], and they sent me a cookie that is going to expire 3.5 years in the past 😉 So probably this does not work anymore.

As I said – this was an innocuous example. I just wanted to demonstrate that you never know what will happen if you can’t resist connecting your things to your local computer network. You might argue that normal computers generate even more traffic trying to contact all kinds of update servers – but in this case you 1) can just switch on the sniffer and see that traffic without any changes to be made to the network and 2) as the owner of your computers you could in principle control it.

Edit: Added the ASCII ‘networking diagram’ based on feedback!

________________________________

Further reading:

Peer-to-Peer Communication Across Network Address Translators – an overview of different techniques to allow for communications of devices behind NAT devices such as firewalls or internet routers.

Ethernet and Address Resolution Protocol (ARP) on Wikipedia

Sniffing Tutorial part 1 – Intercepting Network Traffic: Overview on sniffing options: dumb hubs, port mirroring, network tap.

Diffusion of iTechnology in Corporations (or: Certificates for iPhones)

[Jump to technical stuff]

Some clichés are true. One I found confirmed often is about how technologies are adopted within organizations: One manager meets another manager at a conference / business meeting / CIO event. Manager X shows off the latest gadget and/or presents a case study of a successful implementation of Y.

Another manager becomes inspired, and after returning home he immediately calls upon his poor subordinates and has them implement Y – absolutely, positively, ASAP.

I suspect that this is the preferred diffusion mechanism for implementing SAP at any kind of organization or for the outsourcing hype (probably also the insourcing-again movement that followed it).

And I definitely know this works that way for iSomething such as iPhones and iPads – even if iSomething might not be the officially supported standard. But no matter how standardized IT and processes are, there is always something like VIP support. I vividly remember once being told that we (the IT guys) should not be so overly obliging when helping users – unless I (the top manager) need something.

So trying to help those managers is the root cause for having to solve a nice puzzle: iThings need access to the network and thus often need digital certificates. Don’t tell me that certificates might not be the perfect solution – I know that. But working in some sort of corporate setting you are often not in the position to bring up these deep philosophical questions again and again, so let’s focus on solving the puzzle:

[Technical stuff – I am trying a new format to serve different audiences here]

Certificates for Apple iPhone 802.1x / EAP-TLS WLAN Logon

The following is an environment you would encounter rather frequently: Computer and user accounts are managed in Microsoft Active Directory – providing both the Kerberos authentication infrastructure and the LDAP directory. Access to the wireless LAN is handled by RADIUS authentication using Windows Network Policy Server, and client certificates are mandatory as per the RADIUS policies.

You could require 802.1x authentication to be done by user accounts and/or machine accounts (though it is a common misunderstanding that in this way you can enforce a logon by 1) the computer account and then 2) the user account at the same machine). I am now assuming that computers (only) are authenticated. Thus the iDevice needs to present itself as a computer to the logon servers.

Certificates contain lots of fields, and the standards either don’t clearly specify what should go into those fields and/or applications interpret the standards in weird ways. Thus the pragmatic approach is to tinker and test.

This is the certificate design that works for iPhones according to my experience:

  • We need a ‘shadow account’ in Active Directory whose properties will match fields in the certificates. Two LDAP attributes need to be set:
    1. dnsHostName: machine.domain.com
      This is going to be mapped onto the DNS name in the Subject Alternative Name of the certificate.
    2. servicePrincipalNames: HOST/machine.domain.com
      This makes the shadow account a happy member of the Kerberos realm.

    According to my tests, the creation of an additional name mapping – as recommended here – is not required. We are using Active Directory default mapping here – DNS machine names work just as users’ UPNs (User Principal Name – the logon name in user@domain syntax. See e.g. Figure 21 – Certificate Processing Logic – in this white paper for details.)

  • Extensions and fields in the certificate
    1. Subject Alternative Name: machine.domain.com (mapped to the DNS name dnsHostName in AD)
    2. Subject CN: host/machine.domain.com. This is different from Windows computers – as far as I understood what’s going on from RADIUS logging, the Apple 802.1x client sends the string just as it appears in the CN. Windows clients would add the prefix host/ automatically.
    3. If this is a Windows Enterprise PKI: Copy the default template Workstation Authentication, and configure the Subject Name as to be submitted with the Request. The CA needs to accept custom SANs via enabling the  EDITF_ATTRIBUTESUBJECTALTNAME2 flag. Keys need to be configured as exportable to carry them over to the iDevice.
  • Create the key, request, and certificate on a dedicated enrollment machine. Note that this should be done in the context of the user rather than the local machine. Certificates and keys can be transported to other machines as PKCS#12 (PFX) files.
  • Import the key and certificate to the iPhone using the iPhone Configuration Utility – this tool allows for exporting directly from the current user’s store. So if the user does not enroll for those certificates himself (which makes sense, as the enrollment procedure is somewhat special, given the custom names), the PFX files would first be imported to the user’s store and then exported from there to the iPhone.
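The mapping this design relies on boils down to string comparison; a minimal sketch – attribute names follow Active Directory conventions, all values are made up:

```python
# Sketch of AD's implicit certificate mapping for the shadow account:
# the DNS name in the certificate's Subject Alternative Name is matched
# against the dnsHostName attribute of an account.

shadow_accounts = [
    {"sAMAccountName": "IPHONE01$",
     "dnsHostName": "machine.domain.com",
     "servicePrincipalNames": ["HOST/machine.domain.com"]},
]

certificate = {
    "subject_cn": "host/machine.domain.com",  # what the Apple client sends
    "san_dns": "machine.domain.com",          # mapped to dnsHostName
}

def map_certificate(cert, accounts):
    """Return the account name whose dnsHostName matches the cert's SAN."""
    for acct in accounts:
        if acct["dnsHostName"].lower() == cert["san_dns"].lower():
            return acct["sAMAccountName"]
    return None
```

If no account carries a matching dnsHostName, the logon simply fails – there is no binary comparison of certificate files anywhere in this flow.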

The point I’d like to stress in relation to certificates is that logon against AD is based on matching strings – containing the DNS names – not on a binary comparison of a file presented by the client versus a certificate file in the directory.

I have encountered that misconception often, as there is an attribute in AD – userCertificate – that is actually designed for holding users’ (or machines’) certificates. But this is more of an Alice-tries-to-get-Bob’s-public-key-phonebook-style attribute, and it is not intended to be used for authentication but rather for encryption – Outlook searches for S/MIME e-mail recipients’ public keys there. Disclaimer: I cannot vouch for any custom application that may exist.

Authentication is secure nonetheless, as the issuing CA’s certificate needs to be present in a special LDAP object, the so-called NTAuth object in Active Directory’s Configuration container, and by default it can only be edited by Enterprise Admins – the ‘root admins’ of AD. In addition you have to configure the CA to accept arbitrary SANs in requests.

IPhone Fashion Valley

Happy iPhone users with their iPhones, when the product was released in 2007. I have never owned any iThing so I need to borrow an image from Wikimedia (user 1DmkIIN).

The Strange World of Public Key Infrastructure and Certificates

An e-mail discussion related to my recent post on IT security has motivated me to ponder about issues with Public Key Infrastructure once more. So I attempt – most likely in vain – to merge a pop-sci introduction to certificates with sort of an attachment to said e-mail discussion.

So this post might be opaque to normal users and too epic and introductory for security geeks. I apologise for the inconvenience.

I mentioned the failed governmental PKI pilot project in that post – a hardware security device destroyed the key and there was no backup. I would have made fun of this – had I not experienced so often that it is the so-called simple processes and logistics that can go wrong.

Ponte Milvio love padlocks

I didn’t expect to find such a poetic metaphor for “security systems” rendered inaccessible. Padlocks at Ponte Milvio in Italy – legend has it that lovers attaching a padlock to the bridge and throwing the key into the water will be together forever.

When compiling the following I had in mind what I call infrastructure PKIs – company-internal systems to be used mainly for internal purposes and very often for use by devices rather than by humans. (Ah, the internet of things.)

Issues often arise due to a combination of the following:

  • Human project resources assigned to such projects are often limited.
  • Many applications simply demand certificates so you need to create them.

Since the best way to understand certificates is probably by comparing them to passports or driver licenses I will nonetheless use one issued to me as a human life-form:

Digital Certificate

In Austria the chipcards used to identify you as a patient to medical doctors can also be used as digital ID cards. That is, the card’s chip also holds the cryptographic private key, and the related certificate ties your identity as a citizen to the corresponding public key. A certificate is a file digitally signed by a Certificate Authority, which in this case has the name a-sign-Token-03. The certificate can be downloaded here or searched for in the directory (German site).

Digital X.509 Certificate: Details

The public key related to my identity as a citizen (or rather, to a database record representing myself as a citizen). Like a passport, the certificate has an end of life and requires renewal.

Alternatives to Hardware Security Modules

An HSM protects that sacred private key of the certification authority. It is often a computer running a locked-down version of an operating system, and it is equipped with sensors that detect any physical attempt to access the key store – it should actually destroy the key rather than let an attacker gain access to it.

It allows for implementing science-fiction-style split administration (… Kirk Alpha 2 … Spock Omega 3 …) and provides strong key protection that cannot be provided if the private key is stored in software – somewhere on the hard disk of the machine running the CA.

Captain Jean-Luc Picard transfers command of the USS Enterprise-D to Captain Edward Jellico

Yes, a key ceremony – the initiation of a certification authority – sometimes feels like that (memory-alpha.org). Here is the definitive list of Star Trek authorization codes.

Modern HSMs have become less cryptic in terms of usage, but still: it is a hardware device not used on a daily basis, and it requires additional training and management. Storage of physical items like the keys for unlocking the device and the corresponding password(s) is a challenge, as is keeping the know-how of admins up to date.

Especially for infrastructure CAs I propose a purely organizational split administration for offline CAs such as a Root CA: storing the key in software, but treating the whole CA machine as a device to be protected physically. You could store the private key of the Root CA or the virtual machine running the Root CA server on removable media (and at least one backup). The “protocol” provides split administration: e.g. one party has the key to the room, the other party has the password to decrypt the removable medium. Or the unencrypted medium is stored in a location protected by a third party – which in turn only allows two persons to enter the room together.

But before any split administration is applied, risks should be evaluated and it should be made sure that the overall security strategy does not look like this:

Steps to nowhere^ - geograph.org.uk - 666960

From the description on Wikimedia: The gate is padlocked, though the fence would not prevent any moderately determined person from gaining access.

You might have to question the holy order (hierarchy) and the security implemented at the lowest levels of CA hierarchies.

Hierarchies and Security

In the simplest case a certification authority issues certificates to end-entities – users or computers. More complex PKIs consist of hierarchies of CAs and thus tree-like structures. The theoretical real-world metaphor would be an agency issuing some license to a subordinate agency that issues passports to citizens.

Chain of certificates associated with this blog

Chain of certificates associated with this blog: *.wordpress.com is certified by Go Daddy Secure Certification Authority, which is in turn certified by Go Daddy Class 2 Certification Authority. The asterisk in the name makes the certificate usable with any wordpress.com site – but it defeats the purpose of denoting one specific entity.

The Root CA at the top of the hierarchy should be the most secure, as if it is compromised (that is: its private key has – probably – been stolen) all certificates issued anywhere in the tree should be invalidated.

However, this logic only makes sense:

  • if there is or will with high probability be at least a second Issuing CA – otherwise the security of the Issuing CA is as important as that of the Root CA.
  • if the only purpose of that Root CA is to revoke the certificate of the Issuing CA. The Root CA’s key is going to sign a blacklist referring to the Issuing CA. Since the Root should not revoke itself, the key signing the revocation list should be harder to compromise than the key of the to-be-revoked Issuing CA.
Certificate Chain

The certificate chain associated with my “National ID” certificate. Actually, these certificates stored on chipcards are invalidated every time the card (which primarily serves another purpose) is retired as a physical item. Invalidation of tons of certificates can create other issues I will discuss below.

Discussions of the design of such hierarchies focus a lot on the security of the private keys and cryptographic algorithms involved.

And yet the effective security of an infrastructure PKI – in terms of Who will be able to enroll for certificate type X (that in turn might entitle you to do Y)? – is often mainly determined by typical access control lists in databases or directory systems that are integrated with the PKI. Think of would-be subscribers logging on to a web portal or to a Windows domain in order to enroll for a certificate. Consider e.g. Windows Autoenrollment (licensed also by non-Windows CAs) or the Simple Certificate Enrollment Protocol used with devices.

You might argue that it should be a no-no to make allegedly weak software-credential-based authentication the only prerequisite for the issuance of certificates that are then considered strong authentication. However, this is one of the things that distinguish primarily infrastructure-focused CAs from, say, governmental CAs or “High Assurance” smartcard CAs that require a face-to-face enrollment process.

In my opinion certificates are often deployed because there is no other option to provide platform-independent authentication – as cumbersome as it may be to import key and certificate to something like a printer box. Authentication based on something else might be as secure, considering all risks, but not as platform-agnostic. (For geeks: one of my favorite comparisons is 802.1x computer authentication via PEAP-TLS versus EAP-TLS.)

It is finally the management of group memberships or access control lists or the like that will determine the security of the PKI.

Hierarchies and Cross-Certification

It is often discussed whether it makes sense to deploy more intermediate levels in the hierarchy – each level associated with additional management efforts. In theory you could delegate the management of a whole branch of the CA tree to different organizations, e.g. corresponding to continents in global organizations. Actually, I found that the delegation argument is often used for political reasons – which results in CA-per-local-fiefdom instead of the (in terms of performance much more reasonable) CA-per-continent.

I believe the most important reason to introduce the middle level is for (future) cross-certification: If an external CA cross-certifies yours it issues a certificate to your CA:

Cross Certification

Cross Certification between two CA hierarchies, each comprising three levels. Within a hierarchy each CA issues a certificate for its subordinate CA (orange lines). In addition the middle-tier CAs in each hierarchy issue certificates to the Root CAs of the other hierarchy – effectively creating logical chains consisting of 4 CAs. Image credits mine.

Any CA on any level could in principle be cross-certified. It would be easiest to cross-certify the Root CA, but then the full tree of CAs subordinate to it will also be certified (for the experts: I am not considering name or other constraints here). If a CA at an intermediate level is issued the cross-certificate, trust is limited to this branch.

Cross-certification constitutes a bifurcation in the CA tree, and its consequences can be as weird and sci-fi as this sounds. It means that two different paths exist that connect an end-entity certificate to different Root CAs. Which path is actually chosen depends on the application validating the certificate and the protocol involved in exchanging or collecting certificates.

In an SSL handshake (which happens if you access your blog via https://yourblog.wordpress.com, using the certificate with that asterisk) the web server is so kind as to send the full certificate chain – usually excluding the Root CA – to the client. So the path finally picked by the client depends on the chain the server knows, or on the chain that takes precedence at the server.

Cross-certification is usually done by CAs considered external, and it is expected that an application in the external world sees the path chaining to the External CAs.

Tongue-in-cheek I had once depicted the world of real PKI hierarchies and their relations as:

CA hierarchies in the real world.

CA hierarchies in the real world. Sort of. Image credits mine.

Weird things can happen if a web server is available on an internal network and also accessible by the external world (…via a reverse proxy; I am assuming there is no deliberate termination of the SSL connection at the proxy – what I call a corporate-approved man-in-the-middle attack). This server knows the internal certificate chain and sends it to the external client – which does not trust the corresponding internal-only Root CA. But the chain sent in the handshake may take precedence over any other chain found elsewhere, so the client throws an error.

How to Really Use “Cross-certification”

As confusing as cross-certification is – it can be used in a peculiar way to solve other PKI problems: those with applications that cannot deal with the validation of a hierarchy at all, or that can only deal with a one-level hierarchy. This is interesting in particular in relation to devices such as embedded industry systems or iPhones.

Assuming that the needed certificates can be safely injected into the right devices, and that you really know what you are doing, the full pesky PKI hierarchy can be circumvented by providing an alternative Root CA certificate for the CA at the bottom of the hierarchy:

The real, full-blown hierarchy is:

  1. Root CA issues a root certificate for Root CA (itself). It contains the key 1234.
  2. Root CA issues a certificate to Some Other CA related to key 5678.

… then the shortcut hierarchy for “dumb devices” looks like:

  1. Some Other CA issues a root certificate to itself, thus to a Subject named Some Other CA. The public key listed in this certificate is 5678 – the same as in certificate (2) of the full-blown hierarchy.

Client certificates can then use either chain – the long chain including several levels or the short one consisting of a single CA only. Thus if certificates have been issued by the full-blown hierarchy, they can be “dumbed down for devices” by creating the “one-level hierarchy” in addition.
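The trick works because chain building matches subject names and keys, not specific certificate files; a toy sketch of the two stores (key values are the 1234/5678 placeholders from above):

```python
# Two certificate stores: the full hierarchy, and a shortcut store where
# the same CA key 5678 appears in a self-signed certificate.

full_hierarchy = [
    {"subject": "Root CA", "issuer": "Root CA", "key": "1234"},
    {"subject": "Some Other CA", "issuer": "Root CA", "key": "5678"},
]

shortcut = [
    {"subject": "Some Other CA", "issuer": "Some Other CA", "key": "5678"},
]

def chain_for(issuing_ca_name, store):
    """Walk issuer links until a self-signed certificate is reached."""
    chain, name = [], issuing_ca_name
    while True:
        cert = next(c for c in store if c["subject"] == name)
        chain.append(cert)
        if cert["subject"] == cert["issuer"]:
            return chain  # reached a root (Subject = Issuer)
        name = cert["issuer"]
```

Depending on which store a device knows, the same end-entity certificate chains either through two CA levels or terminates immediately at the one-level “root” – with the identical public key at the top.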

Names and Encoding

In the chain of certificates the Issuer field in the certificate of the subordinate CA needs to be the same as the Subject field of the Root CA – just as the Subject field in my National ID certificate contains my name and the Issuer field that of the signing CA. And it depends on the application how names will be checked. In a global world, names are not simple ASCII strings anymore, and encoding matters.

Certificates are based on an original request sent by the subordinate CA, and this request most often contains the name – the encoded name. I have sometimes seen CAs change the encoding of the names when issuing the certificates, or reshuffle the components of the name – the order of tags like organization and country. An application may accept that or not, and the reasons for rejections can be challenging to troubleshoot if the application is running in a blackbox-style device.

Revocation List Headaches

Certificates (X.509) can be invalidated by adding their respective serial numbers to a blacklist. This list is – or actually: may be – checked by relying parties. So full-blown certificate validation comprises collecting all certificates in the chain up to a self-signed Root CA (Subject = Issuer) and then checking each blacklist signed by each CA in the chain for the serial number of the entity one level below:

Certificate Validation

Validation of a certificate chain (“path”). You start from the bottom and locate both the CA certificates and the revocation lists via URLs embedded in each subordinate certificate. Image credits mine.
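This walk can be sketched in a few lines – toy data only; real validation also checks signatures, validity dates, and extensions:

```python
# Sketch of path validation with revocation checking: walk up to the
# self-signed root, and at each level check the blacklist (CRL) signed
# by the issuer for the serial number of the certificate below it.

certs = {
    "leaf":       {"serial": 42, "issuer": "Issuing CA"},
    "Issuing CA": {"serial": 7,  "issuer": "Root CA"},
    "Root CA":    {"serial": 1,  "issuer": "Root CA"},  # self-signed
}

crls = {  # issuer -> set of blacklisted serial numbers
    "Issuing CA": {13, 99},
    "Root CA": set(),
}

def validate(name):
    """Return True if no certificate in the chain is revoked."""
    while True:
        cert = certs[name]
        issuer = cert["issuer"]
        if cert["serial"] in crls.get(issuer, set()):
            return False  # revoked via the issuer's blacklist
        if issuer == name:
            return True  # reached the self-signed root
        name = issuer
```

Add the leaf’s serial 42 to the Issuing CA’s blacklist and the same call returns False – which is exactly what a VPN server or RADIUS server does with a client whose certificate has been revoked.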

The downside: if the CRL isn’t available at all, applications following the recommended practices will for example deny network access to thousands of clients. With infrastructure PKIs that means that e.g. access to WLAN or remote access via VPN will fail.

This makes desperate PKI architects (or rather the architects accountable for the application requiring certificate based logon) build all kinds of workarounds, such as switching off CRL checking in case of an emergency or configuring grace periods. Note that this is all heavily application dependent and has to be figured out and documented individually for emergencies for all VPN servers, exotic web servers, Windows domain controllers etc.

A workaround is imperative if a very important application depends on a CRL issued by an “external” certificate provider. If I used my Austrian digital ID card’s certificate for logging on to server X, that server would need to have a valid version of this CRL – which only lives for 6 hours.

Certificate Revocation List

A Certificate Revocation List (CRL) looks similar to a certificate. It is a file signed by the Certification Authority that also signed the certificates that might be invalidated via that CRL. From downloading this CRL frequently I conclude that a current version is published every hour – so there are 5 hours of overlap.

The predicament is that CRLs may be cached for performance reasons. Thus if you publish short-lived CRLs frequently you might face “false negative” outages due to operational issues (web server down…) but if the CRL is too long-lived it does not serve its purpose.

Ideally, CRLs would be valid for a few days, but a current one would be published, say, every day, AND you could delete the cached CRL at the validating application every day. That’s exactly how I typically try to configure it. VPN servers, for example, have long allowed deleting the CRL cache, and Windows has a supported way to do that since Vista. This allows for reasonable continuity while revocation information still stays current.

If you cannot control the CRL issuance process, one workaround is pro-active fetching of the CRL – in case it is published with an overlap, that is: the next CRL is published while the current one is still valid – and mirroring the repository in question.
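The timing arithmetic for the 6-hour CRL mentioned above can be illustrated with datetime (the publication timestamp is made up):

```python
from datetime import datetime, timedelta

# CRL timing as observed for the Austrian CA: valid for 6 hours,
# a new one published every hour -> 5 hours of overlap.
validity = timedelta(hours=6)
publish_interval = timedelta(hours=1)
overlap = validity - publish_interval

this_update = datetime(2024, 1, 1, 12, 0)  # made-up publication time
next_update = this_update + validity       # the CRL's end of life

# Pro-active fetching: the successor CRL appears while the current one
# is still valid, leaving a comfortable window to mirror it.
successor_published = this_update + publish_interval
assert successor_published < next_update
print(f"overlap: {overlap}")
```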

As an aside: it is more difficult than it sounds to give internal machines access to a “public” external URL. Machines do not necessarily use the proxy server configured for users (which causes false positive test results – Look, I tested it by accessing it in the browser and it works!), and/or machines in the servers’ network are not necessarily allowed to access “the internet”.

CRLs might also simply be too big for some devices with limited processing capabilities. Some devices of a major vendor used to refuse to process CRLs larger than 256kB. The CRL associated with my sample certificate is about 700kB:

LDAP CDP URL

How the revocation list is located: via a URL embedded in the certificate. For the experts: OCSP is supported, too, and it is the recommended method. However, considering older devices it might be necessary to resort to CRLs.

CRL Details - Blacklist

The actual blacklist part of the CRL. The scrollbar is misleading – the list contains about 20,000 entries (best viewed with openssl or Windows certutil).

Emergency Revocation List

In case anything goes wrong – HSM inaccessible, passwords lost, datacenter 1 flooded and backup datacenter 2 destroyed by a meteorite – there is one remaining option to keep PKI-dependent applications happy:

Prepare a revocation list in advance whose end of life (NextUpdate date) is after the end of validity of the CA certificate. In contrast to any backup of key material, this CRL can be “backed up” by pasting the BASE64 string into the documentation, as it does not contain sensitive information.
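Since the emergency CRL contains no secrets, “backing it up” is just a BASE64 round-trip; a minimal sketch (the function names are mine, not from any tool):

```python
import base64

def crl_to_text(der_bytes):
    """Encode the (non-sensitive) DER-encoded CRL as text for the documentation."""
    return base64.b64encode(der_bytes).decode("ascii")

def text_to_crl(text):
    """Restore the exact CRL bytes from the documented BASE64 string."""
    return base64.b64decode(text)
```

The restored bytes are identical to the original file, so the pasted string really is a full backup of the emergency CRL.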

In an emergency this CRL will be published to the locations embedded in certificates. You will never be able to revoke anything anymore as CRLs might be cached – but business continuity is secured.

Emergency CRL

An Emergency CRL for my home-grown CA. It seems 9999 days is the maximum I can use with Windows certutil. Actually, the question of How many years should the lifetime be so that I will not be bothered anymore until retirement? comes up often in relation to all kinds of validity dates.

What I Never Wanted to Know about Security but Found Extremely Entertaining to Read

This is in praise of Peter Gutmann‘s book draft Engineering Security, and the title is inspired by his talk Everything You Never Wanted to Know about PKI but were Forced to Find Out.

Chances are high that any non-geek reader is already intimidated by the acronym PKI – sharing the links above on LinkedIn I have been asked Oh. Wait. What the %&$%^ is PKI??

This reaction is spot-on, as this post is more about usability and the perception of technology by end-users – despite, or because, I have worked for more than 10 years at the geeky end of Public Key Infrastructure. In summary, PKI is a bunch (actually a ton) of standards that should allow for creating the electronic counterparts of signatures, of passports, and of transferring data in locked cabinets. Basically, it should solve all security issues.

The following images from Peter Gutmann’s book might evoke some memories.

Security warnings designed by geeks look like this:

Peter Gutmann, Engineering Security, certificate warning - What the developers wrote

Peter Gutmann, Engineering Security, book draft, available at https://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf, p.167. Also shown in Things that Make us Stupid, https://www.cs.auckland.ac.nz/~pgut001/pubs/stupid.pdf, p.3.

As a normal user, you might rather see this:

Peter Gutmann, Engineering Security, certificate warning - What the user sees

Peter Gutmann, Engineering Security, book draft, available at https://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf, p.168.

The funny thing was that I picked this book to take a break from books on psychology and return to the geeky stuff – and then I was back to all kinds of psychological biases and Kahneman’s Prospect Theory, for example.

What I appreciate in particular is the diverse range of systems and technologies considered – Apple, Android, UNIX, Microsoft, …, all evaluated agnostically – plus a diverse range of interdisciplinary research. Now that’s what I call true erudition with a modern touch. Above all, I enjoyed the conversational and irreverent tone – I have never before started reading a book for technical reasons and then been unable to put it down because it was so entertaining.

My personal summary – which resonates a lot with my experience – is:
In trying to make systems more secure you might not only make them more unusable and obnoxious, but also more insecure.

A concise summary is also given in Gutmann’s talk Things that Make Us Stupid. I liked in particular the ignition key as a real-world example of a device that is smart and easy to use, providing security as a by-product – very different from the interfaces of ‘security software’.

Peter Gutmann is not at all siding with ‘experts’ who always chide end-users for being lazy and dumb – writing passwords down and sticking the post-its on their screens – and who state that all we need is more training and user awareness. Normal users use systems to get their job done, and they apply risk management in an intuitive way: Should I waste time following an obnoxious policy, or should I try to pass that hurdle as quickly as possible to do what I am actually paid for?

Geeks are weird – that’s a quote from the lecture slides linked above. Since Peter Gutmann is an academic computer scientist and obviously a down-to-earth practitioner with ample hands-on experience – which would definitely qualify him as a Geek God – his critique is even more convincing. In the book he quotes psychological research which proves that geeks really do think differently (as per standardized testing of personality types). Geeks constitute a minority of people (7%) that tend to take decisions – such as Should I click that pop-up? – in a ‘rational’ manner, as the simple and mostly wrong theories on decision making have proposed. One example Gutmann uses is testing for a basic understanding of logic, such as: Does ‘All X are Y’ imply ‘Some X are Y’? Across cultures the majority of people think that this is wrong.

Normal people – and I think also geeks when they don’t operate in geek mode, e.g. in the wild, not in their programmer’s cave – fall for many so-called fallacies and biases.

Our intuitive decision-making engine runs on autopilot, and we get conditioned to click away EULAs, next-next-finish the dreaded install wizards, and click away pop-ups, including the warnings. As users we don’t generate testable hypotheses or calculate risks, but act unconsciously, based on our experience of what has worked in the past – and usually the click-away-anything approach works just fine. You would need US-navy-style constant drilling in order to be alert enough not to fall for those fallacies. This applies in particular to anonymous end users using their home PCs to do online banking.

Security indicators like padlocks and browser address bar colors change with every version of the popular browsers. Not even tech-savvy users can tell from those indicators whether they are ‘secure’ now. But here is what is extremely difficult: users would need to watch out for the lack of an indicator (one that is barely visible when it is there). And we are – owing to confirmation bias – extremely bad at spotting the negative, the lack of something. Gutmann calls this the Simon Says problem.

It is intriguing to see how biases about what ‘the others’ – the users or the attackers – would do enter technical designs. For example, it is often assumed that a client machine or user who has authenticated itself is more trustworthy – and servers are more vulnerable to a malformed packet sent after successful authentication. In the Stuxnet attack, digitally signed malware (signed using stolen keys) was used – ‘if it’s signed it has to be secure’.

To make things worse, users are even conditioned for ‘insecure’ behavior: When banks use all kinds of fancy domain names to market their latest products, lure their users into clicking on links to those fancy sites in e-mails, and have them log on with their banking user accounts via these sites, they train users to fall for phishing e-mails – despite the fact that the same e-mails half-heartedly warn about clicking arbitrary links in e-mails.

I want to stress that, in relation to systems like PKI – which require you to run some intricate procedures only every few years (these are called ceremonies for a reason), but then it is extremely critical – admins should also be considered ‘users’.

I have spent many hours discussing proposed security features like Passwords need to be impossible to remember and never written down with people whose job it is to audit, draft policies, and read articles all day on what Gutmann calls conference-paper attacks. These are not the people who have to run systems, deal with helpdesk calls or costs, and handle requests from VIP users such as top-level managers who had, on the one hand, been extremely paranoid about system administrators sniffing their e-mails, yet on the other hand needed instant 24/7 support with recovery of encrypted e-mails. (This should be given a name, like the Top Managers’ Paranoia Paradox.)

As a disclaimer I’d like to add that I don’t underestimate cyber security threats, risk management, policies etc. It is probably the current media hype on governments spying on us that makes me advocate a contrarian view.

I could back this up by tons of stories, many of them too good to be made up (but unfortunately NDA-ed): security geeks in terms of ‘designers’ and ‘policy authors’ often underestimate the time and effort required to run their solutions on a daily basis. It is often the so-called trivial and simple things that go wrong, such as: The documentation of that intricate process to be run every X years cannot be found, or the only employee who really knew about the interdependencies is long gone, or allegedly simple logistics go wrong (Now we are locked in the secret room to run the key ceremony… BTW did anybody think of having the media ready to install the operating system on that high-secure isolated machine?).

A large European PKI setup failed (it made headlines) because the sacred key of a root certification authority had been destroyed – which is the expected behavior of so-called Hardware Security Modules when they are tampered with, or at least when the sensors say so – and there was no backup. The companies running the project and running operations blamed each other.

I am not quoting this to make fun of others, although the typical response here is to state that projects or operations have been badly managed and you just need to throw more people and money at them to run secure systems in a robust and reliable way. This might be true, but it simply does not reflect the typical budget, time constraints, and lack of human resources that the IT departments of typical corporations have to deal with.

There is often a very real, palpable risk of trading off business continuity and availability (that is: safety) for security.

Again, I don’t want to downplay risks associated with broken algorithms and the NSA reading our e-mail. But as Peter Gutmann points out, cryptography is the last thing an attacker would target (even if a conference-paper attack had shown it to be broken) – the implementation of cryptography rather guides attackers along the lines of where not to attack. Just consider the spectacular recent ‘hack’ of a prestigious one-letter Twitter account, which actually involved blackmailing the user after the attacker had gained control over the user’s custom domain through social engineering – most likely of underpaid call-center agents who had to face the dilemma of meeting the numbers in terms of customer satisfaction versus following the security awareness training they might have had.

Needless to say, encryption, smart cards, PKI etc. would not have prevented that type of attack.

Peter Gutmann says about himself that he is throwing rocks at PKIs, and I believe you can illustrate a particularly big problem using a perfect real-life metaphor: Digital certificates are like passports or driver’s licenses to users – signed by a trusted agency.

Now consider the following: A user might commit a crime and his driver’s license is seized. PKI’s equivalent of that seizure is to have the issuing agency publish a blacklist regularly, listing all the bad guys. Police officers on the road need access to that blacklist in order to check drivers’ legitimacy. What happens if a user isn’t blacklisted but the blacklist publishing service is not available? The standard makes this check optional (as it does many other things – the norm when an ancient standard is retrofitted with security features), but let’s assume the police app follows the recommendation of what it SHOULD do. If the list is unavailable, the user is considered an alleged criminal and has to exit the car.
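In validation code, the police-app dilemma is the choice between ‘hard fail’ (an unreachable blacklist blocks everyone) and ‘soft fail’ (it blocks no one). A toy sketch, with hypothetical names:

```python
def is_allowed(serial, blacklist, hard_fail=True):
    """Decide whether a certificate serial passes the revocation check.

    `blacklist` is a set of revoked serials, or None if the CRL could
    not be fetched at all.
    """
    if blacklist is None:
        # CRL unavailable: hard fail blocks everyone (the "exit the car"
        # outcome), soft fail waves everyone through.
        return not hard_fail
    return serial not in blacklist
```

Neither choice is comfortable, which is why so much PKI planning revolves around keeping the blacklist reachable in the first place.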

You could also imagine something similar happening to train riders who have printed out an online ticket that cannot be validated (e.g. distinguished from forgery) by the conductor due to a failure in the train’s IT systems.

Any ’emergency’ / ‘incident’ related to digital certificates I was ever called upon to support was related to false negatives blocking users from doing what they needed to do, because of missing, misconfigured, or (temporarily) unavailable certificate revocation lists (CRLs). The most important question in PKI planning is typically how to work around or prevent inaccessible CRLs. I am aware of how petty this problem may appear to readers – what’s the big deal in monitoring a web server? But have you ever noticed how many alerts (e.g. via SMS) a typical administrator gets – and how many of them are false negatives? When I ask what will happen if the PKI / the CRL signing / the web server breaks on Dec. 24 at 11:30 (in a European country), I am typically told that we need to plan for at least some days until recovery. This means that revocation information on the blacklist will be stale, too, as CRLs can be cached for performance reasons.

As you can imagine, most corporations rather tend to follow the reasonable approach of putting business continuity over security, so they want to make sure that a glitch in the web server hosting those blacklists will not stop 10,000 employees from accessing the wireless LAN, for example. Of course any weird standard can be worked around given infinite resources. The point I wanted to make is that these standards have been designed with something totally different in mind – by PKI Theologians in the 1980s.

Admittedly though, digital certificates and cryptography are a great playground for geeks. I think I was a PKI theologian myself many years ago, until I morphed into what I tongue-in-cheek call an anti-security consultant – trying to help users (and admins) to keep on working despite new security features. I often advocated not using certificates, proposing alternative approaches that boiled the potential PKI project down to a few hours of work – against the typical consultant’s mantra of trying to make yourself indispensable in long-term projects and of designing black boxes the client will never be able to operate on his own. Not only because of the PKI overhead, but because the alternatives were just as secure – just not as hyped.

So in summary I am recommending Peter Gutmann’s terrific resources (check out his Crypto Tutorial, too!) to anybody who is torn between geek enthusiasm for some obscure technology and questioning its value nonetheless.

Rusty Padlock

No post on PKI, certificates and keys would be complete without an image like this. I found the rusty one particularly apt here. (Wikimedia, user Garretttaggs)

On Science Communication

In a parallel universe I might work as a science communicator.

Having completed my PhD in applied physics I wrote a bunch of job applications, one of them being a bit eccentric: I applied at the Austrian national public service broadcaster. (Adding a factoid: According to Wikipedia Austria was the last country in continental Europe after Albania to allow nationwide private television broadcasting).

Fortunately I deleted all those applications that would make me blush today. In my application letters I referred to the physicist’s infamous skills in analytical thinking, mathematical modeling, and optimization of technical processes. Skills that could be applied to basically anything – from inventing novel tractor beam generators for space ships to automatically analyzing emoticons in Facebook messages.

If I had been required to add a social-media-style tagline in these dark ages of letters on paper and snail mail, I probably would have tagged myself as combining anything, in particular experimental and theoretical physics, and, above all, communicating science to different audiences. If memory serves, I used the latter argument in my pitch to the broadcaster.

I do remember the last sentence of that pivotal application letter:

I could also imagine working in front of a camera.

Yes, I really did write that – based on a ‘media exposure’ of having appeared on local TV for some seconds.

This story was open-ended: I did not receive a reply until three months later, and at that time I was already employed as a materials scientist in R&D.

In case job-seeking graduate students are reading this: It was imperative that I added some more substantial arguments to my letters, that is: hands-on experience – maintaining UV excimer lasers, knowing how to handle liquid helium, decoding the output of X-ray diffractometers, explaining accounting errors to auditors of research grant managing agencies. Don’t rely on the analytical skills pitch, for heaven’s sake.

I pushed that anecdote deep down into the netherworlds of my subconsciousness. Together with some colleagues I ritually burnt items reminiscent of university research and of that gruelling job hunt, such as my laboratory journals and print-outs of job applications. This spiritual event was eventually featured on a German proto-blog website and made the German equivalent of ritual burning the top search term for quite a while.

However, today I believe that the cheeky pitch to the broadcaster had anticipated my working as a covert science communicator:

Fast-forward about 20 years and I am designing and implementing Public Key Infrastructures at corporations. (Probably in vain, according to the recent reports about NSA activities.) In such projects I covered anything from giving the first concise summary to the CIO (Could you explain what PKI is – in just two Powerpoint slides?) to spending nights in the data center – migrating to the new system together with other security nerds, fueled by pizza and caffeine.

The part I enjoyed most in these projects was the lecture-style introduction (the deep dive in IT training lingo) to the fundamentals of cryptography. Actually, these workshops were the nucleus of a lecture I gave at a university later. I aimed at combining anything: mathematical algorithms and anecdotes (notes from the field) about IT departments who locked themselves out of their high-security systems, stunning history of cryptography and boring EU legislation, vendor-agnostic standards and the very details of specific products.

Usually the feedback was quite good, though once a comment in the student survey read:

Her lectures are like a formula one race without pitstops.

This was a lecture given in English, so it is most likely worse when I talk in German. I guess Austrian Broadcasting would have forced me to take training in professional speaking.

As a Subversive Element I indulged in throwing in some slides about quantum cryptography – often this was considered the most interesting part of the presentation, second only to my quantum physics stand-up edutainment in coffee breaks. The downside of said edutainment were questions like:

And … you turned down *that* for designing PKIs?

I digress – find the end of that story here.

I guess I am obsessed with combining consulting and education. Note that I am referring to consulting in terms of working hands-on with a client, accountable for 1000 users being able to log on (or not) to their computers – not your typical management consultant’s churning out sleek Powerpoint slides and leaving silently before you need to get your hands dirty (paraphrasing clients’ judgements of ‘predecessors’ in projects I had to fix).

It is easy to spot educational aspects in consulting related to IT security or renewable energy. There are people who want to know how stuff really works, in particular if that helps to make yourself less dependent on utilities or on Russian gas pipelines, or to avoid being stalked by the NSA.

But now I have just started a new series of posts on Quantum Field Theory. Why on earth do I believe that this is useful or entertaining? Considering in particular that I don’t plan to cover leading-edge research: I will not comment on hot new articles in Nature about stringy Theories of Everything.

I stubbornly focus on that part of science I have really grasped myself in depth – as an applied physicist slowly (re-)learning theory now. I will never reach the frontier of knowledge in contemporary physics in my lifetime. But, yes, I am guilty of sharing sensationalist physics nuggets on social media at times – and I jumped on the Negative Temperature Train last year.

My heart is in reading old text books, and in researching old patents describing inventions of the pre-digital era. If you asked me what I would save if my house were on fire, I’d probably say I’d snatch the six volumes of text books in theoretical physics my former physics professor, Wilhelm Macke, wrote in the 1960s. He had been the last graduate student supervised by Werner Heisenberg. Although I picked experimental physics eventually, I still consider his lectures the most exceptional learning experience I ever had in life.

I have enjoyed wading through mathematical derivations ever since. Mathy physics has helped me save money on life coaches or other therapists when I was a renowned, but nearly burnt-out ‘travelling knowledge worker’ AKA project nomad. However, I understand that advanced calculus is not to everybody’s taste – you need to invest quite some time and effort until you feel these therapeutic effects.

Yet, I aim at conveying that spirit, although I have been told repeatedly by curriculum strategists in higher education that if anything scares people off pursuing a tech or science degree – in particular as a post-graduate degree – it is too much math, including references to mathy terms in plain English.

However, I am motivated by a charming book:

The Calculus Diaries: How Math Can Help You Lose Weight, Win in Vegas, and Survive a Zombie Apocalypse

by science writer Jennifer Ouellette. According to her website, she is a recovering English major who stumbled into science writing as a struggling freelance writer… and who has been avidly exploring her inner geek ever since. How could you not love her books? Jennifer is the living proof that you can overcome math anxiety or reluctance, or even turn that into inspiration.

Richard Feynman gave a series of lectures in 1964 targeted at a lay audience, titled The Character of Physical Law.

Starting from an example in the first lecture, the gravitational field, Feynman tries to expound how physics relates to mathematics in the second lecture – by the way also introducing the principle of least action as an alternative way to tackle planetary motions, as discussed in the previous post.

It is also a test of your dedication as a Feynman fan, as the quality of this video is low. Microsoft Research originally brought these lectures to the internet – presenting them blended with additional background material (*) and a transcript.

You may or may not agree with Feynman’s conclusion about mathematics as the language spoken by nature:

It seems to me that it’s like: all the intellectual arguments that you can make would not in any way – or very, very little – communicate to deaf ears what the experience of music really is.

[People like] me, who’s trying to describe it to you (but is not getting it across, because it’s impossible), we’re talking to deaf ears.

This is ironic on two levels: first of all, if anybody could get it across, it was probably Feynman. Second, I agree with him. But I will still stick to my plan and continue writing about physics, trying to indulge in the mathy aspects but not showing off the equations in posts. Did I mention this series is an experiment?

________________________________________

(*) Technical note: You had to use Internet Explorer and install Microsoft Silverlight when this was launched in 2009 – now it seems to work with Firefox as well. Don’t hold me liable if it crashes your computer though!

Trading in IT Security for Heat Pumps? Seriously?

Astute analysts of science, technology and the world at large noticed that my resume reads like a character from The Big Bang Theory. After all, an important tag used with this blog is cliché, and I am dead serious about theory and practice of combining literally anything.

[Edit in 2016: At the time of writing this post, this blog’s title was Theory and Practice of Trying to Combine Just Anything.]

Recently I have set up our so-called business blog and business Facebook page, but I admit it is hard to recognize them as such. Our Facebook tagline says (translated from German):

Professional Tinkerers. Heat Pump Freaks. Villagers. (Ex-) IT Guys.

People liked the page – probably expecting it to turn out as one of my experimental web 2.0 ventures (I am trying hard to meet those expectations anyway).

But then one of my friends asked:

Heat pumps instead of IT security Рseriously?

Actually this is the pop-sci version: The true question included a lesser known term:
Heat pumps instead of PKI?

(1) PKI and IT Security

PKI means Public Key Infrastructure, and it is not as boring as the Wikipedia definition may sound. For more than ten years it was my mission to design, implement and troubleshoot PKI systems. The emphasis is on ‘systems’: PKI is about geeky cryptographic algorithms, hyper-paranoid risk management (Would the NSA be able to hack into this?), as well as about matching corporate politics and alleged or true risks with commercially feasible technical systems. Adding some management lingo, it is about ‘technology, people, and processes’.

Full-blown PKI projects are for large corporations – so I was travelling a lot, although I was able to turn my service offerings from ‘working on site, doing time – whatever needs to be done’ (which is actually the common way to work as an expert freelancer in IT) to ‘working mainly remotely – on very specific tasks only’. I turned into a PKI firefighter and PKI reviewer. If you really want to know it all in detail, click here (I also gave a lecture on PKI for five years in this MSc program).

There was nothing wrong with PKI as such: I enjoyed the geeky community of like-minded peers, and the business was self-running. The topic is hot. Just read your favorite tech newspaper’s articles on two-factor authentication or the like – both corporate compliance rules and new security threats related to cloud computing make demand for PKI and related technologies a sure bet.

(2) Portfolio of Passions

I would like to borrow another author’s picture here: In The Monk and the Riddle: The Art of Creating a Life While Making a Living, Randy Komisar – Silicon Valley virtual CEO – expounds how he dabbled in some creative ventures after graduating, and how he finally embarked on a career as a lawyer. And how he saw his future unfolding before him – Associate, Senior… Partner. He could see the office doors lined up neatly, reflecting the ever-progressing evolvement of what we call a career – and he quit his career as a lawyer.

In particular, I like Komisar’s definition of passion, which should not be confused with the new-age-y approach of following your passion.


It is not about the passion, but about a portfolio of passions – don’t drive yourself crazy by trying to find THE passion once and for all.

My personal portfolio has always comprised a whole lot – this blog has its name for a reason. Probably I will someday blog on all the studies and master’s degree programmes I have ever evaluated attending. When I was a teenager there were times when philosophy and literature scored higher than anything sciencey.

So I had ended up in an obscure, but sought-after sub-branch of IT security. I have gone to great lengths in this blog to explain my transition from physics to IT. However, physics, science and engineering never vanished from my radar for opportunities.

I wanted less reputation as an internationally renowned high-flyer in IT, and more hands-on, down-to-earth work. Ironically, the fact that security is hot in the corporate world started to turn me off. I felt I stood on the wrong side of the fence or of the negotiation table – as effectively an Anti-Security Consultant who helped productive business units to remain productive despite security and compliance policies. Probably worth a post of its own, but my favorite theory is: If you try to enforce policies beyond a certain limit, people will pour all their creativity into circumventing the processes and beating the system. And right they are, because they could not do their jobs otherwise.

For many years a resource-consuming background process of soul-searching was concerned with checking various options from my portfolio of passions. I was looking for a profession that:

  • is based on technology that is not virtual, while allowing for utilizing my know-how in IT infrastructure and security as an add-on.
  • allows for working with clients whose sites can be reached by car – not by plane.
  • allows for self-consistency and authenticity: Practice what you preach / Turn your hobby into a job.
  • utilizes the infamous physicist’s analytical skills, that is, combines (just) anything: theoretical calculations, hands-on engineering, managing the design of complex technical systems, dealing with customer requirements versus available technical solutions.

The last item is a pet topic of mine: As a physicist – even as an applied physicist – you have not been trained for a specific job. Physics is more similar to philosophy than to engineering in this respect. We are dilettantes in the best sense – and that is why many physicists end up in IT, management consulting or finance, for example.

There are interdisciplinary fields of research that utilize physics via a sort of mathematical analog – just consider Bose-Einstein condensation in networking theory. According to another debatable theory of mine, we nearly blew up the financial system because of the many former scientists working in finance – on the physics of Wall Street – who were more interested in doing something that mathematically resembles physics than in the impact on the real world.

Solar collector. Image credits: punktwissen

Solar collector, optimized for harvesting ambient heat by convection in winter time. Image credits: Mine / Our German blog.


(3) And Now for Something Completely Different: Heat Pump Systems and Sustainability

Though I am truly interested in the foundations of physics, fascinated by the LHC, and even intrigued by econophysics, I rather prefer to work on mundane applications of physics in engineering, as long as it allows for working on a solution to a problem that really matters right now.

Such as the effective utilization of the limited resources available on our planet. Anyone who believes exponential growth can go on forever in a finite world is either a madman or an economist (Kenneth Boulding). I do not want to enter the debate on climate warming, and I do not think it makes sense to attempt evangelizing people with ethical arguments. Why should we act in a more responsible way than all the generations before us? My younger self, travelling the globe by plane, would not have listened to those arguments either.

However, I think we are all – green or not – striving for personal and economic independence and autonomy: as individuals, as home owners, as businesses.

That’s what got us interested in renewable energy some time ago, and we started working on our personal pilot project that finally turned into a research project / ‘garage start-up’.

We have finally come up with a concept for a heat pump system that uses an unconventional source of heat: The heat pump does not draw heat from the ground, ground water, or air, but from a large low-temperature reservoir – a cistern, in a sense. Ambient heat is in turn transferred to the water tank by means of a solar collector. A simple collector built from hoses (as depicted above) works better than a flat plate collector that relies on heat transfer via radiation.
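The interplay of tank, heat pump, and convective collector can be illustrated with a toy hourly energy balance. This is only a sketch under made-up assumptions – the tank size, collector coefficient, and heat demand below are hypothetical illustration values, not figures from the actual project:

```python
# Toy hourly energy balance of a water-tank heat source.
# All parameters are assumed for illustration only.

CP_WATER = 4.18e-3        # kWh per (kg * K), specific heat of water
TANK_MASS = 25_000.0      # kg of water in the tank (assumed)
COLLECTOR_UA = 0.5        # kW/K, convective collector coefficient (assumed)

def simulate_tank(t_tank, t_ambient_series, q_evaporator_kw):
    """Step the tank temperature hour by hour.

    The heat pump extracts q_evaporator_kw from the tank; the hose
    collector replenishes it at a rate proportional to the
    ambient-minus-tank temperature difference (pure convection).
    """
    temps = [t_tank]
    for t_amb in t_ambient_series:
        q_collector = COLLECTOR_UA * (t_amb - t_tank)    # kW, may be negative
        net_kwh = (q_collector - q_evaporator_kw) * 1.0  # 1-hour time step
        t_tank += net_kwh / (TANK_MASS * CP_WATER)       # temperature change
        temps.append(t_tank)
    return temps

# 24 hours at 5 °C ambient, tank starting at 8 °C, 3 kW drawn by the evaporator:
temps = simulate_tank(8.0, [5.0] * 24, q_evaporator_kw=3.0)
```

Even this crude model shows the point of the controlling logic: the tank slowly cools whenever the evaporator draws more than the collector can replenish, so the controller has to balance the two over days, not hours.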

As with PKI, this is more interesting than it sounds, and it is really about combining just about anything: numerical simulations and building stuff, consulting and product development, scrutinizing product descriptions provided by vendors and dealing with industry standards. None of the components of the heat pump system is special – we did not invent a device defying the laws of physics – but it is the controlling logic that matters most.

I am going to extend the scope for combining anything even further: Having enrolled in a Master’s degree programme in energy engineering in 2011, I will focus on smart metering in my master’s thesis. Future volatile electricity tariffs (communicated by intelligent meters) will play an important role in the management and control of heat pump systems, and there are lots of security risks to be considered.
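One simple way such a tariff could steer a heat pump is to pick the cheapest hours of the day to run. The sketch below is a hypothetical illustration – the tariff values and the run-hours budget are invented, not real smart-meter data:

```python
# Toy scheduler: choose the cheapest hours to run a heat pump against a
# volatile hourly tariff. Tariff values and hours_needed are assumptions.

def cheapest_run_hours(tariff, hours_needed):
    """Return the indices of the cheapest hours, sorted chronologically."""
    ranked = sorted(range(len(tariff)), key=lambda h: tariff[h])
    return sorted(ranked[:hours_needed])

tariff = [0.30, 0.12, 0.10, 0.25, 0.40, 0.11]  # EUR/kWh, assumed values
schedule = cheapest_run_hours(tariff, hours_needed=3)
# → [1, 2, 5]: the three cheapest slots, in time order
```

A real controller would of course also have to respect the thermal constraints of the building and the tank – and, as noted above, the meter-to-controller communication channel is exactly where the security risks come in.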

It is all about systems, interfaces, and connections – not only in social media and IT, but also in building technology and engineering. Actually, all of that is converging onto one big cloudy network (probably also subject to similar chaotic phenomena as the financial markets). I am determined to make some small contribution to that.

(4) Concluding and Confusing Remark

Now I feel like Achilles and the Tortoise in Gödel, Escher, Bach(*) – in the chapter on pushing and popping through many levels of the story or the related dreamscape. I am not sure if I have reached the base level I started from. This might be a cliff-hanger.

(*) This is also a subtle tribute to the friend – and musician – mentioned above.
