Filters that work

August 8th, 2013

Summary: The architecture for David Cameron’s filtering plans is wrong and has negative consequences; however, there are alternative architectures which might work.

There has been much news coverage about David Cameron’s plans for opt-out filters for all internet users in the UK. With opt-in systems barely anyone opts in, and with opt-out systems barely anyone opts out, so this is a proposal for almost everyone to have a filter on their internet traffic. Enabling households to easily filter out bad content from their internet traffic is useful, in that there are many people who do want to do this (such as myself[1]). However the proposed architecture has a number of significant flaws and (hopefully unintended) harmful side effects.

Here I will briefly recap what those flaws and side-effects are and propose an architecture which I claim lacks these flaws and side-effects while providing the desired benefits.

  1. All traffic goes through central servers which have to process it intensively. This makes bad things, like analysing that traffic, much easier. It also means that traffic cannot be routed so efficiently. And it means that there can be no transparency about what is actually going on, as no one outside the ISP can see what is happening.
  2. There is no transparency or accountability. The lists of things being blocked are not available, and even if they were it would be hard to verify that those are the ones actually being used. If an address gets added which should not be (say that of a political party, or of an organisation which someone does not like) then there is no way of knowing that it has been, or of removing it from the list. Making such lists available, even for illegal content (such as the IWF’s lists), does not make that content any more available, but it does make it easier to detect and block it (for example TOR exit nodes could block it). In particular it means that, having found some bad content, it is easier to work out whether that content needs to be added to the list or is already on it.
  3. Central records must be kept of who is and who is not using such filters. Really, such information is none of anyone else’s business: they should not know or be able to tell, and they do not need to.

I am not going to discuss whether porn is bad for you, though I have heard convincing arguments that it is. Nor will I expect any system to prevent people who really want to access such content from doing so. Nor will I assume a magic ‘detect if adult’ device that prevents teenagers from changing the settings to turn filters off.

Most home internet systems consist of a number of devices connected to some sort of ISP-provided hub, which then connects to the ISP’s systems and then to the internet. This hub is my focus: it is provided by the ISP, and so can be provisioned with the software they desire and configured by them, but it is also under the control of the household and provides an opportunity for some transparency. The same architecture can be used with the device itself performing the filtering, for example on mobile phones using 3G or inside web browsers when using TLS.

So how would such a system work? Well, these hubs are basically just very small Linux machines, rather like a Raspberry Pi, and each already handles the networking for the devices in the house, probably running NAT[0] and DHCP; it should probably also be running a DNS server and using DNSSEC. It already has a little web server to display its management pages, and so could trivially display web pages saying “this content blocked for you because of $reason, if this is wrong do $thing”. Then, when it makes DNS requests for domains to the ISP’s servers, they can reply with additional information about whether the domain is known to have bad content and where to find further information on that, which the hub can then look up and use as input when applying local policy.
The household can then configure the hub to apply the policy they want; it can be shipped with a sensible default, and no one knows what policy they chose unless they snoop their traffic (which should require a warrant).
A couple of extra tweaks might be wanted here. For example, there is some content which people really do not want to see but find very difficult not to seek out; I have friends who have struggled for a long time to recover from a pornography addiction. Hence it can be useful to provide functionality whereby filter settings can be made read-only, so that a user can choose to make them ‘impossible’ to turn off: in a stronger moment they can make a decision that prevents them doing something they do not want to do in a weaker moment. Obviously any censorship system can be circumvented by a sufficiently determined person, but self-blocking things is an effective strategy to help people break addictions, whether to Facebook in the run-up to exams or to more addictive websites.
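To make the hub-side policy step concrete, here is a small Java sketch. The category labels, the idea that the ISP’s DNS reply carries a content label, and the class and method names are all my own illustrative assumptions, not part of any deployed system:

```java
import java.util.Set;

// Sketch of the hub applying local policy to a content label that the
// ISP's DNS server attached to a domain. All names here are hypothetical.
public class HubFilter {
    // Categories the household has chosen to block via the hub's web pages.
    private final Set<String> blockedCategories;
    // Categories the user has locked so they cannot be casually unblocked.
    private final Set<String> lockedCategories;

    public HubFilter(Set<String> blocked, Set<String> locked) {
        this.blockedCategories = blocked;
        this.lockedCategories = locked;
    }

    /** Decide whether to resolve a domain, given the label from the ISP's
     *  DNS response (null when the domain carries no label). */
    public boolean allow(String domain, String categoryLabel) {
        if (categoryLabel == null) {
            return true; // unlabelled content is passed through
        }
        return !blockedCategories.contains(categoryLabel);
    }

    /** Unblocking succeeds only for categories the user has not locked,
     *  giving the read-only 'impossible to turn off' behaviour above. */
    public boolean unblock(String category) {
        if (lockedCategories.contains(category)) {
            return false;
        }
        return blockedCategories.remove(category);
    }
}
```

The point of the sketch is that the decision is made locally on the hub against the household’s own policy, so the ISP never needs to know which policy was chosen.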

So would such a system actually work? I think that it is technically feasible, would achieve the purposes it is intended to, and would not have the problems of the currently proposed architecture. However it might not work with currently deployed hardware, which may not have quite enough processing power (though it would not be far off). An open, well-specified system would allow incremental roll-out and independent implementation and verification. Additionally, it does not provide the service for which David Cameron’s system is actually being built, which is to make it easier to snoop on all internet users’ web traffic. This is just the Digital Economy Bill all over again, but with ‘think of the children’ rather than ‘think of the terrorists’ as its sales pitch. There is little point blocking access to illegal content, as that can always be circumvented; much better to take the content down[2] and lock up the people who produced it, or failing that, to detect it as the traffic leaves the ISP’s network towards bad places and send round a police van to lock up the people accessing it. Then everything has to go through the proper legal process in plain sight.

[0]: in the case of Virgin Media’s ‘Super Hub’ doing so incredibly badly such that everything needs tunnelling out to a sane network.
[1]: Though currently I do not, beyond using Google’s strict safe search, because there is no easy mechanism for doing so; the only source of objectionable content that actually ends up on web pages I see is adverts, on which more later.
[2]: If this is difficult then make it easier; it is far too hard to take down criminal websites such as phishing scams at the moment, and improvements in international cooperation on this would be of great benefit.

Surveillance consequences

August 7th, 2013

Mass surveillance of the citizens of a country allows intelligence services to use ‘big data’ techniques to find suspicious things which they would not otherwise have found. They can analyse the graph structure of communications to look for suspicious patterns, or for suspicious keywords. However, as a long-term strategy it is fundamentally flawed. The problem is the effect of surveillance on those being watched. Being watched means not being trusted: being outside and other, separate from those who know best, and under suspicion. It makes you foreign, alien and apart; it causes fear and apprehension; it reduces integration. It distresses communities which feel that they are being picked on and splits them apart from those around them. This causes a feeling of oppression and unfairness, of injustice. This results in anger, which grows in the darkness and leads to death.

That is not the way to deal with ‘terrorism’. Come, let us build our lives together as one community, not set apart and divided. Let us come together and talk of how we can build a better world for us and for our children. Inside we are all the same, it does not matter where we came from, only where we are going to and how we get there.
Come, let us put on love rather than fear, let us welcome rather than reject, let us build a country where freedom reigns and peace flows like a river through happy tree lined streets where children play.

I may be an idealist but that does not make this impossible, only really hard, and massively worth it. The place to begin is as always in my own heart for I am not yet ready to live in the country I want us to be. There is a long way to go, and so my friends: let us begin.

Communicating with a Firefox extension from Selenium

May 20th, 2013

Edit: I think this no longer works with more recent versions of Firefox, or at least I have given up on this strategy and gone for extending WebDriver to do what I want instead.

For something I am currently working on I wanted to use Selenium to automatically access some parts of Firefox which are not accessible from a page. The chosen method was to use a Firefox extension and send events between the page and the extension to carry data. Getting this working was more tedious than I was expecting, perhaps mainly because I have tried to avoid javascript whenever possible in the past.

The following code extracts set up listeners with Selenium and the Firefox extension and send one event in each direction. Using this to do proper communication and to run automated tests is left as an exercise for the author but hopefully someone else will find this useful as a starting point. The full code base this forms part of will be open sourced and made public at some future point when it does something more useful.

App.java


package uk.ac.cam.cl.dtg.sync;

import java.io.File;
import java.io.IOException;

import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxProfile;

public class App {
    private static final String SEND = "\"syncCommandToExtension\"";
    private static final String RECV = "\"syncCommandToPage\"";

    public static void main(String[] args) throws IOException {
        // This is where maven is configured to put the compiled .xpi
        File extensionFile = new File("target/extension.xpi");
        // So that the relevant Firefox extension developer settings get turned on.
        File developerFile = new File("developer_profile-0.1-fn+fx.xpi");
        FirefoxProfile firefoxProfile = new FirefoxProfile();
        firefoxProfile.addExtension(extensionFile);
        firefoxProfile.addExtension(developerFile);
        WebDriver driver = new FirefoxDriver(firefoxProfile);
        driver.get("about:blank");
        if (driver instanceof JavascriptExecutor) {
            AsyncExecute executor = new AsyncExecute((JavascriptExecutor) driver);
            executor.execute("document.addEventListener(" + RECV + ", function(aEvent) { document.title = ("
                    + RECV + " + aEvent) }, true);");
            executor.execute("document.dispatchEvent(new CustomEvent(" + SEND + "));");
        } else {
            System.err.println("Driver does not support javascript execution");
        }
    }

    /**
     * Encapsulate the boilerplate code required to execute javascript with Selenium.
     */
    private static class AsyncExecute {
        private final JavascriptExecutor executor;

        public AsyncExecute(JavascriptExecutor executor) {
            this.executor = executor;
        }

        public void execute(String javascript) {
            executor.executeAsyncScript("var callback = arguments[arguments.length - 1];" + javascript
                    + "callback(null);", new Object[0]);
        }
    }
}

browserOverlay.js Originally cribbed from the XUL School hello world tutorial.


// The final 'true' (wantsUntrusted) is needed so that events dispatched
// by the untrusted page reach this privileged listener.
document.addEventListener("syncCommandToExtension", function(aEvent) {
  window.alert("document syncCommandToExtension" + aEvent); /* do stuff */
}, true, true);

// Do not try to add a callback until the browser window has been
// initialised. We add a callback to the tabbed browser when the
// browser's window gets loaded.
window.addEventListener("load", function() {
  // Add a callback to be run every time a document loads.
  // Note that this includes frames/iframes within the document.
  gBrowser.addEventListener("load", pageLoadSetup, true);
}, false);

function syncLog(message) {
  Application.console.log("SYNC-TEST: " + message);
}

function sendToPage(doc) {
  doc.dispatchEvent(new CustomEvent("syncCommandToPage"));
}

function pageLoadSetup(event) {
  // This is the content document of the loaded page.
  let doc = event.originalTarget;

  if (doc instanceof HTMLDocument) {
    // If a frame within a tab was loaded, walk up to the root document.
    while (doc.defaultView.frameElement) {
      doc = doc.defaultView.frameElement.ownerDocument;
    }
    // The page's event listener is only added after the page has loaded, and
    // we don't want to trigger the event until the listener is registered.
    setTimeout(function() { sendToPage(doc); }, 1000);
  }
}

21st International Workshop on Security Protocols

March 20th, 2013

For the last couple of days I have been at the Security Protocols Workshop, which was conveniently located a short cycle ride away. I thoroughly enjoyed it and will definitely be coming back next year (hopefully with a short paper to present). I want to mention some of my favourite new (to me) ideas which were presented. I am only covering them briefly, so if something looks interesting go and read the paper (when it comes out at some unspecified point in the future, or find someone with a copy of the pre-proceedings).

Towards new security primitives based on hard AI problems – Bin B. Zhu, Jeff Yan

The core idea here is that if there are problems which computers can’t solve but humans can (e.g. 2D Captchas) then these can be used to let humans input their passwords in such a way that a computer trying to input passwords has no idea what password it is inputting (CaRP). This means that on each attempt the attacker gains nothing, because they don’t know what password they tried: they just sent a random selection of click events which the server then interpreted as a password using information that the attacker does not have without human assistance. This helps against online brute-force attacks, particularly distributed attacks, which are hard to solve with blacklisting without also locking the legitimate user out. It also helps as part of the ‘authentication is machine learning’ approach: accounts flagged as being used suspiciously can be required to log in using a CaRP, which requires human input and so mitigates automated attacks, in a similar way to requiring the use of a mobile number and sending it a text (though it is less strong than that, it does require less infrastructure). Additionally, I think that even if a particular Captcha scheme is broken, the process of breaking each instance will still be computationally intensive, and so this should still rate-limit the attacker.

Remote device attestation with bounded leakage of secrets – Jun Zhao, Virgil Gligor, Adrian Perrig, James Newsome

This is a neat idea: if the hardware of a device is controlled such that its output bandwidth is strictly limited, then it is still possible to be certain that the software on it has not been compromised, even if an attacker can install malware on it and has full control of the network. This works by having a large pool of secrets on the device, updated in a dependent way each epoch, such that there is not enough bandwidth in an epoch to leak enough data to reconstruct the pool of secrets outside the device. The verifier can then send the device a new program to fill its working RAM and request a MAC over the memory and secret storage. This MAC cannot be computed off the device, nor on the device without filling the RAM with the requested content, so when the MAC is returned the verifier knows the full contents of the device’s volatile state; if it was compromised, it no longer is.
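As a very rough sketch of the verifier’s final check, in Java, using HMAC-SHA256 and made-up names (the paper’s actual construction is more involved than this):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.security.GeneralSecurityException;
import java.security.MessageDigest;

// Hypothetical sketch: the device MACs the challenge program that now fills
// its RAM, keyed by its pool of secrets, and the verifier recomputes it.
public class Attest {
    /** What the device computes over its volatile state. */
    public static byte[] deviceMac(byte[] ramContents, byte[] secretPool) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secretPool, "HmacSHA256"));
            return mac.doFinal(ramContents);
        } catch (GeneralSecurityException e) {
            throw new AssertionError(e); // HmacSHA256 ships with every JRE
        }
    }

    /** The verifier recomputes the MAC from the program it sent and its own
     *  copy of the secret pool; a mismatch means the device's volatile state
     *  is not what it should be. */
    public static boolean verify(byte[] reported, byte[] program, byte[] secretPool) {
        return MessageDigest.isEqual(reported, deviceMac(program, secretPool));
    }
}
```

The bandwidth-limiting argument is what makes this meaningful: malware cannot have exfiltrated enough of the secret pool for anyone off the device to forge the MAC.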

Spraying Diffie-Hellman for secure key exchange in MANETs – Ariel Stulman, Jonathan Lahav, and Avraham Shmueli

This idea is for providing confidentiality of communication on mobile ad-hoc networks. Since the network is always changing and is comprised of many nodes, it is hard for an attacker to compromise all the nodes on all paths between two nodes which wish to communicate confidentially. The idea is to do Diffie-Hellman, but split the message into multiple pieces with a hash and send each piece via a different route to the recipient. If any one of those pieces gets through without being man-in-the-middled then the attack has failed, and in a randomly, dynamically changing network it is hard for an attacker to ensure that. It is not impossible, though, so a very careful analysis needs to be done to mitigate those risks in practice.
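The split-with-a-hash step might be sketched like this in Java (SHA-256 and the piece format are my own assumptions; the paper’s actual scheme may differ):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

// Hypothetical sketch: append a hash to the Diffie-Hellman public value,
// cut the result into pieces (one per route), and verify on reassembly.
public class Spray {
    private static MessageDigest sha256() {
        try {
            return MessageDigest.getInstance("SHA-256");
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError(e); // SHA-256 ships with every JVM
        }
    }

    /** Append a hash to the DH public value and cut the result into pieces. */
    public static byte[][] split(byte[] dhPublic, int pieces) {
        byte[] digest = sha256().digest(dhPublic);
        byte[] payload = Arrays.copyOf(dhPublic, dhPublic.length + digest.length);
        System.arraycopy(digest, 0, payload, dhPublic.length, digest.length);
        byte[][] out = new byte[pieces][];
        int chunk = (payload.length + pieces - 1) / pieces;
        for (int i = 0; i < pieces; i++) {
            int from = Math.min(i * chunk, payload.length);
            int to = Math.min(from + chunk, payload.length);
            out[i] = Arrays.copyOfRange(payload, from, to);
        }
        return out;
    }

    /** Reassemble and check the trailing hash; null means some piece was
     *  modified in transit, i.e. a man-in-the-middle was detected. */
    public static byte[] reassemble(byte[][] pieces) {
        int total = 0;
        for (byte[] p : pieces) total += p.length;
        byte[] payload = new byte[total];
        int off = 0;
        for (byte[] p : pieces) {
            System.arraycopy(p, 0, payload, off, p.length);
            off += p.length;
        }
        byte[] dhPublic = Arrays.copyOfRange(payload, 0, total - 32);
        byte[] digest = Arrays.copyOfRange(payload, total - 32, total);
        return MessageDigest.isEqual(sha256().digest(dhPublic), digest) ? dhPublic : null;
    }
}
```

Note the hash alone only detects tampering of individual pieces; the security argument rests on the attacker being unable to sit on every route at once.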

Layering authentication channels to provide covert communication – Mohammed H. Almeshekah, Mikhail Atallah, Eugene H. Spafford

The idea here is that some additional information can be put into the authentication information, such as typing <password> <code word> rather than just <password> in the password field, hence transmitting <code word> to the bank. This can have many meanings: e.g. three different code words for three levels of access (read-only, transactions, administrative) and one for coercion. I particularly liked the idea of being able to tell the bank ‘help, someone is coercing me to do this; make everything look normal but take steps to reverse things afterwards and please send the police’.
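Server-side, parsing the layered field might look something like this Java sketch (the outcome names, the bare-password behaviour and single-word code words are all illustrative assumptions, not the paper’s scheme):

```java
import java.util.Map;

// Hypothetical sketch of checking '<password> <code word>' in one field.
public class LayeredAuth {
    public enum Outcome { FAIL, DEFAULT, READ_ONLY, TRANSACTIONS, ADMIN, COERCION }

    private final String password;
    private final Map<String, Outcome> codeWords;

    public LayeredAuth(String password, Map<String, Outcome> codeWords) {
        this.password = password;
        this.codeWords = codeWords;
    }

    /** The submitted field is the password, optionally followed by a space
     *  and one of the account's (single-word) code words. */
    public Outcome check(String field) {
        if (field.equals(password)) {
            return Outcome.DEFAULT; // no code word: an ordinary login
        }
        int space = field.lastIndexOf(' ');
        if (space < 0 || !field.substring(0, space).equals(password)) {
            return Outcome.FAIL; // wrong password, or unrecognised format
        }
        // An unknown code word fails exactly like a wrong password, so an
        // attacker cannot probe which code words exist for an account.
        return codeWords.getOrDefault(field.substring(space + 1), Outcome.FAIL);
    }
}
```

The appeal is that the extra channel hides inside an ordinary-looking password prompt, so a coercer watching the screen sees nothing unusual.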


There were also lots of other interesting ideas some of which I had seen before in other contexts. I thought I made some useful contributions to discussions and so maybe this whole PhD in computer security thing might work out. There were some really friendly welcoming people there and I already knew a bunch of them as they were CL Security Group people.

Defence of the Union: Britain is better together

January 5th, 2013

In 2014 there will be a referendum in Scotland on whether Scotland should be an independent state and leave the Union. Frankly I find it ridiculous that the question is even being asked as the answer is so clearly no. Essentially nothing is gained that could not be gained by internal reorganisation within the UK and much is lost.

Personally I was born in Scotland and have lived slightly less than half my life there, the rest being spent in England and some of my great grandparents were Scottish. However I have always lived in Britain and always been British. I am one of the significant number of people who would need dual nationality if Scotland were to become independent because we simply do not fit into the ‘English’, ‘Welsh’ or ‘Scottish’ categories, only in ‘British’.

All the arguments I have heard in favour of independence which are valid such as those which have convinced a slim majority of Scottish Green Party members are not in fact relevant to the question of independence. Rather they relate to the debate on the localisation of different powers at different scales from national to local. Obviously the positioning of park benches should not be done by act of the UK parliament and NHS policy should not be determined individually on a ward level – there is an appropriate scale for different decisions to be made at. There is a very interesting debate on what should be decided at what scale and I think a great deal of room for improvement on this. However none of that is relevant to the question of Scottish independence – or if it is it is just as relevant to the question of independence for the Highlands.

The only issues relevant to the decision on whether Scotland should break the Union are ones which must be decided at the national level and could not be devolved to Scotland. Fundamentally the only issues which then apply are international ones; all domestic issues can be reorganised as we like, and the rest of the world does not need to know or care, but the interface we provide to the world is that of the nation.

So only international issues matter to the debate on independence, and an independent Scotland would leave both Scotland and the rest of the UK worse off in many different ways while not making things better in any way. Currently the UK punches above its weight in international affairs; Scotland would not gain that, and the rest of the UK would lose it. For example the UK has a permanent seat on the UN Security Council. This is justifiable for more than just historical reasons (Sierra Leone, Kosovo, Libya), but only tenuously, and without Scotland it would be hard to justify its continuing to have a seat. Currently the UK is big enough that when it is necessary for something to be done on the world stage (take action on climate change, stop genocide etc.) the UK can say ‘Well, we are going to do this, who is with us?’ We don’t have to persuade a whole bunch of countries to act in lockstep with us; we can lead[0]. Obviously we then need to persuade other countries to follow us, but it is possible to try to lead. I think it is easier to persuade people to follow if they can see that you mean it by your actions than when it can only be words because action is impossible without their help.

Similarly, within the EU the UK has a fair bit of influence (for all that David Cameron tries to throw that away). We will not gain any more by being two countries rather than one; Scotland would probably need to reapply for membership post-independence, and that might take a few years of sitting out in the cold. Currently when a country needs to take a lead on an issue the UK can do that. It is hard to see Scotland doing so to the same extent, and the rest of the UK’s hand would also be weakened.

A Union was made and formed Great Britain; whatever the perceived legitimacy, by current standards, of the people involved in making that Union, the fact remains that it was made. That was not a temporary treaty or a fair-weather thing. That was and is a permanent covenant: in sickness and in health, in good economic times and in bad, in peace and war, for all time and without end. As such it should not be lightly broken. I fail to see what the pressing issue is as to why Britain cannot continue as it is. Some bad things happened in the past, long before I was born; why does that even matter? The future is ours to decide and the past remains unchanging: whatever revenge is taken for past evil actions, they are not undone.

The breaking of a Union would also be a permanent and unalterable thing, not a decision to revisit in 10, 50 or 100 years if it does not work out but one made with finality for all time. While right now the world is a fairly safe place to be as a rich nation [1] that might not always be the case, it certainly has not always been the case. There are many reasons to be uncertain of where the world as a whole will be in 50, 100 or 300 years, let alone thousands of years. This is a decision which needs to be made considering such time-scales rather than just temporary political circumstances.

There have been times when we have stood together when we would not have been able to stand alone. There was a time, still just in living memory when the UK stood alone in Europe, a light against the darkness. Stood and lasted until others came to our aid but only by a very thin margin. Perhaps as allies we could have stood together and lasted, but perhaps disagreements and infighting would have weakened us and a darkness might have fallen across the world. For 300 years we have stood together, one nation against all adversities. Our soldiers have fought together against various foes, bled and died for us, for Britain as much for the mountains of Scotland and Wales as for the hills of England. Should we betray them?

This Union has been sealed with blood in more than one way: in those years people have moved freely between the two and married in each place, and there is no real division by race any more. Not that divisions by race really have any meaning any more. What does the colour of your skin matter, or where your great-great-grandparents came from? You are still human.

What then divides us? Not race, for there has been much movement between the two. Nor language, for British English is spoken in both and variation is greater within each than between them. Nor geography, for the border has been drawn at various places at different times. While different parts of the landscape of each are beautiful in different ways, there are places in both where it is hard to tell a Scottish hill from an English or Welsh one, and there is more difference between the Highlands and the Central Belt than between the Central Belt and other parts of England. Nor economics, for while the statistics might differ for Scotland as a whole from England as a whole, parts of Scotland match closely with parts of England. You will find places in both where manufacturing died, where tourism is the main industry, where there are high-tech companies or a strong service industry. Is all that divides us, then, old grudges, memories of past wrongs? Then know this: this is a fallen, broken world, and the mistakes made by countries and people are many and varied, and the depth of the evil that is committed knows few bounds. For life it is necessary to forgive, and to ask for forgiveness: to strive once again to build a better future out of the broken fragments of the past. Fundamentally we are better together, and long may we be so.

[0]: Iraq was a terrible illegal mistake but that was not our idea, we were following rather than leading. We also lack the courage to lead as we should on issues like Climate Change.

[1]: To a first approximation no one dies from terrorism in rich nations, our security services do a rather good job at stopping that sort of thing. We should try fixing our road collisions problem that kills many more people.

NHS IT policies that waste NHS money (and could easily be fixed)

January 3rd, 2013

Computer systems built for national scales are expensive – especially given the perverse incentives for previous and current government IT projects which practically guarantee that they will go over budget. However it is also important to remember that a computer system should make it easy and quick for a user to do what they need to do – it should not get in their way and slow them down – fundamentally the user’s time is paid for by the NHS (some of them at quite a high rate) and if they spend hours dealing with irrelevant trivialities of the computer systems they are using then that money is wasted.

Much NHS email goes via NHSmail. This imposes a 200MB quota for all users, which is tiny. Disk space is cheap, really cheap at the 2GB level, and it should be possible to offer 20GB per user without too much difficulty. So every user of NHSmail must periodically spend their valuable time deleting emails that are no longer vital. Occasionally they will make mistakes and delete emails that are actually important, potentially directly impacting patient care. This is just silly. I am guessing the order of magnitude of the cost of fixing this (by buying more servers) is X00,000, and that this would easily pay for itself in terms of increased efficiency across the NHS within a year.

The NHS systems also have a ridiculous policy of requiring users to change their passwords periodically. This is well known[0] to make security worse while providing no benefit: users pick worse passwords to make them easier to remember (and to break) and then increment numbers on the end or similar (which unfortunately makes them harder to remember due to within-list effects – people can’t remember which password they are on). So this is a policy that wastes staff time and makes security worse, and it should be fixable by someone unticking a few boxes marked ‘force users to change their passwords’ or similar. Unfortunately, various incompetent IT auditing agencies always tell organisations without periodic password-changing policies that they need to institute one – this is good grounds for firing the agency, as they clearly have no idea what they are doing.

[0]: ‘Although change regimes are employed to reduce the impact of an undetected security breach, our findings suggest they reduce the overall password security in an organization. Users required to change their passwords frequently produce less secure password content (because they have to be more memorable) and disclose their passwords more frequently. Many of the users felt forced into these circumventing procedures, which subsequently decreased their own security motivation. Ultimately, this produces a spiraling decline in users’ password behavior (“I cannot remember my password, I have to write it down, everyone knows it’s on a post-it in my drawer, so I might as well stick it on the screen and tell everyone who wants to know.”)’

Christmas Newsletter

December 26th, 2012

The following is the contents of my section of the family newsletter with added links:

This year I continued as a Research Assistant at the University of Cambridge before starting a PhD in encrypted cloud storage with the same supervisor in October. By the grace of God I have, I think, grown up quite a bit this year, which serves to demonstrate the distance still to go. I have done quite a bit of teaching over the year, ranging from Sunday School for 5–7 year olds in the summer to undergraduate supervising in the department and the student team at church during term time. All in all a wonderful year.

HTC Android phones – the perfect spying platform?

October 24th, 2012

I am reading “Systematic detection of capability leaks in stock Android smartphones” for the CL’s Mobile Security reading group.

I read “all the tested HTC phones export the RECORD AUDIO permission, which allows any untrusted app to specify which file to write recorded audio to without asking for the RECORD AUDIO permission.” and then went back and looked again at Table 3 and saw the other permissions that stock HTC Android images export to all other applications: these include location-access and camera permissions. The authors report that HTC was rather slow to respond when told about the problem. Hence stock HTC Legend, EVO 4G and Wildfire S phones are a really nice target for installing spy software, because it doesn’t have to ask for any permissions at all (it can pretend to be a harmless game) and yet can record where you go[0], what you say and, at least when your phone is not in your pocket, what you see.

This is probably more likely to be incompetence than a deliberate conspiracy but if they were trying to be evil it would look just like this.

On the plus side, Google’s Nexus range with stock images is much safer, and Google is rather better at responding promptly to security issues. Since my Android phone is one that Google gave our research group for doing research, I am fortunately safe.

I also particularly liked HTC’s addition of the FREEZE capability, which locks you out of the phone until you remove the battery: just perfect for when the attacker realises you are on to them, letting them do the last bit of being malicious without your being able to stop them.

End of being provocative. ;-)

[0] OK, so on the Wildfire S location information is implicitly rather than explicitly exported, so it is probably harder to get hold of.

Raspberry Pie

August 25th, 2012

In honour of the Raspberry Pi I wanted to make a Raspberry Pie. I tried to do this by looking up a recipe on the rPi plugged into the TV, but page loads were too slow (it is still running Debian Squeeze rather than Raspbian, so not taking advantage of the speed increases associated with that).
So I decided to just experiment and throw things together until they looked about right (the temporary absence of scales meant that being accurate was difficult). When you are making something yummy out of components which are all yummy there is only so far you can go wrong.
This produced the following:
A raspberry pie in a Pyrex dish

There was a little less pastry than would have been optimal, made using flour, unsalted butter and a little bit of water (cribbing from Delia’s instructions but without any accuracy). I left it in the fridge for well over the half an hour I had originally intended before rolling it out. It was cooked for ~10 minutes at 180℃ (it might have been better to leave it longer). I used two punnets of raspberries, most of which went in raw on top of the cooked pastry, but ~1/3 of a punnet went in with some sugar (mainly caster sugar but a little bit of soft brown, which deepened the colour), two heaped tablespoons of corn flour and a little bit of water. This was stirred vigorously on a hob, bubbling away, until it turned into a rather nice thick goo with all the bits of raspberry broken up (it looked very jam-like). That then got poured on top. I left it in the fridge overnight, as it was quite late by this point, and we ate most of it for lunch.

The only good pie chart, fraction of pie dish which looks like pacman, fraction which is pie.

Raspberry Pi Entropy server

August 23rd, 2012

The Raspberry Pi project is one of the more popular projects the Computer Lab is involved with at the moment, and all the incoming freshers are getting one.

One of the things I have been working on as a Research Assistant in the Digital Technology Group is improving the infrastructure we use for research; my current efforts include using Puppet to automate the configuration of our servers.

We have a number of servers which are VMs and hence can be a little short of entropy. One solution to a shortage of entropy is an ‘entropy key’: a little USB device which uses reverse-biased diodes to generate randomness, with a little ARM chip (ARM is something the CL is rather proud of) which does a pile of crypto and analysis to ensure that it is good randomness. As has been done before (with pretty graphs), this can then be fed to VMs, providing them with the randomness they want.
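To see whether a machine really is short of entropy, the kernel exposes the size of its available pool in /proc/sys/kernel/random/entropy_avail. A minimal sketch for checking it (the helper name is mine, and the file only exists on Linux):

```python
import os

def parse_entropy_avail(text: str) -> int:
    """Parse the contents of entropy_avail, which holds a bit count."""
    return int(text.strip())

POOL_FILE = "/proc/sys/kernel/random/entropy_avail"

if os.path.exists(POOL_FILE):  # Linux only
    with open(POOL_FILE) as f:
        # A persistently low figure (a few hundred bits) suggests the VM
        # would benefit from an external entropy feed.
        print(f"kernel entropy pool: {parse_entropy_avail(f.read())} bits")
```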

My solution to the need for some physical hardware to host the entropy key was a Raspberry Pi, because I don’t need very much compute power and dedicated hardware means it is less likely to get randomly reinstalled. A rPi can be thought of as the hardware equivalent of a small VM.

Unboxed Raspberry Pi with entropy key

I got the rPi from Rob Mullins by taking a short walk down the corridor on the condition that there be photos. One of the interesting things about using rPis for servers is that the cost of the hardware is negligible in comparison with the cost of connecting that hardware to the network and configuring it.

The Raspberry Pi with entropy key temporarily installed in a wiring closet

The rPi is now happily serving entropy to various VMs from the back of a shelf in one of the racks in a server room (not the one shown, we had to move it elsewhere).

Initially it was serving entropy in the clear via the EGD protocol over TCP. Clearly this is rather bad as observable entropy doesn’t really gain you anything (and might lose you everything). Hence it was necessary to use crypto to protect the transport from the rPi to the VMs.
This is managed by the dtg::entropy, dtg::entropy::host and dtg::entropy::client classes, which generate the relevant config for egd-linux and stunnel.

This generates an egd-client.conf which looks like this:

; This stunnel config is managed by Puppet.

sslVersion = TLSv1
client = yes

setuid = egd-client
setgid = egd-client
pid = /egd-client.pid
chroot = /var/lib/stunnel4/egd-client

socket = l:TCP_NODELAY=1
socket = r:TCP_NODELAY=1
TIMEOUTclose = 0

debug = 0
output = /egd-client.log

verify = 3

CAfile = /usr/local/share/ssl/cafile

[egd-client]
accept = 7777
connect = entropy.dtg.cl.cam.ac.uk:7776

And a host config like:

; This stunnel config is managed by Puppet.

sslVersion = TLSv1

setuid = egd-host
setgid = egd-host
pid = /egd-host.pid
chroot = /var/lib/stunnel4/egd-host

socket = l:TCP_NODELAY=1
socket = r:TCP_NODELAY=1
TIMEOUTclose = 0

debug = 0
output = /egd-host.log

cert = /root/puppet/ssl/stunnel.pem
key = /root/puppet/ssl/stunnel.pem
CAfile = /usr/local/share/ssl/cafile

[egd-host]
accept = 7776
connect = 777
cert = /root/puppet/ssl/stunnel.pem
key = /root/puppet/ssl/stunnel.pem
CAfile = /usr/local/share/ssl/cafile

Getting that right was somewhat tedious due to the defaults not working well together.
openssl s_client -connect entropy.dtg.cl.cam.ac.uk:7776
and a Python EGD client were useful for debugging. In the version of Debian in Raspbian, the stunnel binary points to an stunnel3 compatibility script wrapped around the actual stunnel4 binary, which resulted in much confusion when trying to run stunnel manually.