Raising Hell: Cracking COVIDSafe: In Apps We Trust
In which a cryptographer explains how it should be done...
Vanessa Teague is something of a rock star in the field of cybersecurity. As a cryptographer, Teague has in recent years shown a knack for finding not only the problems in software that other people miss, but the sorts of problems that scare governments enough to threaten her with prosecution. Her past work includes showing how anonymised public medical data can be re-identified to reveal the people in the dataset, and how flaws in e-voting software leave it vulnerable to manipulation.
All of this gives Teague a certain authority when she talks about the need for good privacy, security and democratic decision making during the design phase of government tech projects. I was lucky enough to get Vanessa on the phone for half an hour one Monday morning to unpack some of the broader legal issues facing cybersecurity research.
What follows is a transcript of our conversation about how efforts to control research in the area often work to undermine security in the long run. The transcript has been edited for clarity and length.
Royce Kurmelovs: Having written about the COVIDSafe app, the thing that’s struck me is the way COVIDSafe serves as a little window into all the social forces that made it possible. There’s this sense that something so small says so much about how we think about and carry out this kind of work. With that in mind, I’d like to start with how you see things.
Vanessa Teague: I think you hit the nail on the head. I don’t think COVIDSafe is bad in principle; I think it is indicative of problems across other areas of the Australian government’s approach to cybersecurity and IT.
It’s pretty clear at this point that, from the start, they went in thinking public embarrassment was the worst thing that could possibly happen. That is wrong, and it’s what led the government into trouble in the first place. What they should have done is be open much earlier; instead, they kept it in-house. We know the COVIDSafe app was in the pipeline months before any information about its design was publicly available.
As has been done around the world, there are two different ways of doing a contact tracing app. The app collects data; this is not location data, by the way, but data that says who you were near. If you test positive, the app doesn’t say where you were, it says who you were with. You can either collect this data in a way that builds a massive central government database, or you can do it in a way that does not create one. The second is the decentralised way, in which your phone does the work of checking whether you have potentially been exposed, and no database of who has been near whom is ever collected. I don’t think it’s a coincidence that we settled on the way that creates a central database.
“You can either collect this data in a way that builds a massive central government database, or you can do it in a way that does not create one.”
So we also know at this point that the government chose to develop the app in a way that builds a big central database, and that they made that decision before there were any discussions or public engagement. Had we been asked, most of us working in cybersecurity research could have told them a decentralised model based on the Google-Apple API was the way to go. Even if they didn’t do that, there were people in the open source community who could have told them there was a better way to handle the Bluetooth side. Bluetooth is a surprisingly complex technology; when I looked at this, I learned things about it that I didn’t know before.
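For readers unfamiliar with the distinction Teague is drawing, below is a minimal illustrative sketch, loosely in the spirit of the decentralised Google-Apple approach she mentions. Every name and parameter in it is a simplifying assumption rather than the real protocol; the point is only that the matching happens on the phone, so no central database of who met whom is ever assembled.

```python
# Hypothetical sketch of decentralised exposure notification (not the real
# Google-Apple protocol; key sizes, schedules and derivation are simplified).
import hashlib
import os

def daily_key() -> bytes:
    """Each phone generates a fresh random key per day. It never leaves the
    device unless the user tests positive and chooses to upload it."""
    return os.urandom(16)

def rolling_ids(day_key: bytes, intervals: int = 144) -> list[bytes]:
    """Derive short-lived Bluetooth identifiers from the daily key. Nearby
    phones record these; on their own they cannot be linked to a person."""
    return [hashlib.sha256(day_key + i.to_bytes(2, "big")).digest()[:16]
            for i in range(intervals)]

def exposure_check(heard: set[bytes], published_keys: list[bytes]) -> bool:
    """Runs locally on the phone: expand the daily keys published by confirmed
    cases and intersect them with the identifiers this phone overheard."""
    return any(heard & set(rolling_ids(key)) for key in published_keys)

# Example: this phone overheard one identifier from a person who later tested
# positive. The health authority publishes that person's daily keys, and the
# match happens entirely on-device.
case_key = daily_key()
overheard = {rolling_ids(case_key)[42]}
print(exposure_check(overheard, [case_key]))  # True
```

In the centralised model COVIDSafe chose, the equivalent of that check runs on a government server against uploaded encounter logs, which is exactly what creates the database Teague describes.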
Instead, what the government decided to do was drop the app on the public in a massive marketing push, after building it in a way that was quite different to other countries like the UK. There, a much more democratic decision making process went into the app’s design. The first version of the British app was very, very similar to COVIDSafe, but the Brits had a much more transparent and careful process to explain what they were doing and roll it out slowly, so they made better decisions in the long run. They put out a set of cryptographic specifications, ran a controlled test on a small island and carefully looked at its performance, and when they found the app didn’t work as intended, they switched to the decentralised model.
Here in Australia, we did what Australians always do, which is design it behind closed doors without talking to anyone, in the hope we can make it perfect.
RK: What is the state of independent cybersecurity research in Australia?
VT: Threatened. The state of cybersecurity research in Australia is threatened. It’s very hard to do, particularly cryptography research, without crossing the lines into what may be considered illegal. There’s a lot of legislation that makes security research that would be legal in other democracies illegal in Australia, or at the very least leaves it ambiguous.
There is a long tradition here of locking discussions about how to design and deliver public goods behind closed doors, in a way that is not done in other parts of the world. This is true for technologies like the software used to count senate votes in federal elections, or the software used for internet voting in New South Wales. It is also true for COVIDSafe.
“This is the irony of COVIDSafe. It’s bad, but it is also less appalling than just about any other federal government software project, like DigitalID and electronic voting.”
There are a few key pieces of legislation that have worked to close off independent research in this area. The Defence Trade Controls Act amendment was the first thing that changed my behaviour, due to the risk of imprisonment. That amendment makes it a crime to export encryption-related goods, services or ideas without a permit from the military, which causes considerable trouble both for academic cryptography research and for people trying to write software in this area. The second thing that happened was the Re-Identification Amendment, George Brandis’s threatened ban on the re-identification of data the government has been stupid enough to release publicly. It never passed, but it served as a threat, and a very effective one. And the third is the Telecommunications and Other Legislation Amendment (Assistance and Access) Act, which allows intelligence agencies to demand assistance from a ‘person with knowledge of a computer system’ and threaten people with jail if they refuse.
RK: Why is there no formal, organised way for people to report errors as they find them?
VT: There should be. To be fair, the Australian Cyber Security Centre is trying to set up these sorts of processes, so there is hope that things might improve, but currently there’s no good system in place. In the case of COVIDSafe, because the code has been made public, people have the opportunity to write comments on the app’s GitHub repository when they find something wrong, but there is no formal way to actually get issues addressed.
This is the irony of COVIDSafe. It’s bad, but it is also less appalling than just about any other federal government software project, like DigitalID and electronic voting. What made COVIDSafe different, I think, is that it had to be voluntary. They couldn’t force people to download it, so they had to encourage them to. To do this, the government took the transparency path for the first time ever, but they took a couple of steps and no further. They put the code out there, but they didn’t offer any way to interact with the institutions that created it. You can’t put in a request to get the code fixed, and there’s no explanation of what you should do if you find an error. There’s an attempt to do the right thing, because they felt the need to get people to like the app and download it, but they didn’t do all the extra things they needed to do in terms of a structured vulnerability disclosure program or a way to fix bugs. And they still haven’t. You still can’t fix the server code, because that’s also secret.
RK: You’ve seen the documents released under FOI from Ionize that show the government was very concerned about the potential for public embarrassment. What’s your take?
VT: Well, that priority clearly wasn’t very well managed. It would have been better to engage in an open public discussion earlier, so the people who knew about this area could offer input and create a better product in the long run. There’s a misconception among these agencies that secrecy will make stuff secure, when in fact they’d be much better off having a community of open review to help them fix stuff, especially in situations where the technology will be open for the public to analyse anyway. The problem is the government consistently tries to buy public trust through secrecy, and it doesn’t work. It would be much better to earn public trust through transparency a long time in advance, and so build a better product.
“While COVIDSafe is an example of all this, it’s still less of a disaster than other projects. […] It’s just that with COVIDSafe people have the option of not using it, and people are asking questions about how well it works, and realising there aren’t any answers.”
To be fair, what they did with COVIDSafe was very hard. Looking at it afterwards, I learnt a whole bunch of things about Bluetooth I didn’t know; it needed a whole lot of expertise. That’s another thing, too. There’s obviously a certain set of expertise inside the government, but this app, being a very new and very different thing no one had ever done before, needed a set of skills that are available in the broader community but not behind the wall.
The other issues you might raise, about incentives and outsourcing, are the converse of that, and COVIDSafe is a really good example. We never had a real debate in Australia about whether we should switch to the Google-Apple API. Those decisions were held within the bureaucracy and within the companies the work was outsourced to. We still don’t have good information about how well COVIDSafe is working. So there’s definitely something going wrong with the way we make these decisions democratically.
While COVIDSafe is an example of all this, it’s still less of a disaster than other projects, like My Health Record and DigitalID. It’s just that with COVIDSafe people have the option of not using it, and people are asking questions about how well it works, and realising there aren’t any answers.
RK: Could you talk a little about your work elsewhere, such as on electronic voting in New South Wales?
VT: Yeah, so the software there was called iVote. A multinational e-voting software company, which has since gone into liquidation, sold its code for handling electronic voting to both New South Wales and Switzerland.
Now, the situation in New South Wales is that it’s a crime to share the code without the permission of the electoral commissioner. You could not get access to the source code to check it without signing a non-disclosure agreement promising not to tell anyone what you found for five years. In Switzerland, however, the opposite is true: they put the code out for public inspection six months before use. This was a long time ago now, in early 2019, and they have since paused the roll-out of this software because we discovered several serious cryptographic errors. At the same time Switzerland was doing this, NSW was already using the software. Authorities here didn’t bother to learn how broken it was ahead of time. Later, when they read our work, which found two big security issues with the software, they claimed to have fixed one of the issues and that the other was irrelevant. Since then we’ve been able to look at some of the code used in NSW and found that actually the second issue is very relevant.
“The code for counting the federal senate vote is perhaps the most important bit of software in the country. When you vote on paper, the process of interpreting the numbers on the ballots and transforming them into a digital record is done through software. Who’s checked that software? Nobody!”
This, by the way, is the internet voting system that counts about five percent of the vote in New South Wales. The public reaction? Nobody cares. There was a bigger public response in Switzerland. To be outraged would be very un-Swiss, but the Swiss really cared. Here? Nobody could care less. And keep in mind this was a problem we found out about more or less by coincidence, because the vendor sold the same code to Switzerland and New South Wales, and Switzerland had a transparency law. That’s how we know about it. There’s other code we know about but can’t get access to. For example, the code for counting the federal senate vote is perhaps the most important bit of software in the country. When you vote on paper, the process of interpreting the numbers on the ballots and transforming them into a digital record is done through software. Who’s checked that software? Nobody! We’ve never been able to look at the senate counting software, and yet everybody trusts it. Most people don’t even think about it. There’s a big black box between Australian citizens filling out their ballots and senators taking their seats in parliament, and what happens in between is completely unknown.
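To make that black box a little more concrete, here is a minimal, hypothetical sketch of the kind of step being described: interpreting the scanned preferences on a ballot and tallying them. None of this is the actual senate counting code, which has never been published; the formality rules here are simplified, and the real count uses the full single transferable vote algorithm. The point is that a bug anywhere in software like this silently changes the count, which is why unexamined code is a problem.

```python
# Toy illustration of digitising and tallying ballots (simplified assumptions;
# not the real senate counting software, which is not public).
from collections import Counter

def digitise(raw_ballot: str) -> list[int] | None:
    """Interpret the numbers from one scanned ballot. Returns the preference
    order, or None if the ballot is informal."""
    try:
        prefs = [int(x) for x in raw_ballot.split(",")]
    except ValueError:
        return None  # unreadable mark; a real system needs human review here
    # Simplified formality rule: candidates numbered 1..n, no gaps or repeats.
    if sorted(prefs) != list(range(1, len(prefs) + 1)):
        return None
    return prefs

def first_preference_tally(raw_ballots: list[str], candidates: list[str]) -> Counter:
    """Count first preferences only (the real count is far more involved).
    Any bug in digitise() or here silently changes the result."""
    tally: Counter = Counter()
    for raw in raw_ballots:
        prefs = digitise(raw)
        if prefs is None:
            continue  # informal ballots are excluded
        tally[candidates[prefs.index(1)]] += 1
    return tally

ballots = ["1,2,3", "2,1,3", "1,3,2", "3,3,1"]  # the last is informal (repeat)
print(first_preference_tally(ballots, ["Alice", "Bob", "Carol"]))
# Counter({'Alice': 2, 'Bob': 1})
```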
That’s different to COVIDSafe, but the point is that keeping stuff secret and telling the public to “just trust us” is not unique to COVIDSafe. It’s indicative of the wider approach of government to cybersecurity and it’s wrong.
RK: For the sake of clarity, how should we think about the role of intelligence agencies in the development of the app?
VT: To be fair, if you want anyone in the Australian government to do cryptographic work, it’s much better to get it done by the Australian Signals Directorate (ASD). I’ve seen it done by people outside the ASD, and trust me, you want it done by the ASD; there’s no comparison. The technical competence in those agencies is very high. In some ways, it’s good they’re involved. In my experience, people within those agencies who understand the technical issues are much more sensible and reasonable about learning about bugs and getting them fixed than people in other parts of government who lack technical understanding.
The issue is that the involvement of this technical expertise is not a substitute for broad, open public scrutiny and democratic decision making about how it gets done. There’s nothing wrong with getting intelligence agencies to check it out, but that’s not a substitute for opening it up to the public. Otherwise you’re saying to the public: “Don’t worry about it, we had these experts check it out, you don’t need to see it.”
That’s nonsense. The way to do it is to say: “Hey, so we had these experts check it out, have a look for yourself.”
That’s how you actually build trust.
Cracking COVIDSafe is a feature series made in association with Electronic Frontiers Australia. It aims to highlight the importance of Freedom of Information as an essential tool for holding government to account while helping to teach people about the process so they can do it themselves.
The journalism published by Raising Hell will always be free and open to the public, but feature series like these are only made possible by the generous subscribers who pay to support my work. Your money goes towards helping me pay my bills and covering the cost of FOI applications, books and other research materials. If you like what you see, share, retweet or tell a friend. Every little bit helps.