Raising Hell: Cracking COVIDSafe: Part 4: Spooks In The Machine
In which we start getting results...
The first application to come back with any answers was the one I had made to the Department of Home Affairs asking for documents relating to Atlassian’s involvement in the development of the COVIDSafe app.
The result was curious. It may have been established on the record that the tech giant had volunteered its services as a good corporate citizen, but when it came down to it, Home Affairs said they had no records on file:
Naturally, I was confused. Not only was it known Atlassian had been somehow involved, but members of the open source community had dug into the code to find evidence the COVIDSafe app had originally been branded “COVIDCare”.
When I followed this up with Atlassian, the company’s media representative pointed to an article published in InnovationAus that recorded the involvement of Atlassian and a speech where Prime Minister Scott Morrison thanked the company for its contribution in setting up a WhatsApp chatline on another project. When I asked for specifics of the company’s involvement, the media spokesperson declined to comment, saying any public comment about the app was “driven by the gov [sic] not us”.
So I followed up yet again, asking if that meant the company had not been made aware by the government that it would be identified publicly as having been involved. I also asked whether Atlassian would be willing to confirm the start date and length of time of its involvement. I never heard back — a curious outcome from an organisation that lists “open company, no bullshit” as its top corporate value.
Quality Control
The next application to come back was my request for documents relating to the cyber security firm Ionize:
Founded in 2008 by Andrew Muller, Ionize is a cyber security firm that ordinarily deals with the Department of Defence. Depending on the contract, its remit runs from security assessments to training seminars. In the development of the COVIDSafe app, the company would play a bit part — though an integral one. During development, Ionize would be handed $44,000 to run the penetration testing on the app. In other words, its job was to actively seek out vulnerabilities in the code ahead of time. Given the raft of security errors uncovered since the release of the app, it made sense to go after the report written by those responsible for quality control.
When the decision came back on application FOI 200/2020 on 3 September 2020, it was illustrative for how it showed that the potential for public embarrassment was high on the government’s priority list:
This sure was something: of all the things to consider during quality control, the government had stressed the importance of “reputational damage”.
If there was anything else to be gleaned from the report, it was impossible to know: the rest had been heavily redacted. By my count, an 18-page report had seven full pages blocked out, and enough of the rest redacted to make it virtually unreadable.
Ordinarily, this might have been a source of frustration, but this time it was the absence of information that was interesting. In redacting the pages, the decision maker relied on Section 7(2A) of the Freedom of Information Act 1982 (Cth) to partially refuse access to whole strips of the document:
In simple terms, this allows an agency to refuse any request for material made by, or sourced from, an intelligence agency. Ionize may not have been an intelligence agency, but evidently it was collaborating with one — which was interesting. It was no secret the nation’s cyber spies had been involved in building the app — particularly in testing it for flaws. Here’s DTA chief Randall Brugeaud speaking about it during a Senate estimates hearing in October 2020:
What was curious about it all was the way in which the involvement of these organisations was presented to the Australian public at launch.
Selling COVIDSafe
When the idea for an automated contact tracing app was first floated, the public immediately responded with a slew of doubts over whether the government could actually build one well. Putting aside the crazed 5G conspiracy theorists, the public’s concern boiled down to whether the app would work as intended. More specifically, the question was whether government could be trusted to collect and handle the information with respect.
To soothe this public mistrust — which had a genuine basis in reality given the government’s handling of past projects — a marketing campaign began. Celebrities announced they would “download the app” and various experts went public to explain how it was “benign”. One editorial in the Sydney Morning Herald from Stephen Wilson, an independent privacy expert, said it was “pretty innocuous”, writing:
“All these criticisms are valid. But it saddens me to see respected privacy advocates rehashing entrenched positions at a time like this. There’s very little wrong with the app itself, but people resent it because they resent the government. Yet I don’t see how we can afford that luxury right now.”
Wilson’s defence of a government with a horrid history on digital projects — from robodebt to electronic voting software — would prove wrong on many fronts. He was not alone, however. The ABC ran another story about the work the “independent” Cyber Security Cooperative Research Centre (CSCRC) was doing after it had “offered” to stress test the app for the government.
Over three days, the CSCRC had tested the app along with sixteen other “world-class” experts. The organisation’s chief executive, Rachael Falk, told the ABC how she had personally been using the app and had found nothing of note about the experience.
"There is always a lot of noise around anything that has to do with a commonwealth data application,” Falk said. "I come from a position of fact, so I can talk about what I've seen so far. And so far, I'm comfortable with what I've seen."
Falk later repeated the sentiment in an editorial published by The Australian in which she wrote:
Because when it comes to the application of tracing technology like this, we all agree there must be robust structures in place to ensure the privacy and security of citizens are protected.
That is why the Cyber Security Cooperative Research Centre, together with Data61, led an independent review of the app before its release with a team of 17 cyber experts from across the country collaborating to test it, analyse it and poke holes in it.
They found that COVIDSafe was secure and operated as described by government. And, in line with our democratic principles, it is opt-in. Those who do not wish to download it are under no obligation to do so.
If there was ever any question about what the CSCRC was and what it did, the information was there in the public domain. The organisation was set up with $50 million in government funding in 2017 under the Cooperative Research Centres program. It launched on 5 April 2018 as part of a wider call in the 2016 Cyber Security Strategy to foster a closer working relationship between researchers in the area and the business community. The Cooperative Research Centres program itself had been running since the Hawke government and worked by connecting university researchers with industry, where one — or preferably more — business partners could be found to match whatever federal government funding was being provided.
The composition of the organisation’s board reflected its origin story. On the one hand, there were current and former representatives from every major Australian intelligence agency. These included David Irvine, a former diplomat who had served at the head of both ASIO and ASIS, and Rachel Noble, the current head of the Australian Signals Directorate (ASD). On the other were heavy-hitters from the Australian business community, including Jennifer Westacott, chief executive of the Business Council of Australia, and John M Green, deputy chairman of QBE Insurance and former executive director of the Macquarie Group.
When contacted, a spokesperson for the CSCRC explained in a statement that the organisation has always been transparent about its relationships.
“The CSCRC’s Board oversees the CSCRC’s governance and does not influence its research activities,” the statement read. “The review of the COVIDSafe app was led by the CSCRC Research Director and Data61. The report was never shared with the CSCRC board.”
Data61 is the big data and analytics division of the CSIRO.
On the question of why the various errors and security issues were missed, they explained:
“Prior to the COVIDSafe app's launch on 26 April, a group of 17 researchers led by the CSCRC and Data61 analysed and tested parts of the app's code and application over a three-day period. A technical assessment based on what the CSCRC team was able to access was then provided to the Australian Signals Directorate (ASD), noting certain aspects were not able to be investigated due to the app's evolving code base. As the ASD was the responsible agency for providing the overall security advice to the DTA with respect to the app, it is better placed to respond to this question.”
What this offered was a little more clarity on how the app was vetted for security issues, and it was none too surprising given Australia’s broader approach to cyber security.
Pride Cometh
Truly independent cryptographic and cyber security research may not be banned outright in Australia — despite a failed attempt by Attorney-General George Brandis to do just that in 2016 — but it is certainly not encouraged. Where researchers have sought to find and fix flaws in encrypted data or software, they have typically seen their work undermined, their job security questioned and their efforts met with threats of jail time.
If there is a balance to be struck between control and security, the Australian government has consistently erred on the side of control.
The reason? In a word: optics.
Traditionally, the only thing an ambitious politician or public sector bureaucrat fears more than being contacted directly by a member of the public — or the media — is a “black eye” when a mistake goes public. The rule of thumb for dealing with these issues is simple: if there is an error, let it be handled in house. Failing that, better it go unnoticed and unaddressed than anyone know there was an error in the first place.
COVIDSafe offers a working demonstration of this. By keeping its development “in house”, key questions about design, approach and issues of quality control were handled by “our people”. Any anxiety among officials was eased by the comforting knowledge — bordering on a certainty — that “our people” were the “best in the world”. Once the app was vetted and out the door, everyone just assumed any errors would be trivial.
Based on everything known to date, it does not appear that anyone has reckoned with the uncomfortable reality that even the best have their limits. It is fair to say many of the errors that shipped with COVIDSafe’s release were trivial — some so trivial they should never have been allowed through. When Jim Mussared pulled apart the app on the first night of its release, he found the developers had not just imported the code from Singapore, warts and all, but their tinkering had actually made it worse.
"One early issue was the advertising payloads,” Mussared said. “In order for devices to find each other, they have to advertise by sending periodic radio announcements. These advertisements should be identical for anyone running COVIDSafe, so no one should be able to be used to track your phone in a crowd of other phones or to re-identify your phone later. And because your phone's address changes regularly, your phone isn't long-term trackable.”
“However, whoever designed it — for complicated but not very good reasons — included some additional bytes in this advertising payload that are supposed to change every time the address changes, but it turned out they were just set once when the program first started. That means you had a unique code being broadcast for days — or weeks — at a time. This is basic from a security review perspective.”
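To make the flaw concrete, here is a minimal sketch of the pattern Mussared describes. It is invented for illustration and is not taken from COVIDSafe's source: the Phone class, the field names and the rotation logic are all hypothetical.

```python
import secrets

class Phone:
    """Simulates a phone broadcasting Bluetooth advertisements."""

    def __init__(self):
        # The bug: these extra payload bytes are generated once at app
        # start-up, instead of being regenerated with each address rotation.
        self.static_payload = secrets.token_hex(4)

    def advertisement(self):
        # Bluetooth privacy design: the address rotates regularly, so the
        # address alone can't be used to follow a phone around.
        rotating_address = secrets.token_hex(6)
        return {"address": rotating_address, "payload": self.static_payload}

phone = Phone()
first = phone.advertisement()
later = phone.advertisement()  # the address has rotated by now...

assert first["address"] != later["address"]  # rotation works as designed
assert first["payload"] == later["payload"]  # ...but the payload links them

# An observer logging advertisements can match the static payload across
# address rotations, re-identifying the same phone for days at a time.
```

The fix, as Mussared describes it, amounts to regenerating those extra bytes whenever the address rotates, so that no stable identifier survives a rotation.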
Though these errors were eventually patched out, their effects lingered. Months after release, yet another bug was found on Android phones, where the app didn’t automatically update with each new patch. While this has since been fixed, it meant users were left vulnerable for weeks — or months — even as the DTA promised the issues had been taken care of.
Since the app relied on Bluetooth technology — a highly specialised, niche area of technological expertise — even the best in the world could be forgiven for missing something that fell outside their range of experience. This, however, is exactly why the decision to keep the whole development in a closed loop undermined basic security in the long run: the flaws were still found eventually, just by outsiders.
The problem has also been made worse by the lack of an official process allowing the public to safely report bugs they find in government software to the appropriate authority — on COVIDSafe or any other project. When members of the open source community did attempt to report the issues they discovered with the app, they reported being ignored, dismissed, insulted and finally chastised when forced to go to the media to make the issues known. Had they not, those issues might never have been addressed.
None of this is original insight, either. It is basically the entire plot of the 1996 science fiction blockbuster Independence Day.
Those in the cyber security world tend to answer most of these criticisms by saying the COVIDSafe app was always “low stakes”. A consistent talking point, repeated both at the time of release and since, is that the average person shares more information about themselves through their Instagram account. Though true, the matter is one of perspective. For a woman whose abusive, tech-savvy ex-boyfriend is looking for ways to monitor her phone, questions of security and privacy are real and immediate. It is why there is a whole government-funded safe-phones program for women in domestic violence situations.
In this way, something small and seemingly insignificant like the COVIDSafe app can end up saying much about bigger things: if this is how we treat the little things, what might it say about the stuff that’s supposed to really matter?
Cracking COVIDSafe is a feature series made in association with Electronic Frontiers Australia. It aims to highlight the importance of Freedom of Information as an essential tool for holding government to account while helping to teach people about the process so they can do it themselves.
The journalism published by Raising Hell will always be free and open to the public, but feature series like these are only made possible by the generous subscribers who pay to support my work. Your money goes towards helping me pay my bills and covering the cost of FOI applications, books and other research materials. If you like what you see share, retweet or tell a friend. Every little bit helps.