Wednesday, 20 May 2020

Contact Tracing Apps Don't Work Very Well

The tension between privacy and the public interest is an old one, so it is no surprise to see the question surface again with respect to covid-19 contact-tracing apps.  

Proponents start by postulating that almost everyone has a mobile phone with Bluetooth capability.  In fact, not everyone has a mobile phone that can run apps (such devices are expensive).  Personal values bear on this too: even if an app had magical covid-prevention superpowers, not everyone would install it.  Indeed, not everyone would even be open to dialog.  

But let's set that thought to the side and just assume that in fact everyone has a suitable device and agrees to run the app.  Given this assumption, one can configure the phone to emit Bluetooth chirps within a 2m radius (achieved by limiting the signal power).  Chirps are just random numbers, not encrypted identifiers.  Each phone maintains a secure on-board record of chirps it generated, and those that it heard.  On this we can build a primitive contact-tracing structure.
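
To make this concrete, here is a minimal sketch (in Python) of the on-phone bookkeeping.  The 128-bit identifier size and the 15-minute rotation interval are my own illustrative assumptions, not part of any particular proposal.

```python
import os
import time

CHIRP_BYTES = 16          # a chirp is just a 128-bit random number
ROTATION_SECONDS = 900    # assume the phone picks a fresh chirp every 15 minutes

class ChirpLog:
    """Keeps the phone's private record of chirps it broadcast and chirps it heard."""

    def __init__(self):
        self.sent = []            # (timestamp, chirp) pairs this phone broadcast
        self.heard = []           # (timestamp, chirp) pairs overheard from nearby phones
        self._current = None
        self._current_since = 0.0

    def current_chirp(self, now=None):
        """Return the chirp to broadcast right now, rotating it periodically."""
        now = time.time() if now is None else now
        if self._current is None or now - self._current_since > ROTATION_SECONDS:
            self._current = os.urandom(CHIRP_BYTES)
            self._current_since = now
            self.sent.append((now, self._current))
        return self._current

    def record_heard(self, chirp, now=None):
        """Record a chirp overheard from a nearby phone."""
        now = time.time() if now is None else now
        self.heard.append((now, chirp))
```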

Suppose that someone becomes ill.  The infected user would upload chirps the phone sent during the past 2 weeks into an anonymous database hosted by the health authority.  This step requires a permission code provided by the health authority, intended to block malicious users from undertaking a form of DDoS exploit.  In some proposals, the phone would also upload chirps it heard:  Bluetooth isn't perfect, hence "I heard you" could be useful as a form of redundancy.   The explicit permission step could be an issue: a person with a 104.6 fever who feels like she was hit by a cement truck might not be in shape to do much of anything.  But let's just move on.  
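
Setting the human-factors problem aside, the upload itself is mechanically simple.  Here is a hedged sketch: the endpoint URL, the payload fields, and the two-week cutoff are invented for illustration, and real frameworks differ in their details.

```python
import json
import time
import urllib.request

def upload_positive_chirps(sent_chirps, permission_code,
                           url="https://health-authority.example/report"):
    """Upload the last 14 days of chirps this phone broadcast.

    sent_chirps: list of (timestamp, chirp_bytes) pairs.
    permission_code: one-time code issued by the health authority, which the
    server checks before accepting the upload (to block junk submissions).
    """
    cutoff = time.time() - 14 * 24 * 3600
    payload = {
        "permission_code": permission_code,
        "chirps": [chirp.hex() for (t, chirp) in sent_chirps if t >= cutoff],
    }
    request = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(request)
```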

The next task is to inform people that they may have been exposed.  For this, we introduce a query mechanism.  At some frequency, each phone sends the database a filtering query encoding chirps it heard (for example, it could compute a Bloom filter and pass it to the database as an argument to its query).  The database uses the filter to select chirps that might match ones the phone actually heard.  The filter doesn't need to be overly precise: we do want the response sent to the phone to include all the infected chirps it might have heard, but it is actually desirable for the response to also include some chirps the phone wasn't asking about.  Then, as a last step, the phone checks to see whether it actually did hear (or emit) any of these.  
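
Here is one way that round-trip could look, again as an illustrative sketch rather than any deployed protocol; the filter parameters are arbitrary.  Note how false positives in the filter are harmless: they merely pad the response, since the phone performs the exact check locally.

```python
import hashlib

class BloomFilter:
    """A small Bloom filter over chirps (byte strings)."""

    def __init__(self, num_bits=4096, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, item):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest[:4], "big") % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

def server_candidates(infected_chirps, query_filter):
    """Server side: return every infected chirp the filter might contain (a superset)."""
    return [c for c in infected_chirps if query_filter.might_contain(c)]

def phone_matches(heard_chirps, candidates):
    """Phone side: the exact check, done locally against chirps actually heard."""
    return set(candidates) & set(heard_chirps)
```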

Why did we want the database to always send a non-empty list?  Well, if every response includes a set of chirps, the mere fact of a non-empty response reveals nothing.  Indeed, we might even pad the response to some constant size!

Next, assume that your phone discovers some actual matches.  It takes time in close proximity to become infected.  Thus, we would want to know whether there was just a brief exposure, as opposed to an extended period of contact.  A problematic contact might be something like this: "On Friday afternoon your phone detected a close exposure over a ten-minute period", meaning that it received positive chirps at a strong Bluetooth power level.  The time and signal-strength thresholds are parameters, set using epidemiological models that balance the risk of infection against the risk of false positives.
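
As a sketch of how a phone might apply such a rule, the following groups the sightings of one chirp into contiguous episodes and flags the long, close ones.  The ten-minute duration, -65 dBm signal threshold, and two-minute gap are stand-ins for parameters that would really come from epidemiological modeling.

```python
MIN_DURATION_SECONDS = 10 * 60   # assumed exposure-duration threshold
MIN_RSSI_DBM = -65               # assumed "strong signal = close proximity" threshold

def problematic_contacts(sightings, max_gap=120):
    """sightings: (timestamp, rssi_dbm) observations of a single chirp.

    Returns (start_time, duration) for each episode that is both long enough
    and includes at least one close-range reading."""
    episodes, current = [], []
    for t, rssi in sorted(sightings):
        if current and t - current[-1][0] > max_gap:
            episodes.append(current)
            current = []
        current.append((t, rssi))
    if current:
        episodes.append(current)

    flagged = []
    for episode in episodes:
        duration = episode[-1][0] - episode[0][0]
        close_readings = [r for _, r in episode if r >= MIN_RSSI_DBM]
        if duration >= MIN_DURATION_SECONDS and close_readings:
            flagged.append((episode[0][0], duration))
    return flagged
```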

Finally, given a problematic contact your device would walk you through a process to decide if you need to get tested and self-quarantine.    This dialog is private: no public agency knows who you are, where you have been, or what chirps your device emitted or heard.  

Covid contact-tracing technology can easily result in false positives.  For example, perhaps a covid-positive person walked past your office a few times, but you kept the door closed...  the dialog might trigger and yet that isn't, by itself, determinative.   Moreover, things can go wrong in precisely the opposite way too.  Suppose that you were briefly at some sort of crowded event -- maybe the line waiting to enter the local grocery store.  Later you learn that in fact someone tested positive at this location... but the good news is that your app didn't pick anything up!  If you were certain that everyone was using a compatible app, that might genuinely tell you something.   But we already noted that the rate of use of an app like this might not be very high, and moreover, some people might sometimes disable it, or their phones might be in a place that blocks Bluetooth signals.  The absence of a notification conveys very little information.  Thus, the technology can also yield false negatives.

The kind of Covid contact tracing app I've described above is respectful of privacy.  Nobody can force you to use the app, and for all the reasons mentioned, it might not be active at a particular moment in time.  Some of the apps won't even tell you when or where you were exposed or for how long, although at that extreme of protectiveness, you have to question whether the data is even useful.  And the government health authority can't compel you to get tested, or to upload your chirps even if you do test positive.

But there are other apps that adopt more nuanced stances.  Suppose that your phone were to also track chirp signal power, GPS locations, and time (CovidSafe, created at the University of Washington, has most of this information).  Now you might be told that you had a low-risk (low signal power) period of exposure on the bus from B lot to Day Hall, but also had a short but close-proximity exposure when purchasing an espresso at the coffee bar.  The app would presumably help you decide if either of these crosses the risk threshold at which self-quarantine and testing is recommended.  On the other hand, to provide that type of nuanced advice, much more data is being collected.  Even if held in an encrypted form on the phone, there are reasons to ask at what point too much information is being captured.  After all, we all have seen endless reporting on situations in which highly sensitive data leaked or was even deliberately shared in ways contrary to stated policy and without permission.

Another issue now arises.  GPS isn't incredibly accurate, which matters because Covid is far more likely to spread with prolonged close exposure to an infectious person: a few meters makes a big difference (an especially big deal in cities, where reflections off surfaces can make GPS even less accurate -- which is a shame because a city is precisely the sort of place where you could have frequent but remote brief periods of proximity to Covid-positive individuals).  You would ideally want to know more.  And cities raise another big issue: GPS doesn't work inside buildings.  Would the entire 50-story building be treated as a single "place"?   If so, with chirps bouncing around in corridors and stairwells and atria, the rate of false positives would soar!

On campus we can do something to push back on this limitation.  One idea would be to try to improve indoor localization.  For example, imagine that we were to set up a proxy phone within spaces that the campus wants to track, like the Gimme! Coffee café in Gates Hall.  Then when so-and-so tests positive, the café itself learns that "it was exposed".  That notification could be useful for scheduling a deep cleaning, and it would also enable the system to relay the risk notification, by listing the chirps that the café proxy phone emitted during the period after the exposure occurred (on the theory that if you spend an hour at a table that was used by a covid-positive person who was in the café twenty minutes ago, that presumably creates a risk).  In effect, we would treat the space as an extension of the covid-positive person who was in it, if they were there for long enough to contaminate it.
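
A sketch of that relay step might look like the following; the three-hour "contamination window" is an invented value, chosen purely for illustration.

```python
CONTAMINATION_WINDOW_SECONDS = 3 * 3600   # assumed period during which the space stays risky

def proxy_relay_chirps(heard, sent, infected_chirps, window=CONTAMINATION_WINDOW_SECONDS):
    """heard, sent: the proxy phone's (timestamp, chirp) logs.
    infected_chirps: set of chirps reported to the health authority.

    Returns the proxy's own chirps emitted within `window` seconds after any
    visit by an infected person, so they can be added to the exposure list."""
    exposure_times = [t for (t, chirp) in heard if chirp in infected_chirps]
    relayed = set()
    for t_exposed in exposure_times:
        relayed.update(chirp for (t, chirp) in sent
                       if t_exposed <= t <= t_exposed + window)
    return relayed
```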

Similarly, a phone could be configured to listen for nearby WiFi signals.  With that information, the phone could "name" locations in terms of the MAC addresses it heard and their power levels.  Phone A could then record that, during the window when A's user was presumed infectious, there was a 90-minute period with 4-bar WiFi X and 2-bar WiFi Y, with WiFi Z flickering at a very low level.  One might hope that this defines a somewhat smaller space.  We could then create a concept of a WiFi signal-strength distance metric, at which point phone B could discover problematic proximity to A.  This could work if the WiFi signals are reasonably steady and the triangulation is of high quality.  But WiFi devices vary their power levels depending on numbers of users and choice of channels, and some settings, like elevators, rapidly zip through a range of WiFi connectivity options...  Presumably there are research papers on such topics... 
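
To make the idea of a WiFi signal-strength distance metric a bit more concrete, here is one naive formulation: describe a location as a vector of RSSI readings keyed by access-point MAC address, and compare two scans with a Euclidean distance.  The -100 dBm floor for networks one scan heard and the other didn't is an assumption, and as noted, real signals fluctuate far more than this sketch admits.

```python
import math

def wifi_distance(scan_a, scan_b, floor_dbm=-100.0):
    """scan_a, scan_b: dicts mapping access-point MAC address -> RSSI in dBm."""
    macs = set(scan_a) | set(scan_b)
    return math.sqrt(sum(
        (scan_a.get(m, floor_dbm) - scan_b.get(m, floor_dbm)) ** 2
        for m in macs))

# Two scans that heard mostly the same access points at similar strengths look "close".
scan_a = {"aa:bb:cc:00:00:01": -45, "aa:bb:cc:00:00:02": -70}
scan_b = {"aa:bb:cc:00:00:01": -50, "aa:bb:cc:00:00:02": -72, "aa:bb:cc:00:00:03": -92}
print(wifi_distance(scan_a, scan_b))
```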

Another idea I heard about recently was suggested by an avid Fitbit user (the little device and app that encourage you to do a bit more walking each day).  Perhaps one could have a "social distancing score" for each user (indeed, if Fitbit devices can hear one another, maybe Fitbit itself could compute such a score).  The score would indicate your degree of isolation, and your goal would be to have as normal a day as possible while driving that number down.  Notice that the score wouldn't be limited to contacts with Covid-positive people.  Rather, it would simply measure the degree to which you are exposed to dense environments where spread is more likely to occur rapidly.  To do this, though, you really want to use more than just random numbers as your "chirp", because otherwise, a day spent at home with your family might look like a lot of contacts, and yet you all live together.  So the app would really want to count the number of distinct individuals with whom you have prolonged contacts.  A way to do this is for each device to stick to the same random number for a whole day, or at least for a few hours.  Yet such a step would also reduce anonymity... a problematic choice.
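
A toy version of such a score might simply count the distinct identifiers heard for a prolonged cumulative time each day -- which, as just noted, only approximates "distinct people" if identifiers stay stable for hours.  The thresholds below are illustrative assumptions.

```python
from collections import defaultdict

def social_distancing_score(heard, min_contact_seconds=600, sample_interval=60):
    """heard: identifiers observed once per sample_interval seconds over a day.

    Returns the number of distinct identifiers seen for at least
    min_contact_seconds of cumulative time -- a crude proxy for the number of
    people with whom you had prolonged contact."""
    seconds_per_id = defaultdict(int)
    for ident in heard:
        seconds_per_id[ident] += sample_interval
    return sum(1 for s in seconds_per_id.values() if s >= min_contact_seconds)
```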

As you may be aware, Fitbit is in the process of being acquired by Google, and of course companies like Facebook already know a great deal about us and our social connections.  A company holding both kinds of data would be particularly well placed to correlate location and contact data with your social network, enabling it to build models of how the virus might spread if someone in your social group is ever exposed.  Doing so would enable various forms of proactive response.  For example, if a person is egregiously ignoring social distancing guidance, the public authorities could step in and urge that he or she change their evil ways.  If the social network were to have an exposure, we might be able to warn its members to "Stay clear of Sharon; she was exposed to Sally, and now she is at risk."  But these ideas, while cute, clearly have sharp edges that could easily become a genuine threat.  In particular, under the European GDPR (a legal framework for privacy protection), it might not even be legal to do research on such ideas, at least within the European Union.  Here in the US, a company like Facebook could certainly explore the options, but it would probably think twice before introducing products.

Indeed, once you begin to think about what an intrusive government or employer could do, you realize that there are already far too many options for tracking us, if a sufficiently large entity were so inclined.  It would be easy to combine contact tracing from apps with other forms of contact data.  Most buildings these days use card-swipes to unlock doors and elevators, so that offers one source of rather precise location information.  It might be possible to track purchases at food kiosks that accept cards, and in settings where there are security cameras, it would even be possible to do image recognition...   There are people who already live their days in fear that this sort of big-brother scenario is a real thing, and in constant use.  Could covid contact tracing put substance behind their (at present, mostly unwarranted) worries?

Meanwhile, as it turns out, there is considerable debate within the medical community concerning exactly how Covid spreads.  Above, I commented that just knowing you were exposed is probably not enough.  Clearly, virus particles need to get from the infected person to the exposed one.   The problem is that while everyone agrees that direct interactions with a person actively shedding virus are highly risky, there is much less certainty about indirect interactions, like using the same table or taking the same bus.  If you follow the news, you'll know of documented cases in which covid spread fairly long distances through the air, from a person coughing at one table in a restaurant all the way around the room to people fairly far away, and you'll learn that covid can survive for long periods on some surfaces.   But nobody knows how frequent such cases really are, or how often they give rise to new infections.    Thus if we ratchet up our behavioral tracing technology, we potentially intrude on privacy without necessarily gaining any greater preventive value.

When I've raised this point with people, a person I'm chatting with will often remark that "well, I don't have anything to hide, and I would be happy to take any protection this offers at all, even if the coverage isn't perfect."  This tendency to personalize the question is striking to me, and I tend to classify it along with the tendency to assume that everyone has equal technology capabilities, or similar politics and civic inclinations.  One sees this sort of mistaken generalization quite often, which is a surprise given the degree to which the public sphere has become polarized and political.  

Indeed, my own reaction is to worry that even if I myself don't see a risk to being traced in some way, other people might have legitimate reasons to keep some sort of activity private.  And I don't necessarily mean illicit activities.  A person might simply want privacy to deal with a health issue or to avoid the risk of some kind of discrimination.  A person may need privacy to help a friend or family member deal with a crisis -- nothing improper, just something that isn't suitable for a public space.  So yes, perhaps a few people do have nasty things to hide, but my own presumption tends to be that all of us sometimes have a need for privacy, and hence that all of us should respect one another's needs without prying into the reasons.  We shouldn't impose a tracking regime on everyone unless the value is so huge that the harm the tracking system itself imposes is clearly small in comparison.

In Singapore, these contact-tracing apps were aggressively pushed by the government -- a government that at times has been notorious for repressing dissidents.  Apparently, this overly assertive rollout triggered a significant public rejection:  people were worried by the government's seeming bias in favor of monitoring and its seeming dismissal of the privacy risks, concluded that whatever other people might do, they themselves didn't want to be traced, and many rejected the app.  Others installed it (why rock the boat?), but then took the obvious, minor steps needed to defeat it.  Such a sequence renders the technology pointless: a nuisance at best, an intrusion at worst, but ineffective as a legitimate covid-prevention tool.  In fact just last week (mid May) the UK had a debate about whether or not to include location tracking in their national app.  Even the debate itself seems to have reduced the public appetite for the app, and this seems to be true even though the UK ultimately leaned towards recommending a version that has no location tracing at all (and hence is especially weak, as such tools go).

I find this curious because, as you may know, the UK deployed a great many public video cameras back in the 1980's (a period when there was a lot of worry about street crime, together with frequent, high-visibility terrorist threats).  Those cameras live on, and yet seem to have had limited value.  

When I spent a few months in Cambridge in 2016, I wasn't very conscious of them, but now and then something would remind me to actually look for the things, and they still seem to be ubiquitous.  Meanwhile, during that same visit, there was a rash of bicycle thefts and a small surge in drug-related street violence.  The cameras apparently had no real value in stopping such events, even though the mode of the bicycle thefts was highly visible: thieves were showing up with metal saws or acetylene torches, cutting through the 2-inch thick steel bike stand supports that the city installed during the last rash of thefts, and then reassembling the stands using metal rods and duct-tape, so that at a glance, they seemed to be intact.  Later a truck could pull up, they could simply pull the stand off its supports, load the bikes, and reassemble the stand.  

Considering quite how "visible" such things should be to a camera, one might expect that a CCTV system should be able to prevent such flagrant crimes.  Yet they failed to do so during my visit.  This underscores the broader British worry that monitoring often fails in its stated purpose, yet leaves a lingering loss of privacy.  After all: the devices may not be foiling thefts, yet someone might still be using them for cyberstalking.  We all know about web sites that aggregate open webcams, whether the people imaged know it or not.  Some of those sites even use security exploits to break into cameras that were nominally disabled.

There is no question that a genuinely comprehensive, successful, privacy-preserving Covid tracing solution could be valuable.  A recent report in the MIT Technology Review shows that if one could trace 90% of the contacts for each Covid-positive individual, the infection could be stopped in its tracks.  Clearly this is worthwhile if it can be done.  On the other hand, we've seen how many technical obstacles this statement raises.

And these are just the technical dimensions.  The report I cited wasn't even focused on technology!  That study focused on human factors at scale, which already limit the odds of reaching the 90% level of coverage.  The reasons were mundane, but also seem hard to overcome.  Many people (myself included) don't answer the phone if a call seems like possible spam.  For quite a few, calls from the local health department probably have that look.  Some people wouldn't trust a random caller who claims to be a contact tracer.  Some people speak languages other than English and could have difficulty understanding the questions being posed, or the recommendations.  Some distrust the government.  The list is long, and it isn't one on which "more technology" jumps out as the answer.  

Suppose that we set contact tracing per-se to the side.  Might there be other options worth exploring?  A different use of "interaction" information could be to just understand where transmission exposures are occurring, with the goal of dedensifying those spots, or perhaps using other forms of policy to reduce exposure events.  An analyst searching for those locations would need ways to carry out the stated task, yet we would also want to block him or her from learning irrelevant private information.  After all, if the goal is to show that a lot of exposure occurs at the Sunflower Dining Hall, it isn't necessary to also know that John and Mary have been meeting there daily for weeks.

This question centers on data mining with a sensitive database, and the task would probably need to occur on a big-data analytic platform (a cloud system).  As a specialist in cloud computing, I can point to many technical options for such a task.  For example, we could upload our oversight data into a platform running within an Intel SGX security enclave, with hardware-supported protection.  A person who can legitimately log into such a system (via HTTPS connections to it, for example) would be allowed to use the database for tasks like contact tracing, or to discover hot-spots on campus where a lot of risk occurs -- so this solution doesn't protect against a nosy researcher.  The good news is that unauthorized observers would learn nothing, because all the data moved over the network is encrypted at all times -- if you trust the software (but should we trust the software?).  

There are lots of other options.  You could also upload the data in an encrypted form, and perhaps query it without decrypting it, or perhaps even carry out the analysis using a fully homomorphic data access scheme.  You can create a database but inject noise into the query results, concealing individual data (this is called the differential privacy query model).  
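
To give a flavor of the differential privacy option, here is a minimal sketch of a noisy counting query; the epsilon value and the example data are purely illustrative.

```python
import random

def noisy_count(records, predicate, epsilon=0.5):
    """Return a count with Laplace noise of scale 1/epsilon (a count has sensitivity 1)."""
    true_count = sum(1 for r in records if predicate(r))
    # The difference of two exponentials with rate epsilon is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Example: how many exposure events occurred at a (hypothetical) dining hall?
events = [{"place": "Sunflower Dining Hall"}, {"place": "Gates Hall"},
          {"place": "Sunflower Dining Hall"}]
print(noisy_count(events, lambda e: e["place"] == "Sunflower Dining Hall"))
```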

On the other hand, the most secure solutions are actually the least widely used.  Fully homomorphic computing and Intel SGX, for example, are viewed as too costly.  Few cloud systems deploy SGX tools; there are a variety of reasons, but the main one is just that SGX requires a whole specialized "ecosystem" and we lack this.  More common is to simply trust the cloud (and maybe even the people who built and operate it), and then use encryption to form a virtually private enclave within which the work would be done using standard tools: the very same spreadsheets and databases and machine-learning tools any of us use when trying to make sense of large data sets.

But this all leads back to the same core question.  If we are to go down this path, and explore a series of increasingly aggressive steps to collect data and analyze it, to what degree is all of that activity measurably improving public safety?  I mentioned the MIT study because at least it has a numerical goal: for contact tracing, a 90% level of coverage is effective; below 90% we rapidly lose impact.  But we've touched upon a great many other ideas... so many that it wouldn't be plausible to do a comprehensive study of the most effective place to live on the resulting spectrum of options.

The ultimate choice is one that pits an unquantifiable form of covid-safety tracing against the specter of intrusive oversight that potentially violates individual privacy rights without necessarily bringing meaningful value.   On the positive side, even a placebo might reassure a public nearly panicked over this virus, by sending the message that "we are doing everything humanly possible, and we regret any inconvenience."  Oddly, I'm told, the inconvenience is somehow a plus in such situations.  The mix of reassurance with some form of individual "impact" can be valuable: it provides an outlet and focus for anger, and this reduces the threat that some unbalanced individual might lash out in a harmful way.  Still, even when deploying a placebo, there needs to be some form of cost-benefit analysis!

Where, then, is the magic balancing point for Covid contact tracing?  I can't speak for my employer, but I'll share my own personal opinion.  I have no issue with installing CovidSafe on my phone, and I would probably be a good citizen and leave it running if doing so doesn't kill my battery.  Moreover, I would actually want to know if someone who later tested positive spent an hour at the same table where I sat down not long afterwards.  But I'm under no illusion that covid contact tracing is really going to be solved with technology.  The MIT study has it right: this is simply a very hard and very human task, and we delude ourselves to imagine that a phone app could somehow magically whisk it away.
