Sunday 18 February 2018

Trolled!

Recent revelations about troll postings to Facebook, Twitter and other media sites raise an obvious question: can we trust the authenticity of online commentary, or should we basically distrust everything?  After all, even a posting by my brother or an email from my mother could be some kind of forgery.

The question has broader dimensions.  Twenty-three years ago, David Cooper and I published a little paper on secure and private email.  In it, we asked whether two people who know one another can send and receive email in a monitored environment, where an untrusted observer is trying to detect that any communication has occurred at all, and hoping to retrieve the messages themselves, too.

Our 1995 solution uses cryptography and fake traffic to solve the problem.  The fake traffic ensures that there is a steady flow of bytes whether or not the two people are communicating.  We then designed a kind of shared storage system that plays the role of an email server: you can hide data inside it.  Each email is encrypted, but also broken into pieces in the storage layer and hidden inside vast amounts of noise, and the act of sending or receiving an email is mapped to a large volume of reads and rewrites of blocks in the storage system.  We showed that an observer learns very little in this setting, and yet you can send and receive email in a way that guarantees the authenticity of the messages.
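
To convey the flavor, here is a toy sketch of the central trick (this is not our actual 1995 protocol, and the names `CoverStore`, `prf` and `TOUCH`, along with the XOR masking, are invented for illustration; a real system would use authenticated encryption).  Every epoch, the sender rewrites a fixed number of blocks whether or not it has anything to say, with the block positions and the masking pad derived from a key shared with the receiver:

```python
import hashlib, hmac, os, random

BLOCK, TOUCH = 32, 8     # bytes per block; blocks rewritten every epoch

def prf(key: bytes, label: bytes, nbytes: int) -> bytes:
    """Toy keyed pseudorandom stream: HMAC-SHA256 in counter mode."""
    out, ctr = b"", 0
    while len(out) < nbytes:
        out += hmac.new(key, label + bytes([ctr]), hashlib.sha256).digest()
        ctr += 1
    return out[:nbytes]

class CoverStore:
    """Toy shared block store.  Every epoch, exactly TOUCH blocks are
    rewritten, so an observer sees the same access pattern whether the
    rewrite carries ciphertext or pure noise."""

    def __init__(self, key: bytes, n_blocks: int = 64):
        self.key = key
        self.blocks = [os.urandom(BLOCK) for _ in range(n_blocks)]

    def _slots(self, epoch: int) -> list[int]:
        # Both parties derive the same TOUCH distinct block indices from
        # the shared key, so no slot list ever travels in the clear.
        rng = random.Random(prf(self.key, b"slots%d" % epoch, 32))
        return rng.sample(range(len(self.blocks)), TOUCH)

    def tick(self, epoch: int, message: bytes = b"") -> None:
        """One sender epoch: rewrite TOUCH blocks, carrying either the
        XOR-masked message or fresh noise.  Messages longer than
        TOUCH * BLOCK bytes would need more epochs (omitted here)."""
        pad = prf(self.key, b"pad%d" % epoch, TOUCH * BLOCK)
        data = (message or os.urandom(TOUCH * BLOCK)).ljust(TOUCH * BLOCK, b"\0")
        for i, j in enumerate(self._slots(epoch)):
            chunk = data[i * BLOCK:(i + 1) * BLOCK]
            mask = pad[i * BLOCK:(i + 1) * BLOCK]
            self.blocks[j] = bytes(a ^ b for a, b in zip(chunk, mask))

    def recover(self, epoch: int) -> bytes:
        """Receiver side: re-derive the slots and strip the pad."""
        pad = prf(self.key, b"pad%d" % epoch, TOUCH * BLOCK)
        data = b"".join(self.blocks[j] for j in self._slots(epoch))
        return bytes(a ^ b for a, b in zip(data, pad))

# The traffic pattern is identical whether or not anything is being said.
store = CoverStore(key=os.urandom(32))
store.tick(0)                          # nothing to say: pure cover traffic
store.tick(1, b"meet at noon")         # a real message, same access pattern
print(store.recover(1)[:12])           # b'meet at noon'
```

The point is that the observer's view -- which blocks were touched, and when -- is statistically identical in the two cases.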

This week a Cornell visitor told us about ways to improve on that older style of system, but the broad framework was similar.  I have always loved solutions built from little cryptographic building blocks, so I thought this was a really fun talk.   The problem, of course, is that nobody adopts tools like these, and unless everyone is using them, the mere fact of having a copy of the software might tip bad actors off to your interest in secret communication (then they can abduct you and force you to reveal everything).  To really work, we would need a universally adopted standard, one that nearly everyone was using even without realizing it -- the WhatsApp of secure email.  That way, when they come to question you, you can pretend to have absolutely no idea what they are talking about.

The other problem is that contemporary society has a slight bias against privacy.  While most people would agree that we have a right to privacy, they seem to mean "unless you are trying to hide a secret we want to know about."  So there is a contradiction: we accept the right to privacy, yet also seem to believe in a broader societal right to intrude, particularly if the individuals are celebrities -- as if privacy rights vanish with any form of fame or notoriety.  There is also a significant community that assumes privacy is something people would want primarily as a way to hide something: an unusual sexual preference, or criminal activity, or terrorism.

Back to the trolls.  In the cases recently publicized by the FBI, CIA and NSA, we learned that Russia has at least one company (maybe more), with a large staff (80 or more employees) working full time, day in and day out, planting fake news, false commentary and incendiary remarks in the US and European press and on social networks.  Here in Ithaca, the local example seems to be a recent event in which a dispute arose about the diversity of casting for a high school play (although the lead role is that of Carmen, a gypsy woman and hence someone who would normally have dark skin, the casting didn't reflect that aspect of the role).  This was then cited as part of a pattern, and a controversy around the casting erupted.

Any small town has such episodes, but this one was unusual because suddenly a torrent of really vile postings, full of racist threats, swamped the local debate.  One had the sense that Ithaca (a northern town that once played a big role on the Underground Railroad, helping escaped slaves reach freedom) was some sort of hotbed of racism.  But of course there is another easy explanation: perhaps we are just seeing the effects of this Russian-led trolling.  The story and this outburst of racism are precisely in line with what the FBI reported on.  To be sure, some of the nasty stuff is home-grown and purely American.  But these trolling companies are apparently masters at rabble-rousing, and at unifying the Archie Bunkers of the world to charge in whatever direction they point.

So here we have dual questions.  With Facebook, or in the commentary on an article in a newspaper, I want to be confident that I'm seeing a "legitimate" comment, not one manufactured in a rabble-rousing factory in Russia.  Arguably, this formulation is at odds with anonymity, because just knowing an account name won't give me much confidence that the person behind the account is a real person.  Trolls create thousands of accounts and use them to create the illusion that massive numbers of people agree passionately about whatever topic they are posting about.  They even use a few as fake counter-arguers to make it all seem more real.

So it isn't enough that Facebook has an account named Abraham Lincoln, and that the person posting on that account has the password.  There is some sense in which you want to know that this is really good old honest Abe posting from the great beyond, and not an imposter (or even a much younger namesake).  Facebook doesn't try to offer that assurance.
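
What might that assurance look like mechanically?  The standard building block is a digital signature: some registrar binds a public key to a verified identity (the genuinely hard, non-technical step, which I simply assume here), and every posting is signed with the matching private key.  A minimal sketch, using the third-party pyca/cryptography package:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# "Abe" enrolls: he generates a keypair, and some registrar (assumed
# here, and the genuinely hard part) binds the public half to his
# verified identity.
abe_private = Ed25519PrivateKey.generate()
abe_public = abe_private.public_key()

posting = b"Four score and seven years ago..."
signature = abe_private.sign(posting)

# Anyone holding the registered public key can check both claims at
# once: the posting came from the key holder, and it was not altered.
abe_public.verify(signature, posting)              # silent success
try:
    abe_public.verify(signature, posting + b"!")   # one byte changed
except InvalidSignature:
    print("tampered or forged posting rejected")
```

The signature proves that the posting came from the registered key holder and was not altered in transit; it says nothing, of course, about how carefully the registrar checked the identity in the first place.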

This is a technical question, and it may well have a technical answer, although honestly, I don't see an immediate way to solve it.  A quick summary:
  • What we want is a way to communicate, either one-to-one (email), one-to-many (Facebook, within a social network), or one-to-all (commentary on newspapers and other public media web sites).
  • If the individuals wish to communicate privately, we would want a way to do so that reveals no information to the authorities, under the assumption that "everyone uses social media."  That is, there should be a way to communicate privately that disguises itself as completely normal web browsing or other routine activity.
  • If the individuals wish to post publicly, others should be able to authenticate both the name of the person behind the posting (yes, this is the real "Ken Birman," not a forged and twisted fake person operated as a kind of web avatar under the control of trolls in St. Petersburg) and the authenticity of the posting itself (nobody forged this posting, that sort of thing).
  • In this public mode, we should have several variations:
    • Trust in the postings by people in our own social network.
    • Indirect trust when some unknown person posts, but is "vouched for" by a more trusted person.  You can think of this as a form of "trust distance".
    • A warning (think of it as a "red frame of shame") on any posting that isn't trustworthy at all.  The idea would be to put a nice bright red frame around the troll postings; a toy sketch of this trust-distance logic appears just after this list.
  • When someone reacts to a troll posting by reposting it, or replying to it, it would be nice if the social networking site or media site could flag that secondary posting too ("oops! The poster was trolled!").  A cute little icon, perhaps?  This could become a valuable tool for educating the population at large about the phenomenon, since we often see secondary commentary without understanding the context in which the secondary remark was made.
Then we would want these ideas widely adopted by email systems, Facebook, Twitter, Google, the New York Times, Breitbart News, and so forth.  Ideally, every interaction would offer these options, so that any mail we send, any posting we make, and anything we read is always protected in the intended manner.
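
Here is the toy sketch promised above.  The vouching graph, the cutoff of 3 and the display labels are all invented for illustration; the point is just that a breadth-first search over "who vouches for whom" yields a distance, and the distance picks the frame:

```python
from collections import deque

def trust_distance(vouches: dict[str, set[str]], me: str) -> dict[str, int]:
    """Breadth-first search over the 'vouches for' graph: my direct
    contacts sit at distance 1, people they vouch for at distance 2,
    and so on."""
    dist, frontier = {me: 0}, deque([me])
    while frontier:
        u = frontier.popleft()
        for v in vouches.get(u, set()):
            if v not in dist:
                dist[v] = dist[u] + 1
                frontier.append(v)
    return dist

def framing(poster: str, dist: dict[str, int], cutoff: int = 3) -> str:
    """Map trust distance to the display treatments sketched above."""
    d = dist.get(poster)
    if d is not None and d <= 1:
        return "trusted"                 # my own social network
    if d is not None and d <= cutoff:
        return "vouched-for"             # indirect trust
    return "RED FRAME OF SHAME"          # unreachable: flag it

# A hypothetical vouching graph; "trollbot_417" is vouched for by no one.
vouches = {"me": {"alice", "bob"}, "alice": {"carol"}, "carol": {"dave"}}
dist = trust_distance(vouches, "me")
for who in ("alice", "carol", "dave", "trollbot_417"):
    print(f"{who:>12}: {framing(who, dist)}")
```

A real deployment would need signed vouching assertions and revocation, but the display logic itself really is this simple.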

Could this agenda be carried out?  I believe so, if we are willing to trust the root authentication system.  The Cornell visitor from this week pointed out that there is always an issue of the root of trust: once someone can spoof the entire Internet, you don't have any real protections at all.  This extends to your computer too: if you are using a virtualized computer that pretends to support the trust framework but in reality shares your information with the authorities, all privacy bets are (obviously) off.  And if the display system carefully and selectively removes some red frames, and inserts others where they don't belong, we're back to square one.

So there are limits.  But I think that with "reasonable assumptions" the game becomes one of creating the right building blocks and then assembling them into solutions with the various options.  Then industry would need to be convinced to adopt those solutions (perhaps under threat of sanctions for violating European privacy rules, which are much tougher than the ones in the US).  So my bet is that it could be done, and frankly, when we consider the scale of damage these hackers and trolls are causing, it is about time that we did something about it.  

2 comments:

  1. In Belgium, as you may know, there is a new smartphone application, 'itsme.be', linked to our national ID.

    Replies
    1. No, I was not aware of that. There are ways to obtain “blinded” signatures once you have such an authenticated ID. In effect, I have a posting to submit. First I blind it, get it signed, and then I unblind the posting. There is a mathematical trick for securely transforming the signature on the blinded version into a signature on my unblinded posting. Now I can submit my posting with proof that it came from a legitimate person, yet there is no linkage to my true ID. You know a real person posted the remark, but not who that person is.
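
To make the trick concrete, here is a toy RSA blind-signature sketch; the tiny primes and the fixed blinding factor are purely illustrative and wholly insecure:

```python
import hashlib

# Toy RSA key: tiny primes, purely illustrative and wholly insecure.
p, q, e = 1009, 1013, 65537
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent

def H(msg: bytes) -> int:
    """Hash the posting down to an integer modulo n."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

posting = b"I endorse this remark."
m = H(posting)

# 1. Blind: multiply by r^e, so the authority sees only gibberish.
r = 12345                            # blinding factor, coprime to n
blinded = (m * pow(r, e, n)) % n

# 2. The identity authority signs the blinded value with d.
signed_blinded = pow(blinded, d, n)

# 3. Unblind: dividing out r leaves a signature on m itself, because
#    (m * r^e)^d = m^d * r  (mod n).
signature = (signed_blinded * pow(r, -1, n)) % n

# 4. Anyone can verify against the public key (n, e), yet the
#    authority never learned which posting it signed.
assert pow(signature, e, n) == m
print("valid signature, unlinkable to the signing session")
```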


This blog is inactive as of early 2020.  Comments have been disabled, and will be rejected as spam.
