Wednesday, 4 April 2018

Blockchains and the new mythology

For two years now, the drumbeat of Blockchain technology has gradually come to dominate one area of systems after another: in the public eye, Blockchains are somehow the universal solution to every problem.  This I find very odd: the core BlockChain concept is technically flawed (I don’t want to repeat prior blogs, so I’ll simply point out that four or five of my older postings were on technical problems with the model).  The model doesn’t even fit many of the imagined uses.  And in fact, we see few examples of real uses, other than to support cryptocurrencies that run some risk of evaporating from your wallet.  Yet this dream of using BlockChain technology for everything that really matters has somehow taken hold.

BlockChain has evolved into a mythology.

I remember a talk by Michael Brody, the CTO of Verizon around 1998 (back when it was still part of the GTE empire).  He focused on the psychological wish for magic silver bullets that can slay every technical barrier.  In companies struggling with technology challenges, it can be very appealing to wish for miracles (and all too easy to worry that the other guy will find it first and win market dominance by doing so).  At the time, the silver bullets were client-server architectures, CORBA, and Paxos.  But the underlying pattern was similar: an overwhelming desire to believe, coupled with eyes closed against the limitations.

We see this now for artificial intelligence, too: how many self-driving cars will have to run down bicyclists and swerve into oncoming traffic before people realize that putting a robot in charge of a car is simply an overreach? The technology isn’t ready yet.

And so too with BlockChain.  Yet when I attend talks on Digital Agriculture, or the future of medicine, or banking, somehow the very term seems to command authority (and to shut down any skepticism the audience might normally have expressed).  In the New York Times on April 3, an article talked about BlockChain in all of these and many other “uses”, quoting one gushing entrepreneur as saying that BlockChain is a revolutionary, disruptive and nearly universal technology for storage, communication, security and safety.  Oh, and he suggests we use it for online voting, to repel those Russian hackers.  Come again?  All this from an append-only log, running on anonymous servers, and prone to rollbacks?

It does seem true that a permissioned BlockChain (one running on specified servers, probably in the machine room of a bank, or sold as a turn-key product by a storage or cloud vendor) would be a great place to log transactions that you want to keep on record indefinitely.  Moreover, a permissioned  Blockchain won’t roll back unexpectedly.  But the expert quoted by the NY Times apparently wants all sorts of digital information logged into indelible records, and seemingly has the permissionless variety of BlockChains in mind (he would never trust any single bank or company to host the chain).

Beyond the privacy issues raised by having your life logged in a globally shared place, we get the oddity of using a type of log that by construction is capable of spontaneously erasing itself.  It could even be erased deliberately by the same Russian hackers out to tamper with the election you are trying to protect!

Setting the technical issues to the side, the psychology of this article speaks to Brody’s old story of using silver bullets to slay dragons.  Technology has become so complex that it certainly can feel like magic, and magical thinking is a natural fit.  Nobody wants their illusions punctured.  No technology is perfect, but even this plays into the story: if you point to a flaw, like the tendency of permissionless Blockchain to roll back, Blockchain fans just assert that version 2.0 will fix that.

The dialog reminds me of science fiction.  We start with a conceit: “dilithium crystals and antimatter enable faster-than-light travel” (and every other technical miracle the script writers could dream up).  The physics wouldn’t bear up under close scrutiny, but we don’t look too closely.  And from this starting point, we boldly go where no one has gone before.

But one can easily understand the motivation of a science fiction script writer.  Where I’m left puzzled is with the motivations of all these self-proclaimed experts.  Are they deliberately lying?

Quite a few must know better. Yet somehow, this chance to be the expert seems to drive rational computer scientists to make wild claims, while brushing obvious concerns to the side.  While the scam endures, these people are becoming millionaires.

I imagine that it must be fun to be able to sound off at fashionable cocktail parties, too.  “Am I concerned by the opioid crisis?  Well, of course.   But in my view, the entire problem could be solved by using Blockchain to log every transaction involving these habit-forming drugs...”

Down the road reality will impose itself.  But I guess that by then, these experts will have long since cashed out, bought Napa vineyards, and moved on to speculate in some other illusory commodity.

Wednesday, 7 March 2018

Data stream computing

I've actually posted on this topic in the past, but I spend so much of my own time thinking about it that I'm going to revisit the question from my "revised and improved" perspective, a year or so down the road.

Context: with the move towards a disaggregated data center model in which we use RDMA or other forms of next-generation direct access to talk to remote memory (perhaps persistent, perhaps not), we find ourselves in a world where the underlying costs and barriers have shifted dramatically relative to the programming style familiar in the past.

Specific example: Consider 2-phase commit, or your favorite consensus protocol, or Paxos.  In the recent past, we would program these protocols by having some leader who initiates the protocol and sends out a request to the other participants: "can you commit transaction 71?" or "I vote 1", or  "I propose that we place value v into Paxos log slot 22, and this is my 3rd ballot for that slot."  Responses eventually come back, and after a round or two of these back-and-forth interactions, the protocol terminates.
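To make that pattern concrete, here is a deliberately simplified sketch of the leader side of a two-phase commit, in Python; the in-process "network" stubs are my own stand-ins (nothing here is real messaging code), but the shape of the interaction (ship the request, stall for the votes, ship the decision) is the point:

    # Toy sketch of the classic leader-driven round trip (2PC flavor).  The stubs
    # below stand in for a real messaging layer so the example runs as written.
    def send_to(participant, message):
        participant["inbox"].append(message)          # stand-in for a network send

    def recv_from(participant):
        return "yes"                                  # stand-in: every participant votes yes

    def two_phase_commit(participants, txn_id, value):
        for p in participants:
            send_to(p, ("prepare", txn_id, value))    # phase 1: ship the request and the data
        votes = [recv_from(p) for p in participants]  # the leader stalls here for roughly one RTT
        decision = "commit" if all(v == "yes" for v in votes) else "abort"
        for p in participants:
            send_to(p, (decision, txn_id))            # phase 2: a second round trip
        return decision

    print(two_phase_commit([{"inbox": []}, {"inbox": []}], txn_id=71, value=b"v"))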

In that classic way of thinking, there was a kind of implicit balance in mind between the cost of moving the data (value v in the example above) and the cost of the back-and-forth round-trips.  I'm not quite sure how to best express this sense of balance, but what one finds is that the cost of the round-trip actions (the RTT, as we say in the business...) is sort of "reasonable" relative to the bandwidth available for moving the value field, often over TCP connections or UDP.

Today, with new forms of direct remote memory access, talking to remote memory over a technology like RDMA is actually faster than talking to local memory, but only in a bandwidth sense: RDMA can sustain a far higher transfer rate than local actions like using memcpy (the fastest available copying primitive on a Linux/C computing platform) to move data from one place in memory to another, assuming that the data in question isn't in the L2 cache.  As RDMA speeds up, this gap will only get larger.  So it is very cheap to move data from machine A to machine B.

In contrast, the RTT for a modern optical Ethernet is surprisingly high when you consider just how fast copying can be.  We see RTTs in the 1.5us to 2.5us range: quite fast compared to the 1-10ms numbers that were common for protocols running over TCP/UDP/IP, but relatively sluggish if you consider the number of bytes that could have been sent in that time, particularly once you add in the delays for scheduling a thread on B to notice A's incoming request, and another on A to pick up B's reply.

If A blocks sending while waiting for this RTT to complete, quite a few bytes won't be sent.  And as RDMA gets faster (today, 100Gbps, but tomorrow 200, then 400, and 1Tbps is within range now), the amount of missed transmission opportunity gets pretty substantial.
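A quick back-of-the-envelope calculation, using the numbers above, shows just how substantial:

    # Back-of-the-envelope: bytes of transmission forfeited while blocked for one RTT,
    # using the figures quoted above (a 100 Gbps link and a ~2us RTT).
    link_gbps = 100.0
    rtt_us = 2.0
    bytes_per_us = link_gbps * 1e9 / 8 / 1e6      # 100 Gbps moves 12.5 KB per microsecond
    print(rtt_us * bytes_per_us, "bytes forfeited per blocking RTT")     # 25000.0, i.e. about 25 KB
    print(10 * rtt_us * bytes_per_us, "bytes at 1 Tbps for the same stall")  # about 250 KB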

Concurrency feels like the obvious answer, but this instinctive reaction proves to be naive.  The theory would be that if A has lots of side-by-side transactions with B, then while one waits, another can transmit.  But that overlooks the overheads, and they quickly add up: locking to ensure that the transfers won't interfere with one another, memory contention on the DRAM chunks where the data lives, and the various forms of state that sit around while the round-trip interactions are running (threads consume stack space, for example).  So concurrency doesn't come for free, and this model is anything but a clear win!

Batching is a second possibility:  A could collect a bunch of requests, then send them as a batch (while collecting the next batch), B could respond to the whole series of requests (one by one, but as a batch of responses), and so forth.  This amortizes costs in a nice way.

Batching makes sense on a receiver, too.  Rather than sending batches and forcing the receiver to use those batch sizes, a sender could send continuously, and the receiver could bite off a chunk of incoming records based on when it happens to be ready to run. So if we have A sending continuously, rather than having A batch 10 items at a time, it could stream steadily to B.  B might be ready for its next chunk of input at a moment when there are 11 items available; it treats these as a batch of 11.  Then at the next loop, B is perhaps ready for input when 4 new items have turned up: a batch of 4.  We still see a benefit of amortized costs, but the batch sizes are variable and determined by when B happens to be ready, rather than fixed and decided by A (the sender).
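Here is a minimal sketch of that receiver-side pattern (toy data structures, nothing taken from Derecho's actual code): the sender appends continuously, and each pass of the receiver's loop drains whatever happens to have arrived, so the batch boundaries are set by the receiver's timing rather than by the sender:

    import collections, threading

    # Toy sketch of receiver-side batching: the sender streams items into a shared
    # queue; the receiver drains whatever has arrived each time it is ready, so one
    # pass might handle 11 items and the next only 4.
    incoming = collections.deque()
    lock = threading.Lock()

    def sender(stream_of_items):
        for item in stream_of_items:
            with lock:
                incoming.append(item)        # A streams steadily; no fixed batch size

    def receiver(handle_batch):
        while True:
            with lock:
                batch = list(incoming)       # bite off everything that has arrived so far
                incoming.clear()
            if batch:
                handle_batch(batch)          # amortize per-message costs over the whole batch

The key point is that B's loop, not A, decides where the batches begin and end.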

Derecho works this way.

Now why are we discussing these obvious points?  Well, the optimality of Derecho is directly tied to its use of this style of receiver-side batching.    The key "thing" about Derecho is that it runs a version of Paxos modified to make decisions as soon as possible, based on the Paxos notion of safety, using receiver-side batches.  This allows us to stream an exceptionally high rate of Paxos requests, process them with no unnecessary delays, and amortize all costs over batches of events.

In fact, the value of this method centers on the steady stream of updates.  With one update, by itself, classic Paxos is a reasonable choice of protocol.  But with a stream of updates, we need to amortize costs for efficiency.  The classic protocols run in a greedy way, and cease to be efficient because they miss this opportunity!

So now the question arises: is this a story purely about Derecho, or does the same opportunity transition to other kinds of systems that aren't doing atomic multicast (vertical Paxos) and atomic log updates (classic durable Paxos)?  For example, could a distributed file system somehow gain performance by being implemented to use the same sort of pattern?  Could a database system be optimized by leveraging this sort of transformation?

And we can pose the problem even more broadly.   As a programming question (almost a compilation question), is there a systematic programming methodology here, one that would lead the developer to an optimized solution matched to the underlying cost model mentioned earlier (the cost model in which steady flow works extremely well at high rates and with large data volumes, but RTT looks somewhat costly)?  How could one start with a specification of a file system, or a specification of Paxos, and systematically arrive at a solution like Derecho?

I find it fascinating that despite all the work that has been done on stream processing in database systems, there seems to be very little prior work on the kind of protocol optimization I've outlined.  The underlying "memory model" is really just an exaggerated NUMA one, with cache-line memory coherence and remote memory access, but with unusually high latencies relative to the extremely high bandwidth.  By and large, my colleagues in the PL community view the resulting questions as being variations on what they are already used to: they have extensive experience with NUMA programming.  Yet here we can see that we actually are after solutions that can be quite different from the classic ones: Derecho doesn't look a lot like Paxos at first glance, and the optimality we achieved emerges from the Paxos protocol transformations that let us optimize for the RDMA environment.  So there was something non-trivial happening here, even though it happens over the standard NUMA model.

I don't know if we have PL readers of this blog, but I would love to hear their thoughts, if so!

Sunday, 18 February 2018

Trolled!

Recent revelations about troll postings to Facebook, Twitter and other media sites create an obvious question: can we trust the authenticity of online commentary, or should we basically distrust everything?  After all, even a posting by my brother or an email from my mother could be some kind of forgery.

The question has broader dimensions.  More than twenty years ago, David Cooper and I published a little paper on secure and private email.  In this paper, we asked whether there is a way for two people (who know one another) to send and receive emails in a monitored environment, with some form of untrusted observer trying to detect that communication has occurred, and hoping to retrieve the message itself, too.

Our 1995 solution uses cryptography and fake traffic to solve the problem.  The fake traffic ensures that there is a steady flow of bytes, whether or not the two people are communicating.  Then we designed a kind of shared storage system that plays the role of an email server: you can hide data inside it.  The email itself was encrypted, but also broken into bits in the storage layer and hidden inside vast amounts of noise.  Then the act of sending or receiving an email was mapped to a vast amount of reading and rewriting of blocks in the storage system.  We showed that an observer learns very little in this case, and yet you can send and receive emails in a way that guarantees the authenticity of the messages.
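Just to illustrate the cover-traffic half of that idea, here is a toy sketch (my own illustration, not the actual 1995 design, and with a placeholder where real encryption would go): every time slot carries a fixed-size transmission whether or not a real message is queued, so the byte stream looks the same either way.

    import secrets

    SLOT_BYTES = 1024        # every transmission is exactly this size, at a fixed rate

    def toy_cipher(data, key):
        # Placeholder standing in for real encryption; not suitable for actual secrecy.
        # The key is assumed to be SLOT_BYTES long.
        return bytes(b ^ k for b, k in zip(data, key))

    def next_transmission(outbox, key):
        """Called once per time slot.  Emits an encrypted real message if one is queued,
        otherwise random padding of the same length; with genuine encryption in place of
        the toy cipher, an observer cannot tell the two cases apart."""
        if outbox:
            plaintext = outbox.pop(0).ljust(SLOT_BYTES, b"\x00")
            return toy_cipher(plaintext, key)
        return secrets.token_bytes(SLOT_BYTES)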

This week a Cornell visitor told us about ways to improve on that style of older system, but the broad framework was similar.  I have always loved solutions built from little cryptographic building blocks, so I thought this was a really fun talk.   The problem, of course, is that nobody adopts tools like these, and unless everyone is using them, the mere fact of having a copy of the software might tip bad actors off to your interest in secret communication (then they can abduct you and force you to reveal everything).  To really work, we would need a universally adopted standard, one that nearly everyone was using even without realizing it -- the WhatsApp of secure email.  That way, when they come to question you, you can pretend to have absolutely no idea what they are talking about.

The other problem is that in contemporary society, there is a slight bias against privacy.  While most people would agree that we have a right to privacy, they seem to mean "unless you are trying to hide a secret we want to know about."  So there is a contradiction in the sense that we accept the right to privacy, yet also seem to believe in a broader societal right to intrude, particularly if the individuals are celebrities -- as if privacy rights vanish with any form of fame or notoriety.  There is also a significant community that assumes that privacy is something people would want primarily as a way to hide something: an unusual sexual preference, or criminal activity, or terrorism.

Back to the trolls.  In the cases recently publicized by the FBI, CIA and NSA, we learned that Russia has at least one company (maybe more), with large numbers of employees (80 or more) who work full time, day in and day out, planting fake news, false commentaries and incendiary remarks in the US and European press and social networks.  Here in Ithaca, the local example seems to be a recent event in which a dispute arose about the diversity of casting for a high school play (although the lead role is that of Carmen, a gypsy woman and hence someone who would normally have dark skin, the casting didn't reflect that aspect of the role).  This was then cited as part of a pattern, and a controversy around casting erupted.

Any small town has such episodes, but this one was unusual because suddenly, a torrent of really vile postings, full of racist threads, swamped the local debate.  One had the sense that Ithaca (a northern town that once played a big role on the Underground Railroad, helping escaped slaves reach freedom) was some sort of hotbed of racism.  But of course there is another easy explanation: perhaps we are just seeing the effects of this Russian-led trolling.  The story and this outburst of racism are precisely in line with what the FBI reported on.  In fact, some of the nasty stuff is home grown and purely American.  But these trolling companies apparently are masters at rabble-rousing and unifying the Archie Bunkers of the world to charge in whatever direction they point.

So here we have dual questions.  With Facebook, or in the commentary on an article in a newspaper, I want to be confident that I'm seeing a "legitimate" comment, not one manufactured in a rabble-rousing factory in Russia.  Arguably, this formulation is at odds with anonymity, because just knowing an account name won't give me much confidence that the person behind the account is a real person.  Trolls create thousands of accounts and use them to create the illusion that massive numbers of people agree passionately about whatever topic they are posting about.  They even use a few as fake counter-arguers to make it all seem more real.

So it isn't enough that Facebook has an account named Abraham Lincoln, and that the person posting on that account has the password.  There is some sense in which you want to know that this is really good old honest Abe posting from the great beyond, and not an imposter (or even a much younger namesake).  Facebook doesn't try to offer that assurance.

This is a technical question, and it may well have a technical answer, although honestly, I don't see an immediate way to solve it.  A quick summary:
  • Desired is a way to communicate, either one-to-one (email), or one-to-many (Facebook, within a social network), or one-to-all (commentary on newspapers and other public media web sites).
  • If the individuals wish to do so privately, we would wish for a way to do this that reveals no information to the authorities, under the assumption that "everyone uses social media".  So there should be a way to communicate privately that somehow hides itself as completely normal web site browsing or other normal activities.
  • If the individuals wish to post publicly, others should be able to authenticate both the name of the person behind the posting (yes, this is the real "Ken Birman," not a forged and twisted fake person operated as a kind of web avatar under control of trolls in St. Petersburg), and the authenticity of the posting (nobody forged this posting, that sort of thing).
  • In this public mode, we should have several variations:
    • Trust in the postings by people in our own social network.
    • Indirect trust when some unknown person posts, but is "vouched for" by a more trusted person.  You can think of this as a form of "trust distance" (see the sketch below).
    • A warning (think of it as a "red frame of shame") on any posting that isn't trustworthy at all.  The idea would be to put a nice bright red frame around the troll postings. 
  • When someone reacts to a troll posting by reposting it, or replying to it, it would be nice if the social networking site or media site could flag that secondary posting too ("oops! The poster was trolled!").  A cute little icon, perhaps?  This could become a valuable tool for educating the population at large about the phenomenon, since we often see secondary commentary without understanding the context in which the secondary remark was made.
Then we would want these ideas widely adopted by email systems, Facebook, Twitter, Google, the New York Times, Breitbart News, and so forth.  Ideally, every interaction would offer these options, so that any mail we send, posting we make, or any that we read, is always protected in the intended manner.
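To make the "trust distance" idea from the list above concrete, here is a toy sketch (an invented friend graph, not tied to any real platform's API): a poster's trust score is just their hop distance from you in the vouching graph, and anything beyond some cutoff, or unreachable entirely, earns the red frame.

    from collections import deque

    # Toy sketch of "trust distance": hop count from the reader to the poster in a
    # vouching/friendship graph (made-up data and names).
    def trust_distance(graph, me, poster):
        seen, frontier = {me}, deque([(me, 0)])
        while frontier:
            person, dist = frontier.popleft()
            if person == poster:
                return dist
            for friend in graph.get(person, ()):
                if friend not in seen:
                    seen.add(friend)
                    frontier.append((friend, dist + 1))
        return None                     # unreachable: nobody vouches for this account at all

    graph = {"me": ["alice"], "alice": ["bob"], "bob": ["troll_account_17"]}
    d = trust_distance(graph, "me", "troll_account_17")
    print("red frame of shame" if d is None or d > 2 else "trusted at distance %d" % d)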

Could this agenda be carried out?  I believe so, if we are willing to trust the root authentication system.  The Cornell visitor from this week pointed out that there is always an issue of the root of trust: once someone can spoof the entire Internet, you don't have any real protections at all.  This extends to your computer too: if you are using a virtualized computer that pretends to support the trust framework but in reality shares your information with the authorities, all privacy bets are (obviously) off.  If the display system carefully and selectively removes some red frames, and inserts others where they don't belong, we're back to square one.

So there are limits.  But I think that with "reasonable assumptions" the game becomes one of creating the right building blocks and then assembling them into solutions with the various options.  Then industry would need to be convinced to adopt those solutions (perhaps under threat of sanctions for violating European privacy rules, which are much tougher than the ones in the US).  So my bet is that it could be done, and frankly, when we consider the scale of damage these hackers and trolls are causing, it is about time that we did something about it.  

Wednesday, 31 January 2018

The Paradox of Trust in Permissionless BlockChain

A few days ago, I posted on work by my colleagues (Eyal, Sirer, Van Renesse and Gencer), who showed that most BitCoin mining is occurring in a fairly small number of data centers, mostly operating in China.

Why is this even possible?  To appreciate the context, it helps to realize that any BlockChain system that operates anonymously needs a way to protect against DDoS attacks (BlockChain assumes that some percentage of the participating computers are unhelpful and perhaps even malicious in a Byzantine sense, and hence could try to flood the system with nonsense of various kinds).  To this end, they employ two mechanisms: (1) A proof-of-work requirement, so that it takes some heavy computing to generate new blocks; (2) A cryptocurrency reward for solving those puzzles and placing new blocks at the end of the chain.  Basically, even though you don't trust some of the computers, you block bad behavior and incent good behavior. 

This is how new cryptocoins arise.  Unlike fiat currency, which is simply printed by the United States Treasury, or the Chinese Treasury, or the Swiss one, BitCoins and Ether coins have to be earned, by mining.  You can also obtain coins in transactions with other participants, for example by selling them chewing gum or some other high-value commodity, but those are previously minted ones.

As cryptocurrencies have soared in value, one after another, the potential to become rich by playing a big role in coin mining wasn't lost on the kind of companies in a position to invest in doing that.  In fact it makes sense to jump in early: you mine lots of coins while the chain still has little real value, and later it turns out that you possess a resource valuable beyond all comprehension.

So, if you are the kind of company that builds data centers, building data centers specifically for mining cybercurrency makes sense.  By five years ago it had become very appealing to pursue this form of reward, and massive data centers emerged.  Gradually, they upped their game, deploying cutting-edge hardware to gain a performance edge.  This pushed less-well-equipped miners out: anyone who can mine faster and cheaper earns more and more of those precious bits.

But it turns out that this dynamic favors places like China, which have inexpensive electric power and labor, yet also have the needed high-tech skills to actually create and run such a mining operation.  So (1) the mining quickly concentrates in a few places, and (2) they tend to be in China.  But that one-two story creates a new and unexpected threat.

The risk is that once a majority of BitCoin mining becomes concentrated in the hands of any small number of players, they can use their power to control which transactions are logged and when, or can offer preferred performance to their friends and penalize their enemies, or even force a rollback to really harm someone they dislike.  A rollback is pretty drastic: since the coins actually originate in blocks within the chain, if the block that minted a coin rolls back, that coin evaporates.  Worse, transactions won't automatically be reissued and performed in the new chain: a transaction becomes invalid if it uses even a fractional version of a non-existent coin.  So potentially, some coins evaporate and some transactions evaporate too.  Yet the chewing gum already changed hands, and has been chewed.  Can't roll that back...

The risk is definitely real.  First, the mere possibility of behaving like a cartel in order to extract more profit from the BlockChain is a powerful force in favor of the first players to explore doing so.  Of course, any cartel will quickly turn into a kind of BitCoin mafia, but the members will get very rich in the process.  So it will happen sooner or later, purely due to market forces.

As for making deals to favor some transactions, or trying to extort money in exchange for fair treatment, we are definitely already hearing stories along those lines today.

But the really extreme case hasn't occurred -- yet.  The scenario here starts with a political event, such as a trade war or a confrontation over the ultimate status of some Asian island-state, or even a border dispute.  It causes great tension between China and some other country that uses BitCoin (or some other cybercoin where Chinese data farms are doing much of the heavy lifting).  Then you have to imagine that China is so angry that it steps in, demands national control over the data centers within its own borders, and then sets out to use this control as leverage for its global political ambitions.  This escalates until China just wipes out some huge portion of the chain by aggressively mining to create an even longer one.  Everyone expects roll-back events of a block or two, but nobody is prepared for roll-backs that might span days, weeks, months... even years.  Could it happen?  Apparently.

So rather than wander into a gloom and doom story again (I did that on a prior posting), I want to comment on the "hoist on one's own petard" aspect of this.  Is BitCoin in fact doomed, in some way, by the very principle of democratization and global fairness that Satoshi set out to achieve?

How did I ever come up with that idea?  Well, you need to think about Satoshi Nakamoto's missive, the document that inspired the entire endeavor.  The story was basically one of eliminating the monopoly power that governments hold over fiat currency in favor of a global and decentralized model: everyone would mine coins, and everyone would share the benefit.

But with this flipped situation where the economic incentives favor centralization, we can now see something that wasn't evident back then: a paradoxical outcome in which the seeds of monopoly were actually hidden in the original premise!  We need a game-theoretic analysis!  But it seems to me that BitCoin is turning out to be a game with an unexpected Nash equilibrium: a state in which a small monopoly is in total control of the BlockChain.

It all centers on a mistaken notion of trust.  In Satoshi's writings, the concept of trust is central.  Satoshi does not trust centralized government, leading him to invent a new ecosystem in which nobody needs to trust anyone at all.  Everyone runs the BlockChain software, nobody cares where it runs, or trusts the participants, yet the chain grows, and everyone can use it to buy and sell the flavor of bubblegum that they prefer, in perfect anonymity.   In concept, at least, there is no advantage to be had by modifying the protocol or attacking the chain.

But we can see how this neglected one aspect of trust: trust that the BlockChain software is run by large numbers of autonomous players, and that mining doesn't become concentrated in any pool that gets even close to controlling one third or more of the mining power.  The point about one third, by the way, arises from a different way of manipulating the system: in a previous result, the same authors managed to show that a cartel with one-third of the mining power can earn all the income by sharing certain kinds of information only among cartel participants (this is a slight violation of the protocol, but one you could implement with just a few lines of changes to the BitCoin code base).

Nobody would know that the cartel had pulled the trick off.  It would just start to win every single competition to place the next block.  Because the cartel could cycle through lots of aliases, it would be quite hard to prove that it was up to this.

Conclusion: It appears that the economics of cybercoin mining creates an unexpected pressure to concentrate mining power in a small number of players.  This in turn gives those players an incentive to form a cartel, in which case they can earn all the money and decide which transactions to clear and which to ignore.  And if they get really, really angry, they can collapse the entire BlockChain.

Would I worry about this, if I were invested in BitCoin?  Actually, the risk seems real and worrying enough to genuinely think about.  It could happen.  And if the chain were to collapse, all of that wealth would evaporate like so much pixie dust.

So why aren't the big players in a panic?  My guess is that they have profited so greatly from playing in the current high-risk ecosystem that BitCoin represents that they became inured to risk along the way.  BitCoin sometimes doubles in price (or loses 80% of its value) in minutes.  Maybe this blog posting will send it on some new gyration.  And these investors have learned that like a really great roller coaster, each stomach-wrenching downslope is followed by a giddy climb to previously unimagined heights.  They get hooked on it... and are unable to appreciate that giving some cartel power to collapse the whole thing is perhaps unwise -- like simply assuming that the rollercoaster framework is sound, while entrusting it to an unknown construction crew from out of town.  Did they loosen the bolts last night,  secretly?  You might not find out until the thing literally collapses underneath you, and by then it would be far too late to reconsider your willingness to take another ride!

So here's a shout-out to my friends who work on game theory.  Are BlockChain systems doomed to form cartel structures?  Or could one imagine a version of BlockChain in which membership is anonymous (permissionless), and yet cartels cannot arise?

One last observation.  Right now, nobody thinks of systems like Microsoft/VMWare/Facebook's Corfu logs as the basis for a BlockChain, primarily because they don't use proof of work and don't operate in a permissionless model.  Same for Derecho, the system we've been working on at Cornell: Derecho is a fast version of Paxos, but you wouldn't normally store HyperLedger records into it.

But why not?  Why shouldn't we create permissioned ledgers this way?  We would eliminate that rate limitation, which drives everyone crazy right now, and in fact would gain other advantages too: auditability, elimination of proof-of-work computations.  This would be good, right?  Rather than mint coins in exchange for mining, we could just stick with traditional fiat currency, but allow these ledgered transactions to talk about it: "For the sum of $1, Ken agrees to sell five packs of BubbleYum Chewing Gum to Ittay."    Maybe later we find a follow-on transaction: "The third party escrow agent confirms delivery of the gum", and after that, confirmation that the $1 changed hands.

Then we could focus on ways to run transactions that span multiple BlockChains.  Maurice Herlihy just published a paper on a protocol for doing that sort of transaction.  Even more interesting than the solution is the specification: if I run a private chain at bank branch A, and it interacts with your branch B of some other bank, how do I know that the transaction was safe, and valid?  And can we avoid creating an audit dependency between A's private log, and B's private log?  Would there be a role for an organization like SWIFT, to assist in clearing at the end of the working day?

You can see where this all leads... my colleagues in game theory would oblige me by proving that permissionless BlockChain is doomed to create high-risk monopolies that concentrate all the control in the hands of a few small cartel-like groups.  Bad news.  To save themselves from a total coin-ocalypse, everyone switches to permissioned models, and we work out the details for using those to host cryptocurrency.  My research group and I enhance Derecho until it stands alone as the ultimate solution for solving that problem (perhaps you didn't know this, but Derecho is provably optimal for Paxos, and in fact is demonstrably way faster than any previous Paxos system, too).  My students then go out and commercialize this...   I like this story!

So what about it, game theory people?  Is this BitCoin vulnerability inevitable in permissionless BlockChains?

Saturday, 20 January 2018

Blockchains immune to cartels.


In my last blog posting, I reported on work by my colleagues that showed how concentrated Blockchain mining has become, with twenty data centers (mostly in China) doing the majority of the mining.  We saw that this makes it easy for China to attack the Blockchain in various ways.

A natural question is to ask why exactly this issue arises, and what can be done about it.

For Bitcoin and other permissionless Blockchain systems, the central issue is “proof of work,” and it arises because of two factors: the rewards for mining, and worries about a form of DDoS vulnerability.  In Bitcoin, the miner that creates a block of transactions and is able to place it in the Blockchain by computing a nonce value that will result in a SHA-256 hash with the requisite number of leading zeros is paid for the effort: the block will “issue” a new coin to the miner, and the value of the coin combines a fee on the transactions it contains, plus a small additional reward for placing the block itself.

Because the cryptographic system used is not known to offer any computational shortcuts, computing the nonce is a brute-force task: a miner assembles a candidate next block and then tests nonce values, one after another, until it finds one that results in a hash with the desired characteristic.  There is no way to avoid the brute-force sifting through nonce values, at present.  Thus Bitcoin rewards computational power.  The more powerful your mining center, in terms of hashes it can compute per second, the more money (coins) you earn.
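To make the brute-force nature of this concrete, here is a toy version of the nonce search in Python (illustrative only: real Bitcoin hashes an 80-byte block header with double SHA-256 and compares against a full difficulty target, and real difficulty is astronomically higher than the value used here):

    import hashlib

    def mine(block_data: bytes, difficulty_bits: int) -> int:
        """Sift through nonces until double-SHA-256(block_data || nonce) falls below the
        target; no shortcut is known, so this is pure brute force."""
        target = 1 << (256 - difficulty_bits)
        nonce = 0
        while True:
            h = hashlib.sha256(
                    hashlib.sha256(block_data + nonce.to_bytes(8, "little")).digest()
                ).digest()
            if int.from_bytes(h, "big") < target:
                return nonce
            nonce += 1

    # A 16-bit toy difficulty needs about 65,000 attempts on average; each extra bit doubles the work.
    print(mine(b"toy block of transactions", 16))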

Could we imagine a Bitcoin or Blockchain protocol wired to locality, so that the right to create blocks could be fairly shared between miners in the USA, Canada, France, Russia, China, India, Uzbekistan,  etc?  Not really.  Doing this would just incent global mining consortia that would run proxy computers in those countries, but would outsource the actual nonce computation back to the farms in China, allowing the existing mining cartel to preserve its monopoly on the income, and its power to attack the global Blockchain.

So why, exactly, did we need proof of work?  The DDoS worry stems directly from the anonymity of Bitcoin: any computer that wishes to join the Bitcoin pool can do so just by downloading the proper software and attaching itself to the network, using a virtual “name” that isn’t tied to any physical computer address in a direct way.  Thus one real computer could fork off hundreds of clones, and they could generate huge rates of transactions by just selling bubblegum to one another at preposterously high rates.  Bitcoin is designed to discourage such behavior: first, the fees paid to log transactions make it expensive to generate huge numbers of them.  And second, the intent of the hashing scheme is to spread the odds of creating the next block broadly, “fairly.”  Here we can see a flaw in the design: fairness breaks down, and once a cartel gains control of the show, most of the other properties, from anonymity to the non-fiat currency characteristics, are out the window too.

Better, it seems to me, would be a permissioned model in which non-anonymized players (call them “banks”) maintain local ledgers.  We would have a local ledger of transactions at Bank of America, another local one at the Bank of Russia, etc.  Individual branches could have their own local sub-ledgers.  I’m not proposing that these logs replicate all the transactions in the world.  Just local logs, using Hyperledger or a similar technology to record banking and business contracts and e-coin transactions.

Such a local ledger won’t need to be permissionless because a branch of the Alternatives Federal Credit Union knows perfectly well which machines play server roles.  So we can use membership management protocols, and in fact are looking at a problem like the one solved by Corfu, or by Derecho in its persistent mode.  Perhaps it would be wise to harden the solution against Byzantine attacks, also a solvable, classic problem.   

Then we could create a protocol for banks to talk to one another, more or less like SWIFT (the widely used standard interbank protocol), but enhanced to deal with transfers that span these ledgers.  Here, I think we see an interesting distributed transaction problem: how would one specify the required properties?  For example, if I ask Alternatives Federal Credit Union to purchase euros for me, then wire them to my sister-in-law in Belgium to pay my share of the house we rent for our summer vacation, the money moves through a chain of perhaps six banks.  Each wants to know that after a clearing delay (as short as possible: zero would be best) its own local log is finalized.  We don’t want a lingering dependency period during which the ledger for the international branch of Chase Bank somehow can’t be audited for correctness without also auditing the one at Alternatives FCU in Ithaca.

But this seems solvable to me.  We could then eliminate proof of work (banks do charge fees for transactions, so those will deter DDoS scenarios, and there is no anonymity, so the fairness aspect is moot).  We would simply have found a clean way to introduce Blockchain technology into standard banking, with or without cryptocurrency (honestly, I’m not a big believer in the stuff, but one can have Blockchain solutions for any kind of contract, and if you want to treat large numbers as commodities with value, who am I to stop you?  In fact the earliest banks operated precisely that way: “... just present this unforgeable document to my brother in Milan, and he will give you two gold florins”).  It would work just as well for crates of oranges, or contracts describing mortgage-backed securities.

Bottom line: my colleagues have shown that in effect, Bitcoin is currently a cartel, ultimately controlled by China. All that stuff about anonymity and fairness?  Basically, a hoax, except that the end users do seem to be anonymous, hence Bitcoin remains very helpful for drug transactions, prostitution, money laundering, international weapons sales.  Stuff like that, where real cash and credit is “awkward”.   Tax evasion.  Yet we can have Blockchain solutions without anonymity, and doing so allows high transaction rates, eliminates the mining monopoly, gets rid of the wasted power and cooling for all that SHA-256 hashing, and if you want e-coins, you can have them too.

So it all comes down to the permissionless model, which in turn is ultimately a political statement by Satoshi Nakamoto, stemming from his distrust of fiat currencies and his views about global political structures.  He favors anonymity as a kind of enabler for anarchy, and envisioned a new global political system fueled by a fiat-free currency.  Yet his vision stumbles, we now see, on its substitution of computing power for political power, leaving China in control.  I myself would prefer to just have a system that really works (I pay my taxes, and don’t engage in money laundering or Bitcoin barter for illicit stuff).  If Bitcoin and anonymous Blockchain systems are at risk of foreign domination and attack, I say: enough with this pretense of anonymity!  Let’s just go with Blockchains!

Thursday, 18 January 2018

Attacking the blockchain

A recent paper by my colleagues showed that bitcoin mining is dominated by about 20 computing farms, which apparently are located mostly in China.

This isn’t surprising: Blockchain mining centers on using special hardware (ASICs) designed to compute cryptographic hash values very rapidly, so a blockchain mining cloud is just a massive array of these ASIC-equipped computers.  These massive computing farms tend to be somewhat cheaper to build and operate in China, simply because the human costs of construction and management are low there.  It also helps to put them in places where electric power is cheap, and where cooling is easy to come by.  It turns out that China has all of these properties, and a big national semiconductor industry that designs and builds ASICs too.  (We aren’t talking about a sophisticated mathematical attack on the underlying cryptographic methods, just a brute-force advantage that comes from owning more computers, and having more of these special hardware accelerators on them.)

The question this raises is as follows:  what risks arise if some single country gains majority control over the Blockchain mining power for a particular currency?

Notice that we are talking here about a kind of brute-force control over the Blockchain.  Anyone could do it, by spending enough money on compute farms and specialized ASICs.  The key insight is that special hardware can be hundreds of thousands of times faster than desktop computing, for this kind of computation.  So you and your friends might run a Blockchain ledger on your home computers, and even configure them to crunch day and night trying to earn coins for you by extending the blockchain, but because this involves solving cryptographic puzzles and the big farms collectively have millions of times more compute power, the probabilities favor those mining farms.  Most likely, those guys will win every single race to compute the next block, and pocket all the resulting coins.  So it isn’t a question of how many people are running the Blockchain software.  This is a story of how much compute power (and electric power, and money to build and operate the farm) you can dedicate.  The winner of that race will be the world leader in computing cryptographic hashes, and hence will dominate in extending the chain.  The issue arises for any Blockchain: Bitcoin, Ethereum, etc.
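To see just how lopsided the race is, here is a toy calculation; the numbers are made up (merely plausible orders of magnitude), but the logic is simply that each race to extend the chain is a weighted lottery in which your odds equal your share of the world's hash rate:

    # Made-up but order-of-magnitude-plausible numbers: the odds of placing the next
    # block are just your fraction of the total hash rate.
    farm_rate = 20 * 50e15          # suppose 20 big farms, each hashing ~50 PH/s
    home_rate = 100e6               # a home rig managing ~100 MH/s
    total = farm_rate + home_rate
    print("the farms win a given race with probability", farm_rate / total)
    print("the home miner wins with probability", home_rate / total)   # ~1e-10: effectively never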

I talked this over with some friends who know their stuff, and thought I might share the list of weird attacks one can potentially launch once you have this sort of practical domination.
  • Each new block is a set of transactions.  The dominating cartel could favor its own transactions, so that ones from less favored customers have very long clearing times, or perhaps never get logged at all.  They can effectively help friends and censor enemies: a big issue if the Blockchain is supposed to be fair.
  • They could stop mining entirely.  This would cause chaos: new transactions would stop being logged, or be logged very, very slowly. The problem here is that the hardness of Blockchain schemes is a function of the mining power, so if the mining power suddenly drops drastically and the hardness remains the same, the rate of new blocks drops commensurately.
  • As the main operators of the mining system, they would be in an unusual position to run compromised software, hence could collude among their farms, which is a way to further amplify the power of the cartel (there is a prior result by Sirer and Eyal on this).
  • They could run two side-by-side Blockchains that start with the same prefix of blocks (by “forking” the original prefix), keeping one public and one secret.  Since they dominate the compute power by a large factor, they can run the public chain slowly, and the secret chain faster.  The secret chain will quickly become longer than the public one. Then one day they can suddenly release the secret chain.  This has the effect of rolling back every single block in the previously-public chain, because Blockchain software favors the longer chain (see the sketch just after this list).  So suddenly, every logged transaction in the public chain has been backed out!  This would allow double spending.  And such an attack could erase years of transactions.
  • Same attack, but now suppose that the chain was also used to keep a ledger of contracts (for banking).  The rollback would effectively erase the tail of the ledger.  Those contracts might vanish entirely, or they could reappear at some other location or in some other order in the new chain (the previously secret one). Since some of the contract languages, like Hyperledger's, are state- or order-sensitive, this can change the meaning of the contracts. This risk arises because with Hyperledger, one record can refer to a value defined in another record.
  • With knowledge of the secret chain, they could arbitrage.  For example, suppose that in the public chain, a million shares of Apple are sold at some price.  This might impact the actual share price of Apple.  But if the sale was erased in the secret chain, not only would we have chaos concerning ownership of the shares, we would also have impacted the price, and the attackers could exploit this knowledge to anticipate the impact and profit from it by placing bets in public exchanges, via futures contracts.
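Here is a toy illustration of that secret-chain attack (invented data structures, not any real client's code): clients adopt the longer chain, so the moment the longer private fork is published, every block unique to the public chain, and every transaction inside those blocks, is rolled back.

    # Toy illustration of the secret-fork attack: chain length stands in for cumulative
    # work, and clients simply adopt whichever chain is longer.
    def adopt(current, candidate):
        return candidate if len(candidate) > len(current) else current

    public = ["G", "A1", "A2", "A3"]                  # G = shared prefix mined before the fork
    secret = ["G", "B1", "B2", "B3", "B4", "B5"]      # mined privately, faster, from the same prefix

    winner = adopt(public, secret)
    rolled_back = [blk for blk in public if blk not in winner]
    print(winner)         # ['G', 'B1', 'B2', 'B3', 'B4', 'B5']
    print(rolled_back)    # ['A1', 'A2', 'A3']: every transaction they logged has been backed out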
I bet this list isn’t exhaustive.  Jump in with your own ideas!  But you can already see that basically, if you use Blockchain, you had better trust the operators of the majority of the mining power in the mining pool.  If you don’t, or if they turn on you, you could be seriously harmed down the road.

The bad news, obviously, is that apparently, today’s Blockchain systems are seriously at risk in this sense!

Tuesday, 19 December 2017

Data concentrators: Follow the bytes

In my last posting, I discussed the need for cloud-hosted data concentrators in support of fog computing.  I thought it might be interesting to dive a bit deeper on this topic.  A number of questions arise, but I want to suggest that they center on understanding the "critical path" for data.  In computer systems parlance, the term just means "the portion of the system that limits performance."  Once we understand which elements make up the critical path, we can then ask further questions about how one might implement an architecture to support this form of data concentrator, with various cloud-like goals: resource sharing, scalability, and so forth. 

To set the context, let me give an example of how this form of reasoning can be valuable.  Consider Eric Brewer's famous CAP conjecture for web servers (and the associated BASE methodology).  It may be surprising to realize that before Eric actually put this forward, many web server systems were built as classic 3-tier database structures, in which every web page was basically the output of a database query, and the database server was thus on the critical path even for read-only web transactions.  The database queries ultimately became the bottleneck: nobody could figure out how to get linear scaling without having the database concurrency mechanism dead center on the critical path and ultimately, overloaded by the load.

At that time we saw some very clever workarounds that came close to solving the problem.  For example, with snapshot isolation, it was shown that you could approximate serializability by running transactions slightly in the past, on consistent snapshots.  There was a scheme for gradually narrowing the snapshot window, to make it as current as practical and yet ensure that the data in the snapshot was committed and consistent.  So by doing a read off this window, you could avoid consulting the back-end database more frequently than required and end up generating your web response in a consistent way, but totally locally, just talking to the cache.  Yet even snapshot isolation frustrated some vendors.  They wanted far more scalability, and snapshot isolation still had some overheads that involved queries to the backend database.  It seemed clear that consistency would be at odds with eliminating this round-trip to the back end.
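A rough sketch of that read path, with hypothetical names (this is my own illustration, not any particular product's design): read-only requests are answered entirely from the newest committed snapshot held in the cache, so they never take locks in the back-end database.

    # Rough sketch of serving reads "slightly in the past" from committed, consistent
    # snapshots; names and structure are invented for illustration.
    class SnapshotCache:
        def __init__(self):
            self.snapshots = {0: {}}       # version -> {key: value}, committed and consistent
            self.latest_committed = 0

        def install(self, version, data):
            """The back end publishes a newer committed snapshot, narrowing the window."""
            self.snapshots[version] = data
            self.latest_committed = max(self.latest_committed, version)

        def read(self, key):
            """Answer a read-only web request locally, with no round trip to the database."""
            return self.snapshots[self.latest_committed].get(key)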

CAP originated from Eric's decision to study this exact issue: he observed that  responsiveness for web page generation was overwhelmingly more critical than any other aspect of the system, and that database queries are slow due to their consistency mechanisms -- mechanisms that do guarantee that the answer is correct, but involve locking and other delays.  This led him to ask whether web servers actually needed consistency, and to speculate about how much data can be served out of cached content.  He ended up arguing for a "soft state" model, in which pretty much every web request is handled as much as possible from cached content, and his company, Inktomi, was born.  CAP then gave rise to a development methodology (the BASE approach), resulted in scalable key-value caching architectures (MemCached, Cassandra, etc), argued for NoSQL database-like models such as in DynamoDB, and so forth.

Honestly, my crowd was up in arms at the time.  Very few "strong consistency" researchers liked CAP or its conclusion that we should just toss consistency out the window.  I could point to colleagues who rail against CAP, even though CAP itself is 15 years old.  And how can one really dispute Eric's deeper point: if these applications don't actually need consistency, why are we insisting that they pay for a property that they don't want, and that isn't all that cheap either?

Notice how from a single core insight about the critical path, we end up with a fertile debate and a whole area of research that played out over more than a decade.  You may not love CAP, yet it was actually extremely impactful.

CAP isn't the only such story.  Jim Gray once wrote a lovely little essay on the "Dangers of Database Replication" in which he did a paper-and-pencil analysis (jointly with Pat Helland, Patrick O'Neil and Dennis Shasha) and showed that if you just toss state machine replication into a database, the resulting system will probably slow down as n^5, where n is the number of replicas.  Since you probably wanted a speedup, not a slowdown, this is a classic example of how critical path analysis leads to recognition that a design is just very, very wrong!  And Jim's point isn't even the same one Eric was making.  Eric was just saying that if you want your first tier to respond to queries without sometimes pausing to fetch fresh data from back-end databases, you need to relax consistency.

Ultimately, both examples illustrate variations on the famous End to End principle: you should only solve a problem if you actually think you have that problem, particularly if the solution might be costly!  And I think it also points to a trap that the research community often falls into: we have a definite tendency to favor stronger guarantees even when the application doesn't need guarantees.  We are almost irresistibly drawn to making strong promises, no matter who we are making them to!  And face it: we tend to brush costs to the side, even when those costs are a really big deal.

So, armed with this powerful principle, how might we tackle the whole data concentrator question?  Perhaps the best place to start is with a recap of the problem statement itself, which arises in fog/edge computing, where we want a cloud system to interact closely with sensors deployed out in the real world.  The core of that story centered on whether we should expect the sensors themselves to be super-smart.  We argued that probably, they will be rather dumb, for a number of reasons: (1) power limits; (2) impracticality of sending them large machine-learned models, and of updating those in real-time; (3) least-common denominator, given that many vendors are competing in the IoT space and companies will end up viewing all these devices as plug-and-play options; (4) lack of peer-to-peer connectivity, when sensors would need to talk to one-another in real-time to understand a situation.  In the previous blog I didn't flesh out all 4 of these points, but hopefully they are pretty self-evident (I suppose that if you already bet your whole company on brilliant sensors, you might disagree with one or more of them, but at least for me, they are pretty clear).

Specialized sensors for one-time uses might be fancier (sensors for UAVs, for example, that do various kinds of information analysis and data fusion while in flight).  But any kind of general purpose system can't bet on those specialized features. We end up with a split: a military surveillance system might use very smart sensors, operating autonomously in an environment where connectivity to the ground was terrible in any case.  But a smart highway might ignore those super-fancy features because it wants to be compatible with a wide range of vendors and anyhow, wants to work with a very dynamically-updated knowledge model.  So, across the broad spectrum of products, a system that wants to have lots of options will end up forced towards that least-common denominator perspective.

From this, we arrived at various initial conclusions, centering on the need to do quite a bit of analysis on the cloud: you can probably do basic forms of image analysis on a camera, but matching your video or photo to a massive "model" of activity along Highway 101 near Redwood City -- that's going to be a cloud-computing task.  Many people believe that speech understanding is basically a cloud-scale puzzle, at least for the current decade or so (the best solutions seem to need deep neural networks implemented by massive FPGA or ASIC arrays).  Most forms of machine learning need big TPU or GPU clusters to crunch the data. 

So most modern roads lead to the cloud.  I'll grant that someday this might change, as we learn to miniaturize the hardware and figure out how to pack all the data in those models into tiny persistent storage units, but it isn't going to happen overnight.  And again, I'm not opposed to fancy sensors for UAVs.  I'm just saying that if some of your UAVs come from IBM, some from Microsoft and some from Boeing, and you want to work with all three options, you won't be able to leverage the fanciest features of each.

The data concentrator concept follows more or less directly from the observations above.  Today's cloud is built around Eric's CAP infrastructure, with banks of web servers that rush to serve up web pages using cached data spread over key-value stores.  Most of the deep learning and adaptation occurs as a back-end task, more or less offline.  It isn't unusual for edge models to be hours out of date.  And it is very hard to keep them within seconds of current state.  But if you plan to control a smart highway, or even a smart home, seconds are too high a latency.  "Watch out for that rusty pipe!" isn't a very helpful warning, if you (or your smart car) won't get that warning until 10s too late.

So what would a data concentrator do, and how does the answer to such a question reveal the most likely critical paths?  Basically, data concentrators give rise to a new kind of tier-one system.  Today's first tier of the cloud runs Eric Brewer's architecture: we have vast armies of very lightweight, stateless, web-page generating engines.  A data concentrator would be a new form of first tier, one that focuses on stateful responses using machine-learned models and customized using machine-learning code, much as we customize a web server to generate the various web pages needed for media, direct sales, advertising, etc.  Over time, I'm betting we would code these in a high level language, like TensorFlow, although today C++ might give much better performance.

Back to basics.  You know the business adage about following the money.   For a fog system where we care mostly about performance and real-time responses, the "costly" thing is slowdowns or delays.  So let's follow the data.  The whole reason for building data concentrators is that our sensors are kind of dumb, but the volumes of data are huge, hence we'll need a system element to filter and intelligently compress the data: I like to think of the resulting systems as "smart memory", meaning that we might treat them like file systems, and yet they are smart about how they store the stuff they receive. A smart memory is just one form of data concentrator; I can think of other forms, too; maybe a future blog could focus on that.

Via its file-system personality, a smart-memory system could be used just like any storage infrastructure, supporting normal stuff like Spark/Databricks, TimescaleDB, you name it.  But in its edge-of-the-cloud personality, it offers the chance to do tasks like image or voice classification using big machine-learned models, real-time reaction to urgent events (like mufflers falling off on the highway), and perhaps even simple learning tasks ("Ken's car is two inches to the left of where we predicted").

Where would the highest data pressures arise?  If we have sensors capturing (and perhaps storing) video or voice, the individual data links back to the concentrator tier of the cloud won't be terribly burdened: the core Internet has a lot of capacity, and even a dumb camera can probably do basic tasks like recognizing that the highway is totally empty.  But each concentrator will be a fan-in point, capturing data from lots of sources in parallel.  So the highest demand may be on that very first leg inside the cloud: we should be asking what happens to a data stream as it arrives at a single data concentrator.

Without knowing the application, this may seem like an impossible question, but consider this: we actually know a lot about modern applications, and they share a lot of properties.  One is the use of accelerators: if the data is video, the first steps will be best suited to GPU, TPU or FPGA hardware, which can do tasks like parallel image processing, evaluating large Bayesian or neural-network models, and so forth.  Right now, it is surprisingly hard to load data from an incoming TCP connection directly into accelerators of this kind: most likely it will be copied many times before it ever reaches GPU memory and we can turn that hardware loose.  And we've already concluded that only a limited amount of that kind of work can occur on the sensor itself.
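
To see where the copies come from, here is an illustrative Python sketch (a socketpair stands in for the incoming TCP connection, NumPy for the host-side staging buffer; the sizes and the final host-to-accelerator step are assumptions, not measurements):

    import socket
    import numpy as np

    rx, tx = socket.socketpair()          # stand-in for a camera's TCP connection
    tx.sendall(bytes(4096))               # pretend this is a chunk of video

    # Naive path: each step below materializes another copy of the data.
    chunk = rx.recv(4096)                 # copy 1: kernel buffer -> new bytes object
    pixels = np.frombuffer(chunk, dtype=np.uint8)   # a view, no copy yet
    floats = pixels.astype(np.float32)    # copy 2: reformat for the model
    # copy 3 would be the host-to-GPU transfer (whatever memcpy your framework
    # does under the covers) -- the copy we would most like to avoid.

    # Partial remedy: receive straight into a preallocated, reusable buffer,
    # so we at least stop allocating a fresh object per chunk.
    tx.sendall(bytes(4096))
    buf = bytearray(4096)
    n = rx.recv_into(buf)                 # still one kernel -> user copy
    pixels = np.frombuffer(buf, dtype=np.uint8, count=n)
    # Getting from here into GMEM/TMEM without further copies is exactly the
    # O/S and driver support argued for below.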

So a first insight more or less leaps out at us: for this data collection model to work, we have to evolve the architecture to shunt data directly from incoming TCP connections into GPU or TPU memory (GMEM or TMEM), or through a bump-in-the-wire FPGA transformation.  Today's operating systems don't make this easy, so we've identified an O/S research need.  Absent that research, we've identified a likely hot spot that could be the performance-limiting step of the architecture.  That was basically how CAP emerged, and how Jim Gray's database scaling work advanced: both were paper-and-pencil analyses that led to pinch points (in the case of CAP, the locking or cache-coherency delays needed to guarantee consistency for read-mostly workloads; in the case of database updates, the concurrency control and conflicts that caused aborts).

Distilled to its bare bones, the insight is this: data concentrators need ways to move data from where it is captured to where it will be processed that minimize delay and maximize throughput -- and today they lack those options.  Without them, we will see a lot of copying, and it will be very costly.

A second such issue involves comparing data that is captured in multiple ways.  If we believe in a sensor-rich IoT environment, we'll have many sensors in a position to hear a single utterance, or to see a single event, and different sensors may have differing signal quality.  So we might wish to combine sensor inputs to strengthen a signal, enrich a set of 2-D visual perspectives into a 3-D one, triangulate to locate objects in space, select the best-quality image from a set of candidates, and so on.
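
Two of the simplest fusion operations in that list can be sketched in a few lines of Python (the data formats and the sharpness metric are invented): a confidence-weighted combination of position estimates, and selection of the best-quality image from a set of candidates.

    # Toy fusion operations; purely illustrative.
    def fuse_positions(estimates):
        # estimates: list of (position, confidence) pairs from different sensors
        total = sum(conf for _, conf in estimates)
        return sum(pos * conf for pos, conf in estimates) / total

    def best_image(candidates, sharpness):
        # pick whichever frame the (assumed) quality metric likes best
        return max(candidates, key=sharpness)

    print(fuse_positions([(10.2, 0.9), (9.8, 0.5), (11.0, 0.1)]))
    print(best_image(["cam-a", "cam-b"], sharpness={"cam-a": 0.3, "cam-b": 0.7}.get))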

Again, we have a simple insight: cross-node interactions will also be a pressure point.  The basic observation leads to essentially the same findings as before.

Another form of cross-node interaction arises because most data concentration systems form some kind of pipeline, with stages of processing.  If you know about Microsoft's Cosmos technology, this is what I have in mind.  Hadoop jobs (MapReduce) are another good example: the output of stage one becomes the input to stage two.

So for such systems, performance is often dominated by the flow of data between stages.  For example, suppose that one stage is a GPU computation that segments video images, a second stage guesses at what the segmented objects might be ("car", "truck", "rusty pipe"), and a third matches those guesses against a knowledge model ("Ken's car", "Diane's truck") and then compares the details ("is Ken on the same trajectory as predicted?").  We might also see some form of fanout: there could be one element in the first stage, but two or even many second- or third-stage components, examining very different information and with distinct purposes.  Each might send events or log data into storage.
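
A toy version of that three-stage shape, with fanout, might look like the following Python sketch (the per-stage "work" is stubbed out; only the structure of the handoffs matters here):

    import queue
    import threading

    q12, q23a, q23b = queue.Queue(), queue.Queue(), queue.Queue()

    def segment(frames):                         # stage 1: pretend GPU segmentation
        for f in frames:
            q12.put({"frame": f, "regions": ["blob-1", "blob-2"]})
        q12.put(None)                            # end-of-stream sentinel

    def classify():                              # stage 2: pretend model inference
        while (item := q12.get()) is not None:
            item["labels"] = ["car", "rusty pipe"]
            q23a.put(item)                       # fanout to two stage-3 consumers
            q23b.put(item)
        q23a.put(None)
        q23b.put(None)

    def match_knowledge_model():                 # stage 3a
        while (item := q23a.get()) is not None:
            print("matched", item["labels"], "against the highway model")

    def log_events():                            # stage 3b
        while (item := q23b.get()) is not None:
            print("logged frame", item["frame"])

    workers = [threading.Thread(target=classify),
               threading.Thread(target=match_knowledge_model),
               threading.Thread(target=log_events)]
    for w in workers:
        w.start()
    segment(range(3))
    for w in workers:
        w.join()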

This leads us to scrutinize the architectural support for that form of structure and those handoffs, and to pay attention to the requirements.  As one example: if we want our infrastructure to be highly available, does that imply that each handoff needs to be reliable on an event-by-event basis?  The answers may differ case by case: data from a smart highway is very redundant because cars remain on it for long periods, so if a glitch somehow drops one event, a follow-on event will surely have the same content.  In an end-to-end sense, we wouldn't need reliability for that specific handoff, especially if we could gain speed by relaxing guarantees -- Eric would like that sort of thing.  Conversely, if an event is one of a kind ("Oh my goodness, Diane's muffler is falling off!  There it goes!!"), you might want to view it as a critical event that must not be dropped, and that has many urgent consequences.
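
One way to express that per-event choice, sketched in Python under the assumption that a bounded queue sits between stages: redundant readings are handed off best-effort and may be dropped under pressure, while one-of-a-kind events block until the next stage can accept them.

    import queue

    # Per-event handoff policy; purely illustrative.
    handoff = queue.Queue(maxsize=1000)

    def deliver(event, critical=False):
        if critical:
            handoff.put(event)           # block until the next stage has room
            return True
        try:
            handoff.put_nowait(event)    # best effort: drop if backed up
            return True
        except queue.Full:
            return False                 # a later, redundant reading will cover for it

    deliver({"kind": "position", "car": "Ken"})                          # droppable
    deliver({"kind": "muffler-falling-off", "car": "Diane"}, critical=True)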

The tradeoffs between data throughput and latency are intriguing too.  Generally, one wants to keep an architecture busy, but a system can be busy in good or bad ways, and goodness is very situation-specific: for warnings about roadway trash, a quick warning is what matters; for aggregated, steady-state data processing, longer pipelines and batch processing pay off.  The two goals are clearly in tension.
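
A common compromise, sketched here in Python with invented parameters, is micro-batching with a deadline: accumulate items for throughput, but never hold any item longer than a fixed latency budget.

    import queue
    import time

    def next_batch(q, max_batch=32, max_wait=0.05):
        # Wait for at least one item, then keep batching until either the
        # batch is full or the latency budget for the first item runs out.
        batch = [q.get()]
        deadline = time.monotonic() + max_wait
        while len(batch) < max_batch:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                batch.append(q.get(timeout=remaining))
            except queue.Empty:
                break
        return batch

    q = queue.Queue()
    for i in range(5):
        q.put(i)
    print(next_batch(q))    # returns [0, 1, 2, 3, 4] once the deadline expires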

The pithy version would be along these lines: improved support for data pipelines will be important, and is lacking in today's systems.  Without progress, not only will we see a fair amount of inefficient copying, but we could also end up with barrier delays much as in Hadoop (where stage two sits waiting for a set of results from stage one, and any single delayed result can snowball and delay the entire computation).

I won't drag this out -- for a blog, this is already a rather long essay!  But hopefully you can see my core point, which I'll summarize.  Basically, we need to visualize a scalable data concentrator system in terms of the data flows it must support and process, and optimize to make those flows as efficient as possible.  Modern O/S structures seem ill-matched to this, although it may not be very hard to improve them until the fit is much better.  As we do, it jumps out that low latency and high throughput may well be in conflict here: following the bytes down these different paths might lead to very different conclusions about the ideal structure to employ!

I love puzzles like this one, and will surely have more to say about it someday.  But jump in if you want to share a perspective of your own!