Sunday, 18 February 2018


Recent revelations about troll postings to Facebook, Twitter and other media sites raise an obvious question: can we trust the authenticity of online commentary, or should we basically distrust everything?  After all, even a posting by my brother or an email from my mother could be some kind of forgery.

The question has broader dimensions.  Back in 1995, David Cooper and I published a little paper on secure and private email.  In it, we asked whether two people (who know one another) can send and receive email in a monitored environment, where an untrusted observer is trying both to detect that communication has occurred and to retrieve the message itself.

Our 1995 solution uses cryptography and fake traffic to solve the problem.  The fake traffic ensures that there is a steady flow of bytes, whether or not the two people are communicating.  Then we designed a kind of shared storage system that plays the role of an email server: you can hide data inside it.  The email itself was encrypted, but it was also broken into pieces in the storage layer and hidden inside vast amounts of noise.  The act of sending or receiving an email was then mapped to a large volume of reads and rewrites of blocks in the storage system.  We showed that an observer learns very little in this case, and yet you can send and receive emails in a way that guarantees the authenticity of the messages.
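To make the cover-traffic trick concrete, here is a minimal sketch, not our actual 1995 protocol: every packet on the wire has exactly the same size and departs at a steady rate, whether it carries (already-encrypted) message data or pure noise.  The packet format and sizes here are invented for illustration; a real system would encrypt the whole packet, header included, with an authenticated cipher.

```python
import os
from typing import Optional

PACKET_SIZE = 256  # every packet on the wire is exactly this many bytes

def make_packet(payload: Optional[bytes]) -> bytes:
    """Build a fixed-size packet carrying either real data or pure noise.

    A passive observer sees only equal-length packets at a steady rate,
    so the presence or absence of a message leaks nothing.  (Here the
    2-byte length header is in the clear; in a real system it would be
    encrypted too, making dummies and data truly indistinguishable.)
    """
    if payload is None:
        # Dummy packet: length 0, body is pure random noise.
        return (0).to_bytes(2, "big") + os.urandom(PACKET_SIZE - 2)
    assert len(payload) <= PACKET_SIZE - 2
    pad = os.urandom(PACKET_SIZE - 2 - len(payload))
    return len(payload).to_bytes(2, "big") + payload + pad

def parse_packet(pkt: bytes) -> Optional[bytes]:
    """Recover the payload, or None if the packet was a dummy."""
    length = int.from_bytes(pkt[:2], "big")
    return pkt[2:2 + length] if length else None
```

The sender emits one packet per tick regardless of whether anyone is writing email, so traffic volume carries no signal at all.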

This week a Cornell visitor told us about ways to improve on that style of older system, but the broad framework was similar.  I have always loved solutions built from little cryptographic building blocks, so I thought this was a really fun talk.   The problem, of course, is that nobody adopts tools like these, and unless everyone is using them, the mere fact of having a copy of the software might tip bad actors off to your interest in secret communication (then they can abduct you and force you to reveal everything).  To really work, we would need a universally adopted standard, one that nearly everyone was using even without realizing it -- the WhatsApp of secure email.  That way, when they come to question you, you can pretend to have absolutely no idea what they are talking about.

The other problem is that in contemporary society, there is a slight bias against privacy.  While most people would agree that we have a right to privacy, they seem to mean "unless you are trying to hide a secret we want to know about."  So there is a contradiction in the sense that we accept the right to privacy, yet also seem to believe in a broader societal right to intrude, particularly if the individuals are celebrities -- as if privacy rights vanish with any form of fame or notoriety.  There is also a significant community that assumes that privacy is something people would want primarily as a way to hide something: an unusual sexual preference, or criminal activity, or terrorism.

Back to the trolls.  In the cases recently publicized by the FBI, CIA and NSA, we learned that Russia operates at least one company (maybe more) with a large staff (80 or more employees) who work full time, day in and day out, planting fake news, false commentaries and incendiary remarks in the US and European press and social networks.  Here in Ithaca, the local example seems to be a recent event in which a dispute arose about the diversity of casting for a high school play (although the lead role is that of Carmen, a gypsy woman and hence someone who would normally have dark skin, the casting didn't reflect that aspect of the role).  This was then cited as part of a pattern, and a controversy around casting erupted.

Any small town has such episodes, but this one was unusual because suddenly, a torrent of really vile postings, full of racist threads, swamped the local debate.  One had the sense that Ithaca (a northern town that once played a big role in the Underground Railroad, helping escaped slaves reach freedom) was some sort of hotbed of racism.  But of course there is another easy explanation: perhaps we are just seeing the effects of this Russian-led trolling.  The story and this outburst of racism are precisely in line with what the FBI reported on.  To be sure, some of the nasty stuff is home grown and purely American.  But these trolling companies apparently are masters at rabble-rousing, unifying the Archie Bunkers of the world to charge in whatever direction they point.

So here we have dual questions.  With Facebook, or in the commentary on an article in a newspaper, I want to be confident that I'm seeing a "legitimate" comment, not one manufactured in a rabble-rousing factory in Russia.  Arguably, this formulation is at odds with anonymity, because just knowing an account name won't give me much confidence that the person behind the account is a real person.  Trolls create thousands of accounts and use them to create the illusion that massive numbers of people agree passionately about whatever topic they are posting about.  They even use a few as fake counter-arguers to make it all seem more real.

So it isn't enough that Facebook has an account named Abraham Lincoln, and that the person posting on that account has the password.  There is some sense in which you want to know that this is really good old honest Abe posting from the great beyond, and not an imposter (or even a much younger namesake).  Facebook doesn't try to offer that assurance.

This is a technical question, and it may well have a technical answer, although honestly, I don't see an immediate way to solve it.  A quick summary:
  • Desired is a way to communicate, either one-to-one (email), or one-to-many (Facebook, within a social network), or one-to-all (commentary on newspapers and other public media web sites).
  • If the individuals wish to do so privately, we would wish for a way to do this that reveals no information to the authorities, under the assumption that "everyone uses social media".  So there should be a way to communicate privately that somehow hides itself as completely normal web site browsing or other normal activities.
    • If the individuals wish to post publicly, others should be able to authenticate both the name of the person behind the posting (yes, this is the real "Ken Birman," not a forged and twisted fake person operated as a kind of web avatar under control of trolls in St. Petersburg), and the authenticity of the posting (nobody forged this posting, that sort of thing).
  • In this public mode, we should have several variations:
    • Trust in the postings by people in our own social network.
    • Indirect trust when some unknown person posts, but is "vouched for" by a more trusted person.  You can think of this as a form of "trust distance".
    • A warning (think of it as a "red frame of shame") on any posting that isn't trustworthy at all.  The idea would be to put a nice bright red frame around the troll postings. 
  • When someone reacts to a troll posting by reposting it, or replying to it, it would be nice if the social networking site or media site could flag that secondary posting too ("oops! The poster was trolled!").  A cute little icon, perhaps?  This could become a valuable tool for educating the population at large about the phenomenon, since we often see secondary commentary without understanding the context in which the secondary remark was made.
Then we would want these ideas widely adopted by email systems, Facebook, Twitter, Google, the New York Times, Breitbart News, and so forth.  Ideally, every interaction would offer these options, so that any mail we send, any posting we make, and any that we read, is always protected in the intended manner.
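The "trust distance" idea above amounts to a shortest-path computation over the social graph.  A small sketch; the graph, the names, and the convention that an unreachable poster earns the red frame are all illustrative, not any real social network's API:

```python
from collections import deque

def trust_distance(graph, me, poster):
    """Count hops from `me` to `poster` along friend/"vouches for" edges.

    Breadth-first search gives the shortest chain of vouching.  Returns
    None if no chain exists -- a candidate for the "red frame of shame".
    """
    if me == poster:
        return 0
    seen, frontier = {me}, deque([(me, 0)])
    while frontier:
        node, dist = frontier.popleft()
        for friend in graph.get(node, ()):
            if friend == poster:
                return dist + 1
            if friend not in seen:
                seen.add(friend)
                frontier.append((friend, dist + 1))
    return None  # untrusted: no chain of vouching connects us

# Hypothetical network: Alice knows Bob, who vouches for Carol.
graph = {"alice": ["bob"], "bob": ["carol"], "troll9000": []}
```

A site could then render postings differently depending on the distance: direct friends, vouched-for strangers, and the unreachable (framed in red).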

Could this agenda be carried out?  I believe so, if we are willing to trust the root authentication system.  The Cornell visitor from this week pointed out that there is always an issue of the root of trust: once someone can spoof the entire Internet, you don't have any real protections at all.  This extends to your computer too: if you are using a virtualized computer that pretends to support the trust framework but in reality shares your information with the authorities, all privacy bets are (obviously) off.  And if the display system carefully and selectively removes some red frames, and inserts others where they don't belong, we're back to square one.

So there are limits.  But I think that with "reasonable assumptions" the game becomes one of creating the right building blocks and then assembling them into solutions with the various options.  Then industry would need to be convinced to adopt those solutions (perhaps under threat of sanctions for violating European privacy rules, which are much tougher than the ones in the US).  So my bet is that it could be done, and frankly, when we consider the scale of damage these hackers and trolls are causing, it is about time that we did something about it.  

Wednesday, 31 January 2018

The Paradox of Trust in Permissionless BlockChain

A few days ago, I posted on work by my colleagues (Eyal, Sirer, Van Renesse and Gencer), who showed that most BitCoin mining is occurring in a fairly small number of data centers, mostly operating in China.

Why is this even possible?  To appreciate the context, it helps to realize that any BlockChain system that operates anonymously needs a way to protect against DDoS attacks (BlockChain assumes that some percentage of the participating computers are unhelpful and perhaps even malicious in a Byzantine sense, and hence could try to flood the system with nonsense of various kinds).  To this end, such systems employ two mechanisms: (1) a proof-of-work requirement, so that it takes some heavy computing to generate new blocks; and (2) a cryptocurrency reward for solving those puzzles and placing new blocks at the end of the chain.  Basically, even though you don't trust some of the computers, you block bad behavior and incent good behavior.

This is how new cryptocoins arise.  Unlike fiat currency, which is simply printed by the United States Treasury, or the Chinese Treasury, or the Swiss one, you need to earn your BitCoins or Ether coins, by mining.  While you can also obtain coins in transactions with other participants, for example by selling them chewing gum or some other high-value commodity, those are previously minted ones.

As cryptocurrencies have soared in value, one after another, the potential to become rich by playing a big role in coin mining wasn't lost on the kind of companies in a position to invest in doing that.  In fact it made sense to jump in early: you could mine lots of coins back when the chain had little real value, and later discover that you possess a resource valuable beyond all comprehension.

So, if you are the kind of company that builds data centers, building data centers specifically for mining cryptocurrency makes sense.  Pursuing this form of reward had become very appealing by five years ago, and massive data centers emerged.  Gradually, they upped their game, deploying cutting-edge hardware to gain a performance edge.  This pushed less-well-equipped miners out: anyone who can mine faster and cheaper earns more and more of those precious bits.

But it turns out that this dynamic favors places like China, which have inexpensive electric power and labor, yet also have the needed high-tech skills to actually create and run such a mining operation.  So (1) the mining quickly concentrates in a few places, and (2) they tend to be in China.  But that one-two story creates a new and unexpected threat.

The risk is that once a majority of BitCoin mining becomes concentrated in the hands of any small number of players, they can use their power to control which transactions are logged and when, or can offer preferred performance to their friends and penalize their enemies, or even force a rollback to really harm someone they dislike.  A rollback is pretty drastic: since the coins actually originate in blocks within the chain, if the block that minted a coin rolls back, that coin evaporates.  Worse, transactions won't automatically be reissued and performed in the new chain: a transaction becomes invalid if it uses even a fractional version of a non-existent coin.  So potentially, some coins evaporate and some transactions evaporate too.  Yet the chewing gum already changed hands, and has been chewed.  Can't roll that back...

The risk is definitely real.  First, the mere possibility of behaving like a cartel in order to extract more profit from the BlockChain is a powerful force in favor of the first players to explore doing so.  Of course, any cartel will quickly turn into a kind of BitCoin mafia, but the members will get very rich in the process.  So it will happen sooner or later, purely due to market forces.

As for making deals to favor some transactions, or trying to extort money in exchange for fair treatment, we are definitely already hearing stories along those lines today.

But the really extreme case hasn't occurred -- yet.  The scenario here starts with a political event, such as a trade war, a confrontation over the ultimate status of some Asian island-state, or even a border dispute.  It causes great tension between China and some other country that uses BitCoin (or some other cybercoin where Chinese data farms are doing much of the heavy lifting).  Then you have to imagine that China is so angry that it steps in, demands national control over the data centers within its own borders, and then sets out to use this control as leverage in its global political ambitions.  This escalates until it simply wipes out some huge portion of the chain by aggressively mining to create an even longer fork.  Everyone expects roll-back events of a block or two, but nobody is prepared for roll-backs that might span days, weeks, months... even years.  Could it happen?  Apparently.

So rather than wander into a gloom-and-doom story again (I did that in a prior posting), I want to comment on the "hoist with one's own petard" aspect of this.  Is BitCoin in fact doomed, in some way, by the very principle of democratization and global fairness that Satoshi set out to achieve?

How did I ever come up with that idea?  Well, you need to think about Satoshi Nakamoto's missive, the document that inspired the entire endeavor.  The story was basically one of eliminating the monopoly power that governments hold over fiat currency in favor of a global and decentralized model: everyone would mine coins, and everyone would share the benefit.

But with this flipped situation, where the economic incentives favor centralization, we can now see something that wasn't evident back then: a paradoxical outcome in which the seeds of monopoly were actually hidden in the original premise!  We need a game-theoretic analysis!  But it seems to me that BitCoin is turning out to be a game with an unexpected Nash equilibrium: a state in which a small cartel has total control of the BlockChain.

It all centers on a mistaken notion of trust.  In Satoshi's writings, the concept of trust is central.  Satoshi does not trust centralized government, leading him to invent a new ecosystem in which nobody needs to trust anyone at all.  Everyone runs the BlockChain software; nobody cares where it runs, or trusts the participants; yet the chain grows, and everyone can use it to buy and sell the flavor of bubblegum that they prefer, in perfect anonymity.  In concept, at least, there is no advantage to be had by modifying the protocol or attacking the chain.

But we can see how this neglected one aspect of trust: trust that the BlockChain software is run by large numbers of autonomous players, and that mining doesn't become concentrated in any pool that gets even close to controlling one third or more of the mining power.  The point about one third, by the way, arises from a different way of manipulating the system: in a previous result, the same authors showed that a cartel with one third of the mining power can earn far more than its fair share of the income by sharing certain kinds of information only among cartel participants (this is a slight violation of the protocol, but one you could implement with just a few lines of changes to the BitCoin code base).

Nobody would know that the cartel had pulled the trick off.  It would simply start to win a disproportionate share of the competitions to place the next block, and by shifting through lots of aliases, the cartel would make it quite hard to prove what it was up to.

Conclusion: It appears that the economics of cybercoin mining creates an unexpected pressure to concentrate mining power in a small number of players.  This in turn gives those players an incentive to form a cartel, in which case they can earn virtually all the money and decide which transactions to clear and which to ignore.  And if they get really, really angry, they can collapse the entire BlockChain.

Would I worry about this, if I were invested in BitCoin?  Actually, yes: the risk seems real and worrying enough to genuinely think about.  It could happen.  And if the chain were to collapse, all of that wealth would evaporate like so much pixie dust.

So why aren't the big players in a panic?  My guess is that they have profited so greatly from playing in the high-risk ecosystem that BitCoin represents that they became inured to risk along the way.  BitCoin sometimes doubles in price (or loses 80% of its value) in minutes.  Maybe this blog posting will send it on some new gyration.  And these investors have learned that, like a really great roller coaster, each stomach-wrenching downslope is followed by a giddy climb to previously unimagined heights.  They get hooked on it... and are unable to appreciate that giving some cartel the power to collapse the whole thing is perhaps unwise -- like assuming that the roller coaster's framework is sound, while entrusting it to an unknown construction crew from out of town.  Did they loosen the bolts last night, secretly?  You might not find out until the thing literally collapses underneath you, and by then it would be far too late to reconsider your willingness to take another ride!

So here's a shout-out to my friends who work on game theory.  Are BlockChain systems doomed to form cartel structures?  Or could one imagine a version of BlockChain in which membership is anonymous (permissionless), and yet cartels cannot arise?

One last observation.  Right now, nobody thinks of systems like Microsoft/VMWare/Facebook's Corfu logs as the basis for a BlockChain, primarily because they don't use proof of work and don't operate in a permissionless model.  Same for Derecho, the system we've been working on at Cornell: Derecho is a fast version of Paxos, but you wouldn't normally store HyperLedger records into it.

But why not?  Why shouldn't we create permissioned ledgers this way?  We would eliminate that rate limitation, which drives everyone crazy right now, and in fact would gain other advantages too: auditability, elimination of proof-of-work computations.  This would be good, right?  Rather than mint coins in exchange for mining, we could just stick with traditional fiat currency, but allow these ledgered transactions to talk about it: "For the sum of $1, Ken agrees to sell five packs of BubbleYum Chewing Gum to Ittay."    Maybe later we find a follow-on transaction: "The third party escrow agent confirms delivery of the gum", and after that, confirmation that the $1 changed hands.
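At its core, such a permissioned ledger is just an append-only, hash-chained log whose ordering comes from the replication protocol (Corfu, Derecho, or any Paxos variant) rather than from proof of work.  A minimal sketch; the record format is invented for illustration:

```python
import hashlib
import json

def append_record(chain, record):
    """Append a record, linking it to the hash of the previous entry.

    In a permissioned system the *order* of appends comes from a
    consensus protocol like Paxos, not from proof-of-work mining; the
    hash chain just makes the stored history tamper-evident.
    """
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    chain.append({"prev": prev_hash, "record": record,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    """Recompute every link; any tampering breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        body = json.dumps({"prev": prev, "record": entry["record"]},
                          sort_keys=True)
        if entry["prev"] != prev or \
           hashlib.sha256(body.encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

ledger = []
append_record(ledger, "For $1, Ken sells five packs of BubbleYum to Ittay.")
append_record(ledger, "Escrow agent confirms delivery of the gum.")
```

Auditing is then just a call to `verify`: no nonce search, no wasted electricity, and throughput limited only by the consensus protocol.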

Then we could focus on ways to run transactions that span multiple BlockChains.  Maurice Herlihy just published a paper on a protocol for doing that sort of transaction.  Even more interesting than the solution is the specification: if I run a private chain at bank branch A, and it interacts with your branch B of some other bank, how do I know that the transaction was safe, and valid?  And can we avoid creating an audit dependency between A's private log, and B's private log?  Would there be a role for an organization like SWIFT, to assist in clearing at the end of the working day?

You can see where this all leads... my colleagues in game theory would oblige me by proving that permissionless BlockChain is doomed to create high-risk monopolies that concentrate all the control in the hands of a few small cartel-like groups.  Bad news.  To save themselves from a total coin-ocalypse, everyone switches to permissioned models, and we work out the details for using those to host cryptocurrency.  My research group and I enhance Derecho until it stands alone as the ultimate solution for solving that problem (perhaps you didn't know this, but Derecho is provably optimal for Paxos, and in fact is demonstrably way faster than any previous Paxos system, too).  My students then go out and commercialize this...   I like this story!

So what about it, game theory people?  Is this BitCoin vulnerability inevitable in permissionless BlockChains?

Saturday, 20 January 2018

Blockchains immune to cartels.

In my last blog posting, I reported on work by my colleagues that showed how concentrated Blockchain mining has become, with twenty data centers (mostly in China) doing the majority of the mining.  We saw that this makes it easy for China to attack the Blockchain in various ways.

A natural question is to ask why exactly this issue arises, and what can be done about it.

For Bitcoin and other permissionless Blockchain systems, the central issue is "proof of work," and it arises because of two factors: the rewards for mining, and worries about a form of DDoS vulnerability.  In Bitcoin, the miner that creates a block of transactions and manages to place it in the Blockchain, by finding a nonce value that yields a SHA-256 hash with the requisite number of leading zeros, is paid for the effort: the block "issues" new coins to the miner, combining a fixed reward for placing the block itself with the fees on the transactions it contains.

Because the cryptographic hash used is not known to offer any computational shortcuts, computing the nonce is a brute-force task: a miner assembles a candidate block and then tests nonce values until it finds one that results in a hash with the desired characteristic.  There is no way, at present, to avoid the brute-force task of sifting through nonce values.  Thus Bitcoin rewards computational power: the more powerful your mining center, in terms of hashes it can compute per second, the more money (coins) you earn.
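The brute-force search is easy to sketch.  This toy version hashes a string with a single SHA-256 pass (real Bitcoin double-hashes an 80-byte binary block header) and uses a difficulty low enough to finish instantly; the point is only that each extra leading zero hex digit multiplies the expected work by 16:

```python
import hashlib
from itertools import count

def mine(block_data: str, difficulty: int):
    """Brute-force a nonce whose hash has `difficulty` leading zero digits.

    There is no shortcut: the only strategy is to try nonce after nonce
    and check the resulting hash, which is why raw hash rate wins.
    """
    target = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
```

Note the asymmetry: finding the nonce takes many attempts, but anyone can verify the answer with a single hash computation.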

Could we imagine a Bitcoin or Blockchain protocol wired to locality, so that the right to create blocks could be fairly shared between miners in the USA, Canada, France, Russia, China, India, Uzbekistan,  etc?  Not really.  Doing this would just incent global mining consortia that would run proxy computers in those countries, but would outsource the actual nonce computation back to the farms in China, allowing the existing mining cartel to preserve its monopoly on the income, and its power to attack the global Blockchain.

So why, exactly, did we need proof of work?  The DDoS worry stems directly from the anonymity of Bitcoin: any computer that wishes to join the Bitcoin pool can do so just by downloading the proper software and attaching itself to the network, using a virtual "name" that isn't tied to any physical computer address in a direct way.  Thus one real computer could fork off hundreds of clones, and they could generate huge rates of transactions just by selling bubblegum to one another at preposterously high rates.  Bitcoin is designed to discourage such behavior: first, the fees paid to log transactions make it expensive to generate huge numbers of them.  And second, the intent of the hashing puzzle is to spread the odds of creating the next block broadly, "fairly."  Here we can see a flaw in the design: the fairness breaks down, and once a cartel gains control of the show, most of the other properties, anonymity and the non-fiat character of the currency among them, go out the window too.

Better, it seems to me, would be a permissioned model in which non-anonymized players (call them “banks”) maintain local ledgers.  We would have a local ledger of transactions at Bank of America, another local one at the Bank of Russia, etc.  Individual branches could have their own local sub-ledgers.  I’m not proposing that these logs replicate all the transactions in the world.  Just local logs,  using hyperledger or a similar technology to record banking and business contracts and e-coin transactions.

Such a local ledger won’t need to be permissionless because a branch of the Alternatives Federal Credit Union knows perfectly well which machines play server roles.  So we can use membership management protocols, and in fact are looking at a problem like the one solved by Corfu, or by Derecho in its persistent mode.  Perhaps it would be wise to harden the solution against Byzantine attacks, also a solvable, classic problem.   

Then we could create a protocol for banks to talk to one another, more or less like SWIFT (the widely used standard interbank messaging system), but enhanced to deal with inter-bank transfers.  Here, I think we see an interesting distributed transaction problem: how would one specify the required properties?  For example, if I ask Alternatives Federal Credit Union to purchase euros for me, then wire them to my sister-in-law in Belgium to pay my share of our summer vacation house rental, the money moves through a chain of perhaps six banks.  Each wants to know that after a clearing delay (as short as possible: zero would be best) its own local log is finalized.  We don't want a lingering dependency period during which the ledger for the international branch of Chase Bank somehow can't be audited for correctness without also auditing the one at Alternatives FCU in Ithaca.
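One classic way to get the "no lingering dependency" property is an atomic-commit protocol run across the banks' local ledgers: every bank tentatively holds the funds, and the transfer then finalizes everywhere or nowhere.  A toy two-phase-commit sketch; the bank names and the API are invented, and a real system would also need persistence and timeout handling:

```python
class BankLedger:
    """Toy local ledger that can tentatively hold, then finalize, a transfer."""

    def __init__(self, name, balances):
        self.name, self.balances = name, balances
        self.holds, self.log = {}, []

    def prepare(self, txid, account, amount):
        # Phase 1: vote yes only if the debit can be covered; place a hold.
        if self.balances.get(account, 0) + amount < 0:
            return False
        self.holds[txid] = (account, amount)
        return True

    def commit(self, txid):
        # Phase 2: apply the held change and finalize it in the local log.
        account, amount = self.holds.pop(txid)
        self.balances[account] = self.balances.get(account, 0) + amount
        self.log.append((txid, account, amount))

    def abort(self, txid):
        self.holds.pop(txid, None)  # release the hold, if any

def transfer(txid, debits_and_credits):
    """Coordinator: every bank prepares, then all commit -- or all abort."""
    if all(bank.prepare(txid, acct, amt)
           for bank, acct, amt in debits_and_credits):
        for bank, _, _ in debits_and_credits:
            bank.commit(txid)
        return True
    for bank, _, _ in debits_and_credits:
        bank.abort(txid)
    return False
```

After commit, each bank's log is final on its own: auditing one ledger never requires reopening another, which is exactly the independence we want between branches.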

But this seems solvable to me.  We could then eliminate proof of work (banks do charge fees for transactions, so those will avoid DDoS scenarios, and there is no anonymity, so the fairness aspect is moot).  We would simply have found a clean way to introduce Blockchain technology into standard banking, with or without cryptocurrency (honestly, I'm not a big believer in the stuff, but one can have Blockchain solutions for any kind of contract, and if you want to treat large numbers as commodities with value, who am I to stop you?  In fact the earliest banks operated precisely that way: "... just present this unforgeable document to my brother in Milan, and he will give you two gold florins").  It would work just as well for crates of oranges, or contracts describing mortgage-backed securities.

Bottom line: my colleagues have shown that in effect, Bitcoin is currently a cartel, ultimately controlled by China.  All that stuff about anonymity and fairness?  Basically, a hoax, except that the end users do seem to be anonymous, hence Bitcoin remains very helpful for drug transactions, prostitution, money laundering, international weapons sales, tax evasion.  Stuff like that, where real cash and credit would be "awkward."  Yet we can have Blockchain solutions without anonymity, and doing so allows high transaction rates, eliminates the mining monopoly, gets rid of the wasted power and cooling for all that SHA-256 hashing, and if you want e-coins, you can have them too.

So it all comes down to the permissionless model, which in turn is ultimately a political statement by Satoshi Nakamoto, stemming from his distrust of fiat currencies and his views about global political structures.  He favored anonymity as a kind of enabler of anarchy, and envisioned a new global political system fueled by a fiat-free currency.  Yet his vision stumbles, we now see, on its substitution of computing power for political power, leaving China in control.  I myself would prefer to just have a system that really works (I pay my taxes, and don't engage in money laundering or Bitcoin barter for illicit stuff).  If Bitcoin and anonymous Blockchain systems are at risk of foreign domination and attack, I say: enough with this pretense of anonymity!  Let's just go with Blockchains!

Thursday, 18 January 2018

Attacking the blockchain

A recent paper by my colleagues showed that bitcoin mining is dominated by about 20 computing farms, which apparently are located mostly in China.

This isn't surprising: Blockchain mining centers on using special hardware (ASICs) designed to compute cryptographic hashes very rapidly, so a Blockchain mining cloud is just a massive array of ASIC-equipped computers.  These massive computing farms tend to be somewhat cheaper to build and operate in China, simply because the human costs of construction and management are low there.  It also helps to put them in places where electric power is cheap, and where cooling is easy to come by.  It turns out that China has all of these properties, and a big national semiconductor industry that designs and builds ASICs too.  (We aren't talking about a sophisticated mathematical attack on the underlying cryptographic methods, just a brute-force advantage that comes from owning more computers, and having more of these special hardware accelerators on them.)

The question this raises is as follows:  what risks arise if some single country gains majority control over the Blockchain mining power for a particular currency?

Notice that we are talking here about a kind of brute-force control over the Blockchain.  Anyone could do it, by spending enough money on compute farms and specialized ASICs.  The key insight is that special hardware can be hundreds of thousands of times faster than desktop computing for this kind of computation.  So you and your friends might run a Blockchain ledger on your home computers, and even configure them to crunch day and night trying to earn coins for you by extending the chain, but because this involves solving cryptographic puzzles and the big farms collectively have millions of times more compute power, the probabilities favor those mining farms.  Most likely, they will win every single race to compute the next block, and pocket all the resulting coins.  So it isn't a question of how many people are running the Blockchain software.  This is a story of how much compute power (and electric power, and money to build and operate the farm) you can dedicate.  The winner of that race will be the world leader in computing cryptographic hashes, and hence will dominate in extending the chain.  The issue arises for any Blockchain: Bitcoin, Ethereum, etc.
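The claim that the probabilities favor the big farms is easy to check with a small simulation: each block goes to a miner with probability proportional to its share of the total hash rate.  The hash-power numbers below are made up for illustration:

```python
import random

def simulate_races(hash_power, blocks, seed=42):
    """Award each block to a miner with odds proportional to hash power.

    This mirrors proof-of-work mining: each hash attempt is an
    independent lottery ticket, so a miner's win rate tracks its
    fraction of the total hash rate.
    """
    rng = random.Random(seed)
    miners, weights = zip(*hash_power.items())
    wins = dict.fromkeys(miners, 0)
    for _ in range(blocks):
        wins[rng.choices(miners, weights=weights)[0]] += 1
    return wins

# Hypothetical pool: one ASIC farm vs. a thousand home PCs combined.
power = {"asic_farm": 1_000_000, "home_pcs": 1_000}
wins = simulate_races(power, blocks=10_000)
```

With a thousand-to-one hash-rate advantage, the farm collects essentially all of the block rewards; the home miners' expected haul rounds to nothing.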

I talked this over with some friends who know their stuff, and thought I might share the list of weird attacks one can potentially launch once you have this sort of practical domination.
  • Each new block is a set of transactions.  The dominating cartel could favor its own transactions, so that ones from less favored customers have very long clearing times, or perhaps never get logged at all.  They can effectively help friends and censor enemies: a big issue if the Blockchain is supposed to be fair.
  • They could stop mining entirely.  This would cause chaos: new transactions would stop being logged, or would be logged very, very slowly.  The problem here is that the hardness of the Blockchain puzzle is a function of the recent mining power, so if the mining power suddenly drops drastically while the hardness remains the same, the rate of new blocks drops commensurately.
  • As the main operators of the mining system, they would be in an unusual position to run compromised software, hence could collude among their farms, which is a way to further amplify the power of the cartel (there is a prior result by Sirer and Eyal on this).
  • They could run two side-by-side Blockchains that start with the same prefix of blocks (by "forking" the original prefix), keeping one public and one secret.  Since they dominate the compute power by a large factor, they can run the public chain slowly, and the secret chain faster.  The secret chain will quickly become longer than the public one.  Then one day they can suddenly release the secret chain.  This has the effect of rolling back every single block in the previously-public chain, because Blockchain software favors the longer chain.  So suddenly, every logged transaction in the public chain has been backed out!  This would allow double spending.  And such an attack could erase years of transactions.
  • Same attack, but now suppose that the chain was also used to keep a ledger of contracts (for banking).  The rollback would effectively erase the tail of the ledger.  Those contracts might vanish entirely, or they could reappear at some other location or in some other order in the new chain (the previously secret one). Since some contract languages, like Hyperledger's, are state- or order-sensitive, this can change the meaning of the contracts. The risk arises because with Hyperledger, one record can refer to a value defined in another record.
  • With knowledge of the secret chain, they could arbitrage.  For example, suppose that in the public chain, a million shares of Apple are sold at some price.  This might impact the actual share price of Apple.  But if the sale was erased in the secret chain, not only would we have chaos concerning ownership of the shares, we would also have impacted the price, and the attackers could exploit this knowledge to anticipate the impact and profit from it by placing bets in public exchanges, via futures contracts.
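The fork-and-release attack in the list above can be sketched in a few lines: with a majority of the hash power, the secret fork wins the length race with near certainty, and the longest-chain rule does the rest.  A toy simulation (the attacker share and block count are illustrative):

```python
import random

def fork_race(attacker_share, blocks_mined, seed=1):
    """Mine `blocks_mined` blocks total.  Each block is won by the
    attacker (extending the secret fork) with probability
    `attacker_share`, otherwise by honest miners (extending the
    public chain).  Returns the lengths of the two forks."""
    rng = random.Random(seed)
    secret = public = 0
    for _ in range(blocks_mined):
        if rng.random() < attacker_share:
            secret += 1
        else:
            public += 1
    return secret, public

# With 60% of the hash power, the secret fork almost surely ends up
# longer; releasing it would roll back every block on the public chain.
secret_len, public_len = fork_race(attacker_share=0.6, blocks_mined=1000)
```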
I bet this list isn't exhaustive.  Jump in with your own ideas!  But you can already see that basically, if you use Blockchain, you had better trust the operators of the majority of the mining power in the mining pool.  If you don't, or if they turn on you, you could be seriously harmed down the road.

The bad news is that today's Blockchain systems apparently are at serious risk in exactly this sense!

Tuesday, 19 December 2017

Data concentrators: Follow the bytes

In my last posting, I discussed the need for cloud-hosted data concentrators in support of fog computing.  I thought it might be interesting to dive a bit deeper on this topic.  A number of questions arise, but I want to suggest that they center on understanding the "critical path" for data.  In computer systems parlance, the term just means "the portion of the system that limits performance."  Once we understand which elements make up the critical path, we can then ask further questions about how one might implement an architecture to support this form of data concentrator, with various cloud-like goals: resource sharing, scalability, and so forth. 

To set the context, let me give an example of how this form of reasoning can be valuable.  Consider Eric Brewer's famous CAP conjecture for web servers (and the associated BASE methodology).  It may be surprising to realize that before Eric actually put this forward, many web server systems were built as classic 3-tier database structures, in which every web page was basically the output of a database query, and the database server was thus on the critical path even for read-only web transactions.  The database queries ultimately became the bottleneck: nobody could figure out how to get linear scaling without placing the database concurrency mechanism dead center on the critical path, where it was ultimately overloaded.

At that time we saw some very clever workarounds that came close to solving the problem.  For example, with snapshot isolation, it was shown that you could approximate serializability by running transactions slightly in the past, on consistent snapshots.  There was a scheme for gradually narrowing the snapshot window, to make it as current as practical and yet ensure that the data in the snapshot was committed and consistent.  So by doing a read off this window, you could avoid consulting the back-end database more frequently than required and end up generating your web response in a consistent way, but totally locally, just talking to the cache.  Yet even snapshot isolation frustrated some vendors.  They wanted far more scalability, and snapshot isolation still had some overheads that involved queries to the backend database.  It seemed clear that consistency would be at odds with eliminating this round-trip to the back end.
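As a rough illustration of the snapshot idea (a toy, not any vendor's actual implementation): readers are served from a frozen, consistent copy of committed state, so they never take locks or contact the back end, and the window advances by installing a fresh snapshot.

```python
class SnapshotCache:
    """Toy snapshot-style cache: reads run slightly in the past, against
    an immutable consistent copy of committed state."""
    def __init__(self, committed_state):
        self._snapshot = dict(committed_state)   # frozen, consistent copy

    def read(self, key):
        return self._snapshot.get(key)           # purely local, no locking

    def advance_window(self, committed_state):
        # Install a newer snapshot; readers see one snapshot or the
        # other, never a mix of the two.
        self._snapshot = dict(committed_state)

db = {"balance:alice": 100, "balance:bob": 50}
cache = SnapshotCache(db)
db["balance:alice"] = 90              # a new transaction commits...
stale = cache.read("balance:alice")   # ...but readers still see 100
cache.advance_window(db)              # narrow the window
fresh = cache.read("balance:alice")   # now reads return 90
```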

CAP originated from Eric's decision to study this exact issue: he observed that  responsiveness for web page generation was overwhelmingly more critical than any other aspect of the system, and that database queries are slow due to their consistency mechanisms -- mechanisms that do guarantee that the answer is correct, but involve locking and other delays.  This led him to ask whether web servers actually needed consistency, and to speculate about how much data can be served out of cached content.  He ended up arguing for a "soft state" model, in which pretty much every web request is handled as much as possible from cached content, and his company, Inktomi, was born.  CAP then gave rise to a development methodology (the BASE approach), resulted in scalable key-value caching architectures (MemCached, Cassandra, etc), argued for NoSQL database-like models such as in DynamoDB, and so forth.

Honestly, my crowd was up in arms at the time.  Very few "strong consistency" researchers liked CAP or its conclusion that we should just toss consistency out the window.  I could point to colleagues who rail against CAP, even though CAP itself is 15 years old.  And how can one really dispute Eric's deeper point: if these applications don't actually need consistency, why are we insisting that they pay for a property that they don't want, and that isn't all that cheap either?

Notice how from a single core insight about the critical path, we end up with a fertile debate and a whole area of research that played out over more than a decade.  You may not love CAP, yet it was actually extremely impactful. 

CAP isn't the only such story.  Jim Gray once wrote a lovely little essay on the "Dangers of Database Replication" in which he did a paper-and-pencil analysis (jointly with Pat Helland, Dennis Shasha and Patrick O'Neil) and showed that if you just toss state machine replication into a database, the resulting system will probably slow down as n^5, where n is the number of replicas.  Since you probably wanted a speedup, not a slowdown, this is a classic example of how critical path analysis leads to recognition that a design is just very, very wrong!  And Jim's point isn't even the same one Eric was making.  Eric was just saying that if you want your first tier to respond to queries without sometimes pausing to fetch fresh data from back-end databases, you need to relax consistency.

Ultimately, both examples illustrate variations on the famous End to End principle: you should only solve a problem if you actually think you have that problem, particularly if the solution might be costly!  And I think it also points to a trap that the research community often falls into: we have a definite tendency to favor stronger guarantees even when the application doesn't need guarantees.  We are almost irresistibly drawn to making strong promises, no matter who we are making them to!  And face it: we tend to brush costs to the side, even when those costs are a really big deal.

So, armed with this powerful principle, how might we tackle the whole data concentrator question?  Perhaps the best place to start is with a recap of the problem statement itself, which arises in fog/edge computing, where we want a cloud system to interact closely with sensors deployed out in the real world.  The core of that story centered on whether we should expect the sensors themselves to be super-smart.  We argued that probably, they will be rather dumb, for a number of reasons: (1) power limits; (2) impracticality of sending them large machine-learned models, and of updating those in real-time; (3) least-common denominator, given that many vendors are competing in the IoT space and companies will end up viewing all these devices as plug-and-play options; (4) lack of peer-to-peer connectivity, when sensors would need to talk to one-another in real-time to understand a situation.  In the previous blog I didn't flesh out all 4 of these points, but hopefully they are pretty self-evident (I suppose that if you already bet your whole company on brilliant sensors, you might disagree with one or more of them, but at least for me, they are pretty clear).

Specialized sensors for one-time uses might be fancier (sensors for UAVs, for example, that do various kinds of information analysis and data fusion while in flight).  But any kind of general purpose system can't bet on those specialized features. We end up with a split: a military surveillance system might use very smart sensors, operating autonomously in an environment where connectivity to the ground was terrible in any case.  But a smart highway might ignore those super-fancy features because it wants to be compatible with a wide range of vendors and anyhow, wants to work with a very dynamically-updated knowledge model.  So, across the broad spectrum of products, a system that wants to have lots of options will end up forced towards that least-common denominator perspective.

From this, we arrived at various initial conclusions, centering on the need to do quite a bit of analysis on the cloud: you can probably do basic forms of image analysis on a camera, but matching your video or photo to a massive "model" of activity along Highway 101 near Redwood City -- that's going to be a cloud-computing task.  Many people believe that speech understanding is basically a cloud-scale puzzle, at least for the current decade or so (the best solutions seem to need deep neural networks implemented by massive FPGA or ASIC arrays).  Most forms of machine learning need big TPU or GPU clusters to crunch the data. 

So most modern roads lead to the cloud.  I'll grant that someday this might change, as we learn to miniaturize the hardware and figure out how to pack all the data in those models into tiny persistent storage units, but it isn't going to happen overnight.  And again, I'm not opposed to fancy sensors for UAVs.  I'm just saying that if some of your UAVs come from IBM, some from Microsoft and some from Boeing, and you want to work with all three options, you won't be able to leverage the fanciest features of each.

The data concentrator concept follows more or less directly from the observations above.  Today's cloud is built around Eric's CAP infrastructure, with banks of web servers that rush to serve up web pages using cached data spread over key-value stores.  Most of the deep learning and adaptation occurs as a back-end task, more or less offline.  It isn't unusual for edge models to be hours out of date.  And it is very hard to keep them within seconds of current state.  But if you plan to control a smart highway, or even a smart home, seconds are too high a latency.  "Watch out for that rusty pipe!" isn't a very helpful warning, if you (or your smart car) won't get that warning until 10s too late.

So what would a data concentrator do, and how does the answer to such a question reveal the most likely critical paths?  Basically, data concentrators give rise to a new kind of tier-one system.  Today's first tier of the cloud runs Eric Brewer's architecture: we have vast armies of very lightweight, stateless, web-page generating engines.  A data concentrator would be a new form of first tier, one that focuses on stateful responses using machine-learned models and customized using machine-learning code, much as we customize a web server to generate the various web pages needed for media, direct sales, advertising, etc.  Over time, I'm betting we would code these in a high level language, like TensorFlow, although today C++ might give much better performance.

Back to basics.  You know the business adage about following the money.   For a fog system where we care mostly about performance and real-time responses, the "costly" thing is slowdowns or delays.  So let's follow the data.  The whole reason for building data concentrators is that our sensors are kind of dumb, but the volumes of data are huge, hence we'll need a system element to filter and intelligently compress the data: I like to think of the resulting systems as "smart memory", meaning that we might treat them like file systems, and yet they are smart about how they store the stuff they receive. A smart memory is just one form of data concentrator; I can think of other forms, too; maybe a future blog could focus on that.

Via its file system personality, a smart-memory system could be used just like any storage infrastructure, supporting normal stuff like Spark/Databricks, TimescaleDB, you name it.  But in its edge-of-the-cloud personality, it offers the chance to do tasks like image or voice classification using big machine-learned models, real-time reaction to urgent events (like mufflers falling off on the highway), and perhaps even simple learning tasks ("Ken's car is 2" to the left of what was predicted").

Where would the highest data pressures arise?  If we have sensors capturing (and perhaps storing) video or voice, the individual data links back to the data collector tier of the cloud won't be terribly burdened: the core Internet has a lot of capacity, and even a dumb camera can probably do basic tasks like recognizing that the highway is totally empty.  But each of these collectors will be a concentrator of streams, capturing data from lots of sources in parallel.  So the highest demand may be on that very first leg inside the cloud: we should be asking what happens to a data stream as it arrives in a single data collector.

Without knowing the application, this may seem like an impossible goal, but consider this: we actually know a lot about modern applications, and they share a lot of properties.  One is the use of accelerators: if the data is video, the first steps will be best fitted to GPU or TPU or FPGA hardware, which can do tasks like parallel image processing, evaluating large Bayesian or Neural Network models, and so forth.  Right now, it is surprisingly hard to actually load data from an incoming TCP connection directly into an accelerator of these kinds: most likely it will need to be copied many times before it ever reaches GPU memory and we can turn that hardware loose.  And we've already concluded that only a limited amount of that kind of work can occur on the sensor itself.

So a first insight more or less leaps at us: for this data collection model to work, we have to evolve the architecture to directly shunt data from incoming TCP connections to GMEM or TMEM or through a bump-in-the-wire FPGA transformation.  Today's operating systems don't make this easy, so we've identified an O/S research need.  Lacking O/S research, we've identified a likely hot-spot that could be the performance-limiting step of the architecture.  That was basically how CAP emerged, and how Jim Gray's database scaling work advanced: both were paper-and-pencil analyses that led to pinch-points (in the case of CAP, the locking or cache-coherency delays needed to guarantee consistency for read-mostly workloads; in the case of database updates, the concurrency control and conflicts that caused aborts).
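To make the copying issue concrete, here is a small user-space analogue of the idea: receive a stream into a preallocated buffer via `recv_into`, rather than letting each plain `recv` allocate a fresh object that must then be copied onward.  A real TCP-to-GPU path would need the same idea pushed down to the driver/DMA level; this sketch just shows the shape of it:

```python
import socket

def ingest(conn, nbytes):
    """Read exactly nbytes from conn directly into one preallocated
    buffer, avoiding a fresh allocation-plus-copy per recv() call."""
    buf = bytearray(nbytes)
    view = memoryview(buf)
    received = 0
    while received < nbytes:
        n = conn.recv_into(view[received:], nbytes - received)
        if n == 0:
            break   # connection closed early
        received += n
    return buf, received

# An in-process socket pair stands in for a sensor's TCP stream:
sender, receiver = socket.socketpair()
sender.sendall(b"frame-data" * 10)       # 100 bytes of "video"
data, count = ingest(receiver, 100)
sender.close(); receiver.close()
```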

Distilled to its bare bones, the insight is: data concentrators need ways to move data from where it is captured to where it will be processed that minimize delay and maximize throughput -- and lack those options today.  Without them, we will see a lot of copying, and it will be very costly.

A second such issue involves comparing data that is captured in multiple ways.  If we believe in a sensor-rich IoT environment, we'll have many sensors in a position to hear a single utterance, or to see a single event, and different sensors may have differing qualities of signal.  So we might wish to combine sensor inputs to strengthen a signal, enrich a set of 2-D visual perspectives into a 3-D one, triangulate to locate objects in space, select the best quality image from a set of candidates, etc.
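One classic way to combine such inputs is inverse-variance weighting, where higher-quality sensors count for more.  A minimal sketch (the readings and variances here are made up):

```python
def fuse(readings):
    """Combine noisy readings of one quantity from several sensors,
    weighting each by 1/variance so better sensors count more.
    readings: list of (value, variance) pairs."""
    weights = [1.0 / var for _, var in readings]
    total = sum(weights)
    return sum(w * v for w, (v, _) in zip(weights, readings)) / total

# Three microphones hear the same signal with differing quality;
# the fused estimate leans toward the low-variance sensor:
estimate = fuse([(10.2, 4.0), (9.8, 1.0), (10.0, 0.25)])
```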

Again, we have a simple insight: cross-node interactions will also be a pressure point.  The basic observation leads to essentially the same findings as before.

Another form of cross node interaction arises because most data concentration systems form some kind of pipeline, with stages of processing.  If you know about Microsoft's Cosmos technology, this is what I have in mind.  Hadoop jobs (MapReduce) are another good example: the output of stage one becomes input to stage two.

So for such systems, performance is often dominated by flow of data between the stages.  For example, suppose that one stage is a GPU computation that segments video images, and a second stage guesses at what the segmented objects might be ("car", "truck", "rusty pipe"), and a third one matches those against a knowledge model ("Ken's car", "Diane's truck"), then compares the details ("is Ken on the same trajectory as predicted?").  We might see some form of fanout: there could be one element in the first stage, but there could be two or even many second or third stage components, examining very different information and with distinct purposes.  Each might send events or log data into storage.
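The three-stage pipeline just described might be sketched like this (all of the stage logic is a stand-in; the point is the shape of the handoffs and the fanout between stages):

```python
def segment(frame):
    """Stage 1: stand-in for GPU segmentation, yielding candidate objects."""
    return frame["objects"]

def classify(obj):
    """Stage 2: guess what each segmented object is (toy lookup)."""
    labels = {1: "car", 2: "truck", 3: "rusty pipe"}
    return labels.get(obj, "unknown")

def match_knowledge(label, model):
    """Stage 3: match the label against a knowledge model."""
    return model.get(label, "unrecognized")

model = {"car": "Ken's car", "truck": "Diane's truck"}
frame = {"objects": [1, 2, 3]}

# One stage-1 element fans out into per-object stage-2/3 work; the
# handoffs between stages are exactly where the data pressure shows up.
results = [match_knowledge(classify(o), model) for o in segment(frame)]
```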

So this then leads to scrutiny of architectural support for that form of structure, that form of handoffs, and attention to the requirements.  As one example: if we want our infrastructure to be highly available, does that imply that handoff needs to be reliable on an event-by-event basis?  The answers may differ case by case: data from a smart highway is very redundant because cars remain on it for long periods; if a glitch somehow drops one event, a follow-on event will surely have the same content.  So in an end-to-end sense, we wouldn't need reliability for that specific handoff, especially if we could gain speed by relaxing guarantees.  Eric would like that sort of thing.  Conversely, if an event is one of a kind ("Oh my goodness, Diane's muffler is falling off!  There it goes!!") you might want to view that as a critical event that must not be dropped, and that has many urgent consequences.
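A handoff with per-event reliability choices could look something like this sketch (the API is hypothetical): redundant telemetry is droppable when the consumer lags, because a follow-on event will carry the same content, while events flagged as critical are always retained.

```python
from collections import deque

class Handoff:
    """Bounded queue between pipeline stages with relaxed guarantees
    for redundant events and strict retention for critical ones."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()

    def offer(self, event, critical=False):
        if len(self.queue) >= self.capacity:
            if not critical:
                return False            # drop: a later event repeats it
            self.queue.popleft()        # evict oldest to make room
        self.queue.append(event)
        return True

h = Handoff(capacity=2)
h.offer("car at km 12")                          # accepted
h.offer("car at km 12.1")                        # accepted
dropped = h.offer("car at km 12.2")              # queue full: dropped
kept = h.offer("muffler falling off!", critical=True)   # always kept
```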

The tradeoffs between data throughput and latency are intriguing too.  Generally, one wants to keep an architecture busy, but a system can be busy in good or bad ways.  Further, goodness is very situation-specific: for warnings about roadway trash, a quick warning is really important.  For aggregated steady state data processing, longer pipelines and batch processing pay off.  And these are clearly in tension.

The pithy version would be along these lines: improved support for data pipelines will be important, and is lacking in today's systems.  Lacking progress, not only will we see a fair amount of inefficient copying, but we could end up with barrier delays much as in Hadoop (where stage two needs to sit waiting for a set of results from stage one, and any single delayed result can snowball to delay the entire computation).

I won't drag this out -- for a blog, this is already a rather long essay!  But hopefully you can see my core point, which I'll summarize.  Basically, we need to visualize a scalable data concentrator system in terms of the data flows it can support and process, and optimize to make those flows as efficient as possible.  Modern O/S structures seem ill-matched to this, although it may not be very hard to improve them until the fit is much better.  But in doing so, it jumps out that low latency and high throughput may well be in conflict here.  Following the bytes down these different paths might lead to very different conclusions about the ideal structure to employ!

I love puzzles like this one, and will surely have more to say about this one, someday.  But jump in if you want to share a perspective of your own!

Monday, 11 December 2017

Fog computing and cloud-hosted data concentrators

When a cloud touches the ground, we get fog, an image that inspires a new buzzword: "fog computing."  If you are interested in cloud integration with edge devices you work in the fog!

Internet of Things is going to be a big deal, and this suggests that as a research topic, fog computing deserves close scrutiny.  Today's most interesting question is more of a meta-question: figuring out which system elements should play which roles (once this question is resolved, a series of more technical follow-on questions would arise).

The question isn't really so new: At least since the 1980's, researchers have speculated about the challenges of computing systems that might reach very broadly into the physical world through a variety of sensing modalities (still and video imaging, radar and lidar and IR motion detectors, temperature and humidity sensors, microphones, etc), use machine-learned techniques to make sense of those inputs, and then perhaps take action.  But the game changer relative to those early days is the contemporary appreciation of how cloud computing can drive costs down, and enable a new form of nimble startup -- the kind of company that builds a cool new technology and then can scale it to support millions of users almost overnight.  Past versions of edge systems got mired in issues of cost and suffered market failures: the users who might have benefitted didn't want to pay, and because the costs were genuinely steep, the technology simply didn't take off.

Today, we can at least identify a strong market pull.  In any technology media publication you read about smart homes, office complexes and cities, smart highways and power grids, and smart healthcare technologies.  Clearly there is a wave of interest in this form of deeply integrated machine intelligence.  Moreover, and this points to a social shift relative to the 1980's, the dominant tone isn't really a worry about privacy, although you do see some articles that fret about the risks (and they are quite real; we need to acknowledge and engage on that front).  But the more dominant tone looks at the upside, drawing analogies to our cell phones.

For example, I have an advisory role in a company called Caspar.  It was founded by Ashutosh Saxena, a friend who was a Cornell faculty member until he left to do the startup, and David Cheriton, who you may know of as the first outside investor in Google (he was a Stanford professor but also a very successful entrepreneur, and when Larry and Sergey knocked on his door with their idea and an early proof of concept, Dave jumped to help them get established).  They view the apartment or condominium as the next platform, and Ashutosh actually gives talks in which he shows you the architecture diagram of an iPhone and then a nearly identical one for a condo in Santa Rosa.  The system is like the iPhone O/S and could host apps, and because Caspar works with the developer who built the entire development, the technology can orient itself: it is in such-and-such a room listening to the apartment owner giving instructions about music for the party tonight, etc.

The example highlights one of the puzzles we'll want to think about: Caspar has orientation because it is built right into the condominium or home.  But most fog computing devices today are just small gadgets that individuals buy and install by hand.  How would they know where they are, which way they are pointing, etc.?  And even if a very geeky technology person could configure such a thing, could her grandfather do it, or would he need to call his granddaughter to come and set the device up?  Part of the fog computing puzzle is visible right in this contrast: how will these systems orient themselves, and how will the devices be told what roles to play?

It isn't just about smart homes.  One of the more exciting ideas I heard about most recently centers on smart agriculture: I attended a workshop on digital agriculture at Cornell a few weeks ago, and completely coincidentally, was invited to attend a second one, on the concept of a smart soil "macroscope," at Chicago almost immediately afterward.

So how would the fog impact agriculture or dive into the soil?  Researchers spoke about tracking produce from farm to table, literally step by step, and using that knowledge to reduce loss due to fresh produce sitting on the shelf for too long, to improve the efficiency of the produce supply chain, prevent accidental contamination by E. coli or other bacteria, redirect shipments to match demand more closely, etc.  A researcher at Microsoft showed that with a new kind of white-fi communications, drones could fly over fields and map out insect infestations in real-time, enabling the farmer to stamp out the pests with spot applications of insecticides, reducing unnecessary pesticide use by a factor of 1000.  You could do the same with fertilizer, or when watering.   One person talked about underground WiFi: it turns out to work surprisingly well, if you have enough power!  Who would have imagined an underground WiFi network?  But if the need ever becomes real enough, it can be done!  The one caveat is that they need fairly well-drained soil; pooled water can block the signals.

Who needs this?  I found one answer out in Napa.  I happen to love great wines, and I'm friendly with some growers who own or manage insanely expensive vineyards.   They would love to be able to visualize and track the subsurface biome, the movement of nutrients and water, and the conversion of surface materials into soil.  This might help a grower identify particularly promising spots to place the next great winery.  Of course, they are also quick to point out that no technology is a complete answer to any question, and that going from data to "useful insights" is quite a complicated matter.  But in a world of constant climatic change, they are keenly interested in knowing what is going on down there.  In fact I'm thinking I should start a little company to work on this topic, if for no other reason than as an excuse to visit and taste some of those wines!  A lot of them are way too expensive for me to actually buy and drink on a routine basis.

Getting technical again: what questions can we identify from this set of examples, and how do they shape the likely form a fog computing system might take?

Part of the puzzle centers on the limitations of sensing devices.  Devices are gaining in potential compute power, but you need to ask whether computing on the device itself is a smart move, given that more and more sensing devices are designed to operate on batteries, or with minimal power draw.  Computing at the edge isn't a very power-efficient model, and relaying data back to a data center has overwhelmingly dominated when you look at actual deployed IoT products. 

In fact there is much to be said for viewing sensors as dumb devices that might store a few hours or days of data, but don't compute very much.  First, if you want to understand an image or a spoken command, the size of database you would use to do that is huge -- deep neural networks and Bayesian networks generate models that can be terabytes or even petabytes in size when used for tasks of this kind.  Keeping the neural network models and precomputed data back in your local cloud where they can be shared among a number of users is far more cost-effective than shipping those petabytes to the devices, and then needing to keep those updated as the runtime conditions and goals evolve.

The proper way to process data might also depend on things the sensor is unlikely to have access to, such as the best estimate of its location and orientation, knowledge of which people are in the home or office, context associated with their prior commands to the system that were given in a different location, and captured by some other device.  As we saw, while Caspar might actually have this kind of orientation at hand, most devices lack that sort of context information (think of an Alexa camera/microphone/speaker that people shift around much like a flower vase: it isn't in the same place from day to day, and that camera could easily end up pointing at a wall, or a stack of books!)   All of this might argue for a sensor model in which sensors capture everything in the vicinity, store a copy locally, but then just blindly relay the acquired data to the cloud.  The sensor could still do some very basic stuff: for example, perhaps it can figure out that absolutely nothing is happening at all, and skip the upload in that case, or upload a tiny marker saying "no change."  This really limited form of local computing is something that even very simple, disoriented devices can perform.
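That really limited form of local computing might look like this sketch: a hypothetical sensor that hashes each capture and uploads a tiny marker whenever nothing has changed.

```python
import hashlib

class DumbSensor:
    """Sensor that blindly relays captures to the cloud, with one tiny
    bit of local smarts: if a capture is identical to the previous one,
    it sends a small 'no change' marker instead of re-uploading."""
    def __init__(self):
        self._last_digest = None
        self.uplink = []               # stand-in for the cloud connection

    def capture(self, frame: bytes):
        digest = hashlib.sha256(frame).hexdigest()
        if digest == self._last_digest:
            self.uplink.append("no change")    # tiny marker, not the data
        else:
            self.uplink.append(frame)          # blind relay to the cloud
        self._last_digest = digest

s = DumbSensor()
s.capture(b"empty hallway")
s.capture(b"empty hallway")    # identical frame: only a marker goes up
s.capture(b"person enters")
```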

However, arguing in the other direction, there are sensing options that only make sense if deployed at the edge.  For example, you can't easily correct for focus or vibration in a video after capturing it, so that form of dynamic adjustment should be performed right on the camera.   A subsurface sensor used to track humidity in the soil may need to dynamically vary the way it operates its sensing components, because the best options for measuring moisture vary enormously depending on the depth of the water table, how far from fully saturated the soil is, etc.  So for cases like these, a dumb sensor might end up generating endless streams of low-quality data that can't be patched up later.

Broadly, I think we'll need to do both, but that the sensors will be used mostly in pretty dumb ways (like to hold video, and to discard empty content), but then will relay most of the potentially interesting stuff back to the cloud.

So this starts to answer the meta question.  Given this model, we can see what the technical need might be: the model argues that we should create a new technology base focused on cloud-hosted data concentrators that are integrated deeply into cloud storage systems: I'm fond of the term "smart memory" for this functionality.  A single instance of a concentrator, on some single mid-range compute server within the cloud, might handle some large but not unbounded number of sensors: perhaps, 10,000 smart homes in some community, or 100 video cameras.  If you need more capacity, you would just spread your incoming data streams over more data concentrators.
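Spreading incoming streams over concentrators can be as simple as a stateless hash of the sensor id (a consistent-hashing scheme would reshuffle fewer streams when capacity changes, but this shows the idea; all names are illustrative):

```python
import hashlib

def concentrator_for(sensor_id: str, num_concentrators: int) -> int:
    """Statelessly assign a sensor's stream to one of N concentrators
    by hashing its id; adding capacity just means raising N."""
    h = int(hashlib.md5(sensor_id.encode()).hexdigest(), 16)
    return h % num_concentrators

# 10,000 smart homes spread over 4 concentrators:
assignments = [concentrator_for(f"home-{i}", 4) for i in range(10_000)]
```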

Notice that I'm not working within a standard 3-tier cloud model in this example.  A standard cloud has a first tier that generates web content, often using cached data and other second-tier services.  The third tier covers everything at the back end.  A concentrator is an example of a new kind of first-tier: one that is stateful and smart, perhaps with real-time guarantees and consistency promises.  This is not today's most common cloud model -- although it is close enough to it that today's cloud could definitely evolve to fit this picture, and in fact it might not even be a major reach to pull it off!

Within the data concentrator we would have a machine-learning model, dynamically updated and accurate, that could be used to immediately "understand" the data.  Thus if a person in an apartment utters a remark that only makes sense in context, we could create a dynamic machine-learned utterance model that is accurate to the second and expresses the system knowledge of recent past; even if the speaker is moving from room to room, that model would evolve to reflect the system understanding of his or her request.  For example, "Caspar, can you adjust the shade to get rid of that glare?" can be understood only by a system that knows the position of the sun, the locations of the windows and shades and the TV, and the options for adjusting that kind of window shade, but with that data, it can be done -- and that data would most likely live in a cloud-style concentrator or a smart-home "brain unit" if we put the concentrator right in the home (appealing as a response to privacy worries).  Tell Alexa or Siri to do this, and because those technologies are kind of a free-standing autonomous edge, the best you can hope for is a response like "I'm sorry, Ken, I don't know how to do that."

The other argument for cloud models leads to a data storage and update insight.  In particular, it isn't just the massive databases used for vision and speech understanding that would probably need to live on the cloud.  There is also a question of the smaller knowledge models used to make sense of edge events, and to adapt as they occur.

A smart highway might have an evolving understanding of the situation on such-and-such a segment of the freeway, and a smart farming system might build up a model of the insect infestation in a field, adapting the drone's flight plan to focus on areas that are at highest risk, while spending less time scanning areas that seem to be unaffected by the bugs.

The argument for smart storage would simply be that as we capture and make sense of these data streams, we're in a unique position to decide what to keep and what to discard, which data to route to back-end systems for further evaluation using offline techniques, etc.  The back-end systems would view the entire smart memory as a big storage cluster containing read-only files, reflecting the knowledge acquired by the smart analytic layer, and indexed by time.  Of course they could write files too, for example to update parameters of the knowledge model in use on the front end.
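A minimal sketch of that storage contract, under my own assumptions about the interface (the class and method names are hypothetical): the front-end analytic layer appends knowledge records as it acquires them, and back-end jobs see time-indexed, read-only slices.

```python
# Hypothetical sketch of a "smart memory": the analytic layer appends
# knowledge records; back-end systems read immutable, time-indexed
# slices.  Names and structure are invented for illustration.
import bisect

class SmartMemory:
    def __init__(self):
        self._times = []    # sorted timestamps
        self._records = []  # knowledge records, parallel to _times

    def append(self, ts, record):
        """Front-end analytic layer writes knowledge as it is acquired."""
        i = bisect.bisect(self._times, ts)
        self._times.insert(i, ts)
        self._records.insert(i, record)

    def read_range(self, t0, t1):
        """Back-end view: a read-only slice of knowledge, indexed by time."""
        lo = bisect.bisect_left(self._times, t0)
        hi = bisect.bisect_right(self._times, t1)
        return tuple(self._records[lo:hi])  # tuple: an immutable snapshot

mem = SmartMemory()
mem.append(100, "lane-1 clear")
mem.append(105, "car 42 drifting")
mem.append(110, "lane-2 clear")
print(mem.read_range(100, 106))  # -> ('lane-1 clear', 'car 42 drifting')
```

Writes flowing the other way -- back-end updates to the front-end knowledge model -- would be a second, symmetric path, omitted here.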

As an example, if a smart highway were to observe that some car is a few cm to the side relative to predictions, the system would probably just tweak the model parameters at the edge.  But if a car changes lanes unexpectedly, that would be a big event, and might be better handled by forwarding the information to a back-end system running Hadoop (Spark/Databricks), where we could recompute the entire set of expected vehicle trajectories for that segment of highway.
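The triage rule in that example might look something like the following sketch.  The threshold, the smoothing constant, and the queue standing in for the Spark/Hadoop ingest path are all invented numbers and names, purely to show the shape of the edge-versus-back-end split.

```python
# Toy triage rule for the highway example: a small deviation just
# nudges the edge model's parameters; a surprising event (e.g. an
# unexpected lane change) is forwarded for back-end recomputation.
# All constants here are made up for illustration.

DEVIATION_THRESHOLD_CM = 50  # beyond this, treat as a "big" event
backend_queue = []           # stand-in for a Spark/Hadoop ingest path

def handle_observation(model, car_id, deviation_cm):
    if abs(deviation_cm) < DEVIATION_THRESHOLD_CM:
        # Small drift: smooth it into the per-car offset at the edge.
        old = model.get(car_id, 0.0)
        model[car_id] = 0.9 * old + 0.1 * deviation_cm
        return "edge-update"
    # Unexpected maneuver: ship it to the back end for full replanning.
    backend_queue.append((car_id, deviation_cm))
    return "forwarded"

model = {}
print(handle_observation(model, "car-42", 3))    # -> edge-update
print(handle_observation(model, "car-42", 320))  # -> forwarded
```

The division of labor matters more than the details: the edge stays fast and stateful, while the expensive global recomputation runs offline.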

In other blog entries, I've shifted to a pitch for Derecho around this point, but this blog is getting long and perhaps I'll just wrap up.  In fact, it isn't just Derecho that I've omitted: I haven't even touched on the need for specialized hardware (FPGA, GPU clusters and TPU clusters seem like the most promising technologies for really understanding speech and video at scale), privacy, security, or data consistency: all could be topics for future blogs.  But those argue for a cloud model too.  Overall, it does strike me as a very promising area for study.  My one qualm, really, centers on the buzzword: fog computing isn't my favorite term; it sounds way too much like marketing drivel, and we've all heard a lot of that sort of thing.  What was wrong with plain old IoT?  Or "smart edge"?

Tuesday, 28 November 2017

Derecho discussion thread

As requested by Scott...  Scott, how about reposting your questions here?
Hi Ken.

Thanks very much for the info. I wasn't familiar with Libfabric. I think since I don't have access to RDMA hardware right now I will wait for Derecho over Libfabric. How can I know when it is available?

In the meantime, a question: you mentioned in one of your blog posts that it would be difficult to use languages other than C++ with the Derecho replicated object API. Could you comment on why that would be?...e.g. relative to Python or Java, say.

To share one thought: the OSGi (Java) concept of a service...or rather a remote service...is conceptually similar. OSGi services are plain ol' object instances that are managed by a local broker (aka service registry) and accessed via interfaces.

One thing that makes OSGi services different from Java objects is support for dynamics...i.e. OSGi services can come and go at runtime.

Which brings me to another question: is it possible for replicated object instances to be created and destroyed at runtime within a Derecho process group?

BTW, if there is some Derecho mailing list that is more appropriate than your blog for such questions, please just send me there.