While driving from Sebastopol (in Sonoma County, north of San Francisco) to Silicon Valley (which is south of San Francisco), one has a lot of time to think about how human-operated cars coordinate and implicitly communicate. So I thought I might follow up on the posting I did a few weeks ago concerning the learnability of implicit protocols and see if I can't make the question a bit more concrete.
The best way to simplify a problem is to think of a base case, so I started by asking myself about the simplest model I could: vehicles modeled as points and a highway modeled as a straight line.
Interactions between vehicles would basically define a graph (in this one-lane world, a simple path): any given car gets a chance to interact with the car in front of it and the one behind it, and the protocol (whatever it might be) is like a state machine running on this graph.
So what behaviors would constitute a protocol in such a situation? Well, we presumably would have car A in state Sa and car B in state Sb, and then some "communication" takes place. Not too many options: they either draw closer to one another at some speed, move apart from one another at some speed (I mean speed of closing or speed of separating, though probably the joint speed and direction of movement need to be specified too), or perhaps one or the other brakes sharply or accelerates sharply. So here we have a "vocabulary" for the cars, with which they communicate.
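Just to make that vocabulary concrete, here is a minimal sketch in Python (every name here is my own invention, purely for illustration): the signals as an enumeration, and a protocol as a transition table mapping (my state, observed signal) to (my new state, emitted signal).

```python
from enum import Enum, auto

class Signal(Enum):
    """The 'vocabulary': things one car can observe another doing."""
    CLOSE_SLOWLY = auto()      # gap shrinking at a modest rate
    SEPARATE_SLOWLY = auto()   # gap growing at a modest rate
    BRAKE_HARD = auto()        # sudden sharp deceleration
    ACCELERATE_HARD = auto()   # sudden sharp acceleration
    HOLD = auto()              # gap held roughly constant

# A protocol is then a transition table: (my state, signal observed
# from my neighbor) -> (my new state, signal I emit in response).
# States are just opaque labels; a plain dict will do.
def step(protocol, state, observed):
    """One round of 'communication' under a given protocol."""
    return protocol.get((state, observed), (state, Signal.HOLD))

# For instance, a polite car yields when tailgated:
polite = {("cruise", Signal.CLOSE_SLOWLY): ("yielding", Signal.SEPARATE_SLOWLY)}
print(step(polite, "cruise", Signal.CLOSE_SLOWLY))
```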
A protocol, then, would be a rule by which A reacts to inputs of this form from B, and vice versa, since any action by B on A is also perceived by B itself: even when B probes A, B may be forced to take some reactive action too. The human version: you are trying to switch from some horrendously bad radio station to a different, slightly less horrible alternative when you look up and notice that you've gotten much closer to the car in front of you than you expected. So in that case it was your own darn fault, but just the same, you react!
So to learn a protocol, we might imagine a series of "tests" by which vehicle B, approaching A from behind, experiments to investigate A's reaction to various inputs, recording these into a model that over time converges towards B's approximation of A's algorithm. A, similarly, is learning B's behavior, and A might elicit behaviors from B. For example, sometimes a car comes up on my tail very close, and especially if we are already moving at basically the fastest possible speed, it annoys me enough that I gradually slow down, rather than speeding up as he or she probably intended for me to do. There is an optimal distance at which to follow me, if you are trying to do so, and similarly for most drivers. Drive 3x further back and I might ignore you completely. Drive at half that distance and my behavior departs from the norm: I slow down, which is suboptimal for both of us. My way of punishing your pushy behavior!
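Here is roughly how I picture B's testing procedure, again just a sketch with invented names: `probe_a` stands in for the physical act of issuing a signal and watching A's response, and B simply tallies responses until a model emerges.

```python
from collections import defaultdict

def learn_by_probing(probe_a, probes, rounds=10):
    """Estimate A's reaction to each probe signal by repeated trials.

    probe_a(signal) -> A's observed response (possibly noisy).
    Returns B's working model: the most common response per probe.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for _ in range(rounds):
        for p in probes:
            counts[p][probe_a(p)] += 1   # e.g. B closes the gap; A brakes
    return {p: max(resp, key=resp.get) for p, resp in counts.items()}
```

With a noisy A, more rounds sharpen the estimate, but every probe has a cost; that tension is exactly what the utility function below tries to capture.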
If you could figure out how two cars can learn a protocol, then you could ask if this generalizes to situations that form trains of cars. So here, B catches up with a train: not just A, but perhaps X-Y-Z-A, with A in the rear and some form of agreed-upon protocol in use by X,Y,Z and A. B gets to join the train if it can learn the protocol and participate properly. Failing to do so leaves B abandoned, or results in an accident, etc.
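The admission test might look something like this (a sketch; the 0.9 threshold is an arbitrary stand-in for "participates properly"):

```python
def try_join_train(train_protocol, learned_model, tolerance=0.9):
    """B may join the train X-Y-Z-A if the model it has learned agrees
    with the train's protocol on enough (state, signal) cases;
    otherwise B is left behind (or worse, causes an accident)."""
    cases = list(train_protocol)
    agree = sum(learned_model.get(c) == train_protocol[c] for c in cases)
    return agree / len(cases) >= tolerance
```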
Optimal behavior, of course, maximizes speed and minimizes the risk of accidents. I'm starting to see how one could add a kind of utility function here: B wants to learn A's protocol quickly and "efficiently", with as few tests as possible. Then B wants to use the knowledge gained to achieve the highest possible speed "jointly" with A. That defines a paired protocol, and the generalization becomes a train protocol.
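Written out as a toy formula (the weights are wholly made up; only the trade-off shape matters): reward joint speed, penalize accident risk, and charge a small price per test, so that learning quickly is itself part of being optimal.

```python
def utility(avg_speed, accident_risk, tests_used,
            w_speed=1.0, w_risk=50.0, w_test=0.1):
    """Toy utility for B: drive fast, stay safe, learn cheaply.

    avg_speed     -- joint cruising speed achieved with A
    accident_risk -- estimated probability of a collision
    tests_used    -- number of probes B spent learning A's protocol
    """
    return w_speed * avg_speed - w_risk * accident_risk - w_test * tests_used
```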
Another observation: Suppose that B has a small set of models in a kind of grab-bag of models that cars commonly use. Now B's job of learning the model is potentially easier: if A is using one of the standard models, it may take only a very small number of tests to figure that out. Moreover, having a good guess might make some models "learnable" that would not be learnable with a small number of tests otherwise. This fits well with the experience I've had in New Jersey, Belgium, Tel Aviv and California: each of those places has its own norms for how closely cars tend to space themselves on high-speed throughways and in other situations, so while you can feel disoriented at first, in fact this only adds up to four models. If someone handed you those four models, I bet you could figure out which one applies with very few experiments.
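Picking among a handful of known models is essentially Bayesian inference, and a small sketch shows why so few experiments suffice (the four "regional" models and all the probabilities would of course have to come from somewhere; I am inventing the setup purely for illustration):

```python
def identify_model(priors, likelihoods, observations):
    """Bayesian update over a small candidate set of driving models.

    priors      -- {model: prior probability}
    likelihoods -- {model: {observation: P(observation | model)}}
    observations-- B's observed responses to its probes
    """
    posterior = dict(priors)
    for obs in observations:
        for m in posterior:
            posterior[m] *= likelihoods[m].get(obs, 1e-6)
        total = sum(posterior.values())
        posterior = {m: p / total for m, p in posterior.items()}
    return posterior

# e.g. priors = {"NJ": .25, "Belgium": .25, "TelAviv": .25, "CA": .25}
# When the four models respond differently to tailgating, two or
# three observations typically push one posterior close to 1.
```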
Unless, of course, you accidentally end up behind a New Jersey driver who is learning the rules of the road in Tel Aviv for the first time. I think I've encountered a few such cases too.
So that would be a further interesting case to consider: roads with mixtures of models, with each subset of cars using a distinct model. In fact my story of New York City cab drivers, from last time, would fall into that category. The cabs figure out how to form school-of-fish packs that flow around obstacles and other cars, no matter what those other cars might be doing. Meanwhile, there could easily be other protocols: New York public transit buses, for example, have their own rules (they are basically bigger than you and can do whatever they like, and that is precisely how they behave).
Conversely, the New Jersey driver also illustrates a complication: the implicit protocols of interest to me are cooperative behaviors that maximize utility for those participating in the protocol, and yield higher utility than could be gained from selfish behavior. But any vehicle also has an autonomous set of behaviors: a protocol too, but one that it engages in purely for its own mysterious goals. And these selfish behaviors might not be at all optimal: I recently watched a teenage driver in the car behind me putting on makeup while talking on a cell phone and apparently texting as well. So this particular driver wasn't maximizing any normal concept of utility! And all of us have encountered drivers in the grip of road rage, driving hyper-aggressively, or teenage guys hot-dogging on motorcycles or weaving through traffic in their cars. Their sense of utility is clearly warped heavily towards speed, with a much-diminished negative utility for things like accidents, traffic violations and tickets, or sudden death. So even as we learn to cooperate with vehicles prepared to cooperate, and need to figure out the protocols they are using, the environment is subject to heavy noise from these kinds of loners, inattentive drivers, reckless drivers, and so on.
One thing that strikes me is that the cab-driver protocol in New York, once you know it, is a good example of a very easily discovered policy. Despite a huge number of non-compliant vehicles, anyone who knows the cab protocol and drives using it will quickly fall into synchrony with a pack of others doing so. So relative to the level of noise, the positive feedback must be massive, if we had the proper metric for it.
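I don't know what the proper metric would be, but one crude candidate (again my own invention) is the fraction of cars in the pack holding close to the pack's target headway:

```python
def pack_synchrony(headways, target, band=0.2):
    """Fraction of cars whose headway is within +/- band of target.
    Values near 1.0 would mean the pack has locked into the protocol,
    even with non-compliant vehicles adding noise around it."""
    hits = sum(abs(h - target) <= band * target for h in headways)
    return hits / len(headways)
```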
Interestingly, there is clearly a pack-like behavior even for a line of cars driving in a single lane. This would be a very good case to understand first.
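Even the crudest single-lane car-following rule shows pack-like spacing emerging from purely local reactions. A minimal sketch (the gains, gaps and speeds are arbitrary; this is a bare linear controller, not a serious traffic model):

```python
def simulate_line(positions, speeds, steps=200, dt=0.1,
                  target_gap=30.0, gain=0.3, damp=1.0, max_speed=30.0):
    """Single-lane car following: car 0 leads, and each follower
    adjusts its speed in proportion to its gap error and its closing
    speed relative to the car ahead (the damping term keeps it from
    oscillating forever). Pack-like spacing emerges from local rules."""
    n = len(positions)                    # positions in decreasing order
    for _ in range(steps):
        for i in range(1, n):
            gap = positions[i - 1] - positions[i]
            closing = speeds[i - 1] - speeds[i]
            speeds[i] += (gain * (gap - target_gap) + damp * closing) * dt
            speeds[i] = max(0.0, min(max_speed, speeds[i]))
        for i in range(n):
            positions[i] += speeds[i] * dt
    return positions, speeds

# e.g. simulate_line([120.0, 70.0, 45.0, 0.0], [25.0, 20.0, 28.0, 22.0])
```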
Of course the real-world school-of-fish behaviors involve (1) shifting lanes, which is the easy case, and (2) ignoring lane boundaries, which is a harder case. Given that even the cars-on-a-string case taxes my imaginative capabilities, I think I'll leave the full 2-dimensional generalization to the reader!