Every few years, in the networking industry, a new and promising technology proposal emerges around the concept of controlling individual flows of traffic. The analogy this invokes in my mind is that of trying to build a sandcastle by moving individual grains of sand. The appeal of “fine-grained” control is illusory: the sand moves.
In telecommunications, the challenge has always been how to “bucketize” traffic in a way that allows statistical inferences to be made about it. Traffic flows in data networks are transient. No system is ever going to be built to control individual flows on a one-off basis. A flow must be classified into a bucket by a policy for any meaningful control to be possible.
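To make the point concrete, here is a minimal sketch (all names and rules hypothetical) of what policy-based classification means: a transient flow is never addressed by its own identity; its 5-tuple is matched against an ordered policy table and mapped into a stable bucket, and the network reasons only about the buckets.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass(frozen=True)
class Flow:
    """A transient flow, identified by (part of) its 5-tuple."""
    src: str
    dst: str
    proto: str
    dst_port: int

# Hypothetical policy table: each rule matches on aggregate properties
# (ports, prefixes, protocols), never on an individual flow's identity.
POLICIES = [
    ("voip", lambda f: f.proto == "udp" and 16384 <= f.dst_port <= 32767),
    ("web",  lambda f: f.proto == "tcp" and f.dst_port in (80, 443)),
    ("mgmt", lambda f: ip_address(f.dst) in ip_network("10.0.0.0/8")),
]

def classify(flow: Flow) -> str:
    """Map a transient flow into a stable bucket; first match wins."""
    for bucket, matches in POLICIES:
        if matches(flow):
            return bucket
    return "best-effort"
```

Control (queueing, metering, path selection) is then applied per bucket, e.g. `classify(Flow("192.0.2.1", "198.51.100.7", "tcp", 443))` lands the flow in the "web" bucket, together with every other flow the policy considers equivalent.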
Thus, rather than talk about flows, the interesting question is what policies can be defined and expressed in a network. This is not a new debate. When MPLS was designed, the architects of the protocol had the clarity of thought to build it around controlling traffic “trunks” (Forwarding Equivalence Classes, to use the proper terminology). This was at a time when Ipsilon was actively promoting technology proposals centered around controlling individual traffic flows, without ever quite explaining how these would be managed in aggregate.
Assuming one can express policies that classify flows into meaningful buckets, there is no possible rationale for attempting to perform flow classification in a centralized way. It is perfectly reasonable to have a central location where such policies are configured, or even where some of the implementation details are computed. But classifying flows is a function that must be implemented in as distributed a manner as possible in any system designed to be anything more than an educational tool.
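The division of labor argued for here can be sketched as follows (a hypothetical illustration, not any particular system's API): a controller distributes *policy* once, and each edge node compiles it into a local table and classifies its own flows with no per-flow round trip to the center.

```python
# Centrally authored policy: (bucket, protocol, destination port).
# This is what the controller distributes -- rules, not flow decisions.
POLICY = [
    ("web", "tcp", 443),
    ("dns", "udp", 53),
]

class EdgeNode:
    """An edge device that classifies locally from a pushed policy."""

    def __init__(self, policy):
        # One-time compilation of the pushed policy into a lookup table.
        self.table = {(proto, port): bucket for bucket, proto, port in policy}

    def classify(self, proto: str, dst_port: int) -> str:
        # Purely local decision: no controller involvement per flow,
        # so classification capacity scales with the number of edges.
        return self.table.get((proto, dst_port), "best-effort")

# The same policy is pushed to every edge; classification stays distributed.
edges = [EdgeNode(POLICY) for _ in range(3)]
```

The controller's job ends once the table is pushed; the per-packet, per-flow work happens where the traffic is.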
It is encouraging that, a few years into the OpenFlow meme, the majority of those who initially ignored “networking 101” have managed to rediscover it. Hopefully in a couple more years we can collectively move on.