The business model of a network equipment vendor works roughly as follows: assume a piece of equipment with a Cost of Goods Sold (COGS) of 1 unit; multiply by 10 to get the list price of an enterprise product, or by 20 for a service provider product. Then discount by 60% or 80% respectively to get an Average Sales Price (ASP) of 4 units.
That would yield a margin of 75%. Unfortunately, the numbers never quite add up to that. There is always a bit of a cost overrun, and large-volume customers always end up with a few extra discount points. If you dig into the financial reports of a major vendor, margins hover around 65%.
Out of this 65% gross margin, revenue must be allocated to three major buckets: cost of sales and marketing, R&D, and the bottom line. Investors are looking for an operating margin of 20%; sales can easily consume more than 30% of revenue; and R&D must be funded, because when you sell the equipment you also sell a perpetual software license, and customers expect maintenance and new functionality for at least 5 years after purchase.
Yes, an equipment vendor will “price” a $6,500 PC at $65,000; but that number is entirely fictional. The average customer pays $22k, of which $6k might be COGS, leaving $16k for a perpetual software license, which is not at all outrageous compared with enterprise software licenses. There is some wiggle room in this business model: one could sell the gear at COGS + 30% and then monetize the software with a license plus a yearly subscription. But personally, I don’t expect the math to change much between physical and virtual appliances. There is not that much wiggle room.
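The arithmetic above can be collected into one back-of-the-envelope sketch. The figures are the illustrative ones from the text, not real vendor data:

```python
def asp(cogs, list_multiplier, discount):
    """Average Sales Price: COGS marked up to list price, then discounted."""
    list_price = cogs * list_multiplier
    return list_price * (1 - discount)

def gross_margin(cogs, price):
    return (price - cogs) / price

# Enterprise product: 10x markup, 60% discount -> 4 units
enterprise_asp = asp(1.0, 10, 0.60)
# Service provider product: 20x markup, 80% discount -> also 4 units
sp_asp = asp(1.0, 20, 0.80)

print(f"ideal gross margin: {gross_margin(1.0, enterprise_asp):.0%}")  # 75%

# The $6,500 PC example: the average customer pays ~$22k
software_value = 22_000 - 6_000  # ~$16k left for the perpetual software license

# Splitting the ~65% real-world gross margin into the three buckets
# (percentage points of revenue):
operating_margin = 20                                  # what investors expect
sales_and_marketing = 30                               # often exceeded
rd_budget = 65 - operating_margin - sales_and_marketing  # ~15 points for R&D
```

Note how both the 10x/60% and the 20x/80% paths land on the same ASP: the list price is fictional, the discount structure is what matters.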
Equipment vendors pay a lot of attention to cost of goods. As described above, it underlies the entire economic model. It is the first decision taken in a new project: the curiously named “not to exceed” cost (you can add cost overrun to “death and taxes” when it comes to life’s certainties).
A physical appliance is always going to be more efficient than a virtualized one when it comes to cost of goods and power efficiency. At one end of the spectrum you have a forwarding device (e.g. a router or switch), which is probably 10-20x more cost efficient in its non-virtualized form. At the other end are boxes like security appliances, which tend to be built out of general-purpose CPUs (although probably ones more cost/power efficient than Intel/AMD). Virtualizing a particular appliance will increase cost, often non-trivially, compared to its physical counterpart.
So why do it? What is the idea behind virtualizing network equipment?
There are two complementary answers: resource utilization, and agility in terms of time to deploy a service.
The first is the usual Time Division Multiplexing vs. time-sharing equation. If one deploys a firewall service, to take an example, as a distributed appliance in every branch office, then the firewall capacity required is the sum of every branch office’s maximum throughput, even though the utilization of each is likely to be very low on average. A virtualized offering deployed in a metro/regional POP can be scaled for the maximum aggregate throughput actually seen in the network. The delta can be very significant. Note that this concentrated service could be deployed with either physical or virtual appliances and still capture the same benefit. Subscriber management architectures (DSL, wireless) already tend to follow this design, aggregating a very large number of subscribers onto a concentrated set of resources.
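A toy simulation makes the aggregation benefit concrete. The traffic numbers here are invented for illustration; the structural point (the sum of per-branch peaks vastly exceeds the peak of the aggregate) is what carries over:

```python
import random

random.seed(1)

BRANCHES = 100
PEAK_MBPS = 100  # each branch link is sized for a 100 Mbps burst

# Distributed model: a firewall in every branch, each sized for that branch's peak.
distributed_capacity = BRANCHES * PEAK_MBPS  # 10,000 Mbps of firewall capacity

# Centralized model: size for the maximum aggregate demand actually observed.
# Per-branch demand is bursty with a low average (mean ~10 Mbps here, capped at
# the branch's 100 Mbps link), and the branches' peaks rarely coincide.
aggregate_samples = [
    sum(min(PEAK_MBPS, random.expovariate(1 / 10)) for _ in range(BRANCHES))
    for _ in range(1_000)  # 1,000 time intervals
]
centralized_capacity = max(aggregate_samples)

print(f"distributed: {distributed_capacity} Mbps")
print(f"centralized: {centralized_capacity:.0f} Mbps")
print(f"capacity delta: {1 - centralized_capacity / distributed_capacity:.0%}")
```

With these (hypothetical) parameters the centralized deployment needs a small fraction of the distributed capacity, which is the delta the paragraph above refers to.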
The other key factor to consider is agility. Even when less efficient, an environment where one doesn’t have to wait for physical appliances to be procured and provisioned may pay for itself. An infrastructure that is pre-positioned and is ready to accept new services can allow for rapid deployment of new service offerings.
This, however, requires a completely different approach to the provisioning and management of network services. One model that can potentially deliver on the agility promise is one where the service is entirely managed by the tenant. This is, after all, the service model of AWS, Meraki, and other players that have been re-imagining the space.
My fear is that carriers may be tempted to marry the cost inefficiencies of appliance virtualization with their traditional management solutions, in the form of complex operational procedures glued together by the OSS/BSS. I hope that by openly discussing the cost structure of network equipment we can at least bury the PCs-are-cheap-so-by-using-servers-I’m-going-to-reduce-cost argument. That should not be the goal, as the numbers do not add up.