"In 2013 we’ll see network virtualization established as the real goal of SDN." – Allwyn Sequeira, cto.vmware.com

Beyond SDNs – Networking & Security in 2013
We will look back on 2012 as the year that ushered in a new era of networking and infrastructure security in enterprise and cloud datacenters, driven by the software defined datacenter vision.

Networking stands alone

#gartnerdc – I keep hearing vendors acknowledge that networking products sit outside virtualization-driven convergence, yet they keep pushing hardware for scale – why? Most seem to be ignoring the new dimension of scale: scale-out. Treating scale as scale-up only is an artifact of needing to optimize high-cost hardware.

NWW – 10 hard truths IT must learn to accept

Unsanctioned devices, compromised networks, downtime – today’s IT is all about embracing imperfections. Check out the whole deck on NetworkWorld.

IT concession No. 2: You’ve lost control over how your company uses technology

Business users with no tech acumen can spin up a third-party business cloud service with a credit card and a click of a button. IT has lost control over IT.

That’s not necessarily a bad thing. Cloud and mobile apps can give frustrated business users access to tech resources without putting additional burden on IT.

Your job is no longer to provide top-down solutions; it’s to enable business users to make the right decisions, says Scott Goldman, CEO of TextPower.

“Instead of struggling to regain control, tech departments should strive for something more valuable: influence,” he says. “The days of the all-powerful IT department dictating methods and machines is gone. The sooner they realize it, the faster they’ll actually regain some level of control.”

Winning the Capacity Planning Game – Terrific Post from Linerate Systems

I can’t improve on Manish’s post. Hardware dependency will always create too much compromise. With software networking – “Underprovisioned? — add servers in minutes. Overprovisioned? — repurpose the servers or enjoy the spare capacity.”

http://lineratesystems.com/winning-the-capacity-planning-game/

MAR 26, 2012
Posted by Manish Vachharajani
Network architects and operators know that capacity planning for network appliances — firewalls, load balancers, advanced security devices, etc. — is a high stakes game, critical to the total cost of deploying and operating data center infrastructure. The details vary, but there are some common techniques used to ensure that network appliances will have sufficient capacity. Unfortunately, these techniques have serious drawbacks, especially when operating at scale. Below, I’ll describe these techniques in the context of some customer experiences we’ve had at LineRate.

Capacity Planning Techniques

Massively Overprovision

The first technique is to massively overprovision appliances. At one customer site, the network architect always insisted on buying the highest end appliance that could be budgeted, just in case. At another, the customer only used 25% of the rated capacity (e.g., 10 Gb/s of traffic through a pair of 20 Gb/s appliances), leaving 75% of the capacity idle most of the time. While this approach can work, it is a very expensive way to hedge your bets. High-end application delivery controllers, for example, run a few hundred thousand dollars per high-availability pair at list price.
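As a quick back-of-the-envelope sketch of that overprovisioning example: the pair price below is a hypothetical placeholder for "a few hundred thousand dollars," not a real quote.

```python
# Overprovisioning arithmetic from the example above: 10 Gb/s of traffic
# through a pair of 20 Gb/s appliances. The list price is a placeholder.
rated_gbps_per_appliance = 20      # each appliance rated at 20 Gb/s
appliances = 2                     # deployed as a pair
peak_traffic_gbps = 10             # actual peak load

total_capacity = rated_gbps_per_appliance * appliances      # 40 Gb/s
utilization = peak_traffic_gbps / total_capacity            # 0.25

pair_list_price = 300_000          # hypothetical HA-pair list price
idle_capital = pair_list_price * (1 - utilization)

print(f"Utilization: {utilization:.0%}")                           # 25%
print(f"Capital tied up in idle capacity: ${idle_capital:,.0f}")   # $225,000
```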

Measure Capacity

The second technique is to carefully measure and test each appliance to see how much capacity it can handle in today’s environment. Unfortunately, this is harder than it seems. Appliances see huge performance variations depending on the specific features they use and the traffic mix they experience. For example, one site was getting only 30% of the rated capacity from their appliances. Many sites measure appliance performance on production traffic because they’ve learned that the test network can’t predict what they will see in the production network. One very large datacenter operator described a byzantine multi-month test process where they quantified the cost per gigabit, cost per TCP connection, etc. for a large variety of workloads with the exact feature set that would be used in production.

Despite their best measurement efforts, network operators still get caught with their pants down. The site that saw 30% of the appliance’s rated capacity only found out about the problem 9 months after production deployment. Another site calls their vendor before changing any configuration on their appliances to avoid this kind of nasty surprise.
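To make the measurement approach concrete, here is a minimal sketch of the kind of normalization that large operator described: turning measured throughput and connection counts into cost-per-Gb/s and cost-per-connection figures. The workloads, numbers, and pair price are illustrative assumptions, not LineRate or customer data.

```python
# Normalizing measured appliance performance into unit costs. All numbers
# below (pair price, throughput, connection counts) are made-up assumptions.
pair_price = 300_000  # hypothetical HA-pair list price

# workload/feature mix -> (measured throughput in Gb/s, concurrent TCP connections)
measurements = {
    "plain L4 load balancing":  (20.0, 2_000_000),
    "SSL termination":          (6.0, 400_000),
    "SSL + content inspection": (3.0, 250_000),
}

for workload, (gbps, conns) in measurements.items():
    print(f"{workload:26s}  ${pair_price / gbps:>9,.0f} per Gb/s   "
          f"${pair_price / conns:.2f} per connection")
```

The spread between rows is exactly why a single “rated capacity” number tells you so little about what the box will do with your traffic.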

Project Future Traffic

The third technique is to make educated guesses about future application requirements and how they will affect appliance performance. Unfortunately, this is more art than science, and often futile given the rapid pace of innovation in applications and devices. Consider that the iPhone was released in 2007, 5 short years ago. Appliances deployed before the iPhone announcement had to weather an unanticipated, massive increase in slow mobile clients. Folks who build network equipment will tell you that it is often the slow connections that hurt most because they occupy resources for much longer than fast connections.
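The “slow clients hurt” point can be illustrated with Little’s law (average concurrency = arrival rate × average connection lifetime). The request rates and durations below are invented for the example.

```python
# Little's law: average concurrency = arrival rate * average duration.
# The request rates and durations here are made up for illustration.
def concurrent_connections(requests_per_sec, avg_duration_sec):
    return requests_per_sec * avg_duration_sec

fast = concurrent_connections(10_000, 0.2)  # wired clients, ~200 ms per request
slow = concurrent_connections(10_000, 3.0)  # slow mobile clients, ~3 s per request

print(f"Fast clients: {fast:,.0f} concurrent connections")   # 2,000
print(f"Slow clients: {slow:,.0f} concurrent connections")   # 30,000
# Same request rate, 15x the connection state the appliance must hold at any instant.
```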

An Alternative Approach

All these techniques try to reduce the likelihood of mis-estimating the required appliance capacity. Network operators need to get this right because being wrong is really expensive. Undersizing a deployment could cost hundreds of thousands of dollars in new equipment, not to mention the cost of service outages. Oversizing the deployment can waste hundreds of thousands of dollars up front.

A profoundly different approach is to concede that performance data and traffic forecasts will be highly inaccurate. Instead of trying to get everything right, just reduce the cost of being wrong.

If being wrong is cheap, you don’t need as much accuracy in the capacity planning process. If pure software can deliver network services at the performance points of high-end custom hardware appliances, the cost of overprovisioning or underprovisioning is the cost of a few commodity x86 servers. Underprovisioned? — add servers in minutes. Overprovisioned? — repurpose the servers or enjoy the spare capacity. At LineRate Systems, our Software Defined Network Services make this possible.
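Here is a rough sketch of why being wrong becomes cheap in a scale-out model; the per-server throughput and server cost are assumptions for illustration, not LineRate figures.

```python
# "Cheap to be wrong" arithmetic for a scale-out software deployment.
# gbps_per_server and server_cost are assumed values for illustration.
import math

gbps_per_server = 5      # assumed throughput of one commodity x86 server
server_cost = 5_000      # assumed cost of one commodity x86 server

def servers_needed(traffic_gbps):
    return math.ceil(traffic_gbps / gbps_per_server)

forecast_gbps = 10
actual_gbps = 16         # the forecast turned out to be low

shortfall = servers_needed(actual_gbps) - servers_needed(forecast_gbps)
print(f"Correction: add {shortfall} servers, about ${shortfall * server_cost:,}")
```

Compare that correction to fixing the same miss with another six-figure appliance pair, and the argument for tolerating forecast error largely makes itself.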

ABOUT THE AUTHOR

Manish Vachharajani

Manish is a Founder and the Chief Software Architect at LineRate Systems. He has spent 13 years studying software performance on general-purpose processors and has co-authored 50 publications on processor performance across a range of fields, including optimizing compiler design, on-chip optics, performance modeling, parallel programming, and high-performance networking. LineRate’s core technology is the result of bringing together the work of Manish’s high-performance computing research group at the University of Colorado and co-founder John Giacomoni’s work in software-defined networking.