So I recently came across an article reposted by a popular internet blogger (pointing to http://www.theregister.co.uk/2011/09/21/flatter_networks/ ). The article basically asserts that Cisco's hierarchical model for datacenters is not well suited to high-performance virtualized datacenters. To my understanding, the position taken is that as hardware platforms continue to increase the quantity and performance of their processor cores, the system performance of server nodes (hypervisors hosting shared VMs) will outpace the performance of the route/switch hardware infrastructure; ipso facto, the server nodes' processors will be left waiting on the network to move packets between nodes. The argument appears to be that replacing the hierarchy with a flat network is more efficient and better suited to the datacenter.
While I follow the basic logic of this argument, I'm failing to see where the specific functions delegated to the different layers (such as the filtering and security functions in the aggregation layer) would be relocated. In other words, how might you implement things like ACLs or the inter-VLAN routing typically found in the distribution layer? Wouldn't the nanosecond/microsecond latency incurred to support such tasks simply be shifted elsewhere? Maybe I'm misunderstanding the strategies of the competing models proposed in the article and how one is supposed to be superior overall. Thanks in advance for all your input.
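To make this concrete, here's roughly the kind of distribution-layer configuration I have in mind (a minimal IOS-style sketch; the VLAN numbers, addresses, and ACL name are all hypothetical):

! SVIs providing inter-VLAN routing at the distribution layer
interface Vlan10
 description Server VLAN
 ip address 10.1.10.1 255.255.255.0
 ip access-group SERVER-IN in
!
interface Vlan20
 description Management VLAN
 ip address 10.1.20.1 255.255.255.0
!
! Filtering policy enforced between VLANs as traffic is routed
ip access-list extended SERVER-IN
 permit tcp 10.1.10.0 0.0.0.255 10.1.20.0 0.0.0.255 eq 22
 deny ip 10.1.10.0 0.0.0.255 10.1.20.0 0.0.0.255
 permit ip any any

In a flat design, that policy still has to live somewhere, whether at the network edge, in the hypervisor's virtual switch, or in some appliance, and that's exactly where I lose the thread.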
I have been hearing similar things out there. Virtualization definitely brings changes to the design. That is one of the reasons that the CCDA and CCDP were updated.
Thanks for the feedback, Jared. I can't speak (yet) for the updated CCDP, but after having recently passed the updated CCDA exam, I'm wondering whether this issue is actually properly addressed by the hierarchical model. Then again, I'm not sure anyone has actually published test results supporting the claim (i.e., that Cisco's hierarchy hinders virtualization performance).
It's one of those "too early to tell" concepts. The funny thing is how quickly stuff changes. What used to be a laughed-at design is now the best practice.
In the end, though, you'll design your best data center around how the traffic is going to flow. Your DC may be completely different than mine, and that doesn't make either one wrong as the purpose of your DC and the data flow therein may be completely different as well.
Just my two cents,
So yeah... welcome to the churn. A few of us discussed this out at Cisco Live this year.
The tiered model was put in place because you needed demarcation points and redundancy, as well as a place to terminate Layer 2. The push in the campus is still to extend Layer 3 down to the access if at all possible...
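To illustrate what routed access looks like (a hypothetical IOS sketch; the interface, addressing, and OSPF details are made up):

! Access-switch uplink as a routed point-to-point link - no trunk,
! no spanning-tree blocking on the uplink
interface GigabitEthernet1/0/49
 no switchport
 ip address 10.1.255.1 255.255.255.252
!
! A routing protocol, not STP, handles reconvergence on failure
router ospf 1
 network 10.1.0.0 0.0.255.255 area 0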
But the datacenter designs continue to evolve. vSphere and OTV can really turn a traditional datacenter design on its head. With Nexus beginning to support FabricPath, you're going to see increasingly flatter networks as Spanning Tree becomes less of a concern. STP is a slow-to-converge protocol compared to most dynamic routing protocols, even when properly tuned.
Flatter does not necessarily mean less complex, either - it just refers to collapsing the traditional layers and their associated functionality.
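For a rough idea of what that looks like on the Nexus side (an NX-OS FabricPath sketch; the switch-id, VLAN, and interface values are hypothetical):

! Enable FabricPath and give this switch an ID in the fabric
install feature-set fabricpath
feature-set fabricpath
fabricpath switch-id 11
!
! Carry this VLAN across the fabric instead of an STP domain
vlan 100
 mode fabricpath
!
! Core-facing link forwards via FabricPath (IS-IS based) rather than STP
interface Ethernet1/1
 switchport mode fabricpath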
Suffice it to say, it's not a competing model; it's the dawn of a new era. The real dilemma facing organisations is vendor selection!
Thanks for all the replies. Sorry for the late reply, new job, new obligations.
I appear to have awarded points in this discussion with no apparent rhyme or reason. In hindsight, I probably shouldn't have marked it as a question and should have left it as a discussion. But I definitely think that Scott's answer happened to be the most "correct", so to speak. I feel designs are application-specific. Further, I suppose whether or not this approach is "outdated" remains to be seen, although it appears to be an increasingly popular opinion in some quarters. Thanks again for all your responses!
The above sounds like it could be useful for comparing and contrasting the different approaches vendors are taking in this respect, though it's almost bound to give you even more of a headache.
Thanks, rmhango. I've registered for this event; should be interesting. Got my trusty Extra Strength Tylenol (rapid release) on hand.
Yeah, you know actually I think this is pretty cool.
I'd done a bunch of research on Software Defined Networking last year and at first I was a bit apprehensive. But after viewing a bunch of different demonstrations, including HP's latest "Flex Network" technology (http://h17007.www1.hp.com/us/en/whatsnew/090511.aspx), I can definitely see the value in such advancement.
Now I don't have a great many years' experience in IT, but my opinion so far is that SDN will take a while to really catch on. Although it has relatively broad buy-in so far from some pretty key players, including Cisco (http://blogs.cisco.com/datacenter/whats-new-with-cisco-and-openflow/), it's still got a very long way to go in the broader arena. While promising, my sense is that it will be quite a while before it starts redefining the jobs of current Cisco network professionals.
Hmm! I'm not so sure?
Managed to find some time to bring myself up to speed (with the architecture, at least). Not sure how many releases we are away from maturity, but the future looks like it will incorporate both the configuration and the programming of networks.
Perhaps a case of program where you can and configure where you must!
Isn't programming a form of configuration?
It's all perspective... New, old, SSDD, whatever.
Cool article, thanks for the find. I'm honestly quite glad to see technology continually revolutionizing things for the better. That said, I still personally don't fear my skillset will be going the way of Novell any time real soon.
And just to highlight, that article actually focuses on a product providing Layer 4-7 network services (load balancers, firewalls, VPN, etc.), not Layer 2-3 network infrastructure SDN. But to your point, while I think the technology is revolutionary, I'm not yet convinced that the buy-in is broad enough, nor the circumstances ideal enough, for companies to jump ship and overhaul their networks so disruptively. Maybe that will be different in five years, but that's plenty of time to amend one's skill set. If it's supposedly that easy to use, then it'll be that easy to learn.
Lastly, do I think it would be so easy to learn that organizations would start hacking off their network operators and replacing them with low-wage Windows GUI script kiddies? I personally do not foresee companies dispensing with the perspective and experience that "legacy network operators" have to offer, at least not for a long while.
Much of that depends on how well those GUIs are designed and what they are (or are not) capable of!
The difference is, though, what happens when something goes wrong? Scripts are great, but only under expected circumstances!
In the end, nothing will replace experience! Many tasks can be made easier, and it's all fun and games until someone loses an eye!
CNEs certainly had their day, and though it may not be of much comfort to them, at least Novell is still operational.
Overall I think you make a reasonable assumption, though I get the impression it may be from a narrow perspective. All I would add is not to underestimate the role of economics in technology adoption decisions.