Monday, January 16, 2017

Traditional Data Center Design

Primer


This lab will review and discuss design elements used to build a secure, modular data center environment. Labs are limited to what can be accomplished in GNS3, so the focus will be on the logical Layer 3 perspective. The devices used to represent the environment are virtual routers, 7200 and 3725 models, running legacy IOS versions 15.2 and 12.4. With that said, an eye will be kept on staying vendor agnostic. Modern data centers should aim to leverage newer technologies to eliminate traditional sore spots (we’re looking at you, spanning tree). An example would be something like Cisco’s VSS\vPC to mitigate the effects of STP, or VXLAN to remove it altogether.

Just like anything else, requirements need to be established before any actual work is done. That said, there are a few inherent traits that should exist in any good data center design.

- Modularity
- Security
- Fault tolerance (HA)

The world is constantly changing, and a data center’s requirements are likely to change even more. Modularity is probably the most important aspect of a data center; it is the key to adapting to this change. After completion, it should be easy to simply extend or build new functionality into a data center. A “pod” is one method to facilitate modularity. From a high level perspective, a “pod” is an area of the data center dedicated to a specific purpose. The purpose depends on the business, but could involve anything from a specific application family, to a single business unit, to simply a DMZ. Inside a “pod”, multiple virtual routing instances (e.g. Cisco VRFs) are used to provide further sub-areas within the whole, and each VRF is further broken down into VLANs. It is important to remember to adapt the design to the requirements, rather than adapt the requirements to the data center.
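To make the pod/VRF/VLAN layering concrete, here is a minimal VRF-lite sketch in classic IOS. The VRF name, VLAN numbers, and addressing are made up for illustration and are not taken from the lab:

! Define one sub-area of the pod as a VRF (VRF-lite; no MPLS required)
ip vrf ACCESS-APP
 rd 65000:10
!
! Each VLAN inside the VRF gets an SVI bound to that VRF
interface Vlan110
 description Application front-end servers
 ip vrf forwarding ACCESS-APP
 ip address 10.10.110.1 255.255.255.0
!
interface Vlan120
 description Application database servers
 ip vrf forwarding ACCESS-APP
 ip address 10.10.120.1 255.255.255.0

The same chassis can hold any number of these VRFs, with each one acting as a self-contained sub-area of the pod.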

Security is a no-brainer. Obviously, businesses that require PCI, SOX, or HIPAA compliance are required to focus on protecting valuable information. Security should still be a top priority for businesses that do not require such compliance, even if just to protect production applications from a virus a user inadvertently brings in.

How many reports are released discussing the financial impact of a data center outage? Fault tolerance\High Availability designs mitigate the effects of unexpected device failures, thereby preventing the dreaded outage. In one of its most basic forms, HA can be achieved with “two of everything”. Layer 3 of the OSI model is excellent for building robust environments, largely because routing protocols can actively use every redundant path. Layer 2 has an okay feature in STP, but it has its drawbacks, chiefly its lack of multipathing. As discussed earlier, there are newer technologies that can either mitigate or remove STP altogether.
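As a small illustration of Layer 3 multipathing (interface numbers and addressing here are hypothetical, not from the lab), a router with “two of everything” upstream can install and use both equal-cost OSPF paths, where STP would simply block one of two Layer 2 uplinks:

! Two routed uplinks, one to each upstream chassis
interface GigabitEthernet0/0
 description Uplink to distribution chassis A
 ip address 10.0.0.2 255.255.255.252
!
interface GigabitEthernet0/1
 description Uplink to distribution chassis B
 ip address 10.0.0.6 255.255.255.252
!
router ospf 1
 network 10.0.0.0 0.0.0.255 area 10
 ! both equal-cost routes are installed and used (ECMP)
 maximum-paths 2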

High Level Review


This data center design leverages the traditional Core and Distribution\Aggregation model. OSPF is chosen as the IGP because it is an open standard. It also forces the use of a hierarchical scheme, and hierarchy is the main way to provide modularity. The basic idea is that the Core contains high speed devices with redundant Layer 3 connections to every pod. Every service (application, WAN, infrastructure such as AD, DMZ, Internet) is contained within a pod. The Core contains routes to every network, regardless of security. Something like out-of-band management is out of scope for this design, but it would be another, simpler network laid on top of this one.
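A rough sketch of the Core side of that idea (interface numbers, addressing, and the router-id are illustrative assumptions): each Core device carries redundant routed links in area 0 toward a pod’s Distribution pair.

! Core router: redundant Layer 3 links toward one pod's distribution pair
interface GigabitEthernet1/0
 description To Pod1 distribution chassis A (CORE VRF)
 ip address 10.255.1.1 255.255.255.252
 ip ospf network point-to-point
!
interface GigabitEthernet2/0
 description To Pod1 distribution chassis B (CORE VRF)
 ip address 10.255.1.5 255.255.255.252
 ip ospf network point-to-point
!
router ospf 1
 router-id 10.255.0.1
 network 10.255.0.0 0.0.255.255 area 0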


[Diagram: multiple pods connecting to the Core through Distribution CORE VRFs]

This diagram depicts multiple pods, each connected to the Core via a CORE VRF in the Distribution area. The Access area of each pod is simply any number of VRFs that live on the same switching hardware. This allows all traffic to be routed from the Core, to the Distribution CORE VRF, through the firewall(s), and finally to the correct Access VRF. The switching hardware would be a redundant pair, maybe two Cisco Nexus 5Ks. Because OSPF is an open standard, most firewall vendors could be used. The firewall, and any other appliance, should be connected using some type of multichassis etherchannel (vPC for example).
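True multichassis etherchannel (vPC, VSS) is beyond what this lab’s legacy IOS can reproduce, but as a single-chassis sketch, the bundle facing the firewall might look something like the following. Interface, channel, and address numbers are assumptions for illustration:

! Two member links bundled toward the firewall with LACP
interface Port-channel10
 description To firewall inside interface
 ip vrf forwarding CORE
 ip address 10.10.255.1 255.255.255.252
!
interface GigabitEthernet0/1
 description Firewall member link 1
 channel-group 10 mode active
!
interface GigabitEthernet0/2
 description Firewall member link 2
 channel-group 10 mode active

In the multichassis version, the two member links would land on different chassis while the firewall still sees a single logical bundle.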

The Distribution area is responsible for interfacing with the Core and performs functions such as network summarization, both to and from the pod’s OSPF area. In OSPF speak, the ABRs for each OSPF area live here. The firewalls, Access VRFs, and other appliances all live in the same OSPF area. Think of each pod as a unique OSPF area encompassing all of that pod’s VRFs.

A critical point: since the firewalls are OSPF internal routers, they are able to take advantage of the network summarization done by the Distribution ABRs. (Remember, the Distribution ABRs are just VRFs that live on the pod chassis switches.) This shrinks the routing table, saving resources inside each firewall that can instead be dedicated to firewalling.
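A minimal sketch of that summarization on a pod chassis (the process ID, VRF name, and prefixes are assumptions, not the lab’s actual values): the CORE VRF, acting as the ABR, advertises one summary in each direction, so the firewalls inside area 10 carry a handful of routes instead of the whole data center.

ip vrf CORE
 rd 65000:1
!
router ospf 10 vrf CORE
 ! often required when OSPF runs in a VRF without MPLS/BGP (VRF-lite)
 capability vrf-lite
 ! summarize this pod's space toward the Core (area 10 into area 0)
 area 10 range 10.10.0.0 255.255.0.0
 ! summarize backbone/core space toward the pod, keeping the firewalls' tables small
 area 0 range 10.255.0.0 255.255.0.0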

[Diagram: detailed pod layout showing per-chassis VRFs, firewalls, and ADCs]
This drawing shows an in-depth look at the makeup of a pod and how each chassis is composed of different VRFs, with connectivity to different areas of the network. Notice that the CORE VRF has links in both OSPF area 0 and the local OSPF area (area 10 in this example). Firewalls and ADCs (Application Delivery Controllers, more commonly called load balancers) are connected to each VRF. Ideally, each individual appliance would be connected to both chassis through some sort of etherchannel (shown below). The firewall provides the only route between the VRFs that exist on the chassis.
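Here is a rough sketch of that dual-area CORE VRF alongside one Access VRF on a single chassis (process IDs, networks, and VRF names are illustrative assumptions). The Core-facing link sits in area 0, the firewall-facing links sit in area 10, and because the Access VRF only ever peers with the firewall, the firewall remains the sole path between VRFs:

router ospf 10 vrf CORE
 capability vrf-lite
 ! uplink toward the Core lives in the backbone
 network 10.255.1.0 0.0.0.3 area 0
 ! link toward the firewall lives in the pod's local area
 network 10.10.255.0 0.0.0.3 area 10
!
router ospf 20 vrf ACCESS-APP
 capability vrf-lite
 ! the Access VRF's only OSPF neighbor is the firewall, also in area 10
 network 10.10.254.0 0.0.0.3 area 10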

[Diagram: appliance connected to both chassis via multichassis etherchannel]

Keep in mind that this data center design review is limited to “traditional” methods, which come with inherent technical limitations. For example, it gets difficult to elegantly grow a pod past the capacity of the distribution chassis switches without having to lean on older methods like switch chaining and STP. Technologies like Software Defined Networking (SDN) and VXLAN promise to eliminate (or at least reduce) the shortfalls of traditional data centers. The principles discussed in this overview still apply to data centers leveraging newer technologies.

In the next lab overview, I will dive deep into the actual lab built to showcase the traditional data center described here. I will not only discuss the diagrams, but also review and share the device configurations along with the GNS3 topology files.


