
Why you should care about logical separation, and how you can do it in software

How software can be used to separate development, test and production environments, and in doing so save up to 50% of infrastructure costs. By Bart Coole, country manager at VMware Belgium & Luxemburg

January 22, 2019

Security has always been a must-have, in fact a necessity, for business and indeed civilisation itself. No fortified castle, no community. No alarm, no jewels. No perimeter security, certain data breach. You get the point.

Security has been solved for much of the physical world. But as business success has become increasingly reliant on delivering applications into the hands of users as fast as possible, the breadth and complexity of IT has changed the parameters and role of security. Networking and security platforms like VMware NSX are now ‘enablers’ of this app development across multiple environments, not just the ‘security guards’. Now, software is the only way apps and data can be connected across the network, from the datacentre to the edge, and across clouds. Just think: Gartner predicts that by 2022, 50% of enterprise-generated data will be created and processed outside a traditional centralised data centre or cloud.


Networking and security virtualisation is at the core of today’s data centre modernisation, all enabled through software. Neil Symons, consultant and lead solutions architect, Information Application Services (IAS), Army HQ, remembers his pre-NSX days of having a standalone dev and test system and a production and pre-production system with “expensive shared load balancers and firewalls so that any central release management, collaboration or defect and test management tool had to be deployed in isolation to the individual instances”. This, he reveals, resulted in a “significant licensing and hardware overhead and a management and configuration nightmare.”

With NSX, the Army now have a single environment from dev through to production, utilising a predominantly software-defined approach. The savings have come from capex as well as from a significant reduction in administration resource and licences.

Neil says: “Achieving logical separation through VMware SDDC technologies has brought significant benefits to the Army. We have been able to revolutionise our through-life application delivery pipeline, while maintaining consistency and removing duplication of the software required to support a stove piped architecture. This has resulted in increased delivery velocity and enhanced security, while reducing the costs and enabling the organisation to do more with less.”


So what is the logical separation approach, and why does it matter?

The big problem for many organisations is that they have traditionally separated various elements of their infrastructure stack to ‘de-risk’ new app development. They use physical separation between their development, test, pre-production and production tiers so that one stage can’t affect another. If an organisation wants to introduce a new service or app, it doesn’t want to make a change that could in any way risk or impact current services. So instead, the app moves between the different stages, from development to production, once it has reached its accepted milestones. But each stage requires a different ‘silo’, complete with hardware, networking, compute, storage and security.


This already feels very antiquated, cumbersome and expensive, given that the Software-Defined Data Centre has already virtualised these environments.

We are able to use the abstraction layer to still give organisations the release model of the three environments, but rather than spending money separating them out into physical silos, they can do this effectively, and at lower cost, in software. The first thing you do is abstract networking into software; then you can isolate those networks and add in the security. There is no longer a physical separation but a ‘logical’ separation, all enabled through software. Each of the three environments is still isolated, but all with the same operational controls.
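To make that concrete, here is a minimal, vendor-neutral sketch in Python of what logical separation looks like as a model. The class names (Environment, OverlayNetwork, SecurityPolicy) are purely illustrative and are not the NSX API; the point is that one release-pipeline template is instantiated three times in software, each tier on its own isolated overlay network but governed by the same operational controls.

```python
# Illustrative sketch only: a vendor-neutral model of logical separation.
# These classes are hypothetical, not the NSX API; they show one template
# instantiated three times in software instead of three hardware silos.
from dataclasses import dataclass, field


@dataclass
class SecurityPolicy:
    name: str
    rules: list = field(default_factory=lambda: ["deny-all-by-default"])


@dataclass
class OverlayNetwork:
    vni: int    # overlay identifier; isolates traffic in software
    cidr: str   # address space; can even repeat across environments


@dataclass
class Environment:
    name: str
    network: OverlayNetwork
    policy: SecurityPolicy   # the *same* operational controls in every tier


def build_release_pipeline() -> list[Environment]:
    shared_policy = SecurityPolicy("baseline-controls")
    return [
        Environment("dev",        OverlayNetwork(vni=5001, cidr="10.0.0.0/24"), shared_policy),
        Environment("test",       OverlayNetwork(vni=5002, cidr="10.0.0.0/24"), shared_policy),
        Environment("production", OverlayNetwork(vni=5003, cidr="10.0.0.0/24"), shared_policy),
    ]


if __name__ == "__main__":
    for env in build_release_pipeline():
        print(env.name, env.network.vni, env.network.cidr, env.policy.name)
```

The design choice the sketch highlights is that separation becomes a property of configuration, not of cabling: each tier differs only in its overlay identifier, while the security baseline is defined once and applied everywhere.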

So, why does this matter as a use case? There are a multitude of compelling reasons. The most obvious is a massive reduction in networking and security hardware: physical firewalls, switches and load balancers. So, less hardware to purchase, and less money to run and manage it.

Using software, you can make much better use of compute resources. In pre-production, you might build up huge capacity at certain development peaks, but for 80% of the time it just sits there, idle. If there are fewer compute platforms, there are fewer licences to pay for, as well as lower opex from managing fewer servers.

Software is simply more elastic and offers greater flexibility if you want to flex the different tiers in any way, for example should you only need a test environment for a short time. And should you need to extend it, you can, and then go back to normal.


Let’s touch on security, of course. Each workload can have its own policy enforcement, enabling micro-segmentation, which leverages network virtualisation to segment environments around logical boundaries such as applications and regulatory scopes.
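As a rough illustration of what per-workload enforcement means, the sketch below models a tag-based rule evaluated at every workload rather than at a perimeter appliance. The Workload class and the allow() check are hypothetical and are not how NSX distributed firewall rules are actually configured; they simply show traffic being permitted only inside a logical boundary (same application, same environment).

```python
# Illustrative sketch only: per-workload (micro-segmentation) policy as a
# tag-based check applied at every workload, not at a single perimeter device.
from dataclasses import dataclass


@dataclass(frozen=True)
class Workload:
    name: str
    app: str          # logical boundary, e.g. the application the VM belongs to
    environment: str  # dev / test / production


def allow(src: Workload, dst: Workload) -> bool:
    # Traffic is only allowed when both ends sit inside the same logical
    # boundary: same application and same environment tier.
    return src.app == dst.app and src.environment == dst.environment


web_dev = Workload("web-01", app="hr-portal", environment="dev")
db_dev  = Workload("db-01",  app="hr-portal", environment="dev")
db_prod = Workload("db-02",  app="hr-portal", environment="production")

assert allow(web_dev, db_dev)        # same app, same tier: permitted
assert not allow(web_dev, db_prod)   # crosses the dev/production boundary: blocked
```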

There is also the benefit of ‘native isolation’. Networks created in the overlay are transparent to the network in the underlay, meaning you can have overlapping IP addresses. As your new service or app moves between the development, test and production environments, it keeps the same IP address. That is no small feat: there is no re-IPing as an app moves between clouds or to another datacentre, communication between tiers only happens where it is explicitly permitted, and a lot of time, and quite frankly a big headache, is eliminated.
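Why overlapping addresses are safe in an overlay can be shown with a toy model: forwarding state is keyed by the pair (overlay identifier, IP address) rather than by the IP alone, so the same address can live in the dev and production overlays without a clash. The table below is purely illustrative and is not how NSX stores its forwarding state internally.

```python
# Illustrative sketch only: overlapping IP addresses in separate overlays.
# Entries are keyed by (overlay id, IP), so identical addresses in different
# overlays never collide and an app keeps its IP as it is promoted.
forwarding_table: dict[tuple[int, str], str] = {}


def learn(vni: int, ip: str, host: str) -> None:
    forwarding_table[(vni, ip)] = host   # scoped by overlay identifier


def lookup(vni: int, ip: str) -> str:
    return forwarding_table[(vni, ip)]


learn(5001, "10.0.0.10", "dev-host-a")    # app VM in the dev overlay
learn(5003, "10.0.0.10", "prod-host-b")   # same IP, production overlay

assert lookup(5001, "10.0.0.10") == "dev-host-a"
assert lookup(5003, "10.0.0.10") == "prod-host-b"   # no re-IPing needed
```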

Using NSX for logical separation is genuinely delivering fiscal and operational results, and to hammer the point home, this is all software-enabled. It simply can’t be done this way in hardware.

