
How next-generation virtualisation is driving digital transformation

You've probably heard the term "next-generation" bandied about a lot, but what does it mean when we talk about next-generation virtualisation?

Server virtualisation is well established, having been around for many years, and is one of the most transformational technologies the world has ever seen. It followed the classic technology adoption trajectory: innovators first, then early adopters, the early and late majority, and finally the laggards, with Gartner proclaiming the market "mature" as far back as 2016 after seeing saturation levels of between 75% and 90% of all servers virtualised at many firms.

As with all technology, virtualisation must evolve to keep up with the demands of the modern business in today's digital economy. To compete in this digital, software-driven and fast-paced world, businesses need to accelerate the development and delivery of the applications and services they provide. To cope with this speed of development and with new development approaches, such as cloud-native and agile DevOps methodologies, data centres need to be fully virtualised, software-defined and highly automated, with consistent application delivery across multiple cloud environments. Next-generation applications need a next-generation infrastructure to run on; in other words, virtualisation must grow up and become next-generation virtualisation.

Next-generation virtualisation brings many benefits, the first being support for a cloud-native approach, which relies on containerised workloads. Containers package an application together with all the dependencies and resources it needs to run in a single, portable unit, which makes containerised workloads simple to move from one cloud environment to another. They are another revolutionary technology taking data centres into the future.
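As an illustration of that portability, here is a minimal sketch using the Docker SDK for Python (the image name is just an example; it assumes a local Docker daemon and the docker package installed). The same call runs the same bundle on any host with a container runtime, whichever cloud it sits in:

```python
import docker  # pip install docker

client = docker.from_env()

# Run a stock image: the container bundles the application and everything
# it needs, so this identical call works on any Docker host, in any cloud.
container = client.containers.run(
    "nginx:alpine",           # image reference: app plus its dependencies
    detach=True,
    ports={"80/tcp": 8080},   # expose the service port on the host
)
print(container.short_id, container.status)
```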

Maintaining a next-generation virtualised data centre is easier than maintaining a traditional one, and tasks like installing, updating, provisioning, deploying and moving workloads around are faster and easier to manage. Automation features heavily in a next-generation virtualisation platform, with many management tasks automated to help administrators accomplish work and maintain performance with minimal intervention.

This frees up IT time for more strategic tasks and in turn makes the IT team more productive. This automated, policy-driven approach also means that enhanced security features can be baked in at scale, at both the infrastructure level and the data level.

Next-generation virtualisation brings greater insight and analytics features to help administrators understand how their infrastructure is performing and to avoid disruption to services. Capacity planning features built into next-generation virtualisation give administrators a clear view of performance trends, extended forecasts and projections, and the ability to model scenarios to demonstrate outcomes. This level of visibility helps organisations to reduce risk and prevent problems.

When you consider the modern business's requirements for a secure, agile, flexible, scalable, powerful, resilient architecture to power its next-generation applications across a multitude of environments, next-generation virtualisation is the obvious choice.
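As a footnote on the analytics side, the trend projection described above boils down to calculations like this minimal sketch (all utilisation figures are hypothetical):

```python
import numpy as np

# Hypothetical weekly average cluster CPU utilisation (%)
weeks = np.arange(12)
cpu = np.array([52, 54, 55, 57, 58, 60, 61, 63, 64, 66, 68, 69])

# Fit a linear trend, then project when utilisation crosses an 85% ceiling
slope, intercept = np.polyfit(weeks, cpu, 1)
weeks_until_ceiling = (85 - intercept) / slope - weeks[-1]

print(f"Growth: {slope:.2f}%/week; ~{weeks_until_ceiling:.0f} weeks of headroom left")
```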

Find out how VMware are leading the field of Next-Generation Virtualisation in the Dummies Guide.

Useful Links

Virtualisation Market Now Mature, Gartner Finds

Server virtualisation trends: Is there still room to grow?

Technology Adoption Life Cycle

The 10 things you must consider for your cloud adoption strategy

Making the shift from on-premises technology infrastructure to a cloud-based architecture, specifically infrastructure-as-a-service (IaaS), is a significant decision, and one that should be carefully considered for its impact across an organisation. When procuring infrastructure-as-a-service, consider the following:

1. Cloud computing is different
Using IaaS is not just about the location of equipment. Cloud services are priced, bought and used differently, with an on-demand utility model that is designed to optimise spend, but this needs a different approach from traditional technology infrastructure.

2. Early planning is essential
All key stakeholders across an organisation should be engaged in the decision to move to a cloud model. There will be significant implications for finance, IT, operations and compliance, from board level down.

3. Flexibility
When planning an IaaS procurement, focus on performance at an application level as your requirement and allow the Cloud Service Provider (CSP) to make recommendations based on their experience and understanding of best practice. Prepare to be flexible and to adapt your expectations of the actual equipment and procedures, according to the advice given.

4. Separate cloud infrastructure and managed services
Keep procurements of IaaS and managed service labour separate if possible. It will make it easier to agree and monitor specific IaaS Service Level Agreements and terms and conditions.

5. Utility pricing
As mentioned, IaaS is priced using a utility, or pay-as-you-go, model, which allows you to make maximum efficiency gains. Customised billing and transparent pricing models allow you to continually evaluate whether you are receiving the best value for money and maximising usage, as the simple example below shows.
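A back-of-envelope comparison (all figures hypothetical) of pay-as-you-go usage against equivalent always-on capacity:

```python
# Hypothetical utility-pricing comparison: you pay only for hours used.
hourly_rate = 0.10           # $ per instance-hour, on demand
hours_used_per_month = 300   # e.g. a batch workload, not running 24/7
fixed_monthly_cost = 180.00  # equivalent always-on, owned capacity

utility_cost = hourly_rate * hours_used_per_month
print(f"Utility: ${utility_cost:.2f}/month vs fixed: ${fixed_monthly_cost:.2f}/month")
# Transparent per-hour billing makes this evaluation easy to repeat monthly.
```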

6. Industry standards
Look out for CSPs with industry-standard accreditations that you can trust. Cyber Essentials, ISO 27001, ISO 9001, SSAE, PCI DSS and GDPR compliance are all good starting points and will save you time in re-evaluation.

7. Share responsibility
A CSP ensures that the underlying infrastructure is secure and controlled, but it is up to you to architect your environment correctly and to use secure, controlled applications. Be clear about what is your responsibility and what is the responsibility of the CSP.

8. Ensure cloud data governance
Following on from cybersecurity and data protection, it is your responsibility to ensure cloud data governance controls are in place. Find out what identity and access controls the CSP offers and make provision for additional data protection, encryption and validation tools.
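As a sketch of one such control, using AWS's boto3 SDK (the bucket name is hypothetical; it assumes boto3 is installed and credentials are configured), you could enforce encryption at rest as a bucket-wide default:

```python
import boto3  # pip install boto3

s3 = boto3.client("s3")

# Enforce default server-side encryption on a (hypothetical) bucket, so
# every object written is encrypted at rest even if the writer forgets to ask.
s3.put_bucket_encryption(
    Bucket="example-backup-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```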

9. Agree commercial item terms
Cloud computing is a commercial item and should be procured under appropriate commercial terms and conditions. Ensure you utilise these to best effect.

10. Define cloud evaluation criteria
In order to ascertain whether you have achieved your objectives and performance requirements, you should specify your cloud evaluation criteria at the outset. The National Institute of Standards and Technology outlines some benefits of cloud usage and is a good starting point for defining cloud evaluation criteria.

Use this list before you start planning your IaaS project and it will help you define a successful procurement strategy.

The questions you should ask when planning your tape-to-cloud migration

With the huge advances in public cloud security, efficiency and value for money, many organisations are now planning to move towards cloud backup strategies, which are less complex and more reliable than traditional tape backup solutions. But migrating your backup to cloud from tape can be a big project and does require careful scoping. There are some key questions to ask before embarking on a migration from tape to cloud, which will help you to understand the scale of the project.

Firstly, do you need to move all historical backups to the cloud, or could you start backing up new data to the cloud and gradually reduce on-premises tape dependency as data reaches end of life? This is a straightforward approach, but it depends on the business being comfortable with different recovery point and recovery time objectives (RPOs and RTOs) for new versus aged data.
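Ageing data out of warmer, more expensive tiers can be automated with a lifecycle policy. A sketch using AWS's boto3 SDK (bucket name, prefix and retention periods are all hypothetical placeholders):

```python
import boto3  # pip install boto3

s3 = boto3.client("s3")

# Hypothetical retention schedule mirroring a tape rotation: backups move to
# cold storage after 90 days and are deleted after roughly seven years.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "age-out-backups",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 2555},
            }
        ]
    },
)
```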

Next, what is the best way of migrating a large data set to the cloud initially? You can use network transfer or a physical transport service. High-speed internet transfer is generally only an option for smaller data sets, as it can be very time-consuming for larger ones.
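A rough transfer-time estimate makes the trade-off clear (all figures hypothetical):

```python
# Back-of-envelope transfer time for an initial bulk migration.
dataset_tb = 50        # size of the historical backup set
link_mbps = 500        # nominal WAN bandwidth
efficiency = 0.7       # protocol overhead and contention

megabits = dataset_tb * 8_000_000          # 1 TB = 8,000,000 megabits
seconds = megabits / (link_mbps * efficiency)
print(f"~{seconds / 86400:.1f} days to move {dataset_tb} TB over the wire")
# At roughly 13 days for 50 TB, physical transport starts to look attractive.
```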

When you move data from tape to cloud, it can be prudent to perform any indexing, transcoding or repackaging that will make it easier to extract value from the data once it is in the cloud.

Do you know if your current backup vendor can natively support a cloud backup store, or are new feature licenses or major version updates required? Once you’ve migrated, can you restore to cloud virtual machines or will data restore to a physical machine?

Can you write data directly to the cloud and do your backup windows support that too? Should you use a traditional storage protocol such as a network file system (NFS)?

Do you need to change your workflows to suit the cloud environment, or will your cloud solution appear as a virtual tape library allowing you to keep the same processes and save time and management overhead?

Does your cloud backup provider give you the scalability and elasticity needed to make changes without disrupting backup activity? Enterprise cloud providers should make such provision; AWS, for example, offers Amazon Elastic Compute Cloud (EC2), which can flex to keep processes consistent.

Will backup data be accessed in the cloud, or pulled back and accessed on-premises? The answer could affect the services you purchase, from seldom-accessed archives to a virtual tape library holding frequently accessed, recent files.

Can you leverage the cloud to simplify widely distributed backup workflows?

Many cloud providers offer complementary services such as analytics, data lifecycle management or compliance features. Do you need these as part of your backup solution?

Could a cloud integrator help you to scope, implement and migrate your current backup environment across to the cloud?

Getting answers to these questions now will save immeasurable time during and after your move to the cloud and can help you to maximise your budget, by cutting out unnecessary services.

The benefits of backing up to the cloud versus backing up to tape

Tape has been the backup medium of choice for over 60 years, thanks to its portability and reliability. Tape technology has developed and density has increased, keeping the cost per gigabyte low, but the complexity and time-consuming nature of tape management mean many organisations are looking for an alternative.

A traditional tiered storage architecture uses local disk or networked storage for speedy access to primary data, then periodically sends snapshots or data to a backup server that writes the data to magnetic tape. Tapes are usually stored onsite in tape backup libraries, and sometimes replicated to an offsite location via WAN, or even moved manually to an offsite storage facility.

Cloud backup offers organisations a new way of backing up their data, removing the complexity and risk of manually moving and handling magnetic tapes and improving the performance, availability and reliability of backups.

Whilst the cost of tape storage has come down, the costs associated with handling, managing and storing tape media have been increasing. At the same time, the cost of public cloud services has been falling, allowing customers to take advantage of economies of scale and making cloud an accessible and affordable backup solution. Cloud requires no upfront capital investment and no media or configuration costs, although retrieval charges vary by storage tier and should be factored into any comparison.

Using the public cloud to store backup data is generally a very reliable solution, with some CSPs quoting durability of 99.999999999% (eleven nines). The chance of data loss through infrastructure failure is therefore incredibly low. The availability that public cloud providers can achieve is generally higher than most organisations can implement in house, with multi-site replication and failover of every component.
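To put eleven nines in perspective, a quick worked example (the object count is hypothetical):

```python
# What 99.999999999% annual durability implies at scale (illustrative only).
annual_loss_probability = 1e-11   # chance a given object is lost in a year
objects_stored = 10_000_000

expected_losses = annual_loss_probability * objects_stored
print(f"Expected object losses per year: {expected_losses:.6f}")
# ~0.0001: on average, one lost object every ~10,000 years at this scale.
```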

Magnetic tape, on the other hand, depends on mechanical equipment that can fail and lead to data loss or unavailability. The quality of data stored on tape can also erode if it is retrieved and read too often; more robust tape intended for frequent use is available, but the cost is often prohibitive.

Tape can perform well for sequential reads and writes but can be unpredictable, and retrieval of data is particularly slow, taking anywhere from hours to days for large datasets. When retrieving data from the cloud, organisations are often limited more by WAN speeds than by native storage performance, and there are also lower-cost, longer-term storage options available, which inevitably take longer to restore from.

Whatever requirements an organisation has, there are many reasons why a public cloud backup solution is the right option. Cost, performance, availability, reliability and the ability to restore quickly and easily are all big reasons to consider cloud over tape.

Understanding the benefits of selecting a cloud-native development platform

“Cloud-native” is a big trend in software development right now, but what does it mean and how do you adopt a cloud-native approach in your organisation?

It's not about simply moving legacy applications to the cloud; it goes much deeper than that. IDG states that cloud-native is about how applications are developed, not where. Cloud-native can be described as an approach to application development and deployment that takes full advantage of the benefits of cloud, resulting in applications that are fully optimised for distributed, cloud-based deployments. By comparison, typical on-premises legacy apps are built to run on a centralised system, as a single piece of code, which doesn't necessarily translate well to a distributed cloud environment. Nor is it easy to bring these applications to market quickly, fix issues, or roll out new releases regularly.

What then are the main components of a cloud-native development approach?

The cloud-native approach uses a services-based architecture to develop applications. A service in this case is a self-contained process or activity, packaged in a container that isolates it and provides just the resources that service needs. A collection of these loosely coupled, self-contained services makes up the application. Any service can be tested, released, replaced or updated independently of the overall application, making it much more agile and flexible. In addition, the containers not only isolate each service but also make it more portable.
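A minimal sketch of such a service (using Python and Flask purely as an example stack; the endpoint and port are hypothetical). On its own it does very little, which is exactly the point: it can be built, tested and shipped in its own container, independently of every other service:

```python
from flask import Flask, jsonify  # pip install flask

app = Flask(__name__)

@app.route("/health")
def health():
    # Each service exposes its own health check so the platform can probe,
    # replace or update it without touching the rest of the application.
    return jsonify(status="ok")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)  # one service, one container, one port
```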

Cloud-native apps use DevOps automation for speed, quality and agility. By automating DevOps processes, releasing more frequently, monitoring performance and user experience, and making adjustments before deploying the next release, developers can bring applications to market more quickly and improve the user experience each time. Doing this at the microservice level makes the speed to market greater still, and any disruption can be contained and minimised.
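What that automation might look like in miniature: a hypothetical release gate that ships a canary, checks a metric and only then promotes. The functions here are stand-ins for calls to a real deployment platform and monitoring system:

```python
import random
import time

def deploy(version, traffic_share):
    # Stand-in for a real platform API call (e.g. shifting router weights).
    print(f"Routing {traffic_share:.0%} of traffic to {version}")

def observed_error_rate():
    # Stand-in for querying a monitoring system; here it is just simulated.
    return random.uniform(0.0, 0.02)

def canary_release(version, threshold=0.01):
    deploy(version, 0.05)                # release to a small slice first
    time.sleep(1)                        # let metrics accumulate
    if observed_error_rate() < threshold:
        deploy(version, 1.00)            # healthy: promote to all traffic
    else:
        deploy("previous-stable", 1.00)  # unhealthy: roll back immediately

canary_release("v2.3.1")
```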

It's a tricky leap to make, however: everything is different in a cloud-native environment. With so many microservices being developed independently, on different platforms, sometimes in different languages, with multiple release schedules, and all requiring development, test and production environments, things can spiral out of control quite quickly.

In addition, it’s a rare business that doesn’t have some legacy applications that they need to modernise. They may have decided to adopt a cloud-native approach to all new applications but struggle to know what to do with their legacy applications.

The organisation may also want to maintain a hybrid environment, where they have some infrastructure on premises, some in private cloud and some in public cloud services. The prospect of changing applications to suit different platforms, the potential for different monitoring and operational tools for each platform, can seem daunting.

Looking at a platform like Red Hat OpenShift, a self-service platform where developers can build and run containerised applications, is a great place to start. OpenShift has been built specifically for cloud-native application development and provides automated continuous integration and continuous delivery (CI/CD) build and deployment pipelines at the click of a button.

In addition, Red Hat OpenShift Application Runtimes provides a number of prescriptive, guided paths for developers to move through the development process with ease. When developers use these runtimes on the development platform, they can develop and deploy faster and with less risk. From microservices development to migrating existing applications to the cloud, there is a runtime to guide you through.

Read more about Red Hat’s developer platform and runtimes here.

Useful Links

Why Developers and Business Leaders Are Going Cloud Native

What is cloud native?
