
Best Practice for your Next-Generation Virtualisation Platform

You’ve decided that you need a virtualised, next-generation data centre, so whether you’re starting afresh or you’re updating what you already have in place, what are the key preparatory steps you need to take?

First you need to prepare your physical servers. These will become the hosts for your hypervisor in a virtualised environment, so ensure that firmware and BIOS are updated, enable any hardware-assisted virtualisation features that are available, and make sure that you have installed all the drivers recommended for the hypervisor.
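On a Linux host, one quick check for hardware-assisted virtualisation is whether the CPU advertises the vmx (Intel VT-x) or svm (AMD-V) flag. A minimal sketch, parsing /proc/cpuinfo-style text (the helper name is illustrative):

```python
from typing import Optional

def hw_virt_flag(cpuinfo_text: str) -> Optional[str]:
    """Return 'vmx' (Intel VT-x) or 'svm' (AMD-V) if the CPU flags
    line advertises hardware-assisted virtualisation, else None."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            if "vmx" in flags:
                return "vmx"
            if "svm" in flags:
                return "svm"
    return None

# On a real host, pass open("/proc/cpuinfo").read()
sample = "processor : 0\nflags : fpu vme de pse msr vmx ssse3"
print(hw_virt_flag(sample))  # vmx
```

If the flag is missing, it may simply be disabled in the BIOS rather than absent from the CPU, which is another reason to review firmware settings first.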

The next step is to install the hypervisor. Follow the vendor’s guidance documentation and record key information such as host names, IP addresses etc. Launch your admin platform, adjust any security permissions, such as firewalls, and ensure your storage arrays are discoverable and accessible. This is important because I/O throughput can be adversely affected by the storage configuration. If the storage solution isn’t configured correctly for the workload and the throughput and IOPS aren’t matched, performance at the front end will be affected.
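The point about matching throughput and IOPS can be made concrete with a simple sanity check before placing a workload on an array. A sketch, with a hypothetical 20% headroom figure:

```python
def storage_meets_workload(array_iops, array_mbps,
                           required_iops, required_mbps,
                           headroom=0.2):
    """True if the array covers the workload's IOPS *and* throughput
    with some safety headroom; if either is undersized, front-end
    performance will suffer."""
    factor = 1 + headroom
    return (array_iops >= required_iops * factor
            and array_mbps >= required_mbps * factor)

print(storage_meets_workload(12000, 800, 9000, 500))  # True
print(storage_meets_workload(12000, 500, 9000, 500))  # False - throughput short
```

The second call fails even though the IOPS figure is generous, which is exactly the mismatch the paragraph above warns about.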

The next step is to ensure your network is configured appropriately for a next-generation virtualisation platform. If a virtualised server can’t communicate properly over the network, the benefits will be lost. Ensure that the appropriate network interface cards and adaptors are installed, and test end-to-end connectivity once the hypervisors are connected to the network.

Cybersecurity is just as important in a virtualised, next-generation environment as it is in a physical data centre, so the apps and OS need to be secured and protected by isolating management traffic from the virtual machine traffic. A single sign-on solution can ensure that only those with the correct permissions can access management information.

A virtualised environment is flexible enough to improve the performance of applications that are latency sensitive, but some tweaks may be necessary to power management settings and network adaptors to ensure they aren’t slowing things down.

Following the migration of your servers to your new virtualised environment, there are many advanced management tools you can utilise for high availability, load balancing and networking. Enable monitoring and capacity planning tools which can enable machine learning and smart-management of the environment. Granular reporting on uptime, performance, capacity and efficiency can really help prove the value of your investment in the next-generation infrastructure.
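As an illustration of what a capacity planning tool does under the hood, here is a toy least-squares projection of when storage fills up (the function name and figures are invented for the example):

```python
def days_until_full(usage_by_day, capacity_gb):
    """Project how many days until storage hits capacity, using a
    simple least-squares trend over daily usage samples (GB).
    Returns None if usage is flat or shrinking."""
    n = len(usage_by_day)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(usage_by_day) / n
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(xs, usage_by_day))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var  # GB consumed per day
    if slope <= 0:
        return None
    return (capacity_gb - usage_by_day[-1]) / slope

print(days_until_full([100, 110, 120, 130], 200))  # 7.0
```

Real platforms layer seasonality and scenario modelling on top, but the underlying idea of trending and extrapolating is the same.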

Set out the settings that all your virtual machines will need and document them as a template, so that every VM is optimised from the outset. To keep your VMs running at peak performance, schedule backups and virus scans for off-peak hours.
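The "document the settings as a template" advice can even be captured in code. A sketch using a frozen dataclass as the golden template (the field values are illustrative, not recommendations):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class VmTemplate:
    """Golden settings every VM starts from; values are examples."""
    vcpus: int = 2
    memory_gb: int = 8
    disk_gb: int = 60
    backup_window: str = "02:00-04:00"   # off-peak
    av_scan_window: str = "03:00-05:00"  # off-peak

base = VmTemplate()
web_vm = replace(base, vcpus=4)  # deviate deliberately, per workload
print(web_vm.vcpus, web_vm.backup_window)  # 4 02:00-04:00
```

Freezing the template means deviations have to be made explicitly with `replace`, which keeps the fleet consistent by default.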

When the virtualised data centre is up and running, you can look at the high-availability, replication, or fault tolerance tools that are fundamental to the performance of the business.

In short, by leveraging a next-gen virtualisation platform properly you should be able to deliver a higher quality service, with less risk, at a lower cost. What’s not to like about that?

Going beyond server virtualisation

Next-generation virtualisation goes way beyond server virtualisation and provides the platform for virtualised storage, virtualised networking and cloud orchestration tools that are essential elements of a software-defined data centre. Cloud-native architectures, containerised data and hyperconvergence are some of the technology approaches that a next generation, software-defined data centre can enable and the benefits of these are huge. Businesses are more agile, flexible and dynamic. Administrators can centrally control their infrastructure, regardless of where it is situated and applications move to centre stage, repositioning technology as a business enabler, not a separate department.

One huge trend in the data centre is hyperconvergence, which relies on software-defined storage, whereby the hypervisor dynamically allocates the right amount of storage to the applications that are running in the data centre. Additional compute, storage and management resources are delivered by adding additional server hardware, which is then viewed as shared storage and allocated to the apps as they need it, via the hypervisor. Without this level of next-generation virtualisation, achieving a hyperconverged infrastructure is not possible.
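The scale-out behaviour described above can be sketched as a toy pool: adding a node grows one shared capacity figure, and slices of it are handed to applications on demand (the class and names are illustrative):

```python
class StoragePool:
    """Toy software-defined storage: every node added to the cluster
    contributes capacity to one shared pool, and slices are handed
    to applications as they need them."""

    def __init__(self):
        self.capacity_gb = 0
        self.allocations = {}

    def add_node(self, node_capacity_gb):
        self.capacity_gb += node_capacity_gb

    def allocate(self, app, gb):
        used = sum(self.allocations.values())
        if used + gb > self.capacity_gb:
            raise RuntimeError("pool exhausted - add another node")
        self.allocations[app] = self.allocations.get(app, 0) + gb

pool = StoragePool()
pool.add_node(1000)
pool.allocate("erp", 400)
pool.add_node(1000)              # scale out by racking one more server
pool.allocate("analytics", 1200)
print(pool.capacity_gb, sum(pool.allocations.values()))  # 2000 1600
```

Note that the second allocation only succeeds because a node was added first: capacity and compute grow together, which is the essence of the hyperconverged model.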

Virtualised networks work in a similar fashion, using the hypervisor to pool the network resources and attach them to individual virtual machines based on the defined policies for each application. The underlying network acts as a simple packet forwarding mechanism, whilst the management, provisioning and security features are all dealt with by the hypervisor. Because virtualised networks completely decouple the network functionality from the network hardware, they differ from software-defined networks, which use clever software to manage networking hardware rather than separating it entirely. The benefits of this hardware decoupling are realised when moving virtual machines between logical domains, as there is no network reconfiguration required and the administration overhead savings can be immense.
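In miniature, "attach network resources based on per-application policy" looks like a lookup performed at VM attach time rather than a change on any physical switch (the policies here are invented examples):

```python
# Hypothetical per-application-tier network policies
POLICIES = {
    "web": {"vlan": 10, "firewall": "dmz"},
    "db":  {"vlan": 20, "firewall": "internal"},
}

def attach_network(vm_name, app_tier):
    """Return the virtual NIC configuration a hypervisor would apply
    from policy - no physical switch is reconfigured."""
    policy = POLICIES[app_tier]
    return {"vm": vm_name, **policy}

print(attach_network("web01", "web"))
# {'vm': 'web01', 'vlan': 10, 'firewall': 'dmz'}
```

Because the policy travels with the application tier, moving the VM to another host or domain reapplies the same configuration with no manual rework.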

Likewise, cloud orchestration is a buzzword right now, and a next-generation virtualisation platform really does provide the right foundation for a cloud-native environment, where different cloud services can be combined and managed simply. Making tangible reductions in administration overheads once again becomes a reality if you have a platform that can help you to manage all of your cloud services. The use of cloud services is growing massively, especially in the public cloud market, as concerns around security and privacy are allayed and confidence in the public cloud providers grows. Indeed, Gartner predicts that global public cloud usage will grow by 17.3% in 2019 to total $206.2bn, and many of the businesses using public cloud will also have a mix of SaaS applications, private cloud services and on-premises physical infrastructure. What is already important, and will become even more so, is a platform where everything can be managed without dipping in and out of interfaces and requiring different accreditations and skills.

In short, a next generation virtualised environment is much more than virtual machines. As hardware becomes almost irrelevant, the hypervisor is the powerful tool of choice, and the applications dictate what resources are needed, where and when. The application really does become king.

Gartner Forecasts Public Cloud Revenue to Grow 17.3 Percent in 2019

How next generation virtualisation is driving digital transformation

You will have heard the term next-generation bandied about a lot, but what does it mean when we talk about next-generation virtualisation?

Server virtualisation is well established and has been around for many years now, having been one of the most transformational technologies the world has ever seen. Server virtualisation followed the classic technology adoption trajectory of being utilised by the innovators first, then the early adopters, the early and late majority and finally the laggards, with Gartner proclaiming it “mature” as far back as 2016, after seeing many firms virtualise between 75% and 90% of all their servers.

As with all technology, virtualisation must evolve further to keep up with the demands of the modern business in today’s digital economy. In order to compete in this digital, software-driven, fast-paced world, businesses need to accelerate the development and delivery of the applications and services they provide. To cope with the speed of development and with new approaches to it, such as cloud-native and agile DevOps methodologies, data centres need to be fully virtualised, software-defined and highly automated, with consistent application delivery across multiple cloud environments. Next-generation applications need a next-generation infrastructure to run on; in other words, virtualisation must grow up and become next-generation virtualisation.

Next-generation virtualisation brings many benefits, the first being that it can support a cloud-native approach, which relies on containerised workloads. Containers bundle an application together with all the dependencies and resources it needs in a single package, so containerised workloads are simple to move from one cloud environment to another. It is another revolutionary technology taking data centres into the future.

Maintaining a next-generation virtualised data centre is easier than a traditional data centre, and tasks like installing, updating, provisioning, deploying and moving workloads around are faster and easier to manage. Automation features heavily in a next-generation virtualisation platform, with many management tasks being automated to help administrators accomplish tasks and maintain performance with minimal intervention.

This frees up IT time for more strategic tasks and subsequently makes the IT team more productive. This automated, policy driven approach also means that enhanced security features can be baked in at scale, at both the infrastructure level and the data level.

Next-generation virtualisation brings greater insight and analytics features to help administrators understand how their infrastructure is performing and avoid disruption to services. Capacity planning features built into next-generation virtualisation give administrators a clear view of performance trends, extended forecasts and projections, and the ability to model scenarios to demonstrate outcomes. This level of visibility helps organisations to reduce risk and prevent problems.

When you consider a modern business’s requirements for a secure, agile, flexible, scalable, powerful and resilient architecture to power its next-generation applications across a multitude of environments, next-generation virtualisation is the obvious choice.

Find out how VMware are leading the field of Next-Generation Virtualisation in the Dummies Guide.

Useful Links

Virtualisation Market Now Mature, Gartner Finds

Server virtualisation trends: Is there still room to grow?

Technology Adoption Life Cycle

The 10 things you must consider for your cloud adoption strategy

Making the shift from on-premise technology infrastructure to a cloud-based architecture, specifically infrastructure-as-a-service (IaaS), is a significant decision and one that should be carefully considered for its impact across an organisation. When procuring infrastructure-as-a-service consider the following:

1. Cloud computing is different
Using IaaS is not just about the location of equipment. Cloud services are priced, bought and used differently, with an on-demand utility model that is designed to make your spend more efficient but needs a different approach from traditional technology infrastructure.

2. Early planning is essential
All key stakeholders across an organisation should be engaged in the decision to move to a cloud model. There will certainly be significant implications for finance, IT, operations and compliance, from board level down.

3. Flexibility
When planning an IaaS procurement, focus on performance at an application level as your requirement and allow the Cloud Service Provider (CSP) to make recommendations based on their experience and understanding of best practice. Prepare to be flexible and to adapt your expectations of the actual equipment and procedures, according to the advice given.

4. Separate cloud infrastructure and managed services
Keep procurements of IaaS and managed service labour separate if possible. It will make it easier to agree and monitor specific IaaS Service Level Agreements and terms and conditions.

5. Utility pricing
As mentioned, IaaS is priced using a utility, or pay-as-you-go, model, which allows you to make maximum efficiency gains. Customised billing and transparent pricing models allow you to continually evaluate whether you are receiving best value for money and maximising usage.
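To illustrate the pay-as-you-go point, compare billing a bursty workload only for the hours consumed with running the equivalent capacity all month (the rates are made up for the example, not real price-list figures):

```python
def monthly_cost(hours_used, rate_per_hour, storage_gb=0, rate_per_gb=0.0):
    """Pay-as-you-go: billed for what is consumed, not for hardware
    bought up front. Rates here are illustrative assumptions."""
    return hours_used * rate_per_hour + storage_gb * rate_per_gb

# Bursty workload: 3 instances for 60 hours vs the always-on equivalent
print(monthly_cost(3 * 60, 0.50))   # 90.0
print(monthly_cost(3 * 730, 0.50))  # 1095.0
```

The gap between the two figures is the efficiency gain the utility model offers for workloads that don't need to run around the clock.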

6. Industry standards
Look out for CSPs with industry standard accreditations that you can trust. Cyber Essentials, ISO 27001, ISO 9001, SSAE, PCI DSS and GDPR compliance are all good starting points and will save you time in re-evaluation.

7. Share responsibility
A CSP ensures that infrastructure is secure and controlled, but you ensure that you architect it correctly and use secure, controlled applications. Be aware of what is your responsibility and what is the responsibility of the CSP.

8. Ensure cloud data governance

Following on from cybersecurity and data protection, it is your responsibility to ensure cloud data governance controls are in place. Find out what identity and access controls are offered by the CSP and make provisions for additional data protection, encryption and validation tools.

9. Agree commercial item terms

Cloud computing is a commercial item and should be procured under appropriate commercial terms and conditions. Ensure you utilise these to best effect.

10. Define cloud evaluation criteria
In order to ascertain whether you have achieved your objectives and performance requirements, you should specify your cloud evaluation criteria at the outset. The National Institute of Standards and Technology outlines some benefits of cloud usage and is a good starting point for defining cloud evaluation criteria.
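One lightweight way to make evaluation criteria concrete is to agree weights up front and score measured outcomes against them later. A sketch with invented criteria and weights:

```python
# Example criteria and weights - agree your own at the outset
CRITERIA = {
    "availability": 0.4,
    "cost_vs_budget": 0.3,
    "provisioning_speed": 0.3,
}

def evaluation_score(measurements):
    """Weighted score of measured outcomes, each normalised to 0-1."""
    return sum(CRITERIA[k] * measurements[k] for k in CRITERIA)

print(round(evaluation_score(
    {"availability": 0.99, "cost_vs_budget": 0.8, "provisioning_speed": 0.6}
), 3))
```

Because the weights are fixed before procurement, the score gives a defensible answer to "did the move achieve what we set out to achieve?" rather than a retrospective judgement.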

Use this list before you start to plan your IaaS project and it will help you define a successful procurement strategy.

The questions you should ask when planning your tape-to-cloud migration

With the huge advances in public cloud security, efficiency and value for money, many organisations are now planning to move towards cloud backup strategies, which are less complex and more reliable than traditional tape backup solutions. But migrating your backup to cloud from tape can be a big project and does require careful scoping. There are some key questions to ask before embarking on a migration from tape to cloud, which will help you to understand the scale of the project.

Firstly, do you need to move all historical backups to the cloud, or could you start backing up new data to the cloud and gradually reduce on-premises tape dependency as data reaches end-of-life? This is a straightforward approach but depends on the business being comfortable with different RPOs and RTOs for new versus aged data.

Next, what is the best way of initially migrating a large data set to the cloud? You can use network transport methods or physical transport methods. High-speed internet transfer is usually only an option for smaller data sets, as it can be time-consuming.
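A back-of-the-envelope calculation frames the network-versus-physical decision: estimate how long the wire transfer would take at a sustained fraction of your link speed (the 70% utilisation figure is an assumption, not a measurement):

```python
def transfer_days(data_tb, link_mbps, utilisation=0.7):
    """Days to move a data set over the network, assuming you sustain
    ~70% of nominal link bandwidth (an assumption - measure your own)."""
    bits = data_tb * 8 * 1e12
    seconds = bits / (link_mbps * 1e6 * utilisation)
    return seconds / 86400

print(round(transfer_days(100, 1000), 1))  # 13.2 days for 100 TB at 1 Gb/s
```

If the answer runs to weeks, a physical transport appliance is usually the better option for the initial seed, with the network handling incremental backups thereafter.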

When you move data from tape to cloud, it could be prudent to perform any indexing, transcoding or repackaging that will make it easier to extract value from the data once it is in the cloud.

Do you know if your current backup vendor can natively support a cloud backup store, or are new feature licenses or major version updates required? Once you’ve migrated, can you restore to cloud virtual machines or will data restore to a physical machine?

Can you write data directly to the cloud and do your backup windows support that too? Should you use a traditional storage protocol such as a network file system (NFS)?

Do you need to change your workflows to suit the cloud environment, or will your cloud solution appear as a virtual tape library allowing you to keep the same processes and save time and management overhead?

Does your cloud backup provider give you the scalability and elasticity needed to make changes without disruption to the backup activity? Enterprise cloud providers should have provisions for this; AWS, for example, offers Amazon Elastic Compute Cloud (EC2), which can flex to keep processes consistent.

When accessing backup data, will this be done in the cloud, or will it be pulled back and accessed on-premises? The answer could affect the services you purchase, from archives that are seldom accessed to a virtual tape library that holds frequently accessed, recent files.

Can you leverage the cloud to simplify widely distributed backup workflows?

Many cloud providers offer complementary services such as analytics, data lifecycle management or compliance features. Do you need these as part of your backup solution?

Could a cloud integrator help you to scope, implement and migrate your current backup environment across to the cloud?

Getting answers to these questions now will save immeasurable time during and after your move to the cloud and can help you to maximise your budget, by cutting out unnecessary services.

