5 Steps to Business Continuity Success

If the events of the last year have taught businesses anything, it is the importance of a workable business continuity strategy. The ability to maintain operations in the face of continued uncertainty is now vital to survival.

As you map out your business continuity plan, here are five factors you must address.

1. Backup, Disaster Recovery or Both?

Before the current era of data-driven business, maintaining a complete or incremental daily backup of data was deemed sufficient, because the volumes of information involved could be restored in a matter of hours. But now that recovering huge volumes of data can take days, a backup cannot be your only method of protection.

Disaster Recovery (DR) uses near-live copies of virtual servers and databases that can fail over in the event of an outage. DR can shorten the recovery of operations to mere minutes.

In all likelihood, your business will need a combination of both: backup for regulatory requirements and secondary systems, and DR for mission-critical operations.

2. Inventory your systems

It’s vital to understand your IT systems and how they are used. This will help you categorise them – mission-critical, secondary, archive – and thereby prioritise recovery.

It is also important to assess how often data is updated. This will help you understand how quickly each system must be recovered before permanent data loss occurs, or before operations are badly affected.

Don’t forget to check interdependencies either – your mission-critical systems may rely on less important, secondary applications, as the sketch below illustrates.
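
As a purely illustrative sketch (the system names, tiers, recovery targets and dependencies below are hypothetical, not a recommendation), an inventory along these lines captures each system's tier, recovery objectives and dependencies so that a sensible recovery order can be derived rather than guessed:

    # Hypothetical inventory sketch: categorise systems and derive a recovery order.
    # Tiers, RPO/RTO values and dependencies are illustrative assumptions only.
    from dataclasses import dataclass, field

    @dataclass
    class System:
        name: str
        tier: str                    # "mission-critical", "secondary" or "archive"
        rpo_minutes: int             # maximum tolerable data loss
        rto_minutes: int             # maximum tolerable downtime
        depends_on: list = field(default_factory=list)

    inventory = [
        System("erp", "mission-critical", rpo_minutes=15, rto_minutes=60, depends_on=["auth"]),
        System("auth", "secondary", rpo_minutes=60, rto_minutes=60),
        System("intranet", "secondary", rpo_minutes=1440, rto_minutes=480),
        System("old-crm", "archive", rpo_minutes=10080, rto_minutes=4320),
    ]

    # A secondary system that a mission-critical one depends on must be recovered early too.
    critical_deps = {d for s in inventory if s.tier == "mission-critical" for d in s.depends_on}

    for s in sorted(inventory, key=lambda s: (s.name not in critical_deps, s.rto_minutes)):
        note = " (dependency of a mission-critical system)" if s.name in critical_deps else ""
        print(f"{s.name}: tier={s.tier}, RPO={s.rpo_minutes}m, RTO={s.rto_minutes}m{note}")

The point is simply that a secondary system a mission-critical application depends on needs to sit higher in the recovery order than its own tier would suggest.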

3. Choosing your solutions

When you understand what you have and what you need, you must then choose and implement suitable solutions. As data volumes increase, local storage of backup data becomes a headache – particularly when those archives need to be rotated off-site.

Cloud-based backup and DR services help to address some of these problems. Capacity can scale almost infinitely, and information is stored off-site by design.

You will need to consider issues like bandwidth requirements, time to recovery, support provided during a disaster and how your network and systems need to be reconfigured to support the new backup and replication regime.
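
As a rough, back-of-the-envelope illustration (the data volumes, link speeds and 70% efficiency factor are invented figures), a quick calculation shows why bandwidth deserves early attention when judging time to recovery:

    # Back-of-the-envelope transfer-time estimate; all figures are illustrative.
    def transfer_hours(data_gb: float, link_mbps: float, efficiency: float = 0.7) -> float:
        """Hours needed to move data_gb over link_mbps, assuming ~70% usable throughput."""
        bits = data_gb * 8 * 1000**3                      # decimal gigabytes to bits
        seconds = bits / (link_mbps * 1000**2 * efficiency)
        return seconds / 3600

    for data_gb, link_mbps in [(500, 100), (500, 1000), (5000, 1000)]:
        print(f"{data_gb} GB over {link_mbps} Mbps: ~{transfer_hours(data_gb, link_mbps):.1f} hours")

Even on a gigabit link, multi-terabyte restores take many hours, which is why mission-critical systems tend to need DR replication rather than a bulk restore from backup.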

With those decisions made, you can configure your new DR provisions to begin copying, replicating and backing up data as required.

4. Building out the DR plan

With a robust, secure backup platform in place, you then need to specify how it is to be used. Define what a disaster looks like, when the DR plan is to be executed, and by whom.

Consider all DR scenarios, from a single server failure to a complete blackout, and document the steps required to restore operations as quickly and efficiently as possible. Identify which individuals need to take control of a DR event and provide them with a simple guide or checklist to follow when the time comes.

The DR plan will need to be reviewed as your infrastructure evolves, so ensure the documentation is regularly updated. And don’t forget to keep hard-copy or offline versions of these documents, as they may not be available if your file servers are affected by the outage.

5. Testing the DR plan

The worst time to discover a flaw in your DR plan is during an actual disaster. But if you never actually test your resources and processes, that’s exactly what will happen.

Regular failover testing allows you to verify that systems designed to take over in the event of a server issue are working correctly. Importantly, failover tests can be conducted in real-time without disrupting operations because they simply confirm that the replicated systems are operating correctly.
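
As one possible way to automate the routine part of a failover test (the replica hostnames and health endpoints below are hypothetical placeholders), a short script can confirm that each replicated system responds before a fuller test is attempted:

    # Minimal replica health-check sketch using only the Python standard library.
    # The replica URLs below are hypothetical placeholders.
    import urllib.error
    import urllib.request

    replicas = [
        "https://dr-web01.example.internal/health",
        "https://dr-db01.example.internal/health",
    ]

    for url in replicas:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                status = resp.status
            ok = status == 200
            print(f"{url}: {'OK' if ok else 'unexpected status'} ({status})")
        except (urllib.error.URLError, OSError) as exc:
            print(f"{url}: FAILED ({exc})")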

In-depth testing of your DR plan is carried out via a live failover. This test simulates a full outage of your production system and verifies that the transfer of workloads to the failover systems completes correctly. These tests help to prove that your DR plans and technologies are properly configured and ready to take over in a real emergency.

Don’t go it alone

Your choice of DR provider can help to streamline your business continuity provisions. More than simply providing the failover and backup technology, the right partner will also be able to assist with building and testing a DR plan that properly protects your organisation during a disaster.

WTL offer a range of cloud-based Disaster Recovery solutions that allow you to extend your DR to the cloud in just a few clicks for simple, straightforward peace of mind.

To learn more about disaster recovery provisioning and business continuity services, please get in touch.

Improving your cybersecurity protection with cloud-based services

For many businesses, the cloud has become an important aspect of IT-based operations. Many already run infrastructure, applications and data in hosted services, so why not leverage that same power to enhance your cybersecurity protection?

Backup – the ultimate insurance against a cybersecurity breach

Every business operation now relies on data; ensuring access to that information is mission critical. A robust, workable backup regime is similarly vital, providing a way back in the event of an attack.

For many established businesses, backup relies on removable media like LTO tapes or portable hard drives. Each night the media is swapped, and a copy is taken offsite to prevent loss in the event of a localised disaster. Should data be corrupted or deleted, you can recover the information from the relevant media.

Obviously, this regime does work – but it also has limitations. Aside from physical damage to the removable media, criminals are increasingly targeting backup systems as part of their attacks. If they can destroy data in the live system and the backup media, recovery is almost impossible.

Invoking the power of the cloud

As a result, “air-gapped” backup systems are increasingly important: backup information is saved to a completely separate, unconnected system – which is where our cloud security service comes into play.

By exporting to the cloud, you immediately benefit from your backup sets being stored securely offsite. This protects information from loss in a localised event (such as a fire) and prevents physical access by unauthorised third parties. The data centres are also protected by enterprise-grade security provisions.

Built on Veeam Cloud Connect technology, there are three key protective aspects to this service:

1. Malware defence

Ransomware and malware have the potential to take a business offline for hours. If backup systems are also infected, that could easily become days or weeks. If the infection is severe, some data may never be fully recovered.

By moving backup copies offsite, you create another layer of physical security that makes it harder for malware to breach.

2. Malicious deletion protection

If hackers break into your systems, they may manually delete data – or they may deploy malware to automate the process. WTL ensures that maliciously deleted data can always be recovered using a virtual recycle bin. Everything that is deleted is moved to this recycle bin – effectively a backup of your backup, just in case.

3. Accidental deletion protection

There is always a risk that data is deleted accidentally; a well-meaning IT tech clears down a ‘temporary’ shared folder, only to discover that there was important business data in there. Or the accounts team applies an annual tax code patch that corrupts their database. Whatever the exact cause, this information needs to be recovered quickly.

Again, the WTL recycle bin comes to the rescue – every deleted file is first copied to a secure, air-gapped area in the cloud. If something goes wrong, you can simply copy the information back from the recycle bin for near-instant data recovery.
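
The underlying principle is straightforward soft deletion: nothing is removed outright, it is first moved to a holding area and only purged once a retention period has expired. The sketch below illustrates that idea generically (the paths and 30-day retention window are assumptions, and this is not how the Veeam-based service itself is implemented):

    # Generic soft-delete sketch illustrating the "recycle bin" principle.
    # Paths and the 30-day retention window are illustrative assumptions.
    import shutil
    import time
    from pathlib import Path

    RECYCLE_DIR = Path("/backup/recycle-bin")
    RETENTION_SECONDS = 30 * 24 * 3600  # keep deleted items for 30 days

    def soft_delete(path: Path) -> Path:
        """Move a file into the recycle area instead of deleting it outright."""
        RECYCLE_DIR.mkdir(parents=True, exist_ok=True)
        target = RECYCLE_DIR / f"{int(time.time())}_{path.name}"
        shutil.move(str(path), str(target))
        return target

    def purge_expired() -> None:
        """Permanently remove items that are older than the retention window."""
        now = time.time()
        for item in RECYCLE_DIR.glob("*"):
            if now - item.stat().st_mtime > RETENTION_SECONDS:
                item.unlink()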

Ensuring a copy is always available

The key to effective backups – and disaster recovery – is ensuring you have at least one copy of your data available at all times. By integrating WTL cloud services into your cybersecurity defences, you have more options available when the worst does happen.

To discuss your current cybersecurity protection strategy and how WTL can strengthen your data protection and recovery options, please get in touch.

Going beyond server virtualisation

Next-generation virtualisation goes way beyond server virtualisation and provides the platform for virtualised storage, virtualised networking and cloud orchestration tools that are essential elements of a software-defined data centre. Cloud-native architectures, containerised data and hyperconvergence are some of the technology approaches that a next generation, software-defined data centre can enable and the benefits of these are huge. Businesses are more agile, flexible and dynamic. Administrators can centrally control their infrastructure, regardless of where it is situated and applications move to centre stage, repositioning technology as a business enabler, not a separate department.

One huge trend in the data centre is hyperconvergence, which relies on software-defined storage, whereby the hypervisor dynamically allocates the right amount of storage to the applications running in the data centre. Additional compute, storage and management resources are delivered by adding further server hardware, which is then viewed as shared storage and allocated to the apps as they need it, via the hypervisor. Without this level of next-generation virtualisation, achieving a hyperconverged infrastructure is not possible.
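
To make the pooling idea concrete, the toy model below (node sizes and application demands are invented figures) shows how adding a node simply enlarges one shared pool from which applications draw capacity via the hypervisor:

    # Toy model of software-defined storage: node capacities form one shared pool.
    # All capacities and allocations are invented, illustrative figures.
    class StoragePool:
        def __init__(self) -> None:
            self.capacity_tb = 0.0
            self.allocated_tb = 0.0

        def add_node(self, node_capacity_tb: float) -> None:
            """Adding server hardware simply grows the shared pool."""
            self.capacity_tb += node_capacity_tb

        def allocate(self, app: str, tb: float) -> bool:
            """Give an application capacity from the pool, if the pool can satisfy it."""
            if self.allocated_tb + tb > self.capacity_tb:
                print(f"{app}: request for {tb} TB refused - pool exhausted")
                return False
            self.allocated_tb += tb
            print(f"{app}: {tb} TB allocated ({self.capacity_tb - self.allocated_tb} TB free)")
            return True

    pool = StoragePool()
    pool.add_node(10)             # first node contributes 10 TB
    pool.add_node(10)             # scaling out: a second node grows the same pool
    pool.allocate("database", 8)
    pool.allocate("analytics", 6)
    pool.allocate("archive", 12)  # refused: the pool would be over-committed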

Virtualised networks work in a similar fashion, using the hypervisor to pool the network resources and attach them to individual virtual machines based on the defined policies for each application. The underlying network acts as a simple packet forwarding mechanism, whilst the management, provisioning and security features are all dealt with by the hypervisor. Because virtualised networks completely decouple the network functionality from the network hardware, they differ from software-defined networks, which use clever software to manage networking hardware rather than separating it entirely. The benefits of this hardware decoupling are realised when moving virtual machines between logical domains, as there is no network reconfiguration required and the administration overhead savings can be immense.

Likewise, cloud orchestration is a buzzword right now, and a next-generation virtualisation platform really does provide the right foundation for a cloud-native environment, where different cloud services can be combined and managed simply. Making tangible reductions in administration overheads once again becomes a reality if you have a platform that can help you to manage all of your cloud services. The use of cloud services is growing massively, especially in the public cloud market, as concerns around security and privacy are allayed and confidence in the public cloud providers grows. Indeed, Gartner predicts that global public cloud usage will grow by 17.3% in 2019 to total $206.2bn, and many of the businesses using public cloud will also have a mix of SaaS applications, private cloud services and on-premises physical infrastructure. What is already important for customers, and will become even more so, is a platform where everything can be managed without dipping in and out of interfaces and requiring different accreditations and skills.

In short, a next-generation virtualised environment is much more than virtual machines. As hardware becomes almost irrelevant, the hypervisor is the powerful tool of choice, and the applications dictate what resources are needed, where and when. The application really does become king.

Gartner Forecasts Public Cloud Revenue to Grow 17.3 Percent in 2019

How open source agentless IT automation can help deliver a competitive edge

Automating legacy technology and processes using cloud services is a sound strategy in theory, and many businesses have increased their competitive edge by doing just that. IT automation minimises the risk of human error or inconsistency and allows a business to reduce the time it spends on repetitive IT administration tasks. Automation also enables applications and services to be developed and delivered much more quickly. In fact, Gartner estimates that the Application Release Orchestration (ARO) market grew by 37.5% in 2017, taking what it calls the Delivery Automation: ARO market to over £200 million globally.

It’s not all plain sailing though. Moving systems and processes that were built to run on-premises into the cloud for automation is often difficult. Sometimes the benefits of automation can’t be realised, because complex processes reduce productivity and negate any gains you may have made.

So how do businesses take advantage of automation technology to make sure they’re running their operations in as lean and efficient way as possible?

Enter Red Hat Ansible Tower: an automation solution that works hand in hand with AWS and allows businesses to harness the power of the public cloud to provision the resources they need, develop applications in the cloud and simplify configuration file management. In addition, it allows businesses to deploy and manage applications more easily, secure the infrastructure and orchestrate multiple configurations and environments, making sure that the necessary storage, networks and databases are in place to support new applications.

Because the solution is open source, its users benefit from the large network of developers and open source communities that Red Hat has cultivated, and from the many different deployments that are outlined in easy, repeatable playbooks.

But does it matter whether you choose a solution that requires agents or one that is agentless? Actually, yes, especially in the cloud, where agentless technology is faster and easier to implement, is compatible with a broader range of systems and applications, and minimises the risks associated with upgrades because they can be rolled out to the entire estate in one go.

Red Hat Ansible Tower is an agentless solution that works across a business’s entire AWS environment, giving visibility and control to the business via a visual dashboard. Applications can be built and deployed continuously in the cloud, with a series of playbooks to speed up and simplify the process. Resources can be provisioned wherever and whenever they are needed, and the whole set of configurations can be orchestrated from the same dashboard.
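
As a hedged illustration of what an agentless run looks like in practice (the playbook and inventory file names are hypothetical, and this drives the standard ansible-playbook command line rather than the Tower dashboard or its API), a provisioning run can be triggered from a few lines of Python:

    # Minimal sketch: trigger an agentless Ansible run from Python.
    # "site.yml" and "aws_hosts.ini" are hypothetical file names; in practice
    # Ansible Tower schedules and tracks runs like this via its dashboard.
    import subprocess

    result = subprocess.run(
        ["ansible-playbook", "-i", "aws_hosts.ini", "site.yml"],
        capture_output=True,
        text=True,
    )

    print(result.stdout)
    if result.returncode != 0:
        print("Playbook run failed:")
        print(result.stderr)

Because the run connects out to the managed hosts over SSH or cloud APIs, nothing has to be installed or upgraded on the targets themselves, which is the practical meaning of "agentless".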

With role-based access policies providing control over who can see and manage what, and custom security policies that are automatically applied when new applications or servers are provisioned, security and compliance are built in from start to finish.

Systems administrators who previously spent all their time running complex, manual processes to update and provision their environments, and to develop and deploy applications, are now free to focus on core business initiatives. Administration overheads fall, productivity improves and no time is wasted in getting applications to market. As a result, it’s fair to say that agentless automation helps the whole business become more competitive.

Useful Links

Top 5 Challenges of Automating the Software Delivery Pipeline

5 Ways Agentless IT Automation Can Benefit Your Business

Agent vs. agentless: Monitoring choices for diverse IT ops needs

An initiation into infrastructure automation tools and methods