
Do you need to get physical with a cloud backup strategy?

Virtualising backup with the cloud is powerful, effective and extremely safe. But just because data is now being archived off-site does not mean that hardware can be completely removed from your backup strategy.

In fact, physical hardware may still have an extremely important role to play in your cloud backup strategy.

1. Export by hard drive

The initial seeding of a cloud backup may take weeks to complete as you transfer terabytes of data off-site. The actual time taken will depend on network and broadband speeds, and without careful traffic management the uploads may negatively impact day-to-day operations too.

The process can be accelerated by shipping physical drives to the backup provider so that the data can be copied locally. This will be considerably quicker – and arguably more secure – than trying to upload everything over the internet.
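Some rough arithmetic shows why. The sketch below uses purely illustrative figures – real links rarely sustain their headline speed, so actual uploads will usually take even longer:

```python
def transfer_days(data_tb: float, mbps: float) -> float:
    """Rough number of days needed to move a dataset at a sustained link speed."""
    bits = data_tb * 1e12 * 8        # terabytes (decimal) -> bits
    seconds = bits / (mbps * 1e6)    # sustained throughput in bits per second
    return seconds / 86400           # seconds -> days

# 10 TB over a 50 Mbps uplink, running flat out with no contention:
print(round(transfer_days(10, 50), 1))  # ~18.5 days of continuous upload
```

A courier can deliver the same 10 TB on a physical drive in a day or two, which is why most providers offer a drive-shipping option for initial seeding.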

2. Restore by hard drive

Restoring from cloud archives is just as important – and fraught with the same difficulties. Speed of recovery will be limited by available internet bandwidth and download speeds.

For downloads that can be sized in gigabytes, online recovery will probably be acceptable. But for a disaster recovery scenario which involves a large amount of data, the speed of transfer is critical.

In the same way that physical hard drives can accelerate seeding of backups, they can also be employed to speed up recovery. If you plan to make cloud backup your principal method of data recovery, check to see if your service has the option of shipping physical disks.

3. Cloud as backup

The issue of time to recovery is critically important. Knowing that a complete dataset may take days to recover over the internet, the cloud may be best deployed as a secondary backup.

In this scenario, your existing systems provide real-time services for instant recovery, while periodic (daily, weekly, monthly) backups are replicated to the cloud. Maintaining physical backups on-site minimises time to recovery, while off-site backups help to maintain data integrity and ensure that data is always recoverable.

4. Local servers for recovery testing

You know that your data is always protected when using cloud backup services – but how do you go about recovering it? Keeping spare physical servers will allow you to test your recovery protocols and ensure that they deliver against business needs.

For best results, keep at least one example of each bare-metal server to ensure everything works correctly.

5. Physical recovery documentation

Modern business is driven by digital data – but there will always be a place for hard-copy records in certain circumstances. In the case of disaster recovery, you must maintain physical, offline copies of the information required to bring systems back online.

Records must include the recovery action plan, applications and serial numbers. And don’t forget to include contact details for the individual who holds the administrative passwords required for recovery and reconfiguration.

The future is hybrid

Until available bandwidth increases dramatically, there will always be a place for physical assets in your backup regime. The trick is knowing where to divide the load between local and cloud.

WTL offer a range of cloud-based solutions that can extend the rigour of your on-premise backup without compromising control, visibility or auditability.

For more assistance in defining a cloud backup strategy that delivers the reliability, speed and security your business demands, please give us a call.

The Less Scary Road to Moving to the Cloud

Cloud adoption is set to become a computing norm – even for companies that have until now rejected these technologies. As hosted software (G Suite, Office 365, Salesforce.com etc.) gathers pace, few have been able to avoid cloud services completely.

Much of the discussion around cloud migration suggests that it is a ‘big bang’, all-or-nothing play, with the whole data centre being shifted to the cloud. Although possible in theory, this overlooks the fact that not all workloads belong in the cloud.

Cloud migration doesn’t have to be ‘big bang’

Many cloud operators give the impression that adoption is not only inevitable but that all of your systems will eventually be hosted. And the sooner this transition takes place, the better.

The reality is that this is your business and your infrastructure, and you are fully justified in moving at your own pace. For various reasons (unfamiliarity, security concerns, uncertainty etc) you have resisted major cloud adoption projects – so it makes sense to maintain a cautious roll-out.

Cloud migration on your terms

One way to maintain control of the process – and the speed at which you move – is to bring cloud technologies into your data centre first, using platforms like VMware vCloud Suite, Microsoft Hyper-V or OpenStack.

Deploying a private cloud allows your business to migrate applications and workloads, learning how the concepts and technologies apply to your business. At the same time, you can take advantage of automation and self-service to accelerate your IT operations to deliver a better quality of service to your in-house users.

This approach can be more expensive than going with one of the large platforms like AWS, Azure or Google Cloud. With cloud in-house, however, you retain full control of the process so you can migrate applications and servers at your own pace. This makes the transition more manageable and lays the groundwork for when you do decide to migrate to a lower-cost public cloud provider.

Re-engineering workloads for the cloud

One of the key benefits of cloud platforms is their elastic pricing model – you only pay for what you use. However, simply moving your virtual servers into the cloud is not efficient.

Your on-premise systems are configured to run 24x7x365 because there is no reason to let them spin down. But in the cloud where every resource is billable – CPU cycles, RAM, storage etc – you pay for running servers, even when they are not being accessed.

The major cloud platforms allow you to set servers to spin down automatically – overnight, for instance – helping to reduce costs. However, even virtual servers are relatively heavyweight workloads.
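The potential saving from scheduling is easy to estimate. A minimal sketch, assuming simple per-hour billing and an invented office-hours schedule:

```python
def monthly_hours(hours_per_day: float, days_per_week: float) -> float:
    """Billable instance-hours per month, averaging 52 weeks over 12 months."""
    return hours_per_day * days_per_week * 52 / 12

always_on = monthly_hours(24, 7)      # running 24x7, as on-premise servers do
office_hours = monthly_hours(12, 5)   # 07:00-19:00, weekdays only
print(f"{office_hours / always_on:.0%} of the always-on bill")  # prints "36% of the always-on bill"
```

Real savings depend on the provider's billing granularity and any reserved-capacity discounts, but the order of magnitude holds.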

The future of operating in the cloud lies in containerisation. This technology breaks applications into blocks that can be created and destroyed automatically according to demand. Unlike a virtual server, the container is a much smaller package, containing nothing but your application code and the libraries required to run it; there is no operating system or additional applications, helping to minimise the number of resources used – and therefore costs.

With a private cloud, you can begin the process of re-engineering and optimising for the cloud before moving to a public cloud platform. This will help to contain costs when you do finally migrate and simplify the process of transition.

To learn more about moving to the cloud and how to simplify the transition, please get in touch.

5 Steps to Business Continuity Success

If the events of the last year have taught businesses anything, it is the importance of a workable business continuity strategy. The ability to maintain operations in the face of continued uncertainty is now vital to survival.

As you map out your business continuity plan, here are five factors you must address.

1. Backup, Disaster Recovery or Both?

Before the current era of data-driven business, maintaining a complete or incremental daily backup was deemed sufficient, because those volumes of information could be restored in a matter of hours. Now that recovering huge volumes of data can take days, backup cannot be your only method of protection.

Disaster Recovery (DR) uses near-live copies of virtual servers and databases that can fail-over in the event of an outage. DR can shorten the recovery of operations to mere minutes.

In all likelihood, your business will need a combination of both: backup for regulatory requirements and secondary systems, DR for mission-critical operations.
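The trade-off is often expressed as a recovery point objective (RPO – how much data you can afford to lose) and a recovery time objective (RTO – how long recovery may take). A hedged sketch, with invented tiers and purely illustrative figures rather than SLAs:

```python
# RPO = maximum tolerable data loss; RTO = maximum tolerable downtime.
# All figures below are illustrative examples, not service guarantees.
tiers = {
    "mission-critical (DR replication)": {"rpo_minutes": 5,     "rto_minutes": 15},
    "secondary systems (daily backup)":  {"rpo_minutes": 1440,  "rto_minutes": 480},
    "archive (weekly backup)":           {"rpo_minutes": 10080, "rto_minutes": 2880},
}

for name, objectives in tiers.items():
    print(f"{name}: RPO {objectives['rpo_minutes']} min, RTO {objectives['rto_minutes']} min")
```

Writing objectives down per tier makes it obvious which systems justify the cost of DR replication and which can live with a backup cycle.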

2. Inventory your systems

It’s vital to understand your IT systems and how they are used. This will help you categorise them – mission-critical, secondary, archive – and thereby prioritise recovery.

It is important to assess how often data is updated too. This will help you understand how quickly the system must be recovered before permanent data loss occurs, or before operations are badly affected.

Don’t forget to check interdependence either – your mission-critical systems may be reliant on some less-important, secondary applications.
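An inventory with priorities and interdependencies can be captured in a very simple structure and used to derive a safe recovery order. A minimal sketch – the system names below are hypothetical examples:

```python
# Hypothetical inventory: each system gets a tier and a list of the
# systems it depends on (all names here are invented examples).
inventory = {
    "order-processing": {"tier": "mission-critical", "depends_on": ["auth-service"]},
    "auth-service":     {"tier": "secondary",        "depends_on": []},
    "old-reports":      {"tier": "archive",          "depends_on": []},
}

def recovery_order(systems: dict) -> list:
    """Depth-first walk so dependencies are restored before their dependants."""
    ordered, seen = [], set()
    def visit(name: str) -> None:
        if name in seen:
            return
        seen.add(name)
        for dep in systems[name]["depends_on"]:
            visit(dep)
        ordered.append(name)
    for name in systems:
        visit(name)
    return ordered

print(recovery_order(inventory))  # auth-service is restored before order-processing
```

Note how the "secondary" auth-service must come back first even though order-processing is the mission-critical system – exactly the kind of dependency that is easy to miss without an inventory.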

3. Choosing your solutions

When you understand what you have and what you need, you must then choose and implement suitable solutions. As data volumes increase, local storage of backup data becomes a headache – particularly when those archives need to be rotated off-site.

Cloud-based backup and DR services help to address some of these problems. Capacity can scale almost infinitely, and information is stored off-site by design.

You will need to consider issues like bandwidth requirements, time to recovery, support provided during a disaster and how your network and systems need to be reconfigured to support the new backup and replication regime.

With those decisions made, you can configure your new DR provisions to begin copying, replicating and backing up data as required.

4. Building out the DR plan

With a robust, secure backup platform in place, you then need to specify how it is to be used. Define what a disaster looks like, when the DR plan is to be executed, and by whom.

Consider all DR scenarios, from a single server failure to complete blackout and document the steps required to restore operations as quickly and efficiently as possible. Identify which individuals need to take control of a DR event and provide them with a simple guide or checklist to follow for when the time comes.

The DR plan will need to be updated and reviewed as your infrastructure evolves, so ensure documentation is regularly updated. And don’t forget to keep hard/off-line copies of these documents as they may not be available if your file servers are affected by the outage.

5. Testing the DR plan

The worst time to discover a flaw in your DR plan is during an actual disaster. But if you never actually test your resources and processes, that’s exactly what will happen.

Regular failover testing allows you to verify that systems designed to take over in the event of a server issue are working correctly. Importantly, failover tests can be conducted in real-time without disrupting operations because they simply confirm that the replicated systems are operating correctly.

In-depth testing of your DR plan is carried out via a live failover. This test simulates a full outage of your production system and verifies that the transfer of workloads to the failover systems completes correctly. These tests help to prove that your DR plans and technologies are properly configured and ready to take over in a real emergency.

Don’t go it alone

Your choice of DR provider can help to streamline your business continuity provisions. More than simply providing the failover/backup technology plans, the right partner will also be able to assist with building and testing a DR plan that properly protects your organisation during a disaster.

WTL offer a range of Cloud-based Disaster Recovery solutions that allow you to extend your DR to the cloud in just a few clicks – simple, straightforward peace of mind.

To learn more about disaster recovery provisioning and business continuity services, please get in touch.

How to choose a cloud partner – 13 factors to consider

If the cloud is an integral part of your IT strategy, choosing the right cloud partner is one of the most important decisions you must make. Here are 13 factors to consider that will help you choose a cloud partner that is right for your business.

1. Technology stack

To contain costs, maximise ROI and accelerate deployment, you want a provider whose technology stack is compatible with your own. As well as using the same hypervisor technology, their hardware and connectivity needs to be reliable, fast and resilient. The platform must outperform your own.

2. Public or Private

Public cloud platforms offer multi-tenancy solutions to share resources and contain costs. Private cloud offers dedicated infrastructure for each client. Although carefully configured to prevent data passing between tenants’ instances, public cloud may not be suitable for your most sensitive workloads. Make sure any prospective supplier offers the right cloud formats for your workloads.

3. Management responsibilities

Maintaining the underlying infrastructure and connectivity is the responsibility of the provider – but what else do they offer? Can you spin up your own instances or allocate virtual resources, or is that handled by the provider too? Understanding these responsibilities will also help to understand the ongoing costs and staffing requirements.

4. Infrastructure transparency

Because you cede control of the underlying infrastructure, cloud services do not offer the same degree of visibility as your on-premise data centre. Make sure you understand which tools are required to deliver business value and whether they can be used on the platform being evaluated – or if the provider offers an equivalent toolset.

5. The route to the cloud

There are many routes to the cloud – you need to pick the option that balances speed, future-readiness and cost. You need to understand which data and workloads need to be migrated, and how they will be moved. Does the provider offer a managed migration service to help plug skills and knowledge gaps within your own team?

6. Third-party networking support

Pure cloud deployments are extremely rare – you will probably continue to run at least some of your operations in-house. You need to understand whether the provider offers support for the third party network technologies that connect on-premise with the cloud, and to what degree. Will your provider assume all the risk, some, or none at all?

7. Physical hosting options

If you operate any form of hosted infrastructure or co-located systems, it makes sense to consolidate them where possible. If your processes require physical hardware of any kind, you need to know if your partner is willing to host it.

8. Automating the cloud

Your on-premise systems use automation to streamline processes and reduce manual workload – are there similar options available in the cloud too? What are your options for automation and reporting? Is there an API available allowing you to connect third-party tools – or to develop your own?

9. Disaster recovery provisions

Cloud platforms are resilient by design – but the risk of data loss remains. How is your data protected at the virtual and infrastructure levels? Can you configure and control backup frequencies? And importantly, how is data recovered when required? Does the provider offer disaster recovery as a service (DRaaS) and what are their SLAs?

10. Cloud failover mechanisms

Cloud outages are rare, but the risks of ransomware infection, accidental deletion and data corruption are almost as high as in your on-premise data centre. User error is still a threat wherever your systems are hosted. Your ideal provider needs to offer a way to recover data and operations within minutes – including cloud-to-cloud failover in the event of a significant ransomware or data-corruption incident.

11. Compliance and security

Your business has a legal duty to protect data from loss or theft – and to ensure it is stored securely in an approved territory (normally the UK and EU). As you assess the options available, you need to know how responsibility is divided between the parties, which tools and services (if any) are available to maintain compliance, and the physical location of your data, to ensure it is not being stored illegally.

12. Technical support provisions

The provider’s technical support offering takes on additional importance when you have outsourced key elements of your IT estate. You need to be sure about the level of service you will receive, including how issues are reported, investigated and resolved. You must also ascertain what the service level parameters are to ensure you have the level of coverage appropriate to your workloads.

13. How to track costs

The pay-as-you-use cloud computing model ensures that you never invest in over-capacity for your on-premise data centre. But it also makes budget over-spend far easier, particularly given the complexities of some cloud billing models. Your ideal provider will have transparent pricing, cost models and tools that allow you to track your spend and balance resource requirements with budget constraints.

Every factor is important

13 factors might seem like a lot – but every one of them is important to consider when choosing a cloud partner. After all, this is a strategic partnership that should last years. The time and effort invested in conducting this due diligence will help to smooth your cloud adoption projects – and ensure you get exactly what you need to grow your business.

To learn more about the WTL offering and how we can help you build a flexible, future-ready platform in the cloud, please get in touch.

How to protect your business against ransomware

If an attacker can project knowledge and experience, they can probably talk unprepared users into doing whatever they are told. Hackers will learn your organisation’s structure and the names of key stakeholders, then contact staff pretending to be a senior manager and urge them to open an important file. Even if the employee realises they have been tricked, it is too late – the ransomware will have already set to work on your network.

We take a look at some practical tips to protect your business against ransomware infection – but first, two common ways hackers gain access to your IT systems.


Phishing emails

Phishing has evolved from stealing sensitive login details to encouraging users to install ransomware. Having received an official-looking email and clicked through to an official-looking website, the user is encouraged to download and install an official-looking app – which just happens to contain malware.

Malicious websites

Just general web surfing can be a recipe for disaster if your employees land on a compromised site. Click on the wrong pop-up or download the wrong file and malware can gain a foothold in the network.

You must teach your employees about these risks – and how to avoid them.


Ransomware prevention best practice

Preventing ransomware infections is mostly common sense: apply IT security best practices to your infrastructure and operations, including:

  • Regularly patch and update software to address vulnerabilities and reduce opportunities for exploits.
  • Ensure endpoint anti-virus software is installed, configured and kept fully up to date at all times.
  • Use policies to prevent end users from installing software or running applications with elevated permissions.
  • Maintain content-filtering and firewall whitelists and blacklists to limit traffic to untrusted or compromised websites.
  • Limit access to physical computer ports to prevent ransomware ingress via removable drives and similar media.
  • Audit your network regularly to identify gaps in your security systems – including testing your employees’ responses to social engineering attacks.
  • Lock down as many permissions and access rights as possible, ensuring that staff have only what they need to do their jobs.

Limiting access rights may occasionally cause issues – but they are nothing compared to the fall-out from a ransomware attack.


Prepare for the worst

Despite your best efforts, it is likely that ransomware will eventually make it through your defences – the larger the network, the higher the probability. When it does, you need to be prepared to bring operations back online as quickly as possible.

Usually, backups take place once every 24 hours. If a ransomware outbreak strikes shortly before the next cycle begins, you could lose a full day’s work – which could be catastrophic.

Your disaster recovery provisions need to reduce these gaps between cycles. Snapshots and smaller, targeted backups can create copies of key data more regularly, speeding up the remediation process after an infection.
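The effect of copy frequency on worst-case data loss comes down to simple arithmetic. A minimal sketch, assuming the infection lands at the worst possible moment:

```python
def max_data_loss_minutes(copies_per_day: int) -> float:
    """Worst-case loss if ransomware strikes just before the next copy runs."""
    return 24 * 60 / copies_per_day

print(max_data_loss_minutes(1))   # nightly backup: up to 1440 minutes (a full day)
print(max_data_loss_minutes(24))  # hourly snapshots: at most 60 minutes
```

Moving key datasets from a nightly cycle to hourly snapshots shrinks the worst-case loss window from a day to an hour – which is the difference between an inconvenience and a crisis for many businesses.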

There are many tools to help achieve these goals, but identifying, configuring and deploying the right ones for your business is not necessarily straightforward. WTL can cut through the confusion: our specialists will help your business build an effective, efficient disaster recovery solution that allows you to respond to ransomware quickly – without losing data.

To learn more about how we can help you protect your business against ransomware, please get in touch or take a look at the cyber security services we offer.