recovery

Are you ignoring the ‘R’ in Disaster Recovery?

When it comes to a disaster recovery strategy, it is easy to focus on the ‘disaster’ side of provisioning. Identifying, prioritising and copying mission-critical data is straightforward, albeit time-consuming.

But ensuring there is always a copy of your data available is just one part of the story. The second aspect is, as the name implies, recovery: making sure your data can be recovered from backup in line with your strategic goals and SLAs.

Testing, testing…

The final sign-off for any DR strategy will involve a recovery test run. It will probably include at least one follow-up test every year too.

But digital transformation is driving rapid change – there is a very real chance that your IT infrastructure is evolving faster than your DR plans. This is why recovery testing must be a regular aspect of your maintenance routines.

What about the disruption?

It is true that a full DR test can be immensely disruptive and may involve some downtime of mission-critical systems. Then there are the resources required – personnel need to be taken away from other tasks for the testing period, potentially causing their other responsibilities to suffer. A full-scale DR test can be expensive.

This is why most businesses only perform DR exercises when required for an audit or similar. However, delaying testing increases the risk of something going wrong during a real disaster because the plan is out of date. The C-suite needs to make a value call – is the cost of testing DR provisions to ensure they work greater or less than the losses an actual disaster would cause if the DR plan falls short?

Solving the problem with regular, smaller tests

A full DR test is always the best way to ensure your plans will actually work when needed. But there is an alternative that will allow you to optimise your provisions incrementally – partial DR testing.

Under this scenario, you select a sub-section of your infrastructure for disaster recovery testing. This could be a branch office, a business unit or a single application – every aspect of your system needs to be tested and refined, so why not focus on a single aspect first?

It’s also worth remembering that your choice of backup technology will have a significant effect on your recovery point objectives (RPO) and recovery time objectives (RTO). Tape may be an effective medium for point-in-time backups, but what about the data that is created between backups? And the time it takes to recover an entire system from tape?

Choosing a solution like Zerto that offers continuous data protection (CDP) can shorten RPOs to mere seconds, for instance. This not only increases your level of protection but will also minimise the impact of your testing on operations. This means that you should be able to conduct DR testing more regularly, refining your plans and provisions as you go.
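By way of a rough illustration only – the intervals below are hypothetical figures, not Zerto specifications – the worst-case data loss window is driven directly by how often a recoverable copy is taken:

```python
from datetime import timedelta

def worst_case_data_loss(copy_interval: timedelta) -> timedelta:
    """A failure just before the next copy is taken loses everything
    written since the previous one, so the worst-case RPO equals the
    interval between recoverable copies."""
    return copy_interval

# Nightly tape backup: up to a full day of data is at risk.
print(worst_case_data_loss(timedelta(hours=24)))    # 1 day, 0:00:00

# CDP-style replication every few seconds.
print(worst_case_data_loss(timedelta(seconds=10)))  # 0:00:10
```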

To learn more about DR testing and Zerto CDP, please give us a call.

Disaster Recovery Planning

Disaster Recovery Planning – Revisited!

An effective disaster recovery (DR) plan only works if it is regularly reviewed and updated. Following our best-practice principles, we’ve updated our DR planning advice to help you refine and improve your strategy and processes.

Here are six things you must do:

1. Identify the key players

If your business experiences a serious system outage, who do you need to alert? Who will be involved in the actual DR process?

Your first step is to identify the key stakeholders, providers, third-party personnel and incident response teams who will help to bring systems back online. You must then negotiate and agree on acceptable SLAs that will allow you to resume operations ASAP.

2. Conduct a Business Impact Analysis (BIA)

What would happen if mission-critical systems went down? What would the wider implications of losing operations be?

There are several categories of business impact you must assess, including:

  • Expenses
  • Legal and regulatory
  • Revenue loss
  • Customer service
  • Brand/reputation damage

The BIA will be invaluable for prioritising DR activities and for identifying acceptable RPOs and RTOs for each business unit.

3. Complete a Risk Assessment

A risk assessment attempts to quantify the likelihood of any system outage occurring. You need to consider the potential hazards to your operations – fire, cyberattack, natural disaster, extended power cut – and the magnitude of the problem each of these events would cause.

You then need to identify the assets at risk from these events. How would they affect personnel, critical infrastructure, operations, corporate reputation and so on? These insights will then feed back into your BIA to provide a 360° view of threats and their effect on your business.
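As a purely illustrative sketch – the business units, scores and targets below are invented for the example, not recommendations – the outputs of the BIA and risk assessment can be combined to rank recovery priorities and candidate RPO/RTO targets:

```python
# Hypothetical example: rank business units by (impact x likelihood)
# and attach target recovery objectives. All figures are illustrative.
units = [
    # name,        impact (1-5), likelihood (1-5), target RPO,   target RTO
    ("Payments",   5,            3,                "15 minutes", "1 hour"),
    ("E-commerce", 4,            3,                "1 hour",     "4 hours"),
    ("Payroll",    3,            2,                "4 hours",    "24 hours"),
]

# Highest combined score recovers first.
for name, impact, likelihood, rpo, rto in sorted(
        units, key=lambda u: u[1] * u[2], reverse=True):
    print(f"{name}: risk score {impact * likelihood}, RPO {rpo}, RTO {rto}")
```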

4. Prioritise Critical Functions

What are the most important functions required to keep your core business operations running? Identifying these processes will help you properly prioritise your recovery efforts during a disaster.

These business priorities may include:

  • Revenue operations
  • Information security
  • Identity and access management
  • Data protection
  • Payroll and time tracking

5. Build a Communications Chain

In the early stages of a major system outage, communication is critical to ensure the DR processes have been initiated and are proceeding as planned. You need to define a communications strategy that keeps all of the stakeholders identified in step 1 connected and informed about progress, ensuring the plan is executed as smoothly as possible and avoiding costly miscommunication.

Don’t forget to include any third parties in the comms strategy – you will need their assistance during a disaster too.

6. Test, test, test

The only way to prove whether your DR plan works is to test it. Running regular disaster simulations will expose gaps and weaknesses in the plan, offering an opportunity to improve plans before they are needed for real.

Testing will also show whether your RPO and RTO targets are realistic with current DR provisions – or if you need to invest in new technologies and services to meet them. Testing is an integral aspect of your regular DR planning reviews.

Contact us

To learn more about developing a DR plan that works for your business, and how WTL can help with your business continuity planning, please give us a call.

Cloud Migration

Pain-free cloud migration? 9 things you need to know

The cloud has become an established part of most organisations’ IT strategy. However, the example of early adopters shows that migration is not necessarily smooth – which is hardly a surprise given that the cloud represents a significant change in operating processes.

Here are nine things to consider that will make your transition easier.

1. Not everything belongs in the cloud

Almost all businesses currently operate a hybrid model, with some of their workloads migrated to the cloud and some retained on-premise. Why? Because some workloads are best kept locally.

If your business has decided to adopt a cloud-first strategy, you need to be realistic – not everything should be migrated.

2. Audit everything

Before migrating anything, you need to know what you have. Carrying out a full audit of your operating environment and applications is essential. This will allow you to prioritise workloads for migration – and identify the technical challenges involved in ‘cloudifying’ them.

Top tip – your cloud provider may have tools to make the auditing process easier.

3. Use technology wisely

Most businesses understand the power of disaster recovery tools and how they can be used to failover to a secondary location. But this also makes DR tools an excellent aid for cloud migration – even though the final transition is intended to be permanent.

Take a look at your current DR capabilities and check to see whether they could assist with your cloud migration project.

4. Leverage VMware portability

VMware virtual servers are portable by design. But the VMware suite of tools also includes functionality designed to help transition to the cloud. Take VMware vCloud Extender, which creates a Layer 2 VPN between your on-premise data centre and the cloud, allowing you to migrate workloads natively at your own pace – and avoid downtime.

Alternatively, you can build new VMs or take advantage of export/import functionality offered by your service provider if preferred.

5. Plan for physical workloads

Perhaps the biggest challenge you face is migrating physical workloads. Remember, not all workloads belong in the cloud, so it may be that you decide to retain at least some physical servers locally.

For others, cloud migration offers an opportunity to virtualise servers and realise all the benefits of the technology across the rest of your estate.

6. How to transfer data sets?

Data storage is a constant headache, which is why moving datasets to the cloud is one of the first challenges you will encounter. Seeding – shipping physical disks containing a point-in-time backup – is an obvious solution, but it tends to be costly, inefficient and not entirely free from error.

In some cases, a combination of seeding and DR provisioning may offer a better solution for getting your data into the cloud and reducing errors during the transfer.

7. Calculate your bandwidth requirements

In most cases, your current bandwidth will be sufficient for day-to-day cloud operations. But it never hurts to check whether you have enough speed and availability to make the most of the new operating model.
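As a back-of-the-envelope check – the dataset size, link speed and efficiency figure below are assumptions for illustration only – the time needed to move data over a given link is simply its size divided by the effective throughput:

```python
def transfer_days(dataset_tb: float, link_mbps: float, efficiency: float = 0.7) -> float:
    """Rough time, in days, to move a dataset over a network link.
    efficiency approximates protocol overhead and contention."""
    bits = dataset_tb * 1e12 * 8                     # terabytes -> bits
    seconds = bits / (link_mbps * 1e6 * efficiency)  # bits / effective bits-per-second
    return seconds / 86400

# Hypothetical example: 50 TB over a 500 Mbps link at ~70% efficiency.
print(f"{transfer_days(50, 500):.1f} days")  # roughly 13 days
```

If the answer runs to weeks, a combination of seeding and DR-based replication (as discussed in point 6) may be the more practical route.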

8. Consider ongoing management needs

Once your workloads are in the cloud, how will you manage them? What tools are supplied by your vendor? What else would help? Do your team need additional training before go-live?

Make sure you know how you will measure and report on your KPIs – and control cloud spending. And don’t forget to examine the support options available from potential cloud suppliers – you will undoubtedly require their help at some point in the future.

9. Build a partnership

Cloud migration is often a step into the unknown – so it makes sense to get help to avoid common mistakes. Work with potential cloud providers to understand your needs and how they can help. Building a strong relationship early in the project will open new opportunities to improve your cloud operations once the move has been completed.

Ready to learn more?

To discover more about smooth cloud migrations and how your business can avoid costly pitfalls, please give the WTL team a call today.

Digital Transformation

3 Security Issues That Will Affect Your Digital Transformation Outcomes

Digital transformation is supposed to make business faster and more efficient. But if those changes come at the expense of security, any gains made could quickly be reversed.

According to research by HPE, those businesses that achieve a successful operating model have security built into the very foundation of their transformation model. Their security efforts are focused on three key areas:

1. Risk and compliance

Infrastructure as Code methodologies have evolved with the specific goal of accelerating development. The software development pipeline can be automated, allowing new applications and code to be delivered quickly.

For digital leaders, the pipeline is accompanied by a robust logging and monitoring solution that automatically scales alongside their environment. This allows them to embed security into their processes and to assess compliance with necessary protocols – without decreasing development velocity.

Leaders’ systems continuously monitor the production environment, conducting compliance and pipeline checks and automatically notifying stakeholders of issues that require remediation.
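A minimal sketch of that idea – using invented check names and a generic notification hook rather than any particular monitoring product – might look like this:

```python
# Hypothetical pipeline step: run compliance checks against the live
# environment and notify stakeholders of anything needing remediation.
# The checks and the notify() target are illustrative assumptions.

def check_encryption_at_rest(resource) -> bool:
    return resource.get("encrypted", False)

def check_public_access(resource) -> bool:
    return not resource.get("public", False)

CHECKS = {
    "encryption-at-rest": check_encryption_at_rest,
    "no-public-access": check_public_access,
}

def run_compliance_checks(resources, notify):
    failures = [
        (res["id"], name)
        for res in resources
        for name, check in CHECKS.items()
        if not check(res)
    ]
    for resource_id, check_name in failures:
        notify(f"{resource_id} failed {check_name} - remediation required")
    return not failures  # pipeline gate: True only if everything passed

# Example run against a mock resource inventory.
inventory = [{"id": "bucket-1", "encrypted": True, "public": True}]
run_compliance_checks(inventory, notify=print)
```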

2. Security controls

Traditional security controls do still work in the cloud – but the way they are implemented must change. On-premise tools, however, do not work because they are not designed for use in a hybrid or cloud-native estate.

HPE cites the example of endpoint security, where locally installed anti-malware periodically updates itself from a central repository. In the cloud, where machine images spin up and down as required (sometimes for just a matter of minutes), this model does not work because the updates do not complete in that narrow timeframe. This leaves elements of the environment unprotected because they do not keep pace with changes in the threat landscape.

Leaders apply their proven security controls using hybrid tools that can cope with the realities of the cloud model. They will also integrate these tools across their entire ecosystem, such as scanning container images at the end of the development pipeline to improve security compliance standards across the organisation.
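For instance, an end-of-pipeline gate of the kind described might look something like the sketch below; the severity threshold, finding IDs and findings format are assumptions for illustration, not any specific scanner's output:

```python
# Hypothetical end-of-pipeline gate: block promotion if an image scan
# reports findings at or above the agreed severity. The findings list
# would come from whatever scanner your toolchain provides.

SEVERITY_ORDER = ["low", "medium", "high", "critical"]

def image_may_be_promoted(findings, threshold="high") -> bool:
    """Return True only if no finding meets or exceeds the threshold."""
    limit = SEVERITY_ORDER.index(threshold)
    blocking = [f for f in findings if SEVERITY_ORDER.index(f["severity"]) >= limit]
    for finding in blocking:
        print(f"BLOCKED: {finding['id']} ({finding['severity']})")
    return not blocking

# Example: one medium and one critical finding -> promotion blocked.
print(image_may_be_promoted([
    {"id": "CVE-0000-0001", "severity": "medium"},
    {"id": "CVE-0000-0002", "severity": "critical"},
]))
```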

3. Governance

The spin-up spin-down approach to resource usage may be completely different to the traditional three-tier data centre architecture, but the compliance requirements of your business do not change. Approaching governance using the same techniques as on-premise applications will create risk for your cloud environment.

Cloud transformation leaders understand the fundamental differences in approach and retrain their security teams accordingly. Rather than attempting to create a hardened perimeter that protects corporate resources, these organisations ensure their staff can think in terms of zero-trust operations that create a network of secure devices.

How can you catch the leaders?

It is clear from the example of cloud transformation leaders that successful change is a combination of technology and culture. These organisations balance business objectives with risk objectives, ensuring that rapid development and deployment do not come at the cost of data security.

At the most basic level, leaders can put in place the people, processes and tool changes necessary to deliver compliant, consistent security across their hybrid estate. And it is precisely this balance that your business will need to achieve to contain risk in the cloud.

To learn more about building security into your cloud digital transformation strategy, please give the WTL team a call today.

exascale computing

What is exascale computing and why does it matter?

In 2008 a new benchmark in high-performance computing (HPC) was set by the so-called petascale generation of supercomputers. But no matter how quickly technology evolves, our demand for even greater performance and potential continues to exceed capability.

Today we are at the threshold of the next giant leap forward – exascale computing.

What is exascale computing?

Like all the great computing advances, exascale marks a significant step-change. These new computers will be capable of executing 10¹⁸ floating-point operations per second (FLOPS) – a quintillion, or one billion billion (1,000,000,000,000,000,000).

But what does a quintillion look like? Imagine every single person on earth performing one maths calculation every second, 24 hours per day. It would take us four years working around the clock to complete one quintillion calculations – the same amount as an exascale computer does in one second.
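The arithmetic behind that comparison is easy to check (assuming a world population of roughly eight billion):

```python
exaflop = 10**18             # calculations an exascale machine completes in one second
population = 8_000_000_000   # roughly everyone on earth, one calculation per second

seconds_needed = exaflop / population
years_needed = seconds_needed / (60 * 60 * 24 * 365)
print(f"{years_needed:.1f} years")  # roughly 4 years
```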

What’s so great about exascale computing?

With the ability to complete so many calculations simultaneously, an exascale computer can solve complex problems far more quickly than its petascale predecessors. Some estimates suggest that exascale computers will complete these calculations up to 1000 times faster, allowing data scientists to improve productivity and output.

But exascale is about more than just raw power. By processing data 1000 times faster, an exascale computer can also crunch 1000 times more data in the same timeframes. This opens new possibilities for scientists, providing them with the capacity they need to solve the most complex problems. What once took days and weeks to compute will now be achievable in mere minutes and hours.

Exascale computing will also play a pivotal role in the future development of artificial intelligence (AI). Using current HPC systems, training an AI model can take weeks as the system learns from training data. With the increased throughput offered by exascale, that training period can be dramatically reduced, allowing businesses to deploy reliable AI models faster.

At the same time, AI models themselves will become increasingly performant, capable of making more decisions in real time and taking automated action more quickly than ever before.

Looking to the future

As these capabilities come online, expect to see significant advances in scientific research. Identifying potential vaccines for the next pandemic will become faster than ever, as researchers can process millions of molecular combinations and protein folds in a matter of hours. Automating the early stages of detection will prevent resources from being wasted on testing candidate treatments that have little or no potential benefit.

Exascale also opens opportunities to monitor and predict the most complex of scenarios. Weather prediction, for instance, will become more accurate as meteorologists process more historical data points, adding greater depth to their analysis. These calculations will allow governments to better forecast the paths of hurricanes and tornadoes, and to issue life-saving guidance to citizens who may be affected.

As the volume of data collected by businesses grows, we need a way to process it – otherwise, its true value will never be unlocked. Exascale offers just the capabilities required to process the entire data estate and to put it to work for the business – or for the good of mankind in general.

To learn more about exascale – or more affordable HPC technologies – and how they will help your business achieve its strategic goals, please drop us a line.