
Meeting the High Availability Requirements in Digitally Transformed Enterprises

Heavily reliant on access to their data, digitally transformed organisations need infrastructure that is always available. So, what should you be looking for as your business begins its digital transformation journey (or prepares to take the next step)?

Here are five factors to consider, as identified by IDC:

1. Solid-state storage

All-flash arrays (AFAs) offer highly performant storage and improved availability over spinning-disk alternatives. AFAs also have the advantage of increased density, allowing more storage to be packed into the same physical footprint.

When combined with NVMe technology, AFAs are faster still, delivering the high levels of performance and availability needed for mission-critical operations while further reducing the total cost of ownership.

2. Scale-out design

Cloud platforms have proven the importance of scalable computing, both in terms of containing costs through pay-as-you-use billing and by allowing businesses to grow and shrink resources as demand changes. Scale-out designs are therefore an essential aspect of high availability computing, allowing your business to draw upon additional resources whenever required in a non-disruptive manner.

A scale-out infrastructure allows for similarly non-disruptive upgrades. By connecting newer nodes to the environment, you can seamlessly migrate workloads using data mobility tools, removing older technology from the ‘pool’ once complete. You can stay at the cutting edge of HA computing without affecting operations.

3. Granular management for multi-tenant environments

As infrastructure density increases, businesses are forced to consolidate workloads. Although this maximises the value of hardware investments, it also increases the ‘blast radius’ – the potential damage caused to other applications and servers when one of the tenants fails.

To ensure high availability, operators need systems that allow them to better manage the environment on an application-by-application basis. They can then configure the storage to better manage each workload and its requirements – and limit the impact of any failures.
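
As a concrete illustration of what per-application control looks like, here is a minimal sketch of a per-tenant quality-of-service map. The application names, IOPS figures and array capacity are all hypothetical – real arrays expose equivalent controls through their own management interfaces.

```python
# Illustrative per-tenant QoS limits: capping each application's IOPS so a
# runaway or failing tenant cannot starve the others on shared storage.
qos_limits = {
    "erp-prod":  {"max_iops": 20_000, "min_iops": 5_000},  # guaranteed floor
    "analytics": {"max_iops": 8_000, "min_iops": 0},
    "dev-test":  {"max_iops": 2_000, "min_iops": 0},
}

def iops_headroom(limits, array_capacity_iops=40_000):
    """Check that guaranteed minimums fit within what the array can deliver."""
    reserved = sum(tenant["min_iops"] for tenant in limits.values())
    return array_capacity_iops - reserved

print(f"IOPS headroom after guarantees: {iops_headroom(qos_limits):,}")
```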

4. Support for the hybrid multi-cloud

The majority of businesses (80%) are now using hybrid cloud operations, often with multiple providers. To ensure seamless high availability operations, they will need a unified control plane that provides visibility across all their assets, no matter where they are located.

This will almost certainly involve a shift towards software-defined infrastructure, allowing for increased automation through platforms like Kubernetes and Ansible. The enhanced API control these platforms provide allows operators to better understand their environment and simplify management across the multi-cloud.
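
To make the idea of unified, API-driven visibility concrete, here is a minimal sketch using the official Kubernetes Python client. It assumes every cluster – on-premise or in any cloud – is already registered as a context in your local kubeconfig; listing persistent volumes is just one illustrative query.

```python
# One view over persistent storage in every Kubernetes cluster we can reach,
# wherever each cluster happens to run.
from kubernetes import client, config

contexts, _ = config.list_kube_config_contexts()
for ctx in contexts:
    name = ctx["name"]
    api = client.CoreV1Api(api_client=config.new_client_from_config(context=name))
    for pv in api.list_persistent_volume().items:
        print(f"{name}: {pv.metadata.name} "
              f"{pv.spec.capacity['storage']} ({pv.status.phase})")
```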

5. Automated storage management

With hybrid multi-cloud operations, the IT environment is only becoming more complex. It is now almost impossible to meet high availability SLAs while relying on manual processes.

Instead, operators should be looking at tools that allow them to automate storage management using policies and artificial intelligence. These tools not only accelerate management and deployment but can also be used effectively by IT generalists, reducing the need for costly, hard-to-hire storage specialists.
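
The pattern behind policy-based automation is simple, even if commercial tools layer AI on top of it. The sketch below shows the idea with hypothetical tier names, thresholds and volumes; production tools apply the same rules continuously against live access telemetry.

```python
from dataclasses import dataclass

# Illustrative tiering policy: demote volumes to cheaper storage once they
# have been idle for longer than each tier's threshold (in days).
POLICY = {"hot": 0, "warm": 30, "cold": 90}

@dataclass
class Volume:
    name: str
    days_idle: int
    tier: str = "hot"

def apply_tiering_policy(volumes):
    """Assign each volume the cheapest tier its idle time qualifies it for."""
    for vol in volumes:
        for tier, threshold in sorted(POLICY.items(), key=lambda kv: kv[1],
                                      reverse=True):
            if vol.days_idle >= threshold:
                vol.tier = tier
                break
    return volumes

for vol in apply_tiering_policy([Volume("db-logs", 2), Volume("archive-2021", 180)]):
    print(f"{vol.name} -> {vol.tier}")
```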

Smarter storage for high availability applications

These five factors are just the starting point for high availability infrastructure design, but they should be enough to help you start asking the right questions and ensure you get the platform your business needs.

To learn more about building a high availability storage platform for the future and how WTL can assist, please give us a call.

Running Oracle on IBM? Why now is the time to revisit your strategy.

According to IBM, a hybrid cloud is defined as Red Hat OpenShift with Red Hat Linux. But for many businesses this is a serious problem, because Oracle does not support:

  • Linux on Power architecture
  • Red Hat OpenShift
  • The IBM Cloud

Yes, the IBM Cloud is optimised for Power applications, but that’s of no use if Oracle does not offer support. And because refactoring Power applications for the x86 public cloud is slow, labour-intensive and expensive, organisations have little appetite for moving to Azure or AWS.

This lack of support means that any organisation running Oracle will need to keep their databases on-premise, on Power10 hardware running AIX – which is completely at odds with a cloud-first strategy.

So what’s the alternative?

Oracle on Oracle

It is an undeniable truth that Oracle workloads tend to perform best on Oracle-engineered hardware. Importantly, Oracle on Oracle is (and always will be) fully supported by Oracle.

But doesn’t keeping Oracle workloads on-premise create the same strategic problem as retaining IBM AIX servers? No.

Oracle engineered systems are designed to integrate seamlessly with the Oracle cloud. Hybrid operations are built into the core of each system, allowing users to manage and migrate workloads across local and cloud infrastructure. Operations can be kept on-premise, migrated to the cloud entirely, or spread across both as required.

Oracle systems offer enhanced security and a smarter path to the cloud. Workloads can be shifted back and forth wherever required, so users can always balance cost, flexibility and performance to meet their strategic computing goals.

Why change server architecture?

Ultimately, IBM’s cloud architecture is incompatible with Oracle – retaining an IBM server architecture limits the potential of your future systems development. And as your digital transformation efforts accelerate, this lack of flexibility could have significant negative consequences.

But there are several other important reasons to consider moving to Oracle-engineered systems. Oracle on Oracle users realise:

  • 256% return on investment (5-year period)
  • 73% less unplanned downtime
  • 47% reduced total cost of operations
  • 40% faster time-to-market

Better cloud integration also brings improved disaster recovery options. Workloads can be moved to the cloud in the event of a local data centre issue, for instance, and you also have the option of modern cloud-based real-time DR solutions that provide instant failover and data recovery.

These are compelling reasons for making the switch – and although it requires additional investment, the returns are far greater than the initial outlay.

To learn more about your Oracle options and how you can make the transition to the cloud smoother (and replace your ageing IBM AIX infrastructure), please give us a call.

Are you ignoring the ‘R’ in Disaster Recovery?

When it comes to a disaster recovery strategy, it is easy to focus on the ‘disaster’ side of provisioning. Identifying, prioritising and copying mission-critical data is straightforward, albeit time-consuming.

But ensuring there is always a copy of your data available is just one part of the story. The second aspect is, as the name implies, recovery – making sure your data can be recovered from backup in line with your strategic goals and SLAs.

Testing, testing…

The final sign-off for any DR strategy will involve a recovery test run, and the strategy will probably mandate at least one follow-up test every year too.

But digital transformation is driving rapid change – there is a very real chance that your IT infrastructure is evolving faster than your DR plans. This is why recovery testing must be a regular aspect of your maintenance routines.

What about the disruption?

It is true that a full DR test can be immensely disruptive and may involve some downtime for mission-critical systems. Then there are the resources required – personnel need to be taken away from other tasks for the testing period, potentially causing their other responsibilities to suffer. A full-scale DR test can be expensive.

This is why most businesses only perform DR exercises when required for an audit or similar. However, delaying testing increases the risk of something going wrong during a real disaster because the plan is out of date. The C-suite needs to make a value call – is the cost of testing DR provisions to ensure they work greater or less than the losses an out-of-date plan would incur during an actual disaster?
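
One way to frame that value call is a simple expected-loss comparison. Every figure below is an illustrative assumption – substitute your own estimates:

```python
# Back-of-envelope model of the testing-vs-losses trade-off.
# All figures are illustrative assumptions, not benchmarks.
annual_test_cost = 40_000      # staff time plus downtime for partial tests
p_disaster = 0.05              # assumed annual probability of a disaster
loss_plan_current = 100_000    # residual loss with a tested, current plan
loss_plan_stale = 1_500_000    # loss when the plan has drifted out of date

with_testing = annual_test_cost + p_disaster * loss_plan_current
without_testing = p_disaster * loss_plan_stale

print(f"Expected annual cost with testing:    £{with_testing:,.0f}")
print(f"Expected annual cost without testing: £{without_testing:,.0f}")
```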

Solving the problem with regular, smaller tests

A full DR test is always the best way to ensure your plans will actually work when needed. But there is an alternative that will allow you to optimise your provisions incrementally – partial DR testing.

Under this scenario, you select a subset of your infrastructure for disaster recovery testing. This could be a branch office, a business unit or a single application – every aspect of your system needs to be tested and refined eventually, so why not focus on one at a time?

It’s also worth remembering that your choice of backup technology will have a significant effect on your recovery point objective (RPO) and recovery time objective (RTO). Tape may be an effective medium for point-in-time backups, but what about the data created between backups? And the time it takes to recover an entire system from tape?

Choosing a solution like Zerto, which offers continuous data protection (CDP), can shorten RPOs to mere seconds. This not only increases your level of protection but also minimises the impact of testing on operations – meaning you should be able to conduct DR testing more regularly, refining your plans and provisions as you go.
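
To see why the backup medium matters so much, compare the worst-case data loss each approach allows. The figures below are illustrative assumptions, including the few seconds of journal lag attributed to CDP:

```python
# Worst-case data loss (RPO) by backup scheme - illustrative figures only.
schemes = {
    "weekly tape backup": 7 * 24 * 3600,   # seconds of data at risk
    "nightly tape backup": 24 * 3600,
    "hourly snapshots": 3600,
    "continuous data protection": 5,       # assumed CDP journal lag
}

for name, rpo in schemes.items():
    label = f"{rpo / 3600:g} hour(s)" if rpo >= 3600 else f"{rpo} seconds"
    print(f"{name:>28}: up to {label} of data at risk")
```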

To learn more about DR testing and Zerto CDP, please give us a call.

Disaster Recovery Planning – Revisited!

An effective disaster recovery (DR) plan only works if it is regularly reviewed and updated. Following our best-practice principles, we’ve updated our DR planning advice to help you refine and improve your strategy and processes.

Here are six things you must do:

1. Identify the key players

If your business experiences a serious system outage, who do you need to alert? Who will be involved in the actual DR process?

Your first step is to identify the key stakeholders, providers, third-party personnel and incident response teams who will help to bring systems back online. You must then negotiate and agree on acceptable SLAs that will allow you to resume operations ASAP.

2. Conduct a Business Impact Analysis (BIA)

What would happen if mission-critical systems went down? What would the wider implications of losing operations be?

There are several categories of business impact you must assess, including:

  • Expenses
  • Legal and regulatory
  • Revenue loss
  • Customer service
  • Brand/reputation damage

The BIA will be invaluable for prioritising DR activities and for identifying acceptable RPOs and RTOs for each business unit.

3. Complete a Risk Assessment

A risk assessment attempts to quantify the likelihood of any system outage occurring. You need to consider the potential hazards to your operations – fire, cyberattack, natural disaster, extended power cut – and the magnitude of the problem each of these events would cause.

You then need to identify the assets at risk from these events. How would they affect personnel, critical infrastructure, operations, corporate reputation and so on? These insights will then feed back into your BIA to provide a 360° view of threats and their effect on your business.
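
A common way to quantify this is a simple likelihood × impact score for each hazard. The events and 1–5 scores below are illustrative assumptions – what matters is the ranking the arithmetic produces:

```python
# Likelihood x impact scoring on 1-5 scales; scores are illustrative.
events = {
    "cyberattack":        {"likelihood": 4, "impact": 5},
    "extended power cut": {"likelihood": 3, "impact": 3},
    "fire":               {"likelihood": 1, "impact": 5},
    "natural disaster":   {"likelihood": 1, "impact": 4},
}

ranked = sorted(events.items(),
                key=lambda kv: kv[1]["likelihood"] * kv[1]["impact"],
                reverse=True)
for name, scores in ranked:
    print(f"{name:>20}: risk score {scores['likelihood'] * scores['impact']}")
```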

4. Prioritise Critical Functions

What are the most important functions required to keep your core business operations running? Identifying these processes will help you properly prioritise your recovery efforts during a disaster.

These business priorities may include:

  • Revenue operations
  • Information security
  • Identity and access management
  • Data protection
  • Payroll and time tracking

5. Build a Communications Chain

In the early stages of a major system outage, communication is critical to ensure the DR processes have been initiated and are proceeding as planned. You need to define a communications strategy that keeps all of the stakeholders identified in step 1 connected and informed about progress, ensuring the plan is executed as smoothly as possible and avoiding costly miscommunication.

Don’t forget to include any third parties in the comms strategy – you will need their assistance during a disaster too.

6. Test, test, test

The only way to prove whether your DR plan works is to test it. Running regular disaster simulations will expose gaps and weaknesses, offering an opportunity to improve the plan before it is needed for real.

Testing will also show whether your RPO and RTO objectives are realistic with current DR provisions – or if you need to invest in new technologies and services to meet them. Testing is an integral aspect of your regular DR planning reviews.

Contact us

To learn more about developing a DR plan that works for your business, and how WTL can help with your business continuity planning, please give us a call.

Pain-free cloud migration? 9 things you need to know

The cloud has become an established part of most organisations’ IT strategy. However, the example of early adopters shows that migration is not necessarily smooth – which is hardly a surprise given that the cloud represents a significant change in operating processes.

Here are nine things to consider that will make your transition easier.

1. Not everything belongs in the cloud

Almost all businesses currently operate a hybrid model, with some of their workloads migrated to the cloud and some retained on-premise. Why? Because some workloads are best kept locally.

If your business has decided to adopt a cloud-first strategy, you need to be realistic – not everything should be migrated.

2. Audit everything

Before migrating anything, you need to know what you have. Carrying out a full audit of your operating environment and applications is essential. This will allow you to prioritise workloads for migration – and identify the technical challenges involved in ‘cloudifying’ them.

Top tip – your cloud provider may have tools to make the auditing process easier.

3. Use technology wisely

Most businesses understand the power of disaster recovery tools and how they can be used to fail over to a secondary location. The same replication capability makes DR tools an excellent aid for cloud migration too – even though this final transition is intended to be permanent.

Take a look at your current DR capabilities and check to see whether they could assist with your cloud migration project.

4. Leverage VMware portability

VMware virtual servers are designed to be portable, and the VMware suite includes functionality built to ease the transition to the cloud. Take VMware vCloud Extender, which creates a Layer 2 VPN between your on-premise data centre and the cloud, allowing you to migrate workloads natively, at your own pace and without downtime.

Alternatively, you can build new VMs or take advantage of export/import functionality offered by your service provider if preferred.

5. Plan for physical workloads

Perhaps the biggest challenge you face is migrating physical workloads. Remember, not all workloads belong in the cloud, so it may be that you decide to retain at least some physical servers locally.

For others, cloud migration offers an opportunity to virtualise servers and realise all the benefits of the technology across the rest of your estate.

6. How to transfer data sets?

Data storage is a constant headache, which is why moving datasets to the cloud is one of the first challenges you will encounter. Seeding – shipping physical disks containing a point-in-time backup – is an obvious solution, but it tends to be costly, inefficient and not entirely error-free.

In some cases, a combination of seeding and DR provisioning may offer a better solution for getting your data into the cloud and reducing errors during the transfer.

7. Calculate your bandwidth requirements

In most cases, your current bandwidth will be sufficient for day-to-day cloud operations. But it never hurts to check whether you have enough speed and availability to make the most of the new operating model.
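
A quick sanity check is to estimate how long an initial bulk copy would take over your existing link – which also shows when seeding (see point 6) is the better option. The dataset size, link speed and efficiency factor below are all assumptions to replace with your own figures:

```python
# Rough transfer-time estimate for an initial bulk copy to the cloud.
dataset_tb = 50      # dataset size to migrate (terabytes)
link_mbps = 500      # usable uplink (megabits per second)
efficiency = 0.7     # assumed real-world protocol/contention overhead

bits_to_move = dataset_tb * 8e12
seconds = bits_to_move / (link_mbps * 1e6 * efficiency)
print(f"~{seconds / 86400:.1f} days to copy {dataset_tb} TB "
      f"over a {link_mbps} Mb/s link")
```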

8. Consider ongoing management needs

Once your workloads are in the cloud, how will you manage them? What tools are supplied by your vendor? What else would help? Do your team need additional training before go-live?

Make sure you know how you will measure and report on your KPIs – and control cloud spending. And don’t forget to examine the support options available from potential cloud suppliers – you will undoubtedly require their help at some point in the future.

9. Build a partnership

Cloud migration is often a step into the unknown – so it makes sense to get help to avoid common mistakes. Work with potential cloud providers to understand your needs and how they can help. Building a strong relationship early in the project will open new opportunities to improve your cloud operations once the move has been completed.

Ready to learn more?

To discover more about smooth cloud migrations and how your business can avoid costly pitfalls, please give the WTL team a call today.
