Disaster Recovery

Making the business case for disaster recovery

Because significant system outages are relatively rare, many SMEs have deliberately underinvested in their data protection provisions. Backup allows them to recover almost all of their systems and data (eventually), so why invest in a true disaster recovery solution?

Although most see this as a calculated risk, the impact of an outage can be devastating for a data-driven business. Here’s what you need to consider before trying to justify not investing in disaster recovery tools.

Reducing RPO windows

It is possible to recover most of your data from a traditional backup, but:

  • The process is typically quite slow
  • Data changes significantly between backups – how much information can you afford to lose in that window?

It is entirely possible that relying on backups could cost your business 22 hours or more of lost productivity.

A disaster recovery platform is specifically designed to reduce the recovery point objective (RPO – the amount of data your business can ‘afford’ to lose) to minutes or seconds. Cloud-based DR systems also allow you to create service tiers so that you can prioritise what is protected, the performance of the underlying infrastructure and how it is brought online in an emergency. This granular control ensures you can balance availability, performance and cost to meet your strategic recovery goals.
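
To make that trade-off concrete, here is a minimal Python sketch of how the capture interval bounds your worst-case data loss – the tier names and intervals below are purely hypothetical, not any product’s defaults:

    # Hypothetical sketch: the capture interval of each protection tier
    # bounds the worst-case data loss (RPO). Tier names and intervals
    # are examples only, not any product's defaults.
    from datetime import timedelta

    # In the worst case, everything written since the last capture is lost.
    TIERS = {
        "nightly-backup": timedelta(hours=24),
        "hourly-snapshot": timedelta(hours=1),
        "continuous-dr": timedelta(seconds=10),
    }

    for tier, interval in TIERS.items():
        print(f"{tier}: up to {interval} of data at risk")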

Affordable resilience

Modern disaster recovery platforms use the cloud to provide massive scalability without upfront hardware investment. You simply pay for the storage you use. There is no longer any need to invest in co-located data centres, duplicate hardware set-ups, licensing or the resources required to administer them.

Using cloud-based services allows you to avoid significant up-front capital investment – immediately answering one of the main arguments against deploying DR. It also ensures that your data is fully recoverable from anywhere in the world.

Built for the cloud

Backup and recovery tools are normally designed for use with on-premise systems. This becomes a serious shortcoming as your business adopts more cloud-based services.

DR tools are increasingly cloud-native, meaning that they can capture snapshots of data stored in hosted systems. Importantly, they can also restore data to other cloud platforms, offering a useful alternative if your on-premise data centre is out of action.

Improve your testing capabilities

Disaster recovery tools create a complete copy of your operating environment that is ready to be recovered at any moment. However, you can also use these DR copies for advanced testing and planning.

Say you want to assess the potential risks associated with a new software update. Rather than deploying into your live environment, you can use a DR copy. All of your tests are completely accurate and reliable because the copied system is identical to your production environment. Tests can be completed without any risk to operations.

To learn more about why your business can’t afford not to invest in disaster recovery tools – and what you stand to gain – please get in touch.

containers data protection

Data Protection Trends and Strategies for Containers

The demands of DevOps and continuous development have helped to accelerate the uptake of containers. Indeed, containers orchestrated by platforms like Kubernetes are set to become the preferred production deployment technology within the next two years.

Containers present a new challenge for developers – and the data protection team responsible for ensuring they are properly backed up. So as your business accelerates container adoption, what do you need to consider?

Existing strategies and technologies probably won’t work

Unsurprisingly, traditional data protection strategies are focused on corporate data. But there is a problem with this approach: container-based applications cannot be protected in the same way as traditional applications. Backing up data alone will not restore the containers themselves in the event of a system outage.

Anecdotal evidence suggests that this misunderstanding is not uncommon. This means that some businesses are operating a data protection strategy that will probably not meet their RPO targets. There is a significant risk of permanent data loss if the data protection strategy is not adjusted to the realities of the new containerised environment.

Orchestration is essential

Because we have been doing the same thing for many years, attitudes and approaches to backup have become ingrained. Research conducted by the Enterprise Strategy Group suggests that many businesses are ignoring orchestration as part of their data protection strategy because they deem it irrelevant.

But as the containerised environment becomes more complex, managing backup and recovery will become too resource-intensive to complete using current tools. Orchestration offers a way to automate many low-level operations during a data loss incident. You will be able to accelerate recovery options, maintain operations according to SLAs and reduce the risk of permanent data loss too.
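
As a purely conceptual sketch – the step names below are hypothetical and do not correspond to any vendor’s API – orchestration amounts to codifying the recovery runbook so it executes as an ordered, repeatable sequence rather than a series of manual tasks:

    # Hypothetical sketch: a recovery runbook codified as an ordered plan.
    # Step names are illustrative only - they do not map to a real vendor API.
    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("recovery")

    def restore_volumes():
        log.info("Restoring persistent volumes from the latest replica")

    def redeploy_workloads():
        log.info("Re-deploying application workloads")

    def verify_health():
        log.info("Running health checks against recovered services")

    def repoint_traffic():
        log.info("Switching user traffic to the recovered environment")

    RUNBOOK = [restore_volumes, redeploy_workloads, verify_health, repoint_traffic]

    def run_recovery():
        # In a real orchestration tool, a failed step would trigger
        # alerting and rollback rather than continuing blindly.
        for step in RUNBOOK:
            step()

    run_recovery()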

Re-assess data protection provisions

The hybrid cloud model has already complicated data protection. Many IT professionals are still grappling with the challenge of properly backing up data held in cloud platforms and SaaS silos; containers will further complicate matters.

It is essential to realise that current tools and techniques are not properly aligned with the needs of the containerised cloud. An urgent review of the data protection strategy is required, along with a reappraisal of the tools that are currently in use.

This review should also investigate the various specialist tools available that are specifically designed for backup and recovery of containerised applications. For maximum protection and flexibility, you will need a toolset capable of operating at Kubernetes-level and lower – including cluster-level, pod-level, namespace-level and tag-level.
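
As a simple illustration of what that granularity means in practice, the sketch below uses the official Kubernetes Python client to scope a backup set by namespace and label. The "payments" namespace and "backup=true" label are hypothetical examples, and a reachable cluster with a valid kubeconfig is assumed:

    # Minimal sketch: scoping a backup set at namespace and pod level using
    # the official Kubernetes Python client. The "payments" namespace and
    # "backup=true" label are hypothetical; a valid kubeconfig is assumed.
    from kubernetes import client, config

    config.load_kube_config()  # use load_incluster_config() when running in a pod
    v1 = client.CoreV1Api()

    # Namespace-level scope, narrowed to pod level via a tag (label):
    # only pods explicitly marked for backup are included.
    pods = v1.list_namespaced_pod(
        namespace="payments",
        label_selector="backup=true",
    )

    for pod in pods.items:
        print(f"Would include in backup set: {pod.metadata.name}")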

Speak to an expert

As an emerging technology space, data protection for containers is not well understood, and the necessary skills are hard to come by. Rather than hoping for the best or trying to re-engineer your existing strategy with unsuitable tools, you should seek third-party expertise.

To learn more about protecting your containerised data and applications, please give the WTL team a call and we will guide you through your options.

NetApp FAS Storage Arrays

Why Choose NetApp FAS Storage Arrays?

As you plan the next stage of data storage evolution, there are hundreds of different vendor and technology combinations available. So why should you consider NetApp FAS Storage as your platform?

Simple

Ideally, storage should be set-and-forget – failing that, you are looking for a solution with minimal administrative overheads. NetApp FAS combined with the ONTAP data management operating system allows you to provision storage from cold in just ten minutes.

This combination of hardware and software has been designed to streamline and simplify storage management for the entire lifespan of your platform. You can even upgrade or service your storage during business hours without affecting operations or performance. Cards and controllers are designed for quick, straightforward replacement, minimising the risk of servicing errors that could result in downtime.

Deploying NetApp FAS also helps to simplify your overall infrastructure. FAS arrays offer the same functionality as both SAN and NAS, allowing you to replace both with a single alternative.

Smart

Flexible and modular, the NetApp FAS range can scale up and out according to your changing data storage needs. Scale up by adding capacity and controllers; scale out by adding nodes – up to 176PB of total capacity.

ONTAP also simplifies the process of integrating public cloud services with your FAS arrays. Data and workloads can be migrated between on-premise and hosted services quickly and easily – or automated entirely where appropriate. You can define tiers of service, then allow ONTAP to migrate data to the storage platform best suited to deliver on those requirements.

Built with one eye on the future, NetApp FAS supports large volumes of data in various formats. With provisions for unstructured data, FAS arrays are suitable for vanilla file storage and big data analytics operations.

Trusted and Secure

Using ONTAP Volume Encryption, any data held on your NetApp FAS arrays can be encrypted while at rest. This encryption is built-in as standard, so there’s no need for special encrypting disks. Once data is no longer required, individual files can be cryptographically shredded and sanitised, making them unrecoverable in line with data protection requirements like GDPR.
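
The idea behind cryptographic shredding is straightforward: if each file is encrypted under its own key, destroying that key renders the data permanently unreadable. Here is a minimal conceptual sketch using the Python cryptography package – an illustration of the principle only, not how ONTAP implements it internally:

    # Conceptual sketch of cryptographic shredding using the Python
    # "cryptography" package. Illustrates the principle only - this is
    # not how ONTAP implements the feature.
    from cryptography.fernet import Fernet

    # Each file gets its own encryption key.
    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(b"sensitive customer record")

    # Normal access: with the key, the data is readable.
    assert Fernet(key).decrypt(ciphertext) == b"sensitive customer record"

    # "Shredding": destroy the key. The ciphertext may still sit on disk,
    # but without the key it is computationally unrecoverable.
    del key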

Other features like role-based access control (RBAC) and multi-factor authentication prevent unauthorised use of admin systems and your data. Even more granular is storage-level file security, which protects data against unauthorised access or deletion – even by admin users.

The ONTAP and FAS combination also offers excellent replication capabilities. This helps to accelerate recovery and ensure consistent data protection across your application estate, including asynchronous replication to the cloud for remote backup and recovery. There’s even deep integration with leading backup applications to further simplify management.

NetApp FAS storage has a proven record of real-world six-nines (99.9999%) availability, underscoring the built-in resilience of the platform. Market-leading reliability means that businesses across the world are happy to place their trust in FAS arrays for mission-critical data storage.

A great choice for your data storage requirements

The NetApp FAS family offers a solid foundation on which to build your data storage future. Excellent features combined with leading security provisions add up to a platform you can trust with your most important workloads.

To learn more about NetApp FAS and what it offers your business, please get in touch.

Oracle Exadata X9M

Introducing the all-new Oracle Exadata X9M platform

When the Oracle Exadata X8M platform was released, no other database server came close in terms of performance, either on-premise or in the cloud. But with the release of the new Oracle Exadata X9M platform, an all-new standard has been set.

According to Oracle, this latest generation server offers a massive 70% performance boost, delivering up to 27.6 million SQL IOPS per rack. This is achieved by using Intel Ice Lake 32-core CPUs to give a 33% increase in the maximum number of cores and RAM per server, along with a 64% increase in memory bandwidth.

By replacing the 14TB disks used in the X8M series with 18TB alternatives, the X9M allows users to increase potential storage capacity without affecting the purchase price. Overall, each rack offers up to 1,216 DB cores, 38TB of memory, 3.8PB of raw disk, 920TB of NVMe (Non-Volatile Memory express) flash, and 27TB of Intel Optane PMem (Persistent Memory).

So what?

The Oracle Exadata X9M platform is built for the most demanding of database deployments. SQL IOPS are also 42% cheaper than on the X8M, making the platform an affordable choice for mid-sized customers with transaction-intensive workloads, for instance.

The X9M platform is also well suited to analytics workloads. With up to 1TB/second of SQL throughput per rack, multi-terabyte datasets can be scanned in seconds – ideal for real-time analytics such as IoT or financial services applications. Scanning is also 47% cheaper than on the X8M, putting this class of analytics within reach of smaller organisations.

Cloud-like operations – in your data centre

Exadata Cloud@Customer X9M is described by Oracle as “the world’s fastest on-premises cloud database system”. And with IO latencies of under 19 microseconds, they are almost certainly correct.

According to the Wikibon community, a cloud database:

  • Matches the infrastructure technology with the database application performance requirements
  • Scales horizontally and vertically to match the business requirements seamlessly
  • Automatically provides appropriate levels (SLAs) of availability, speed-of-recovery, and data-loss in recovery required by the business
  • Scales up and down rapidly and with no operational impact to changing demand
  • Allows users to pay only for the compute and storage used
  • Provides automation of all standard operating processes and database options

The X9M meets all these criteria – and it significantly outperforms Amazon RDS and Microsoft Azure SQL. Exadata Cloud@Customer X9M delivers 50x better OLTP latency than Amazon RDS and 100x better than Azure SQL. For analytics functions, X9M is 25x faster than Azure and 72x quicker than Amazon RDS.

The Exadata Cloud@Customer platform offers the best of both worlds: cloud-like scalability and on-demand resource allocation, coupled with the extreme performance normally only found in an on-premise data centre. Indeed, the X9M is the perfect platform for high-demand, high-performance Oracle databases.

To learn more about the Oracle Exadata X9M platform and what it offers your business, please give us a call.

Continuous data protection

Digital transformation and Continuous Data Protection

Digital transformation efforts are seeing businesses re-engineer systems and processes to become ‘data-driven’. According to research conducted by IDC, 60% of organisations have already implemented tools and methods to use data more effectively.

The issue of availability

As the name implies, data-driven operations are almost entirely reliant on data. Accurate, timely, contextual information must be available whenever required to assist with decision-making and automation. This makes data protection and data recovery an even higher priority.

Historically, businesses may have been able to tolerate some degree of downtime – or potentially even data loss. Now that information is mission-critical, neither is excusable. Both RTO (recovery time objective) and RPO (recovery point objective) are expected to be zero – or as close to it as possible.

Traditional backup systems cannot deliver

Backup systems have become increasingly complex, employing several different technologies to offer both data protection and data recovery functionality. Backup and recovery software, snapshots, mirrors and replicas all play a role in protecting systems, but they are not bulletproof – there is still the potential for loss in between captures.

Introducing continuous data protection (CDP) into your data protection strategy solves two problems. First, it will help to reduce RPO and RTO to zero. Second, CDP solves many of the problems businesses face with protecting data in their hybrid cloud environments.
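
Conceptually, CDP closes that gap by journalling every write as it happens, so systems can be rolled back to any point in time rather than only to the last scheduled capture. The sketch below illustrates the principle in Python – a toy model, not any vendor’s implementation:

    # Toy model of continuous data protection: every write is journalled
    # with a timestamp, so state can be rebuilt as of any moment in time.
    # Illustrative only - not any vendor's implementation.
    import time

    journal = []  # append-only (immutable) write journal

    def write(key, value):
        journal.append((time.time(), key, value))

    def restore(as_of):
        """Rebuild state exactly as it existed at timestamp `as_of`."""
        state = {}
        for ts, key, value in journal:
            if ts <= as_of:
                state[key] = value
        return state

    write("balance", 100)
    checkpoint = time.time()
    write("balance", 250)       # a later, perhaps malicious, change
    print(restore(checkpoint))  # -> {'balance': 100}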

Cloud trends

The issue of data backup in the cloud is of particular importance. As part of their digital transformation efforts, businesses are ever more reliant on hosted platforms like Microsoft Azure and AWS.

Your data protection provisions will need to adapt to accommodate these hybrid systems, including containerised applications and any data held in SaaS services. In most cases, the best way to achieve this without over-complicating your backup infrastructure is to use CDP.

Unavoidable and urgent

The constant threat posed by malware, particularly ransomware, means that businesses need to act now to protect themselves. According to IDC, 91.5% of businesses have suffered a malicious attack in the past 12 months – and 36.6% have experienced more than 25 attacks over the same period.

Clearly, it is now a case of ‘when’, not ‘if’: almost every business will experience a cyberattack in the near future – often repeatedly. Each attack can be expensive due to employee overtime, lost productivity, the direct cost of recovery and any costs associated with data that is permanently lost.

CDP is particularly resilient to ransomware attacks, storing immutable backups that cannot be changed or overwritten. As such, it is an excellent addition to your data protection strategy – particularly as it can be deployed to protect data stored in the cloud or on-premise.

To learn more about Continuous Data Protection and its role in your digital transformation efforts, please give us a call.