WTL

Hybrid Storage Array

Seven critical capabilities to consider when choosing hybrid storage arrays

As part of its research into the market, Gartner has identified seven critical capabilities that buyers must assess when purchasing a hybrid storage array. These factors offer a way to directly compare the various products available and ensure you buy the best technology for your business.

The seven critical capabilities are:

1. Manageability

The lower your overheads, the greater the return on your storage investments. Gartner assesses how available solutions simplify management, awarding additional ‘marks’ for features that reduce the need for manual input.

Modern hybrid storage arrays will often use single-pane management consoles coupled with in-depth monitoring and reporting to provide engineers with all the information they need to optimise performance. There should also be provisions to automate activities, including your response to outages and failures.

2. Reliability, Availability and Serviceability (RAS)

Reliability and availability are crucial to the always-on business, so Gartner also assesses how well each hybrid storage system meets these needs. To score highly, vendors will build systems using reliable, de-rated components to increase mean time between failures (MTBF).

Systems will also be engineered to reduce disruption caused by updates and upgrades – code or hardware. Enhanced diagnostics boost performance and reduce human error, while other potential features include protection against data corruption and tolerance of multiple component failures.

3. Performance

Raw performance is important. Gartner analyses IOPS, bandwidth and latency – the performance of the overall system, not the individual components. Don’t forget to evaluate the ability to scale performance as demand grows.

4. Scalability

Your data estate will only ever get larger, even with hybrid cloud connectivity. To prepare for that future demand, you will need to consider scalability factors such as capacity growth, along with SLAs linked to system performance.

5. Ecosystem

Your operating environment is increasingly complex, and you need a platform capable of running a broad range of applications. Gartner assesses the ability of each platform to support multiple OSs, hypervisors and third-party independent software vendors (ISVs).

You will also need to assess support for applications such as databases, backup/archiving products and management tools, as well as public cloud vendors, to ensure your entire ecosystem can operate with these hybrid storage systems.

6. Multitenancy and Security

Cloud-like storage allows you to make better use of capacity – but workloads need to be properly segregated to prevent issues between development and production. Your ideal storage platform should support diverse workloads, protected by user access controls and audit logging of system configuration changes.

7. Storage Efficiency

Even though the data estate is expected to grow exponentially, that expansion still needs to be managed. Gartner recommends that hybrid storage systems include efficiency features such as compression, deduplication, thin provisioning and auto-tiering to maintain control of usage and cost.

What does this actually look like?

Even with a 7-point checklist, the process of analysing potential hybrid storage arrays is not necessarily straightforward. To learn more about your options – and how to choose a platform that will help your business achieve its strategic goals – please give us a call.

Oracle X8M

Oracle X8M and the end of the DIY architecture nightmare

Data access speeds are now a critical strategic priority. The faster you can access information, the faster you can make informed decisions, answer customer queries, execute trades or respond to changing market conditions.

To exert maximum control over your environment – and to extract every last drop of speed – it seems logical to build out a DIY architecture. However, this approach can be problematic.

Assembling ultra-high-spec hardware is expensive – as are the increased management and service overheads. But the penalties are worth the performance gain – in theory.

There is an easier way

The Exadata X8M product family has been engineered and optimised to deliver the very best possible Oracle database performance, out-classing everything – including DIY storage solutions. X8M achieves this by combining database-optimised compute, networking and storage hardware with specialised algorithms that can vastly improve all aspects of business-critical OLTP applications.

How?

So what exactly does the X8M offer?

Persistent memory (PMEM)

Memory will always outperform even the fastest flash storage. The X8M offers up to 21.5TB of PMEM per standard rack, allowing for 1.5TB to be allocated to each storage server.

Using X8M, businesses can achieve up to 16 million OLTP 8K read IOPS – a 2.5x improvement over the original X8 series. And because PMEM is persistent, it is impervious to data loss caused by power outages or similar failures.

Improved network throughput

The X8M uses RDMA over Converged Ethernet (RoCE) as its internal fabric. Offering 100Gb RDMA, the system delivers latency of under 19 microseconds – a 10x improvement over the InfiniBand fabric used by the X8.

KVM virtualisation

Oracle Exadata X8M uses KVM-based virtualisation so that it can take advantage of RoCE and persistent memory. KVM offers significant benefits – guest VMs support twice as much memory (1.5TB per server), network latency is reduced, and you can host up to 50% more guest VMs per server.

Cost savings all round

Although Exadata X8M boasts some impressive technical specifications, the real story is the savings your business can achieve. More memory per VM increases database performance, while reduced network latency allows you to access and use your data more quickly.

Using Exadata X8M as the foundation of your operations, you will also be able to host more virtual machines on fewer physical boxes. Consolidating hardware reduces administrative overheads and running costs – without compromising database performance.

DI Why?

Exadata X8M simplifies operations, reduces costs and dramatically improves the speed of your Oracle Databases. Given that the entire ecosystem is optimised out-of-the-box for Oracle, why would you choose a more costly, less effective DIY platform?

To learn more about the Oracle Exadata X8M product suite and how your business can achieve its strategic database performance goals, please get in touch.

Continuous Data Protection

8 killer benefits of continuous data protection (CDP)

Last week we discussed continuous data protection (CDP) and how it works. This week we outline eight key reasons why investing in CDP technologies will benefit your business.

1. Always-on replication

Because CDP uses change-block tracking, your data is replicated the moment it is written to storage. The CDP backup mechanism is always on, so every change is captured in real time. Recovery point objectives (RPOs) can be reduced to a matter of seconds, while recovery time objectives (RTOs) are shortened to a few minutes – considerably faster than traditional backup tools.
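To make the mechanism concrete, here is a minimal Python sketch of change-block tracking. The class names (`CBTReplicator`, `ReplicaTarget`) are illustrative assumptions – real CDP products do this inside the hypervisor’s I/O path, not in application code.

```python
import time
from dataclasses import dataclass

@dataclass
class BlockWrite:
    block_id: int     # address of the changed block on the source volume
    data: bytes       # new contents of the block
    timestamp: float  # when the write occurred

class ReplicaTarget:
    """Receives changed blocks and applies them to the replica copy."""
    def __init__(self):
        self.blocks = {}

    def apply(self, change: BlockWrite):
        self.blocks[change.block_id] = change.data

class CBTReplicator:
    """Forwards each changed block the moment it is written, so data is
    protected continuously and no backup window is ever needed."""
    def __init__(self, target: ReplicaTarget):
        self.target = target

    def on_write(self, block_id: int, data: bytes):
        # Called for every write the hypervisor sees.
        self.target.apply(BlockWrite(block_id, data, time.time()))

# Every write is mirrored in real time, keeping the RPO to seconds.
replica = ReplicaTarget()
replicator = CBTReplicator(replica)
replicator.on_write(42, b"updated customer record")
```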

2. Zero performance impact

Traditional backup tools use a series of resource-intensive snapshots to capture updated information. CDP instead relies on a journal – and even that is only used until committed to the selected point in time. This approach has a considerably smaller impact on performance than storing multiple snapshots on replica virtual machines.

3. Journal-based recovery

The journal approach used by CDP ensures that there is a log of every single change made to applications and data. The journal allows you to recover data from any point in time, simply by selecting a checkpoint and rolling back.
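As a rough sketch of the idea – the `Journal` class and `recover_to` method are illustrative, not any vendor’s actual API – journal-based recovery amounts to replaying the log up to a chosen moment:

```python
class Journal:
    """Time-ordered log of every change; any timestamp can be a checkpoint."""
    def __init__(self):
        self.entries = []  # (timestamp, block_id, data), appended in time order

    def record(self, timestamp: float, block_id: int, data: bytes):
        self.entries.append((timestamp, block_id, data))

    def recover_to(self, base: dict, checkpoint: float) -> dict:
        """Rebuild the volume as it looked at `checkpoint` by replaying
        every entry written at or before that moment."""
        state = dict(base)
        for ts, block_id, data in self.entries:
            if ts > checkpoint:
                break  # entries are in time order
            state[block_id] = data
        return state

# Roll back to just before a corruption that happened at t=105.0.
journal = Journal()
journal.record(100.0, 1, b"good data")
journal.record(105.0, 1, b"corrupted data")
volume = journal.recover_to(base={}, checkpoint=104.0)
assert volume[1] == b"good data"
```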

4. Scalable architecture

Snapshots on replicated VMs have a serious architectural flaw in that they offer no way to control the total space used for snapshots. If the datastore runs out of space, replication breaks and backups fail. CDP allows you to place your journal on any data store and to apply size limits and warnings that allow you to better scale and control your backup architecture.
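A simple sketch of how such controls might behave; the 80% warning threshold and the method names are assumptions for illustration only:

```python
class BoundedJournal:
    """Journal with a hard size cap and an early-warning threshold --
    controls that snapshot-based replication cannot offer."""
    def __init__(self, max_bytes: int, warn_ratio: float = 0.8):
        self.max_bytes = max_bytes
        self.warn_bytes = int(max_bytes * warn_ratio)
        self.used_bytes = 0

    def append(self, entry_size: int):
        if self.used_bytes + entry_size > self.max_bytes:
            # Age out the oldest entries rather than breaking replication.
            self._evict(entry_size)
        self.used_bytes += entry_size
        if self.used_bytes >= self.warn_bytes:
            print(f"warning: journal at {self.used_bytes / self.max_bytes:.0%} of its limit")

    def _evict(self, needed: int):
        # Placeholder: a real journal would drop its oldest checkpoints.
        self.used_bytes = max(0, self.used_bytes - needed)
```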

5. Reduced storage demands

By replacing snapshots with a journaling system, CDP consumes no additional space on the source storage. CDP also offers significant space savings, consuming just 5% to 10% of the target storage – far less than the 20% to 30% required for storing snapshots.

6. Lightning-fast ransomware recovery

With a continuous stream of recovery checkpoints, you can roll back to any point in time. You can reliably restore data to mere seconds before the corruption took place, using a process that takes just a few minutes to complete.

7. Lower TCO

As a software-based solution, CDP is easy to install, manage and scale. Some CDP solutions are available on a subscription licensing basis, ensuring you only pay for what you actually use and offering greater control – and returns.

8. Modernised IT infrastructure

CDP converges backup, disaster recovery and data mobility across on-premises, hybrid and multi-cloud environments – effectively all backups, everywhere, managed from a single console. The system should also integrate with your existing IT infrastructure, eliminating the need for costly point solutions.

Time to act

Continuous data protection solves many of the challenges your business faces. Shortened RPOs and RTOs are welcome, as is the reassurance that every data change is captured and available for recovery whenever you need it.

To learn more about CDP and what it offers your business, please get in touch.

Continuous Backup and Recovery

Understanding continuous backup and recovery technologies

Continuous Backup and Recovery is designed to meet all of your data protection requirements in a single platform across your entire IT estate. The idea is to converge backup, disaster recovery and data mobility across on-premises and cloud systems.

Additionally, continuous backup and recovery tools should offer orchestration, automation and analytics to further simplify data protection.

Continuous Data Protection (CDP)

In the past, synchronous replication was reserved for mission-critical workloads. Using Change Block Tracking (CBT) for near-synchronous replication, it is now possible to back up data in real time without having to worry about backup windows and schedules.

CDP is always-on, operating at the hypervisor level and integrating with existing assets. You can benefit from the technology immediately without costly hardware upgrades or replacements.

Track changes with Journaling

By tracking and recording every single change in your application or server, the journal offers finely detailed recovery options. Every five seconds, a checkpoint is stamped into the journal, marking the recorded changes as potential recovery points.

Any of these checkpoints can be used as a recovery point for files, VMs, applications and more, helping to significantly reduce RPOs and potential data loss. A journal is created for every one of your virtual machines – even when you have thousands – giving you total control of your production system and backups.
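For a concrete picture of the stamping mechanism, here is a minimal sketch assuming the five-second interval described above – the loop structure is illustrative rather than a vendor implementation:

```python
import time

CHECKPOINT_INTERVAL = 5.0  # seconds, per the description above

def stamp_checkpoints(journal: list, run_for: float):
    """Append a checkpoint marker to the journal every few seconds.
    Each marker becomes a candidate recovery point."""
    start = time.monotonic()
    last = start
    while time.monotonic() - start < run_for:
        now = time.monotonic()
        if now - last >= CHECKPOINT_INTERVAL:
            journal.append(("checkpoint", time.time()))
            last = now
        time.sleep(0.1)  # yield between checks
```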

Long-Term Retention

Checkpoints in the journal are available for up to 30 days – beyond that, your CDP solution will need to offer long-term repositories (LTR). The LTR and replicated files are located on secondary storage – often a low-cost cloud repository.

LTR can be configured to maintain files and journals for up to seven years – or even longer if required.
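A compact sketch of that tiering policy, using the retention periods quoted above (the data layout and function name are assumptions):

```python
from datetime import datetime, timedelta

JOURNAL_RETENTION = timedelta(days=30)      # checkpoints stay in the journal
LTR_RETENTION = timedelta(days=365 * 7)     # then up to seven years in LTR

def tier_checkpoints(journal: list, ltr_repository: list, now: datetime):
    """Move checkpoints older than 30 days from the journal to the LTR
    (typically low-cost cloud storage), and purge LTR data past 7 years."""
    for cp in list(journal):
        if now - cp["created"] > JOURNAL_RETENTION:
            journal.remove(cp)
            ltr_repository.append(cp)
    ltr_repository[:] = [cp for cp in ltr_repository
                         if now - cp["created"] <= LTR_RETENTION]
```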

Multi-VM Application Consistency

Complex, enterprise-grade applications typically run across multiple virtual machines. To ensure consistent, accurate data at the point of restore, you must be able to select a consistent checkpoint across them all. In this way, you guarantee application consistency, because all VMs are restored as a single entity to the exact same point in time.
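As a simple illustration of selecting such a checkpoint – the function name and journal layout here are assumptions, not a vendor API:

```python
def latest_consistent_checkpoint(journals: dict) -> float | None:
    """journals maps vm_name -> set of checkpoint timestamps.
    Returns the most recent timestamp present in every VM's journal."""
    shared = set.intersection(*journals.values())
    return max(shared) if shared else None

# Three VMs hosting one application: 100.0 is the latest shared checkpoint,
# so all three can be restored as a single entity to that moment.
journals = {
    "db-vm":  {90.0, 95.0, 100.0},
    "app-vm": {95.0, 100.0, 105.0},
    "web-vm": {100.0, 105.0},
}
restore_point = latest_consistent_checkpoint(journals)  # -> 100.0
```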

Orchestration and Automation

The complexity of your operating environment is reflected in the complexity of your backup processes. Consolidating continuous backup and recovery into a single platform will immediately help to simplify operations. However, a leading-edge tool will also include orchestration and automation functionality.

By configuring these settings in advance, the recovery process – and its intermediate stages – can be triggered with a few mouse clicks. Much of the procedure then completes automatically, freeing the IT team to focus on other activities in the middle of a data centre crisis.

Analytics

Understanding how your backup and recovery processes are performing is about more than a pass/fail indicator. Continuous backup and recovery tools should provide detailed analytics so you can see trends, anomalies and issues without having to dig through event logs and reports. Using these analytics, you should be able to model “what if” scenarios for planning future backup infrastructure requirements – and to spot opportunities for improvement.

Taking the next step

The value of your data continues to increase – data loss is no longer tolerable or acceptable. To avoid problems, your business should be looking at implementing a continuous backup and recovery toolkit immediately.

To learn more about your options – and our preferred continuous backup and recovery platform, Veeam – please get in touch.

Backup and Recovery

Dealing with traditional backup and recovery challenges

In the data-driven business, access to data is everything. Protecting against loss is a strategic priority – and yet many businesses are still reliant on 30-year-old concepts and processes.

Some of the backup and recovery challenges that we have learned to live with over the years are now becoming untenable.

Here are some of the issues you need to consider – and address – as your digital transformation efforts gather pace.

1. Infrastructure complexity

The rapid evolution of your data estate has created an infrastructure that is complex to maintain – and even harder to back up. In the age of hybrid computing, you could be using any combination of technologies and applications, from tape autoloaders to tiered storage arrays with multiple backup targets.

This is an administrative nightmare – and a significant risk to your operations when disaster strikes.

2. Tool complexity

This complicated infrastructure typically requires multiple applications to meet your backup goals. Disaster recovery, system-level backup, file-level backup and long-term archiving – all essential functions, each with its own toolset, tailored to your disaster recovery objectives.

If you’re lucky, these apps will integrate neatly. In reality, you’re facing another administrative nightmare.

3. Unacceptably long backup windows

The corporate data estate is growing exponentially – unlike your backup windows. Backup jobs are typically run after hours to limit their impact on system resources. Eventually, however, there is too much data to copy in the allotted time – and that’s before you consider the I/O limitations of your network and backup hardware, or the various backup checkpoints spread across your VMs.

Eventually, overlapping backup windows and unclear checkpoints will bring the whole system into doubt – can you trust that your backups are accurate and complete?

4. Unacceptably long recovery windows

If saving data to backup is slow, recovery is just as problematic. Pulling data back from an archive is also limited by the I/O performance of your hardware and network. And there’s a very good chance you will be trying to recover data during working hours when there is additional load on your servers, slowing operations further still.

5. Outdated RPOs and RTOs

The size of your backup sets coupled with the physical limitations of the hardware means that RPOs and RTOs are commonly quoted in days. Waiting days to successfully complete a backup/recovery operation is simply unacceptable – and unsustainable – when dealing with mission-critical applications.

6. Outdated technologies

Periodic daily backups are now only suitable for rarely used secondary systems. Incremental snapshots and array-based replication are a far safer option, allowing you to back up regularly throughout the day and narrowing the window during which data may be lost.

However, these developments are still insufficient for your line-of-business applications that are updated many thousands of times each hour. Every gap between snapshots is an opportunity for data to be lost.

A glimpse of the future – continuous backups

Ultimately, the success of any backup and recovery strategy comes down to the speed at which you can resume operations. To help narrow that timeframe, we now have continuous data protection (CDP). CDP uses software-based replication to capture every data modification, copying it to a target repository.

CDP is an ideal solution for disaster recovery; replicating data to the cloud allows your business to resume operations in a matter of minutes – from anywhere in the world.

To discuss your backup and recovery challenges and learn more about overcoming the limitations of legacy backup and recovery solutions, please get in touch.
