
We can’t teach you how to juggle. We can help you simplify Oracle database management 

The skills gap is not a new phenomenon – businesses have known for some time that they lack at least some of the resources they need to reach their strategic goals. Now, in its report ‘The Cloud Infrastructure and Platform Services Skills I&O Teams Require for the Future’, Gartner has quantified that shortfall.

According to this research, 60% of organisations lack the infrastructure and operations (I&O) skills required to complete the tasks they are responsible for. This is particularly concerning: I&O teams cannot meet their obligations for current tasks – and that’s before they begin introducing new technologies and processes.

How do you plug the gaps?

In an ideal world, your business would have unlimited funding to hire all the staff you need. But as demand for skills like Oracle database management increases, so do salaries, making this approach unsustainable for all but the richest companies.

Fortunately, Gartner has outlined a four-step process for solving this crisis.

1. Promote from within

Your workforce always has the potential to learn new skills, so it makes sense to develop and promote people from within the team. Identify those who already have some experience of Oracle database management and offer them a development and growth plan that will help them acquire the new skills you need.

2. Pair programming and self-paced learning

Encourage knowledge sharing by pairing more and less experienced employees together to enable one-on-one training. At the same time, encourage all of your team to complete self-paced training to help build their understanding and skills.

3. Take advantage of training programs

Oracle provides training and certification programs that are designed to help students learn the latest – and most important – skills. Don’t forget that many MSPs and third parties also offer training, sometimes at no cost, helping to contain training costs while upskilling your workforce.

4. Learn by doing

Knowledge is of little value until it is put into action. Encourage your employees to experiment with new technologies and techniques – and to look for opportunities to apply them in your Oracle database environment.

Adding a stop-gap

Clearly, the Gartner plan will not solve your Oracle database management problems overnight – but it does set out a roadmap for the future. In the meantime, your choice of MSPs and service providers will be crucial to plugging the gaps.

The right partner will be able to provide knowledge and guidance – and resources if required – to help your business get on top of its Oracle database management workloads. They can provide an important resource that allows your I&O teams to push forward with their strategic projects, without having to wait for employees to upskill.

To learn more about Oracle database management services, and how WTL can help, please get in touch – we promise not to try to teach you to juggle.


How to realise the lowest latencies for your enterprise applications

When it comes to artificial intelligence and machine learning applications, every microsecond counts. As well as ensuring you have sufficient processing power, it is essential that you remove latency from every layer of your technology stack.

As you would expect when dealing with enormous data sets, your choice of storage is critical. For cutting-edge AI models to operate optimally, you will need cutting-edge storage to underpin them. And this is where the NetApp AFF A-series comes into its own.

All Flash Arrays – obviously

Flash storage currently outperforms any other medium, so it is the natural choice for mission-critical operations. As the name implies, the NetApp AFF A-series is an all-flash array, using high-speed SSD drives to deliver up to 1.4 million IOPS.

No other storage medium is suitable for processing mission-critical applications involving large data sets – up to 702.7PB with the NetApp AFF family. And with native deduplication, compression and compaction, these systems ensure your storage space is maximised, increasing return on investment and lowering total cost of ownership.
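
As a rough illustration of that efficiency claim, here is a minimal Python sketch – the 4:1 data-reduction ratio is an assumption for illustration only, not a NetApp guarantee:

  # Hypothetical illustration of how data-reduction features stretch raw capacity.
  RAW_CAPACITY_TB = 100     # flash capacity purchased (assumed)
  EFFICIENCY_RATIO = 4.0    # assumed combined dedupe/compression/compaction ratio

  effective_tb = RAW_CAPACITY_TB * EFFICIENCY_RATIO
  print(f"{RAW_CAPACITY_TB} TB raw = {effective_tb:.0f} TB effective at {EFFICIENCY_RATIO}:1")

The higher the reduction ratio your workload achieves, the less raw flash you need to buy – which is where the return-on-investment argument comes from.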

NVMe for maximum throughput

You can further reduce latency by accelerating data transfer between storage and CPU. The NetApp A-series systems offer NVMe/TCP for connection to existing Ethernet infrastructure. Alternatively, NVMe/FC doubles IOPS potential and halves the latency found in traditional Fibre Channel connections. Whichever you choose, NVMe connectivity will also allow you to attach ultra-high-performance storage to your existing SAN without disrupting operations, achieving up to 300GB/s throughput.

Flexible connectivity options

In the modern hybrid cloud operating environment, your business needs seamless connectivity to on- and off-premises systems. In addition to high-speed NVMe, the A-series also offers a complete range of alternative technologies including iSCSI, FCoE, NFS, SMB and Amazon S3.

This means that A-series systems will slot directly into your existing infrastructure. You can take advantage of ultra-high performance SSD storage for local processing and off-load lower priority workloads to low-cost secondary systems or cloud platforms.

Scalability

In addition to support for external services, the NetApp A-series is also fully scalable. As your demands increase, the system scales to 24 nodes for a maximum total capacity of 702.7PB of high-resilience, high-performance all-flash storage.

And as you would expect, the A-series is fully supported by ONTAP software to simplify the process of managing and optimising your storage workloads. ONTAP allows you to scale at speed whenever your applications demand.

Find out more

Ready to learn more about the NetApp AFF A-series and how it will help your business achieve its high-performance computing objectives? Get in touch today and our team of storage experts will guide you through the benefits and features so you understand where the A-series fits into your storage strategy.

Making the business case for disaster recovery

Because of the relative rarity of a significant system outage, many SMEs have deliberately underinvested in their data protection provisions. Backup allows them to recover almost all of their systems and data (eventually), so why invest in a true disaster recovery solution?

Although most see this as a calculated risk, the impact of an outage can be devastating for a data-driven business. Here’s what you need to consider before trying to justify not investing in disaster recovery tools.

Reducing RPO windows

It is possible to recover most of your data from a traditional backup, but:

  • The process is typically quite slow
  • Data changes significantly between backups – how much information can you afford to lose in that window?

It is entirely possible that relying on backups could cost your business 22 hours or more of lost productivity.

A disaster recovery platform is specifically designed to reduce the recovery point objective (RPO – the amount of data your business can ‘afford’ to lose) to minutes or seconds. Cloud-based DR systems also allow you to create service tiers so that you can prioritise what is protected, the performance of the underlying infrastructure and how it is brought online in an emergency. This granular control ensures you can balance availability, performance and cost to meet your strategic recovery goals.
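
To make the RPO arithmetic concrete, here is a minimal Python sketch – all figures are hypothetical – comparing the worst-case data loss of a nightly backup with a cloud DR platform replicating every 30 seconds:

  # Worst-case data loss is roughly the work done since the last good copy.
  # All figures are hypothetical, for illustration only.
  TRANSACTIONS_PER_HOUR = 5_000    # assumed workload
  BACKUP_INTERVAL_HOURS = 24.0     # traditional nightly backup
  DR_INTERVAL_HOURS = 30 / 3600    # DR replication every 30 seconds

  def worst_case_loss(window_hours: float) -> float:
      """Transactions at risk if failure strikes just before the next copy."""
      return window_hours * TRANSACTIONS_PER_HOUR

  print(f"Nightly backup: up to {worst_case_loss(BACKUP_INTERVAL_HOURS):,.0f} transactions lost")
  print(f"Cloud DR:       up to {worst_case_loss(DR_INTERVAL_HOURS):,.1f} transactions lost")

Shrinking the replication window from a day to seconds is what turns a potentially catastrophic loss into a rounding error.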

Affordable resilience

Modern disaster recovery platforms use the cloud to provide massive scalability without upfront hardware investment. You simply pay for the storage you use. There is no longer any need to invest in co-located data centres, duplicate hardware set-ups, licensing or the resources required to administer them.

Using cloud-based services allows you to avoid significant up-front capital investment – immediately answering one of the main arguments against deploying DR. It also ensures that your data is fully recoverable from anywhere in the world.

Built for the cloud

Backup and recovery tools are normally designed for use with on-premise systems. This becomes a serious shortcoming as your business adopts more cloud-based services.

DR tools are increasingly cloud-native, meaning that they can capture snapshots of data stored in hosted systems. Importantly, they can also restore data to other cloud platforms, offering a useful alternative if your on-premise data centre is out of action.

Improve your testing capabilities

Disaster recovery tools create a complete copy of your operating environment that is ready to be recovered at any moment. However, you can also use these DR copies for advanced testing and planning.

Say you want to assess the potential risks associated with a new software update. Rather than deploying into your live environment, you can use a DR copy. All of your tests are completely accurate and reliable because the copied system is identical to your production environment. Tests can be completed without any risk to operations.

To learn more about why your business can’t afford not to invest in disaster recovery tools – and what you stand to gain – please get in touch.


How and why to make the switch to disaster recovery in the cloud

The flexibility and scalability of cloud-based disaster recovery (DR) means that it is rapidly becoming the new standard for data protection operations. A survey carried out by ESG found that 74% of those questioned already use cloud DR.

However, transitioning from traditional on-premise backup and recovery processes requires a slightly different approach if it is to be successful.

This is what you need to know.

What does Cloud DR mean for your organisation?

There are two key reasons why cloud DR is gaining in popularity: shorter Recovery Time Objectives (RTOs) and lower operating costs.

Savings are typically realised by reducing costs associated with site maintenance and hardware investment. These decreases are not one-off events either – maintenance and support for the DR platform are built into the subscription cost. And because you only pay for what you use, there’s no need to invest in costly contingency capacity.

Cloud DR also offers superior RTOs, allowing businesses to achieve previously unattainable recovery targets. The ability to recover data on an application basis makes it far easier and quicker to bring mission-critical systems back online.

How do you prepare for Cloud DR?

Preparation for cloud DR hinges on two key factors: fast, reliable internet connectivity and a partner with the right skills and experience to help you configure and complete the transition. Once these are in place, you will have everything required to get underway.

How do you finally make the switch?

A successful migration has four common steps:

1. Requirements analysis – Identify the applications that will be moved to the cloud. Your existing data recovery plans provide an excellent starting point. Your DR partner will probably also suggest avoiding ‘hybrid’ applications initially to simplify and accelerate the migration.

2. Prepare to spend – The new cloud DR model requires a specific combination of tools and skills. Work with your cloud DR partner to understand where to invest for the best outcomes.

3. Prepare to test – Disaster recovery is mission-critical. You have to be sure that you can always meet your recovery objectives – and that the cloud DR platform delivers on expectations. Allocate some budget for extensive testing, particularly as you move towards application-specific backup and recovery.

4. Optimise costs – Synchronising your cloud migration with a scheduled hardware refresh will help to maximise your investment. Your cloud DR partner can also help you ensure you are using an on-demand charging model, which will work out more cost-effective than paying for reserve capacity – as the sketch below illustrates.
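
As a simple illustration of the on-demand saving – with hypothetical prices and capacities – compare paying for reserved capacity against paying only for what your DR copies actually consume:

  # Hypothetical monthly cost comparison: reserved capacity vs on-demand.
  PRICE_PER_TB_MONTH = 20.0   # illustrative storage price
  RESERVED_TB = 50            # capacity pre-purchased 'just in case'
  USED_TB = 18                # capacity the DR copies actually consume

  reserved_cost = RESERVED_TB * PRICE_PER_TB_MONTH
  on_demand_cost = USED_TB * PRICE_PER_TB_MONTH
  print(f"Reserved capacity: ${reserved_cost:,.2f}/month")
  print(f"On-demand:         ${on_demand_cost:,.2f}/month "
        f"(saving ${reserved_cost - on_demand_cost:,.2f})")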

Getting the help you need

Migrating to cloud disaster recovery is straightforward – when you have the right partner to assist. They can help you avoid many of the common pitfalls that cause projects to stall – and ensure you get the maximum return on investment without compromising your RPOs.

To learn more about cloud disaster recovery and how WTL can help, please get in touch.


Data Protection Trends and Strategies for Containers

The demands of DevOps and continuous development have helped to accelerate the uptake of containers. Indeed, Kubernetes and other container technologies are set to become the preferred production deployment technology within the next two years.

Containers present a new challenge for developers – and the data protection team responsible for ensuring they are properly backed up. So as your business accelerates container adoption, what do you need to consider?

Existing strategies and technologies probably won’t work

Unsurprisingly, traditional data protection strategies are focused on corporate data. But there is a problem with this approach: container-based applications cannot be protected in the same way that individual applications are. Backing up data alone will not restore the containers themselves in the event of a system outage – their definitions, configuration and dependencies must also be captured.

Anecdotal evidence suggests that this misunderstanding is not uncommon. This means that some businesses are operating a data protection strategy that will probably not meet their recovery point objectives. There is a significant risk of permanent data loss if the data protection strategy is not adjusted to the realities of the new containerised environment.

Orchestration is essential

Because we have been doing the same thing for many years, attitudes and approaches to backup have become ingrained. Research conducted by the Enterprise Strategy Group suggests that many businesses are ignoring orchestration as part of their data protection strategy because they deem it irrelevant.

But as the containerised environment becomes more complex, managing backup and recovery will become too resource-intensive to complete using current tools. Orchestration offers a way to automate many low-level operations during a data loss incident. You will be able to accelerate recovery, maintain service levels in line with SLAs and reduce the risk of permanent data loss too.

Re-assess data protection provisions

The hybrid cloud model has already complicated data protection. Many IT professionals are still grappling with the challenge of properly backing up data held in cloud platforms and SaaS silos; containers will further complicate matters.

It is essential to realise that current tools and techniques are not properly aligned with the needs of the containerised cloud. An urgent review of the data protection strategy is required, along with a reappraisal of the tools that are currently in use.

This review should also investigate the various specialist tools available that are specifically designed for backup and recovery of containerised applications. For maximum protection and flexibility, you will need a toolset capable of operating at Kubernetes-level and lower – including cluster-level, pod-level, namespace-level and tag-level.
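
To illustrate what those scopes look like in practice, here is a minimal sketch using the official Python kubernetes client. The namespace and label values are hypothetical, and a real backup tool would also capture object manifests and persistent volume data, not just enumerate workloads:

  # Sketch of namespace-level and tag (label) level backup scoping.
  # Assumes a local kubeconfig with access to the cluster.
  from kubernetes import client, config

  config.load_kube_config()
  core = client.CoreV1Api()

  # Namespace-level scope: everything running in the 'production' namespace.
  all_pods = core.list_namespaced_pod(namespace="production")

  # Tag-level scope: only workloads explicitly labelled for backup.
  tagged = core.list_namespaced_pod(namespace="production",
                                    label_selector="backup=true")

  print(f"{len(all_pods.items)} pods in namespace scope, "
        f"{len(tagged.items)} tagged for backup")
  for pod in tagged.items:
      # A real tool would serialise the full manifest and snapshot any
      # persistent volumes; here we just show what falls inside the scope.
      print(pod.metadata.name, [v.name for v in (pod.spec.volumes or [])])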

Speak to an expert

As an emerging technology space, data protection for containers is not well understood – the necessary skills are hard to come by. Rather than hoping for the best or trying to re-engineer your existing strategy with unsuitable tools, you should seek third-party expertise.

To learn more about protecting your containerised data and applications, please give the WTL team a call and we will guide you through your options.