Return to work

The return to the office – a personal view

Who knew we’d still be on the remote working theme some 15 weeks after the start! We’re all too aware of how time flies, but the relative ease of our transition to the ‘office at home’ environment has made the best part of the last four months ‘business as usual’ here at WTL.

For some, the move has been seamless; for others, it has been an awkward struggle to make the best use of a shared space. So while all of the above is true, I for one am looking forward to the return date and the comforts of the office, with its familiar corners, nooks and crannies for strategically placing files, folders and documents, knowing they will still be there a day, a week or a month later, with no annoying search after an anonymous tidy-up!

The return to normality seems painfully slow, but the office banter, the favourite mug and colleagues on tap for a quick answer are just a few of the things we miss; they will flood back, among many others, and make it all worthwhile sometime soon.

We’re all keeping our fingers tightly crossed that there is no second spike in the pandemic, and we wholeheartedly acknowledge the fantastic effort and bravery of the key workers across the health services, the vital food supply chain and the related distribution networks that keep our world moving and thriving – we thank you, and many more, without reservation.

There are too many pieces of the ‘jigsaw’ to mention every area; suffice to say that each individual, in every business, has an important part to play! Every person at WTL has someone relying on his or her function, and it is no different in any organisation, small, medium, large or enterprise: someone is relying on someone else for a result to be a success. Keep up your spirits and morale for everyone’s sake, but remember that looking after yourself, your own well-being and mental health, is the first priority.

NetApp all-flash

Optimise Oracle Workloads with NetApp all-flash Solutions

When it comes to choosing infrastructure to support your line-of-business Oracle databases, Oracle hardware seems the logical choice. But faced with evolving computing needs and shrinking IT budgets, what are the alternatives? NetApp all-flash solutions provide a more than viable alternative to Oracle hardware – well worth considering as you plan the next phase of your infrastructure lifecycle.

Best in class performance

The headline benefit of NetApp all-flash technology is its superior performance. Capable of performing up to 1 million IOPS with latency of about 100 microseconds, NetApp systems are the fastest available – up to 20 times faster than traditional storage. With end-to-end flash arrays and NVMe, these scalable all-flash systems are capable of halving application response times. No other database platform – including Oracle – comes close in terms of performance.

Increasing flexibility and growth options

The hybrid infrastructure operating model solves several problems around latency and security – but the integration between on- and off-premise systems could be improved. NetApp brings the power and flexibility of the cloud into the local data centre. The ONTAP data management software bundled with NetApp flash arrays allows you to dynamically allocate your database workloads for the best performance-cost balance, including pushing lower-priority data to cheaper cloud storage to preserve local capacity.

NetApp solutions also integrate neatly with Oracle management tools, greatly simplifying administration. Application-integrated workflows can be automated, and you can provision and prototype with a single mouse-click in as little as eight seconds.

NetApp all-flash arrays are also ideal for rapid development and prototyping. FlexClone technology makes it possible to clone large data volumes in seconds, and a thin provisioning mechanism means that shared data isn’t physically copied until it is changed, helping to constrain the physical storage required by your test applications.
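
To make the thin-provisioned cloning idea more concrete, here is a minimal, purely conceptual Python sketch of copy-on-write cloning. It is not NetApp or FlexClone code and calls no NetApp API; the class, block layout and sizes are illustrative assumptions only.

```python
# Conceptual sketch of a copy-on-write "thin clone", in the spirit of the
# FlexClone behaviour described above. Not NetApp code; all names and sizes
# here are illustrative assumptions.

class ThinClone:
    """A clone that shares its parent's blocks until a block is overwritten."""

    def __init__(self, parent_blocks):
        self.parent = parent_blocks   # shared, read-only view of the parent volume
        self.delta = {}               # only blocks written after cloning consume space

    def read(self, block_id):
        # Reads fall through to the parent unless the clone holds its own copy.
        return self.delta.get(block_id, self.parent[block_id])

    def write(self, block_id, data):
        # Copy-on-write: physical space is consumed only when a block changes.
        self.delta[block_id] = data

    def physical_usage(self):
        return len(self.delta)        # blocks owned by the clone itself


parent = {i: f"block-{i}" for i in range(100_000)}  # a "large" parent volume
clone = ThinClone(parent)                           # created instantly, near-zero extra space
clone.write(42, "test data")                        # the clone only now consumes one block
print(clone.read(42), clone.read(43), clone.physical_usage())
```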

Consistent and stable operations

NetApp all-flash arrays have been engineered to deliver consistently high performance for database operations. They are also extremely reliable, averaging just 31.5 seconds of downtime per year – that’s 99.9999% availability. This reliability is essential for mission-critical Oracle workloads. Oracle database owners also benefit from Snapshot and SnapMirror technologies that automatically replicate data to prevent loss. Further protection is available using FlexClone to transfer databases to an active disaster recovery site – including the cloud. As a result, data is protected both at the core and in the cloud.
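
As a quick sanity check on those two figures (illustrative arithmetic only, using round numbers), “six nines” availability does indeed work out at roughly 31.5 seconds of downtime over a year:

```python
# Back-of-the-envelope check: 99.9999% availability expressed as downtime per year.
seconds_per_year = 365 * 24 * 60 * 60            # 31,536,000 seconds
availability = 0.999999                          # "six nines"
downtime_seconds = seconds_per_year * (1 - availability)
print(f"{downtime_seconds:.1f} seconds of downtime per year")   # ~31.5
```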

Streamlined operations and cost savings

Customers using NetApp for Oracle report some significant benefits: fewer components, greater return on investment and a lower total cost of ownership. Estimates suggest time and effort savings of up to 90% compared with their existing solutions. By blurring the boundaries between on-premise and cloud, NetApp arrays make it easy to migrate workloads to wherever they are best suited. This helps to overcome issues of local capacity and avoid the need for costly investment in redundant physical storage.

A worthy alternative option

Thanks to its high performance and reliability, NetApp all flash storage is a credible platform for your most critical Oracle database applications. Factor in the integrated suite of ONTAP management apps and it is easy to see why NetApp users are able to realise such significant returns on their investments.

Useful Links

White Paper: Optimise Oracle Workloads with NetApp Solutions

cloud-connected storage

Is cloud-connected storage your path to the future?

The hybrid cloud infrastructure model has become the platform of choice for most businesses for two reasons.

First, questions about security and sovereignty mean that some operations are best retained in-house to maintain compliance.

Second, time-sensitive operations, particularly those that rely on real-time processing, need to be kept on-premise. Latency between the local data centre and the cloud could prevent timely processing.

Time to blur the boundaries

Despite best efforts, current hybrid models emphasise the disconnect between on-premise and cloud. The number of applications and operations being run locally may have decreased, but the CTO must still deploy sufficient processing and storage capacity for those that remain.

This is where the choice of on-premises technology platform becomes crucial. Ideally you want to eliminate the barrier between local and hosted resources to create a seamless, unified platform on which to build.

One choice would be NetApp AFF storage. These ultra-low-latency all-flash arrays are powered by ONTAP, NetApp’s comprehensive management and configuration system, providing cloud-connected storage.

Included in ONTAP is the FabricPool technology which allows you to connect various public and private cloud services directly to your on-site infrastructure. This forms the basis of your seamless hybrid cloud.

Time to get smart

A unified platform is just the start of a future-ready infrastructure, however. FabricPool goes further, using intelligent rules and analysis to automate data and workload allocation.

Mission-critical applications requiring the very highest levels of performance are retained in-house, using the NVMe flash to minimise latency. FabricPool then re-allocates other workloads to off-site cloud platforms to help balance performance and cost.

Embracing the multi-cloud future

Despite the best efforts of cloud providers, CTOs have been keen to avoid the trap of vendor lock-in. The ability to move workloads between providers cost-effectively is important for future-proofing and flexibility, driving a move towards multi-cloud deployments.

Best-of-breed infrastructure can be costly to set-up and maintain, mainly because the relevant skills are in such high demand. As a result, many of the cost-control opportunities of multi-cloud operations are lost through increased staffing and administration costs.

Again, NetApp AFF technology can help you build a multi-tier storage strategy. FabricPool analysis will identify and categorise workloads, moving data to the most appropriate cloud platform automatically. Shifting ‘cold data’ to a hosted archive service will help to reduce per-terabyte storage costs and free up valuable high-performance local capacity. Extra-sensitive data can also be directed to lower-cost private cloud storage if preferred.
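
To illustrate the kind of policy decision described above, here is a short, purely conceptual Python sketch of automated tiering. It is not FabricPool code and uses no NetApp API; the tier names, thresholds and dataset fields are assumptions made for the example.

```python
# Conceptual sketch of an automated tiering policy, in the spirit of the
# FabricPool behaviour described above. Not NetApp code; names, thresholds
# and dataset fields are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    days_since_access: int   # how "cold" the data is
    sensitive: bool          # flagged for sovereignty/compliance reasons

def choose_tier(ds: Dataset, cold_after_days: int = 30) -> str:
    """Map a dataset to a storage tier based on access pattern and sensitivity."""
    if ds.days_since_access < cold_after_days:
        return "local-nvme-flash"          # hot data stays on-premise for low latency
    if ds.sensitive:
        return "private-cloud-storage"     # cold but sensitive: lower-cost private cloud
    return "public-cloud-archive"          # cold and non-sensitive: cheapest per-terabyte tier

datasets = [
    Dataset("oracle-prod-redo", 0, True),
    Dataset("quarterly-reports-2019", 210, False),
    Dataset("customer-pii-archive", 400, True),
]

for ds in datasets:
    print(f"{ds.name:>24} -> {choose_tier(ds)}")
```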

Cloud-connection will be key to the future

The beauty of cloud platforms is the flexibility they offer. With almost infinite scalability, your business is free to rapidly grow its systems without capital investment.

But while some workloads remain tied to the local data centre, there is no reason similar scalability cannot be deployed on premise. Choosing all-flash arrays that can join on- and off-site platforms offers exceptional processing speed and the option to extend into the cloud whenever required.

While the hybrid cloud model remains the default, CTOs should pay close attention to their choice of on-premise systems. Cloud-connected storage offers valuable strategic opportunities – and a way to bridge the on/off-premise divide seamlessly.

Useful Links

White Paper: Optimise Oracle Workloads with NetApp Solutions

Intelligent Data Management with Machine Learning and Artificial Intelligence

The next step of your digital transformation – Intelligent Data Management

Digital transformation projects are intended to help businesses improve efficiency by using data to drive strategic and operational decision making. But while efforts are focused on generating actionable insights, much less attention is being given to the underlying infrastructure. Or more specifically, the management of the infrastructure.

Which is why you need an Intelligent Data Management Strategy to support your digital transformation efforts.

Generating insights – and administrative headaches

Currently, Machine Learning (ML) capabilities are directed towards linking disparate data sets and extracting previously unknown insights. Similarly, Artificial Intelligence (AI) is turning those insights into action, accelerating decision-making, automating low-level tasks and flagging anomalous data for review by human operators.

ML and AI are helping to make sense of unstructured data. But at the same time, corporate computing environments are becoming increasingly complex. The exponential growth of data coupled with the use of a disparate set of hardware, applications and services is creating a data estate that requires a disproportionate amount of administrative intervention and oversight.

Under the current paradigm, data is easier to use but increasingly difficult to manage. Unless the administration can be simplified and automated, businesses will begin drowning in data again.

Widening the scope for ML and AI

An Intelligent Data Management Strategy seeks to apply ML and AI technologies to virtually any problem – including systems management. Some vendors, like HPE, are building these capabilities into their hardware stacks, creating an intelligent data platform.

Machine Learning can be used to establish a baseline for normal operations, for instance. By monitoring network traffic, server activity, application usage and other variables, the infrastructure gains an understanding of what “normal” looks like.

Using the insights generated by ML, AI can then be applied to solving common network management challenges. Where an excessive load is detected, AI can automatically offload processing to reserve servers – or even to the cloud. If a system begins generating suspicious network activity, AI will throttle bandwidth, or even disable the system, until an engineer can resolve the issue.
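
As a purely illustrative sketch of the pattern described above (not HPE’s implementation, nor any vendor API), the following Python learns a statistical baseline from recent metrics, flags readings that deviate from it, and triggers a simple automated response. The metric names, thresholds and remediation actions are assumptions made for the example.

```python
# Illustrative sketch of baseline-and-respond monitoring. Not vendor code;
# metric names, thresholds and remediation actions are assumptions.

import statistics

def build_baseline(samples):
    """Learn what 'normal' looks like from historical metric samples."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, z_threshold=3.0):
    """Flag values more than z_threshold standard deviations from the mean."""
    mean, stdev = baseline
    return abs(value - mean) > z_threshold * stdev

def remediate(metric, value):
    # Placeholder automated responses; a real system would call its own
    # orchestration or management interfaces here.
    if metric == "network_mbps":
        print(f"Throttling bandwidth: {value} Mbps looks suspicious")
    elif metric == "cpu_load":
        print(f"Offloading work to reserve capacity: load {value} is excessive")

history = {
    "network_mbps": [120, 118, 130, 125, 122, 127],
    "cpu_load": [0.40, 0.50, 0.45, 0.50, 0.42, 0.48],
}
baselines = {metric: build_baseline(samples) for metric, samples in history.items()}

latest = {"network_mbps": 950, "cpu_load": 0.47}
for metric, value in latest.items():
    if is_anomalous(value, baselines[metric]):
        remediate(metric, value)
```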

Automated actions are not limited to problems either. AI can be trained to take proactive steps to ensure the entire stack is performing optimally. This relieves systems engineers of another important but time-consuming responsibility and ensures infrastructure continues to deliver value.

Because AI can make these adjustments in real-time, administrators can focus on other strategic tasks. Automated detection and remediation are also much faster than a similar human response, helping to ensure the entire infrastructure stack is functioning optimally.

To avoid being overwhelmed by unmanageable system complexity in the near future, your business must consider how ML and AI can be applied. Your Intelligent Data Strategy needs to be rebalanced to consider infrastructure overheads alongside analytics and insights.

Contact us today to learn more about adding automation and intelligence to your data strategy – and what you will gain in the process.

Useful Links

White Paper: Why Organizations Need an Intelligent Data Strategy

Oracle Autonomous Linux

Oracle Autonomous Linux – Human Error, Solved?

When it comes to catastrophic systems failure, attention immediately shifts to cybersecurity. A hack is the sexiest of all possible causes – but probably not the most likely.

Instead, the most common problems are caused by human error. A poorly tested code upgrade, a missed software patch or even a basic mis-key all have the potential to take operations offline. And that risk increases as your network evolves.

Reducing human input is a sure-fire way to prevent many avoidable IT outages. After all, how often are system breaches the result of inconsistent patching?

To help address the problem of human error, Oracle have introduced Autonomous Linux – here’s why you should consider it as part of your future OS strategy.

What’s so special about Oracle Autonomous Linux?

Along with a proven, reliable OS kernel, Oracle Autonomous Linux also includes a new OS Management Service. As the name implies, this new OS offers a high degree of autonomy to improve patch management.

In fact, Oracle Autonomous Linux is the world’s first cloud-based operating system that installs updates and patches automatically. Updates are applied daily, without requiring downtime – and with no human intervention required.

Allowing the OS to manage its own updates helps to solve two key administrative problems. First, your servers will audit patch status on their own, saving you the massive resource overheads of assessing an extensive on- and off-premises estate.

Second, manual patch management typically involves an extended change control process that delays application by weeks, expanding the window of opportunity for system compromise. Allowing the OS autonomous control of installation accelerates the process and lowers the cost of managing your systems.

By allowing the operating system to control patch management, servers experience less downtime, planned or unplanned. Oracle Autonomous Linux will also help to reduce chargeable spikes in your cloud billing: because patches are applied to in-service servers, you no longer need to rotate workloads while maintenance takes place. And of course, every time servers and processes are moved, you create the potential for another human-error-related system failure.

Operating system as a service

Moving to an autonomous operating system effectively replicates the “as a service” model: Oracle Autonomous Linux takes care of itself in much the same way as IaaS, PaaS and SaaS services do. That means your server management resources can be redeployed to other projects, while you enjoy a reliable, secure platform that offers greater availability than a non-autonomous alternative.

In our previous blog we looked at why Oracle Database runs best on Oracle Linux. You can read this here.

Useful Links

White Paper: Why Oracle Database Runs Best on Oracle Linux
