
All Flash Arrays and Your Artificial Intelligence Future

Artificial Intelligence (AI) is set to become an increasingly important aspect of your business in the near future. Indeed, AI will help to automate and accelerate many of your core operations.

However, the choice of technology will also play an important role in ensuring AI delivers on expectations. Here are some factors to consider moving forwards.

Machine Learning, model training and storage

Building and training models takes time, usually because the algorithms need to process vast amounts of data for pattern analysis. Google’s famous DeepMind cancer detection model was trained using images and health records from more than 90,000 patients.

While developing a machine learning model, the primary focus is the accuracy of the results. This means that speed can take a back seat while the data science team tweaks and refines algorithms and verifies that all is working within expected parameters.

During development, low-cost cloud storage or higher-latency spinning disk arrays will normally be adequate.

Production AI and speed

Once the model moves into production, the choice of storage becomes much more important. As well as improving the accuracy of decision-making, AI is intended to accelerate outcomes by performing calculations and inferences more quickly than a human.

This is particularly true when dealing with real-time calculations. In these deployments, the speed of storage (and the rest of the infrastructure) will be critical. All-flash storage is the only viable technology for delivering the low latency required, both at the edge and in the core.

Ultimately, production Artificial Intelligence systems must be designed to improve data flow at every point of the lifecycle – with the exception of archived records that are retained for auditing or compliance purposes. This information can be stored on lower-performing hardware, such as spinning disk arrays or cloud archives, to help reduce costs.

AI at the edge

One of the most challenging aspects of AI design is performance at the edge. What needs to be processed and actioned at the edge? What needs to be passed to the core for action? What can be redirected immediately to the cloud for cold storage?

NetApp Artificial Intelligence solutions have been developed to address these questions, using a tiered approach to data service that automatically directs incoming data to the correct location. Through the use of edge-level analytics, data is processed and categorised in such a way that movement is accelerated, eliminating bottlenecks for the fastest, most efficient AI operations.

Existing NetApp customers can provision and manage AI storage and infrastructure using familiar ONTAP tools. This allows your business to accelerate deployments and reduce the learning curve, so your data science team can focus all their efforts on building a model that delivers true business value.

To learn more about NetApp Artificial Intelligence solutions and how they will help your business meet the challenges of the data-driven future, please get in touch.


IT trends for 2021 – what can we expect?

2020 was a year like no other, affecting every industry in every country across the world. It was also the year that technology proved itself, allowing organisations to keep operating even as workplaces were closed.

So, what can we expect in 2021? We’ve taken a look at the predictions from Gartner, Forrester and more to identify some common IT trends for 2021.

Automation everywhere

Automation has been an integral aspect of production lines for decades, but now it is moving inside the data centre. More than simply building software workflows and macros, we will see automation of infrastructure and code to accelerate deployment.

Rather than adopting automation on a case-by-case basis, businesses will focus on automating all of their processes. This will help them to align systems, processes and strategy to deliver greater efficiencies. This ‘hyperautomation’ trend will be crucial to realising maximum value from machine learning and artificial intelligence deployments.
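To make ‘automation of infrastructure and code’ concrete, here is a minimal, purely illustrative Python sketch of the underlying idea: the desired estate is declared as data, and a reconciliation loop applies it. In practice this is the job of dedicated tooling such as Terraform or Ansible; every server name and action below is hypothetical.

# Minimal illustration of the infrastructure-as-code idea behind hyperautomation:
# the desired state is declared as data, and a loop reconciles reality against it.
# All server names and the provision/decommission actions are hypothetical.

desired_state = {
    "web-01": {"cpus": 2, "ram_gb": 4},
    "web-02": {"cpus": 2, "ram_gb": 4},
    "db-01": {"cpus": 8, "ram_gb": 32},
}

current_state = {
    "web-01": {"cpus": 2, "ram_gb": 4},
    "old-batch": {"cpus": 4, "ram_gb": 8},
}

def reconcile(desired: dict, current: dict) -> None:
    """Create anything missing or misconfigured, remove anything no longer declared."""
    for name, spec in desired.items():
        if current.get(name) != spec:
            print(f"provision {name} with {spec}")  # a real tool would call a provisioning API here
    for name in current:
        if name not in desired:
            print(f"decommission {name}")  # a real tool would tear the resource down here

reconcile(desired_state, current_state)

Because the definition lives in code, the same automation can be re-run, versioned and reviewed like any other software artefact – which is what makes hyperautomation across the whole estate practical.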

Work everywhere

Where 2020 was the year of working from home, one of the key IT trends for 2021 will be working from anywhere. Flexi-working will become the norm, particularly in larger businesses that have realised the benefits of a decentralised workforce.

Cloud platforms will be fundamental to this new working model. Infrastructure-as-a-Service provides a platform to host custom business applications, while Software-as-a-Service tools like Office 365 offer general-purpose productivity on the road. Expect to see continued growth in specialised web-based apps too.

Security everywhere

With employees working on the road and at home, the traditional network perimeter has blurred beyond recognition. With so many more potential attack surfaces, systems and data have never been at greater risk of loss, theft or corruption.

Gartner predicts increased uptake of cloud-delivered security and operational tools that operate outside the company network. Centrally managed but autonomous in operation, these tools provide protection for data everywhere – on-premises, in the cloud, or in your employees’ homes.

Backup everywhere

As well as extending security coverage beyond the company network perimeter, similar provisions need to be made for data too. Services like Office 365 make it very easy to share data securely, but they do not come with the robust backup and recovery features your disaster recovery (DR) strategy requires.

At the same time, Forrester predicts the era of on-site DR is accelerating towards its conclusion. Following the wider as-a-Service trend, disaster recovery will move into the cloud too. Disaster-Recovery-as-a-Service (DRaaS) takes advantage of low-cost hosted storage that can be used for backup and recovery from anywhere to anywhere.

Other trends of note

5G cellular network deployments are accelerating, offering more bandwidth for mobile users. 5G will allow businesses to build new, resource-intensive wireless applications, using secure public networks to deliver the necessary throughput.

Machine Learning, AI and automation will become more visible as robotics leave the factory and go on the road. 2021 will see autonomous cars and drones becoming more mainstream as ongoing concerns about viral spread limit human contact. By removing humans from the chain, opportunities for infection are reduced.

Prepping for your cloud future

The cloud sits at the heart of the IT trends for 2021. Ensuring you have a suitable platform protected by resilient DR measures will be crucial to staying competitive this year.

For more help and advice about what your business needs to succeed, please give the WTL team a call.


Why choose NetApp solutions for your Oracle deployments

Oracle on Oracle is often regarded as the gold standard for database operations – and it is certainly high performing. But there are credible alternatives to Oracle hardware that are worth investigating.

This is particularly true for businesses who are already committed to the NetApp ecosystem. As well as offering comparable Oracle performance, NetApp hardware usage extends far beyond ‘just’ database operations.

Here are some additional factors to consider:

Exceptional performance

NetApp all-flash arrays (AFAs) are built on NVMe and Fibre Channel technologies to deliver exceptional performance – and the six-nines (99.9999%) uptime your mission-critical operations require. Running Oracle on NetApp can improve database response times by as much as 12x.

There are also benefits in the event of a system failure. During an outage, data recovery from an AFA can be completed as much as 98% faster than from a traditional disk-based array.

Increased efficiency

As existing NetApp users will be aware, ONTAP can automatically allocate workloads and data to the most appropriate storage location. Line-of-business Oracle databases are almost certainly best suited to local AFAs, but archive data sets can be migrated to a low-cost cloud service where appropriate. Automated transfers are seamless, reducing management overheads for your infrastructure team as well as the total cost of ownership.

Familiar ONTAP management

Choosing NetApp for Oracle database operations is a smart strategic move for businesses already invested in that ecosystem. Leveraging existing ONTAP skills will help to accelerate infrastructure deployment. And any learning curve will be minimal, allowing your team to focus on database operations and development while maximising return on investment.

Future-proofed infrastructure

NetApp AFAs are designed with the future in mind, simplifying the process of cloud migration where appropriate. In addition to FlexPod storage servers, the product range also includes Cloud Volumes for Oracle, which allows your business to take full advantage of hybrid cloud operations.

With NetApp, you have the option of running Oracle databases on-premises, in the cloud, or a combination of the two. As connectivity improves and latency decreases, an easy transition path to cloud-only operations is made available. Importantly, this also provides virtually infinite scalability as your data stores continue to grow.

Cloud Volumes also support containerisation technologies. You can use them to build next-generation microservices and to deliver the service-oriented IT infrastructure your business needs to support DevOps and digital transformation programs.

Speak to a NetApp specialist

Oracle on NetApp is highly performant, making it a more than viable proposition – and for many businesses it makes more sense than Oracle on Oracle. For organisations committed to Oracle as their database engine of choice, NetApp offers clear future-proofing and the opportunity to build a best-of-breed storage infrastructure that will not limit strategic choices in future.

To learn more about running Oracle on NetApp and what it may mean for your business, please get in touch.


Do you need to get physical with a cloud backup strategy?

Virtualising backup with the cloud is powerful, effective and extremely safe. But just because data is now being archived off-site does not mean that hardware can be completely removed from your backup strategy.

In fact, physical hardware may still have an extremely important role to play in your cloud backup strategy.

1. Export by hard drive

The initial seeding of a cloud backup may take weeks to complete as you transfer terabytes of data offsite. The actual time taken will depend on network and broadband speeds. Without careful traffic management, the uploads may negatively impact day-to-day operations too.

The process can be accelerated by shipping physical drives to the backup provider so that the data can be copied locally. This will be considerably quicker – and arguably more secure – than trying to upload everything over the internet.
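To put the numbers in perspective, here is a rough back-of-the-envelope Python sketch; the 10 TB dataset size, the uplink speeds and the efficiency factor are illustrative assumptions, not figures from any particular provider.

# Rough estimate of how long seeding a cloud backup takes over a given uplink.
# Dataset size, link speeds and efficiency factor are illustrative assumptions.

def transfer_days(data_tb: float, link_mbps: float, efficiency: float = 0.8) -> float:
    """Days needed to move data_tb terabytes over a link_mbps connection.
    The efficiency factor allows for protocol overhead and contention on the line."""
    data_bits = data_tb * 1e12 * 8                        # terabytes -> bits
    seconds = data_bits / (link_mbps * 1e6 * efficiency)  # bits / effective bits-per-second
    return seconds / 86400                                # seconds -> days

dataset_tb = 10  # assumed size of the initial backup
for uplink_mbps in (20, 100, 1000):
    print(f"{dataset_tb} TB over {uplink_mbps} Mbps: ~{transfer_days(dataset_tb, uplink_mbps):.1f} days")
# Roughly 58 days at 20 Mbps, 12 days at 100 Mbps and just over a day at 1 Gbps –
# which is why shipping a drive is often the faster (and safer) option.

The same arithmetic applies in reverse when you need to pull data back down, which leads to the next point.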

2. Restore by hard drive

Restoring from cloud archives is just as important – and fraught with the same difficulties. Speed of recovery will be limited by available internet bandwidth and download speeds.

For downloads that can be sized in gigabytes, online recovery will probably be acceptable. But for a disaster recovery scenario which involves a large amount of data, the speed of transfer is critical.

In the same way that physical hard drives can accelerate seeding of backups, they can also be employed to speed up recovery. If you plan to make cloud backup your principal method of data recovery, check to see if your service has the option of shipping physical disks.

3. Cloud as backup

The issue of time to recovery is of critical importance. Knowing that a complete dataset may take days to recover over the internet, it may be that the cloud is best deployed as a secondary backup.

In this scenario, your existing systems provide real-time services for instant recovery, while periodic (daily / weekly / monthly) backups are replicated to the cloud. Maintaining physical backups on-site minimises time to recovery, while off-site backups help to maintain integrity and ensure that data is always recoverable.

4. Local servers for recovery testing

You know that your data is always protected when using cloud backup services – but how do you go about recovering it? Keeping spare physical servers will allow you to test your recovery protocols and ensure that they deliver against business needs.

For best results, keep at least one example of each bare-metal server to ensure everything works correctly.

5. Physical recovery documentation

Modern business is driven by digital data – but there will always be a place for hard copy records in certain circumstances. In the case of disaster recovery, you must maintain physical, offline copies of the information required to bring systems back online.

Records must include the recovery action plan, applications and serial numbers. And don’t forget to include contact details for the individual who holds the administrative passwords required for recovery and reconfiguration.

The future is hybrid

Until available bandwidth increases exponentially, there will always be a place for physical assets in your backup regime. The trick is knowing how to divide the load between local and cloud.

WTL offer a range of cloud-based solutions that can extend the rigour of your on-premise backup without compromising control, visibility or auditability.

For more assistance in defining a cloud backup strategy that delivers the reliability, speed and security your business demands, please give us a call.


The Less Scary Road to Moving to the Cloud

Cloud adoption is set to become a computing norm – even for companies that have until now rejected these technologies. But as hosted software (G Suite, Office 365, Salesforce.com etc.) gathers pace, most have been unable to avoid cloud services completely.

Much of the discussion around cloud migration suggests that it is a ‘big bang’, all-or-nothing play, with the whole data centre being shifted to the cloud. Although this is possible in theory, not all workloads belong in the cloud.

Cloud migration doesn’t have to be ‘big bang’

Many cloud operators give the impression that adoption is not only inevitable but that all of your systems will eventually be hosted. And the sooner this transition takes place, the better.

The reality is that this is your business and your infrastructure, and you are fully justified in moving at your own pace. For various reasons (unfamiliarity, security concerns, uncertainty etc) you have resisted major cloud adoption projects – so it makes sense to maintain a cautious roll-out.

Cloud migration on your terms

One way to maintain control of the process – and the speed at which you move – is to bring cloud technologies into your data centre first, using platforms like VMware vCloud Suite, Microsoft Hyper-V virtualisation or OpenStack.

Deploying a private cloud allows your business to migrate applications and workloads gradually, learning how the concepts and technologies apply to your organisation. At the same time, you can take advantage of automation and self-service to accelerate your IT operations and deliver a better quality of service to your in-house users.

This approach can be more expensive than going with one of the large platforms like AWS, Azure or Google Cloud. With cloud in-house, however, you retain full control of the process so you can migrate applications and servers at your own pace. This makes the transition more manageable and lays the groundwork for when you do decide to migrate to a lower-cost public cloud provider.

Re-engineering workloads for the cloud

One of the key benefits of cloud platforms is their elastic pricing model – you only pay for what you use. However, simply moving your virtual servers into the cloud is not efficient.

Your on-premise systems are configured to run 24x7x365 because there is no reason to let them spin down. But in the cloud, where every resource is billable – CPU cycles, RAM, storage etc. – you pay for running servers even when they are not being accessed.

The major cloud platforms allow you to set servers to spin down automatically overnight, for instance, helping to reduce costs. However, even these virtual servers are still relatively heavyweight.
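To illustrate how much scheduling alone can save, here is a simple Python sketch comparing an always-on virtual server with the same server restricted to weekday business hours; the hourly rate is an assumed figure used purely for the arithmetic, not any provider’s actual price.

# Illustrative comparison of always-on versus scheduled cloud server costs.
# The hourly rate is an assumption for the sake of the arithmetic.

HOURLY_RATE = 0.10      # assumed cost of the virtual server per hour, in GBP
HOURS_PER_MONTH = 730   # average number of hours in a month

always_on_cost = HOURLY_RATE * HOURS_PER_MONTH

# The same server running only 12 hours a day on weekdays (roughly 260 hours a month)
scheduled_hours = 12 * 5 * 52 / 12
scheduled_cost = HOURLY_RATE * scheduled_hours

print(f"Always on: £{always_on_cost:.2f} per month")
print(f"Scheduled: £{scheduled_cost:.2f} per month ({1 - scheduled_cost / always_on_cost:.0%} saving)")
# Around £73 versus £26 per month – a useful saving, but the billed unit is still a whole virtual server.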

The future of operating in the cloud lies in containerisation. This technology breaks applications into blocks that can be created and destroyed automatically according to demand. Unlike a virtual server, a container is a much smaller package, containing nothing but your application code and the libraries required to run it. With no operating system or additional applications to carry, it minimises the resources used – and therefore costs.

With a private cloud, you can begin the process of re-engineering and optimising for the cloud before moving to a public cloud platform. This will help to contain costs when you do finally migrate and simplify the process of transition.

To learn more about moving to the cloud and how to simplify the transition, please get in touch.
