
7 best practice tips for a successful journey to the cloud

The cloud undoubtedly features in your future IT strategy – but how do you make sure your investments pay off and that you realise all the available benefits? Here are seven best practice tips to ensure your cloud strategy starts in the best possible way – and continues to deliver value into the future.

1. Establish a cloud philosophy

What does cloud mean to your business? Do you have a very simplistic understanding – all your on-site systems recreated in the cloud? Or a strict commitment to a specific cloud service? If your definition is too loose, you will never realise all available benefits. Cloud computing is an ongoing journey of constant refinement; your stakeholders need to accept this reality – and philosophy – so that you can all move forwards together.

2. Be honest about the cloud

As the cloud began to grow in popularity, a lot of businesses adopted 'cloud first' strategies: all future development and deployments would be built in the cloud. But the reality is that not all systems are suited to cloud platforms. Real-time processing, for instance, needs to be performed on-site to minimise latency. And you may prefer to keep your most sensitive information in the local data centre. Determine exactly why each system needs to be migrated before moving anything off-site.

3. Ensure the conditions are right

Every IT migration project has major implications for time, finance, resources, culture and business continuity, so you must not proceed with a cloud project until the business is ready. Waiting until you are truly prepared reduces the risk of failure and of costly mistakes that limit or hinder future developments.

4. Create a team to oversee cloud

Effective cloud adoption is more than just a technical issue. Like any business partnership, you will need to seek advice from other stakeholders, including senior management, HR, legal, procurement and finance – in addition to IT. Hewlett Packard call this a 'Cloud Business Office' (CBO), a 'central point of decision-making and communication for your cloud program'. Appoint stakeholders to this multi-disciplinary team who are empowered to steer cloud strategy and to cover the non-technical factors too. Your CBO will need to address issues like financial governance, risk and security, compliance, vendor management and project oversight – issues the IT team is rarely able to address without assistance.

5. Do your sums

You know that the cloud should be good for business – but can you quantify those benefits? Cloud platforms may allow you to switch from a capital expenditure (CapEx) to an operational expenditure (OpEx) model, but you may not see any dramatic reduction in overall costs. Instead you will need to quantify the other benefits of cloud computing. How much will you save when you no longer have to buy redundant capacity for future growth? How much will you save by outsourcing management of hardware, software and networking to a cloud provider? What are the risks and associated costs – and are they shared with your partners? Are you seeing measurable productivity gains? Total cost of ownership (TCO) is hard to calculate when dealing with cloud operating models, but you will never properly understand whether you are receiving value for money if you don't do the sums.
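Doing the sums can start very simply. The sketch below compares a hypothetical five-year on-premise spend against a cloud subscription. Every figure here is invented purely for illustration, and a real TCO model would include many more line items (risk, productivity gains, migration contingency):

```python
# Hypothetical figures for illustration only -- substitute your own.
def on_prem_tco(hardware, annual_support, staff_cost, years):
    """Total cost of owning hardware: upfront spend plus recurring costs."""
    return hardware + (annual_support + staff_cost) * years

def cloud_tco(monthly_fee, migration_cost, years):
    """Total cost of a cloud subscription plus a one-off migration."""
    return migration_cost + monthly_fee * 12 * years

on_prem = on_prem_tco(hardware=250_000, annual_support=30_000,
                      staff_cost=60_000, years=5)                  # 700,000
cloud = cloud_tco(monthly_fee=9_500, migration_cost=80_000, years=5)  # 650,000

print(f"5-year on-premise TCO: {on_prem:,}")
print(f"5-year cloud TCO:      {cloud:,}")
```

Even a rough model like this makes the conversation concrete: it shows which assumptions (staff costs, migration fees) actually move the answer.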

6. Obtain resources

A 'lift and shift' cloud migration could probably be completed by your existing IT team. However, simply replicating your on-site infrastructure in the cloud will also replicate the problems and issues you are trying to solve. Instead you need to re-engineer systems to run under an infrastructure-as-code model. This will require skills and experience you probably don't have in-house, so you will need a partner who can supply the relevant resources to make the migration a success – at speed.
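The infrastructure-as-code model mentioned above treats infrastructure as a declaration that tooling continually reconciles against reality, rather than a set of manual build steps. This toy Python sketch illustrates the declarative idea only – the resource names and sizes are hypothetical, and real tools do far more:

```python
# Infrastructure declared as data: the desired state of the estate.
desired = {
    "web-server": {"count": 3, "size": "medium"},
    "database":   {"count": 1, "size": "large"},
}

def reconcile(desired, actual):
    """Return the actions needed to move `actual` towards `desired`."""
    actions = []
    for name, spec in desired.items():
        have = actual.get(name, {}).get("count", 0)
        if have < spec["count"]:
            actions.append(f"create {spec['count'] - have} x {name}")
        elif have > spec["count"]:
            actions.append(f"remove {have - spec['count']} x {name}")
    return actions

actual = {"web-server": {"count": 1}}   # what is currently running
print(reconcile(desired, actual))
```

Because the declaration is just data, it can be versioned, reviewed and re-applied – which is precisely the discipline a lift-and-shift migration fails to introduce.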

7. Stay informed

Cloud technology continues to evolve at warp speed. We’ve gone from dedicated hosted hardware to virtual servers to containers and infrastructure as code in a matter of years. And the pace of change continues to accelerate. Your CBO will need to stay informed about these developments and how they can be used to help your business reach its strategic goals.

Take your time

As you can see, moving to the cloud is an involved, time-consuming process. These seven best practice tips will help map out the start of your cloud journey, but you must allocate sufficient time and resources to the process; shortcuts will inevitably compromise the success of your project. And don't forget – the partners you choose to help steer a course through the multitude of options will also be vital. Their knowledge and experience will help you avoid the pitfalls that have caught your competitors in the past. Ready to start your cloud journey? Give the WTL team a call for a friendly, informal discussion.

Useful Links

Enterprise.nxt: Expert advice to help you get the most out of your cloud transformation


The return to the office – a personal view

Who knew we'd be continuing the remote working theme some 15 weeks after the start! We're all too aware of how time flies, but the relative ease of our transition to the 'office at home' environment has made the best part of the last four months 'business as usual' here at WTL.

For some, the move has been seamless; for others, it's been fraught with making the best use of a shared space – awkward at best! So while all of the above is true, I for one will be looking forward to the return date and all the comforts of the office space, with its familiar corners, nooks and crannies in which to strategically place files, folders and documents, knowing they will still be there a day, a week or a month later – without the annoying search for them after an anonymous tidy-up!

Heading towards normality seems painfully slow, but the office banter, the favourite mug and colleagues on tap for a quick response are just a few of the things that will flood back, among many others, and make it all worthwhile sometime soon.

We're all keeping our fingers tightly crossed for there to be no second spike in the pandemic, and we massively acknowledge the fantastic effort and bravery of the key workers across all sectors of the health services, the vital food supply chain industries and the related distribution networks that keep our world moving and thriving – we thank you, and many more, without reservation.

There are too many pieces of the 'jigsaw' to mention every area; suffice to say that each individual, in every business, has an important part to play. Every person at WTL has another person relying on his or her function to make ends meet. This is no different whether the business is small, medium, large or enterprise: someone will be relying on someone else for a result to be a success. Keep up your spirits and morale for everyone's sake – but looking after yourself, your own well-being and mental health, is the first priority.


Optimise Oracle Workloads with NetApp all-flash Solutions

When it comes to choosing infrastructure to support your line-of-business Oracle databases, Oracle hardware seems the logical choice. But faced with evolving computing needs and shrinking IT budgets, what are the alternatives? NetApp all-flash solutions provide a viable alternative to Oracle hardware – well worth considering as you plan the next phase of your infrastructure lifecycle.

Best in class performance

The headline benefit of NetApp all-flash technology is its superior performance. Capable of up to 1 million IOPS at a latency of around 100 microseconds, NetApp systems are among the fastest available – up to 20 times faster than traditional storage. With end-to-end flash arrays and NVMe, these scalable all-flash systems are capable of halving application response times. Few comparable storage platforms – including Oracle's own – come close in terms of performance.

Increasing flexibility and growth options

The hybrid infrastructure operating model solves several problems around latency and security – but the integration between on- and off-premise systems could be improved. NetApp brings the power and flexibility of the cloud into the local data centre. The ONTAP data management software bundled with NetApp flash arrays allows you to dynamically allocate your database workloads for the best performance-cost balance, including pushing lower-priority data to cheaper cloud storage to preserve local capacity.

NetApp solutions also integrate neatly with Oracle management tools, greatly simplifying administration. Application-integrated workflows can be automated; you can provision and prototype with a single mouse-click in as little as eight seconds.

NetApp all-flash arrays are also ideal for rapid development and prototyping. FlexClone technology makes it possible to clone large data volumes in seconds. A thin provisioning mechanism means that data records or files aren't actually copied until accessed or modified, helping to constrain the physical storage requirements of your test applications.
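The thin provisioning behaviour described above is essentially copy-on-write: a clone initially shares every block with its parent volume, and only consumes new space for the blocks that are subsequently modified. This minimal Python sketch illustrates the concept only – it is not NetApp's implementation:

```python
class ThinClone:
    """Copy-on-write view of a parent volume: shares blocks until written."""
    def __init__(self, parent):
        self.parent = parent     # read-only reference -- nothing is copied
        self.overrides = {}      # only modified blocks consume new space

    def read(self, block):
        # Serve the modified copy if one exists, else fall through to parent.
        return self.overrides.get(block, self.parent[block])

    def write(self, block, data):
        self.overrides[block] = data   # copy-on-write: store just the delta

parent = {0: "alpha", 1: "beta", 2: "gamma"}   # the "large" source volume
clone = ThinClone(parent)                      # instant: zero blocks copied
clone.write(1, "BETA")                         # first write allocates space
print(clone.read(0), clone.read(1))            # alpha BETA
print(len(clone.overrides))                    # 1 block of new storage used
```

This is why cloning a multi-terabyte test database can complete in seconds: creating the clone is bookkeeping, not data movement.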

Consistent and stable operations

NetApp all-flash arrays have been engineered to deliver consistently high performance for database operations. They are also extremely reliable, averaging just 31.5 seconds of pause time per year – that's 99.9999% availability. This reliability is essential for mission-critical Oracle workloads. Oracle database owners also benefit from SnapShot and SnapMirror technologies, which automatically replicate data to prevent loss. Further protection is available using FlexClone to transfer databases to an active disaster recovery site – including the cloud. Whether your databases operate at the core or in the cloud, the data is protected in both locations.
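Those two figures are the same number viewed two ways: 'six nines' of availability leaves roughly 31.5 seconds of downtime in a non-leap year, as a quick calculation confirms:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600      # 31,536,000 seconds (non-leap year)
availability = 0.999999                  # "six nines"

downtime = SECONDS_PER_YEAR * (1 - availability)
print(f"{downtime:.1f} seconds of downtime per year")   # ~31.5
```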

Streamlined operations and cost savings

Customers using NetApp for Oracle report significant benefits: fewer components, greater return on investment and a lower total cost of ownership. Estimates suggest time and effort savings of up to 90% compared with their existing solutions. By blurring the boundaries between on-premise and cloud, NetApp arrays make it easy to migrate workloads to wherever they are best suited, helping to overcome local capacity limits and avoid costly investment in redundant physical storage.

A worthy alternative option

Thanks to its high performance and reliability, NetApp all-flash storage is a credible platform for your most critical Oracle database applications. Factor in the integrated suite of ONTAP management apps and it is easy to see why NetApp users are able to realise such significant returns on their investments.

Useful Links

White Paper: Optimise Oracle Workloads with NetApp Solutions


Is cloud-connected storage your path to the future?

The hybrid cloud infrastructure model has become the platform of choice for most businesses for two reasons.

First, questions about security and sovereignty mean that some operations are best retained in-house to maintain compliance.

Second, time-sensitive operations, particularly those that rely on real-time processing, need to be kept on-premise. Latency between the local data centre and the cloud could prevent timely processing.

Time to blur the boundaries

Despite best efforts, current hybrid models emphasise the disconnect between on-premise and cloud. The number of applications and operations being run locally may have decreased, but the CTO must still deploy sufficient processing and storage capacity for those that remain.

This is where the choice of on-premises technology platform becomes crucial. Ideally you want to eliminate the barrier between local and hosted resources to create a seamless, unified platform on which to build.

One choice would be NetApp AFF storage. These ultra-low-latency all-flash arrays are powered by ONTAP, NetApp's comprehensive management and configuration system, providing cloud-connected storage.

Included in ONTAP is the FabricPool technology which allows you to connect various public and private cloud services directly to your on-site infrastructure. This forms the basis of your seamless hybrid cloud.

Time to get smart

A unified platform is just the start of a future-ready infrastructure, however. FabricPool goes further, using intelligent rules and analysis to automate data and workload allocation.

Mission-critical applications requiring the very highest levels of performance are retained in-house, using the NVMe flash to minimise latency. FabricPool then re-allocates other workloads to off-site cloud platforms to help balance performance and cost.

Embracing the multi-cloud future

Despite the best efforts of cloud providers, CTOs have been keen to avoid the trap of vendor lock-in. The ability to move workloads between providers cost-effectively is important for future-proofing and flexibility, driving a move towards multi-cloud deployments.

Best-of-breed infrastructure can be costly to set-up and maintain, mainly because the relevant skills are in such high demand. As a result, many of the cost-control opportunities of multi-cloud operations are lost through increased staffing and administration costs.

Again, NetApp AFF technology can help you build a multi-tier storage strategy. FabricPool analysis will identify and categorise workloads, moving data to the most appropriate cloud platform automatically. Shifting 'cold data' to a hosted archive service will help to reduce per-terabyte storage costs and free up valuable high-performance local capacity. Particularly sensitive data can be directed to lower-cost private cloud storage if preferred.
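The cold-data tiering described above amounts to an age-based placement rule: data untouched beyond a cooling period moves to a cheaper tier. The sketch below is purely illustrative – the 31-day cooling period and the tier names are hypothetical choices, not FabricPool's actual interface:

```python
from datetime import datetime, timedelta

# Hypothetical rule in the spirit of age-based tiering policies:
# data untouched beyond the cooling period is marked for the cloud tier.
COOLING_PERIOD = timedelta(days=31)

def assign_tier(last_accessed, now):
    """Return the storage tier a dataset should live on."""
    return "cloud-archive" if now - last_accessed > COOLING_PERIOD else "local-flash"

now = datetime(2020, 7, 1)
datasets = {
    "orders-db":    datetime(2020, 6, 30),   # hot: accessed yesterday
    "2018-archive": datetime(2019, 1, 15),   # cold: untouched for over a year
}
for name, accessed in datasets.items():
    print(name, "->", assign_tier(accessed, now))
```

The value of doing this automatically is that the policy runs continuously: as data cools or is re-accessed, it migrates between tiers without an administrator's intervention.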

Cloud-connection will be key to the future

The beauty of cloud platforms is the flexibility they offer. With almost infinite scalability, your business is free to rapidly grow its systems without capital investment.

But while some workloads remain tied to the local data centre, there is no reason similar scalability cannot be delivered on-premise. Choosing all-flash arrays with the ability to join on-site and off-site platforms offers exceptional processing speed and the option to extend into the cloud whenever required.

While the hybrid cloud model remains the default, CTOs should pay close attention to their choice of on-premise systems. Cloud-connected storage offers valuable strategic opportunities – and a way to bridge the on/off-premise divide seamlessly.

Useful Links

White Paper: Optimise Oracle Workloads with NetApp Solutions


The next step of your digital transformation – Intelligent Data Management

Digital transformation projects are intended to help businesses improve efficiency by using data to drive strategic and operational decision making. But while efforts are focused on generating actionable insights, much less attention is being given to the underlying infrastructure. Or more specifically, the management of the infrastructure.

This is why you need an Intelligent Data Management Strategy to support your digital transformation efforts.

Generating insights – and administrative headaches

Currently, Machine Learning (ML) capabilities are directed towards linking disparate data sets and extracting previously unknown insights. Similarly, Artificial Intelligence (AI) is turning those insights into action, accelerating decision-making, automating low-level tasks and flagging anomalous data for review by human operators.

ML and AI are helping to make sense of unstructured data. But at the same time, corporate computing environments are becoming increasingly complex. The exponential growth of data coupled with the use of a disparate set of hardware, applications and services is creating a data estate that requires a disproportionate amount of administrative intervention and oversight.

Under the current paradigm, data is easier to use but increasingly difficult to manage. Unless the administration can be simplified and automated, businesses will begin drowning in data again.

Widening the scope for ML and AI

An Intelligent Data Management Strategy seeks to apply ML and AI technologies to virtually any problem – including systems management. Some vendors, like HPE, are building these capabilities into their hardware stacks, creating an intelligent data platform.

Machine Learning can be used, for instance, to establish a baseline for normal operations. By monitoring network traffic, server activity, application usage and other variables, the infrastructure gains an understanding of what “normal” looks like.

Using the insights generated by ML, AI can then be applied to solving common network management challenges. Where an excessive load is detected, AI can automatically offload processing to reserve servers – or even to the cloud. If a system begins generating suspicious network activity, AI will throttle bandwidth, or even disable the system, until an engineer can resolve the issue.
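The baseline-then-flag pattern described above can be sketched with simple statistics: learn the distribution of normal readings, then flag anything far outside it. This is a toy illustration with hypothetical bandwidth figures – production platforms learn far richer, multi-dimensional baselines:

```python
import statistics

def build_baseline(samples):
    """Learn what 'normal' looks like from historical readings."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, mean, stdev, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from normal."""
    return abs(value - mean) > threshold * stdev

# Hypothetical bandwidth readings (Mbps) from routine monitoring.
history = [98, 102, 101, 99, 100, 103, 97, 100]
mean, stdev = build_baseline(history)

print(is_anomalous(101, mean, stdev))   # False: within the normal range
print(is_anomalous(400, mean, stdev))   # True: throttle or investigate
```

A flagged reading is where the automated response kicks in – throttling bandwidth or isolating the system until an engineer can investigate, as described above.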

Automated actions are not limited to problems either. AI can be trained to take proactive steps to ensure the entire stack is performing optimally. This relieves systems engineers of another important but time-consuming responsibility and ensures infrastructure continues to deliver value.

Because AI can make these adjustments in real-time, administrators can focus on other strategic tasks. Automated detection and remediation are also much faster than a similar human response, helping to ensure the entire infrastructure stack is functioning optimally.

To avoid being overwhelmed by unmanageable system complexity in the near future, your business must consider how ML and AI can be applied. Your Intelligent Data Management Strategy needs to balance infrastructure overheads alongside analytics and insights.

Contact us today to learn more about adding automation and intelligence to your data strategy – and what you will gain in the process.

Useful Links

White Paper: Why Organizations Need an Intelligent Data Strategy