Cloud computing

Cloud computing – 10 issues to discuss to avoid a cloud disaster

Cloud computing has revolutionised corporate IT – generally for the better. As the technologies have matured, early adopters have encountered some potential pitfalls that may add to costs – or limit your strategic outcomes.

Here are ten issues to address with a potential cloud partner to avoid these mistakes.

1. Where will our data be stored?

One of the best features of cloud platforms is the distributed storage of data. By spreading the load across pooled hardware, your systems become more resilient, more stable and less prone to failure.

Before uploading any information, you need to know where your data will physically reside. Will there be any issues about data sovereignty if the information is transferred across international borders? Are there likely to be bandwidth problems if the data centre is too far away? Is there sufficient geographical separation between live data and backups to prevent permanent loss in a disaster?
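On the distance question, the speed of light through fibre puts a hard floor under latency, whatever the bandwidth. Here is a minimal sketch of the best-case round trip, using illustrative figures:

```python
# Best-case round-trip latency to a remote data centre. Illustrative
# physics only: light travels through optical fibre at roughly
# 200,000 km/s, and real-world figures will be higher once routing
# hops and processing are added.

def min_round_trip_ms(distance_km: float) -> float:
    """Theoretical floor on round-trip time, in milliseconds."""
    fibre_speed_km_per_ms = 200.0  # ~200,000 km/s
    return 2 * distance_km / fibre_speed_km_per_ms

for distance in (100, 1000, 5000):
    print(f"{distance:>5} km away -> ~{min_round_trip_ms(distance):.0f} ms round trip")
```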

As you negotiate with providers, consider the impact location may have on your data operations.

2. How do we manage our cloud assets?

Once your data is in the cloud, you need to be able to work with it – so how does the provider make that possible? What tools do they provide? Will your developers and administrators require additional training – and can the provider assist?

You will also want to know how much control and visibility you have over your cloud assets. Can you report on performance, security and billing? Will you have access to the information you need to prove you are meeting your SLA obligations, or to calculate ROI?

Take notice of the available management interfaces and consider how easily your IT team will be able to adjust.

3. How do you work with us to maintain application performance?

Access to ‘unlimited’ computing resources should deliver unrivalled application performance, increasing speed and reducing latency. But you still need to ask the question to be sure.

Does the provider offer total transparency of their technology stack so you can confirm optimised application compatibility? And do they offer any kind of performance guarantees?

These insights provide important protection against poor application performance.

4. How do you secure our data? And what about compliance?

Moving data off-site involves a high degree of trust – you are responsible for what happens in the event of loss or theft. You need to ask the cloud provider some tough questions.

Where will the data be physically stored? If personal data is transferred out of the UK / EU, you may be in breach of the GDPR, risking a massive fine.

Does the service meet the specialist security and compliance demands of your industry? Pick an incompatible service and your business could be prosecuted.

How is data secured? Is it encrypted during transfer and at rest? How is access controlled? You must be 100% convinced that your information is safe before migrating to the cloud.
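Many of these checks can be verified rather than taken on trust. As one illustration, here is a minimal sketch of auditing encryption at rest, assuming an AWS S3 estate and the boto3 SDK – the bucket name is hypothetical, and other providers offer equivalent APIs:

```python
# A sketch of auditing encryption at rest, assuming AWS S3 and the
# boto3 SDK. The bucket name is hypothetical.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "example-corp-data"  # hypothetical bucket name

try:
    config = s3.get_bucket_encryption(Bucket=bucket)
    for rule in config["ServerSideEncryptionConfiguration"]["Rules"]:
        algo = rule["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"]
        print(f"{bucket}: encrypted at rest with {algo}")
except ClientError as err:
    if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
        print(f"{bucket}: NO default encryption configured")
    else:
        raise
```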

5. What are our data protection and disaster recovery options?

Cloud computing has resilience built-in as standard, but you still need to maintain proper backup, restore and disaster recovery (DR) provisions.

How is data backed up? Is it an additional service? How does it work and what does it cost? How can you recover data quickly? And what are your options for restarting operations following a disaster?

Maintaining continuity of operations is extremely important – and the right cloud service provider can become a key aspect of your DR strategy.

6. How will your network work with ours?

Accessed via the Internet, cloud services should “just work”. However, linking applications – or even other cloud services – is likely to require advanced networking skills.

You need to assess the abilities of your in-house engineers and whether they match the service on offer. If particular skills are lacking, can they be sourced easily and cost-effectively? If a service requires specialist skills you cannot source, that may not be the right platform for your needs.

7. Can you assist with defining our IT strategy moving forwards?

Integrating cloud with the corporate IT strategy means being able to assess workloads and define the best location for each. Without extensive existing experience of using the cloud, this is often a difficult task.

Your ideal cloud computing provider should be a partner, capable of aligning their service with your strategy to deliver ongoing benefits. They can also help you refine and extend your strategy to take advantage of previously unrealised cloud benefits.

8. How will you assist with the onboarding process?

The most cost-effective, efficient cloud deployments tend to be more complicated than simply lifting-and-shifting existing infrastructure. Some of your mission-critical applications may need to be refactored for maximum compatibility, for instance. Or you may just need assistance with uploading large volumes of data to a remote data centre.

Check with the provider what is included with the service, and whether additional assistance can be purchased. You should also ask for suggestions about how best to streamline the migration process and to maximise the return on your investment.

9. What level of support can we expect?

Some core business systems need to be available 24x7x365 – and you need a platform you can rely on. Although cloud systems are more resilient than their on-premise equivalents, you still need support coverage for them.

Consider how your internal support resources need to be augmented to improve service coverage. And don’t forget to ask the provider what is included in the contract – including SLAs – as standard. If you need 24×7 support, it will almost certainly be a chargeable extra.

10. What is the cloud going to cost us?

The pay-as-you-use cloud billing model is brilliant for reducing capital spend on IT hardware. But there are potentially dozens of consumption-based metrics used for billing – CPU cycles, disk storage, bandwidth used etc. This can make bills more complicated to understand – and higher than expected.

You will need a potential provider to explain their billing model, what it includes and whether there are pricing tiers to help better control costs. Using their experience, you should be able to define a cloud platform that will deliver against your strategic goals without busting your budget.
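To make the billing discussion concrete, here is a minimal sketch of how consumption metrics combine into a monthly estimate – the rates are purely illustrative, so substitute your provider’s actual pricing schedule:

```python
# A sketch of modelling a consumption-based bill. The rates below are
# purely illustrative - use your provider's actual pricing schedule.

ILLUSTRATIVE_RATES = {
    "vcpu_hours": 0.04,   # per vCPU-hour
    "storage_gb": 0.02,   # per GB-month of disk
    "egress_gb": 0.09,    # per GB of outbound bandwidth
}

def estimate_monthly_bill(usage: dict) -> float:
    """Multiply each metered resource by its unit rate and sum."""
    return sum(usage[metric] * rate for metric, rate in ILLUSTRATIVE_RATES.items())

usage = {"vcpu_hours": 4 * 730, "storage_gb": 500, "egress_gb": 200}
print(f"Estimated monthly bill: ${estimate_monthly_bill(usage):,.2f}")
```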

Ask us

The choice of cloud service should be treated as a major strategic decision. With the right partner in place, you have a platform on which to build IT systems capable of delivering on your strategic growth goals.

So, put WTL to the test. Give us a call and grill us about what you need – and how we can deliver a cloud computing strategy that is right for your business.

Oracle-on-Oracle

The Oracle-on-Oracle advantage – stay on top of your game

When you face demands to demonstrate an immediate return on investment, the lure of generic hardware can be irresistible. Cheaper at the point of purchase, white box servers make it relatively easy to demonstrate an immediate saving.

Over the longer term, however, those savings rapidly dwindle to almost nothing. The hardware itself is usually perfectly robust and reliable, but it is not fully optimised for your Oracle database technology stack – which means you are losing out on the performance and efficiency gains offered by Oracle’s own server solutions.

Because unlike generic servers, Oracle-on-Oracle Engineered Systems have been designed and optimised for a single specific task – maximising Oracle database performance.

The Oracle Exadata range

For enterprise-class organisations, the Exadata platform is a game-changer. With persistent memory and RDMA over Converged Ethernet (RoCE), Exadata servers are unique, offering performance gains that simply cannot be matched by a generic alternative.

The Exadata platform has been designed and developed alongside the Oracle Database engine to reduce latency and increase performance. The entire technology stack – software, compute, networking and storage – is tailored towards delivering unrivalled database efficiency.

The use of cutting-edge persistent memory and RoCE technologies reduces IO latency by up to 10x and improves performance by 2.5x. When dealing with millions of database transactions per day, these improvements quickly convert into measurable returns – more output, reduced operating costs, increased turnover and profits etc.
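To put those figures in context, a little back-of-the-envelope arithmetic (illustrative assumptions, not Oracle benchmarks) shows how quickly microseconds add up at transaction volume:

```python
# Back-of-the-envelope arithmetic: what a 10x cut in IO latency means
# at scale. All figures are illustrative assumptions, not Oracle
# benchmarks.

transactions_per_day = 5_000_000   # assumed daily transaction volume
io_waits_per_txn = 4               # assumed IO waits per transaction
before_us = 200                    # assumed IO latency today (microseconds)
after_us = before_us / 10          # a 10x reduction

saved_us = transactions_per_day * io_waits_per_txn * (before_us - after_us)
print(f"Cumulative IO wait removed per day: {saved_us / 1e6 / 60:.0f} minutes")
```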

The Exadata platform also offers improved future-proofing capabilities. Systems can be deployed on-premise, in your own private cloud environment, or in the Oracle Cloud proper. With the added benefit of capacity-on-demand software licensing and pay-as-you-grow scalability, you can contain costs today without limiting future growth.

The Oracle Database Appliance (ODA)

Oracle Engineered Systems offer similar benefits for midmarket deployments. Oracle Database Appliance products provide significant performance benefits, optimised for single-instance Oracle databases. The appliances combine pre-configured software, compute, networking and storage in a single, fully integrated package.

This bundled approach helps to reduce costs and accelerate deployments to lower total cost of ownership. According to Oracle, their Database Appliance range is the simplest, lowest-cost turnkey X86 database system available today. One customer reports generating 498% ROI over five years and a 54% reduction in operating costs.

Although designed for midmarket organisations, the ODA family offers enterprise-class features. Built-in automation helps to streamline storage configuration and database provisioning. And flexible licensing options allow you to scale usage up and down in line with demand without limiting availability or performance. Importantly, these appliances also offer Oracle Cloud connectivity, so you can easily move workloads offsite as and when required.

The smart choice

When compared to the performance, availability and scalability offered by Oracle Engineered Systems, the false economies of generic storage and servers are obvious. With products designed for both midmarket and enterprise-class organisations, there is a viable solution for most businesses, allowing them to unleash the full power of their Oracle databases and information.

To learn more about the Oracle-on-Oracle advantage, the benefits of Oracle Engineered Systems and which is the best choice for your business, please get in touch.

Data Protection as a Service

Data Protection as a Service

Ensuring data is properly protected against loss or theft has to be a strategic priority. Maintaining a secure, up-to-date copy of your data is critical – to help restore operations quickly after a local disaster for instance. Data protection obligations (think GDPR) attach a significant financial penalty to permanently losing data, further emphasising the importance of recovery.

Configuring, managing and testing backup and recovery is a major undertaking – particularly as your data estate continues to grow. The modern hybrid operating environment simply adds to the complexity, creating more opportunities for misconfiguration.

Given its strategic importance, you will need to ensure adequate resources are assigned to disaster recovery-related tasks. But when IT departments are already stretched, diverting key people to what is a relatively routine operation could delay or derail other strategic projects.

One of the most effective ways to deal with the problem is to consider data protection as a service and outsource to a specialist.

Applying the cloud model to data protection

Cloud backups are now a routine aspect of both professional and consumer life – our smartphones automatically copy data to the cloud for instance. But in terms of data protection, Disaster Recovery as a Service (DRaaS) is arguably more important.

Under the DRaaS model, everything operates almost exactly as it always has, with one key difference – your outsourcing partner shoulders responsibility for making sure everything works properly. Their expert consultants will configure the necessary cloud connections, create backup routines, automate common tasks, verify backup sets, and regularly test recovery routines.

Importantly, their expert consultants are also on hand to assist with recovery in a genuine disaster scenario, ensuring you can recover your data and resume operations as quickly as possible. As well as having the skills and expertise you need in an emergency, service level agreements ensure tasks are always completed in a timely fashion.
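As an illustration of one task a DRaaS partner automates, here is a minimal sketch of verifying a backup set against its source by comparing checksums – the paths are hypothetical:

```python
# A sketch of verifying a backup set by comparing SHA-256 checksums
# against the source files. Both paths are hypothetical.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup(source_dir: Path, backup_dir: Path) -> list[str]:
    """Return the files whose backup copy is missing or corrupt."""
    failures = []
    for src in source_dir.rglob("*"):
        if src.is_file():
            copy = backup_dir / src.relative_to(source_dir)
            if not copy.exists() or sha256_of(copy) != sha256_of(src):
                failures.append(str(src))
    return failures

print(verify_backup(Path("/data/live"), Path("/mnt/backup")))
```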

Does DRaaS deliver value for money?

Although DRaaS will often update and improve your backup and recovery capabilities, its true value lies in convenience. Your data is fully protected and recoverable, and your in-house team is free to participate in other projects and activities tied to your business’ strategic goals.

Having a DRaaS partner also allows your business to pursue increasingly flexible operating models to meet the changing demands of your customers and staff. They will perform the necessary platform reconfigurations to ensure data continues to be collected and stored safely off-site, for whenever it is required.

With the assistance of the cloud, advanced DRaaS providers can offer instant fail-over during a data loss event. Rather than maintaining a costly co-location data centre for such scenarios, fail-over switches operations to the cloud. This allows your business to maintain near-normal levels of service while the local data centre is being restored.

In these respects, DRaaS offers excellent value for money. With access to DR expertise and the ability to operate more flexibly without being constrained by current DR provisions, you reduce friction that normally slows growth.

To learn more about DRaaS and how WTL can help you meet your data protection obligations and save time and resources, please get in touch.

IT Network

9 Trends That Will Impact Your IT Network

Data-centric operations are changing the way we work – and placing new demands on your IT network. Here are nine new trends you need to be aware of – can your current network cope?

1. Cloud hosted apps

The unbeatable flexibility provided by public cloud platforms makes them ideal for new app deployments. Containerisation and microservices are increasing in popularity because they offer unrivalled portability and resource control – but they also rely on uninterrupted connectivity between network edge, core and cloud data centre to perform adequately.

2. Distributed apps

Interconnected microservices can be hosted anywhere – on-site, at the network edge or in the cloud. Location is determined by performance needs – and again, reliable, speedy connectivity is critical.

3. Continuous development

Agile development and fail-fast methodologies result in the continuous delivery of app updates. The development team needs a network infrastructure that allows them to increase the speed of production and delivery whilst containing operational costs.

4. Virtual becomes serverless

Moving away from the concept of servers (physical or virtual) requires a different approach to infrastructure architecture. According to Cisco, future networks will be built around “nerve clusters”, mini networks located where the data is, with a reliable backbone to connect each cluster as required.

5. IoT goes mainstream

Smart sensors and IoT devices are no longer the preserve of manufacturing or self-driving cars. The ability to capture – and action – real-time data can be used in a broad range of industries. As well as improving connectivity between edge IoT devices and the network core, network administrators will need a more flexible way to manage them. Infrastructure will have to become smarter, allowing administrators to identify and classify connected devices and to apply policies that maintain performance without impacting other networked assets.
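As a simple illustration of what ‘smarter’ might mean, here is a minimal sketch of classifying devices by MAC address prefix (OUI) and assigning a policy – the vendor prefixes and policies are hypothetical:

```python
# A sketch of classifying devices by MAC address prefix (OUI) and
# assigning a network policy. Vendor prefixes and policies are
# hypothetical.

OUI_CLASSES = {
    "a4:cf:12": "iot-sensor",   # hypothetical vendor prefix
    "3c:5a:b4": "ip-camera",    # hypothetical vendor prefix
}

POLICIES = {
    "iot-sensor": {"vlan": 40, "rate_limit_mbps": 1},
    "ip-camera":  {"vlan": 41, "rate_limit_mbps": 10},
    "unknown":    {"vlan": 99, "rate_limit_mbps": 0.5},  # quarantine VLAN
}

def classify(mac: str) -> dict:
    """Look up the device class for a MAC address and return its policy."""
    device_class = OUI_CLASSES.get(mac.lower()[:8], "unknown")
    return {"class": device_class, **POLICIES[device_class]}

print(classify("A4:CF:12:9B:00:01"))  # -> iot-sensor policy
```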

6. Here comes AI

Using Artificial Intelligence (AI) to automate and accelerate operations relies on the ability to access and process data quickly. As AI adoption grows, more processing will take place at the network edge. Network infrastructure will have to be capable of delivering information to AI engines in near real time in order to succeed. This will require improvements in connectivity between network edge, core and the cloud depending on where computation is being performed.

7. We’re all mobile now

Cisco once predicted that mobile data traffic would increase at an annual growth rate of 42% – but that was before the 2020 global pandemic shut down offices across the world. That estimate now looks increasingly conservative. Workforces are likely to remain highly distributed and mobile for the foreseeable future – or even permanently. Accessing corporate systems from a range of devices outside the company network decreases visibility and control. Careful thought will have to be given as to how to control access to resources, particularly as IoT devices further increase network complexity and ‘noise’.
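For context, simple compound-growth arithmetic shows what the 42% figure above means: traffic multiplies almost six-fold within five years.

```python
# Compound-growth arithmetic: at 42% a year, traffic multiplies almost
# six-fold in five years (1.42 ** 5 is roughly 5.8).
for year in range(1, 6):
    print(f"Year {year}: {1.42 ** year:.1f}x today's traffic")
```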

8. Cybersecurity must get smarter

As corporate systems extend outside the network perimeter, the attack surface available to hackers increases. Cyberattacks are increasingly sophisticated, so businesses will need to continue investing in network infrastructure that allows them to identify, contain and mitigate threats. These protections will need to be extended to cloud environments too, providing similar defences for data and applications hosted outside the network perimeter.

9. AR and VR are finally happening

Augmented Reality and Virtual Reality technologies have begun to mature, moving from consumer novelty to business productivity tool. New applications include improved collaboration, training and even remote working ‘experiences’. But every productivity gain comes at a cost, increasing demand on your network resources. The future-ready network will need to deliver improved end-to-end throughput with minimal latency. Dynamic performance controls will help to guarantee a decent end-user experience and to ensure other mission-critical activities are not impacted – without overwhelming the network administrator.

The future is more

Clearly all nine of these trends have one thing in common – more network resources. Or more specifically, more efficient, flexible network resources that will support changing workloads and priorities. Without planning for these significant changes soon, businesses may find they are unable to support the applications they need in future.

To learn more about how WTL and Cisco can help you meet these challenges head-on, please get in touch.

Useful Links

Cisco – 2020 Global Networking Trends Report

Building a container strategy that works

Building A Container Strategy That Works

As we discussed in our last WTL Blog, containers are the future of application development in the age of the cloud. However, there are some factors you need to be aware of as you make the transition – here are thirteen to consider when building a container strategy that works.

1. Management buy-in will take time

Containerisation is a paradigm shift for development, so don’t be surprised if non-technical executives don’t understand the concepts. Expect to receive the same basic questions repeatedly, along with more frequent requests for progress updates, as your business gets to grips with the new technologies.

2. Your existing operating model won’t work

As containers are created and destroyed with every code change, your current escalation process will quickly become overwhelmed. As you roll out Kubernetes, investigate building a team of site reliability engineers who can develop a system to automate the management process.

3. The skills gap is greater than you think

Kubernetes is a relatively new technology, so skills remain in short supply. You already know that your team will not be fully up to speed – but be under no illusion: they are probably further behind the curve than you realise. Make sure that you invest heavily in training as well as container technologies to address the shortfall.

4. Data volumes will explode

Encapsulating every application and service in its own container will result in far more nodes than your current virtual server environment. And when each one generates its own data and logs, overall data volumes will increase exponentially. Automation will again be key, helping you to manage data and telemetry and uphold compliance.
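As one example of the kind of automation that helps, here is a minimal sketch of a container log retention job – compressing day-old logs and pruning archives beyond the retention window (paths and thresholds are hypothetical):

```python
# A sketch of a log retention job: compress logs older than a day,
# delete archives beyond the retention window. Paths and thresholds
# are hypothetical.
import gzip
import shutil
import time
from pathlib import Path

LOG_DIR = Path("/var/log/containers")  # hypothetical location
RETENTION_DAYS = 30

def age_days(path: Path) -> float:
    return (time.time() - path.stat().st_mtime) / 86400

for log in list(LOG_DIR.glob("*.log")):
    if age_days(log) > 1:
        with log.open("rb") as src, gzip.open(f"{log}.gz", "wb") as dst:
            shutil.copyfileobj(src, dst)
        log.unlink()

for archive in LOG_DIR.glob("*.log.gz"):
    if age_days(archive) > RETENTION_DAYS:
        archive.unlink()
```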

5. Container sprawl is a fact of life

As your developers grasp the potential of Kubernetes they will want to deploy containers everywhere, on-premise and in the cloud. Although a potential management headache, your business will be better served implementing a control plane and data fabric that support Kubernetes anywhere than trying to rein in the ambitions of your developers.

6. Your Kubernetes cluster won’t scale automagically

Because it is relatively new, Kubernetes is not the easiest technology to deploy. Containers do not necessarily scale automagically, and the sheer volume of data being produced exacerbates an existing challenge. You will also need to investigate how containers are deployed to endpoint devices that may not be connected to the corporate network.
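Kubernetes does provide scaling primitives – notably the Horizontal Pod Autoscaler – but they must be configured deliberately. Here is a minimal sketch of the underlying decision logic, using the official kubernetes Python client; the metrics lookup is a hypothetical stand-in for your monitoring pipeline:

```python
# A sketch of the scaling decision the Horizontal Pod Autoscaler
# automates once configured, using the official 'kubernetes' Python
# client. get_avg_cpu_utilisation() is a hypothetical stand-in for
# your metrics pipeline.
import math
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

def get_avg_cpu_utilisation(name: str, namespace: str) -> float:
    raise NotImplementedError  # hypothetical: query your monitoring stack

def rescale(name: str, namespace: str, target_cpu: float = 0.7) -> None:
    dep = apps.read_namespaced_deployment(name, namespace)
    # The HPA's core formula: desired = ceil(current * actual / target)
    actual = get_avg_cpu_utilisation(name, namespace)
    dep.spec.replicas = max(1, math.ceil(dep.spec.replicas * actual / target_cpu))
    apps.patch_namespaced_deployment(name, namespace, dep)
```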

7. A “one cloud” strategy is doomed to fail (initially)

Choosing a single cloud provider helps simplify infrastructure management, but it also goes against the experience and knowledge of your team – people know how to work with some providers and not others, for instance. Rather than trying to force a single cloud platform of choice, investigate the potential for using a single control plane that allows you to deploy and manage Kubernetes containers in any cloud service.

8. Kubernetes version adoption will be inconsistent

The Kubernetes platform is undergoing rapid development, with new releases shipping very quickly – faster than your whole team will adopt them. As a result, there are three officially supported versions in circulation at all times. This means you will need to implement a control plane capable of managing multiple Kubernetes versions, and a rolling upgrade programme as new versions are released.
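Here is a minimal sketch of the kind of version audit such a rolling upgrade programme needs – the cluster inventory and current release number are hypothetical:

```python
# A sketch of auditing cluster versions against the Kubernetes support
# window (the three most recent minor releases). The inventory and
# release number are hypothetical.

LATEST_MINOR = 31  # hypothetical current release, i.e. v1.31
SUPPORTED = {LATEST_MINOR, LATEST_MINOR - 1, LATEST_MINOR - 2}

clusters = {"prod-eu": "1.31", "prod-us": "1.30", "dev": "1.27"}

for name, version in clusters.items():
    minor = int(version.split(".")[1])
    status = "supported" if minor in SUPPORTED else "UPGRADE REQUIRED"
    print(f"{name} (v{version}): {status}")
```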

9. The container model will break your firewall networking segments

Firewalling the various nodes of your current VM environment is challenging but manageable. But once you deploy containers there will be too many nodes trying to communicate with each other for traditional firewall rules to cope. You will need to review and update your networking strategy to protect this new network paradigm correctly.
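The underlying problem is combinatorial: the number of possible service-to-service flows grows quadratically with the number of services, as this tiny sketch illustrates:

```python
# Why per-flow firewall rules stop scaling: possible service-to-service
# flows grow quadratically (n * (n - 1)) with the number of services.
services = ["web", "api", "auth", "orders", "billing", "search"]
flows = [(a, b) for a in services for b in services if a != b]
print(f"{len(services)} services -> {len(flows)} possible flows to police")
# 6 services -> 30 flows; 100 containerised services -> 9,900.
```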

10. Agility is king – don’t tie your developers down too early

Kubernetes containers are specifically designed to support agile development, usually by breaking the structures and conventions that underpin traditional waterfall techniques. Consequently, trying to impose a rigid development structure will limit development agility. Instead you should simply focus on giving developers the tools they need to build containers where they are best suited.

11. Avoid vendor lock-in at all costs

One of the benefits of containers is their portability. But if your development is tailored to a specific platform, you compromise that benefit. You must embrace platform-agnostic development to avoid reducing your strategic options in future.

12. Containers are not VMs

Conceptually, containers are similar to virtual servers, operating on shared hardware. But because they are created to fulfil a single task, they are much more lightweight. They are also intended to be disposable, being created and destroyed with every code release. Your team needs to change its approach to development, adopting stateful and stateless apps as required.

13. Kubernetes won’t solve all your problems

Kubernetes is invaluable for rapid development and portable applications – but it can’t do everything. Some of your legacy systems will never be suited to containerisation and will have to remain hosted in virtual servers. Do not waste time and resources forcing applications into containers when you gain nothing from the exercise.

Conclusion

Containerisation is the future of rapid application development in the cloud era. As Kubernetes is a rapidly developing technology, your team will need to adopt a mindset of constant change and improvement. As you move forward, don’t forget to address these 13 factors when building a container strategy that works for your business. And if you need further advice and guidance on building a successful Kubernetes containerisation strategy, don’t hesitate to contact the WTL team.

Useful Links

The Doppler – The State of Container Adoption: Challenges and Opportunities