
Oracle Cloud at Customer

All the power of the public cloud in your data centre

Private cloud technologies have been instrumental in helping businesses build fully scalable databases. As these platforms age, attention turns to the next evolution; systems must continue to deliver improved performance and ROI, so the next round of investments is crucial. The fact that your business has been using a private database cloud indicates that your workloads have been deemed too sensitive for public alternatives – but it also means you have missed out on some of the additional benefits available when using public cloud. With Oracle Exadata Cloud at Customer, the functionality and feature gap is narrowing, which means the next generation of private database cloud can deliver more for less. How much less? Oracle suggests a 47% reduction in the total cost of operations for starters, followed by a 256% return on investment over five years.

Simplified licensing

One of the biggest headaches associated with the private cloud has been licensing. Pooled resources allow for instant scaling according to need and application, but perpetual licenses are a fixed commodity – and you must have enough to cover usage at all times. This means buying additional licenses ‘just in case’ to meet occasional spikes in demand. With Exadata Cloud at Customer (ExaCC), licenses can be activated and released in line with demand – just like popular public cloud services. Your private cloud database can quickly scale from 2 to 400 cores and back again. You only pay for what you use, when you use it – and you never risk having the wrong number of licenses again.
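To see why this matters financially, consider a rough back-of-envelope comparison. All figures below are hypothetical illustrations, not Oracle pricing – the point is simply that perpetual licences must be sized for the peak, while pay-as-you-use is sized for actual consumption:

```python
# Illustrative comparison of perpetual vs pay-as-you-use licensing.
# All prices are made-up assumptions for the sake of the arithmetic.

PERPETUAL_COST_PER_CORE = 1000.0   # hypothetical upfront cost per licence
USAGE_COST_PER_CORE_HOUR = 0.05    # hypothetical hourly cost per active core
HOURS_PER_YEAR = 24 * 365

def perpetual_cost(peak_cores: int) -> float:
    """Perpetual licences must cover the peak, even if it is rarely reached."""
    return peak_cores * PERPETUAL_COST_PER_CORE

def pay_as_you_use_cost(baseline_cores: int, peak_cores: int,
                        peak_hours: int) -> float:
    """Pay only for what runs: baseline all year, extra cores only in spikes."""
    baseline = baseline_cores * USAGE_COST_PER_CORE_HOUR * HOURS_PER_YEAR
    burst = (peak_cores - baseline_cores) * USAGE_COST_PER_CORE_HOUR * peak_hours
    return baseline + burst

# A workload that idles at 16 cores but spikes to 400 for 100 hours a year:
print(perpetual_cost(400))                 # licences sized for the spike
print(pay_as_you_use_cost(16, 400, 100))   # licences sized for actual use
```

Even with invented numbers, the shape of the result is the same: the more 'spiky' the workload, the more a consumption-based model saves.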

Reduced management overheads

An in-house private cloud typically relies on a collection of technologies held together by custom scripts and management tools. Although they “work”, there is a lot of manual integration required to keep everything running. Upgrading to ExaCC dramatically reduces management overheads. As part of the service, Oracle assumes full responsibility for managing infrastructure, reducing the burden on your IT team. Oracle clients report their management processes are 69% more efficient with ExaCC. This approach also ensures you are unaffected by the ongoing cloud skills shortage.

Cloud scale-out

Data storage and processing needs continue to increase year on year, placing an additional burden on in-house infrastructure. Your existing private cloud is still constrained by a finite pool of underlying infrastructure resources. ExaCC provides optional automated scale-out capabilities, seamlessly linking on-premise private cloud with the hosted public cloud to extend capabilities for specified workloads. This allows you to increase resource usage beyond the constraints of your private database cloud as and when required, also billed on a pay-as-you-use basis.

Enhanced security

One of the key reasons for choosing a private cloud is the degree of control you maintain over applications and data. ExaCC helps to further de-risk the operating environment by bundling Oracle ‘defence in depth’ tools as part of the service. Security safeguards are built into both hardware and software, with additional encryption included in the database engine. And it’s all a standard part of the ExaCC service.

Just scratching the surface

As public cloud feature sets continue to evolve, there’s no reason why the private cloud has to be left behind. Oracle Exadata Cloud at Customer brings many of the very best features to the local data centre – and delivers incredible benefits in the process.

To learn more about ExaCC and how you can generate an ROI on your private cloud investments in as little as six months, please get in touch.

Useful Links

Gartner: Oracle offers customers the first full on-premises public cloud experience.

Data Protection as a Service

Ensuring data is properly protected against loss or theft has to be a strategic priority. Maintaining a secure, up-to-date copy of your data is critical – to help restore operations quickly after a local disaster for instance. Data protection obligations (think GDPR) attach a significant financial penalty to permanently losing data, further emphasising the importance of recovery.

Configuring, managing and testing backup and recovery is a major undertaking – particularly as your data estate continues to grow. The modern hybrid operating environment simply adds to the complexity, creating more opportunities for misconfiguration.

Given its strategic importance, you will need to ensure adequate resources are assigned to disaster recovery-related tasks. But when IT departments are already stretched, diverting key people to what is a relatively routine operation could delay or derail other strategic projects.

One of the most effective ways to deal with the problem is to outsource to a specialist.

Applying the cloud model to data protection

Cloud backups are now a routine aspect of both professional and consumer life – our smartphones automatically copy data to the cloud for instance. But in terms of data protection, Disaster Recovery as a Service (DRaaS) is arguably more important.

Under the DRaaS model, everything operates almost exactly the same as it always has with one key difference – your outsourcing partner shoulders responsibility for making sure everything works properly. Their expert consultants will configure the necessary cloud connections, create backup routines, automate common tasks, verify backup sets, and regularly test recovery routines.
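As a simple illustration of one of those tasks, a backup-set verification step might compare checksums recorded at backup time against the files actually held offsite. This is a hypothetical sketch of the concept, not the provider's actual tooling:

```python
import hashlib
from pathlib import Path

def checksum(path: Path) -> str:
    """SHA-256 digest of a file, read in chunks to handle large backup sets."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup_set(manifest: dict, backup_dir: Path) -> list:
    """Return the files whose stored checksum no longer matches.

    `manifest` maps file names to the checksums recorded at backup time.
    A missing file counts as a failure too.
    """
    failures = []
    for name, expected in manifest.items():
        target = backup_dir / name
        if not target.exists() or checksum(target) != expected:
            failures.append(name)
    return failures
```

A real DRaaS provider automates this kind of check on a schedule and alerts on failures, so a corrupt or missing backup is discovered long before you need to restore from it.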

Importantly, their expert consultants are also on hand to assist with recovery in a genuine disaster scenario, ensuring you can recover your data and resume operations as quickly as possible. As well as having the skills and expertise you need in an emergency, service level agreements ensure tasks are always completed in a timely fashion.

Does DRaaS deliver value for money?

Although DRaaS will often update and improve your backup and recovery capabilities, its true value lies in convenience. Your data is fully protected and recoverable, and your in-house team is free to participate in other projects and activities tied to your business’ strategic goals.

Having a DRaaS partner also allows your business to pursue increasingly flexible operating models to meet the changing demands of your customers and staff. They will perform the necessary platform reconfigurations to ensure data continues to be collected and stored safely off-site, for whenever it is required.

With the assistance of the cloud, advanced DRaaS providers are able to provide instant fail-overs during a data loss event. Rather than having to maintain a costly co-location data centre for such scenarios, fail-over switches operations to the cloud. This allows your business to maintain near-normal levels of service while the local data centre is being restored.

In these respects, DRaaS offers excellent value for money. With access to DR expertise and the ability to operate more flexibly without being constrained by current DR provisions, you reduce friction that normally slows growth.

To learn more about data protection as a service and how WTL can help you meet your data protection obligations and save time and resources, please get in touch.

9 Trends That Will Impact Your IT Network

Data-centric operations are changing the way we work – and placing new demands on your IT network. Here are nine new trends you need to be aware of – can your current network cope?

1. Cloud hosted apps

The unbeatable flexibility provided by public cloud platforms makes them ideal for new app deployments. Containerisation and microservices are increasing in popularity because they offer unrivalled portability and resource control – but they also rely on uninterrupted connectivity between network edge, core and cloud data centre to perform adequately.

2. Distributed apps

Interconnected microservices can be hosted anywhere – on-site, at the network edge or in the cloud. Location is determined by performance needs – and again, reliable, speedy connectivity is critical.

3. Continuous development

Agile development and fail-faster methodologies result in the continuous delivery of updated apps. The development team needs a network infrastructure that allows them to increase the speed of production and delivery whilst containing operational costs.

4. Virtual becomes serverless

Moving away from the concept of servers (physical or virtual) requires a different approach to infrastructure architecture. According to Cisco, future networks will be built around “nerve clusters”, mini networks located where the data is, with a reliable backbone to connect each cluster as required.

5. IoT goes mainstream

Smart sensors and IoT devices are no longer the preserve of manufacturing or self-driving cars. The ability to capture – and action – real-time data can be used in a broad range of industries. As well as improving connectivity between edge IoT devices and the network core, network administrators will need a more flexible way to manage them. Infrastructure will have to become smarter to allow administrators to identify and classify connected devices and to apply policies that maintain performance without impacting other networked assets.

6. Here comes AI

Using Artificial Intelligence (AI) to automate and accelerate operations relies on the ability to access and process data quickly. As AI adoption grows, more processing will take place at the network edge. Network infrastructure will have to be capable of delivering information to AI engines in near real time in order to succeed. This will require improvements in connectivity between network edge, core and the cloud depending on where computation is being performed.

7. We’re all mobile now

Cisco once predicted that mobile data traffic would increase at an annual growth rate of 42% – but that was before the 2020 global pandemic shut down offices across the world. That estimate now looks increasingly conservative. Workforces are likely to remain highly distributed and mobile for the foreseeable future – or even permanently. Accessing corporate systems from a range of devices outside the company network decreases visibility and control. Careful thought will have to be given as to how to control access to resources, particularly as IoT devices further increase network complexity and ‘noise’.
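To put that figure in perspective, a 42% annual growth rate compounds quickly – traffic multiplies almost sixfold in five years:

```python
# How quickly traffic compounds at a 42% annual growth rate
# (Cisco's pre-2020 estimate). Starting volume is a normalised 1.0 unit.

def projected_traffic(years: int, annual_growth: float = 0.42) -> float:
    """Traffic multiple after `years` of compound growth."""
    return (1 + annual_growth) ** years

for year in range(1, 6):
    print(f"Year {year}: {projected_traffic(year):.2f}x")
```

If post-pandemic growth outpaces that estimate, the curve only steepens – which is why capacity planning needs to look several years ahead, not one.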

8. Cybersecurity must get smarter

As corporate systems extend outside the network perimeter, the attack surface available to hackers increases. Cyberattacks are increasingly sophisticated, so businesses will need to continue investing in network infrastructure that allows them to identify, contain and mitigate threats. These protections will need to be extended to cloud environments too, providing similar defences for data and applications hosted outside the network perimeter.

9. AR and VR are finally happening

Augmented Reality and Virtual Reality technologies have begun to mature, moving from consumer novelty to business productivity tool. New applications include improved collaboration, training and even remote working ‘experiences’. But every productivity gain comes at a cost, increasing demand on your network resources. The future-ready network will need to deliver improved end-to-end throughput with minimal latency. Dynamic performance controls will help to guarantee a decent end-user experience and ensure that other mission-critical activities are not impacted – all without overwhelming the network administrator.

The future is more

Clearly all nine of these trends have one thing in common – more network resources. Or more specifically, more efficient, flexible network resources that will support changing workloads and priorities. Without planning for these significant changes soon, businesses may find they are unable to support the applications they need in future.

To learn more about how WTL and Cisco can help you meet these challenges head-on, please get in touch.

Useful Links

Cisco – 2020 Global Networking Trends Report

Improving your cybersecurity protection with cloud-based services

For many businesses, the cloud has become an important aspect of IT-based operations. Many already host infrastructure, applications and data in hosted services, so why not leverage that same power to enhance your cybersecurity protection?

Backup – the ultimate insurance against a cybersecurity breach

Every business operation now relies on data; ensuring access to that information is mission critical. A robust, workable backup regime is similarly vital, providing a way back in the event of an attack.

For many established businesses, backup relies on removable media like LTO tapes or portable hard drives. Each night the media is swapped, and a copy is taken offsite to prevent loss in the event of a localised disaster. Should data be corrupted or deleted, you can recover the information from the relevant media.

Obviously, this regime does work – but it also has limitations. Aside from physical damage to the removable media, criminals are increasingly targeting backup systems as part of their attacks. If they can destroy data in the live system and the backup media, recovery is almost impossible.

Invoking the power of the cloud

As a result, “air-gapped” backup systems – which save backup information to a completely separate, unconnected system – are increasingly important. This is where our cloud security service comes into play.

By exporting to the cloud, you immediately benefit from your backup sets being stored securely offsite. This protects information from loss in a localised event (such as a fire) and prevents physical access by unauthorised third parties. The data centres are also protected by enterprise grade security provisions.

Built on Veeam Cloud Connect technology, there are three key protective aspects to this service:

1. Malware defence

Ransomware and malware have the potential to take a business offline for hours. If backup systems are also infected, that could easily become days or weeks. If the infection is severe, some data may never be fully recovered.

By moving backup copies offsite, you create another layer of physical security that makes it harder for malware to breach.

2. Malicious deletion protection

If hackers break into your systems, they may manually delete data – or they may deploy malware to automate the process. WTL ensures that maliciously deleted data can always be recovered using a virtual recycle bin. Everything that is deleted is moved to this recycle bin – effectively a backup of your backup, just in case.

3. Accidental deletion protection

There is always a risk that data is deleted accidentally; a well-meaning IT tech clears down a ‘temporary’ shared folder, only to discover that there was important business data in there. Or the accounts team applies an annual tax code patch that corrupts their database. Whatever the exact cause, this information needs to be recovered quickly.

Again, the WTL recycle bin comes to the rescue – every deleted file is first copied to a secure, air-gapped area in the cloud. If something goes wrong, you can simply copy the information back from the recycle bin for near-instant data recovery.
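Conceptually, the recycle bin behaves like a soft-delete layer in front of the backup store: nothing is ever destroyed outright, so deletions – malicious or accidental – remain reversible. The sketch below illustrates the idea only; it is not WTL's or Veeam's actual implementation:

```python
from typing import Optional

class RecycleBinStore:
    """Minimal soft-delete store: deletes move data to a bin, not oblivion."""

    def __init__(self):
        self._live = {}   # name -> data currently visible
        self._bin = {}    # name -> data that was 'deleted'

    def put(self, name: str, data: bytes) -> None:
        self._live[name] = data

    def delete(self, name: str) -> None:
        """Move the object to the recycle bin instead of destroying it."""
        if name in self._live:
            self._bin[name] = self._live.pop(name)

    def restore(self, name: str) -> None:
        """Copy the object back from the recycle bin into live storage."""
        if name in self._bin:
            self._live[name] = self._bin.pop(name)

    def get(self, name: str) -> Optional[bytes]:
        return self._live.get(name)
```

In a production service the bin itself lives in a separate, air-gapped cloud location with its own retention policy, so an attacker who compromises the live system still cannot purge it.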

Ensuring a copy is always available

The key to effective backups – and disaster recovery – is ensuring you always have at least one copy of your data available at all times. By integrating WTL cloud services into your cybersecurity defences, you have more options available for when the worst does happen.

To discuss your current cybersecurity protection strategy and how WTL can strengthen your data protection and recovery options, please get in touch.

Building A Container Strategy That Works

As we discussed in our last WTL blog, containers are the future of application development in the age of the cloud. However, there are some factors you need to be aware of as you make the transition – here are thirteen to consider when building a container strategy that works.

1. Management buy-in will take time

Containerisation is a paradigm shift for development, so don’t be surprised if non-technical executives don’t understand the concepts. Expect to receive the same basic questions repeatedly, along with more frequent requests for progress updates, as your business gets to grips with the new technologies.

2. Your existing operating model won’t work

As containers are created and destroyed with every code change, your current escalation process will quickly become overwhelmed. As you roll out Kubernetes, investigate building a team of site reliability engineers who can develop a system to automate the management process.

3. The skills gap is greater than you think

Kubernetes is a relatively new technology, so skills remain in short supply. You already know that your team will not be fully up to speed – but be under no illusion: they are probably further behind the curve than you realise. Make sure that you invest heavily in training as well as container technologies to address the shortfall.

4. Data volumes will explode

Encapsulating every application and service in its own container will result in far more nodes than your current virtual server environment. And when each generates its own data and logs, the volumes of data generated will increase exponentially. Automation will again be key, helping you to manage data and telemetry and uphold compliance.

5. Container sprawl is a fact of life

As your developers grasp the potential of Kubernetes they will want to deploy containers everywhere, on premise and in the cloud. Although a potential management headache, your business will be better served implementing a control plane and data fabric that supports Kubernetes anywhere than trying to rein in the ambitions of your developers.

6. Your Kubernetes cluster won’t scale automagically

Because it is relatively new, Kubernetes is not the easiest technology to deploy. Containers do not necessarily scale automagically, and the sheer volumes of data being produced exacerbate an existing challenge. You will also need to investigate how containers are deployed to endpoint devices that may not be connected to the corporate network.

7. A “one cloud” strategy is doomed to fail (initially)

Choosing a single cloud provider helps simplify infrastructure management, but it also goes against the experience and knowledge of your team – people know how to work with some providers and not others, for instance. Rather than trying to force a single cloud platform of choice, investigate the potential for using a single control plane that allows you to deploy and manage Kubernetes containers in any cloud service.

8. Kubernetes version adoption will be inconsistent

The Kubernetes platform is undergoing rapid development with new releases shipping very quickly – faster than your whole team will adopt them. As a result, there are three officially supported versions in circulation at all times. This means that you will need to implement a control plane capable of managing multiple Kubernetes versions and a rolling upgrade program as new versions are released.

9. The container model will break your firewall networking segments

Firewalling the various nodes of your current VM environment is challenging but manageable. But once you deploy containers there will be too many nodes trying to communicate with each other for traditional firewall rules to cope. You will need to review and update your networking strategy to protect this new network paradigm correctly.
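The scale of the problem is easy to quantify: the number of potential communication pairs grows quadratically with node count, so multiplying nodes by 25 multiplies the pairs to police by over 600:

```python
# Why per-node firewall rules stop scaling: potential communication
# pairs grow quadratically with the number of nodes.

def communication_pairs(nodes: int) -> int:
    """Number of distinct node pairs that could talk to each other."""
    return nodes * (nodes - 1) // 2

print(communication_pairs(20))    # a modest VM estate -> 190 pairs
print(communication_pairs(500))   # the same workloads as containers -> 124750 pairs
```

This is why container networking tends to move from hand-written rules towards policy-based approaches, where traffic is allowed or denied by workload identity and label rather than per-node address.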

10. Agility is king – don’t tie your developers down too early

Kubernetes containers are specifically designed to support agile development, usually by breaking the structures and conventions that underpin traditional waterfall techniques. Consequently, trying to impose a rigid development structure will limit development agility. Instead you should simply focus on giving developers the tools they need to build containers where they are best suited.

11. Avoid vendor lock-in at all costs

One of the benefits of containers is their portability. But if your development is tailored to a specific platform, you compromise that ability. You must embrace platform-agnostic development to avoid reducing your strategic options in future.

12. Containers are not VMs

Conceptually, containers are similar to virtual servers, operating on shared hardware. But because they are created to fulfil a single task, they are much more lightweight. They are also intended to be disposable, being created and destroyed with every code release. Your team needs to change their approach to development, adopting stateful and stateless apps as required.

13. Kubernetes won’t solve all your problems

Kubernetes is invaluable for rapid development and portable applications – but it can’t do everything. Some of your legacy systems will never be suited to containerisation and will have to remain hosted in virtual servers. Do not waste time and resources forcing applications into containers when you gain nothing from the exercise.


Containerisation is the future of rapid application development in the cloud era. As a rapidly developing technology, your team will need to adopt a mindset of constant change and improvement. As you move forward, don’t forget to address these 13 factors when building a container strategy that works for your business. And if you need further advice and guidance on building a successful Kubernetes containerisation strategy, don’t hesitate to contact the WTL team.

Useful Links

The Doppler – The State of Container Adoption Challenges and Opportunities

