Edge Computing

Life on The Edge – Facing the Challenge of Edge Computing

Businesses face an unusual dilemma as they prepare for a data-driven future, balancing two competing priorities. The first is big data analytics, which centralises data to ensure as much information is available as possible. The second is automation, which applies data to increase efficiency and reduce manual intervention.

This creates a problem, however. In order to function correctly, automation systems need to process and action data at the point of collection, at the edge of the network. This is in complete opposition to the centralised model favoured by data-driven industry.

What does this actually look like?

Take self-driving cars for instance. Each vehicle is equipped with thousands of sensors to navigate routes and avoid collisions. In order to succeed, information must be processed in real time – the vehicle cannot tolerate any latency, ruling out cloud-based systems.

At the same time, vehicle manufacturers need to collect data from onboard sensors to drive product development and safety improvements. And this is where centralised cloud systems do make sense.

Autonomous vehicles are just one example of this dilemma. Factories, retailers, operators and producers all face the same challenge as they try to embrace the data-driven future. Any business deploying smart sensors, IoT devices and predictive analytics will encounter similar issues.

Ever-increasing data volumes

The introduction of IoT devices has exponentially increased the volumes of data being generated. Each sensor can output multiple messages every second. Although small in size, each signal needs to be analysed and actioned immediately.

In most cases, sensor output is nothing more than ‘status ok’ type messages that can be safely ignored and sent straight to archive storage. In fact, it may be perfectly reasonable to discard them entirely, as they offer little long-term value.

Without rules that filter and direct this constant stream of information, businesses will see their data capacity requirements – and costs – escalate even faster than anticipated. The right information must be retained, however, otherwise the results of your predictive analytics efforts will be unbalanced or incomplete.
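In practice, such a filtering rule can be very simple. The sketch below is a minimal illustration, with hypothetical message shapes and routing names; a real deployment would use a stream processor or IoT gateway rather than hand-written Python.

```python
# Minimal sketch of an edge filtering rule (hypothetical message format).
# Routine heartbeats are dropped, anomalies are actioned at the edge,
# and everything else is archived so analytics stay complete.

ARCHIVE, DISCARD, ACT = "archive", "discard", "act"

def route(message: dict) -> str:
    """Decide what to do with a sensor reading at the point of collection."""
    status = message.get("status", "")
    if status == "ok":
        # 'status ok' heartbeat: little long-term value, drop it.
        return DISCARD
    if status in ("warning", "fault"):
        # Anomalies must be processed and actioned in real time.
        return ACT
    # Anything unusual is retained for later analysis.
    return ARCHIVE

readings = [
    {"sensor": "temp-01", "status": "ok"},
    {"sensor": "temp-01", "status": "fault"},
    {"sensor": "flow-07", "status": "calibrating"},
]
decisions = [route(r) for r in readings]
print(decisions)  # ['discard', 'act', 'archive']
```

Even a rule this crude cuts the volume of data crossing the network dramatically, because the overwhelming majority of messages are routine heartbeats.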

The fundamental challenges you face

In order to succeed in a data-driven operating environment, your business needs to adapt to computing at the edge. You will need to address:

  • How to provide adequate processing power to deal with incoming data in real time.
  • How to specify storage for machine-generated information.
  • How to provide sufficient network bandwidth between the edge, data centre and cloud.

With the right infrastructure, these challenges can be overcome. And the benefits of edge computing are significant – you can read more in part 2 of this blog series next week.

Useful Links

Dr. Tom Bradicich, HPE, The Intelligent Edge: What it is, what it’s not, and why it’s useful

When OT and IT collide: Managing convergence on the industrial edge

IDC FutureScape: Worldwide IoT 2019 predictions (Analyst report)

2018 National Retail Security Survey (National Retail Federation report)

The merging of cybersecurity and operational technology (ISACA and ISA report)

Mining 24 hours a day with robots (MIT Technology Review)

Managed Service Provider

Top Tips: Choosing a Managed Service Provider

Choosing a Managed Service Provider is one of the most important decisions your business makes. Once you’ve entrusted your technology to an MSP, it will be difficult, time-consuming and costly to move. We have put together these top tips to help you choose the right partner.

The right portfolio and scalability

A quick Google search will turn up multiple MSPs, but finding one with the right portfolio to meet your specific business needs, plus the breadth of services to grow with you, can be more difficult. Are you looking for specific skill sets and a technology portfolio that will drive your business forward? WTL is an experienced Solaris and Linux specialist and can drive real transformation in your business. Coupled with a deep understanding of your business and your specific challenges, these are hard-to-find qualities.

Provides SLAs that meet your requirements

When looking for a managed service provider, the SLAs they offer will be critically important. Do they match what you are looking for? What happens if the MSP doesn’t meet the SLAs? Are there any financial incentives? It is important to choose an MSP that puts its money where its mouth is.

When negotiating a contract, is the MSP flexible and willing to consider something that isn’t standard? The level of flexibility that is evident at this stage could be an indicator of how accommodating the MSP will be once you’ve been onboarded.

Expertise, qualifications and accreditations

It might sound obvious, but a quality MSP needs to have the right qualifications and should keep these up to date. Don’t just look for standard accreditations; expect to see deep expertise in the latest technology: cloud, AI, virtualisation, mobility, security, networks, edge, analytics and more.

Accreditations demonstrate the MSP’s commitment and investment and should indicate that they are taking your technology needs seriously. WTL holds all the most current certifications for Solaris, is a Red Hat Ready partner, Oracle Gold Partner, an Enterprise Solution Provider for VMware, a Silver Veeam ProPartner, and many other leading technology accreditations.

Culture and values

Does the MSP share the culture, ethics and code of practice of your own business? If this is to be a long-term, mutually respectful partnership, your MSP should hold the same values and be committed to helping you achieve your goals.

You need a partner that will evolve with the wider marketplace, utilising the best technology and the best services for your business. Not one that will get comfortable and forget about innovation. WTL partners with the leading vendors and is always seeking out innovation that will drive real business benefits for you.

Meet the team, ask to see the company handbook, do some research on Glassdoor, and find out more about the culture of the partner you are considering.

Best Practice Policies and Procedures

Ensure the MSP utilises industry best practice across the organisation. MSPs that adhere to common frameworks ensure that the right processes, people and systems are in place to help you meet your business objectives. WTL holds the recognised ISO 27001 and ISO 9001 certifications for information security and quality management.

An MSP should be able to detail its processes and policies to you, providing full visibility and transparency.

Cybersecurity

As with policies and procedures, cybersecurity concerns will be high on your list. Look for best practice frameworks and inspect security policies and procedures covering monitoring, detection, incident logs, remediation, risk management, patch installation and incident response. Ask the MSP to demonstrate compliance with regulations and to ensure that your data will be stored in accordance with the security requirements your industry demands (PCI DSS, HIPAA, etc.) and with wider data protection regulations.

WTL is a Cyber Essentials approved partner, demonstrating its commitment to an industry-standard cybersecurity framework and offering customers a high level of systems and data security and governance.

Word of mouth

Ask to speak to at least one or two existing customers who share your business transformation goals and some common demographics, or use your own network to verify the partner’s reputation and reliability. You’re looking for an honest appraisal that you won’t find in a brochure or on a website. There is no greater endorsement than a peer endorsement.

When you have satisfied all of the above, then you are ready to choose a managed service provider. WTL will provide references on request and can satisfy any due diligence questions you may have if you are looking for an experienced and trusted managed service partner.

Useful Links

Intercity Technology – Top 5 Tips for Choosing a Managed Service Provider

IBM : Top 10 tips for selecting a managed service provider

How to choose a Managed Services Provider: A 20 point checklist to choosing the right MSP for your business

To outsource or not to outsource IT

IT: To outsource or not to outsource?

Businesses choose to outsource their IT services for a variety of reasons, including accessing additional skills and expertise not held inhouse. Statista figures from 2018 identified that 46% of businesses across the globe that outsourced IT services did so to plug skills gaps. This is particularly relevant when a business needs specialist skills like Solaris or Linux, which can be hard to recruit and retain. 36% of those surveyed outsourced to save money and 35% wanted to free up resources to focus on their core business. 33% wanted to add scale to their business, 29% wanted to improve flexibility in the use of their resources and 10% did it to encourage and facilitate innovation.

As the statistics above show, outsourcing takes many forms. Some businesses bolster their existing team with additional resources and some outsource their entire IT operation, including technology and people. Depending on the business’ requirements, outsourcing can facilitate 24/7/365 cover, with stringent SLAs and no issues with staff holiday cover, sickness or transience.

Outsourcing can save businesses money in a number of ways. Businesses that outsource their infrastructure gain access to the latest technology and systems without the huge upfront investment. Costs are spread and paid on an OpEx basis rather than CapEx, which can make budgeting and future planning easier. The not insignificant HR costs associated with an in-house IT team are eliminated; NI, tax, sickness and holiday pay are the responsibility of the outsource partner.
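As a rough illustration of the OpEx point, the figures below are entirely hypothetical, but they show why spreading cost evenly can simplify budgeting even when the totals are similar:

```python
# Hypothetical figures only -- a sketch of CapEx vs OpEx budgeting,
# not a pricing model for any real provider or hardware.

capex_upfront = 120_000   # buy infrastructure outright in year one
monthly_fee = 3_500       # managed service fee paid instead
years = 3

opex_total = monthly_fee * 12 * years
print(f"CapEx, year one: £{capex_upfront:,}")
print(f"OpEx over {years} years: £{opex_total:,}")

# The totals are comparable (£120,000 vs £126,000 here), but the OpEx
# model spreads the spend evenly across 36 months, which makes
# budgeting and future planning easier.
```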

Cybersecurity defences are stronger, with many managed service providers running security operations centres that monitor customer networks and systems continuously to predict and protect against threats.

With such compelling reasons to outsource, why do businesses keep their IT services inhouse?

For some businesses, the appeal of an inhouse IT team lies in the ability to get help from someone in person, in the moment, rather than at the end of the phone.

While some organisations appreciate the time that an inhouse team can take to focus on projects or solutions, as they aren’t calling off hours from a monthly quota, others are aware that inhouse teams can be overworked and understaffed, constantly dealing with time-pressured “urgent” issues and never freeing up time to innovate or strategise. In addition, a common issue arises when large amounts of critical knowledge are held by a single individual. This “key man risk” is particularly prevalent with specialist skills around Solaris, Linux or applications like Oracle databases, and is a serious flag on an operational risk register. It can be mitigated by delegation, expanding the team, sharing responsibility with a different team, or by outsourcing.

As the cost savings outlined above suggest, hiring a team is expensive, especially one that needs to incorporate a diverse range of skills and expertise to cover operating systems like Windows, Solaris and Linux.

WTL has deep expertise in the world’s leading enterprise technology, including Oracle Solaris and Linux, employing some of the country’s most experienced engineers and architects to ensure that customers can take advantage of the technology without worrying about training or skills shortages. WTL works closely with its customers to understand if and how outsourcing some or all of their IT resources can benefit their business, fitting in with existing teams where necessary. Every business is different, generates different types of data and will require different systems to meet its needs. For some companies, especially those in a growth period, moving towards more complex systems or approaching commercialisation, outsourcing some responsibilities to a board-level individual who can help drive strategies forward is the right solution.

Outsourcing IT to an MSP can be a flexible, smart approach that frees up resources, allowing a business to focus on the important work of growing.

If you are unsure whether outsourcing your technology operations is right for you, give WTL a call today.

Useful Links

Statista Global reasons to outsource 2019

Top 10 reasons to outsource

Advantages of outsourcing services

10 Benefits of working with a Managed Service Provider

Smart City Connected by Hyperconverged Infrastructure

The evolution of Hyperconverged Infrastructure – NetApp’s role in this expanding market

Enterprises and mid-market organisations alike are starting to realise the transformational benefits of Hyperconverged Infrastructure (HCI), where server, storage and networking resources are provided as a combined, modular block and managed by a single interface.

Analysts are predicting that adoption will continue to rise and a recent report by the Evaluator Group highlighted that acceptance and implementation of HCI by enterprise sized firms has increased, with 79% of large enterprises expanding their use of hyperconverged infrastructure and using it for mission-critical workloads.

Traditional data centres are set up with all their resource layers separate and often managed individually. Conversely, HCI brings these resources – server, storage and networking – together in a way that is simple to manage, allocate and consume.

So how else do businesses benefit from hyperconvergence? Many HCI users report improved IT team productivity, a more agile business operation and a greater ability to support a hybrid cloud environment with cloud applications.

Businesses also report lower capex, as SAN-based storage solutions are replaced by industry standard servers and overprovisioning is a thing of the past. Resources can be added as and when they are needed to scale out.

Opex is also reduced, as fewer resources mean less floor space, power and cooling consumption. The simplified and automated nature of HCI administration means that management overheads are lower, increasing staff productivity and allowing IT teams to do more with the same number of staff.

Risks are lowered as downtime is reduced during upgrades and system refreshes, which happen automatically. The supply chain is smaller and that inherently reduces the operational risks associated with vendor management.

Modern HCI solutions need to be able to provide predictable and guaranteed service levels for multiple primary workloads all competing for bandwidth. They must integrate with multiple public clouds, creating a seamless, hybrid multi-cloud with a common data fabric for private and public clouds. Whilst the essence of HCI is that all components are provided together, in reality most organisations do not scale evenly: demands for compute, storage and networking rarely increase at the same rate. So, modern HCI solutions should be able to scale the individual elements of the solution independently in order to truly maximise resources. A storage-intensive environment may not necessarily need additional compute power.

NetApp understands the demands of a modern HCI environment and entered the market quite recently with HCI solutions that have been born in the cloud, for the cloud. NetApp HCI offers workload protection, for multiple workloads, allowing organisations to consolidate many different applications on it, safe in the knowledge that those workloads are replicated, protected and available.

NetApp HCI allows organisations to add compute or storage nodes independently, which eliminates overprovisioning and ensures the HCI environment is flexible enough to meet any business’s needs.

NetApp Data Fabric provides consistent data services across on-premises infrastructure, public and private clouds allowing it to meet the needs of today’s businesses, as according to Flexera’s State of the Cloud Report for 2019, 84% of businesses have a multi-cloud strategy and 58% of businesses have a hybrid cloud strategy.

In terms of management, NetApp HCI offers an automated deployment engine that has reduced deployment from 400 manual steps to just 30 highly automated steps. Similar automation features in the management console give it highly automated integration into higher-level management, orchestration, backup and DR tools.

In short, customers looking at HCI solutions with a view to transforming their business should absolutely consider NetApp. Its platform has been designed to be future-proof and meets the brief of what a modern HCI solution should offer.

Useful Links

Hyperconverged Infrastructure adoption rates

What is hyperconvergence?

IT Pro Article – Five business benefits of hyperconvergence

IT Pro Article – What is driving the risk of hyperconverged infrastructure?

Hyperconverged.org – Hyperconverged Infrastructure Basics

Flexera State of the Cloud 2019

Oracle SPARC servers for enterprise infrastructure and data centres in Birmingham

Why Oracle SPARC?

Choosing server technology is an important part of any business’s technology strategy, and there are many factors affecting the decision. IT leaders should consider which business platforms it will run, whether they will be frontline applications or backup files, whether it will be cloud-based or on-premises, whether it needs to consolidate an existing server estate, and what the best technology is that the budget will allow.

What platforms will it be running?

For customers who are looking for server technology to run Oracle database and applications, Oracle SPARC servers are fully optimised for Oracle databases and applications and will deliver the best performance and security available. Oracle SPARC’s reporting and analytics capabilities are incredibly fast and inbuilt virtualisation features secure data and improve application performance. Whilst SPARC is optimised for Oracle applications, it is non-proprietary, enabling transformational performance and efficiency gains for most enterprise applications, at an affordable price point.

Cloud or on-premises?

Most businesses today are using the cloud: the Flexera State of the Cloud Report for 2019 found that 94% of businesses surveyed used cloud services, with 91% using public cloud, 72% using private cloud and 69% using at least one of each.

For organisations that are considering migrating services to the cloud, or extending onsite data centres to the cloud, even if it is not an immediate plan, servers that have been designed with cloud services in mind will have greater longevity. Taking a cloud-first approach to technology infrastructure development, Oracle has built its cloud solutions using the same SPARC technology that it uses in its servers, giving customers a clear path to the cloud. Customers undecided on when they will move to the cloud can purchase SPARC servers to use on premises, later moving to cloud services with few migration costs and without the need to change applications, driving value from today’s investment in the future.

Server consolidation

By consolidating large numbers of smaller servers onto fewer large symmetric multiprocessing (SMP) servers, the workload demands on compute power are evened out, improving overall utilisation and performance. Large SMP servers simplify the deployment of applications, and fewer servers to manage means lower management overhead and further savings. Oracle approaches server consolidation with different levels of partitioning within SPARC servers – PDoms, Oracle VM Server partitioning and Oracle Solaris Zones technology – getting increasingly more granular and flexible. Different workloads have different service levels and will utilise resources differently, requiring different configurations. Oracle’s centralised, single management console simplifies the management of the consolidated servers.

Competitive pricing

Oracle SPARC has been priced competitively in the enterprise server market, with feature rich hardware at comparatively lower prices than many other vendors. Cost savings can also come from efficiency gains, enabled because SPARC servers perform more business transactions at a faster rate, so customers need less of them, keeping hardware costs and software license costs down.

Performance

Performance is critical for servers that will be running enterprise applications and serving mission critical data, and this is another area where Oracle SPARC performs well. Core and processor performance are strong, and specific features like Software in Silicon have been designed to ensure faster enterprise apps.

Security

Oracle has built in security from the ground up, with advanced encryption for data at rest, in transit and in storage, with no degradation of performance. Oracle’s Silicon Secured Memory provides 24/7 intrusion protection. In addition, SPARC servers running Oracle Solaris offer protection for applications in memory, access controls, automated patching, and security compliance auditing.

Whatever the business requirement, application or environment, Oracle SPARC is a viable server technology that can meet the needs of a modern business, today and in the future.

Useful Links

Flexera State of the Cloud Report – 2019