Linux managed services in the West Midlands

We know technology…


The questions you should ask when planning your tape-to-cloud migration

With the huge advances in public cloud security, efficiency and value for money, many organisations are now planning to move towards cloud backup strategies, which are less complex and more reliable than traditional tape backup solutions. But migrating your backups from tape to the cloud can be a big project and requires careful scoping. There are some key questions to ask before embarking on a tape-to-cloud migration, which will help you to understand the scale of the project.

Firstly, do you need to move all historical backups to the cloud, or could you start backing up new data to the cloud and gradually reduce on-premises tape dependency as data reaches end-of-life? This is a straightforward approach but depends on the business being comfortable with different RPOs (recovery point objectives) and RTOs (recovery time objectives) for new versus aged data.
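The age-based approach can be sketched as a simple partitioning rule. A minimal illustration in Python, assuming a hypothetical one-year retention window (the names and policy are illustrative, not any product's API):

```python
from datetime import date, timedelta

# Hypothetical policy: backup sets younger than the retention window are
# migrated to the cloud; older sets are simply left to expire on tape.
RETENTION = timedelta(days=365)

def plan_migration(backups, today):
    """Partition (name, date_taken) backup sets into migrate vs age-out lists."""
    migrate, age_out = [], []
    for name, taken in backups:
        if today - taken <= RETENTION:
            migrate.append(name)
        else:
            age_out.append(name)
    return migrate, age_out

# Illustrative inventory: one recent set, one well past retention.
backups = [("db-weekly-2023-11", date(2023, 11, 5)),
           ("db-weekly-2021-01", date(2021, 1, 3))]
migrate, age_out = plan_migration(backups, today=date(2024, 1, 1))
```

In practice the retention window would come from the business's agreed RPO/RTO policy rather than a hard-coded constant.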

Next, what is the best way of migrating a large initial data set to the cloud? You can use network transport methods or physical transport methods. High-speed internet transfer is usually only an option for smaller data sets, as it can be very time consuming for larger ones.
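A quick back-of-envelope calculation shows why. This sketch (all figures assumed, for planning purposes only) estimates how long a WAN-only transfer would take:

```python
# Assumed inputs for illustration: dataset size in terabytes, link speed
# in megabits per second, and a utilisation factor allowing for protocol
# overhead and sharing the link with production traffic.
def transfer_days(dataset_tb, link_mbps, utilisation=0.7):
    """Rough number of days to push a dataset over a WAN link."""
    megabits = dataset_tb * 8_000_000          # 1 TB ≈ 8,000,000 megabits
    seconds = megabits / (link_mbps * utilisation)
    return seconds / 86_400
```

For example, 100 TB over a 1 Gbps line at 70% utilisation works out at roughly 13 days, which is why physical transfer options exist for large initial seeds.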

When you move data from tape to cloud, it can also be prudent to perform any indexing, transcoding or repackaging that will make it easier to extract value from the data once it is in the cloud.

Do you know if your current backup vendor can natively support a cloud backup store, or are new feature licenses or major version updates required? Once you’ve migrated, can you restore to cloud virtual machines or will data restore to a physical machine?

Can you write data directly to the cloud, and do your backup windows support that? Or should you use a traditional storage protocol such as Network File System (NFS)?

Do you need to change your workflows to suit the cloud environment, or will your cloud solution appear as a virtual tape library allowing you to keep the same processes and save time and management overhead?

Does your cloud backup provider give you the scalability and elasticity needed to make changes without disruption to the backup activity? Enterprise cloud providers should have this covered; AWS, for example, offers Amazon Elastic Compute Cloud (EC2), which can flex to keep processes consistent.

Will backup data be accessed in the cloud, or will it be pulled back and accessed on-premises? The answer could affect the services you purchase, from archive tiers that are seldom accessed to a virtual tape library holding frequently accessed, recent files.
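Decisions like this often come down to a simple tiering heuristic. A toy example (the thresholds are made up for illustration, not any provider's actual rules):

```python
# Illustrative tiering heuristic based on data age and restore frequency.
def choose_tier(age_days, restores_per_year):
    if age_days <= 90 or restores_per_year >= 12:
        return "hot"       # virtual-tape-library-style, fast access
    if restores_per_year >= 1:
        return "cool"      # infrequent access, cheaper storage
    return "archive"       # rarely touched, slowest but cheapest retrieval
```

Mapping each backup set through a rule like this gives a first estimate of how much capacity each service tier needs.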

Can you leverage the cloud to simplify widely distributed backup workflows?

Many cloud providers offer complementary services such as analytics, data lifecycle management or compliance features. Do you need these as part of your backup solution?

Could a cloud integrator help you to scope, implement and migrate your current backup environment across to the cloud?

Getting answers to these questions now will save immeasurable time during and after your move to the cloud and can help you to maximise your budget, by cutting out unnecessary services.

Benefits of backing up to the cloud versus tape


Tape has been the backup media of choice for over 60 years, due to its portability and reliability. Tape technology has developed and density has increased, keeping the cost per gigabyte low, but the complexity and time-consuming nature of tape management mean many organisations are looking for an alternative.

A traditional tiered storage architecture uses local disk or networked storage for speedy access to primary data, then periodically sends snapshots or data to a backup server that writes the data to magnetic tape. Tapes are usually stored onsite in tape backup libraries, and are sometimes replicated to an offsite location via WAN or even moved manually to an offsite storage facility.

Cloud backup offers organisations a new way of backing up their data, removing the complexity and risk of manually moving and handling magnetic tapes and improving the performance, availability and reliability of backups.

Whilst the cost of tape storage has come down, the costs associated with handling, managing and storing tape media have been increasing. At the same time, the cost of public cloud services has been falling, allowing customers to take advantage of economies of scale and making cloud an accessible and affordable backup solution. Cloud requires no upfront capital investment, no spending on media or configuration and, depending on the storage tier chosen, little or no data retrieval cost.

Using the public cloud to store backup data is generally a very reliable solution, with some CSPs offering a durability service level agreement of 99.999999999%. The chance of data loss through infrastructure failure is therefore incredibly low. The availability that public cloud providers can achieve is generally higher than most organisations can implement in house, with multi-site replication and failover of every single component.
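To put eleven nines of durability into perspective, a one-line calculation gives the expected number of objects lost per year (treating the durability figure as an annual per-object survival probability, which is how providers usually quote it):

```python
def expected_annual_losses(n_objects, durability=0.99999999999):
    """Expected objects lost per year, treating durability as an annual
    per-object survival probability (eleven nines by default)."""
    return n_objects * (1 - durability)

# Ten million stored objects: about 0.0001 expected losses per year,
# i.e. on average a single lost object every ten thousand years.
```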

Magnetic tape, on the other hand, is based on mechanical equipment which can fail and lead to data loss or unavailability. The quality of data stored on tape can be eroded if it is retrieved and read too often; more robust tape intended for frequent use is available, but the cost is often prohibitive.

Tape can perform well for sequential reads and writes, but retrieval can be unpredictable and is particularly slow for large datasets, taking anywhere from hours to days. When retrieving data from the cloud, organisations are often limited more by WAN speeds than by native storage performance, but there are still options offering lower-cost, longer-term storage, which inevitably takes longer to restore.

Whatever requirements an organisation has, there are many reasons why a public cloud backup solution is the right option. Cost, performance, availability, reliability and the ability to restore quickly and easily, are all big reasons to consider cloud over tape.

Understanding the benefits of selecting a cloud-native development platform

“Cloud-native” is a big trend in software development right now, but what does it mean and how do you adopt a cloud-native approach in your organisation?

It’s not about simply moving legacy applications to the cloud; it goes much deeper than that. IDG states that cloud-native is about how applications are developed, not where. Cloud-native can be described as an approach to application development and deployment that takes full advantage of the benefits of cloud, resulting in applications that are fully optimised for distributed, cloud-based deployments. In comparison, typical on-premises legacy apps are built to run on a centralised system, as a single piece of code, which doesn’t necessarily translate well to a distributed cloud environment. Nor is it easy to bring these applications to market quickly, fix issues, or roll out new releases regularly.

What then are the main components of a cloud-native development approach?

The cloud-native approach uses a services-based architecture to develop applications. A service in this case is a process or activity that is self-contained and presented in a container to isolate and package just the right amount of resources that service needs. A collection of these loosely coupled, self-contained services makes up the application. Any service can be tested, released, replaced or updated independently of the overall application, making it much more agile and flexible. In addition, the containers not only isolate the service, but they make it more portable.
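The service-based idea can be made concrete with a toy example. This sketch (all names and endpoints are invented for illustration) shows one self-contained service: a single responsibility, its own health check and no shared state, so it can be containerised, tested and replaced independently:

```python
import json

# Minimal sketch of one self-contained pricing service. The pure handler
# returns (status_code, json_body), which makes it trivially testable
# before it is ever wired into an HTTP server or container.
PRICES = {"basic": 9, "pro": 29}

def handle(path):
    """Route a request path to a JSON response."""
    if path == "/health":
        return 200, json.dumps({"status": "ok"})
    if path.startswith("/price/"):
        plan = path.split("/", 2)[2]
        if plan in PRICES:
            return 200, json.dumps({"plan": plan, "price": PRICES[plan]})
    return 404, json.dumps({"error": "not found"})
```

Because the service owns its data and exposes only this small interface, another team could rewrite or redeploy it without touching the rest of the application.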

Cloud-native apps use DevOps automation for speed, quality and agility. By automating DevOps processes, releasing more frequently, monitoring the performance and user experience then making adjustments before deploying another release, developers can bring applications to market quicker and improve the user experience each time. By doing this at the microservice level too, the speed to market is even greater and disruption can be contained and minimised.

It’s a tricky leap to make, however: everything is different in a cloud-native environment. With so many microservices being developed independently, on different platforms, sometimes in different languages, with multiple release schedules, all requiring development, test and production environments, things can spiral out of control quite quickly.

In addition, it’s a rare business that doesn’t have some legacy applications to modernise. It may have decided to adopt a cloud-native approach for all new applications but struggle to know what to do with the legacy ones.

The organisation may also want to maintain a hybrid environment, where they have some infrastructure on premises, some in private cloud and some in public cloud services. The prospect of changing applications to suit different platforms, the potential for different monitoring and operational tools for each platform, can seem daunting.

Looking at a platform like Red Hat OpenShift, a self-service platform where developers can build and run containerised applications, is a great place to start. OpenShift has been built specifically for cloud-native application development and enables automated continuous integration and delivery (CI/CD) pipelines at the click of a button.
In addition, Red Hat OpenShift Application Runtimes provides a number of prescriptive, guided paths for developers to move through the development process with ease. When developers use these runtimes on the development platform, they can develop and deploy faster and with less risk. From microservices development to migrating existing applications to the cloud, there is a runtime to guide you through.

Read more about Red Hat’s developer platform and runtimes here.

Useful Links

Why Developers and Business Leaders Are Going Cloud Native

What is cloud native?

Realising the Value of Modernising Your Legacy Infrastructure

Techopedia defines a legacy system as an outdated system, language or application that is used instead of available upgraded systems. The term “legacy” is often used pejoratively, but the reality is that most organisations do have some legacy infrastructure. It can be problematic as it gets older, becoming incompatible with new and emerging apps and technology. When legacy hardware and software are out of support, with unpatched security elements, they are at greater risk of a cyber-attack. Costs to run legacy systems increase as servicing becomes more frequent and more things start to go wrong. Older systems that have bits added here and there become increasingly complex, and they invariably take longer to configure and provision to accommodate new services. As a result, new services and apps take longer to go live and deliver benefits to users, and the business suffers from infrastructure that is cumbersome, slow and often lacking the capacity to grow with it.

Most CIOs understand that they need to modernise their infrastructure if they are going to keep up with the demands of a modern business. Modern apps and workloads need fast, agile, secure and scalable infrastructure to run as efficiently as possible.

But a modernisation project involves more than just refreshing hardware when it needs an upgrade, it requires serious consideration and planning, with a long-term strategy. A strategy that leads the business towards the cloud. When planning to refresh infrastructure, consider solutions that will meet current needs, either on premise, or in the cloud, but also be flexible enough to adapt to moving other apps and workloads to the cloud, plus a plan for building and developing new apps and services, in the cloud. The cloud is the way for businesses to scale and to provide the power, speed and agility that modern apps demand. Only by speeding up the time to production, reducing IT overheads and automating business processes will businesses be able to compete. By automating as much as possible, staff can focus on high value work and are better placed to give the business a competitive edge.

The journey to the cloud is the most important aspect of any modernisation strategy and in “The Creative CIO’s Agenda: Six Big Bets for Digital Transformation”, KPMG places the journey to the cloud at the top of the list of digital priorities for CIOs, in order for them to both defend against disruption and be disruptors themselves.

Oracle Solaris was developed for the cloud and can accelerate the adoption of workloads in the cloud, with fast and intelligent provisioning, virtualised networking, simplified administration and stringent security features. By upgrading to Oracle Solaris, businesses can be assured of total protection for data and applications, speedy performance and simpler data management. Choosing infrastructure that is designed for the cloud will mean that whether applications and workloads are ready now, or in the future, it is a simple and seamless process.

Read more in the Oracle SPARC Solaris for Dummies guide or help your applications perform at optimal levels by running a WTL enterprise Solaris and Linux healthcheck.

Useful Links

Tech Funnel Article – Top 10 Priorities of CIOs in 2018

The Creative CIOs Agenda – Six Big Bets for Digital Transformation

IT Automation

How open source agentless IT automation can help deliver a competitive edge

Automating legacy technology and processes using cloud services is a sound strategy in theory, and many businesses have increased their competitive edge by doing just that. IT automation minimises the risk of human error or inconsistency and allows a business to reduce the time it spends on repetitive IT administration tasks. Automation also enables applications and services to be developed and delivered much more quickly. In fact, Gartner estimates that the Application Release Orchestration (ARO) market grew by an estimated 37.5% in 2017, taking what it calls the Delivery Automation: ARO market to over £200 million globally.
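As a sanity check on that figure, a one-line calculation (using only the numbers quoted above) recovers the implied prior-year market size:

```python
# Implied prior-year market size from the figures quoted above:
# ~£200m after 37.5% growth means roughly £145m the year before.
def prior_year_size(current, growth_rate):
    return current / (1 + growth_rate)
```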

It’s not all plain sailing though. Moving systems and processes that were built to run on-premises to the cloud for automation is often difficult. Sometimes the benefits of automation can’t be realised, because complex processes reduce productivity and negate any gains you may have made.

So how do businesses take advantage of automation technology to make sure they’re running their operations in as lean and efficient way as possible?

Enter Red Hat Ansible Tower: an automation solution that integrates closely with AWS and allows businesses to harness the power of the public cloud to provision the resources they need, develop applications in the cloud and simplify configuration file management. In addition, it allows businesses to deploy and manage applications more easily, secure their infrastructure and orchestrate multiple configurations and environments, making sure that the necessary storage, networks and databases are in place to support new applications.

Being open source, users of this solution benefit from the large network of developers and open source communities that Red Hat has cultivated, and from the many common deployments that are outlined in simple, repeatable playbooks.

But does it matter whether you choose a solution that requires agents or one that is agentless? Actually, yes, and especially in the cloud: agentless technology is faster and easier to implement, is compatible with a broader range of systems and applications, and minimises the risks associated with upgrades, since they can be rolled out to the entire estate in one go.

Red Hat Ansible Tower is an agentless solution that works across a business’s entire AWS environment, giving visibility and control to the business via a visual dashboard. Applications can be built and deployed continuously in the cloud, with a series of playbooks to speed up and simplify the process. Resources can be provisioned wherever and whenever needed, and the whole set of configurations can be orchestrated from the same dashboard.
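The agentless model itself is simple to illustrate: a central controller connects out to each host in an inventory and applies idempotent tasks, with nothing installed on the hosts. A toy Python simulation (host state is just a dict here; real tools such as Ansible reach hosts over SSH):

```python
# Toy simulation of the agentless model: a controller loops over an
# inventory and applies idempotent tasks to each host's state.
def ensure_package(state, name):
    """Idempotent task: report 'changed' only if work was actually done."""
    if name in state.setdefault("packages", set()):
        return "ok"
    state["packages"].add(name)
    return "changed"

def run_playbook(inventory, tasks):
    """Apply every task to every host; return a per-host report."""
    return {host: [task(state) for task in tasks]
            for host, state in inventory.items()}

# web2 already has nginx installed, so the task is a no-op there.
inventory = {"web1": {}, "web2": {"packages": {"nginx"}}}
report = run_playbook(inventory, [lambda s: ensure_package(s, "nginx")])
```

The idempotency is the key property: re-running the same playbook reports "ok" everywhere and changes nothing, which is what makes rollouts safe to repeat.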

With role-based access policies providing control over who can see and manage what, and custom security policies that are automatically applied when new applications or servers are provisioned, security and compliance are built in from start to finish.
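Role-based access control of this kind reduces to a small lookup. An illustrative sketch (the role and action names are invented for the example, not Ansible Tower's actual role model):

```python
# Illustrative role-to-permission mapping.
ROLES = {
    "viewer":   {"view_jobs"},
    "operator": {"view_jobs", "launch_jobs"},
    "admin":    {"view_jobs", "launch_jobs", "edit_inventory"},
}

def is_allowed(user_roles, action):
    """Permit an action if any of the user's roles grants it."""
    return any(action in ROLES.get(role, set()) for role in user_roles)
```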

Systems Administrators who were previously spending all their time running complex, manual processes to update and provision their environments and developing and deploying applications are now free to focus on core business initiatives. Administration overheads reduce, productivity improves and no time is wasted in getting applications to market. As a result, it’s true to say that agentless automation helps the whole business become more competitive.

Useful Links

Top 5 Challenges of Automating the Software Delivery Pipeline

5 Ways Agentless IT Automation Can Benefit Your Business

Agent vs. agentless: Monitoring choices for diverse IT ops needs

An initiation into infrastructure automation tools and methods

Partners

Cisco, Oracle, VMware, Veritas, Veeam, Red Hat, NetApp, Dell EMC, Amazon Web Services