Still struggling with NetApp cloud terminology? You’re not alone. Here are another four important terms you really need to know.
What is “Quality of Service” (QoS)?
Quality of Service is a commonly understood concept within IT service provision and networking. The idea is that applications are allocated priority access to resources based on their importance to business operations.
In the context of NetApp, QoS is configured on each storage volume with minimum, maximum and burst IOPS values that the system strictly enforces. These settings deliver consistent application performance and help solve several cloud architecture problems, including:
- Delivery of predictable performance to multiple applications
- The ability to scale performance and capacity resources on demand
- Reducing unpredictable I/O patterns
- Eliminating noisy neighbour applications
- Eliminating manual adjustments and major hardware upgrades when workload requirements change
- Enabling enterprise scale growth without system disruption
Unlike other “bolt-on” solutions, QoS is built into the fabric of NetApp storage to ensure effective, reliable and scalable performance guarantees.
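The min/max/burst model above can be pictured as a token bucket: the bucket refills at the volume's maximum IOPS rate, while the burst value caps how many unused tokens can accumulate for short spikes. The sketch below is an illustrative model of that idea only, not NetApp's actual enforcement mechanism; all names are hypothetical.

```python
import time


class IopsLimiter:
    """Token-bucket sketch of a max/burst IOPS limit on a volume.

    Tokens refill at `max_iops` per second; `burst_iops` caps how many
    unused tokens can accumulate, allowing short bursts above the
    steady-state rate. Purely illustrative, not NetApp's implementation.
    """

    def __init__(self, max_iops: int, burst_iops: int):
        self.max_iops = max_iops
        self.burst_iops = burst_iops
        self.tokens = float(burst_iops)  # start with a full burst allowance
        self.last_refill = time.monotonic()

    def _refill(self) -> None:
        now = time.monotonic()
        elapsed = now - self.last_refill
        self.tokens = min(self.burst_iops, self.tokens + elapsed * self.max_iops)
        self.last_refill = now

    def try_io(self, ops: int = 1) -> bool:
        """Admit `ops` I/O operations if the volume is within its limit."""
        self._refill()
        if self.tokens >= ops:
            self.tokens -= ops
            return True
        return False  # throttled: the volume has exceeded its limit
```

A noisy neighbour running against a limiter like this can drain only its own bucket; volumes with their own limiters keep their guaranteed share.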
What is “CI/CD”?
The move towards agile development and DevOps means that businesses can code and deploy application updates faster than ever. Alongside this methodology, we have seen the emergence of a new concept – “Continuous Integration and Continuous Deployment” (CI/CD).
Under this model, much of the development process, particularly integration, testing, delivery and deployment, is automated. This frees developers to focus more of their time and effort on writing better code instead of on infrastructure management and integration.
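The automated flow described above can be sketched as a pipeline of stages that runs end to end without manual steps, halting at the first failure. The stage names below are hypothetical stand-ins; in a real pipeline each would invoke actual build, test and deployment tooling.

```python
from typing import Callable


# Hypothetical CI/CD stages; real stages would call build, test and
# deployment tools rather than just printing.
def integrate() -> bool:
    print("merging and building the latest commit")
    return True


def run_tests() -> bool:
    print("running the automated test suite")
    return True


def deploy() -> bool:
    print("deploying the build to production")
    return True


def run_pipeline(stages: list[Callable[[], bool]]) -> bool:
    """Run each CI/CD stage in order, stopping at the first failure."""
    for stage in stages:
        if not stage():
            print(f"pipeline failed at stage: {stage.__name__}")
            return False
    return True


run_pipeline([integrate, run_tests, deploy])
```

Because every commit flows through the same automated stages, developers get fast feedback without touching the underlying infrastructure.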
What is “Data Centre Automation”?
As infrastructure becomes increasingly complex, administrative overheads grow with it. To help reduce these overheads, NetApp provides data centre automation tools.
Using these tools, common tasks and processes like scheduling, monitoring, maintenance and application delivery can be automated. Patch deployment becomes self-regulating; low-level monitoring and responses can be automated; and standards and policies can be enforced without human intervention.
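The policy-enforcement idea above can be sketched as a loop that pairs each check with an automated remediation. Everything here is hypothetical (the patch-level policy, the server records, the function names); it illustrates the pattern, not any specific NetApp tool.

```python
# Illustrative policy-driven automation: each policy pairs a check with
# a remediation that is applied without human intervention.
# The patch-level policy and data model below are hypothetical.

REQUIRED_PATCH_LEVEL = 42


def check_patch_level(server: dict) -> bool:
    return server["patch_level"] >= REQUIRED_PATCH_LEVEL


def apply_patches(server: dict) -> None:
    server["patch_level"] = REQUIRED_PATCH_LEVEL


# Each entry is (check, remediation); more policies can be appended.
POLICIES = [(check_patch_level, apply_patches)]


def enforce(servers: list[dict]) -> None:
    """Bring every server into compliance with all registered policies."""
    for server in servers:
        for check, remediate in POLICIES:
            if not check(server):
                remediate(server)  # automated response, no ticket raised
```

Running `enforce` on a fleet brings out-of-date servers up to the required patch level while leaving compliant ones untouched.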
Ultimately, data centre automation allows the IT infrastructure team to focus more resources on strategic developments without compromising operations or systems integrity and availability.
What is “Site Reliability Engineering” (SRE)?
Site reliability engineering is similar in concept to DevOps in that it is concerned with enhancing the release cycle by helping dev and ops see each other’s side of the process throughout the application lifecycle. However, where DevOps focuses on ‘what’ needs to be done, SRE is interested in ‘how’ it gets done.
In this regard, SRE measures and incrementally improves every process from source code to deployment. It does this by applying software engineering practices to infrastructure and operations problems, creating ultra-scalable software and systems in the process.
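One common example of this measurement-driven approach (a general SRE practice, not something specific to NetApp) is tracking an error budget against a service level objective (SLO): a 99.9% SLO means 0.1% of requests may fail before reliability work takes priority over new releases. A minimal sketch:

```python
def error_budget_remaining(slo: float, total_requests: int,
                           failed_requests: int) -> float:
    """Fraction of the error budget still unspent for a given SLO.

    With a 99.9% SLO (slo=0.999), 0.1% of requests may fail before the
    budget is exhausted. A negative result means the SLO was breached.
    """
    allowed_failures = (1.0 - slo) * total_requests
    return 1.0 - failed_requests / allowed_failures


# e.g. a 99.9% SLO over 1,000,000 requests allows 1,000 failures;
# 250 observed failures leaves 75% of the budget unspent.
print(error_budget_remaining(0.999, 1_000_000, 250))
```

A healthy remaining budget lets teams ship faster; an exhausted one signals that engineering effort should shift to reliability, which is exactly the incremental improvement loop described above.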