DevOps Building Block: Lean
- In DevOps, “Lean” refers to the application of Lean principles and practices to streamline and optimize the software development and delivery process.
- The goal of Lean in DevOps is to maximize value delivery to the customer while minimizing waste, inefficiencies, and delays in the development pipeline.
- Lean practices can be highly complementary to DevOps principles and practices, as both aim to increase efficiency, reduce waste, and enhance the overall software delivery process.
- Many organizations that adopt DevOps principles also incorporate Lean practices to create a more streamlined and customer-centric approach to software development and delivery.
7 Principles of Lean Software
- Eliminate Waste (Muda):
- Identify and eliminate any activities, processes, or resources that do not add value to the customer or the product. Common types of waste in software development include overproduction (building features that are not needed), extra features, delays, and partially completed work.
- Amplify Learning:
- Encourages a culture of continuous learning and feedback.
- Teams should actively seek feedback from users, stakeholders, and the market to iterate and improve the product continuously.
- This principle aligns with Agile practices like frequent customer feedback and iterative development.
- Decide as Late as Possible:
- Delaying decisions until they are absolutely necessary is a way to reduce uncertainty and make more informed choices.
- In software development, this means deferring design and implementation decisions until they are required, often based on user feedback or changing requirements.
- Deliver as Fast as Possible:
- Lean emphasizes delivering value to the customer as quickly as possible. By minimizing work in progress (WIP), focusing on flow, and reducing cycle times, teams can achieve faster delivery and better responsiveness to changing demands.
- Empower the Team:
- Empowering teams means giving them the autonomy and authority to make decisions about how they work and what they prioritize.
- Self-organizing teams are more likely to find creative solutions and improve processes.
- Build Integrity In (Jidoka):
- This principle emphasizes the importance of building quality and integrity into the product from the beginning.
- By using practices like automated testing, code reviews, and continuous integration, teams can prevent defects and ensure that the product works correctly from the start.
- See the Whole:
- Seeing the whole means looking at the entire software development process from end to end.
- Instead of optimizing individual components or stages, Lean encourages optimizing the entire value stream to improve efficiency and reduce waste.
CALMS in DevOps
CALMS is an acronym that represents five key principles or areas of focus in DevOps. These principles are used to guide organizations in adopting DevOps practices effectively. CALMS stands for ->
C - Culture -> It involves fostering a collaborative, transparent, and innovative culture within the organization.
A - Automation -> It involves automating manual and repetitive tasks throughout the software development and delivery pipeline.
L - Lean -> Lean principles, derived from Lean manufacturing, focus on eliminating waste, optimizing processes, and delivering value efficiently.
M - Measurement -> Measurement involves collecting and analyzing data to gain insights into the performance and efficiency of the DevOps process.
S - Sharing -> It involves sharing information, knowledge, and experiences among team members and across teams.
CALMS provides a holistic framework for organizations to adopt and implement DevOps practices successfully. It emphasizes the importance of not only technological aspects but also cultural and organizational factors in achieving the goals of DevOps, such as faster delivery, improved quality, and enhanced collaboration.
Infrastructure as Code in DevOps
Infrastructure as Code (IaC) is a fundamental practice in DevOps that involves managing and provisioning infrastructure using code and automation. It allows development and operations teams to treat infrastructure as software, enabling the automated creation, configuration, and management of infrastructure resources such as servers, networks, databases, and storage.
Common IaC Tools ~
Terraform: A widely-used open-source tool for provisioning and managing infrastructure resources across various cloud and on-premises providers.
Ansible: An open-source automation tool that can be used for configuration management and infrastructure provisioning.
Puppet and Chef: Configuration management tools that can also be used for Infrastructure as Code, particularly for managing server configurations.
AWS CloudFormation: A service-specific IaC tool for provisioning AWS resources using JSON or YAML templates.
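As a concrete illustration of the last tool above, here is a minimal CloudFormation template sketch; the bucket name is a hypothetical placeholder, and a real bucket name must be globally unique:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal IaC example - provisions one S3 bucket.
Resources:
  ExampleBucket:                 # logical ID used within the template
    Type: AWS::S3::Bucket
    Properties:
      BucketName: example-iac-demo-bucket   # placeholder name
      VersioningConfiguration:
        Status: Enabled          # keep old object versions
```

Because the template is plain text, it can be version-controlled, reviewed, and re-applied just like application code, which is the core idea of IaC.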
Configuration management pipeline in DevOps
- A Configuration Management Pipeline, often referred to as a Configuration Management Continuous Integration (CI) Pipeline, is a key component of DevOps practices.
- It is designed to automate and manage the configuration of infrastructure and software, ensuring that systems are consistently configured, tested, and deployed throughout the software development and delivery process.
- It integrates with the broader CI/CD pipeline to ensure that configurations are consistent, tested, and deployed reliably across various environments while emphasizing security, compliance, and continuous feedback and improvement.
- This practice plays a crucial role in maintaining a stable, agile, and efficient software delivery process.
Container Pipeline in DevOps
- A Container Pipeline in DevOps is a specialized part of the overall Continuous Integration/Continuous Deployment (CI/CD) pipeline that focuses on the automation of container-based application development and deployment processes.
- Containers, often managed by platforms like Docker, provide a way to package an application and its dependencies into a single, portable unit. Containerization enhances consistency, scalability, and deployment efficiency.
- By implementing a Container Pipeline in DevOps, organizations can achieve greater consistency and efficiency in deploying containerized applications, enhance security, and accelerate the delivery of software updates while maintaining a high level of reliability and scalability.
- Containerization, combined with automation and orchestration, has become a cornerstone of modern application deployment practices.
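As a minimal sketch of the packaging step in such a pipeline (the base image and application entry point are illustrative assumptions, not from any specific project), a Dockerfile for a small Python service might look like:

```dockerfile
# Package a hypothetical Python service and its dependencies into one image.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and define the startup command.
COPY . .
CMD ["python", "app.py"]
```

The CI stage of the container pipeline would build this image, tag it with the build version, and push it to a registry for later deployment.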
The Bakery Pipeline
- The “Bakery” pipeline in DevOps, often referred to as the “Golden Image” or “Immutable Infrastructure” pipeline, is a concept and practice related to infrastructure provisioning and deployment.
- This approach is particularly useful in cloud-based environments and focuses on creating consistent, pre-configured, and version-controlled machine images (virtual machine or container images) that can be easily and quickly deployed to various environments.
- The Bakery pipeline is well-suited for cloud-native and containerized applications, as it ensures consistency and repeatability of infrastructure and application deployments.
- It aligns with DevOps principles by emphasizing automation, version control, and continuous testing, while also promoting the practice of immutable infrastructure to improve stability and security.
Continuous Delivery in DevOps
Continuous Delivery (CD) is a DevOps practice that focuses on automating and streamlining the software delivery pipeline to enable the rapid, reliable, and frequent release of software updates to production or staging environments. CD builds upon Continuous Integration (CI) and extends the automation process further into the deployment and release phases.
- Continuous Integration (CI): CD begins with CI. CI ensures that code changes from multiple contributors are regularly merged into a shared codebase and that automated tests are run to validate these changes.
- Automated Testing: CD relies heavily on automated testing, a critical component of the CD pipeline, to ensure the quality of the software throughout the development process.
- Artifact Management: CD pipelines typically involve the generation and storage of build artifacts, which are versioned and stored in artifact repositories for consistency and traceability.
- Deployment Automation: Automated deployment scripts and tools are used to provision and configure infrastructure, deploy application code, and perform any necessary database schema changes.
- Environment Parity: CD seeks to maintain parity between different environments (e.g., development, testing, staging, production) to minimize configuration-related issues. This is often achieved through Infrastructure as Code (IaC) practices.
- Feature Toggles (Feature Flags): CD encourages the use of feature flags to control the activation or deactivation of specific features in production. This allows for the gradual rollout of new features to select users or groups.
- Continuous Testing: Beyond automated testing, CD promotes continuous testing in production-like environments. This includes canary deployments, A/B testing, and monitoring for performance, security, and other non-functional aspects.
- Automated Rollback: The ability to quickly and safely roll back to a previous version is crucial for minimizing downtime and user impact.
- Feedback Loops: Continuous Delivery relies on feedback loops; this feedback informs decisions about whether a release is ready for production or needs further refinement.
- Security and Compliance: Security checks and compliance testing should be integrated into the CD pipeline to ensure that code changes and infrastructure configurations meet security and regulatory requirements.
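To make the feature-toggle idea above concrete, here is a minimal sketch of a percentage-based flag. The flag names, rollout percentages, and hashing scheme are illustrative assumptions, not the API of any real feature-flag library:

```python
import hashlib

# Hypothetical flag configuration: feature name -> percentage of users enabled.
FLAGS = {"new_checkout": 20}  # roll the feature out to ~20% of users

def is_enabled(feature: str, user_id: str) -> bool:
    """Deterministically bucket a user into [0, 100) and compare to the rollout %."""
    rollout = FLAGS.get(feature, 0)  # unknown features are off by default
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout
```

Because the bucket is derived from a hash of the feature and user ID, the same user always gets the same answer, so a gradual rollout stays stable between requests.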
Trunk Based Development in Continuous Delivery
- Trunk-Based Development (TBD) is a software development practice that focuses on keeping a single, shared codebase or “trunk” as the primary branch for all development activities.
- This approach contrasts with other branching strategies, such as feature branching or Git flow, where developers work on separate branches for individual features or tasks.
- Trunk-Based Development is often associated with DevOps practices and principles.
- It aligns with DevOps goals of delivering software more quickly and reliably, reducing waste, and fostering a culture of continuous improvement.
- However, it may require adjustments in workflows and tooling to support the practice effectively, especially in larger and more complex development environments.
Branch Based Development in Continuous Delivery
- Branch-Based Development, also known as Feature Branching or Git Flow, is a common software development practice that involves creating separate branches for different features, bug fixes, or tasks during the development process.
- Each branch represents a specific piece of work and allows developers to work on these tasks in isolation without affecting the main codebase (often referred to as the “master” or “main” branch).
- While this branching model has some advantages, it also presents challenges in terms of complexity and integration.
How Artifacts flow through the system
- Code is checked into a version control system; that commit triggers a build in the CI system.
- Once the build finishes, the resulting artifacts are published to a central repository.
- Next, a deployment workflow deploys those artifacts to a live environment.
- That environment should be as close a copy of production as possible; one may call it CI, staging, test, or pre-prod. At this point, smoke testing, integration testing, and acceptance testing all happen, and they should be automated as much as possible.
- Once the artifact passes all those tests, it is released and can be deployed to the production environment.
Five Practices When Building Out a Continuous Delivery Pipeline
- Only build artifacts once.
- Artifacts should be immutable.
- Deployment should go to a copy of production.
- Stop deploys if a previous step fails.
- Deployments should be idempotent.
Software Testing and its Types
Software testing is a critical part of the software development lifecycle, and it involves evaluating an application or system to identify and resolve defects, validate that it meets requirements, and ensure its quality and reliability.
Types of Software Testing ~
- Unit Testing:
- Scope: Tests individual components or units of code in isolation (e.g., functions, methods, classes).
- Purpose: To verify that each unit of code functions correctly in isolation.
- Integration Testing:
- Scope: Focuses on testing the interactions and interfaces between different units or modules of code.
- Purpose: To ensure that integrated components work together as expected.
- Functional Testing:
- Scope: Evaluates the functionality of the software as a whole, typically based on defined requirements.
- Purpose: To verify that the software functions correctly according to specified functional requirements.
- Regression Testing:
- Scope: Repeatedly tests the application to ensure that recent code changes haven’t introduced new defects or broken existing functionality.
- Purpose: To maintain the integrity of previously tested and functioning code.
- User Acceptance Testing (UAT):
- Scope: Involves end-users or stakeholders testing the software to ensure it meets their requirements and expectations.
- Purpose: To gain user approval and ensure that the software aligns with business needs.
- Alpha Testing:
- Scope: Conducted by the development team internally to assess the application’s functionality and quality.
- Purpose: To identify and address issues before releasing the software to a wider audience.
- Beta Testing:
- Scope: Involves a select group of external users or early adopters testing the software in a real-world environment.
- Purpose: To gather feedback and identify any issues before a wider release.
- Usability Testing:
- Scope: Evaluates the user-friendliness and overall user experience of the application.
- Purpose: To ensure that the application is intuitive and meets the needs of its intended users.
- Security Testing:
- Scope: Focuses on identifying vulnerabilities, weaknesses, and security risks in the application.
- Purpose: To ensure that the application is secure and protected against threats and attacks.
- Performance Testing:
- Scope: Assesses the application’s performance characteristics, including speed, scalability, and responsiveness.
- Purpose: To identify performance bottlenecks, optimize code, and ensure the application can handle expected loads.
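As a minimal illustration of the first type above, a unit test exercises one function in isolation. The `apply_discount` function here is a made-up example, shown in plain Python so it runs with or without a framework like pytest:

```python
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent; reject invalid percentages."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Unit tests: each checks one behaviour of the unit in isolation.
def test_apply_discount_basic():
    assert apply_discount(100.0, 20) == 80.0

def test_apply_discount_zero():
    assert apply_discount(50.0, 0) == 50.0

def test_apply_discount_invalid():
    try:
        apply_discount(10.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

With pytest, running `pytest` in the project directory would discover and run the `test_*` functions automatically.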
Six Phases of Continuous Delivery
There are six key phases of continuous delivery, each with tooling associated with it. The key areas are:
- Version control -> Where we commit code changes and can view the entire history of all changes ever made. It allows developers to stay in sync with each other by treating each change as an independent layer in the code.
- CI system -> Jenkins, being open source, is popular in many organizations. It has tons of community support, and almost every provider integrates with it.
- Build -> Build tools are very language-dependent. A build executes a consistent set of steps every time, or one can take a workflow approach with a tool like Maven, which allows running reproducible builds and tests from the developer desktop all the way to the CI system.
- Test -> Integration testing is usually performed with test-driven frameworks or with in-house scripts. Testing frameworks and tools in this area include Robot Framework, Protractor, and Cucumber.
- Artifact repository -> Artifactory is a popular artifact repository manager used in DevOps and software development processes. It serves as a centralized repository for storing and managing binary artifacts such as libraries, dependencies, packages, and build artifacts.
- Deployment -> Deployment tooling lets you define a job, put permissions and approvals around it, and then automate a workflow across your systems. Deployment is one of the most common workflows people automate.
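Tying the phases together, a declarative Jenkins pipeline can express build, test, and deploy as stages. This is a sketch: the Maven commands assume a Java project, and `deploy.sh` is a hypothetical deployment script.

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'mvn -B -DskipTests package' }   // compile and package
        }
        stage('Test') {
            steps { sh 'mvn test' }                     // run the test suite
        }
        stage('Deploy') {
            when { branch 'main' }                      // deploy only from main
            steps { sh './deploy.sh staging' }          // hypothetical deploy script
        }
    }
}
```

Checked into the repository as a `Jenkinsfile`, this definition is versioned alongside the code it builds.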
Continuous Integration ToolChain
- The Continuous Integration (CI) toolchain in DevOps is a set of tools and practices that facilitate the automation and orchestration of the CI/CD (Continuous Integration and Continuous Delivery) pipeline.
- This toolchain plays a fundamental role in enabling teams to build, test, and deploy software rapidly and reliably.
- The choice of a CI toolchain depends on the specific needs of your project, including the technology stack, development methodology, and deployment targets. The core goal is to automate and streamline the development and delivery process, resulting in faster development cycles and higher software quality.
Some of the examples of ToolChain ~
- Basic CI Toolchain:
- Version Control System (VCS): Git, SVN, Mercurial.
- Build Server: Jenkins, Travis CI, CircleCI.
- Artifact Repository: Nexus, JFrog Artifactory.
- Testing Frameworks: JUnit, NUnit, Selenium.
- Container CI/CD Toolchain:
- Version Control System (VCS): Git, GitHub, GitLab.
- Containerization: Docker, containerd.
- Container Orchestration: Kubernetes, Docker Swarm.
- Container Registry: Docker Hub, AWS ECR.
- CI/CD Platform: GitLab CI/CD, GitHub Actions, AWS CodePipeline.
- Mobile App CI/CD Toolchain:
- Mobile App Platforms: iOS, Android.
- CI/CD Platform: Jenkins, Bitrise, Fastlane.
- Testing Frameworks: XCTest, Espresso, Appium.
- DevSecOps CI/CD Toolchain:
- Security Scanning: SonarQube, OWASP ZAP, Nessus.
- Secrets Management: HashiCorp Vault, AWS Secrets Manager.
- CI/CD Platform: Jenkins, GitLab CI/CD.
- Logging and Monitoring: ELK Stack, Prometheus.
- IoT CI/CD Toolchain:
- IoT Platforms: Raspberry Pi, Arduino, ESP8266.
- CI/CD Platform: Jenkins, Travis CI.
- IoT Testing: Device simulators, MQTT testing tools.
Reliability Engineering in DevOps
What does Reliability mean?
Reliability is the ability of a system or component to function under stated conditions for a specified period of time. In IT, this includes availability, performance, security, and all the other factors that allow the service to actually deliver its capabilities to its users.
Mean Time to Recovery (MTTR) ~ How quickly a service can recover from a disruption and restore service.
Mean Time Between Failures (MTBF) ~ The average time between service disruptions.
The total disruption of the service is a function of the MTBF and the MTTR.
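These two metrics combine into the standard availability formula, availability = MTBF / (MTBF + MTTR), which the following sketch computes (the example numbers are illustrative):

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Fraction of time the service is up: MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Example: a failure every 500 hours on average, 1 hour to recover,
# gives an availability of 500 / 501, roughly 99.8% uptime.
```

Note that uptime improves either by failing less often (raising MTBF) or by recovering faster (lowering MTTR); DevOps practices like automated rollback attack the second term.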
Definition of Reliability Engineering ~
Reliability Engineering is a discipline that focuses on designing, building, and maintaining systems and products to ensure they perform their intended functions under specified conditions, while also minimizing the likelihood and impact of failures. In the context of DevOps, reliability engineering plays a crucial role in creating software systems and services that are highly available, performant, and resilient.
Logging in DevOps
- Logging in DevOps refers to the practice of recording events, actions, and system information in a structured format for the purpose of monitoring, troubleshooting, and auditing applications and infrastructure components.
- Logging is a critical aspect of DevOps because it provides valuable insights into the behavior and performance of software systems, helping teams identify and resolve issues quickly.
Usage of Logging ~
- Troubleshooting
- Resource management
- Intrusion detection
- User experience
Five Ws of Logging
- What happened?
- When did it happen?
- Where did it happen?
- Who was involved?
- Where did that entity come from?
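The five Ws map naturally onto a structured log record. Below is a minimal sketch using only Python's standard library; the field names are an illustrative convention, not a standard:

```python
import json
import logging
import time

def log_event(action: str, where: str, who: str, source_ip: str) -> str:
    """Emit one structured (JSON) log line answering the five Ws."""
    record = {
        "what": action,          # What happened?
        "when": time.time(),     # When did it happen?
        "where": where,          # Where did it happen (host/service)?
        "who": who,              # Who was involved?
        "source": source_ip,     # Where did that entity come from?
    }
    line = json.dumps(record)
    logging.getLogger("app").info(line)
    return line
```

Emitting logs as JSON rather than free text makes them easy to parse, filter, and aggregate in a centralized logging platform.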
Centralized Logging
- Centralized logging is a practice in which log data generated by various components and systems across an organization’s infrastructure is collected, stored, and managed in a central repository or logging platform.
- This centralized approach to logging offers several advantages in terms of monitoring, troubleshooting, and managing complex systems and applications.
- Centralized logging enables organizations to gain deep insights into the behavior and performance of their systems and applications, while also facilitating efficient incident management and troubleshooting.
Five principles of Centralized Logging:
- Don’t collect log data you won’t use.
- Retain log data for as long as it can be used.
- Alert only on what you must respond to.
- Don’t exceed business security needs.
- Expect logs to change, in both their format and their messages.
Rational Unified Process (RUP)
The Rational Unified Process (RUP) is a comprehensive software development process framework that provides a structured and disciplined approach to software development, focusing on iterative development, best practices, and customizable processes.
It spans business modeling through deployment and has deliverables for each phase, in addition to well-defined handoffs.
RUP places a stronger emphasis on upfront planning, extensive documentation, and a well-defined process framework. RUP projects often have more structured phases and roles.
Comparing Scrum with Kanban
Both Scrum and Kanban are agile methodologies, but they have some differences from each other.
| Scrum | Kanban |
|---|---|
| Prescriptive, with sprints, burn-down charts, and cross-functional teams. | No prescribed iterations; continuous flow and optional cross-functional teams. |
| Defined roles | No defined roles |
| Change waits for the next sprint | Change can happen at any time |
Explain - Lean
- Lean is a systematic approach to optimizing processes and creating value for customers while minimizing waste.
- Its principles have been adapted and applied to various industries, including software development, healthcare, and service sectors.
- Lean is also inspired by Statistical Process Control (SPC) and is focused on delivering more value with fewer resources, reducing inefficiencies, and continuously improving processes.
- Lean inherits from Just-in-Time (JIT) manufacturing, i.e., increasing efficiency and decreasing waste by receiving goods only as they are needed.
Lean is a systematic method to eliminate waste and maximize the flow of value through a system.
The Seven Wastes of Lean Software
- Partially done work
- Extra features or over production
- Relearning
- Handoffs
- Delays
- Task or Context switching
- Defects
Each one of these can be identified and reduced in the development process by using Lean techniques.
Important terms of using methodologies
There are some ceremonies required to run any project smoothly and to follow the rules of the methodologies:
- Stand-Up (or Scrum) ~ A short daily meeting the team uses to sync up with each other.
- Backlog - The list of work items for the team, sorted in the order they should be performed.
- Iteration (Sprint) - A single time boxed development cycle, at the end of which something of business value is produced.
- Theory of Constraints - A methodology for optimizing flow by identifying the limiting factor in a system, improving it until it is no longer the bottleneck, and repeating the process.
- Minimum Viable Product - An initial deliverable of a product with just enough features to engage early customers and start gathering feedback for future development.
Summary of Part 2
DevOps is not a framework or a workflow. It’s a culture that is overtaking the business world. DevOps ensures collaboration and communication between software engineers (Dev) and IT operations (Ops). With DevOps, changes make it to production faster. Resources are easier to share. And large-scale systems are easier to manage and maintain.
Happy Learning!!