Let’s start with the basics and learn what DevOps is before moving on to DevSecOps. DevOps is a set of beliefs, practices, and technologies that improves a company’s capacity to release software and services rapidly.

When projects are developing quickly, it’s easy to lose sight of security, which can result in vulnerabilities and breaches. Let’s take a look at how incorporating security into our DevOps pipeline will lessen the likelihood of an attack.

DevSecOps (DevOps plus Security): What’s It All About?

DevSecOps is a mindset in which all members of the development and operations teams work together to ensure the application’s safety at every stage. By integrating necessary security checks into CI/CD automation using the right technologies, it guarantees that security is implemented across the whole application software development lifecycle (SDLC).

Take the zero-day vulnerability in log4j as an example; the DevSecOps approach makes it easy to find and stop such threats. The SBOM for our application code may be generated with the help of the Syft tool, and then we can run it via Grype to check for newly discovered vulnerabilities and see if a patch is available. Our CI/CD process includes these checks, so when a problem is discovered, our developers and security team are immediately notified.

What are the benefits of utilizing DevSecOps?

  • Detects flaws and weak spots in code early, before they reach production.
  • Easier compliance
  • Faster recovery from security incidents
  • Lower cost, since issues are caught early
  • Possible to use AI-based anomaly detection monitoring
  • Boosts confidence and reduces the attack surface
  • All risks and potential solutions are clearly visible.
  • Builds a culture of security.

How does a DevSecOps continuous integration/continuous delivery pipeline function?

DevOps Pipeline with Jenkins

The following common CI/CD phases, and how to secure them, will be discussed in this article:

  • Plan/Design
  • Develop
  • Code and build analysis
  • Test
  • Deploy
  • Monitor and Alert

The time has come to get started with DevSecOps implementation.

Plan/Design

In this phase, we define the time, place, and method for doing integration, deployment, and security testing.

1.1 Threat Modeling

Threat modeling effectively puts you in the mindset of an attacker: it lets you see the application through the attacker’s eyes and block attack paths before they get the chance to exploit them. For our threat modeling, we can utilize either the Open Web Application Security Project (OWASP) model or Microsoft’s Simple Questions approach. For our secure development lifecycle, we can also use the open-source OWASP Threat Dragon and Cairis tools to create threat model diagrams.

Security testing should be integrated into the software development life cycle (SDLC) at every level, from requirements gathering to code review to production. The initial planning phase should incorporate security risk elements, such as designing applications in a way that guarantees a secure architecture.

Snyk: Secure Software Development Lifecycle

Separating development, testing, and production environments and instituting approval protocols to control the promotion of deployments between environments are essential for a secure software development lifecycle. With this in place, a developer is less likely to make unauthorized changes, and any change is reviewed and approved in accordance with established procedures.

Develop

Shift-left security best practice allows us to think about security from the very beginning of the development process, when we’re just starting to write code.

  • Linting tools can be installed directly into an IDE like Visual Studio Code. SonarLint is a well-known linter that warns you as you type about potential security flaws in your code.
  • Avoid committing sensitive information to your code by using pre-commit hooks (see the sketch after this list).
  • Establish a secure branching strategy and code review process.
  • Sign your git commits with a GPG key.
  • Verify the hash value of any binary or file you download.
  • Make use of two-factor authentication.
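As an illustration, here is a minimal sketch (assuming gitleaks is installed and a GPG key already exists; the key ID is a placeholder) of enabling commit signing and a secret-blocking pre-commit hook:

# Sign all commits with an existing GPG key (replace <KEY_ID> with your key ID)
git config --global user.signingkey <KEY_ID>
git config --global commit.gpgsign true

# Block commits that contain secrets by running gitleaks against staged changes
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
exec gitleaks protect --staged
EOF
chmod +x .git/hooks/pre-commit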

Code and build analysis

We need to perform a vulnerability and secret scan on our code before we begin the build. Static code analysis can find flaws such as potential overflows in the code; these overflows can lead to memory leaks, which degrade system performance by reducing the amount of memory available to each program, and attackers can use them as a foothold to gain access to data.

3.1 Detect Secrets and Credentials

detect-secrets is a secret-detection and prevention tool designed for use in enterprise environments. It can also scan files that are not tracked by git. Gitleaks is one example of a tool that serves a similar purpose.

detect-secrets scan test_data/ --all-files

3.2 Software Bill of Materials (SBOM)

An SBOM allows us to discover every piece of code, library, and module in use, along with their dependencies. This reduces the delay in fixing newly discovered security flaws, such as the zero-day in Log4j.

For the SBOM report, we can use the resources provided below.

3.2.1 Syft, Grype, and Trivy

The Syft utility produces SBOMs for container images and filesystems in open, distributable formats such as CycloneDX. Syft also supports cosign attestation for verifying the authenticity of images.

syft nginx:latest -o cyclonedx-json=nginx.sbom.cdx.json

We have now produced an SBOM report detailing the libraries and modules currently used by our application. Let’s use Grype to look for security holes in the SBOM report.

[root@laptop ~]# grype sbom:./nginx.sbom.cdx.json | head
 ✔ Vulnerability DB    	[no update available]
 ✔ Scanned image       	[157 vulnerabilities]
NAME          	INSTALLED          	FIXED-IN      	TYPE  VULNERABILITY 	SEVERITY
apt           	2.2.4                                		deb   CVE-2011-3374 	Negligible
bsdutils      	1:2.36.1-8+deb11u1                   	deb   CVE-2022-0563 	Negligible
coreutils     	8.32-4+b1                            		deb   CVE-2017-18018	Negligible
coreutils     	8.32-4+b1          	(won't fix)   	deb   CVE-2016-2781 	Low
curl          	7.74.0-1.3+deb11u1                   	deb   CVE-2022-32208	Unknown
curl          	7.74.0-1.3+deb11u1                   	deb   CVE-2022-27776	Medium
curl          	7.74.0-1.3+deb11u1 	(won't fix)   	deb   CVE-2021-22947	Medium
curl          	7.74.0-1.3+deb11u1 	(won't fix)   	deb   CVE-2021-22946	High
curl          	7.74.0-1.3+deb11u1 	(won't fix)   	deb   CVE-2021-22945	Critical

# Or we can directly use Grype for SBOM scanning
grype nginx:latest

Please note that many of the vulnerabilities identified by SCA tools are not practically exploitable and cannot be patched with standard updates; curl and glibc are common examples. That is why these findings are marked as ones that cannot (or will not) be fixed.

Newer versions of Trivy can not only discover vulnerabilities in containers and filesystems, but also generate SBOM reports.
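For example, a minimal sketch of generating a CycloneDX SBOM with Trivy (using nginx:latest as a sample image):

# Generate a CycloneDX SBOM for a container image with Trivy
trivy image --format cyclonedx --output nginx.sbom.cdx.json nginx:latest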

3.2.2 OWASP Dependency-Check

OWASP Dependency-Check is a Software Composition Analysis (SCA) tool that detects publicly disclosed vulnerabilities in a project’s dependencies. It does this by checking whether a dependency has a corresponding Common Platform Enumeration (CPE) identifier. If a problem is discovered, a report is generated with links to the relevant CVE entries. Our software components and their security flaws can be visualized by publishing an SBOM report to Dependency-Track.

dependency-check.sh --scan /project_path

As soon as we identify the specific flaws in our code, we can fix them and ensure the security of our application.

3.3 Static Application Security Testing (SAST)

SAST is a way to find flaws in a program without actually running it: the tool examines the code against a set of rules.

SonarQube helps everyone write cleaner and safer code. It can be used with a wide variety of languages, including Java, Kotlin, Go, and JavaScript, and it can also import unit-test code coverage. Jenkins and Azure DevOps integration is simple. Similar capability can be found in the commercial alternatives Checkmarx, Veracode, and Klocwork.

docker run \
--rm \
-e SONAR_HOST_URL="http://${SONARQUBE_URL}" \
-e SONAR_LOGIN="AuthenticationToken" \
-v "${YOUR_REPO}:/usr/src" \
sonarsource/sonar-scanner-cli

3.4 Unit Tests

Unit tests check whether individual components of the program work as expected; they are used to ensure the validity of a specific section or module of code. Unit test reports can be generated with the help of tools like JaCoCo for Java and Mocha or Jasmine for Node.js. These reports can also be sent to SonarQube, where we can view code coverage, i.e., the proportion of the code that is covered by test cases.
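For illustration, assuming the respective coverage tooling is already configured in the project, test and coverage reports could be produced like this (a sketch, not project-specific commands):

# Java: run unit tests and generate a JaCoCo coverage report (requires the jacoco-maven-plugin)
mvn clean test jacoco:report

# Node.js: run Mocha tests under nyc and emit an lcov report that SonarQube can import
npx nyc --reporter=lcov mocha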

Once static analysis and unit tests are complete, we can scan our Dockerfile next.

3.5 Static Analysis of Dockerfiles

Always check the Dockerfile for security flaws, as it is easy to overlook potential problems while writing one. A few frequent mistakes we can avoid:

  • Don’t use the latest Docker image tag.
  • Make sure a non-root container user is set.

Checkov or docker scan can be used to scan a Dockerfile against best-practice rules.

docker run -i -v $(pwd):/output bridgecrew/checkov -f /output/Dockerfile -o json

After a container image has been built, we run a vulnerability scan on it and then sign it.

3.6 Container Image Scan

By scanning images, we can learn about the current security posture of our container images and take steps to improve it. We should avoid installing superfluous packages and use a multi-stage build, which keeps the final image clean and secure. Image scanning needs to happen in both test and production environments.

Here are some popular options, both free and paid, for scanning containers:

  • Popular open-source container scanning tools include Trivy, Grype, and Clair (see the example after this list).
  • docker scan relies on Snyk as its scanning backend; it can also scan Dockerfiles.
  • To detect container escapes, malware, cryptocurrency miners, code injection backdoors, and other threats, Aqua scan offers container image scanning, but its standout feature is Aqua DTA (Dynamic Threat Analysis) for containers, which tracks behavioral patterns and Indicators of Compromise (IoCs) like malicious behavior and network activity.
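As a quick illustration (using nginx:latest as a sample image), an image can be scanned directly with Trivy or docker scan:

# Scan an image for OS and library vulnerabilities with Trivy
trivy image nginx:latest

# Scan the same image with docker scan (Snyk backend)
docker scan nginx:latest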

3.7 Sign and Verify Container Images

An attacker who is able to compromise the build process may replace the legitimate container image with a malicious one. To ensure we always run the authentic container image, we must sign and verify it.

Switching to distroless images reduces both the size of the container image and its attack surface. Container image signing is still required because, even with distroless images, there is a risk of ending up with a malicious image. Both cosign and skopeo can be used for container signing and verification. This blog goes into greater detail on using Cosign and distroless images to secure containers.
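If a signing key pair does not exist yet, cosign can generate one; this writes the cosign.key and cosign.pub files used below to the current directory:

cosign generate-key-pair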

cosign sign --key cosign.key custom-nginx:latest
cosign verify --key cosign.pub custom-nginx:latest

3.8 Container Image Validation Test

We add one more check to ensure the container image is secure and contains all necessary files with the right permissions. We can use dgoss to validate container images.

For example, let’s create a validation test for the nginx image that checks that it listens on port 80, has internet access, that /etc/nginx/nginx.conf has the correct file permissions, and that the nginx user’s shell is set correctly in the container.

dgoss edit nginx
goss add port 80
goss add http https://google.com
goss add file /etc/nginx/nginx.conf
goss add user nginx

Once we exit, dgoss copies the goss.yaml from the container to the current directory, and we can modify it to suit our validation needs.

Validate

[root@home ~]# dgoss run -p 8000:80 nginx
INFO: Starting docker container
INFO: Container ID: 5f8d9e20
INFO: Sleeping for 0.2
INFO: Container health
INFO: Running Tests
Port: tcp:80: listening: matches expectation: [true]
Port: tcp:80: ip: matches expectation: [["0.0.0.0"]]
HTTP: https://google.com: status: matches expectation: [200]
File: /etc/nginx/nginx.conf: exists: matches expectation: [true]
File: /etc/nginx/nginx.conf: mode: matches expectation: ["0644"]
File: /etc/nginx/nginx.conf: owner: matches expectation: ["root"]
File: /etc/nginx/nginx.conf: group: matches expectation: ["root"]
User: nginx: uid: matches expectation: [101]
User: nginx: gid: matches expectation: [101]
User: nginx: home: matches expectation: ["/nonexistent"]
User: nginx: groups: matches expectation: [["nginx"]]
User: nginx: shell: matches expectation: ["/bin/false"]
Total Duration: 0.409s
Count: 13, Failed: 0, Skipped: 0
INFO: Deleting container

Test

The goal of testing is to verify that the software functions as intended and is free of security holes.

4.1 Smoke Test

Smoke tests are brief, but they exercise vital parts and features of a program. They are run on each application build to ensure fundamental features work properly before more time-consuming integration and end-to-end testing is performed. Smoke tests help create the rapid feedback loops that are an integral part of the software development life cycle.

The curl command on the API, for instance, can be used to determine the HTTP response code and latency during a smoke test.
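For illustration, a minimal smoke test against a hypothetical health endpoint, failing the build if the API does not return HTTP 200:

# Hypothetical endpoint; check the HTTP status code and report total request time
status=$(curl -s -o /dev/null -w '%{http_code}' https://api.example.com/healthz)
curl -s -o /dev/null -w 'Latency: %{time_total}s\n' https://api.example.com/healthz
[ "$status" -eq 200 ] || exit 1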

4.2 API Testing

Today’s applications might expose hundreds of extremely valuable endpoints that are very tempting to hackers. Ensuring your APIs are safe before, during, and after production is critical. This is why we need to conduct API testing.

API testing verifies things such as what kind of authentication is required, whether it can be bypassed, whether sensitive information is encrypted in transit rather than sent over plain HTTP, and whether the endpoints are vulnerable to SQL injection.

We can utilize tools such as JMeter, Taurus, Postman, and SoapUI for API testing. Here is a quick JMeter demo, with the API test cases located in the test.jmx file.

jmeter -n -t test.jmx -l result.jtl

4.3 Dynamic Application Security Testing (DAST)

DAST, or Dynamic Application Security Testing, identifies vulnerabilities in running web applications. In addition to finding SQL injection, cross-site scripting, and other concerns detailed in the OWASP Top 10, DAST tools can also detect security misconfigurations. HCL AppScan, ZAP, Burp Suite, and Invicti are all useful tools for scanning a running web app for security flaws. OWASP maintains a collection of DAST scanners, which you can find here. These tools are straightforward to incorporate into our existing CI/CD process.

zap.sh -cmd -quickurl http://example.com/ -quickprogress -quickout example.report.html

Deploy

Our deployment files should be scanned whether we are deploying infrastructure or an application. We can also add an automated or manual gate where the pipeline waits for external user approval before moving on to the next stage.

5.1 Static Scan of Kubernetes Manifests and Helm Charts

Kubernetes deployments and Helm charts should be scanned before being deployed. We can use Checkov to scan Kubernetes manifests and discover security and configuration issues; it can scan Helm charts as well. In addition to KubeLinter, Terrascan can be used to inspect Kubernetes manifests.

docker run -t -v $(pwd):/output bridgecrew/checkov -f /output/keycloak-deploy.yml -o json

#For Helm

docker run -t -v $(pwd):/output bridgecrew/checkov -d /output/ --framework helm -o json

5.2 Verify Kubernetes Manifests Against Policies Before Deploying

Kyverno provides an additional layer of security by ensuring that only permitted kinds of manifests are deployed into Kubernetes; otherwise it rejects the deployment and logs a policy-violation message. Kubewarden and Gatekeeper are two other options for Kubernetes admission control and policy enforcement.

Here is a quick Kyverno rule to block images that use the latest tag.
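A minimal sketch, modeled on Kyverno’s published disallow-latest-tag sample policy:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  validationFailureAction: Enforce
  rules:
    - name: require-non-latest-tag
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Using the ':latest' image tag is not allowed."
        pattern:
          spec:
            containers:
              - image: "!*:latest"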

5.3 CIS Scan Using kube-bench

Using the tests detailed in the CIS Kubernetes Benchmark, kube-bench verifies that Kubernetes has been deployed securely. kube-bench can run daily as a Kubernetes Job, and its results can be used in CI/CD to determine whether a pipeline should pass or fail.

kubectl apply -f eks-job.yaml
kubectl logs kube-bench-pod-name

5.4 IaC Scanning

Checkov, Terrascan, and KICS can be used to scan our infrastructure code. They work with Azure Resource Manager (ARM), CloudFormation, and Terraform.

Infrastructure testing in real time is possible with Terratest.

terraform init
terraform plan -out tf.plan
terraform show -json tf.plan | jq '.' > tf.json
checkov -f tf.json

Our application deployment and testing can begin once we have completed a scan for Kubernetes deployment and kube-bench.

Monitoring and Alerting

Logs and metrics about our infrastructure’s activity are gathered through monitoring and alerting, and notifications are sent out when certain metrics reach predetermined thresholds.

6.1 Metrics Monitoring

  • Prometheus is an open-source tool for monitoring metrics and has gained widespread adoption. It offers numerous exporters for gathering system or program statistics. Prometheus metrics can also be viewed in Grafana.
  • Nagios and Zabbix are two examples of open source software used to keep tabs on computer systems and services.
  • Sensu Go is an end-to-end solution for scalable monitoring and observability.

6.2 Log Monitoring

  • OpenSearch/Elasticsearch is a distributed search and analytics engine that can be used in real time for a wide range of search-related tasks.
  • Graylog is a centralized log management system that may be used to gather information, store it, and analyze it.
  • If you need a lightweight solution to store and query logs from all your apps and infrastructure, look no further than Grafana Loki.

6.3 Alerting

A security-focused logging and monitoring strategy ensures that private data is not logged in plain text. In our logging system, we can create test cases that detect anomalies in the data, for instance a regular expression that locates sensitive information in the logs of lower environments.
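For example, a crude (hypothetical) check that greps application logs for strings resembling AWS access key IDs or email addresses; the log path is a placeholder:

# Flag log lines containing what looks like an AWS access key ID or an email address
grep -rE 'AKIA[0-9A-Z]{16}|[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}' /var/log/myapp/ \
  && echo "Sensitive data found in logs!"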

  • Prometheus Alertmanager: The Alertmanager handles alerts sent by client applications such as the Prometheus server.
  • Grafana OnCall provides a developer-friendly incident response system with notifications via phone calls, SMS, Slack, and Telegram.

The transparency of a distributed microservices architecture is enhanced by Application Performance Monitoring (APM). The APM data provides a comprehensive look at the application, which can improve software security. Distributed tracing solutions such as Zipkin and Jaeger “stitch” together all logs and provide end-to-end insight into requests. New vulnerabilities or attacks can be countered more quickly.

Many cloud providers also have their own monitoring toolsets, and additional tools are available from their marketplaces. New Relic, Datadog, AppDynamics, and Splunk are some of the commercial monitoring providers.

6.4 Security Information and Event Management (SIEM)

Security information and event management (SIEM) provides real-time monitoring and analysis of events, as well as recording and logging of security data for compliance or auditing. Splunk, Elastic SIEM, and Wazuh offer prebuilt ML jobs and behavior-based rules for automatic detection of anomalies and suspicious activity.

6.5 Auditing

After deployment, visibility comes from the level of auditing that has been put in place on the application and infrastructure. The purpose of auditing is to gather enough information to feed into a security tool. On AWS, we have CloudTrail for enabling audits, and on Azure we have platform logs. We can use Auditbeat or Splunk to deliver audit data to any logging platform, such as Elasticsearch, and then build an auditing dashboard based on that data.

6.6 Kubernetes Runtime Security Monitoring

Falco is a cloud-native threat detection tool built specifically for Kubernetes. Unexpected behavior, intrusions, and data theft can all be spotted in real time. Under the hood it uses Linux eBPF technology to track your running system and programs. If a container is breached, a pod is accessed as root, or any other unauthorized action is attempted, a webhook or logs can be sent to the monitoring system. Similar tools that offer Kubernetes runtime security include Tetragon, KubeArmor, and Tracee.

We have seen what a DevSecOps CI/CD pipeline looks like up to this point. Let’s take the next step and beef up the security measures.

DevSecOps continuous integration and continuous delivery best practices

Network Security

The network is the first line of defense against any form of attack, so making it more secure also protects our application.

Build a private network just for the workload (e.g., apps and databases), and route internet access exclusively through a NAT gateway.

Establish granular control over network traffic in both directions. In addition, we can employ Cloud Custodian to enforce our security policies, automatically filtering out suspicious data transmissions.

Always set up Network Access Control Lists (NACLs) for AWS subnets. The recommended procedure is to first deny all traffic and then permit only the necessary rules.

Make use of a WAF (Web Application Firewall).

Enable anti-DDoS safeguards.

Tools like Nmap, Wireshark, and tcpdump can be used to probe networks and examine packets.
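For instance (only against hosts you are authorized to test), a quick service scan and a packet capture for later inspection in Wireshark:

# Detect service versions on the first 1024 ports of a host
nmap -sV -p 1-1024 example.com

# Capture HTTPS traffic on eth0 into a pcap file for analysis in Wireshark
tcpdump -i eth0 -w capture.pcap port 443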

Use VPN or Bastion host for connecting to infrastructure networks.

Web Application Firewall (WAF)

A WAF is a layer 7 firewall that safeguards our web applications against bots and common web attacks like XSS and SQL injection that could impact availability, security, or resource usage. The majority of cloud providers offer a WAF, and it only takes a few clicks to put one in front of our application.

Curiefense, an open-source, cloud-native, self-managed WAF, can protect all types of web traffic, services, and APIs, including against DDoS attacks. The WAF services of Cloudflare and Imperva are also available.

Identity and Access Management (IAM)

IAM is a centrally defined policy framework for managing access to data, applications, and other network assets. Some approaches to securing your systems from prying eyes are detailed below.

  • Have centralized user management using Active Directory or LDAP.
  • Manage user access with RBAC.
  • Write granular AWS IAM role policies for fine-grained access control.
  • Rotate access and secret keys on a regular basis (see the example after this list).
  • Teleport can be used to manage your network’s connections, user access, and auditing from a single location.
  • Store secrets in a vault and make sure that only the right people can access them.
  • Incorporate zero-trust practices into your services.
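For example, a sketch of rotating an access key for a hypothetical IAM user named deploy-bot with the AWS CLI:

# Create a new access key, then deactivate and finally delete the old one
aws iam create-access-key --user-name deploy-bot
aws iam update-access-key --user-name deploy-bot --access-key-id OLD_KEY_ID --status Inactive
aws iam delete-access-key --user-name deploy-bot --access-key-id OLD_KEY_ID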

Securing the Cloud, the Server, and the Apps

Cloud, OS, and application security can all be improved using the CIS benchmarks. It is always a good idea to use a hardened OS, as it decreases the attack surface of the server. We can use the hardened images that most cloud providers offer, or we can build our own.

These days, containerization is the norm for running software. We need to perform static analysis on our applications and scan the images of our containers to make them more secure.

To guard against viruses, trojans, malware, and other dangerous threats, we can install an antivirus such as Falcon, SentinelOne, or ClamAV.

Patching the server

The majority of attacks target servers by exploiting security holes in the server’s operating system or applications. Regular package updates and vulnerability scans of the environments can help lower the chance of exposure.

Foreman and Red Hat Satellite can be used to automate the server patching process, while OpenVAS and Nessus can be used to search for vulnerabilities and generate a list of potential exploits.

To ensure that Kubernetes is being run safely, we can make use of the following resources:

Make sure your Kubernetes YAML files set a proper securityContext (see the sketch below).
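A minimal sketch of a pod spec with a restrictive securityContext (the image reference and values are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0.0   # illustrative image reference
      securityContext:
        runAsNonRoot: true
        runAsUser: 10001
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]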

With Network Policy, you can set up default restrictions that will allow only the traffic you need.

In order to establish Authorization and enable mTLS communication across microservices, you should use a Service Mesh (Linkerd, Istio).

Create a CIS benchmark report for your Kubernetes cluster with the help of kube-bench. We can perform this scan daily in our Kubernetes cluster and repair any discovered issues.

Kubernetes security vulnerabilities and misconfigurations can be spotted and remedied with the use of tools like Kube-hunter, Popeye, and Kubescape.

Scan your Kubernetes YAML and Helm chart for best practices and vulnerabilities using Checkov, KubeLinter, and Terrascan.

Implement pre-deployment policy checks with tools like Kyverno, Kubewarden, and Gatekeeper, which can block non-compliant deployments.

Protect the worker nodes with a hardened image. Every major cloud provider offers CIS benchmark-hardened image options. Using amazon-eks-ami, we can also build our own hardened image from scratch.

Kubernetes secrets should be stored encrypted, or a third-party secret manager such as Vault should be used.

Kubernetes service accounts can have AWS roles assigned to them directly by using IAM roles for service accounts.

The stability and behavior of an application in real-world scenarios can be better understood with the help of the Chaos Mesh and Litmus chaos engineering frameworks.

Protect Kubernetes by adhering to recommended practices.

Tools like Falco and Tracee can be used to keep an eye on unauthorized system calls made during program execution.

Containers

When it comes to today’s computing architecture, the smallest unit of abstraction is the container. Having shown how to incorporate containers into our CI/CD pipeline, we will now examine many techniques to secure them.

  • Examine the Container’s Dockerfile and image.
  • Minimize the attack surface by using a distroless image and a multi-stage build to shrink the size of your Docker container.
  • Never run containers as the root user.
  • Use gVisor or Kata Containers to isolate containers from the host kernel.
  • Make use of signed and verified container images.
  • Maintain a registry of trusted container images.
  • Best practices for protecting containers should be put into effect.

Supply chain security

The security of a software product relies heavily on the integrity of its supply chain. Injecting backdoors into the source code or adding vulnerable libraries to the final product are only two examples of what an attacker who controls a portion of the supply chain can do.

According to the Anchore 2022 Software Supply Chain Security Report, 62% of organizations surveyed have been impacted by software supply chain attacks. Scanning all of our software components (code, SBOM, containers, infrastructure) and signing and verifying containers in the DevSecOps CI/CD pipeline greatly lowers the risk of a supply chain attack.

Guidelines established by the Center for Internet Security

The Center for Internet Security (CIS) is a nonprofit that publishes benchmarks for security standards. One of the benefits of following the CIS benchmarks is that they map directly to various recognized standards and guidelines such as the NIST Cybersecurity Framework (CSF), the ISO 27000 family of standards, PCI DSS, HIPAA, and others. The additional safeguards provided by the CIS Level 2 profile are a further plus.

Conclusion

In a nutshell, as part of building the DevSecOps pipeline we used secret scanning, SAST, and an SBOM to identify security flaws in our code. After that, we ran security scans on our Dockerfile, container image, and Kubernetes manifests, and signed our container image. Smoke testing, API testing, and DAST scanning were all carried out after deployment to verify that everything went off without a hitch. Keep in mind that security is an ongoing process that requires your full attention. However, these steps may be just the beginning of a long and winding road to DevSecOps.

The potential for security flaws and intrusions is diminished when DevSecOps best practices are put into action. Scanning all aspects of your infrastructure and application delivers comprehensive visibility of potential vulnerabilities and possible strategies to remedy them. We discussed several ways and tools for discovering vulnerabilities because “the only way to do security right is to have multiple layers of security.”