DevSecOps · Security · CI/CD · SAST · DAST · SCA

DevSecOps in Practice — SAST, DAST, SCA and Secrets Scanning in CI/CD Pipelines

A practical guide to embedding security into CI/CD pipelines: static analysis with Semgrep and Bandit, dynamic testing with OWASP ZAP, dependency scanning with Snyk and OWASP Dependency-Check, secrets detection with Gitleaks and TruffleHog, container image scanning with Trivy, and composing a layered security gate that blocks vulnerabilities before they reach production.

2026-05-09

Why DevSecOps — Security as a Pipeline Stage, Not an Afterthought

Traditional security reviews happened at the end of the development cycle — a penetration test before the quarterly release, a manual code audit when time permitted. The mean time to fix a vulnerability discovered at that stage was measured in weeks. DevSecOps shifts security left: every commit, every pull request, every container build triggers automated security checks. Vulnerabilities found at commit time cost a fraction of what they cost after deployment.

The NIST Cybersecurity Framework and the OWASP DevSecOps Guideline both emphasise continuous security verification as a core engineering practice. The four pillars of a mature DevSecOps pipeline are: SAST (static analysis of source code), DAST (dynamic testing of running applications), SCA (dependency vulnerability scanning), and secrets detection (preventing credentials from entering version control). Add container image scanning and you have a five-layer defence that catches the vast majority of common vulnerabilities before they ship.

SAST — Static Application Security Testing

Analyses source code or compiled artefacts for security vulnerabilities without executing the application. Finds SQL injection, XSS, insecure deserialization, hardcoded credentials, and logic flaws. Runs on every pull request in seconds. Tools: Semgrep, Bandit (Python), ESLint security plugins, CodeQL.
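For instance, the canonical SQL injection that SAST flags is a query built by string interpolation. A self-contained sqlite3 demonstration (the table and payload are illustrative):

```python
import sqlite3

# In-memory DB with a hypothetical users table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "alice' OR '1'='1"  # classic injection payload

# VULNERABLE: the pattern SAST flags, a query built with an f-string.
# The payload escapes the quoting and the WHERE clause matches every row.
leaked = conn.execute(
    f"SELECT id FROM users WHERE name = '{user_input}'"
).fetchall()

# SAFE: parameterised query, the payload is bound as a literal value.
safe = conn.execute(
    "SELECT id FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(leaked), len(safe))  # prints: 1 0
```

The parameterised form returns nothing because no user is literally named `alice' OR '1'='1`; the interpolated form leaks every row.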

DAST — Dynamic Application Security Testing

Tests a running application by sending malicious inputs and observing responses. Finds runtime issues SAST cannot detect: authentication bypasses, session management flaws, server-side request forgery, and OWASP Top 10 categories that only manifest at runtime. Tools: OWASP ZAP, Burp Suite.
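A large share of baseline DAST findings are missing security response headers, which can be checked passively. A toy check in that spirit (the header set is the standard one; the function is illustrative, not ZAP's implementation):

```python
# Passive DAST-style check: flag missing security headers on a response.
REQUIRED_HEADERS = {
    "Strict-Transport-Security",  # enforce HTTPS
    "Content-Security-Policy",    # mitigate XSS
    "X-Content-Type-Options",     # block MIME sniffing
    "X-Frame-Options",            # mitigate clickjacking
}

def missing_security_headers(headers: dict[str, str]) -> list[str]:
    """Return required security headers absent from a response (case-insensitive)."""
    present = {h.lower() for h in headers}
    return sorted(h for h in REQUIRED_HEADERS if h.lower() not in present)

resp_headers = {"Content-Type": "text/html", "X-Frame-Options": "DENY"}
print(missing_security_headers(resp_headers))
```

In a real scan the headers come from an HTTP response against staging; here the dict stands in for one.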

SCA — Software Composition Analysis

Scans dependency manifests (package.json, requirements.txt, pom.xml) and installed packages for known CVEs. Generates SBOMs for compliance. Finds Log4Shell, Spring4Shell, and similar supply chain vulnerabilities before they reach production. Tools: Snyk, OWASP Dependency-Check, Trivy.
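Conceptually, SCA is a join between your pinned versions and an advisory database. A toy sketch (the advisory entries, CVE IDs, and package names below are made up for illustration):

```python
# Toy SCA: match pinned requirements against an advisory database.
# The advisory entries and affected versions below are made up.
ADVISORIES = {
    ("example-lib", "1.2.0"): ["CVE-2024-0001"],
    ("old-parser", "0.9.1"): ["CVE-2023-1111", "CVE-2023-2222"],
}

def scan_requirements(requirements: str) -> dict[str, list[str]]:
    """Map each vulnerable pin in a requirements.txt to its known CVE IDs."""
    findings = {}
    for line in requirements.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, version = line.partition("==")
        cves = ADVISORIES.get((name, version), [])
        if cves:
            findings[f"{name}=={version}"] = cves
    return findings

reqs = "example-lib==1.2.0\nrequests==2.32.0\nold-parser==0.9.1\n"
print(scan_requirements(reqs))
```

Real tools resolve transitive dependencies and version ranges, not just exact pins, but the matching step is the same idea.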

Secrets Detection

Scans Git history and file contents for API keys, passwords, tokens, and private keys committed to version control. A single leaked AWS key can result in six-figure cloud bills within hours. Tools: Gitleaks, TruffleHog, detect-secrets.
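Pattern-based secret detection reduces to regexes over file contents. A minimal sketch (the patterns are representative, e.g. AWS access key IDs start with AKIA, but far from exhaustive):

```python
import re

# Minimal secret patterns: representative, not exhaustive.
SECRET_PATTERNS = {
    "aws-access-key-id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private-key-header": re.compile(r"-----BEGIN (?:RSA|EC|OPENSSH) PRIVATE KEY-----"),
    "generic-password-assignment": re.compile(r"""password\s*=\s*["'][^"']+["']"""),
}

def scan_text(text: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_id) pairs for suspected secrets."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for rule_id, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, rule_id))
    return hits

sample = 'db_host = "localhost"\npassword = "hunter2"\nkey = "AKIAABCDEFGHIJKLMNOP"\n'
print(scan_text(sample))
```

Production scanners layer entropy scoring and allowlists on top of this to keep false positives manageable.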

SAST in Practice: Semgrep Rules and Bandit for Python

Semgrep is the most versatile open-source SAST tool: it supports 30+ languages, uses a pattern-matching syntax that mirrors the source language, and ships with thousands of rules maintained by the community and security researchers. It runs in seconds even on large codebases and integrates natively with GitHub Actions, GitLab CI, and pre-commit hooks.

For Python specifically, Bandit provides deep Python AST-based analysis that catches issues Semgrep's generic rules miss: dangerous use of subprocess with shell=True, use of weak cryptographic functions, hardcoded passwords via regex, and insecure temporary file creation. Run both tools together: Semgrep for cross-language rules, Bandit for Python-specific depth.
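A few of the Python idioms Bandit flags, alongside their safe replacements (B602 and B324 are Bandit's real check IDs for these issues; the filename is illustrative):

```python
import hashlib
import secrets
import subprocess

filename = "report.txt"  # imagine this arrives from user input

# FLAGGED by Bandit (B602) if written with shell=True: shell metacharacters
# in `filename` could run arbitrary commands. Safe form: an argument list.
result = subprocess.run(["wc", "-l", filename], check=False, capture_output=True)

# FLAGGED (B324): MD5 is unacceptable for security purposes
weak_digest = hashlib.md5(b"data").hexdigest()

# Safe: SHA-256 for integrity checks, the secrets module for tokens
strong_digest = hashlib.sha256(b"data").hexdigest()
session_token = secrets.token_urlsafe(32)  # cryptographically secure
```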

# .semgrep.yml — custom Semgrep rules for your codebase
# Run with: semgrep --config .semgrep.yml --config p/python --config p/security-audit .

rules:
  # Prevent SQL injection via string interpolation
  - id: sql-injection-string-format
    languages: [python]
    severity: ERROR
    message: >
      Potential SQL injection: SQL query built with string formatting.
      Use parameterised queries instead.
    pattern-either:
      - pattern: $CURSOR.execute(f"... {$VAR} ...")
      - pattern: $CURSOR.execute("..." % $VAR)
      - pattern: $CURSOR.execute("..." + $VAR)

  # Detect insecure deserialization
  - id: insecure-pickle-load
    languages: [python]
    severity: ERROR
    message: >
      Insecure deserialization: pickle.loads() on untrusted data can execute
      arbitrary code. Use JSON or protobuf for untrusted input.
    pattern-either:
      - pattern: pickle.loads($DATA)
      - pattern: pickle.load($FILE)

  # Detect hardcoded secrets (common patterns)
  - id: hardcoded-api-key
    languages: [python, javascript, typescript]
    severity: WARNING
    message: >
      Possible hardcoded secret. Move credentials to environment variables
      or a secrets manager.
    patterns:
      - pattern-either:
          - pattern: $KEY = "sk-..."
          - pattern: api_key = "..."
          - pattern: password = "..."
      - pattern-not: $KEY = ""
      - pattern-not: $KEY = os.environ[...]
      - pattern-not: $KEY = os.getenv(...)

  # Prevent path traversal
  - id: path-traversal-open
    languages: [python]
    severity: ERROR
    message: >
      Potential path traversal: file path includes user-supplied input.
      Validate and sanitize before using os.path.join with user input.
    pattern-either:
      - pattern: open(os.path.join(..., $USER_INPUT, ...), ...)
      - pattern: open($BASE + $USER_INPUT, ...)

  # Insecure random for security-sensitive use
  - id: insecure-random-security
    languages: [python]
    severity: WARNING
    message: >
      random.random() is not cryptographically secure.
      Use secrets.token_hex() or secrets.token_urlsafe() for tokens.
    pattern-either:
      - pattern: random.random()
      - pattern: random.randint(...)
      - pattern: random.choice(...)

# bandit-config.yml — Bandit configuration for Python security scanning
# Install: pip install bandit
# Run: bandit -r . -c bandit-config.yml --format json -o bandit-report.json

# Severity threshold: MEDIUM and above will fail the build
# Confidence threshold: MEDIUM and above

exclude_dirs:
  - tests/
  - venv/
  - .venv/
  - node_modules/

skips:
  # B101: assert_used — allow asserts in test files only
  # (excluded via exclude_dirs/tests above)
  []

# Plugin-level configuration
any_other_function_with_shell_equals_true:
  no_shell:
    - os.execl
    - os.execle
    - os.execlp
    - os.execlpe
    - os.execv
    - os.execve
    - os.execvp
    - os.execvpe
    - os.spawnl
    - os.spawnle
    - os.spawnlp
    - os.spawnlpe
    - os.spawnv
    - os.spawnve
    - os.spawnvp
    - os.spawnvpe
    - os.startfile
  shell:
    - os.system
    - os.popen
    - os.popen2
    - os.popen3
    - os.popen4
    - popen2.popen2
    - popen2.popen3
    - popen2.popen4
    - popen2.Popen3
    - popen2.Popen4
    - commands.getoutput
    - commands.getstatusoutput

# GitHub Actions: SAST job using Semgrep and Bandit
# .github/workflows/sast.yml

name: SAST

on:
  pull_request:
  push:
    branches: [main, master]

jobs:
  semgrep:
    name: Semgrep SAST
    runs-on: ubuntu-latest
    container:
      image: semgrep/semgrep
    steps:
      - uses: actions/checkout@v4

      - name: Run Semgrep
        run: |
          semgrep \
            --config p/python \
            --config p/security-audit \
            --config p/owasp-top-ten \
            --config .semgrep.yml \
            --error \
            --json \
            --output semgrep-report.json \
            .
        continue-on-error: true

      - name: Upload Semgrep report
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: semgrep-report
          path: semgrep-report.json

      - name: Fail on HIGH/CRITICAL findings
        run: |
          python3 - <<'EOF'
          import json, sys
          with open("semgrep-report.json") as f:
              data = json.load(f)
          errors = [r for r in data.get("results", []) if r["extra"]["severity"] == "ERROR"]
          if errors:
              print(f"SAST FAILURE: {len(errors)} high-severity finding(s):")
              for e in errors:
                  print(f"  {e['path']}:{e['start']['line']} — {e['check_id']}")
                  print(f"    {e['extra']['message'][:120]}")
              sys.exit(1)
          print(f"SAST passed. {len(data.get('results', []))} warning(s), 0 errors.")
          EOF

  bandit:
    name: Bandit Python SAST
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.12"

      - name: Install Bandit
        run: pip install bandit[toml]

      - name: Run Bandit
        run: |
          bandit \
            -r . \
            -c bandit-config.yml \
            --severity-level medium \
            --confidence-level medium \
            --format json \
            --output bandit-report.json || true

      - name: Fail on MEDIUM+ findings
        run: |
          python3 - <<'EOF'
          import json, sys
          with open("bandit-report.json") as f:
              data = json.load(f)
          results = data.get("results", [])
          high = [r for r in results if r["issue_severity"] in ("HIGH", "MEDIUM")]
          if high:
              print(f"Bandit FAILURE: {len(high)} finding(s):")
              for r in high[:20]:
                  print(f"  {r['filename']}:{r['line_number']} [{r['issue_severity']}] {r['issue_text'][:100]}")
              sys.exit(1)
          print(f"Bandit passed. {len(results)} low-severity finding(s).")
          EOF

      - name: Upload Bandit report
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: bandit-report
          path: bandit-report.json

Note

Semgrep's p/owasp-top-ten ruleset maps findings directly to OWASP Top 10 categories — useful for compliance reporting. For teams on the free tier, p/python and p/security-audit cover the most impactful rules without requiring a Semgrep account.

Secrets Detection: Gitleaks, TruffleHog, and Pre-Commit Hooks

Secrets leaking into Git history is one of the most common and most costly security incidents. A leaked AWS key discovered by a credential-scanning bot can result in thousands of dollars in unauthorised charges within minutes. The fix after the fact — rotating credentials, auditing usage, notifying affected services — takes far longer than preventing the commit in the first place.

Gitleaks scans Git history and staged changes using entropy analysis and pattern matching for 150+ secret types. It runs as a pre-commit hook (blocking the commit before it happens), in CI (catching anything that slipped through), and as a scheduled full-history scan. TruffleHog goes further: it verifies discovered secrets against live APIs to confirm they are active, reducing false positives significantly.
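The entropy side of these scanners scores strings by character randomness: real keys look close to uniformly random, while prose and identifiers do not. Shannon entropy in a few lines (the random-looking token is hypothetical; thresholds around 4.5 bits per character are a common rule of thumb, not a Gitleaks constant):

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of the string."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

prose = "configuration settings for the app"
token = "q7Zp2mK9xV4tR8sLw1NcYb6HdJ3FgA5u"  # hypothetical random token

# Prose clusters around common letters; a random token does not.
print(round(shannon_entropy(prose), 2))
print(round(shannon_entropy(token), 2))  # 5.0 (32 distinct characters)
```

High-entropy strings alone produce false positives (hashes, UUIDs), which is why tools combine entropy with patterns and, in TruffleHog's case, live verification.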

# .gitleaks.toml — Gitleaks configuration
# Install: brew install gitleaks  OR  docker pull zricethezav/gitleaks
# Scan staged files: gitleaks protect --staged
# Scan full history: gitleaks detect --source .

title = "datasops gitleaks config"

[extend]
# Extend the default ruleset
useDefault = true

[[rules]]
id = "custom-internal-token"
description = "Internal service token pattern"
regex = '''datasops-[a-z0-9]{32}'''
tags = ["key", "internal"]

[[rules]]
id = "db-connection-string"
description = "Database connection string with embedded credentials"
regex = '''(postgresql|mysql|mongodb|redis)://[^:]+:[^@]+@'''
tags = ["connection-string", "database"]

[[rules]]
id = "private-key-file"
description = "Private key file content"
regex = '''-----BEGIN (RSA|EC|DSA|OPENSSH) PRIVATE KEY-----'''
tags = ["private-key"]

[allowlist]
description = "Global allowlist"
regexes = [
  # Allow example/placeholder values
  '''EXAMPLE_API_KEY''',
  '''YOUR_SECRET_HERE''',
  '''<your-key>''',
  # Allow test fixtures with fake keys
  '''test_[a-f0-9]{40}''',
]
paths = [
  # Never scan these paths
  '''.gitleaks.toml''',
  '''tests/fixtures/''',
  '''docs/examples/''',
]

# .pre-commit-config.yaml — Pre-commit hooks for secrets and SAST
# Install: pip install pre-commit && pre-commit install
# Run manually: pre-commit run --all-files

repos:
  # Gitleaks: secrets detection on staged files
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.4
    hooks:
      - id: gitleaks
        name: Detect secrets with Gitleaks
        entry: gitleaks protect --staged --redact --config .gitleaks.toml
        language: golang
        pass_filenames: false

  # detect-secrets: entropy-based secret detection
  - repo: https://github.com/Yelp/detect-secrets
    rev: v1.5.0
    hooks:
      - id: detect-secrets
        name: Detect secrets with detect-secrets
        args: ["--baseline", ".secrets.baseline"]
        exclude: package-lock.json

  # Semgrep: SAST on Python files
  - repo: https://github.com/semgrep/semgrep
    rev: v1.73.0
    hooks:
      - id: semgrep
        name: Semgrep SAST
        args: ["--config", "p/python", "--config", ".semgrep.yml", "--error"]
        types: [python]

  # General quality: no large files, no merge conflict markers
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: check-merge-conflict
      - id: detect-private-key
      - id: check-added-large-files
        args: ["--maxkb=1000"]
      - id: no-commit-to-branch
        args: ["--branch", "main", "--branch", "master"]

# GitHub Actions: secrets scanning job
# .github/workflows/secrets-scan.yml

name: Secrets Scan

on:
  pull_request:
  push:
    branches: [main, master]
  schedule:
    # Full history scan every Sunday at 02:00 UTC
    - cron: "0 2 * * 0"

jobs:
  gitleaks:
    name: Gitleaks
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # full history for scheduled scans

      - name: Run Gitleaks (PR diff)
        if: github.event_name == 'pull_request'
        uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GITLEAKS_LICENSE: ${{ secrets.GITLEAKS_LICENSE }}  # optional for teams

      - name: Run Gitleaks (full history)
        if: github.event_name == 'schedule'
        run: |
          docker run --rm \
            -v ${{ github.workspace }}:/repo \
            zricethezav/gitleaks:latest \
            detect \
            --source /repo \
            --config /repo/.gitleaks.toml \
            --report-format json \
            --report-path /repo/gitleaks-report.json

      - name: Upload Gitleaks report
        uses: actions/upload-artifact@v4
        if: always() && github.event_name == 'schedule'
        with:
          name: gitleaks-full-history-report
          path: gitleaks-report.json

  trufflehog:
    name: TruffleHog (verified secrets)
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: TruffleHog OSS scan
        uses: trufflesecurity/trufflehog@main
        with:
          path: ./
          base: ${{ github.event.repository.default_branch }}
          head: HEAD
          extra_args: --only-verified --json

Note

Install Gitleaks as a pre-commit hook from day one on new projects. On existing repos with a long history, run the full-history scan first, rotate any discovered secrets immediately, and then add the pre-commit hook. Never delete commits to "remove" secrets — the fix is rotation, not history rewriting, which is error-prone and dangerous in shared repos.

SCA: Dependency Vulnerability Scanning with Snyk and OWASP Dependency-Check

Modern applications are 80–90% open-source dependencies by line count. The OWASP Top 10 lists "Vulnerable and Outdated Components" as a top risk category. Log4Shell (CVE-2021-44228) — a critical RCE in a ubiquitous Java logging library — demonstrated how a single transitive dependency can compromise millions of systems. SCA tools continuously match your dependency tree against CVE databases (NVD, GitHub Advisory, OSV) and alert when a package you use has a known vulnerability.
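OSV, one of the databases mentioned above, exposes a free query API. A minimal client sketch using only the standard library (the endpoint and payload shape follow OSV's v1 API; error handling is omitted):

```python
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def osv_payload(name: str, version: str, ecosystem: str = "PyPI") -> dict:
    """Build an OSV v1 /query request body for one pinned package."""
    return {"version": version, "package": {"name": name, "ecosystem": ecosystem}}

def query_osv(name: str, version: str) -> list[str]:
    """Return vulnerability IDs OSV knows for name==version (requires network)."""
    body = json.dumps(osv_payload(name, version)).encode()
    req = urllib.request.Request(
        OSV_QUERY_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        data = json.load(resp)
    return [v["id"] for v in data.get("vulns", [])]

print(osv_payload("jinja2", "2.11.2"))
```

SCA tools run the equivalent of this query for every node of the resolved dependency tree, which is why they need the tree installed (or a lockfile) for accurate results.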

Snyk provides managed SCA with automatic fix PRs, license compliance checking, and an SBOM export. The open-source OWASP Dependency-Check requires no external account and scans JVM, .NET, Python, Ruby, and JavaScript ecosystems. Trivy from Aqua Security is the most versatile: it scans container images, filesystems, Git repos, and SBOM documents in a single binary with no dependencies.

# GitHub Actions: SCA with Snyk and Trivy
# .github/workflows/sca.yml

name: SCA — Dependency Scanning

on:
  pull_request:
  push:
    branches: [main, master]
  schedule:
    # Daily scan to catch newly published CVEs
    - cron: "0 6 * * *"

jobs:
  snyk:
    name: Snyk SCA
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.12"

      - name: Install dependencies (for accurate scan)
        run: pip install -r requirements.txt

      - name: Run Snyk test
        uses: snyk/actions/python@master
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
        with:
          args: >
            --severity-threshold=high
            --sarif-file-output=snyk.sarif
        continue-on-error: true

      - name: Upload Snyk SARIF to GitHub Security
        uses: github/codeql-action/upload-sarif@v3
        if: always()
        with:
          sarif_file: snyk.sarif

      - name: Generate SBOM with Snyk
        uses: snyk/actions/python@master
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
        with:
          command: sbom
          args: --format cyclonedx+json --output sbom.json
        continue-on-error: true

      - name: Upload SBOM artifact
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: sbom
          path: sbom.json

  trivy-filesystem:
    name: Trivy Filesystem Scan
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Run Trivy on filesystem
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: fs
          scan-ref: .
          format: sarif
          output: trivy-fs.sarif
          severity: HIGH,CRITICAL
          exit-code: "1"
          ignore-unfixed: true

      - name: Upload Trivy SARIF
        uses: github/codeql-action/upload-sarif@v3
        if: always()
        with:
          sarif_file: trivy-fs.sarif

# trivy-policy.rego — Open Policy Agent policy for Trivy
# Enforce severity thresholds and licence compliance
# Run: trivy fs . --policy trivy-policy.rego --compliance custom

package trivy

import future.keywords.contains
import future.keywords.if
import future.keywords.in

# Deny any CRITICAL CVE that has a fix available
deny contains msg if {
    vuln := input.Results[_].Vulnerabilities[_]
    vuln.Severity == "CRITICAL"
    vuln.FixedVersion != ""
    msg := sprintf(
        "CRITICAL CVE %s in %s@%s — fix available in %s",
        [vuln.VulnerabilityID, vuln.PkgName, vuln.InstalledVersion, vuln.FixedVersion]
    )
}

# Deny HIGH CVEs with CVSS score >= 9.0 that have fixes
deny contains msg if {
    vuln := input.Results[_].Vulnerabilities[_]
    vuln.Severity == "HIGH"
    vuln.CVSS.nvd.V3Score >= 9.0
    vuln.FixedVersion != ""
    msg := sprintf(
        "HIGH CVE %s (CVSS %.1f) in %s@%s — fix available in %s",
        [vuln.VulnerabilityID, vuln.CVSS.nvd.V3Score, vuln.PkgName, vuln.InstalledVersion, vuln.FixedVersion]
    )
}

# Warn on GPL licences in non-GPL projects
warn contains msg if {
    pkg := input.Results[_].Packages[_]
    pkg.License in ["GPL-2.0", "GPL-3.0", "AGPL-3.0"]
    msg := sprintf(
        "Copyleft licence %s in package %s — review for licence compatibility",
        [pkg.License, pkg.Name]
    )
}

Container Image Scanning with Trivy

A clean application codebase can still ship vulnerabilities through its base container image. An Alpine 3.15 base from two years ago may contain dozens of unpatched CVEs in system packages. Container scanning inspects the full image filesystem — OS packages, language runtimes, and application dependencies — and should run as part of every container build pipeline.
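Trivy's JSON report nests findings under Results[].Vulnerabilities[], the same shape the Rego policy earlier consumes. A small sketch that tallies a report by severity (the sample report here is abbreviated and hypothetical):

```python
from collections import Counter

def severity_summary(report: dict) -> Counter:
    """Count vulnerabilities per severity in a Trivy JSON report."""
    tally = Counter()
    for result in report.get("Results", []):
        # Trivy omits or nulls Vulnerabilities for clean targets
        for vuln in result.get("Vulnerabilities") or []:
            tally[vuln["Severity"]] += 1
    return tally

# Abbreviated, hypothetical output of: trivy image --format json app:sha
sample_report = {
    "Results": [
        {"Target": "app:sha (debian 12)", "Vulnerabilities": [
            {"VulnerabilityID": "CVE-0000-0001", "Severity": "HIGH"},
            {"VulnerabilityID": "CVE-0000-0002", "Severity": "CRITICAL"},
        ]},
        {"Target": "Python", "Vulnerabilities": None},
    ]
}
print(severity_summary(sample_report))
```

A summary like this makes a useful PR comment even when the gate itself is enforced by Trivy's exit code.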

# Dockerfile best practices for minimal attack surface

# ---- Builder stage ----
FROM python:3.12-slim AS builder

WORKDIR /app

# Copy and install dependencies first (layer caching)
COPY requirements.txt .
RUN pip install --no-cache-dir --user -r requirements.txt

# ---- Runtime stage ----
# Use distroless for minimal attack surface — no shell, no package manager
FROM gcr.io/distroless/python3-debian12:nonroot

WORKDIR /app

# Copy installed packages from the builder and put them on the import path
# (the nonroot user cannot read /root, so relocate under /home/nonroot)
COPY --from=builder /root/.local /home/nonroot/.local
ENV PYTHONPATH=/home/nonroot/.local/lib/python3.12/site-packages
COPY src/ .

# Run as non-root (distroless nonroot image uses uid 65532)
USER nonroot

# Health check — also aids security (signals liveness to orchestrators)
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
    CMD python3 -c "import urllib.request; urllib.request.urlopen('http://localhost:8080/health')"

EXPOSE 8080
ENTRYPOINT ["python3", "main.py"]

# GitHub Actions: Container build + Trivy image scan
# .github/workflows/container-security.yml

name: Container Security

on:
  pull_request:
  push:
    branches: [main, master]

jobs:
  build-and-scan:
    name: Build and Scan Container Image
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Build image (don't push yet)
        uses: docker/build-push-action@v5
        with:
          context: .
          push: false
          load: true
          tags: app:${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

      - name: Scan image with Trivy
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: app:${{ github.sha }}
          format: sarif
          output: trivy-image.sarif
          severity: HIGH,CRITICAL
          exit-code: "1"
          ignore-unfixed: true
          vuln-type: os,library

      - name: Upload Trivy SARIF
        uses: github/codeql-action/upload-sarif@v3
        if: always()
        with:
          sarif_file: trivy-image.sarif

      - name: Generate CycloneDX SBOM for image
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: app:${{ github.sha }}
          format: cyclonedx
          output: sbom-image.cdx.json

      - name: Upload SBOM
        uses: actions/upload-artifact@v4
        with:
          name: image-sbom
          path: sbom-image.cdx.json

      - name: Push image (only after scan passes)
        if: github.ref == 'refs/heads/main' && success()
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: |
            ghcr.io/${{ github.repository }}:latest
            ghcr.io/${{ github.repository }}:${{ github.sha }}

DAST with OWASP ZAP: Automated Runtime Security Testing

OWASP ZAP (Zed Attack Proxy) is the world's most widely used web application security scanner. In CI/CD pipelines it runs in "automation" mode: deploy the application to a staging environment, point ZAP at it, and it will crawl the application and probe for OWASP Top 10 vulnerabilities — XSS, SQL injection, SSRF, insecure headers, and more. ZAP's automation framework uses YAML configuration to define scan policies, alerting thresholds, and output formats.

# zap-automation.yml — OWASP ZAP automation framework configuration
# Run: docker run --rm \
#   -v $(pwd):/zap/wrk:rw \
#   ghcr.io/zaproxy/zaproxy:stable \
#   zap.sh -cmd -autorun /zap/wrk/zap-automation.yml

env:
  contexts:
    - name: "App Context"
      urls:
        - "https://staging.app.internal"
      includePaths:
        - "https://staging.app.internal.*"
      excludePaths:
        - "https://staging.app.internal/logout.*"
        - "https://staging.app.internal/admin.*"

  parameters:
    failOnError: true
    failOnWarning: false
    progressToStdout: true

jobs:
  # Step 1: Spider the application
  - type: spider
    parameters:
      context: "App Context"
      maxDuration: 5        # minutes
      maxChildren: 10
      acceptCookies: true

  # Step 2: AJAX spider for single-page applications
  - type: spiderAjax
    parameters:
      context: "App Context"
      maxDuration: 3
      maxCrawlDepth: 5
      browserId: firefox-headless

  # Step 3: Active scan (probes for vulnerabilities)
  - type: activeScan
    parameters:
      context: "App Context"
      policy: "Default Policy"
      maxScanDurationInMins: 15
      threadPerHost: 2
      delayInMs: 100       # be gentle on staging

  # Step 4: Generate reports
  - type: report
    parameters:
      template: traditional-html
      reportDir: /zap/wrk/reports/
      reportTitle: "DAST Report"
      reportDescription: "Automated DAST scan via OWASP ZAP"
    risks:
      - high
      - medium
      - low
      - informational

  - type: report
    parameters:
      template: sarif-json
      reportDir: /zap/wrk/reports/
      reportFile: zap-results.sarif

# Alert filters: downgrade known-noisy rules so they do not fail the scan
alertFilters:
  - ruleId: 10016   # Web browser XSS protection
    newLevel: IGNORE
  - ruleId: 10098   # Cross-domain misconfiguration
    newLevel: IGNORE  # expected for CDN assets

# GitHub Actions: DAST job against staging environment
# .github/workflows/dast.yml

name: DAST

on:
  # Run after deploy to staging completes
  workflow_run:
    workflows: ["Deploy to Staging"]
    types: [completed]
  # Nightly full scan (active scans are reserved for scheduled runs)
  schedule:
    - cron: "0 3 * * *"
  # Also allow manual trigger
  workflow_dispatch:
    inputs:
      target_url:
        description: "Target URL to scan"
        required: true
        default: "https://staging.app.internal"

jobs:
  zap-dast:
    name: OWASP ZAP DAST
    runs-on: ubuntu-latest
    if: >
      github.event_name == 'workflow_dispatch' ||
      github.event_name == 'schedule' ||
      github.event.workflow_run.conclusion == 'success'

    steps:
      - uses: actions/checkout@v4

      - name: Set target URL
        id: target
        run: |
          URL="${{ github.event.inputs.target_url || 'https://staging.app.internal' }}"
          echo "url=$URL" >> $GITHUB_OUTPUT

      - name: Run OWASP ZAP baseline scan
        uses: zaproxy/action-baseline@v0.12.0
        with:
          target: ${{ steps.target.outputs.url }}
          rules_file_name: .zap/rules.tsv
          cmd_options: >
            -j
            -l WARN
            -z "-config scanner.strength=HIGH -config scanner.level=MEDIUM"
          artifact_name: zap-baseline-report

      - name: Run OWASP ZAP full scan (scheduled only)
        if: github.event_name == 'schedule'
        uses: zaproxy/action-full-scan@v0.10.0
        with:
          target: ${{ steps.target.outputs.url }}
          rules_file_name: .zap/rules.tsv
          artifact_name: zap-full-report

      - name: Upload SARIF to GitHub Security
        uses: github/codeql-action/upload-sarif@v3
        if: always()
        with:
          sarif_file: zap-baseline-report.sarif

Note

DAST requires a running application, so it naturally runs later in the pipeline than SAST and secrets scanning. Keep baseline DAST (header checks, passive scanning) in the PR pipeline. Reserve full active scans for nightly or post-deploy runs — active scans mutate application state and should never run against production.

Composing the Full DevSecOps Pipeline

Each security tool covers a different vulnerability class. The full pipeline stacks them in order of speed and cost: fast checks (secrets, SAST) run first on every commit; slower checks (SCA, container scanning) run on every PR; DAST runs post-deploy to staging. This structure minimises developer wait time while maintaining comprehensive coverage.
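Condensed to its logic, the gate at the end of the pipeline is a single all-of check. A Python sketch of the same decision (the job names match the workflow in this section):

```python
# The four required security jobs in this pipeline
REQUIRED_JOBS = ("secrets", "sast", "sca", "container")

def security_gate(results: dict[str, str]) -> bool:
    """Pass only when every required job reports 'success'.

    A missing, skipped, or cancelled job fails the gate, which is the
    safe default for a security check.
    """
    return all(results.get(job) == "success" for job in REQUIRED_JOBS)

print(security_gate({"secrets": "success", "sast": "success",
                     "sca": "success", "container": "success"}))  # True
print(security_gate({"secrets": "success", "sast": "failure",
                     "sca": "success", "container": "skipped"}))  # False
```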

# .github/workflows/devsecops.yml — Full DevSecOps pipeline
# Orchestrates all security jobs with correct ordering and fail-fast behavior

name: DevSecOps Pipeline

on:
  pull_request:
  push:
    branches: [main, master]

jobs:
  # ── Layer 1: Fast checks (< 2 min) ──────────────────────────────────────
  secrets:
    name: "Layer 1: Secrets Detection"
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Gitleaks
        uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

  sast:
    name: "Layer 1: SAST"
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Semgrep
        uses: semgrep/semgrep-action@v1
        with:
          config: >
            p/python
            p/security-audit
            p/owasp-top-ten
            .semgrep.yml
          generateSarif: "1"

      - name: Upload SARIF
        uses: github/codeql-action/upload-sarif@v3
        if: always()
        with:
          sarif_file: semgrep.sarif

  # ── Layer 2: Dependency and container (< 5 min) ─────────────────────────
  sca:
    name: "Layer 2: SCA"
    needs: [secrets, sast]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Trivy filesystem scan
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: fs
          scan-ref: .
          severity: HIGH,CRITICAL
          exit-code: "1"
          ignore-unfixed: true
          format: sarif
          output: trivy-fs.sarif

      - name: Upload SARIF
        uses: github/codeql-action/upload-sarif@v3
        if: always()
        with:
          sarif_file: trivy-fs.sarif

  container:
    name: "Layer 2: Container Scan"
    needs: [secrets, sast]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: docker/setup-buildx-action@v3

      - name: Build image
        uses: docker/build-push-action@v5
        with:
          load: true
          push: false
          tags: app:${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

      - name: Trivy image scan
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: app:${{ github.sha }}
          severity: HIGH,CRITICAL
          exit-code: "1"
          ignore-unfixed: true
          format: sarif
          output: trivy-image.sarif

      - name: Upload SARIF
        uses: github/codeql-action/upload-sarif@v3
        if: always()
        with:
          sarif_file: trivy-image.sarif

  # ── Layer 3: Security gate ───────────────────────────────────────────────
  security-gate:
    name: "Security Gate"
    needs: [secrets, sast, sca, container]
    runs-on: ubuntu-latest
    if: always()
    steps:
      - name: Check all security jobs passed
        run: |
          echo "Secrets:   ${{ needs.secrets.result }}"
          echo "SAST:      ${{ needs.sast.result }}"
          echo "SCA:       ${{ needs.sca.result }}"
          echo "Container: ${{ needs.container.result }}"

          if [[ "${{ needs.secrets.result }}" != "success" ]] ||
             [[ "${{ needs.sast.result }}" != "success" ]] ||
             [[ "${{ needs.sca.result }}" != "success" ]] ||
             [[ "${{ needs.container.result }}" != "success" ]]; then
            echo "::error::Security gate FAILED — one or more checks did not pass"
            exit 1
          fi
          echo "Security gate PASSED — all checks green"

DevSecOps Maturity Model: A Phased Rollout

Trying to implement every security layer at once creates friction and resistance. Teams that introduce all controls simultaneously often end up disabling them when false positives block releases. A phased approach — starting with high-signal, low-friction controls — builds trust and allows teams to tune policies before adding complexity.

Phase 1 — Foundations (Week 1–2)

Add Gitleaks as a pre-commit hook and basic secrets scan in CI. Add Trivy filesystem scan to every PR. This alone catches the two most damaging vulnerability classes with near-zero false positives. Accept all findings as warnings initially to establish a baseline.

Phase 2 — SAST Integration (Week 3–4)

Add Semgrep with p/security-audit and one language-specific ruleset. Review findings for one sprint without blocking CI. Tune allowlists to reduce false positives to under 5%. Then enable the --error flag so ERROR-severity findings fail the PR.
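A minimal CI step for this phase might look like the following (the `p/python` ruleset is only an example; substitute the pack for your stack). Without `--error`, Semgrep reports findings but exits 0, which suits the non-blocking review sprint; adding `--error` later turns the same step into a gate:

```yaml
      - name: Semgrep scan
        run: |
          pip install semgrep
          # Review sprint: report only (exit 0 even with findings).
          semgrep scan --config p/security-audit --config p/python
          # Once allowlists are tuned, switch to a blocking run:
          # semgrep scan --config p/security-audit --config p/python --severity ERROR --error
```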

Phase 3 — Container and SCA (Week 5–6)

Add Trivy image scanning to the container build step. Add Snyk or OWASP Dependency-Check for dependency CVEs. Set severity threshold to HIGH with ignore-unfixed=true to avoid noise from unpatched upstream vulnerabilities. Generate and store SBOMs as CI artifacts.
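The SBOM step can ride on the same Trivy install used for image scanning; a sketch, where the artifact name and output path are arbitrary choices:

```yaml
      - name: Generate CycloneDX SBOM
        run: trivy image --format cyclonedx --output sbom.cdx.json app:${{ github.sha }}

      - name: Store SBOM as a CI artifact
        uses: actions/upload-artifact@v4
        with:
          name: sbom-${{ github.sha }}
          path: sbom.cdx.json
```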

Phase 4 — DAST and Automation (Month 2)

Add OWASP ZAP baseline scan to the staging deploy pipeline. Configure nightly full active scans. Integrate findings into GitHub Security tab via SARIF uploads. Set up Dependabot or Snyk fix PRs for automated dependency updates. Define a vulnerability SLA policy: CRITICAL = 24h, HIGH = 7d, MEDIUM = 30d.
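One way to wire the baseline scan into the staging deploy job is to run the official ZAP container directly (the staging URL is a placeholder):

```yaml
      - name: ZAP baseline scan
        run: |
          # Passive baseline scan: spiders the target and flags header and
          # configuration issues without sending active attack payloads.
          docker run --rm -v "$PWD:/zap/wrk:rw" ghcr.io/zaproxy/zaproxy:stable \
            zap-baseline.py -t https://staging.example.com -r zap-baseline.html
```

The baseline script exits non-zero when it raises warnings; pass `-I` to downgrade warnings to informational while you triage the initial findings.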

DevSecOps Production Checklist

Gitleaks pre-commit hook is installed on all developer workstations

Pre-commit hooks are developer-side and catch secrets before they ever reach the remote. Enforce installation via onboarding scripts and periodically audit with `pre-commit run --all-files` in CI.
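A sketch of that server-side audit step:

```yaml
      - uses: actions/checkout@v4
      - name: Audit pre-commit hooks against the full tree
        run: |
          pip install pre-commit
          # Re-runs every configured hook (including Gitleaks) across all files,
          # catching commits made from machines without the hooks installed.
          pre-commit run --all-files --show-diff-on-failure
```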

Semgrep runs on every PR with OWASP Top 10 ruleset

Semgrep's p/owasp-top-ten ruleset maps findings to CWE and OWASP Top 10 categories, which serve as compliance evidence for SOC 2 and ISO 27001. Enable --sarif output and upload it to the GitHub Security tab for a centralised vulnerability dashboard.

Trivy scans both filesystem and container image on every PR

Filesystem scan catches dependency CVEs; image scan catches OS package CVEs in the base image. Run both. Enable ignore-unfixed to avoid noise from vulnerabilities with no available fix — focus on actionable findings.

SBOM is generated and stored as a CI artifact for every release

Software Bill of Materials (SBOM) in CycloneDX or SPDX format is increasingly required by enterprise customers and government procurement. Trivy and Snyk can generate them as part of the normal scan step at zero extra cost.

A vulnerability SLA policy is defined and tracked

Without a written SLA, every CRITICAL finding becomes a negotiation. Define: CRITICAL = patched within 24h, HIGH = 7 days, MEDIUM = 30 days. Track open findings with GitHub Security Advisories or Snyk's dashboard. Review SLA compliance in the weekly engineering meeting.
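Tracking can be as simple as a scheduled workflow that queries GitHub's code scanning alerts API via the `gh` CLI; a sketch, where the 86400-second threshold corresponds to the 24h CRITICAL SLA above:

```yaml
name: vuln-sla-check
on:
  schedule:
    - cron: "0 6 * * *"   # daily SLA sweep
jobs:
  sla:
    runs-on: ubuntu-latest
    permissions:
      security-events: read
    steps:
      - name: Flag CRITICAL alerts open past the 24h SLA
        env:
          GH_TOKEN: ${{ github.token }}
        run: |
          gh api "repos/${{ github.repository }}/code-scanning/alerts?state=open&severity=critical" \
            --jq '.[] | select((now - (.created_at | fromdateiso8601)) > 86400) | .html_url' \
            | tee sla-breaches.txt
          if [ -s sla-breaches.txt ]; then
            echo "::warning::CRITICAL findings are breaching the 24h SLA"
          fi
```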

DAST runs against staging after every deploy

Static analysis cannot catch runtime vulnerabilities: authentication logic flaws, insecure CORS configurations, missing security headers, and session management issues only manifest under HTTP traffic. OWASP ZAP baseline scan adds 3–5 minutes to a staging deploy and catches header misconfiguration automatically.

Secrets rotation runbook exists for all production credentials

Gitleaks and TruffleHog will eventually find a secret (historically, most repos have at least one). When that happens, you need a rotation runbook ready: which systems use this credential, how to rotate it with zero downtime, how to verify the old credential is revoked.

Security findings are visible in developer PRs, not just security team dashboards

DevSecOps fails when findings go to a security Jira board that developers never read. Upload SARIF to GitHub Security so findings appear inline in the PR diff. This puts the fix in the hands of the person who introduced the issue, in the context where they can immediately address it.

Work with us

Embedding security into your CI/CD pipelines or building a DevSecOps programme from scratch?

We design and implement DevSecOps pipelines — from SAST and secrets scanning in pre-commit hooks to SCA, container scanning, OWASP ZAP DAST automation, SARIF integration with GitHub Security, and vulnerability SLA frameworks. Let’s talk.

Get in touch
