Google Kubernetes Engine (GKE) with a Snyk Kubernetes controller installed/configured for Snyk App

Overview

This example provisions a Google Kubernetes Engine (GKE) cluster, installs the Snyk controller for the Snyk Kubernetes integration, configures auto-import of workloads from the apples namespace, and then deploys a sample workload as a Deployment into the cluster to verify that the integration is working, all using infrastructure as code. This demonstrates that you can manage both the Kubernetes objects themselves and the underlying cloud infrastructure using a single configuration language (in this case, Python), tool, and workflow.

Prerequisites

Ensure you have Python 3, a Pulumi account, and the Pulumi CLI installed.

We will be deploying to Google Cloud Platform (GCP), so you will need an account. If you don't have an account, sign up for free here. In either case, follow the instructions here to connect Pulumi to your GCP account.

This example assumes that you have GCP's gcloud CLI on your PATH. It is installed as part of the Google Cloud SDK.

Snyk Pre-Steps

Note: These steps must be completed before running this example. You will need an existing Snyk ORG that does not yet have the Kubernetes integration configured.

  1. Select the Snyk ORG where you want the Kubernetes integration to be set up automatically. It can be an empty Snyk ORG or an existing one with projects, but please ensure that the Kubernetes integration is not already configured in it.

  2. Click on Integrations, then click on Kubernetes, and finally click on Connect. Make a note of the Integration ID; we will need it shortly.

[Screenshot: the Kubernetes integration page showing the Integration ID]

That's it. You are now ready to set up the Snyk integration demo entirely from Pulumi using infrastructure as code, which will do the following (see the program sketch after this list):

  • Create a GKE cluster
  • Deploy the Snyk controller into the cluster
  • Set up the Snyk Kubernetes integration for auto-import of Kubernetes workloads into Snyk App
  • Deploy a sample workload into the apples namespace, as permitted by our Rego policy file
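
For orientation, here is a trimmed sketch of what the heart of __main__.py looks like. This is not the repo's exact code: resource names follow the pulumi up output shown later, the kubeconfig assembly follows the standard Pulumi GKE example pattern, and the chart repo URL is Snyk's public Helm repository.

    import pulumi
    import pulumi_gcp as gcp
    import pulumi_kubernetes as k8s

    config = pulumi.Config()

    # GKE cluster for the demo (name matches the plan output below)
    cluster = gcp.container.Cluster(
        "pulumi-gke-cluster",
        initial_node_count=config.get_int("node_count") or 3,
        min_master_version=config.get("master_version"),
        node_config=gcp.container.ClusterNodeConfigArgs(
            machine_type=config.get("node_machine_type") or "n1-standard-1",
            oauth_scopes=["https://www.googleapis.com/auth/cloud-platform"],
        ),
    )

    # Assemble a kubeconfig for the new cluster from its outputs
    def make_kubeconfig(name, endpoint, auth):
        return f"""apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority-data: {auth['cluster_ca_certificate']}
        server: https://{endpoint}
      name: {name}
    contexts:
    - context: {{cluster: {name}, user: {name}}}
      name: {name}
    current-context: {name}
    users:
    - name: {name}
      user:
        exec:
          apiVersion: client.authentication.k8s.io/v1beta1
          command: gke-gcloud-auth-plugin
    """

    kubeconfig = pulumi.Output.all(
        cluster.name, cluster.endpoint, cluster.master_auth
    ).apply(lambda args: make_kubeconfig(*args))

    # Kubernetes provider bound to the new cluster
    k8s_provider = k8s.Provider("gke_k8s", kubeconfig=kubeconfig)

    # Namespace plus the Snyk controller from Snyk's public Helm repo
    ns = k8s.core.v1.Namespace(
        "snyk-monitor",
        metadata={"name": "snyk-monitor"},
        opts=pulumi.ResourceOptions(provider=k8s_provider),
    )

    snyk_monitor = k8s.helm.v3.Chart(
        "snyk-monitor",
        k8s.helm.v3.ChartOpts(
            chart="snyk-monitor",
            namespace="snyk-monitor",
            fetch_opts=k8s.helm.v3.FetchOpts(
                repo="https://snyk.github.io/kubernetes-monitor",
            ),
        ),
        opts=pulumi.ResourceOptions(provider=k8s_provider, depends_on=[ns]),
    )

    pulumi.export("kubeconfig", pulumi.Output.secret(kubeconfig))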

Running the Snyk Kubernetes Integration Setup

After cloning this repo, cd into it and run these commands.

  1. Authenticate to Google Cloud. Local authentication is the easiest way to deploy this demo; there are other ways to configure Pulumi with GCP, but this is the simplest for our purposes:

    $ gcloud auth login
  2. Create a new stack, which is an isolated deployment target for this example. Please use dev, as the example is set up to use the stack name dev:

    $ pulumi stack init dev
  3. Set the required configuration variables for this program. You can keep the defaults, but please ensure you set a GKE cluster password, as that is mandatory here:

    $ pulumi config set gcp:project [your-gcp-project-here] # Eg: snyk-cx-se-demo
    $ pulumi config set gcp:zone us-central1-c # any valid GCP zone here
    $ pulumi config set password --secret [your-cluster-password-here] # password for the cluster
    $ pulumi config set master_version 1.21.5-gke.1302 # any valid K8s master version on GKE

    By default, your cluster will have 3 nodes of type n1-standard-1. This is configurable, however; for instance, if you'd like your 3 nodes to be of type n1-standard-2 instead, run these commands:

    $ pulumi config set node_count 3
    $ pulumi config set node_machine_type n1-standard-2

    Finally, let's set the Snyk Kubernetes integration settings needed to configure the Kubernetes integration for our cluster automatically. We will need our Kubernetes Integration ID and our Snyk App ORG ID, which are currently the same ID:

    $ pulumi config set snyk_K8s_integration_id K8S_INTEGRATION_ID #same as ORG_ID at the moment
    $ pulumi config set snyk_org_id ORG_ID # your Snyk ORG ID under settings

    This shows how stacks can be configured in useful ways. You can even change these values after provisioning.

    Once this is done, you should have a file Pulumi.dev.yaml with content like the following:

    config:
     gcp-K8s-integration-demo:master_version: 1.21.5-gke.1302
     gcp-K8s-integration-demo:node_count: "3"
     gcp-K8s-integration-demo:node_machine_type: n1-standard-2
     gcp-K8s-integration-demo:password:
         secure: AAABAFeuJ0fR0k2SFMSVoJZI+0GlNYDaggXpRgu5sD0bpo+EnF1p4w==
     gcp-K8s-integration-demo:snyk_K8s_integration_id: yyyy1234
     gcp-K8s-integration-demo:snyk_org_id: yyyy1234
     gcp:project: snyk-cx-se-demo
     gcp:zone: us-central1-c
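
    Inside __main__.py, these values are read through Pulumi's config API. A minimal sketch (key names exactly as set above; SNYK_ORG_ID is the value later substituted into the Rego policy shown in step 4):

    import pulumi

    config = pulumi.Config()
    node_count = config.get_int("node_count") or 3
    node_machine_type = config.get("node_machine_type") or "n1-standard-1"
    master_version = config.get("master_version")
    cluster_password = config.require_secret("password")  # decrypted only inside the program
    SNYK_ORG_ID = config.require("snyk_org_id")
    SNYK_INTEGRATION_ID = config.require("snyk_K8s_integration_id")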
  4. Deploy everything with the pulumi up command. This provisions all the GCP resources necessary for the Snyk Kubernetes integration, including the GKE cluster itself, installs the Snyk controller Helm chart, and then deploys a Kubernetes Deployment running a Spring Boot application, all in a single step:

    $ pulumi up

    This will show you a preview, ask for confirmation, and then chug away at provisioning your Snyk K8s integration demo:

     ❯ pulumi up
     Previewing update (dev)
    
     View Live: https://app.pulumi.com/papicella/gcp-K8s-integration-demo/dev/previews/1db6492c-ae23-4e87-abf0-41e09fb62177
    
         Type                                                              Name                          Plan
     +   pulumi:pulumi:Stack                                               gcp-K8s-integration-demo-dev  create
     +   ├─ kubernetes:helm.sh/v3:Chart                                    snyk-monitor                  create
     +   │  ├─ kubernetes:core/v1:ServiceAccount                           snyk-monitor/snyk-monitor     create
     +   │  ├─ kubernetes:networking.k8s.io/v1:NetworkPolicy               snyk-monitor/snyk-monitor     create
     +   │  ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole         snyk-monitor                  create
     +   │  ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRoleBinding  snyk-monitor                  create
     +   │  └─ kubernetes:apps/v1:Deployment                               snyk-monitor/snyk-monitor     create
     +   ├─ gcp:container:Cluster                                          pulumi-gke-cluster            create
     +   ├─ pulumi:providers:kubernetes                                    gke_k8s                       create
     +   ├─ kubernetes:core/v1:Namespace                                   snyk-monitor                  create
     +   ├─ kubernetes:core/v1:Namespace                                   apples                        create
     +   ├─ kubernetes:core/v1:ConfigMap                                   snyk-monitor-custom-policies  create
     +   ├─ kubernetes:core/v1:Service                                     springboot-employee-api       create
     +   ├─ kubernetes:core/v1:Secret                                      snyk-monitor                  create
     +   └─ kubernetes:apps/v1:Deployment                                  springboot-employee-api       create
    
     Resources:
         + 15 to create

    After about five minutes, your cluster will be ready with the Snyk controller installed, and the sample workload Deployment will be auto-imported into your Snyk ORG.

    Do you want to perform this update? yes
     Updating (dev)
    
     View Live: https://app.pulumi.com/papicella/gcp-K8s-integration-demo/dev/updates/1
    
         Type                                                              Name                          Status
     +   pulumi:pulumi:Stack                                               gcp-K8s-integration-demo-dev  created
     +   ├─ kubernetes:helm.sh/v3:Chart                                    snyk-monitor                  created
     +   │  ├─ kubernetes:core/v1:ServiceAccount                           snyk-monitor/snyk-monitor     created
     +   │  ├─ kubernetes:networking.k8s.io/v1:NetworkPolicy               snyk-monitor/snyk-monitor     created
     +   │  ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole         snyk-monitor                  created
     +   │  ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRoleBinding  snyk-monitor                  created
     +   │  └─ kubernetes:apps/v1:Deployment                               snyk-monitor/snyk-monitor     created
     +   ├─ gcp:container:Cluster                                          pulumi-gke-cluster            created
     +   ├─ pulumi:providers:kubernetes                                    gke_k8s                       created
     +   ├─ kubernetes:core/v1:Namespace                                   snyk-monitor                  created
     +   ├─ kubernetes:core/v1:Namespace                                   apples                        created
     +   ├─ kubernetes:core/v1:Service                                     springboot-employee-api       created
     +   ├─ kubernetes:core/v1:ConfigMap                                   snyk-monitor-custom-policies  created
     +   ├─ kubernetes:core/v1:Secret                                      snyk-monitor                  created
     +   └─ kubernetes:apps/v1:Deployment                                  springboot-employee-api       created
    
     Outputs:
         kubeconfig: "[secret]"
    
     Resources:
         + 15 created
    
     Duration: 6m28s

    The GKE cluster created on GCP


    The Snyk Kubernetes Integration automatically configured


    The sample workload auto imported from the apples namespace


    The Snyk controller installed in the snyk-monitor namespace, plus the ConfigMap and Secret now managed by Pulumi:

    ❯ kubectl get all -n snyk-monitor
     NAME                              READY   STATUS    RESTARTS   AGE
     pod/snyk-monitor-db67744d-szl79   1/1     Running   0          8m52s
    
     NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
     deployment.apps/snyk-monitor   1/1     1            1           8m53s
    
     NAME                                    DESIRED   CURRENT   READY   AGE
     replicaset.apps/snyk-monitor-db67744d   1         1         1       8m53s
    
     ❯ kubectl get secret -n snyk-monitor -l app.kubernetes.io/managed-by=pulumi
     NAME           TYPE     DATA   AGE
     snyk-monitor   Opaque   2      42m
    
     ❯ kubectl get configmap -n snyk-monitor -l app.kubernetes.io/managed-by=pulumi
     NAME                           DATA   AGE
     snyk-monitor-custom-policies   1      42m
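
    That snyk-monitor Secret is how the controller receives its Integration ID. In this example it is declared as a Pulumi resource rather than created with kubectl create secret generic. A rough sketch of what that looks like (the integrationId and dockercfg.json key names follow the Snyk controller's documented install; the empty dockercfg.json assumes no private registries; SNYK_INTEGRATION_ID and k8s_provider come from earlier in the program):

    import pulumi
    import pulumi_kubernetes as k8s

    snyk_monitor_secret = k8s.core.v1.Secret(
        "snyk-monitor",
        metadata={"name": "snyk-monitor", "namespace": "snyk-monitor"},
        string_data={
            "integrationId": SNYK_INTEGRATION_ID,  # from pulumi config
            "dockercfg.json": "{}",                # private registry creds would go here
        },
        opts=pulumi.ResourceOptions(provider=k8s_provider),
    )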

    The Rego policy used by the Snyk controller is currently hardcoded to import workloads only from the apples namespace. This can be changed in __main__.py, and the policy could also be loaded from an external file rather than hardcoded in the Python code:

    snyk_monitor_custom_policies_str = """package snyk
    orgs := ["%s"]
    default workload_events = false
    workload_events {
        input.metadata.namespace == "apples"
        input.kind != "CronJob"
        input.kind != "Service"
    }""" % (SNYK_ORG_ID)
  5. From here, you may take the kubeconfig exported by the stack and either merge it into your ~/.kube/config file or save it locally and point the KUBECONFIG environment variable at it. All of your usual gcloud commands will work too, of course.

    For instance:

    $ pulumi stack output kubeconfig --show-secrets > kubeconfig.yaml
    $ KUBECONFIG=./kubeconfig.yaml kubectl get po -n apples
     NAME                                                READY   STATUS    RESTARTS   AGE
     springboot-employee-api-fyrj9hr2-66d8456f5f-hqqhx   1/1     Running   0          17m
  6. At this point, you have a running cluster. Feel free to modify your program, and run pulumi up to redeploy changes. The Pulumi CLI automatically detects what has changed and makes the minimal edits necessary to apply those changes. This could be altering the existing chart, adding new GCP or Kubernetes resources, or anything else, really. For example, see below.
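
    For instance, resizing the node pool is just a config change followed by another update; the preview shows exactly what would change before you confirm:

    $ pulumi config set node_count 5
    $ pulumi up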

  7. Once you are done, you can destroy all of the resources, and the stack:

    $ pulumi destroy
    $ pulumi stack rm
    ❯ pulumi destroy
    Previewing destroy (dev)
    
    View Live: https://app.pulumi.com/papicella/gcp-K8s-integration-demo/dev/previews/44fb2e8b-641c-4f55-9b4b-4ffa78f340ee
    
        Type                                                              Name                          Plan
    -   pulumi:pulumi:Stack                                               gcp-K8s-integration-demo-dev  delete
    -   ├─ kubernetes:core/v1:Namespace                                   snyk-monitor                  delete
    -   ├─ kubernetes:core/v1:ConfigMap                                   snyk-monitor-custom-policies  delete
    -   ├─ kubernetes:core/v1:Secret                                      snyk-monitor                  delete
    -   ├─ kubernetes:core/v1:Namespace                                   apples                        delete
    -   ├─ kubernetes:core/v1:Service                                     springboot-employee-api       delete
    -   ├─ kubernetes:apps/v1:Deployment                                  springboot-employee-api       delete
    -   ├─ pulumi:providers:kubernetes                                    gke_k8s                       delete
    -   ├─ kubernetes:helm.sh/v3:Chart                                    snyk-monitor                  delete
    -   │  ├─ kubernetes:core/v1:ServiceAccount                           snyk-monitor/snyk-monitor     delete
    -   │  ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRoleBinding  snyk-monitor                  delete
    -   │  ├─ kubernetes:networking.k8s.io/v1:NetworkPolicy               snyk-monitor/snyk-monitor     delete
    -   │  ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole         snyk-monitor                  delete
    -   │  └─ kubernetes:apps/v1:Deployment                               snyk-monitor/snyk-monitor     delete
    -   └─ gcp:container:Cluster                                          pulumi-gke-cluster            delete
    
    Outputs:
    - kubeconfig: "[secret]"
    
    Resources:
        - 15 to delete
    
    Do you want to perform this destroy? yes
    Destroying (dev)
    
    View Live: https://app.pulumi.com/papicella/gcp-K8s-integration-demo/dev/updates/2
    
        Type                                                              Name                          Status
    -   pulumi:pulumi:Stack                                               gcp-K8s-integration-demo-dev  deleted
    -   ├─ kubernetes:core/v1:Secret                                      snyk-monitor                  deleted
    -   ├─ kubernetes:core/v1:ConfigMap                                   snyk-monitor-custom-policies  deleted
    -   ├─ kubernetes:core/v1:Namespace                                   apples                        deleted
    -   ├─ kubernetes:core/v1:Namespace                                   snyk-monitor                  deleted
    -   ├─ kubernetes:core/v1:Service                                     springboot-employee-api       deleted
    -   ├─ kubernetes:apps/v1:Deployment                                  springboot-employee-api       deleted
    -   ├─ pulumi:providers:kubernetes                                    gke_k8s                       deleted
    -   ├─ kubernetes:helm.sh/v3:Chart                                    snyk-monitor                  deleted
    -   │  ├─ kubernetes:core/v1:ServiceAccount                           snyk-monitor/snyk-monitor     deleted
    -   │  ├─ kubernetes:networking.k8s.io/v1:NetworkPolicy               snyk-monitor/snyk-monitor     deleted
    -   │  ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRoleBinding  snyk-monitor                  deleted
    -   │  ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole         snyk-monitor                  deleted
    -   │  └─ kubernetes:apps/v1:Deployment                               snyk-monitor/snyk-monitor     deleted
    -   └─ gcp:container:Cluster                                          pulumi-gke-cluster            deleted
    
    Outputs:
    - kubeconfig: "[secret]"
    
    Resources:
        - 15 deleted
    
    Duration: 3m40s
    
    The resources in the stack have been deleted, but the history and configuration associated with the stack are still maintained.
    If you want to remove the stack completely, run 'pulumi stack rm dev'.

Pas Apicella [pas at snyk.io] is a Solutions Engineer (APJ) at Snyk.