Google Kubernetes Engine (GKE) with a Snyk Kubernetes controller installed/configured for Snyk App

This example provisions a Google Kubernetes Engine (GKE) cluster, installs the Snyk controller for the Snyk Kubernetes integration, configures auto-import of workloads from the apples namespace, and then deploys a sample workload as a Deployment into the cluster to verify that the integration is working, all using infrastructure as code. It demonstrates that you can manage both the Kubernetes objects themselves and the underlying cloud infrastructure with a single configuration language (in this case Python), tool, and workflow.
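To give a sense of the shape of the program, here is a minimal sketch rather than the repo's exact __main__.py. It mirrors Pulumi's standard GKE pattern; the resource names pulumi-gke-cluster and gke_k8s match the stack preview shown later, but the argument values are illustrative:

    import pulumi
    from pulumi_gcp import container
    from pulumi_kubernetes import Provider

    # Cloud infrastructure: the GKE cluster itself.
    cluster = container.Cluster("pulumi-gke-cluster",
        initial_node_count=3,
        min_master_version="1.21.5-gke.1302")

    # Manufacture a GKE-style kubeconfig from the cluster outputs; GKE relies on
    # gcloud for authentication rather than a client certificate/key pair.
    k8s_config = pulumi.Output.all(cluster.name, cluster.endpoint, cluster.master_auth).apply(
        lambda args: """apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: {0}
        server: https://{1}
      name: {2}
    contexts:
    - context:
        cluster: {2}
        user: {2}
      name: {2}
    current-context: {2}
    kind: Config
    preferences: {{}}
    users:
    - name: {2}
      user:
        auth-provider:
          config:
            cmd-args: config config-helper --format=json
            cmd-path: gcloud
            expiry-key: '{{.credential.token_expiry}}'
            token-key: '{{.credential.access_token}}'
          name: gcp
    """.format(args[2]["cluster_ca_certificate"], args[1], args[0]))

    # Kubernetes objects are managed by the same program through a provider
    # that targets the new cluster.
    k8s_provider = Provider("gke_k8s", kubeconfig=k8s_config)

    # The kubeconfig is exported as a secret stack output (used in step 5 below).
    pulumi.export("kubeconfig", pulumi.Output.secret(k8s_config))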

Prerequisites

Ensure you have Python 3, a Pulumi account, and the Pulumi CLI installed.

We will be deploying to Google Cloud Platform (GCP), so you will need an account. If you don't have an account, sign up for free here. In either case, follow the instructions here to connect Pulumi to your GCP account.

This example assumes that you have the gcloud CLI on your path; it is installed as part of the Google Cloud SDK.

Snyk Pre Steps

Note: these steps must be completed before running this example. You will need an existing Snyk organization (ORG) that does not yet have the Kubernetes integration configured.

  1. Select the Snyk ORG in which you want the Kubernetes integration to be set up automatically. It can be an empty ORG or an existing one with projects, but ensure that the Kubernetes integration is not already configured in it.

  2. Click Integrations, then Kubernetes, and finally Connect. Make a note of the Integration ID; we will need it shortly.

That's it. You are now ready to set up the Snyk integration demo entirely from Pulumi using infrastructure as code, which will do the following:

  • Create a GKE cluster
  • Deploy the Snyk controller into the cluster (a minimal Pulumi sketch of this part follows this list)
  • Set up the Snyk Kubernetes integration to auto-import K8s workloads into Snyk App
  • Deploy a sample workload into the apples namespace, as allowed by our Rego policy file
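The Snyk-specific pieces are also plain Pulumi resources. The following is a minimal sketch of how the controller can be wired up, not the repo's exact code: k8s_provider is the Kubernetes provider from the sketch above, SNYK_K8S_INTEGRATION_ID and SNYK_ORG_ID come from the stack configuration set in step 3 below, snyk_monitor_custom_policies_str is the Rego policy shown in step 4, and the chart values and secret keys follow Snyk's documented snyk-monitor Helm install:

    import pulumi
    from pulumi_kubernetes.core.v1 import ConfigMap, Namespace, Secret
    from pulumi_kubernetes.helm.v3 import Chart, ChartOpts, FetchOpts

    # Namespace the Snyk controller runs in.
    monitor_ns = Namespace("snyk-monitor",
        metadata={"name": "snyk-monitor"},
        opts=pulumi.ResourceOptions(provider=k8s_provider))

    # The snyk-monitor chart expects a Secret holding the Kubernetes Integration ID
    # and a (here empty) dockercfg.json for private registries.
    monitor_secret = Secret("snyk-monitor",
        metadata={"name": "snyk-monitor", "namespace": "snyk-monitor"},
        string_data={"integrationId": SNYK_K8S_INTEGRATION_ID, "dockercfg.json": "{}"},
        opts=pulumi.ResourceOptions(provider=k8s_provider, depends_on=[monitor_ns]))

    # Custom Rego policy controlling which workloads are auto-imported.
    policies = ConfigMap("snyk-monitor-custom-policies",
        metadata={"name": "snyk-monitor-custom-policies", "namespace": "snyk-monitor"},
        data={"workload-events.rego": snyk_monitor_custom_policies_str},
        opts=pulumi.ResourceOptions(provider=k8s_provider, depends_on=[monitor_ns]))

    # The Snyk controller itself, installed from Snyk's Helm chart repository.
    snyk_monitor = Chart("snyk-monitor", ChartOpts(
        chart="snyk-monitor",
        namespace="snyk-monitor",
        fetch_opts=FetchOpts(repo="https://snyk.github.io/kubernetes-monitor"),
        values={
            "clusterName": "Pulumi demo cluster",
            "policyOrgs": [SNYK_ORG_ID],
            "workloadPoliciesMap": "snyk-monitor-custom-policies",
        }),
        opts=pulumi.ResourceOptions(provider=k8s_provider,
                                    depends_on=[monitor_secret, policies]))

Because everything is an ordinary Pulumi resource, the depends_on options ensure the Secret and ConfigMap exist before the chart's Deployment starts.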

Running the Snyk Kubernetes Integration Setup

After cloning this repo, cd into it and run these commands.

  1. Authenticate to Google Cloud using local authentication. There are other ways to configure Pulumi with GCP, but this is the easiest for this demo:

    $ gcloud auth login
  2. Create a new stack, which is an isolated deployment target for this example. Please use dev, as the example is set up to use the stack name dev:

    $ pulumi stack init dev
  3. Set the required configuration variables for this program. You can keep the defaults, but be sure to set a GKE cluster password, as that is mandatory here:

    $ pulumi config set gcp:project [your-gcp-project-here] # Eg: snyk-cx-se-demo
    $ pulumi config set gcp:zone us-central1-c # any valid GCP zone here
    $ pulumi config set password --secret [your-cluster-password-here] # password for the cluster
    $ pulumi config set master_version 1.21.5-gke.1302 # any valid K8s master version on GKE

    By default, your cluster will have 3 nodes of type n1-standard-1. This is configurable; for example, to keep 3 nodes but use the larger n1-standard-2 machine type, run these commands:

    $ pulumi config set node_count 3
    $ pulumi config set node_machine_type n1-standard-2

    Finally, let's set the Snyk Kubernetes integration settings needed to configure the Kubernetes integration for our cluster automatically. We will need our Kubernetes Integration ID and our Snyk App ORG ID, which are currently the same value:

    $ pulumi config set snyk_K8s_integration_id K8S_INTEGRATION_ID #same as ORG_ID at the moment
    $ pulumi config set snyk_org_id ORG_ID # your Snyk ORG ID under settings

    This shows how stacks can be configurable in useful ways. You can even change these after provisioning.

    Once this is done, you should have a file named Pulumi.dev.yaml with content like the following:

    config:
     gcp-K8s-integration-demo:master_version: 1.21.5-gke.1302
     gcp-K8s-integration-demo:node_count: "3"
     gcp-K8s-integration-demo:node_machine_type: n1-standard-2
     gcp-K8s-integration-demo:password:
         secure: AAABAFeuJ0fR0k2SFMSVoJZI+0GlNYDaggXpRgu5sD0bpo+EnF1p4w==
     gcp-K8s-integration-demo:snyk_K8s_integration_id: yyyy1234
     gcp-K8s-integration-demo:snyk_org_id: yyyy1234
     gcp:project: snyk-cx-se-demo
     gcp:zone: us-central1-c
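
    In __main__.py these values are read back through pulumi.Config(), which automatically scopes keys to the project namespace (gcp-K8s-integration-demo). A sketch; the exact variable names in the repo may differ:

     import pulumi

     config = pulumi.Config()
     NODE_COUNT = config.get_int("node_count") or 3
     NODE_MACHINE_TYPE = config.get("node_machine_type") or "n1-standard-1"
     MASTER_VERSION = config.get("master_version")
     CLUSTER_PASSWORD = config.require_secret("password")   # an Output marked as secret
     SNYK_ORG_ID = config.require("snyk_org_id")
     SNYK_K8S_INTEGRATION_ID = config.require("snyk_K8s_integration_id")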
  4. Deploy everything with the pulumi up command. This provisions all the GCP resources necessary for the Snyk Kubernetes integration, including the GKE cluster itself and the Snyk controller Helm chart, and then deploys a Kubernetes Deployment running a Spring Boot application, all in a single step:

    $ pulumi up

    This will show you a preview, ask for confirmation, and then chug away at provisioning your Snyk K8s integration demo:

     ❯ pulumi up
     Previewing update (dev)
    
     View Live: https://app.pulumi.com/papicella/gcp-K8s-integration-demo/dev/previews/1db6492c-ae23-4e87-abf0-41e09fb62177
    
         Type                                                              Name                          Plan
     +   pulumi:pulumi:Stack                                               gcp-K8s-integration-demo-dev  create
     +   ├─ kubernetes:helm.sh/v3:Chart                                    snyk-monitor                  create
     +   │  ├─ kubernetes:core/v1:ServiceAccount                           snyk-monitor/snyk-monitor     create
     +   │  ├─ kubernetes:networking.k8s.io/v1:NetworkPolicy               snyk-monitor/snyk-monitor     create
     +   │  ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole         snyk-monitor                  create
     +   │  ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRoleBinding  snyk-monitor                  create
     +   │  └─ kubernetes:apps/v1:Deployment                               snyk-monitor/snyk-monitor     create
     +   ├─ gcp:container:Cluster                                          pulumi-gke-cluster            create
     +   ├─ pulumi:providers:kubernetes                                    gke_k8s                       create
     +   ├─ kubernetes:core/v1:Namespace                                   snyk-monitor                  create
     +   ├─ kubernetes:core/v1:Namespace                                   apples                        create
     +   ├─ kubernetes:core/v1:ConfigMap                                   snyk-monitor-custom-policies  create
     +   ├─ kubernetes:core/v1:Service                                     springboot-employee-api       create
     +   ├─ kubernetes:core/v1:Secret                                      snyk-monitor                  create
     +   └─ kubernetes:apps/v1:Deployment                                  springboot-employee-api       create
    
     Resources:
         + 15 to create

    After about five minutes, your cluster will be ready, with the Snyk controller installed and the sample workload Deployment auto-imported into your Snyk ORG.

    Do you want to perform this update? yes
     Updating (dev)
    
     View Live: https://app.pulumi.com/papicella/gcp-K8s-integration-demo/dev/updates/1
    
         Type                                                              Name                          Status
     +   pulumi:pulumi:Stack                                               gcp-K8s-integration-demo-dev  created
     +   ├─ kubernetes:helm.sh/v3:Chart                                    snyk-monitor                  created
     +   │  ├─ kubernetes:core/v1:ServiceAccount                           snyk-monitor/snyk-monitor     created
     +   │  ├─ kubernetes:networking.k8s.io/v1:NetworkPolicy               snyk-monitor/snyk-monitor     created
     +   │  ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole         snyk-monitor                  created
     +   │  ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRoleBinding  snyk-monitor                  created
     +   │  └─ kubernetes:apps/v1:Deployment                               snyk-monitor/snyk-monitor     created
     +   ├─ gcp:container:Cluster                                          pulumi-gke-cluster            created
     +   ├─ pulumi:providers:kubernetes                                    gke_k8s                       created
     +   ├─ kubernetes:core/v1:Namespace                                   snyk-monitor                  created
     +   ├─ kubernetes:core/v1:Namespace                                   apples                        created
     +   ├─ kubernetes:core/v1:Service                                     springboot-employee-api       created
     +   ├─ kubernetes:core/v1:ConfigMap                                   snyk-monitor-custom-policies  created
     +   ├─ kubernetes:core/v1:Secret                                      snyk-monitor                  created
     +   └─ kubernetes:apps/v1:Deployment                                  springboot-employee-api       created
    
     Outputs:
         kubeconfig: "[secret]"
    
     Resources:
         + 15 created
    
     Duration: 6m28s

    The GKE cluster created on GCP

    The Snyk Kubernetes Integration automatically configured

    The sample workload auto imported from the apples namespace

    The Snyk controller installed in the snyk-monitor namespace, plus the ConfigMap and Secret now managed by Pulumi:

    ❯ kubectl get all -n snyk-monitor
     NAME                              READY   STATUS    RESTARTS   AGE
     pod/snyk-monitor-db67744d-szl79   1/1     Running   0          8m52s
    
     NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
     deployment.apps/snyk-monitor   1/1     1            1           8m53s
    
     NAME                                    DESIRED   CURRENT   READY   AGE
     replicaset.apps/snyk-monitor-db67744d   1         1         1       8m53s
    
     ❯ kubectl get secret -n snyk-monitor -l app.kubernetes.io/managed-by=pulumi
     NAME           TYPE     DATA   AGE
     snyk-monitor   Opaque   2      42m
    
     ❯ kubectl get configmap -n snyk-monitor -l app.kubernetes.io/managed-by=pulumi
     NAME                           DATA   AGE
     snyk-monitor-custom-policies   1      42m

    The Rego policy used by the Snyk controller is currently hardcoded to import workloads only from the apples namespace. It can be changed in __main__.py and loaded from an external file rather than being hardcoded in the Python code (a sketch of that follows the snippet):

    snyk_monitor_custom_policies_str = """package snyk
     orgs := ["%s"]
     default workload_events = false
     workload_events {
         input.metadata.namespace == "apples"
             input.kind != "CronJob"
             input.kind != "Service"
     }""" % (SNYK_ORG_ID)
  5. From here, you may take the kubeconfig stack output and either merge it into your ~/.kube/config file or save it locally and point the KUBECONFIG environment variable at it. All of your usual gcloud commands will work too, of course.

    For instance:

    $ pulumi stack output kubeconfig --show-secrets > kubeconfig.yaml
    $ KUBECONFIG=./kubeconfig.yaml kubectl get po -n apples
     NAME                                                READY   STATUS    RESTARTS   AGE
     springboot-employee-api-fyrj9hr2-66d8456f5f-hqqhx   1/1     Running   0          17m
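
    That pod comes from the Deployment declared in the same Pulumi program. A minimal sketch of its shape, where the container image is a placeholder and k8s_provider is the cluster's Kubernetes provider (the real spec lives in __main__.py):

     import pulumi
     from pulumi_kubernetes.apps.v1 import Deployment

     app_labels = {"app": "springboot-employee-api"}
     springboot_employee_api = Deployment("springboot-employee-api",
         metadata={"namespace": "apples"},
         spec={
             "selector": {"match_labels": app_labels},
             "replicas": 1,
             "template": {
                 "metadata": {"labels": app_labels},
                 "spec": {"containers": [{
                     "name": "springboot-employee-api",
                     "image": "REGISTRY/springboot-employee-api:TAG",  # placeholder image
                     "ports": [{"container_port": 8080}],
                 }]},
             },
         },
         opts=pulumi.ResourceOptions(provider=k8s_provider))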
  6. At this point, you have a running cluster. Feel free to modify your program, and run pulumi up to redeploy changes. The Pulumi CLI automatically detects what has changed and makes the minimal edits necessary to accomplish these changes. This could be altering the existing chart, adding new GCP or Kubernetes resources, or anything, really.

  7. Once you are done, you can destroy all of the resources and the stack:

    $ pulumi destroy
    $ pulumi stack rm
    ❯ pulumi destroy
    Previewing destroy (dev)
    
    View Live: https://app.pulumi.com/papicella/gcp-K8s-integration-demo/dev/previews/44fb2e8b-641c-4f55-9b4b-4ffa78f340ee
    
        Type                                                              Name                          Plan
    -   pulumi:pulumi:Stack                                               gcp-K8s-integration-demo-dev  delete
    -   ├─ kubernetes:core/v1:Namespace                                   snyk-monitor                  delete
    -   ├─ kubernetes:core/v1:ConfigMap                                   snyk-monitor-custom-policies  delete
    -   ├─ kubernetes:core/v1:Secret                                      snyk-monitor                  delete
    -   ├─ kubernetes:core/v1:Namespace                                   apples                        delete
    -   ├─ kubernetes:core/v1:Service                                     springboot-employee-api       delete
    -   ├─ kubernetes:apps/v1:Deployment                                  springboot-employee-api       delete
    -   ├─ pulumi:providers:kubernetes                                    gke_k8s                       delete
    -   ├─ kubernetes:helm.sh/v3:Chart                                    snyk-monitor                  delete
    -   │  ├─ kubernetes:core/v1:ServiceAccount                           snyk-monitor/snyk-monitor     delete
    -   │  ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRoleBinding  snyk-monitor                  delete
    -   │  ├─ kubernetes:networking.k8s.io/v1:NetworkPolicy               snyk-monitor/snyk-monitor     delete
    -   │  ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole         snyk-monitor                  delete
    -   │  └─ kubernetes:apps/v1:Deployment                               snyk-monitor/snyk-monitor     delete
    -   └─ gcp:container:Cluster                                          pulumi-gke-cluster            delete
    
    Outputs:
    - kubeconfig: "[secret]"
    
    Resources:
        - 15 to delete
    
    Do you want to perform this destroy? yes
    Destroying (dev)
    
    View Live: https://app.pulumi.com/papicella/gcp-K8s-integration-demo/dev/updates/2
    
        Type                                                              Name                          Status
    -   pulumi:pulumi:Stack                                               gcp-K8s-integration-demo-dev  deleted
    -   ├─ kubernetes:core/v1:Secret                                      snyk-monitor                  deleted
    -   ├─ kubernetes:core/v1:ConfigMap                                   snyk-monitor-custom-policies  deleted
    -   ├─ kubernetes:core/v1:Namespace                                   apples                        deleted
    -   ├─ kubernetes:core/v1:Namespace                                   snyk-monitor                  deleted
    -   ├─ kubernetes:core/v1:Service                                     springboot-employee-api       deleted
    -   ├─ kubernetes:apps/v1:Deployment                                  springboot-employee-api       deleted
    -   ├─ pulumi:providers:kubernetes                                    gke_k8s                       deleted
    -   ├─ kubernetes:helm.sh/v3:Chart                                    snyk-monitor                  deleted
    -   │  ├─ kubernetes:core/v1:ServiceAccount                           snyk-monitor/snyk-monitor     deleted
    -   │  ├─ kubernetes:networking.k8s.io/v1:NetworkPolicy               snyk-monitor/snyk-monitor     deleted
    -   │  ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRoleBinding  snyk-monitor                  deleted
    -   │  ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole         snyk-monitor                  deleted
    -   │  └─ kubernetes:apps/v1:Deployment                               snyk-monitor/snyk-monitor     deleted
    -   └─ gcp:container:Cluster                                          pulumi-gke-cluster            deleted
    
    Outputs:
    - kubeconfig: "[secret]"
    
    Resources:
        - 15 deleted
    
    Duration: 3m40s
    
    The resources in the stack have been deleted, but the history and configuration associated with the stack are still maintained.
    If you want to remove the stack completely, run 'pulumi stack rm dev'.

Pas Apicella [pas at snyk.io] is a Solution Engineer (APJ) at Snyk.