Commit 98d1a267 authored by Prout, Ryan's avatar Prout, Ryan

Merge branch 'prout-dev' into 'master'

Prout dev

See merge request ryu/slate_helm_examples!1
parents 29d34ced 0cadeb19

.gitlab-ci.yml

0 → 100644
+26 −0
lint Helm Charts:
  image:
    name: linkyard/docker-helm
    entrypoint: ["/bin/sh", "-c"]
  stage: test
  script:
    - helm lint charts/*
#    - helm lint charts/*/charts/*

pages:
  image:
    name: linkyard/docker-helm
    entrypoint: ["/bin/sh", "-c"]
  stage: deploy
  script:
    - helm init --client-only
    - mkdir -p ./public
    - "echo \"User-Agent: *\nDisallow: /\" > ./public/robots.txt"
    - helm package charts/* --destination ./public
    - helm repo index --url https://${CI_PROJECT_NAMESPACE}.gitlab.io/${CI_PROJECT_NAME} .
    - mv index.yaml ./public
  artifacts:
    paths:
      - public
  only:
    - master
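
Once the `pages` job has published the packaged charts, the resulting GitLab Pages site could be consumed like any other Helm repository. A hypothetical sketch (the exact URL depends on your GitLab instance's Pages domain and the project's namespace and name, matching the `--url` used in `helm repo index` above):

```
# Hypothetical: substitute your project's actual Pages URL
helm repo add slate-examples https://<namespace>.gitlab.io/<project-name>
helm repo update
helm search repo slate-examples
```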
+44 −0
# Slate_Helm_Examples

This repo provides examples to build on when deploying your own application on OLCF's Slate Platform.

The charts/ directory contains the various example application deployments, as Helm charts, along with their metadata.

## Getting Started

### Prerequisites

NOTE: These examples will be done on Slate's Marble Cluster. Marble resides in OLCF's Moderate enclave, a peripheral system to Summit.

- You have a Slate project allocation
- You have the OC tool installed
- You can log into Slate's Marble cluster via the OC CLI
- You have helm3 installed
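
A quick way to check these prerequisites from a terminal (the login URL is a placeholder; use the one given in the Slate documentation):

```
# Check that the tools are installed
oc version
helm version

# Log in to the Marble cluster and select your project (placeholders)
oc login <marble-login-url>
oc project <your-project>
```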

Please refer to the Slate user documentation (link coming soon) if you have any questions.

### Helm Chart directory/file structure

A [Chart](https://helm.sh/docs/topics/charts/) is organized as a collection of files inside a directory. The directory name, within the charts/ directory, is the name of the chart. So, you can see all of the example application charts in this repository's charts/ directory.

Here is the overall structure of a simple Helm chart, using minio-standalone in this repository as an example:
```
charts/
|--- minio-standalone/
|    |--- templates/
|    |    |--- helpers.tpl
|    |    |--- minio-standalone-deployment.yaml
|    |    |--- minio-standalone-pvc.yaml
|    |    |--- minio-standalone-service.yaml
|    |    |--- network-policy.yaml
|    |    |--- route.yaml
|    |--- Chart.yaml
|    |--- README.md
|    |--- values.yaml
|--- another-application/
|    |--- templates/
|    |    |--- deployment.yaml
|    |    |--- pvc.yaml
|    |    |--- other.yaml
|    |--- Chart.yaml
|    |--- README.md
|    |--- values.yaml
```
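
With this layout, each example chart can be linted and rendered locally before anything is deployed; for instance:

```
# Lint the chart and render its templates locally (no cluster needed)
helm lint charts/minio-standalone
helm template my-minio charts/minio-standalone
```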
+5 −0
apiVersion: v1
description: A Helm chart example for deploying MinIO in standalone mode
name: minio-standalone
version: 1.0.0
+237 −0
# Standalone MinIO example application

This Chart provides an example that deploys a standalone MinIO application. It is a single MinIO server running in a pod, as a "deployment", with a small persistent volume claim to persist data.

This example is a derivative of the [MinIO provided example](https://github.com/minio/minio/blob/master/docs/orchestration/kubernetes/k8s-yaml.md) to target Slate specifically.

We will look at the individual [template files](https://code.ornl.gov/ryu/slate_helm_examples/-/tree/prout-dev/charts%2Fminio-standalone%2Ftemplates) in this minio-standalone chart, then deploy the application in your project space at the end.

**NOTE:** The main objective of this example is to show how the smaller, individual core components come together to create a simple application deployment. This example can be used as a starting point for a variety of data applications, but it is not meant to be a production deployment. A production deployment may require more thought and more robust configuration.

## Prerequisites

**NOTE:** This example will be done on Slate's Marble Cluster. Marble resides in OLCF's Moderate enclave, a peripheral system to Summit.

- You have a Slate project allocation
- You have the OC tool installed
- You can log into Slate's Marble cluster via the OC CLI
- You have helm3 installed

Please refer to the Slate user documentation (link coming soon) if you have any questions.

## Deploying the MinIO Standalone Server on Slate

This section describes the process to deploy a standalone MinIO server on Slate. The deployment uses the [official MinIO Docker image](https://hub.docker.com/r/minio/minio/) from Docker Hub.

This example uses these core components of Kubernetes:

- [Pods](https://kubernetes.io/docs/concepts/workloads/pods/pod/)
- [Services](https://kubernetes.io/docs/concepts/services-networking/service/)
- [Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/)
- [Persistent Volume Claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)

In the following sections we will look at the files of our MinIO standalone chart that use the above core components.

### The Persistent Volume Claim

MinIO needs persistent storage to store objects. The need for persistent data applies to many data applications, so this fundamental piece is relevant in many scenarios. Without persistent storage, an application instance writes its data to the container file system, which is destroyed as soon as the container restarts. This is true for any application that needs to persist data on Slate. So, to persist data in your application, use a persistent volume claim, or "PVC".

The file we use to do this is [minio-standalone-pvc.yaml](https://code.ornl.gov/ryu/slate_helm_examples/-/blob/prout-dev/charts/minio-standalone/templates/minio-standalone-pvc.yaml): 

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # This name uniquely identifies the PVC. This is used in deployment.
  name: minio-standalone-pv-claim
spec:
  # Read more about access modes here: http://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes
  accessModes:
    # The volume is mounted as read-write by a single node
    - ReadWriteOnce
  resources:
    # This is the request for storage. Should be available in the cluster.
    requests:
      storage: 10Gi
```
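
After the chart is installed (shown at the end), the claim can be inspected with `oc`; for example:

```
# Check that the PVC was created and bound to a volume
oc get pvc minio-standalone-pv-claim
oc describe pvc minio-standalone-pv-claim
```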
### The Deployment

A deployment encapsulates [ReplicaSets](https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/) and [Pods](https://kubernetes.io/docs/concepts/workloads/pods/pod/). If a pod goes down, the replication controller makes sure another pod comes up automatically. This handles pod failures without you having to worry about them, enabling stable applications and services.

The file we use to do this is [minio-standalone-deployment.yaml](https://code.ornl.gov/ryu/slate_helm_examples/-/blob/prout-dev/charts/minio-standalone/templates/minio-standalone-deployment.yaml):

```
apiVersion: apps/v1
kind: Deployment
metadata:
  # This name uniquely identifies the Deployment
  name: minio-standalone
spec:
  selector:
    matchLabels:
      app: minio-standalone # has to match .spec.template.metadata.labels
  strategy:
    # Specifies the strategy used to replace old Pods by new ones
    # Refer: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
    type: Recreate
  template:
    metadata:
      labels:
        # This label is used as a selector in Service definition
        app: minio-standalone
    spec:
      # Volumes used by this deployment
      volumes:
      - name: data
        # This volume is based on PVC
        persistentVolumeClaim:
          # Name of the PVC created earlier
          claimName: minio-standalone-pv-claim
      containers:
      - name: minio
        # Volume mounts for this container
        volumeMounts:
        # Volume 'data' is mounted to path '/data'
        - name: data 
          mountPath: "/data"
        # Pulls a pinned MinIO image from Docker Hub
        image: minio/minio:RELEASE.2020-05-08T02-40-49Z
        args:
        - server
        - /data
        env:
        # MinIO access key and secret key
        - name: MINIO_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              key: SECRET_TOKEN
              name: minio-standalone-access-key
        - name: MINIO_SECRET_KEY
          valueFrom:
            secretKeyRef:
              key: SECRET_TOKEN
              name: minio-standalone-secret-key 
        ports:
        - containerPort: 9000
        # Readiness probe detects situations when MinIO server instance
        # is not ready to accept traffic. Kubernetes doesn't forward
        # traffic to the pod while readiness checks fail.
        readinessProbe:
          httpGet:
            path: /minio/health/ready
            port: 9000
          initialDelaySeconds: 120
          periodSeconds: 20
        # Liveness probe detects situations where MinIO server instance
        # is not working properly and needs restart. Kubernetes automatically
        # restarts the pods if liveness checks fail.
        livenessProbe:
          httpGet:
            path: /minio/health/live
            port: 9000
          initialDelaySeconds: 120
          periodSeconds: 20
```
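
Note that the deployment reads its credentials from two secrets, minio-standalone-access-key and minio-standalone-secret-key, which are not created by the templates shown here. Assuming they are created by hand before the chart is installed, a sketch might look like:

```
# Hypothetical: create the secrets the deployment references,
# each carrying a SECRET_TOKEN key, before installing the chart
oc create secret generic minio-standalone-access-key \
  --from-literal=SECRET_TOKEN="$(openssl rand -hex 10)"
oc create secret generic minio-standalone-secret-key \
  --from-literal=SECRET_TOKEN="$(openssl rand -hex 20)"
```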
### The Service

A service gives our deployment a stable network endpoint. There are three major service types: the default, ClusterIP, exposes a service to connections from inside the cluster, while the NodePort and LoadBalancer types expose services to external traffic directly.

In this example we use the ClusterIP type in our [minio-standalone-service.yaml](https://code.ornl.gov/ryu/slate_helm_examples/-/blob/prout-dev/charts/minio-standalone/templates/minio-standalone-service.yaml):

```
apiVersion: v1
kind: Service
metadata:
  # This name uniquely identifies the service
  name: minio-standalone-service
  labels:
    app: minio-standalone
spec:
  type: ClusterIP
  ports:
    - name: 9000-tcp
      port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    # Looks for labels `app:minio-standalone` in the namespace and applies the spec
    app: minio-standalone
```
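
Before exposing anything externally, the service can be tested from your workstation with a port-forward; for example:

```
# Forward local port 9000 to the service, then browse http://localhost:9000
oc port-forward service/minio-standalone-service 9000:9000
```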

### The Route

Since our service uses the ClusterIP type above, and our MinIO application is accessible via HTTP/HTTPS, we expose the application via a route. If our service used a protocol other than HTTP/HTTPS, we would use a NodePort service type instead.

Here is our [route.yaml](https://code.ornl.gov/ryu/slate_helm_examples/-/blob/prout-dev/charts/minio-standalone/templates/route.yaml) file:

```
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: minio-standalone
spec:
  host: minio-standalone.apps.marble.ccs.ornl.gov
  to:
    # Associate with the service we created
    kind: Service
    name: minio-standalone-service
  port:
    # Needs to match name of port in the service file
    targetPort: 9000-tcp
  tls:
    # Terminate TLS at the router, before sending to service. Preferred method of securing a route.
    termination: edge
```

Simple diagram of networking relationship and the deployment wrapping the pod(s):

```
                                                                                     Deployment
                                                                                     ----------
https://minio-standalone.apps.marble.ccs.ornl.gov/minio/ ---> Route --> Service --> |  Pod(s)  |
                                                                                     ---------- 
```

Once we install our minio-standalone application, we will have the setup above and be able to access the app via the URL.
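
Reachability can then be checked from outside the cluster using MinIO's health endpoint, e.g.:

```
# Returns HTTP 200 when the server is live
curl -i https://minio-standalone.apps.marble.ccs.ornl.gov/minio/health/live
```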

### The Network Policy

Network Policies specify how groups of pods are allowed to communicate with each other and with other network endpoints. A Network Policy selects pods by label and defines rules for network traffic specific to those pods.

In this example, we create a Network Policy for our minio-standalone application. Here is the [network-policy.yaml](https://code.ornl.gov/ryu/slate_helm_examples/-/blob/prout-dev/charts/minio-standalone/templates/network-policy.yaml) file:

```
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-allow-external
  namespace: stf007
spec:
  podSelector:
    matchLabels:
      # how we match our application
      app: minio-standalone
  ingress:
    # allow all
    - {}
  policyTypes:
    - Ingress
```
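
After installation, the policy and the pods it selects can be inspected with:

```
# Show the policy's pod selector and ingress rules
oc describe networkpolicy web-allow-external
```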

### The Values File

Helm templates provide built-in objects. One of these objects is "Values", which provides access to the values passed into the chart. In this case, the contents come from [values.yaml](https://code.ornl.gov/ryu/slate_helm_examples/-/blob/prout-dev/charts/minio-standalone/values.yaml):

```
# This can be used to provide variables to your chart. 
# A simple example would be resources in the minio-standalone-deployment.yaml:

minio:
  resources:
    requests:
      cpu: 2
      memory: 1Gi
    limits:
      cpu: 2
      memory: 1Gi
```
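
The templates shown above do not currently reference these values; a hypothetical snippet showing how the container spec in minio-standalone-deployment.yaml could consume them:

```
      containers:
      - name: minio
        # Hypothetical: render the resource requests/limits from values.yaml
        resources:
          {{- toYaml .Values.minio.resources | nindent 10 }}
```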
+16 −0
{{/*
Expand the name of the chart.
*/}}
{{- define "minio-standalone.name" -}}
{{- default .Release.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "minio-standalone.fullname" -}}
{{- $name := default "minio-standalone" .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
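
These named templates could then be used from the chart's other templates via `include`; for example, a hypothetical metadata block:

```
metadata:
  # Hypothetical usage of the helpers defined above
  name: {{ include "minio-standalone.fullname" . }}
  labels:
    app: {{ include "minio-standalone.name" . }}
```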