
There are several layers of infrastructure, and each of them has a weak spot. The first tier is the servers that carry your entire workload; the next levels are the cluster and the containers. Our goal is to minimize the attack surface at every layer.

First, make sure your cluster is deployed on a private network and that traffic comes in only through the load balancer and ingress services. Don’t open ports like SSH or RDP; use SSM or nothing at all, since Kubernetes needs very little in the way of base system configuration. And with managed Kubernetes services you don’t even have to worry about the initial setup: you will simply manage the operators.

Unprivileged users (rootless)

Dockerfile-alpine

FROM alpine:3.12
# Create user and set ownership and permissions as required
RUN adduser -D myuser && chown -R myuser /myapp-data
COPY myapp /myapp
USER myuser
ENTRYPOINT ["/myapp"]

By default, many containers run as the privileged root user, even though the programs inside them rarely need privileged execution. Running containers as a non-root (rootless) user reduces the damage an attacker can do if the container is compromised.
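The same requirement can also be enforced on the Kubernetes side with a pod securityContext. Here is a minimal sketch; the pod name, UID, and image are illustrative:

non-root-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nonroot-demo          # hypothetical name
spec:
  securityContext:
    runAsNonRoot: true        # the kubelet refuses to start a container whose user is UID 0
    runAsUser: 10001          # arbitrary non-root UID; match the user created in the image
  containers:
  - name: myapp
    image: myapp:latest       # placeholder image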

Immutable container file systems

read-only-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
      name: web
    spec:
      containers:
      - command: ["sleep"]
        args: ["999"]
        image: ubuntu:latest
        name: web
        securityContext:
          readOnlyRootFilesystem: true
        volumeMounts:
        - mountPath: /writeable/location/here
          name: volName
      volumes:
      - emptyDir: {}
        name: volName

A read-only root filesystem protects against creating files, downloading scripts, and modifying programs inside a container. However, these restrictions also affect legitimate containerized programs and can cause crashes. To avoid breaking legitimate applications, mount writable secondary filesystems (such as the emptyDir above) only for the specific directories where the application needs write access.

Without shell, cat, grep, less, tail, echo, etc.

Dockerfile
# Start by building the application.
FROM golang:1.13-buster as build
WORKDIR /go/src/app
ADD . /go/src/app
RUN go get -d -v ./...
RUN go build -o /go/bin/app
# Now copy it to our base image.
FROM gcr.io/distroless/base-debian10
COPY --from=build /go/bin/app /
CMD ["/app"]

“Distroless” images contain only your application and its runtime dependencies. They don’t have package managers, shells, or any of the other programs you’d expect to find in a standard Linux distribution.

Less is better

Keep as little as possible inside the container: only your compiled application, without source code or build dependencies. A multi-stage build, as in the Dockerfile below, makes this easy.

# Start by building the application.
FROM golang:1.13-buster as build
WORKDIR /go/src/app
ADD . /go/src/app
RUN go get -d -v ./...
RUN go build -o /go/bin/app
# Now copy it to our base image.
FROM gcr.io/distroless/base-debian10
COPY --from=build /go/bin/app /
CMD ["/app"]

Secrets

apiVersion: v1
kind: Pod
metadata:
  name: volume-test
spec:
  containers:
  - name: container-test
    image: busybox
    volumeMounts:
    - name: all-in-one
      mountPath: "/projected-volume"
      readOnly: true
  volumes:
  - name: all-in-one
    projected:
      sources:
      - secret:
          name: mysecret
          items:
          - key: username
            path: my-group/my-username

Kubernetes Secrets, whose number tends to grow over the life of the application, are used to pass sensitive information such as passwords or tokens to pods. You can store Secret objects in the Kubernetes API, mount them as files, or simply declare them as environment variables. There are also operators, such as Bitnami Sealed Secrets, which encrypt the contents of a secret so that the resulting resource can be pushed to a repository, even a public one.
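For comparison, here is roughly what the environment-variable approach looks like; the pod is illustrative and references the same mysecret as in the example above:

apiVersion: v1
kind: Pod
metadata:
  name: secret-env-test
spec:
  containers:
  - name: container-test
    image: busybox
    env:
    - name: SECRET_USERNAME     # exposed to the process as an environment variable
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: username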

Scan Docker containers

The good news is that Docker and Snyk have recently teamed up to provide better container vulnerability scanning. What does this mean for you? Snyk is now integrated with Docker Hub to scan official images, and Docker has built Snyk scanning directly into the Docker client, so you can scan containers in CI with a single command.

Of course, you can use other providers, such as Quay, but they require more integration and configuration. Also, services such as Docker Hub, AWS ECR, and Quay scan images only after you push the container to the registry, and while you are fixing the vulnerabilities they find, the image may already be running in several environments, including production.

There are also tools you can deploy yourself, such as docker-bench-security, which audit the hosts your cluster runs on. However, this may be redundant, since the Pod Security Policy, which we will discuss below, covers most of these security measures.
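For example, a CI scan with the Snyk-powered Docker CLI integration could be as simple as this (the image name is a placeholder):

# Scan a locally built image for known vulnerabilities
docker scan myapp:1.0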

Kubernetes Security

The first question we need to ask when working with Kubernetes is: how do I install system utilities and project applications into the cluster? Why not use CLI tools? Yes, that is possible, but it isn’t necessarily the right way. All workloads should be structured as packages and deployed with some flexibility, and for this we have Helm. Helm deploys the same YAML files, but with templating features such as variables and conditions. It also keeps a revision history, which lets you roll back to a previous version. In other words, Helm makes your life easier. On top of that, almost all services provide their own Helm charts that you can install in one step; it’s as easy as installing packages on Linux.

There are two approaches to automated and secure deployment of Helm charts: push-based and pull-based.

The push-based approach is what I like to call the classic approach. We all know it, because we use it every day. Say you need to build a CI/CD process: you choose a CI system, build and publish an artifact, and then run the deployment directly from CI. This is the easiest way and has significant benefits, such as immediate feedback. It also has pitfalls. First, you must grant the CI system access to the cluster, usually with admin rights. Second, the state of the application may have changed since the last release, or someone may have changed the configuration, and your CI will break. With administrator access, CI can easily use, modify, or delete other resources. To avoid this, we can use a different approach.

The pull-based approach, also known as GitOps, is based on an operator running inside the cluster. It tracks changes in the repository and applies them automatically. Since the operator has access to the repository, we don’t need to give the CI system access to the cluster. The advantage of this approach is that you always have a Single Source of Truth (SSOT). What’s more, the operator notices any manual changes and reverts them to the state of the repository, so you never run into configuration drift. There are two popular pull-based tools: Flux and ArgoCD. Let’s talk about ArgoCD.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: bookinfo
  namespace: argocd
spec:
  destination:
    namespace: bookinfo
    server: https://kubernetes.default.svc
  project: default
  source:
    path: applications/bookinfo
    repoURL: git@github.com:sqerison/gitops-demo-kubernetes-workloads.git
    targetRevision: main
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

ArgoCD is the operator responsible for the pull-based approach and follows GitOps principles. Its main job is to manage resources and update them when changes arrive from the repository. ArgoCD works with two main resources: Application and AppProject.

An Application describes the application to be installed. It can be a Helm chart, plain YAML files, or Kustomize resources. We also indicate which project (AppProject) the application belongs to, plus a few other options.

An AppProject provides a logical grouping of applications, which is useful when ArgoCD is used by multiple teams. It can limit what may be deployed and which types of objects may or may not be created, such as RBAC, CRDs, DaemonSets, or Network Policies. When creating an application, we choose which project it will live in and what access it will receive, and the project will not let it go beyond what is permitted.

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: bookinfo
  namespace: argocd
spec:
  destinations:
  - namespace: '*'
    server: '*'
  clusterResourceWhitelist:
  - group: '*'
    kind: '*'
  sourceRepos:
  - git@github.com:sqerison/gitops-demo-kubernetes-workloads.git
  - git@github.com:sqerison/gitops-demo-bookinfo-app.git

Pod Security Policy

At the beginning of the article we mentioned non-root containers, read-only filesystems, and other Docker practices to avoid, such as passing the Docker socket into a container or using the host network (--net=host). With PSP, we can enforce these requirements and prevent pods from being created if they are not met.

psp-non-privileged.yaml

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example
spec:
  privileged: false  # Don't allow privileged pods!
  # The rest fills in some required fields.
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: MustRunAsNonRoot
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'

Here is a common example of what a PSP can be used for: it prohibits running containers in privileged mode (privileged: false) and forbids running as the root user (MustRunAsNonRoot). Important: for the PSP rules to take effect, they must be authorized via RBAC. Because of this complexity, engineers often don’t use this resource: along with the policies, you need to maintain additional configuration and figure out how and where to apply it. That’s why PSP functionality will soon be deprecated. But PSP is not the only thing we can use to enforce security requirements. We also have the Open Policy Agent.
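Before moving on, here is a rough sketch of the RBAC authorization mentioned above; it grants all authenticated users the right to use the example policy (the ClusterRole and binding names are illustrative):

psp-rbac.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-example
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["example"]   # the PSP defined above
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp-example
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-example
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: system:authenticated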

Open Policy Agent

The Open Policy Agent is, in essence, Gatekeeper: an operator that evaluates requests to the admission controller to determine whether they match its rules. With this tool we can extend control over the resources being created. We can enforce things like labels, requests, and limits, which is important when you want to scale your application. We can also restrict the list of Docker registries, allowing only an enterprise registry or AWS ECR. With Gatekeeper you can restrict any option or argument across all Kubernetes resources. But, like any powerful tool, it has a fairly complex policy syntax: essentially, the Rego language. Let me show you an example.

opa-k8spspprivilegedcontainer-ct.yaml

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8spspprivilegedcontainer
spec:
  crd:
    spec:
      names:
        kind: K8sPSPPrivilegedContainer
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8spspprivileged

        violation[{"msg": msg, "details": {}}] {
          c := input_containers[_]
          c.securityContext.privileged
          msg := sprintf("Privileged container is not allowed: %v, securityContext: %v", [c.name, c.securityContext])
        }

        input_containers[c] {
          c := input.review.object.spec.containers[_]
        }

        input_containers[c] {
          c := input.review.object.spec.initContainers[_]
        }

This ConstraintTemplate forbids running containers in privileged mode. I don’t know how to write Rego; this is just an example I found in the template library Gatekeeper provides on GitHub. A template like this is only a blueprint: the arguments are supplied by a constraint that references it. The next example shows such a constraint, built from a similar template (K8sAllowedRepos) that restricts Docker registries.

opa-k8sallowedrepos-inuse.yaml

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRepos
metadata:
  name: repo-is-openpolicyagent
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    namespaces:
      - "default"
  parameters:
    repos:
      - "openpolicyagent/"
      - "quay.io/"
      - ".dkr.ecr..amazonaws.com/"

Once you create a template, Gatekeeper creates a CRD based on it. You reference that CRD to describe exactly which repositories you want to allow and which namespaces the rule applies to. Here we apply the policy to Pods in the default namespace, but you can list other namespaces as well. In the end, we get an allowlist of registries we trust. The hardest part is writing the templates, and most of those can simply be found online; the rest is not difficult.

Network Policies

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: default
  name: deny-from-other-namespaces
spec:
  podSelector:
    matchLabels:
  ingress:
  - from:
    - podSelector: {}

Network Policies are much easier to use than Gatekeeper. As you know, namespaces in Kubernetes are not isolated from each other, and any pod can communicate with any other pod. This is not great if you have sensitive data, or monitoring services that should only be reachable on the metrics port. And if you run a multi-tenant cluster with applications from different clients, you must be sure they cannot interact with each other.

The policy above denies traffic coming from other namespaces; the next example shows how to allow traffic based on namespace labels. And yes, if you want to experiment with networking, you can run a pod in a test namespace labeled prod and its traffic will be allowed. But this is not critical; as a last resort, you can always turn to Gatekeeper and enforce which labels must be present on a namespace.

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-allow-prod
spec:
  podSelector:
    matchLabels:
      app: web
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          purpose: production

For Network Policies to work, you need a network plugin (CNI) that supports them; Calico is a good candidate. AWS EKS has its own network plugin, and in a pinch you can use Security Groups and manage the rules from the AWS console.

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: api-allow-5000
spec:
  podSelector:
    matchLabels:
      app: apiserver
  ingress:
  - ports:
    - port: 5000
    from:
    - podSelector:
        matchLabels:
          role: monitoring

The last example restricts ports: to be precise, it allows traffic to the metrics port only from the monitoring system. One more note on the AWS CNI: if you use its Custom Networking feature, Network Policies lose their effect, so you must choose which policy system suits you best. There is also the Istio service mesh, which has its own policies; it works at the seventh OSI layer and allows much more flexible traffic management. But Istio is a fairly broad topic, so we won’t talk about it today.

Secrets

Everyone wonders how to deliver sensitive information to the cluster securely, without worrying about leaks or disclosure. A good candidate for managing your secrets is Bitnami Sealed Secrets: you encrypt a secret with the controller’s encryption key, and the result is a ready-made resource that can be deployed to the cluster. Another alternative is git-crypt, which encrypts files with GPG keys so they can only be decrypted by a key that was previously added. It is not the best option for Kubernetes secrets, but it works well for other sensitive data such as private keys or a kubeconfig.
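The workflow, assuming the Sealed Secrets controller is installed in the cluster and the kubeseal CLI is available locally, looks roughly like this:

# Encrypt an ordinary Secret manifest into a SealedSecret
kubeseal --format yaml < mysecret.yaml > mysealedsecret.yaml
# The encrypted result is safe to commit and deploy
kubectl apply -f mysealedsecret.yaml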

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  annotations:
    sealedsecrets.bitnami.com/cluster-wide: "true"
  creationTimestamp: null
  name: demo-secret
spec:
  encryptedData:
    api-key: AgCK+iL6mZX6woqKiYQPWeELNt4/JrpaiwLR75d24OnshhsNveGB7CqGF1dr+rAxal4gr+d4No4Q+uAQgUizgLnY2IdWvAKVh/3miCgPW8SO8p8BOxpD8U1qBgJBb74M8rPbvxh47L0y2iSymSa4wdf4zcyju5CvoWnnB0Qbsx6lNzGMDnt6rjjzje3Su7ktrj4qnmX6BhRnukGmw+bErT31DzVWDrgrlcd2eQFuAflysckJv7wdIZXKSZwAWHAJzipUcNbG+O+UHJwia7RXwef9F2Ruebnl2jXH5/7iCV+83NLivdl0aW2TzLGOLR1NMG63NtN3T95Qfisame2QkYCBmYRCCCn3iwwxzDXDymAFE9/RqnnIPzhA/K0YayPZnLInoO3pTVxF1DL+RnmWRojUOwoO5ZkY++Behzq7nn9nRrEC+u/aDk2CXwJe9WbHwVgznKM7N6v4IUlcQz93VhRUbDetnWhA3TnD+HDsc85z0hvFp8c2U4giqRL4CnXHQIfBG63hLHoAogWOH8I+paVId180DWFpwjsAsKXVbESUa2ORL7LmuiDg1qKLoVFxiEEVJmnYPv5F8P1XMvJPW6L6QRQnJqj/ntyRSyEKnNh3umRTBoJzfXNDhsDXMPMu0leuYN1D+arx6IHBCKPexevE53iE7JK05bj/Oq8ujCOJRyv6TqjX4gQM3+kgXmi8rnCYB1CJg6lvhH1+pw==
  template:
    data: null
    metadata:
      annotations:
        sealedsecrets.bitnami.com/cluster-wide: "true"
      creationTimestamp: null
      name: demo-secret

You can see an example of a Sealed Secret above. As you can see, the value is encrypted, and once you deploy this file to the cluster, the Sealed Secrets controller decrypts it into a plain Kubernetes Secret. So our SealedSecret ends up as a regular Secret. And do you remember ArgoCD? As mentioned earlier, you can safely give developers and other engineers access to it, but remember to make sure they only have read-only access.

Kubernetes Hardening

Now it’s time to talk about the cluster components themselves. They also have their own vulnerabilities and attackers will gladly take advantage of them in case of misconfiguration.

API Server

The API Server is the core of Kubernetes. On some vanilla and older clusters, the API server listens not only over HTTPS but also on an insecure HTTP port that performs no authentication or authorization checks. Make sure the insecure port is disabled. You can also try sending a curl request to port 8080 to see whether you get a response.
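A quick check from a node might look like this; an insecure port will answer without any authentication (the address is a placeholder):

curl http://<api-server-address>:8080/api/v1/namespaces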

Etcd

Etcd is the database that stores cluster state and cluster secrets. It is a critical component: anyone who can write to etcd effectively controls your Kubernetes cluster, and even read access easily provides useful clues to a potential attacker. The etcd server must be configured to trust only certificates issued to the API servers, so that it is reachable only by verified components in the cluster.
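With kubeadm-style certificate paths, the relevant etcd flags look roughly like this (the paths vary between setups):

etcd --client-cert-auth=true \
  --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt \
  --cert-file=/etc/kubernetes/pki/etcd/server.crt \
  --key-file=/etc/kubernetes/pki/etcd/server.key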

Kubelet

The kubelet is the agent that receives instructions about what to run and where to run it, and it is mainly responsible for running your application pods on each node. Check that anonymous access is disabled and the correct authorization mode is set, and you will be safe.
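As a sketch, a hardened kubelet is typically started with flags along these lines (the CA path shown is a common kubeadm default):

kubelet --anonymous-auth=false \
  --authorization-mode=Webhook \
  --client-ca-file=/etc/kubernetes/pki/ca.crt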

Kubernetes dashboard

It is better to disable this component entirely, since we have other tools that help us understand the state of the cluster, such as ArgoCD. With ArgoCD you will only see the resources created by Argo, but that is fine: problems usually arise in application and project resources, while the cluster itself stays fairly stable.

Other auxiliary tools

That’s not all: here are a few more tools that will help you find vulnerabilities.

Kubescape

To use this tool, you only need two commands: one downloads the script, the other runs it. The result is a list of vulnerabilities and misconfigurations, with a score for each and a total score at the end. Remember the best practices for hardening Kubernetes? This tool checks them for you. Kubescape also uses regularly updated databases, so it detects newly disclosed vulnerabilities as well.
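The two commands look roughly like this (check the Kubescape README for the current install URL; the NSA framework is one of the built-in rule sets):

curl -s https://raw.githubusercontent.com/kubescape/kubescape/master/install.sh | /bin/bash
kubescape scan framework nsa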

Kube-bench

It is almost the same as Kubescape, except that it runs inside the cluster and can be deployed as a CronJob for regular scans.
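A simplified sketch of such a CronJob, adapted from the job manifest in the kube-bench repository (the real manifest mounts a few more host paths):

kube-bench-cronjob.yaml

apiVersion: batch/v1
kind: CronJob
metadata:
  name: kube-bench
spec:
  schedule: "0 3 * * *"          # run nightly at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          hostPID: true          # kube-bench inspects processes on the host
          restartPolicy: Never
          containers:
          - name: kube-bench
            image: aquasec/kube-bench:latest
            command: ["kube-bench"]
            volumeMounts:
            - name: etc-kubernetes
              mountPath: /etc/kubernetes
              readOnly: true
          volumes:
          - name: etc-kubernetes
            hostPath:
              path: /etc/kubernetes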

Kubesec

A simple tool for scanning Kubernetes pods and other resources for risky settings.

Kubeaudit

Kubeaudit has similar functionality, but it is geared toward auditing and produces a helpful list of findings with examples of how to fix them.