Examples¶
Running a Job¶
The simplest workload you can set up in RAIL is a Job. It will schedule a Pod that runs its container to completion and then stops.
In this example, we will create a job that runs a Python program that computes plenty of digits of 𝜋.
You will need access to a container image that does the work and is available from one of the registry locations that RAIL is allowed to pull images from. We have prepared one in the public GitLab project at <https://git.app.uib.no/gisle/k8s-pi-job>. In the repo you can inspect the Python code that performs the computation and the Dockerfile required to build the image.
This manifest defines a Job called pi for RAIL:
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  ttlSecondsAfterFinished: 600
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: pi
          image: git.app.uib.no:4567/gisle/k8s-pi-job/pi-job:f32c04f6
          command: ["python", "pi.py", "10000"]
          # RAIL requires us to specify how resource hungry each container is
          resources:
            requests:
              cpu: 200m
              memory: 5Mi
            limits:
              cpu: 200m
              memory: 20Mi
          # This states the defaults for the securityContext and will get rid of
          # the warning that you should set these values. These values can not be
          # set at the Pod-level, so they need to be specified here.
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            runAsNonRoot: true
            seccompProfile:
              type: RuntimeDefault
On the login hosts of the cluster where you want to run this job, save this text in a file called job-pi.yaml. Then create the job in the cluster by running:
kubectl apply -f job-pi.yaml
Then inspect the state of the job with:
kubectl describe job/pi
When the job finishes you can read the output generated with:
kubectl logs job/pi
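The Job creates a Pod to do the actual work, and Kubernetes labels that pod with job-name=pi. If you want to watch the computation as it happens, you can list the pod and stream its log; a small sketch:

kubectl get pods -l job-name=pi
kubectl logs -f job/pi

The -f flag follows the log output as it is produced instead of printing a snapshot.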
The job and the pod are automatically deleted after 10 minutes (as specified by the ttlSecondsAfterFinished setting). If you want to clean up before then, run:
kubectl delete -f job-pi.yaml
If you want to learn about the meaning of an option in a manifest, you can look it up with kubectl as well. For example, to learn about ttlSecondsAfterFinished run:
kubectl explain job.spec.ttlSecondsAfterFinished
The dotted path given as the argument starts with the type of object (job in this case); then you nest the names of the fields until you reach the one you are interested in.
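You can nest the path as deep as the schema goes. For example, to read the documentation for the memory limit field used in the manifest above (one possible drill-down; any valid field path works):

kubectl explain job.spec.template.spec.containers.resources.limits

kubectl explain also accepts a --recursive flag that prints the whole field tree below a given path, which is handy when you don't know the exact field name yet.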
Running a Service¶
A service is an abstraction in Kubernetes for an interface that clients can access to be connected to an application. The most common interface is the HTTP protocol that web servers provide.
In this example we will set up a service that talks to an instance of the nginx application as an example of a web server. This is the Service resource specification required for that:
apiVersion: v1
kind: Service
metadata:
  name: xyzzy
spec:
  selector:
    app.kubernetes.io/name: nginx
    app.kubernetes.io/instance: xyzzy
  ports:
    - name: http
      port: 80
      targetPort: http
      protocol: TCP
Save this to a file and pass it to kubectl apply. This makes the service available inside the current namespace in the cluster, which means that client code running in containers in pods in the namespace can simply fetch the front page content from http://xyzzy. The address is now available, but it is not functional yet, as we have not provided any backends that actually implement the server side component.
Let’s first explain what this service specification expresses.
- metadata.name is the handle used to reference this service, as well as the "hostname" that clients connect to.
- spec.ports[].port is the port that clients use. Here it is just the standard HTTP port, 80.
- spec.selector is a set of labels that together determine which pods will be used as backends to implement this service. Each web request will be routed to one of the selected pods. The label names here are from the well-known labels, where app.kubernetes.io/name is the application name and app.kubernetes.io/instance is the name given to this instance.
- spec.ports[].targetPort is the name of the port on the selected Pod that the request will be forwarded to. Using a name instead of a number here allows the pod itself to declare where it wants to be contacted. This could be port 80, but some backend servers choose to use a different port.
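Before any backends exist you can still check that the service name resolves inside the namespace by starting a throwaway client pod. This is only a sketch: it assumes RAIL can pull the public curlimages/curl image and that no NetworkPolicy blocks the traffic. Until the backends below are deployed the request itself will fail, but a "connection refused" already shows that the name xyzzy resolves to the service:

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl -si http://xyzzy

The --rm flag removes the pod again once the command exits.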
Ingress: Exposing a service to the outside world¶
Most web servers also want to be available to the world outside of the cluster. In the scary world outside, it's also a requirement to use secure HTTP, aka https or HTTP over TLS. In RAIL we can set this up with an Ingress resource that looks like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: xyzzy
  annotations:
    cert-manager.io/cluster-issuer: harica-temp
    cert-manager.io/private-key-algorithm: ECDSA
    cert-manager.io/usages: "digital signature"
    cert-manager.io/private-key-rotation-policy: Always
    #nginx.ingress.kubernetes.io/whitelist-source-range: 129.177.0.0/16,2001:700:200::/48
spec:
  ingressClassName: nginx
  rules:
    - host: xyzzy.osl1.prod.rail.uib.no
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: xyzzy
                port:
                  name: http
  tls:
    - hosts:
        - xyzzy.osl1.prod.rail.uib.no
      secretName: xyzzy.osl1.prod.rail.uib.no-tls
This will register the requested DNS domain name and create a valid TLS certificate for it. It will also forward all traffic received for that domain name to the service xyzzy set up above. The example here assumes that we run from the osl1-prod RAIL cluster; the domain names for other RAIL clusters will have to be adjusted accordingly.
Attention
The cluster-issuer sectigo-clusterwide is defunct. Certificates already issued by Sectigo remain valid in all clusters until January 2026, but you can not deploy new TLS certificates from Sectigo.
Important
At this point in time there is only one available cluster-issuer, harica-temp. This issuer only provides TLS certificates for cluster-specific URLs. Thus, you can not issue certificates for domains like .uib.no; you need to include the cluster name, for example myapp.bgo1.prod.rail.uib.no, or pre-register your application in DNS to point to a specific cluster, since harica-temp requires an HTTP challenge for every name in the certificate request.
The nginx.ingress.kubernetes.io/whitelist-source-range annotation is commented out in the example above. You can’t enable it until after the certificate has been issued. Harica must be able to reach the server to verify the certificate.
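Once the certificate has been issued, you can enable the restriction without editing the manifest, for example with kubectl annotate (the source ranges here are just the ones from the commented-out line above):

kubectl annotate ingress xyzzy \
  "nginx.ingress.kubernetes.io/whitelist-source-range=129.177.0.0/16,2001:700:200::/48"

Alternatively, uncomment the annotation in the manifest and run kubectl apply again.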
Save this specification to a file and pass it to kubectl apply. Then wait a minute for the certificate and hostname to be set up. You can inspect the output of kubectl get ingress xyzzy to see when the address has been allocated.
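If the address or certificate does not show up, cert-manager records its progress in a Certificate resource that it creates from the Ingress annotations. A sketch for inspecting it, assuming cert-manager's default of naming the Certificate after the secretName:

kubectl get certificate
kubectl describe certificate xyzzy.osl1.prod.rail.uib.no-tls

The Events section of the describe output usually explains what the issuer is waiting for.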
At this point you can test out the service from a host on the Internet by running:
curl -i http://xyzzy.osl1.prod.rail.uib.no
You will see that this just redirects the client to https://, so let’s try that instead:
curl -i https://xyzzy.osl1.prod.rail.uib.no
and this should then output something like this:
HTTP/2 503
date: Sun, 21 Apr 2024 22:12:50 GMT
content-type: text/html
content-length: 190
strict-transport-security: max-age=15724800; includeSubDomains
<html>
<head><title>503 Service Temporarily Unavailable</title></head>
...
which is as expected, since we still have not provided any backends that actually implement this service.
Deployment: Pods that run the backends¶
Finally, here is the specification of the Deployment that will create the Pods that run the backends. We also need to set up a matching NetworkPolicy so that the Pods can receive incoming traffic on port 8080, which is the port that the container we run here listens on.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-xyzzy
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: nginx
      app.kubernetes.io/instance: xyzzy
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nginx
        app.kubernetes.io/instance: xyzzy
    spec:
      containers:
        - name: nginx
          image: nginxinc/nginx-unprivileged:1.25
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          # RAIL requires us to specify how resource hungry each container is
          resources:
            requests:
              cpu: 100m
              memory: 20Mi
            limits:
              cpu: 500m
              memory: 100Mi
          # This states the defaults for the securityContext and will get rid of
          # the warning that you should set these values. These values can not be
          # set at the Pod-level, so they need to be specified here.
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            runAsNonRoot: true
            seccompProfile:
              type: RuntimeDefault
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nginx-xyzzy
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: nginx
      app.kubernetes.io/instance: xyzzy
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - port: http
      from:
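Note that the trailing from: in the ingress rule is left empty: an empty or missing from matches all sources, so only the port is restricted. Save both resources to a file (the name is arbitrary; deploy-xyzzy.yaml is used here as an example) and pass it to kubectl apply as before. Once the pods report Running, the curl test from earlier should return the nginx welcome page instead of the 503:

kubectl apply -f deploy-xyzzy.yaml
kubectl get pods -l app.kubernetes.io/instance=xyzzy
curl -i https://xyzzy.osl1.prod.rail.uib.no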