Testing Network Policies with Minikube
Recently, I had to update a Network Policy for one of our services. Before deploying the change, I wanted to test it with Minikube to ensure I wouldn't break anything.
This post documents the steps I went through to perform that test.
Definitions
Before we get started, let's just define a few terms.
What is a Network Policy?
Network Policies are an application-centric construct which allows you to specify how a pod is allowed to communicate with various network "entities" over the network. They apply to connections with a pod on one or both ends, and are not relevant to other connections.
What is the difference between Ingress and Egress?
- Egress refers to traffic that is outgoing from the pod/network/entity.
- Ingress refers to traffic that is incoming to the pod/network/entity.
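To make the two directions concrete, here is a minimal, purely illustrative NetworkPolicy that contains both an ingress and an egress rule. The name, labels, and ports below are placeholders of my own and are not used anywhere else in this post.

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: example-ingress-and-egress   # illustrative only
spec:
  podSelector:
    matchLabels:
      app: example                   # the pods this policy applies to
  policyTypes:
    - Ingress
    - Egress
  ingress:                           # traffic coming IN to the selected pods
    - from:
        - podSelector: {}            # any pod in the same namespace
      ports:
        - port: 80
          protocol: TCP
  egress:                            # traffic going OUT from the selected pods
    - to:
        - namespaceSelector: {}      # any namespace in the cluster
      ports:
        - port: 53                   # e.g. DNS
          protocol: UDP
```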
What are we testing?
The service I wanted to update had a NetworkPolicy that denies connections from anything outside of its namespace. The update was to allow connections from one other namespace, on a specific port, so that Prometheus could scrape the service for metrics.
Setting up minikube
- Start minikube with the cni network plugin and the Calico CNI
minikube start --network-plugin=cni --cni=calico
- Verify Calico installation
kubectl get pods -n kube-system -l k8s-app=calico-node
NAME                READY   STATUS    RESTARTS   AGE
calico-node-9ln25   1/1     Running   0          54s
Test that Network Policies work
- Run an nginx Pod with the label app=web and expose it on port 80
kubectl run web --image=nginx --labels="app=web" --expose --port=80
- Wait for the pod to be up and running
kubectl get pods web
NAME   READY   STATUS    RESTARTS   AGE
web    1/1     Running   0          2m19s
- Run a temporary Pod and make a request to the web Service
kubectl run --rm -i -t --image=alpine test-$RANDOM -- sh
/ # wget -qO- http://web
- Save the following manifest to /tmp/web-deny-all.yaml, then apply it to the cluster
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-deny-all
spec:
  podSelector:
    matchLabels:
      app: web
  ingress: []
kubectl apply -f /tmp/web-deny-all.yaml
networkpolicy.networking.k8s.io/web-deny-all created
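As an optional sanity check (not strictly needed for this walkthrough), you can describe the policy to confirm it selects the web pod and allows no ingress sources:

```shell
# Inspect the policy we just applied
kubectl describe networkpolicy web-deny-all
# The output should show a pod selector of app=web and no allowed ingress sources
```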
- Run a test container again, and try to query web
kubectl run --rm -i -t --image=alpine test-$RANDOM -- sh
/ # wget -qO- --timeout=2 http://web
If everything works as it should, you should get a wget: download timed out error.
- Clean up
kubectl delete pod web
kubectl delete service web
kubectl delete networkpolicy web-deny-all
Setting up the test
As mentioned earlier, the change I was testing takes a NetworkPolicy that denies connections from anything outside of its namespace and updates it to also allow connections from one other namespace, on a specific port, without breaking connections within the namespace.
Deny all traffic from other namespaces
- Start a web service in the default namespace
kubectl run web --namespace=default --image=nginx --labels="app=web" --expose --port=80
- Save the following manifest to /tmp/deny-from-other-namespaces.yaml and apply it to the cluster
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: default
  name: deny-from-other-namespaces
spec:
  podSelector:
    matchLabels:
  ingress:
  - from:
    - podSelector: {}
kubectl apply -f /tmp/deny-from-other-namespaces.yaml
networkpolicy.networking.k8s.io/deny-from-other-namespaces created
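Optionally, list the policies in the default namespace to confirm it was created:

```shell
kubectl get networkpolicy -n default
# deny-from-other-namespaces should be listed; its empty pod selector
# means it applies to every pod in the namespace
```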
- Create a foo namespace
kubectl create namespace foo
- Query the web service from the foo namespace
kubectl run test-$RANDOM --namespace=foo --rm -i -t --image=alpine -- sh
/ # wget -qO- --timeout=2 http://web.default
If everything works as it should, you should get a wget: download timed out error.
- Query the web service from the default namespace
kubectl run test-$RANDOM --namespace=default --rm -i -t --image=alpine -- sh
/ # wget -qO- --timeout=2 http://web.default
If everything works as it should, you should get an HTML response instead of a timeout.
- Cleanup
kubectl delete networkpolicy deny-from-other-namespaces
Allowing access from the bar namespace on a specific port
Now that we have our local environment set up to replicate what is currently deployed, we can test our new changes.
- Create a bar namespace
kubectl create namespace bar
- Add a label to the bar namespace
kubectl label namespace/bar purpose=allow
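To confirm the label is in place (an optional check), list the namespace with its labels:

```shell
kubectl get namespace bar --show-labels
# The LABELS column should include purpose=allow
```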
- Run an apiserver Pod (the ahmet/app-on-two-ports image serves HTTP on port 8000 and metrics on port 5000)
kubectl run apiserver --image=ahmet/app-on-two-ports --labels="app=apiserver"
- Expose the Pod as a Service, mapping service port 8001 to container port 8000, and 5001 to 5000
kubectl create service clusterip apiserver \
  --tcp 8001:8000 \
  --tcp 5001:5000
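If you want to double-check the port mappings before moving on (again optional), inspect the Service:

```shell
kubectl get service apiserver
# The PORT(S) column should show 8001/TCP,5001/TCP
```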
- Save the following manifest to /tmp/allow-from-bar.yaml and apply it to the cluster
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: default
  name: allow-from-bar
spec:
  podSelector:
    matchLabels:
      app: apiserver
  ingress:
  - from:
    - podSelector: {}
  - from:
    - namespaceSelector:
        matchLabels:
          purpose: 'allow'
    ports:
    - port: 5000
      protocol: TCP
kubectl apply -f /tmp/allow-from-bar.yaml
networkpolicy.networking.k8s.io/allow-from-bar created
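Before testing, you can optionally describe the new policy to confirm both ingress rules are present:

```shell
kubectl describe networkpolicy allow-from-bar -n default
# Expect a pod selector of app=apiserver and two ingress rules:
# one allowing all pods in the same namespace on any port, and one
# allowing namespaces labelled purpose=allow on TCP port 5000
```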
- Query the apiserver service from the bar namespace (access should be granted to port 5000, but not 8000)
kubectl run test-$RANDOM --namespace=bar --rm -i -t --image=alpine -- sh
/ # wget -qO- --timeout=2 http://apiserver.default:8001
/ # wget -qO- --timeout=2 http://apiserver.default:5001/metrics
If everything works as it should, the first request to http://apiserver.default:8001 should return a timeout error, whereas the second request to http://apiserver.default:5001/metrics should return a list of metrics.
- Query the apiserver service from the foo namespace (access should not be granted to either port)
kubectl run test-$RANDOM --namespace=foo --rm -i -t --image=alpine -- sh
/ # wget -qO- --timeout=2 http://apiserver.default:8001
/ # wget -qO- --timeout=2 http://apiserver.default:5001/metrics
If everything works as it should, both requests should return a timeout error.
- Query the apiserver service from the default namespace (access should be granted to both port 5000 and 8000)
kubectl run test-$RANDOM --namespace=default --rm -i -t --image=alpine -- sh
/ # wget -qO- --timeout=2 http://apiserver.default:8001
/ # wget -qO- --timeout=2 http://apiserver.default:5001/metrics
If everything works as it should, the first request to http://apiserver.default:8001 should return Hello from HTTP server, and the second request to http://apiserver.default:5001/metrics should return a list of metrics.
This proves that the update to the NetworkPolicy works as expected.