This was the third time I went to devopsdays Amsterdam. And I love this conference!
Some of the reasons:
I had heard about Go, some of my co-workers have some experience with it, but I never wrote anything in the language. I was curious about it though.
The workshop from Michael Hausenblas was a nice intro. Based on what he told and showed us I cannot say that I expect that Go will replace Bash and Python for me. However, I will make some time to actually write some code myself to get a better feel for it.
We are already using the Elastic Stack in some places at work, but I have not used it for monitoring purposes. (I gravitate towards Prometheus combined with Alertmanager for alerting and Grafana for dashboards with graphs.) However, Philipp Krenn showed us that you can also do very interesting things with Kibana in the monitoring and debugging realm. Especially since you can correlate metrics with logs in the same tool.
I could say that Bridget Kromhout’s Kubernetes workshop was a nice refresher of what I had learned in the Kubernetes workshop last year but, to be honest, that would be a lie. I am glad I took this workshop.
It was a good workshop with lots of hands-on tasks. But it went a bit too fast to make it stick. I would have to spend more time on a Kubernetes cluster to really understand everything and get fluent with it. Luckily there is lots of information on container.training (including the slides of this workshop) and there are plenty of cloud providers where you can get a Kubernetes cluster without having to create or maintain it yourself.
The talk that resonated most with me this year was the one from Waldo Grunenwald about product teams. Perhaps because (in my opinion) this is something that could be better in my job. Product management, development and operations are three different teams with different managers. Then again, I currently try to be the “ops guy” in our development team so that’s also DevOps, right? :)
The other most memorable talks for me were:
I have been using Emacs for quite a while. I was a Vim user in the past, but switched somewhere between 2007 and 2009. (The first time I wrote about Emacs here was in 2009.)
I have tried PyCharm a couple of times and it is a really nice editor with very useful features. It just never stuck with me and I always went back to Emacs after a while.
During the conference I used Visual Studio Code to write my notes. And I have to say I quite liked it. I intend to also give it a go at work. Who knows, I might even switch…
Michael walked us through the features of the Go language by giving numerous examples. This is a workshop that usually takes a full day so we were in for a nice ride.
One thing he mentioned that he liked about the language is that there is (almost) no magic involved.
Some things that stood out to me (Mark, someone who writes Python most of the time and does not know much about Go):
gofmt.

Common pattern to handle errors:

mail, err := mailof(uid, aproject)
if err != nil {
    ...
    os.Exit(1)
}
Slide 27: if you feed the printf function a different type, e.g. a string, it will not even compile. (Mark: this is something I’m not used to, coming from Python.)
To expose things like functions (make them available to other packages): start the name with an uppercase letter. Functions starting with a lowercase letter are internal/private to the package. If you try to access an internal function, you get a nice error message (again: at compile time).
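A single-file sketch of the naming convention (the function names are my own; the compile-time error only shows up when another package tries to call the lowercase function):

```go
package main

import "fmt"

// Greet is exported: the uppercase first letter makes it visible
// to any package that imports this one.
func Greet(name string) string {
	return "Hello, " + decorate(name)
}

// decorate is unexported: the lowercase first letter keeps it private
// to this package; importers would get a compile-time error.
func decorate(name string) string {
	return name + "!"
}

func main() {
	fmt.Println(Greet("gopher")) // Hello, gopher!
}
```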
Slide 31: “log.Fatalf()” triggers the os.Exit(1) you can see when you run this example.
You can defer a function call to the end of the enclosing function (e.g. “defer f.Close()” in slide 33). Since the Go runtime always executes deferred calls (even if there was an error), you can use this e.g. to clean up an open file. You can have as many defers as you like; they will be executed in reverse order.
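A small sketch of that reverse (last-in, first-out) execution order, recording the calls in a slice so it is easy to see:

```go
package main

import "fmt"

// order returns the sequence in which the calls actually ran.
func order() []string {
	var calls []string
	record := func(s string) { calls = append(calls, s) }
	run := func() {
		defer record("deferred first")  // registered first, runs last
		defer record("deferred second") // registered last, runs first
		record("body")
	}
	run()
	return calls
}

func main() {
	fmt.Println(order()) // [body deferred second deferred first]
}
```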
Starting with writing tests is quite simple: create a file named <module name>_test.go. The name of a test function is irrelevant as long as it starts with “Test”. Run the tests with “go test” (plus options, if you like). Go offers test coverage information. Tip: use a nice editor/IDE and integrate running the tests and code coverage there.
As you can see on slide 39 it is possible to add encodings to your struct to be able to, for instance, encode and decode JSON. There are other encodings, see e.g. https://pkg.go.dev/encoding#section-directories
Google, where Go was created, uses a monorepo. As a result they did not need dependency management in Go. Use e.g. dep to help you out here. It looks like vgo will be part of the language in the future.1
You can either trust upstream (and GitHub to be available) and not put your dependencies in your repo, or choose not to and version control the code you depend on yourself.
About running a Go application in a container: you can either pick an image with debug tools (like centos:7), or pick a minimal image like alpine or scratch as the basis of your image. You have to decide whether you want the smallest image possible or want (some) tools included.
For Michael, Go replaced a lot of Bash and Python. However, Michael is not convinced that Go is a good fit to write a complete web application in, for instance. But decide for yourself. On slide 56 there are a couple of links to some pages with criticism.
As already stated, Go has an extensive standard library. Michael advises to use it. If it does not have or do what you want, your second best option is to use a drop-in replacement. Only if that is not possible, search for a package with a different API.
Useful resources:
Distributed services make debugging … interesting.
The code for this workshop, a highly monitored “hello world” app can be found on Github.
The server provided for the workshop is an Amazon Lightsail instance created with Terraform and provisioned with Ansible. (The code for this deployment is also included in the aforementioned repo.)
Notable changes in Kibana 6.3:
Packetbeat is using libpcap, just like Wireshark. Philipp thinks the future of Packetbeat is in tracking down DNS + TLS errors since you should encrypt the data between your services (which means that Packetbeat can no longer extract much information from the packets).
Previously you used Logstash to get the Nginx access logs into Elasticsearch. Filebeat modules can help you there. Filebeat is just forwarding the data; the parsing is done by Elasticsearch. Filebeat has processors to enrich events with e.g. cloud and host metadata (quite cheaply actually since this information is collected on startup of Filebeat and cached).
Auditbeat uses the same type of configuration as auditd.
Journalbeat (from a third party) can be used for journald support. Philipp doesn’t guarantee anything, but this is on the list of the Elastic team and he hopes there will be official support for journald.
You can have a rule to collect multiline messages, like stack traces, together in one document by telling Filebeat that if a line starts with e.g. a timestamp, it is the start of a new message, and if it starts with e.g. a space, it is part of a stack trace. You could also use structured logs (which is recommended if you can).
As of version 6 you can tell beats to enable (and update) the related dashboards in Kibana.
For alerting with the Elastic stack you need a commercial license.
The machine learning (also only available in the commercial X-Pack license) takes three iterations to detect a pattern. For example the pattern of how much traffic your application receives on a workday can be learned in three days. For a weekday/weekend pattern, it would need three weeks.
Kibana also has support for APM (Application Performance Monitoring). There are agents for e.g. Python and Node and a bunch of others (some in beta or alpha stage, see the docs).
Elastic is working on Index Lifecycle Management (ILM) which will run as part of the cluster. Philipp is not sure when it will be available though. For now use Curator.
Elasticsearch already supports metrics aggregation (called “rollups”) via the API. In a future version there will also be a graphical interface to configure this.
Philipp compared his workshop to Lego. He showed us some configuration, visualizations, etcetera but “some assembly is required.”
This was a fast-paced, highly interactive workshop about Kubernetes, so I only took a few notes. However, the slides have so much information on them that you can follow the workshop perfectly fine without comments from me.
Resources:
Warning: we have done stuff you should not do in production. :)
Kubernetes is highly unopinionated.
By default Kubernetes uses one big, flat network. However, you can configure Kubernetes so that customers cannot access each other.
In real life you would not host your own Docker registry in the production environment. We do it in the workshop because it is easier than messing with credentials to other registries.
Kubernetes has extensive role based access control support.
Update 2021-10-08: According to the dep repository: “Dep was an official experiment to implement a package manager for Go. As of 2020, Dep is deprecated and archived in favor of Go modules, which have had official support since Go 1.11. For more details, see https://golang.org/ref/mod.”
While I was on my way to Amsterdam, I was reading up on my RSS feeds and ran across the most recent comic on turnoff.us. It was so appropriate that I decided to copy it here:
Arnab wanted to be able to easily create lab environments for trainings. This workshop not only discusses how the lab is setup but also uses such a lab environment (in this case to provide an Ansible training environment).
The setup of the lab he used today: each participant got a control node and two managed nodes. Each node was in fact a Docker container, managed by Ansible.
The first part of the workshop was basically an introduction to Ansible with topics like the history of Ansible and basic command line usage. Arnab demonstrated how to use a custom inventory file, limiting plays to a group or certain tasks (or skipping tasks) and how to syntax check your playbook.
A few examples:
$ ansible all -i "localhost," -c local -m shell -a whoami
$ ansible -i demo.ini all -m shell -a whoami -v
$ ansible-playbook playbook.yml --syntax-check
Some best practices:
Use the .ini extension for your inventory file.
Use tags in your playbooks. (Run “ansible-playbook --list-tags playbook.yml” to show all available tags.)

In the category “today I learned”:

You can also run Ansible in pull mode (ansible-pull). Who knew? :-)
Module documentation is available on the command line via ansible-doc.
You can create loops with with_sequence (see the docs).
You can make a playbook executable by putting #!/usr/bin/env ansible-playbook at the top (and using chmod).
If you want to run your own lab, you can use Arnab’s GitHub repo: arnabsinha4u/ansible-traininglab. Note that this assumes a CentOS host.
In order to be able to log in to the “master” node (via ssh ansiblelabuser1@localhost) I had to enable PasswordAuthentication in /etc/ssh/sshd_config. But since I had run the Ansible playbook already, I was not allowed to change that file. I first had to run this command:
$ chattr -i /etc/ssh/sshd_config
Other GitHub repos from Arnab that you can use:
Kubernetes is a container orchestration platform. It has a huge open source backing and new features are being built quickly. It does one thing (in an elegant way).
Kubernetes has three main components:
When you look at it from a ‘physical’ perspective, you have a Kubernetes node and this node runs Docker, which in turn runs the containers. Pods are a logical wrapper around containers; we don’t care about nodes.
Pods are mortal. What this means is that processes are expected to die. But we do not care because Kubernetes ensures availability by making sure that there are enough of them running.
During the workshop we used the following GitHub repo: Seth-Karlo/intro-to-kubernetes-workshop.
The pod you can create with the pod/pod.yml file can be used as a toolbox to examine other pods.
More terminology: a replica set is basically a way of saying “make sure there are N copies of a pod.” If you look at the specification of a replica set, you can see that it contains a Pod spec.
Using the readinessProbe directive you can make sure that a container does not receive traffic until it is actually ready. Note that this is different from Docker’s health check, which is meant to determine if a container is still working or should be killed.
With the replica set example in aforementioned repo, Kubernetes will automatically start a pod again if it is killed. Even if you kill a pod yourself—Kubernetes doesn’t care why it has gone down.
If you edit a replica set (e.g. to update to a newer version of an image), it has no immediate impact because the pod spec is nested. Deployments, however, can enforce that such changes are actually rolled out.
To get the whole configuration of a pod, including the default and not just the stuff we specified, run:
$ kubectl get pod <podname> -o yaml
Note that volumeMounts appear by default on every pod you create.
Secrets, although the name implies otherwise, are not encrypted; all pods in the same namespace can access a secret and decode it (it is just base64). Secrets are an easy way to put information into a pod, but they are not secure!
Services don’t “exist” like containers do. A service is a purely logical idea. A service exposes pods to other pods.
A service automatically gets a DNS entry: <service name>.<namespace name>. This means that from inside your containers, you can use DNS to access other containers.
About scheduling:
For this workshop we used kops because it was easier. At Schuberg Philis they actually use Terraform to manage their cluster(s). Note that you can use a flag and then kops will spit out Terraform code.
If you are worried about your pods going down gracefully, you are doing your pods wrong.
If your application depends on long running processes: don’t use Kubernetes. Use the right tool for the right application.
Combine containers inside pods if latency matters, if they need to share configuration files or if they need to connect via loopback device.
Miscellaneous:
Resources:
Why OpenShift: because developers need a platform to be able to deploy their applications. OpenShift is a platform to run your containers at scale. Meant for enterprise: not necessarily the latest features, but focus on stability.
OpenShift was originally written in Ruby, but it has been rewritten in Go and is built upon Kubernetes. OpenShift is always one release (circa three months) behind Kubernetes.
Everything you can deploy in Kubernetes, you can deploy on OpenShift.
OpenShift Origin is community supported. If you want a commercially supported version, you have to run on Red Hat Enterprise Linux (RHEL). Red Hat OpenShift uses RHEL images, where OpenShift Origin uses CentOS.
OpenShift Online runs on AWS, but you can for instance also run it on bare metal if you want. But public clouds are a more natural fit for cloud-native applications.
Pods are the orchestrated units in OpenShift. Containers in a pod can talk to each other via localhost and local sockets. The security boundary is extended from the container to the pod: containers can see each other’s processes and files. You only want to run one process per container though.
A service can be seen as a sort of load balancer that redirects traffic to the right pods. Internally it uses iptables.
OpenShift provides its own Docker registry which you can use if you want to.
OpenShift solved the persistent storage problem before Kubernetes did. You can use the native storage of your platform (e.g. EBS on AWS). Note that block storage solutions require mounting/unmounting and thus take a little longer.
As with Kubernetes, there is no built-in autoscale for OpenShift. Red Hat CloudForms can monitor your cluster and do the scaling for you.
The routing layer is your entrypoint into the cluster. It’s based on HAProxy. Comparable with Kubernetes' Ingress.
RHEL Atomic is a minimalistic OS designed to run Docker containers. (It is similar to CoreOS, but Red Hat wanted to have its own OS.) Everything you want to run has to run in a container. You can install OpenShift on RHEL Atomic.
Fun fact: you can create resources in Azure with Ansible.
Unfortunately there were some problems with the Red Hat OpenShift Azure Test Drive. As an alternative I used minishift to run OpenShift on my laptop. With it, I could work on the workshop.
Further reading: