Posts

Yes we can: Writing Firestore Admin functions in Dart, and using the Firestore Emulator

Thanks to the awesome Firestore Admin Interop library,  you can write Firestore admin functions in Dart (and yes Googlers, if you are reading this, we *really* need a native Dart Firebase SDK).

If you are curious how the interop library works: the dart2js compiler translates your Dart code to JavaScript. The interop library wraps the native Node.js library for Firestore, allowing you to call it from your Dart program. See here for information on the Admin SDK.

As a Node.js and Firestore newbie, there were a couple of missing pieces for me on how to use this library:
How do I wire this into my Firestore project?
How do I use the local Firestore emulator?
I won't cover this in depth (please see the example), but here are a few tips:
The library is wired to your Firestore project via the service-account.json file that you download from the Firebase console (Project Settings > Service Accounts). Download and save this file (and DO NOT check it in to git!).
You do *NOT* need to have you…
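As a rough sketch of the emulator side of this: the FIRESTORE_EMULATOR_HOST variable is the standard one honored by the underlying Node.js Firestore client that the interop library wraps; the file names and the plain dart2js invocation below are placeholders (the library's example uses a proper build setup), and the port must match what the emulator reports.

# Start the local Firestore emulator (requires the firebase-tools CLI)
firebase emulators:start --only firestore

# In another shell, point the wrapped Node.js Firestore client at the emulator.
# FIRESTORE_EMULATOR_HOST is honored by the Node.js Firestore library itself.
export FIRESTORE_EMULATOR_HOST="localhost:8080"

# Compile the Dart admin function to JavaScript and run it under node
# (file names are placeholders; see the library's example for the real build).
dart2js -o admin.dart.js bin/admin.dart
node admin.dart.js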

Deploying the ForgeRock platform on Kubernetes using Skaffold and Kustomize

If you are following along with the ForgeOps repository, you will see some significant changes in the way we deploy the ForgeRock IAM platform to Kubernetes.  These changes are aimed at dramatically simplifying the workflow to configure, test and deploy ForgeRock Access Manager, Identity Manager, Directory Services and the Identity Gateway.


To understand the motivation for the change, let's recap the current deployment approach:

The Kubernetes manifests are maintained in one git repository (forgeops), while the product configuration is in another (forgeops-init). At runtime, Kubernetes init containers clone the configuration from git and make it available to the component using a shared volume. The advantage of this approach is that the Docker container for a product can be (relatively) stable; usually it is the configuration that is changing, not the product binary.
This approach seemed like a good idea at the time, but in retrospect it created a lot of complexity in the deployment…
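As a taste of the simplified workflow that Skaffold and Kustomize enable (the commands below are the standard CLIs; the repository path, overlay name and image registry are illustrative, not necessarily what forgeops uses):

# Build images, push them, deploy the manifests, and redeploy on every change
skaffold dev --default-repo gcr.io/my-project

# Or render and apply a Kustomize overlay directly
kustomize build kustomize/overlay/dev | kubectl apply -f -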

Kubernetes Process Namespace sharing and the JDK

Kubernetes 1.12 introduced process namespace sharing, which is the ability for containers in a pod to share the same process namespace.  One of the neat things that you can do with this feature is to split your containers up into slim runtime containers, and optional debug containers that have all the tools required for troubleshooting.

For Java applications we want to use a slimmed-down JRE for our runtime container, but if things go sideways, we want the tools that are only available in the full JDK: things like jmap, jstack, jps and friends.

I was curious to see if process namespace sharing would work for the JDK tools. It turns out it does.

Here is a sample that you can try out on Kubernetes >= 1.12 (this was tested in minikube). Follow the instructions to deploy this, and exec into the jdk container.  Find the tomcat process using jps and then use jmap / jstack to debug the tomcat container.
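Here is roughly what that looks like once the sample is deployed (the pod and container names below are placeholders for whatever the sample manifest actually uses):

# Open a shell in the JDK debug container
kubectl exec -it my-app-pod -c jdk -- /bin/sh

# Because the pod shares one process namespace, the Tomcat JVM running in the
# slim runtime container is visible here. Find its PID with jps...
jps -l

# ...then point the JDK tools at that PID (replace 123 with what jps reports)
jstack 123
jmap -histo 123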


Kubernetes: Why won't that deployment roll?

Kubernetes Deployments provide a declarative way of managing replica sets and pods.  A deployment specifies how many pods to run as part of a replica set, where to place pods, how to scale them and how to manage their availability.

Deployments are also used to perform rolling updates to a service. They can support a number of different update strategies such as blue/green and canary deployments.

The examples provided in the Kubernetes documentation explain how to trigger a rolling update by changing a Docker image. For example, suppose you edit the Docker image tag by using the kubectl edit deployment/my-app command, changing the image tag from acme/my-app:v0.2.3 to acme/my-app:v0.3.0.
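If you prefer not to edit the manifest interactively, kubectl can make the same change and report on the rollout. This follows the deployment name from the example above and assumes the container inside the pod spec is also named my-app:

# Update the image tag on the deployment's my-app container
kubectl set image deployment/my-app my-app=acme/my-app:v0.3.0

# Watch the rolling update progress
kubectl rollout status deployment/my-app

# Roll back if the new image misbehaves
kubectl rollout undo deployment/my-app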

When Kubernetes sees that the image tag has changed, a rolling update is triggered according to the declared strategy. This results in the pods with the 0.2.3 image being taken down and replaced with pods running the 0.3.0 image.  This is done in a rolling fashion so that the service remains available …

Save greenbacks on Google Container Engine using autoscaling and preemptible VMs

There is an awesome new feature on Google Container Engine (GKE) that lets you combine autoscaling, node pools and preemptible VMs to save big $!


The basic idea is to create a small cluster with an inexpensive VM type that will run 7x24. This primary node can be used for critical services that should not be rescheduled to another node. A good example would be a Jenkins master server. Here is an example of how to create the cluster:

gcloud alpha container clusters create $CLUSTER \
    --network "default" --num-nodes 1 \
    --machine-type ${small} --zone $ZONE \
    --disk-size 50
Now here is the money-saver trick: a second node pool is added to the cluster. This node pool is configured to auto-scale from one node up to a maximum, and it uses preemptible VMs. These are VMs that can be taken away at any time if Google needs the capacity, but in exchange you get dirt-cheap pricing. For example, running a 4 core VM with 15 GB of RAM for a month comes in under $30.
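A sketch of what creating that second node pool might look like (the pool name, machine type and autoscaling limits are illustrative, and at the time some of these flags may have required the alpha or beta gcloud command group, as with the cluster above):

gcloud container node-pools create preemptible-pool \
    --cluster $CLUSTER --zone $ZONE \
    --machine-type n1-standard-4 \
    --preemptible \
    --num-nodes 1 \
    --enable-autoscaling --min-nodes 1 --max-nodes 4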

Automating OpenDJ backups on Kubernetes

Kubernetes StatefulSets are designed to run "pet"-like services such as databases. ForgeRock's OpenDJ LDAP server is an excellent fit for StatefulSets, as it requires stable network identity and persistent storage.

The ForgeOps project contains a Kubernetes Helm chart to deploy DJ to a Kubernetes cluster. Using a StatefulSet, the cluster will auto-provision persistent storage for our pod. We configure OpenDJ to place its backend database on this storage volume.

This gives us persistence that survives container restarts, or even restarts of the cluster. As long as we don't delete the underlying persistent volume, our data is safe.

Persistent storage is quite reliable, but we typically want additional offline backups.

The high level approach to accomplish this is as follows:
Configure the OpenDJ container to support scheduled backups to a volume.
Configure a Kubernetes volume to store the backups.
Create a sidecar container that archives the backups (a rough sketch follows below). In this example we wi…
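To give a flavor of the sidecar step, here is the kind of loop such a container might run. The paths, schedule and destination bucket are placeholders, not what the forgeops example actually uses:

#!/bin/sh
# Periodically archive the backups that OpenDJ writes to the shared volume.
while true; do
  ts=$(date +%Y%m%d-%H%M%S)
  tar czf /tmp/opendj-backup-$ts.tar.gz -C /opt/opendj/bak .
  # Ship the archive off-cluster; a GCS bucket via gsutil is one option.
  gsutil cp /tmp/opendj-backup-$ts.tar.gz gs://my-backup-bucket/
  rm /tmp/opendj-backup-$ts.tar.gz
  sleep 3600
done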

OpenDJ Pets on Kubernetes

Stateless "12-factor" applications are all the rage, but there are some kinds of services that are inherently stateful. Good examples are things like relational databases (Postgres, MySQL) and NoSQL databases (Cassandra, etc).

These services are difficult to containerize, because the default docker model favours ephemeral containers where the data disappears when the container is destroyed.

These services also have a strong need for identity. A database "primary" server is different from the "slave". In Cassandra, certain nodes are designated as seed nodes, and so on.

OpenDJ is an open source LDAP directory server from ForgeRock. LDAP servers are inherently "pet like" insomuch as the directory data must persist beyond the container lifetime. OpenDJ nodes also replicate data between themselves to provide high-availability and therefore need some kind of stable network identity.

Kubernetes 1.3  introduces a feature called "Pet Sets" that …
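To give a flavor of the stable identity this provides, here is a hedged sketch assuming a Pet Set named opendj with a matching headless service (Pet Sets were later renamed StatefulSets, but the naming pattern is the same; the label and names below are illustrative):

# Each member gets an ordinal, stable pod name...
kubectl get pods -l app=opendj
#   opendj-0   1/1   Running
#   opendj-1   1/1   Running

# ...and a stable DNS entry of the form <pod>.<service>.<namespace>.svc.cluster.local
kubectl exec opendj-0 -- hostname -f
#   opendj-0.opendj.default.svc.cluster.local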