Posts

Showing posts from 2019

Yes we can: Writing Firestore Admin functions in Dart, and using the Firestore Emulator

Thanks to the awesome Firestore Admin Interop library, you can write Firestore admin functions in Dart (and yes, Googlers, if you are reading this, we *really* need a native Dart Firebase SDK). If you are curious how the interop library works: the dart2js compiler translates your Dart code to JavaScript, and the interop library wraps the native Node.js library for Firestore, allowing you to call it from your Dart program. See here for information on the Admin SDK. As a Node.js and Firestore newbie, there were a couple of missing pieces for me on how to use this library: How do I wire it into my Firestore project? How do I use the local Firestore emulator? I won't cover this in depth (please see the example), but here are a few tips: The library is wired to your Firestore project via the service-account.json file that you download from the Firebase console (Project Settings > Service Accounts). Download and save this file (and DO NOT check it in to git!). You do…
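For reference, here is a minimal sketch of what such an admin script might look like. The API names (FirebaseAdmin.instance, certFromPath, DocumentData) follow my reading of the interop package's README and may differ in the version you use, and the document path and field values are purely illustrative:

    // Minimal sketch, assuming the firebase_admin_interop API as documented
    // in its README; verify names against the version you depend on.
    import 'package:firebase_admin_interop/firebase_admin_interop.dart';

    Future<void> main() async {
      final admin = FirebaseAdmin.instance;
      // Wire the app to your project with the downloaded service-account.json.
      final app = admin.initializeApp(AppOptions(
        credential: admin.certFromPath('service-account.json'),
      ));

      final firestore = app.firestore();
      // 'users/demo' is an illustrative document path, not from the post.
      await firestore
          .document('users/demo')
          .setData(DocumentData.fromMap({'hello': 'world'}));

      await app.delete(); // let the underlying Node.js process exit cleanly
    }

To point a script like this at the local Firestore emulator instead of the live project, exporting FIRESTORE_EMULATOR_HOST (for example, FIRESTORE_EMULATOR_HOST=localhost:8080) before running the compiled JavaScript under node should be enough, since the wrapped Node.js client reads that variable.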

Deploying the ForgeRock platform on Kubernetes using Skaffold and Kustomize

If you are following along with the ForgeOps repository, you will see some significant changes in the way we deploy the ForgeRock IAM platform to Kubernetes. These changes are aimed at dramatically simplifying the workflow to configure, test and deploy ForgeRock Access Manager, Identity Manager, Directory Services and the Identity Gateway. To understand the motivation for the change, let's recap the current deployment approach: the Kubernetes manifests are maintained in one git repository (forgeops), while the product configuration is in another (forgeops-init). At runtime, Kubernetes init containers clone the configuration from git and make it available to the component using a shared volume (see the sketch below). The advantage of this approach is that the Docker container for a product can be (relatively) stable; usually it is the configuration that is changing, not the product binary. This approach seemed like a good idea at the time, but in retrospect it created a lot of c…
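As a rough illustration of that older init-container pattern, a pod might look something like the following. This is a hypothetical sketch only; the image names, repository URL and mount paths are not taken from the actual forgeops manifests:

    # Hypothetical sketch of the init-container pattern described above;
    # image names, repo URL and paths are illustrative only.
    apiVersion: v1
    kind: Pod
    metadata:
      name: am
    spec:
      volumes:
      - name: config
        emptyDir: {}              # shared volume for the cloned configuration
      initContainers:
      - name: git-init
        image: alpine/git
        args: ["clone", "--depth", "1", "https://example.com/forgeops-init.git", "/config"]
        volumeMounts:
        - name: config
          mountPath: /config
      containers:
      - name: am
        image: example/forgerock-am:latest   # hypothetical product image
        volumeMounts:
        - name: config
          mountPath: /config
          readOnly: true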

Kubernetes Process Namespace sharing and the JDK

Kubernetes 1.12 introduced process namespace sharing , which is the ability for containers in a pod to share the same process namespace.  One of the neat things that you can do with this feature is to split your containers up into slim runtime containers, and optional debug containers that have all the tools required for troubleshooting. For Java applications we want to use slimmed down JRE for our runtime container, but if things go sidewise, we want to use tools that are only available in the full JDK. Things like jmap, jstack, jps and friends. I was curious to see if process namespace sharing would work for the JDK tools. It turns out it does. Here is a sample that you can try out on Kubernetes >= 1.12 (this was tested in minikube). Follow the instructions to deploy this, and exec into the jdk container.  Find the tomcat process using jps and then use jmap / jstack to debug the tomcat container.
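The original sample is not reproduced here, but the key ingredient is shareProcessNamespace: true on the pod spec. A minimal stand-in (the pod name and image tags are illustrative, not the ones from the post) might look like this:

    # Illustrative pod: a slim JRE runtime container plus a full-JDK debug
    # container sharing one process namespace (requires Kubernetes >= 1.12).
    apiVersion: v1
    kind: Pod
    metadata:
      name: tomcat-debug
    spec:
      shareProcessNamespace: true
      containers:
      - name: tomcat
        image: tomcat:9-jre11-slim        # slim JRE runtime (illustrative tag)
      - name: jdk
        image: openjdk:11-jdk             # full JDK with jps/jmap/jstack
        command: ["sleep", "infinity"]
        stdin: true
        tty: true

With the pod running, exec into the jdk container and point the JDK tools at the tomcat process, which is visible because both containers share the process namespace:

    kubectl exec -it tomcat-debug -c jdk -- bash
    jps -l                 # find the tomcat (Bootstrap) process id
    jstack <pid>           # thread dump of the JVM in the other container
    jmap -histo <pid>      # heap histogram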