Easily Deploy Optimizely Agent to K8s via Helm Charts
By Mike Chu
Distributed systems pose new challenges to companies seeking to build experimentation platforms that support their business goals. In a monolith architecture that combines a frontend and backend, implementing one Optimizely Full Stack SDK may be enough to enable full stack experimentation. In headless architectures, where the frontend is decoupled from the backend, two SDKs, one on each side, may suffice. But when the backend is broken down into microservices, organizations frequently turn to implementing one of our SDKs in each service, which can accrue high development and maintenance costs.
Microservice architectures have become the de facto standard for building cloud-native systems. Engineering organizations tasked with maintaining highly reliable, available, and scalable systems are also increasingly breaking down their monoliths into microservices. As high-performing organizations plug in additional capabilities, some orchestrate their ever-growing array of services with the help of Kubernetes to manage the workloads, provide scalability, and ultimately reduce their development, maintenance, and infrastructure costs.
The popularity of microservice architecture is why we introduced Optimizely Agent. With Agent, companies can centralize their interactions with Optimizely in one place, eliminating the need to install SDKs in each microservice and reducing maintenance costs over time as we update our platform. Until now, Agent was only available as a Docker image, leaving our customers to work out the optimal deployment for their architectures with little guidance. Today, we're happy to announce that it's easier than ever to deploy Agent to Kubernetes (K8s) using Helm Charts.
Optimizely Agent
Agent is a microservice that integrates with Optimizely's experimentation offering, allowing organizations to build and run experiments and deliver features across multiple environments and existing applications without implementing our language-specific SDKs in each service.
Optimizely Agent provides a RESTful API that exposes experimentation functionality as a service. Customers can run A/B tests, multivariate tests, and (soon) adaptive targeting experiments with much less implementation time and effort.
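As a rough sketch of what calling that API looks like, the request below asks a locally running Agent for a flag decision. The flag key, user ID, and attributes are hypothetical, and the endpoint path and header names should be confirmed against the Agent API reference for your version:

```shell
# Ask Agent for a decision on a (hypothetical) flag for a given user.
# Assumes Agent is listening locally on its default API port (8080);
# verify the endpoint and header names against the Agent API docs.
curl -X POST "http://localhost:8080/v1/decide?keys=my_flag" \
  -H "X-Optimizely-SDK-Key: <your-sdk-key>" \
  -H "Content-Type: application/json" \
  -d '{"userId": "user-123", "userAttributes": {"plan": "premium"}}'
```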
Ideal use cases for Agent
Incorporating Optimizely Agent into your tech stack can save you time and effort in the following situations:
Existing Service Oriented Architecture
If you already run a service-oriented architecture (SOA), Optimizely Agent can easily be included in your application stack to unlock your digital potential. SOA is a technique used to separate system functionality into discrete but loosely coupled units. With Agent deployed to your infrastructure, your various microservices can interact with it as a centralized point of decision evaluation and results reporting. That low implementation effort lets you roll out Optimizely experimentation across your existing operations quickly.
Centralized Security & Privacy Compliance
Experiment decisions and event data flowing through your infrastructure are easier to identify when they pass through a single, dedicated service. Securing and monitoring this traffic becomes much easier with Optimizely Agent.
For instance, consider your infrastructure administrators and infosec staff. They may need firewall rules configured for experimentation data on a containerized workload to comply with data handling regulations or company policies. With Agent in place, those rules can be defined and enforced in one location.
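As a purely illustrative example of what "one location" can mean in Kubernetes, a NetworkPolicy like the sketch below restricts which pods may send traffic to Agent. The namespace, labels, and port are assumptions; adjust them to match your own deployment:

```yaml
# Illustrative only: allow ingress to Agent pods solely from workloads that
# carry an approved label. Namespace, labels, and port are hypothetical.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-agent-clients
  namespace: experimentation
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: optimizely-agent
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              agent-access: "true"
      ports:
        - protocol: TCP
          port: 8080
```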
Support for Multiple Programming Languages
Optimizely customers often leverage several of our software development kits (SDKs) depending on the product on which experiments are being run.
In some scenarios, especially in server-side applications, it can make sense to support a single implementation instead of coding the same solution in several programming languages.
This abstraction over HTTP is instrumental if you have a nuanced tech stack or a bespoke programming language in use. Leveraging Agent lets you bypass these hurdles while cultivating your culture of experimentation.
In cases where Optimizely does not offer an SDK for a chosen language, Agent can also stand in as a language-agnostic interface to Optimizely.
Future Proofing
Agent becomes a near plug-and-go component for your future initiatives. Running instances of Optimizely Agent as part of your infrastructure can provide current and future support for experimentation with both internal workloads and customer-facing experiences. As your company unlocks the tangible value of experimentation, you'll likely find other systems that can be optimized through testing.
How Is Optimizely Agent Typically Deployed?
Running via Docker
Optimizely Agent was designed to be executed in a container runtime, typically Docker. Simple variables supplied when starting the container offer customization, such as networking options. We maintain two Agent images on Docker Hub, both based on the official Golang container image. The primary version of Agent uses Debian Linux, and we also support an Alpine Linux-based image with a smaller footprint.
Engineers will often run a local instance of Agent during development, but the Docker image can also be included as part of a smaller swarm. Please review the Docker Hub page instructions to learn how to set up and run Agent via Docker.
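A minimal local run might look like the sketch below. The optimizely/agent image name comes from Docker Hub; the port and environment variable names are assumptions to verify against those instructions:

```shell
# Run Agent locally, mapping its default API port. The OPTIMIZELY_* variable
# names shown here are illustrative -- confirm them against the Docker Hub
# and Agent configuration documentation.
docker run -d --name optimizely-agent \
  -p 8080:8080 \
  -e OPTIMIZELY_LOG_PRETTY=true \
  -e OPTIMIZELY_SERVER_HOST=0.0.0.0 \
  optimizely/agent:latest
```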
The Dockerized Optimizely Agent also supports the industry shift towards a declarative, infrastructure-as-code approach. Configuration via YAML files can easily be committed to your software repositories, versioned, and distributed in a controlled manner by leveraging orchestration technologies like Docker Compose, Ansible, Terraform, and especially Kubernetes.
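As a small, declarative counterpart to the docker run sketch above, a minimal Docker Compose file might look like the following (the same port and variable assumptions apply):

```yaml
# docker-compose.yml -- an illustrative, minimal service definition for Agent.
services:
  optimizely-agent:
    image: optimizely/agent:latest
    ports:
      - "8080:8080"
    environment:
      OPTIMIZELY_LOG_PRETTY: "true"
      OPTIMIZELY_SERVER_HOST: "0.0.0.0"
```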
Running From Source
It's rarely done, but the Golang-based Optimizely Agent can be built and run from the source code hosted on GitHub. Agent is built on the Optimizely Go SDK, which gives it high-performance throughput.
Please review the README markdown instructions in the Agent repo for more information. We've included Windows, macOS, and Linux instructions and helper scripts.
We typically recommend that clients run Agent as a containerized service.
Running in Kubernetes
Kubernetes has become the leader in the container orchestration space. Running Agent in a multi-node cluster provides high availability and dovetails into an infrastructure-as-code paradigm.
As with many of the orchestration technologies mentioned earlier, there's a complexity cost to consider. Developing and maintaining configuration files requires (in my best Liam Neeson voice) a very particular set of skills.
Fortunately, the DevOps community has a solution.
What is Helm?
Helm is a package manager that helps DevOps engineers develop, test, manage, publish, and distribute Kubernetes deployments. Helm reduces the effort of working with Kubernetes while adhering to container & infrastructure-as-code paradigms.
Helm helps Kubernetes engineers:
- simplify sharing
- manage complexity
- easily update deployments
- coordinate rollbacks (as needed)
- perform repetitive operations consistently
Having developed my share of Kubernetes deployments, I know how easy it is to find yourself copying and pasting chunks of YAML configuration to accomplish the goals above. Most developers will see this duplication and, after cringing, look to extract patterns and reuse code, or in this case, develop templates.
To stick with Kubernetes' declarative approach, Helm has its own way of packaging K8s configurations for distribution.
What Are Helm Charts?
Helm Charts are a set of file manifests that combine YAML templating and basic programmatic flow to produce a native Kubernetes configuration. Helm handles generating the output configuration and applying it to a target Kubernetes cluster.
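As a generic illustration, not taken from the Optimizely Agent chart, a template mixes plain YAML with Go templating, substituting values and using simple flow control to decide what gets rendered:

```yaml
# templates/service.yaml -- a generic, illustrative Helm template. The value
# names under .Values are placeholders defined by whichever chart you author.
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-agent
  {{- if .Values.service.annotations }}
  annotations:
    {{- toYaml .Values.service.annotations | nindent 4 }}
  {{- end }}
spec:
  type: {{ .Values.service.type | default "ClusterIP" }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
```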
Developing Helm charts has a low barrier to entry and a high ROI.
Fortunately, in the context of the Optimizely Agent, we've done the work for you. We maintain a Helm Chart on Artifact Hub that can be used to install and manage an Agent deployment in your K8s cluster.
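Installing it can be as simple as the commands sketched below. The repository URL, chart name, and release name are placeholders; copy the exact commands from the chart's Artifact Hub page:

```shell
# Placeholder repository URL and chart/release names -- take the real ones
# from the chart's Artifact Hub page.
helm repo add optimizely <chart-repo-url-from-artifact-hub>
helm repo update

# Install a release into your cluster, supplying your own values file.
helm install my-optimizely-agent optimizely/optimizely-agent -f values.yaml
```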
Minimize Development & Maintenance Effort for Agent via Helm Charts
We want to help you unlock the hidden value in your organization's digital properties. The faster we can incorporate experimentation into your technical operations, the sooner you can focus on designing experiments that return that value.
There are tangible advantages to deploying Optimizely Agent via Helm charts:
- Reduce the upfront effort needed to implement experimentation
- Remove duplication of effort by centralizing decision support and reporting
- Simplify deployment and upgrades to Agent
- Codify desired values used by Agent
- Version your infrastructure (and roll back when needed!)
Helm charts make it easy to scale your Agent deployment. Once you've integrated experimentation into your stack, you can add Agent instances to handle increased load as needed. When traffic decreases, you can remove the extra instances, freeing resources on your cluster and saving you money.
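For example, bumping the replica count on an existing release is a one-liner. The release and chart names below are the placeholders from the earlier sketch, and replicaCount is one of the chart values covered later in this post:

```shell
# Scale an existing release up while keeping every other value untouched.
helm upgrade my-optimizely-agent optimizely/optimizely-agent \
  --reuse-values --set replicaCount=5
```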
Templates and Values
Helm uses Go's templating capabilities, which provide a way to define how data is rendered into a chart's output. The templates and the values you choose merge to produce the necessary Kubernetes configuration files.
As a small but useful example, the fullnameOverride value from the values.yaml file is used throughout the resulting K8s configuration YAML files. Keeping those names synchronized would otherwise be a manual effort.
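Assuming the chart follows the common Helm "fullname" helper convention, setting the override once in your values file ripples through every rendered resource name:

```yaml
# In your values.yaml: set the override once...
fullnameOverride: experimentation-agent

# ...and every rendered resource that uses the chart's fullname helper picks
# it up, e.g. the Deployment and Service metadata:
#
#   metadata:
#     name: experimentation-agent
```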
Template Files
A Helm chart's templates directory contains all the templates used to generate the chart's Kubernetes manifests. Each Kubernetes resource needed by your cloud-native app stack, like Deployments, ConfigMaps, Secrets, Services, Persistent Volumes, etc., can have an associated template that Helm can fill in.
When you run helm install, Helm enumerates this templates directory and renders the declared values into each template file.
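You can watch that rendering happen without touching your cluster. Using the placeholder chart and release names from earlier, helm template (or a dry-run install) prints the generated manifests for inspection:

```shell
# Render the chart's templates with your values, without installing anything.
helm template my-optimizely-agent optimizely/optimizely-agent -f values.yaml

# Or simulate the install against the cluster and inspect the output.
helm install my-optimizely-agent optimizely/optimizely-agent \
  -f values.yaml --dry-run --debug
```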
Values File
A values file allows you to define and set variables you can access in your templates. During installation, Helm uses the values you specify in your custom values.yaml file or via the --set flag in a command line interface (CLI) to override the defaults defined in the templates.
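Using the same placeholder names, the layering looks like this: chart defaults first, then your file, then any --set overrides:

```shell
# Values are layered: chart defaults, then your file, then --set overrides.
helm install my-optimizely-agent optimizely/optimizely-agent \
  -f my-values.yaml \
  --set replicaCount=3
```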
We recommend carefully reading the documentation for the Optimizely Agent Helm chart and the inline documentation within the values.yaml file itself.
Remember to commit your version of the Agent values.yaml to your version control system as a record of Optimizely Agent's configuration during each iteration of your implementation.
Common Optimizely Agent Helm Chart Values to Edit
The values you supply to your Optimizely Agent instance(s) will vary based on your infrastructure and experiment traffic. Here are some common variables to review in the custom values.yaml file you supply to the Helm chart (an illustrative fragment follows the list):
- replicaCount: Set an initial count of replicas based on an estimation of experiment traffic load
- autoscaling: Configure how Agent scales up and down and under which load conditions
- service: Use this section to configure how your Agent pods are expected to be available inside and outside your cluster
- logs: Decide the log level and format of collected logs
- config: Provide the configuration used in each instance of Agent
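Below is an illustrative fragment of a custom values.yaml covering the options above. The exact keys and nesting are dictated by the chart version you install, so treat the shape shown here as an approximation and check the chart's documented values.yaml:

```yaml
# Illustrative values.yaml fragment -- verify key names and nesting against
# the chart's own documented values.yaml before using.
replicaCount: 2

autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 6
  targetCPUUtilizationPercentage: 70

service:
  type: ClusterIP
  port: 8080

# Assumed logging keys; Agent's log level and format are configurable.
logs:
  level: info
  pretty: false

# Agent's own runtime configuration, passed through to each instance.
config:
  client:
    pollingInterval: 1m
```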
Remember that our Customer Success Managers are here to help you with the setup and configuration of Optimizely Agent, so don't hesitate to reach out with any questions you might have.