Augusto Davalos
Apr 28, 2026

From Prompting to Production: Optimizely Opal University Cohort and the Future of Agentic MarTech

Most organizations today are still playing with AI. They experiment with prompts, test ideas in isolated chats, and occasionally automate a task or two. It creates value, but it’s inconsistent, difficult to scale, and almost impossible to govern in a meaningful way.

What the Optimizely Opal University Cohort makes clear is that we’re moving beyond that phase. This is no longer about using AI—it’s about operationalizing it.

Over the course of the workshop, it became clear that the experience wasn’t just about building agents. It was about understanding what it actually takes to make AI reliable, reusable, and embedded in the day-to-day workflows of marketing and digital teams. And that distinction matters, because the organizations that figure this out won’t just be more efficient—they’ll fundamentally change how work gets done.


The Real Shift: From Conversations to Systems

One of the most important ideas reinforced early in the cohort seems simple on the surface, but it has far-reaching implications once you start applying it in practice:

Chats are disposable. Agents are systems.

That shift alone reframes how you think about AI. In most teams today, someone writes a prompt, tweaks it until the output looks acceptable, uses it once, and then repeats the process the next time they need something similar. It works, but it doesn’t scale, and more importantly, it doesn’t create institutional knowledge.

With Opal, you’re encouraged to step back and ask a different question: instead of solving the problem once, how do you design something that solves it consistently?

The workshop used a restaurant analogy to explain this, and it holds up surprisingly well. The prompt becomes your recipe, the instructions are your ingredients, the tools are your kitchen equipment, and evaluations act as your quality control. What you’re ultimately producing is not just an answer, but a repeatable outcome that can be delivered again and again with a predictable level of quality.

What’s often missed in the broader AI conversation is that this isn’t really about prompting anymore. It’s about system design, and that’s where Opal starts to feel meaningfully different.


The Agent Creation Experience: Fast, Structured, and Intentionally Imperfect

One of the more impressive aspects of the workshop is how quickly you can go from idea to working agent. The two-tab workflow—building in Opal University and then deploying in the Opal platform—removes a lot of the friction that typically comes with setting these things up.

In practice, this approach can save hours of manual effort, especially when compared to building something similar from scratch. But what stood out wasn’t just the speed—it was the expectation that speed alone isn’t the goal.

There’s a consistent message throughout the sessions: getting an agent to work is relatively easy; getting it to work well takes iteration.

That gap between 80% and 95% quality becomes very real once you start testing outputs. It’s where you begin to notice inconsistencies, edge cases, and subtle misalignments with your expectations. And it’s also where most of the real work happens.

The workflow itself reinforces good habits. You generate a structured prompt using a dedicated builder, validate the output before committing, export it as JSON, and then bring it into the Opal platform for testing and refinement. It’s not just efficient—it introduces a level of discipline that most teams don’t naturally apply when working with AI.
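To make that concrete, here’s a rough sketch of what the export-and-validate step could look like in plain Python. The field names and validation logic are illustrative assumptions, not Opal’s actual export schema:

```python
import json

# Hypothetical agent definition -- field names are illustrative,
# not Opal's actual export schema.
agent_definition = {
    "name": "newsletter-summarizer",
    "prompt": "Summarize the incoming newsletter into three key takeaways.",
    "instructions": ["Use the company brand voice.", "Keep outputs under 150 words."],
    "output_format": "markdown",
}

# Validate before committing: required fields must exist and be non-empty.
required = ["name", "prompt", "output_format"]
missing = [field for field in required if not agent_definition.get(field)]
if missing:
    raise ValueError(f"Agent definition is missing fields: {missing}")

# Export as JSON, ready to bring into the platform for testing and refinement.
with open("newsletter-summarizer.json", "w") as f:
    json.dump(agent_definition, f, indent=2)
```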


What Actually Makes Agents Work in Practice

As the sessions progressed, a pattern started to emerge. It wasn’t any single feature that made agents effective—it was how a few key components worked together.

Instructions: Where Consistency Lives

If prompts define what an agent does, instructions define how it behaves over time.

Opal allows you to establish both instance-wide instructions—things like brand voice or company guidelines—and more personal layers that reflect individual preferences. You can even generate these from an existing website, effectively turning your digital presence into a baseline for how your agents communicate.

This is one of those features that sounds operational, but it has strategic implications. It means you’re no longer relying on individuals to remember how to phrase things or maintain consistency across outputs. Instead, that consistency is embedded into the system.

That said, there’s an important caveat. When instructions conflict, outputs can become unpredictable. It’s a reminder that governance isn’t optional—it’s foundational.
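A small sketch makes both the layering and the conflict risk concrete. The merge policy below is an assumption for illustration, not how Opal actually resolves instructions:

```python
# A rough mental model of layered instructions -- not Opal's implementation.
# Instance-wide rules apply to everyone; personal layers refine them.
instance_instructions = {
    "tone": "professional and concise",
    "terminology": "refer to customers as 'clients'",
}
personal_instructions = {
    "tone": "casual and friendly",  # conflicts with the instance-wide tone
}

# One plausible policy: personal layers override instance-wide ones.
effective = {**instance_instructions, **personal_instructions}

# Every override is a potential source of drift from brand guidelines,
# which is exactly why governance matters.
for key in personal_instructions:
    if key in instance_instructions:
        print(f"Warning: '{key}' is overridden; outputs may become inconsistent")
```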


Evaluations: Defining What “Good” Looks Like

If there’s one capability that feels underappreciated in most AI discussions, it’s evaluation. Not in the abstract sense, but as a structured, repeatable way to define quality.

Opal’s approach to evals is both simple and powerful. You link examples of what good output looks like, and the agent uses those examples both to score itself and to guide future responses. Over time, this creates a feedback loop that improves consistency and reliability.

What’s particularly useful is how this forces teams to be explicit. You can’t just say “this looks good”—you have to define it. And once you do, you can measure against it.

In practice, this also introduces nuance. Structured outputs might require higher thresholds—90% or more—while creative work needs more flexibility. That balance is something teams will need to learn, but having the mechanism in place is a significant step forward.
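As a rough illustration, the sketch below scores a candidate output against known-good examples, using simple text similarity as a stand-in for a real scoring mechanism. The scoring function and thresholds are assumptions; in Opal, the linked examples themselves drive the evaluation:

```python
from difflib import SequenceMatcher

# Known-good outputs that define what "good" looks like.
good_examples = [
    "Q3 revenue grew 12% year over year, driven by subscription renewals.",
]

def similarity_score(output: str, examples: list[str]) -> float:
    """Score an output against known-good examples (0.0 to 1.0)."""
    return max(SequenceMatcher(None, output, ex).ratio() for ex in examples)

# Structured outputs warrant a higher bar than creative ones.
THRESHOLDS = {"structured": 0.90, "creative": 0.70}

candidate = "Q3 revenue was up 12% YoY, mostly from subscription renewals."
score = similarity_score(candidate, good_examples)
verdict = "pass" if score >= THRESHOLDS["structured"] else "needs iteration"
print(f"{verdict} ({score:.2f})")
```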


Workflow Agents: Where AI Becomes Operational

This is where things start to feel less like a feature set and more like a platform.

Workflow agents allow you to connect multiple agents together, passing outputs from one to the next and triggering them based on real-world events. You can initiate workflows through chat, webhooks, email, or scheduled runs, which opens up a wide range of possibilities.

What makes this particularly compelling is how it mirrors actual business processes. Instead of thinking about isolated tasks, you start thinking about sequences—how information flows, how decisions are made, and where automation can meaningfully reduce effort.

The newsletter-to-LinkedIn example from the workshop illustrates this well. An incoming email triggers a workflow, which transforms the content, applies formatting improvements, and prepares it for distribution. It’s a relatively simple use case, but it captures the essence of what’s possible when you move beyond single-agent interactions.
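A minimal sketch of that flow, with each agent stubbed out as a plain function, looks something like this. The names and bodies are placeholders; in Opal, these steps would be separate agents wired together and triggered by the incoming email rather than called directly:

```python
def extract_content(email_body: str) -> str:
    """First agent: pull the core story out of the newsletter."""
    return email_body.strip()  # stub

def rewrite_for_linkedin(content: str) -> str:
    """Second agent: adapt tone, length, and formatting for LinkedIn."""
    return f"{content}\n\n#marketing #ai"  # stub

def prepare_for_distribution(post: str) -> dict:
    """Final step: package the post for the publishing tool."""
    return {"channel": "linkedin", "body": post, "status": "ready_for_review"}

# Each agent's output becomes the next agent's input.
incoming_email = "Our April newsletter: three lessons from the Opal cohort..."
result = prepare_for_distribution(rewrite_for_linkedin(extract_content(incoming_email)))
print(result)
```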


Reusability Through Variables

Another detail that becomes increasingly important at scale is the use of variables.

By parameterizing inputs—things like URLs, languages, or datasets—you can reuse the same agent across different contexts without rebuilding it from scratch. This might seem like a small feature, but it’s what turns agents into reusable assets rather than one-off solutions.
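Conceptually, it’s the same move as turning a hard-coded script into a template. The sketch below is a generic illustration of the idea, not Opal’s variable syntax:

```python
from string import Template

# One agent prompt, parameterized by url, language, and dataset.
analysis_prompt = Template(
    "Analyze the site at $url and summarize its positioning "
    "in $language, focusing on the $dataset product line."
)

# The same agent, reused across contexts without rebuilding it.
run_a = analysis_prompt.substitute(url="https://example.com", language="English", dataset="CMS")
run_b = analysis_prompt.substitute(url="https://example.org", language="German", dataset="commerce")
```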


The Use Cases That Actually Matter

What made the workshop particularly valuable was seeing how these capabilities translate into real work.

Some use cases were more obvious, like competitive analysis or lead generation, but others highlighted where agents can quietly drive significant value.

For example, transforming meeting notes into structured user stories is not glamorous, but it’s incredibly useful. It reduces manual effort, ensures consistency, and accelerates downstream work. Similarly, automating reporting workflows or translating strategic goals into measurable KPIs addresses very real operational bottlenecks.
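As a sketch, that first use case reduces to a structured prompt template like the one below. The structure is an assumption about what a useful agent prompt might include, not a template from the workshop:

```python
# Hypothetical prompt template for a notes-to-user-stories agent.
USER_STORY_PROMPT = """
From the meeting notes below, extract every actionable requirement and
rewrite each as a user story in the form:
  "As a <role>, I want <capability> so that <benefit>."
Under each story, add acceptance criteria as a bulleted list.

Meeting notes:
{notes}
"""

notes = "Sales wants the export button back on the dashboard by end of quarter..."
print(USER_STORY_PROMPT.format(notes=notes))
```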

More forward-looking use cases, like answer engine optimization (AEO) and generative engine optimization (GEO), point to where the industry is heading. As AI-driven search becomes more prevalent, understanding how brands appear in those environments—and how to improve that presence—will become a critical capability. Agents are particularly well-suited to this kind of analysis because they can combine research, evaluation, and recommendation into a single workflow.


What’s Actually Impressive (And What’s Not)

It’s easy to get caught up in features, but what stood out here was less about what Opal can do and more about how it encourages teams to think.

There’s a clear emphasis on iteration over perfection, which aligns with how real teams operate. There’s also a deliberate separation between different components—creation, instructions, evaluations—which reduces complexity as usage grows.

At the same time, it’s important to recognize that this isn’t a fully abstracted experience. You still need to think about how agents interact, how outputs are structured, and how workflows are designed. In other words, the platform gives you the building blocks, but it doesn’t remove the need for strategy.


Implementation Realities: Where Things Get Hard

If there’s one consistent theme that emerged, it’s that technology is only part of the equation.

From a people perspective, someone needs to take ownership of agent design and governance. Without that, it’s easy for things to become fragmented or inconsistent.

Process-wise, teams need to define when to use agents versus chats, how to evaluate outputs, and how to maintain and improve what they’ve built. These aren’t decisions that can be deferred—they shape how effective the system becomes over time.

On the technical side, integration and orchestration introduce their own challenges. Even something as straightforward as email triggers can have limitations that require workarounds, as seen during the workshop.

And then there’s governance. If agents are going to play a meaningful role in operations, there needs to be visibility into how they’re performing, control over what data they access, and clear accountability for their outputs.


The Bigger Picture: Why This Matters Now

What the Opal University Cohort ultimately highlights is not just a set of capabilities, but a shift in how AI fits into the MarTech ecosystem.

We’re moving from a world where AI is something you occasionally use, to one where it becomes part of how work gets done. That shift requires new ways of thinking about design, quality, and governance.

The organizations that adapt will find themselves operating differently. They’ll move faster, not just because they’re more efficient, but because they’ve reduced the friction between idea and execution. They’ll also be more consistent, because their processes are embedded into the systems they use.

And perhaps most importantly, they’ll be better positioned to adapt as the technology continues to evolve.


Final Thoughts

The experience of going through the Opal University Cohort makes one thing clear: the future of AI in marketing and digital experience is not about better prompts. It’s about better systems.

Opal is not trying to replace how teams work—it’s trying to reshape it in a way that makes AI usable at scale. That’s a much harder problem to solve, but it’s also the one that matters.

For teams willing to invest the time to understand and apply these concepts, the payoff is not just incremental improvement. It’s a fundamentally different way of operating—one where AI is no longer an experiment, but an integral part of the organization’s capability.

Take this opportunity and join the waitlist to be part of the next cohort: https://www.optimizely.com/ai-marketing-certificate/

 