
Mar 5, 2019

Alexa skill integration with Episerver - Part 1

We all know Episerver is a very powerful enterprise CMS. Content authors and marketers have complete control over content, personalization, and analytics, as well as access to user data at their fingertips.

With that said, the technological landscape is changing, and so is the way users interact with new, innovative devices. In my opinion, websites (including responsive mobile websites) will always be the most popular way of presenting information, but as developers and marketers we should be prepared for the giant wave of new trends heading our way. For example: "50% of all searches will be voice searches by 2020". Episerver is prepared, with its headless API allowing developers and marketers to ride the wave with ease. The Episerver headless API allows serving the same content to various devices, including voice devices, mobile apps, and others.

I recently implemented an Alexa skill that queries the Episerver CMS and returns meaningful data to an Alexa device. Though this is a very basic version/proof of concept for an Alexa skill, it opens up a whole lot of possibilities depending on your users' interaction with the website. The Alexa skill I implemented reads out the two latest news items or events from a website. This could very well be extended to "Give me the store locations for XYZ near Boston", "Are there any new promotions for ABC?", or "Company's profit summary for this quarter". You get the point :)

As an end user, if you can get quick information from your favorite brand without having to spend time searching for it on a website, it's a huge value add in terms of convenience and saved time. No more booting up a device, navigating to a website, typing in a search box, or enduring frustrating UI and performance issues. Simply ask "Alexa" what you need and listen to the answer while getting ready for work or during commercials on TV. If you are a technology (or tech-savvy) organization, it's even more important to display innovation and let your users know that you always "keep up" with new trends in technology.

Alexa Skill:

The core Alexa concept is pretty simple. It consists of:

  1. Alexa Skills Kit (front-end code) - the interaction model: the skill name, invocation name, intents, and slots.
  2. Lambda function (back-end code) - the code that handles requests from the skill and returns responses.

You will need to set up two accounts:

  1. Amazon Developer account (to configure front end interactions using Amazon provided UI)
  2. AWS account (to host the back end code called lambda function)

NOTE: The Amazon Developer account has a BETA feature that allows you to host the skill and the code in the same interface, which is super convenient and easy to understand. This is highly recommended if you want to get your Alexa skill up and running in minutes. When you create a new skill, select the option "Alexa Hosted (Beta)" and you should be able to host and update the code in the same Amazon Developer account.


Skill Name: The name of the skill that will be used when you publish your skill to Amazon.

Invocation Name: The term the user calls out to invoke/start an interaction with your skill. For example, if the invocation name is "Fun Demo", the user can say "Alexa, open Fun Demo" or "Alexa, start Fun Demo".

Intents: Intents allow you to specify what a user will say to invoke the skill. For example: "Get me the latest news" or "Find me the closest stores in the Boston area". You can create custom intents as well as update the out-of-the-box Amazon-provided intents (such as CancelIntent or HelpIntent).

Slots: Slots are nothing but parameters you can pass to an intent to allow dynamic terms. For example: "Order me {number} {size} pizza". The terms number and size are two dynamic parameters passed to the intent.
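To make intents and slots concrete, here is a minimal sketch of what the JSON interaction model in the Amazon Developer console might look like. The intent name OrderPizzaIntent, the custom slot type PizzaSizeType, and the sample utterance are hypothetical illustrations, not part of the demo skill:

```json
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "fun demo",
      "intents": [
        {
          "name": "OrderPizzaIntent",
          "slots": [
            { "name": "number", "type": "AMAZON.NUMBER" },
            { "name": "size", "type": "PizzaSizeType" }
          ],
          "samples": ["order me {number} {size} pizza"]
        }
      ],
      "types": [
        {
          "name": "PizzaSizeType",
          "values": [
            { "name": { "value": "small" } },
            { "name": { "value": "large" } }
          ]
        }
      ]
    }
  }
}
```

The {number} and {size} placeholders in the sample utterance map to the slots declared on the intent, which is how Alexa passes the dynamic values through to your back end.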

Endpoint: The endpoint connects your front-end code (invocation, intents) to the back-end code. If you are using the Alexa Hosted beta feature, no configuration is necessary. If the back-end code is self-hosted (or a REST endpoint), these values need to be configured. The endpoint can be a REST endpoint that returns valid data or a Lambda function that hosts your back-end code.
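Whichever hosting option you pick, the back end receives a JSON request from Alexa and must return a JSON response envelope. Below is a minimal sketch in Node.js, written without the Alexa Skills Kit SDK to keep it self-contained; the intent name LatestNewsIntent and the response wording are hypothetical:

```javascript
// Build the JSON envelope Alexa expects back from the endpoint.
function buildResponse(speechText, endSession) {
  return {
    version: "1.0",
    response: {
      outputSpeech: { type: "PlainText", text: speechText },
      shouldEndSession: endSession,
    },
  };
}

// Route an incoming Alexa request to a handler based on its type and intent.
function handleAlexaRequest(event) {
  const request = event.request;
  if (request.type === "LaunchRequest") {
    // User said "Alexa, open Fun Demo" - keep the session open for a follow-up.
    return buildResponse("Welcome to the demo skill. Ask me for the latest news.", false);
  }
  if (request.type === "IntentRequest" && request.intent.name === "LatestNewsIntent") {
    // In the real skill this is where you would call the Episerver endpoint.
    return buildResponse("Here are the two latest news items.", true);
  }
  return buildResponse("Sorry, I did not understand that.", true);
}
```

In a Lambda deployment you would export this routing logic as the handler; the official SDK adds the same pattern with more plumbing (request verification, session attributes, and so on).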

To get started, as a first step I recommend setting up a simple Web API REST endpoint in Episerver that returns a JSON object. Ideally you would want to set up the Episerver headless API, but for a quick demo a simple REST endpoint should be enough.
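On the skill side, the back end then only has to turn that JSON into something Alexa can read out. Here is a sketch of that formatting step; the payload shape (a list of items with a title field) is an assumption about what your Episerver endpoint might return, not a fixed contract:

```javascript
// Turn a JSON payload from a hypothetical Episerver news endpoint
// into a single sentence for Alexa's outputSpeech.
function formatNewsForSpeech(items) {
  if (!items || items.length === 0) {
    return "There are no news items right now.";
  }
  // Read out at most the two latest items, as in the demo skill.
  const titles = items.slice(0, 2).map((item, i) => `Item ${i + 1}: ${item.title}`);
  return `Here are the latest news items. ${titles.join(". ")}.`;
}

// Example payload the Episerver REST endpoint might return:
const sample = [
  { title: "New store opening in Boston", published: "2019-03-01" },
  { title: "Spring promotions announced", published: "2019-02-25" },
];
console.log(formatNewsForSpeech(sample));
```

Keeping the formatting separate from the HTTP call makes it easy to unit test, and the same function works whether the data comes from a hand-rolled Web API controller or the headless Content Delivery API.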

I will get into the details of Alexa skill implementation and integrating it with Episerver in my next blog post.

Stay tuned!



Bien Nguyen Mar 6, 2019 04:31 AM

Interesting topic! Waiting for next parts :)

Peter Bennington Mar 6, 2019 09:31 AM

I'm also looking forward to the rest of this series. This is something I am interested in creating a POC for, although I believe I will do it with the nuget package , which seems like it might be quicker. Unsure about performance and security though.

David Knipe Mar 6, 2019 10:53 AM

Interesting post and thanks for sharing! If you wanted to take some inspiration on using the Content Delivery API in an Alexa skill then I wrote about it here (search for Alexa in the page): 
