
Stefan Svebeck
Apr 29, 2020

Publish Feature Analysis - BlockEnhancements Labs 0.7.2

We have had BlockEnhancements Labs out for a while now, and we have received a lot of praise for it, but we have been unsure what you actually like about it. In version 0.7.0 we introduced some telemetry to let us track some of the usage. So, to follow up on that initial blog post, we can now answer most of the questions we had back then, and the picture is pretty clear: the data strongly indicates that the new publish features are not the main reason this add-on gets installed.

Publish Features

Just as a recap, and to put everything in context, I have taken screenshots of each publish feature.

Default Publish (default)

Inline Menu Publish (content-area)

Inline Block Publish (inline-edit-form)

Smart Publish (smart-publish)
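
The identifiers in parentheses appear to be the names used to tell the publish methods apart in the telemetry. As a rough, hypothetical sketch of what tracking a publish action could look like (the function, endpoint, and payload below are made up for illustration and are not taken from the add-on's code):

```typescript
// Hypothetical telemetry sketch - names and payload are illustrative only.
type PublishMethod = "default" | "content-area" | "inline-edit-form" | "smart-publish";
type ContentType = "block" | "page";

interface PublishEvent {
    event: "publish";
    method: PublishMethod;    // which publish feature was used
    contentType: ContentType; // what kind of content was published
    timestamp: string;        // ISO 8601
}

// Assumed helper that sends the event to whatever telemetry backend is used.
function trackPublish(method: PublishMethod, contentType: ContentType): void {
    const payload: PublishEvent = {
        event: "publish",
        method,
        contentType,
        timestamp: new Date().toISOString(),
    };
    // Sent as a beacon so it does not block the editor UI.
    navigator.sendBeacon("/telemetry", JSON.stringify(payload));
}

// Example: an editor publishes a block from the inline edit form.
trackPublish("inline-edit-form", "block");
```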

Results

We started by comparing the publish features with each other, and as the charts below show, default publish is by far the most frequently used publish method.

Total Published

Block Publish

Page Publish
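
To give an idea of what the charts summarize, here is a small, hypothetical sketch of how tracked publish events could be aggregated per publish method, both in total and split by blocks and pages. Again, this is illustrative only and not the actual analysis code; the event shape is assumed.

```typescript
// Hypothetical aggregation of tracked publish events - illustrative only.
interface PublishEvent {
    method: "default" | "content-area" | "inline-edit-form" | "smart-publish";
    contentType: "block" | "page";
}

// Count events per publish method, optionally filtered by content type.
function countByMethod(events: PublishEvent[], contentType?: "block" | "page"): Map<string, number> {
    const counts = new Map<string, number>();
    for (const e of events) {
        if (contentType && e.contentType !== contentType) continue;
        counts.set(e.method, (counts.get(e.method) ?? 0) + 1);
    }
    return counts;
}

// Example usage with a few fake events:
const events: PublishEvent[] = [
    { method: "default", contentType: "page" },
    { method: "default", contentType: "block" },
    { method: "inline-edit-form", contentType: "block" },
];

console.log(countByMethod(events));          // "Total Published" per method
console.log(countByMethod(events, "block")); // "Block Publish" per method
console.log(countByMethod(events, "page"));  // "Page Publish" per method
```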

What have we learned?

We expected some of the new publish features to be used a lot more, which leads us to ask many new questions. But we can definitely see that default publish is the most frequently used publish method. Publishing from the inline edit form seems to be quite popular, which may indicate that inline block editing is used quite frequently. Perhaps inline block editing or draft preview is the main reason for installing the Labs add-on? These are questions we want to answer in our next analysis episode.

Coming improvements

We will make some small tweaks to the publish features to see if that changes the usage numbers, and we will also add a few more trackers to the other BlockEnhancements features to understand what's being used.


Comments

Mark Everard, Apr 29, 2020 10:24 PM

Interesting data, though I guess it's difficult to ascertain the true feature value.

How much do you think the results are skewed by 'default' user behaviour and knowledge? How can you be sure that editors within your sample are fully aware of what those options mean and can achieve? Does that lend itself to inline contextual help and feature prompts? 

As you said, one answer leads to many new questions :) Enjoy the science!


Stefan Svebeck, Jun 2, 2020 10:32 AM

Yes, it's very difficult to know exactly what the data means. We have a little more data now, and we can see that some features are not used that much and some are used a lot. What we can't answer directly is why, but we are working on this and hopefully we will present some answers in our next episode.
