How We Built the Experience of OutSystems AI-Assisted Development

Magda Pereira
OutSystems Engineering
8 min read · Jan 8, 2020

Having users enjoy a new feature is the cherry on top of the cake. It’s the end of a very long journey through user experience, research, and continuous iteration. That was the road we took when building our recent AI-Assisted Development feature, figuring out along the way how to make it work best for our users.

AI-Assisted Development provides next-step suggestions anywhere in your logic flows.

After announcing our assistant last year, we opened the feature to a few users to get insights from the field and help us understand how we could improve it. This was the start of a journey that would be crucial in defining what AI-Assisted Development is today. But let’s backtrack a bit and start with what we built in the first place, and why.

The First Version of AI-Assisted Development

At the outsystems.ai team, we have the goal of making application development 100x faster and more accessible to everyone. The idea of having AI-Assisted Development came up as a way to move closer to that objective. With it, we would be able to accelerate power users, while making it easier for novices to develop in our low-code visual language.

Here lies one of the biggest challenges: how do you create an AI-powered UX that enables experts and novices alike?

NextStep 2018 version (left) and the latest version in 2019 (right).

Can you spot the differences? The version on the left is what we showed at NextStep 2018, and on the right is the version currently in Service Studio. I want to tell you how we reached the publicly available version: what we understood had to change, what we tried, our failures, and our wins.

Building the Experience

Before digging in, let me explain some terms you might not be familiar with. The radar is the blue dot used to trigger the suggestions, the suggestions are the logic elements we recommend adding to the flow, and the connector is the arrow that connects logic elements.

The three most important concepts to understand our feature.
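To make these concepts concrete, here’s a minimal sketch of how they might relate as a data model. The names and fields are purely illustrative assumptions on my part, not the actual Service Studio internals.

```typescript
// Hypothetical model of the three concepts; names and fields are
// illustrative only, not the real Service Studio internals.
interface Connector {
  id: string;
  fromNodeId: string;   // logic element the arrow starts at
  toNodeId: string;     // logic element the arrow points to
  lengthPx: number;     // rendered length, relevant for the radar later on
}

interface Suggestion {
  label: string;        // human-readable text shown in the suggestion box
  nodeType: string;     // the logic element the assistant would insert
  score: number;        // model ranking, used to order the list
}

interface Radar {
  connectorId: string;  // the connector the blue dot sits on
  visible: boolean;     // whether the dot is currently shown
}
```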

To get from the earlier version to the current one, we had to iterate a lot. To iterate on the experience, we used the Product Experimentation Rings method. This process emerged naturally and became a mechanism for building confidence in what we were building.

The method we’ve used to iterate: Product Experimentation Rings.

Our team has people from different backgrounds, from AI research and product experience to product development and OutSystems developers, which gives us different perspectives and makes the first ring a great start. I would, for instance, prototype a mockup and then validate it with the team in a small usability test. At this phase, big usability issues usually stand out very quickly. When an issue stood out but we didn’t fully understand it, we would validate it in the next ring.

Each ring should provide more confidence about which direction to take. The main goal of each stage was always to validate experiments, highlighting the pros and cons of each solution.

Our Experimentation Rings process.

Don’t think this was easy or straightforward. We ran thousands of experiments, sometimes dozens in the same day, and around 95% of them were discarded. We went through many iterations in the first three rings before sending the feature to be tested by a few selected beta users.

Moving to the Outer Rings

After working hard to get the first version of AI-Assisted Development out there, we opened an early access program that anyone interested could subscribe to, and we made the feature available to that group. It didn’t take long for us to learn a ton, as users were eager to provide feedback and identify ways we could improve the feature.

I’ll walk you through the main things we learned at this point, as well as the solutions we came up with to address them. This is just a small part of all the feedback we gathered, but for every improvement opportunity, we came up with quick solutions.

Improving the Assistant’s Suggestions

Initially, the feature only suggested elements that were already available in the toolbox on the left side of Service Studio. Experienced users knew these by heart, so we weren’t improving their work in any way. New users saw value in those tips, but quickly got frustrated and confused when the feature suddenly couldn’t provide a next-step suggestion.

To benefit users of all skill levels, we considered suggesting nodes that were harder to build manually, letting users complete more time-consuming tasks with only a couple of clicks. After mapping out this solution, we made mockups with examples of more complex suggestions and ran usability tests to validate our theory.

The evolution of the suggestions the assistant provides.

Assistance is about guiding the user through a task. While observing users with less OutSystems experience, we noticed that they knew what they wanted to do but didn’t know how to get there. It made sense to bring the suggestions’ wording closer to natural language.

The evolution of the suggestions’ language.

We used usertesting.com to validate how understandable each label was, testing with users who work in software development and have technical backgrounds. This would guarantee that users would both understand the suggestions and be able to improve their work through them.

Reducing Friction

In its first version, the assistant presented suggestions on every connector of the logic flow, so users could access them at any time. Besides the suggestions, these connectors already supported several other interactions: they could be dragged, selected, moved, and switched.

Interacting with suggestions over time.

This didn’t seem like an issue, but placing a radar in the middle of a connector turned out to be a massive effort, since it had to work smoothly without disturbing the other interactions. I’m not going to lie: this was the toughest challenge. More than a technological issue, this was a huge experience challenge!

To solve it, we first ran field observations to understand how users were interacting with the connector. Then we matched this information with the average size of a connector and started making some compromises. We couldn’t give suggestions when the connector was very small, a compromise that remains in the current version.

The suggestions won’t trigger in smaller connectors so they don’t get in the way.

In later stages, after getting feedback from the beta users, we made additional tweaks. The hover and clickable areas now have dynamic sizes that change according to the connector’s size.

Hover and clickable areas to interact with the assistant.
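As a rough illustration of both rules, the small-connector cut-off and the dynamic hit area, here’s a sketch in TypeScript. The thresholds and proportions are assumptions for illustration; the real values came out of the field observations and aren’t public.

```typescript
// Illustrative thresholds only; the real values came from field
// observations and were tuned over many experiments.
const MIN_CONNECTOR_LENGTH_PX = 48; // below this, the radar never shows

// Rule 1: very small connectors get no suggestions at all.
function shouldShowRadar(connectorLengthPx: number): boolean {
  return connectorLengthPx >= MIN_CONNECTOR_LENGTH_PX;
}

// Rule 2: the hover/clickable area scales with the connector, clamped
// so it never swallows the drag and select interactions around it.
function radarHitAreaPx(connectorLengthPx: number): number {
  const proportional = connectorLengthPx * 0.25;
  return Math.min(Math.max(proportional, 12), 32);
}
```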

We also show a different cursor when the user moves the mouse over the radar’s clickable area, and we added a loading cursor that appears while the suggestions are being retrieved.

Now you’ll see a small loading icon while the suggestions are being loaded.
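In code, that feedback loop could look something like the sketch below, assuming a DOM element hosting the radar and a hypothetical fetchSuggestions() call to the assistant’s backend.

```typescript
// Hypothetical backend call that returns suggestion labels.
declare function fetchSuggestions(connectorId: string): Promise<string[]>;

// Show a loading cursor while suggestions are retrieved, then restore
// the pointer cursor that marks the radar as clickable.
async function openSuggestions(radarEl: HTMLElement): Promise<string[]> {
  radarEl.style.cursor = "progress";
  try {
    return await fetchSuggestions(radarEl.dataset.connectorId ?? "");
  } finally {
    radarEl.style.cursor = "pointer";
  }
}
```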

While using the feature in Service Studio ourselves, we noticed that every time we added an If node, one of its connectors was not created automatically, which broke the flow. We realized we could use the manual drag of the connector from the If node to trigger the suggestion box automatically.

We turned the broken experience (left) into automatically triggered suggestions (right).
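Here’s a minimal sketch of that trigger, under the assumption that the editor raises an event when a connector drag is released; the hook names are hypothetical.

```typescript
// Hypothetical editor hook that opens the assistant's suggestion box.
declare function openSuggestionBox(): void;

// Releasing a connector dragged out of an If node onto empty canvas
// opens the suggestion box instead of leaving a dangling connector.
function onConnectorDragEnd(sourceNodeType: string, droppedOnNode: boolean): void {
  if (sourceNodeType === "If" && !droppedOnNode) {
    openSuggestionBox();
  }
}
```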

I must confess that I wasn’t feeling very confident about how discoverable this would be, as we weren’t telling users how to do it. However, during usability tests, users found it out by themselves; it felt like the natural way to do it. Curiously, nowadays there are users who delete the End node just to build the entire logic flow using the connector drag!

Exceeding Expectations

When we started thinking about how to give users better suggestions and support, automatically filling the properties of suggested elements seemed like a huge help we could provide! And when we asked ourselves how the user would notice it, we decided that a pop-up stars animation, similar to what Service Studio already shows for similar actions, would be the best solution!

Stars animation popping out when automatically filling properties.

When we validated it in the usability tests, the results were very consistent, and the most common reaction was: “Stars! Wow, this is filling things for me!”

All the iterations I covered here are just a few of the experiments we made to put together the feature as you know it today, but we always followed this structure. We worked closely with our users in two sequential moments: first defining what the problem was and the best way to address it, and then understanding what the best solution was.

The Road Ahead

The best part of iterating on the feature’s experience is that we didn’t stop there; we’re still doing it. Since general availability, we’ve already shipped several improvements! None of this would be possible without your precious feedback, one of the key factors that helps us keep evolving.

We’re already working on other areas, such as data mapping and binding, to accelerate and assist all developers even further. This includes those pesky structure-to-structure data mappings, ensuring assignment suggestions come with the right pre-filled variables, and guaranteeing that all suggestions are smarter and more accurate.

You can expect many more updates soon, so stay tuned!
