Design Principles for Intelligent Systems

The introduction of SAP Leonardo Machine Learning Foundation in 2017 opened the door to a new generation of intelligent enterprise systems. With it, people can create, run, consume, and maintain self-learning applications. The foundation connects developers, partners, and customers to machine learning technology through the SAP Cloud Platform. The main challenge here, from a design perspective, is to understand the impact of these new technologies and to come up with a reliable design system for these intelligent applications.

How does machine learning impact the user interface (UI)? Should I explicitly surface the system intelligence? How much should I explain? What is the feedback loop, and when is it important? Are there any UI patterns I can follow?

To share our thoughts on this important topic and get your valuable feedback, we are starting a new series of articles on the subject.

The 4 Main Principles

A properly designed intelligent SAP system extends the cognitive capabilities of a human user. As with past generations of tools, our aim should be to empower users and improve the outcome of human work. Based on our experience in recent projects, we have elaborated on several design principles, which we would like to share with you.

Principle 1: Human in control

In a business environment, actions triggered in a system have a tangible outcome in the real world; these impact the goals and profits of the company. Because the responsibility and accountability for these actions still lies with the human user, humans should remain in control of the outcome or, at least, define which level of control they want to preserve.

Example: A master data specialist needs to adjust an existing business partner in the system. At this point, an intelligent system may recommend several additional improvements (e.g. data enrichment from 3rd-party sources or data quality improvements). However, if the system just overwrites something the human already did in the system, this can easily become critical and break the user’s trust. It is better if the system supplies the user with suggestions and gives her the opportunity to resolve the conflict if needed.
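The suggest-don’t-overwrite pattern can be sketched in a few lines. This is a minimal illustration, not SAP code; the names (`Suggestion`, `BusinessPartnerRecord`) are hypothetical. The key idea is that enrichment never touches the record directly: every change is queued and applied only after explicit user confirmation.

```python
from dataclasses import dataclass, field


@dataclass
class Suggestion:
    """A proposed change the user can accept or reject."""
    field_name: str
    current_value: str
    proposed_value: str
    source: str  # e.g. "3rd-party enrichment"


@dataclass
class BusinessPartnerRecord:
    data: dict
    pending: list = field(default_factory=list)

    def propose(self, suggestion: Suggestion) -> None:
        # Never overwrite user-entered data: queue the change instead.
        self.pending.append(suggestion)

    def accept(self, suggestion: Suggestion) -> None:
        # Applied only after explicit confirmation -- the human stays in control.
        self.data[suggestion.field_name] = suggestion.proposed_value
        self.pending.remove(suggestion)

    def reject(self, suggestion: Suggestion) -> None:
        self.pending.remove(suggestion)
```

The design choice is that `accept` and `reject` are the only paths out of the queue, so the level of control always remains with the user.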

Principle 2: Augment human capabilities

To gain the user’s trust and foster successful adoption, an intelligent system should aim to upskill human experts, rather than replace them. Measures that extend the power and reach of the individual could be:

  • Providing more transparency and efficient tools for decision-making processes
  • Integrating user feedback
  • Presenting information in an understandable way

By contrast, hiding information, simplifying the truth, or reducing the number of options without sufficient transparency are things that make the user a “slave” to the system. The user must be able to understand and control the intelligent system.

Example: A sales representative checks why one of his key accounts was not invited to an important marketing event. No-go: the system tells him that the customer does not have a sufficient ranking score. Better: the system explains that the customer has not participated in the last 3 events.

In addition, the customer’s current technology stack is being upgraded and would not let them benefit from the product that will be introduced at this event. Even better: the system suggests an alternative event for the customer.

Principle 3: Shared values and ethics

Embedding shared values is an important aspect of building trust. The basis of an intelligent system, its data, and its processes must be protected from both intentional and unintentional bias. A trusted intelligent system is robust and accountable. It can learn, but it should also be able to forget. It is not only trained on data, but it also learns from human experience and best practices. On the other hand, it should be balanced with a set of rules, or an “ethics module,” which reflects the core values and goals of the enterprise.

Principle 4: Efficient automation

The degree of automation depends on the business case and what you want to achieve. We believe that intelligent systems should reduce the effort a user needs to get something done. This means defining the right level of automation for each use case. Where full automation is not feasible, we should aim for greater efficiency. By combining automation with transparency, improved use of existing information, and learning from many sources, including users’ feedback, intelligent systems can help users obtain better results with fewer steps.

Example: A manufacturing engineer would be very upset if the system postponed an important production order because of ongoing changes in the product design without giving him a chance to find a workaround. The better approach in this case would be a step-by-step introduction of the intelligence: notify the user about the new situation, provide him with the relevant information for a better decision, and learn from his feedback. As trust builds, the system may provide more meaningful recommendations and take over in repetitive cases.
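The step-by-step introduction of intelligence can be sketched as an explicit escalation ladder. This is a minimal sketch with invented names (`AutomationLevel`, `decide_action`); the point is that the configured trust level puts a hard ceiling on what the system may do, and even at the highest level the system falls back to recommending when its confidence is low.

```python
from enum import Enum


class AutomationLevel(Enum):
    NOTIFY = 1     # inform the user about the new situation, take no action
    RECOMMEND = 2  # propose an action and wait for approval
    AUTOMATE = 3   # act autonomously in repetitive cases, report afterwards


def decide_action(level: AutomationLevel, confidence: float,
                  threshold: float = 0.9) -> str:
    """Pick the system behavior from the trust level and model confidence."""
    if level is AutomationLevel.NOTIFY:
        return "notify"
    if level is AutomationLevel.AUTOMATE and confidence >= threshold:
        return "execute"
    # Low confidence or RECOMMEND level: keep the human in the loop.
    return "recommend"
```

In the production-order example above, the system would start at `NOTIFY`, move to `RECOMMEND` as feedback accumulates, and reach `AUTOMATE` only for well-understood, repetitive cases.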


We believe that these guiding design principles cannot be communicated and discussed early enough within a team. Designing an intelligent system is a demanding process that requires a solid knowledge of business processes and the underlying technology. Which level of automation is appropriate in the future business process? What kind of data do we have? Which data attributes and user signals should be collected for training? Sharing and understanding the principles will help guide your team through the journey to great AI applications that people will want to use.

We hope that the above principles will help you to evaluate and design your own business use cases. We would be delighted to get your feedback and collaborate with you on pilot projects that involve intelligent systems and machine learning.

Originally published at on January 29, 2018.