Approachable AI — Useability (part 1)
Elipsa’s mission is to create Approachable AI, enabling organizations to scale by empowering their business users to assume the role of the data scientist through a no-code solution. The three pillars of this mission are useability, explainability, and accessibility. This is a three-part series explaining how a focus on each of these will enable broader adoption of predictive analytics and a faster journey from data to insight.
According to a Forbes survey, more than 75% of organizations cite limited AI skills as a roadblock for their AI initiatives. Business users are not trained in machine learning and data science, so organizations must scale up teams of data scientists or hire expensive consulting firms just to get started. It is no wonder, then, that the same survey shows 90% of organizations struggling to justify the ROI.
Existing machine learning platforms suffer from a lack of useability. Imagine using Excel if every cell had to be a formula, or Salesforce if you needed to write SQL queries just to find and connect customer information. The complexities of existing predictive analytics platforms are too nuanced for non-technical users, causing firms to take the data out of the hands of those who understand it best.
The elipsa platform is centered around useability: a simple, intuitive interface that enables business users to apply predictive analytics with clicks, not code.
Step 1 — What type of question are you looking to answer?
What is the problem you are looking to solve? Predictive analytics needs the question in order to find the answer. Data scientists don’t come up with the question; the business knows the questions to ask but lacks the capability to use advanced methods to help find the answers.
So the first step is a simple one: what type of problem are you looking to solve? Through the elipsa platform, users can apply predictive analytics to predict a value, predict the outcome of an event, group like items together, or find outliers in their data.
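Those four question types correspond to standard machine-learning task families. As a rough sketch (using scikit-learn purely for illustration; elipsa’s internals are not public, and the model choices below are assumptions, not the platform’s actual algorithms):

```python
# Hypothetical mapping from the four question types to standard ML task
# families, sketched with scikit-learn. Illustrative only.
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

QUESTION_TYPES = {
    "predict a value": LinearRegression,             # regression
    "predict the outcome of an event": LogisticRegression,  # classification
    "group like items together": KMeans,             # clustering
    "find outliers": IsolationForest,                # anomaly detection
}

for question, model_cls in QUESTION_TYPES.items():
    print(f"{question!r} -> {model_cls.__name__}")
```

The point of a no-code platform is that the user only ever picks the question on the left; the mapping to a model family happens behind the scenes.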
For the rest of this post, we will focus on the ability to predict an event. When people think of predicting events, they often think of the weather or the outcome of an election. However, an event is anything with a defined outcome. The flip of a coin is an event, a customer purchasing an item on a website is an event, and a machine part failure is an event. Predicting such an event is a function of the question you are asking and the data you are using to answer it.
Step 2 — What are the answers to the question that you want the system to learn from?
The elipsa focus is on clicks, not code. Users can drag and drop their own CSV/Excel files and seamlessly select the column they are looking to predict: the target. Think of the target as the answers to the test that you want the system to learn. The key is that you want the system to learn from this column, not memorize it. Memorization is a common problem with many machine learning algorithms, a concept called overfitting. We will not get into the technical details in this post, but the elipsa platform optimizes models behind the scenes in a way that promotes model learning rather than model memorization.
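The learning-versus-memorizing distinction can be seen in a few lines of code. This is a generic illustration with made-up data and scikit-learn, not elipsa’s method: an unconstrained decision tree memorizes its training answers perfectly, while a constrained one is forced to learn the general pattern.

```python
# Illustrative sketch of overfitting ("memorizing" vs. "learning").
# Data and model choices are invented for the example.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                    # predictor columns
y = X[:, 0] + 0.5 * rng.normal(size=500) > 0     # noisy target column

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unconstrained tree can memorize every training answer...
memorizer = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
# ...while a depth-limited tree must learn the underlying pattern.
learner = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

print("memorizer train/test:", memorizer.score(X_train, y_train),
      memorizer.score(X_test, y_test))
print("learner   train/test:", learner.score(X_train, y_train),
      learner.score(X_test, y_test))
```

The memorizer scores perfectly on data it has already seen but drops on held-out data; that gap is what automated model optimization works to minimize.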
Step 3 — What do you want to use to predict the answer?
Once you tell the system what you want to predict, you simply choose which columns to use as the predictors. The system will sort through large data sets to find predictive patterns, but it relies on the user’s domain expertise to tell it which relevant attributes to use.
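Steps 2 and 3 together amount to splitting a table into a target column and predictor columns. A minimal sketch with pandas, using an invented machine-failure table (in practice the user would upload a CSV, e.g. via `pd.read_csv`; all column names here are hypothetical):

```python
import pandas as pd

# A tiny made-up table stands in for the user's uploaded CSV/Excel file.
df = pd.DataFrame({
    "temperature":   [71.2, 88.9, 65.4, 90.1],
    "vibration":     [0.2, 0.9, 0.1, 1.1],
    "runtime_hours": [120, 4300, 80, 5100],
    "part_failed":   [0, 1, 0, 1],
})

target = "part_failed"                                      # step 2: the target
predictors = ["temperature", "vibration", "runtime_hours"]  # step 3: domain-chosen

y = df[target]       # the "answers" the system learns from
X = df[predictors]   # the attributes used to predict those answers
```

In a no-code interface, these two assignments are the two clicks the user makes.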
Step 4 — Putting it all together, without the technical jargon
Things have been pretty straightforward thus far, especially compared to the predictive analytics platforms on the market. However, this is where the jargon typically kicks in and begins to lose non-technical users.
Once you know the question (your target) and what you want to use to predict it (your predictors), the journey from data to insight starts to get fairly technical. A few of the key next steps involve feature engineering, model selection, and parameter tuning. Here is where legacy platforms ask whether you want to remove collinearity or apply a logarithmic transformation, and the process goes from straightforward to overly complicated.
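For a sense of what that jargon actually asks of the user, here is roughly what “remove collinearity” and “logarithmic transformation” mean in code. The data and thresholds are invented; this is the kind of step a no-code platform hides rather than a prescription:

```python
import numpy as np
import pandas as pd

# Invented predictor table: runtime_days is (almost) runtime_hours / 24.
df = pd.DataFrame({
    "runtime_hours": [80, 120, 4300, 5100],
    "runtime_days":  [3.3, 5.0, 179.2, 212.5],
    "vibration":     [0.1, 0.2, 0.9, 1.1],
})

# "Remove collinearity": drop a predictor that is nearly a copy of another.
corr = df.corr().abs()
print(corr.loc["runtime_hours", "runtime_days"])  # ≈ 1.0
df = df.drop(columns=["runtime_days"])

# "Logarithmic transformation": compress a heavily skewed predictor.
df["log_runtime"] = np.log1p(df["runtime_hours"])
```

Each of these is a reasonable modeling choice, and each is exactly the kind of decision a non-technical user should not be forced to make by hand.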
Machine learning needs to be transparent (we get into that in the post on explainability), but that fine detail and choice belong in the results, not in the model-creation process, if you want that process to be useable for the average insight seeker. The typical user does not know the difference between a decision tree and an XGBoost model well enough to select between them, and our stance is that they should not have to. Machine learning has matured to the point where the system can choose the appropriate algorithm for them.
The elipsa platform automates the data science experiment. We do not ask the user to specify which algorithm to run; instead, we automatically run through up to 20 different algorithms. The platform iterates through different combinations of model parameters and different combinations of methods to find which predictors are most useful. The end result is a model with the best parameters and best predictors to answer your question, all in an automated workflow that is intuitive and, most importantly, useable for the business user.
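The automated experiment described above can be sketched generically: try several algorithms, tune each one’s parameters with cross-validation, and keep the winner. This is an AutoML-style illustration with scikit-learn and synthetic data, not elipsa’s actual pipeline or algorithm list:

```python
# Generic sketch of an automated model search: multiple algorithms, multiple
# parameter combinations, best cross-validated model wins. Illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=8, random_state=0)

# A small stand-in for the platform's "up to 20 different algorithms".
search_space = [
    (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    (DecisionTreeClassifier(random_state=0), {"max_depth": [3, 5, None]}),
    (RandomForestClassifier(random_state=0), {"n_estimators": [50, 100]}),
]

best_score, best_model = -1.0, None
for model, grid in search_space:
    search = GridSearchCV(model, grid, cv=5).fit(X, y)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

print(type(best_model).__name__, round(best_score, 3))
```

The user never sees this loop; they see only its end product, the best model for their question.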
A deep understanding of the data lies in the hands and minds of the business user. When the data transitions away from those users to their technical counterparts, a lot of value is lost, and it can take weeks or even months to see results from AI initiatives. This leads to the statistic referenced at the top of the post: organizations struggle to see and justify ROI. The best approach is a hybrid that combines data science with domain expertise. It speeds up time to insight, produces better predictive models, and delivers a clearer sense of ROI in minutes, not weeks. Our goal is to provide that with approachable AI, and it starts with useability.