Thanks for the thoughts man. Here is what I am thinking about:
- The "why" of an ML solution depends on the specific algorithm: some are quite clear, while others are obscure. Take using a decision tree to recommend Offer X to a customer. This is easily explained by looking at the split rules of the tree, which read something like "Based on the training data, if the customer's age is between 25 and 35 and they have a kid, they're likely to buy X within 3 months." The system recommends Offer X based on those conditions, and the rules translate naturally into a human-readable front-end design. It gets tricky with deep learning. Because of how those algorithms work, the learned "rules" aren't easily readable by humans, which is the main hurdle to deploying deep learning in mission-critical and regulation-heavy areas. I just had a similar discussion with another friend and we're going to write something about this. Stay tuned. But yes, I've been thinking about this too.
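To make the decision-tree point concrete, here's a toy sketch (not any real system; the tree, feature names, and thresholds are all made up) of how split rules can be walked and turned into the kind of human-readable conditions described above:

```python
# Toy decision tree as nested dicts: each internal node has a "test",
# a "yes" branch (test holds) and a "no" branch (test fails).
tree = {
    "test": "age < 25",
    "yes": {"leaf": "no offer"},
    "no": {
        "test": "has_kid == 1",
        "yes": {"leaf": "recommend offer X"},
        "no": {"leaf": "no offer"},
    },
}

def extract_rules(node, conditions=()):
    """Walk every root-to-leaf path, yielding (rule text, decision) pairs."""
    if "leaf" in node:
        yield " AND ".join(conditions) or "(always)", node["leaf"]
        return
    yield from extract_rules(node["yes"], conditions + (node["test"],))
    yield from extract_rules(node["no"], conditions + (f'NOT ({node["test"]})',))

rules = dict(extract_rules(tree))
for rule, decision in rules.items():
    print(f"IF {rule} THEN {decision}")
```

Every path becomes a plain IF/THEN sentence a front-end can display, which is exactly what's hard to do for a deep net's weights.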
- Regarding feedback data quality, one ML approach is to build another model that handles, say, automatic tagging. Another, more front-end-focused approach is to combine pre-designed drop-down options (e.g., common reasons drawn from business domain knowledge) with a free-form text field. This strikes a balance among quality control, ease of use, and the freedom to capture cases the team hasn't seen before. Teams will probably need to add drop-down options iteratively over time to fold in the recurring free-form inputs. With the right incentives and front-end design, I think that hits the 80/20 for feedback quality control :)
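A minimal sketch of that drop-down-plus-free-text idea (everything here is hypothetical: the reason list, the threshold, the function names are made up for illustration), including the iterative step of promoting frequent free-text entries into new drop-down candidates:

```python
from collections import Counter

# Hypothetical predefined reasons from business domain knowledge.
REASONS = {"price too high", "missing feature", "switched vendor"}

def record_feedback(reason, free_text="", log=None):
    """Accept a predefined reason, or fall back to free text as 'other'."""
    if log is None:
        log = []
    if reason in REASONS:
        log.append({"reason": reason, "free_text": free_text})
    elif free_text.strip():
        log.append({"reason": "other", "free_text": free_text.strip()})
    else:
        raise ValueError("pick a reason or describe it in the free-text field")
    return log

def promote_candidates(log, min_count=2):
    """Free-text entries seen at least min_count times become drop-down candidates."""
    counts = Counter(e["free_text"].lower() for e in log if e["reason"] == "other")
    return [text for text, n in counts.items() if n >= min_count]
```

Running `promote_candidates` periodically gives the team a data-driven shortlist of new drop-down options, so the free-text field stays a catch-all rather than the default.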