Generative AI & Learning Mendix: a blessing or a curse?

Yves Rocourt
Published in Mendix Community
8 min read · May 3, 2024

Every self-respecting software solution now includes generative AI in some form (Mendix among them, since the introduction of Mendix Chat); probably even the small bakery on the corner uses it!

Of course, there are some great use cases. Who hasn’t used it to summarize some text or get action notes from a meeting? You can even use it to set up a business! Company name? Plenty of available AI tools. The same goes if you need a logo. Why would you write your business plan yourself? Feed ChatGPT the relevant info and you are set. A new company can be set up in just a few minutes. What is not to love, right?

Potential risks of using GenAI for learning purposes

As with everything, there may also be some downsides. There is a conflict of interest between GenAI and academics. AI tools can become a curse when people stop thinking for themselves and rely solely on them. Before going any further, let’s ask ChatGPT what it thinks about this issue.

Question:
What are the risks of using ChatGPT or similar tools for study purposes?

Answer:
Using ChatGPT or similar AI tools for learning purposes comes with several potential risks:

1. Accuracy and Reliability: While AI models like ChatGPT strive for accuracy, they are not infallible. Responses may contain errors, inaccuracies, or outdated information. It’s essential to cross-verify information obtained from AI with reliable sources.

2. Bias: AI models can inadvertently reflect the biases present in the data used to train them. This can lead to biased or skewed information being presented. Users should be aware of this potential bias and critically evaluate the information provided.

3. Overreliance: Relying too heavily on AI tools for study purposes can hinder critical thinking skills and independent learning. It’s important to balance the use of AI with traditional study methods and personal research.

4. Privacy Concerns: Interacting with AI models may involve sharing personal or sensitive information. Users should be cautious about the data they disclose and ensure they are using reputable AI platforms with strong privacy protections.

5. Limited Understanding: AI models lack true understanding and consciousness. They can only generate responses based on patterns in the data they were trained on. As a result, they may struggle with nuanced or complex topics and cannot provide the depth of understanding that a human tutor or instructor might offer.

6. Ethical Considerations: There are ethical implications to consider when using AI tools, such as the potential for misuse or the impact on human labor (e.g., displacing human tutors or educators). Users should consider the broader ethical implications of relying on AI for study purposes.

To mitigate these risks, users should approach the use of AI tools for study purposes with critical thinking, skepticism, and awareness of their limitations. It’s essential to supplement AI-generated information with human expertise and traditional study methods to ensure a well-rounded learning experience.

Overreliance and limited understanding

Let’s focus on two topics in this list that are tightly connected: Overreliance and limited understanding (#3 & #5).

Both are major risks when using AI tools, especially if you are new to a certain field. For example, suppose you are just starting out as a Mendix developer and a user story requires you to implement a feature you are unfamiliar with: there are a couple of ways to deal with the situation.

You can solve it yourself by going through learning paths, documentation, and experimentation, or you can immediately ask someone who knows for the solution. Granted, the latter is probably the fastest, but is it also the best?

As a certified Mendix trainer, I don't think so. Of course, there have been moments where I pleaded with someone to just give me the answer. I vividly remember going to my dad to ask him for the solution to a math problem and getting frustrated when he wouldn't give it to me, instead asking me counter-questions and letting me figure it out through research and experimentation. Was it frustrating? Yes, no doubt about it.

At this point, I would like to take a moment and say to everyone I have ever trained: I feel you… I know what you went through with me. But I will not say sorry. However frustrated I was at not getting the solution when I asked for it, the experience taught me an important skill: how to figure something out on my own, and to understand what I was doing and how something works. That is, I believe, a vital skill, and one that is hard-won and earned. If my dad had given me the solution when I asked for it, I would have known the answer to that specific question. But when presented with a similar but slightly different problem, would I have known how to solve it? No, because the trick I had learned would not have worked, and I would have been back at square one. Now substitute AI tools for my father and the story is the same.

It also happens the other way around. People ask questions on forums to get help with a specific issue, and others 'help' them with an answer generated by an AI tool. Quite often these answers are very generic and may not help answer the question at all. Over-reliance on AI tools leads to a limited understanding of the underlying issues, which in turn leads to ever more reliance on AI tools.

Can GenAI be a tool for good?

So, what does this mean? Should you as a (starting) Mendix developer use AI tools? Is ChatGPT always wrong and should you avoid using it?

No, of course not. These tools are wonderful and can be enormously helpful. 'Helpful' is the keyword here. Developers shouldn't rely solely on tools like ChatGPT. As ChatGPT itself answered our query: a developer should always do the groundwork themselves. Learn and know what you are doing and why you are doing it, not just the what (which is all a tool like ChatGPT will give you). One of the strengths of GenAI, next to collecting a lot of information very fast, is that you can have a conversation with it. You can build on your initial question. For example, if you get an answer and you're not quite sure why it gives this particular answer, don't follow it blindly; just ask: "Hi ChatGPT, why is this the best solution to my problem?" Or ask it to explain something in a specific way.
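To make the "keep the conversation going" advice concrete: chat models only see the history you send them, so a follow-up question should travel together with the earlier exchange. Here is a minimal sketch of that idea; `call_model` is a stand-in for a real chat-completion API call, and the questions are just examples.

```python
# Sketch: keeping conversation context when questioning an AI assistant.
# The point: don't accept the first answer blindly. Send the follow-up
# ("why is this the best solution?") in the SAME conversation, so the
# model sees what it told you before.

def call_model(history):
    # Placeholder: a real implementation would send `history` to a
    # chat-completion endpoint and return the assistant's reply.
    return f"(model reply to: {history[-1]['content']})"

def ask(history, question):
    # Append the new question, get a reply, and record that reply too,
    # so the next question carries the full context.
    history.append({"role": "user", "content": question})
    reply = call_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
ask(history, "How do I validate an email field in Mendix?")
# The follow-up goes into the same history, so context is preserved:
ask(history, "Why is that the best solution to my problem?")

print(len(history))  # 4 entries: two questions, two answers
```

The design point is the growing `history` list: each follow-up is answered in light of everything said before, which is exactly what lets you interrogate an answer instead of accepting it at face value.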

Mendix has been adding a variety of AI tools to Studio Pro alongside Mendix Chat, the most popular of which is undoubtedly the ability to automatically create validation microflows. As with ChatGPT or Mendix Chat, having these microflows auto-created instead of building everything manually saves the developer tons of time.

But here, too, it is important to know why this tool creates the validation microflow the way it does. Why does it use the "sharktooth pattern"? Things like this are important to understand. If you need to create a far more complex validation with interdependencies, the auto-validation generator may not yield the best results in terms of readability and maintainability. A developer who is dependent on such a tool may not be able to recognize this, potentially resulting in a bad application. However, having learned the basics and understood the reasons why it should be that way, the chance of something like that happening is far smaller.
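Microflows are visual, but the idea behind the sharktooth pattern can be sketched in ordinary code: run every check, record feedback for each failure, and only then decide whether the input is valid, so the user sees all problems at once instead of one per attempt. The function and field names below are purely illustrative, not Mendix APIs.

```python
# Illustrative sketch of the idea behind the "sharktooth" validation
# pattern: every check runs, each failure records its own feedback
# message, and the data is valid only if no check failed. (In Studio Pro
# this appears as a zig-zag chain of decisions; names here are made up.)

def validate_customer(name, email, age):
    feedback = []  # one message per failed check

    if not name:
        feedback.append("Name is required.")
    if "@" not in email:
        feedback.append("Email address is not valid.")
    if age < 18:
        feedback.append("Customer must be 18 or older.")

    # Valid only when no check added a message.
    return (len(feedback) == 0, feedback)

valid, messages = validate_customer("", "nobody.example.com", 17)
# All three checks ran, so the user sees every problem at once:
print(valid, messages)
```

Contrast this with returning at the first failed check: the user would fix one field, resubmit, and only then discover the next problem. Understanding that trade-off is exactly the kind of groundwork that lets you judge whether a generated validation microflow fits a more complex, interdependent case.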

Many more examples could be discussed, but all in all, I think AI tools are neither a blessing nor a curse for training. Used "the right way", they can enrich and speed up the learning process and help developers find more information faster than they could by themselves. But, as discussed, this is a big but: if you just use them to get a "how-to" that you follow blindly, without knowing what you are doing and why, or simply because "it says so", then they can be a curse.

Conclusion

The key takeaway from this discussion is twofold. Firstly, when you are starting on something new, make sure you put in the hours, practice, and discuss your outcomes. Dig into how you can do something and why option A may be the better solution for scenario X.

Secondly, learn more about GenAI itself. Understanding how it works will also help (starting) developers navigate its pitfalls better. Developments in the field of GenAI are happening at lightning speed, which makes it difficult to predict what the future will bring. Regardless of this uncertainty, there is great value in Mendix developers learning how to integrate GenAI into their applications and using it to deliver more business value to their customers. A colleague and former AWS product manager at Mendix, Freek Brinkhuis, will dive into this topic soon. Keep an eye out for his blog!

Are you curious to learn more about how GenAI can help your business? Or do you want to know more about the Learning & Development opportunities we provide? Reach out to us at labs@theorangeforce.com.

Read more

From the Publisher

Inspired by this article to bring your ideas to life with Mendix? Sign up for a free account! You’ll get instant access to the Mendix Academy, where you can start building your skills.

For more articles like this one, visit our Medium page. And you can find a wealth of instructional videos on our community YouTube page.

Speaking of our community, join us in our Slack community channel. We’d love to hear your ideas and insights!

Yves is the Learning & Development Manager and a Senior Consultant at The Orange Force.