Mendix ML Kit — Be a machine learning practitioner in low code development.



The Machine Learning Kit is a comprehensive tool designed to facilitate the incorporation of machine learning (ML) models into applications developed on the Mendix low-code platform.

The Kit enables developers to deploy AI-enhanced applications with ease, supporting a wide range of ML capabilities and use cases, from sentiment analysis and object detection to anomaly detection, recommendations, and forecasting. It’s tailored to increase workplace efficiency, reduce costs and risks, and enhance customer satisfaction through smarter, automated solutions.

In this article, I will explain how Mendix uses ONNX models in the ML Kit and walk through examples of classification, object detection, and style transfer.

ONNX Support: The ML Kit is based on the Open Neural Network Exchange (ONNX) framework, promoting interoperability among different AI frameworks. This means developers can train ML models in their preferred AI framework, convert them into the ONNX format, and then seamlessly integrate them into Mendix applications.

Case 1: Iris classification with Decision Tree.

In this example, we will focus on the Iris dataset, a classic in machine learning and statistics and a common first dataset for new developers, often used to demonstrate the capabilities of various algorithms, including decision trees. The dataset consists of 150 samples from three species of iris flowers (Iris setosa, Iris virginica, and Iris versicolor). Each sample includes four features: the lengths and the widths of the sepals and petals, in centimeters.
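As a quick intuition for what the decision tree learns here: a depth-2 tree can separate the three species using just the two petal measurements. The thresholds below are typical values such a tree learns on this data (the actual iris_dt.onnx model may split differently), sketched in Python:

```python
def classify_iris(sepal_len, sepal_wid, petal_len, petal_wid):
    """Toy depth-2 decision tree for the Iris dataset.

    The split thresholds are typical of what a tree learns on this
    data; the iris_dt.onnx model from the article may differ.
    """
    if petal_len < 2.45:      # setosa is cleanly separable on petal length
        return "Setosa"
    if petal_wid < 1.75:      # second split separates the other two species
        return "Versicolor"
    return "Virginica"

print(classify_iris(5.1, 3.5, 1.4, 0.2))  # a typical setosa sample -> Setosa
```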

  • In a Mendix application, we can create an input for the ML Kit.

Normally, to obtain a model you need to train it yourself, a process usually done in Python or other tools. The example linked below shows how to build an iris model and convert it to an ONNX file.

https://onnxruntime.ai/docs/api/python/auto_examples/plot_train_convert_predict.html

However, I have attached the model below the article, so you just need to import the iris_dt.onnx model into your app.

  • Add ML Model mapping:
  • Click Import and browse to the .onnx model file:
  • Double-check your application directory; the file will be stored there:

The output label will be returned as

if $outputObject/Output_label = 0
then 'Setosa'
else if $outputObject/Output_label = 2
then 'Virginica'
else 'Versicolor'
  1. Create a page to create a new iris input.
  2. Pass the input into ML Model mapping and return the output object.
  3. Verify the output object to define irisClass.
  4. Change the irisClass according to the result.
  5. Close page.

The Inference subflow is simply to call the ML Model and return the output object.

Result for this use case:

Case 2: Object detection with ResNet50.

What is ResNet50?

ResNet50 is part of the ResNet (Residual Network) family, which was introduced by Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun in their 2015 paper titled “Deep Residual Learning for Image Recognition.” This model was designed to solve the problem of vanishing gradients in very deep neural networks, enabling the training of networks with a much larger number of layers, significantly improving performance on visual recognition tasks.

  • In the Mendix domain model, create one entity named ResNet with System.Image as its generalization.
  • Add a new Mendix ML Mapping by importing the resnet50.onnx model.
  • After a successful import, you will need to double-check to ensure that the model has been stored in this path.
  • Make an overview page based on the entity ResNet.
  • In the Create New action, we browse for a picture; after saving, the flow processes the image to detect the object.
  1. Create a Preprocess Java action with ResNet as its input. The first setup for this Java action is:

On the coding side:

// BEGIN USER CODE
final ByteArrayOutputStream bos = new ByteArrayOutputStream();
this.RawImage.getContents(getContext(), bos);
byte[] binaryImage = bos.toByteArray();
Core.getLogger("ResNet50").info("binaryImage size in byte(s): " + binaryImage.length);
Mat img = Imgcodecs.imdecode(new MatOfByte(binaryImage), Imgcodecs.IMREAD_COLOR);

// Resize image to 224x224 (ResNet50-specific; change as per your needs)
Mat rim = new Mat();
Size sz = new Size(224, 224);
Imgproc.resize(img, rim, sz);

// Normalize image with the ImageNet mean and standard deviation
// (again, specific to this ResNet50 model)
float[] mean = new float[] {0.485f, 0.456f, 0.406f};
float[] std = new float[] {0.229f, 0.224f, 0.225f};
float[][][][] inputArray = new float[1][3][224][224];
for (int i = 0; i < 224; i++) {
    for (int j = 0; j < 224; j++) {
        for (int k = 0; k <= 2; k++) {
            double[] rawValue = rim.get(i, j);
            // OpenCV stores pixels in BGR order, so index 2 - k flips to RGB
            float normalizedValue = (((float) (rawValue[2 - k] / 255) - mean[k]) / std[k]);
            inputArray[0][k][i][j] = normalizedValue;
        }
    }
}

// Convert the array to Base64 for the ML Kit input entity
final InputStream is = MLKit.toInputStream(inputArray);
final String base64 = MLKit.toBase64(is);

// Create the output entity object
final IMendixObject outputObject = Core.instantiate(getContext(), "ResNet50.ML_Input_Entity_ResNet50ModelMapping");
outputObject.setValue(getContext(), "Data", base64);
return outputObject;
// END USER CODE
// BEGIN EXTRA CODE
static {
    nu.pattern.OpenCV.loadShared(); // OpenCV initialization
}
// END EXTRA CODE
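The heart of this action is the per-channel ImageNet normalization plus the BGR-to-RGB flip (OpenCV decodes pixels in BGR order). Stripped of the Mendix and OpenCV plumbing, the arithmetic for a single pixel looks like this in Python:

```python
MEAN = [0.485, 0.456, 0.406]  # ImageNet per-channel mean (RGB)
STD = [0.229, 0.224, 0.225]   # ImageNet per-channel std (RGB)

def normalize_pixel(bgr, k):
    """Normalize channel k (0=R, 1=G, 2=B) of one OpenCV-style BGR pixel.

    Mirrors the Java action: OpenCV stores B, G, R, so channel k of the
    RGB output reads index 2 - k of the raw pixel.
    """
    return (bgr[2 - k] / 255.0 - MEAN[k]) / STD[k]

# The red channel of a pure-white pixel (255, 255, 255):
red = normalize_pixel((255.0, 255.0, 255.0), 0)
print(round(red, 4))  # (1.0 - 0.485) / 0.229 -> 2.2489
```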

2. Now the return value from preprocessing becomes the input of the ML model, and the model returns the output as an object.

3. To interpret the output of the ML model, we need a post-process that parses the output, finds the index with the highest confidence, and returns a meaningful label.

// BEGIN USER CODE
// Decode the Base64 output back into an array of 1,000 class scores
float[] outputScores = new float[1000];
final InputStream is = MLKit.fromBase64(ResnetCategory.getResnetv17_dense0_fwd());
// Read the InputStream and write into the provided array
MLKit.toArray(is, outputScores);

// Find the index of the top-1 score
float max = Float.NEGATIVE_INFINITY;
int index = 0;
for (int i = 0; i < outputScores.length; i++) {
    if (outputScores[i] > max) {
        max = outputScores[i];
        index = i;
    }
}
String result = classes.get(index);
Core.getLogger("ResNet50").info("ResNet50 result: " + result);
return result;
// END USER CODE
// BEGIN EXTRA CODE
final Map<Integer, String> classes = new java.util.HashMap<>();
{
    try {
        // Load the ImageNet class labels stored next to the model
        File basePath = new File(Core.getConfiguration().getBasePath(), "ml");
        File filePath = Paths.get("resnet50", "imagenet_classes.txt").toFile();
        final File classesFile = new File(basePath, filePath.getPath());
        Scanner reader = new Scanner(classesFile);
        while (reader.hasNextLine()) {
            String line = reader.nextLine();
            String[] split = line.split(":");
            Integer id = Integer.valueOf(split[0].trim());
            String cls = split[1].trim();
            classes.put(id, cls);
        }
        reader.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
// END EXTRA CODE
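Stripped of the Mendix plumbing, this post-process is just an argmax over the scores followed by a label lookup. A Python sketch with illustrative scores (the class names are the first few real ImageNet labels, but the scores are made up):

```python
def top1_label(scores, classes):
    """Return the class name with the highest confidence score."""
    index = max(range(len(scores)), key=lambda i: scores[i])
    return classes[index]

# Illustrative scores and labels; the real model emits 1,000 ImageNet scores.
scores = [0.1, 3.7, -1.2, 2.0]
classes = {0: "tench", 1: "goldfish", 2: "great white shark", 3: "tiger shark"}
print(top1_label(scores, classes))  # -> goldfish
```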

4. When step 3 is completed, the return value is the label of the object as the model interprets it; we then need to write it back to the image label.

5. Finally, close the page.

Case 3: Image style transfer.

The concept gained significant attention with Gatys et al.’s 2015 paper, “A Neural Algorithm of Artistic Style,” which demonstrated how convolutional neural networks (CNNs) could be used to separate and recombine the content and style of natural images. The technique leverages the layers of a pre-trained deep neural network (typically VGGNet trained on the ImageNet dataset) to understand the features and textures of both the content and style images.

I will demonstrate how the model can apply a specific style to other images. The static style I use in this example is mosaic.onnx.

In the Mendix domain model, create two image entities named OriginalImage and AugmentedImage. OriginalImage is the image that will be input into the model, and AugmentedImage is a copy of OriginalImage with a new style applied.

  • Import ML Model mapping
  • Make a page with two sections: the first shows the original image, and the second the augmented images.

For the New action on the original image, I create a microflow containing the steps below:

1. Original preprocessing: convert the image into the ML model input.

// BEGIN USER CODE
// 1. Read the image
final ByteArrayOutputStream bos = new ByteArrayOutputStream();
image.getContents(getContext(), bos);
final byte[] binaryImage = bos.toByteArray();

// 2. Resize the image to 224x224
final ByteArrayInputStream bis = new ByteArrayInputStream(binaryImage);
final BufferedImage originalImage = ImageIO.read(bis);
final BufferedImage resizedImage = resizeImage(originalImage, 224, 224);

// 3. Transform the resized image into the input feature (NCHW, raw 0-255 values)
final IMendixObject mxObject = Core.instantiate(getContext(), "StyleTransfer.ML_Input_Entity_Mosaic_ML_Model");
final ML_Input_Entity_Mosaic_ML_Model inputObject = ML_Input_Entity_Mosaic_ML_Model.initialize(getContext(), mxObject);
final float[][][][] inputFeature = new float[1][3][224][224];
for (int i = 0; i < 224; i++) {
    for (int j = 0; j < 224; j++) {
        final Color color = new Color(resizedImage.getRGB(j, i));
        inputFeature[0][0][i][j] = color.getRed();
        inputFeature[0][1][i][j] = color.getGreen();
        inputFeature[0][2][i][j] = color.getBlue();
    }
}
final InputStream is = MLKit.toInputStream(inputFeature);
final String base64 = MLKit.toBase64(is);
inputObject.setInput1(base64);
return inputObject.getMendixObject();
// END USER CODE
// BEGIN EXTRA CODE
private BufferedImage resizeImage(BufferedImage originalImage, Integer targetWidth, Integer targetHeight) {
    final Image resultingImage = originalImage.getScaledInstance(targetWidth, targetHeight, Image.SCALE_DEFAULT);
    final BufferedImage outputImage = new BufferedImage(targetWidth, targetHeight, BufferedImage.TYPE_INT_RGB);
    outputImage.getGraphics().drawImage(resultingImage, 0, 0, null);
    return outputImage;
}
// END EXTRA CODE
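The nested loop in step 3 converts the image from the usual HWC (height, width, channel) pixel layout into the NCHW layout that ONNX models expect, keeping the raw 0-255 channel values. A Python sketch of that reshaping:

```python
def hwc_to_nchw(pixels):
    """Convert an HWC image (list of rows of (r, g, b) tuples) to NCHW.

    Returns a 1 x 3 x H x W nested list, matching the Java action's
    inputFeature array (raw 0-255 channel values, no normalization).
    """
    h, w = len(pixels), len(pixels[0])
    return [[[[float(pixels[i][j][c]) for j in range(w)] for i in range(h)]
             for c in range(3)]]

image = [[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (255, 255, 255)]]  # a tiny 2x2 test image
nchw = hwc_to_nchw(image)
print(nchw[0][0][0][0], nchw[0][2][1][0])  # red of top-left, blue of bottom-left
```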

2. The object returned from step 1 becomes the input for the ML model; after processing, the model returns an output object.

3. The model output encodes an image, but to make it meaningful we need a post-process that converts the applied styling features back into pixels and creates a new image with the defined styling.

// BEGIN USER CODE
// Decode the model output back into a 1x3x224x224 array
final float[][][][] augmentedImageArray = new float[1][3][224][224];
final InputStream is = MLKit.fromBase64(image.getOutput1());
MLKit.toArray(is, augmentedImageArray);

// Clamp every channel value to the valid [0, 255] range
for (int i = 0; i < 1; i++) {
    for (int j = 0; j < 3; j++) {
        for (int m = 0; m < 224; m++) {
            for (int n = 0; n < 224; n++) {
                augmentedImageArray[i][j][m][n] = Math.min(Math.max(0, augmentedImageArray[i][j][m][n]), 255);
            }
        }
    }
}

// Rebuild the image from the RGB channels
final BufferedImage bufferedImage = new BufferedImage(224, 224, BufferedImage.TYPE_INT_RGB);
for (int i = 0; i < 224; i++) {
    for (int j = 0; j < 224; j++) {
        final int red = (int) augmentedImageArray[0][0][i][j];
        final int green = (int) augmentedImageArray[0][1][i][j];
        final int blue = (int) augmentedImageArray[0][2][i][j];
        final Color color = new Color(red, green, blue);
        bufferedImage.setRGB(j, i, color.getRGB());
    }
}

// Write the image as JPEG and store it in a new AugmentedImage object
final ByteArrayOutputStream bos = new ByteArrayOutputStream();
ImageIO.write(bufferedImage, "jpeg", bos);
final byte[] binaryImage = bos.toByteArray();
final ByteArrayInputStream bis = new ByteArrayInputStream(binaryImage);
final IMendixObject mxObject = Core.instantiate(getContext(), "StyleTransfer.AugmentedImage");
final styletransfer.proxies.AugmentedImage augmentedImage = styletransfer.proxies.AugmentedImage.initialize(getContext(), mxObject);
augmentedImage.setContents(getContext(), bis, binaryImage.length);
return augmentedImage.getMendixObject();
// END USER CODE
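The key correction in this post-process is clamping each channel to [0, 255] before casting to int, since style-transfer outputs routinely overshoot the valid pixel range. The clamp, sketched in Python:

```python
def clamp_channel(value):
    """Clamp a float channel value to the valid [0, 255] pixel range,
    as the Java action does with Math.min/Math.max."""
    return min(max(0.0, value), 255.0)

print(clamp_channel(-12.3), clamp_channel(137.9), clamp_channel(301.4))
# -> 0.0 137.9 255.0
```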

4. Just change the name of the AugmentedImage.

5. Commit the object.

6. Then close the page.

Conclusion

I hope you enjoyed reading this! If you would like to explore these use cases, you can download a copy of my project on GitHub:

You can also download an example project created by Mendix:

Read more

From the Publisher -

Inspired by this article to bring your ideas to life with Mendix? Sign up for a free account! You’ll get instant access to the Mendix Academy, where you can start building your skills.

For more articles like this one, visit our Medium page. And you can find a wealth of instructional videos on our community YouTube page.

Speaking of our community, join us in our Slack community channel. We’d love to hear your ideas and insights!


QUANG NHAT TRAN
Mendix Community

Certified Mendix Expert MVP, Data Scientist, and Technical Practitioner @ TBN Software