<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[eMagiz - Medium]]></title>
        <description><![CDATA[eMagiz is the model-driven integration Platform as a Service (iPaaS) that integrates your business. Secure. Solid. Scalable. Manageable. Future-proof. - Medium]]></description>
        <link>https://medium.com/emagiz?source=rss----c1d7422f96d6---4</link>
        <image>
            <url>https://cdn-images-1.medium.com/proxy/1*TGH72Nnw24QL3iV9IOm4VA.png</url>
            <title>eMagiz - Medium</title>
            <link>https://medium.com/emagiz?source=rss----c1d7422f96d6---4</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sat, 16 May 2026 01:59:58 GMT</lastBuildDate>
        <atom:link href="https://medium.com/feed/emagiz" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Mendix Data Hub as a catalyst for integrating your business data]]></title>
            <link>https://medium.com/emagiz/mendix-data-hub-as-a-catalyst-for-integrating-your-business-data-e91760cb0afc?source=rss----c1d7422f96d6---4</link>
            <guid isPermaLink="false">https://medium.com/p/e91760cb0afc</guid>
            <category><![CDATA[datahub]]></category>
            <category><![CDATA[mendix-world]]></category>
            <category><![CDATA[mendix]]></category>
            <category><![CDATA[integration]]></category>
            <dc:creator><![CDATA[eMagiz]]></dc:creator>
            <pubDate>Tue, 18 May 2021 14:40:12 GMT</pubDate>
            <atom:updated>2021-05-18T14:40:12.194Z</atom:updated>
            <content:encoded><![CDATA[<p>Mendix has been a valued Technology Partner of eMagiz for some time now. In October, during Mendix World 2.0, they announced the Mendix Data Hub and launched their Data Hub Partner Program with eMagiz as one of the launching partners. During the session, <a href="https://www.linkedin.com/in/timkuijper/">Tim Kuijper</a> (Program Director @Mendix) explained more about the Mendix Data Hub, <a href="https://www.linkedin.com/in/bartbuschmann/">Bart Buschmann</a> (Commercial Manager @eMagiz) disclosed some of the advantages of integrating with the Mendix Data Hub, and <a href="https://www.linkedin.com/in/awillemsen/">Alexander Willemsen</a> (CTO @eMagiz) demonstrated an integration between Salesforce, Hubspot &amp; Mendix Data Hub. To watch their session from Mendix World, <a href="https://mxworld2020.mendix.com/session/the-value-of-data-hub-and-ipaas-with-emagiz/">you can follow this link</a>.</p><p>Since October, Mendix has further developed their Data Hub, and in this blog we want to discuss some of its technical details. I’m <a href="https://www.linkedin.com/in/sametkaya/">Samet Kaya</a>, Software Delivery Manager @eMagiz and Mendix MVP, and I will tell you a little more about the latest developments of the Data Hub.</p><h3>Share your data effortlessly with the Data Hub</h3><p>Since the <a href="https://www.mendix.com/blog/data-hub-the-low-code-approach-to-data-integration/">announcement</a> of the Mendix Data Hub, Mendix has made it a lot easier to share data between your Mendix apps. Although the first version only supports reading data, this is already a major step towards much easier data sharing in your Mendix landscape than we are used to. That said, the benefits of the current feature set only apply to integration between Mendix apps. Integrating with other external systems is only quick &amp; easy if there is OData support.</p><p>Before we dive into the details, it’s good to mention that <a href="https://www.mansystems.com/blog/mendix-datahub-integrate-at-full-speed?gclid=Cj0KCQjwutaCBhDfARIsAJHWnHu8wP48wzM6AfbwZ0ul6JOnNZEP1r9VmovWGbPG5NR-OwK-toYOs8oaAsCPEALw_wcB">other Mendix</a> partners have also <a href="https://www.timeseries.com/unboxing-the-mendix-data-hub/">published</a> articles on the Mendix Data Hub. There is the official <a href="https://docs.mendix.com/data-hub/data-hub-catalog/register#1-introduction">Mendix Data Hub documentation</a>, and Mendix also developed a <a href="https://academy.mendix.com/link/path/111/Share-Data-Between-Apps-Using-the-Data-Hub-Catalog">learning path</a> to give you a quick start on learning how to work with the Data Hub. Mendix is of course not alone in this area: there are more Data Hubs on the market, and <a href="https://www.linkedin.com/pulse/innovation-insight-turbocharge-your-api-platform-digital-pezzini/?trackingId=634lRtJERt%2BbT%2FKgSEguCw%3D%3D">Gartner</a> named ‘Digital Integration Hubs’ the next big thing. If you want to know a little more about that, I can recommend <a href="https://www.mckinsey.com/business-functions/mckinsey-digital/our-insights/how-to-build-a-data-architecture-to-drive-innovation-today-and-tomorrow">this article</a>.</p><p>Now, let’s focus on the Mendix Data Hub. The Mendix Data Hub distinguishes itself from others by tailoring the user experience specifically for Mendix developers.
There is native integration with Studio Pro, Mendix automatically detects published services within your Mendix applications, and it gives you a nice overview in the <a href="https://docs.mendix.com/data-hub/data-hub-catalog/">Data Catalog</a>. Especially when you have many Mendix applications in your organization, the Data Catalog can help your DevOps team find the right data for their business needs. Since an extra commercial license is required, a thorough business case is needed.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*tjNs2-1BpuGAnUE0BEAKSQ.jpeg" /></figure><h3>The two main limitations</h3><p>Obviously, Mendix has a long roadmap for the Data Hub. With the current product features there are two main limitations:</p><ol><li>You can only read data; writing data back is not supported yet. At least not the easy way: you need to fall back on traditional integration methods within Mendix.</li><li>Only OData is supported for external systems, making it hard to govern all your non-Mendix apps and systems. Apart from Mendix, Siemens (Teamcenter and MindSphere) and SAP products that already support OData, you can only easily integrate with an external system if it supports OData.</li></ol><p>While the Mendix community is waiting for the next set of product features to make it a more complete product, there are a couple of use cases where the Data Hub really makes a difference in terms of development speed &amp; governance.</p><h3>Searching and using data on the fly</h3><p>In a microservices architecture with mainly Mendix apps, organizations can really benefit from the ease of use of the Data Hub. It’s common practice to retrieve data on demand when you need it and use it in your functional process. For instance, you want to use business data, like address or customer information, which is managed in another app that is the single source. You could call a REST service or use your middleware layer to fetch this data, but you would still need to build a couple of REST services in Studio Pro. Everyone who develops in Mendix knows how easy it is to do this, but it will still take some time. You will need the help of other people and have some dependencies, and of course: with every integration there is always a catch.</p><p>This is where the magic of the purple entities comes in; these are called ‘<a href="https://docs.mendix.com/refguide/external-entities">External Entities</a>’. When Catalog assets are used from the Data Hub pane in Studio Pro, dragging and dropping is literally enough to add data from other apps to your own app. A consume action is automatically generated, and little authentication configuration is needed. You can directly start building pages and microflows on these magic purple entities. No need to worry about paging, sorting and retrieving the data. In a couple of minutes you have a working integration, and in this example you can easily use the address or customer information in your Mendix app, without the need to replicate the data or build a specific REST service.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*XAdpjMGIr9A7DmDY.gif" /></figure><h3>Getting an overview of connected apps in a landscape</h3><p>Another helpful feature is that Mendix automatically detects and administers which Mendix applications are using which Catalog assets. This is presented in a nice graphical feature called the Data Hub Landscape.
From the <a href="https://docs.mendix.com/data-hub/data-hub-landscape/">Mendix documentation</a>: “<em>The Data Hub Landscape presents a graphical view of the registered OData services in your Data Hub. It provides a landscape visualization of items registered in the Data Hub Catalog and their relationships with apps that consume the datasets that they connect to.”</em></p><p>So it gives an overview, together with some nice visuals. With a graph-like presentation layer, Mendix made it look appealing and helpful. An automatically generated overview of your integration landscape is common for integration and middleware products, but in Mendix landscapes with many apps this was not possible until now.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/427/1*Uotcx2TD_s3yx2RTAWJjnA.jpeg" /></figure><p>In real-life practice with hundreds of services, keeping a clear overview will still be difficult and messy, but the landscape view is a great help to see which versions of which services are used where. Knowing this as a Mendix developer operating in a large ecosystem is already very valuable, because you will get informed on:</p><ol><li>Relationships &amp; dependencies between apps</li><li>Interconnection between datasets</li><li>Multiple versions of datasets used in apps</li><li>The context of the data</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*mqggvJGnda2_9JrbwmyBDg.jpeg" /></figure><h3>Connecting the outside world</h3><p>What if you need to integrate with systems and applications with no OData support? Mendix works with partners providing additional services on top of the Mendix Data Hub. The <a href="https://docs.mendix.com/partners/">strategic partners</a> of Mendix, Siemens and SAP, both have services tightly integrated with the Data Hub: Siemens with the MindSphere platform, bringing the business intelligence of IoT assets to the Data Hub; SAP with out-of-the-box OData services for a wide variety of SAP solutions. Next to the <a href="https://docs.mendix.com/partners/">strategic partners</a>, there are also technology partners like iPaaS platforms bringing their capabilities to the Data Hub.</p><p>eMagiz, as a <a href="https://www.emagiz.com/en/news-en/press-release-emagiz-as-launching-partner-in-the-mendix-data-hub-partner-program/">launching partner</a>, delivers extra features fueling the Mendix Data Hub capabilities. Registering services and integrations from the eMagiz iPaaS is easy with the ‘publish to Datahub’ feature in your eMagiz catalog. In the Data Hub Catalog, eMagiz assets are usable even when the source systems don’t support OData. eMagiz handles the transformation to other protocols, for example SOAP and REST, and even exotic, less common protocols (such as IBM RPG functions on AS/400, TCP, etc.) are supported. Next to protocol transformation, eMagiz also handles data, text and semantics transformation. You are no longer forced to use the data structure of the source system.</p><p>eMagiz automatically parses OData queries, which are required to make Data Hub integrations work. This makes it possible to add numerous other types of applications and systems to the Data Hub that are not natively supported. As an extra, eMagiz also makes it possible to access systems that do not support polling mechanisms. Many applications and legacy systems are only able to push data; eMagiz makes it possible to bring this data into the Data Hub by supporting OData queries on top of it.</p>
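<p>To give a feel for what such an OData read boils down to on the wire, here is a minimal sketch that issues a plain OData query over HTTP with standard Java. It is a hypothetical example: the endpoint, entity set and fields are made up for illustration and are not an eMagiz or Mendix API.</p><pre>import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ODataReadExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical OData endpoint exposing a 'Customers' entity set;
        // $select, $filter and $top are standard OData query options.
        String url = "https://example.mendixcloud.com/odata/crm/v1/Customers"
                + "?$select=Name,City"
                + "&amp;$filter=City%20eq%20'Utrecht'"
                + "&amp;$top=10";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Accept", "application/json") // ask for the JSON representation
                .GET()
                .build();

        HttpResponse&lt;String&gt; response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON body: { "value": [ ... ] }
    }
}</pre>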
<p>If you are interested in a little more information about the Mendix Data Hub or about the extra features eMagiz provides, message me on <a href="https://www.linkedin.com/in/sametkaya/">LinkedIn</a>. I’ll be happy to tell you some more about it. Thank you for reading!</p><hr><p><a href="https://medium.com/emagiz/mendix-data-hub-as-a-catalyst-for-integrating-your-business-data-e91760cb0afc">Mendix Data Hub as a catalyst for integrating your business data</a> was originally published in <a href="https://medium.com/emagiz">eMagiz</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Unlocking the full potential of event-driven architecture: How to develop a stream processor with…]]></title>
            <link>https://medium.com/emagiz/unlocking-the-full-potential-of-event-driven-architecture-how-to-develop-a-stream-processor-with-d9b46a0a3447?source=rss----c1d7422f96d6---4</link>
            <guid isPermaLink="false">https://medium.com/p/d9b46a0a3447</guid>
            <category><![CDATA[kafka]]></category>
            <category><![CDATA[integration-platform]]></category>
            <category><![CDATA[stream-processing]]></category>
            <category><![CDATA[stream-processor]]></category>
            <dc:creator><![CDATA[eMagiz]]></dc:creator>
            <pubDate>Fri, 30 Apr 2021 09:52:13 GMT</pubDate>
            <atom:updated>2021-04-30T09:52:13.869Z</atom:updated>
            <content:encoded><![CDATA[<h3>Unlocking the full potential of event-driven architecture: How to develop a stream processor with eMagiz.</h3><p>Event-driven architecture is required to quickly and effectively react to real-time events in your organizational landscape. We’ve previously talked about <a href="https://www.emagiz.com/en/blogs-en/becoming-data-driven-start-at-the-foundation/">the benefits of event-driven architecture</a>, but also highlighted that several architectural components and investments may be required in order to unlock the benefits of this concept. Stream processing is one such component and a great tool to work with real-time events as they are produced. We’ve talked <a href="https://www.emagiz.com/en/blogs-en/get-more-value-from-your-data-streams-with-stream-processing/">about the concept of stream processing in a previous blog</a>. This technical blog will guide you through the basics of developing a stream processor and what to consider during development.</p><p>To develop a stream processor in eMagiz, a few key components are required. You start by building your foundation, the messaging infrastructure. Next, you define your stream processor logic, after which you require deployment infrastructure. Finally, you need to manage your stream processing application to enforce security and access throughout all your applications.</p><h3>Start at the foundation, your event streaming infrastructure</h3><p>Before you can start with stream processing, it is a key requirement to have an event broker that distributes events in real time. Such an event broker must be highly scalable, fault tolerant, and provide ‘exactly once’ delivery guarantees. You may never have heard of these three attributes, so let’s discuss them first.</p><h4>High scalability</h4><p>Being highly scalable means that we must support high data throughput without introducing delays, and also support horizontal scaling of producers and consumers of messages to prevent bottlenecks, for instance by distributing incoming events across instances of a certain consumer. Scalability in the broker is usually achieved by spinning up multiple broker instances and distributing incoming events in parallel to the subscribers.</p><h4>Fault tolerance</h4><p>This means that you want to be able to recover from failure, either in your infrastructure or within one of your consumers. On the infrastructure side, fault tolerance should ensure that if part of the hardware infrastructure of the event broker goes down, the event broker can continue to function without data loss and without a significant impact on performance. Lineage and replication are two key techniques used to achieve this. Additionally, when consumers go offline, the broker should ensure that data is not lost. Retention is a key technique that can be used here to ensure that consumers can temporarily go offline and then resume processing data where they left off.</p><h4>Delivery guarantees</h4><p>Delivery guarantees ensure that all incoming events are processed exactly once. Fault tolerance is one of the key factors to ensure ‘exactly once’ delivery. Many frameworks only support ‘at most once’ or ‘at least once’ delivery. However, for crucial business data, such as financial transactions, stricter processing semantics are essential.</p>
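<p>To make the ‘exactly once’ notion a bit more tangible, the sketch below configures the plain Apache Kafka Java client for idempotent, transactional writes. It is a minimal sketch under assumptions: the broker address, topic name and ids are illustrative and not part of any eMagiz API.</p><pre>import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ExactlyOnceProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // acks=all plus idempotence: producer retries cannot introduce duplicates in a partition.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        // A transactional id enables atomic writes across topics and partitions,
        // a building block for end-to-end 'exactly once' processing.
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "payments-producer-1");

        try (KafkaProducer&lt;String, String&gt; producer = new KafkaProducer&lt;&gt;(props)) {
            producer.initTransactions();
            producer.beginTransaction();
            producer.send(new ProducerRecord&lt;&gt;("transactions", "account-42", "{\"amount\": 100}"));
            producer.commitTransaction();
        }
    }
}</pre><p>Consumers that read with the ‘read_committed’ isolation level will then only see events from committed transactions.</p>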
<p>Kafka is the most popular event broker that supports all these demands: an open-source framework that can be used to create a distributed publish/subscribe-based messaging infrastructure for real-time communication. Kafka uses a concept called partitioning to ensure clients can read and write data from many brokers at the same time. These partitions are replicated to ensure availability and fault tolerance.</p><p>Kafka is open source and free to deploy on your own hardware or a cloud environment of your choice. However, for enterprise applications, it can be beneficial to opt for Kafka as a service, such as eMagiz Event Streaming, to ensure uptime, stability and enterprise-grade management functionality.</p><h3>Next, define your stream processor</h3><p>Once you’ve finished setting up your streaming infrastructure, the next step is to define your stream processor. There is a large landscape of options for defining your stream processing applications. Again, we must consider non-functional attributes for our stream processor such as fault tolerance, scalability and delivery guarantees. Additionally, we must consider other aspects such as deployment models, batch use cases, and lookups.</p><p>Similar to your event streaming infrastructure, your processor must provide fault tolerance, scalability and delivery guarantees. This can be achieved through distributed instances of the processor that work together to achieve a shared goal. Some processors, such as Kafka Streams, rely heavily on the distribution capabilities of the event streaming infrastructure to achieve scalability, while others, such as Apache Flink, use internal distribution mechanisms. Using shared state stores, lineage and other fault recovery methods, stream processors achieve fault tolerance in a distributed manner as well.</p><h4>The deployment models</h4><p>Stream processors vary widely in their deployment models. Kafka Streams can be deployed standalone, on a per-instance basis. Multiple instances will automatically recognize each other and work together towards their shared goal. Typically, however, stream processing frameworks require a central coordinator, such as ZooKeeper, which manages the individual instances for processing. Apache Flink is an example of this. The suitability of the deployment model depends on the task at hand. For highly variable throughput rates, clustered solutions may be excellent due to their ability to automatically scale and communicate when needed. However, when throughput is stable and processing must happen at various locations in the infrastructure, standalone processors are more suitable.</p><p>The deployment model also impacts the type of workload a stream processor is able to handle. Standalone processors are more suitable for event-based workloads, such as filtering and aggregations, while clustered solutions are better suited to collaborative use cases such as enrichment, as well as time-triggered events and batch cases, as they do not depend on incoming events to trigger jobs. Furthermore, standalone processors are not at all suitable for lookups or iterative problems that require an analysis of the whole data set, as they only have access to individual events rather than the complete dataset.</p>
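<p>As an illustration of the standalone model, here is a minimal Kafka Streams sketch for an event-based workload: each event is filtered on its own, and extra copies of the same application started elsewhere automatically share the work. The topic names, threshold and broker address are assumptions for the example.</p><pre>import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class SensorFilterProcessor {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Instances that share this application id recognize each other and split the work.
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "sensor-filter");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream&lt;String, String&gt; readings = builder.stream("sensor-readings");
        // Event-based workload: every reading is judged on its own, no shared state needed.
        readings.filter((sensorId, value) -&gt; Double.parseDouble(value) &gt; 100.0)
                .to("sensor-alerts");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}</pre>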
<figure><img alt="" src="https://cdn-images-1.medium.com/max/768/0*w8p5lXqBVrPufMIL.png" /></figure><p>Overall, which type of processor to use depends on your individual use case, as well as your ability to manage individual instances or host a cluster of processing power. While standalone processors are easier to deploy, they are harder to maintain and scale once deployed. Clustered solutions have an overall higher barrier to get started. To lower this barrier, processing services are available that abstract from infrastructure-level deployment choices and provide management, deployment and scaling out of the box.</p><h3>Keep your engines running with management, monitoring and governance</h3><p>Once you have established your infrastructure and deployed your stream processing application, you have your stream processor running. But will it keep working? And what happens when your ecosystem grows: how do you keep it manageable? A crucial step that is sometimes forgotten in the lifecycle of stream processing applications is the management phase. We’ll discuss a few key things to consider.</p><h4>Testing &amp; migration</h4><p>Before deploying any new version, make sure to test your stream processor extensively, not only by testing the logic itself, but also by deploying it to a test environment so it can be tested with actual data. As stream processors consume live data and have little tolerance for downtime, testing can also help you to optimize the deployment process and ensure smooth migrations to newer versions with different business logic or different input requirements.</p><h4>Monitoring</h4><p>Depending on your choice of stream processing framework and deployment model, you will have different options for monitoring your applications. Make sure to set up an infrastructure for collecting errors, and trigger the right actions on them so that any failures can be detected immediately. All stream processing frameworks provide you with the option to tap into the exception stream, but by default they may simply ignore it, compromising the delivery guarantees. Similarly, monitoring the metrics stream helps to scale your application appropriately and ensure that throughput times can be maintained. External tools (e.g. Grafana) can help you turn metrics and exceptions into easy-to-use dashboards with alerting, to assist you with monitoring your stream processors.</p><h4>Management</h4><p>Management is important in order to act when needed, and to enforce security and access throughout your applications. Especially as your number of stream processors grows, it is key to manage which stream processors have access to what data, and who is in charge of managing this data and processor logic. This not only includes access rights and security, but also responsibilities and roles for scaling and monitoring.</p><p>Monitoring and management are currently lacking in most major stream processing frameworks. Therefore, it is crucial to integrate them into your own stream processing solution, or into other monitoring tools in your application landscape.</p>
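<p>As a small illustration of the monitoring advice above, the sketch below registers an uncaught exception handler on a Kafka Streams application (the handler API shown is from recent Kafka Streams versions) and reads its built-in metrics. The stderr print is a stand-in for whatever alerting tool you use, and the topology is deliberately trivial; it is a sketch, not a complete monitoring setup.</p><pre>import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse;

public class MonitoredStreams {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "monitored-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic").to("output-topic"); // trivial pass-through topology

        KafkaStreams streams = new KafkaStreams(builder.build(), props);

        // Do not let processing errors disappear: route them to your own alerting.
        streams.setUncaughtExceptionHandler(exception -&gt; {
            System.err.println("ALERT: stream thread failed: " + exception); // stand-in for real alerting
            return StreamThreadExceptionResponse.REPLACE_THREAD; // or SHUTDOWN_CLIENT to fail fast
        });

        streams.start();

        // Built-in metrics (process rates, error rates, lag) can be exported to e.g. Grafana.
        streams.metrics().forEach((name, metric) -&gt;
                System.out.printf("%s = %s%n", name.name(), metric.metricValue()));
    }
}</pre>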
<p>A platform like eMagiz can help you to develop, monitor and manage your stream processing solutions out of the box, with a wide variety of options for deploying, securing, regulating and monitoring your environment, including alerting. eMagiz helps you focus on your business solution instead of the intricacies of event processing. Are you curious about the possibilities for your organization? Give us a call, we are happy to help!</p><hr><p><a href="https://medium.com/emagiz/unlocking-the-full-potential-of-event-driven-architecture-how-to-develop-a-stream-processor-with-d9b46a0a3447">Unlocking the full potential of event-driven architecture: How to develop a stream processor with…</a> was originally published in <a href="https://medium.com/emagiz">eMagiz</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Integrate fast and easy with the API gateway!]]></title>
            <link>https://medium.com/emagiz/integrate-fast-and-easy-with-the-api-gateway-c59312ac8be9?source=rss----c1d7422f96d6---4</link>
            <guid isPermaLink="false">https://medium.com/p/c59312ac8be9</guid>
            <dc:creator><![CDATA[eMagiz]]></dc:creator>
            <pubDate>Mon, 12 Apr 2021 13:54:27 GMT</pubDate>
            <atom:updated>2021-04-12T13:54:27.390Z</atom:updated>
            <content:encoded><![CDATA[<p>API management is aimed at a simple and secure way of managing how APIs (Application Programming Interfaces) are used internally and externally. API management takes a broad view of APIs: it sees them not only as a technology but also as a product that needs to be managed and that has a lifecycle.</p><p>There are always requirements for APIs. It has to be possible to find and access an API in an IT landscape; you have to be able to test with representative data; it has to be determined who has access to the data; and it must be possible to monitor the use of the API. The designed functionality gives the organization control over the data-driven services they offer to their customers. Through this, the organization can get insights into the behavior of customers and into the use of the offered services. The management of APIs is very important in determining the success of your digital applications and services.</p><p>As described above, APIs and API management consist of many processes and aspects. In this blog, we focus on the “integration” aspect of API management and discuss the transformation of data through an API gateway. Transformation of data is important because IT landscapes are becoming more fragmented. This leads to integration questions, where applications often don’t have the same data formats or definitions. Transformation possibilities are then required to integrate applications with each other.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*CZa5JBMFWmkkrtKt.jpg" /></figure><h3>Data transformation using a data model</h3><p>eMagiz offers an API gateway within its hybrid integration platform. Through the API gateway, it’s possible to easily set up OpenAPI Specification based endpoints and support integration scenarios. The eMagiz API gateway supports both ‘Passthrough’ and ‘Transformation’ type scenarios. Passthrough scenarios are situations in which the endpoints are directly coupled to one another and no further transformation is required. In a transformation type scenario, a transformation of data at the content, protocol or format level is needed in order to connect the endpoints.</p><p>To facilitate the transformations, eMagiz often uses a Canonical Data Model (CDM). The CDM is a central data model in which all applied entities and attributes come together. Using the CDM makes it easier to set up the transformations to multiple applications and to maintain integrations.</p><p>Within eMagiz an API integration is structured as follows: an API gateway can receive a request, which contains data in a certain data structure (payload), and then offer it, transformed or not, to other systems that expect a different data structure.</p><p>The integration takes care of the conversion of the structure and validates the content and structure so that it corresponds to the expectations of the receiving system. This process is repeated when the receiving system responds to the presented data. This response is transformed into the desired structure so that it can be received correctly and the integration is complete.</p><p>To be able to perform all kinds of transformations, the platform uses data-related integration components, with which transformation flows can be configured within the platform (<a href="https://emagiz.github.io/docs/referenceguide/#alt-textlimgreferenceguidetransformerpng-transformers">click here to see all transformation components</a>).
Common components that are used in a transformation are:</p><ul><li>JSON to XML — Transformation of a request or response body from JSON to XML.</li><li>XML to JSON — Transformation of a request or response body from XML to JSON.</li><li>String manipulation — Find and replace parts of a string in the request and response.</li><li>Mask URLs in the content — Rewrite links in the response body, so that these lead back to a similar link within the API Gateway.</li><li>Backend service — Adapt the backend service for each incoming request.</li><li>Set body — Set a consistent message body for both incoming and outgoing requests.</li><li>Set HTTP header — Add a value to an existing header or add a new header for both request and response.</li><li>Set query string parameter — Add, replace or delete a request query string parameter.</li><li>Rewrite URL — Convert a request URL to the structure expected by a specific web service.</li><li>Transformation of XML using XSLT — Use an XSLT transformation on both request and response body.</li></ul><h3>An example:</h3><p>To show the value and use of a transformation with the API gateway within your landscape, we outline a use case below.</p><p>The use case concerns obtaining data in JSON format from external users and forwarding this data to other internal systems within the landscape. These internal systems require a message in XML format.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*lMVTCtE9qRWfh8fu.jpg" /></figure><p>To integrate data using the API gateway, a system has to register an endpoint within the API gateway. In this case, the endpoint is a POST operation. This operation is set up RESTful, which means that the data is sent and received in JSON format.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/413/0*uUolaBv3sfuc3riU.jpg" /></figure><p>In the eMagiz low-code flow editor, you can see data flowing in through the POST endpoint (e.g. POST /orders). After the data is received, it is transformed into XML, after which it is converted into a new request structure by means of a mapping, so that the SOAP service can be called. The response is then given in XML, after which a second transformation is used to convert the XML back to JSON.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*zkXelRymuB5EFLLn.jpg" /></figure><p>So users send JSON and get a JSON response, while the integration landscape receives the JSON, converts it into the desired structure (XML in this case), validates the structure for the target system and passes it through to the target system. Using the mapping and corresponding transformation tool, it is possible to successfully deliver data to every system in your digital landscape.</p><p>The illustrated use case is also applicable to legacy systems such as AS/400, where it is possible to receive messages in JSON format and convert them into the desired XML structure. These messages are then presented to the AS/400 system using an applicable system connector; the system returns an XML message, and this XML structure is then converted back to JSON and the structure that the API user expects. Such transformations can be set up for different formats and protocols, including OData, gRPC, REST, and SOAP.</p><p>In short, the eMagiz iPaaS platform offers an extensive toolbox to shape transformations within your own digital landscape. The platform is able to transform data at various moments within your process and/or supply chain, both in batch and in real time. By using hybrid scenarios, in which an API gateway can be combined with event streaming and message bus functionality, it is always possible to access data and transform it into the correct format. This allows users to quickly and easily implement integrations and derive value from their data.</p>
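<p>To give a feel for the JSON-to-XML step in this use case, here is a minimal sketch using the open-source Jackson library. It illustrates the general technique only and is not eMagiz’s implementation; the payload and the ‘order’ root name are made up for the example.</p><pre>import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.xml.XmlMapper;

public class JsonXmlBridge {
    public static void main(String[] args) throws Exception {
        // JSON as it might arrive on the POST /orders endpoint (illustrative payload).
        String json = "{\"orderId\": 42, \"customer\": \"ACME\", \"lines\": [{\"sku\": \"A-1\", \"qty\": 2}]}";

        // Parse the JSON payload into a generic tree...
        JsonNode tree = new ObjectMapper().readTree(json);

        // ...and serialize the same tree as XML for the XML/SOAP back end.
        String xml = new XmlMapper().writer().withRootName("order").writeValueAsString(tree);
        System.out.println(xml); // &lt;order&gt;&lt;orderId&gt;42&lt;/orderId&gt;&lt;customer&gt;ACME&lt;/customer&gt;...

        // The reverse direction: read the XML response, write JSON back to the API user.
        JsonNode back = new XmlMapper().readTree(xml.getBytes());
        System.out.println(new ObjectMapper().writeValueAsString(back));
    }
}</pre>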
<p>Do you want to know how an API gateway can provide value to your integration landscape, or would you like to know more about hybrid solutions? Please contact us or send me a message on <a href="https://www.linkedin.com/in/leobekhuis/">LinkedIn</a>.</p><hr><p><a href="https://medium.com/emagiz/integrate-fast-and-easy-with-the-api-gateway-c59312ac8be9">Integrate fast and easy with the API gateway!</a> was originally published in <a href="https://medium.com/emagiz">eMagiz</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Get more value from your data streams with Stream Processing]]></title>
            <link>https://medium.com/emagiz/get-more-value-from-your-data-streams-with-stream-processing-d363f7e655c9?source=rss----c1d7422f96d6---4</link>
            <guid isPermaLink="false">https://medium.com/p/d363f7e655c9</guid>
            <dc:creator><![CDATA[eMagiz]]></dc:creator>
            <pubDate>Wed, 17 Feb 2021 10:41:33 GMT</pubDate>
            <atom:updated>2021-02-17T11:51:06.703Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*vbyRqoB-CkBWVpJF_958KQ.jpeg" /></figure><p>In this blog, we want to discuss one of the ways to process real-time data within eMagiz iPaaS, namely stream processing. Stream processing allows you to process large amounts of data within a very small timeframe after receiving it. We explain why stream processing can be beneficial for your organisation and we elaborate on some typical use cases.</p><h3>3 benefits of stream processing</h3><p>As we discussed in one of our <a href="https://www.emagiz.com/en/blogs-en/stream-processing-a-component-of-true-value/">previous blogs</a>, stream processing is a great tool to obtain valuable insights from data streams while that data is flowing. But when do you need to process data within a stream? Overall, we identify three key situations in which stream processing offers substantial benefits compared to other, non-real-time big data processing methods, such as batch processing.</p><ol><li>First, <strong>stream processing is great when you need to respond instantaneously to incoming data</strong>. Some information is more valuable when it is immediately derived from the data and loses its value over time. In a scenario where an abnormal event occurs, you want to take immediate action. Stream processing allows you to immediately react to events that occur, possibly <strong>minimizing potential losses or enhancing customer experience</strong>.</li><li>Second, <strong>stream processing is useful to process continuous data</strong> that is less suitable to be processed on a per-event or batch basis. As an example, batch processing is less suitable to detect the length of a user session based on click events on a website, as these events would be distributed across batches. Processing the data stream <strong>allows you to detect patterns in continuous data</strong> as they emerge (a sketch of this follows below). This primarily applies to time-series data, such as metrics, IoT data, and transaction logs.</li><li>Third, <strong>stream processing is useful for pre-processing data, as it supports efficient processing of large amounts of data using a limited set of resources.</strong> Batch processing requires building up a large amount of information and then processing all the information at once, requiring substantial computational resources for a short period of time. <strong>With stream processing, you only need a limited set of resources, as processing is a long-running continuous process.</strong> Furthermore, stream processing only processes data coming in and discards it afterward. It does not store the data. This is especially applicable to use cases where large amounts of data with low relevance are produced, as all raw data can be immediately discarded once useful, clean data is extracted from it.</li></ol><p>While there are some obvious benefits, there are also restrictions on using event stream processing. One of those restrictions is querying specific data, for instance looking up a specific value (such as finding customer data using a customer ID). Additionally, there are restrictions when there is a need to repeatedly iterate over a dataset, for instance to find missing data. A different example of this is in the field of machine learning. While stream processing can be used to apply machine learning models to streaming data, it is less suitable to train and develop machine learning models, as this requires access to a full dataset. In these instances, you can still benefit from stream processing to pre-process your data before transporting it to your data lake for further processing.</p>
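<p>Coming back to the user-session example from the list above: below is a minimal sketch of how a stream processor can derive session lengths from click events, written here with Kafka Streams session windows (recent Kafka Streams versions). The topic name, the five-minute inactivity gap and the broker address are assumptions for illustration.</p><pre>import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.SessionWindows;

public class SessionLength {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "session-length");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream&lt;String, String&gt; clicks = builder.stream("click-events"); // key = user id
        clicks.groupByKey()
              // A session closes after 5 minutes without clicks from the same user.
              .windowedBy(SessionWindows.ofInactivityGapWithNoGrace(Duration.ofMinutes(5)))
              .count()
              .toStream()
              // The window bounds give the session length; the count gives clicks per session.
              .foreach((window, clickCount) -&gt; System.out.printf(
                      "user=%s sessionMillis=%d clicks=%d%n",
                      window.key(),
                      window.window().end() - window.window().start(),
                      clickCount));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
    }
}</pre>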
<h3>Typical use cases for stream processing</h3><p>But how can stream processing actually provide value to your organisation? We established some use cases explaining how it can add value to your business in particular situations. We elaborate on how stream processing can enable you to make decisions in real time, how it can increase your data quality and how it may enhance your customer experience.</p><h4>Real-time decision making</h4><p>When processing big data sources (such as IoT data, metrics, or log data), it’s common to simply store all data in a data lake so that it can be used for analytics and data-driven decisions at some point in the future. However, this creates a gap between when the events happen and when a data-driven decision is made, decreasing the value of the decision. <strong>Stream processing can support real-time decision-making based on an incoming flow of data to instantly respond to events.</strong> A key application for real-time decision-making is security and monitoring: to <strong>detect hacking attempts, downtime, and other incidents that impact the stability of your IT systems as they happen.</strong> Quickly intercepting these events allows organisations to immediately take action to reduce their impact, for instance by shutting down certain systems or sending out an alert.</p><h4>Increase data quality</h4><p>Stream processing can be used to reduce stress on your data warehouse and lower the barrier for putting your data to work. Traditionally, working with big data sources results in a lot of raw data being stored in your data warehouse until, at some point in time, the data is cleaned, enriched, structured and stored in another place for use, such as machine learning or data-driven decision making. But why delay the processing up until this point? By using stream processing, it is possible to <strong>immediately pre-process the raw data even before it is stored.</strong> This way you only store high-quality data that is ready to be used for analysis when needed. <strong>This lowers the barrier for putting your data to use, as consumers can explore and use the data without the need to pre-process it first.</strong></p><h4>Enhance customer experience</h4><p>Finally, stream processing can support new applications based on live and continuous data. There is a wide range of opportunities for integrating live data into applications. For instance, stream processing can be used to discover trends in real time, such as trending stocks or frequently bought products. However, the applications are broader than just discovering trends: it can also be used to support flight tracking, package tracking, or real-time building of search indexes. Overall, <strong>stream processors can help you with ingesting massive amounts of raw data</strong> <strong>and processing this into usable high-level information </strong>which can be used to further <strong>enhance the experience of customers.</strong></p><p>In conclusion, stream processing is a powerful tool for a wide variety of use cases, from real-time decision making to enabling continuous ETL. eMagiz iPaaS can help you develop and manage the infrastructure required for stream processing, so you can focus on the business objectives.
Additionally, eMagiz can help you follow up on the insights obtained during stream processing by integrating with any back-end system in your application landscape. This allows your organization to turn your insights into actions.</p><p><em>By Mark de la Court, Software developer @eMagiz</em></p><p>For more blogs on integrations, event streaming, APIs and more, go to our website <a href="https://www.emagiz.com/en/news-en/">https://www.emagiz.com/en/news-en/</a></p><hr><p><a href="https://medium.com/emagiz/get-more-value-from-your-data-streams-with-stream-processing-d363f7e655c9">Get more value from your data streams with Stream Processing</a> was originally published in <a href="https://medium.com/emagiz">eMagiz</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Digital transformation in hybrid landscapes, part 2]]></title>
            <link>https://medium.com/emagiz/digital-transformation-in-hybrid-landscapes-part-2-254aaf14384f?source=rss----c1d7422f96d6---4</link>
            <guid isPermaLink="false">https://medium.com/p/254aaf14384f</guid>
            <dc:creator><![CDATA[eMagiz]]></dc:creator>
            <pubDate>Wed, 20 Jan 2021 15:05:00 GMT</pubDate>
            <atom:updated>2021-02-17T11:50:37.154Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*mY9Y5TE-uam8FtAs.jpg" /></figure><p>In the <a href="https://www.emagiz.com/en/blogs-en/digital-transformation-in-hybrid-it-landscapes/">previous blog</a> of this series, I addressed a number of challenges in business landscapes. Next to that, I briefly explained some changes in the IT industry. Let’s take a deep dive into what’s happening nowadays.</p><h3>Trends in the IT industry</h3><p>There are several transitions already taking place in the IT industry, mostly event-driven and parallel to each other. Nowadays it’s almost impossible to plan and execute your digital transformation in a predictable manner. On the contrary, it is a very time-consuming and disruptive process that can take many years. Preferably, you want to approach this process step by step: gradually changing parts of your landscape, systematically replacing legacy, and getting rid of technical debt. The right tools for the job and lots of flexibility in your organization are then imperative.</p><p>The IT industry came up with multiple frameworks to help you with your digital transformation. One of those frameworks is the <a href="https://www.ibm.com/blogs/cloud-computing/2019/07/16/what-is-hybrid-integration-platform/">hybrid integration platform</a>, the evolution of integration providers supporting hybrid solutions.</p><p>A hybrid integration platform is a platform supporting and combining multiple techniques to propose solutions for specific business and technical problems, challenges and use cases, all accessible within one platform instead of all kinds of separate, specific or standalone tools. A hybrid integration platform is a best-of-suite solution, giving you a unified overview of your landscape and supporting multiple roles accessing the full lifecycle.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/960/0*3KjzJlOf0KoruZPX.jpg" /></figure><p>Regarding users, you can think of people from the business, architects and developers, but also the service desk. A hybrid integration platform is not only the solution for developing integration solutions: it supports users while capturing, designing, testing, deploying, documenting and maintaining their integration solutions. It basically supports the full lifecycle.</p><p>When we compare this hybrid approach with the classic tools and frameworks used a decade ago, we see a lot of differences: tools that specialize in one or two use cases. These tools are often very good at one specific thing or technique, but are almost always only usable by diehard integration developers.</p><p>Several years ago <a href="https://www.gartner.com/smarterwithgartner/use-a-hybrid-integration-approach-to-empower-digital-transformation/"><strong>Gartner</strong></a> predicted that at least 65% of large organizations would have implemented a hybrid integration platform to power their digital transformation by 2022. In other words, Gartner predicted a transition to some form of hybrid integration for the majority of large companies. We thought this movement would not be limited to large companies only and would also occur in small and medium-sized companies.</p><p>By now, Gartner’s vision is no longer focused on the transition to a new sort of integration platform; it focuses more on the necessity of hybrid features in your integration platform. So hybrid enablement seems to be the next evolution in integration software, and it is actually already happening.
How do we respond to that with the eMagiz platform?</p><h3>Building towards a hybrid integration platform</h3><p>Of course, eMagiz too is firmly investing in its own digital transformation. Our integration platform is a <a href="https://www.emagiz.com/en/enterprise-ipaas-hip-low-code-english/">low-code enterprise iPaaS</a>, strongly evolving into a hybrid integration platform. Our platform supports a multidisciplinary approach with different techniques.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/466/0*UJp1FAzayk3R_3mi.jpg" /></figure><p>We do this by enabling self-service integration for our clients, but with the luxury of maintaining a helicopter view of their landscape. This results in a clear overview of their landscape and keeps them in control, regardless of the different integration patterns used. You don’t have to be an integration expert to use the eMagiz platform: we target various roles in our platform, giving them access to multiple integration patterns.</p><h3>Multi integration patterns</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/904/1*hhF8kgPWMeyzE0vHA16sSQ.png" /></figure><p>There are many use cases you could think of where you need the power of multiple integration patterns in a hybrid way. This makes it hard for organizations to combine multiple tools or platforms without losing ease of maintainability and governance. And how do you decide which integration pattern is the best tool for your challenge? Wouldn’t it be great to get some advice? In <a href="https://medium.com/swlh/evolution-of-microservices-1a7d5a5f8c06">this</a> excellent read about the evolution of microservice architecture, multiple patterns pass in review. With the growing popularity of <a href="https://www.infoq.com/articles/whats-the-next-step-for-data-management/">Apache Kafka</a>, <a href="https://blog.christianposta.com/microservices/do-i-need-an-api-gateway-if-i-have-a-service-mesh/">service meshes over API gateways</a> and <a href="https://www.mendix.com/blog/data-hub-the-low-code-approach-to-data-integration/">Data Hubs</a> popping up everywhere, making decisions on the right technology and tools is getting more and more complex. We believe every organization needs advice on selecting the right tool to solve each specific business challenge.</p><p>Looking to the future, we would like to support clients with their digital transformations. eMagiz as a hybrid platform is more and more becoming a data provider for our customers, regardless of which cloud platform and technology they use. eMagiz enables you to combine multiple integration patterns within a single platform, where every change is auditable, and gives you a clear overview of a manageable hybrid landscape.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/964/0*e5q7dKyf-4GEojvo.png" /></figure><p>We think this is needed to meet the requirements and the speed of business demands for solving the challenges of today and the future. An example of this is our <a href="https://www.emagiz.com/en/news-en/press-release-emagiz-as-launching-partner-in-the-mendix-data-hub-partner-program/">latest partnership</a> around the Mendix Data Hub, a new step towards easier data sharing within the <a href="https://www.mendix.com/data-hub/">Mendix ecosystem</a>.</p><p>Do you have any comments or experiences you would like to share?
Or do you want to know more about hybrid integration platforms, or how eMagiz can support your business? Let us know! You can find me (Samet) on <a href="https://www.linkedin.com/in/sametkaya/">LinkedIn</a> or you can <a href="https://www.emagiz.com/contact/">send us a message</a>. We’d love to hear from you.</p><p><em>By Samet Kaya, Software delivery manager @eMagiz</em></p><p>For more blogs on integrations, event streaming, APIs and more, go to our website <a href="https://www.emagiz.com/en/news-en/">https://www.emagiz.com/en/news-en/</a></p><hr><p><a href="https://medium.com/emagiz/digital-transformation-in-hybrid-landscapes-part-2-254aaf14384f">Digital transformation in hybrid landscapes, part 2</a> was originally published in <a href="https://medium.com/emagiz">eMagiz</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Why stream processing is a component of true value]]></title>
            <link>https://medium.com/emagiz/why-stream-processing-is-a-component-of-true-value-3237706fa1ba?source=rss----c1d7422f96d6---4</link>
            <guid isPermaLink="false">https://medium.com/p/3237706fa1ba</guid>
            <category><![CDATA[event-stream-processing]]></category>
            <category><![CDATA[integration-platform]]></category>
            <category><![CDATA[stream-processing]]></category>
            <dc:creator><![CDATA[eMagiz]]></dc:creator>
            <pubDate>Wed, 13 Jan 2021 10:08:25 GMT</pubDate>
            <atom:updated>2021-02-17T11:52:14.476Z</atom:updated>
            <content:encoded><![CDATA[<p>In this blog we want to shine a light on a valuable component within the integration pattern event streaming, namely stream processing. Stream processing means processing data while it is still streaming between different systems. We explain what the concept entails and why you would want to adopt it in your landscape. We elaborate on the type of architecture that fits well with stream processing and argue why it might be the tool for IoT environments. Want to know some more about the integration pattern event streaming first? Read all about it via <a href="https://www.emagiz.com/en/event-streaming-en/">this link</a>.</p><p>Stream processing is the act of processing a stream of data. As a method it is closely related to IoT, but it is also helpful in big data projects, particularly because of its <strong>ability to process large amounts of data, fast and continuously.</strong> It is used to immediately discover <strong>and process</strong> events (changes in circumstances) during the period that the data flows from A to B.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/476/0*yWaoDYOHv6ug1iIn.png" /></figure><p>In this short period of time, valuable new information can be immediately derived from your data streams through processing. For example, a notification can be sent when a certain storage limit of your warehouse is exceeded: the data streams come from a number of sensors and systems, and processing those streams creates a warning when the calculated limit value is reached. Compared to traditional processing, the event stream, including the data, is now active and can be continuously queried for new insights.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/605/0*JxGUolDnw8eZmB7Z.png" /></figure><p>As already indicated, the value of data is nowadays determined by insights derived from processing all kinds of data. Deriving information from data at the right time can be crucial for decision making. Lag between data generation and analysis often reduces the value of information.</p><p>By deploying stream processing within your organization you create the conditions to support ‘real-time decision making’ scenarios. Stream processing enables delivering insights faster, often within milliseconds after a certain event has triggered the system.</p><p>There are countless methods for data processing. Regardless of your use case, existing integration patterns can be used to get to the desired result. However, there are clear use cases where one might opt for a stream processing integration pattern instead of a batch processing pattern: for example, processing a continuous stream with an endless number of events. In such a flow, real-time data patterns must be recognized and the results of this process must be grouped, analyzed and processed immediately, for multiple data streams simultaneously.</p><h3>Event-driven architecture</h3><p>Stream processing can also be described as a type of event-driven architecture that is being used increasingly to meet the growing demand generated by an ever-expanding data-driven society. What is an event-driven architecture?</p><p>In an event-driven architecture you have a component that performs a certain action that’s important to other components.
One component (the producer) produces an event, a record of the event is stored, and another component (the consumer) consumes this event so that it can perform its own tasks as a result of (or influenced by) this event.</p><p>Separating consumers and producers gives an event-driven architecture the following benefits:</p><ul><li>Asynchronous traffic</li><li>Independent components</li><li>Easy scalability</li><li>No additional development for one-to-many integrations</li></ul><p><strong>The difference between stream processing and the message queue<br></strong>There are two variants of event-driven architecture: message queues and stream processing. Let’s briefly have a look at the differences between the two.</p><p>Within ‘traditional’ event-driven architectures, the producer places a message in a queue that is aimed at a specific consumer. That message is kept in the queue (mostly in a first-in, first-out sequence) until the consumer collects it, after which the message is deleted.</p><p>With stream processing, messages are not directed to a particular recipient, but are published on a specific topic and available to all consumers. All recipients that require access to the topic can subscribe and read the message. Because the message must be available to all consumers, it is not deleted when it’s read from the stream.</p>
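<p>To illustrate the topic-based model, the sketch below subscribes a consumer to a topic with the plain Apache Kafka Java client. It is a minimal sketch under assumptions (the topic, group id and broker address are made up); a second consumer group could read the very same records independently, since reading does not delete them.</p><pre>import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class WarehouseAlertConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Each group id keeps its own position in the topic; another application can
        // subscribe with a different group id and read the same messages again.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "warehouse-dashboard");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer&lt;String, String&gt; consumer = new KafkaConsumer&lt;&gt;(props)) {
            consumer.subscribe(List.of("warehouse-sensor-readings"));
            while (true) {
                // Reading does not delete: the broker retains the record for other subscribers.
                ConsumerRecords&lt;String, String&gt; records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord&lt;String, String&gt; record : records) {
                    System.out.printf("sensor=%s value=%s%n", record.key(), record.value());
                }
            }
        }
    }
}</pre>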
<h3>Stream processing, the tool for IoT</h3><p>Stream processing is the ideal architecture to <strong>effectively process and analyze high volumes of event-driven data messages.</strong> This is often especially applicable to IoT use cases, thanks to the use of time series data. Time series data can be described as a collection of observations, obtained by continuously performing measurements. If we were to plot this data in a graph, one axis would always contain time. This type of data is often the result of using sensors in a variety of operations such as traffic, industry and healthcare. It can also be the result of log data, for example transaction logs, activity logs and all kinds of other logs.</p><p>As promising as stream processing is, organizations must be aware that stream processing as a pattern or architecture isn’t always a panacea. There are plenty of situations and use cases in which other integration patterns will be more effective and/or efficient. An example where stream processing is not the right architecture is when the entire data set must be processed multiple times, or if the processing is done on the basis of random access. Furthermore, analyzing at the ‘edge’ of the infrastructure, for example edge machine learning, is an architecture that doesn’t fit well with stream processing.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*22Wmw1HprF90DlSZ.jpg" /></figure><p>Stream processing has become a preferred choice for many event-driven systems and patterns. It offers several advantages:</p><ul><li>Real-time decision making</li><li>Enrichment of ‘traditional’ BI with predictive BI</li><li>The ability to process and analyze high volumes of ‘raw’ data without the need to store it first.</li></ul><p>Generally speaking, it increases the flexibility of your data integration landscape enormously. With that, it’s an ingredient of true value for that landscape.</p><p><em>By Leo Bekhuis, Software engineer @eMagiz</em></p><p>For more blogs on integrations, event streaming, APIs and more, go to our website <a href="https://www.emagiz.com/en/news-en/">https://www.emagiz.com/en/news-en/</a></p><hr><p><a href="https://medium.com/emagiz/why-stream-processing-is-a-component-of-true-value-3237706fa1ba">Why stream processing is a component of true value</a> was originally published in <a href="https://medium.com/emagiz">eMagiz</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>