Successful migration of a logistics solution from a legacy on-premises system to the cloud.
To continue the series of experience reports at Magnet, this time we bring you the eKanban project, a digital platform that helps organizations manage their manufacturing process following the Kanban methodology. This project showcases another implementation of our HOP platform, in this case to automate the logistics system at Orkli.
eKanban is a scheduling system for lean and just-in-time manufacturing. The entire manufacturing process is monitored, from the supply of raw materials through their preparation and delivery, to stock availability on the production line and its consumption. The main goal is to monitor the process and provide information to both actors involved: the supplier and the consumer of every material in use. In other words, the system generates information and optimizes the stock needed at each stage of the process for optimal production.
The first version of eKanban was created as an internal project to fulfill the organization’s needs, using internal resources. Once it was implemented, other companies showed interest in it, and Orkli decided that although the solution worked and it was a great proof-of-concept, they needed help to improve it and make it commercially viable.
At Magnet we use this mantra often: make it work, make it right and make it fast.
In the case of the eKanban project, our mission was to take this proof-of-concept and make it right and fast.
We started by tackling some very specific requirements to improve a partially working in-house system, with the goal of evolving it into a market-ready professional solution.
The previous system was deployed on premises and, for several reasons, did not scale with business needs. We therefore decided to migrate it to a cloud infrastructure and chose AWS for its scalability, robustness, security, accessibility and ease of maintenance.
Our client did not have any previous experience with AWS, but they quickly saw the added value and they did not hesitate to embrace the cloud.
Modeling the domain knowledge
Once the infrastructure was set up, we immersed ourselves in the domain at hand, logistics, which is a vast and complex area.
The previous data model was not architected for scalability, and this presented a challenge. To improve communication and understanding with the client, we wrote a domain dictionary, and we decided to discard the previous data model and start fresh with a new, more efficient one.
Thanks to this, the data is organized more logically and the system performs with much better speed and agility. The new model also provides more flexibility and enables new features that the old model couldn't handle.
Leveraging the HOP platform
HOP accelerated the development of eKanban by providing many of the technical ingredients we needed: database, authentication, business intelligence, web server, continuous delivery automation, etc.
Some features were still missing, so we built additional modules, such as task scheduling and an object storage implementation for FTP servers.
We identified that quite a lot of functions had to be scheduled and fired at certain moments: not just internal logic, such as checking whether stock should be re-evaluated or alert emails sent, but also giving companies the flexibility to decide the frequency or periodicity of each task.
With this new module, our internal functionality is robust and covers what the system needs. The eKanban team can now offer their clients custom scheduling, fully automated and requiring no coding or technical skills.
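As a minimal sketch of the idea (the names and structure here are illustrative assumptions, not the actual module), the key point is that a task's periodicity is plain data that a non-technical user could edit from the UI, rather than code:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ScheduledTask:
    """A task whose periodicity is configuration, not code (hypothetical model)."""
    name: str
    interval_minutes: int  # frequency chosen by the customer in the UI
    last_run: datetime

    def is_due(self, now: datetime) -> bool:
        return now >= self.last_run + timedelta(minutes=self.interval_minutes)

def run_due_tasks(tasks, now, handlers):
    """Fire every task whose interval has elapsed and record the run time."""
    fired = []
    for task in tasks:
        if task.is_due(now):
            handlers[task.name]()  # e.g. re-evaluate stock, send alert emails
            task.last_run = now
            fired.append(task.name)
    return fired
```

Changing a task's frequency then means updating a stored `interval_minutes` value, which is exactly the kind of customization an end user can do without technical skills.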
Object Storage in FTP servers
Another requirement of the project was to enable integration with different ERPs, as some information needed by the eKanban logic is generated in these systems (order numbers) and, conversely, data processed in eKanban has to be persisted back in the ERP (stock).
HOP offers a REST API layer, and ideally this ERP integration would be done over HTTP. But alas, the real world is stubborn and some ERPs are not ready for it, so we had to explore alternatives. Our final decision was to exchange XML files via an FTP server. The resulting object storage module can fetch and store objects in either the AWS S3 service or an FTP server, behind a boundary that completely abstracts which of the two backends is used.
The connection to an FTP server is fully configurable, and reading from and writing to it are customizable. Once this communication channel was in place, our client implemented different tasks on top of it, providing users with services not present in the previous version of the application.
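A storage boundary like the one described might be sketched as follows. This is a simplified assumption of the design, not the actual module: callers depend only on the abstract interface and never know whether bytes end up in S3 or on an FTP server.

```python
import io
from abc import ABC, abstractmethod
from ftplib import FTP

class ObjectStorage(ABC):
    """Boundary: callers store and fetch objects without knowing the backend."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class FtpStorage(ObjectStorage):
    """Stores objects as files on a configurable FTP server."""
    def __init__(self, host: str, user: str, password: str):
        self._connect = lambda: FTP(host, user, password)  # connect lazily
    def put(self, key: str, data: bytes) -> None:
        with self._connect() as ftp:
            ftp.storbinary(f"STOR {key}", io.BytesIO(data))
    def get(self, key: str) -> bytes:
        buf = io.BytesIO()
        with self._connect() as ftp:
            ftp.retrbinary(f"RETR {key}", buf.write)
        return buf.getvalue()

class MemoryStorage(ObjectStorage):
    """In-memory stand-in, handy for tests; an S3 implementation would wrap
    the AWS SDK behind the same two methods."""
    def __init__(self):
        self._objects = {}
    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data
    def get(self, key: str) -> bytes:
        return self._objects[key]
```

With this shape, the XML exchange with an ERP is just `put`/`get` calls against whichever backend the configuration selects.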
We leverage Grafana as part of HOP when we need to deliver business intelligence functionality.
In this case, further development was done to secure and automate what each user is allowed to see and how. The same token used to access our backend is then used to fetch graphs over the data owned by the authenticated user.
Additionally, the eKanban team can access Grafana as a platform and use all its features for their own business intelligence purposes.
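The token-forwarding idea can be illustrated with a small sketch. The Grafana host, dashboard name and panel id below are made-up placeholders, and the render path is only one common way to fetch panel images; the point is simply that the backend token rides along as the `Authorization` header:

```python
from urllib.request import Request

# Assumption: placeholder host, not the real deployment.
GRAFANA_URL = "https://grafana.example.com"

def build_panel_request(token: str, dashboard: str, panel_id: int) -> Request:
    """Build a request for a rendered panel, reusing the backend's bearer token
    so the user only sees graphs over their own data."""
    url = f"{GRAFANA_URL}/render/d-solo/{dashboard}?panelId={panel_id}"
    return Request(url, headers={"Authorization": f"Bearer {token}"})
```

Because authorization travels with every request, no separate Grafana credentials have to be provisioned or kept in sync per user.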
The exposed REST API is used by two main actors:
- Mobile application developers: a whole collection of endpoints offers them all the services they need to provide a functional application.
- Hardware developers: embedded software fetches information and sends it to the platform.
These third parties have integrated the service securely, easily and transparently.
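For a flavor of what such an integration looks like from the embedded side, here is a hedged sketch; the endpoint path and payload fields are invented for illustration and are not the platform's real API:

```python
import json
from urllib.request import Request

# Assumption: placeholder base URL and endpoint, not the real API.
API_URL = "https://ekanban.example.com/api"

def build_stock_event(token: str, card_id: str, quantity: int) -> Request:
    """Build a POST a device could send when a kanban card is scanned
    (hypothetical endpoint and fields)."""
    payload = json.dumps({"card": card_id, "quantity": quantity}).encode()
    return Request(f"{API_URL}/events", data=payload, method="POST",
                   headers={"Authorization": f"Bearer {token}",
                            "Content-Type": "application/json"})
```

The same bearer-token scheme serves both actors, which is what makes the integration secure and transparent for third parties.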
All the user interfaces have been redesigned, as the previous front-end wasn’t scalable and didn’t fit the functional requirements.
Usability has been improved and user feedback has been very positive. The new interface is reactive and feature-rich, making the user experience much smoother.
It’s not easy to rebuild something from scratch, but sometimes you have to treat your first effort as a proof-of-concept and reuse the knowledge gained to build a more productive solution.
In this case, the legacy system proved the validity of the business value proposition, but it was not performant, and its data model, among other things, made it difficult to scale. Migrating from an on-premises model to a cloud-native model was a game changer as well.
Building the system from scratch was a risky decision but served us well. No looking back.