By Daemonite Kevin Barnes
It can feel like the machines are taking over. Millions of pounds are being spent on taking us humans out of the picture when it comes to warehousing. More and more clever robotic systems are being developed to help receive, put away, store, pick and retrieve goods. How long before driverless trucks? There are still a few areas where the machines can't do it all themselves, including:
- Designing the warehouse
- Integrating with existing systems
- Creating environments for development and testing
As an example, I was asked to look at the speed of messaging between a client's Warehouse Management System (WMS) and the Warehouse Control System (WCS) from the automation supplier. The requirement was for guaranteed, once-only and quick (sub two-second) relay of messages between WMS and WCS, and vice versa. The architecturally pure I.T. way of doing this would have involved centralised messaging systems with queues, dedicated bits of the network, and no certainty of even sub-minute delivery. Looking at it pragmatically, the WMS and WCS sat in the same place (a machine room, back in the days of those!) and used the same database technology. The answer: use a direct connection between the two systems, with no need to go off into the enterprise messaging of centralised IT services, and take advantage of the database's own built-in transaction safety nets. The result: sub-second messaging that was too fast to really measure. Pragmatic? Yes. Architecturally pure? Maybe.
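To make the idea concrete, here is a minimal sketch of a shared database table used as a once-only message channel, with the database's transactions providing the safety net. This is an illustration, not the client's actual schema: the table and function names are made up, and sqlite3 stands in for whatever shared database technology the WMS and WCS actually used.

```python
import sqlite3

# A shared database stands in for the one both WMS and WCS connected to.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE wms_to_wcs (
    id INTEGER PRIMARY KEY,
    payload TEXT NOT NULL,
    processed INTEGER NOT NULL DEFAULT 0)""")

def send(payload):
    # WMS side: the insert commits atomically, so a message either
    # exists for the WCS to pick up or it does not -- no half-states.
    with db:
        db.execute("INSERT INTO wms_to_wcs (payload) VALUES (?)", (payload,))

def receive():
    # WCS side: read and mark-processed in one transaction, giving
    # once-only delivery via the database's own safety nets.
    with db:
        row = db.execute(
            "SELECT id, payload FROM wms_to_wcs WHERE processed = 0 "
            "ORDER BY id LIMIT 1").fetchone()
        if row is None:
            return None
        db.execute("UPDATE wms_to_wcs SET processed = 1 WHERE id = ?",
                   (row[0],))
        return row[1]

send("putaway item X to location Y")
print(receive())  # -> putaway item X to location Y
print(receive())  # -> None (once-only: already consumed)
```

With both systems talking to the same database, "delivery" is just a committed row, which is why the latency was too fast to really measure.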
There is often a wider and more complex integration question involving merchandising and ordering systems, logistics planning and product catalogues. Companies where this vital network of capability has evolved over time may find themselves facing a lack of knowledge of legacy applications, and cross-department 'discussions'. This leads to the 'integration tax' on a project: the time, effort and pounds needed to add the new stuff into the maelstrom. If you find yourself dragged into a debate about, for instance, 'Master Data Management', see if your project can do its own thing rather than be dependent on someone else.
Environments and Testing
There is no excuse these days for not using automatic build-up and teardown of environments. It will pay off massively in the medium to long run and give you a consistent, stable and repeatable platform to work from. Configuration management tools such as Puppet or Ansible allow state to be defined so that a system can be built up and burnt down reliably and quickly. With cloud hosting, infrastructure can be defined as code using a tool such as Terraform, meaning an entire 'virtual data centre' can be built from code in minutes. The application can then be deployed on top using configuration management.
Once the environment is created, it is vital that automated testing is run to verify it. An automated smoke or regression pack will give stakeholders confidence that the environment is fit for use. Cucumber and Selenium are common frameworks for test automation.
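A smoke pack can be as simple as a list of named checks run straight after the environment build. The sketch below is a hypothetical shape for one in plain Python; the check names are made up, and in a real pack each check body would hit an actual service (via Selenium, an HTTP client, a database connection, and so on).

```python
# Each smoke check is a named callable returning True (pass) or False (fail).

def check_database():
    # Hypothetical: in practice, open a connection and run a trivial query.
    return True

def check_wcs_messaging():
    # Hypothetical: in practice, confirm the WMS->WCS channel is reachable.
    return True

def run_smoke_pack(checks):
    """Run every check and return a dict of check name -> pass/fail."""
    return {fn.__name__: bool(fn()) for fn in checks}

results = run_smoke_pack([check_database, check_wcs_messaging])
print(results)
assert all(results.values()), "environment is not fit for use"
```

Wiring this into the same pipeline that builds the environment means nobody gets handed a platform that has not already proven itself.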
But how do you deal with having to give each developer a WCS of their own? Something we've done in the past is to create 'test harnesses' that exhibit the working behaviour of the WCS and respond in the correct way to instructions. One thing to keep in mind is that you do know what is going to happen: the system is going to put item X away in location Y, passing particular sensors along its journey. Okay, the newer, slightly autonomous, non-rail floor robots complicate things, but the outcomes are still known.
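Because the journey is known, a harness can simply replay it. Here is a minimal sketch of that idea: the message shapes, sensor IDs and field names are all invented for illustration, not taken from any real WCS protocol.

```python
# A stand-in WCS: given a putaway instruction, reply with the deterministic
# sequence of sensor events and the final confirmation a real WCS would send.

def wcs_harness(instruction):
    """Simulate the known journey for a putaway instruction."""
    if instruction.get("type") != "PUTAWAY":
        return [{"type": "REJECT", "reason": "unknown instruction"}]
    # The item passes particular sensors along its journey -- IDs invented here.
    events = [{"type": "SENSOR", "id": sensor} for sensor in ("S1", "S2", "S3")]
    events.append({"type": "CONFIRM",
                   "item": instruction["item"],
                   "location": instruction["location"]})
    return events

for message in wcs_harness({"type": "PUTAWAY", "item": "X", "location": "Y"}):
    print(message)
```

Each developer can then run a harness like this locally, and the WMS code under test cannot tell the difference for the happy path; the real kit is only needed for the genuinely physical behaviours.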