This fact is unavoidable in computer science. Of course, the concept of stateless programming is useful and powerful to talk about, but it requires a deep understanding of why and how to implement it properly before it becomes useful. One commonly cited reference for stateless programming is the blog post Functional Programming For The Rest of Us, which includes the following:
It turns out that functional programs can keep state, except they don’t use variables to do it. They use functions instead. The state is kept in function parameters, on the stack.
And this is my point. Stateless design is about rethinking state and storing it in a more appropriate place, not actually getting rid of it. From an information-analysis point of view, stateless and stateful systems hold the same quantity of information; stateless ones simply tend to have less duplication and looser coupling.
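To make the quoted idea concrete, here is a minimal Python sketch (the function and names are mine, not from the quoted post) in which the running state lives entirely in function parameters on the call stack, with no mutated variables:

```python
def count_words(words, counts=None):
    """Tally word frequencies without mutating any shared state.

    The running tally lives in the `counts` parameter: each recursive
    call passes an updated copy, so the "state" exists only as
    arguments on the call stack.
    """
    if counts is None:
        counts = {}
    if not words:
        return counts
    head, *rest = words
    return count_words(rest, {**counts, head: counts.get(head, 0) + 1})

print(count_words(["a", "b", "a"]))  # {'a': 2, 'b': 1}
```

Note that the state hasn’t disappeared: it has moved from a variable into the chain of function arguments, which is exactly the point.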
A number of times, I’ve been told that some complex, slow, data processing system was “designed to be stateless”. This statement is often accompanied by “and we can’t change it, because it took us months to iron out all the failures”.
What this translates to is: the engineers didn’t want the hassle of setting up a database, so they ignored the halting problem and put all state in runtime memory, hoping that nothing would ever go wrong. These systems then go live and the usual problems start to happen, but the team remains in denial about the fact that the system has to know what’s going on, and that this can’t all live in memory. So a custom storage channel is invented (usually file metadata coupled with complex heuristics) to hold the information, while everyone keeps pretending there’s no state.
In this situation you end up relying on the implementer to design a custom database (yes, information inferred from file metadata is still a database, however hard it pretends not to be) to store their ‘stateless’ information. And this is actually a really hard thing to do correctly. Unless you design an ACID-compliant way to update the data, you’ve just gained a whole new set of problems to solve, which is why such systems tend to accumulate a large set of failure scenarios that have to be coded around.
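As a small illustration of what an ACID-style update demands even in the simplest file-based case, here is a hedged Python sketch (the function name and file layout are hypothetical) of atomically replacing a state file, so readers see either the old contents or the new contents, never a partial write:

```python
import json
import os
import tempfile

def atomic_write_json(path, payload):
    """Replace the state file at `path` atomically.

    Write to a temporary file in the same directory, fsync it, then
    rename over the target. os.replace is atomic on POSIX filesystems,
    so a crash mid-update leaves the old file intact.
    """
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dir_name, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(payload, f)
            f.flush()
            os.fsync(f.fileno())       # ensure bytes reach the disk
        os.replace(tmp_path, path)     # atomic rename over the old file
    except BaseException:
        os.unlink(tmp_path)            # clean up the orphaned temp file
        raise

atomic_write_json("job_state.json", {"job": 42, "status": "done"})
```

This only covers atomicity and durability for a single file; the moment two pieces of state must change together, you are reinventing transactions, which is the hard part the rant is about.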
The point of the above rant is this: if you think your system is stateless, or think you can design your next solution to be stateless, consider the following points:
- Any component of the solution can (and will!) fail at any time, and for a reason you didn’t anticipate.
- If a component fails, whatever it was doing will be left in limbo (your exception handlers will fail too at some point).
- If one part of a process depends on any other part of the process in any way, then this is state, and you probably can’t avoid it.
- What should happen if things fail? Can you afford to restart everything? Is your process idempotent? What if the job completed but failed during post-job tasks?
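One way to address the restart and idempotency questions above is to put the completion record inside the same transaction boundary as the work, so a crash-and-restart either finds the job recorded as done or redoes it cleanly. A sketch using SQLite (the schema and function names are illustrative, not from the text):

```python
import sqlite3

def process_once(conn, job_id, work):
    """Run `work` at most once per job_id.

    The completion marker is inserted in the same transaction that
    wraps the work: if `work` raises, the marker rolls back and a
    restart will retry the job; if the marker already exists, the
    job is skipped.
    """
    conn.execute("CREATE TABLE IF NOT EXISTS done (job_id TEXT PRIMARY KEY)")
    try:
        with conn:  # commits on success, rolls back if work() raises
            conn.execute("INSERT INTO done (job_id) VALUES (?)", (job_id,))
            work()
        return True
    except sqlite3.IntegrityError:
        return False  # already processed; skip

conn = sqlite3.connect(":memory:")
runs = []
process_once(conn, "job-1", lambda: runs.append(1))
process_once(conn, "job-1", lambda: runs.append(1))  # skipped
print(len(runs))  # 1
```

This still assumes the work itself is safe to retry after a crash between doing the work and committing the marker, which is exactly why the questions above have to be asked per system.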
These points lead to some common recommendations:
- Make sure that the indicator/flag/storage you depend on can be updated atomically (or as nearly atomically as possible). If you can’t make the update atomic, find a different way (it’ll make your life easier).
- If you’re storing an indicator, store it in the right place: don’t attach data related to one part of the system to a container used by a different part. This is bad design, and it leads to horrible coupling.
- Similarly, trying to infer state from other indicators is usually a bad idea. If something is waiting for a precondition to be met before acting, then unless that precondition is really simple and can be relied upon to toggle atomically, add a separate atomic flag that gets set at the right time.
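A separate atomic flag can be as simple as exclusive file creation: on POSIX systems, opening with O_CREAT|O_EXCL either creates the file or fails, as a single atomic operation, so exactly one of several concurrent processes can set the flag. A hypothetical sketch (the flag path and function name are mine):

```python
import os
import tempfile

def try_acquire_flag(flag_path):
    """Atomically set a 'claimed' flag.

    os.O_CREAT | os.O_EXCL makes create-if-absent one atomic
    filesystem operation: the call succeeds for exactly one caller
    and raises FileExistsError for everyone else.
    """
    try:
        fd = os.open(flag_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True   # we set the flag first; the work is ours
    except FileExistsError:
        return False  # someone else already set it

flag = os.path.join(tempfile.mkdtemp(), "job-123.claimed")
print(try_acquire_flag(flag))  # True: flag was absent, we created it
print(try_acquire_flag(flag))  # False: flag already set
```

Compare this with inferring “claimed” from, say, a file’s modification time: the explicit flag has one unambiguous atomic transition, while the inferred version invites exactly the heuristic-driven failure scenarios described above.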
Comments and discussions welcomed.
* If you know of some system that /is/ truly stateless, I’d be interested in knowing about it, but it probably doesn’t invalidate the point I’m trying to make here.