Joe Wood
Sep 1, 2018 · 1 min read

I think we’re talking about the same thing really.

  1. There’s a “command” event that says “add this item to the basket”. This command is logged as an event.
  2. There’s a stream processor that joins this event stream with the basket event streams, which is represented as a KTable and applies the action.
  3. If the basket size exceeds 3 items, the action is rejected and a rejection event is emitted to a separate “command rejections” event stream.
  4. Otherwise the basket stream is updated with a new event, and the new Basket state is an aggregation of the existing state and the added item (a Kafka Streams sketch of these steps follows below).
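
As an illustration, here’s a minimal Kafka Streams sketch of that flow. The topic names, the 3-item limit and the string-tagged values are all assumptions for the example, and basket state is reduced to an item count per basket id to keep it self-contained.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class BasketTopology {

    // Builds the flow from the list above. Topic names and the 3-item limit
    // are illustrative assumptions, not a definitive implementation.
    public static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();

        // 1. "Add this item" commands, logged as an event stream keyed by basket id.
        KStream<String, String> commands =
                builder.stream("add-item-commands", Consumed.with(Serdes.String(), Serdes.String()));

        // 2. The basket event stream, aggregated into a KTable of current basket sizes.
        KTable<String, Long> basketSize =
                builder.stream("basket-events", Consumed.with(Serdes.String(), Serdes.String()))
                       .groupByKey()
                       .count();

        // 3./4. Join each command with the size of the basket it targets and tag the result.
        KStream<String, String> validated = commands.leftJoin(
                basketSize,
                (item, size) -> (size == null || size < 3) ? "OK:" + item : "FULL:" + item);

        // Rejected commands become their own "command rejections" event stream...
        validated.filter((basketId, value) -> value.startsWith("FULL:"))
                 .to("command-rejections", Produced.with(Serdes.String(), Serdes.String()));

        // ...while accepted items are appended as new basket events, feeding the aggregate above.
        validated.filter((basketId, value) -> value.startsWith("OK:"))
                 .mapValues(value -> value.substring(3))
                 .to("basket-events", Produced.with(Serdes.String(), Serdes.String()));

        return builder.build();
    }
}
```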

If these events were propagated back to the client, all this processing could happen earlier, on the client side: adding an item could be blocked as soon as the “basket is full” state was received. In this model there are multiple event streams, each representing different intents, aggregations and validation logic.
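
A hypothetical client-side guard might look like the sketch below, assuming the basket-size state has already been propagated to the client (for example by consuming the basket event stream). The class, topic name and 3-item limit are all illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Hypothetical client: basket sizes arrive from the basket event stream and are
// cached locally, so a full basket is rejected before any command is sent.
public class BasketClient {

    private final Map<String, Long> basketSize = new ConcurrentHashMap<>();
    private final Producer<String, String> producer;

    public BasketClient(Producer<String, String> producer) {
        this.producer = producer;
    }

    // Called by the basket event consumer whenever updated basket state arrives.
    public void onBasketState(String basketId, long size) {
        basketSize.put(basketId, size);
    }

    // Block the "add item" command locally when the basket is already full.
    public boolean tryAddItem(String basketId, String item) {
        if (basketSize.getOrDefault(basketId, 0L) >= 3) {
            return false; // "basket is full": no command is ever emitted
        }
        producer.send(new ProducerRecord<>("add-item-commands", basketId, item));
        return true;
    }
}
```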

I think there may be some confusion around the Kafka Streams processing model. With a stream processor you can source the state of your record from another stream, and that state is materialized for you as a KTable. You never deal directly with a key/value store; you are only ever joining streams. An event processor in Kafka Streams shouldn’t source state from outside the Kafka model, otherwise the stream processor function would have side effects.
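
A minimal sketch of that point in the Streams DSL, with illustrative topic and store names: the state comes from a topic and is materialized as a KTable that Kafka Streams owns and restores, and the “lookup” is just a join, so the processor function stays a pure function with no external side effects.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;

public class StateFromStreams {

    public static void define(StreamsBuilder builder) {
        // State is sourced from another stream/topic and materialized as a KTable;
        // the backing key/value store is managed entirely by Kafka Streams.
        KTable<String, String> basketState = builder.table(
                "basket-state",
                Consumed.with(Serdes.String(), Serdes.String()),
                Materialized.as("basket-state-store"));

        // The processor never reaches outside the Kafka model: the "lookup" is a
        // stream-table join, and the joiner has no external side effects.
        KStream<String, String> validated = builder
                .stream("add-item-commands", Consumed.with(Serdes.String(), Serdes.String()))
                .join(basketState, (command, state) ->
                        state.contains("FULL") ? "rejected" : "accepted");
    }
}
```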
