Yes, in general, as this is very similar technology applied in real time.
In all fairness, most big data use cases can be solved on a single machine as well. For analytics, most use cases are done in Python and R, and a small subset moves to Spark when they become big enough. I think it is a similar case here as well.
Thanks for the great post; I agree with most of it.
IMO what will help most is anomaly detection on data (using prebuilt models) that generates alerts linked to a UI; when the user clicks the link and arrives at the UI, it shows the anomalous case in context, and the user can explore/edit the view (For example, this kind of UIs are…
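To make the idea concrete, here is a minimal sketch, assuming scikit-learn's IsolationForest as the "prebuilt model"; the dashboard URL is a hypothetical placeholder:

```python
# Minimal sketch: detect anomalies with a prebuilt model and emit alerts
# that deep-link into a UI. The URL scheme is a made-up assumption.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
readings = rng.normal(loc=50.0, scale=5.0, size=(500, 1))
readings[100] = [95.0]  # inject an obvious anomaly

model = IsolationForest(contamination=0.01, random_state=42)
labels = model.fit_predict(readings)  # -1 marks anomalies

for idx in np.where(labels == -1)[0]:
    # Each alert carries a link so the user lands on the anomaly in context.
    print(f"ALERT: anomalous reading {readings[idx][0]:.1f} at index {idx} -> "
          f"https://dashboard.example.com/anomaly/{idx}")
```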
I think the differentiation is clear: when you are reluctant to give a single organization or a few people full control over a critical system, blockchain applies. I believe what is not clear is "How much centralization or decentralization are people willing to live with?" and "How much additional cost is one willing to pay?" It is not a technical choice but a…
If too many events come at a given time
Thanks @fjanon. The way I see it, demand for computing is critical and grows exponentially, while the number of programmers stays almost constant or grows at a slow linear rate. Hence the exponential growth in demand dominates.
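As a purely illustrative back-of-the-envelope (all growth rates here are made-up assumptions, not data), here is how quickly that gap widens:

```python
# Illustrative only: demand compounds, supply adds a small constant amount.
demand, supply = 1.0, 1.0
for year in range(21):
    if year % 5 == 0:
        print(f"year {year:2d}: demand/supply = {demand / supply:8.1f}")
    demand *= 1.4    # assumed 40%/year compounding demand
    supply += 0.05   # assumed small fixed yearly addition to supply
```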
Agreed on "resistance to change", but I felt it affects all emerging technologies about the same…
When I tried the above, I did not make the data stationary; I believe the ML algorithm can handle that.
I did not try dynamic windows. What kind of scenarios would you use them for? (IMO we should have some explanation/idea of why it would work before trying it.)
The simple answer is that the window size is the amount of data the model needs to look back over to make a prediction. For example, if the data has seasonal behavior, the window has to be at least as big as the season's period. You have to find it via trial and error; as the size increases, accuracy should improve and at some point stop improving. A rough sketch of such a sweep is below.
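Here is a minimal sketch of that trial-and-error search, assuming numpy and scikit-learn; the synthetic seasonal series and the model choice are purely illustrative:

```python
# Sweep window sizes on a synthetic seasonal series (period = 50) and
# compare prediction error; error should plateau once the window covers
# a full season.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
t = np.arange(1000)
series = np.sin(2 * np.pi * t / 50) + 0.1 * rng.standard_normal(1000)

def make_windows(data, window):
    # Each row holds `window` past values; the target is the next value.
    X = np.array([data[i:i + window] for i in range(len(data) - window)])
    y = data[window:]
    return X, y

split = 800  # train on the first 800 points, test on the rest
for window in (5, 25, 50, 100):
    X, y = make_windows(series, window)
    X_train, y_train = X[:split - window], y[:split - window]
    X_test, y_test = X[split - window:], y[split - window:]
    model = LinearRegression().fit(X_train, y_train)
    err = mean_absolute_error(y_test, model.predict(X_test))
    print(f"window={window:3d}  MAE={err:.4f}")
```

On a series with a period of 50, you would expect the error to stop improving once the window spans a full period, which matches the trial-and-error intuition above.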