Hi Ashis, thank you for reading it! :-)
“As presently all of the tasks need manual labeling of training data, and thus on-the-fly preparation of data to train models is still an open question.”
You’re totally right: if you’re working on streaming data sources and you need manual labeling, things get nasty! But please note that this is not always the case: in many scenarios you can get the labels along with the data (for example, predicting clicks on web pages).
If that is not the case, you need to find a surrogate supervised signal. For example, on streaming perception data you can almost always use temporal coherence as a surrogate supervised signal, under the assumption that predictions cannot change too much over short time periods.
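Just to make the idea concrete, here is a minimal sketch of what a temporal coherence penalty could look like: it simply measures how much the model’s predictions change between two consecutive frames (the function name and the NumPy-based setup are my own illustration, not a specific implementation from the paper):

```python
import numpy as np

def temporal_coherence_loss(pred_t, pred_prev):
    """Penalize predictions that differ too much between consecutive time steps.

    pred_t, pred_prev: prediction vectors (e.g. class probabilities) for
    frame t and frame t-1. Returns the mean squared difference, which acts
    as a label-free (surrogate) supervised signal.
    """
    pred_t = np.asarray(pred_t, dtype=float)
    pred_prev = np.asarray(pred_prev, dtype=float)
    return float(np.mean((pred_t - pred_prev) ** 2))

# Hypothetical predictions on two consecutive frames of a stream:
prev_frame = np.array([0.9, 0.1])
curr_frame = np.array([0.8, 0.2])
loss = temporal_coherence_loss(curr_frame, prev_frame)  # small change, small loss
```

Minimizing a term like this pushes the model to be consistent over time, without requiring any human-provided labels for those frames.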
In this way you can regulate the temporal resolution at which you provide the labels (note that in  we end up not providing any labels at all after the model has already reached fairly good performance)!
Otherwise, there are other ways to provide supervision more easily: https://www.youtube.com/watch?v=HdmDYIL48H4
I would say that it really depends on the task, but most of the time you can find a smart way to provide as few labels as possible!