This I see recommended often but still don’t understand.
I think about it this way: before committing (at least on master, though I do it on all branches) you build and test your code. (Right?) So whenever someone checks out a specific commit, they expect the code to compile and pass the test suite.
Now suppose you rebase your quick-fixes branch onto the latest master. If you're like me and tested your code before committing, then all the commits on that quick-fixes branch immediately become untested and unverified, since each of them now includes changes that were never part of it when you tested. Say "irrelevant changes" all you want; if you didn't test it, you can't be sure. That means the whole history of the quick-fixes branch becomes a lie: those commits were built and tested when they were created, but not anymore, because the rebase rewrote them.
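A minimal throwaway-repo sketch of what the rebase does to those commits (branch and file names are just for illustration):

```shell
#!/bin/sh
set -e
# Throwaway repo; force the branch name to master for the example.
git init -q demo && cd demo
git checkout -q -b master
git config user.email "you@example.com"
git config user.name "Example"

echo base > app.txt
git add app.txt && git commit -qm "initial"

# A commit on quick-fixes; this is the snapshot that got tested.
git checkout -q -b quick-fixes
echo fix >> app.txt
git commit -qam "quick fix 1"
before=$(git rev-parse HEAD)

# Meanwhile master moves on.
git checkout -q master
echo other > lib.txt
git add lib.txt && git commit -qm "unrelated change on master"

# Rebase quick-fixes onto the new tip of master.
git checkout -q quick-fixes
git rebase -q master

# The commit id changed: the tested snapshot no longer exists on the
# branch. Its replacement also contains lib.txt, which was never
# built or tested together with the fix.
after=$(git rev-parse HEAD)
[ "$before" != "$after" ] && echo rebased-commit-is-new
```

The point is that a rebase does not "move" commits; it creates brand-new ones whose trees were never the thing you tested.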
Even if you don't care about passing the test suite, or even about having a successful build, on feature branches, at step 3 you're making those untested, not-even-built commits part of the linear history of master… Is this really the graph we want?
See, I would totally understand doing this on the quick-fixes branch itself. Branch out from quick-fixes for every specific-quick-fix, so you can work on them in parallel or even with multiple people. Then clean up your specific-quick-fix history with "rebase -i origin/quick-fixes" (optionally squashing into a single commit), optionally build and test, then fast-forward merge specific-quick-fix into quick-fixes and delete specific-quick-fix. But always do a true merge when merging into master, because you don't want any lying SOBs in the history of your master branch.
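Spelled out as commands, the workflow looks roughly like this. It's a sketch in a throwaway repo (so the rebase is against the local quick-fixes rather than origin/quick-fixes, and a no-op editor keeps the interactive rebase scriptable); branch names, file names, and the build/test step are placeholders:

```shell
#!/bin/sh
set -e
# Throwaway repo standing in for a real one.
git init -q demo2 && cd demo2
git checkout -q -b master
git config user.email "you@example.com"
git config user.name "Example"
echo base > app.txt && git add app.txt && git commit -qm "initial"

# Long-lived quick-fixes branch.
git checkout -q -b quick-fixes

# Branch out per fix so several can proceed in parallel.
git checkout -q -b specific-quick-fix
echo fix1 >> app.txt && git commit -qam "wip: fix part 1"
echo fix2 >> app.txt && git commit -qam "wip: fix part 2"

# Clean the branch history against quick-fixes; -i is where you could
# squash everything into a single commit. (In a real repo this would
# be origin/quick-fixes.)
GIT_SEQUENCE_EDITOR=: git rebase -q -i quick-fixes
# ...build and test here...

# Fast-forward merge back, then drop the topic branch.
git checkout -q quick-fixes
git merge -q --ff-only specific-quick-fix
git branch -q -d specific-quick-fix

# A true merge (a real merge commit) when going into master.
git checkout -q master
git merge -q --no-ff -m "merge quick-fixes" quick-fixes
```

With --ff-only the quick-fixes branch stays linear, while --no-ff guarantees master records an honest merge commit instead of pretending the fixes were made on master.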
So, what am I missing here?