When we use A/B testing to introduce new features, we are usually looking for data signals that tell us whether there is enough user intent.
What could happen in the search case (without autocomplete), for example, is that a good percentage of users — let's say 8% — would focus the mouse in the field, but only 1% would actually hit the "search" button. In that case, I'd say there's a clear signal of high intent to use the search feature; however, users are not able to finish the action because we don't yet have an autocomplete system.
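To make that signal concrete, the funnel above can be sketched as a quick calculation. The session count below is made up; only the 8% and 1% figures come from the example:

```python
# Hypothetical funnel for the search-without-autocomplete variant.
# Assumed: 10,000 total sessions; the 8% / 1% rates match the example above.
sessions = 10_000
focused = int(sessions * 0.08)   # sessions that focused the search field
searched = int(sessions * 0.01)  # sessions that actually hit "search"

focus_rate = focused / sessions       # the intent signal
completion_rate = searched / focused  # how many finish once they start
drop_off = 1 - completion_rate        # share abandoning mid-funnel

print(f"focus rate:      {focus_rate:.1%}")       # 8.0%
print(f"completion rate: {completion_rate:.1%}")  # 12.5%
print(f"drop-off:        {drop_off:.1%}")         # 87.5%
```

A focus rate of 8% with an 87.5% drop-off is exactly the shape you'd look for: high intent, low completion.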
That's a great moment to design a follow-up A/B experiment trying search + autocomplete, since we already have a data point proving that demand for the search box exists.
It's just a matter of making the changes in baby steps and validating each small hypothesis, so we avoid taking bigger risks.
Does that make sense to you?