My tasks for phase 2 were mostly centered around improving parfive.
Python asyncio turned out to be trickier than I expected. This is surprising because I have had my share of JS and Dart, both of which have more or less similar async paradigms.
Anyway, after learning asyncio, I implemented downloading by parts for parfive. You can see the pull request here.
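The core idea of downloading by parts can be sketched like this. This is a minimal illustration, not parfive's actual code: the function names are hypothetical, and a local bytes buffer stands in for the remote file where a real implementation would issue HTTP Range requests.

```python
import asyncio

async def fetch_part(source: bytes, start: int, end: int):
    """Stand-in for an HTTP Range request; returns the part with its offset."""
    await asyncio.sleep(0)  # yield control, as a real network read would
    return start, source[start:end]

async def download(source: bytes, num_parts: int = 4) -> bytes:
    size = len(source)
    step = -(-size // num_parts)  # ceiling division: bytes per part
    # Fetch all parts concurrently.
    tasks = [fetch_part(source, i, min(i + step, size))
             for i in range(0, size, step)]
    parts = await asyncio.gather(*tasks)
    # Reassemble each part at its own byte offset.
    out = bytearray(size)
    for start, chunk in parts:
        out[start:start + len(chunk)] = chunk
    return bytes(out)

data = bytes(range(256)) * 40  # 10 KiB stand-in for a remote file
result = asyncio.run(download(data, num_parts=12))
assert result == data
```

Because each part carries its own offset, the parts can arrive in any order and the file still reassembles correctly.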
The improvement in download speed also surprised me. I had expected minimal to no increase for small files (up to 10 MB), since opening a new request takes time, so there is a sweet spot between the number of connections and download speed. If the file is small, downloading the whole file in one request might therefore be better. Or so I thought.
With my internet connection, downloading a 10 MB file with a single connection took about 10 s. With 12 concurrent connections, the time went down to about 5 s.
Downloading a 100 MB file took 36 s with a single connection, while 12 concurrent connections managed the same in about 21 s. That is 15 s of your life you can waste on something else.
I did mess up along the way while implementing it: I used a wrong value when offsetting the write to the file, which corrupted the downloaded file. The issue took some time to track down; speaking with Yash Sharma on Riot helped me figure out what the problem was.
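To illustrate why the offset matters, here is a small sketch (hypothetical code, not the actual parfive fix) of reassembling parts into a file. Each part must be written at its own byte offset; seeking to any other value scrambles the result, which is exactly the kind of corruption I was seeing.

```python
import io

def assemble(parts) -> bytes:
    """Write each (offset, data) part at its byte offset in a buffer."""
    buf = io.BytesIO()
    for offset, data in parts:
        buf.seek(offset)  # the bug: seeking to a wrong value here corrupts the file
        buf.write(data)
    return buf.getvalue()

source = b"abcdefghij" * 100  # 1000-byte stand-in for a downloaded file
parts = [(i, source[i:i + 250]) for i in range(0, len(source), 250)]

# Parts may finish downloading in any order; the offsets keep things intact.
assert assemble(reversed(parts)) == source
```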
The next task is implementing resume support for parfive. Maybe a CLI interface after that(?).
I also rebased and beautified the remote data manager PR in the meantime.