Implementing your own Android MediaDataSource

Mark Ian Jackson
6 min read · Nov 2, 2015


Android 6.0 (API level 23) changes

You can see the list of API differences for the Android 6.0 Marshmallow release here. More importantly, we are going to focus on a new addition to the android.media package: MediaDataSource.

MediaPlayer

A simple implementation for the Android MediaPlayer could be set up in the following manner, to play a video:
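
A rough sketch of that kind of Activity (assuming a SurfaceView with the id "surface", which we add to the layout below, and a placeholder video URL) might look like this:

import android.app.Activity;
import android.media.MediaPlayer;
import android.os.Bundle;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

public class MainActivity extends Activity implements SurfaceHolder.Callback {

    // Placeholder URL; any HTTP-reachable MP4 will do.
    private static final String VIDEO_URL =
            "http://download.blender.org/peach/bigbuckbunny_movies/BigBuckBunny_320x180.mp4";

    private MediaPlayer mp;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        mp = new MediaPlayer();
        SurfaceView surfaceView = (SurfaceView) findViewById(R.id.surface);
        // Wait for the surface to exist before giving the player somewhere to draw.
        surfaceView.getHolder().addCallback(this);
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        try {
            mp.setDisplay(holder);
            // The data source is just a URL string; MediaPlayer handles the networking.
            mp.setDataSource(VIDEO_URL);
            mp.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
                @Override
                public void onPrepared(MediaPlayer mediaPlayer) {
                    mediaPlayer.start();
                }
            });
            mp.prepareAsync();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {}

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {}
}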

As you can see, setDataSource() typically takes a String or FileDescriptor describing the URI of a video file. It supports a number of protocols, such as RTSP and HTTP. The MediaPlayer abstracts away the networking, buffering, and so on, but you can't hand it video bytes that already live in memory. If you have the video in the heap, you have to write it out to internal storage and point the MediaPlayer at that file.
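
For example, that pre-API-23 workaround could look roughly like this (videoBytes and mediaPlayer are hypothetical variables, this assumes we're inside an Activity so getCacheDir() is available, and it needs the java.io.File, FileOutputStream, and IOException imports):

try {
    // Dump the in-memory bytes to a file in the app's cache directory.
    File tempFile = new File(getCacheDir(), "temp_video.mp4");
    FileOutputStream fos = new FileOutputStream(tempFile);
    fos.write(videoBytes);
    fos.close();
    // Point the MediaPlayer at the file path instead of the raw bytes.
    mediaPlayer.setDataSource(tempFile.getAbsolutePath());
} catch (IOException e) {
    e.printStackTrace();
}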

MediaDataSource

Now, with the release of API 23, Android allows you to create a class that derives from MediaDataSource, provide the actual bytes of the video or audio yourself, and handle whatever networking you need on your own. Here is what the abstract class consists of:

public abstract int readAt(long position, byte[] buffer, int offset, int size) throws IOException;
public abstract long getSize() throws IOException;

The reference for MediaDataSource provides more of a description of each method and what it needs to do.

Let's Create a Custom MediaDataSource

This tutorial will show the basics of extending MediaDataSource and give a general outline of what the networking could look like for retrieving a media source ourselves. The code is available here, if you'd like to clone the project and test it out for yourself. Or you can follow the tutorial in your own project.

We are going to use Android Studio and create a new project called "MediaDataSourceExample."

Make sure you have a device or emulator running Android 6 (API 23).

Make sure the project's minimum SDK is set to API 23.

We are going to use a blank activity, which gives us a basic template to work with.

Testing out the MediaPlayer

Copy and paste the MediaPlayer setup code from above into your MainActivity, and then we are going to add a couple of things that weren't mentioned before.

In your AndroidManifest.xml add a permission for internet use:

<manifest ...>
    <uses-permission android:name="android.permission.INTERNET" />
    <application>
        ...
    </application>
</manifest>

And add a SurfaceView to your activity_main.xml:

<RelativeLayout ...>
    <SurfaceView
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:id="@+id/surface"/>
</RelativeLayout>

Now run the project, and you should get the video playing.

Woohoo!

We are going to get the same result, but this time by doing the networking ourselves with a MediaDataSource.

Creating our own MediaDataSource

Create a new Java file called "VideoDataSource", extend MediaDataSource, and add the needed imports. Android Studio will prompt you to generate stubs for three methods:

public int readAt(long position, byte[] buffer, int offset, int size)
public long getSize()
public void close()

Once you add those empty methods, add an empty constructor:

public VideoDataSource(){}
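
Put together, the empty skeleton could look something like this (the stub bodies get filled in over the rest of the tutorial):

import android.media.MediaDataSource;

import java.io.IOException;

public class VideoDataSource extends MediaDataSource {

    public VideoDataSource() {}

    @Override
    public int readAt(long position, byte[] buffer, int offset, int size) throws IOException {
        return 0; // filled in below
    }

    @Override
    public long getSize() throws IOException {
        return 0; // filled in below
    }

    @Override
    public void close() throws IOException {
        // nothing to clean up yet
    }
}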

Downloading the Video

We need to download the video into a byte array, and we need to do this on a separate thread. Our options include an AsyncTask, a plain Java thread, or a service. For the scope of this tutorial, we are going to define a Java thread.

I would guess the proper way to download this video would be to use a service, but that depends on whether your video is a fixed size or is being streamed.

Let's define a Runnable that will contain the code that downloads the video and updates the byte array with the video's contents.
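
A sketch of that Runnable could look like the following (VIDEO_URL, videoBuffer, and listener are fields the class is assumed to have; the exact names are up to you):

private Runnable downloadVideoRunnable = new Runnable() {
    @Override
    public void run() {
        try {
            // Open a connection to the video URL and start reading bytes.
            java.io.InputStream inputStream =
                    new java.net.URL(VIDEO_URL).openConnection().getInputStream();
            java.io.ByteArrayOutputStream outputStream = new java.io.ByteArrayOutputStream();

            byte[] chunk = new byte[4096];
            int bytesRead;
            // While there is data to read, copy it into the output stream.
            while ((bytesRead = inputStream.read(chunk)) != -1) {
                outputStream.write(chunk, 0, bytesRead);
            }
            inputStream.close();

            // Copy the downloaded contents into the byte array backing our data source.
            videoBuffer = outputStream.toByteArray();
            outputStream.close();

            // Tell the listener the video is ready to play.
            listener.onVideoDownloaded();
        } catch (Exception e) {
            listener.onVideoDownloadError(e);
        }
    }
};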

I'll break down what's happening in a few bullet points; this code is a basic example of how to use an InputStream and a ByteArrayOutputStream.

  • We take the URL and open an InputStream to read the video data.
  • While there is data to read, we read from the input stream and write the bytes to the byte output stream.
  • We close the streams and copy the byte output stream's contents into a byte array (which is the video).
  • We call our listener's success method to signal that the video has been downloaded.
  • Exception handling catches any error that occurs along the way.

This implementation doesn't include buffering; the video is downloaded fully before it begins to play. A real implementation would need to buffer the video for faster playback. This video is very short, but a longer video, say 10 minutes long, or one of higher quality, will take much longer.

Now let's add a method called "downloadVideo" that initiates the whole download sequence.

public void downloadVideo(VideoDownloadListener videoDownloadListener) {
    if (isDownloading)
        return;
    listener = videoDownloadListener;
    Thread downloadThread = new Thread(downloadVideoRunnable);
    downloadThread.start();
    isDownloading = true;
}

This method assigns the VideoDownloadListener and starts the thread to begin downloading the video.
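
For these snippets to compile, the class also needs a few fields that haven't been shown explicitly; a minimal set (using the names these snippets assume) would be:

// Placeholder URL; point this at whatever clip you want to test with.
private static final String VIDEO_URL =
        "http://download.blender.org/peach/bigbuckbunny_movies/BigBuckBunny_320x180.mp4";

private byte[] videoBuffer = new byte[0];
private VideoDownloadListener listener;
private boolean isDownloading = false;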

Listener Interface

The listener methods are part of an interface we define, which the caller implements to be notified when the video has been downloaded (or has failed). Create an interface called "VideoDownloadListener" and use these method definitions:

public interface VideoDownloadListener {
    public void onVideoDownloaded();
    public void onVideoDownloadError(Exception e);
}

MediaDataSource Methods

Our implementation of the MediaDataSource methods just returns the requested information from our video byte array. Nothing fancy: we check bounds and copy the requested number of bytes into the buffer parameter. Here is what our implementation looks like:

@Override
public synchronized int readAt(long position, byte[] buffer, int offset, int size) throws IOException {
    synchronized (videoBuffer) {
        int length = videoBuffer.length;
        if (position >= length) {
            return -1; // -1 indicates EOF
        }
        if (position + size > length) {
            size -= (position + size) - length;
        }
        System.arraycopy(videoBuffer, (int) position, buffer, offset, size);
        return size;
    }
}

@Override
public synchronized long getSize() throws IOException {
    synchronized (videoBuffer) {
        return videoBuffer.length;
    }
}

The only information on what these methods should contain came from the Android documentation reference and this GitHub repo, which uses MediaDataSource in a test case for the MediaPlayer.

Our final VideoDataSource.java implementation will look like this:
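
Roughly assembled from the pieces above, a sketch of the complete class could look like this (again, the URL is a placeholder for whatever clip you want to download):

import android.media.MediaDataSource;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;

public class VideoDataSource extends MediaDataSource {

    // Placeholder URL; swap in the video you want to download.
    private static final String VIDEO_URL =
            "http://download.blender.org/peach/bigbuckbunny_movies/BigBuckBunny_320x180.mp4";

    private byte[] videoBuffer = new byte[0];
    private VideoDownloadListener listener;
    private boolean isDownloading = false;

    public VideoDataSource() {}

    public void downloadVideo(VideoDownloadListener videoDownloadListener) {
        if (isDownloading)
            return;
        listener = videoDownloadListener;
        Thread downloadThread = new Thread(downloadVideoRunnable);
        downloadThread.start();
        isDownloading = true;
    }

    private final Runnable downloadVideoRunnable = new Runnable() {
        @Override
        public void run() {
            try {
                // Download the whole video into memory.
                InputStream inputStream = new URL(VIDEO_URL).openConnection().getInputStream();
                ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
                byte[] chunk = new byte[4096];
                int bytesRead;
                while ((bytesRead = inputStream.read(chunk)) != -1) {
                    outputStream.write(chunk, 0, bytesRead);
                }
                inputStream.close();
                videoBuffer = outputStream.toByteArray();
                outputStream.close();
                listener.onVideoDownloaded();
            } catch (Exception e) {
                listener.onVideoDownloadError(e);
            }
        }
    };

    @Override
    public synchronized int readAt(long position, byte[] buffer, int offset, int size) throws IOException {
        synchronized (videoBuffer) {
            int length = videoBuffer.length;
            if (position >= length) {
                return -1; // -1 indicates EOF
            }
            if (position + size > length) {
                size -= (position + size) - length;
            }
            System.arraycopy(videoBuffer, (int) position, buffer, offset, size);
            return size;
        }
    }

    @Override
    public synchronized long getSize() throws IOException {
        synchronized (videoBuffer) {
            return videoBuffer.length;
        }
    }

    @Override
    public void close() throws IOException {
        // No streams are held open once the download thread finishes, so nothing to release here.
    }
}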

Using VideoDataSource in the MainActivity

All our code for the data source will be held in the “surfaceCreated” method, right above where we set the data source previously.

// surfaceCreated
VideoDataSource dataSource = new VideoDataSource();
dataSource.downloadVideo(new VideoDownloadListener() {
    @Override
    public void onVideoDownloaded() {
        mp.prepareAsync();
    }

    @Override
    public void onVideoDownloadError(Exception e) {
        Log.d("MainActivity", e.toString());
    }
});
mp.setDataSource(dataSource);

It's that easy! Now we should have a working solution that downloads the video, notifies us when it's finished, and plays the video. Let's give it a run.

We got video!

Just a reminder: this video is very short, about 10 seconds. Loading longer or higher-quality videos will need some buffering mechanism to start playback before the video is fully downloaded.

Try replacing the video URL with this Big Buck Bunny video:

http://download.blender.org/peach/bigbuckbunny_movies/BigBuckBunny_320x180.mp4

It will take some time for the video to download: this video is about 10 minutes long, and we are downloading the entire thing before playing it. As mentioned before, this tutorial demonstrates the basic function of MediaDataSource; a real solution would incorporate more efficient networking and buffering of the video.

Conclusion

I created this tutorial because I came across this new addition to the MediaPlayer while working on a side project, and since it's so new, I couldn't find any other tutorials or helpful examples to get me going. The only helpful resource other than the Android documentation was this repo, which used it in a test case.

I have uploaded the code to this repo on GitHub.

I will continue to make blog posts/tutorials on iOS/Android subjects I come across that took me some time and Googling to figure out. I will post links to new posts I make on my Twitter.

Hope this helps somebody, and thanks for reading! Please let me know if I missed anything or any suggestions you have to improve the tutorial.

Edit: I swapped out Big Buck Bunny for a shorter video to make the demo faster; I've kept the original video link above in case anyone wants to try it out.
