How to publish an external video to Twitter — Node.js version

Amey Patil (ameykpatil)
9 min read · Oct 15, 2018

How does image or video publishing generally work with Twitter API?

Publishing an image or video on Twitter involves more than a single step. First, you upload the media to Twitter using the Upload API. This returns a media id for the uploaded media. In the second step, you call Twitter's POST statuses/update API to publish a new tweet, passing the media id so that Twitter attaches the uploaded media to the tweet.
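The tweet-creation call of the second step can be sketched with the same request-style options objects used throughout this post. buildTweetOptions is a hypothetical helper name, and the credentials and media id are placeholders:

```javascript
// Builds the options for POST statuses/update, the tweet-creation endpoint.
// oAuthCredentials and mediaId come from the upload step described above.
const buildTweetOptions = function (oAuthCredentials, mediaId, text) {
  return {
    url: 'https://api.twitter.com/1.1/statuses/update.json',
    oauth: oAuthCredentials,
    form: {
      status: text,
      // a comma-separated list when attaching multiple media
      'media_ids': mediaId
    }
  }
}

// Usage (with a promisified request.post, as shown later in this post):
// yield requestPost(buildTweetOptions(oAuthCredentials, mediaId, 'Hello!'))
```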

There are two ways to upload media to Twitter

Direct Upload : Media can be uploaded using Twitter's POST media/upload API. This is a simple upload endpoint that does not provide all the features: it supports only certain media types, which do not include GIFs and videos, so in practice you can only upload images with it. Even for images there is a size restriction of around 3MB. This is not officially documented, but many developers have hit this limit while using the API. The response returns a media_id (or media_id_string), which can then be passed to the POST statuses/update API mentioned above.

Chunked Upload : The alternative to direct upload is the chunked upload endpoint. The media is divided into multiple chunks, and each chunk is uploaded separately, one after another. After all the chunks have been uploaded, the media_id (or media_id_string) can be passed to POST statuses/update.

The following API commands, used in combination, make a chunked upload of media successful.

INIT — The INIT command request is used to initiate a file upload session. It returns a media_id which should be used in all subsequent requests.

APPEND — The APPEND command is used to upload a chunk of the media file. A file can be split into a number of chunks and uploaded using APPEND command requests. A separate APPEND command needs to be called for each chunk.

FINALIZE — The FINALIZE command should be called after the entire media file is uploaded using APPEND commands.

STATUS — If the response of the FINALIZE command contains a processing_info field, it may also be necessary to call the STATUS command and wait for it to report success. The STATUS command is used to periodically poll for updates on the media processing operation. Once the STATUS response returns a state of succeeded, you can move on to the next step of tweet creation.

Why is the chunked upload better?

The chunked upload API supports static images, animated GIFs, and videos. It provides better reliability, and it is the way Twitter recommends uploading media. It also allows pausing and resuming of file uploads, which means you can let users of your app pause and resume a media upload, which is essential on slow connections. Twitter has also stated clearly that new features will only be supported on the chunked upload endpoints.

There are not enough examples showing efficient Chunked Upload

There isn’t a library that gives an easy way to upload media in chunks, and there are not enough examples available to start implementing chunked upload on your own. The few examples that do exist all assume the media resides on your local machine.
The examples I found were -
github.com/desmondmorris/node-twitter — shows how to upload media in chunks if it resides at a local path.
lorenstewart.me/twitter-api-uploading-videos-using-node-js — makes use of the Twit library, but again only shows how local media can be uploaded.

Obviously, you can download the file from an external URL, S3, or any other place to your local machine, apply the above methods to upload it in chunks, and then clean up the downloaded media from the local machine.
But wouldn't it be better if we could just stream the media from the external path to Twitter in chunks? That way we would not have to worry about cleanup, and we would not have to download the entire file before starting the chunked upload. Chunked upload becomes much more efficient if we can pull this off.

So how can we implement this with Node.js?

Overview of Chunked Upload to Twitter

(Please note, the code below uses the yield-generator paradigm in most of its implementation; I am yet to switch the original implementation to async-await. So you will find a lot of code in yield-generator style along with a few references to promises and callbacks. We make use of the bluebird library to switch between the different styles.)
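Since the snippets below lean heavily on bluebird's Promise.coroutine, here is a minimal hand-rolled equivalent, just to illustrate how the yield-generator paradigm works. The names coroutine and addLater are illustrative, not part of the implementation, and this sketch omits error propagation (gen.throw) that bluebird handles:

```javascript
// A stripped-down illustration of what Promise.coroutine does: pump the
// generator, resolve each yielded promise, and feed the result back in
// via next() until the generator returns.
const coroutine = function (generatorFn) {
  return function (...args) {
    const gen = generatorFn(...args)
    const step = (value) => {
      const { value: yielded, done } = gen.next(value)
      return done ? Promise.resolve(yielded) : Promise.resolve(yielded).then(step)
    }
    return step()
  }
}

// Usage: yield a promise, get its resolved value back.
const addLater = coroutine(function* (a, b) {
  const x = yield Promise.resolve(a)
  const y = yield Promise.resolve(b)
  return x + y
})
```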

Let’s first define our basic objects required to be passed to the functions.

oAuthCredentials contains the Twitter credentials required for accessing the Twitter APIs.

const oAuthCredentials = {
  'consumer_key': 'TFtxxxxxxxxxxDtR4',
  'consumer_secret': 'gjWH7xxxxxxxxxxxxxlQz',
  'token': '41xxxxx72-8Qaxxxxxxxxxxxx6aN',
  'token_secret': 'bt8xxxxxxxxxxxxx7h9v'
}

videoObj contains the URL of the video as well as other metadata such as height, width, duration, etc.

const videoObj = {
  originalUrl: 'https://tw-media-poc.s3.amazonaws.com/742ee.mp4',
  videoMeta: {
    height: 123,
    width: 234,
    size: 1685018,
    duration: 120,
    mimeType: 'video/mp4'
  }
}

Now let’s write a function for each command of the Twitter API involved in chunked upload.

Init — The INIT command requires details of the video such as the mimeType (e.g. video/mp4) & the size in bytes. It returns a mediaId which is to be used in subsequent commands.

const _initMediaUpload = function* (oAuthCredentials, videoObj) {
  const options = {
    url: 'https://upload.twitter.com/1.1/media/upload.json',
    oauth: oAuthCredentials,
    formData: {
      command: 'INIT',
      'media_type': videoObj.videoMeta.mimeType,
      'media_category': 'tweet_video',
      'total_bytes': videoObj.videoMeta.size
    }
  }
  const resultArray = yield requestPost(options)
  const body = resultArray[1]
  const mediaId = JSON.parse(body).media_id_string
  return mediaId
}

Append — The APPEND command accepts a chunk of data in base64 format, along with the mediaId & the number of the chunk, i.e. segment_index. This command should be called once for each chunk.

const _appendMediaUpload = function* (oAuthCredentials, data, mediaId, segmentIndex) {
  const options = {
    url: 'https://upload.twitter.com/1.1/media/upload.json',
    oauth: oAuthCredentials,
    form: {
      command: 'APPEND',
      'media_id': mediaId,
      'segment_index': segmentIndex,
      // media: data
      'media_data': data.toString('base64')
    }
  }
  yield requestPost(options)
}

Finalize — The FINALIZE command should be called once all the chunks are uploaded using the APPEND command. This tells Twitter that there are no more chunks left.

const _finalizeMediaUpload = function* (oAuthCredentials, mediaId) {
  const options = {
    url: 'https://upload.twitter.com/1.1/media/upload.json',
    oauth: oAuthCredentials,
    formData: {
      command: 'FINALIZE',
      'media_id': mediaId
    }
  }
  yield requestPost(options)
}

Status — The STATUS command returns the status of the uploaded media. This is needed because Twitter does some processing on uploaded media. If the state is not succeeded, the corresponding mediaId cannot be used with the POST statuses/update API. So one should always wait for the STATUS command to return a state of succeeded.

const _getStatusMediaUpload = function* (oAuthCredentials, mediaId, lastProgressPercent) {
  const options = {
    url: 'https://upload.twitter.com/1.1/media/upload.json',
    oauth: oAuthCredentials,
    qs: {
      command: 'STATUS',
      'media_id': mediaId
    }
  }
  const resultArray = yield requestGet(options)
  const body = JSON.parse(resultArray[1])
  if (body['processing_info']) {
    // if processing info is present, return it
    return body['processing_info']
  } else if (body.errors) {
    // if the body contains errors, build a message & throw
    const message = _.get(body, 'errors.0.message')
    const code = _.get(body, 'errors.0.code')
    throw new Error(`${code} ${message}`)
  } else {
    // else return a custom processing info object
    return {
      state: 'unknown',
      'progress_percent': lastProgressPercent
    }
  }
}

Please note, in the above snippets, requestPost & requestGet are just promisified functions on top of the request library. Also, _ refers to the lodash library.

const _ = require('lodash')
const request = require('request')
const Promise = require('bluebird')
const requestPost = Promise.promisify(request.post, {multiArgs: true})
const requestGet = Promise.promisify(request.get, {multiArgs: true})

Now let’s integrate all the pieces. We will first tie the APPEND & FINALIZE parts together, which deal with the uploading directly. For this, we will open a stream to fetch data from the video file. The approach differs depending on where the video resides; I have shown both ways with comments.

We switch back to callback style here to make use of events. Promise.coroutine helps us call generator functions inside a callback-based flow.

For the error event, and for a response event with a non-OK HTTP code, we return an error. For each data event we treat the data as a chunk & upload it to Twitter using the _appendMediaUpload function. Once all the chunks have been uploaded we call the _finalizeMediaUpload function to tell Twitter that we have uploaded all the chunks. This step might happen in either the data or the end event. To synchronise between the data & end events, we use the variables chunkUploadInProgress & streamReadingEnded.

const _streamMediaToTwitter = function (oAuthCreds, videoObj, mediaId, cb) {
  let segmentIndex = 0
  let chunkUploadInProgress = false
  let streamReadingEnded = false

  // if the video url is an S3 url (requires an aws-sdk S3 client as `s3`):
  // const filePath = videoObj.originalUrl
  // const indexOfForwardSlash = filePath.lastIndexOf('/')
  // const fileName = (indexOfForwardSlash !== -1) ? filePath.substr(indexOfForwardSlash + 1) : filePath
  // const startIndex = 'https://'.length
  // const endIndex = filePath.indexOf('.s3.amazonaws.com')
  // const bucket = filePath.slice(startIndex, endIndex)
  // const params = {
  //   Bucket: bucket,
  //   Key: fileName
  // }
  // const res = s3.getObject(params).createReadStream()

  // if the video url is an external url (use one of the two, not both):
  const res = request.get(videoObj.originalUrl)

  res.on('response', function (resp) {
    if (resp.statusCode !== 200) {
      const error = new Error(`request failed : ${resp.statusCode}`)
      res.resume()
      return cb(error)
    }
  })
  res.on('error', function (err) {
    return cb(err)
  })
  res.on('data', function (chunk) {
    res.pause()
    chunkUploadInProgress = true
    Promise.coroutine(_appendMediaUpload)(oAuthCreds, chunk, mediaId, segmentIndex)
      .then(() => {
        segmentIndex++
        chunkUploadInProgress = false
        res.resume()
        if (streamReadingEnded) {
          // the end event fired while this chunk was uploading, so finalize now
          Promise.coroutine(_finalizeMediaUpload)(oAuthCreds, mediaId)
            .then(() => cb(null, mediaId))
            .catch((err) => cb(err))
        }
      })
      .catch((err) => cb(err))
  })
  res.on('end', function () {
    streamReadingEnded = true
    if (!chunkUploadInProgress) {
      Promise.coroutine(_finalizeMediaUpload)(oAuthCreds, mediaId)
        .then(() => cb(null, mediaId))
        .catch((err) => cb(err))
    }
  })
}
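The S3 bucket/key parsing used in the streaming function can be factored into a small helper (parseS3Url is a hypothetical name). It assumes the virtual-hosted URL style https://&lt;bucket&gt;.s3.amazonaws.com/&lt;key&gt;:

```javascript
// Extracts the Bucket & Key params for s3.getObject from a virtual-hosted
// style S3 URL. Note: as in the snippet above, keys containing '/' would be
// truncated to their last segment by lastIndexOf.
const parseS3Url = function (s3Url) {
  const startIndex = 'https://'.length
  const endIndex = s3Url.indexOf('.s3.amazonaws.com')
  const bucket = s3Url.slice(startIndex, endIndex)
  const indexOfForwardSlash = s3Url.lastIndexOf('/')
  const key = (indexOfForwardSlash !== -1) ? s3Url.substr(indexOfForwardSlash + 1) : s3Url
  return { Bucket: bucket, Key: key }
}
```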

Now let's tie the remaining parts to the streaming function we just wrote. The remaining parts are the very first & last steps: _initMediaUpload, which actually initiates the media upload, and _getStatusMediaUpload, which gives the status of the processing of the uploaded media. We will need to check the status repeatedly here.

// the following runs inside a generator function (e.g. via Promise.coroutine)

// initialize media upload & get mediaId
const mediaId = yield _initMediaUpload(oAuthCredentials, videoObj)

// stream from the video url to twitter
yield Promise.promisify(_streamMediaToTwitter)(oAuthCredentials, videoObj, mediaId)

// check status & wait till the media processing is finished
let state = 'pending'
let progressPercent = 0 // you can print this to check progress
const startTime = Date.now()
do {
  yield Promise.delay(1000) // bluebird's delay, to avoid hammering the STATUS endpoint
  const processingInfo = yield _getStatusMediaUpload(oAuthCredentials, mediaId, progressPercent)
  state = processingInfo.state
  progressPercent = processingInfo.progress_percent
} while (state !== 'succeeded' && state !== 'failed' && (Date.now() - startTime) < 30 * 1000)

So now we have tied together all the pieces required to upload media to Twitter.

Few points to note -

  • While checking the status of the processing, make sure you keep an exit path after a certain period of time; the condition (Date.now() - startTime) < 30*1000 is there for this exact purpose.
  • The implementation is written for video, but it can be used even for images with minor tweaks, such as changing media_category in the INIT step.
  • Twitter has some strict conditions on the videos that can be published through the API; before starting the upload, make sure your video is well within the limits. The limitations are listed here.
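A pre-flight check run before INIT lets a doomed upload fail fast. The limits below (mp4, up to 140 seconds, up to 512MB) are assumptions based on Twitter's published video specs at the time of writing; always verify them against the current documentation. validateVideo is a hypothetical helper name:

```javascript
// Returns a list of problems with the video metadata, empty if it looks
// acceptable. The numeric limits are assumptions; check Twitter's current
// media specs before relying on them.
const validateVideo = function (videoObj) {
  const errors = []
  const meta = videoObj.videoMeta
  if (meta.mimeType !== 'video/mp4') {
    errors.push(`unsupported mime type: ${meta.mimeType}`)
  }
  if (meta.duration > 140) {
    errors.push(`duration ${meta.duration}s exceeds the 140s limit`)
  }
  if (meta.size > 512 * 1024 * 1024) {
    errors.push(`size ${meta.size} bytes exceeds the 512MB limit`)
  }
  return errors
}
```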

This approach is going to be useful even for Facebook

You would be glad to know that the Facebook API follows concepts similar to Twitter's for media upload.

Facebook provides two ways to upload media, just like Twitter. Facebook calls them Non-Resumable Upload & Resumable Upload. Resumable Upload is similar to Twitter's Chunked Upload: it has multiple steps which correspond to the Twitter commands described earlier. Read more about Facebook's Resumable Upload here.

So when you write code for Twitter chunked upload, you can make it generic enough to handle Facebook media upload as well.
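One way to keep the code generic is to map abstract upload steps to provider-specific command names. The Facebook values below reflect the Graph API's upload_phase parameter (start / transfer / finish); treat them as assumptions worth verifying against the current Graph API documentation:

```javascript
// A lookup table so the same upload loop can drive either provider.
const uploadSteps = {
  twitter: {
    init: 'INIT',
    append: 'APPEND',
    finalize: 'FINALIZE',
    status: 'STATUS'
  },
  facebook: {
    init: 'start',
    append: 'transfer',
    finalize: 'finish',
    status: null // Facebook reports success in the finish response
  }
}
```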

When I started working on this task, I faced a lot of difficulties making everything work with Node.js. So once I was able to put everything together properly, I decided to write this post. I hope it will be helpful to someone. Cheers!
