Webcam stream to ascii art with JavaScript

Aleksandr Kubarskii
4 min read · Mar 4, 2022


Not so long ago, when we didn’t have GUIs, only consoles, people created many cool applications and games using nothing but ascii characters. I have always liked this idea. It’s not very useful in today’s world, but we can still have a lot of fun playing with ascii characters and trying to create something. I decided to follow this route and make the video stream from a webcam available as ascii characters, and it turned out to be pretty simple.

What we’ll need: browser, vanilla JavaScript, Google, MDN
What we’ll have to do:
1. Get the video stream;
2. Get each pixel’s color;
3. Convert the color to grayscale;
4. Pick an ascii character based on the color’s lightness.

Let’s go step by step.
First, we will need some basic <html> that we will use to display the ascii video stream.

<!doctype html>
<html>
<head>
<meta charset="UTF-8">
<title>Ascii video stream</title>
<style>
body {
background: #00005e;
}

video, canvas {
display: none;
}

#text-video {
font-family: Courier, serif; /* a monospace font keeps character widths equal */
font-size: 6px;
line-height: 4px;
color: white;
}
</style>
</head>

<body>
<div>
<div>
<video id="video">Video stream not available.</video>
<canvas id="canvas-video"></canvas>
</div>
<div id="text-video"></div>
<button id="stop">Stop</button>
</div>
</body>
</html>

This html is very simple and contains 3 important things: <video> to capture the video stream easily, <canvas> that will be used for drawing the video and processing pixels, and <div id="text-video"> that will display the ascii characters.

Now let’s move on to displaying the video stream.

This is pretty simple and can be done in a few lines of code.

const video = document.getElementById('video');

const initTextVideo = () => {
  navigator.mediaDevices.getUserMedia({ video: true, audio: false })
    .then(function (stream) {
      video.srcObject = stream;
      video.play();
    })
    .catch(function (err) {
      console.log("An error occurred: " + err);
    });
}

The getUserMedia method requests user media according to the constraints provided; in our case we ask only for video. You can read more about getUserMedia on MDN. According to MDN: “getUserMedia returns a Promise that resolves to a MediaStream object. If the user denies permission, or matching media is not available, then the promise is rejected with NotAllowedError or NotFoundError DOMException respectively.” After the promise resolves, we can set the <video>’s srcObject to the stream and call the play() method, which attempts to begin playback of the media.
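Those two rejection cases MDN mentions can be told apart in the catch handler by the error’s name property. A minimal sketch (the helper name describeMediaError is our own, not part of the article’s code):

```javascript
// Map getUserMedia rejection names to user-facing messages.
const describeMediaError = (err) => {
  switch (err.name) {
    case 'NotAllowedError':
      return 'Camera permission was denied.';
    case 'NotFoundError':
      return 'No matching camera was found.';
    default:
      return 'An error occurred: ' + err.message;
  }
};
```

The original catch could then call console.log(describeMediaError(err)) instead of concatenating the raw error.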

Now it’s time for the second step — to get pixels and process them.

First, let’s draw the video stream onto the <canvas>.

const canvas = document.getElementById('canvas-video');
const ctx = canvas.getContext('2d');
const width = 320 / 2, height = 240 / 2;

const clearphoto = (ctx) => {
  ctx.fillStyle = "#fff";
  ctx.fillRect(0, 0, width, height);
}

const render = (ctx) => {
  if (width && height) {
    canvas.width = width; // setting canvas resolution
    canvas.height = height;
    ctx.drawImage(video, 0, 0, width, height);
  } else {
    clearphoto(ctx);
  }
}

This is all we need to start drawing the video on <canvas>. Simple, isn’t it? The main part here is ctx.drawImage(video, 0, 0, width, height): this line tells the canvas to take the current video frame and draw it, scaled down to width × height.
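The example hardcodes a 160 × 120 canvas. If you would rather derive the size from the camera’s actual resolution while capping the character-grid width, a small helper along these lines could work (fitToWidth and the cap are our own choices, not from the article):

```javascript
// Scale source dimensions down so the width fits under maxW,
// preserving the aspect ratio.
const fitToWidth = (srcW, srcH, maxW) => {
  const w = Math.min(srcW, maxW);
  return { width: w, height: Math.round(srcH * (w / srcW)) };
};
// fitToWidth(640, 480, 160) → { width: 160, height: 120 }
```

The source resolution could come from stream.getVideoTracks()[0].getSettings() once the camera stream is available.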

Now we are coming to the most interesting part — converting pixels into ascii and drawing it.

const gradient = "_______.:!/r(l1Z4H9W8$@";
const preparedGradient = gradient.replaceAll('_', '\u00A0'); // non-breaking spaces for the darkest values

const getPixelsGreyScale = (ctx) => {
  const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
  const data = imageData.data;
  let row = 0;
  const res = new Array(height).fill(0).map(() => []);
  for (let i = 0, c = 0; i < data.length; i += 4) {
    const avg = (data[i] + data[i + 1] + data[i + 2]) / 3;
    res[row].push(avg);
    if (c < width) {
      c++;
    }
    if (c === width) {
      c = 0;
      row += 1;
    }
  }
  return res;
}

const getCharByScale = (scale) => {
  const val = Math.floor(scale / 255 * (gradient.length - 1));
  return preparedGradient[val];
}

const renderText = (node, textDarkScale) => {
  let txt = `<div>`;
  for (let i = 0; i < textDarkScale.length; i++) {
    for (let k = 0; k < textDarkScale[i].length; k++) {
      txt = `${txt}${getCharByScale(textDarkScale[i][k])}`;
    }
    txt += `<br>`;
  }
  txt += `</div>`;
  node.innerHTML = txt;
}

Here the getPixelsGreyScale() function actually does all the job. First it takes the ImageData, an array that contains the RGBA values for each pixel on the canvas. The structure is the following: [r, g, b, a, r, g, b, a, …]. As you can see, it’s a flat array, and every 4 elements of it correspond to 1 pixel. The next thing we do in this function is turn each rgba group into a single value (from 0 to 255) by averaging the color channels, and also split the flat array into “rows”.
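To make that layout concrete, the byte offset of pixel (x, y) in the flat array can be computed like this (pixelOffset is a helper written for illustration; the article’s code iterates instead of indexing directly):

```javascript
// Four bytes (r, g, b, a) per pixel, rows of `width` pixels in sequence.
const pixelOffset = (x, y, width) => (y * width + x) * 4;
// With width = 160, pixel (3, 2) starts at (2 * 160 + 3) * 4 = 1292
```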

getCharByScale() is used to select an ascii character from the character list based on the color lightness. It simply maps the value range 0–255 onto the index range 0 to gradient.length - 1.
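Checking that normalization at its endpoints (scaleToIndex below is a standalone copy of the index math, written only for illustration):

```javascript
const scaleToIndex = (scale, length) => Math.floor(scale / 255 * (length - 1));
// The 23-character gradient has valid indices 0..22:
// scaleToIndex(0, 23) === 0    (black pixels → the leading blanks)
// scaleToIndex(255, 23) === 22 (white pixels → the densest character, '@')
```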

renderText() is even simpler. We just take the array of arrays of pixel lightness values and, using getCharByScale(), build the html and insert it into the passed node.
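An alternative worth noting: instead of building an HTML string with <br> tags, the rows could be joined into plain text and assigned via textContent, which skips HTML parsing on every frame (this would also require white-space: pre on #text-video). The toText helper below is a sketch of ours, parameterized over the character-mapping function:

```javascript
// Join each row of lightness values into a line of characters,
// and the lines into one newline-separated string.
const toText = (rows, charFor) =>
  rows.map((row) => row.map(charFor).join('')).join('\n');
// Usage in the render loop: textVideo.textContent = toText(chars, getCharByScale);
```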

One thing is left: start a loop that renders the characters on every frame.

const textVideo = document.getElementById('text-video');

initTextVideo(); // request the camera and start the <video> playback

const interval = setInterval(() => {
  requestAnimationFrame(() => {
    render(ctx);
    const chars = getPixelsGreyScale(ctx);
    renderText(textVideo, chars);
  });
});

That is all… You can check the gist with the example here.

P.S.

This is my first attempt to publish on Medium. I’d be interested to hear your opinion.
