All my books in AR

Andrey Suvorov · Published in The Startup · 9 min read · Aug 10, 2019
What we’ll get at the end

Idea

It all started with a book. A friend recommended it to me at a wedding. I bought it on Kindle and opened it: Kindle showed "120 hours left to read". That label made me think: for the last six years or so I've been reading entirely in the Kindle app, and when you buy digital books on Amazon you don't really care how big they are; you don't visualize them or relate them to physical dimensions at all. I also don't remember going outside the apartment much for the next few weeks. After that, I googled the physical edition. It looked cool!

image from abebooks

I liked the way it looked, so I started to wonder: what would all my books look like if they were paper? So I decided to make an app for that!

Table of contents

In the previous post, I described an easy and clean library for gathering data from Goodreads. Here, I'll show you how to render realistic book objects in AR, and in the upcoming post we'll discuss the architecture (and clean up all the prototyping mess we'll make here).

  1. GoodReads API from Android with Kotlin
  2. All my books in AR [You are here!]
  3. From stuff.kt to Architecture (How to rewrite your fully functioning prototype, increase the number of files by 20 and stop worrying)

You can play with the resulting app on Google Play:

Complete sources for the current post are in the tutorial branch:

Setup the tools

I'll start where most tutorials end: a simple Android Studio project with an ArFragment, where you can place a 3D object on a plane. See the hello Sceneform project from Google or this series.

Make yourself comfortable. The emulator can be helpful for checking quick changes; this guide will help organize the setup. Did you know you can walk around inside your emulator or decorate its walls? I didn't. (Well, yes, it's meant for augmented images testing, but still.)

You’ll spend a lot of time here…

As for a real device, make sure you've mastered adb over WiFi: short USB cables will easily turn your debugging sessions into yoga.

Gather the data

I've done some preliminary work (you can read about it in the previous post: GoodReads API from Android with Kotlin). Long story short: the whole process of obtaining book data was packed into a small Android library I'll be using here. Initialization and the login procedure were described before, so here I'll focus on usage. Let's define our data structures:
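A minimal sketch of the two classes (the exact fields are my assumptions, based on how they're used later in this post):

```kotlin
import com.google.ar.sceneform.math.Quaternion
import com.google.ar.sceneform.math.Vector3

// What we keep from the Goodreads response.
data class BookModel(
    val title: String,
    val isbn: String?,    // needed for the OpenLibrary fallback below
    val imageUrl: String, // cover URL from Goodreads
    val numPages: Int?    // used later to approximate thickness
)

// Everything needed to render one book in the scene.
data class ARBook(
    val size: Vector3,        // physical size in meters
    val position: Vector3,    // offset within its pile
    val rotation: Quaternion, // 180° flip plus a little variance
    val coverUrl: String,
    val spineColor: Int       // extracted from the cover later
)
```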

I declared two separate data classes: BookModel is just what I need from the GR data, and ARBook is all the data I need to render a book in AR. I'll keep them in the Application instance for simplicity (don't be too paranoid about it; I know it's not the best option, but for a prototype I think it's perfectly valid):
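A sketch of that holder (the class name is hypothetical):

```kotlin
import android.app.Application

// Prototype-level shared state; a proper repository layer comes in the next post.
class BooksApplication : Application() {
    val books = mutableListOf<BookModel>()
    val arBooks = mutableListOf<ARBook>()
}
```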

The next piece of code loads the data, sorts it, and does some magic with the image URLs:
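Roughly like this; goodReadsClient.fetchAllBooks() is a stand-in for the library call from the previous post:

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext

// Load, sort, and patch the cover URLs off the main thread.
suspend fun loadBooks(): List<BookModel> = withContext(Dispatchers.Default) {
    goodReadsClient.fetchAllBooks()
        .sortedBy { it.title }
        .map { book ->
            when {
                // GR returns a "nophoto" placeholder for many titles; fall back
                // to the OpenLibrary Covers API when we have an ISBN.
                "nophoto" in book.imageUrl && book.isbn != null -> book.copy(
                    imageUrl = "https://covers.openlibrary.org/b/isbn/${book.isbn}-L.jpg"
                )
                else -> book
            }
        }
}
```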

First, I tend to wrap all the CPU-intensive work in Dispatchers.Default. Second, if you missed the first post: the Goodreads API is awful. For most of the bestsellers, Goodreads gets cover images from its parent company (have you heard of a little one called Amazon?). For legal reasons, they are obliged not to distribute such data to third parties via the API. What that means for us is that for 80% of the books you'll get this as a cover:

not really impressive, right? and it doesn't even come with a cat

Fortunately, we have the OpenLibrary Covers API for such cases! The when block in the code above replaces the empty covers from GR with OpenLibrary links. Of course, this all works only if the book has an ISBN and OpenLibrary has the desired image.

Immerse

Go to Poly and choose a book model. Something like this: brown book by Norbert Kurucz (but of course you can choose whatever you like).

Also, if someone could explain to me why all the mouse/touch rotations in the Android Studio viewer are inverted, I'd be really happy.

The process of adding 3D objects to a project has been explained many times, so I won't dwell on it here (for reference, see this). The only enhancement I found worthwhile is wrapping the runtime model loading in a coroutine (since the rest of my async code will be coroutine-based, it's better to do everything in a consistent manner):
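Something along these lines, assuming the model was imported as book.sfb:

```kotlin
import android.content.Context
import android.net.Uri
import com.google.ar.sceneform.rendering.ModelRenderable
import kotlinx.coroutines.future.await

// Suspend on the builder's CompletableFuture instead of chaining
// thenAccept callbacks.
suspend fun loadBookModel(context: Context): ModelRenderable =
    ModelRenderable.builder()
        .setSource(context, Uri.parse("book.sfb"))
        .build()
        .await()
```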

await is a Kotlin extension for the Java 8 CompletableFuture. To get it, you need one additional dependency besides the coroutines modules: jdk8 support:
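In the app module's build.gradle (the version here is an assumption; match it to your coroutines version):

```groovy
// Provides the CompletableFuture.await() extension
implementation "org.jetbrains.kotlinx:kotlinx-coroutines-jdk8:1.3.0"
```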

It wasn’t easy to find…

Scale

The first thing you'll encounter (if you are like me and don't live in a palace): almost all assets from Poly or any other asset store are enormously HUGE!

Most tutorials just adjust the scale value in the sfa/sfb files:
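For instance, in the sfa (an assumed excerpt; the surrounding structure varies by model):

```
model: {
    attributes: [ "Position", "TexCoord", "Orientation" ],
    file: "sampledata/book/book.obj",
    name: "book",
    scale: 0.25
}
```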

But for the purpose of rendering at a specific size, we need a more fine-grained solution. You could still shave off some of the size in the sfb, but for accurate sizing we first need to measure the model in normal physical dimensions (if you're not British, this means meters):
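One way to do it, sketched via Sceneform's collision shape (for a loaded model it is a Box):

```kotlin
import com.google.ar.sceneform.collision.Box
import com.google.ar.sceneform.math.Vector3
import com.google.ar.sceneform.rendering.ModelRenderable

// The renderable's collision shape gives its extents in meters.
fun measureModel(renderable: ModelRenderable): Vector3 {
    val box = renderable.collisionShape as Box
    return box.size // x = width, y = height, z = depth
}
```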

Bonus points to anyone who knows how to obtain size from obj…

Since I use only one model, after measuring it once I hardcoded its size for future reuse (this is after preliminary scaling in the sfb):
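The numbers below are placeholders; yours will depend on the model and the sfb scale:

```kotlin
// Measured once, then hardcoded for reuse (placeholder values).
val MODEL_SIZE = Vector3(0.177f, 0.045f, 0.245f)
```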

After measuring the original size of your model, it's time to scale it down to the exact size we need:
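A sketch of the per-axis scaling, given a target physical size:

```kotlin
import com.google.ar.sceneform.Node

// Scale the node so the model matches the desired physical size;
// targetSize.y (the thickness) will vary per book.
fun scaleToSize(node: Node, targetSize: Vector3) {
    node.localScale = Vector3(
        targetSize.x / MODEL_SIZE.x,
        targetSize.y / MODEL_SIZE.y,
        targetSize.z / MODEL_SIZE.z
    )
}
```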

15 × 23 centimeters, something like a default 6 × 9″ paperback

By the way, the coordinate system used in sceneform is the following:

And now you can verify that the size matches your expectations:

Technically it’s still an AR Ruler…

Colorize

Now it's time to paint the book's spine and add some pretty covers. Adjusting an object's material at runtime is one of the many "hic sunt dracones" areas of Sceneform. At the time of writing, the best general approach is through a dummy object. Fortunately, if you only need to change the color, we can do without such hacks.

By looking carefully at your model's material definition, you'll find a few sections:
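In the sfa's materials list you'll see something like this (the names here are assumptions; they vary per model):

```
materials: [
    {
        name: "book_cover",
        parameters: [ { baseColor: "book_cover_texture" } ]
    },
    {
        name: "book_pages",
        parameters: [ { baseColor: [ 0.9, 0.9, 0.85, 1 ] } ]
    }
]
```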

This obviously could vary…

At runtime, you can access these sections and assign properties to them like this:
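A sketch; both the submesh index and the parameter name must match what your own sfa defines (they are assumptions here):

```kotlin
import com.google.ar.sceneform.rendering.Color
import com.google.ar.sceneform.rendering.ModelRenderable

// Tint one named section of the model.
fun paintSpine(renderable: ModelRenderable, color: Color) {
    renderable.getMaterial(0).setFloat3("baseColorTint", color)
}
```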

Experiment with it and you’ll find the needed sections:

I’ve also rotated the book so that the spine is at the left side…

Cover

Next come the beautiful covers! It's no good to download the image during object placement in Sceneform: no one likes glitches and delays. It's better to prefetch all the needed images in advance. For now, I'm assuming you've got the Bitmap beforehand (later I'll show how to prefetch the covers). We could create a texture map from the cover and then assign it to the object, but that would require creating a big bitmap at runtime for each book (and I'm hoping to render a whole pile of books).

To render UI in Sceneform, one must understand ViewRenderable and remember that "every 250dp for the view becomes 1 meter for the renderable" (don't ask). A child node is placed by default at the parent's zero coordinates, so to align the cover image perfectly on top of the book model we need to adjust its local position, scale, and rotation.

Rotation is easy, even though it involves a Quaternion: we just need to rotate by 90 degrees around the X-axis:
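A one-liner sketch (the sign depends on which way your view faces):

```kotlin
// Lay the vertical ViewRenderable flat onto the book's top face.
coverNode.localRotation = Quaternion.axisAngle(Vector3(1f, 0f, 0f), -90f)
```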

The scale is a little bit tricky, since first we need to calculate what size the image would be rendered at scale 1 (remember the rule: 250 dp is 1 m):
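For a cover layout of a given dp size (the names here are assumptions):

```kotlin
// At scale 1, 250 dp of the view becomes 1 meter of renderable.
const val DP_PER_METER = 250f

// Size (in meters) at which a view of the given dp dimensions renders.
fun renderedSizeAtScaleOne(widthDp: Float, heightDp: Float) =
    Vector3(widthDp / DP_PER_METER, heightDp / DP_PER_METER, 0f)
```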

Then we can scale it down to fit the parent book object's size (already scaled itself). I also adjust the resulting scale a little to account for the model's edges:
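A sketch of that scale-down (the 0.93f inset factor is a guess):

```kotlin
// Shrink the cover to the book's footprint, slightly inset so it
// stays clear of the model's edges.
val renderedSize = renderedSizeAtScaleOne(coverWidthDp, coverHeightDp)
coverNode.localScale = Vector3(
    bookSize.x / renderedSize.x * 0.93f,
    bookSize.z / renderedSize.y * 0.93f,
    1f
)
```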

Surprisingly, the danger zone for me was positioning. After rotating the book model programmatically by 180 degrees around the Y-axis (you can't do this in the sfa), the child node's (the cover's) axes are rotated too. So the resulting code looks like this:
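A sketch of the offsets with that flip taken into account (the exact inset values are assumptions):

```kotlin
// The parent is rotated 180° around Y, so the child's X and Z offsets
// are mirrored; Y sits just above the top face to avoid z-fighting.
coverNode.localPosition = Vector3(
    -coverInsetX,
    bookSize.y + 0.001f,
    -coverInsetZ
)
```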

layout for covers

Colorize with charm

The next thing that comes to mind: a book's spine color usually correlates with the cover image itself. I started to craft an algorithm for that, but fortunately there is already a great library we can use, ColorArt:

You could probably achieve similar results with Android's Palette. For the spine, we only need the background color:
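A sketch with ColorArt (check the library for the exact import path; Palette would work similarly):

```kotlin
import org.michaelevans.colorart.library.ColorArt

// Derive the spine color from the already prefetched cover bitmap.
val colorArt = ColorArt(coverBitmap)
val spineColor = colorArt.backgroundColor // a regular @ColorInt
```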

And now our books look more natural:

can you tell which book is real?

Placement

Now it's time to make some mess! Let's render all the books we have in a grid. We previously downloaded a list of BookModels; now let's fill a list of ARBooks with all the data we need to place the books properly in 3D:
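A sketch of that loop, with a hardcoded list of pile offsets (see the apology below) and a per-pile elevation counter:

```kotlin
// X/Z offsets of each pile in the grid (placeholder values).
val pileOffsets = listOf(
    Vector3(0f, 0f, 0f), Vector3(0.18f, 0f, 0f), Vector3(0.36f, 0f, 0f),
    Vector3(0f, 0f, 0.26f), Vector3(0.18f, 0f, 0.26f), Vector3(0.36f, 0f, 0.26f)
)

fun buildARBooks(books: List<BookModel>): List<ARBook> {
    val elevations = FloatArray(pileOffsets.size) // current height of each pile
    return books.mapIndexed { i, book ->
        val pile = i % pileOffsets.size
        val size = approximateSize(book)   // sizing sketch below
        val offset = pileOffsets[pile]
        ARBook(
            size = size,
            position = Vector3(offset.x, elevations[pile], offset.z),
            rotation = randomizedRotation(), // rotation sketch below
            coverUrl = book.imageUrl,
            spineColor = 0 // filled in once the cover is fetched
        ).also { elevations[pile] += size.y }
    }
}
```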

First, let me apologize for the offsets list. I wrote a version that wasn't hardcoded, but it turned out to take more space and be less obvious to understand, so I kept the hardcoded one. We traverse all the books and place them layer by layer. For every pile, we track the current elevation by summing the thicknesses of the books in that pile.

As I mentioned before, I rotate the books by 180 degrees and add some variance (I've never seen a perfectly aligned pile of books):
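For example (the jitter range is a guess):

```kotlin
import kotlin.random.Random

// 180° flip so the spine faces left, plus a small random twist per book.
fun randomizedRotation(): Quaternion {
    val jitter = Random.nextFloat() * 10f - 5f // between -5° and +5°
    return Quaternion.axisAngle(Vector3.up(), 180f + jitter)
}
```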

As for the sizes, since I haven't found any open API with such data, I used an approximation. According to Basford, K.E., G.J. McLachlan & M.G. York in "Modelling the distribution of stamp paper thickness via finite normal mixtures": "As noted by Izenman and Sommer, there is some clustering around the values 0.07, 0.08, 0.09, 0.10, 0.11, 0.12, and 0.13 mm, with about half the data between 0.06 and 0.08". Adding 2 mm for the covers and the standard paperback size, we get:
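A sketch of that approximation: about 0.1 mm per sheet (two pages) plus 2 mm for the covers, on a standard paperback footprint (the fallback page count is a guess):

```kotlin
// 15 x 23 cm footprint; thickness derived from the page count.
fun approximateSize(book: BookModel): Vector3 {
    val sheets = (book.numPages ?: 250) / 2
    val thickness = sheets * 0.0001f + 0.002f // meters
    return Vector3(0.15f, thickness, 0.23f)
}
```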

And, as promised, the code to prefetch the covers (I used Glide):
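A sketch using Glide's FutureTarget on the IO dispatcher:

```kotlin
import android.content.Context
import android.graphics.Bitmap
import com.bumptech.glide.Glide
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext

// Download every cover bitmap up front so placement stays glitch-free.
suspend fun prefetchCovers(context: Context, urls: List<String>): List<Bitmap> =
    withContext(Dispatchers.IO) {
        urls.map { url ->
            Glide.with(context)
                .asBitmap()
                .load(url)
                .submit()
                .get() // blocks this IO thread until the bitmap is ready
        }
    }
```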

The resulting placement looks like this:

Performance

On my Nexus 6P, the scene starts to stutter at around 200 book nodes with covers. To improve performance, you can remove the cover nodes of the bottom books (they're overlapped anyway). See ARFragment.kt in the master branch for that.

The tricks described here are used in my open-source app called Bookar. You can get it from Google Play.

All the sources are available here.

Thanks for reading!
