Scaling the language barrier in French Rap with React and Redux
Looking up and translating song lyrics can get tedious
From Young the Giant to Bigflo & Oli
Before I get into anything about React or Redux, I think it’s worth giving a bit of background on what I set out to do.
For a long time in my music-listening career, I had stayed in and around the indie, alternative and rock genres. I didn’t really listen to a lot of music back then, and I never felt the need to venture out of my limited music domain, except for the occasional viral pop song that grew on me.
About three years ago, I was introduced to Grime, a genre characterized by rapid electronic music and hip-hop/rap vocals. It originated from the UK, and more specifically, London. I was mostly drawn to grime because of the football (soccer) references in the lyrics, and how creatively grime artists manipulate these references to talk about their achievements, insult “haters”, or talk about general cultural phenomena.
I discovered French rap, again, through football. I stumbled upon a documentary called Ballon sur Bitume (“Ball on concrete”). The movie explains how football and rap are deeply connected in the hood, and how both rappers and footballers influence each other. Through the film, I discovered MHD’s “Afro Trap” series, which combines a lot of African beats with more conventional rap music.
My other big discovery from Ballon sur Bitume was the song “Matuidi Charo” by Niska. Its chorus is based on Blaise Matuidi (pictured/gif’d above), France’s World Cup-winning midfielder, who once played for Paris Saint-Germain but now plays for Juventus in Italy. His “charo” goal celebration has since become my go-to celebration on FIFA (hold L1/LB, spin the right stick clockwise), in real-life football, and even in real life when I’m not playing football (you get judged far more than I imagined for doing this when you do well on homework).
I liked the music enough to explore more rappers and artists, and eventually got around to making a playlist for French rap. I wasn’t particularly bothered by not understanding a word of what was being said in the songs, and I just used it as background music to study to because (at least at the beginning) I couldn’t sing along. I was content with just detecting the occasional reference to a French football player or club.
Fast forward to this past summer: I found myself interning at Amadeus, near Nice, France. My consumption of French rap, and French music in general, increased, thanks to friends and the radio on the shuttle I took to work every day. I also discovered some cool playlists curated on Spotify, such as Cloud Rap and Rap FR. Even in the songs I had heard before, I could now understand words other than footballers’ names thanks to my slightly improved French comprehension and vocabulary. I was now looking up song lyrics more often on Genius and translating them using Google Translate. This usually turned out to be a pretty tedious process, especially if I was doing it on my phone.
I did realize that these direct Google Translate results were often very rudimentary, and that a huge amount of context was lost in translation. For me, however, the bottom line was using these translations as a means to expand my vocabulary beyond football players and the common, repeated profanities I had inadvertently picked up from looking up and translating lyrics in the past.
I think one of the challenges of learning French as an English speaker is relating what you hear to how it’s spelt. Even though rap is too fast to build up French listening skills, looking up lyrics still allowed me to figure out these correlations between word spellings and pronunciations.
An opportunity to automate
For a long time I had been meaning to use the Spotify Web API, but I never really got around to it because I could never come up with a good idea for a project to use it in. This was that elusive project.
React was my frontend framework of choice for this. I did have some past experience building React applications, but I had quickly passed up on using Redux (a predictable state container commonly used with React) last time after reading this post by the creator of Redux.
You Might Not Need Redux
People often choose Redux before they need it. “What if our app doesn’t scale without it?” Later, developers frown at…
It talks about how applications of relatively small size and scope, such as what I was building at the time, are often too simple to benefit from the predictable state container way of life. Moreover, he also says:
However, if you’re just learning React, don’t make Redux your first choice.
This time though, I pushed myself to use Redux, at the very least for my own learning. My general philosophy for learning a new framework or language is to build something with it to get a basic understanding of how things work. Then, if I need to implement more advanced functionality, I read the advanced recipes in the framework’s documentation and dig around on Stack Overflow.
I was particularly motivated to use Redux because during my internship, a mentor showed me code for an application using NgRx Store (Angular’s ReactiveX-powered Redux). Even though the way reducers and actions were set up looked confusing to me, I was intrigued by how simple it was to retrieve parts of data from the store. I saw in it the potential to liberate me from convoluted parent-child component dependencies and the challenges of passing data between them, especially because the most popular way of doing so in Angular is at the template (HTML) level.
Pre-match pitch inspection
I had a relatively simple flow for the application in mind. I wanted to get information about the song I was listening to from the Spotify API. Then, I would look for this song on Genius, and get lyrics because I assumed that’s what the Genius API would be for. I would then pass these lyrics through some kind of translation API. Finally, I would just display everything in a relatively simple UI, allowing me to switch between the lyrics and translation.
Sooo…Where do I make my API calls?
Something that I really like, or perhaps have grown to like, about Angular is the concept of services. Having different services make the API calls helps me group them and mark them as the data sources for the application.
With a limited but growing intuition for organizing things the Redux way, I struggled to understand whether my API calls should go into my actions, my reducers, or an entirely different construct altogether. My doubts stemmed from the fact that the reducers I had seen mostly handled synchronous operations, such as changing a simple variable in the application, while asynchronous API calls seemed challenging to write dispatches for. After progressing further into the advanced Reddit API example in the Redux documentation, it turned out that managing asynchronous flows was a lot easier than I had imagined, mainly because of the redux-thunk middleware.
This middleware essentially allows action creators to return functions instead of plain action objects. As a result, the parts of my code that handle the API calls, or more specifically the promises that the HTTP client (Axios in my case) returns, are where the dispatch calls for the actions go.
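To make that concrete, here is a minimal sketch of the pattern. The action names, the `/lyrics/:id` route, and the injected `client` are hypothetical placeholders; in the real app the thunk would call Axios directly.

```javascript
// Hypothetical action types and creators for fetching lyrics.
const REQUEST_LYRICS = 'REQUEST_LYRICS';
const RECEIVE_LYRICS = 'RECEIVE_LYRICS';
const LYRICS_ERROR = 'LYRICS_ERROR';

const requestLyrics = (songId) => ({ type: REQUEST_LYRICS, songId });
const receiveLyrics = (songId, lyrics) => ({ type: RECEIVE_LYRICS, songId, lyrics });
const lyricsError = (songId, error) => ({ type: LYRICS_ERROR, songId, error });

// With redux-thunk, an action creator can return a function that
// receives `dispatch`. The HTTP client (Axios in the real app) is
// injected here only to keep the sketch self-contained.
const fetchLyrics = (songId, client) => (dispatch) => {
  dispatch(requestLyrics(songId));          // synchronous "loading" action
  return client
    .get(`/lyrics/${songId}`)               // hypothetical backend route
    .then((res) => dispatch(receiveLyrics(songId, res.data)))
    .catch((err) => dispatch(lyricsError(songId, err.message)));
};
```

The thunk dispatches a “loading” action immediately, then dispatches the success or failure action from inside the promise callbacks.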
It’s even nicer to be able to track this flow while debugging, which I did using the redux-logger middleware. For each action received, it logs the current state of the application, the action itself, and what the next state will be after it is reduced.
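For context, a Redux middleware is just a function with the signature `store => next => action`. A stripped-down sketch of what redux-logger does, not its actual implementation, might look like:

```javascript
// Minimal redux-logger-style middleware: log the incoming action,
// let it through to the reducer, then log the resulting state.
const logger = (store) => (next) => (action) => {
  console.log('dispatching:', action.type);
  const result = next(action);        // the action reaches the reducer here
  console.log('next state:', store.getState());
  return result;
};
```

In the real app this would be wired in with Redux’s `applyMiddleware(thunk, logger)` when creating the store.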
Conditional Rendering and ngClass
When I used React, and more specifically JSX, for the very first time, being able to use HTML syntax as variables threw me back to the days of PHP (but the documentation was quick to pull me back).
In Angular, being able to suppress an element or component with a simple ngIf directive in the template makes rendering elements quite straightforward and modular. The same goes for dynamically adding classes based on boolean variables or conditions using the ngClass directive.
With JSX, the direct equivalent of these directives was the ternary operator, which in my opinion encroached on the readability of my code. What I found great about React is how easy it is to write a function that mirrors what ngClass does in Angular.
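A minimal version of such a helper, assuming it takes an object mapping class names to booleans (the shape ngClass accepts in Angular), could be:

```javascript
// Build a className string from a map of class names to conditions,
// mimicking Angular's ngClass directive.
function ngClass(classMap) {
  return Object.keys(classMap)
    .filter((name) => classMap[name])   // keep only truthy entries
    .join(' ');
}

// In JSX (the class names here are hypothetical):
// <div className={ngClass({ lyric: true, highlighted: isPlaying })}>
```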
After writing this, I innocently thought I was the first person to come up with the idea. Soon enough, I found this extremely popular library that did what I wanted my ngClass to do in React, and a lot more. I still decided to stick with the ngClass function I wrote, mostly because I was salty about someone doing this before me.
When I started making this application, I made certain assumptions about my data sources. To my surprise, the Genius API doesn’t actually give you song lyrics. This seemed to me like a huge travesty. I started looking at alternatives, but it was evident that the Genius database has the best coverage for French rap. The API does have a search endpoint, which allows you to search for a song and get a song ID that uniquely identifies it in the Genius database.
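The lookup amounts to calling the search endpoint and pulling the first hit’s ID out of the response, which nests results under `response.hits[].result`. A sketch, with the query and token as placeholders:

```javascript
// Extract the first matching song's ID from a Genius /search response.
function firstSongId(searchJson) {
  const hits = searchJson.response.hits;
  return hits.length > 0 ? hits[0].result.id : null;
}

// The request itself, roughly (GENIUS_TOKEN is a placeholder):
// axios.get('https://api.genius.com/search', {
//   params: { q: `${title} ${artist}` },
//   headers: { Authorization: `Bearer ${GENIUS_TOKEN}` },
// }).then((res) => firstSongId(res.data));
```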
I used this song ID to dynamically embed the lyrics page into one of my React components, and since I was using the dangerouslySetInnerHTML attribute, the stylesheets didn’t load successfully. However, since the lyrics were showing up in some form, this wasn’t detrimental to the core function of the application, and I decided not to address the issue.
Translation APIs & Combining Promises
My next challenge was translating the song lyrics. I found the Yandex Translate API fairly straightforward to use, mainly because creating an API key and using it up to a certain limit was free. The challenge lay in optimizing the embedded Genius page to minimize how many characters I needed to translate, since that is the metric the API uses for rate limiting. I decided it would be best to keep HTML tags from being sent to the translator and send only the text inside those tags.
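One way to do that stripping, assuming the lyrics arrive as an HTML string, is a simple tag filter. This is only a sketch; a messier page structure would call for a proper HTML parser.

```javascript
// Strip markup and keep only the text, so that only characters which
// count toward the translation quota are sent to the API.
function extractText(html) {
  return html
    .replace(/<br\s*\/?>/gi, '\n')   // keep line breaks between lyrics
    .replace(/<[^>]+>/g, '')         // drop all other tags
    .trim();
}
```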
I found a way to create a request for each <div> tag that enclosed a song lyric, and I was able to pass it to the translation API and fetch a response. However, since I was doing this while iterating over an array of divs (song lyrics), I needed a way to combine the promises that the requests returned and wait for all of them to resolve before sending data back from my server. After a bit of digging, I found Promise.all, which was basically what I was looking for.
It allowed me to push all the promises returned by the Axios requests into an array and resolve them with a single .then() statement, which greatly simplified the process of returning these translated lyrics more or less synchronously.
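Put together, the fan-out looks like this sketch, with `translate` standing in for the Axios call to the translation endpoint:

```javascript
// Fire one translation request per lyric line, then wait for all of
// them with Promise.all, which resolves to results in the same order
// as the input lines.
function translateAll(lines, translate) {
  const requests = [];
  lines.forEach((line) => requests.push(translate(line)));
  return Promise.all(requests);
}

// translateAll(lyricLines, (line) => axios.get(yandexUrl, { params: { text: line } }))
//   .then((responses) => res.send(responses));   // hypothetical Express handler
```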
Deployment and Future
Hosting this application and making it publicly available is a challenge because of the monthly character limit the Yandex API imposes for translating lyrics, a limit that is easily exceeded if multiple people use the application on one Yandex API key. However, the repository above contains all the code for this application, and the README outlines how it can be deployed.
Even though I don’t see this application as a direct replacement for the Spotify app that I generally use, it would still be nice to have some form of playback control to pause, rewind or skip a song, or move to a different playlist (the Spotify API allows all of this!).
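Those player controls map onto simple REST calls under the Web API’s `/v1/me/player` endpoints, authorized with a bearer token. A sketch of building such a request (the token handling is assumed; pause uses PUT while next/previous use POST):

```javascript
// Describe a Spotify player request for a given action
// ('pause', 'next', or 'previous').
function playerRequest(action, token) {
  const method = action === 'pause' ? 'PUT' : 'POST';
  return {
    url: `https://api.spotify.com/v1/me/player/${action}`,
    method,
    headers: { Authorization: `Bearer ${token}` },
  };
}

// e.g. const req = playerRequest('next', accessToken);
// fetch(req.url, { method: req.method, headers: req.headers });
```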
Initially, I also imagined needing some kind of database service to cache lyrics that are looked up repeatedly, but I abandoned that idea later. As a result, all the Node and Express backend does is make a bunch of API calls, which can be done easily in a serverless environment. This also opens up the possibility of hosting this as a static application, which is something I want to pursue in the near future.