We’re Living in a Dark Age of Gaming But All Is Not Lost

Razz Calin
ChasingProducts
10 min read · Aug 22, 2019


A look into the evolution of video games, from early vehicles for fun to modern-day versions that seem optimized more for profit.

Console and PC video games have come a long way since the days of the Atari VCS or the Commodore 64 of the ’70s and ’80s. Until around ten years ago, playing the latest and greatest games required either a thousand-dollar computer or a games console costing several hundred dollars. The proliferation of smartphones represented a tectonic shift in the industry, expanding its total addressable market from middle-class audiences in Tier 1 countries (markets with an elevated standard of living) to everyone who can afford a smartphone, anywhere in the world.

Photo by Ciaran O'Brien

Fast forward to today and the landscape could not be more different. Just as the first personal computers employed the office analogy, displaying files and folders on a desktop, the first games meant for personal devices borrowed most of their mechanics from games designed for arcade machines. The high (borderline impossible) difficulty of arcade games was a business tool: it guaranteed an optimal monetization strategy and the many playing hours needed to finish a game.

This model worked fine for the first personal video games, but as developers released new products they had to adjust their strategy to ensure customers would buy the next title, leading over time to shallower learning curves and more casual gameplay than the arcade ordeal. With the rise of the up-front payment business model, the best and biggest PC and console games, also known as triple-A games, began to rely on longer gameplay time, bigger worlds, and high-quality graphics to deliver an experience that is easy to learn for most but gets harder as you progress.

In the past decade, mobile game developers, whose products also started out as ports of old computer games, found themselves in a very crowded market: the low friction of making and publishing such games led to a scenario where paid installs became the dominant user acquisition method. This resulted in a commoditization of these products that eventually culminated in the universal adoption of the free-to-play model, where games have zero up-front cost.

Microtransactions were the engine that originally powered mobile games when they transitioned from the premium business model to free-to-play. The system’s strength lies in emotionally distancing players from the feeling of spending real money on each transaction: items are priced in an intermediary virtual currency that is purchased in bulk and can itself only be bought with real money.
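To make that decoupling concrete, here is a minimal Python sketch of the intermediary-currency pattern. Every bundle size, gem count, and item name below is hypothetical, invented purely for illustration, not taken from any real game:

```python
# Illustrative sketch of the intermediary-currency pattern.
# All prices and bundle sizes are made up for this example.

GEM_BUNDLES = {  # real-money price (USD) -> gems received
    4.99: 500,
    19.99: 2200,   # bulk bundles offer a better gems-per-dollar rate
    99.99: 12500,
}

ITEM_PRICES_GEMS = {"epic_sword": 950, "character_skin": 1200}

def effective_dollar_cost(item: str, bundle_price: float) -> float:
    """Real-money cost of an item — the figure the gem layer obscures."""
    gems_per_dollar = GEM_BUNDLES[bundle_price] / bundle_price
    return ITEM_PRICES_GEMS[item] / gems_per_dollar

# The player sees "950 gems"; the underlying dollar figure stays hidden.
print(round(effective_dollar_cost("epic_sword", 19.99), 2))
```

The player-facing number (950 gems) never lines up neatly with a bundle size, which is part of the design: leftover gems nudge the player toward the next purchase.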

When developers of triple-A games saw just how profitable this new option was, they started designing games from the ground up with the intention of selling in-game items that provide advantages to those who can afford them, while also maintaining the up-front price tag. To keep their games fresh, they adopted a continuous release infrastructure that delivers new content to players on a regular basis, sometimes as often as once per month. The new maps, missions, characters, or vanity items had to be paid for on top of the initial price of the game, but they were not vital to the experience; they simply extended it. These models are partially responsible for GTA V becoming the most successful entertainment product, still setting sales records six years after launch.

Loot boxes first appeared in MMO games as a more visually appealing interface for the old random loot-drop system, then quickly made their way into mobile games and other genres. These virtual boxes contain a random selection of items, vanity or functional, and can sometimes be won during gameplay, but they are always available for individual purchase. What got this mechanic into hot water, on top of the pay-to-win aspect, was that multiple countries found loot boxes to be no different from gambling due to the random chance of getting items, and mandated that developers remove them. Despite publishers’ initial total opposition to these rulings, arguing that it is impossible to cash out earnings from a video game, most of the big ones agreed to a compromise: recently, game publishers and console makers Nintendo, Sony, and Microsoft agreed to display the exact odds of getting each item inside a loot box in all products published on their platforms starting in 2020.
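The odds-disclosure compromise maps directly onto how loot-box draws work under the hood: a weighted random pick over a published drop table. A minimal sketch, with a made-up drop table whose item names and probabilities are assumptions, not any real game’s figures:

```python
import random

# Hypothetical drop table; disclosed odds would look like this.
DROP_TABLE = [("common skin", 0.70), ("rare emote", 0.25), ("legendary item", 0.05)]

def open_loot_box(rng: random.Random) -> str:
    """Draw one item according to its published probability."""
    roll, cumulative = rng.random(), 0.0
    for item, odds in DROP_TABLE:
        cumulative += odds
        if roll < cumulative:
            return item
    return DROP_TABLE[-1][0]  # guard against floating-point rounding

rng = random.Random(0)  # seeded so the simulation is reproducible
draws = [open_loot_box(rng) for _ in range(10000)]
print(draws.count("legendary item") / 10000)  # hovers near the stated 5% odds
```

Disclosing the table does not change the mechanism; it only lets players (and regulators) verify that the observed drop rates match the stated ones.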

Advertising is a recent trend that is slowly but surely making its way from mobile into triple-A games, with players encouraged to tolerate ads on screen in return for various in-game advantages. Lately, more and more developers are looking into forcing players to watch unskippable ads during their sessions, despite the fact that those players already paid what is considered today to be the full price for the game.

At this point you are probably blaming everything on mobile gaming, but you really shouldn’t; it was merely a stepping stone for practices that would have surfaced sooner or later.

Many are baffled to find out that the price of games is as low as it has ever been. In 1990, games for the NES were already selling for an average retail price of $50, the equivalent of $96 in 2019 dollars, yet most games still retail for something close to that today.
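The inflation adjustment behind that comparison is a single ratio of consumer price indices. A quick sketch using approximate US CPI-U annual averages; the two CPI values are assumptions for illustration, so the result lands near, not exactly on, the article’s $96 figure:

```python
# Approximate US CPI-U annual averages (assumed for illustration).
CPI_1990, CPI_2019 = 130.7, 255.7

def adjust_for_inflation(price: float, cpi_then: float, cpi_now: float) -> float:
    """Scale a past price by the ratio of the two price indices."""
    return price * cpi_now / cpi_then

# A $50 NES game in 1990 dollars, expressed in 2019 dollars.
adjusted = round(adjust_for_inflation(50.0, CPI_1990, CPI_2019), 2)
print(adjusted)
```

Any inflation calculator performs this same ratio; differences between published figures come down to which month or index series is used.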

As the complexity of making video games increases hand in hand with the technology they are built upon, so does the time needed to bring a product to market. This leads studios to grow in size, with some employing thousands of people and taking on numerous investors. Titles on an annual release cycle are now worked on by three or more teams in different locations, each building a different iteration of the game to ensure the product ships on time. Financing bigger teams for longer periods in pursuit of a higher-quality product inevitably raises development costs that must be recouped once the game launches; add the parallel investment in developing new IPs, and the costs skyrocket pretty fast.

It is this ever-increasing need for funds in an increasingly competitive environment that was the principal motivation behind triple-A developers adopting free-to-play-style, post-launch monetization tactics in their premium products, in an effort to prolong the life of a product until the next one launches five to ten years later. Survival comes before gluttony or profiteering.

Are there bad developer-publisher combos out there that optimize for short-term profits with less-than-ethical practices? You bet! But the smart ones, those who have been playing this game for decades, know that retaining their fan base is the most important thing for the longevity of their business.

Many a YouTuber has taken up valuable server storage space defending the ‘average gamer’ from the capitalist greed of game publishers. ‘Games moved away from being fun experiences and are now cash cows for dairy-obsessed developers’ is what you are likely to hear in these videos, followed by numbers from the latest financial report taken out of context, meant to trigger a reaction from viewers by confirming their biases. At the end of the day, the currency of YouTube is eyeballs.

While other branches of entertainment, like TV, cinema, or music, are strictly regulated around the world, usually by government bodies, the gaming industry still relies mostly on the moral compass of developers, essentially self-regulating how their products are presented to the audience. The Entertainment Software Rating Board (ESRB), the organization that is supposed to regulate age and content ratings in the US and Canada, with its decisions mirrored by the IARC for other markets, is controversial to say the least.

The ESRB is owned by the ESA, a business association comprising multiple big game publishers. It was formed out of fear of government oversight in the mid ’90s when, after a congressional hearing on violence in video games, the US government threatened to regulate the industry if publishers could not agree on a rating system themselves, which they predictably did one year later. Given that the ESRB’s financing is closely tied to the success of its parent association’s members, it is easy to see why it is vocally in favor of mechanics like loot boxes and microtransactions. Some cultures call this ‘appointing the wolf as shepherd of the sheep’; in legal language we call it a conflict of interest, and sooner or later it will manifest in ways that are detrimental to the final consumer.

Despite what they state publicly, for-profit companies operating in free market economies will, first and foremost, look after their shareholders by prioritizing financial gains and outperforming the rest of the market. While in theory a company dealing in consumer products derives its revenue from satisfied customers, in reality the relationship is less an ideal system with constant parameters and more a tug of war between the two parties. One year publishers will aim to extract more profit from their clients with a novel tactic like rewarding you for watching ads; the next they will pull back if players protest, so as not to alienate the fan base.

When governments wash their hands of regulating video games and instead leave it to publishers to come up with their own ratings, things are bound to go bad. While it is in a publisher’s best interest to keep its users happy, that is definitely not the first item on its list. Nor should it be: after all, no one relies on banks to offer products that are advantageous to their customers; the government regulates interest rates pretty strictly. The job of protecting citizens falls upon elected officials, regardless of the industry. If they create new legislation or amend the old to address video game products specifically, game publishers will have no option but to comply if they want to keep operating in those markets. This powerlessness, born of a lack of modern legislation, was raised in the UK, the European country with the longest-standing history of developing and consuming video games, two years ago, when it was concluded that the regulator was ‘unable to step in’ on a case examining loot boxes as gambling mechanisms. As with any regulation of technology-adjacent industries, this one will not be able to keep up with the pace of innovation, so it will need frequent updates, but that is no reason not to begin in the first place.

Indie band practicing in small room
Photo by Hans Vivek

As always, when an industry reaches the point where its players consolidate into massive and eventually risk-averse corporations, the unsustainable environment creates an opportunity for smaller, more agile disruptors to enter the scene.

Independent (or indie) game developers started as small groups of passionate people, usually fewer than ten individuals, experimenting with the idea of making games. At first these were savvy programmers from other fields who wanted to try their hand at the newest thing; then came gamers enabled by increasingly user-friendly game-making tools to progress from modding their favorite game to building their own experience from scratch. The makeup of the newest indie studios mimics the rest of the software industry, with most members being ex-employees of big gaming companies who decided to take a swing at the industry on their own with what is commonly known as triple-I games: products similar in polish to triple-A, but providing a shorter experience overall at a fraction of the cost.

What is interesting is that more and more of the big publishers are recognizing the advantages of these smaller studios and are branching out with new indie labels geared towards taking over the publishing burden for those in need. Such publishers, like Private Division, stay completely outside the creative process and give creators ownership of their IP, while providing some financial support along the way and handling distribution and marketing logistics in exchange for a cut of the revenue once the game launches. It is a win-win: small developers make their game discoverable by more people, which means increased revenue for them, and big publishers gain access to the creativity that comes with a small team.

On top of the ability to pivot on a dime when market conditions change, with little financial repercussion, these small studios have a resource that has been depleted in big game developers by the pressure to perform at a high level consistently. Creativity has been the most valuable asset in the creation of video games ever since Pong came out, and it is even more so in today’s over-saturated market. Indie games are not generally developed with the intention of being played by millions of people, offering endless replayability, or riding a long tail; instead they rely more on innovative gameplay mechanics, which usually trickle down to triple-A games afterwards, and on compelling storytelling.

It is these elements, coupled with the financing brought in by the new breed of publishers, that give us hope for the future of gaming: a future no longer dominated by monolithic developers of gargantuan proportions, but instead split between many mid-sized studios of fifty creators or fewer, producing diverse, experience-first games that still manage to turn a good profit. I’m all for a future like this!

If you liked this article subscribe to our Newsletter and follow us on Twitter.


I spent most of the past decade working in gaming; I usually write about tech from a product perspective.