Today, I came across a viral story about an 8-year-old girl getting locked out of her Zoom classes for three weeks.
Mike Piccolo, CTO of a software firm named FullStack Labs, shared this not-so-cute story about his cute 8-year-old niece.
According to what he shared in his Twitter thread, the girl was mysteriously locked out of her Zoom class. After many attempts by everyone involved, including:
- The class teacher creating a new class meeting
- Her parents helping her log in again
- Zoom technical support staying on call for three weeks

no one could figure out what had really happened.
Having tried every option on earth, her resigned mom "rewarded" the girl with homeschooling.
The trick was finally uncovered by the mom’s friend, at whose house the girl attempted her Gen-Z digital prank (possibly for the final time):
Entering an incorrect password 20 times to get her Zoom account locked.
But… that should have been obvious!
It may look silly that it took them three weeks to figure out it was an account lockout issue.
But the silliness is on Zoom’s part. Apparently, Zoom doesn’t make it easy for parents (or its millions of other users) to tell the difference between a locked account and a wrong password.
So even after getting locked out of her account, she kept seeing the same message, “Incorrect password. Please try again,” or some variation of it.
What was worse (actually, better for her): every time she got locked out, the lockout period kept growing.
At first, you might say Zoom is only being extra careful. Any tech guy will tell you:
It’s deliberate, to make the bad guy believe we still think he hasn’t done anything wrong, while we know he has been locked out. Yay!
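For concreteness, here is a minimal Python sketch of that pattern. This is purely illustrative (it is not Zoom’s actual code, and the thresholds are made up): repeated failures escalate the lockout window, yet the response never changes, so a locked-out user, even one who now types the correct password, keeps seeing the same error.

```python
import time

# Illustrative sketch of the anti-pattern described above (not Zoom's code;
# BASE_LOCKOUT and MAX_FAILURES are invented for the example): the server
# escalates the lockout window on repeated failures but always returns the
# same generic error, so a user cannot tell a lockout from a typo.

GENERIC_ERROR = "Incorrect password. Please try again."
BASE_LOCKOUT = 60   # seconds; doubles after each lockout
MAX_FAILURES = 5    # failed attempts allowed before locking

class Account:
    def __init__(self, password):
        self.password = password
        self.failures = 0       # consecutive failed attempts
        self.lockouts = 0       # how many times we've locked this account
        self.locked_until = 0.0

def login(account, attempt, now=None):
    now = time.time() if now is None else now
    if now < account.locked_until:
        # Access is actually revoked, but the user sees the same message.
        return GENERIC_ERROR
    if attempt != account.password:
        account.failures += 1
        if account.failures >= MAX_FAILURES:
            # Exponential escalation: each lockout doubles the window.
            account.locked_until = now + BASE_LOCKOUT * (2 ** account.lockouts)
            account.lockouts += 1
            account.failures = 0
        return GENERIC_ERROR
    account.failures = 0
    return "OK"
```

Note the trap: during the lockout window, even the correct password yields “Incorrect password. Please try again.” That is exactly what kept everyone guessing for three weeks.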
What’s not OK is telling the user her authentication failed when, in reality, her access has been revoked. That’s bad UX.
What’s not OK is the underlying assumption that invalid authentication = unauthorized user.
What’s not OK is that an inherent assumption founded 20 years ago is still baked into distributed systems across the world:
The user always thinks it is in her best interest to use the system.
This assumption isn’t true anymore. With children using digital experiences far more than adults, and their credentials managed by parents, the authentication use cases must be revisited.
It’s not just Zoom:
This isn’t the only case where the system defeats a valid user’s claim to use it. The list is endless.
- Requiring complex passwords such as IAmExhausted#666: This is a classic case I have to wrestle with every time I access my online bank account. What’s worse: they force me to change it every three months, right when I’ve finally grown accustomed to it. What’s even worse: I cannot reuse any of my last five passwords. I must write them down! I know there are services such as 1Password, but I am principally against paying $3/month for a problem that shouldn’t have existed in the first place. Imagine paying someone to hold on to your ID card.
- Assuming the system is under attack after 5 clicks: This is forced labor in the disguise of punishing unauthorized activity. It switches on whenever frequent clicks or submits are detected. The problem is, the threshold for “frequent” was set in the 2000s, when 2G was an expensive commodity. This happened to me when I scoured Google’s 18th results page. Through no fault of my own, I was made to perform the community service of identifying traffic lights in its AI captcha.
- Committing an unspeakable crime: I experienced a lockout similar to Zoom’s with Amazon Web Services after entering a valid but unacceptable payment method, only no one told me it was unacceptable. AWS happily accepted my card (it was curious what I was up to!), but soon I was logged out. Then it kept luring me into entering captchas. I gave up after solving 30+ indiscernible captchas, only to land on a locked-out status. Dealing with their customer care was a nightmare. Finally, on intuition, I tried again after three days of idleness; it let me in with a warning, I corrected my payment method, and that fixed it for good.
- Social Bans: Social networks invoke the First Amendment every time they get summoned by governments, the FBI, or courts over harmful content, but they love to play ban games in their own groups and rooms. Users are arbitrarily banned based on a flurry of “Report as spam/fake news” clicks that could easily have been staged by special-interest groups. They flag users based on the words they use, with no judicial scrutiny whatsoever. Facebook frequently bans people for posting too often in groups. This could be OK, but again, its definition of frequency is quite outdated, probably rooted in the 90s, when the internet was sparse. Banning someone for two weeks for posting in 5 groups, leaving him to deal with non-tech customer-service staff, only to ban him again after 2 posts: scenarios like this exist. And it speaks volumes about how much Facebook really stands for small and medium businesses.
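The “too many clicks” and “too many posts” bans above usually boil down to a rate limiter with a hard-coded threshold. Here is a hypothetical Python sketch of such a sliding-window limiter (the window and limit are invented for illustration): a limit tuned for dial-up-era browsing flags perfectly normal behavior on a modern connection.

```python
import time
from collections import deque

# Hypothetical sliding-window rate limiter of the kind behind
# "you clicked too fast" bans. WINDOW_SECONDS and MAX_ACTIONS are
# assumptions made up for this example, not any real service's values.

WINDOW_SECONDS = 60
MAX_ACTIONS = 10   # a 2000s-era guess at what "frequent" means

class RateLimiter:
    def __init__(self, max_actions=MAX_ACTIONS, window=WINDOW_SECONDS):
        self.max_actions = max_actions
        self.window = window
        self.events = deque()  # timestamps of recent actions

    def allow(self, now=None):
        now = time.time() if now is None else now
        # Drop events that fell out of the sliding window.
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        if len(self.events) >= self.max_actions:
            return False  # treated as an attack: captcha or ban follows
        self.events.append(now)
        return True
```

Paging through to Google’s 18th results page means well over ten clicks inside a minute, so a limiter like this treats an unusually thorough searcher exactly like a bot.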
Tech companies have a very short time-to-market window. Pressured by investors flaunting seven-figure cash, they focus only on minimum viable features.
Next, in the growth stage, their focus is the number of active users, not profits. MAU correlates most closely with valuation. Profit is a better reflection of the value added for customers, but it is overlooked.
Barring systems where extraordinary security is required (products in the banking and government sectors), authentication and moderation aren’t part of the MVP. Even when authentication is strong enough, its overall impact on the UX isn’t evaluated.
This is because they are rarely considered part of the active messaging that tech companies do with their users.
All social media tech companies act under an assumption founded two decades ago: authentication and moderation are undesirable gatekeepers that users want to bypass; users are simply eager to access the features guarded by them.
Mike’s niece is the exact opposite of the use case this assumption anticipates. When assumptions are invalidated, it is a chance for much wider, industry-wide reforms backed by technology.
Social media companies must distance themselves from the bureaucracies that governments are known for. They should focus on fixing the authentication and moderation problems with the weapon they were born with: technology.