Apple’s Double Standards Against Gab

Gab’s mission is to put people and free speech first. Gab believes in individual responsibility and in empowering our users with the tools to create and mold their own experience. We believe users are intelligent and responsible enough to use our feed filter settings to mute unwanted users, words, phrases, and hashtags from their Gab experience.
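To make the feed filter concrete, here is a minimal sketch of how client-side muting of users, words, and hashtags can work. All names and data shapes here are illustrative assumptions, not Gab's actual implementation.

```python
# Hypothetical sketch of a client-side feed filter of the kind described above.
# Function and field names are assumptions for illustration, not Gab's real code.

def build_filter(muted_users=(), muted_words=(), muted_hashtags=()):
    muted_users = {u.lower() for u in muted_users}
    muted_words = [w.lower() for w in muted_words]
    muted_hashtags = {h.lower().lstrip("#") for h in muted_hashtags}

    def is_visible(post):
        """Return False if the post matches any mute rule."""
        if post["author"].lower() in muted_users:
            return False
        body = post["body"].lower()
        if any(w in body for w in muted_words):
            return False
        tags = {t.lower().lstrip("#") for t in post.get("hashtags", [])}
        return not (tags & muted_hashtags)

    return is_visible

# Example: mute one word and one hashtag, then filter a feed.
visible = build_filter(muted_words=["spoiler"], muted_hashtags=["#nsfw"])
posts = [
    {"author": "alice", "body": "Big spoiler ahead!", "hashtags": []},
    {"author": "bob", "body": "Nice day", "hashtags": ["#NSFW"]},
    {"author": "carol", "body": "Hello Gab", "hashtags": []},
]
feed = [p for p in posts if visible(p)]  # only carol's post survives
```

The point of a design like this is that filtering happens per-user and per-client: the content still exists on the service, but each user decides what reaches their own feed.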

Our attempts to get our iOS app on Apple’s App Store have spanned several weeks. We’ve worked diligently with Apple despite an obvious heightened level of scrutiny applied to our App Review process.

Our first few rejections were relatively reasonable and our team worked with Apple to be totally compliant with their App Store guidelines and to make sure that our application was technically sound. Our latest rejection, and the timing of that rejection, are questionable and display a clear double standard against our application that is not applied to other Big Social iOS apps.

Here is the full timeline of our experience with Apple, step by step.

Rejection: “Pornographic Content”

Apple’s first rejection focused on pornographic content shared by Gab users. While this is rare, Gab users can share legal pornographic content as long as they tag it as #NSFW (Not Safe For Work). After a phone call with Apple we came to a mutual understanding on the topic and agreed to mute all NSFW posts by default in the iOS app, so long as users had the choice to opt in.

Gab users would have the option to see NSFW posts in the app if they turned this setting on directly from Gab’s web application; however, the setting itself would not be available in the iOS app. Twitter and other social apps block out sensitive material as well, so our team felt this was a reasonably fair compromise.
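The compromise above amounts to a simple rule: the NSFW flag lives in the user's server-side account settings, only the web application can change it, and the iOS client merely reads it, hiding NSFW posts by default. A minimal sketch follows; every name here is an assumption for illustration, not Gab's real API.

```python
# Illustrative sketch of the NSFW compromise described above.
# Class and method names are hypothetical, not Gab's actual code.

class AccountSettings:
    """Server-side settings; the web app is the only writer of show_nsfw."""
    def __init__(self):
        self.show_nsfw = False  # muted by default, per the agreement with Apple

    def set_show_nsfw_from_web(self, value: bool):
        # No equivalent setter is exposed anywhere in the iOS app itself.
        self.show_nsfw = value


def posts_for_ios_client(posts, settings: AccountSettings):
    """The mobile client hides #NSFW-tagged posts unless the user
    opted in from the web application."""
    if settings.show_nsfw:
        return list(posts)
    return [p for p in posts if not p.get("nsfw", False)]


settings = AccountSettings()
posts = [{"id": 1, "nsfw": False}, {"id": 2, "nsfw": True}]
default_feed = posts_for_ios_client(posts, settings)   # NSFW hidden by default
settings.set_show_nsfw_from_web(True)                  # opt-in, web only
opted_in_feed = posts_for_ios_client(posts, settings)  # NSFW now visible
```

Keeping the toggle out of the app binary is what makes the arrangement enforceable from Apple's side: the shipped client has no code path that enables NSFW content.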

Rejection: Bugs with NSFW filter, Missing EULA and Contact Info

Hello,
Thank you for submitting your app for review. During the review process we found the following issues and have now rejected your app for the App Store Review Guidelines detailed below.
Safety — 1.2
Your app contains user-generated content or services that can frequently become pornographic.
We’ve attached example screenshots of sexually explicit content for your reference.
Next Steps
Please remove all content that is similar in nature to the attached examples.
Additionally, apps that provide user-generated content must also provide the following user protection:
- Require that users expressly agree to terms (EULA or TOS) in the app and these terms must make it clear that there is no tolerance for objectionable content or abusive users. The EULA must be displayed in the app itself.
- Developer must provide contact information in the app itself, giving users the ability to report inappropriate activity.
Safety — 1.2
Your app enables the display of user-generated content tagged as NSFW (Not Safe For Work) but does not have the required precautions in place.
We’ve attached example screenshots of NSFW content for your reference.
Next Steps
Please revise your app to implement all of the following precautions:
- Select the “Frequent/Intense” setting for Mature/Suggestive Themes or Sexual Content or Nudity in the iTunes Connect Rating field.
- Users cannot have a mechanism in the app to enable or disable access to NSFW content; enabling or disabling NSFW content should be done through the service website. When logged into the service on your website, we were unable to locate a mechanism to enable or disable NSFW content in the app. We found that NSFW content was enabled in the app by default; NSFW content must be disabled by default in the app.
We hope you will consider making the necessary changes to be in compliance with the App Store Review Guidelines and will resubmit your revised binary.
Best regards,
App Store Review

After review, Apple discovered several more pornographic posts that were not being muted in our app. Additionally, they included more requests for us to include our EULA and developer contact information. These were very reasonable and common requirements for the App Store. Our team quickly shipped a patch to our NSFW feature and added in our EULA as well as developer contact information to the app.

Rejection: Minor App Store Compliance Fix

Hello,
Thank you for resubmitting your app for review. However, during the review process we still found the following issues were not addressed. We have now rejected your app for the App Store Review Guideline detailed below.
Safety — 1.2
Apps that provide user-generated content must provide the following user protection:
- Require that users expressly agree to terms (EULA or TOS) in the app and these terms must make it clear that there is no tolerance for objectionable content or abusive users. The EULA must be displayed in the app itself, linking out to a website is not sufficient. You may consider to present the EULA in a web view within the app.
- Developer must provide contact information in the app itself, giving users the ability to report inappropriate activity. We were unable to locate developer contact information within the app. For example, you may consider to add your contact information in the settings section of the app so that users may contact you directly.
We hope you will consider making the necessary changes to be in compliance with the App Store Review Guidelines and will resubmit your revised binary.
Best regards,
App Store Review

For this rejection, Apple was not satisfied with us linking to our EULA from outside the app itself, an understandable requirement and an easy fix on our end. After a quick fix, a new build was submitted with the proper in-app version of the EULA.

Rejection: Clear Double Standards, Ridiculous Review Times, and Questionable Rejection Timing

You’ll notice that during our last two submissions, Apple was quick to review and reject our builds within roughly 24 hours. In fact, according to Apple’s own developer website, “50% of apps are reviewed within 24 hours and 90% of apps are reviewed within 48 hours.” This submission was different. Now that our app was technically sound and compliant with Apple’s App Store guidelines, Apple was out of excuses. They needed to find another way to block us from getting to the App Store: something subjective that they knew was impossible to “fix” without going against our core mission of defending free speech.

You’ll notice above that instead of the standard 24–48 hour review time, our latest submission took 17 days and was finally rejected on January 21st, the first full day in office for President Trump. We believe this was no coincidence given the large base of Trump supporters on Gab and the outspoken support for President Trump by our CEO.

During those 17 days our team followed up with Apple several times via phone and direct message. Finally, on January 11th Apple gave this generic response:

Jan 11, 2017 at 4:45 PM
From Apple
Hello,
We apologize for the delay. Your application is still in review but is requiring additional time. We will provide further status as soon as we are able. Thank you for your continued patience.
Best regards,
App Store Review

This was odd. No additional information or issues included. Our team had quickly patched their last rejection request and the app had no other issues. After this response, we continued to follow up every few days via phone and direct message until Apple sent this response six days later.

Jan 17, 2017 at 2:04 PM
From Apple
Hello,
Thank you for your response, however, the app is still requiring additional time in the review process. We will provide further status as soon as we are able and will contact you if we require any additional information.
We appreciate your continued patience and understanding.
Best regards,
App Store Review

Finally on the morning of January 21st our build was rejected and Apple sent us the message below. There is no way it took them 17 days to make these discoveries. The timing of this rejection, and the amount of time it took for it to happen, are far too coincidental to be an accident.

Hello,
Thank you for your resubmission and patience while we reviewed your app. We have now rejected your app for the App Store Review Guideline detailed below.
Safety — 1.1.1
Your app includes content that could be considered defamatory or mean-spirited. We found references to religion, race, gender, sexual orientation, or other targeted groups that could be offensive to many users.
We’ve attached screenshots for your reference.
Next Steps
Please remove all defamatory and mean-spirited content from your app and submit your revised binary for review.
Best regards,
App Store Review

Anyone who has spent five minutes on any other social network knows that they are loaded with “mean-spirited and objectionable content.” Apple sent us screenshots of their App Review team directly searching in our app for various “mean-spirited” words and phrases, which can be found on any website with user-generated content, including Facebook, Twitter, Tumblr, Instagram, and Reddit, by running a simple search. This clear double standard against us is potentially politically motivated and clearly targeted. When you actively search for something on a user-generated-content website, chances are you’re going to find what you are looking for.

Apple’s screenshots of direct searches made within our app for “mean-spirited and objectionable words” included the “N word” and similar related searches. The Gab team ran an experiment and conducted these same searches for what can certainly be considered “mean-spirited and objectionable content” on other popular Big Social iOS apps currently available on Apple’s App Store.

Please note that we certainly do not condone this type of language and offer users the ability to control their own experience and mute out any of these terms or hashtags from their Gab feeds. Our point in highlighting it is simply to display the double standards and extreme scrutiny Apple is putting our application through while allowing Big Social apps to display the same and arguably worse content in their own apps.

One of the Gab posts cited by Apple

Had Apple simply used our Feed Filter feature to mute the specific words they were searching for, they would have found nothing in their search results as displayed above.

Below is what we found in our investigation of the Instagram, Tumblr, Reddit, and Twitter iOS apps. On Instagram we were even prompted with “Related” searches that were equally “objectionable.”

Instagram iOS app
Instagram iOS App
Instagram iOS app
Twitter iOS app
Tumblr iOS app
Tumblr iOS app
Reddit iOS app
Reddit iOS app

Run a quick search in your Big Social iOS apps and you’ll find similar “objectionable” and “mean-spirited” content. At Gab, we believe in individual responsibility. We provide our users with feed filtering tools that empower them to remove this type of content from their experience, down to individual words, phrases, hashtags, and users.

Rejection: 1.1.1 Safety: “Objectionable Content” and unpublished App Store guidelines

Our reply: 
Hello,
Is there a reason you are logging into our web application to enforce guidelines for our mobile application? The way the iOS app is currently designed, there is no way to search for these type of users or content in the iOS app itself. It is a technical impossibility. A brand new Gab account that signs into iOS for the first time will see no content from any other users.
It is technically impossible for you to discover the content and users that you included in your screenshots of “objective content” unless you signed into the web version of Gab, searched for these users and content, and willingly opted-in to following them. Can you please explain why you are going to our website, actively seeking out “objectionable content,” following these users, then returning to the iOS app to then reject our submission?
Would you like us to include thousands of examples of objectionable content from other social networks that are accessible via iOS apps that are currently live on the App Store? Is it standard practice to use a web application to enforce guidelines for iOS apps that are 100% compliant with App Store Guidelines? Once again, the double standards here are truly something. Our app gives users complete control to shape their experience, hide and report objectionable content and users; and yet you are still refusing to approve our submission. Please explain.

Apple’s reply:

From Apple
Hello,
Thank you for your response.
We can confirm that during the review of your app we found content that is not compliant with App Store Review Guideline 1.1.1. It would be appropriate to revise the app to remove any defamatory or mean-spirited content.

Our response:

Hello,
Again you found this content by accessing our web application to search for this type of content, opting-in to receiving it, and then rejecting the app on this basis. We will ask again: is it common practice for Apple to login to web applications during the review process of reviewing iOS builds and applying App Store Guidelines to web application functionality that is separate from the iOS build? It is technologically impossible in the current iOS build to search for users or discover content unless you have done so from the web application and then re-opened the mobile application.

Apple’s reply:

Hello,
Thank you again for your response.
As stated in our previous communication, during the review of your app we found content that is not appropriate for the App Store.
You may consider to revise your app so that your content and service policies align with section 1.2 of the App Store Review Guidelines.

Apple is asking Gab to censor “mean-spirited and objectionable” speech while completely tolerating and accepting this speech on other apps in their App Store, such as Twitter, Tumblr, Instagram, and Reddit. There is a clear double standard here, and we intend to continue to appeal their rejections on these grounds. We refuse to cater to corporate censorship and will defend free speech at all costs. Until then, Gab can always be accessed from any mobile browser, and soon via Android devices, which are not blocked by a monopolistic walled garden of double standards.