Is Your Advice Unethical?

A machine learning project applying natural language processing to Reddit data.

Scott Atkinson
The Startup
16 min read · Feb 23, 2021


Introduction

Ethics is an important facet of civilization, and a community’s ability to discern between ethical and unethical behavior is critical for a healthy society. In any business, school, or community, ethical communication and behavior is of high importance. For many situations, it is not difficult to decide if an action is ethical or not, but there are also many scenarios lying in an ethical gray area. The growing scale of text communication makes moderating social media sites, internal business communication platforms, and other online forums impossible to do manually — calling for an automated approach to detect unethical behavior.

This article reports the results and findings of a data science project that analyzes text data labeled as unethical or not. This project applies machine learning techniques to statistically detect unethical advice and address the following question:

What makes a piece of advice unethical?

We will use data from two subreddits on the Reddit website: r/LifeProTips and r/UnethicalLifeProTips. For most of the posts in these subreddits, a typical well-adjusted adult human will be able to accurately classify them into their proper subreddits. That is, there are not that many “ethically gray” posts in our dataset. So it is a reasonable expectation that a binary classification model can be trained to flag unethical advice.

Our analysis shows some general trends in the nature of the posts coming from the two subreddits. Advice coming from r/LifeProTips is generally aimed toward self-improvement. That is, most of the tips involve actions an individual can take to improve the quality of their life. These actions typically do not depend on the cooperation or participation of any other individual. According to the subreddit’s description, “A Life Pro Tip (or an LPT) is a specific action with definitive results that improves life for you and those around you in a specific and significant way.” For example:

LPT: hold both ends of the tube and run it over the edge of the sink to push toothpaste to the top — you’ll get almost every last bit with almost no effort

[note that many of the posts have since been removed by the original author or by the moderators of each subreddit; the removed posts remain useful for our analysis and predictive model]

On the other hand, the pieces of advice coming from r/UnethicalLifeProTips generally involve some sort of dishonest behavior an individual can adopt with the goal of gaining some sort of advantage (financial, social, emotional, or otherwise) from someone else, often at their (the other’s) expense. According to the subreddit’s description, “An Unethical Life Pro Tip (or ULPT) is a tip that improves your life in a meaningful way, perhaps at the expense of others and/or with questionable legality.” The subreddit also clearly states the disclaimer: “Due to their nature, do not actually follow any of these tips–they’re just for fun,” and we wish to reiterate that none of the tips from ULPT should be attempted. Here is an example from r/UnethicalLifeProTips:

ULPT: The best way to hang up on someone is in the middle of your own sentence. That way they never suspect you of hanging up on them.

So the labels for this binary classification problem can be set to “self-improvement” or “dishonesty with the purpose of gaining from others.”

In this article, we begin by analyzing the data collected from these two subreddits. In particular, we examine some expected word frequencies, compare reading levels across the subreddits, and cluster the posts to find some common topics using latent Dirichlet allocation. We also examine the most predictive words for each subreddit with a multinomial naive Bayes analysis. Next we train a predictive classification model using the data from the two subreddits. We assess and select our classification model and the proper threshold for the intended use of our model. An interactive version of the model can be found at www.is-your-advice-unethical.com. We close by drawing some conclusions from our findings and discussing potential improvements for the model.

Data

The data for this project are obtained from two subreddits on the online forum site www.reddit.com: r/LifeProTips and r/UnethicalLifeProTips. The r/LifeProTips subreddit contains user-generated content in the form of advice, hints, and tips applying across all aspects of life. The r/UnethicalLifeProTips subreddit contains similar content with the difference being that the tips are unethical, or at best, in an ethical gray area. The data was obtained by scraping the most recent 5000 posts from each subreddit in early January 2021 using the pushshift API. The posts collected run the spectrum from making you laugh to making you cringe.
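A minimal sketch of the scraping step, using only the standard library. The pushshift submission-search endpoint shown here reflects the API as it existed around the time of the project and may have changed since; pagination is done by walking the created_utc timestamp backward, 100 posts per call:

```python
import json
from urllib.request import urlopen
from urllib.parse import urlencode

PUSHSHIFT = "https://api.pushshift.io/reddit/search/submission"

def build_query(subreddit, size=100, before=None):
    """Assemble the URL for one page of pushshift results (max 100 per call)."""
    params = {"subreddit": subreddit, "size": size,
              "sort": "desc", "sort_type": "created_utc"}
    if before is not None:
        params["before"] = before  # walk created_utc backward to paginate
    return f"{PUSHSHIFT}?{urlencode(params)}"

def fetch_page(url):
    """Perform the request; network-dependent, so not exercised here."""
    with urlopen(url) as resp:
        return json.loads(resp.read())["data"]
```

Repeating the call with `before` set to the oldest created_utc seen so far collects the most recent 5000 posts per subreddit.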

Data preparation and analysis

The data from each subreddit were assembled into their own respective dataframes with the following attributes: subreddit, title, id, created_utc, score, num_comments, selftext. For training purposes, we concatenate the two dataframes into a single dataframe, with the subreddit column serving as our label/target column. In the title and selftext columns, we convert all text to lowercase and remove all punctuation. We also remove all instances of the abbreviations “lpt” and “ulpt” along with any appearance of “unethical.” After cleaning, r/LifeProTips has 4945 entries and r/UnethicalLifeProTips has 4926 entries in the dataset.
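The cleaning step described above can be sketched as a small pure-string function (applied in the project to the title and selftext columns; the exact regexes are an assumption):

```python
import re

def clean_text(text):
    """Lowercase, strip punctuation, and drop the giveaway abbreviations."""
    text = text.lower()
    text = re.sub(r"[^\w\s]", "", text)                   # remove punctuation
    text = re.sub(r"\b(ulpt|lpt|unethical)\b", "", text)  # drop label-leaking tokens
    return re.sub(r"\s+", " ", text).strip()              # collapse leftover whitespace
```

Removing “lpt,” “ulpt,” and “unethical” matters because these tokens would let the model trivially read the label off the text.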

After a preliminary look at some entries for each subreddit, we form a list of some words that could be predictive for each and compare their frequencies for each subreddit. The figure below provides a visualization for some of the differences between the word frequencies.

Differences of frequencies of words in each subreddit. Image generated by the author.

From a preliminary check of the two subreddits, r/LifeProTips appears to have many pieces of advice telling you not to do one thing and to do another thing instead, and r/UnethicalLifeProTips evidently has many posts about how to pay less or nothing for goods or services.

We next consider the reading levels of the posts from each subreddit. We used the flesch_kincaid_grade reading level function from the textstat module to evaluate the reading levels of the unprocessed title column. The Flesch-Kincaid grade level is given by the following formula:

0.39 × (total words / total sentences) + 11.8 × (total syllables / total words) - 15.59

The output returned is meant to roughly align with U.S. grade levels. We retain the punctuation for the reading level computation because sentence count needs to be taken into account. We pass this value to the title_reading_level column of the dataframe. The summary data returned is included in the following table.
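As a sketch, the formula itself is a one-liner given the three counts (textstat's flesch_kincaid_grade computes the same score after counting words, sentences, and syllables from the raw text itself):

```python
def flesch_kincaid_grade(total_words, total_sentences, total_syllables):
    """Flesch-Kincaid grade level from raw counts; output roughly
    aligns with U.S. school grade levels."""
    return (0.39 * (total_words / total_sentences)
            + 11.8 * (total_syllables / total_words)
            - 15.59)
```

For example, a title run of 100 words across 5 sentences with 150 syllables scores at about a 9.9 grade level.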

Summary statistics for Flesch-Kincaid grade level for each subreddit

The following figure displays the box plots for reading level of the (unprocessed) title column by subreddit.

Box plots displaying the distribution of reading levels for the titles grouped by subreddit. Image generated by the author.

Clustering topics

Next, we apply the unsupervised learning technique of clustering to the title column of each subreddit. After vectorizing we apply scikit-learn’s LatentDirichletAllocation (LDA) algorithm to the vectorized entries. LDA returns distributions for a prescribed number of topics/clusters, and then each post receives a score indicating how likely it is to belong to each of the topics. Using 7 or 8 clusters makes some topics from each subreddit apparent. The topics identified here help give more context to what makes a piece of advice ethical.

It is worth noting that these subreddits assign posts to certain categories via a post's ‘flair.’ The flair of a post indicates a category to which that post belongs, and some of the flairs in each subreddit align with the topics extracted using the LDA algorithm.

r/LifeProTips topics

Resolutions: As mentioned above, the data were scraped from Reddit in early January 2021, so there were understandably a large number of recent posts addressing New Year’s resolutions. Here are some samples from this topic:

LPT: Don’t wait for the first day of the new year to start or change something. TODAY is the day to start, no matter where it falls on the calendar!

LPT: When you make New Year’s Resolutions, make a plan for when you’re tempted and for when you fall short.

Cooking/kitchen tips: There was significant representation of tips and advice regarding cooking and preparing food. Some notable samples are:

LPT: Don’t use the microwave to heat up pasta from the fridge, but use a frying pan and a little bit of water

LPT: If youre a regular iced coffee drinker, freeze an ice tray of milk instead of water.

Money: Tips about money management were often clustered together. Samples:

LPT: Treat your savings contributions like they’re bills you need to pay

LPT: Whether you make a lot of money or very little money, you need to have a budget

Gift giving: Again, because the data were scraped near the holidays, there were many posts referring to Christmas gifts. Some samples include:

LPT: Keep an ongoing list of potential Christmas presents for your loved ones throughout the whole year. Every time they mention something they’d like to have, write it down

LPT: When gifting someone a book, always add a note inside. It makes it much more personal and memorable.

Mental health: This is not a surprising topic to have received attention in the LPT subreddit. Some examples of posts are the following:

LPT: Love Your Body

LPT: Do not react to anything overwhelmingly the same day it happens. Give yourself a nights sleep and attack it the next day. It chemically allows your brain to process it properly without the flood of emotions and confusion.

Cleaning: Many tips surrounded the topic of cleaning. Here are a few examples:

LPT: When cleaning the house, if you’re bringing something from one room to another, bring back something that belongs to the room you were in.

LPT: Keep a bottle of surface cleaner and a rag in your shower, and clean your shower while you shower.

Online behavior: Another common theme is advice regarding online behavior: managing online profiles, accounts, passwords, etc. Samples:

LPT: have 3 passwords: one for your main mail account, one for websites with your card info, one for the other

LPT: If you cannot put a tape against your laptop camera, then try disabling the camera in device controller.

r/UnethicalLifeProTips topics

We now turn to look at some of the topics from r/UnethicalLifeProTips found using LDA. Note that these topics and examples support the observation that the posts from r/UnethicalLifeProTips suggest dishonest behavior that takes advantage of others.

Avoid ads/paywalls: This topic includes tips on how to avoid ads or paywalls on various websites — a more passive form of dishonesty. Samples:

ULPT: If you get stuck behind a paywall for a news article the pay wall can usually be removed by using control + shift + I and deleting the pay wall website element.

ULPT: Want to use Youtube to listen to music, but don’t want to listen to ads? Don’t click the first result for your search, scroll down to a small channel (usually a lyrics one) without many views. These channels are almost never monetized, meaning you can listen to your videos without fear of ads!

Scamming return policies/rewards programs: Another large class of ULPT posts includes advice on how to exploit the return policies and rewards programs of various businesses. Examples include:

ULPT: Create a Nike Membership with 12 different emails for 30% off year round.

ULPT: If you need to get a new refrigerator filter, buy a new filter, then put the old filter back in the package and return it saying you bought the wrong one.

Get out of work: There is a lot of advice in ULPT on how to get away with doing little to no work at your job. Much of it is aimed at employees working remotely, likely due to the COVID-19 pandemic. Examples:

ULPT: Working from home and need to appear online? Prop up a lock on the period button within the note pad application.

ULPT: Don’t do much at work? Occasionally change your status to “In a Call/Meeting” to keep them thinking you’re doing something

Interpersonal deception/spite/prank: This is a broad category. Behaviors including deception, eavesdropping, lying, manipulation, and emotional abuse are covered by this category. The tips in this category are less for monetary or material gain and more for spite or some sort of social/emotional/intangible advantage over another individual. Some examples include:

ULPT: is your plane being being held up because it’s waiting for a passenger? Say you know them personally and that they aren’t coming anymore.

ULPT: Government ordered social distancing is the best time to check in on those long time “friends” that you always avoid hanging out with.

Getting something for nothing: This topic includes advice for how to get something for free or for less than full price/effort. Similar to the return policy topic. Samples:

ULPT: If shipping packages through USPS, use the self service checkout and when you weigh your item, lift the corner of your package off the scale for a cheaper shipping rate

ULPT: Wanna pay low price for everything? Find a 1$ item in any physical store and take the barcode. Go to self checkout and slio the barcode in front of the item you dont wanna pay much for.

Car-related: There is also a large representation of advice regarding getting out of traffic/parking violations and other similar situations. Examples:

ULPT: Pass a cop while speeding on the highway and they start turning around to pull you over? Call 911 and report a drunk driver a few miles behind you.

ULPT: Avoid having to pay a parking ticket by paying it twice

COVID-19

Another interesting feature of the data is the presence of COVID-19-related tips from both subreddits. The r/LifeProTips subreddit is more active, so the 4945 posts span approximately the month of December 2020 (plus the first few days of 2021). The r/UnethicalLifeProTips subreddit is less active, and the 4926 posts span approximately February 2020 through New Year’s 2021. Below is one example from each subreddit — we will leave it to the human classification engines reading this to determine which quote came from which subreddit.

If you want a COVID-19 antibody test free of charge, the American Red Cross provides COVID antibody results from your donated blood.

Can’t afford to get tested for coronavirus? Cough your lungs out in a public space and check the news in a day or two to see if anyone tested positive.

Humor

The nature of a post from r/UnethicalLifeProTips seems to vary more when compared to those of r/LifeProTips. The posts in r/LifeProTips are for the most part sincerely trying to provide some good advice on various aspects of life. On the other hand some posts in r/UnethicalLifeProTips are ideas for things someone would actually try (like the refund scams or the tips for getting out of work), but there are also unethical posts that are so impractical or transparent that they are meant to entertain — often in a tongue-in-cheek manner. For example:

ULPT: If you don’t pay for your graduation photos, you get a version with the word “proof” watermarked over your face; in which case, you also don’t have to pay for a diploma.

Feature engineering

We use WordNetLemmatizer together with word_tokenize from nltk to lemmatize the words from the text data in both the title and selftext columns.

Next, we vectorize the text data. We first split the data into training (75%) and testing (25%) sets to avoid any data leakage in the vectorization step; the vectorizers are evaluated on and fit to training data exclusively. We consider scikit-learn’s CountVectorizer and TfidfVectorizer and use a preliminary MultinomialNB model to find the best vectorizer. We look at several combinations of the title and selftext columns, with and without bigrams, to find the strongest way to vectorize the data. We engineer a new column, alltext, which concatenates the title and selftext strings into a single string. The best results are obtained when we vectorize title and alltext individually using CountVectorizer and include the top 3000 bigrams. Following a grid search, we select a minimum document frequency of 10.
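The split-then-vectorize sequence can be sketched as follows; the titles and labels are hypothetical stand-ins, and min_df is lowered to 1 only because the toy corpus is far smaller than the ~7400-post training set where min_df=10 was selected:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split

# Hypothetical stand-ins for the lemmatized titles and subreddit labels.
titles = [
    "keep a budget for your money", "clean your shower while you shower",
    "freeze milk for iced coffee", "love your body and sleep well",
    "fake a refund to get free stuff", "return the old filter for a refund",
    "use twelve emails for the discount", "lift the package off the scale",
] * 4
labels = ([0] * 4 + [1] * 4) * 4  # 1 = r/UnethicalLifeProTips

X_train, X_test, y_train, y_test = train_test_split(
    titles, labels, test_size=0.25, random_state=42, stratify=labels)

# Fit on training data only to avoid leakage into the held-out set.
vec = CountVectorizer(ngram_range=(1, 2), min_df=1)
X_tr = vec.fit_transform(X_train)
X_te = vec.transform(X_test)  # transform (never fit) the test set
```

Fitting the vectorizer before the split would let test-set vocabulary leak into training, which is why the order here matters.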

Most predictive words

By fitting a CountVectorizer and a MultinomialNB model to the title training data alone, and then predicting on an identity matrix, we can obtain, for each word in the corpus, the probability that a post containing that word is predicted to come from the r/UnethicalLifeProTips subreddit. The following word clouds indicate the most predictive words for each subreddit.

Wordcloud of the most predictive title words for the r/LifeProTips subreddit. Image generated by the author.
Wordcloud of the most predictive title words for the r/UnethicalLifeProTips subreddit. Image generated by the author.

For r/LifeProTips, the top three words are “resolution,” “goal,” and “habit.” Each word in the word cloud aligns (allowing for seasonal considerations) with the observation that posts from r/LifeProTips are of a self-improvement nature.

The top three predictive words for r/UnethicalLifeProTips are “refund,” “toilet,” and “fake.” The presence of “refund” and “fake” and many others in the word cloud is not surprising considering the theme of dishonesty underscoring most of the posts in the subreddit. However, “toilet” needs some more explanation — after all, there is not anything inherently unethical about a toilet. It turns out that there are many posts on how to get free toilet paper. These align with the toilet paper shortage in the US during the early days of the COVID-19 pandemic. For example:

ULPT: Need toilet roll? Public bathrooms, free and fully stocked.

Another word’s appearance in the word cloud is slightly puzzling at first: “girl.” The word “girl” turns out to be the fourth most predictive word for r/UnethicalLifeProTips. From reviewing the data there are several different themes for this word on the r/UnethicalLifeProTips subreddit. One theme is posing as a girl on the internet:

ULPT: Pretending to be a girl online to get free stuff

Another major theme is “dating” advice:

ULPT: Save your side guy or girl’s number in your phone as Potential Spam. This way it will not cause alarm if your boyfriend/girlfriend sees them calling you.

ULPT: See an pretty girl at the grocery but too shy to approach her? Go to the pet food aisle and put the biggest bag of dog food into your cart. Then push your cart down the aisle she’s on and boom, she’ll probably start talking to you first.

Modeling

Model Assessment

We first apply the selected vectorizers to the title and alltext columns in the training data and concatenate the results, together with the title_reading_level column (passed as a sparse matrix), using SciPy’s hstack function. We then perform 5-fold cross-validation with three scikit-learn estimators: RandomForestClassifier, MultinomialNB, and LogisticRegression; for MultinomialNB, we ignore the title_reading_level column. We score the cross-validation with both the accuracy metric and the ROC-AUC metric. Since the classes are balanced, accuracy is a meaningful metric. The following table provides the results from these cross-validations along with the performance on the test data.

Cross-validation results
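The stack-and-cross-validate step above can be sketched as follows; the titles, labels, and reading levels are hypothetical stand-ins for the real feature matrices:

```python
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical stand-ins for the vectorized text features and labels.
titles = [
    "keep a budget for your money", "clean your shower while you shower",
    "freeze milk for iced coffee", "love your body and sleep well",
    "fake a refund to get free stuff", "return the old filter for a refund",
    "use twelve emails for the discount", "lift the package off the scale",
] * 4
y = ([0] * 4 + [1] * 4) * 4  # 1 = r/UnethicalLifeProTips

X_text = CountVectorizer().fit_transform(titles)

# Cast the dense reading-level column to sparse so it stacks with the counts.
reading_level = np.random.default_rng(0).uniform(2, 12, size=len(titles))
X = hstack([X_text, csr_matrix(reading_level.reshape(-1, 1))])

# 5-fold cross-validation scored on accuracy (the classes are balanced).
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=5, scoring="accuracy")
```

Swapping LogisticRegression for RandomForestClassifier or MultinomialNB (the latter without the reading-level column, since it requires non-negative counts) reproduces the other two rows of the table.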

Threshold tuning

The next step is to tune the threshold for each estimator, since a trained estimator actually returns a probability for each prediction. We need to select the best threshold for our use case: an automated filter for unethical content, to be used by social media moderators, internal business communication platforms, online forums, and the like. Because the automated aspect of the model is intended to significantly reduce manual monitoring, we choose to prioritize reducing false positives (a positive label here means membership in the r/UnethicalLifeProTips subreddit). This means we place more emphasis on obtaining a higher precision at the cost of a lower recall. After examining the precision-recall curves for the three estimators above, we choose the RandomForestClassifier estimator with threshold 0.5718, obtaining a precision of 89.03% with a recall of 50.20%. The following figure shows the precision-recall curve for the RandomForestClassifier estimator.

Precision-recall curve for RandomForestClassifier, predicting membership of a post in the r/UnethicalLifeProTips subreddit. The position of the selected threshold is indicated in red. Image generated by the author.

The following table shows the confusion matrix for the selected threshold.

Confusion matrix for RandomForestClassifier with threshold 0.5718 predicting ULPT

A precision of 89.03% means that out of all of the posts predicted to be from the r/UnethicalLifeProTips subreddit, just under 9 out of 10 truly are, and a recall of 50.20% means that out of all the posts that are truly from the r/UnethicalLifeProTips subreddit, 1 out of 2 is flagged by the model.
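The threshold-selection step can be sketched with scikit-learn's precision_recall_curve; the labels and predicted probabilities below are hypothetical, and the precision target is illustrative rather than the project's actual operating point:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

# Hypothetical held-out labels and predicted P(ULPT) scores.
y_true = np.array([0, 0, 0, 1, 1, 1, 0, 1, 0, 1])
y_prob = np.array([0.1, 0.2, 0.35, 0.8, 0.9, 0.55, 0.4, 0.7, 0.6, 0.45])

precision, recall, thresholds = precision_recall_curve(y_true, y_prob)

# Pick the lowest threshold whose precision meets the target,
# trading recall away for fewer false positives.
target_precision = 0.80
ok = np.where(precision[:-1] >= target_precision)[0]
chosen = thresholds[ok[0]]
chosen_recall = recall[ok[0]]
```

Note that precision and recall have one more entry than thresholds (the final point is the appended precision=1, recall=0 endpoint), which is why the slice precision[:-1] is used when indexing back into thresholds.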

Product

A web app where you can input a piece of advice and it will return the model’s prediction on whether or not it belongs in the r/UnethicalLifeProTips subreddit is now available. You can find this app at www.is-your-advice-unethical.com. Have fun seeing what the model gets right and wrong!

Conclusion

For most of the posts in this dataset, any individual with a reasonable amount of cultural and ethical awareness can accurately assign the correct subreddit. The data indicate that a piece of advice offering self-improvement that can be achieved by an individual alone is likely to come from r/LifeProTips, and a piece of advice suggesting dishonest behavior to gain something from others is likely to come from r/UnethicalLifeProTips.

Overall r/LifeProTips reading levels are slightly higher than those of r/UnethicalLifeProTips. Thanks to our clustering via latent Dirichlet allocation, some common themes for these subreddits have been recognized:

r/LifeProTips: resolutions, cooking/kitchen, money, gift giving, mental health, cleaning, online behavior

r/UnethicalLifeProTips: avoid paywalls/ads, scam return policies/rewards programs, get out of work, interpersonal spite/prank, get something for nothing, car related

The top three predictive words for each subreddit are as follows:

r/LifeProTips: “resolution,” “goal,” and “habit”

r/UnethicalLifeProTips: “refund,” “toilet,” and “fake”

We obtained a binary classification model with precision 89.03% and recall 50.20% for predicting a post’s membership in the r/UnethicalLifeProTips subreddit. This precision means that when the model flags a post as unethical, it is correct roughly nine times out of ten, which greatly reduces the need for manual monitoring in forum moderation. The recall, much lower than the precision, means that one out of every two unethical posts goes undetected.

It is worth noting that the content from the subreddits is in the form of advice. So this model is trained to specifically recognize unethical advice rather than more general unethical content. Furthermore, the motivation for writing each post seems to differ across the subreddits: r/LifeProTips are more sincere, r/UnethicalLifeProTips are a mix of sincerity, spite, and humor.

This model can be strengthened and made more accurate with more data. Scraping more posts from the two subreddits to obtain more training data would make the model more robust.

You can view the corresponding Jupyter notebook and accompanying files in the GitHub repository for this project. The web app where you can experiment with the model can be found at www.is-your-advice-unethical.com.

Math professor with interest in machine learning and deep learning.