Over this past week, I reread Jordan Ellenberg’s How Not to Be Wrong, which looks at simple-to-understand yet profound results from mathematics. One section of this book is devoted to the Laffer curve, named after the American economist Arthur Laffer.
Ellenberg’s explanation of the curve includes the graphic below and reads, “The horizontal axis here is the level of taxation, and the vertical axis represents the amount of revenue the government takes in from taxpayers. On the left edge of the graph, the tax rate is 0% [which means] the government gets no tax revenue. On the right, the tax rate is 100% [which means] whatever income you have … goes straight into Uncle Sam’s bag. Which is empty. Because if the government vacuums up every cent of the wage you’re paid … why bother doing it? Over on the right edge of the graph, people don’t work at all. Or, if they work, they do so in informal economic niches where the tax collector’s hand can’t reach. The government’s revenue is 0 once again.” …
Suppose we wanted to play a souped-up version of Scrabble where we still get points based on the complexity of a word but with a different metric. Rather than the letters used, we’ll score words based on how difficult they are to understand.
And here’s how we measure that difficulty. We’ll grab a copy of the Merriam-Webster dictionary and look up that word. To learn a word in English, I read its definition. At this point, I have learned the meaning of the original word. However, to fully understand a word, I need to know the meaning of every word in its definition. And I need to do this for every definition. …
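The recursive lookup described above can be sketched in a few lines of Python. The toy dictionary below is invented for illustration (the article's actual metric runs against Merriam-Webster); the score counts how many lookups are needed before every word in the chain of definitions is understood.

```python
# Toy stand-in for a real dictionary: each word maps to the words in
# its definition. Words with no entry are treated as already understood.
toy_dictionary = {
    "cat": ["a", "small", "animal"],
    "small": ["little"],
    "animal": ["a", "living", "thing"],
}

def difficulty(word, known=None):
    """Number of definition lookups needed to fully understand `word`."""
    if known is None:
        known = set()
    if word in known or word not in toy_dictionary:
        return 0                       # base word, or already looked up
    known.add(word)                    # one lookup for this word ...
    # ... plus the lookups for every word in its definition
    return 1 + sum(difficulty(w, known) for w in toy_dictionary[word])

print(difficulty("cat"))
```

Tracking the `known` set matters: without it, a dictionary with circular definitions (which real dictionaries inevitably have) would recurse forever.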
I took my first class in Artificial Intelligence in my junior year. I walked in bright-eyed and bushy-tailed. I was expecting all the buzzwords — neural networks, support vector machines, Bayesian networks, and more. And the first lecture was the most disappointing 75 minutes of my Duke experience. Instead of cool-sounding buzzwords, I got depth-first search and recursion.
It ended up being one of my favorite classes and my professor knew exactly what he was doing. We had to learn the basics before we could take on the big stuff. We didn’t stand a fighting chance against deep learning if we didn’t know the fundamentals inside and out. Mr. Miyagi was teaching us to wax cars. …
I recently had a friend text me and say, “Andrew, I’ve been getting a ton of matches on Tinder, but I still haven’t been able to find the one. I think it’s because I’m not using enough linear algebra. Can you help me out?”
And I replied, “Wow, that’s a weirdly specific question. This sounds like a fake situation. But yes, of course, I’ll see what I can do.” In this article, we’ll set out to help my friend find the one. But how?
We’re going to break this question up into a few parts. In part 1, we’ll take a look at building vector representations for human characteristics and calculating alignment between two vectors. In part 2, we’ll use a preference learning algorithm to weight categories and return the most relevant matches. …
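Part 1's two ingredients can be sketched directly. The characteristic vectors below are hypothetical (the categories and values are invented for illustration), and alignment is measured with cosine similarity, a standard choice for comparing the direction of two vectors:

```python
import math

# Hypothetical characteristic vectors, each entry in [0, 1], e.g.
# [loves hiking, night owl, dog person, enjoys cooking].
alice = [0.9, 0.2, 0.8, 0.5]
bob   = [0.8, 0.3, 0.9, 0.4]

def cosine_similarity(u, v):
    """Alignment between two vectors: 1.0 means identical direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(round(cosine_similarity(alice, bob), 3))
```

Part 2's preference learning would then reweight each dimension before computing this similarity, so that categories a user actually cares about count for more.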
Over this past semester, I worked with 5 other Duke undergrads to extract information from various subreddits to estimate the approval rating of the President. On Reddit, some praise the President and some detest the President. Our team quantified that sentiment, calculated averages, and produced some interesting results. Our analysis focused solely on Presidential approval ratings, but our methods readily generalize to calculating the approval of any person, policy, or product. As a note, we focused on President Barack Obama for this project, as there was the most data available during his presidency.
As mentioned, this problem was tackled by 6 Duke undergraduate students — Milan Bhat, a sophomore studying Electrical and Computer Engineering, Andrew Cuffe, a senior studying Economics and Computer Science, Catherine Dana, a junior studying Computer Science, Melanie Farfel, a senior studying Economics and Computer Science, Adam Snowden, a junior studying Biology and Computer Science, and myself, a senior studying Mathematics and Computer Science. …
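The aggregation step can be sketched in a few lines. The per-comment sentiment scores below are invented for illustration; the actual project derived scores from real subreddit comments, but the averaging idea is the same: treat each comment as a vote and report the share that approve.

```python
# Hypothetical per-comment sentiment scores in [-1, 1]:
# negative = detests the President, positive = praises.
comment_scores = [0.8, -0.3, 0.5, 0.9, -0.6, 0.2, -0.1, 0.7]

def approval_rate(scores):
    """Fraction of comments whose sentiment is positive."""
    positive = sum(1 for s in scores if s > 0)
    return positive / len(scores)

print(f"Estimated approval: {approval_rate(comment_scores):.1%}")
```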
If there is one consistent assignment across 10ᵗʰ grade English classes, it is writing a summary of The Great Gatsby (TGG). TGG is an engaging and exciting work of literature. It has central themes that weave themselves throughout the novel — themes of society and class, wealth and status, the present and the past. A cogent and concise summary is no easy task for a 10ᵗʰ grader.
In this article, we’re going to summarize TGG using some methods from Natural Language Processing (NLP). NLP is a subfield of Artificial Intelligence and Machine Learning that analyzes how computers process and understand organic methods of communication (e.g. written language). …
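To make the idea concrete, here is a minimal extractive summarizer, a common NLP baseline (not necessarily the article's exact method): score each sentence by how frequent its words are across the text, then keep the top-scoring sentence. The three-sentence text is a stand-in for the novel.

```python
import re
from collections import Counter

# Stand-in text; the article applies this idea (with more machinery)
# to the full novel.
text = ("Gatsby throws lavish parties. The parties draw all of New York. "
        "Gatsby hopes the parties will bring Daisy back.")

# Split into sentences and count word frequencies across the whole text.
sentences = re.split(r"(?<=[.!?])\s+", text)
freq = Counter(re.findall(r"[a-z']+", text.lower()))

def score(sentence):
    """Sum of corpus-wide frequencies of the sentence's words."""
    return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

summary = max(sentences, key=score)
print(summary)
```

Real systems add refinements on top of this skeleton, e.g. dropping stopwords like "the" so that frequency reflects content words rather than filler.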
Over this past school year, I became all too familiar with the technical recruiting process. In the fall semester, I opted for a breadth-first search approach and applied to about 30 software companies. In the spring semester, I helped run a technical recruiting class for undergraduate students in their first and second year. Based on my experience and the experiences of the students in the class, I learned two key things about software interviews.
If you’ve just stumbled onto this series, make sure to check out the introduction here.