<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Manu Ebert on Medium]]></title>
        <description><![CDATA[Stories by Manu Ebert on Medium]]></description>
        <link>https://medium.com/@maebert?source=rss-b9aeaeab6feb------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*rOiyiHEvsbE0kc6mYr1SmA.jpeg</url>
            <title>Stories by Manu Ebert on Medium</title>
            <link>https://medium.com/@maebert?source=rss-b9aeaeab6feb------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sun, 17 May 2026 19:31:10 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@maebert/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[The Three-Sentence Email]]></title>
            <link>https://medium.com/@maebert/how-to-write-emails-like-a-grown-up-4952ffb002dd?source=rss-b9aeaeab6feb------2</link>
            <guid isPermaLink="false">https://medium.com/p/4952ffb002dd</guid>
            <category><![CDATA[writing]]></category>
            <category><![CDATA[office-culture]]></category>
            <category><![CDATA[self-improvement]]></category>
            <category><![CDATA[productivity]]></category>
            <dc:creator><![CDATA[Manu Ebert]]></dc:creator>
            <pubDate>Wed, 14 Feb 2018 18:26:10 GMT</pubDate>
            <atom:updated>2018-02-14T18:30:01.962Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*hgcwHUFp9jb0o5Qq31kcgg.jpeg" /><figcaption>That’s me on a Tuesday morning.</figcaption></figure><p>I get around 60 non-marketing emails in my inbox a day and have to respond to at least 30 of those. If you send me an email, you’ll find that I’m much more likely to respond if your email is three sentences or fewer. Nothing good ever happens after the third sentence.</p><p>In fact, the most successful people are known for their <a href="http://money.cnn.com/2014/12/01/technology/steve-jobs-emails/index.html">brief</a> emails. Here are some tips to keep your emails to three sentences or fewer:</p><h4>When you’re asking for something.</h4><p>Make it easy for people to help you by asking for specific things.</p><ol><li><strong><em>Context:</em></strong><em> Say why you need something.</em></li><li><strong><em>Objective:</em></strong><em> Say what your goal state is, and how they know their contribution was meaningful.</em></li><li><strong><em>Actionable steps: </em></strong><em>Say exactly what you need them to do. 
Make this bold.</em></li></ol><h4>When you did something and need to tell people.</h4><p>Help people understand what you did, why you did it, and how it impacts them.</p><ol><li><strong><em>Context: </em></strong><em>Say what the thing you did is relevant for and why it was necessary.</em></li><li><strong><em>Content:</em></strong><em> Say what you did.</em></li><li><strong><em>Conclusion: </em></strong><em>Say how what you did impacts the recipient.</em></li></ol><h4>When you have to provide constructive criticism.</h4><p>Good people with the best intentions sometimes do bad things, or good things poorly.</p><ol><li><strong>Context:</strong> Say what happened and what the recipient did.</li><li><strong>Effect:</strong> Say what the negative effect of their action was.</li><li><strong>Actionable steps:</strong> Say how they could have done better.</li></ol><p>Always assume good intentions. People make mistakes not out of malice and seldom out of incompetence, but often out of ignorance of their actions’ impact.</p><h4>When you’re planning on doing something.</h4><p>You are planning something with an uncertain outcome and need people to be on your side.</p><ol><li><strong>Context:</strong> Say why you need to do something and state your assumptions.</li><li><strong>Hypothesis:</strong> Say what you will do and what effect you expect it to have.</li><li><strong>Goals:</strong> Say how you will know that what you are trying to do has succeeded.</li></ol><h4>When you’re giving praise.</h4><p>Forget brevity: never limit the praise you give.</p><h3>Finally…</h3><ol><li>Make the subject line your spokesperson. Make sure it’s relevant and searchable.</li><li>When replying to an email, you can omit context if everybody understands and is aligned on it. If not, provide context, even if it seems repetitive.</li><li>A gif says more than a thousand words. 
But it’s not always clear which words those are.</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/640/1*f1fqeiieAYWVfq0t-fANEw.gif" /></figure>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Machine Learning on AWS Lambda]]></title>
            <link>https://medium.com/@maebert/machine-learning-on-aws-lambda-5dc57127aee1?source=rss-b9aeaeab6feb------2</link>
            <guid isPermaLink="false">https://medium.com/p/5dc57127aee1</guid>
            <category><![CDATA[python]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[aws]]></category>
            <dc:creator><![CDATA[Manu Ebert]]></dc:creator>
            <pubDate>Tue, 31 May 2016 17:47:12 GMT</pubDate>
            <atom:updated>2016-05-31T17:47:12.787Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*nbu2BlCflHcIIcRwFqbyyA.png" /><figcaption>AWS Icons. The “S” stands for “Spot The Difference.”</figcaption></figure><p><a href="https://aws.amazon.com/lambda/">AWS Lambda</a> is a service that lets you execute a single function on the AWS cloud, and only pay for the actual execution time. This is tremendously helpful for computation-intensive tasks like cropping and converting images uploaded by users — you can trigger an AWS Lambda function with a file in S3, and it will dutifully perform its task independently of the rest of your architecture. It doesn’t matter whether you’re converting ten thousand pictures an hour or ten every month, there is absolutely no effort in scaling, and no architectural differences.</p><p>That makes Lambda incredibly appealing for a lot of distributed computation tasks. However, it can be a bit of a pain to set up: you have to bundle all of your dependencies into a single zip file along with your code. If you’ve ever tried that for a complex machine learning environment involving numpy and <a href="http://scikit-learn.org">sklearn</a>, you will have already experienced the torment and misery this will bring upon you.</p><p>Here is how to do it while maintaining your sanity.</p><h4>Setting up a Machine Learning Environment on EC2</h4><p>First, create an EC2 instance using Amazon Linux and log in to it. There we’ll install all of our dependencies.</p><p>Remember Fortran? Yeah, we need it.</p><pre>sudo yum -y update<br>sudo yum -y upgrade<br>sudo yum -y groupinstall "Development Tools"<br>sudo yum -y install blas<br>sudo yum -y install lapack<br>sudo yum -y install atlas-sse3-devel<br>sudo yum -y install python27-devel python27-pip gcc</pre><p>Scikit-Learn won’t compile on less than 1GB of RAM. 
If you’re using a free micro instance, create a swap file:</p><pre>sudo dd if=/dev/zero of=/swapfile bs=1024 count=1500000<br>sudo mkswap /swapfile<br>sudo chmod 0600 /swapfile<br>sudo swapon /swapfile</pre><p>Okay, let’s create a virtual environment and install everything we need:</p><pre>virtualenv ~/stack<br>source ~/stack/bin/activate<br>sudo $VIRTUAL_ENV/bin/pip2.7 install numpy<br>sudo $VIRTUAL_ENV/bin/pip2.7 install scipy<br>sudo $VIRTUAL_ENV/bin/pip2.7 install pandas<br>sudo $VIRTUAL_ENV/bin/pip2.7 install sklearn</pre><p>Another challenge is that Lambda has a limit of 50MB for zipped files including all dependencies. Let’s use some dirty tricks to bring our bundle size down. We can use <em>strip</em> to remove symbols we probably won’t need from the binaries in both <em>lib</em> and <em>lib64</em>:</p><pre>find $VIRTUAL_ENV/lib*/python2.7/site-packages/ -name "*.so" | xargs strip</pre><p>Finally, let’s bundle up all of our modules in ~/lambda.zip:</p><pre>pushd $VIRTUAL_ENV/lib/python2.7/site-packages/<br>zip -r -9 -q ~/lambda.zip *<br>popd<br>pushd $VIRTUAL_ENV/lib64/python2.7/site-packages/<br>zip -r -9 -q ~/lambda.zip *<br>popd</pre><p>Lambda will be looking for shared libraries in <em>/var/task/lib</em>, so let’s put everything we need into a <em>lib</em> folder:</p><pre>mkdir -p lib<br>cp /usr/lib64/atlas-sse3/liblapack.so.3 lib/.<br>cp /usr/lib64/atlas-sse3/libptf77blas.so.3 lib/.<br>cp /usr/lib64/atlas-sse3/libf77blas.so.3 lib/.<br>cp /usr/lib64/atlas-sse3/libptcblas.so.3 lib/.<br>cp /usr/lib64/atlas-sse3/libcblas.so.3 lib/.<br>cp /usr/lib64/atlas-sse3/libatlas.so.3 lib/.<br>cp /usr/lib64/libgfortran.so.3 lib/.<br>cp /usr/lib64/libquadmath.so.0 lib/.</pre><p>And add this to our bundle:</p><pre>zip -r -9 -q ~/lambda.zip lib/</pre><p>Great! 
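Before pulling the bundle down, it’s worth confirming it is still under that 50MB limit; here is a small sketch (the helper name is mine, and the path is assumed to match the steps above):

```python
# check_bundle.py -- warn if a zip exceeds the 50MB Lambda deployment
# limit mentioned above. Purely illustrative; not part of the AWS CLI.
import os

LIMIT_BYTES = 50 * 1024 * 1024  # Lambda's 50MB cap for zipped bundles

def check_bundle(path):
    """Print the bundle size and return True if it fits within the limit."""
    size = os.path.getsize(path)
    print("%s: %.1f MB" % (path, size / 1048576.0))
    return size <= LIMIT_BYTES
```
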
Back on your local machine, get the zip file from EC2:</p><pre>scp -i pemfile.pem ec2-user@100.110.120.130:~/lambda.zip lambda.zip</pre><p>Now you can add your <em>lambda_handler.py</em>. If you have a more complex program, I recommend putting that into a module that you import in <em>lambda_handler.py</em>. Let’s remove all <em>.pyc</em> files first though:</p><pre>find my_module/ -name '*.pyc' -delete<br>zip -9 lambda.zip lambda_handler.py<br>zip -9r lambda.zip my_module/</pre><p>Because it’s so big, we need to upload it to an S3 bucket before updating Lambda:</p><pre>aws s3 cp lambda.zip s3://my_bucket/lambda.zip<br>aws lambda update-function-code --region us-east-1 --function-name lambda_function --s3-bucket my_bucket --s3-key lambda.zip</pre><p>This should take care of most obstacles you are going to run into. Did this work for you? Let me know in the comments.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[AI’s Big Trade Secret]]></title>
            <link>https://medium.com/@maebert/ai-s-big-trade-secret-a0d59110d6e3?source=rss-b9aeaeab6feb------2</link>
            <guid isPermaLink="false">https://medium.com/p/a0d59110d6e3</guid>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <dc:creator><![CDATA[Manu Ebert]]></dc:creator>
            <pubDate>Thu, 14 Jan 2016 17:50:01 GMT</pubDate>
            <atom:updated>2016-01-14T18:29:20.786Z</atom:updated>
            <cc:license>http://creativecommons.org/licenses/by/4.0/</cc:license>
            <content:encoded><![CDATA[<h4>Why Algorithms are Worthless</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/600/1*mmYNx33k6csJ0DdwEvrYTw.jpeg" /><figcaption>The <a href="https://en.wikipedia.org/wiki/The_Turk">Mechanical Turk</a>: Deceiving the Public about how AI works since 1770.</figcaption></figure><p>Artificial Intelligence is on the rise, but (for a change) I’m not talking about the great <a href="http://www.popsci.com/bill-gates-fears-ai-ai-researchers-know-better">AI panic of 2015</a>. I’m talking about everyday AI that’s built into vacuum cleaner robots, Siri, web analytics, and your <a href="http://gmailblog.blogspot.com/2015/11/computer-respond-to-this-email.html">email</a>. Everybody is fascinated by the technology, and it seems like AI is making huge breakthroughs.</p><p>I’m <a href="http://www.summer.ai">sitting at the source</a> of cutting-edge AI development (pun intended), and as in any other discipline, 95% of the work has very little to do with coming up with amazing new algorithms to outsmart humans. But more importantly, 95% of how well these AI systems perform also has nothing to do with smarter algorithms.</p><h4>To explain why, here’s a little story.</h4><p>Back in 2005, I took an applied machine learning class in college. We had a little competition to classify smileys drawn on surveys into various categories like “happy”, “sad”, “angry”, and “confused”. The better exemplars looked like this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*KTJpO1JtAOuxyGSLpkcPyg.jpeg" /></figure><p>The more challenging smileys were wearing hats, eye-patches, tongues sticking out, ears, nose piercings, Marge Simpson wigs, and Braveheart war paints. I am not kidding.</p><p>Each smiley was 200⨉200 pixels large, which meant a solid 40,000 input dimensions. 
The various teams in the class immediately brought out the heavy artillery: support vector machines, recurrent neural networks, subspace ensemble classifiers… Their performance was well below satisfactory.</p><p>Disappointed with the absence of a quick win, our team tried something which we would now call “feature engineering”. We wrote a little script that would start at the bottom edge of the image and draw a line up until it hit the first black pixel. Then we compared the length of three of those lines, and also the ratio of black pixels in the upper left quadrant compared to the upper right quadrant:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*9CoQzuhlInJDaU_2oij6WA.png" /></figure><p>In the end, we were left with only three values. We built a maximally dumb decision tree with these three values and outperformed all other teams by a large margin.</p><p>However, what I now call “feature engineering” and do every day was called “cheating” back in the class and got us disqualified, which led me to this tendentious observation:</p><h3>Manuel Ebert on Twitter</h3><p>What successful #MachineLearning is made of vs. what academics think it&#39;s made of: pic.twitter.com/Ypnf0d0o7w</p><p>The dirty secret is that this kind of feature engineering — using your human intuition, domain knowledge, and reckless shortcuts to reduce 40,000 input dimensions to three — is exactly what makes most AI applications work. The other thing, of course, is having enough good, well-curated data.</p><p>This is precisely the reason Facebook and Google are <a href="http://www.wired.com/2015/11/google-open-sources-its-artificial-intelligence-engine/">giving away all of their machine learning and AI infrastructure for free</a>. <strong>Algorithms, on their own, are worthless.</strong></p><p><strong>The real value is in what you feed these algorithms</strong>, and companies keep a tight lid on both their data and their feature engineering. 
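The kind of feature extraction described earlier fits in a few lines; here is a toy sketch (function names and probe positions are illustrative, not the class’s actual script):

```python
# Toy version of the smiley features described above: probe lines drawn
# up from the bottom edge, plus an upper-quadrant ratio. The image is a
# list of rows with 1 = black pixel, 0 = white; all names are mine.

def line_length(img, x):
    """Distance from the bottom edge up to the first black pixel in column x."""
    height = len(img)
    for dist in range(height):
        if img[height - 1 - dist][x]:
            return dist
    return height  # no black pixel in this column

def quadrant_ratio(img):
    """Black pixels in the upper-left quadrant vs. the upper-right one."""
    h, w = len(img), len(img[0])
    upper_left = sum(img[y][x] for y in range(h // 2) for x in range(w // 2))
    upper_right = sum(img[y][x] for y in range(h // 2) for x in range(w // 2, w))
    return upper_left / float(upper_right or 1)

def features(img):
    w = len(img[0])
    # a handful of values instead of 40,000 raw pixel dimensions
    return (line_length(img, w // 4),
            line_length(img, w // 2),
            line_length(img, 3 * w // 4),
            quadrant_ratio(img))
```
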
The real job of many AI engineers is using their experience to massage data into something <strong>more digestible for the algorithms</strong>.</p><p>If you had a similar feature engineering win, please share your story!</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Price of a Pizza is not the Pizza]]></title>
            <link>https://medium.com/@maebert/the-price-of-a-pizza-is-not-the-pizza-77cd35897d94?source=rss-b9aeaeab6feb------2</link>
            <guid isPermaLink="false">https://medium.com/p/77cd35897d94</guid>
            <category><![CDATA[time-management]]></category>
            <category><![CDATA[management]]></category>
            <category><![CDATA[self-improvement]]></category>
            <dc:creator><![CDATA[Manu Ebert]]></dc:creator>
            <pubDate>Wed, 13 Jan 2016 00:51:18 GMT</pubDate>
            <atom:updated>2016-01-13T01:37:28.806Z</atom:updated>
            <cc:license>http://creativecommons.org/licenses/by/4.0/</cc:license>
            <content:encoded><![CDATA[<h4>Why we confuse cost and value</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/800/1*4Ez5FFCUaV-PQZGVbX9EaA.jpeg" /><figcaption>“The Persistence of Pizza”, probably not by Salvador Dalí.</figcaption></figure><p>Many people have written about the toxicity of a culture where being busy is seen as an advantage. I couldn’t agree more, but underneath all the stroking-your-ego, appearing-important, and being-hard-at-it is a mystery that honestly confused the shit out of me.</p><p><strong>Why do smart people confuse the cost of something with its value?</strong></p><p>If someone boasts that they just pulled an all-nighter to finish a project, this is what I hear:</p><blockquote><em>“Dude, last night I paid $80 for a pizza!”</em></blockquote><p>I don’t need to know. I don’t even want to know. What I want to know is whether the pizza was any good. And then, maybe, whether it was worth $80.</p><p>You wouldn’t boast about how much money you spent on dinner (unless you’re a real douche), so why boast about how much time you spent on getting stuff done?</p><p>Time is your cost. <strong>Time is what you pay for creating value</strong>, doing your work, getting things done. And even if you don’t boast about your all-nighters, chances are you feel guilty if you don’t work as much as your co-workers. They might look at you funny if you go home at 3pm and declare that you’re done for today. It all boils down to the thinking that you didn’t “spend” enough on the work you did.</p><p>I believe this confusion comes from the fact that <strong>time is the only currency we all have in common</strong>. Designers are bad at judging the value of the work of engineers, engineers are comically bad at judging the value of the work done by managers, high-level managers are frighteningly bad at judging the value of the work of low-paid staffers and the building cleaners. 
All we have to talk about is time.</p><p>Time is the only natural resource equally distributed amongst every single person on earth, and so we all know what <em>spending</em> time feels like. <strong>We all make the same amount of time — we have a fixed stipend of 24 hours per day which is ours to spend.</strong></p><p>The only way out of the cultural trap of being busy is to stop talking about how much time you spent on doing things. Of course you will need to talk about time when planning your tasks — it’s a resource that needs to be allocated like any other. And it often makes sense to look back to figure out how to spend less time on what you did.</p><blockquote>Choose being productive over being busy.<br>Choose talking about value over talking about cost.<br>Choose improving over complaining.</blockquote><p>Don’t ever complain about how much time you spent on something, or worse, brag about it. Instead, help your peers understand the value of your work.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Unicode for Idiots like Me]]></title>
            <link>https://medium.com/@maebert/unicode-for-idiots-like-me-e7ea46030787?source=rss-b9aeaeab6feb------2</link>
            <guid isPermaLink="false">https://medium.com/p/e7ea46030787</guid>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[python]]></category>
            <dc:creator><![CDATA[Manu Ebert]]></dc:creator>
            <pubDate>Thu, 15 Oct 2015 19:10:59 GMT</pubDate>
            <atom:updated>2015-10-15T19:20:30.003Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ZlogGQ-GcQzb7TBAlkZsbw.jpeg" /></figure><p>Unicode confuses a lot of programmers, but nobody wants to admit it. What’s the relationship between Unicode and UTF-8? Where should I use UTF-8 and where Unicode? Why do we even need the two?</p><p><a href="https://medium.com/u/d38a80b1d039">Armin Ronacher</a> has a <a href="http://lucumr.pocoo.org/2013/7/2/the-updated-guide-to-unicode/">very exhaustive answer</a> to this question. But really, it’s simple. Let’s talk about Machine Translation for a second.</p><p>There are basically two ways of doing machine translation — that is, automatic translation from, say, French to German. One is a direct translation: you translate every sentence from French directly to its equivalent in German. The other method of machine translation is using something called an <a href="https://en.wikipedia.org/wiki/Interlingual_machine_translation">Interlingua</a>, a hypothetical language that captures the <em>meaning</em> of a sentence without being concerned about the words to use, word order, grammar, and so on.</p><p>Back to Unicode: UTF-8 is German. Latin-1 is French. ASCII is Spanish. Unicode is — you guessed it — interlingua. The actual languages are called <em>encodings</em>. They are a specific way of representing the <em>meaning </em>of a symbol. Unicode isn’t concerned with how exactly atrocious characters like <em>ö </em>or<em> ñ </em>or<em> ‽ </em>or<em> </em>✈︎ are represented on a storage medium — which bits in a byte should be 1 and which 0 and how many bytes to start with. Unicode cares about the symbol and nothing but the symbol.</p><p>Here’s an error that every Python programmer alive has seen:</p><blockquote>‘ascii’ codec can’t decode byte 0xc3 in position 0: ordinal not in range(128)</blockquote><p>This error, like many related ones, is easy to explain. 
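You can reproduce it in a few lines; here is a minimal Python 3 sketch (the example letter is Ü, as below):

```python
# Reproducing the error above: the UTF-8 bytes for "Ü" (0xC3 0x9C)
# have no meaning in ASCII, so decoding them as ASCII fails.
text = u"\u00dc"                      # Ü as Unicode -- the interlingua
data = text.encode("utf-8")           # spell it in "German": two bytes
assert data == b"\xc3\x9c"

try:
    data.decode("ascii")              # try to read it as "Spanish"
except UnicodeDecodeError as err:
    print(err)                        # 'ascii' codec can't decode byte 0xc3 ...

assert data.decode("utf-8") == text   # the right encoding round-trips
```
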
You are trying to say something in Spanish, but there’s simply no word for it. In this case, the letter <em>Ü</em> doesn’t exist in ASCII.</p><p>Another common error is the one you might get from reading a file that has a different encoding than you think it has. It’s like reading a book that you assume to be written in French, but the words won’t make sense — because it’s actually written in German.</p><p>Here’s a simple rule that all programmers should follow:</p><p><strong>If it’s on disk, make it UTF-8.</strong></p><p><strong>If it’s flowing through your program, make it Unicode.</strong></p><p>Specifically, as soon as you load text from a file or a database, at the earliest point, convert it to your interlingua, Unicode. Within your software, all text should always be assumed to be Unicode. Only when text leaves your software, when it’s written on disk or sent through an API or stored in a database, encode it in UTF-8 (or whatever the appropriate encoding for your case is).</p><p>Prosper.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Natural-Born Cyborgs]]></title>
            <link>https://medium.com/@maebert/natural-born-cyborgs-366a3566e61f?source=rss-b9aeaeab6feb------2</link>
            <guid isPermaLink="false">https://medium.com/p/366a3566e61f</guid>
            <category><![CDATA[design]]></category>
            <category><![CDATA[philosophy]]></category>
            <category><![CDATA[neuroscience]]></category>
            <dc:creator><![CDATA[Manu Ebert]]></dc:creator>
            <pubDate>Tue, 13 Oct 2015 15:55:50 GMT</pubDate>
            <atom:updated>2015-10-20T18:35:51.558Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*PtBMk3ye27Or3iuMpPjnEQ.png" /></figure><h4>What Design can learn from Philosophy and Neuroscience</h4><p>This is a three-part blog post. The first part will be about the philosophical discipline of phenomenology and what that has to do with how we use tools. To keep you from falling asleep, there will also be Nazis and Street-Fighter references. In the second part, I’ll present findings from neuroscience that basically say the philosophers got it right. Finally, I will show you that this means that we’re all, in fact, cyborgs, and what that has to do with how we use, and hence should design, tools, interfaces, and wearables.</p><p>If you’d rather listen to things than read them, here’s a video of a talk I gave on the subject:</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fplayer.vimeo.com%2Fvideo%2F142961471&amp;url=https%3A%2F%2Fvimeo.com%2F142961471&amp;image=http%3A%2F%2Fi.vimeocdn.com%2Fvideo%2F540436891_1280.jpg&amp;key=d04bfffea46d4aeda930ec88cc64b87c&amp;type=text%2Fhtml&amp;schema=vimeo" width="1280" height="720" frameborder="0" scrolling="no"><a href="https://medium.com/media/dba27bdefc7933c929c47da9ea9477bc/href">https://medium.com/media/dba27bdefc7933c929c47da9ea9477bc/href</a></iframe><h4>Phenomenology: let’s sip whiskey and talk about feelings.</h4><p>But first, let me take you to France. Paris, the city of love. In 1942. The city is under Nazi occupation. It’s night, long past the curfew, and outside SS patrols march down dark alleys with cobblestone streets. 
Inside, a man sits in front of a typewriter, working on the manuscript for his philosophical magnum opus “L’Être et le Néant” — <a href="http://amzn.to/1WQ4BBl">“Being and Nothingness”</a>.</p><p>Sartre and his lifelong partner Simone de Beauvoir would later become <em>the</em> most talked-about celebrity couple in post-war France — think the Brangelina of the 50s, except with a lot more juicy affairs on the side. In 1949, de Beauvoir would publish <a href="http://amzn.to/1ZifRIJ">“The Second Sex”</a>, kickstarting second-wave feminism in Europe, and in 1964 Sartre would be awarded — and refuse — the Nobel Prize for his literary work.</p><blockquote>“Hell is other people’s code” — JP Sartre. Well, something along these lines anyway.</blockquote><p>But that night in 1942, he was tired. As he was typing on the manuscript, he tried to focus on the ideas, the concepts, the world the words he was typing were creating. But then his eyes became sore and weary, and the letters started to blur. He noticed how his attention shifted from the concepts to the letters that carry them. And then soon to his tired fingers that created those letters. Finally, he couldn’t think about anything but his sore eyes and had trouble keeping them open.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/696/1*39-lG2aQI741qDrtETcA6g.jpeg" /><figcaption>Jean-Paul is staring at you. Maybe. Hard to tell with his funky eyes.</figcaption></figure><p>Most ordinary people would just have decided to call it a night. But Sartre was anything but ordinary. He was a philosopher. Particularly, he was a phenomenologist. Phenomenology is the philosophical study of experiences, or more correctly, the study of structures of experiences. In a way, it’s a scientific discipline, a way of describing the world and theorising about it. But as opposed to physics or psychology, phenomenology is a first-person approach: you’re analysing, deconstructing and describing your own experiences and consciousness. 
Phenomenologists don’t study things for what they are, but for how they appear to us.</p><p>Sartre, being a phenomenologist, was quick to notice a certain pattern in his experiences. He noticed how his attention shifts from one thing to another — the ideas, the words, his fingers, his eyes. What’s interesting here is that his attention shifts from the subject of perception to the medium of perception. First he perceives the ideas through the words, then the typed words themselves become the centre of his attention. Finally, his own eyes become the subject of his experience, rather than the medium, the method of experiencing.</p><h4>Heidegger: Philosophical Hadouken Punch</h4><p>His book “Being and Nothingness” is of course a play on the title of Martin Heidegger’s seminal work <a href="http://amzn.to/1WQ4ANU">“Being and Time”</a> — “Sein und Zeit” in German, which Sartre read in a German prison camp when he was a prisoner of war in 1941. Being and Time was a philosophical Hadouken punch, blasting away 2500 years of philosophy on its way to answering questions like “Why is there something instead of nothing?”.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*p2AnEzqNoiSRA4ygCnNF1g.jpeg" /><figcaption>Martin is <strong>most definitely </strong>staring at you.</figcaption></figure><p>Spoiler alert: we still don’t know.</p><p>The book was written with a brutal accuracy that is only possible in German where you can make up words like <em>Ersatzzeitgeist</em> and <em>Auskunftspflicht</em> — which loosely translates to <em>“the obligation to inform authorities about any changes in permanent residential address or marital status in a timely fashion”</em>.</p><p>Needless to say, these linguistic “features” render Being and Time almost unreadable in any other language than German.</p><p>Heidegger introduces a distinction that is incredibly important in all of design, although we don’t call it that: presence-at-hand and readiness-to-hand. 
Or, in German: <em>Vorhandensein</em> and <em>Zuhandensein</em>. What’s that? In his famous example, he considers a hammer. Alright, so most philosophers probably never used a hammer in their whole lives, which makes this example somewhat amusing.</p><p>When the hammer is lying on a table, we can look at it, analyse it, describe it based on its constituents — wooden handle, heavy cast iron head — maybe even infer its function and use from the way it’s shaped. The hammer is present-at-hand.</p><p>Magic happens when we pick up the hammer:</p><p>As soon as the hammer is in my hand, when I use the hammer to drive a nail into the wall, I do not think about which angle to hold my hand to manipulate the hammer — I think about how to hold the hammer to manipulate the nail. The hammer becomes almost invisible to me; it transitions from being an object in the world to a way of interacting with the world through it. The same thing happens when I use a pen to write or a computer mouse to point to things. I won’t think about how to move my hand on the trackpad, I think about how to move the cursor on the screen. When driving a car, I don’t think about how to twist my elbows to turn the steering wheel, I just think about where I want to go. The tool becomes part of my body in that way; I interact with the world through the tool now.</p><p>That’s what Heidegger calls ready-to-hand. Only when the hammer is ready-to-hand does it become a means of action, rather than a subject of it; only then can we achieve some fluidity in using it. 
When the hammer breaks, it immediately loses its readiness-to-hand and becomes merely present-at-hand; our attention immediately shifts back from what we’re trying to hammer to the hammer itself.</p><p>What we’ve seen so far are two examples of how our experience of a tool can shift from the tool itself to what we’re perceiving (in Sartre’s case) or manipulating (in Heidegger’s example) through the tool, and back.</p><p>The interesting part is this transition. Moments ago the hammer was a distinct thing, an ontologically discrete entity lying there in the outside world. And as soon as I pick it up, it becomes part of me. I use it as naturally as my own hands. The boundary between “me” and the “world” shifts — now the hammer is part of me, and ceases to be part of the “outside” world.</p><h4>Remember Affordances?</h4><p>Designers love to talk about affordances. The flat surface of a chair affords sitting on it. The little shadow under a button affords clicking on it. Affordances are action possibilities. Here’s what James Gibson had to say about hammers:</p><blockquote>“When in use, a tool is a sort of extension of the hand, almost an attachment to it or part of the user’s own body, and thus no longer a part of the environment of the user. But when not in use the tool is simply a detached object of the environment, graspable and portable, to be sure, but nevertheless external to the observer. This capacity to attach something to the body suggests that the boundary between the animal and the environment is not fixed at the surface of the skin but can shift. More generally it suggests that the absolute duality of ‘objective’ and ‘subjective’ is false. When we consider the affordances of things, we escape this philosophical dichotomy.”</blockquote><p>Before Gibson, the mainstream view of perception was that there’s an outside object, the subjective observer, and some internal representation of the object in the observer. 
But that model can’t explain what happens when tools become extensions of our bodies.</p><p>Tackling this problem of perception was central to <a href="http://amzn.to/1OoJDqj">Gibson’s work</a> when he introduced affordances in 1977, but that origin has unfortunately got lost a bit as affordances entered mainstream design lingo.</p><p>So originally, affordances were meant to be an answer to this philosophical conundrum, not something to be lightly tossed at wireframes as a fancy way of saying “can you make this button pop a little more?”</p><h4>I promised you cyborgs. Here’s a cliffhanger.</h4><p>In this post, I demonstrated that when we inspect our own experiences of tool use, we can see that tools become part of our body. I also talked about some of the philosophical frameworks that explain that. But how do we know this is true and not just something some crackhead philosophers made up in their cozy leather armchairs on a particularly scotch-fuelled night?</p><p>Stay tuned for next week, where I’ll present some fantastic results from neuroscience studies that show what is going on in the brain while we use tools. Oh, and I’ll also argue that this means that we are, in fact, natural-born cyborgs.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=366a3566e61f" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Artificial Intelligence is a total misnomer]]></title>
            <link>https://medium.com/@maebert/artificial-intelligence-is-a-total-misnomer-ebcf551349e4?source=rss-b9aeaeab6feb------2</link>
            <guid isPermaLink="false">https://medium.com/p/ebcf551349e4</guid>
            <category><![CDATA[philosophy]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <dc:creator><![CDATA[Manu Ebert]]></dc:creator>
            <pubDate>Wed, 29 Jul 2015 18:57:00 GMT</pubDate>
            <atom:updated>2015-07-29T18:57:00.895Z</atom:updated>
            <content:encoded><![CDATA[<p>Around the time that the term <em>Artificial Intelligence</em> was coined, the working model of cognitive science was the <em>computer metaphor</em>. The idea was that your brain is a big computer; it processes input and generates output. While this is not strictly speaking wrong, it’s about as helpful as considering my stomach to be a computer that accepts input and generates output. If the brain is like a computer, then surely computers could be like brains, too!</p><p><strong><em>“Artificial”</em> Intelligence refers to the fact that in many obvious ways, a computer is not a brain.</strong> It doesn’t require oxygen or adenosine triphosphate, it executes one operation after another, and it’s neither wet nor gooey.</p><p>This implicitly (but rather bluntly) equates the brain with “Intelligence”. However, intelligence is not a tangible thing, it’s not an entity, it doesn’t exist in a vacuum, you can’t find it by dissecting a pickled brain. <strong>Intelligence is something we <em>do</em></strong>, or more accurately, something we <em>ascribe</em> to actions and behaviours. There are no such things as artificial actions or behaviours. If something moves, it moves for real. If Google suggests a new route for you that avoids the traffic, it’s a real suggestion, not an artificial one.</p><p>Conversely, an action cannot be <em>artificially</em> or <em>naturally</em> intelligent. It is either intelligent, or it is not, and that depends on the context and the goals.</p><p>In that way, intelligence is like humour. Humour is not a little man sitting in your brain pushing the “joke” button every now and then — humour is when you intentionally say or do something funny. It’s funny if it makes you laugh. 
It is not artificially funny, it’s just funny.</p><p>It doesn’t make sense to talk about <em>artificial</em> intelligence.</p><p>We should talk about <strong>“machine intelligence”</strong> to signify that the action we deem intelligent (or not) was planned and executed by a machine. But the intelligence of the action is just as real.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ebcf551349e4" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Hacking your way into Burning Man]]></title>
            <link>https://medium.com/@maebert/how-to-hack-burning-man-with-apis-and-text-messages-4e4e0faa2758?source=rss-b9aeaeab6feb------2</link>
            <guid isPermaLink="false">https://medium.com/p/4e4e0faa2758</guid>
            <dc:creator><![CDATA[Manu Ebert]]></dc:creator>
            <pubDate>Tue, 16 Sep 2014 02:38:24 GMT</pubDate>
            <atom:updated>2014-09-16T17:12:49.159Z</atom:updated>
            <content:encoded><![CDATA[<h4>Or: How to outrun the internet with APIs and geospatial arbitrage</h4><p><em>Note: This article was featured as a guest post on the </em><a href="http://blog.kimonolabs.com/2014/09/15/guest-post-how-to-hack-burning-man-with-apis-and-text-messages/"><em>Kimono Blog</em></a><em>.<br>Image credit: Black Rock City (based on </em><a href="https://secure.flickr.com/photos/fling93/2826485813"><em>this aerial photo</em></a><em>, CC-BY-NC-SA)</em></p><p>Different people have different ideas about their perfect vacation. Some day-dream of relaxing on white beaches and palm trees. Others seek the thrill of snowboarding down the Rockies. And then there are those who really just want to run through the desert half-naked, entangled in EL wire and <a href="https://www.youtube.com/watch?v=DIfhv9SD144">exploding things to toss anvils 100ft into the air</a>.</p><p>That’s me. And probably the only place on earth where you can do this is the <a href="https://en.wikipedia.org/wiki/Burning_Man">Burning Man festival</a>. Problem is, I completely failed to get a ticket in time this year, and in between a lot of travelling I woke up with the sudden realisation that there were only five days left to find a ticket. The <a href="https://eplaya.burningman.com/viewforum.php?f=370">ePlaya forum</a> had plenty of people offering tickets at face value; however, demand this year was so high that the average time until a ticket got sold was less than four minutes. I was still stuck in Edinburgh, UK, and there was no possible way I could monitor that page around the clock to call dibs on the next ticket offered.</p><h4>APIs and my Inner Nerd to the rescue.</h4><p>I would personally find it unethical to write a bot to get a ticket for me, but at least the process of sitting at a computer and hitting the refresh button can be automated. 
The battle plan is to turn the HTML mess of the ePlaya phpBB board, which only barely graduated from nested tables, into clean data with <a href="https://www.kimonolabs.com/">Kimono</a> and then notify me whenever that data changes.</p><p>First, let’s identify what we need in our API. Here’s what a section of the forum looks like:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*IG4iondjaGJSw6iwQ3ypdg.gif" /></figure><p>We obviously need the title of the post, since we’re ultimately only interested in tickets offered, not tickets needed. The date of the post might also be interesting, as is the number of replies (since we won’t care unless we’re the first). To select the date and replies, simply highlight the text by clicking and dragging your cursor over it the way you would in a text editor.</p><p>Opening the raw data view, we can now see the data, the whole data, and nothing but the data:</p><pre>{<br>  &quot;Title&quot;: {<br>    &quot;text&quot;: &quot;Oh… my… god…&quot;,<br>    &quot;href&quot;: &quot;https://eplaya.burningman.com/viewtopic.php?f=370&amp;t=71750&quot;<br>  },<br>  &quot;Date&quot;: {<br>    &quot;text&quot;: &quot;Sat Aug 30, 2014 9:16 pm&quot;,<br>    &quot;href&quot;: &quot;https://eplaya.burningman.com/viewtopic.php?f=370&amp;t=71750&quot;<br>  },<br>  &quot;Replies&quot;: &quot;0&quot;<br>}</pre><p>Note how KimonoLabs neatly captures the URL to the individual thread, too. We’ll use that later.</p><p>Now, we can tell Kimono to send us an email whenever that data changes. However,</p><ol><li>we’re not interested in getting an email every time somebody posts a reply to an old thread, and</li><li>the lowest interval for automatic polling is 15 minutes.</li></ol><p>So, let’s write a few lines of Python to deal with the data. 
We’ll use the <a href="http://docs.python-requests.org/en/latest/">requests</a> module to fetch the API:</p><pre><strong>import </strong>requests<br>URL <strong>= &quot;</strong>https://www.kimonolabs.com/api/6zzoaezg?apikey=&lt;MY_API_KEY&gt;&quot;<br>response <strong>= </strong>requests.get(URL).json()<br>data <strong>= </strong>response[&#39;results&#39;][&#39;collection1&#39;]</pre><p>Now data will be a list of dictionaries like the one above. Next we need to find all new posts. Let’s have a closer look at the URLs in the title attribute:</p><pre><a href="https://eplaya.burningman.com/viewtopic.php?f=370&amp;t=71750">https://eplaya.burningman.com/viewtopic.php?f=370&amp;t=71750</a></pre><p>Here, 370 is the id of the forum, and 71750 is the id of the topic. Before we run the script the first time, let’s create a file called latest_topic.txt and put 71750 inside. Back in Python, we load the file to find the newest topic we’ve already seen, split every URL to find the topic id of the post, and if it’s newer than the newest topic from the last time we ran the script, we remember the post in a new_posts list. And while we’re at it, we also keep track of the maximum topic id we encounter so we can save it when we’re done finding new topics:</p><pre>latest_topic <strong>= </strong>int(open(&#39;latest_topic.txt&#39;).read())<br>max_topic <strong>= </strong>latest_topic<br>new_posts <strong>= </strong>[]</pre><pre><strong>for </strong>post <strong>in </strong>data:<br>    topic <strong>= </strong>int(post[&#39;Title&#39;][&#39;href&#39;].split(&#39;t=&#39;)[-1])<br>    <strong>if </strong>topic &gt; latest_topic:<br>      new_posts.append(post)<br>      max_topic <strong>= </strong>max(max_topic, topic)</pre><pre><strong>with </strong>open(&#39;latest_topic.txt&#39;, &#39;w&#39;) <strong>as </strong>f:<br>    f.write(str(max_topic))</pre><p>Great. 
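</p><p>One gap worth closing: we said we’re only interested in tickets offered, not tickets needed, yet so far every new post gets through. A minimal, hypothetical title filter could narrow new_posts down; the keyword lists below are illustrative guesses, not part of the original script:</p>

```python
# Hypothetical filter: keep only posts whose titles look like ticket offers.
# The keyword lists are illustrative guesses, not part of the original script.
def looks_like_offer(title):
    t = title.lower()
    offer_words = ("offer", "selling", "for sale", "extra ticket")
    need_words = ("need", "looking for", "wanted")
    # An offer keyword must appear, and no "searching for a ticket" keyword may.
    return any(w in t for w in offer_words) and not any(w in t for w in need_words)

# Applied to the new_posts list collected above:
# new_posts = [p for p in new_posts if looks_like_offer(p['Title']['text'])]
```

<p>In practice you would tune the keywords by skimming a page of recent thread titles; a false positive only costs a text message, so erring on the permissive side is fine. 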
Now let’s turn to our friends at <a href="http://twilio.com/">Twilio</a> for turning data into a concierge web notification service. Twilio allows you to send text messages to any phone and takes less than 5 minutes to set up. Let’s write a short method that will take a post and send a text message to a number:</p><pre><strong>def</strong> <strong>send_message</strong>(post, number):<br>    URL <strong>= &quot;</strong>https://api.twilio.com/2010-04-01/Accounts/&lt;MY_ACCOUNT_NUMBER&gt;/SMS/Messages.json&quot;<br>    text <strong>= &quot;</strong>{}: &#39;{}&#39; ({} replies) {}&quot;.format(<br>        post[&#39;Date&#39;][&#39;text&#39;],<br>        post[&#39;Title&#39;][&#39;text&#39;],<br>        post[&#39;Replies&#39;],<br>        post[&#39;Title&#39;][&#39;href&#39;]<br>    )<br>    params <strong>= </strong>{<br>       &quot;From&quot;: &quot;&lt;MY_TWILIO_NUMBER&gt;&quot;,<br>       &quot;To&quot;: number, <br>       &quot;Body&quot;: text<br>    }<br>    requests.post(URL, data<strong>=</strong>params, auth<strong>=</strong>(&quot;&lt;TWILIO_API_KEY&gt;&quot;, &quot;&lt;TWILIO_API_SECRET&gt;&quot;))</pre><p>The format operation turns the post into a string such as</p><pre>&quot;Sat Aug 30, 2014 9:16 pm: &#39;Oh... my... god...&#39; (0 replies) https://eplaya.burningman.com/viewtopic.php?f=370&amp;t=71750&quot;</pre><p>Finally, a simple POST request to the <a href="https://www.twilio.com/docs/api/rest/sending-messages">Twilio API</a> is enough to send this string to a number. All that’s left to do is to send a text message for every post in new_posts. But wait… have you heard of the term “geospatial arbitrage”? 
Instead of notifying only myself, let’s send this message to friends in three different timezones in the US, the UK and Japan to maximise the chance of one of us being able to reply first:</p><pre><strong>for</strong> post <strong>in</strong> new_posts:<br>    <strong>for</strong> number <strong>in</strong> (&#39;+1 415 123-4567&#39;, &#39;+44 20 1234567&#39;, &#39;+81 3 12345678&#39;):<br>        <strong>send_message</strong>(post, number)</pre><p>Done. If you’re on a free Twilio trial account, make sure all of the recipients’ numbers are verified. Last thing is to make sure this script runs, like, very often. Let’s save it as ticket_search.py and make an entry in our <a href="http://www.cronchecker.net/check?utf8=%E2%9C%93&amp;statement=*+*+*+*+*+python+ticket_search.py">crontab</a> by typing <em>crontab -e</em> in our Terminal and adding the line</p><pre>* * * * * python ticket_search.py</pre><p>to make this script run once per minute.</p><p>Lean back.</p><p>At 7:04 PST in the morning, three phones on three different continents gently buzzed as part of our globally orchestrated concert of APIs and data, scripts and servers. I was the first one to tap the reply link, and sure enough:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/320/1*99DOvbfJNQVm_b5ojHyprw.gif" /></figure><p>Incredibly happy and exhausted, I paypal’ed the money to the seller, wrote a hand-lettered postcard to say thanks, and started packing my bag for Burning Man.</p><p>You can get the full script <a href="https://gist.github.com/maebert/0166aa566cd04151be03">here</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=4e4e0faa2758" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[9 Things I Learned as a Software Engineer]]></title>
            <link>https://medium.com/@maebert/9-things-i-learned-as-a-software-engineer-c2c9f76c9266?source=rss-b9aeaeab6feb------2</link>
            <guid isPermaLink="false">https://medium.com/p/c2c9f76c9266</guid>
            <category><![CDATA[science]]></category>
            <category><![CDATA[careers]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[advice]]></category>
            <category><![CDATA[self-improvement]]></category>
            <dc:creator><![CDATA[Manu Ebert]]></dc:creator>
            <pubDate>Mon, 30 Jun 2014 09:32:08 GMT</pubDate>
            <atom:updated>2019-11-12T01:45:47.467Z</atom:updated>
            <content:encoded><![CDATA[<h4>…that I wish I had known when I started grad school</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*uxvRnsFSvBXQhRdnacpDrA.jpeg" /><figcaption>My Alma Mater, the University of Osnabrück, Germany.</figcaption></figure><p>Three years ago I was working in a neuroscience lab in Barcelona, busy putting electrodes on people and teaching classes on cognitive systems. Today I design and write software for a living.</p><p>Of course back in science I wrote a lot of software — if you want to make any sense of 40 GB of brain scan data you’ll have to roll up your sleeves and write scripts to crunch those numbers, and I was always a good programmer. But it wasn’t until I quit my job (and possibly my future) in academia and started working for a <a href="http://hey.co/">small and ambitious start-up</a> that I understood what being a software engineer — and more importantly, being in the business of software engineering — is really about. It’s not knowing more programming languages, libraries, algorithms, and design patterns. It’s a mindset.</p><p>That mindset would have made my work a lot easier if only I had known and adopted it before I started grad school. This is a note to my younger self, a list of things I learned, sometimes painfully, in the past three years.</p><h4>1. Intelligence is overrated</h4><p>When you’re young, being smart gets you a long way. You’re a big fish in a small pond. Doubly so if you have a knack for expressing yourself half-way eloquently. In fact, being intelligent and a smooth talker will get anyone through high-school and most of college without learning much at all (you’ll have to study for physics though. Can’t just talk an equation away.) — congratulations, you’re lucky. And also, very unlucky. Because while you were effortlessly rushing through school, picking things up as you were going, others had to learn what would be much more important later on: Diligence. Persistence. 
Networking. And probably some of the eight things further down the list.</p><p>Our society values intelligence beyond proportion. When I tell people that I used to work in neuroscience, the first response is often: “Wow, you must be super smart”. I’m not dumb, but I know a lot of people who are probably less intelligent than I am, but far better neuroscientists.</p><p>Intelligence is certainly still a door-opener. But it will never get the job done on its own. Diligence, rigour, a reliable network, and finally not being a dick are essential qualities of not just software engineering but any profession that’s outside the little bubble called grad school.</p><h4>2. Take pride in your craft</h4><p>That mantra may be over-used, but it still holds an important lesson for you, dear younger self: whatever you do, consider it to be an honourable craft. Nothing should ever be just a means to an end. We all love seeing our names on publications, but the actual craft is to come up with and invalidate dozens of hypotheses, to work with your subjects — whether human or floating in a test tube — and tend to their needs, to rigorously analyse your data and validate your statistics, and to start over again because at some point you will notice an embarrassingly stupid mistake you made earlier on. If you write software, that means planning your features, researching existing open source code, learning new paradigms and programming languages, fixing your bugs, refactoring code and maintaining it. If you take no pleasure in these steps and just consider them to be what has to get done in order to publish your paper or ship your product, then you will never become truly good at it. 
If you have no ambition to become truly good at your craft, then maybe being a scientist, or an engineer, or whatever you’re doing right now is a waste of your time.</p><p>A good sign that you’re honouring your craft is that you’re taking on <a href="https://maebert.github.io/jrnl">pet projects</a>: silly little projects that don’t necessarily serve any immediate need that you’re doing just for the project’s sake. Because you enjoy working on them. Interestingly, these seem to be fairly common in the software community — many products we use every day started as someone’s pet projects — but are much more rare in scientific circles. One of my favourite quotes comes from Konrad Lorenz:</p><blockquote>“It is a good morning exercise for a research scientist to discard a pet hypothesis every day before breakfast.”</blockquote><p>If that sounds stupid to you, maybe you shouldn’t be a research scientist.</p><h4>3. Learn new tools</h4><p>As a continuation of the last point: devote time to learning new tools. Not just to expanding your abstract knowledge, but to actually learn tools that will help you get things done. It will pay off soon enough.</p><p>A great way of learning new tools is with “pet projects” mentioned above. Every time you build something new, also build it in a new way. Remember, pet projects are about failing. You don’t invest much, you learn a bit, and if it doesn’t take off or you lose interest or you realise that the challenge was a little too much: no harm done. No ego hurt.</p><p>Great tools that I highly recommend learning if you’re in academia:</p><ul><li>Git and <a href="https://www.github.com/">Github</a>. Git helps you to manage your work and never worry about backups again, and there’s a ton of great code on Github so you don’t have to reinvent the wheel over and over again. Oh, and please do code reviews with your peers. 
Don’t ever use code to analyse data that no-one has read but you (I can’t believe I even have to tell you this, younger self. You’ve always been a good programmer, but the mistakes I still make that would go unnoticed if it wasn’t for code reviews make me believe that 30% of all results in science are probably bogus because of bugs).</li><li>Illustration software: I personally prefer <a href="http://inkscape.org/">Inkscape</a>, but the industry standard Adobe Illustrator or newcomer <a href="http://bohemiancoding.com/sketch">Sketch</a> are just as good. Use it to post-process your plots and graphs; it’s often much easier than writing plotting directives in Matlab or matplotlib.</li><li>Learn how to use your text and code editor efficiently. <a href="http://sublimetext.com/">Sublime Text</a> is a great editor with a much lower learning curve than Vim or Emacs. Learn the shortcuts. It will save you an enormous amount of time.</li><li>Learn how to speak. Watch TED talks and pay attention to how the more seasoned speakers can engage the audience for 15 minutes while telling a compelling story. Practice in front of a mirror. Your body and voice are tools, too.</li><li>Knowing the fundamentals of Python, R, HTML and Javascript will get you a long way. If you’re no stranger to programming already, learn a new aspect or library. Play with computer vision. Natural language processing. Web scraping. Music synthesis. Robots!</li></ul><p>The solutions you can see to a problem will always be limited by the tools you know. Learning new tools means looking at problems from other angles.</p><p>If you’re in college, I strongly suggest you set aside one day every week only for learning new tools. When you start doing your own research as a PhD student: make that two days a week. You will save a lot of time in the long run and people will be astounded by your efficiency. 
If that sounds like a lot and you think that you don’t have the time and there’s too much pressure to get other things done, talk to your older peers and ask them for advice on what to really spend your time on.</p><h4>4. Be a stake-holder and make your agenda known.</h4><p>It’s a common assumption that your supervisor or CEO will always act in the institute’s or company’s best interest; that’s her job.</p><p>But, neither a company nor a lab is a conscious entity, and as such has no intrinsic interest. When we speak about a company’s best interest, we actually mean the best interest of the stakeholders. The real question now is: who does your CEO or supervisor think those stakeholders are, and how important are their interests?</p><p>If your boss thinks she or he is the only stakeholder (get as many publications as possible; aim for a quick and profitable exit): get out as fast as you can. You will be thrown under the bus. Who else? Your investors or grant givers? The employees? Students? Humanity? The point is: find out as soon as possible. If you’re not seen as a stakeholder, get out. As much as you may love your work, it’ll be a one-sided, abusive relationship.</p><h4>5. Shipping it</h4><p>“Shipping it” has become a very fashionable term in tech. It means getting your product out of your warehouse and to the consumer. But more than an action, it is a mentality. It means that your work is worthless until it ends up in the hands of the consumer, and that this should always be your main goal.</p><p>In academia, most software I wrote had to work exactly once on exactly one system. Writing production-ready code that will work for half a million users is a completely different kind of animal, and when I started to write code professionally my work often fell short of that.</p><p>But it also means that there’s no point in iterating for years until you have the perfect product: make a small point, and get it across. 
Write the simplest paper you can possibly get accepted. Worry about making a more complex study later. Get the basics right quickly, and get them out there as soon as possible. Just ship it.</p><h4>6. Know the 80/20 rule</h4><p>The 80/20 rule basically states that it will take you 20% of a project’s time to achieve 80% of the desired effect, and then the remaining 80% of time to just get the last 20% right. It’s like driving from the suburbs to the city: in 20% of the time, you’ll cover 80% of the distance. But once you hit urban traffic, the last 20% of distance will take you much longer.</p><p>Why is that important to know? Because people constantly underestimate the time a project needs. Scientists and engineers are particularly prone to that. That’s partly a matter of experience: the more you know, the better you can anticipate what will go wrong later and what the funny edge cases will be that nobody thought of when you started out.</p><p>If you don’t have that experience yet, just multiply the time you need for a project by 5, and expect to be “almost there” after a fifth of your estimated time.</p><h4>7. You didn’t sell your soul.</h4><p>I started my PhD for all the wrong reasons. One of them is what I now call “academic guilt”. I believed that if I didn’t pursue a PhD programme I would be wasting my talent. And I felt that I owed it to the people who went out of their way to support my academic career — professors and the people who paid for my scholarship — to do research. I didn’t. They may have invested in my academic future and may be disappointed that their investment didn’t pay off and produce a great scientist, but that’s their problem, not mine.</p><p>The same holds for any other job. People will always invest in you and it’s often in <em>their best interest</em> to do so. But that doesn’t mean they own your soul.</p><h4>8. 
Leave your comfort zone.</h4><p>Here’s how I look at the world:</p><figure><img alt="" src="https://cdn-images-1.medium.com/proxy/1*nUYG9kYTLBMxde_YNAw6hw.png" /><figcaption>If a situation is too familiar, you learn very little. However if you panic, you may learn nothing at all.</figcaption></figure><p>There’s your comfort zone. You know every fish in that pond. You belong. You know how to deal with problems. Nothing new under the sun. If you want to learn something new and grow, you’ll have to leave your comfort zone. This is where learning starts. That’s where interesting things happen. That’s where you don’t immediately have a response to everything.</p><p>Of course, there’s also the point where you’re just overwhelmed. That’s the panic zone. That’s where you’ll black out. That’s where all you can do is try to keep your head above the water and hope somebody will save you.</p><p>The sweet spot is right before your panic zone. That’s where you’ll find the challenges that make you learn the most, grow the most, and change the most. Go there.</p><blockquote>“Forget safety.<br>Live where you fear to live.<br>Destroy your reputation.<br>Be notorious.”</blockquote><blockquote>- Rumi</blockquote><h4>9. Tame your monkey mind</h4><p>Sit comfortably, close your eyes and just continue breathing normally. Focus on how the air you exhale through your nostrils feels on your skin above your upper lip. Nothing else. Just focus on that.</p><p>How long before your mind wanders off? Five minutes? Probably not. A minute? Good. Twenty seconds or less? Congratulations, you’re normal. Your mind is like a monkey and it will grasp whichever branch is closest. I would probably phrase that slightly differently in academic settings… the buzzword is associative thinking. Associative thinking is great if you want to do something creative, but it’s the killer of focus. Good news is: you can learn how to focus. 
There are a bajillion “productivity techniques” out there, but all of them just scratch the surface. You don’t want pasta timers and distraction-free writing software. You want to tame your monkey mind once and for all.</p><p>What works for me might be grossly different than what works for you. I get great results with meditating regularly (which has a number of other beneficial side effects), but even then there are so many different styles and traditions, and I can’t possibly recommend one that suits everybody. What I do recommend is keeping your mind in great shape, and taking that seriously. Think meditation is a waste of time? You go to the gym to pump up your body. You should spend at least twice as much time on mental work-outs: lose a few pounds of distracting thoughts. Improve your mental eye-sight. Strengthen your back to be able to keep your mind upright longer.</p><p><em>Thanks to Benedict, Judith, Gabriel, and Peter for feedback and discussion.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c2c9f76c9266" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>