<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by 5agado on Medium]]></title>
        <description><![CDATA[Stories by 5agado on Medium]]></description>
        <link>https://medium.com/@5agado?source=rss-8615d974dee1------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/0*HVHmI5FhFeg8L7ex.png</url>
            <title>Stories by 5agado on Medium</title>
            <link>https://medium.com/@5agado?source=rss-8615d974dee1------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Fri, 21 Jul 2017 12:24:41 GMT</lastBuildDate>
        <atom:link href="https://medium.com/feed/@5agado" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Conway’s Game Of Life in Blender]]></title>
            <link>https://medium.com/towards-data-science/conways-game-of-life-in-blender-6dd84cd22fa1?source=rss-8615d974dee1------2</link>
            <guid isPermaLink="false">https://medium.com/p/6dd84cd22fa1</guid>
            <category><![CDATA[game-of-life]]></category>
            <category><![CDATA[cellular-automata]]></category>
            <category><![CDATA[blender]]></category>
            <dc:creator><![CDATA[5agado]]></dc:creator>
            <pubDate>Sat, 08 Jul 2017 17:15:52 GMT</pubDate>
            <atom:updated>2017-07-09T18:04:31.026Z</atom:updated>
            <content:encoded><![CDATA[<p>Game Of Life (GOL) is possibly one of the most well-known examples of a <a href="https://en.wikipedia.org/wiki/Cellular_automaton">cellular automaton</a>.</p><p>Defined by mathematician John Horton Conway, it plays out on a two-dimensional grid in which each cell can be in one of two possible states. Starting from an initial grid configuration, the system evolves in unit steps, taking into account at each step only the immediately preceding configuration. If for each cell we consider the eight surrounding cells as <em>neighbors</em>, the system transition is defined by <a href="https://en.wikipedia.org/wiki/Conway&#39;s_Game_of_Life#Rules">four simple rules</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/480/1*HuS0jfHc6D1GI_QljmWQxQ.gif" /><figcaption>A basic plain-2D example</figcaption></figure><p>I was interested in exploring the visualization of this phenomenon with Blender. What follows are some experimental results.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FHDjVuUvreHA%3Ffeature%3Doembed&amp;url=http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DHDjVuUvreHA&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FHDjVuUvreHA%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/8b16a4c9d540a0e0b559cb73f0852e01/href">https://medium.com/media/8b16a4c9d540a0e0b559cb73f0852e01/href</a></iframe><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FWpCuLey2VQM%3Ffeature%3Doembed&amp;url=http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DWpCuLey2VQM&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FWpCuLey2VQM%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a 
href="https://medium.com/media/bbdbb74c199cd1f52adff3adbcbf7e9f/href">https://medium.com/media/bbdbb74c199cd1f52adff3adbcbf7e9f/href</a></iframe><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FLLPuASwteao%3Ffeature%3Doembed&amp;url=http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DLLPuASwteao&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FLLPuASwteao%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/7ace4f97a8a687510873af00f44fe13b/href">https://medium.com/media/7ace4f97a8a687510873af00f44fe13b/href</a></iframe><p><a href="https://gist.github.com/5agado/19284bd220165758a0c953ae25108b6b">Here is the code, if anyone is interested</a>. It’s a reusable script you can import and run directly in the Blender scripting interface. It defines the GOL logic and breaks down the porting of the GOL grid to Blender into two customizable components:</p><ul><li><strong>Generator</strong> — responsible for generating the Blender object that will be mapped to a cell in the original GOL grid. A generator is used to build the initial Blender grid with the preferred mesh (examples: cube, sphere, monkey).</li><li><strong>Updater</strong> — defines the update behavior for a Blender object based on the GOL grid value. 
It should specify what happens to an object depending on whether the corresponding grid value is 0 or 1 (examples: scale, hide).</li></ul><p>The rest is just helper code to register the updater handler, so that a frame change causes an update of the GOL grid (possibly including keyframing).<br>I suggest deleting the handler once you obtain a result you like, so that additional frame changes won’t trigger the update again and ruin your results.</p><h4>3D Version</h4><p>I also experimented with a three-dimensional grid, using the same set of rules while extending the neighbor count to the new dimension.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2F4JWtzI80a8s%3Ffeature%3Doembed&amp;url=http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D4JWtzI80a8s&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2F4JWtzI80a8s%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/d90027341b3d2495e0f903eda898e32e/href">https://medium.com/media/d90027341b3d2495e0f903eda898e32e/href</a></iframe><p>I am planning to look more into the 3D concept to find more stable configurations. An additional interesting improvement would be an unconstrained grid, meaning an automaton that, starting from its initial configuration, can grow indefinitely in space. For such an approach I have to reformulate my current code logic, and probably experiment first with some alternative to Blender, since it consumed a lot of resources even for these simple renders, so any suggestions in this regard are more than welcome!</p><h4>Digression</h4><p>Playing around with these cellular automata visualizations brought to mind concepts like causality/teleology and <a href="http://www.gregegan.net/PERMUTATION/FAQ/FAQ.html">dust theory</a>. 
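The deterministic grid update driving all of the above can be sketched in plain Python, independently of Blender (a minimal illustration; the function names are mine, not taken from the linked gist):

```python
# Count live neighbors on a wrapped (toroidal) square grid.
def count_neighbors(grid, x, y):
    size = len(grid)
    total = 0
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            total += grid[(x + dx) % size][(y + dy) % size]
    return total

# One Game of Life step: a live cell survives with 2 or 3 neighbors,
# a dead cell is born with exactly 3 neighbors.
def gol_step(grid):
    size = len(grid)
    new_grid = [[0] * size for _ in range(size)]
    for x in range(size):
        for y in range(size):
            n = count_neighbors(grid, x, y)
            if grid[x][y] == 1:
                new_grid[x][y] = 1 if n in (2, 3) else 0
            else:
                new_grid[x][y] = 1 if n == 3 else 0
    return new_grid
```

A Blender updater then only has to map each cell’s 0/1 value to an object property, such as scale or visibility, at every frame change.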
All system states are deterministically defined, but can be computed only from the state at the previous time-step. With the scaling/shrinking function in place, the system behaves exactly as it would with any other type of update function; what changes is the behavior of the cells with regard to the system state. A cell is instantly shrunk by external forces to reflect a state equal to zero, but then evolves, with the illusion of free will, to a renewed state of full size. Practically speaking, this growth is nothing more than keyframe filling. It is not the story of a cell that defines its future, but the future of the system that defines the story of all cells’ lives.</p><hr><p><a href="https://medium.com/towards-data-science/conways-game-of-life-in-blender-6dd84cd22fa1">Conway’s Game Of Life in Blender</a> was originally published in <a href="https://medium.com/towards-data-science">Towards Data Science</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[On The Future of Life Logging — A Speculative View]]></title>
            <link>https://medium.com/@5agado/on-the-future-of-life-logging-a-speculative-view-ce6e3722beb6?source=rss-8615d974dee1------2</link>
            <guid isPermaLink="false">https://medium.com/p/ce6e3722beb6</guid>
            <category><![CDATA[quantified-self]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[transhumanism]]></category>
            <category><![CDATA[futurism]]></category>
            <category><![CDATA[lifelogging]]></category>
            <dc:creator><![CDATA[5agado]]></dc:creator>
            <pubDate>Tue, 02 May 2017 14:56:49 GMT</pubDate>
            <atom:updated>2017-05-05T13:51:53.403Z</atom:updated>
            <content:encoded><![CDATA[<h3>On The Future of Life Logging — A Speculative View</h3><p>This is a light and speculative entry where I brainstorm personal ideas about the future of lifelogging (an imminent or sci-fi future, depending on your level of optimism). I will focus mainly on the practical aspects of such activity, while largely ignoring ethical, moral and psychological issues. Even though it is speculative, I tried to include as many “good” references as possible, and I likewise hope that the ideas expressed here can inspire you, or simply give you food for thought.</p><p>I will start with a questionable distinction between two otherwise intertwined, loosely defined movements: quantified-self and life-logging. Both come down to pretty much the same point: the tracking of data related to and generated by an individual, to improve the life of that individual. The goal of our idealized individual is to collect (or better, have collected) as much data as possible about their life, while making the best use of it, with the final and indisputable goal of… being better! Better health, better performance, better person, better mood, better potential for the future — everything one cares about, one wants it better! Makes perfect sense to me.</p><blockquote>“Quantified-Self is just Data Science… for yourself”</blockquote><p>While on one side, with quantified-self, we have the more common, “traditional” variety of data (heart-rate, sleep, calories, mood), on the other, with life-logging, we have the more controversial entries related to our perception and senses, media like text, audio, video — smell too maybe? 
— together with other even more peculiar forms of data.</p><p><a href="https://www.quora.com/What-are-the-main-differences-similarities-between-lifelogging-quantified-self-and-personal-analytics">Someone might say that life-logging is really just about collecting data,</a> or that quantified-self is more about numbers, <a href="http://www.simulation-argument.com/simulation.html">but everything is numbers</a>, and if you collect data it is because you want to do something with it (at least that’s the initial good resolution). If you don’t, then the fault lies with humanity for not giving you good enough tools or motivation, but the initial goal was there: you wanted/hoped to improve your life with data.</p><h3>Quantified Self (just a brief intro)</h3><p>I have a Fitbit, and that’s already considered by some a pretty hardcore device: automatic sleep and heartbeat monitoring using a flashing green light? And it shows the time — kind of — when it recognizes that you turned your wrist to check it?! Astounding!<br>But then you start reading and chatting around this movement, the quantified-self; you get more and more interested and fascinated by it, and acquainted with concepts like wearables, continuous monitoring, self-tracking and optimization, cyborgs and transhumanism. You recognize that fancy Fitbit to be just the commercial surface of a deep and variegated ecosystem of choices and products. What else is out there?</p><p>Start with basic activity monitoring, for which Fitbit is an example: heartbeat (or the even more important <a href="https://medium.com/@justin_d_lawler/heart-rate-variability-what-why-how-931c43fce678">heart-rate variability</a>), steps, distance covered, activity levels, burned calories, sleep tracking.<br>Then we can move to more intimate body analysis: genome testing and sequencing, analysis of blood and guts (microbiome), saliva and stool testing. 
Here we already approach scenarios for which a “wearable” is not enough: tests have to be conducted in a dedicated laboratory, but in a seamless, easy process, luckily far removed from the tediousness of old-fashioned hospital routines.</p><p>Then there is the brain, still part of our body, but sort of our favorite one. <a href="http://waitbutwhy.com/2017/04/neuralink.html">Here the potential is enormous, with a lot of studies and money put into brain-computer interfaces (BCI)</a>. And even if at the consumer level one might see just basic gaming and meditation helpers, <a href="https://www.emotiv.com/">different</a> <a href="http://www.choosemuse.com/">companies</a> provide affordable devices.</p><p>And while talking about helpers, we can conclude our tour of the personal optimization ecosystem, which fades into apps upon apps: time trackers, task managers, habit builders, bad-habit wreckers(?), action loggers and <a href="https://habitica.com/static/front">life gamificators</a>, with the omnipresent and always judging Pomodoro technique!</p><p>That’s a lot of tracking, and people in one group can see those in another as crazy or weird just for being into such things. One might ask “is it worth it?”; one should answer “it depends”.<br>At the same time, many (if not all) are tracked by companies or similar entities in all kinds of possible ways, during all their interactions with, practically, anything digital. 
Seeing the success such companies have had by using our data intelligently, I expect by analogy that a similar fate should fall upon us once we can do the same with regard to our own goals.</p><p>Diversions aside, all this was indeed about learning to get and feel better, but still about what I addressed as the “traditional” type of data; my post is about the future of life-logging, so let’s move on to some quirkier stuff.</p><h3>Life Logging</h3><p>Gordon Bell seems to have been one of the first to “play” around with this concept, <a href="https://www.microsoft.com/en-us/research/project/mylifebits/">with a two-sided project of both collection and analysis of life’s data</a>. But then you discover that after all that work <a href="https://www.technologyreview.com/s/601300/life-logging-is-dead-long-live-life-logging/">he quit</a>, together with another big name, Chris Anderson, both with practically the same takeaway message: <em>not worth it</em>. Many others point out how dull most of the recorded material might end up being, but that is such a wrong understanding of the logging concept. Software logs everything, not just errors or exceptions; it goes as low as possible, down to the <em>finest</em> level, where yes, life is pretty boring, but one can always filter, and who knows how much important insight can be derived and generated from those lower levels.</p><blockquote>Better to bring more than needed, than to find yourself in need of something you previously decided was superfluous.</blockquote><p><a href="http://faculty.som.yale.edu/ShaneFrederick/HedonicTreadmill.pdf">As humans, we keep striving for more</a> (for better or worse), and once we achieve an optimal level of analysis and use of the quantified-self data, do you really want to tell me that richer media such as audio, text, and video would not be the next obvious step? 
We don’t even need more practical incentives, just more transparent and effortless automation, and probably a slight increase in apathy toward privacy matters.</p><h4>Video</h4><p>Capturing what we see is something most of us do, and something <a href="http://gizmodo.com/5883082/this-is-the-first-painting-humanity-ever-made">humans have been doing for a while now</a>. <a href="http://www.urbandictionary.com/define.php?term=Glasshole">Eyebrows rise though</a> when you mention continuous logging: a camera strapped to your chest, some <a href="https://www.spectacles.com/">recording glasses</a> or even <a href="http://www.computerworld.com/article/3066870/wearables/why-a-smart-contact-lens-is-the-ultimate-wearable.html">smart contact lenses</a>. But again, if you want to log, you had better log properly, especially if it’s feasible and effortless with today’s technologies. I know Google Glass — for example — was <a href="https://www.technologyreview.com/s/539606/google-glass-finds-a-second-act-at-work/">not really successful</a> as a commercial project, but since this is speculation, I can dream of a reality in which products are good just because they suit you and your peculiar needs, not because enough people are interested in buying them. <br>Smart contact lenses could probably deliver unbeatable POV recording fidelity, but it’s also worth pointing out that <a href="https://youtu.be/BP_b4yzxp80?t=14m18s">we might not need external devices to record what we see/perceive</a>; nevertheless, my point would stay much the same.</p><p>With huge databases of images from video life-logs, it would be relatively easy to implement a search through time, a search through your <a href="http://www.imdb.com/title/tt0364343/">entirely recorded life</a>. 
It’s a matter of platform and algorithm scalability, and <a href="https://arxiv.org/pdf/1507.06120.pdf">a lot of research has already gone into the subject</a> (sometimes nicely addressed as <em>egocentric vision</em>). As machine learning improves, many kinds of natural-language queries could be transparently run on one’s life database: “when did I see person Y?”, “where was I at datetime X?”, “what did I do on day Z?”, “what did I eat in year K?”, “how many yellow cars have I seen in my life?”. Why? <a href="http://www.urbandictionary.com/define.php?term=Because%20I%20can">Because we can</a>.<br>There are actually many practical health-related use cases for such a technology, covering the entire spectrum of prevention, healing, and enhancement. Food recognition is probably one of the most marketable ones, with automatic calculation of caloric and nutritional values (approximate, but ever more precise as the technology improves). A first-world problem, but if there is something stopping me from keeping track of food, it is definitely how bothersome, impractical and time-consuming the recording process is. One can use all this data to infer the influence that some specific food has on oneself, and discover possible allergies or more subtle effects not diagnosed until then.</p><p>This was just about food, but the combination and aggregation of different sources of data are the key value of all these “I want to improve” movements: discover how something, how everything, affects you. Track luminosity and colors, environments and location, and see how habits, mood and performance change. Maybe someone needs technology to tell them whom they like or dislike, or whatever other emotions people or situations can influence. 
GPS tracking can already say a lot about you, but this would be GPS on steroids, with full context awareness.</p><p>All this is just about filming what surrounds you, but I realize how much more you could gain if such recording could also include your own person and actions. More complex analysis and reporting of hand gestures, body language, posture, tics or routines you might not be aware of, with possible reinforcement loops via automated feedback. <br>Plus additional data for all our mood recognition and tracking.<br>And I feel stupid saying this while staring at the piece of paper taped to my laptop camera, but I suppose that’s the nice thing about speculating, a suspension of disbelief towards some of our <em>credos</em>.</p><h4>Audio and Text</h4><p>Many points about video virtually apply to this section too, but I want to reinterpret the audio part, and just focus here on what one says.</p><p>Textual messages are already easily available for analysis (<a href="https://medium.com/@5agado/conversation-analyzer-baa80c566d7b#.w20u1gltf">Facebook Conversation Analyzer</a>), or via OCR if you still happen to scribble around. What we say with our voice is instead, for the most part, not recorded. Someone else might be recording it, but they are unlikely to share the data with us. I doubt someone like the NSA would share its data about me with me, no matter how enjoyable a collaboration that might end up being.<br>Recording what we say is the first step to a sea of possibilities in terms of personal discovery and improvement. 
Once we consider speech-to-text solutions, we reduce the analysis tasks to the resource-rich realm of NLP, while keeping the audio data for <a href="https://www.ted.com/talks/julian_treasure_how_to_speak_so_that_people_want_to_listen">more specific metrics that would otherwise be lost in a purely textual translation</a>.</p><p>After my simple <a href="https://medium.com/@5agado/analysis-of-a-personal-public-talk-9218c429e172">analysis of a personal public talk</a>, I thought a lot about this scenario, its data, and its potential. It’s something simple, but still, until it reaches your mind it is not there, no matter how simple. What if I recorded everything I say, then processed and analyzed it? Bear with me a moment and see how much you can get out of it; this is just what I was able to come up with:</p><ul><li><strong>vocabulary</strong>: how rich your vocabulary is (lexical richness), common repetitions of words or idioms, grammatical and syntactical errors</li><li><strong>improvements</strong> over time: changes in your “language model” (multiple models if you speak more than one language, with analysis of their reciprocal interaction and influence)</li><li><strong>pronunciation:</strong> accents, quality, weaknesses</li><li><strong>register, timbre, prosody, pace, pitch, volume, mood…</strong></li></ul><p>Body language we covered in the previous section, so once you bring good content, here we have the potential to make you <a href="http://www.artofmanliness.com/2011/01/26/classical-rhetoric-101-the-five-canons-of-rhetoric-invention/">the perfect orator.</a><br>And I was again restricting myself to what is most familiar to us, that is, the recording of public speaking. 
Instead — in this speculation of ours — everything is public speaking, and so your voice can be analyzed in relation to all sorts of situations or people you are interested in, be it a blind date, a weekend with the family, a mortgage negotiation or a drunken night out.</p><h3>Even more speculations, and conclusions</h3><p>Sci-fi <a href="https://letterboxd.com/katewinslets/list/movies-that-could-easily-be-black-mirror/">movies</a> and <a href="http://www.imdb.com/title/tt2085059/">series</a> have already said all of this, and what I enjoy about them is tinkering for the following months with ways to achieve it, with what more there is, with how the technologies I know would behave in such scenarios.</p><p>Think how many services you could build around this: more startups for everyone! Given the amount of data, AI becomes the necessary intermediary, and while someone might call it our new — external — brain layer (the successor of the neomammalian one), let’s address it for now in a friendlier way: <em>personal AI coach</em>.<br>So there you have it: a personal AI coach that observes and listens to you 24/7, has pretty much unmatchable universal knowledge, can give you advice on almost everything, knows what is good and bad, and can subtly make you better, more like you would like to be… doesn’t this sound magical to you?</p><p><a href="https://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374533555">Humans don’t seem to excel at rational decision making</a>, especially in daily life activities. A continuous observation of our “doing” has great potential for improving our stance towards things like <a href="https://www.amazon.com/You-Are-Not-So-Smart/dp/1592407366">cognitive biases, heuristics and logical fallacies</a>; a great potential for better choices overall.<br>“Better or fewer choices?”, you might ask. 
As per its terms and conditions, our AI coach doesn’t order you to do anything; it is neither responsible nor punishable for what you do based on what it says, since it’s just an advisor. It gives advice, distilled from a huge amount of knowledge and computational power. I am not saying it will always be right, but it will be right most of the time, definitely more often than you would be, so you would seem rather stupid/stubborn not to listen to it and to just do the opposite.</p><p>We can accelerate our speculation even further, discarding series and movies and drawing inspiration instead from books, which are <a href="https://en.wikipedia.org/wiki/Charles_Stross">really aiming way higher</a>. Hypothesize about a virtual-reality future instead of a physical one, about uploaded brains instead of transhumanism, about <a href="https://en.wikipedia.org/wiki/Diaspora_(novel)"><em>citizens</em> instead of <em>fleshers</em></a>. At that point maybe all that I have said would not make sense anymore, who knows.</p><h4>Conclusions</h4><p>I started this entry measuring my heartbeat with a Fitbit and ended up with realities from hard science fiction novels, passing through a lovable/inevitable AI coach and a lifetime of one’s experiences stored somewhere in the cloud, for a possibly better life. <br>I used the word <em>speculation</em> a lot, but to many, all that has been proposed here doesn’t even sound so implausible, or even that far away. We live in interesting times, on a steepness of the curve of technological advancement that none of our ancestors had the honor to experience; something to be content about, for sure, if you are a curious mind, no matter what the future holds.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Wl-nPe0Z2bEjqSRPv5Nf-Q.png" /><figcaption>“The Entire History of You”</figcaption></figure>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Personality for Your Chatbot with Recurrent Neural Networks]]></title>
            <link>https://medium.com/towards-data-science/personality-for-your-chatbot-with-recurrent-neural-networks-2038f7f34636?source=rss-8615d974dee1------2</link>
            <guid isPermaLink="false">https://medium.com/p/2038f7f34636</guid>
            <category><![CDATA[chatbots]]></category>
            <category><![CDATA[recurrent-neural-network]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[bots]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <dc:creator><![CDATA[5agado]]></dc:creator>
            <pubDate>Wed, 29 Mar 2017 10:58:22 GMT</pubDate>
            <atom:updated>2017-03-30T13:37:32.904Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/153/1*GVAJo3FNcHF0dA7SoRusaw.png" /><figcaption>Hello, how can I help you today?</figcaption></figure><p>In a <a href="https://medium.com/@5agado/building-a-personal-virtual-assistant-step-1-your-cv-as-a-chatbot-a4381fce6983#.qd1435nsg">previous short entry</a>, I gave an introduction to chatbots: their current high popularity, some platform options and basic design suggestions.</p><p>In this post, I am going instead to illustrate what I believe is a more intriguing scenario: <strong>a deep-learning-based solution for the construction of a chatbot’s off-topic behavior and “personality”</strong>. In other words, when confronted with off-topic questions, the bot will try to automatically generate a possibly relevant answer from scratch, based only on a pre-trained RNN model.</p><p>What follows are four self-contained sections, so you should be able to jump around and focus on just the one(s) you are interested in without problems.</p><ul><li><a href="#1b87">Short intro on chatbot tasks and types.</a></li><li><a href="#d887">Details of the RNN model used:</a> high-level model architecture, info on training-data sources and pre-processing steps, as well as a link to the code repository. I’m not going into the details of RNNs, but I include what I believe are some of the best related resources and tutorials.</li><li><a href="#c4ee">Architecture of the final working solution</a>: the working chatbot involves separate, heterogeneous components. 
I will illustrate them and their interactions, while describing all the tools and resources involved.</li><li><a href="#5205">Showcase of the chatbot in action</a>: this is free of technical details and pure entertainment, so just jump here if you are not interested in the rest, or if you need motivation for checking out the other sections.</li></ul><h3>Chatbots</h3><p>Chatbots (or conversational agents) can be decomposed into two separate but dependent tasks: understanding and answering.</p><p>Understanding is about the interpretation and assignment of a semantic and pragmatic meaning to user input. Answering is about providing the most suitable response, based on the information obtained during the understanding phase and on the chatbot’s tasks/goals.</p><p><a href="http://www.wildml.com/2016/04/deep-learning-for-chatbots-part-1-introduction/">This post</a> provides a very good overview of two different models for the answering task, as well as going into great detail on the application of deep learning in chatbots.</p><p>For <strong>retrieval-based models</strong>, the answering process consists mostly of some kind of lookup (with various degrees of sophistication) from a predefined set of answers. Chatbots currently used in production environments, presented or handed over to clients and customers, will most likely belong to this category.</p><p>On the other hand, <strong>generative-based models</strong> are expected to, well… generate! They are most often based on basic probabilistic models or on machine-learning ones. They don’t rely on a fixed set of answers, but they still need to be trained in order to generate new content. 
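As a concrete reference for what “generate” means here, the simplest such generative model, a word-level Markov chain, can be sketched in a few lines (illustrative code, not taken from the repository linked later in this post):

```python
import random
from collections import defaultdict

# Minimal word-level Markov chain: for each word, remember which
# words followed it in the training text.
def build_chain(text):
    chain = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

# Generate text by repeatedly sampling a follower of the last word.
def generate(chain, seed, length=10, rng=random):
    out = [seed]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

An RNN plays the same role of predicting the next word, but conditions on the whole preceding sequence rather than on the last word alone.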
Markov chains were originally used for the task of text generation, but lately recurrent neural networks (RNNs) have gained more popularity, after many promising practical examples and showcases (<a href="http://karpathy.github.io/2015/05/21/rnn-effectiveness/">Karpathy’s article</a>).<br>Generative models for chatbots still belong to the research sector, or to the playground of those who simply enjoy building and demoing test applications of their own models. <br>I believe that, for most business use cases out there, they are still not suited to a production environment. I cannot picture a client who wouldn’t bring up <a href="http://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist">Tay</a> if presented with a generative-model option.</p><h3>RNN Model</h3><p>A recurrent neural network is a deep learning model dedicated to the handling of sequences. Here an internal state is responsible for taking into consideration and properly handling the dependency that exists between successive inputs (<a href="http://machinelearningmastery.com/crash-course-recurrent-neural-networks-deep-learning/">crash course on RNNs</a>).<br>Apart from the relative elegance of the model, it&#39;s impossible not to get captured and fascinated by it, simply from the many online demos and examples showcasing its generative capabilities, from <a href="http://distill.pub/2016/handwriting/">handwriting</a> to <a href="https://arstechnica.co.uk/the-multiverse/2016/06/sunspring-movie-watch-written-by-ai-details-interview/">movie script generation</a>.</p><p>Given its properties, this model is really well suited to various NLP tasks, and it was exactly in the text-generation context that I started exploring it, playing with basic concepts using <a href="http://deeplearning.net/software/theano/">Theano</a> and <a href="https://www.tensorflow.org/">TensorFlow</a>, before moving to <a href="https://keras.io/">Keras</a> for the final model training. 
Keras is a high-level neural networks library that can run on top of either Theano or Tensorflow; but if you are willing to learn and play with the more basic mechanisms of RNNs and machine learning models in general, I suggest giving one of the other libraries mentioned a try, especially if following the <a href="http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/">great tutorials by Denny Britz</a>.</p><p>For my task I trained a sequence-to-sequence model at word level: I feed the network a list of words and expect a list of words as output. Instead of using a vanilla RNN, I used a long short-term memory (LSTM) layer, which guarantees better control over the memory mechanism of the network (<a href="http://colah.github.io/posts/2015-08-Understanding-LSTMs/">understanding LSTM</a>). The final architecture includes just two LSTM layers, each followed by dropout.<br>For now I still rely on one-hot encoding of each word, often limiting the size of the vocabulary (&lt;10000). A highly advised next step would be to explore the option of using word embeddings instead.</p><p>I trained the model on different corpora: personal conversations, books, songs, random datasets and movie subtitles. Initially the main goal was pure text generation: starting from nothing and generating arbitrarily long sequences of words, exactly like in <a href="http://karpathy.github.io/2015/05/21/rnn-effectiveness/">Karpathy’s article</a>. With my modest setup I still obtained fairly good results, but you can see how this approach doesn’t rest on the same assumptions as text generation for chatbots, which is in the end a question-answering scenario.</p><h4>Chatbot Training</h4><p>Question-answering is another big NLP research problem, with its own ecosystem of complex and component-heterogeneous pipelines. 
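</p><p>As a quick aside, the capped-vocabulary one-hot encoding described in the previous section can be sketched as follows (a simplification with illustrative names; slot 0 is reserved here for a hypothetical unknown-word token, which the actual preprocessing may handle differently):</p>

```python
from collections import Counter

UNK = '<unk>'  # illustrative placeholder for out-of-vocabulary words

def build_vocab(words, max_size=10000):
    """Index the most frequent words, reserving slot 0 for the unknown token."""
    most_common = Counter(words).most_common(max_size - 1)
    vocab = {UNK: 0}
    for i, (word, _count) in enumerate(most_common, start=1):
        vocab[word] = i
    return vocab

def one_hot(word, vocab):
    """Encode a word as a one-hot vector over the capped vocabulary."""
    vec = [0] * len(vocab)
    vec[vocab.get(word, vocab[UNK])] = 1
    return vec
```

<p>Word embeddings would replace these large sparse vectors with dense, learned ones, which is why they are the advised next step.</p><p>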
Even when focusing only on deep learning, <a href="https://cs224d.stanford.edu/reports/StrohMathur.pdf">different solutions with different levels of complexity exist</a>. What I wanted to do here is first experiment with my baseline approach and see the results for off-topic question handling.</p><p>I used the <a href="http://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html">Cornell Movie — Dialogs Corpus</a>, and built a training dataset based on the concatenation of two consecutive interactions that resembled a question-answer situation. Each such q-a pair ends up constituting a sentence of the final training set. During training the model gets as input a sentence with its last word removed, while the expected output is the same sentence with its first word removed.</p><p>Given such premises, the model is not really learning what an answer or a question is, but it should build an internal representation that can coherently generate text. It can do this either by generating a sentence from scratch starting with a random element, or simply by completing a seed sentence (the potential question) one word at a time, until predefined criteria are met (e.g. a punctuation symbol is produced). All the newly generated text is then retained and provided as a candidate answer.</p><p>You can find additional details and a WIP implementation in <a href="https://github.com/5agado/recurrent-neural-networks-intro">my Github repository</a>. All critiques and comments are more than welcome.</p><h3>Architecture</h3><p>Interfacing with the chatbot is as simple as sending a message on Facebook Messenger, but the complete solution involves several heterogeneous components. 
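</p><p>The training-set construction described above can be sketched in a few lines (a simplification with illustrative names; the actual preprocessing in the repository may differ):</p>

```python
def make_training_pairs(dialog_lines):
    """Concatenate consecutive interactions into q-a sentences, then derive
    (input, target) pairs: input drops the last word, target drops the first."""
    pairs = []
    for question, answer in zip(dialog_lines, dialog_lines[1:]):
        sentence = (question + ' ' + answer).split()
        pairs.append((sentence[:-1], sentence[1:]))
    return pairs
```

<p>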
Here is a minimal view of the current architecture:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*cw2IVdr5sGnnSRIuMS1mCQ.png" /><figcaption>Chatbot Solution Architecture</figcaption></figure><p>Data processing and RNN model training were carried out on a Spark instance hosted on the <a href="http://datascience.ibm.com/">IBM Data Science platform</a>. I interfaced with it directly via Jupyter notebooks, which simply rock!<br>Using the Keras callbacks system I automatically kept track of model performance during training, and backed up the weights when appropriate. At the end of each training run the best snapshot (model weights) was then persistently moved to the Object Storage connected with the Spark instance, together with the Keras model architecture and additional data related to the training corpus (e.g. vocabulary indexing).</p><p>The second piece is the model-as-a-service component: a basic Flask RESTful API that exposes the trained models for text generation via REST calls. Each model is a different endpoint and accepts different parameters to use for the generation task. Examples of parameters are:</p><ul><li><strong>seed</strong>: seed text to use for the generation task</li><li><strong>temperature</strong>: an index of variance, or how much “liberty” you want to give to the model during prediction</li><li><strong>sentence minimum length</strong>: minimum length acceptable for a generated sentence</li></ul><p>Internally this application is responsible for retrieving and loading in memory the models from the remote Object Storage, such that they are ready to generate text when the corresponding endpoints are called.</p><p>The final component is a Java Liberty web application which acts as a broker for the <a href="https://developers.facebook.com/products/messenger/">Facebook Messenger Platform</a>. This is responsible for handling Facebook webhooks and subscriptions, storing users’ chat history and implementing the answering-task logic. 
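</p><p>As an aside on the <strong>temperature</strong> parameter: a common way to implement it (a sketch, not necessarily the exact code behind the API) is to rescale the predicted word distribution before sampling, so low values stick to the most likely word while high values allow more adventurous picks:</p>

```python
import math
import random

def sample_with_temperature(probs, temperature=1.0):
    """Sample an index from a distribution rescaled by temperature."""
    logits = [math.log(max(p, 1e-12)) / temperature for p in probs]
    peak = max(logits)  # subtract the max for numerical stability
    weights = [math.exp(l - peak) for l in logits]
    threshold = random.random() * sum(weights)
    cumulative = 0.0
    for index, weight in enumerate(weights):
        cumulative += weight
        if threshold < cumulative:
            return index
    return len(weights) - 1
```

<p>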
On one side it relies on the system described in my previous article, using IBM Watson services like Language Recognition and Conversation; on the other, when specific requirements are met or when no valid answer has been produced, it can fall back on the text-generation part and call the Flask API at the most convenient endpoint.</p><p>Both the Java and the Python app are hosted on Bluemix, and for the former I’m currently working on covering additional messaging platforms like Slack, Whatsapp and Telegram.</p><h3>Show Me the Bot!</h3><p>You can interact with the chatbot simply via Facebook Messenger, but making your bot public (usable by whoever reaches its page) requires some work, demonstration videos and subsequent official approval from Facebook. For now I have to manually add people as testers of my app to allow them to use it, so in case you are interested, just drop me a note.</p><p>Nevertheless, I have to admit that seeing the interactions of the few current testers was already a pretty entertaining experience, a good emotional roller-coaster of amusement, shame, creepiness, pride…</p><p>Let’s start with some mixed results (on the right the bot replies, on the left some friends’ inputs, nicely anonymized).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/397/1*OOeTSfi8LgTOzSFFmnVWfQ.png" /></figure><p>Notice the mixed behavior here: the “marry me” response is based on Natural Language Classification, while the rest are all generated. 
Some sentences are grammatically wrong, but at the same time I was, for example, pleasantly impressed by the second answer, fooled into reading a sense of conscious omnipotence into it.</p><p>Sometimes replies seem totally random, but they still build a nice interaction, a simulation of a shy, confused and ashamed personality, maybe a bit romantic too.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/397/1*DhzHde-i1SRWlWQ2Irvqgg.png" /></figure><p>Given the training data, it also learned proper punctuation, so it’s likely to reply with a sentence starting with a punctuation symbol if the previous input didn’t end with one itself.<br>It can also come up with some seemingly deep stuff, only to then fail miserably, especially given the momentarily raised expectations.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/404/1*dC0zElT36MqV6fpmjidUGA.png" /></figure><p>Notice that there is no context retention between answers; it’s simply not built into the models or the system. It is just the interaction flow that gives this illusion, and it can sometimes coincidentally give a really good impression:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/396/1*tjHDTnGL_xG0XSveU_Fcdg.png" /></figure><p>I am aware that there is no breakthrough in all this, and the results might be “meh”, but after all, so many people get crazily excited about their “baby’s first words”, which to my knowledge are way below the bar set here…</p><p>My decent understanding of the mechanisms behind it, while observing it talk, makes everything even more fascinating. 
It feels irrationally surprising how much it can formulate, with no actual semantic knowledge of what it is being told or what it is replying, just statistical signals and patterns… and I wonder, after all, how many people might actually be working in pretty much the same way, just with a more powerful borrowed computation instance in their skull, and a longer and richer training run on their shoulders.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/398/1*Kf8qZuZ2wK-5VKHXGJBJ_g.png" /></figure><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=2038f7f34636" width="1" height="1"><hr><p><a href="https://medium.com/towards-data-science/personality-for-your-chatbot-with-recurrent-neural-networks-2038f7f34636">Personality for Your Chatbot with Recurrent Neural Networks</a> was originally published in <a href="https://medium.com/towards-data-science">Towards Data Science</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Data Manipulation and Visualization with Pandas and Seaborn — A Practical Introduction]]></title>
            <link>https://medium.com/@5agado/data-manipulation-and-visualization-with-pandas-and-seaborn-a-practical-introduction-d7891773b534?source=rss-8615d974dee1------2</link>
            <guid isPermaLink="false">https://medium.com/p/d7891773b534</guid>
            <category><![CDATA[python]]></category>
            <category><![CDATA[data-science]]></category>
            <category><![CDATA[jupyter-notebook]]></category>
            <category><![CDATA[data-visualization]]></category>
            <category><![CDATA[pandas]]></category>
            <dc:creator><![CDATA[5agado]]></dc:creator>
            <pubDate>Mon, 20 Feb 2017 13:38:10 GMT</pubDate>
            <atom:updated>2017-02-20T13:38:10.091Z</atom:updated>
            <content:encoded><![CDATA[<p>This article is really just a Jupyter notebook, with embedded explanations, comments and working examples. I was hoping for a better embedding system or rendering from the Medium guys, but hey, that’s life. Everything is there though, so feel free to play with it and give me your feedback. Happy Data Sciencing(?)!</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/fe7db6cf6a5d069c6495c643eeae69ae/href">https://medium.com/media/fe7db6cf6a5d069c6495c643eeae69ae/href</a></iframe><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d7891773b534" width="1" height="1">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Analysis of a Personal Public Talk]]></title>
            <link>https://medium.com/@5agado/analysis-of-a-personal-public-talk-9218c429e172?source=rss-8615d974dee1------2</link>
            <guid isPermaLink="false">https://medium.com/p/9218c429e172</guid>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[quantified-self]]></category>
            <category><![CDATA[ibm-watson]]></category>
            <category><![CDATA[fitbit]]></category>
            <category><![CDATA[data-science]]></category>
            <dc:creator><![CDATA[5agado]]></dc:creator>
            <pubDate>Mon, 12 Dec 2016 13:40:25 GMT</pubDate>
            <atom:updated>2016-12-12T13:49:51.723Z</atom:updated>
            <content:encoded><![CDATA[<p>I recently gave a talk about <a href="https://medium.com/@5agado/a-quest-for-better-sleep-with-fitbit-data-analysis-5f10b3f548a#.z1snko79u">my analysis on Fitbit sleep data</a> at the <a href="https://www.meetup.com/Quantified-Self-Dublin/">Dublin Quantified Self meetup</a>. Being a Quantified Self meetup, it seemed more than appropriate (if not obligatory) for me to “quantify” and analyze all the data I gathered and generated during the talk.<br>I will here explore two kinds of data: heart-rate measurements from my Fitbit and a transcript of my speech.</p><p>This article is supplemented with a <a href="https://gist.github.com/5agado/15ff55d4729d63f53cec492a933738a8">Jupyter notebook</a>, which explores the code and methods used for obtaining the results I illustrate here. I relied on common Python libraries (Pandas, Sklearn, NLTK, and Seaborn for visualization) and <a href="http://www.ibm.com/watson/developercloud/speech-to-text.html">IBM Watson APIs</a> for the speech-to-text task.</p><h3>Heart Rate</h3><p>It’s always a fun exercise to monitor your heart rate in uncommon, out-of-the-comfort-zone events. We are often aware of our state: we feel the stress, agitation, and palpitations! But we most likely lose focus on this internal state eventually, because something else requires or shifts our attention.</p><p>Tracking your heart rate allows you to partially observe your body’s reactions to a situation even when the situation is long gone; it gives you the opportunity to approach your past inner behavior with a neutral, analytical mindset.</p><p>Here is my heart rate for the whole day of Tuesday 22nd November 2016:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*qHAR0vUoytvpQTqRQOM-Tg.png" /><figcaption>Plot of my Fitbit heart-rate measurements for 2016–11–22. 
One entry per minute</figcaption></figure><p>The average value is 81 heart beats per minute (bpm), with lowest and highest measurements of 53 and 134 bpm respectively.</p><p>Here is a zoomed view around the time-frame during which I gave the talk:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3OV0Jo1n1sGlbB4XXS5VYg.png" /></figure><p>I marked each relevant point with a letter:</p><ul><li>A: arrived at the location of the event (preceded by a 15-minute fast walk)</li><li>B: start of the first speaker’s talk. Here I’m just sitting and listening, also most likely consciously stressing myself by trying to relax</li><li>C: on the stage, start of my talk</li><li>D: start of the Q&amp;A session</li><li>E: back to the chair</li></ul><p>I have already seen a couple of similar graphs for comparable situations: like many others, I was simply not able to avoid that peak. As you might have noticed, that’s exactly the highest value I reached for the day, equaled only by a later quick run after a couple of pints. It is also interesting to observe how my heart rate dropped steadily as soon as I started presenting. I remember being initially highly aware of my words, of what I was saying, before simply giving way to “auto-pilot mode”.</p><p>Theoretically, the more talks I give, the more I present in public, the “better” that graph should get. This is my first personal dataset of this kind, but I hope to collect much more data in the following years. It is not only practice/experience that I will have to take into consideration, but also context, aging, and God only knows how many more possible explanatory variables.</p><h3>Speech Analysis</h3><p>For the speech analysis, I am going to consider only the actual transcript of my talk. 
A lot of data will be lost because of this decision, and I’m not talking just about possible inaccuracies of the speech-to-text results… here is a list of important aspects of public speaking which are lost when considering only a basic textual representation:</p><ul><li>Speech Rate: words spoken in a minute, speed and pauses</li><li>Body Language (posture, gestures, eye contact, facial expressions)</li><li><a href="https://www.ted.com/talks/julian_treasure_how_to_speak_so_that_people_want_to_listen">Voice (register, timbre, prosody, pace, pitch, volume)</a></li></ul><p>That’s surely a lot, but let’s see what we can do and get from the basic text, and leave all these aspects to a later stage.</p><h4>Speech To Text</h4><p>First things first: speech-to-text. I didn’t do it on the spot using tools for real-time generation of text; I instead relied on the video-recording setup for the event. <br>I got my video file, extracted the audio content, cropped the unnecessary parts (including the Q&amp;A session) and went for a speech-to-text solution.</p><p>Well-known services for speech-to-text (with space for some free usage) come from big names like <a href="https://cloud.google.com/speech/">Google</a>, <a href="https://www.microsoft.com/cognitive-services/en-us/speech-api">Microsoft</a>, and <a href="http://www.ibm.com/watson/developercloud/speech-to-text.html">IBM</a>.</p><p>Google wanted my credit card number and the Microsoft APIs seemed unresponsive to my efforts. On the other hand, IBM’s results are really rich and, with various optional settings, each recognized word can be accompanied by a confidence level and start and end times (“beginning and ending time in seconds relative to the start of the audio”).</p><h4>Basic Text Analysis</h4><p>Considering just the pure textual info, I already ended up with this basic but neat summary of my talk:</p><blockquote>“Here is a summary of the conversation. Overall 3315 words have been said, of which 951 are unique, giving a lexical richness of 28.69%. 
<br>With the talk’s total duration of 21.8 minutes, the speech rate is 152 Words Per Minute (WPM).”</blockquote><p>A basic aspect one might want to explore in this circumstance is word usage: which are the most common words, bigrams, and trigrams used during the talk. Top results will most likely be common and low-informative constituents like articles, adverbs, and pronouns. For my results I found that the first significant word is <em>data</em>, ranking 65th among the most frequently occurring words, which might make sense given the topic of the talk.<br>To better explore the <em>actually</em> relevant words, you might first of all try to simply remove stop words, or rely on more specialized statistics like <a href="https://en.wikipedia.org/wiki/Tf%E2%80%93idf">tf-idf</a> (explained in more practical detail in the notebook).</p><h4>Words Alignment and Speech Rate</h4><p>Let’s consider again the speech-to-text results from the IBM Watson service; here is what the first five rows (out of 3315, one for each word) of the cleaned results look like:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/387/1*ukvpn6I1ryxLakZC-AP2DA.png" /></figure><p>The alignment info can be used to overcome some of the limitations of pure text analysis. The following histogram should provide an approximate view of my speech rate trend; the entire conversation has been split into 10 bins of equal size (time) and the <em>y</em> value is the count of words which fall in the corresponding bin.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*kwAziZJmp_tFQ4JUYJy3gA.png" /><figcaption>Histogram for binning on time_end variable. Equivalent to a binned word count</figcaption></figure><p>You can notice, for example, a slight but constant decrease in my speech rate. Fitting a regression line through these points I obtain a coefficient of -5.23. Considering the 10 bins used, each bin has a size of 2.18 minutes. 
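</p><p>The binning and regression just described can be reproduced with a few lines of plain Python (a rough sketch with illustrative names; in practice the end-times come from the Watson alignment data, and numpy/sklearn would be the natural tools):</p>

```python
def bin_counts(word_end_times, n_bins=10):
    """Split the talk into equal time bins and count the words ending in each."""
    total_time = max(word_end_times)
    counts = [0] * n_bins
    for t in word_end_times:
        index = min(int(t / total_time * n_bins), n_bins - 1)
        counts[index] += 1
    return counts

def slope(ys):
    """Least-squares slope of ys against their indices 0..n-1."""
    n = len(ys)
    mean_x = (n - 1) / 2
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(ys))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den
```

<p>Applied to the ten bin counts, the slope is the -5.23 coefficient above, i.e. a per-bin (2.18-minute) change in word count.</p><p>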
Very roughly, this is equivalent to saying that on average my speech rate decreased by 5.2 words every subsequent 2.1 minutes.</p><p>Even more generally, one could simply say that the more time passed, the slower I talked. We should then clarify what one means by “talking slower”. There are two options I can think of:</p><ul><li>speaking fewer words, which can be caused by two factors: more spacing/pauses between words, or the usage of longer words, which take more time to be pronounced</li><li>lower word speed (measured as the length of a word <em>w</em> divided by the time to pronounce <em>w</em>)</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/814/1*mBPjsNQqDFxuq9Xe8P5WPw.png" /><figcaption>Scatterplots for each derived measure. X axis is the bin index</figcaption></figure><p>Based on the results from linear regression fitting and correlation coefficients, an additional summary for the alignment part would be:</p><blockquote>Your average speech rate is 152 Words Per Minute (WPM), but an approximately constant and significant decrease can be observed, bringing you from an initial WPM of 166 to a final value of 142. 
The primary cause is the usage of increasingly longer pauses between words, secondarily reinforced by a combination of longer words and a tendency to slow down the pronunciation of words as the talk unfolds.</blockquote><p>Finally, another interesting addition in the results from the IBM APIs is the presence of a specific keyword: <em>%HESITATION</em>, which unfortunately doesn’t seem to be well documented, but should represent fillers and speech disfluencies such as “Uh”, “Ah”, “Erm”, “Um”, etc.<br>Here is a visualization of the occurrences of this keyword.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/898/1*pZ5s5EuaRre_uDOthIKasw.png" /><figcaption>Violinplot showing the distribution of %HESITATION occurrences</figcaption></figure><p>I want to stress again that all the text analysis demonstrated here depends entirely on the quality of the speech-to-text results, which, considering the setup, audio quality, and my brief observations, is way below optimal. At the same time, the proposed framework is a reusable one, which I’m definitely planning to further expand and put to use on future data of — hopefully — higher quality. As usual, all feedback, critiques, and corrections in particular, are more than welcome.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=9218c429e172" width="1" height="1">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[I share your pain on this.]]></title>
            <link>https://medium.com/@5agado/i-share-your-pain-on-this-16aca495ac7f?source=rss-8615d974dee1------2</link>
            <guid isPermaLink="false">https://medium.com/p/16aca495ac7f</guid>
            <dc:creator><![CDATA[5agado]]></dc:creator>
            <pubDate>Tue, 08 Nov 2016 09:34:50 GMT</pubDate>
            <atom:updated>2016-11-08T09:34:50.498Z</atom:updated>
            <content:encoded><![CDATA[<p>I share your pain on this. If you are interested in a conversation scraper, or want to further analyze your conversations, here is my Python project: <a href="https://github.com/5agado/conversation-analyzer">https://github.com/5agado/conversation-analyzer</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=16aca495ac7f" width="1" height="1">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Hi Justin,]]></title>
            <link>https://medium.com/@5agado/hi-justin-3a31d6ec414e?source=rss-8615d974dee1------2</link>
            <guid isPermaLink="false">https://medium.com/p/3a31d6ec414e</guid>
            <dc:creator><![CDATA[5agado]]></dc:creator>
            <pubDate>Fri, 16 Sep 2016 17:47:24 GMT</pubDate>
            <atom:updated>2016-09-16T17:47:24.004Z</atom:updated>
            <content:encoded><![CDATA[<p>Hi Justin,</p><p>First of all, great article! Lots of good info and resources to work on.<br>After my analysis on Fitbit sleep data, I am now working on my heart-beat measurements, and your article provided me with many insights and ideas.</p><p>I will also have to dive deeper into the blood-related biomarkers; as you pointed out, they are harder to get, but definitely valuable!</p><p>Again, great post! Keep up the amazing work!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=3a31d6ec414e" width="1" height="1">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[I soon became frustrated trying to replicate the graph data call of the Fitbit website.]]></title>
            <link>https://medium.com/@5agado/i-soon-became-frustrated-trying-to-replicate-the-graph-data-call-of-the-fitbit-website-a219f5900374?source=rss-8615d974dee1------2</link>
            <guid isPermaLink="false">https://medium.com/p/a219f5900374</guid>
            <dc:creator><![CDATA[5agado]]></dc:creator>
            <pubDate>Sun, 28 Aug 2016 16:55:06 GMT</pubDate>
            <atom:updated>2016-08-28T16:57:02.194Z</atom:updated>
            <content:encoded><![CDATA[<p>I soon became frustrated trying to replicate the graph data call of the Fitbit website. I searched again for easier and better solutions, and ended up <a href="https://community.fitbit.com/t5/Web-API/Intraday-data-now-immediately-available-to-personal-apps/td-p/1014524">on this Fitbit community post</a>, explaining a simplified way for a user to collect all their data via the official APIs. Briefly speaking, you have to create a Fitbit app on the website, configure it as described in the previous link and implement an OAuth flow in order to obtain the access tokens needed for the API calls. <br>If you use Python, I suggest relying on <a href="http://python-fitbit.readthedocs.io/en/latest/index.html#fitbit.Fitbit.intraday_time_series">the official Python implementation of the Fitbit API</a> and taking a look <a href="http://blog.mr-but-dr.xyz/en/programming/fitbit-python-heartrate-howto/">here</a> for a clear explanation of what needs to be done to get everything set up and running.<br>I have to admit that this ended up being easier and cleaner than the previous scraper solution; still, you need to keep a couple of things in mind:</p><p>1. You have to write some code to clean the JSON data returned by the API, and extract the info relevant to you<br>2. It seems the API has a limit of 150 calls per hour, so you might need to repeat the operation several times depending on how much and what data you need</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a219f5900374" width="1" height="1">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Very good article!]]></title>
            <link>https://medium.com/@5agado/were-good-article-2b94b50ed3d3?source=rss-8615d974dee1------2</link>
            <guid isPermaLink="false">https://medium.com/p/2b94b50ed3d3</guid>
            <dc:creator><![CDATA[5agado]]></dc:creator>
            <pubDate>Fri, 05 Aug 2016 10:47:40 GMT</pubDate>
            <atom:updated>2016-08-05T10:47:40.089Z</atom:updated>
            <content:encoded><![CDATA[<p>Very good article!<br>It pushed me to finally build my own chatbot, turning my CV into something more interesting and interactive. <br>I used Watson Services and the Facebook Messenger Platform, and posted a short article here: <a href="https://t.co/yWU9JThZBx">https://t.co/yWU9JThZBx</a></p><p>Thanks again Esther</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=2b94b50ed3d3" width="1" height="1">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building a Personal Virtual Assistant. Step 1: Your CV as a Chatbot]]></title>
            <link>https://medium.com/@5agado/building-a-personal-virtual-assistant-step-1-your-cv-as-a-chatbot-a4381fce6983?source=rss-8615d974dee1------2</link>
            <guid isPermaLink="false">https://medium.com/p/a4381fce6983</guid>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[chatbots]]></category>
            <category><![CDATA[bots]]></category>
            <category><![CDATA[facebook-messenger]]></category>
            <category><![CDATA[cv]]></category>
            <dc:creator><![CDATA[5agado]]></dc:creator>
            <pubDate>Tue, 12 Jul 2016 13:04:09 GMT</pubDate>
            <atom:updated>2016-07-12T13:04:09.007Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/360/1*sGPAf2g_k6iSmAc6Sbv9qA.png" /><figcaption>Chit-chatting</figcaption></figure><p>I currently work as a Solution Engineer for IBM Watson in Dublin, so I took advantage of the situation to finally build my own chatbot. Needless to say, for the task I relied on <a href="https://www.ibm.com/watson/developercloud/">IBM Watson Developer Cloud</a>, using services like Natural Language Classifier (NLC), Dialog and Language Detection. I then wrote some code to tie these services together and hook the final solution to the <a href="https://developers.facebook.com/products/messenger/">Facebook Messenger Platform,</a> hosting the actual broker application on Bluemix. I am not going to explain in detail here how I achieved that; there are already many good posts about this. I personally followed and suggest <a href="https://developer.ibm.com/bluemix/2016/05/26/bot-for-facebook-messenger-using-bluemix/">this entry</a>. Still, if you are interested in a more technical and detailed explanation, feel free to leave a comment or <a href="http://5agado.github.io/">contact me directly</a>.</p><h4>Let them speak!</h4><p>It seems like chatbots and conversational agents are a big trend lately (<a href="http://venturebeat.com/2016/06/29/rise-of-the-chatbots-and-why-you-should-care/">Rise of the Chatbots</a>, <a href="http://www.nbcnews.com/tech/innovation/what-are-chatbots-why-does-big-tech-love-them-so-n572201">Chatbots NBCNews</a>, <a href="https://medium.com/@Conversate/natural-language-apis-for-bots-e791f090e32f#.3nibdwql2">Available APIs Review</a>). While all these articles sure made me interested in the subject, I have to confess that <a href="http://lifehacker.com/how-i-turned-my-resume-into-a-chat-bot-1775565350">this one in particular</a> spurred me to actually build something. Because yes, what better use case than a bot knowledgeable simply about yourself, its own creator? 
In all the projects I worked on, the corpus, knowledge base, ground truth and the like really required a lot of work and experts, depending on the context, subject and coverage; but if it’s about me, well, I alone should know enough! In her article Esther focuses on an overview of “Not-Too-Technical Solutions”, but she makes many good points; that’s why I also decided to focus on the recruiting aspect, turning my CV and related info into the knowledge domain/base of the bot. You can ask generic questions about me, like how old I am or where I’m from, as well as more specific ones, like what my expertise and professional interests are, or whether I am interested in a particular job offer.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/360/1*kI7Z8cwY8Um_MVNLpQRK8A.png" /><figcaption>Examples of personal and CV-related Q&amp;A</figcaption></figure><h4>Chit Chat</h4><p>Around this corpus of personal data, questions and answers, I played with and modeled a bit of the so-called chit-chat, all those utterances that make the conversation much more human and less database-query-like: greetings, compliments, insults, generic questions and off-topic remarks.<br>Good advice for this part is to add variation and a bit of randomness, but don’t try too hard to fool the user into thinking your bot is a human; in most cases, there is no point in that. Be sincere from the start and put some personality in your bot, but don’t overdo it: the user knows she’s facing a bot, and it doesn’t take much effort to verify that, especially if you want to deliver something useful and consistent. 
Actually, if you know about some really good bot I can easily test for free, please let me know; I am curious to try my personal list of bot weak points on products that are out there.</p><p>All this part has been managed mainly via the <a href="http://www.ibm.com/watson/developercloud/dialog.html#">Dialog service</a>, but I am now willing to try a new experimental service called <a href="http://www.ibm.com/watson/developercloud/conversation.html">Conversation</a>, which should unify the dialog part and the actual language classification and understanding parts.</p><h4>UI and Facebook Messenger Platform</h4><p>While I enjoyed setting up the structure, logic and data, I got annoyed pretty quickly playing with the UI part: it is not my thing, but at the same time I appreciate beautiful design, so I could not settle for a cheap and quick solution of my own; I needed an alternative. I checked again all those articles about chatbots and revisited some of the suggested platforms; the Facebook one seemed the most immediate and intuitive to set up and, most importantly, didn’t require bothering my friends/testers with “download this app” or “go to this website”: I just told them “hey, try to chat with this <em>friend of mine</em>”.</p><p>Since my bot is a simple service accessible via API, I just had to implement and deploy the broker code, which in fact can easily be extended to cover other platforms (e.g. Slack, Telegram, WhatsApp) and services (e.g. Tone Analyzer, Visual Recognition).</p><p>Notice that making your bot public requires some work and subsequent official approval from Facebook, so for now I have to manually add people as testers of the app if I want them to actually chat with it; otherwise they will simply sit there waiting indefinitely for a reply. 
Again, if you are interested, let me know.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/306/1*VHqML3ohOhHyMyc2oDeQqA.png" /><figcaption>Hello, how can I help you today?</figcaption></figure><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a4381fce6983" width="1" height="1">]]></content:encoded>
        </item>
    </channel>
</rss>