🤖AI Diary #4
Topics in this issue include deep learning applied to radiology, language GANs falling short, integrating reasoning into AI models, improving evaluations for NLP-based clinical research, and more.
Challenges of applying deep learning for radiology
Radiology has recently seen significant gains from deep learning methods for tasks like action recognition and the automatic detection of brain injuries or skull fractures. AI researcher Sasank Chilamkurthy dives deep into the challenges of working with head CT scans and medical imaging, and the creative ways these challenges can be addressed. As reported in this blog post, the main challenge seems to lie in the processing and preparation of the medical images, since they require different processing techniques than the more common computer vision or natural language datasets and tasks.
Language GANs Falling Short
I didn’t come up with the heading above; that’s the actual title of a recent paper released by the AI research group known as MILA at the Université de Montréal. The paper makes several surprising observations about the limitations of GANs in the context of natural language generation (NLG) tasks. The main motivation for using GANs in place of maximum-likelihood (MLE) models trained with teacher forcing is that GANs don’t suffer from the so-called “exposure bias”. However, the authors found that the impact of exposure bias on sample quality is less severe than previously claimed or thought.
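To make the exposure-bias idea concrete, here is a minimal, hypothetical sketch (not from the paper; the toy "model" and tokens are made up). During teacher-forced training the model always conditions on the gold previous token, so a single wrong prediction doesn't corrupt later context; at inference time the model consumes its own predictions, so one error can compound:

```python
# Toy illustration of teacher forcing vs. free-running decoding.
# toy_next_token stands in for a learned language model; all tokens
# and the transition table here are made up for illustration.

def toy_next_token(prev_token):
    # An imperfect "model": after "the" it wrongly predicts "dog"
    # instead of the gold token "cat".
    table = {"<s>": "the", "the": "dog", "dog": "barked",
             "cat": "sat", "sat": "</s>"}
    return table.get(prev_token, "<unk>")

def decode(ground_truth, teacher_forcing):
    """Generate a sequence step by step.

    With teacher_forcing=True, the *gold* previous token is fed back
    at each step (as in MLE training); with False, the model consumes
    its own predictions (as at inference time)."""
    prev = "<s>"
    out = []
    for gold in ground_truth:
        pred = toy_next_token(prev)
        out.append(pred)
        prev = gold if teacher_forcing else pred
    return out

gold = ["the", "cat", "sat", "</s>"]
forced = decode(gold, teacher_forcing=True)
free = decode(gold, teacher_forcing=False)
```

Under teacher forcing the single mistake stays local, while the free-running decode drifts further from the gold sequence with each step — the mismatch the GAN-based approaches were meant to avoid.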
Concept extraction with energy functions
Abilities tied to human intelligence, such as abstract reasoning and planning, require the ability to convert experience into concepts. An OpenAI research team has released a new technique, based on energy functions, that enables agents to learn and extract concepts such as “near” and “above” from specific tasks and then reuse these learned concepts to solve other tasks in various domains. For instance, they experimented with concepts learned in a 2D particle environment to solve tasks on a 3D physics-based robot. The cross-domain transfer is interesting and could allow for further analysis of concepts and language understanding. Read more about this work here.
Want to keep track of the misuses of AI in society? This repository does a decent job of curating an updated list of potentially harmful AI systems appearing in the wild. Some notable examples include Deep Fakes and Fake News Bots.
NIPS conference name-change dilemma
A few weeks ago, the board of the Neural Information Processing Systems (NIPS) conference released the results of a survey they had issued earlier this year. Based on the feedback they received, they made the final decision not to change the name of the conference, which has sparked controversy and heated debate within the community. Leaders in the field have continued to raise concerns about the conference name and have even started their own petitions for a name change as an act of protest against the conference. This is a very delicate issue, but I truly believe that a name change is necessary. Even though we must always safeguard the progress of science, we also need to take care of the people doing the actual science, and if that takes changing the name of a prominent conference, then I believe that is the right choice to make. We should do our best to make underrepresented groups feel more welcome in the field. Below is the official letter released by the conference and re-shared by @hardmaru:
Efficient floating point math for AI hardware
A Facebook AI research team has developed a new technique that optimizes floating point math, making AI models run 16 percent more efficiently than when using standardized int8/32 math. In practice, this means such a technique could improve the speed of AI model training and simplify how models are quantized and deployed to production.
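For context, here is a minimal sketch of the kind of int8 quantization scheme this line of work aims to improve on — a simple symmetric per-tensor scheme, not Facebook's actual method; the function names are made up:

```python
# Hypothetical sketch of symmetric int8 quantization: floats are mapped
# to 8-bit integers with a single per-tensor scale, then mapped back.

def quantize_int8(values):
    """Quantize a list of floats to int8 using one shared scale."""
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from the int8 values."""
    return [x * scale for x in q]

weights = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
```

The round trip loses at most half a quantization step per value, which is why quantized models are smaller and faster at a small accuracy cost; rethinking the floating point format itself, as the Facebook team does, is a different route to the same efficiency goal.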
Unsupervised intent induction
One of the key areas of NLP is the development of conversational agents (also referred to as chatbots) that are able to hold smarter conversations. Building this type of AI technology requires understanding different aspects of a natural language dataset, such as intents and entities, in addition to solving the difficult task of tracking intent across back-and-forth conversation. The last phase is response generation, where the agent either selects a response from a predefined set of answers or produces automatically generated text. Read more here about how lang.ai, an NLP startup, is using unsupervised AI to induce intent, one of the challenging tasks in building natural language conversational agents.
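The pipeline described above can be sketched in a few lines. This is a deliberately naive, rule-based toy (not lang.ai's unsupervised approach); the intents, patterns, and canned responses are all invented for illustration:

```python
# Toy conversational-agent pipeline: intent detection followed by
# response selection from a predefined set. All intents, regex
# patterns, and responses here are hypothetical.
import re

INTENT_PATTERNS = {
    "book_flight": re.compile(r"\b(book|fly|flight)\b", re.I),
    "check_weather": re.compile(r"\b(weather|rain|sunny)\b", re.I),
}

RESPONSES = {
    "book_flight": "Sure, where would you like to fly to?",
    "check_weather": "Which city's weather should I check?",
    "fallback": "Sorry, I didn't understand that.",
}

def detect_intent(utterance):
    # Return the first intent whose pattern matches, else a fallback.
    for intent, pattern in INTENT_PATTERNS.items():
        if pattern.search(utterance):
            return intent
    return "fallback"

def respond(utterance):
    # Select a canned response keyed on the detected intent.
    return RESPONSES[detect_intent(utterance)]
```

Real systems replace the hand-written patterns with learned classifiers (or, as in lang.ai's case, induce the intents themselves without labels), but the detect-then-respond structure is the same.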
Deep learning approaches to understand human reasoning
To train machines to make better decisions and assertions that humans can understand, we must teach them to reason the way humans do about their experiences and interactions with their environment. Toward this goal, there have been many advancements, such as visual question answering (QA) and understanding visual relationships, that provide clues on how to improve deep learning systems by embedding knowledge. In fact, more recent methods such as one-shot learning can be used to reason effectively based on past knowledge, but they are still limited in terms of generalizability and learning from rare events. One possible solution is to make use of augmented memory so that models can learn more efficiently and reason faster. Read more on these types of systems here.
Reinforcement learning educational package
Earlier today, OpenAI released a new educational package, called Spinning Up in Deep RL, for those interested in learning about deep reinforcement learning. It includes an extensive list of algorithms and resources for effectively training deep reinforcement learning agents.
NLP for clinical informatics research
We have seen the success of integrating different NLP techniques into the design of conversational agents and recommendation systems; however, one of the promising areas where NLP will be heavily used in the future is clinical informatics research. This review paper looks at the different NLP methods used in clinical research and applications, and the challenges involved in evaluating them.
EMNLP 2018 roundup
This year’s EMNLP conference, one of the top venues for all things related to NLP, featured groundbreaking research papers ranging from capsule networks applied to NLP tasks to a remarkable number of new datasets such as HotpotQA and SWAG. Some wonderful people, including Sebastian Ruder, Claudia Hauff, and Patrick Lewis, put together their thoughts, reviews, and highlights on some of the important works presented in the beautiful city of Brussels this past week, where EMNLP was hosted this year. I participated in the conference as an oral presenter and was truly impressed with its organization and location. I summarize my takeaways in the tweet below:
Google open-sources Kubeflow Pipelines
Google wants to democratize AI. This is not the first time we have heard that statement, is it? Recently, in an effort to democratize access to AI and foster collaboration between businesses, Google Cloud announced a new platform called Kubeflow Pipelines, a machine learning toolkit that simplifies and scales the deployment of ML workflows on the cloud.
Quora has publicly released a new dataset called Question Sincerity, where the goal is to train a machine learning model to distinguish well-intentioned questions from provocative ones. The idea is to capture ill-intentioned questions, flag them, and remove them from the platform to reduce the harm such content could cause to the community. The dataset was also released on Kaggle as a competition.
One last quick thing: any sort of engagement (follows, shares, 👏👏👏, and feedback) will make a huge difference for the future and sustainability of the dair.ai publication. I would deeply appreciate any of that.