Doing Research has, until recently, been synonymous with finishing an MS or PhD program and getting a researchy job. That route is hardly a level playing field: there are plenty of gatekeepers along the way, and you need luck to get past each of them.


What gets lost in the noise about degrees, exams, applications and funding is the individual drive to do something special, to achieve mastery, to make an impact.

Passion, Curiosity, Persistence, Rigor.
Discipline, Communication, Aiming for Impact.
Problem Solving, Desire to be an Expert.
Coding, Mathematical analysis.

At OffNote Labs, we welcome self-motivated, driven…


The Shape of U?

Developers spend a lot of time deciphering the structure of incoming data before they can transform it. Consider the following code, which requests data from a particular url.

import requests

resp = requests.get(url, params=options)  # request data from url (url, options defined elsewhere)
data = resp.json()                        # parse the JSON body into Python objects
# data: {'f1': .. , 'f2': [{...}, {...}, ...]}

What is the shape of the returned data here? A schema for the incoming data is missing, so your best option is to print data and/or guess the structure (a small probing helper is sketched below).

The inability to describe and probe data shapes systematically makes writing data transformers and pipelines very hard.
* Data takes heterogeneous forms: JSON, XML, Table, Tensors…
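For illustration, here is a minimal, hypothetical helper (not from any library mentioned here) that summarizes the approximate shape of a JSON-like value instead of dumping it whole:

def shape_of(x, depth=3):
    # Hypothetical helper: summarize the nested structure of a JSON-like value.
    if depth == 0 or not isinstance(x, (dict, list)):
        return type(x).__name__
    if isinstance(x, dict):
        return {k: shape_of(v, depth - 1) for k, v in x.items()}
    # for lists, assume homogeneous elements and summarize the first one
    return [shape_of(x[0], depth - 1)] if x else []

print(shape_of({'f1': 1.0, 'f2': [{'a': 1}, {'a': 2}]}))
# -> {'f1': 'float', 'f2': [{'a': 'int'}]}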



Probing the Self for Fun and Profit

Self-supervision is having a moment. Explaining the difference between self-, un-, weakly-, semi-, distantly-, and fully-supervised learning (and of course, RL) just got exponentially harder. :) Nevertheless, we are going to try.

The problem, in context, is to encode an object (a word, sentence, image, video, audio, …) into a general-enough representation (blobs of numbers) which is useful (preserves enough object features) for solving multiple tasks: finding the sentiment of a sentence, translating it into another language, locating things in an image, increasing its resolution, detecting the text being spoken, identifying speaker switches, and so on.
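As a rough sketch of that idea (PyTorch; all names and sizes are made up for illustration), one shared encoder produces the representation, and small task-specific heads reuse it:

import torch
import torch.nn as nn

# One shared encoder yields a general representation; cheap task-specific
# heads reuse it for different tasks (hypothetical sizes).
encoder = nn.Sequential(nn.Linear(300, 128), nn.ReLU(), nn.Linear(128, 64))
sentiment_head = nn.Linear(64, 2)     # e.g., positive/negative
language_head = nn.Linear(64, 10)     # e.g., 10-way language id

x = torch.randn(8, 300)               # a batch of 8 input feature vectors
z = encoder(x)                        # shared representation: (8, 64)
sentiment_logits = sentiment_head(z)  # (8, 2)
language_logits = language_head(z)    # (8, 10)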

Given how diverse…


tldr: tsalib is a library for defining dimension names and named shape expressions for tensors. It allows shape labels on variables, shape assertions, and intuitive shape transformations using names, and works with arbitrary tensor libraries. Explicit shape annotations accelerate debugging of deep learning programs and improve developer productivity and code readability.

Source code available at the github repository.
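For context, here is the kind of manual bookkeeping such annotations are meant to replace (plain numpy, not the library's own API):

import numpy as np

B, T, D = 32, 10, 64                   # batch, time, hidden dims
x = np.random.randn(B, T, D)

# Without named shapes, intent lives only in comments and manual asserts:
assert x.shape == (B, T, D)
y = x.transpose(1, 0, 2)               # want (T, B, D); easy to get wrong silently
assert y.shape == (T, B, D)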

Update (Nov 2019): Check out our library tsanley to annotate and check named shapes on the fly. With tsanley, you can avoid writing explicit shape assertions and automatically annotate third-party deep learning code that you want to reuse.

Writing deep learning programs which manipulate tensors (e.g., using numpy, pytorch, tensorflow, keras ..)…


or How to use neural networks to find relevant products / answers?

I’ve long wondered why it is so hard to find the ‘right’ set of matching documents for my query, whether at an e-commerce site or a QA forum. We know of an array of query-doc ‘matching’ technologies, from syntactic TF-IDF to semantic word/neural embeddings. Then why, in the age of AI, are these matchers unable to figure out what my query, a document, or a product item really means?
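A toy illustration of the syntactic end of that spectrum (scikit-learn; the catalog is made up) shows the core weakness, that purely lexical matching cannot connect related words:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["wireless noise cancelling headphones",
        "usb-c charging cable",
        "bluetooth over-ear headset"]

vec = TfidfVectorizer()
doc_vecs = vec.fit_transform(docs)
query_vec = vec.transform(["noise cancelling headset"])

scores = cosine_similarity(query_vec, doc_vecs)[0]
print(sorted(zip(scores, docs), reverse=True))
# purely lexical overlap: 'headphones' and 'headset' never match each other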

Natural language based search is a ubiquitous technology — searching the web, or a product catalog, or answer…


or Hybrid Software 1.0 and 2.0

When devising an attack strategy for client problems, I’ve often observed that the best solution combines both the programming-by-example (machine/deep learning, Software 2.0) and the rule-based (RB, Software 1.0) flavors. My earlier article discusses this tug-of-war between the two extreme styles.

To recap: given a bunch of input/output pair examples, you need to guess a function f such that f(i) = o for each pair. So, do you write the rules in f yourself, or use an ML/DL algorithm that learns f from these input/output examples? Or, even better, combine the two to overcome the drawbacks of either?
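A toy sketch of the two styles (made-up pairs; scikit-learn stands in for the "learn f" route):

import numpy as np
from sklearn.linear_model import LinearRegression

pairs = [(1, 2), (2, 4), (3, 6)]       # made-up (i, o) examples

# Software 1.0: write the rule inside f yourself.
def f_rule(i):
    return 2 * i

# Software 2.0: fit f from the examples.
X = np.array([[i] for i, _ in pairs])
y = np.array([o for _, o in pairs])
f_learned = LinearRegression().fit(X, y)

assert all(f_rule(i) == o for i, o in pairs)
print(f_learned.predict([[4]]))        # ~[8.]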

Interestingly, for most client problems, I’ve…


or A More Liberal Approach to Software Design

This trending question has sparked many a debate. Instead of taking (DL vs non-DL) sides, I prefer a unifying (and liberal) view of software design.

Let us first distinguish between programs written via Deep Learning (or ML) and those written via traditional software engineering.

  • DL/ML/AI are all about writing programs (like we’ve been writing since the beginning of the era of computing) to solve problems and execute tasks. Writing traditional software = writing a sequence of rules (either as imperative actions or denotational constraints) in the syntax of a chosen programming language, mostly manually.
  • Writing (supervised)…


The buzz about machine learning or deep learning or AI is promising and deafening at the same time. Almost like a new beast in town, all set to take over everything. In this article, I’ll try to motivate the developments from the point of view of program synthesis — getting computers to write programs for us automatically. We will discuss how Deep Learning and Program Synthesis are intricately related, motivating why Deep Learning is relevant to every programmer, irrespective of their background.

Let us begin on a somewhat familiar, common ground:
The digital revolution — software eating industries.

Software

Nishant Sinha

Researcher, Consultant, Educator | Deep Learning, Reasoning | OffNote Labs, ex-IBM Research, Carnegie Mellon | nishant at
