Broadcasting the continuum of science

PK
6 min read · Apr 23, 2013


I think we need a better tool for communicating ideas.

The problem with journal papers is that they are discrete, static and conservative while science is essentially a continuous, dynamic and unpredictable process. So, we need an instrument that would allow, along with printing IMRAD articles on paper or uploading PDFs to arXiv (come on, it’s 2013 out there!), to broadcast the full continuous flow of scientific inquiry.

Here is how it should look. First, anything that comes up in your lab or crosses your mind should be easily publishable, shareable, browsable and searchable in one place, including raw data sets, source code, graphs, formulas and text.

All sorts of otherwise unpublishable material (negative results, failed experiments, stalled theorem proofs, peer reviews) is more than welcome: it is a precious source of raw ideas, and if you don't know what to do with it now, maybe tomorrow someone from a different field will find a completely unexpected application for it. It happens.

Now, to avoid getting lost in this ocean of information (sorry for the cliché), we introduce the notion of bits of sense. One bit of sense is an elementary part of an academic narrative which can be referenced and commented upon without extra context. It can be an unexpected twist in a theorem proof, a beautiful new formula, an indicative graph point, an important conclusion, a questionable row in a spreadsheet, an interesting couple of passages, or a whole block of text worth 10,000 words. Anybody, including the author, can decide what is important in the work and highlight it, i.e. give a precise digital link to it, as opposed to a vague analogue reference to the whole paper.
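As a data model, a bit of sense is just a precisely addressable span inside a published artifact. A minimal sketch of this idea (all names and the URL scheme here are hypothetical, not an existing API):

```python
# Minimal sketch of a "bit of sense": a precisely addressable fragment
# of a published work that can be linked to and commented on directly.
# All field names and the URL scheme are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class BitOfSense:
    work_id: str         # the paper, dataset, or notebook it lives in
    start: int           # offset (character, cell, or row) of the fragment
    end: int
    highlighted_by: str  # anyone, including the author, may highlight
    comments: list = field(default_factory=list)

    def permalink(self) -> str:
        # a precise digital link, as opposed to citing the whole paper
        return f"https://example.org/{self.work_id}#{self.start}-{self.end}"

bit = BitOfSense("arxiv-1234.5678", 1024, 1300, "reviewer-7")
# bit.permalink() -> "https://example.org/arxiv-1234.5678#1024-1300"
```

The point of the sketch is only that a highlight is addressable on its own, independently of the containing paper, so it can accumulate references and comments directly.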

Of course, we could come up with tons of add-ons like, for example, theorem and lemma libraries or data mining tools, but the core is as simple as that.

So, what are the benefits of this publishing platform, if implemented?

  1. Coauthoring. Now it's crystal clear who contributed what. The first-author problem, a harsh one today, becomes nonexistent.
  2. Thanks to bits of sense, bibliometrics becomes more meaningful. Everybody has their own gems in a good paper and their own questions about a bad one (call these minus bits). If there's nothing to highlight, there is no science there, no matter how many pages it runs to.
  3. You can track your own cached train of thought, including failures, systematically in a convenient form. So can your colleagues.
  4. The scientific process is now open to students and amateurs (as a next step for MOOCs?) — you'd rather follow a live quest for truth than read instructions in a textbook.
  5. It goes without saying that now you have Basecamp fine-tuned for academia.
  6. Future employers and grant-givers can eventually see your whole career in the fullest, most objective way: not only your achievements, but how you worked on them, along with sketches of all stages and the unfinished work that may account for the majority of your effort so far. Essentially it's your personal portfolio, far more informative than a CV or a set of polished articles.

Finally, to have an aggregated understanding of all this math streaming under the hood, we obviously need new sense-visualization media, probably with better visual and thinking metaphors for abstract STEM concepts than those of the pencil-and-paper age. Something in this direction:

http://vimeo.com/67076984

Bit of sense coins

There is one more aspect to it: the idea of crowdfunded science might turn out to be not as stupid as it looks now. Yes, the crowd is by nature short-sighted, emotional, on average poorly educated and interdependent in its thinking, so crowdfunding for science sounds almost like an oxymoron and, more importantly, nonsense. You can't make Riemannian manifolds look sexy to a crowd and hence popular with it. We can observe this happening at places like Microryza, where pundits get funded to investigate “Why are jokes funny?”, “How do spammers harvest your e-mail address?” and something about pandas.

On the other hand, a decentralized and much deeper network of web pockets would come in quite handy for cutting grant-giving red tape and for funding more than just opportunistic, conformist, mainstream topics.

So the trick is to maximize the scientific effect of web donations, be they worth $10 or several million, to the mutual satisfaction of all parties and without compromising research quality. As a solution we need some competent redistribution mechanism as an interlayer between the crowd and academia. A good candidate for this role is a system of bits of sense.

First, the scientific inquiry of all participating academics is broadcast live as described above. If you want to support this work, you donate, not directly, but to a special account where money accumulates over some given period, say a month, while scientists award each other bits of sense, in essence issuing and distributing a virtual academic currency. Of course, some regulation is needed here, e.g. blocking fraud such as self-awarding. At the end of the period we thus have two pools: donated money and accumulated bits of sense. Now we can calculate an exchange rate, and each academic can convert his well-deserved bits of sense into real crowd-donated money and spend it on whatever research he competently deems important, be it his own or a colleague's. The community has an interest in issuing this merit-based virtual currency fairly: not too little, so that a few cannot capture all the donations, and not too much, so as not to dilute the purchasing power of one bit of sense. Earned bits of sense are transferable. Later on, a moving exchange rate based on previous periods could be introduced, though this is subject to discussion.
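The settlement step just described reduces to a pro-rata calculation at the close of each period: the exchange rate is the donation pool divided by the total bits awarded. A minimal sketch, assuming one pool and one period (the function and names are illustrative, not a real system):

```python
# Sketch of the end-of-period settlement described above (illustrative only).
# Donations accumulate in one pool while academics award each other
# "bits of sense"; at period close, one bit converts to pool / total_bits.

def settle(pool_dollars, awards):
    """awards: mapping of academic -> bits of sense earned this period.

    Returns each academic's payout at the computed exchange rate.
    """
    total_bits = sum(awards.values())
    if total_bits == 0:
        return {}  # nothing awarded, nothing paid out
    rate = pool_dollars / total_bits  # dollars per bit of sense
    return {who: bits * rate for who, bits in awards.items()}

payouts = settle(10_000, {"alice": 30, "bob": 50, "carol": 20})
# rate is 10_000 / 100 = 100 dollars per bit:
# alice receives 3000.0, bob 5000.0, carol 2000.0
```

Note the built-in incentive the text mentions: issuing more bits lowers the rate for everyone, which is exactly the community's reason to keep issuance neither too tight nor too loose.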

At the end of the day everybody is satisfied. Donors are happy and confident that they contributed to something fundamentally important, even if they don't understand it. An academic is glad to have obtained funding easily and without encumbrances, either on the strength of his reputation among peers as a reflection of his contribution to science, or as a transfer from at least one peer who thinks his ideas are undervalued and deserve a chance.

P.S.

— You once compared the whole building of mathematics with a tree, Hilbert’s tree, with a metric structure expressing closeness or nearness between different areas and results. We know from Kurt Gödel that there are parts of that tree we will never reach. On the other hand, we have a grasp of a certain part of the tree, but we don’t know how big this part is. Do you think we know a reasonable part of Hilbert’s tree? Is the human mind built for grasping bigger parts of it or will there stay areas left uncharted forever?

— Actually, I am thinking about that now. I don’t know the answer, but I have a program of how we can approach it. It is a rather long discussion. There are certain basic operations by which we can perceive the structure. We can list some of them, and apparently they bring you to certain parts of this tree. They are not axioms. They are quite different from axioms. But eventually you cannot study the outcome with your hands and you have to use computers. With computers you come to some conclusions without knowing the intermediate steps. The computational size will be too huge for you. You have to formalize this approach to arrive at certain schemes of computations. This is what I think about now but I don’t know the answer.

Interview with Mikhail Gromov

Two recent articles by Mikhail Gromov describe ergosystems, i.e. systems which build themselves and evolve “ergologically” by sifting the incoming data flow in search of “interesting structures”. Gromov begins their study in the realm of human cognitive and mental activity, but they can of course potentially describe a broader range of complexities. Essentially it's a search for the next level of mathematical abstraction. Meanwhile, Vladimir Voevodsky has started the Univalent Foundations of Mathematics program, aimed at creating a computational replacement for ZFC based on homotopy type theory. Building on Voevodsky's foundations, Gromov's approach could then be applied, in particular, to creating ergological automated theorem provers that run at a human level of abstraction while leveraging a machine's computational power, which in turn could yield applications in systems like IBM Watson as well as interesting theoretical findings.

Chances are that in the coming several years the level of abstraction will rise, so that we leave simple yet massive proofs and conclusions to computers, while our task will be to conclude based on the machine's conclusions. The system described above could then prove useful: the ideas and approaches it collects might well become a good sandbox for Gromovian/Watsonian machine learning.
