ResearchRabbit is out of beta: my review of this new literature mapping tool
What is Research Rabbit?
ResearchRabbit has been in beta for a while. Until recently, access was limited to an invite system, but that changed last week, and anyone can now try it. But what is it?
ResearchRabbit is in the class of products and services that I call “literature mapping tools”. There has been a recent explosion of tools in this area, thanks to the growing availability of scholarly metadata, in particular citations, and I have been tracking most of them here.
These tools, which might more accurately be called citation-based literature mapping services, generally accept one or more relevant seed papers and use various techniques (mostly citation-based) to recommend similar papers to add.
In a recent Medium post, I listed my top three favourite tools in this category, and in another post I did a deep dive. But how does ResearchRabbit measure up?
ResearchRabbit — a similar concept
Firstly, many of these tools, whether ConnectedPapers, Litmaps, Inciteful, or Citation Gecko, and now ResearchRabbit, draw on very similar sources of data. For example, like Litmaps, Local Citation Networks and Inciteful, ResearchRabbit draws on the data made available in Microsoft Academic Graph (MAG).
While the data sources are similar, the design space for innovation in recommendation algorithms and interface design (particularly the use of visualizations) is mostly unexplored, as many of these tools have only emerged recently. (I would identify Barney Walker’s Citation Gecko, released in 2018, as one of the first of its kind, though many more have appeared from 2020 onwards.)
So how does ResearchRabbit compare with its peers? In terms of general concept, it isn’t that different.
You start off with a “collection”, and as you add relevant papers (often called “seed papers” by similar tools) using the various functions the app provides (more on that later), it recommends more papers to add. All this is pretty standard for tools like this.
But as you will see, the way it executes the idea, particularly in the UI, is quite innovative.
A summary of ResearchRabbit
For the rest of the post, I will explain the functionality of ResearchRabbit in detail. But here’s a summary for those who just want an overall view before trying it out themselves.
One of the challenges of doing a literature review is that you are constantly switching between different modes of searching and browsing. You may find some relevant papers via keyword searching, or stumble upon some via Wikipedia or at a conference, which leads you to look at that paper’s references or citations, which may lead you to spot interesting authors whose publications you decide to check out, which leads to more citation mining… and you spiral down the rabbit hole.
ResearchRabbit is designed to help support and enhance such workflows as you stumble down this rabbit hole. (See why it is called Research Rabbit?)
Firstly, by using a novel and slick column-based interface, coupled with fast generation of options, ResearchRabbit aims to put you “in the flow” as you effortlessly move from paper to paper. Want to check all citations of a paper? Click a button and the results appear in a new column in a split second. Want to look at all references of a paper newly found in *that* column? One more click yields yet another column of papers, which you can drill into further by references, citations, and so on. You get the idea.
Unlike many of its peers, it is also one of the first tools to support co-authorship graphs, providing another dimension for the researcher to explore the literature forest and add publications by author.
I have often been skeptical about how useful visualizations are in this class of tools, and ResearchRabbit’s response to this is simple.
Because the tool supports, and in fact encourages, jumping from paper to paper, it is easy to find yourself totally lost after a few jumps. As such, the default visualization is a simple citation network in which papers already in your collection are colored green. This lets you get a sense of how far you have wandered off.
All in all, ResearchRabbit isn’t perfect. It has a bit of a learning curve compared to simpler one-shot tools like ConnectedPapers, and it can sometimes be hard to figure out how it is recommending items because the algorithm is a total black box (there isn’t even a brief description of how it works). But it’s the first tool I’ve seen so far that proposes a flow compatible with how experienced researchers think when they do their literature review.
Curious? The rest of the post is a deep dive into the tool and my impressions of it.
Working through an example in ResearchRabbit
In this example, I will work on a collection of papers about Microsoft Academic Graph (which is kind of meta, given that ResearchRabbit uses Microsoft Academic Graph).
Like most similar tools, you can add specific papers via DOIs/PMIDs, title search, or by importing papers via BibTeX or RIS.
You can also search with keywords. At the time of writing, the two search engines used are Lens.org and Pubmed.
I went ahead and added 10 or so papers on the topic to the collection to test it out.
Adding recommended papers to the collection — two major methods
Once you have added some relevant papers into the collection, you can look for more papers to add. ResearchRabbit provides two major methods.
Firstly, you can select any individual paper and explore from there, or you can look at recommendations ResearchRabbit gives for all the papers in your collection so far (or for multiple selected papers; more on that later).
Let’s start with exploring from individual papers.
Method 1 — Exploring from individual papers.
When you click on any individual paper (you can also select multiple papers but more on that later), this is where the uniqueness of ResearchRabbit starts to emerge.
Say we clicked on Harzing & Alakangas (2017) as the paper we want to focus on.
As you can see from the image above, as you drill down into Harzing & Alakangas (2017), ResearchRabbit creates a column to the right of the collection with details on that paper and options to explore further.
You can click “All references” to see all references of Harzing & Alakangas (2017), while “All citations” shows the papers citing it. “Similar work” does what you might expect, though it uses a black-box algorithm for which no details are given, even in rough terms.
We will talk about the other two author related options later.
Let’s click “All citations”, which generates yet another column of papers titled “All citations” and a visualization panel titled “Connections between your collection and 24 papers”. (24 is the number of citations of Harzing & Alakangas (2017) known to ResearchRabbit.)
The fact that ResearchRabbit not only creates a column of papers but also automatically opens a visualization pane is an interesting choice. I initially found it a bit jarring, but I can see that once you get used to it, it makes things more seamless.
The idea is: if the next thing you usually do after generating a new list of papers is, more often than not, to visualize it, why not do it automatically and save you a click? This shows a high degree of confidence in the usefulness of the visualization!
So I guess the question is: is the visualization panel useful? I’ve been a bit of a skeptic about the usefulness of visualizations in such tools, so this is an interesting question.
By default, the visualization creates a citation network graph, with green nodes representing papers already in your collection and blue nodes representing papers in the column to the left of the visualization, which in this case are citations of Harzing & Alakangas (2017).
As you mouse over papers in the “All citations” column, the corresponding node in the visualization is highlighted in yellow.
The idea is that as you consider whether to add one of these papers to your collection, you can see how related it is to your existing set of papers. Here we selected one paper from our collection, Harzing & Alakangas (2017), and are drilling into its citations, so of course pretty much everything IS directly connected to at least one green node. But as you go deeper (or explore by authors), you will start to see that this might not hold.
You can also switch to the timeline view of the same visualization, which, as you would expect, gives a similar picture (green nodes are papers in your collection, blue nodes are papers in the list you are currently considering), except that the vertical axis now represents year of publication.
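To make the coloring rule concrete, here is a minimal sketch of how such a graph view might classify nodes. The paper IDs and the green/blue/gray rule are my own illustration of the behaviour described above, not ResearchRabbit’s actual code:

```python
# Hypothetical node-coloring rule: green = already in the collection,
# blue = candidate paper in the column currently being browsed.
collection = {"harzing2017", "hug2017"}                      # papers already saved
current_column = {"hug2017", "thelwall2018", "visser2021"}   # papers being browsed

def node_color(paper_id, collection, current_column):
    """Collection membership wins, so a paper in both sets shows green."""
    if paper_id in collection:
        return "green"
    if paper_id in current_column:
        return "blue"
    return "gray"  # anything else would not be drawn

colors = {p: node_color(p, collection, current_column)
          for p in collection | current_column}
```

Note that a paper appearing both in the collection and in the current column comes out green, which is exactly the ambiguity I complain about below: nothing in the column itself tells you the paper is already saved.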
If any of the papers listed in this new column are of interest, you can drag them into your collection, or click on them and select “To this collection”.
Thoughts and suggestions on the paper networks
Firstly, the list of papers in the newest column (in this case, all the citations of just one paper in the collection) often includes both papers already in your collection and papers that are not. Mousing over each paper and checking whether its node is blue or green will sort you out, but this isn’t easy to see in a complicated graph. It would be a lot easier if such papers were marked as “in your collection” already, so you could ignore them.
As it is, I found myself clicking on an interesting paper and trying to add it to my collection, only to realise it was already there and I could only remove it, which is frustrating.
The other issue is that, if you look closer, there are actually two shades of blue (and probably of green). I have confirmed with ResearchRabbit that, despite the lack of documentation, the darker shades indicate newer papers; this needs to be documented.
Also, if you have multiple collections, they all show up as green nodes regardless of which collection they are from, which seems a missed opportunity. It would be more natural for papers from different collections to show up in different colors (with a split circle if a paper is in multiple collections), something its competitor Litmaps does nicely.
Lastly, a point about the UI. I initially found it confusing that “Visualize these papers” was automatically turned on (shaded blue), so that clicking it turns the visualization panel off.
It took me a few tries to figure out what it was doing: I clicked the button wondering what additional visualization it would produce, and it ended up turning the visualization panel off. Worse, this not only closes the visualization but also jumps you back to the previous column, which makes its function even less obvious.
Method 1, continued — ResearchRabbit also lets you navigate by authors
As I noted in my round-up of literature mapping tools, most of these tools currently focus on citation relationships, and there is room to consider including co-authorship relationships. ResearchRabbit is one of the first to do so.
Say, among all the citations of Harzing & Alakangas (2017), we select another paper, Hug & Brändle (2017).
This time, instead of exploring the references or citations, we click “These authors”. This opens the two authors of Hug & Brändle (2017) in yet another column and automatically opens a co-authorship network.
In this case the visualization isn’t too useful because there are only two authors, but we might notice that Hug seems to be an author worth looking at, and we can click through to see all his published works, which leads to a very familiar column and visualization again.
This time, you notice that while some of Hug’s publications are linked to papers in your collection (green nodes), there are two other clusters of papers that are not. This is normal, but clearly the blue nodes linked to green nodes are worth studying to decide whether to add them to your collection.
Just to recap in case you got lost: we started with a couple of seed papers in the collection, clicked on Harzing & Alakangas (2017), clicked “All citations” to get a list of 24 citing papers, one of which was Hug & Brändle (2017), and from there looked at all publications by Hug.
And of course you don’t have to stop there: you could add some papers from the latest column to your collection, select one, and continue the whole process again with “All citations”, “All references”, “Similar work”, “These authors”, “Suggested authors”, and so on.
Are you starting to see why this tool is called ResearchRabbit? It basically allows you to keep going down the rabbit hole via forward citations, backward citations, browsing authors, and more.
Thoughts and suggestions on authorship options
The suggested authors option is interesting: using an unknown algorithm, it suggests authors you might be interested in, lists their latest affiliations, and automatically opens a co-authorship graph.
I can see how this could help spot authors worth investigating further. Curiously, however, the co-authorship graph currently highlights all authors in one color: red.
It might be a good idea to highlight authors who already have papers in your collection in a different color, so you can choose to focus on or ignore them. This is analogous to the paper network, where papers in the collection are green. Clearly this is an area that can be expanded further.
Another interesting idea would be to help users identify interesting authors to look into. For example, tagging each author with the number of papers they have in the collection could be helpful. Some of this “smarts” could of course be embedded in the “Suggested authors” and “Similar work” algorithms and rankings, but a transparent way to rank or explore authors would be nice for users who want some control.
Method 2 — Exploring/recommendations from the collection or multiple papers
As you add more papers to your collection, you will see options start to light up in the collection options.
I initially expected the options to be the same as for individual papers, but that isn’t the case.
While options like “Similar work”, “These authors”, and “Suggested authors” are shared with the earlier method, you lose “All citations” and “All references”.
These are replaced by “Earlier work” and “Later work”.
Do note that if you select multiple papers instead of just one, the same options will appear as well.
Regardless of which option you choose, the same idea applies: you get exactly the same kind of column of papers and visualizations as before, so I won’t belabour the point by showing all the options, except for one example of selecting “Earlier work”.
Thoughts and suggestions
I personally miss the ability to look at “all citations” and “all references” of the papers in the column, which seems an obvious thing to include. It’s unclear to me whether this is missing because it would overload the app when there are too many papers, or because it simply isn’t as useful as I expect (probably the latter).
Somewhat more concerning to me are the three options: “Earlier work”, “Later work” and “Similar work”.
All three are based on opaque algorithms, which is alarming to those who want transparency. Personally, I am not usually a hardliner on this issue; after all, my view is that you don’t need to know exactly how Google Scholar’s algorithms work to use it well.
Here, though, I admit that in this particular scenario I am not too happy. Why?
While I don’t know exactly how Google Scholar’s algorithm works, I have a rough sense of it. I know it matches my keywords, with priority on matches in the title, and I know it heavily weights citations.
But looking at “Earlier work”, “Later work” and “Similar work”, I started wondering: wouldn’t “Earlier work” and “Later work” also be “Similar work”? Perhaps the first two are just “Similar work” split by time? But quick testing suggests you generally get far more results from “Similar work” than from the other two added together.
After corresponding with ResearchRabbit, they clarified that “Earlier work” and “Later work” use citation relationships, while “Similar work” uses those and something more. They also claim that the three algorithms usually generate suggestions that rarely overlap.
After playing with it for a while, I still don’t have much of an intuitive feel for these three options, except perhaps that “Similar work” tends to include papers a bit further away in conceptual space than the other two (less connected, or connected at a greater distance, in the paper graph). But given that “Similar work” often generates ten times more suggestions than “Earlier work” and “Later work”, that might simply be the larger numbers at work.
As it stands, they might as well be labelled algorithms 1, 2 and 3; I would just click them all, since it’s hard to see when to use each.
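Going by ResearchRabbit’s hint that “Earlier work” and “Later work” rely on citation relationships, here is one plausible reconstruction as a sketch. To be clear, this is entirely hypothetical: the toy citation index and the exact rules are my own invention, not ResearchRabbit’s code.

```python
# Toy citation index: paper -> set of papers it references (all IDs made up).
collection = {"seed_a", "seed_b"}
references = {
    "seed_a": {"old_1", "old_2"},
    "seed_b": {"old_2", "old_3"},
    "new_1": {"seed_a"},
    "new_2": {"seed_a", "seed_b"},
}

def earlier_work(collection, references):
    """Guess: papers the collection references (backward citations)."""
    found = set()
    for paper in collection:
        found |= references.get(paper, set())
    return found - collection

def later_work(collection, references):
    """Guess: papers that cite something in the collection (forward citations)."""
    return {p for p, refs in references.items()
            if p not in collection and refs & collection}
```

If the real algorithms resemble this at all, “Earlier work” surfaces the papers your collection builds on and “Later work” the papers that build on it, which at least matches the labels.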
Comparing ResearchRabbit to other similar tools
How does ResearchRabbit stack up against other similar tools?
First off, it is clearly aiming at a different set of users from the wildly popular ConnectedPapers. ConnectedPapers is designed as a “one shot” visualization tool: you enter a single seed paper and it generates a map/graph for you. You can also use it to find “Prior works” and “Derivative works” (basically seminal works and survey/review papers), but other than that it is a remarkably simple tool with few options. You don’t even need an account to use it (in fact, at the time of writing, you can’t create one).
In comparison, Litmaps encourages you to create an account. The idea is that you come back over and over to lovingly curate your maps as you add papers recommended by the system. Litmaps also offers multiple visualization options, letting publication year, title similarity, or citation counts drive the axes, plus options to change node size.
As such, Litmaps is the more appropriate tool to compare with ResearchRabbit. Both, for instance, have a weekly email alert for new potential papers to add, and both allow sharing and collaboration.
ResearchRabbit also lets you add notes to papers, which of course helps with collaboration.
Final evaluation of ResearchRabbit
For me, the challenge for these new literature mapping tools is twofold.
Firstly, can they come up with a flow that supports and even supplements the diverse ways researchers combine searching and browsing techniques in a natural way?
Secondly, can they provide intuitive and useful visualizations to support the researcher in making sense of the literature along the way?
The most obvious approach, which many have taken (including Citation Gecko), is to use the seed/input papers to iteratively suggest papers via co-citations or bibliographic coupling (similarity in references).
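For readers unfamiliar with these two classic citation-similarity measures, here is a minimal sketch of both over a toy citation index; the paper IDs and data are made up for illustration:

```python
# Toy citation index: paper -> set of papers/works it references.
references = {
    "p1": {"r1", "r2", "r3"},
    "p2": {"r2", "r3", "r4"},
    "p3": {"p1", "p2"},
    "p4": {"p1", "p2", "r1"},
}

def bibliographic_coupling(a, b, references):
    """Number of references two papers share (similar reading lists)."""
    return len(references.get(a, set()) & references.get(b, set()))

def cocitation(a, b, references):
    """Number of papers that cite both a and b (often seen together later)."""
    return sum(1 for refs in references.values() if a in refs and b in refs)
```

Coupling is fixed once a paper is published (its reference list never changes), whereas co-citation counts grow over time as new papers cite the pair, which is why the two measures surface different kinds of neighbours.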
Tools like Litmaps instead provide a way to combine keyword searches with connections to seed papers in existing maps, while Inciteful lets you see connections between any two papers.
While these approaches sound like they could work in theory, in practice I personally haven’t found them that compelling.
ResearchRabbit, I think, may be the first to succeed in a general way on the first question, while providing a partial answer to the second.
Instead of trying for some novel way of thinking about literature review, it goes back to basics.
As every experienced researcher knows, doing a literature review involves iterative rounds of searching and browsing.
You typically start off with a keyword search in Google Scholar to find some promising papers, or perhaps someone you trust has already given you good leads. Then, for each relevant paper, you might do any of the following:
a) Trace the relevant paper’s references (backward citation)
b) Trace the relevant paper’s citations (forward citations)
c) Look at other relevant works by authors who caught your eye
d) Redo the search or browse by keywords/subject headings found in relevant papers
ResearchRabbit recognises this and, with the exception of perhaps the last item (though this seems something they could do, since they already plug into search engines), it reduces the friction of doing these tasks within a single interface.
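Conceptually, moves (a) to (c) are each a single lookup over a paper index. A toy sketch, with made-up records, of what a tool like this does behind each click:

```python
# Hypothetical paper records: each entry lists references and authors.
papers = {
    "p1": {"refs": {"p0"}, "authors": {"hug", "braendle"}},
    "p2": {"refs": {"p0", "p1"}, "authors": {"hug"}},
    "p3": {"refs": {"p1"}, "authors": {"harzing"}},
}

def trace_references(pid):    # move (a): backward citations
    return papers[pid]["refs"]

def trace_citations(pid):     # move (b): forward citations
    return {p for p, meta in papers.items() if pid in meta["refs"]}

def works_by_author(author):  # move (c): browse an author's output
    return {p for p, meta in papers.items() if author in meta["authors"]}
```

Each lookup returns a new set of papers to browse, and chaining these calls is exactly the column-by-column drilling described earlier.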
Side note: I just noticed that while most academic search engines and citation indexes, including Web of Science, Scopus, Semantic Scholar, Lens, and scite, let you do both forward citation searching and backward citation searching (i.e., checking a paper’s references), Google Scholar lacks the latter feature.
On top of the reduced friction, visualizing the potential papers in relation to what is already in your collection, together with the guidance of the co-authorship graph, gives you further clues about the direction you should be going.
Of course, for a tool to reduce friction, it needs to be quick, intuitive and seamless to use. Has ResearchRabbit achieved this?
In general, most of the new literature mapping tools, such as Citation Gecko, Litmaps, ConnectedPapers and Inciteful, are targeted at researchers and are pretty slick and easy to use (as opposed to science mapping tools like CiteSpace and VOSviewer, which are used by bibliometricians), but I think ResearchRabbit is exceptional even by that standard.
While ResearchRabbit has a bit of a learning curve, for me personally it might be the slickest one yet. One of ResearchRabbit’s tricks that is easily missed is how quickly it lets you explore citations, references and so on and generates the visualization; this speed is one of the keys to reducing friction.
The fact that it opens the visualization panel by default whenever you select an option is, I believe, a statement of confidence in how fast ResearchRabbit works, and also a calculated attempt to pull the researcher into the flow of drilling into papers, down the rabbit hole.
Visualization-wise, they have chosen to keep it simple, showing only citation connections to the papers you already have, plus co-author networks, and it works. But I think this barely scratches the surface of what might be possible.
All in all, I think ResearchRabbit is a strong first entry into the arena, no doubt thanks to their long period of testing in closed beta.
While you can use ResearchRabbit like ConnectedPapers, entering a single seed paper and looking at recommendations, I wouldn’t suggest it, as I find ConnectedPapers makes better recommendations from a single paper.
But that’s okay, ResearchRabbit isn’t meant for that. ResearchRabbit vs Litmaps is a more interesting matchup. At this point, I think Litmaps has nicer and more customizable visualizations, but the mental model behind using ResearchRabbit would be closer to what researchers are used to.
Either way, these three tools, ConnectedPapers, Litmaps and ResearchRabbit, are in my opinion all worth considering.
Limitations of ResearchRabbit
The limitations of ResearchRabbit are very similar to those of other tools and services in its class.
These tools basically live or die by the scope of their index, and with Microsoft Academic Graph being discontinued at the end of 2021, this will eventually affect the quality of results, even though other open sources of data exist.
In the particular case of ResearchRabbit, more of the exploration, such as checking references, which in the past was done by hand, is now outsourced to the tool. This means any bias in the tool, for example failing to properly extract all the references of a paper, affects the findability of those references. For instance, due to the limited coverage of books in Microsoft Academic Graph, ResearchRabbit usually won’t show citations to or references from books.
Not to mention, of course, that tools like ResearchRabbit, being recommenders by nature, carry the danger of algorithmic bias and, depending on the technique used, might produce a Matthew effect, where papers already rich in citations get found and cited even more.
Of course, tools like ResearchRabbit that use black-box algorithms to recommend papers are more exposed to such issues. (This is not unusual, though some tools, like Inciteful and Citation Gecko, are more transparent about what is being done.)
Lastly, there is the question of the service’s business model; it isn’t quite clear yet what this will be. Again, not a question unique to ResearchRabbit.