Semantic Scholar’s new AI-powered TLDR feature, now available in beta, automatically generates extreme summaries to help you decide which papers are most relevant to your work.
TLDRs (Too Long; Didn’t Read) are super-short summaries of the main objective and results of a scientific paper, generated using expert background knowledge and the latest GPT-3 style NLP techniques. The feature is now available in beta for nearly 10 million computer science papers on Semantic Scholar, with more on the way.
Staying up to date with scientific literature is a key part of any researcher’s workflow, and parsing a long list of papers from various sources by reading paper abstracts is time-consuming. The new TLDR feature in Semantic Scholar puts single-sentence, automatically generated paper summaries right on the search results and author pages, allowing you to quickly locate the right papers and spend your time reading what matters to you.
For example, for the paper Simple and Effective Multi-Paragraph Reading Comprehension, our system produced this helpful and succinct TLDR:
“We propose a state-of-the-art pipelined method for training neural paragraph-level question answering models on document QA data.”
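For developers, paper summaries like the one above can also be consumed programmatically. Below is a minimal sketch of pulling the TLDR text out of a paper record, assuming a JSON payload shaped like the Semantic Scholar Graph API’s `tldr` field (the exact field names and endpoint behavior are assumptions based on the public API documentation, not part of this announcement):

```python
import json

# Sample payload shaped like a response from the Semantic Scholar Graph API
# (e.g. /graph/v1/paper/{id}?fields=title,tldr). The structure here is an
# assumption for illustration; consult the API docs for the live schema.
SAMPLE_RESPONSE = """
{
  "paperId": "abc123",
  "title": "Simple and Effective Multi-Paragraph Reading Comprehension",
  "tldr": {
    "model": "tldr@v2.0.0",
    "text": "We propose a state-of-the-art pipelined method for training neural paragraph-level question answering models on document QA data."
  }
}
"""

def extract_tldr(payload):
    """Return the TLDR text from a paper payload, or None if absent."""
    paper = json.loads(payload)
    tldr = paper.get("tldr")
    return tldr["text"] if tldr else None

print(extract_tldr(SAMPLE_RESPONSE))
```

Not every paper has a TLDR, so the helper returns `None` when the field is missing or null rather than raising an error.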
TLDRs help you make quick, informed decisions about which papers are relevant, and where to invest time in further reading. Additionally, TLDRs provide ready-made paper summaries for explaining the work in various contexts, such as sharing a paper on social media. In recent years, we’ve seen a dramatic shift from PCs to mobile phones: 25% of Semantic Scholar’s traffic comes from users on mobile devices. TLDRs are especially helpful in this context, since you can quickly decide if a paper is worth reading and then add it to your library to read later.
“This is one of the most exciting applications I have seen in recent years! Not only are TLDRs useful for navigating through papers quickly, they also hold great potential for human-centered AI,” says Mirella Lapata, AI2 Scientific Advisory Board Member and Professor in the School of Informatics at the University of Edinburgh. “Semantic Scholar has millions of users who can provide feedback, and help continually improve the technology underlying TLDRs.”
Using a model developed by the Semantic Scholar Research team, TLDRs are the latest feature on semanticscholar.org rooted in groundbreaking NLP research.
“People often ask why TLDRs are better than abstracts, but the two serve completely different purposes,” says Daniel S. Weld, head of the Semantic Scholar Research Group at AI2, Professor of Computer Science at the University of Washington, and co-author of TLDR: Extreme Summarization of Scientific Documents. “Since TLDRs are 20 words instead of 200, they are much faster to skim.”
The development of TLDRs on Semantic Scholar was motivated by a gap in previous research: much work has been done in extreme summarization, but prior to TLDRs it had not been applied to the scientific domain.
“Information overload is a top problem facing scientists,” says Johns Hopkins University PhD student Isabel Cachola, former Pre-Doctoral Young Investigator at AI2 and author of TLDR: Extreme Summarization of Scientific Documents. “Semantic Scholar’s automatically generated TLDRs help researchers quickly decide which papers to add to their reading list.”
To learn more about the innovative research powering the TLDR feature on Semantic Scholar, read the paper TLDR: Extreme Summarization of Scientific Documents from Isabel Cachola, Kyle Lo, Arman Cohan, and Daniel S. Weld from the Semantic Scholar team at AI2.
Head over to tldr.semanticscholar.org to learn more, and search Semantic Scholar for a computer science-related term to see TLDRs live in beta mode. Have feedback on this beta feature? Email us at email@example.com.
Follow @semanticscholar on Twitter for the latest updates.