PocketLLM now supports AI-Assisted Private Search for your Email and Browsing History.

Anshu · Published in ThirdAI Blog
3 min read · Dec 14, 2023

Unlock hyper-personalized semantic search over your emails and browsing history with ThirdAI’s PocketLLM. PocketLLM provides the highest possible level of air-gapped privacy: the data and the query never leave your device, so even Google won’t know what you are searching for. Download here

It’s likely we’ve all experienced the frustration of searching for an email we know exists, yet it doesn’t show up even after multiple attempts with Gmail or Outlook’s built-in search. I know there is an email chain where we were “discussing pasta and an interesting research idea was proposed,” but I cannot find it anymore. Similarly, we often struggle to recall something we read online recently and cannot get back to the URL, because it is now buried among the hundreds of thousands of URLs in our browsing history.

Introducing ThirdAI’s PocketLLM: Fully Private and Fully Local Search for Windows and Mac

We are happy to announce AI-driven upgrades for searching inboxes and browsing history on personal laptops and desktops. We can now describe in natural language what we are looking for, and a fully local AI engine returns results in a fraction of a second, without any data or query leaving the device. Feel free to include as much detail in the prompt as you like to help the AI search the content.

AI-Assisted discovery in your inbox and drafting responses with simple prompts
Private AI-Assisted discovery of your browsing history

Complete Privacy of Data and Query:

ThirdAI’s efficient AI technology enables this AI to use only the compute resources on your local device. All computations, email and browsing data, and results never leave your device. Once the data is downloaded, everything stays local. That means even Google won’t know what you are searching for! Well, that is how it should be; after all, email and browsing history are your personal and private data, and you should not be throwing them over the internet all the time.

Optional OpenAI ChatGPT 3.5 Integration: PocketLLM has a completely optional feature that leverages ChatGPT 3.5 to convert the search results into more human-like responses to what you are looking for. You can even draft replies based on input prompts. To unlock these features, you need to provide your OpenAI key. Note that enabling OpenAI by providing your own key will send the displayed information, along with the query, to OpenAI’s ChatGPT service, so that option is obviously not private.

World’s First AI-Driven Search with Real-Time User-Driven Personalization

We have come to the realization (see this case study) that without personalization and constant refinement, AI will not keep up with users’ expectations. With PocketLLM, this has never been a problem. Take full liberty to nudge the AI and personalize it, occasionally or frequently, with feedback to align (or rather re-align) it with your changing needs.

Why was this not done before? Well … it’s hard without ThirdAI’s NeuralDB.

In the age of generative AI and ChatGPT, we might wonder why “semantic” and “personalizable” search isn’t available in established applications like Gmail and Outlook. The answer lies in ThirdAI’s one-of-a-kind, technologically superior AI software, built on 10 years of academic research.

ThirdAI NeuralDB enables two key capabilities that would be prohibitively expensive to achieve with existing AI-driven search ecosystems:

1. Real-time, lightweight processing on local hardware: All computations, including indexing, pre-training, querying, and personalization fine-tuning, occur on a laptop’s 2–3 CPU cores. This enables real-time query processing and personalization while maintaining privacy and democratizing access.

2. Extremely lightweight memory footprint: PocketLLM utilizes NeuralDB, the world’s only semantic search system that doesn’t require embedding storage. This means we only need the neural model itself, regardless of document size (1 page or 10,000 pages). Just about 1GB of your laptop’s RAM is sufficient for everything. Additionally, perpetual fine-tuning is possible thanks to fast processing capabilities.
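To see why skipping embedding storage matters, here is a back-of-envelope sketch. The 768-dimensional float32 embeddings and one-chunk-per-paragraph granularity are illustrative assumptions (typical of common embedding models, not NeuralDB internals), but they show how a conventional vector store’s memory grows with the corpus while a model-only footprint stays fixed:

```python
# Rough memory cost of storing dense embeddings for a vector index.
# Assumed: 768-dim float32 vectors, one embedding per text chunk.
def embedding_store_bytes(num_chunks: int, dim: int = 768, bytes_per_float: int = 4) -> int:
    """Bytes needed just to hold the embeddings (index overhead excluded)."""
    return num_chunks * dim * bytes_per_float

# A small inbox: ~1,000 chunks -> about 3 MB. Manageable.
print(embedding_store_bytes(1_000))       # 3_072_000 bytes (~3 MB)

# A large browsing history: ~10 million chunks -> about 30 GB,
# far beyond a laptop's RAM, whereas a fixed model stays ~1 GB.
print(embedding_store_bytes(10_000_000))  # 30_720_000_000 bytes (~30 GB)
```

The exact constants vary by embedding model, but the scaling is the point: embedding stores grow linearly with the corpus, while an embedding-free approach keeps a corpus-independent footprint.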

Existing RAG or vector-search ecosystems face two major challenges:

  • Compute-intensive embedding models: Embeddings require significant compute to generate and significant memory to store, making them hard to scale.
  • Prohibitive real-time fine-tuning: Updating embedding models is slow and requires rebuilding the VectorDB, which makes user-driven modification prohibitively expensive.

PocketLLM, powered by NeuralDB, overcomes these limitations. Our advanced AI technology enables perpetual feedback-driven retrieval refinement and personalization, all with a mere few CPU cores, setting it apart from the competition.
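To make the feedback-driven refinement idea concrete, here is a minimal toy sketch. This is not ThirdAI’s actual API; the class and method names are hypothetical, and the scoring is a deliberately simple word-overlap stand-in for a real retrieval model. What it illustrates is the key property described above: user feedback shifts future rankings immediately, with no index rebuild:

```python
from collections import Counter

class FeedbackRetriever:
    """Toy retriever whose rankings adapt instantly to user feedback."""

    def __init__(self, docs: dict[str, str]):
        self.docs = docs          # doc id -> text
        self.boost = Counter()    # doc id -> accumulated feedback weight

    def _score(self, query: str, doc_id: str) -> float:
        # Base relevance: naive word overlap (stand-in for a learned model).
        query_words = set(query.lower().split())
        doc_words = set(self.docs[doc_id].lower().split())
        return len(query_words & doc_words) + self.boost[doc_id]

    def search(self, query: str, k: int = 3) -> list[str]:
        ranked = sorted(self.docs, key=lambda d: self._score(query, d), reverse=True)
        return ranked[:k]

    def upvote(self, doc_id: str, weight: float = 1.0) -> None:
        # Feedback takes effect on the very next query -- no rebuild step.
        self.boost[doc_id] += weight
```

In a real system the feedback would fine-tune the retrieval model itself rather than add a per-document boost, but the user-visible behavior is the same: nudge a result, and subsequent searches reflect it right away.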

Important Links

Download and More Info here

Discord Channel

NeuralDB Information here

Anshu

Professor of Computer Science specializing in Deep Learning at Scale and Information Retrieval. Founder and CEO of ThirdAI. More: https://www.cs.rice.edu/~as143/