Building better access
Investing in our existing technology to enable automated collection access. (Apr 2)

The anecdote machine
We’re always considering new ways of engaging with the ACMI collection, using legible technology, with interfaces that require minimal… (Feb 12)

Image embeddings and audio captions
This month we released an iteration on two of our collection exploration tools — our video search and works explorer. (Oct 1, 2024)

Language model explorations with our collection
How we built a natural language collection chat server using LangChain and our Public API. (Jun 27, 2024)

Embeddings and our collection
How we built our new Works Explorer using vector embeddings of our collection records. (Oct 27, 2023)

Search inside our videos — Part 2: content discovery
Extending the ACMI video search by looking inside image frame content using VideoMAE, YOLOv8, and BLIP-2. (Aug 4, 2023)

Search inside our videos — Part 1: dialogue discovery
After the first pass at transcribing our video collection was complete, we started prototyping a way to search those transcriptions. (Mar 27, 2023)

Collection video transcriptions at scale with Whisper
The ACMI Labs team has spent the past few weeks prototyping, building, and integrating automated video transcriptions into our museum… (Dec 9, 2022)

ACMI x DALL·E mini
We thought we’d pass the ACMI collection metadata to DALL·E mini and see what images it might generate… (Jun 27, 2022)