Data stored at the atomic level with super-high density
by Ryan Whitwam
Hard drive capacities are getting surprisingly large these days, but the density of data written to traditional materials can only take us so far. Researchers from Delft University of Technology in the Netherlands have devised a way to store data at the atomic scale by manipulating single atoms. The proof-of-concept atomic storage created for the study has a density of 500 terabits per square inch, roughly 500 times denser than the highest-capacity hard drives.
The atomic storage developed at Delft University of Technology is based on the scanning tunneling microscope (STM). The same basic technique has been used to move individual atoms around for decades. In 1990, physicist Don Eigler made news when he managed to spell out the letters "IBM" with 35 xenon atoms. Scientists have been experimenting with storing information in arrangements of atoms ever since, but the Delft study avoids many of the pitfalls of such methods by using a storage grid based on atomic vacancies rather than on the positions of the atoms themselves.
While it's possible to move atoms around, they don't necessarily like to stay put. The team from Delft realized that chlorine atoms deposited on a copper substrate would form a regular grid. Information could be stored much more reliably in this grid by using the STM tip to slide atoms around like the tiles of a sliding puzzle. The resulting pattern of vacancies encodes the bits of information. This is much more stable than attempting to arrange individual atoms; the team reports better than 99 percent data reliability.
The storage matrix the researchers produced shows what they were able to accomplish. The array is a few nanometers across and contains one kilobyte of data. There are 144 blocks, each one with atomic vacancies that encode bits of information. In this case, it's storing an excerpt from Richard Feynman's "There's Plenty of Room at the Bottom" lecture.
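The Delft team's actual bit layout isn't detailed here, but the basic idea of encoding data as vacancy positions can be sketched with a toy model: assume each bit is represented by which of two adjacent lattice sites in a pair is left empty, and each character of text becomes one block of eight such pairs. The site symbols and block layout below are illustrative assumptions, not the paper's scheme.

```python
# Toy sketch (not the Delft team's actual code): encode text as a grid of
# "vacancy positions", where each bit is represented by which of two
# adjacent lattice sites in a pair is left empty.
# 'X' marks an occupied site, '.' marks the vacancy.

def encode_byte(byte):
    """Return 8 site-pairs encoding one byte, most significant bit first.

    bit 0 -> vacancy in the first site of the pair  ('.X')
    bit 1 -> vacancy in the second site of the pair ('X.')
    """
    bits = [(byte >> i) & 1 for i in range(7, -1, -1)]  # MSB first
    return ["X." if b else ".X" for b in bits]

def encode_text(text):
    """Encode each character of the text as one 8-pair block."""
    return [encode_byte(ord(c)) for c in text]

blocks = encode_text("IBM")
print(blocks[0])  # block for 'I' (0x49 = binary 01001001)
```

Reading data back is then a matter of scanning each pair and noting which site is empty, which is essentially what the STM tip does when it images the grid.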
Unfortunately, this is the largest example of atomic storage the team has produced so far, and the final capacity isn't the only hurdle to overcome. Even with the improved stability of the vacancy approach, the storage medium still needs to be cooled with liquid nitrogen to -196 degrees Celsius (-320 degrees Fahrenheit). The pathfinding algorithm used to read data from the atoms is slow by storage standards as well; reading the 1KB block currently takes 10 minutes. Existing STM technology should be able to boost that to 1Mbps eventually. The other issues will take more work and the development of new technologies, but the team believes this has the potential to change the nature of data centers.
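To put those read-speed figures in perspective, a quick back-of-the-envelope calculation (assuming "1KB" means 1,000 bytes, since the article doesn't say) shows how far today's demonstration is from the 1Mbps target:

```python
# Rough arithmetic on the read-speed figures quoted in the article.
# Assumption: "1KB" is taken as 1,000 bytes; the exact definition isn't stated.

kb_bits = 1000 * 8       # bits in the demonstrated block
read_time_s = 10 * 60    # 10 minutes, in seconds

current_rate = kb_bits / read_time_s   # bits per second today
target_rate = 1_000_000                # the 1 Mbps the article mentions
speedup = target_rate / current_rate

print(f"{current_rate:.1f} bit/s now; {speedup:,.0f}x speedup needed for 1 Mbps")
```

In other words, the current read rate works out to roughly 13 bits per second, so reaching 1Mbps would require a speedup on the order of 75,000 times.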
Note: This article was originally published on ExtremeTech.com.