Abstract
Recent work has shown that it is possible to take brain images acquired while a subject viewed a scene and reconstruct an approximation of that scene from those images. Here we show that it is also possible to generate text from brain images. We began with images collected as participants read names of objects (e.g., "apartment"). Without accessing information about the object viewed for an individual image, we were able to generate from it a collection of semantically pertinent words (e.g., "door," "window"). Across images, the sets of words generated overlapped consistently with those contained in articles about the relevant concepts from the online encyclopedia Wikipedia. The technique described, if developed further, could offer an important new tool for building human-computer interfaces for use in clinical settings.
Cite this article
Pereira, F., Detre, G. & Botvinick, M. Generating descriptive text from functional brain images. Nat Prec (2011). https://doi.org/10.1038/npre.2011.5666.1