Information theory and computation is the study and development of protocols and algorithms for solving problems and analysing information. This discipline typically breaks information down into individual bits and then determines the optimal logical operations needed to process those data efficiently for the task at hand.
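As a minimal illustration of information measured in bits (the function and values here are illustrative, not drawn from any article on this page), the Shannon entropy of a discrete distribution gives the average number of bits needed to encode one symbol:

```python
import math

def shannon_entropy(probs):
    """Average information content, in bits per symbol, of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries exactly 1 bit per flip...
print(shannon_entropy([0.5, 0.5]))   # 1.0
# ...while a biased coin carries less, so its outcomes can be compressed further.
print(shannon_entropy([0.9, 0.1]))   # ~0.469
```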
The creation and purification of magic states can be a limiting step in quantum computing. An error-correcting code has now been found for which the overhead of this process reaches the lowest value allowed by theory, showing that optimal performance can be achieved.
A recent study demonstrates through numerical simulations that running large language models built on sparse mixture-of-experts architectures on 3D in-memory computing hardware can substantially reduce energy consumption.
The Fisher information imposes a fundamental limit on the precision with which an unknown parameter can be estimated from noisy data, as Dorian Bouchet explains.
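The limit in question is the Cramér–Rao bound. A standard statement for an unbiased estimator of a single parameter (notation chosen here for illustration, not taken from the article itself) is:

```latex
% Cramér–Rao bound: the variance of any unbiased estimator \hat{\theta}
% of a parameter \theta is bounded below by the inverse Fisher information.
\[
  \operatorname{Var}(\hat{\theta}) \;\geq\; \frac{1}{I(\theta)},
  \qquad
  I(\theta) = \mathbb{E}\!\left[\left(\frac{\partial}{\partial\theta}
      \ln p(x \mid \theta)\right)^{\!2}\right]
\]
```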
Nonlinearity is crucial for sophisticated tasks in machine learning but is often difficult to engineer outside of electronics. Encoding the inputs in the parameters of a linear system lets it realize efficiently trainable nonlinear computations, as sketched below.
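A toy sketch of that idea (purely illustrative; the specific linear system and variable names are assumptions, not taken from the underlying paper): drive a system that is linear in its state, but let the input set the system's internal parameters. The steady-state response then depends nonlinearly on the input, even though the dynamics remain linear.

```python
import numpy as np

def linear_system_response(x, A, b):
    """Steady-state response r of a *linear* system, (A - diag(x)) r = b.

    The system is linear in its state r, but because the input x enters
    through the system's parameters (the diagonal), the map x -> r(x)
    is a nonlinear function of the input.
    """
    return np.linalg.solve(A - np.diag(x), b)

rng = np.random.default_rng(0)
A = 5.0 * np.eye(3) + rng.normal(size=(3, 3))  # fixed, well-conditioned coupling
b = rng.normal(size=3)                          # fixed external drive

x1 = np.array([0.1, 0.2, 0.3])
x2 = 2 * x1

# Doubling the input does not double the response: r(x) is nonlinear in x,
# even though the system itself is linear in its state.
print(linear_system_response(x2, A, b))
print(2 * linear_system_response(x1, A, b))
```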
Petros Koumoutsakos argues that the intellectual space between AI and computational science is home to exciting opportunities for scientific discovery.
In an age of expensive experiments and hype around new data-driven methods, researchers understandably want to ensure they are gleaning as much insight from their data as possible. Rachel C. Kurchin argues that there is still plenty to be learned from older approaches without turning to black boxes.