Introduction

Artificial intelligence (AI) technologies are revolutionizing the landscape of scientific research in unprecedented ways, profoundly impacting fields such as physics1, chemistry2, biology3, climate science4, astronomy5, and neuroscience6. AI assists scientists in formulating hypotheses, designing experiments, efficiently processing and analyzing vast datasets, and uncovering insights that would be difficult or impossible to achieve with traditional methods alone7,8. At the heart of artificial intelligence for science (AI4S) is the powerful computational infrastructure provided by high-performance computing (HPC) libraries. Libraries like PyTorch9, TensorFlow10, and JAX11 offer the computational efficiency, flexibility, and hardware acceleration necessary to meet the rigorous demands of AI-driven research. However, despite their strengths, these libraries were originally developed for machine learning and deep learning applications, and generally lack critical features needed for scientific research, such as the proper handling of physical units.

A fundamental pillar of scientific research is the precise measurement of the physical world, which relies on well-defined unit systems12. For example, the metric system, with the meter as the standard unit for length, ensures the interoperability and comparability of measurement results across different studies. Without such unified standards, effective communication and comparison of scientific results would be nearly impossible. In the realm of scientific computing, the accurate application of physical units is not only a mark of scientific rigor but also essential for preventing dimensional errors and avoiding catastrophic failures, such as the Mars Climate Orbiter disaster caused by unit mismanagement13. Moreover, maintaining a consistent unit system throughout complex computations is vital for reinforcing physical constraints in models and ensuring that results remain physically plausible. Standardized unit systems, such as the International System of Units (SI)14, are invaluable for fostering global scientific collaboration, facilitating data sharing, and ensuring the accurate interpretation and reproducibility of research findings.

Despite their critical importance, current mainstream HPC libraries generally lack native support for physical units. This deficiency presents significant challenges for researchers applying advanced AI technologies to scientific discovery. In complex computational processes, even minor errors in unit management can lead to significant deviations in results. Consequently, researchers often resort to manual unit conversions, which not only increases operational complexity and the risk of errors but also consumes considerable time and effort that could otherwise be devoted to core research activities. Moreover, code that omits unit information is often difficult to interpret, leading to “magic numbers” that lack clear physical meaning. When computational anomalies occur, the absence of unit information significantly complicates troubleshooting and debugging. Furthermore, varying unit conventions across different research fields increase communication costs and create barriers to data and model sharing, hindering the cross-pollination of ideas that is often essential for groundbreaking discoveries15. Therefore, as the scientific community increasingly embraces AI-driven methodologies7,8, the need for standardized and comprehensive unit support within HPC libraries has become more urgent than ever.

To address this critical gap, we introduce SAIUnit, a unit system designed to seamlessly integrate physical units into high-performance scientific AI libraries, with a particular focus on JAX11. SAIUnit seeks to transform scientific computing by marrying the computational power of modern AI libraries with rigorous physical unit constraints. It provides a robust, efficient, and user-friendly framework for managing physical units, reducing potential errors, enhancing research productivity, and establishing unit standards. We delve into the design principles and implementation details of SAIUnit and its broad applications in numerical integration, brain modeling, and physics-informed neural networks (PINNs), illustrating how it can bridge the gap between the abstract world of high-performance computing and the physical realities that scientists strive to understand. As we stand on the brink of a new era in AI-driven scientific discovery7,8, SAIUnit represents a pivotal step towards fully harnessing the potential of AI to advance human knowledge and understanding.

Design principles

Towards a standardized physical unit system supporting HPC capabilities, SAIUnit was designed and implemented with several key considerations:

  • Standardization: Adopting SI units12 to ensure global scientific compatibility and standardized communication.

  • Consistency: Maintaining strict adherence to physical laws in the relationship between fundamental units (e.g., length, time, mass) and compound units (e.g., velocity, force, energy), ensuring logical consistency throughout the system.

  • Universality: Comprehensive support for known physical quantities, both fundamental (e.g., length, mass, time) and compound (e.g., force, energy, charge), with built-in extensibility to accommodate new physical quantities.

  • Accuracy: High-precision unit conversions and computations, particularly when handling data of extreme magnitudes.

  • Efficiency: Optimized implementation to minimize computational overhead for unit checking.

  • Type safety: Prevention of illegal operations between incompatible units, such as direct addition of time and length.

  • AI compatibility: Full compatibility with AI functionalities, including automatic differentiation, just-in-time (JIT) compilation, GPU/TPU acceleration, vectorization, and parallelization transformations.

Results

Automatic dimensional analysis with saiunit.Dimension structure

SAIUnit’s innovation centers on three interconnected data structures: saiunit.Dimension, saiunit.Unit, and saiunit.Quantity (Fig. 1). These structures are complemented by a unit-aware mathematical system that is compatible with AI functionalities (Fig. 2).

Fig. 1: Core data structures for physical unit processing.

a–d saiunit.Dimension enables dimensional analysis and conversion based on SI rules. a The Dimension structure represents the dimensionality of the seven base SI units shown in b. b The SI base units include: meter (m) for length, second (s) for time, kilogram (kg) for mass, ampere (A) for electric current, kelvin (K) for temperature, candela (cd) for luminous intensity, and mole (mol) for amount of substance. c Examples of Dimension instances for both base and derived dimensions. Base dimensions are represented by one-hot vectors, where the position of “1” corresponds to the relevant SI unit in b. Derived dimensions are formed by combining these vectors as described in d. d Operational rules for Dimension instances, including addition, subtraction, multiplication, division, and exponentiation. e–h saiunit.Unit provides representations for physical units. e The Unit structure is composed of a metric scale and a Dimension instance. While the default base factor is 10, a custom base factor can be specified. f The metric scale varies widely in the physical world, ranging from pico- to tera-scale. By adding this scale prefix, Unit can represent entities as small as atoms or as large as stars. g Examples of Unit instances for representing various physical units. h Operational rules for Unit instances, including addition, subtraction, multiplication, division, and exponentiation. i–l saiunit.Quantity provides an array-based interface for unit-aware computations. i The Quantity structure is composed of a numerical value A and a Unit instance. j The numerical value A can be implemented as Python numbers, NumPy arrays, or JAX arrays. k Examples of Quantity instances for representing physical quantities used in scientific computing. l Supported mathematical operations for Quantity include the basic arithmetic functions (e.g., addition, subtraction, multiplication, division, and exponentiation) and commonly used array functions (e.g., indexing, slicing, broadcasting), following the NumPy array syntax16.

Fig. 2: JAX-compatible physical units and unit-aware mathematical functions.

a Categories of physical units supported by SAIUnit, spanning over 2000 commonly used units in fields such as physics, chemistry, astronomy, geoscience, biology, psychology, materials science, and engineering. b Unit-aware mathematical functions in SAIUnit follow NumPy’s array programming conventions16. Supported operations include but are not limited to arithmetic, statistics, bitwise manipulation, linear algebra, array manipulation, searching, sorting, broadcasting, aggregation, and matrix operations. c All physical units and mathematical functions in SAIUnit are fully compatible with JAX transformations, including jax.grad for automatic differentiation, jax.vmap for vectorization, jax.pmap for parallelization, and jax.jit for JIT compilation.

saiunit.Dimension establishes a rigorous unit conversion system. It is based on SI units14, the most widely adopted standard measurement system globally. This system is founded on seven fundamental physical units: length (m), mass (kg), time (s), electric current (A), thermodynamic temperature (K), amount of substance (mol), and luminous intensity (cd) (Fig. 1b). In our implementation, Dimension is represented as a tuple of seven integers (Fig. 1a). Each integer in this tuple corresponds to the dimension of a base unit, where the value indicates the exponent to which that base unit is raised. Derived units, which measure other physical quantities such as force, electric charge, and speed, are created through mathematical combinations of these fundamental one-hot base vectors (Fig. 1c). The composition rules for Dimension, including addition, subtraction, multiplication, division, and exponentiation, follow the SI rules and are illustrated in Fig. 1d and methods. These rules ensure that all unit conversions and calculations maintain dimensional consistency.
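
To make these composition rules concrete, the following minimal sketch, written in plain Python and independent of SAIUnit’s actual implementation, represents a dimension as a 7-tuple of SI exponents and applies the multiplication, division, and exponentiation rules of Fig. 1d; all names are illustrative only.

```python
# Illustrative sketch: a dimension as a 7-tuple of SI exponents in the order
# (m, kg, s, A, K, mol, cd); this is not SAIUnit's internal implementation.
LENGTH = (1, 0, 0, 0, 0, 0, 0)   # meter
MASS   = (0, 1, 0, 0, 0, 0, 0)   # kilogram
TIME   = (0, 0, 1, 0, 0, 0, 0)   # second

def dim_mul(a, b):
    """Multiplying two quantities adds their dimension exponents."""
    return tuple(x + y for x, y in zip(a, b))

def dim_div(a, b):
    """Dividing two quantities subtracts their dimension exponents."""
    return tuple(x - y for x, y in zip(a, b))

def dim_pow(a, n):
    """Raising a quantity to the power n multiplies every exponent by n."""
    return tuple(x * n for x in a)

velocity = dim_div(LENGTH, TIME)                   # (1, 0, -1, 0, 0, 0, 0)
force    = dim_mul(MASS, dim_div(velocity, TIME))  # kg·m·s⁻² -> (1, 1, -2, 0, 0, 0, 0)
area     = dim_pow(LENGTH, 2)                      # (2, 0, 0, 0, 0, 0, 0)
```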

Physical unit representations with saiunit.Unit structure

Physical phenomena and objects can vary dramatically in size and magnitude. For instance, lengths can range from the subatomic scale of particles to the astronomical scale of galaxies. While the standard metric unit of length is the meter, it is often impractical to use this base unit for all measurements. In practice, physical units frequently incorporate prefixes that indicate their scale relative to the base unit. These prefixes are based on powers of 10, both positive (10^1, 10^2, 10^3, etc.) and negative (10^−1, 10^−2, 10^−3, etc.), as illustrated in Fig. 1f (Supplementary Table 1 provides a comprehensive list of metric prefixes). Therefore, saiunit.Unit is implemented to represent scaled physical units; it consists of a metric scale and a Dimension instance (Fig. 1e). This structure allows for the representation of a wide range of physical quantities across various scales (Fig. 1g). Users can dynamically switch to the most appropriate metric prefix, ensuring that quantities are always expressed in the most suitable units for the given context. The operations between two instances of Unit follow five fundamental rules (Fig. 1h and methods). Addition, subtraction, and comparison are only valid between units with the same dimension. Multiplication and division between units follow the rules of Dimension multiplication and division, with the scales being respectively multiplied or divided. Applying an exponent to a unit affects both its Dimension and its scale. These rules ensure that all operations on Unit instances maintain physical consistency and yield meaningful results across different scales and dimensions.
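
The same bookkeeping extends to scaled units. The sketch below, again a plain-Python illustration rather than SAIUnit’s code, pairs a base-10 metric scale with a dimension tuple and applies the rules of Fig. 1h: scales and exponents add under multiplication, both are multiplied under exponentiation, and addition or comparison requires identical dimensions.

```python
# Illustrative sketch of a scaled unit as (scale, dimension), where the unit
# stands for 10**scale times the corresponding SI base combination.
METER       = (0,  (1, 0, 0, 0, 0, 0, 0))   # 10^0 m
MILLIMETER  = (-3, (1, 0, 0, 0, 0, 0, 0))   # 10^-3 m
MILLISECOND = (-3, (0, 0, 1, 0, 0, 0, 0))   # 10^-3 s

def unit_mul(u, v):
    """Scales add and dimension exponents add."""
    (su, du), (sv, dv) = u, v
    return (su + sv, tuple(x + y for x, y in zip(du, dv)))

def unit_pow(u, n):
    """Both the scale and every dimension exponent are multiplied by n."""
    s, d = u
    return (s * n, tuple(x * n for x in d))

def same_dimension(u, v):
    """Addition, subtraction, and comparison require identical dimensions."""
    return u[1] == v[1]

mm_per_ms = unit_mul(MILLIMETER, unit_pow(MILLISECOND, -1))  # (0, (1, 0, -1, ...)), i.e., m/s
assert same_dimension(METER, MILLIMETER)       # both are lengths, so they can be added
assert not same_dimension(METER, MILLISECOND)  # adding a length to a time is illegal
```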

Unit-aware array computations with saiunit.Quantity structure

High-performance computing today heavily relies on operations performed on vectors, matrices, and higher-dimensional arrays16. The saiunit.Quantity structure implements unit-aware array computation by integrating a numerical value A with a Unit instance (Fig. 1i). The numerical component A is typically represented as a Python number, NumPy array, or JAX array (Fig. 1j), enabling deployment on CPUs, GPUs, or TPUs for efficient computation across diverse hardware architectures. In contrast, the unit component is processed exclusively on the CPU during compile time. Quantity inherently aligns with scientific notation (x × 10^y), where A corresponds to the mantissa x, and the unit’s scale represents the exponent y. However, it extends beyond traditional scientific notation by incorporating an additional layer of dimensional analysis (Fig. 1k). Unlike existing unit libraries that integrate the metric scale s directly into the numerical value A (see Supplementary Note A), Quantity maintains separate representations for the numerical value A, the metric scale s, and the dimension. This design choice offers significant advantages, particularly when A is represented using low-precision data types like float16, which are common in deep learning applications. By keeping the metric scale separate, Quantity can handle very large (e.g., exa-, 10^18) or very small (e.g., yocto-, 10^−24) scales without compromising computational accuracy or encountering the limitations typically associated with low-precision arithmetic. Moreover, to ensure seamless integration with existing numerical computing workflows, Quantity overloads nearly all mathematical operators supported by NumPy and JAX arrays, including arithmetic operations, indexing and slicing, broadcasting, aggregation, statistical operations, and many more (Fig. 1l).
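
A minimal usage sketch follows. It assumes that unit constants such as saiunit.meter and saiunit.second are exposed at the package level and that multiplying a number or array by a unit yields a Quantity, as suggested by Fig. 1k; the exact spelling of these constants may differ.

```python
# Minimal sketch of unit-aware array computation with saiunit; the unit
# constants (meter, second) are assumed names.
import jax.numpy as jnp
import saiunit as u

distance = jnp.array([1.0, 2.5, 4.0]) * u.meter   # Quantity: array value + unit
duration = 0.5 * u.second

speed = distance / duration      # units combine automatically: meter / second
doubled = 2.0 * speed            # arithmetic with plain numbers keeps the unit
first = speed[0]                 # NumPy-style indexing is preserved

try:
    distance + duration          # illegal: length and time cannot be added
except Exception as err:
    print("rejected:", err)
```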

Unit-aware mathematical system compatible with JAX transformations

SAIUnit offers a comprehensive library of over 2000 commonly used physical units. This extensive collection encompasses units from various domains of physics and engineering, including but not limited to: mass, angle, time, length, pressure, area, volume, speed, temperature, energy, power, and others (see Fig. 2a and Supplementary Table 2). By providing such a wide array of pre-defined units, SAIUnit significantly reduces the burden on researchers and engineers to manually define and manage units in their calculations. Moreover, the library is designed to be extensible, allowing users to define and add their custom units if needed, further expanding its applicability to specialized fields or unique experimental setups (see Supplementary Note N). SAIUnit is equipped with a robust unit-aware mathematical framework capable of automatically handling the diverse range of physical quantities it represents when performing complex numerical computations. Particularly, it provides over 500 commonly used mathematical operators that closely mirror the syntax and behavior of their NumPy and JAX counterparts (Fig. 2b). This design choice allows most Python users to leverage their existing knowledge and immediately start programming in SAIUnit using familiar mathematical syntax while benefiting from the added dimension of unit-aware computations.
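
The sketch below illustrates this NumPy-like interface; the specific function spellings (saiunit.math.sum, saiunit.math.mean) and unit constants (nA, mV) are assumptions consistent with the description above rather than a verified listing of the API.

```python
# Sketch of the unit-aware, NumPy-mirroring math interface.
import jax.numpy as jnp
import saiunit as u

currents = jnp.array([1.2, 0.8, 2.1]) * u.nA    # assumed nanoampere constant
voltages = jnp.array([10.0, 20.0, 5.0]) * u.mV  # assumed millivolt constant

total_current = u.math.sum(currents)            # aggregation keeps the nA unit
mean_voltage  = u.math.mean(voltages)           # statistics keep the mV unit
powers        = currents * voltages             # elementwise product carries nA·mV
```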

Moreover, SAIUnit seamlessly supports the AI framework JAX11. JAX has gained widespread adoption in fields such as machine learning and scientific computing, since it provides a set of powerful transformations that enable advanced high-performance computation, including jax.grad for automatic differentiation, jax.jit for JIT compilation, jax.vmap for vectorization, and jax.pmap for multi-device parallelization. SAIUnit is fully compatible with these JAX transformations (Fig. 2c). A key aspect of JAX’s design is its use of the PyTree data structure. Any dynamic data within a PyTree can be differentiated, compiled, vectorized, or parallelized using JAX transformations. Recognizing this, we have registered Quantity as a PyTree, in which the numerical value A is treated as dynamic data, while the unit is considered static data. This separation offers several significant advantages. (1) By treating only the mantissa A as dynamic data, gradient computation is restricted to this numerical value, rather than its unit. This prevents illegal or nonsensical gradient definitions on metric scales s, which are inherently non-differentiable. (2) It allows JIT compilation to trace only the mantissa values. Unit conversion and checking occur only at compilation time rather than at runtime. This design minimizes the overhead of physical unit processing on numerical computation, yielding runtime performance comparable to raw numerical operations. (3) By treating the unit as static data, SAIUnit ensures that unit information is preserved throughout JAX transformations. This prevents accidental unit loss or corruption after gradient or JIT computation.
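
As an illustration, the sketch below wraps a unit-carrying function with jax.jit and jax.vmap; the unit constants are assumed names as in the earlier examples, and jax.grad and jax.pmap are supported analogously according to the description above.

```python
# Sketch of JAX-transformation compatibility: because Quantity is a PyTree
# (numerical value dynamic, unit static), jit and vmap wrap unit-carrying
# functions directly; unit constant names are assumptions.
import jax
import jax.numpy as jnp
import saiunit as u

def kinetic_energy(mass, velocity):
    return 0.5 * mass * velocity ** 2   # kg · (m/s)^2 -> an energy-dimensioned Quantity

fast_energy = jax.jit(kinetic_energy)                      # units checked at trace time
batched     = jax.vmap(kinetic_energy, in_axes=(None, 0))  # vectorize over velocities

m  = 2.0 * u.kilogram
vs = jnp.linspace(0.0, 10.0, 5) * (u.meter / u.second)

print(fast_energy(m, 3.0 * u.meter / u.second))
print(batched(m, vs))
```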

Seamless integration of unit-aware computations across disciplines

To evaluate the broad applicability of SAIUnit’s unit-aware computation across various AI-driven scientific computing domains, we integrated it into multiple JAX-based scientific computing libraries, with particular demonstrations in numerical integration, brain modeling, and PINNs (see Fig. 3a). It can also be seamlessly incorporated into other general scientific computing scenarios (refer to methods).

Fig. 3: Integration of physical units across scientific computing domains.

a SAIUnit's architecture and ecosystem: SAIUnit enables unit-aware computations across CPU, GPU, and TPU platforms with full JAX transformation compatibility, supporting JIT compilation, automatic differentiation, vectorization, and parallelization. This system has been integrated into diverse scientific computing libraries, including PINNx for physics-informed neural networks17, BrainPy ecosystem for brain modeling18,19, BrainCell for dendritic modeling20, JAX, M.D. for molecular dynamics21, Diffrax for numerical integration22, and Catalax for biological system modeling23. b–d Unit-aware numerical integration applications. Demonstration using Diffrax for b first-order chemical reaction equations, c epidemiological models, and d Lotka-Volterra predator-prey dynamics. e–g Multiscale brain modeling (for a detailed description, see Supplementary Note E). e Brain dynamics exhibit characteristic multi-scale properties, with each scale from microscopic to macroscopic requiring precise physical unit representations. f Incorporation of physical units enables the investigation of spike propagation mechanisms in multiscale brain models using experimental data in their native format (10 mA during 200–260 ms). g Physical units render intermediate results in multi-scale brain modeling more intuitive and interpretable for biologists. h–j Physics-informed neural network applications. h Integration of physical units within neural network architectures. The system enforces dimensional consistency by permitting only inputs with matching physical dimensions. Input data are automatically normalized to dimensionless quantities based on their specified units before network processing. The network outputs are then reconstituted with appropriate physical units according to the predefined output units. i PINN losses for two-dimensional Navier-Stokes equations: (1) The physics loss enforces adherence to governing physical equations, where SAIUnit's unit-aware computations ensure dimensional consistency across all PDE terms; (2) The data loss quantifies the agreement between network predictions and experimental observations, with SAIUnit maintaining appropriate physical meanings in the fitting residuals. j Comparison of predicted and true u-velocity fields at the final time step.

Unit-aware numerical differential equation solver – Diffrax

Differential equations (DEs) are fundamental tools for describing the dynamic behavior of natural, engineering, and social systems, extensively used to study the evolution of system variables over time or space. In recent years, rapid advancements in computational technologies have driven the numerical integration of DEs to transition from traditional CPU-based serial implementations (e.g., SciPy24) to AI framework-based GPU/TPU parallel computing (e.g., Diffrax22).

To enable unit-aware numerical integration, we integrated SAIUnit into Diffrax22, a numerical solver library within the JAX ecosystem that supports automatic differentiation and hardware acceleration. Precision control in numerical integration directly impacts the reliability of simulation results. Different physical systems exhibit significant variations in their requirements for spatiotemporal discretization precision: chemical reaction equations typically demand millisecond-level precision (Fig. 3b), epidemiological models (such as the SIR model) require day-level precision (Fig. 3c), and Lotka-Volterra ecological models operate on a monthly timescale (Fig. 3d). In abstract numerical computing libraries, these dramatically different timescales are all expressed as a magic value “1”, making it challenging to accurately represent and compare timescales across different systems. In contrast, the unit-aware Diffrax can standardize spatiotemporal precision control within a consistent dimension. This ensures the comparability of integration computations under varying precision requirements (Fig. 3b–d). Moreover, by verifying that the dimensions of the vector field on the equation’s right-hand side align with the definition of the derivative terms on the left-hand side (see Fig. 3b–d), unit-aware numerical integration can prevent computational deviations arising from unit conversion errors and ensure the proper definition of the equation.
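
The sketch below illustrates, with a generic explicit Euler step rather than the unit-aware Diffrax API itself, why such dimensional checks matter: the step only type-checks when the vector field carries units of state per time. The unit constants (second, msecond, molar) are assumed names.

```python
# Generic sketch of a unit-checked integration step (not the Diffrax API).
import saiunit as u

def euler_step(y, t, dt, vector_field):
    # dt * f(y, t) must carry the same dimension as y, otherwise the addition fails.
    return y + dt * vector_field(y, t)

k = 0.3 * u.second ** -1                 # first-order rate constant (1/s)

def decay(y, t):
    return -k * y                        # concentration per time

y0 = 1.0 * u.molar                       # initial concentration (assumed molar constant)
y1 = euler_step(y0, 0.0 * u.second, 1.0 * u.msecond, decay)

# Passing a unitless step such as dt = 1.0 would raise an error inside euler_step,
# because adding (1/s · mol/L) to mol/L mixes incompatible dimensions.
```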

Unit-aware brain dynamics programming ecosystem – BrainPy

In brain simulations, rigorous verification of physical units is essential for ensuring model accuracy and reliability. The brain exhibits significant multi-scale characteristics, including the molecular dynamics of ion channels, the membrane dynamics of individual neurons, the chemical dynamics of synaptic transmission, and the behavior dynamics of networks. These dynamics involve the intercoupling of various physical quantities with substantial differences in magnitude. For instance, membrane potential is measured in millivolts (mV), membrane conductance in millisiemens (mS), and transmembrane current in nanoamperes (nA) or picoamperes (pA). Such magnitude disparities can easily lead to scaling errors in simulations in the absence of a unit system.

To address this problem, we integrated SAIUnit into BrainPy18,19, a differentiable brain dynamics ecosystem built on top of JAX. SAIUnit can effectively represent multiscale quantities in the brain (Fig. 3e). Specifically, at the neuron and synapse levels, it accurately encodes membrane time constants on the millisecond scale and membrane areas on the square millimeter scale; at the network level, it captures detailed synaptic connectivity and network dynamics; and at the circuit level, it calculates fundamental parameters such as inter-regional brain distances on the centimeter scale, synaptic conduction velocities, and delays. Introducing physical units makes the definition of multi-scale brain networks more intuitive (Supplementary Note E and F). Moreover, unit-aware brain simulation makes the simulation process more accessible to biologists. For example, the investigation of spike propagation in a multi-scale brain model25 can inject input currents with explicit physical units, consistent with experimental settings, into the network (Fig. 3f). Training spiking neural networks with gradient-based methods is now widely applied to studying brain function and brain-inspired computing26. We incorporated physical units into the training of spiking neural networks within BrainPy. We found that utilizing physical units in both brain simulations and brain-inspired computing models (Supplementary Note E and G) renders intermediate results more intuitive and easier to interpret in a biological context. Specifically, outcomes such as membrane potentials, synaptic conductance, excitatory and inhibitory currents within brain regions, and inter-regional currents can be directly mapped to real experimental scenarios without the need for additional dimensional scaling (Fig. 3g and Supplementary Fig. 1). This significantly enhances the interpretability of the model results and is particularly valuable for brain scientists without a computational background.
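
As a simplified illustration, the sketch below performs one unit-aware Euler update of a leaky membrane written directly with SAIUnit quantities; it is not BrainPy’s actual class API, and the unit constants (mV, ms, nA, nS) are assumed names.

```python
# Sketch of a unit-aware leaky-membrane update (illustrative, not BrainPy code).
import saiunit as u

tau_m  = 10.0 * u.ms       # membrane time constant
g_leak = 10.0 * u.nS       # leak conductance
v_rest = -70.0 * u.mV
dt     = 0.1 * u.ms

v     = -65.0 * u.mV       # membrane potential state
i_ext = 0.5 * u.nA         # external current, in its native experimental unit

# dv/dt = (-(v - v_rest) + i_ext / g_leak) / tau_m: every term in the parentheses
# is a voltage, so mV, nA, and nS can be mixed freely; a wrong unit is rejected.
v = v + (-(v - v_rest) + i_ext / g_leak) / tau_m * dt
```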

Unit-aware physics-informed neural network framework – PINNx

PINNs represent a cutting-edge paradigm in AI-driven scientific computing by seamlessly integrating physical laws, expressed as partial or ordinary DEs, with deep neural networks (DNNs). Within the PINN framework, neural networks are required not only to learn system behaviors from observational data but also to rigorously adhere to established physical constraints. Ensuring the consistency of physical dimensions is crucial for accurately representing fundamental physical laws, such as mass conservation, energy conservation, and momentum conservation, within neural networks. Any dimensional inconsistencies can lead to erroneous formulations of physical equations, thereby compromising the model’s predictive accuracy and physical interpretability.

To address this issue, we developed the PINNx framework17, which integrates SAIUnit to enforce physical unit constraints on both the inputs and outputs of neural networks (Fig. 3h) and on the computation of differential equations (Fig. 3i). During both PINN inference and training, PINNx automatically performs unit consistency checks and scale transformations, ensuring that the entire computation process strictly adheres to physical dimensional norms (Fig. 3i). For example, in the two-dimensional Navier-Stokes equation (Fig. 3j), the neural network’s predicted velocity fields (u and v) and pressure field (p) must be dimensionally consistent with the governing equations. Additionally, the network’s spatial and temporal derivatives must maintain dimensional consistency among the equation terms to ensure that the model adheres to rigorous physical constraints (Fig. 3i).

Moreover, unit-aware computation significantly enhances the efficiency and accuracy of data import and model fitting in PINNs. DNNs are highly sensitive to the magnitude and scale of input data, and inconsistencies in units or scales across datasets can lead to diminished model performance or even training failures27. SAIUnit mitigates this issue by providing a physical unit-based preprocessing mechanism. This mechanism normalizes heterogeneous data from various sources to a unified scale based on their physical units and filters out anomalous data with inconsistent units (Fig. 3h). By ensuring that all input data adhere to predefined unit standards, SAIUnit can guarantee consistent and reliable data quality, and support more robust and accurate neural network predictions.

Unit-aware computations without runtime overhead or precision loss

One distinguishing feature of SAIUnit, not shared by existing unit systems such as Pint and Quantities, is the separation of numerical values from unit scales. Specifically, a physical quantity is represented as A × 10^s (Fig. 1i–k), where the unit scale s is managed by JAX’s static system, while the mantissa A is tracked by JAX’s JIT compilation. This architectural design not only ensures minimal runtime overhead but also preserves the integrity of computational precision.

First, to evaluate the performance and scalability of SAIUnit’s unit-aware computation across AI-driven scientific computing domains, we conducted an extensive performance analysis. The analysis was carried out on CPU, GPU, and TPU platforms (see methods), focusing on three primary aspects: first, evaluating the numerical integration performance of typical DEs such as the one-dimensional Couette equation and the two-dimensional Navier-Stokes equation (Fig. 4a, b, Supplementary Fig. 2a, b, and Supplementary Note L); second, analyzing the computational overhead of unit-aware simulations of multi-scale brain models (Fig. 4c and Supplementary Note E); and third, performing comprehensive performance comparisons across three types of partial DEs from various scientific disciplines, including the one-dimensional Burgers equation, the two-dimensional reaction-diffusion system, and the three-dimensional Navier-Stokes equation (Fig. 4d–f, Supplementary Fig. 2d–f, and Supplementary Note L). To gain deeper insights into the framework’s scalability, we further examined SAIUnit’s performance across different computational scales, including extensions of spatial discretization, increases in brain network size, and expansions of training dataset volumes.

Fig. 4: Computational overhead analysis of SAIUnit’s unit-aware computations across CPU and GPU platforms.

Performance evaluation of unit-aware Diffrax in numerical integration tasks, comparing compilation and simulation times for the one-dimensional Couette equation in a and two-dimensional Navier-Stokes equation in b. c Performance analysis of unit-aware BrainPy in multiscale brain simulations, measuring compilation and simulation time overhead. d–f Assessment of unit-aware PINNx performance in PINNs, examining compilation and training times across three test cases: one-dimensional Burgers equation in d, two-dimensional reaction-diffusion system in e, and three-dimensional Navier-Stokes equation in f. ns: 0.05 < p ≤ 1.0; *: 0.01 < p ≤ 0.05; **: 0.001 < p ≤ 0.01; ***: 0.0001 < p ≤ 0.001; ****: p ≤ 0.0001. The error bar denotes the 95% standard deviation.

Under each control condition, we conducted at least 10 model runs and reported the mean along with the 95% standard deviation of the time. We also employed t-tests for significance analysis to assess whether compilation and runtime performance differ significantly between models with and without physical units. After each experiment, we verified that the simulation results (see Supplementary Fig. 3 for numerical integration and brain modeling examples) and the training loss convergence (see Supplementary Fig. 4 for PINN examples) are consistent between models with and without physical units.

Our experimental results showed that in large-scale scientific computing tasks, SAIUnit’s performance overhead is primarily concentrated in the compilation phase, regardless of whether computations are executed on CPU, GPU, or TPU devices (see “Compilation Time” in Fig. 4 and Supplementary Fig. 2). This overhead is mainly attributable to its static analysis process of unit conversion and type checking. In most cases, the runtime overhead introduced by SAIUnit’s unit-aware computations is negligible compared to abstract computing without physical units (see “Simulation Time” and “Training Time” in Fig. 4 and Supplementary Fig. 2). These findings indicate that by separating numerical computation and unit processing, SAIUnit’s unit-aware computation effectively maintains consistency checks for physical dimensions within AI frameworks while imposing minimal computational overhead.

To validate the accuracy of SAIUnit’s unit-aware computation, we then compared the numerical stability of three Python unit systems (SAIUnit, Quantities, and Pint) in simulating the Lorenz system (Supplementary Fig. 5). As a quintessential chaotic system, the Lorenz system can amplify minor numerical precision differences into significant trajectory divergences. At 64-bit floating-point precision in Quantities and Pint, the errors between simulations with and without units were zero (Supplementary Fig. 5a and d). However, at 32-bit and 16-bit precisions, simulations using units exhibited significant discrepancies compared to unitless simulations, with these differences emerging at earlier time points as precision decreased (Supplementary Fig. 5b, c and e, f). In contrast, Lorenz system simulations using SAIUnit demonstrated zero difference between unit-based and unitless simulations across all tested precisions (Supplementary Fig. 5g–i). This demonstrates that SAIUnit enables physical unit processing without loss of precision, and implies that its physical unit system can be integrated into low-precision AI training scenarios without compromising accuracy. We verified this hypothesis using PINNs, and found that 16-bit precision PINN training with and without SAIUnit exhibited identical training loss curves (Supplementary Fig. 6), implying that the correctness of low-precision training remains unaffected.

Unit-aware computations improve research productivity

Next, we demonstrate the advantages that SAIUnit brings to research productivity through its automatic unit checking and conversion. Abstract numerical computing libraries typically rely on manual unit conversion and verification, introducing potential human errors and risking unit inconsistencies. SAIUnit automates this process through its unit-aware computation. It ensures consistency across physical quantities throughout calculations, thereby eliminating unit mismatch errors and reducing both cognitive burden and verification time. The advantages of this approach become evident when constructing biophysical neuron models in computational neuroscience. For example, when computing the ion channel current using the Goldman-Hodgkin-Katz equation28 (Fig. 5a and methods), the traditional NEURON simulator29, lacking a built-in unit system, necessitates manual unit conversions (Fig. 5b and Supplementary Note J). In contrast, BrainCell20, developed on top of SAIUnit, allows direct expression of model logic without requiring magnitude conversion management (Fig. 5c and Supplementary Note K).

Fig. 5: Unit-aware computations enhance research efficiency in ion channel modeling.

a The low-threshold voltage-activated T-type calcium current in cerebellar Purkinje cells30 was modeled using the Goldman-Hodgkin-Katz equation. b NEURON code for modeling this T-type calcium current requires manual unit conversions. c BrainCell code for modeling the T-type calcium current leverages unit-aware computations in SAIUnit, allowing researchers to avoid managing dimensional scales.

Unit-aware computations foster research collaboration

Finally, we showcase the critical role of SAIUnit’s standardized unit system in fostering scientific collaboration. Interdisciplinary collaboration is essential for addressing complex problems in scientific computing. However, different disciplines often use distinct units and measurement standards, which presents challenges for data integration and collaboration. For example, physicists use joules (J) to represent energy, while ecologists tend to use calories (cal); blood pressure in the medical field is typically measured in millimeters of mercury (mmHg), whereas in physics, pressure is measured in pascals (Pa); and metabolic rate in biological systems is usually expressed in kilocalories per hour (kcal/h), while it is commonly represented in watts (W) as a unit of power in physics. These discrepancies make data sharing and collaboration across disciplines complex and inefficient. SAIUnit solves this issue by introducing a unified physical unit system that represents data units from different fields using consistent dimensions of SI units14 (Fig. 6). This enables researchers from various disciplines to seamlessly connect, understand, and utilize each other’s data, thereby enhancing the efficiency and accuracy of interdisciplinary research.

Fig. 6: SAIUnit’s standardized unit system unifies diverse measurement units using SI standards.

SAIUnit automates the conversion between discipline-specific units and SI standards across fundamental physical quantities. For example, it converts energy units such as calories (biology), electron volts (physics), and ergs (CGS system) to joules (SI); pressure units like millimeters of mercury (medical), atmospheres (chemistry), and bars (engineering) to pascals (SI); and force units including dynes (CGS system) and kilogram-force (engineering) to newtons (SI). CGS system: the centimeter-gram-second system of units.

Discussion

In conclusion, we introduced SAIUnit, a unit system designed to seamlessly integrate physical units into high-performance, AI-driven scientific computing, with a particular emphasis on compatibility with JAX. SAIUnit addresses a critical gap in current AI libraries: the lack of native support for physical units, which is vital for rigorous scientific research. By offering a comprehensive library of over 2000 physical units and an extensive set of unit-aware mathematical operators, SAIUnit provides a robust tool for unit-aware scientific computing. Its full integration with JAX transformations allows users to utilize advanced features such as automatic differentiation, JIT compilation, vectorization, and parallelization while maintaining unit consistency. Through a series of use cases in numerical integration, brain modeling, and PINNs, we demonstrated SAIUnit’s ability to automate unit processing with minimal overhead and its practical applicability in real-world scenarios.

By combining the advantage of physical unit handling and high-performance numerical computing, SAIUnit enables accurate, reliable, maintainable, interpretable, and rigorous scientific computing:

$$\,{\mbox{numerical computing}}+{\mbox{physical units}}={\mbox{scientific computing}}\,.$$

This synergy offers several key benefits. First, physical units mitigate the risk of unit mismatch, ensuring that results conform to physical laws and improving the correctness of models. Furthermore, explicit unit labeling enhances code readability and maintainability, making it easier for developers to understand variable meanings, particularly in large-scale or interdisciplinary projects. Additionally, the use of standardized units fosters code portability, allowing libraries from different domains to adopt a consistent unit system, thereby promoting scientific collaboration. Finally, automatic unit conversion alleviates the burden on developers, minimizing errors from manual conversions and streamlining debugging processes.

Despite the significant advantages that SAIUnit brings to AI-driven scientific computing, several critical issues remain that require in-depth future exploration. One such issue concerns how SAIUnit can leverage its advantages in low-precision computations when representing physical quantities using a format similar to scientific notation (Fig. 1i–k). This format effectively accommodates extremely large or small values that would typically necessitate high bit widths in traditional fixed-point or floating-point representations. In practical applications, it inherently synergizes with the quantization techniques widely employed in DNNs31. Even when both the mantissa A and the unit scaling factor s are represented with low precision, the system can still cover a wide numerical range, thereby enhancing the ability of quantized models to handle values across varying magnitudes. However, the use of a low-precision mantissa A introduces potential numerical instability, as it is particularly susceptible to overflow and underflow during massive numerical computations. This challenge necessitates the development of a robust and rigorous operational framework that detects and manages numerical boundary conditions by dynamically adjusting the scaling factor s, thereby ensuring computational stability and maintaining numerical precision throughout complex calculations.

Methods

Mathematical operations for physical dimensions and units

In the dimensional analysis of physical units, fundamental operations include addition, subtraction, multiplication, division, and exponentiation (see Fig. 1d and Table 1 for saiunit.Dimension, and Fig. 1h and Table 1 for saiunit.Unit). These operations are governed by rigorous mathematical rules defined by the SI standard14, ensuring both correctness and consistency in the treatment of physical dimensions. In particular, addition and subtraction are permitted only when the dimensional types are compatible, and physical units must reconcile their scale factors when these operations are performed. Multiplication and division are unconstrained by such compatibility requirements, and exponentiation is not restricted to scalar exponents. Further details can be found in Table 1.

Table 1 Operational rules for physical dimensions and units

Unit-compatible gradient computation with JAX AD

Unit-aware computations in SAIUnit support two types of gradient computations. The first type employs the original JAX AD, wherein gradients preserve the original data’s units because JAX AD treats quantities with physical units as PyTrees. Physical quantities in SAIUnit seamlessly support both forward and reverse AD of JAX. This includes operations such as scalar loss gradients using jax.grad, forward-mode Jacobian computations with jax.jacfwd, reverse-mode Jacobian computations with jax.jacrev, and Hessian matrix computations using jax.hessian. During forward computations, the unit system ensures consistency among data units. However, during the AD pass, unit checking is disabled, and units are not enforced among gradients. Whether using forward-mode or reverse-mode gradient passes, the computed gradients retain the same units as the original data. This feature, treating units as static data that do not propagate through gradient computations, makes SAIUnit directly compatible with traditional AD-based optimizers.

Unit-aware gradient computation with saiunit.autograd

The second type of gradient computation within SAIUnit is provided in the saiunit.autograd module, which ensures that all derivative operations respect the dimensions and units of the quantities involved and maintain the physical interpretability of derivatives. saiunit.autograd establishes a formal mathematical framework for unit-aware AD, which encompasses scalar-loss gradients, Jacobians via forward-mode AD, Jacobians via reverse-mode AD, and Hessians. By formalizing AD operations with unit-aware rules, we ensure that derivatives are not only mathematically correct but also physically meaningful. For a concrete example of unit-aware automatic differentiation, please refer to Supplementary Note H.

Unit-aware scalar-loss gradient: saiunit.autograd.grad

Consider a scalar-valued differentiable function \(f:{{\mathbb{R}}}^{n}\to {\mathbb{R}}\), where each input \({x}_{i}\in {\mathbb{R}}\) is associated with a physical quantity qi having dimension [qi], and the output f(x) has its own dimension [f]. The gradient \(\nabla f({{{\bf{x}}}})\) is a vector in \({{\mathbb{R}}}^{n}\) consisting of the first partial derivatives of f with respect to each component of x:

$$\nabla f({{{\bf{x}}}})={\left[\frac{\partial f}{\partial {x}_{1}}({{{\bf{x}}}}),\frac{\partial f}{\partial {x}_{2}}({{{\bf{x}}}}),\ldots,\frac{\partial f}{\partial {x}_{n}}({{{\bf{x}}}})\right]}^{T},$$
(1)

where each component \(\frac{\partial f}{\partial {x}_{i}}\) has dimensions \(\left[\frac{\partial f}{\partial {x}_{i}}\right]=\frac{[f]}{[{x}_{i}]}\). For example, if f has units of energy [E] (e.g., joules), and xi has units of length [L] (e.g., meters), then \(\left[\frac{\partial f}{\partial {x}_{i}}\right]=\frac{[E]}{[L]}\), which corresponds to force units [F] = [E]/[L].
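
Concretely, the force example above can be reproduced with the saiunit.autograd.grad entry point; in the sketch below, the elastic-energy function and the unit constants (newton, meter) are illustrative.

```python
# Sketch of unit-aware scalar-loss differentiation with saiunit.autograd.
from saiunit import autograd
import saiunit as u

k = 200.0 * u.newton / u.meter          # spring stiffness

def elastic_energy(x):                   # x: a displacement (length)
    return 0.5 * k * x ** 2              # output carries energy units

grad_fn = autograd.grad(elastic_energy)
force = grad_fn(0.1 * u.meter)           # dE/dx carries [energy]/[length], i.e., newton
print(force)                             # about 20 newton (= k * x)
```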

Unit-aware jacobian: saiunit.autograd.jacfwd and saiunit.autograd.jacrev

For a vector-valued function \({{{\bf{f}}}}:{{\mathbb{R}}}^{n}\to {{\mathbb{R}}}^{m}\), each component \({f}_{j}:{{\mathbb{R}}}^{n}\to {\mathbb{R}}\) has its own dimension [fj]. The Jacobian matrix Jf(x) computed using reverse-mode (saiunit.autograd.jacrev) or forward-mode AD (saiunit.autograd.jacfwd) is an m × n matrix in which each entry (i, j) is the partial derivative of the i-th component of f with respect to the j-th component of x:

$${J}_{{{{\bf{f}}}}}({{{\bf{x}}}})=\left[\begin{array}{cccc}\frac{\partial {f}_{1}}{\partial {x}_{1}}({{{\bf{x}}}})&\frac{\partial {f}_{1}}{\partial {x}_{2}}({{{\bf{x}}}})&\ldots &\frac{\partial {f}_{1}}{\partial {x}_{n}}({{{\bf{x}}}})\\ \frac{\partial {f}_{2}}{\partial {x}_{1}}({{{\bf{x}}}})&\frac{\partial {f}_{2}}{\partial {x}_{2}}({{{\bf{x}}}})&\ldots &\frac{\partial {f}_{2}}{\partial {x}_{n}}({{{\bf{x}}}})\\ \vdots &\vdots &\ddots &\vdots \\ \frac{\partial {f}_{m}}{\partial {x}_{1}}({{{\bf{x}}}})&\frac{\partial {f}_{m}}{\partial {x}_{2}}({{{\bf{x}}}})&\ldots &\frac{\partial {f}_{m}}{\partial {x}_{n}}({{{\bf{x}}}})\end{array}\right],$$
(2)

where each entry \(\frac{\partial {f}_{i}}{\partial {x}_{j}}\) has dimensions \(\frac{[{f}_{i}]}{[{x}_{j}]}\).

Unit-aware Hessian: saiunit.autograd.hessian

For a scalar-valued twice-differentiable function \(f:{{\mathbb{R}}}^{n}\to {\mathbb{R}}\), the Hessian matrix Hf(x) at x is an n × n symmetric matrix consisting of all second-order partial derivatives of f:

$${H}_{f}({{{\bf{x}}}})=\left[\begin{array}{cccc}\frac{{\partial }^{2}f}{\partial {x}_{1}^{2}}({{{\bf{x}}}})&\frac{{\partial }^{2}f}{\partial {x}_{1}\partial {x}_{2}}({{{\bf{x}}}})&\ldots &\frac{{\partial }^{2}f}{\partial {x}_{1}\partial {x}_{n}}({{{\bf{x}}}})\\ \frac{{\partial }^{2}f}{\partial {x}_{2}\partial {x}_{1}}({{{\bf{x}}}})&\frac{{\partial }^{2}f}{\partial {x}_{2}^{2}}({{{\bf{x}}}})&\ldots &\frac{{\partial }^{2}f}{\partial {x}_{2}\partial {x}_{n}}({{{\bf{x}}}})\\ \vdots &\vdots &\ddots &\vdots \\ \frac{{\partial }^{2}f}{\partial {x}_{n}\partial {x}_{1}}({{{\bf{x}}}})&\frac{{\partial }^{2}f}{\partial {x}_{n}\partial {x}_{2}}({{{\bf{x}}}})&\ldots &\frac{{\partial }^{2}f}{\partial {x}_{n}^{2}}({{{\bf{x}}}})\end{array}\right],$$
(3)

where each second-order partial derivative \(\frac{{\partial }^{2}f}{\partial {x}_{i}\partial {x}_{j}}\) has dimensions \(\frac{[f]}{[{x}_{i}][{x}_{j}]}\).

Chain rule with units

When composing functions, the chain rule must respect units. For functions \(f:{{\mathbb{R}}}^{n}\to {{\mathbb{R}}}^{m}\) and \(g:{{\mathbb{R}}}^{m}\to {{\mathbb{R}}}^{p}\):

$$\frac{\partial {(g\circ f)}_{k}}{\partial {x}_{i}}={\sum}_{j=1}^{m}\frac{\partial {g}_{k}}{\partial {f}_{j}}\frac{\partial {f}_{j}}{\partial {x}_{i}}.$$
(4)

The physical dimensions should follow the rule of:

$$\left[\frac{\partial {(g\circ f)}_{k}}{\partial {x}_{i}}\right]=\frac{[{g}_{k}]}{[{x}_{i}]}={\sum}_{j=1}^{m}\left(\frac{[{g}_{k}]}{[{f}_{j}]}\cdot \frac{[{f}_{j}]}{[{x}_{i}]}\right)$$
(5)

where each term in the sum maintains \(\frac{[{g}_{k}]}{[{f}_{j}]}\cdot \frac{[{f}_{j}]}{[{x}_{i}]}=\frac{[{g}_{k}]}{[{x}_{i}]}\).

Linearity of differentiation with units

For any scalar constants a, b with appropriate dimensions and functions \(f,g:{{\mathbb{R}}}^{n}\to {\mathbb{R}}\):

$$\frac{\partial }{\partial {x}_{i}}(af+bg)=a\frac{\partial f}{\partial {x}_{i}}+b\frac{\partial g}{\partial {x}_{i}}.$$
(6)

The physical dimensions should follow the rule of:

$$\left[\frac{\partial }{\partial {x}_{i}}(af+bg)\right]=\frac{[af+bg]}{[{x}_{i}]}=\frac{[a][f]+[b][g]}{[{x}_{i}]},$$
(7)

where each term on the right-hand side has:

$$\left[a\frac{\partial f}{\partial {x}_{i}}\right]=[a]\cdot \frac{[f]}{[{x}_{i}]},\quad \left[b\frac{\partial g}{\partial {x}_{i}}\right]=[b]\cdot \frac{[g]}{[{x}_{i}]}$$
(8)

For dimensional consistency, [a][f] = [b][g].

Integrating physical units into scientific computing libraries

There are two primary methods for integrating SAIUnit’s unit-aware system into scientific computing libraries across various domains.

Method 1: deep integration of SAIUnit into scientific computing libraries

The first method involves deeply integrating SAIUnit into scientific computing libraries, as demonstrated in “Seamless integration of unit-aware computations across disciplines” with libraries such as Diffrax, BrainPy, and PINNx. The core steps of this integration include:

  • Representing arrays with SAIUnit’s quantity: Data is represented using saiunit.Quantity instead of jax.Array, thereby assigning explicit physical dimensions to the data. This ensures that the data possesses consistent and traceable unit information before numerical computations.

  • Utilizing SAIUnit’s unit-aware mathematical functions: During computations, mathematical functions from the saiunit.math module are employed instead of those from jax.numpy. This approach maintains unit awareness and facilitates unit conversions throughout the numerical calculation process.

Since SAIUnit is designed as a comprehensive, unit-aware, NumPy-like computational library, these two steps typically suffice to seamlessly integrate unit-aware computations into scientific computing libraries. However, specific implementations may exhibit subtle differences across different scientific computing libraries (see following sections).
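
The following before/after sketch illustrates these two steps on a trivial function; the unit constants and the saiunit.math.mean spelling are assumptions consistent with the descriptions above.

```python
# Sketch of deep integration: swap jax.Array for saiunit.Quantity and
# jax.numpy functions for saiunit.math functions.
import jax.numpy as jnp
import saiunit as u

# Before: abstract numerical code; the units live only in the author's head.
def mean_speed_plain(distances, elapsed):
    return jnp.mean(distances) / elapsed

# After: the same logic with Quantity data and unit-aware math.
def mean_speed_units(distances, elapsed):
    return u.math.mean(distances) / elapsed   # result carries meter / second

v = mean_speed_units(jnp.array([3.0, 5.0]) * u.meter, 2.0 * u.second)
```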

Method 2: Encapsulating unit support for predefined scientific computing functions

A substantial number of existing scientific computing functions are designed based on dimensionless data. The second method applies to these dimensionless functions without modifying existing frameworks or underlying implementations. The core idea is:

  • Dimensionless processing before function calls: Before invoking scientific computing functions, input data undergoes dimensionless processing to ensure that the functions internally handle only unitless numerical operations. For example, calling b = a.to_decimal(UNIT) normalizes the quantity a into dimensionless data b according to the given physical unit UNIT.

  • Restoring physical units after computation: Once the computation is complete and results are returned, we can restore the appropriate physical units to the solutions.

Specifically, SAIUnit provides the assign_units function, which facilitates the automatic assignment and restoration of physical units at the input and output stages of functions. This method is, in principle, applicable to any Python-based scientific computing library, preserving the physical semantics and interpretability at the input and output levels without altering their existing structures. Supplementary Note C shows an example of integrating physical units into SciPy’s optimization functionalities using the assign_units function.
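
The sketch below applies this pattern manually around SciPy’s scalar minimizer, using the to_decimal normalization described above; the unit names and the example objective are illustrative, and assign_units can automate the same wrapping.

```python
# Sketch of Method 2: dimensionless processing before a unitless routine,
# followed by unit restoration afterwards.
from scipy.optimize import minimize_scalar
import saiunit as u

spring_k = 150.0 * u.newton / u.meter   # spring stiffness
load     = 30.0 * u.newton              # constant applied force

def potential(x_in_meter):
    # The optimizer only ever sees plain floats (dimensionless processing).
    x = x_in_meter * u.meter
    energy = 0.5 * spring_k * x ** 2 - load * x
    return energy.to_decimal(u.joule)   # strip units before returning

res = minimize_scalar(potential)
best_x = res.x * u.meter                # restore physical units after computation
print(best_x)                           # expected near 0.2 m (= load / spring_k)
```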

Integrating physical units into numerical integration

We have integrated SAIUnit into the numerical solver library Diffrax22, enabling physical unit checks and constraints during the integration process. The integration not only allows physically meaningful variables to be defined, as shown in Supplementary Note I, but also provides robust dimensional consistency verification throughout the computation process. When constructing differential equations, the unit-aware Diffrax rigorously enforces dimensional compatibility between the right-hand side parameters and variables and their left-hand side counterparts. During numerical integration, the solver performs comprehensive unit validation at each step, ensuring that all intermediate variables and parameters maintain dimensional consistency according to standardized measurement units. This verification process also generates immediate diagnostic feedback through error messages if any dimensional inconsistencies are detected, allowing users to identify and rectify potential physical modeling errors early in their calculations.

Integrating physical units into brain modeling

SAIUnit has been deeply integrated into the brain dynamics programming ecosystem BrainPy18,19. This integration ensures that BrainPy users can now seamlessly incorporate physical units into their brain modeling workflows, including not only biophysical single neuron models (see Supplementary Note B), but also circuit networks (see Supplementary Note E and F). Particularly, we integrated physical units into BrainPy to enable precise expression and tracking of key biophysical quantities during the modeling process, including basic physical quantities such as membrane potential (mV), ion concentration (mol/L), and synaptic conductance (nS), as well as multi-scale dynamical features ranging from millisecond-scale action potentials to hour-long neural activity, and from micron-scale synaptic structures to centimeter-scale brain regions. Given that neuroscience experimental data inherently carries physical units, we have integrated SAIUnit into the experimental data fitting process of BrainPy, enabling direct comparison of simulation results with experimental data.

Integrating physical units into PINNs

We have integrated SAIUnit with PINNx to create unit-aware PINNs. This integration addresses key limitations of existing PINN methods and frameworks, which often lack explicit physical meanings. One significant challenge with current PINN libraries, such as DeepXDE32, is that they require users to manually track the order and meaning of variables. For instance, in these frameworks, variables[0] might represent the amplitude, variables[1] the frequency, and so on, without any intrinsic connection between the variable order and its physical meaning. In contrast, PINNx allows users to assign explicit, meaningful names to variables (e.g., variables["amplitude"] and variables["frequency"]), removing the need to manage the order of variables manually.

Another limitation of existing PINN libraries is their reliance on users to manage complex Jacobian and Hessian relationships among variables. In PINNx, we simplify this by tracking intuitive gradient relationships through simple dictionary indexing. For example, users can access the Hessian matrix ∂2y/∂x2 using hessian["y"]["x"]["x"] and the Jacobian matrix ∂y/∂t with jacobian["y"]["t"].

Additionally, many existing PINN frameworks lack native support for physical units, which is essential for ensuring dimensional consistency in physical equations. For instance, in the Burgers equation, the left-hand side \(\frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}\) and the right-hand side \(\nu \frac{{\partial }^{2}u}{\partial {x}^{2}}\) must have the same physical units, meter/second^2. To address this, PINNx leverages unit-aware automatic differentiation from saiunit.autograd, enabling the computation of first-, second-, or higher-order derivatives while preserving unit information. This ensures that physical dimensions are correctly maintained throughout the derivation process.
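
The dimensional bookkeeping for the Burgers residual can be sketched directly with SAIUnit quantities, as below; the sample values and unit constants are illustrative rather than PINNx’s internal code, and the point is that the residual subtraction succeeds only when both sides carry meter/second^2.

```python
# Sketch of the unit check on the Burgers residual terms.
import saiunit as u

uu     = 1.5 * (u.meter / u.second)                    # a velocity sample u
dudt   = 0.2 * (u.meter / u.second ** 2)               # du/dt
dudx   = (0.1 * u.meter / u.second) / (1.0 * u.meter)  # du/dx -> 1/s
d2udx2 = dudx / (1.0 * u.meter)                        # d2u/dx2 -> 1/(m·s)
nu     = 0.01 * (u.meter ** 2 / u.second)              # kinematic viscosity

lhs = dudt + uu * dudx      # both terms carry meter/second^2
rhs = nu * d2udx2           # m^2/s · 1/(m·s) = meter/second^2
residual = lhs - rhs        # the subtraction type-checks only because units agree
```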

Supplementary Note I demonstrates how to implement a unit-aware PINN to solve one-dimensional Burgers equation using SAIUnit.

Integrating physical units into JAX, M.D. and catalax

We have also integrated physical units into JAX, M.D.21 and Catalax23 using both deep integration and encapsulation methods. We incorporated physical units such as electron volts, angstroms, interatomic force units, and atomic mass units to quantify force, temperature, pressure, and stress within molecular dynamics simulations. Most energy and displacement functions were computed using unit-aware mathematical functions from SAIUnit, ensuring automatic unit conversion and validation when defining molecular quantities. However, for energy potentials computed by neural networks in JAX, M.D., we used assign_units to convert quantities with units into dimensionless data, allowing for their processing by neural networks. Afterward, the energy units were restored to ensure an accurate representation of the simulation results. In Catalax, we enabled unit-aware biological systems modeling by integrating physical units into its differential equation definitions. We employed our unit-aware Diffrax for numerical integration and utilized assign_units for Markov Chain Monte Carlo (MCMC) sampling when fitting experimental data distributions, as the underlying MCMC interface is inherently dimensionless.

Computing calcium currents using Goldman-Hodgkin-Katz equation

The low-threshold voltage-activated T-type calcium current in the cerebellar Purkinje cell model was modeled as in Haroon et al.30. This model incorporates two activation gates (m) and one inactivation gate (h), capturing the channel’s gating dynamics. The current density is calculated using the Goldman-Hodgkin-Katz (GHK) equation28, which accounts for the electrochemical gradients of calcium ions across the neuronal membrane. The kinetic equations governing the T-type calcium current are summarized below:

$${I}_{{{{{\rm{Ca}}}}}^{2+}}=\overline{{P}_{{{{{\rm{Ca}}}}}^{2+}}}\times {m}^{2}\times h\times {g}_{{{{\rm{GHK}}}}}$$
(9)
$$\frac{{{{\rm{d}}}}m}{{{{\rm{d}}}}t}=\frac{{m}_{\infty }-m}{{\tau }_{m}}$$
(10)
$${\tau }_{m}=\left\{\begin{array}{ll}1\hfill\quad &\,{{\rm{if}}}\,V \, \leqslant -90\,{{{\rm{mV}}}},\\ 1+\frac{1}{\exp \left(\frac{V+40}{9}\right)+\exp \left(-\frac{V+102}{18}\right)}\quad &\,{{\rm{otherwise}}}\,,\end{array}\right.$$
(11)
$${m}_{\infty }=\frac{1}{1+\exp \left(-\frac{V+52}{5}\right)}$$
(12)
$$\frac{{{{\rm{d}}}}h}{{{{\rm{d}}}}t}=\frac{{h}_{\infty }-h}{{\tau }_{h}}$$
(13)
$${h}_{\infty }=\frac{1}{1+\exp \left(\frac{V+72}{7}\right)}$$
(14)
$${\tau }_{h}=15+\frac{1}{\exp \left(\frac{V+32}{7}\right)}.$$
(15)

where \(\overline{{P}_{{{{{\rm{Ca}}}}}^{2+}}}\) represents the maximal permeability of the channel to calcium ions, which has units of cm second^−1, and gGHK is the Goldman-Hodgkin-Katz conductance, defined as:

$${g}_{{{{\rm{GHK}}}}}={z}^{2}\frac{{V}_{m}{F}^{2}}{RT}\frac{{[{{{\rm{C}}}}]}_{i}-{[{{{\rm{C}}}}]}_{o}\exp \left(-z{V}_{m}F/RT\right)}{1-\exp \left(-z{V}_{m}F/RT\right)},$$
(16)

in which z = 2 is the valence of the calcium ion, Vm is the membrane potential, measured in mV, F = 96485.3321 s A mol^−1 is the Faraday constant, R = 8.3144598 J mol^−1 K^−1 is the universal gas constant, T = 295.15 K is the temperature, and [C]i and [C]o = 2.0 mM are the intracellular and extracellular calcium ion concentrations, respectively.
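
For illustration, the GHK conductance can be evaluated with explicit units as sketched below (cf. the BrainCell code in Fig. 5c); the unit-constant names, the chosen values of Vm and the intracellular concentration, and the assumption that saiunit.math.exp accepts a dimensionless quantity are all illustrative rather than taken from the original code.

```python
# Sketch of the GHK conductance with explicit units; constants per Eq. (16).
import saiunit as u

F = 96485.3321 * u.coulomb / u.mole               # Faraday constant
R = 8.3144598 * u.joule / (u.mole * u.kelvin)     # universal gas constant
T = 295.15 * u.kelvin
z = 2                                             # Ca2+ valence (dimensionless)

Vm    = -30.0 * u.mV                              # illustrative membrane potential
C_in  = 1e-4 * u.mM                               # intracellular Ca2+ (illustrative)
C_out = 2.0 * u.mM                                # extracellular Ca2+

zeta = z * Vm * F / (R * T)                       # dimensionless; the mV scale is folded in
num  = C_in - C_out * u.math.exp(-zeta)
den  = 1.0 - u.math.exp(-zeta)
g_ghk = z ** 2 * Vm * F ** 2 / (R * T) * num / den   # units are tracked automatically
```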

The activation variable m and the inactivation variable h evolve over time according to their respective steady-state values (m∞ and h∞) and time constants (τm and τh). These dynamics ensure that the T-type calcium current responds appropriately to changes in membrane potential.

In the NEURON simulator, the absence of intrinsic physical units necessitates most dimensional scaling to be performed manually. Initially, the membrane voltage Vm is represented as a dimensionless quantity scaled by the unit mV. This voltage is then converted to volts (V) as shown in Line 04 of Fig. 5b, resulting in the dimensionless parameter \(\zeta=\frac{z{V}_{m}F}{RT}\) (Line 05 in Fig. 5b). The computed conductance gGHK initially has units of C/m^3. However, since the permeability \(\overline{{P}_{{{{{\rm{Ca}}}}}^{2+}}}\) is measured in cm second^−1 and the calcium current \({I}_{{{{{\rm{Ca}}}}}^{2+}}\) is calculated based on a membrane area with units of square centimeters (cm^2), the NEURON simulator converts gGHK from C/m^3 to C/cm^3 (Lines 08–10 in Fig. 5b). Finally, to ensure consistency with other ion channel currents, the simulator transforms the calcium current \({I}_{{{{{\rm{Ca}}}}}^{2+}}\) from amperes per square centimeter (A/cm^2) to milliamperes per square centimeter (mA/cm^2) (Line 16 in Fig. 5b). For complete details of the NEURON code, please refer to Supplementary Note J.

In contrast, the BrainCell simulator, enhanced by the unit-aware computations of SAIUnit, eliminates the need for researchers to manually manage dimensional scaling. All units are automatically aligned within a unified unit system, allowing physical quantities with differing metric scales to be seamlessly computed together. This automatic alignment ensures consistency and accuracy across simulations, reducing the potential for unit-related errors. Consequently, researchers can focus more on modeling and analysis without worrying about the complexities of unit conversions and dimensional inconsistencies. For a detailed BrainCell code, please refer to Supplementary Note K. For a comparison between NEURON and BrainCell, please refer to Fig. 5b, c.

Environment settings

All evaluations and benchmarks in this study were conducted in a Python 3.12 environment on a system running Ubuntu 24.04 LTS with CPU, GPU, and TPU devices. The CPU experiments ran on an Intel Xeon W-2255, a 10-core/20-thread Cascade Lake processor with a 3.7 GHz base clock and 4.5 GHz turbo boost. The CPU features 24.75 MB of L3 cache and supports up to 512 GB of six-channel DDR4-2933 ECC memory. The GPU experiments used an NVIDIA RTX A6000, a professional Ampere GPU with 10,752 CUDA cores, 336 tensor cores, and 48 GB of GDDR6 memory. The card delivers 1 TB/s memory bandwidth, draws 300 W of power, and connects via PCIe 4.0 × 16, making it ideal for parallel computing and AI workloads. The TPU experiments leveraged the free Kaggle TPU v3-8 cloud instance. Specifically, the v3-8 instance provides 8 TPU v3 cores, each with 128 GB/s of bandwidth to high-performance HBM memory.

The following software versions were used or compared during the study: JAX (v0.4.36)11, BrainPy (v2.6.0)18,19, BrainCell (v0.0.1)20, pinnx (v0.0.1)17, Diffrax (v0.6.1)33, SAIUnit (v0.0.8)34, brainstate (v0.1.0)35, and brainscale36.