Introduction

General-purpose hardware is engineered to mirror human-like advanced problem-solving abilities and cognitive thinking. The fundamental challenge in developing such hardware lies in replicating the adaptability and flexibility of human brain intelligence1. The construction of general-purpose hardware can be decomposed into developing machine learning modules that simulate various units of the brain, such as neuron aggregates, and then integrating these modules based on a brain-like cognitive architecture, enabling the system to emulate the operational principles and structural intricacies of human brain intelligence2. In particular, the highly reconfigurable connectivity of brain neural circuits is one of the important sources of brain intelligence, giving the brain its exceptional capacity for flexible adaptation to dynamic environments. Brain neural circuits maintain flexible, reconfigurable synapse/heterosynapse/soma inter-connections while frequently adjusting neuronal states3. This distinctive mode orchestrates the connections of neural circuits across various interactive environmental tasks to ensure an appropriate alignment between task complexity and computational resources4. In neural circuits, neural signals are transmitted to the receiving neuron via synapses and then relayed, under the control of heterosynaptic connections, to various soma, where the integration and processing of signals are performed (Fig. 1a)5,6,7. Therefore, neuromorphic computing hardware equipped with this capability needs to be developed to imitate the principles and structures of brain neural circuits. Such hardware should simultaneously implement precise engineering of individual devices’ inner states and inter-module connections, thus boosting adaptability and plasticity and providing strategies for the implementation of adaptability and flexibility in general-purpose hardware8,9,10.

Fig. 1: Concept of RAH and reconfigurable functionalities.

a Schematic of the complex structures and functions of biological neural circuits formed by synapses, heterosynapses, soma, and their inter-connections. In the synaptic component, the release and reception of neurotransmitters accomplish signal transmission, and their dynamic modulation (concentration and action time) simulates the reconfigurable weights, reflecting the plasticity inherent in biological synapses. Heterosynapses are particularly notable for their ability to facilitate inter-neural circuit communication through state transitions and to transmit signals across various types of neural cells. The soma component underscores the integrative capacity of neural networks in processing complex signals. The multifunctionality of biological neural circuits is demonstrated in the capability of these circuits to perform a multitude of tasks, including perception and learning. b Schematic of the biology-inspired reconfigurable hardware (I) that encompasses three principal modules: the synaptic module (cascaded MoS2 FET arrays), the heterosynapse module (MoS2 FET-based OPA and MoS2 FET connections), and the soma module (MoS2 FET-based OPA and feedback component integration) (II). These modules employ adjustable states, such as conductance encoding, switchability, and reconfigurable feedback mechanisms, to mimic the components of biological neural circuits, such as synapses, heterosynapses, and soma. This configuration facilitates multitask processing, including signal conversion, artificial visual simulation, and neural network computing (III). c Demonstration of the 8-bit DAC. Square-wave input signals (Vin1–Vin8) with an input amplitude of 0.1 V and an input frequency of fn = 2^(n−1) × f1, where f1 = 5 Hz is the input frequency of signal Vin1 (I). Conductance encoding in the synapse module following j1 (Gn = G1/2^(n−1), G1 = 6.06 μS) (II), a sustained on-state T1 as p1 in the heterosynapse module (III), and a voltage follower as k1 in the soma module (IV). The inset in the synapse module shows the output curves of the eight FETs. d Output characteristics of the 8-bit DAC. The inset (black box) provides a magnified view of the output signal between 170 and 180 ms.

2D materials, which possess superior physical properties11, can support neuromorphic computing hardware12,13,14. Neuromorphic hardware based on 2D materials typically adopts transistors, logic gates, and memory to construct synapse, heterosynapse, and soma components, respectively, each optimized for a specific individual function15,16,17. Recent advancements have demonstrated the integration of multiple 2D material devices and multiple sensor modalities to achieve brain-like functionalities18,19,20,21,22. However, despite these advancements, limitations remain in fully emulating the computational flexibility of brain neural circuits, especially when efficient multitasking is required in dynamic environments23,24. This limitation results in resource waste in lightweight settings, specifically excessive driving signals and device redundancy, because computational tasks of varying information content require different device quantities25,26,27. Moreover, this limited adaptability may yield suboptimal outcomes because different linear and nonlinear computational processes must be matched to different tasks by circuit assembly28,29,30,31. Therefore, the development of 2D material-based reconfigurable analog hardware is key to truly emulating the computational flexibility of brain neural circuits for multitasking demands in dynamic environments.

In this work, motivated by biological principles, we developed a 2D MoS2-based reconfigurable analog hardware (RAH) that included synapse, heterosynapse, and soma modules (Fig. 1b, (I)) and demonstrated its reconfigurable multiple functions and potential as a solution for general-purpose machines with rich dynamics. At the device level, the synapse, heterosynapse, and soma modules were fabricated with MoS2 FETs (including cascaded MoS2 FETs and MoS2 FET-based operational amplifier (OPA) units32). At the circuit level, the wiring assembly of the synapse, heterosynapse, and soma modules was adjusted based on the task requirements to process signal transmission. By co-encoding the inner states and inter-connections of all modules, the high adaptability and plasticity of RAH allowed the realization of diverse linear and nonlinear computing functions and the effective handling of varying task requirements. The proposed RAH realized the functions of an analog-to-digital converter (ADC) and a digital-to-analog converter (DAC), with reconfigurable resolutions to match different tasks, a bandwidth of 50 kHz, and a maximum power consumption (8-bit ADC and DAC) of ~750 μW in a converting period. Given its adaptability and flexibility, RAH can also support multiple computing functions. For instance, it was employed for sparse coding and convolutional computing, which were used to reconstruct and sharpen pathological regions in computed tomography (CT) images to facilitate identification and assessment by medical professionals. In addition, it imitated attention switching and visual persistence in visual systems through designed circuit configurations, which enabled the detection of distance and velocity in autonomous driving applications. This 2D MoS2-based RAH implemented the key degrees of freedom of the inner states and inter-connections of devices and modules. It can pave the way for the development of future brain-like general-purpose machines with high adaptability and plasticity for multitasking.

Biology-inspired reconfigurable hardware

Figure 1a illustrates the complex structures and functions of biological neural circuits formed by synapses, heterosynapses, soma, and their inter-connections. Motivated by biological principles, we developed an RAH by employing 2D MoS2 FETs to translate the biological concepts in Fig. 1a into a concrete electronic hardware design (Fig. 1b). RAH has synapse, heterosynapse, and soma modules, and diverse computing functions can be built through connection adaptation within or between these modules. As shown in Fig. 1b, (II), the synapse module constructed with multiple cascaded MoS2 FETs achieves synaptic-like plasticity by modulating the gate voltage to program the FET conductance state (Gn). Each transistor Mn follows a customizable discrete equation denoted as subfunction jx. The heterosynapse module, fabricated with MoS2 FET-based OPA units and six MoS2 FETs (T1–T6), controls the heterosynaptic-like inter-connection among different neural circuits by encoding the on/off states of the MoS2 FETs and is denoted as subfunction px. The connection encoding table of the on/off status of the six MoS2 FETs, which programs the heterosynaptic-like inter-connection under different environment-interactive requirements, is shown in Table S1. The soma module, composed of an MoS2 FET-based OPA and diverse feedback loops, integrates the front-end transmitted signals and generates a soma-like response, thus forming subfunction kx33,34,35. The manufacturing details are given in the Methods section and Note S1. The optical images, SEM image, STEM image, and schematic of the hardware are shown in Figs. S1 and S2; the hardware incorporates over 600 MoS2-based FETs, with a yield rate exceeding 95%. The performance of the MoS2 FET arrays in the synapse module was measured and is presented in Figs. S3 and S4. The design details of the MoS2 FET-based OPA are given in Note S3, and the corresponding feedback circuits formed with diverse feedback loops are shown in Figs. S5–S12. In analogy to the wiring-assembly degree of freedom that exists in the brain, the flexible combination of jx, px, and kx realizes diverse hardware functions (Table 1).

Table 1 Multiple functions with reconfigurable hardware configuration

To demonstrate the functionalities experimentally, we encoded eight square-wave signals in eight input channels, with an amplitude of 0.1 V and frequencies of \({f}_{n}={2}^{n-1}\times {f}_{1}(n=1,\,2,\,\ldots,8)\), where f1 = 5 Hz and n represents different input channels (Fig. 1c, (I)). The gate voltages applied to each device in the synapse module could fix the conductances of M1–M8 to the desired values to satisfy the geometric sequence equation j1: \({G}_{n}={G}_{1}/{2}^{n-1}\) (G1 is the conductance state of the first transistor, M1). Here, the conductance of M1 was G1 = 6.06 μS at Vg = 3.62 V (Fig. 1c, (II)). Subfunction p1 was realized by a transimpedance amplifier with a feedback resistor (Rf = 300 kΩ) and an on-state MoS2 FET and was connected to the voltage follower of subfunction k1 (Fig. 1c, (III) and (IV)). Figure 1d shows the performance of the 8-bit DAC, with adjacent converted analog voltage levels clearly distinguishable. Versatile functions were achieved in RAH by programming different combinations of subfunctions jx, px, and kx, which are implemented by the synapse, heterosynapse, and soma modules, respectively, giving RAH the ability to mimic the adaptive behavior of organisms in different environments.
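To make the encoding concrete, the following minimal sketch models the binary-weighted summation behind Fig. 1c, d under an ideal inverting summing stage; the parameter values (G1 = 6.06 μS, Rf = 300 kΩ, 0.1 V square waves at fn = 2^(n−1) × 5 Hz) come from the text, while the ideal-amplifier assumption and the absolute output scale are simplifications rather than the measured circuit behavior.

```python
# Minimal sketch of the binary-weighted DAC principle (Fig. 1c, d), assuming an
# ideal inverting summing (transimpedance) stage; device non-idealities ignored.
import numpy as np

G1, Rf, Vamp, f1 = 6.06e-6, 300e3, 0.1, 5.0        # S, Ohm, V, Hz (values from the text)
t = np.linspace(0, 0.2, 2000)                      # one 5 Hz period of channel 1

def square(f, t):
    """0/1 square wave with 50% duty cycle."""
    return (np.sin(2 * np.pi * f * t) >= 0).astype(float)

Vout = np.zeros_like(t)
for n in range(1, 9):
    Vin_n = Vamp * square(2 ** (n - 1) * f1, t)    # channel n toggles at 2**(n-1) * f1
    Gn = G1 / 2 ** (n - 1)                         # subfunction j1: geometric conductance sequence
    Vout += -Rf * Gn * Vin_n                       # currents summed at the virtual ground

# Expect roughly 2**8 = 256 distinct staircase levels over the sampled window
print("distinct output levels:", len(np.unique(np.round(Vout, 6))))
```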

Eight-bit DAC/ADC

RAH implemented the functionalities of an 8-bit ADC and DAC to show its signal conversion capabilities. The related simplified circuit is presented in Fig. 2a. The detailed circuit design and operation mechanisms of the ADC and DAC are given in Notes S3 and S4. In a differential mode configuration, low-frequency sine waves (1 V peak to peak) were input to the MoS2 FET-based OPA in the soma module to characterize the frequency responses. The gain and phase responses versus input frequency are depicted in Fig. 2b, c. The maximum gain was 6 dB because the reference resistor was set to twice the input resistor, and the 0 dB gain (unity-gain) cutoff frequency (fT) was 50 kHz. The high-frequency operation can be improved by reducing the MoS2 FET sizes36,37,38,39,40.
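For orientation, a rough single-pole model reproduces the two quoted numbers (6 dB low-frequency gain, 0 dB crossover at 50 kHz); this is only an idealization for reading Fig. 2b, c, not a model of the fabricated MoS2 OPA.

```python
# Rough single-pole approximation of the closed-loop response in Fig. 2b, c,
# using only the stated 6 dB low-frequency gain and fT = 50 kHz (0 dB crossover).
import numpy as np

A0 = 2.0                        # 6 dB: reference resistor = 2 x input resistor
fT = 50e3                       # unity-gain crossover from the measurement
fp = fT / np.sqrt(A0**2 - 1)    # pole frequency implied by the single-pole model (~28.9 kHz)

f = np.logspace(1, 6, 400)
H = A0 / (1 + 1j * f / fp)
gain_db = 20 * np.log10(np.abs(H))
phase_deg = np.degrees(np.angle(H))

print(f"pole ~{fp/1e3:.1f} kHz; gain at 50 kHz ~ {np.interp(50e3, f, gain_db):.2f} dB")
```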

Fig. 2: RAH adapted as 8-bit DAC and 8-bit ADC.

a Diagram of the simplified circuit structure of the 8-bit DAC and ADC. Frequency responses of the OPA, including gain bandwidth (b) and phase bandwidth (c). d Grayscale image of the digit 5 with 256 levels, with 6 selected pixel points as the input signals for the 8-bit DAC. e Entire data conversion process, including the digital input signals Vin1–Vin8 (1), DAC output signal Vout, DAC (2), reference threshold voltage Vth for the ADC (3), and ADC output signal Vout, ADC (4). f |Vout, DAC| of the 8-bit DAC corresponding to the signal input in (d) and the noise level of the circuit. The inset compares the unit quantization voltage (UQV) with the noise. g Vout, ADC of the 8-bit ADC corresponding to the signal input in (f) and the scanning Vth. h Error analysis by comparing |Vout, DAC| and Vth when the ADC finishes conversion.

Furthermore, to characterize the performance of RAH working as an 8-bit ADC and DAC, we chose six specific pixels from a grayscale image of the digit 5 containing 256 different levels as inputs (Fig. 2d). The digital pixels were converted into analog signals by the DAC and converted back to digital outputs by the ADC (the procedure is shown in Fig. 2e, and the conversion details are given in Note S6). The analog output voltages of the DAC were measured (Fig. 2f), and the inset magnifies the signal and noise curves within the dashed red box. The unit quantization voltage (UQV) reached 14.15 mV, and the noise level remained at 24 μV. The UQV-to-noise ratio of the 8-bit DAC was 55.4 dB, indicating that the converted results could be distinguished clearly. Subsequently, the six analog voltage signals were sent to the 8-bit ADC, and the converted digital number was determined by the corresponding converting time width t, as shown in Fig. 2g. Simultaneously, we recorded the Vload value when the output voltage switched to a low voltage level. Given that the load resistor of the ADC and the reference resistor of the DAC shared the same value, Vload and |Vout, DAC| had nearly identical values, with the difference being below 1 mV, which is much lower than the UQV. This result confirms that the hardware has excellent fidelity in signal conversion (Fig. 2h) and that the DAC/ADC resolution is reconfigurable. The 4- and 6-bit ADC/DAC functions are presented in Notes S4 and S5 and Figs. S13–S18, which indicate that RAH can match the resolution requirements of a task to avoid resource wastage.
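As a reading aid, the sketch below illustrates the ramp-compare (time-width) conversion described above, assuming a linearly scanned Vth that advances one UQV per comparator check and an assumed dwell time per step; the actual scan rate and comparator implementation in the hardware may differ.

```python
# Sketch of the single-slope (ramp-compare) readout of the 8-bit ADC: the
# digital code is the time width until the scanned Vth exceeds |Vout,DAC|.
import numpy as np

UQV = 14.15e-3                          # V per code, from the DAC measurement
step_time = 1e-3                        # assumed dwell time per code step (s)

def ramp_adc(v_analog, n_bits=8):
    """Return (code, conversion time) for one held analog sample."""
    codes = np.arange(2 ** n_bits)
    vth = codes * UQV                   # scanned reference threshold
    hit = np.nonzero(vth >= abs(v_analog))[0]
    code = int(hit[0]) if hit.size else 2 ** n_bits - 1
    return code, code * step_time

for v in (0.1415, 0.7075, 2.0):         # hypothetical |Vout,DAC| levels for illustration
    code, t_conv = ramp_adc(v)
    print(f"|Vin| = {v:.4f} V -> code {code}, conversion width {t_conv*1e3:.0f} ms")
```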

Reconstruction and feature extraction of medical images

The adaptability of RAH makes it applicable to various practical situations. Here, it was implemented to perform the reconstruction and feature extraction of medical CT images, with the aim of improving the identification of pathological regions and enhancing the accuracy of medical diagnosis. To achieve the image restoration process, the hardware was used to construct Function 1 (DAC/ADC) and Function 3 (convolutional kernels) for signal conversion and convolution calculation, as shown in Fig. 3a. A 256 × 256 pixel original CT image containing bleeding spots in the brain was segmented into 8 × 8 pixel patches for image processing, as shown in the red dotted box in Fig. 3a (I). First, the hardware working under the ADC and DAC functions was employed for a sparse coding algorithm, in which the original image was reconstructed to reduce noise via data transfer and conversion within the ADC and DAC (Fig. 3a, (II)). Second, the hardware adapted as convolutional kernels was used to sharpen the reconstructed image (Fig. 3a, (III)). The sparse coding process, which involves forward and backward DA/AD conversion (the details are in Note S7), is shown in Fig. 3b. Signal 1 was mapped to 0.1 V, and Signal 0 was mapped to 0 V. In the sparse coding process, 256 random coefficients were constructed initially, and the forward and backward calculation results of the first iteration are shown in Fig. 3c. In the sparse coding experiment, the choice of the regularization parameter λ was pivotal to balancing sparsity and image quality. Various λ values were applied experimentally to determine their effects. Figure S20 shows the corresponding activation level at λ = 2.2, at which only a few dictionary vectors contributed considerably to image reconstruction and the signal’s sparsity characteristics were captured effectively. After 100 iterations, the optimized sparse coefficients were obtained and are shown in Fig. 3d. Small λ values resulted in low sparsity and could cause overfitting in image reconstruction (Fig. S21, λ = 0.1), whereas large λ values enforced sparser encoding but resulted in underfitting (Fig. S22, λ = 3.5). In the experiment, λ = 2.2 achieved an excellent balance between sparsity and image quality.

Fig. 3: RAH configured for reconstruction and feature extraction of medical images.

a Image reconstruction by sparse coding and feature extraction by convolution realized with the ADC/DAC function and nonlinear convolution function of RAH, respectively. b Process of sparse coding. c Output data of forward and backward DA/AD conversion for the first iteration in the experiments. d Sparse coding coefficients for all patches in the original image. e Image sharpening (Function 3) in RAH, in which the synapse, heterosynapse, and soma modules were configured as subfunction j3 (convolutional kernel), subfunction p1 (sustained on-state), and subfunction k1 (voltage follower), respectively. f Electronic performance to evaluate subfunctions j3 and k1. g Output current of a 6 × 6 patch from the convolution operation. The 2D grayscale image of the 6 × 6 patch is given in the inset.

After image reconstruction, a convolution operation for image sharpening was applied by configuring RAH as convolutional kernels (Function 3, Fig. 3a (III)). Configuring the hardware as convolutional kernels avoids the need for additional accelerators and supports efficient large-scale convolution computations41. Such a configuration enhances the speed of image sharpening, particularly when dealing with extensive datasets. Initially, a 3 × 3 sharpening convolution kernel was designed, with the weight of the central element being nine times that of the surrounding elements. Accordingly, the weights of the convolution kernel were represented by the conductances of the transistors in the synapse module to encode subfunction j3 (Fig. 3e, left). The conductance of M5 was 9 G0 (3.4 μS) under Vg = −0.7 V, and the conductance of the remaining components was G0 (~0.37 μS) under Vg = −2.4 V (Fig. 3f, left). The right panel of Fig. 3e shows a sustained on-state T1 in the heterosynapse module (subfunction p1). The soma module worked as a voltage follower (subfunction k1), and its output characteristics are shown in the right panel of Fig. 3f. Each image patch from the reconstructed image was encoded as 8-bit binary digital signals, which were then converted into analog voltage signals through the hardware working as an 8-bit DAC. These analog voltage signals were fed into the reconfigured hardware (Function 3) configured for convolution operations. The convolution operation involved sliding the convolution kernel over each patch of the same size and calculating the convolution result in the corresponding region. Afterward, the output voltages were obtained to represent the extracted features from the respective regions. The experimental results are shown in Fig. 3g, and the features extracted from the 256 × 256 grayscale CT image are presented in Fig. 3a (III). The blood vessels and tissue structures were displayed more clearly than in the original image. Thus, RAH can provide robust support for future medical research and clinical applications. Additionally, image reconstruction with added noise was implemented using RAH (Figs. S23 and S24). The peak signal-to-noise ratios of the noisy and reconstructed images are 20.1026 and 27.1774 dB, respectively, in Fig. S25a (20.2940 and 28.5906 dB in Fig. S25b), confirming high-quality reconstruction under added noise. Notably, the number and conductance states of the cascaded transistors can be configured to form multifunctional convolution cores for flexible convolution operations.
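To illustrate the mapping of kernel weights onto conductances, the following sketch computes the ideal output currents I_out = Σ Gi·Vi for a 3 × 3 kernel slid over an 8 × 8 patch (yielding the 6 × 6 valid output of Fig. 3g); the conductance values follow the text, while the pixel-to-voltage scale and the random stand-in patch are assumptions for illustration.

```python
# Minimal sketch of the analog convolution step (Fig. 3e-g), assuming an ideal
# current sum I_out = sum(G_i * V_i) over each 3x3 region of the patch.
import numpy as np

G0 = 0.37e-6                                 # S, surrounding-element conductance
kernel_G = np.full((3, 3), G0)
kernel_G[1, 1] = 9 * G0                      # M5 encodes the 9x-weighted centre (3.4 uS)

def pixels_to_volts(patch_8bit, v_fullscale=0.1):
    """Map 8-bit pixel codes to analog volts, as the DAC stage would (scale assumed)."""
    return patch_8bit.astype(float) / 255.0 * v_fullscale

def convolve_at(image_v, r, c):
    """Output current for the kernel centred at (r, c), valid region only."""
    return float(np.sum(kernel_G * image_v[r - 1:r + 2, c - 1:c + 2]))

patch = np.random.randint(0, 256, (8, 8))    # stand-in for one reconstructed CT patch
volts = pixels_to_volts(patch)
I_out = np.array([[convolve_at(volts, r, c) for c in range(1, 7)] for r in range(1, 7)])
print("6x6 output currents (uA):\n", np.round(I_out * 1e6, 3))
```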

Visual processing in bionic receptive fields

RAH also allows for brain-like multitasking to realize attention switching and visual persistence, as in biological visual systems, so it has potential for applications in autonomous driving (Fig. 4a)42,43,44,45. Attention switching allows visual systems to focus only on information within the active receptive field (RF). The output signal changes immediately when the RF shifts. RF selection enhances data sparsity, reduces redundancy, and effectively captures critical features45,46,47. Meanwhile, visual persistence can utilize previous data acquired outside the present RF to predict future scenarios and infer the actions of objects. The output signal that responded to the previous RF persists, and its disappearance is delayed after the RF shifts. The biological mechanism is discussed in Note S7.

Fig. 4: RAH configured for visual processing in bionic receptive fields (RFs).

a Diagram of RF attention. Without visual residual, only signals within the active RF (left or right RF) are received. With visual residual, signals from the previously active RF persist for a short time and can be processed after the active RF is switched. b The left panel shows the simplified circuit connection for both RFs. The right panel presents the basic functional characteristics. The MoS2 FET conductance of the active RF satisfies the Gaussian distribution with G1 = G8 = G2/2 = G7/2 = G3/4 = G6/4 = G4/8 = G5/8 = 3.76 μS (\({j}_{m1}^{l}\)), and the output characteristics of each FET are shown on an orange background. The FET conductance of the inactive RF is in the off state with Goff of G1/14 = 0.271 μS (\({j}_{m1}^{r}\)), and the output characteristics of each FET are shown on a cyan background. The subfunction \({k}_{m}^{p}\) of the first-stage OPA constructed from MoS2 acts as a voltage follower, and its output characteristics are shown on a gray background. The subfunction \({k}_{m}^{{att}}\) of the second-stage OPA constructed from MoS2 acts as a comparator, and its output characteristics are shown on a pink background. The abovementioned modules were connected through subfunctions p1 and p2. c The left panel shows the input signals to \({L}_{3}^{l}\,\) and \({L}_{3}^{r}\,\) (red) under left RF activation and to \({L}_{6}^{l}\) and \({L}_{6}^{r}\,\) (blue) under right RF activation. The right panel presents the output signals of the first-stage OPA, where the red curve represents the left RF, and the blue curve represents the right RF. d Schematic of object detection in autonomous driving via RAH. Within the active left RF, the conductance distribution is programmed to compute distance and speed information. e, g For distance detection, the upper panel shows the output signals Vout,l of the first-stage OPA obtained from each programmed location. The lower panel shows the output signals Vout,att of the second-stage OPA obtained from each programmed location, whose width determines the distance (g). f, h For speed detection, the left panel presents the input signals with different time intervals representing different speed scenarios, and the right panel presents the output signals of the second-stage OPA obtained from each programmed location. The output time interval indicates the speed (h).

The functionality (Functions 4 and 5) of RAH was adjusted to realize the abovementioned biological mechanisms, as shown in Fig. 4b. The simplified hardware system used two synapse modules to represent the left and right RFs (marked with orange and blue backgrounds, respectively). A soma module with three OPAs (the electrical properties of the individual FETs are shown in Figs. S4, S26, and S27) was employed to achieve RF switching and visual persistence, and a heterosynapse module was used to realize the inter-connection of the synapse/soma modules by encoding the on/off switching of the MoS2 FETs. The active RF (synapse module 1) encoded a Gaussian distribution with \({G}_{{\mathrm{1,8}}}={G}_{{\mathrm{2,7}}}/2={G}_{{\mathrm{3,6}}}/4={G}_{{\mathrm{4,5}}}/8=3.76\,\mu {\mathrm{S}}\), marked as subfunction \({j}_{m}^{l}\), and the inactive RF (synapse module 2) represented universal off-states with \({G}_{{off}}={G}_{1}/14=0.271\,\mu {\mathrm{S}}\), marked as subfunction \({j}_{m}^{r}\). Signal persistence was realized with a voltage follower circuit by connecting the first-stage OPA (soma modules 1 and 2) circuit with a load capacitor (Cload, 33 μF; subfunction \({k}_{m}^{p}\)). The second-stage OPA (soma module 3) served as a voltage comparator to generate the output (subfunction \({k}_{m}^{{att}}\)).
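A minimal sketch of this attention-switching readout is given below, assuming that each first-stage output simply scales with the programmed conductance at the stimulated channel and that the second-stage comparator outputs the sign of (Vout,l − Vout,r); the load-resistance scaling is an assumed value, and the Cload-based persistence is not modelled.

```python
# Sketch of the receptive-field comparison in Fig. 4b, c: the Gaussian-weighted
# active RF produces a larger first-stage output than the off-state RF, so the
# comparator sign indicates which RF holds the attention.
import numpy as np

G1 = 3.76e-6
G_active = G1 * np.array([1, 2, 4, 8, 8, 4, 2, 1])   # Gaussian-like profile (j_m^l)
G_inactive = np.full(8, G1 / 14)                     # universal off-state (j_m^r)
R_load, V_pulse = 300e3, 0.1                         # assumed scaling resistance, 0.1 V pulse

def rf_outputs(channel, left_active=True):
    Gl, Gr = (G_active, G_inactive) if left_active else (G_inactive, G_active)
    v_l = R_load * Gl[channel] * V_pulse             # first-stage (follower) outputs
    v_r = R_load * Gr[channel] * V_pulse
    return v_l, v_r, np.sign(v_l - v_r)              # comparator output (+1 left, -1 right)

print(rf_outputs(2, left_active=True))    # pulse on L3 while attention is on the left RF
print(rf_outputs(5, left_active=False))   # pulse on L6 after attention switches to the right RF
```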

After circuit configuration, the left RF was activated during 0–50 ms, and the right RF was activated during 50–100 ms. A 20 ms, 0.1 V voltage pulse was applied to the \({{\mbox{L}}}_{3}^{{\mbox{l}}}\) and \({{\mbox{L}}}_{3}^{{\mbox{r}}}\) input ports at 25 ms and to the \({{\mbox{L}}}_{6}^{{\mbox{l}}}\) and \({{\mbox{L}}}_{6}^{{\mbox{r}}}\) ports at 50 ms (Fig. 4c, (I)). When the attention was on the left RF, the left RF’s soma module 1 generated a voltage output (\({V}_{{out},l}\)) higher than that of the right RF’s soma module 2 (Vout,r), leading to a positive output voltage \({V}_{{out},{att}}\) for soma module 3 (Fig. 4c, (II)). Similarly, a negative output voltage Vout,att was generated by soma module 3 when the attention was on the right RF (Fig. 4c, (II)). The output signal at soma module 3 persisted for a longer time than the input pulse width, indicating the visual persistence effect.

The distance and speed between the driving and target vehicles in autonomous driving can be determined using the abovementioned working principles. To simplify the testing, we focused on the detection of a single RF. The diagram is shown in Fig. 4d. M1–M8 represent eight activation-state encoders that program eight different locations (marked #1–#8) centered around the driving vehicle. These locations cover four different distances from the driving vehicle, distributed symmetrically in front of and behind the driving vehicle, and the corresponding conductance distribution of M1–M8 follows \({j}_{{\mbox{m}}}^{{\mbox{l}}}\). Notably, the conductance programming can be configured based on actual road conditions to expand the detection range and improve accuracy, highlighting the advantage of the adaptive hardware. Each encoder receives an activation voltage pulse with a width of 4 ms and an amplitude of 0.1 V when a target vehicle is detected at its corresponding programmed location. By processing the output signals of the hardware, we can obtain the distance and speed of the driving vehicle relative to the target vehicle.

During distance detection and processing, the current accumulation in soma module 1 is determined by the encoded transistor conductance, with a high conductance resulting in a high output voltage, as shown in Fig. 4e (I). A high output voltage of soma module 1 leads to a long persistence width of the output pulse of soma module 3, which indicates a short distance to the driving vehicle, as shown in Fig. 4e (II). Assuming that the target vehicle passes locations #1 to #8 sequentially in the experimental setup, the measured time widths of the output signals at #1–#8 are 44, 59, 73, 87, 87, 73, 59, and 44 ms, respectively (Fig. 4g), indicating that the target vehicle first approaches and then moves away from the driving vehicle. The relative distance between the target and driving vehicles can thus be read out directly.

In speed detection and processing, different speeds of the target vehicle result in different time intervals as the vehicle moves from one programmed location to an adjacent one. Thus, the time intervals between the output voltage pulses of soma module 3 can be used to determine the speed, with a short time interval indicating a high speed. As shown in Fig. 4f, to simplify the testing, we defined three scenarios with high, medium, and low speeds. Pulse signals with three different frequencies representing the three speeds were applied to \({{{\rm{L}}}}_{1}^{{{\rm{l}}}}{{\rm{\hbox{--}}}}{{{\rm{L}}}}_{8}^{{{\rm{l}}}}\) in sequence, as shown in Fig. 4f (I). The speed information could be distinguished directly by analyzing the time intervals of the output signals (Fig. 4f, (II)). The output frequencies for the high, medium, and low speeds were ~12.04, ~6.02, and ~4.02 Hz, respectively (Fig. 4h).
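As a simple illustration of how these two readouts are interpreted, the sketch below decodes the quoted persistence widths (distance) and output frequencies (speed), assuming only the monotonic relations stated in the text; it is a post-processing illustration, not part of the hardware.

```python
# Illustrative decoding of the Fig. 4g, h readouts: a longer persistence width
# means a closer target, and a shorter output pulse interval means a higher speed.
import numpy as np

widths_ms = np.array([44, 59, 73, 87, 87, 73, 59, 44])     # measured widths at locations #1-#8
print("closest programmed locations:", np.flatnonzero(widths_ms == widths_ms.max()) + 1)
print("farthest programmed locations:", np.flatnonzero(widths_ms == widths_ms.min()) + 1)

out_freq_hz = {"high": 12.04, "medium": 6.02, "low": 4.02}  # measured output frequencies
for label, f in out_freq_hz.items():
    print(f"{label} speed: output pulse interval ~ {1e3 / f:.0f} ms")
```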

Discussion

In summary, the 2D MoS2-based RAH implemented the connection-adaptable degree of freedom inspired by neural circuits in the brain. The devices’ inner states and inter-connections were co-encoded to enhance adaptability, thus endowing RAH with multiple functions and potential as a solution for general-purpose machines. Notably, the MoS2 FETs used in this work could be replaced with floating-gate transistors, charge-trapping transistors, or memristors to achieve a non-volatile hardware structure and reduce reliance on gate control strategies41,48. RAH working under the ADC and DAC functions exhibited reconfigurable resolutions (maximum of 8 bits), which can match different tasks to avoid resource wastage. The ADC and DAC also realized a bandwidth of 50 kHz and a maximum power consumption (8-bit ADC and DAC) of ~750 μW during a converting period. In addition, wide-ranging computing applications, such as AI-assisted diagnoses and autonomous driving, were discussed to directly prove that the high-level adaptability and flexibility of RAH can appropriately meet the linear and nonlinear computing requirements of different tasks. As one potential scheme for future brain-like general-purpose machines, this hardware is expected to be used in many other practical applications, such as speech recognition and decision-making. This design strategy opens an avenue to create other intelligent, efficient, and flexible general-purpose hardware solutions for complex problems.

Methods

This study complied with all the relevant ethical regulations. The clinical CT image samples used in this research were obtained from Huangshi Central Hospital after written informed consent was obtained from the patients. This research was approved by the Ethics Committee of Huangshi Central Hospital (K(2023)-46). All methods were implemented following the relevant guidelines and regulations stipulated in the Declaration of Helsinki.

Fabrication details of RAH

The CVD-grown MoS2 thin films supplied by Six Carbon Shenzhen were used as channel materials. All lithographic steps in the fabrication process were performed via UV lithography using an MA8 system. The gate, source, and drain metal contacts (Au and Pt) were manufactured with a Leybold electron-beam evaporation system at a pressure of <3 × 10−5 mbar. The gate dielectric, 30 nm Al2O3, was deposited via atomic layer deposition from trimethylaluminium and water at 250 °C. CVD MoS2 was lifted off the growth substrate by spin-coating a thick polymethyl methacrylate (PMMA) film. The polymer/MoS2 stack was then rinsed in deionized water. After drying at a slightly elevated temperature, the CVD MoS2 was transferred in a dry-air glovebox to the prebaked target substrate, which was heated slowly from room temperature to ~150 °C to soften the PMMA and facilitate proper contact between the film and the substrate. The PMMA was subsequently dissolved in acetone. MoS2 and Al2O3 were etched using Ar/SF6 plasma in an Oxford Cobra reactive ion etching system.

Electrical measurements

The electrical characteristics of the MoS2 FETs were measured in a cryogenic probe station at room temperature. An Agilent B1500 source measurement unit (SMU) was used to apply the gate and drain voltages and to measure the drain current as a function of gate voltage and drain voltage. The testing evaluated key parameters such as the transfer and output characteristics and the threshold voltage. For the MoS2 FET-based operational amplifier (OPA) circuits, measurements of the individual FETs were performed using the B1500 semiconductor analyzer in conjunction with the cryogenic probe station. For integrated device measurements, bias voltages were supplied by the B1500 semiconductor analyzer and a source meter. Input and output signals were generated and measured using an Agilent 33220A function generator and a Keysight Infiniivision oscilloscope, respectively. This setup ensured precise control of the device conditions and accurate characterization of the amplifier’s performance.

For the digital-to-analog conversion (DAC) functionality testing, encoded digital signals (square waves with varying frequencies) were applied across multiple input channels. The conductance values of these channels were programmed by the NI PXI equipment according to the required formula. The output analog signals were then measured using a Keysight Infiniivision oscilloscope to verify the accuracy of the digital-to-analog conversion. For the analog-to-digital conversion (ADC) functionality testing, an Agilent 33220A function generator provided an analog signal to one input of the operational amplifier, while the reference conductance channels received periodic square waves. The analog signal was compared with the reference voltages, and the digital output was monitored with a Keysight Infiniivision oscilloscope. A diagram of the experimental setup is shown in Fig. S29.

For the convolutional kernel testing, a convolution kernel was programmed by adjusting the conductance values of the transistors using an external controller (NI PXI system). The central FET was set to a conductance of 9 × G0, while the surrounding FETs were set to G0. The resulting output was recorded using a Keysight oscilloscope. The convolution process was applied to extract key features from the reconstructed CT image, enabling image sharpening and enhancing the clarity of important details. The clinical CT image samples used in this research were obtained from Huangshi Central Hospital, and their use was approved by the Ethics Committee of Huangshi Central Hospital (K(2023)-46).

Sparse coding

The purpose of sparse coding is to find a representation of the input X whose reconstruction is as close as possible to the input and whose coefficient vector is sparse. In optimal sparse approximation, we determined the coefficients with the fewest nonzero entries by solving the minimization problem

$$\min {\left\Vert \alpha \right\Vert }_{0}\quad {\rm{s.t.}}\quad D\alpha=X,$$

where \({\left\Vert \alpha \right\Vert }_{0}\) denotes the number of nonzero elements of \(\alpha=\left[{\alpha }_{1},{\alpha }_{2},\cdots,{\alpha }_{m}\right]\) and D represents the feature dictionary. To solve this NP-hard problem, we used a locally competitive algorithm to obtain the sparse coefficients.

The locally competitive algorithm (LCA) draws inspiration from observed properties of neural systems: inputs cause the membrane potential to accumulate like a leaky integrator, membrane potentials exceeding the threshold generate action potentials for extracellular signaling, and positive responses inhibit neighboring units through horizontal connections. The internal state of the units is denoted by \({u}_{m}(t)\). When the internal state \({u}_{m}\) of a node is sufficiently large, the node becomes active and produces an output signal \({\alpha }_{m}\) that inhibits the other nodes. This output coefficient results from applying an activation function to the membrane potential, \({\alpha }_{m}={T}_{\lambda }({u}_{m})\), with the system threshold λ as a parameter. Combining these components, the dynamics of the LCA nodes can be expressed by a nonlinear ordinary differential equation:

$$\frac{d{u}_{m}}{{dt}}=\frac{1}{\tau }\left(-{u}_{m}+{\left({X}_{m}-{\hat{X}}_{m}\right)}^{T}D+{\alpha }_{m}\right),$$
$${\alpha }_{m}=\left\{\begin{array}{ll}{u}_{m}, & {\rm{if}}\;{u}_{m} > \lambda \\ 0, & {\rm{if}}\;{u}_{m}\le \lambda \end{array}\right.,$$

where \({u}_{m}\) is the membrane potential of neuron m, τ is a time constant, \({\alpha }_{m}\) is the coefficient associated with the m-th column of D, and \({\hat{X}}_{m}=D{\alpha }_{m}\) is the corresponding reconstruction.
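A minimal software sketch of these dynamics is given below, assuming a random unit-norm dictionary, the hard-threshold activation T_λ defined above, and simple Euler integration; it illustrates the update rule only and is not the on-hardware implementation, which routes the forward and backward passes through the DAC/ADC loop of Fig. 3b.

```python
# Minimal LCA sketch following the dynamics above: leaky integration of u_m,
# hard-threshold activation (alpha_m = u_m if u_m > lambda, else 0), and
# residual feedback through the dictionary D. Dictionary and patch are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_atoms = 64, 256                      # 8x8 patch, 256 dictionary vectors
D = rng.standard_normal((n_pix, n_atoms))
D /= np.linalg.norm(D, axis=0)                # unit-norm dictionary columns
x = rng.standard_normal(n_pix)                # stand-in for one image patch

lam, tau, dt, n_iter = 2.2, 10.0, 1.0, 100    # lambda = 2.2, 100 iterations as in the text

def T_lambda(u, lam):
    """Hard threshold: alpha_m = u_m if u_m > lambda, else 0."""
    return np.where(u > lam, u, 0.0)

u = np.zeros(n_atoms)
for _ in range(n_iter):
    alpha = T_lambda(u, lam)
    residual = x - D @ alpha                  # X - X_hat
    u += (dt / tau) * (-u + D.T @ residual + alpha)

alpha = T_lambda(u, lam)
print("active coefficients:", int(np.count_nonzero(alpha)), "of", n_atoms)
```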

Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.