Abstract
The construction industry faces pressing challenges, including persistent labor shortages, hazardous working conditions, and stagnating productivity gains. Simultaneously, the field of humanoid robotics has matured from early experimental platforms to advanced systems capable of dynamic locomotion, dexterous manipulation, and partial autonomy. This paper examines how humanoid robots, with anthropomorphic designs suited to human-centric environments, might revolutionize future construction processes. The unique challenges of construction humanoid robots are outlined, including perceptual robustness, adaptive locomotion, human-level dexterity, continual learning, and human–robot collaboration. In addition, non-technical perspectives that could affect the adoption and implementation of humanoid robots in construction are discussed, including workforce implications and safety and ethical considerations. These challenges may emerge when translating humanoid capabilities to active construction sites due to unstructured and dynamic settings, unpredictable task sequences, frequent interactions with human trades, and nascent regulatory frameworks. Drawing on a breadth of literature, this paper presents future milestones, from near-term technical advances to longer-term visions of scalable, fully integrated robotic ecosystems. It uniquely contributes a comprehensive roadmap for humanoid robot integration in construction, addressing both technical challenges, such as adaptive locomotion, and ethical considerations, such as workforce impacts. By fostering interdisciplinary collaboration, it aims to position humanoid robots as transformative assets for advancing safety, efficiency, and sustainability in the industry.
Introduction
Recent breakthroughs in robotics and artificial intelligence (AI), particularly in the last 3 years, have enabled the development of next-generation humanoid robots characterized by enhanced mobility, dexterity, and autonomy. Characterized by anthropomorphic morphology and bipedal locomotion, these systems promise to operate in spaces originally designed for humans, leveraging existing infrastructures and tools in ways that conventional wheeled or tracked robots cannot1,2. While early humanoids, such as Honda’s ASIMO, primarily served as research platforms to explore bipedal locomotion and interactive behaviors3, newer entrants like Boston Dynamics’ Atlas or Tesla’s Optimus seek to expand into more practical and industrial applications4,5 (Fig. 1).
Humanoid robots have gained momentum: (a) ASIMO from Honda. Image source6. (b) Digit from Agility Robotics. Image source7, used with permission. (c, d) Atlas HD & New Atlas. Image source8, used with permission. (e) DR01. Image source9, used with permission. (f) Optimus from Tesla. Image source10.
In parallel, the construction industry faces critical challenges, among them a chronic labor shortage, stricter safety requirements, productivity declines, and time and cost overruns11,12,13. Construction projects often take place in environments with rapidly changing layouts, hazardous working conditions, and a high degree of unpredictability in day-to-day tasks14,15. Unlike sectors such as manufacturing, where automation has already led to dramatic gains in output and quality16,17, construction remains less automated due to the inherent variability in building processes and the significant customization required in each project phase18,19. These factors have led to a strong and still growing interest in the potential of humanoid robots to assume roles traditionally performed by on-site human workers, tackling repetitive, physically demanding, or hazardous tasks, and thus alleviating labor constraints and improving overall site safety20,21,22.
There have been reservations about adopting humanoid robots in the construction industry. One such argument claims that the humanoid morphology is not optimal for many construction activities, such as payload handling23,24. We argue, however, that adopting humanoid robots presents unique practical and scientific opportunities for transforming the entire industry, for three reasons. First, the human-centered design of construction tools and spaces means that a robot with similar anthropometrics and dexterous capabilities can, in principle, use the same infrastructure without requiring substantial redesign of work environments or equipment21,25. Second, a bipedal system is capable of traversing uneven, partially completed surfaces, climbing stairs and ladders, and navigating scaffolding, i.e., tasks that are relatively straightforward for humans but typically challenging for wheeled machines26,27. Third, the technology community has shown a growing interest in developing platforms and methods for humanoid robots28,29,30. These resources can be leveraged to enhance humanoid robots with AI-driven perception algorithms that harness 3D mapping, multimodal sensors, and real-time object recognition to adapt to evolving site conditions31,32. The ability of humanoid robots to operate flexibly in dynamic environments underpins a vision of seamless human–robot collaboration, in which machines provide support alongside skilled workers, thereby reducing physical strain and improving efficiency and safety outcomes33,34.
The potential economic, safety, and societal benefits of humanoid robots in construction are significant. Humanoid robots show great potential to automate material handling and transport, assembly, inspection, and work that must be done in hazardous locations. By addressing labor shortages, optimizing material handling, and mitigating hazards, these systems can contribute to a safer, more productive, and more sustainable industry35,36. Nevertheless, the realization of humanoid robot deployments in construction is still in its infancy. The demands of advanced perception ability, dexterity, robust locomotion, generalizable policies, high payload capacity, sufficient battery life, and intuitive human–robot interaction pose formidable engineering and algorithmic challenges22. Even as cutting-edge research demonstrates increasingly agile and capable humanoids in controlled or semi-structured contexts, such as laboratory settings or manufacturing cells, substantial work is needed to adapt these technologies to the more unpredictable and harsh conditions of active construction sites. In addition, introducing autonomous or semi-autonomous humanoids into a workforce also raises questions about workforce displacement, ethical considerations, and regulatory frameworks, all of which require careful and ongoing discussion among researchers, policymakers, and industry stakeholders37,38. This paper aims to develop an in-depth discussion on the challenges and opportunities of humanoid robotics for the construction industry and to propose a research roadmap for the next decade.
State of the art
Recent developments in humanoid robotics reflect a rapidly maturing field that has moved from proof-of-concept bipedal walkers to sophisticated platforms capable of dynamic locomotion, dexterous manipulation, and partial autonomy. Although the roots of humanoid robotics stretch back several decades, contemporary efforts have benefitted immensely from advances in actuation, sensing, and control theory. As researchers continue to explore new materials, mechanical architectures, and AI frameworks, humanoid robots are increasingly seen as potential game-changers in unstructured and human-centered domains such as construction, disaster response, and healthcare39,40,41.
Major platforms and their capabilities
Early examples, including Honda’s ASIMO and KAIST’s HUBO, primarily demonstrated the viability of bipedal locomotion and rudimentary interactive behaviors3,42, as summarized in Table 1. Their designs focused on maintaining balance under moderate disturbances, performing simple gestures, and navigating flat surfaces. These pioneering systems set crucial foundations for gait control, power management, and anthropomorphic design. In the past decade, however, a new wave of platforms has emerged with increasingly robust mobility and advanced manipulation abilities. Boston Dynamics’ Atlas, for instance, employs high-torque electrical actuators and model predictive control to enable agile motions such as jumping, backflips, and traversing rough terrain. These capabilities are complemented by recent advancements in reinforcement learning frameworks, which allow robots like Atlas to adapt locomotion strategies dynamically to unstructured environments4,71. Similarly, Tesla’s Optimus project, though still in early development, aims to merge large-scale reinforcement learning with advanced servomotors, aspiring to produce cost-efficient humanoids for industrial and consumer applications5. Examples of humanoid robots are shown in Table 2.
Research into locomotion spans beyond simply achieving upright bipedal movement. The emphasis lies in robust adaptation to diverse terrains and disturbances without sacrificing overall efficiency73,74. Classical approaches like the Zero Moment Point (ZMP) framework remain influential, as they define stable gaits by ensuring the robot’s center of mass stays within the support polygon formed by its feet75,76. However, more contemporary work employs deep reinforcement learning to train walking policies in simulation, allowing robots to master dynamic locomotion skills under varied scenarios77,78. Such policies are then transferred to hardware, requiring careful tuning to accommodate real-world factors such as actuator backlash, sensor noise, or friction inconsistencies79,80. Balancing these computationally demanding methods against real-time constraints remains an ongoing challenge, necessitating hardware accelerators or algorithmic optimizations.
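To make the ZMP criterion above concrete, the following sketch computes a robot's Zero Moment Point under the simplified cart-table model and tests whether it lies inside the support polygon formed by the stance foot. The footprint dimensions, center-of-mass values, and function names are illustrative assumptions, not taken from any specific platform.

```python
# Illustrative ZMP stability check using the simplified cart-table model.
# All numeric values (foot geometry, CoM state) are made up for illustration.

G = 9.81  # gravitational acceleration, m/s^2

def zmp_cart_table(com_pos, com_acc, com_height):
    """ZMP under the cart-table model: p = x - (z/g) * x_ddot."""
    x, y = com_pos
    ax, ay = com_acc
    return (x - com_height / G * ax, y - com_height / G * ay)

def inside_support_polygon(point, polygon):
    """Ray-casting test: is the ZMP inside the foot-support polygon?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Single-support phase: rectangular footprint of the stance foot (metres).
foot = [(-0.05, -0.10), (0.05, -0.10), (0.05, 0.15), (-0.05, 0.15)]
zmp = zmp_cart_table(com_pos=(0.0, 0.02), com_acc=(0.1, -0.3), com_height=0.9)
print(zmp, "stable" if inside_support_polygon(zmp, foot) else "unstable")
```

A gait is considered dynamically balanced in this framework as long as the computed ZMP stays within the polygon for every phase of the stride; a learning-based policy replaces this analytic check with a trained controller but must satisfy the same physical constraint.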
Another core domain is manipulation. Since humanoid robots are intended to function in spaces and with tools designed for humans, researchers focus on end-effector designs, soft or flexible actuators, and multi-modal sensing. Tactile sensors embedded in finger pads, high-resolution force-torque sensors at the wrist, and visual-haptic fusion all aim to enable skillful handling of objects of varied sizes and fragility81,82. Cutting-edge labs have also explored imitation learning and zero-shot learning, allowing humanoids to quickly adapt to unfamiliar tasks based on limited demonstrations62,83. This convergence of sensor-driven control, machine learning, and ergonomically inspired hardware is critical to achieving human-like dexterity, as tasks such as wire routing, bolt tightening, and part assembly can be substantially more complex than the standardized manipulations seen in industrial robotics84,85.
Finally, perception and AI integration remain pivotal. Current humanoids typically carry multiple sensor modalities, including stereo or Red Green Blue-Depth (RGB-D) cameras, Light Detection and Ranging (LiDAR), Inertial Measurement Units (IMUs), and sometimes radar, to map their surroundings, identify objects, and track their own poses86,87. AI algorithms for object detection and semantic scene parsing often leverage deep neural networks, which have shown remarkable progress on benchmarks88,89. Nevertheless, these capabilities are largely validated in controlled environments, and real-world performance can deteriorate under variable lighting, dust, or clutter. Fragility, high deployment costs, and the absence of large-scale real-world training datasets continue to hamper the widespread adoption of humanoids outside research institutes or specialized industry pilot projects90,91.
Unique challenges for construction robotics
Construction sites, by definition, are unstructured, large-scale environments in flux. Unlike factories with fixed robotic cells, construction areas feature partially built walls, scaffolding in constant motion, and daily rearrangements of heavy equipment92,93. Tasks like pouring concrete, stacking materials, or installing steel beams can drastically alter navigable paths, making frequent re-mapping and localization crucial. Dust, glare, and inclement weather degrade sensor performance, while uneven terrain heightens the risk of slips or falls85,94. Traditional solutions that rely on static reference markers or carefully planned routes often falter when confronted with such perpetual variability.
A further complication arises from the diversity of tasks on-site. Some are repetitive, like carrying bricks or painting walls, while others demand extreme precision, such as plumbing or electrical installations. The order of operations can also shift unexpectedly: if a critical material delivery is delayed, teams might reprioritize tasks and disrupt the planned workflow49,95. For a humanoid robot to be effective, it must seamlessly adapt to these evolving conditions, modulating its skill sets and planning horizons. However, existing control algorithms and scheduling systems often assume a degree of task predictability ill-matched to the sporadic nature of construction96,97. This necessitates robust task switching, flexible scheduling, and possibly real-time learning or human-in-the-loop planning.
Cooperative interactions with human workers further complicate matters. Construction sites frequently host tradespeople of varied specializations (electricians, pipefitters, carpenters), each operating with different tools, safety protocols, and daily objectives98,99,100. Robots have to navigate the same shared environment, sometimes needing to coordinate tasks such as material handoffs, floor layout changes, or collaborative lift-and-fit operations. Ensuring the robot can detect and respond to human gestures, maintain safe distances, and provide clear communication channels is paramount101,102. These interaction layers far exceed those of conventional manufacturing, where robots are often caged or structured to avoid close human–robot proximity.
All of these technical complexities compound when one factors in regulatory and liability frameworks103. Construction sites are subject to stringent occupational safety rules, such as OSHA standards in the U.S. or EN directives in Europe, to protect workers from falls, equipment malfunctions, and respiratory hazards104,105,106. Introducing a humanoid robot that traverses scaffolding or wields power tools raises new questions about accountability for accidents. If a robot’s sensor fails and it collides with a worker or structural element, does fault lie with the machine’s operator, manufacturer, or site supervisor107,108? Equally pressing are insurance premiums and certification processes, which typically assume risk profiles suited to standard equipment. Evolving these frameworks to encompass humanoid robotics in open, dynamic sites remains an ongoing pursuit109,110,111.
Beyond these environmental and regulatory challenges, the mechanical and energy limitations of current humanoid robots remain major barriers to practical deployment. Despite impressive advances in actuation and control, humanoids continue to struggle with joint elasticity, vibration damping, and precision in ankle and wrist mechanisms, all of which affect balance and dexterous tool use. Their payload capacity is relatively low: for example, Boston Dynamics’ Atlas (≈ 89 kg) can safely lift only about 11 kg, making heavy material handling impractical on most sites. Similarly, battery endurance typically ranges between 30 and 90 min of continuous operation, far below the duration required for daily construction tasks. Table 2 summarizes representative metrics for current state-of-the-art humanoids, including battery endurance and payload-to-mass ratios. Most commercial and research platforms operate for 1–2 h per charge, with battery packs ranging between 1.5 and 3.0 kWh and overall mass above 40 kg. Their effective payloads rarely exceed 20–25 kg.
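The figures above can be turned into simple back-of-envelope metrics. The sketch below computes the payload-to-mass ratio and the average power draw implied by pack capacity and runtime; the specific numbers are the approximate values quoted in the text, not official specifications.

```python
# Back-of-envelope deployment metrics from the approximate figures cited
# above (Atlas: ~11 kg payload on ~89 kg mass; 1.5-3.0 kWh packs, 1-2 h).

def payload_ratio(payload_kg, mass_kg):
    """Fraction of the robot's own mass it can carry as payload."""
    return payload_kg / mass_kg

def mean_power_kw(battery_kwh, endurance_h):
    """Average power draw implied by pack capacity and runtime."""
    return battery_kwh / endurance_h

# Atlas-style example: ~11 kg payload on a ~89 kg platform.
print(f"payload-to-mass ratio: {payload_ratio(11, 89):.2f}")   # roughly 0.12

# A 2.0 kWh pack lasting 1.5 h implies ~1.3 kW average draw.
print(f"mean power: {mean_power_kw(2.0, 1.5):.2f} kW")
```

By comparison, a human laborer routinely carries loads approaching half their body mass over a full shift, which illustrates how far current payload-to-mass ratios (on the order of 0.1–0.3) fall short of site requirements.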
Energy efficiency is a critical bottleneck: dynamic locomotion, balance correction, and actuation of multiple high-torque joints demand continuous power, leading to rapid depletion. To address this, ongoing research explores energy-efficient grippers and compliant end-effectors capable of passive holding or variable stiffness control, minimizing active power draw during sustained manipulation. Recent designs use tendon-driven underactuation or electro-permanent magnetic couplers to hold payloads with minimal current once grasped, while others integrate variable-stiffness actuators (VSA) that store and release mechanical energy through elastic components. These strategies are particularly valuable for field operations where power access is unreliable, such as remote or partially completed construction sites, enabling robots to sustain manipulation tasks without constant high-power actuation.
In addition, limited autonomy constrains field usability: many humanoids still rely on external motion-capture systems or pre-planned trajectories for navigation and manipulation. The combination of low payload-to-mass ratios, short battery life, and limited endurance in harsh weather means current humanoid systems remain primarily research prototypes rather than deployable field assets. Nevertheless, ongoing mechatronic innovations, including lightweight composite materials, variable-stiffness actuators, energy-dense batteries, and hybrid locomotion mechanisms, are steadily narrowing this gap. Continued collaboration between roboticists and civil engineers will be crucial to achieving the mechanical resilience and energy efficiency necessary for sustained operation in real-world construction environments.
Existing efforts for advancing construction robotics
Despite these formidable obstacles, the potential benefits of humanoid robots in construction remain too significant to overlook. If such systems can be effectively adapted to the realities of construction environments, they could eventually undertake physically strenuous or hazardous tasks, thereby improving worker safety, reducing operational costs, and shortening project durations112,113. Achieving these outcomes, however, depends on targeted interdisciplinary research that marries advanced perception (able to handle high levels of environmental noise and clutter) with locomotion strategies suitable for uneven, partially assembled floors93,94,114. Efforts to develop more energy-efficient systems are equally critical, as construction tasks frequently run for extended shifts in remote settings where frequent battery swaps might be impractical115,116,117. Additionally, cooperative task planning must be refined to facilitate robust, dynamic teaming between human specialists and robotic assistants, bridging the gap between theoretical multi-robot coordination algorithms and tangible job-site collaboration61,102,103,118,119. Addressing these gaps will entail consistent collaboration between roboticists, civil engineers, site managers, and policymakers, ensuring that hardware design, control algorithms, and regulatory measures evolve in tandem120,121. Only through such interdisciplinary synergy can humanoid robots progress from captivating laboratory demos to essential, reliable players on tomorrow’s construction sites.
Opportunities and application areas
The inherent anthropomorphic design and advanced locomotive and manipulation capabilities of humanoid robots are expected to pave the way for a range of construction-related applications. By capitalizing on the capacity to traverse sites built for human workers and to handle tools intended for human hands, humanoid robots may uniquely address several persistent challenges in the construction sector73,94,98,122. This section outlines four promising avenues: material handling and transport; assembly and installation; inspection and quality control; and demolition and hazardous work. In each of these, humanoid robots could deliver appreciable value if current technological hurdles can be surmounted.
Material handling and transport
Material handling within construction projects is a labor-intensive, repetitive, and often injury-prone activity123. Manual transport of heavy loads or bulky materials, such as bricks, drywall, or piping, not only consumes significant labor resources but also exposes workers to musculoskeletal hazards124. Humanoid robots, equipped with bipedal locomotion and dexterous manipulators, present an opportunity to automate these tasks in environments not well-suited for traditional wheeled or track-based robotic platforms125,126 (Fig. 2).
By maintaining an upright posture, humanoids can navigate through tight corridors, climb stairs, and move across uneven surfaces with greater ease than comparable wheeled systems94,114. Their human-like reach and dexterity enable the handling of irregularly shaped or fragile materials that might otherwise require specialized grippers or custom apparatuses57,122. When augmented with advanced AI-based perception, these robots could dynamically adapt to fluctuating site conditions, avoiding obstacles, rerouting in response to temporary blockages, and adjusting force output to accommodate varying loads55,56,57,81. This adaptability could help minimize logistics bottlenecks, reduce physical strain on human workers, and improve overall site productivity12,96,127.
Assembly and installation
Construction projects rely on precise and timely assembly tasks, ranging from rough framing to the installation of finishing elements such as fixtures, wiring, and insulation128. The high variability and custom nature of these tasks make them especially challenging to automate with traditional industrial robots, which are typically confined to structured environments34,50. Humanoid robots, however, could leverage their anthropomorphic dimensions and dexterous end-effectors to use standard hand tools, such as drills, wrenches, and hammers, and to navigate within partially completed structures without major alterations to the site98,122,129,130,131.
Emerging research in dexterous manipulation, particularly work employing multi-sensor integration and AI-driven motion planning, strengthens the feasibility of humanoid robots performing fine-grained installation tasks82,122,132. Advanced force feedback and tactile sensing can enable nuanced control for fastening bolts, positioning ducts, or aligning modular components, thereby reducing the margin of error57,133,134 (Fig. 3). Moreover, real-time coordination with Building Information Modeling (BIM) systems could provide the robot with geometric references and scheduling data, enabling dynamic adjustments to reflect on-site changes or design updates135,136.
Dexterous hands for pipe assembly. Image source: Informatics, Cobots and Intelligent Construction Lab, University of Florida.
Inspection and quality control
Ensuring structural integrity and compliance with specifications is paramount to successful construction outcomes20,36. Typically, inspectors must traverse scaffolding, maneuver through narrow spaces, or climb to significant heights to perform detailed evaluations of welds, joints, or installations, a task often rendered hazardous by unstable or confined environments14,137. Humanoid robots, by virtue of their bipedal balance and full-body mobility, can ascend and navigate scaffolding or ladder-like structures without extensive site reconfiguration78,138.
Integrating high-resolution visual, thermal, or even ultrasonic sensors into the humanoid’s upper torso or head module can facilitate detailed, non-destructive testing of critical components63,139,140. With the aid of AI-based object detection and anomaly classification, robots can highlight potential defects, such as cracks, misalignments, or thermal irregularities, in real-time, communicating these findings to human supervisors through a shared digital platform141,142,143,144. This continuous data-driven inspection process has the potential to enhance quality control, reduce rework, and bolster site safety by limiting the need for human workers to undertake dangerous assessments145,146.
Demolition and hazardous work
Demolition tasks and hazardous operations, such as handling toxic materials, cutting through unstable structures, or performing cleanup in disaster zones, pose significant risks to human workers147,148,149. While specialized demolition machinery exists, many aspects of dismantling or decommissioning a structure still rely on manual labor, especially for tasks involving restricted spaces or partial structural collapses150,151. Humanoid robots equipped with reinforced end effectors and ruggedized body shells could navigate precarious areas, wield standard demolition tools, and perform partial teardown operations more safely than human crews152,153 (Fig. 4).
Moreover, teleoperation modes allow human operators to guide the robot’s actions from a safe distance, leveraging onboard cameras and force feedback for situational awareness95,155,156. This dual mode of operation (autonomous for routine tasks, teleoperated for complex or delicate procedures) enables continuous adjustments to unplanned circumstances (e.g., unexpected debris shifts) while ensuring human expertise remains in the loop157,158. Such hybrid solutions could drastically reduce injuries in demolition work and related hazardous applications like asbestos removal, while also contributing to faster site turnover159,160,161.
Taken together, these use cases illustrate the breadth of potential value that humanoid robots may bring to construction sites. Whether assisting in material logistics or performing nuanced installation tasks, the ability of humanoid robots to replicate human mobility and tool usage gives them a distinct edge over conventional robotic solutions22,162,163. However, realizing these capabilities on a large scale will require concomitant progress in perception, locomotion stability, power management, and human–robot interaction, as described in subsequent sections of this paper. The successful integration of humanoid robots in construction will thus demand a concerted, multidisciplinary effort that involves not only roboticists and AI researchers but also construction professionals, safety experts, and policymakers92,96,107,120,121,164,165.
Technical challenges and considerations
Humanoid robot deployments in construction settings demand robust engineering and advanced algorithms capable of handling the inherent unpredictability of unstructured, dynamic worksites. Many of the technical breakthroughs enabling humanoid robots to walk, perceive, and manipulate objects remain optimized for laboratory or factory conditions, where variables are more contained1,2,22,38. In contrast, construction tasks confront robots with moving equipment, workers, and ever-changing site layouts that stress-test existing perception and control methods166. In addition, the versatility and dynamism of construction also call for the ability to learn generalizable policies and to learn continually167,168. This section addresses the substantial technical hurdles and considerations required for implementing humanoid robots in dynamic construction environments.
Long and deep perception
Humanoid robots in construction must operate in highly dynamic settings where the tasks and the layout evolve across overlapping phases, such as excavation, foundation work, framing, and finishing137,169,170. Each phase introduces fresh contexts and spatial configurations (partially completed structures, temporary scaffolding, newly placed rebar) that can radically alter navigable paths and obscure visual markers. Moreover, tasks like installing formwork, routing electrical conduits, and fitting concrete panels are interdependent, so an error in one task may cascade into complications in subsequent steps171. Standard perception pipelines, which often assume static or slowly shifting environments, are ill-prepared for these fast-paced changes. Consequently, a robust construction robot must not only detect and localize existing walls or equipment but also predict how they might evolve over the course of a single workday, especially in response to crane movements, material deliveries, or partial installations118,153,172. We call this new type of perception “long perception” (see Fig. 5). Balancing this predictive element in long perception with immediate situational awareness is essential for safe, efficient navigation and task execution in such fluid settings.
Another reason why many existing perception systems struggle lies in the fundamental 3D complexity of construction sites. Unlike functionally “2.5D” indoor environments, construction areas contain overhead cranes and suspended loads, trenches and rebar protrusions underfoot, and partial floors or scaffolding at intermediate levels174. Even advanced algorithms for object detection or semantic mapping can be flummoxed by moving workers, occluded objects, or intense dust from cutting and drilling operations. To address these challenges, research increasingly emphasizes multi-sensor fusion, integrating streams from LiDAR, stereo cameras, inertial measurement units (IMUs), and potentially radar or ultrasonic sensors175,176. Fusing these data sources in real time makes it possible to build dense 3D representations, often leveraging SLAM (Simultaneous Localization and Mapping) techniques that incorporate factor graphs or graph optimization for scalability164,175,177,178. The efficacy of these pipelines is tempered by high noise levels, partial occlusions, and reflective materials, prompting the need for robust outlier rejection and continuous sensor calibration, which we call “deep perception” capability (see Fig. 5).
New methods are needed to further enhance perception to reach the proposed “long and deep perception”. One direction is to pursue algorithmic innovations in SLAM and visual odometry pipelines, implementing context-aware filtering and redundant sensor arrays to bolster reliability. Some solutions can turn to machine learning architectures, including transformer models and self-supervised networks, which can extrapolate structural cues even if large portions of the scene are obscured114,164. Recent advances in Vision Transformers (ViTs) and multimodal fusion techniques offer promising avenues for enhancing perception pipelines by enabling robust object detection and semantic understanding under challenging conditions like dust or occlusion. For instance, newly poured concrete, with its uniform texture, can break traditional feature-based matching; machine learning-driven approaches can infer likely boundaries or shapes even without distinct key points. Additionally, we should explore in-situ sensing networks as a means of offloading part of the perception burden, installing IoT beacons or drone-mounted LiDAR scanners to provide globally consistent point clouds that the robot can fuse with its local observations179,180. These external sensors also help track fast-moving or occluded objects, including workers wearing sensor-embedded vests102,181,182.
Still, adopting external sensor infrastructure creates its own set of challenges and trade-offs. Installation and maintenance of site-wide LiDAR scanners or beacons can be expensive, and connectivity dropouts may cause the robot to revert to onboard perception in precisely the adverse conditions that prompted external assistance in the first place183,184. A typical scenario is a robot tasked with moving drywall panels through a corridor where dust clouds and irregular lighting impair camera-based SLAM. A global LiDAR network could offer updated environmental maps, enabling more reliable route planning. Yet if part of the network fails or misaligns due to vibrations or weather, the robot might have to rely on noisy onboard measurements to avoid collisions185,186. These contingencies underscore the need for fallback strategies, robust sensor fusion, and adaptive algorithms that gracefully degrade in performance rather than fail abruptly.
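The graceful-degradation behavior described above can be made concrete with a minimal fusion policy: trust a fresh external position fix, discount or drop a stale one, and fall back to onboard odometry when the network is unavailable. The data structure, threshold, and inverse-variance weighting below are illustrative assumptions, not a specific system's API.

```python
# Minimal sketch of a graceful-degradation localization policy: prefer
# the external (site-wide) fix when fresh, otherwise fall back to onboard
# odometry. Staleness threshold and variances are illustrative values.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Fix:
    position: tuple       # (x, y) in the site frame, metres
    variance: float       # isotropic position variance, m^2
    age_s: float          # time since the measurement was taken, seconds

STALE_AFTER_S = 2.0       # external fixes older than this are distrusted

def fuse(external: Optional[Fix], onboard: Fix) -> tuple:
    """Inverse-variance fusion, dropping stale external data entirely."""
    if external is None or external.age_s > STALE_AFTER_S:
        return onboard.position          # degrade gracefully, don't fail
    w_ext = 1.0 / external.variance
    w_on = 1.0 / onboard.variance
    x = (w_ext * external.position[0] + w_on * onboard.position[0]) / (w_ext + w_on)
    y = (w_ext * external.position[1] + w_on * onboard.position[1]) / (w_ext + w_on)
    return (x, y)

# A fresh, precise external fix dominates noisy onboard odometry...
print(fuse(Fix((10.0, 5.0), 0.01, 0.2), Fix((10.4, 5.3), 0.25, 0.0)))
# ...while a network dropout reverts cleanly to onboard-only estimates.
print(fuse(None, Fix((10.4, 5.3), 0.25, 0.0)))
```

The key design choice is that a dropout changes the estimate's quality, not the system's availability: the robot keeps navigating on degraded onboard data instead of halting, which matches the fallback requirement discussed above.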
All-terrain locomotion and mobility
Humanoid robots on construction sites confront enormous variability in terrain, ranging from soft mud and loose gravel to scattered debris and uneven ground levels. Unlike controlled indoor environments where paths are flat and predictable, construction terrains often shift daily due to excavation or deliveries, and even small obstacles such as wood offcuts or cable spools can jeopardize stability. Furthermore, certain tasks may require ascending scaffolding that sways under load or navigating partial structures with little margin for error. These factors make it difficult to precompute a reliable gait or route; instead, real-time adjustments in friction, slope, and support compliance become essential for safe locomotion. The combined threat of stumbling, tipping, or encountering abrupt changes in surface inclination underscores the critical importance of robust gait planning, active balance control, and adaptive body posture.
Recent approaches (Fig. 6) to tackling uneven terrain rely on predictive footstep planning informed by real-time sensor data, sometimes known as foothold adaptation. By estimating ground properties before each step, via force sensors or exteroceptive measurements, robots can adjust stance width, gait cycle duration, and foot placement to minimize slips and stumbles187,188. Additional techniques incorporate tactile or distributed pressure sensors in the feet, allowing for partial compliance or “give” in each step as the robot encounters irregularities. Similarly, on scaffolding or partial floor segments, advanced balance algorithms using Zero Moment Point (ZMP) or Model Predictive Control (MPC) frameworks can integrate scaffold compliance data and micro-vibrations into the center-of-mass trajectory planning46,47,76. Fall recovery mechanisms, such as reflex stepping or using transient handholds, provide fallback strategies if the robot’s predictions fail in the face of unexpected perturbations. Researchers also explore controlled fall approaches to mitigate damage or injury when all other countermeasures fail.
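The ZMP stability criterion mentioned above reduces to checking that the zero moment point lies inside the support polygon. A minimal sketch, assuming a convex polygon with counter-clockwise vertices (the footprint dimensions are illustrative):

```python
def zmp_stable(zmp_xy, support_polygon):
    """True if the ZMP lies inside a convex CCW polygon (cross-product test)."""
    x, y = zmp_xy
    n = len(support_polygon)
    for i in range(n):
        x1, y1 = support_polygon[i]
        x2, y2 = support_polygon[(i + 1) % n]
        # The point must lie on the left of every counter-clockwise edge
        if (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) < 0:
            return False
    return True

# Single-foot support: a 10 cm x 25 cm footprint, vertices in CCW order (meters)
foot = [(0.0, 0.0), (0.10, 0.0), (0.10, 0.25), (0.0, 0.25)]
```

A balance controller would run such a test at every control tick, triggering reflex stepping or a transient handhold when the predicted ZMP drifts toward the polygon boundary.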
In parallel, a key focus is refining gait control algorithms for bipedal robots to be both agile and resilient194,195,196. While ZMP-based control has traditionally anchored stable walking, it can be too conservative for cluttered sites demanding quick evasive maneuvers around mobile equipment. MPC, by contrast, anticipates future states over a short horizon, enabling rapid footstep reconfiguration when terrain or obstacles change suddenly. Hybrid locomotion paradigms blend reactive reflex loops, which handle micro-disturbances at the millisecond timescale, with higher-level predictive models that plan footsteps over seconds, striking a balance between responsiveness and long-term stability. Others propose bipedal-hybrid systems that incorporate wheeled or tracked “feet”, switching between walking in complex vertical areas (e.g., ladders, tight corners) and rolling on flat surfaces for efficiency193,197. However, such designs often complicate transitions between different types of ground and can compromise the anthropomorphic advantages needed to climb or work in human-designed spaces.
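A one-step-horizon version of predictive footstep selection can be sketched as a cost minimization over candidate footholds; the clearance threshold and penalty weight below are illustrative only, standing in for a full MPC formulation.

```python
def choose_footstep(candidates, goal, obstacles, min_clearance=0.15):
    """Greedy one-step horizon: pick a foothold that nears the goal
    while keeping clearance from known debris or obstacles."""
    def cost(step):
        dist_to_goal = ((step[0] - goal[0])**2 + (step[1] - goal[1])**2) ** 0.5
        penalty = 0.0
        for ox, oy in obstacles:
            d = ((step[0] - ox)**2 + (step[1] - oy)**2) ** 0.5
            if d < min_clearance:
                penalty += 100.0  # large penalty: never step on debris
        return dist_to_goal + penalty
    return min(candidates, key=cost)

# Reachable footholds (meters), a goal ahead, and a wood offcut in the way
candidates = [(0.30, 0.00), (0.30, 0.20), (0.20, -0.10)]
goal = (1.0, 0.0)
debris = [(0.30, 0.00)]
step = choose_footstep(candidates, goal, debris)
```

The direct foothold is rejected because it sits on the debris, and the lateral offset at (0.20, −0.10) is rejected for insufficient clearance, so the planner sidesteps to (0.30, 0.20). A real MPC would repeat this over several future steps and re-solve as the terrain map updates.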
Another crucial consideration is energy consumption and safety. Continual balance corrections, high-torque actuators, and heavy onboard computational loads can rapidly deplete batteries, particularly if the robot is also tasked with carrying substantial payloads like tools or building materials115,116,117. Construction sites often lack convenient recharging infrastructure, pushing engineers toward solutions such as swappable battery packs, lightweight composite materials, and energy-aware motion planning. From a safety standpoint, geofencing or high-level supervisory controls can override locomotion routines if the robot drifts into restricted zones or if environmental conditions degrade too severely. Combined with sensor redundancy and anomaly detection in mechanical components, these safety protocols aim to prevent catastrophic falls or collisions. Ultimately, locomotion in construction is a delicate interplay of hardware robustness, real-time motion intelligence, and fail-safe mechanisms, an interplay that must be meticulously designed to handle the unpredictability of large-scale building projects.
Human-level manipulation dexterity
Humanoid robots in construction must contend with an extraordinary variety of objects, tools, and materials in environments far more chaotic than standard industrial settings. Unlike the controlled or semi-structured layouts seen in factories, construction sites are rife with partially assembled frameworks, irregularly shaped components, and materials ranging from fine electrical wiring to massive steel beams128,198,199,200. Manipulation tasks thus oscillate between precision (e.g., carefully threading cables through conduits) and brute force (e.g., prying or lifting heavy panels), often within tight clearances. Moreover, dust, debris, and temperature fluctuations can degrade sensor performance and mechanical reliability201,202,203. These factors underscore the acute need for end-effectors and control algorithms specifically tailored to support robust, adaptive manipulation. Critically, many essential tasks including tying rebar intersections, wire fishing, pipe fitting operations, applying sealants and welding, require the kind of dexterous finger control normally attributed to human hands. When performed manually, these detailed tasks impose significant strain on workers’ wrists and fingertips, reinforcing the potential value of anthropomorphic robotic solutions122,204,205.
Recent advances seek to address these challenges by developing modular, reconfigurable end-effectors, which combine anthropomorphic multi-fingered hands for dexterous tasks with interchangeable tool attachments for heavy-duty operations54,206. Such designs might feature interchangeable “fingers” optimized for high-friction grasps or specialized couplings that replicate the clamping power of pliers and wrenches207. In parallel, a growing emphasis on force-torque sensing at the wrist and fingertips enables real-time adjustment of grip force, reducing the risk of accidental material damage or dropped loads208,209. For instance, when tying rebar intersections, the robot may rely on tactile feedback to sense the tension in the wire; likewise, careful cable routing demands sufficiently low grip force to avoid pinching or severing insulation55,57,210. Another dimension is visual servoing for tasks like pipe fitting, where minute alignment errors can cause leaks or structural weaknesses. By integrating depth imaging and force feedback, the robot can iteratively adjust the pipe’s angle until the desired tolerance is achieved.
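Real-time grip-force adjustment of this kind can be sketched as a proportional servo loop; the linear stiffness model, gain, and target force below are toy values, not measurements from any cited hand.

```python
def adjust_grip(current_force, target_force, grip_cmd, gain=0.01,
                cmd_min=0.0, cmd_max=1.0):
    """One proportional control tick: nudge the grip command toward the
    target contact force, clamped to the actuator's command range."""
    new_cmd = grip_cmd + gain * (target_force - current_force)
    return max(cmd_min, min(cmd_max, new_cmd))

# Toy servo loop: measured force modeled as linear in the command (40 N per unit)
cmd = 0.2
for _ in range(50):
    measured = 40.0 * cmd                     # stand-in for a fingertip sensor
    cmd = adjust_grip(measured, target_force=12.0, grip_cmd=cmd)
```

Under this assumed stiffness, the loop settles at the command producing 12 N of grip, illustrating how tactile feedback lets the hand hold cable insulation firmly without crushing it; a real controller would add derivative damping and slip detection.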
However, most state-of-the-art dexterous hands such as the Allegro, Shadow, and DLR-HIT designs remain mechanically fragile and limited in payload capacity, as they were developed primarily for research in precision manipulation rather than forceful, high-torque tasks (Fig. 7). Their lightweight structures and intricate actuation mechanisms make them unsuitable for repetitive or high-impact operations common in construction environments. This limitation highlights the urgent need for construction-grade end-effectors that blend human-like dexterity with industrial-level durability, possibly through hybrid actuation systems, compliant materials, and protective housing against dust and vibration.
In addition to dexterity, several of the hand designs listed in Table 3 employ actuation strategies that enhance energy efficiency, an increasingly important factor for field operations with limited power access217. For instance, tendon- and cable-driven mechanisms (e.g., DLR Hand II, Schunk SVH) reduce motor count and enable remote actuation, lowering joint-level inertia and overall energy use compared to fully motorized servo configurations. Pneumatic and hybrid systems (e.g., Biomimetic and Shadow Hands) further improve force-to-weight ratios and can store potential energy through compliant materials, allowing energy recycling during grasp release.
Recent research also explores variable stiffness actuators (VSA) and underactuated tendon hands, which dynamically adjust stiffness or share actuators across multiple joints, yielding substantial energy savings for long-duration manipulation tasks. Such designs are particularly relevant in off-grid or partially powered construction environments where humanoid robots must operate for extended periods between battery charges.
On the control side, hybrid position-force strategies and compliance control methods are instrumental for ensuring safe, reliable manipulation in unstructured environments218,219. By blending position-based path planning with real-time force feedback, these algorithms allow the robot to deviate from purely rigid trajectories and “feel out” uncertain tasks. For example, if a drill bit enters a hole at a slightly incorrect angle, the controller can adjust torque and posture to prevent binding. Meanwhile, compliance control can assist with tasks like inserting pre-bent piping into tight wall cavities, accommodating slight geometric variations204. Researchers also experiment with learning-based manipulation, including reinforcement learning or imitation learning frameworks, which can speed the acquisition of new dexterous skills131,220,221. In some prototypes, robots train in physics-rich simulations to master wire handling, rebar tying, or pipe fitting, subsequently refining these skills in real-world conditions.
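Compliance control is often realized as an impedance law, a virtual spring-damper between the desired and actual tool position. A minimal one-axis sketch, with gains chosen purely for illustration:

```python
def impedance_force(x_desired, x_actual, v_actual,
                    stiffness=500.0, damping=40.0):
    """Virtual spring-damper: commanded corrective force (N) for one axis.
    The spring pulls toward the desired position; the damper resists velocity."""
    return stiffness * (x_desired - x_actual) - damping * v_actual

# Drill bit 2 mm off the hole centerline, drifting away at 1 cm/s:
# compliance commands a restoring force back toward the centerline
f = impedance_force(x_desired=0.0, x_actual=0.002, v_actual=0.01)
```

The negative force pushes the bit back toward alignment; lowering the stiffness makes the same controller yield to slight geometric variation, as when seating pre-bent piping into a wall cavity.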
Efforts to improve manipulation robustness also hinge on multi-modal sensor integration, fusing visual, depth, force, and tactile data into cohesive situational awareness222,223 (Fig. 8). Systems that exploit state-of-the-art semantic segmentation can distinguish between a partially obstructed rebar segment and background clutter, while model predictive controllers anticipate how contact forces will evolve if the object shifts unexpectedly. By combining such sensor fusion with advanced planning algorithms, future humanoids may reliably perform delicate tasks currently deemed too repetitive or risky for unassisted human labor224,225. Although notable progress has been made, the complexity and unpredictability of construction sites mean that dexterous, foolproof manipulation, particularly for intricate tasks like wiring, plumbing, and rebar tying, remains an open frontier, motivating deeper research into integrated hardware-software architectures, high-fidelity simulation, and adaptive learning mechanisms.
Multi-modal sensing is used to improve robot dexterity. Image source: Informatics, Cobots and Intelligent Construction Lab, University of Florida.
Transferability and generalizability
A major appeal of humanoid robots in construction lies in their promise of flexible redeployment across diverse tasks and project sites without demanding excessive reprogramming. Yet achieving robust transferability remains exceedingly challenging. Construction projects vary widely in their architectural designs, structural materials, and building codes, while even within a single site, day-to-day changes can drastically alter the robot’s operational domain226,227,228. A robot that learns to carry drywall up a scaffold in one setting may struggle in another environment229, where the scaffold sway dynamics differ or the drywall dimensions deviate from prior assumptions. Without effective knowledge transfer, the time and effort required to recalibrate models and retune controllers for each new situation can easily erode the cost and productivity benefits of a humanoid platform230,231,232,233.
To address these issues, researchers are increasingly exploring domain adaptation and meta-learning techniques that help bridge the gap between a “source domain” (e.g., a training facility or simulation) and a “target domain” (a live construction site). Domain adaptation often employs adversarial methods that encourage feature extractors to learn environment-invariant representations, thus minimizing performance degradation when exposed to novel lighting conditions, partial occlusions, or different building materials234,235,236. Meanwhile, meta-learning or few-shot learning paradigms train the robot to rapidly update its skills with minimal on-site data, reducing the overhead of re-labeled demonstrations or manual controller tweaking237,238,239. In practice, this might take the form of a humanoid robot refining its drilling parameters after observing or mimicking a human worker for just a few minutes rather than requiring a full day of reprogramming.
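Few-shot on-site adaptation can be illustrated, in heavily simplified form, as a handful of gradient steps that fine-tune pretrained parameters on a few target-domain samples. The linear model below is a stand-in for the drilling-parameter example; all values are hypothetical.

```python
import numpy as np

def finetune(theta, X, y, lr=0.1, steps=20):
    """A few gradient-descent steps of least squares on on-site samples,
    starting from parameters pretrained in the source domain."""
    theta = theta.copy()
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ theta - y) / len(y)
        theta -= lr * grad
    return theta

# Pretrained parameters (source domain) adapted with four target-domain samples
theta_pre = np.array([1.0, 0.0])
X_site = np.array([[1.0, 0.5], [1.0, 1.0], [1.0, 1.5], [1.0, 2.0]])
y_site = X_site @ np.array([0.5, 0.8])   # assumed true target-domain mapping
theta_adapted = finetune(theta_pre, X_site, y_site)
```

Twenty gradient steps on four samples substantially shrink the target-domain error, mirroring the idea that minutes of observation, rather than a day of reprogramming, can suffice when the pretrained model starts close to the new task.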
Another promising avenue is the foundation model approach, where large-scale pre-trained networks capture broad sensorimotor priors from massive simulated and real-world datasets. By fine-tuning such a model with a comparatively small domain-specific dataset, robots can adapt to new tasks or layouts swiftly240. As shown in Fig. 9, the proposed foundation model for transferability of robotic manipulation skills integrates perceptual encoding, cross-domain policy adaptation, and low-level control refinement. While the conceptual basis draws from the framework proposed by Fu et al.241, the present model extends it through multimodal learning components and cross-environment generalization mechanisms tailored for construction-oriented manipulation tasks. This approach is bolstered by cloud-based knowledge repositories, wherein multiple deployed robots share training logs and best practices. For example, if a robot successfully masters the insertion of specialized fasteners in a high-rise project, it can upload its policy updates to a central server; a second robot can then download and adapt these updates to a mid-rise residential job242,243. This collective learning accelerates deployment, promotes standardization in skill representation, and distributes best practices widely across geographically dispersed projects.
Foundation model for the transferability of robot manipulation skills. Image source: Informatics, Cobots and Intelligent Construction Lab, University of Florida.
Crucially, the success of these techniques hinges on the availability of diverse, high-quality datasets that capture the breadth of construction scenarios, from extreme weather conditions to partially collapsed or partially built structures244. Building such datasets often requires specialized site mapping, sensor instrumentation, and data annotation processes. The field is consequently placing emphasis on simulation-to-reality (sim2real) protocols, wherein robots train extensively in highly realistic virtual environments, including artificially introduced noise and variations, and then transition to real sites with minimal performance drop245,246. Combined with robust sensor fusion and advanced planning methods, these efforts aim to make humanoid robots not only capable of complex tasks in a single setup but also adaptable enough to be redeployed fluidly across the multi-faceted, ever-changing tapestry of modern construction.
Continual learning
Continual learning is an essential yet challenging aspect of deploying humanoid robots in construction, where the environment and tasks evolve over time. Unlike static factory floors, a construction site transforms daily as structures rise, materials get relocated, and new subcontractors bring different tools and processes. Moreover, local building codes and safety regulations can shift across jurisdictions or even mid-project, necessitating on-the-fly adjustments to the robot’s operational parameters. A single, monolithic model trained prior to deployment is unlikely to remain viable amid these ongoing changes. Instead, the robot must incrementally update its perception and motion policies to cope with new tasks, unfamiliar site layouts, or revised safety constraints without losing the knowledge it has already acquired. Achieving this “continual learning” paradigm is particularly difficult because naive attempts to retrain a model on recent data often lead to “catastrophic forgetting,” where previously learned competencies degrade in the face of new information247,248.
To address this, robotics researchers are developing safe, on-site learning methods that allow incremental adaptation without risking accidents. One technique involves simulation-driven pretraining, wherein the robot refines a baseline policy in a virtual environment that mirrors real-world variability249,250,251,252. Once deployed, the robot continues to gather field data but updates its model in a “shadow mode”, executing new or partially trained policies only in simulation or at low-risk times, such as after work hours or on specially designated test sections of the site253,254. This practice dramatically reduces the chance of failures that could compromise worker safety or project timelines. Another strategy focuses on transfer reinforcement learning255,256,257, which leverages each job site’s feedback (e.g., scaffolding sway, ground irregularities) as a domain adaptation signal, gradually narrowing the gap between simulated assumptions and reality. Importantly, robust fallback policies are retained for critical tasks, ensuring the robot can revert to proven maneuvers if its newly acquired skills falter.
Under the hood, much of continual learning relies on regularization and dynamic scheduling to mitigate catastrophic forgetting while incorporating fresh information. For instance, techniques like L2 regularization (often employed in neural networks) limit how drastically the model’s parameters can deviate from their well-established values258,259. This encourages small, incremental changes aligned with the robot’s past experience rather than radical overhauls that risk erasing learned behaviors. A learning rate scheduler can also be employed: the robot might increase its learning rate only after confirming that certain new data are relevant, then reduce the rate once adaptation is complete260,261. Advanced approaches include elastic weight consolidation (EWC)262, where parameters most critical to existing tasks are protected during retraining, and progressive networks263, which allocate new neural pathways for novel tasks while preserving the old pathways for previously mastered skills. By carefully combining these strategies, humanoid robots can gradually absorb fresh site data and updated building regulations (such as a sudden switch from steel frames to cross-laminated timber) while continuing to perform established tasks effectively.
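The EWC idea can be stated compactly: a quadratic penalty, weighted by each parameter's estimated (Fisher) importance to old tasks, anchors the weights that mattered while letting unimportant ones move freely. A minimal sketch with hypothetical values:

```python
import numpy as np

def ewc_loss(task_loss, theta, theta_old, fisher, lam=100.0):
    """Total loss = new-task loss + EWC penalty anchoring important weights.
    fisher[i] estimates how much old-task performance depends on theta[i]."""
    penalty = 0.5 * lam * np.sum(fisher * (theta - theta_old) ** 2)
    return task_loss + penalty

theta_old = np.array([0.8, -0.2, 1.5])   # weights consolidated after the old task
fisher    = np.array([10.0, 0.01, 5.0])  # per-weight importance estimates
theta_new = np.array([0.9, 0.6, 1.4])    # candidate weights for the new task

loss = ewc_loss(task_loss=0.25, theta=theta_new,
                theta_old=theta_old, fisher=fisher)
```

Note that the second weight moves far (−0.2 to 0.6) yet contributes almost nothing to the penalty because its Fisher importance is low, while small shifts in the two high-importance weights dominate, which is precisely how EWC steers retraining away from catastrophic forgetting.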
Power and payload constraints
The dual demands of hauling hefty payloads while maintaining extended operational durations create a persistent challenge for humanoid robots in construction. Because bipedal walking inherently consumes more energy than rolling or static platforms115,116,264,265, these robots require high-torque actuators that rapidly deplete battery reserves. At the same time, tasks such as lifting concrete blocks or steel beams push designers toward more powerful motors, yet every additional kilogram in mechanical structure and batteries increases overall mass, a vicious cycle of weight versus capability. For instance, a robot strong enough to move large panels might carry heavier battery packs, cutting into its total operation time and limiting how far it can travel before recharging. This trade-off becomes even more pressing in remote or sprawling worksites where easy access to charging stations is not guaranteed.
To mitigate these constraints, researchers and engineers have explored solutions that range from battery swap stations, where robots dock to quickly exchange depleted power units for fresh ones, to partial or full tethered power configurations that sidestep onboard energy storage. In environments with minimal movement needs, tethered approaches can drastically boost uptime; however, cables may impede a humanoid’s ability to climb scaffolding or squeeze into confined spaces. Meanwhile, large-scale projects might adopt a fleet approach266,267, deploying multiple robots in parallel rotations: as soon as one unit’s battery runs low, it is replaced by a freshly charged counterpart, ensuring near-continuous coverage. This strategy allows site managers to keep tasks moving without long robot downtimes, albeit at the expense of purchasing multiple, often costly, humanoid platforms.
Beyond operational planning, emerging innovations in battery technology and alternative power sources hold promise for alleviating the power-versus-payload dilemma. Next-generation lithium-sulfur or solid-state batteries may offer superior energy density and thermal stability, reducing the weight penalty for high-capacity storage268,269,270. Other initiatives investigate fuel cells271,272,273,274, which can yield longer runtimes and faster refueling, or supercapacitors that deliver bursts of high current for transient heavy lifts. Researchers also examine energy-aware motion planning algorithms that optimize each step’s torque usage or exploit momentum to reduce power draw275,276,277. Although no single technology provides a universal fix, a judicious mix of advanced batteries, clever mechanical design, and operational strategies can expand humanoid robots’ effective working window and their utility on active construction sites.
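Energy-aware planning at the route level can be sketched as constrained selection: choose the least-energy feasible route, or signal for a battery swap when nothing fits the remaining budget. The route names and energy figures below are illustrative only.

```python
def pick_route(routes, battery_wh):
    """Choose the least-energy route within the remaining battery budget."""
    feasible = [r for r in routes if r["energy_wh"] <= battery_wh]
    if not feasible:
        return None  # no route fits: trigger a battery swap or recharge
    return min(feasible, key=lambda r: r["energy_wh"])

routes = [
    {"name": "stairs", "energy_wh": 120.0},  # short but torque-hungry climbing
    {"name": "ramp",   "energy_wh": 90.0},   # longer, but cheap steady walking
    {"name": "detour", "energy_wh": 150.0},
]
best = pick_route(routes, battery_wh=100.0)
```

With 100 Wh remaining, only the ramp is affordable, so the planner trades distance for energy; a fleet scheduler could use the `None` case to rotate in a freshly charged unit.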
Human–robot interaction
Integrating humanoid robots into construction sites demands more than just advanced hardware and software; it requires seamless interaction with human workers who navigate the same dynamic spaces (Fig. 10). One of the most significant challenges lies in ensuring that robots and humans share a situational awareness of ongoing tasks, potential hazards, and each other’s movement37,137,155,278,279. Because a construction site may involve multiple trades working concurrently, the robot needs real-time updates on who is welding in one area, who is carrying materials in another, and how structural features are changing with each completed phase. Without this shared cognitive map, misunderstandings can arise: a worker might inadvertently walk into the robot’s path, or the robot might begin a task in a zone where critical safety checks have not yet been performed.
Human–robot teaming will be a main form of automation in the future: (a) firefighters and Cassie Blue walking through fire. Image source: Michigan Robotics: Dynamic Legged Locomotion Lab, used with permission; (b) 4NE-1 from NEURA Robotics, shown in a conceptual promotional image illustrating potential industrial applications. Image source280,281; (c) Atlas HD hands a tool bag to a human worker. Image source4, used with permission.
Addressing these issues involves developing robust communication channels and intuitive interfaces for collaboration. Robots can integrate with Building Information Modeling (BIM) data and site management software to stay informed of scheduling changes or hazard alerts, while workers might receive visual or audible cues when a robot is approaching200,282,283,284. For instance, wearable devices or tablet interfaces can display the robot’s current status or path, helping humans anticipate movements and coordinate tasks. In this context, human acceptance of humanoid robots depends less on affective or emotional engagement and more on trust, predictability, safety, and perceived usefulness. Studies of technology adoption have shown that frameworks such as the Technology Acceptance Model (TAM3)285 and the Unified Theory of Acceptance and Use of Technology (UTAUT)286 provide better lenses for understanding how construction professionals evaluate robotic systems. Key factors such as performance expectancy, effort expectancy, and facilitating conditions directly influence whether workers view the robot as a reliable teammate or as a disruptive element on-site. Accordingly, ensuring transparent robot behavior, legible motion patterns, and reliable safety boundaries is essential to fostering trust and reducing anxiety during collaboration287,288,289,290. Gesture recognition and voice command systems can also provide user-friendly methods for directing the robot33,60,291, especially when workers’ hands are occupied with tools or when rapid, situationally adaptive instructions are needed.
Underlying these interfaces is a framework of safety protocols designed to prevent accidents and collisions. Many humanoid robots incorporate advanced sensor arrays (e.g., LiDAR, depth cameras, ultrasonic sensors) to detect nearby personnel and obstacles, triggering real-time collision avoidance routines if something encroaches on the robot’s workspace. In congested areas, robots may slow or reduce their working radius until humans clear the zone107. Some companies are even experimenting with cognitive load sensing that gauges how intensely a worker is focused on a particular task, adjusting the robot’s behavior accordingly to minimize distraction or surprise. Regulatory bodies and standards organizations, meanwhile, are starting to formulate guidelines on permissible robot interactions, protective equipment, and co-working protocols. Ultimately, as construction humanoids become more autonomous, the depth and sophistication of their human–robot interaction capabilities will determine whether they coexist harmoniously with workers or remain relegated to strictly controlled niches.
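The slow-down behavior described above is commonly implemented as distance-based speed scaling, in the spirit of speed-and-separation monitoring; the thresholds below are illustrative, not regulatory values.

```python
def speed_scale(dist_to_worker_m, stop_dist=1.0, full_speed_dist=4.0):
    """Fraction of nominal speed as a function of the nearest worker's
    distance: full stop inside stop_dist, linear ramp up to full_speed_dist."""
    if dist_to_worker_m <= stop_dist:
        return 0.0
    if dist_to_worker_m >= full_speed_dist:
        return 1.0
    return (dist_to_worker_m - stop_dist) / (full_speed_dist - stop_dist)
```

Fed by the LiDAR and depth-camera arrays already mentioned, such a function lets the robot degrade smoothly in congested zones instead of hard-stopping only at the last moment, which also makes its behavior more legible and predictable to nearby workers.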
Beyond technology: socioeconomic and ethical implications
Understanding the non-technical aspects of adopting humanoid robots in construction is as crucial as the technical perspectives to ensure successful integration. Beyond engineering challenges, the broader socioeconomic and ethical dimensions shape the feasibility, acceptance, and long-term sustainability of these technologies within the industry. This section considers several key aspects: the workforce implications, where we explore potential job displacement versus upskilling opportunities; safety issues, weighing the reduction of human risk against new risks introduced by robots; economic feasibility, analyzing the balance between initial investments and anticipated gains; public perception and acceptance, which determine how these technologies are received and integrated by workers and communities; and ethical frameworks and policy development, crucial for addressing liability, privacy, and data security concerns. Together, these considerations offer a holistic view of the complex interplay between technological advancements and their societal impacts in the construction sector.
Workforce implications
Humanoid robots in construction promise both increased productivity and a safer working environment, but they also raise concerns about how existing jobs might be affected. A major worry is that widespread automation could lead to job displacement, particularly for tasks requiring repetitive or physically taxing manual labor292,293,294. Given the global shortage of skilled construction labor, some industry stakeholders argue that robots could fill the gap rather than displace existing workers. Examples from industries like automotive manufacturing demonstrate how automation can coexist with human labor by creating new roles in robot operation, maintenance, and system integration, highlighting opportunities for upskilling rather than displacement. Nonetheless, apprehension persists: Will introducing robots for tasks like material handling, assembly, or site inspection diminish opportunities for entry-level laborers? And how will experienced tradespeople adjust if robots begin taking on highly specialized activities? These questions underscore the delicate balance between tapping robotics’ potential and safeguarding the livelihoods of the human workforce.
To address such concerns, many experts point to historical precedents in other industries295,296,297,298. The automotive sector, for example, saw widespread fear of worker displacement when assembly-line robots became commonplace299,300. Yet automation ultimately catalyzed the creation of new roles such as robot programmers, maintenance technicians, and systems integrators, alongside upskilling or reskilling initiatives aimed at the existing workforce128,301. Construction can follow suit by framing robots as enablers that augment human capabilities rather than pure labor replacements. This might include equipping experienced tradespeople with the knowledge and certifications needed to supervise robotic platforms, troubleshoot errors, or calibrate specialized tools. Government agencies and trade unions can collaborate to develop short-course training, online modules, or apprenticeship-style programs where workers learn basic robotics literacy, safety protocols, and multi-robot fleet management. Such initiatives would not only preserve jobs but potentially elevate the sector’s skill floor, fostering a more technologically adept labor force.
In practical terms, construction workers might gain new pathways to transition into “robot managers,” monitoring multiple robots and intervening when anomalies occur. Some might become certified robotic tool specialists, advising on the best end-effectors for given tasks. Vocational institutions could offer micro-credentials recognized by both construction firms and robotics manufacturers, ensuring standardized skill sets across regions. Over time, a carefully orchestrated approach based on stakeholder dialogue, well-funded training programs, and accessible upskilling resources can help shift the narrative from job loss to workforce enhancement. By embracing robots as collaborative partners, the industry can expand its talent pipeline, retain seasoned experts who adopt new technical competencies, and draw younger workers eager to work with cutting-edge technologies.
Safety
Humanoid robots have the potential to dramatically reduce the most dangerous aspects of construction labor. Whether it is lifting heavy materials, operating in cramped or poorly ventilated conditions, or scaling scaffolds at precarious heights, robots can take on tasks associated with high injury rates and long-term health problems. For instance, back injuries from continuous heavy lifting are a prominent issue in construction302,303, and robotic assistance could alleviate such chronic strain. Similarly, in demolition or hazardous material removal (e.g., asbestos), sending a robot in place of a human significantly lowers the risk of exposure. By leveraging advanced sensors, real-time feedback loops, and precise motor control, these machines can perform intricate tasks without succumbing to fatigue or momentary lapses in attention, i.e., factors that account for many on-site accidents.
However, the introduction of humanoid robots also brings novel risks. Collisions are a prime concern, particularly in congested sites where visibility can be poor and workers are frequently crossing paths96,105,108. Software vulnerabilities ranging from bugs that cause erratic movements to hacking attempts that disrupt coordination also pose another layer of risk. If a critical control algorithm malfunctions during a lift or scaffold climb, the consequences could be catastrophic for both the robot and surrounding personnel. Thus, while the robotic hardware might limit human exposure to direct hazards, its misuse or malfunction can create new dangers. Confronting these realities calls for robust safety standards tailored specifically to humanoid forms operating in dynamic, unstructured settings.
One path forward lies in adapting and extending existing industrial safety certifications (e.g., ISO 10218304, ANSI/RIA standards305) to the unique context of construction humanoids. Unlike stationary robotic arms in manufacturing cells, bipedal robots have to navigate multi-level worksites, requiring continuous collision avoidance, fallback procedures, and foolproof emergency stop mechanisms138,139,306. Standards organizations, in tandem with manufacturers and government regulators, could develop performance benchmarks that measure a robot’s stability under unexpected impacts, sensor redundancy in harsh climates, and resilience against cyber intrusions. Additionally, well-documented safety protocols should be as integral to a project as standard engineering specifications to establish how humans and robots share corridors, signal who has right-of-way, and under what conditions a robot must yield. Implementing such guidelines alongside worker education, thorough risk assessments, and frequent on-site drills can ensure that the safety advantages of humanoid robots are realized without introducing unmanageable new threats.
Economic feasibility
The economic feasibility of deploying humanoid robots in construction hinges on balancing the high initial capital expenditure (sensors, actuators, and advanced control hardware) against prospective labor savings and productivity gains over time. Even before considering large-scale production, individual components can be costly: high-torque motors and joint mechanisms must be durable enough for rugged environments, while LiDAR and depth cameras often dominate sensor budgets307. For smaller contractors working on low-margin projects, such upfront costs can be prohibitive, reinforcing the perception that robotics adoption may only be viable for major construction firms. Moreover, ongoing variable costs like battery replacements, software updates, and additional insurance premiums introduce complexities into long-term financial planning.
In response, industry stakeholders should undertake cost–benefit analyses that weigh capital investment and maintenance expenses against anticipated operational savings. For large-scale projects with substantial labor shortages or extensive repetitive tasks, a well-deployed humanoid robot fleet can cut down on delays, reduce rework, and improve overall site efficiency, thereby recouping the initial purchase costs more quickly. Conversely, in smaller construction jobs with fewer repetitive tasks, the return on investment may be less straightforward, making robotic adoption a tougher sell. In some regions, public funding or R&D subsidies are beginning to play a role; government grants or innovation tax credits can offset part of the financial burden, especially when pilot programs yield safety or environmental benefits. Over time, as the scale of production ramps up and cost-reduction strategies (e.g., standardized hardware modules) become more widespread, the economic threshold for integrating humanoid robotics will likely lower further.
Another angle to consider is the flexibility and reusability of a humanoid robot across multiple sites. An expensive platform might be justified if it can be transported between projects and configured for a range of tasks, thereby amortizing costs over numerous job sites. Contractors might also pursue leasing or sharing models, where robots are rented during peak demand periods and returned afterward, offloading maintenance responsibilities to the provider. Ultimately, robust financial modeling requires a nuanced look at project scale, location, labor availability, and specific construction tasks. By assembling reliable data from existing pilot deployments and early adopters, the industry can refine its ROI projections and move toward more data-driven decision-making on robotic investments.
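The payback reasoning above can be captured in a minimal cost model. All figures, field names, and the linear-savings assumption below are illustrative placeholders, not vendor data or validated industry estimates:

```python
from dataclasses import dataclass

@dataclass
class RobotDeployment:
    """Hypothetical cost model for a humanoid robot deployment."""
    capital_cost: float          # upfront purchase price (USD)
    annual_upkeep: float         # batteries, software updates, insurance
    annual_labor_savings: float  # avoided labor, rework, and delay costs per site
    sites_per_year: int = 1      # reuse across projects amortizes capital

    def payback_years(self) -> float:
        """Years until cumulative net savings cover the capital cost."""
        net_annual = (self.annual_labor_savings * self.sites_per_year
                      - self.annual_upkeep)
        if net_annual <= 0:
            return float("inf")  # never pays back under these assumptions
        return self.capital_cost / net_annual

# The same platform, reused on three sites per year, pays back far sooner.
single = RobotDeployment(250_000, 40_000, 90_000, sites_per_year=1)
shared = RobotDeployment(250_000, 40_000, 90_000, sites_per_year=3)
print(round(single.payback_years(), 1))  # 5.0
print(round(shared.payback_years(), 1))  # 1.1
```

Even this toy model makes the reusability argument concrete: amortizing a fixed capital cost over multiple sites dominates the payback period, which is why leasing and fleet-sharing arrangements can change the economics for smaller contractors.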
Public perception and acceptance
The proliferation of robots on construction sites invariably triggers apprehension about job security and broader societal impacts. Much like the debates surrounding self-driving cars and warehouse robotics308,309,310, concerns arise that humanoid robots might replace human workers, eroding the social fabric of manual trades. This fear is compounded by the fact that humanoid form factors, by design, evoke comparisons to human capabilities, potentially heightening anxiety around displacement. Beyond job fears, there are emotional and psychological barriers: encountering a life-sized, moving machine in a hard hat can be off-putting, especially for workers unaccustomed to robotic coworkers.
Addressing these concerns demands community engagement and transparent communication. One effective strategy involves demonstration projects that showcase not only the robot’s capabilities but also how it collaborates safely and efficiently with human teams. Public open houses at large construction sites, livestreamed tests on social media, and candid Q&A sessions can demystify the technology, offering workers and the local community a chance to see the robots in action. Lessons from the self-driving car industry underscore the importance of staged rollouts, starting with lower-risk tasks and gradually increasing autonomy as public trust builds. Surveys and case studies indicate that acceptance grows when people can observe tangible benefits, such as fewer workplace injuries and quicker project completions109,311,312, and when robots are seen to assist or complement rather than replace human labor.
From an implementation standpoint, forging partnerships with trade unions or vocational schools can further legitimize robotics in the construction realm. Early involvement of key stakeholders helps articulate realistic job augmentation scenarios and fosters credibility in how these technologies are deployed. As the novelty wears off and robots reliably perform tasks that are either ordinary or hazardous, perceptions typically shift from fear to pragmatism, i.e., viewing humanoid machines as advanced tools that, like any other equipment, require proper training, oversight, and respect for operational limits.
Ethical frameworks and policy development
Alongside economic and social considerations, the ethics of humanoid robotics in construction come to the fore in scenarios involving accidents or data privacy287,313. Liability represents a particularly thorny issue: if a robot malfunctions or collides with infrastructure, does responsibility fall on the operator, the manufacturer, the site manager, or the software developer? The legal landscape grows even more complex if software errors or hacked systems cause project delays or safety incidents. Moreover, many humanoid robots rely on extensive sensing capabilities (e.g., cameras, microphones, thermal imagers) that can inadvertently capture private conversations or sensitive site information314,315. These surveillance potentials demand stringent data management and consent protocols, lest they infringe on worker privacy or corporate intellectual property.
To navigate such complexities, policymakers and industry bodies should call for formalized standards akin to established industrial robotics safety guidelines but adapted for mobile, humanoid systems in unstructured environments. This effort should include drafting procedures for verifying real-time sensor reliability, implementing secure communication channels to mitigate hacking risks, and setting performance benchmarks for safe, autonomous decision-making. Jurisdictions may also incorporate “ethical audits” into project approval processes, requiring documentation of how the robot’s data logs are stored, how collision avoidance has been tested, and who is accountable for system upgrades. Additionally, localized variations in labor laws and safety regulations mean that solutions cannot be simply copy-pasted from one market to another; the policy framework must be malleable enough to account for cultural, legal, and infrastructural differences worldwide.
Looking ahead, strategic policy measures could expedite responsible adoption by offering liability protections when robots are used within strict operational boundaries or by granting tax incentives for companies that invest in robust safety systems. Governments might also introduce licensing schemes to certify that both robots and their operators meet core competency standards, analogous to drivers and vehicles in the automotive sector104,316. Through a combination of clear guidelines, enforceable regulations, and cross-industry collaboration, ethical frameworks can foster an environment in which humanoid robots bring transformative benefits to construction sites without undermining worker rights, public safety, or personal privacy.
Proposed future research directions
Based on the preceding discussion of the challenges facing humanoid robots in construction, we propose a three-stage roadmap for the research community and the industry to realize a sustainable humanoid robot-centered ecosystem in the next decade (Fig. 11).
The proposed 10-year roadmap for construction humanoid robots.
Short-term milestone (< 3 years)
In the immediate future, researchers and industry stakeholders should focus on refining key perception and locomotion components that underpin humanoid robot performance in construction. Specifically, enhanced SLAM and sensor fusion methods need to be adapted for dynamic, cluttered environments, where partially built structures and moving equipment can foil conventional algorithms. Achieving “long and deep perception” requires integrating predictive modeling with real-time sensor fusion to anticipate dynamic changes in site layouts caused by shifting materials or machinery movement throughout the workday. Additionally, more resilient outlier rejection methods and real-time calibration are critical, as construction sites generate high levels of dust, glare, and occlusions.
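As a concrete illustration of such outlier rejection, a Kalman-style innovation (Mahalanobis) gate can discard measurements that disagree with the predicted static map, e.g., LiDAR returns bouncing off moving equipment, before they corrupt a SLAM update. The noise values and chi-square threshold below are illustrative assumptions, not tuned parameters:

```python
import numpy as np

def gate_measurements(predicted, cov, measurements, meas_var, gate=9.0):
    """Keep only measurements whose squared Mahalanobis distance from the
    predicted 2D landmark position falls within the chi-square gate.
    Returns from moving machinery typically land far outside the gate."""
    S = cov + meas_var * np.eye(2)       # innovation covariance
    S_inv = np.linalg.inv(S)
    kept = []
    for z in measurements:
        v = z - predicted                # innovation (residual)
        d2 = float(v @ S_inv @ v)        # squared Mahalanobis distance
        if d2 <= gate:
            kept.append(z)
    return kept

pred = np.array([2.0, 1.0])              # landmark predicted from the map
cov = 0.05 * np.eye(2)                   # predicted-position covariance
zs = [np.array([2.1, 0.9]),              # consistent with the static map
      np.array([5.0, 4.0])]              # likely a moving excavator: rejected
print(len(gate_measurements(pred, cov, zs, meas_var=0.05)))  # 1
```

In a full SLAM pipeline this gate would sit between data association and the filter update, and the threshold would be chosen from the chi-square distribution for the measurement dimension.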
On the locomotion front, stable walking on uneven terrain remains a top priority. Even slight miscalculations on mud, loose gravel, or debris can result in falls that jeopardize both the robot and surrounding personnel. Research efforts should thus focus on advanced control schemes, such as model predictive control for footstep planning and reflex loops for rapid disturbance rejection, that adapt to changing friction conditions in real time. Field-testing these locomotion solutions through pilot programs or smaller-scale testbeds will provide valuable feedback on their robustness. Collaborations with sensor manufacturers could accelerate innovation by integrating specialized LiDARs or depth cameras designed to cope with construction dust and harsh lighting. Likewise, partnering with AI laboratories can help expedite data-driven perception pipelines, generating algorithms optimized for real-time SLAM and multi-modal sensor fusion specifically tuned to the demands of large-scale building sites.
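One widely used building block for this kind of disturbance rejection is the capture point of the linear inverted pendulum model: the location where the robot must step to bring itself to rest after a push or slip. The sketch below, with a hypothetical blend parameter standing in for a full footstep optimizer, shows how a planned step might be shifted toward the capture point when a disturbance is sensed:

```python
import math

def capture_point(com_pos, com_vel, com_height, g=9.81):
    """Instantaneous capture point of the linear inverted pendulum:
    cp = x + v / omega, with omega = sqrt(g / z_com)."""
    omega = math.sqrt(g / com_height)
    return com_pos + com_vel / omega

def adjust_footstep(nominal_step, com_pos, com_vel, com_height, blend=0.5):
    """Reflex-style adjustment: blend the planned footstep toward the
    capture point when a slip on gravel or mud perturbs the CoM velocity.
    The blend weight is an illustrative tuning parameter, not a tuned gain."""
    cp = capture_point(com_pos, com_vel, com_height)
    return (1 - blend) * nominal_step + blend * cp

# An unexpected forward CoM velocity pushes the commanded step forward.
undisturbed = adjust_footstep(0.3, com_pos=0.0, com_vel=0.0, com_height=0.9)
slipping    = adjust_footstep(0.3, com_pos=0.0, com_vel=0.5, com_height=0.9)
print(slipping > undisturbed)  # True
```

A model predictive controller would solve for several upcoming footsteps at once under kinematic and friction constraints; this one-step blend is only the simplest reflex-layer version of the idea.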
Short-term milestones could also include systematic benchmarking of prototypes in controlled but realistic mock-up sites. These environments would mimic a subset of construction hazards, such as partial scaffolding, busy foot traffic, and temporary obstacles, allowing researchers to evaluate whether their perception and locomotion modules maintain high safety and reliability metrics. Ultimately, these smaller-scale tests will inform the broader community of best practices for sensor placement, fall recovery strategies, and specialized hardware configurations needed to achieve baseline stability and perception in the field.
Mid-term milestone (3–5 years)
Over the mid-term horizon, the primary goal expands to improving manipulation and dexterity, enabling humanoid robots to tackle a broader range of construction tasks. While basic manipulation might be demonstrated in the short term, advanced capabilities, such as tying rebar intersections, fitting pipes with sealants, or installing delicate electrical components, require more sophisticated end-effectors and force-control algorithms. Designs that incorporate tactile sensors and interchangeable tool interfaces can help robots switch between precision tasks and heavier operations without extensive reconfiguration. By leveraging real-time learning methods, the robot can adapt its grip strength, insertion angle, and motion profile based on feedback from the environment, reducing the likelihood of damage to materials or tools.
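The grip-strength adaptation described above can be illustrated with a minimal slip-reactive feedback loop: tactile sensors report micro-slip, and the controller raises grip force in small steps until slip stops, capped to avoid crushing the workpiece. The gains, cap, and update rule are illustrative assumptions rather than a validated controller:

```python
def adapt_grip(force, slip_detected, gain=1.2, step=2.0, f_max=80.0):
    """One control cycle of a slip-reactive grip controller (forces in N).
    On detected slip, scale the commanded force up and add a fixed step,
    saturating at f_max; otherwise hold the current stable grasp force."""
    if slip_detected:
        return min(force * gain + step, f_max)
    return force

# Three tactile readings: slip, slip, then a stable grasp.
f = 10.0
for slipping in [True, True, False]:
    f = adapt_grip(f, slipping)
# f rises from 10.0 N to 14.0 N to 18.8 N, then holds at 18.8 N.
```

A real implementation would estimate slip from high-bandwidth tactile arrays and couple force adjustment with trajectory replanning, but the saturating feedback structure is the same.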
During this phase, large-scale data collection from the first 3 years of testing will be critical. Robust datasets capturing varied construction scenarios (e.g., different materials, weather conditions, and site layouts) can inform more accurate AI-driven manipulation strategies. These data resources will aid in building “manipulation intelligence,” where the robot’s end-effectors recognize subtle tactile cues or partial occlusions, automatically adjusting grip force or trajectory mid-task. Prototypes that combine advanced computer vision (e.g., semantic segmentation of scaffolding elements) with responsive force control can serve as a benchmark for dexterous humanoid operation in the construction domain.
Additionally, the integration of standardized tool interfaces such as quick-change systems is a key research focus. By developing universal connectors, the robot can swap out end-effectors for specialized tasks (e.g., drilling, welding, or fastening) thus broadening its functional range. This modularity reduces the downtime associated with manual adjustments and eliminates the need for multiple specialized robots on a single site. Pilot-scale projects might explore multi-robot teams, where one bot loads the correct tool attachment while another performs the task, showcasing coordinated dexterity and paving the way for more efficient on-site workflows.
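On the software side, a quick-change interface can be sketched as a registry that maps tasks to interchangeable end-effectors behind one common contract, so the task planner swaps tools without robot-specific reconfiguration. The class, method names, and string-returning actions below are hypothetical stand-ins for real tool drivers:

```python
from typing import Callable, Dict

class ToolBay:
    """Hypothetical registry for a standardized quick-change interface:
    every end-effector exposes the same call signature, so swapping tools
    is a lookup rather than a reprogramming effort."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable[[], str]] = {}

    def register(self, name: str, action: Callable[[], str]) -> None:
        """Attach an end-effector driver under a task name."""
        self._tools[name] = action

    def swap_and_run(self, name: str) -> str:
        """Mount the named end-effector and execute its task."""
        if name not in self._tools:
            raise KeyError(f"no end-effector registered for task '{name}'")
        return self._tools[name]()

bay = ToolBay()
bay.register("drill", lambda: "drilling anchor holes")
bay.register("weld", lambda: "tack-welding bracket")
print(bay.swap_and_run("drill"))  # drilling anchor holes
```

The design point is that new tools extend the registry without touching planner code, mirroring how a universal mechanical connector lets one robot body serve many trades.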
Long-term milestone (5–10 years)
Looking 5–10 years into the future, the aspiration is to establish a methodology for generalizability that allows humanoid robots to seamlessly adapt to new construction sites and tasks. Current systems often require extensive reprogramming or retraining whenever the robot is moved from one context to another. By contrast, scalable AI models, possibly leveraging deep reinforcement learning, meta-learning, and advanced domain adaptation techniques, could enable a true “plug-and-play” mode of operation. In this scenario, a robot arriving at an unfamiliar site would download relevant structural data, calibrate its perception pipelines to local lighting and dust conditions, and begin work with minimal human intervention. This level of flexibility would not only cut deployment costs but also unleash the full potential of humanoids as generalist machines capable of bridging the myriad tasks encountered in construction.
Another significant thrust at this stage involves deepening human–robot interaction (HRI) to make collaboration between crews and robots more intuitive and fluid. Developments might include wearable interfaces that provide real-time feedback on the robot’s intentions or gestures, enabling workers to coordinate tasks without halting their own progress. Advanced natural language processing could allow voice commands and dialogues, supporting on-the-fly revisions to work orders or quick clarifications about site conditions. Critically, as robots progress toward full or partial autonomy, policies and practices must evolve to ensure smooth role delineation and safe co-working arrangements. Future cross-industry AI models may share knowledge from other automation sectors (logistics, healthcare, manufacturing) thus enriching construction robots’ skill sets and accelerating iterative improvements.
Continuous efforts
The introduction of humanoid robots into the construction industry is not a one-off technical feat; rather, it necessitates an ongoing process of refinement that accounts for safety, trust, ethical considerations, workforce dynamics, and emerging regulatory mandates. As new technologies enable more sophisticated autonomy or dexterity, the guidelines that govern robot conduct, oversight requirements, and workforce interactions must similarly evolve. This iterative process takes on added significance given the inherent risks of large, unstructured construction sites. Every new machine-learning model or manipulator design might introduce novel failure modes or safety risks, necessitating thorough testing and updated protocols. Even the workforce’s perception of robots can shift over time, especially as workers gain (or lose) trust in robotic coworkers when unforeseen incidents arise. Therefore, continuous research must remain a staple in construction robotics, ensuring that policy frameworks, user training, and risk assessments are never static or outdated.
One cornerstone of this ongoing effort is the development of comprehensive risk assessment and compliance mechanisms. This entails regularly revisiting safety standards, refining emergency procedures, and adjusting co-working guidelines as technology matures. For example, an improved locomotion algorithm that allows the robot to climb scaffolding more effectively will need new hazard analyses for scenarios such as partial structure collapses or worker-robot interference at height. Additionally, as construction sites incorporate more digital infrastructure, such as shared BIM systems or site-wide sensor networks, the interplay between human decision-making and robotic autonomy can become more complex. By incorporating iterative feedback from workers, project managers, and regulatory bodies, each incremental technological advance can be responsibly integrated into day-to-day construction routines, minimizing surprises and maintaining stakeholder confidence.
Moreover, it is crucial to treat ethical and workforce considerations as dynamically evolving. Public attitudes and regulatory stances toward data privacy, liability, and labor rights may shift in response to high-profile deployments or accidents. Thus, continuous research and pilot programs should not only monitor technical performance but also track worker acceptance, psychological comfort, and skill adaptation over time. In this sense, every new deployment becomes a learning opportunity, generating empirical evidence that informs best practices and revises codes of conduct. Whether it is reevaluating how a robot shares workspace with electricians or recalibrating proximity sensors to account for construction dust, continuous research undergirds a living set of standards that keep pace with both technological breakthroughs and real-world lessons learned.
Need for a benchmarking system
Given the expanding interest in humanoid robots for construction, a coherent benchmarking system is essential to assess their viability across different tasks and project scales. Construction sites, unlike uniform factory floors, host diverse activities, such as carrying heavy loads, installing fixtures, and operating under inclement weather, that stress different aspects of robotic performance. Metrics such as speed of task completion, error rate, stability, energy efficiency, and return on investment (ROI) would offer a robust way to compare one platform against another and determine how well a particular robot meets the dynamic needs of varying projects. To ensure widespread adoption, benchmarks should also include metrics for human–robot collaboration efficiency and safety compliance under real-world construction conditions. By establishing clear benchmarks, stakeholders can make data-driven decisions about whether a given humanoid system is suitable for bricklaying, scaffolding assembly, or precision wiring, for example.
Importantly, these metrics need to be harmonized into standards or protocols specifically designed for construction humanoid robots, addressing performance benchmarks (e.g., load capacity, step height adaptability), safety requirements (e.g., collision avoidance efficacy), and interoperability (e.g., standardized tool interfaces, data exchange formats). Such frameworks might draw inspiration from well-known robotics competitions like the DARPA Robotics Challenge, which spurred advances in locomotion and disaster response. However, a construction-focused variant would reflect the industry’s unique demands, evaluating how quickly a robot can adapt to partial builds or maintain balance amid strong winds and dust storms. These structured assessments would help focus future R&D investment toward the most pressing real-world challenges.
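One minimal way to fold such metrics into a single comparable figure is a weighted aggregate of normalized scores. The metric names, normalization convention (0 to 1, higher is better), and weights below are illustrative assumptions, not a proposed standard:

```python
def benchmark_score(metrics: dict, weights: dict) -> float:
    """Weighted aggregate of normalized benchmark metrics (0-1 scale,
    higher is better) for comparing humanoid platforms on a task class."""
    if set(metrics) != set(weights):
        raise ValueError("metrics and weights must cover the same keys")
    total = sum(weights.values())
    return sum(metrics[k] * weights[k] for k in metrics) / total

# Hypothetical scores for one platform on a scaffolding-assembly task.
score = benchmark_score(
    {"task_speed": 0.7, "error_rate_inv": 0.9, "stability": 0.8,
     "energy_efficiency": 0.6, "hrc_safety": 0.95},
    {"task_speed": 2, "error_rate_inv": 3, "stability": 3,
     "energy_efficiency": 1, "hrc_safety": 3},
)
print(round(score, 3))  # 0.829
```

An actual certification scheme would fix the metric definitions, measurement procedures, and weights per task class, so that two laboratories testing the same robot report comparable scores.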
Achieving an authoritative benchmarking system calls for collaboration between academia, industry, and government bodies. University researchers can devise precise test protocols and metrics grounded in scientific rigor, while manufacturers and construction companies provide real-world sites and operational expertise. Regulators could then incorporate the findings into an official certification or labeling system, analogous to energy efficiency ratings or vehicle safety standards. This unified approach would accelerate technology transfer: as soon as a robot meets certain performance thresholds, project managers could confidently deploy it for specific tasks without the uncertainty of untested or poorly documented capabilities. With effective, transparent benchmarking in place, the construction community can more rapidly isolate the most impactful breakthroughs, giving impetus to a new era of capable, reliable humanoid robotics.
Conclusions
Humanoid robots have the potential to fundamentally reshape how buildings are constructed, maintained, and even renovated, presenting profound opportunities across the spectrum of construction activities. From handling repetitive and strenuous tasks to performing intricate wiring or hazardous demolition, these robots promise to alleviate labor shortages, enhance safety, and boost productivity in an industry that has long lagged other sectors in efficiency gains. Yet, as this paper has highlighted, these benefits hinge on addressing a broad range of technical, socioeconomic, and ethical challenges. Substantial progress must be made in perception, locomotion, dexterous manipulation, and continual learning to contend with the relentlessly shifting and cluttered nature of construction sites. Equally pressing are workforce implications, regulatory frameworks, and public perception, with each requiring thoughtful, ongoing dialogue among all stakeholders.
Looking ahead to the next decade, humanoid robots could evolve into universally recognized collaborators on construction sites, working alongside skilled tradespeople rather than replacing them. When responsibly integrated, humanoid robots can revolutionize labor-intensive processes by reducing errors, enhancing worker safety through automation of hazardous tasks, and improving overall project timelines through adaptive capabilities. In parallel, advancements in battery technology, AI-driven autonomy, and standardized benchmarking systems will pave a smoother path for large-scale adoption. By building stakeholder trust through consistent safety records, transparent communication, and demonstrable economic advantages, the construction industry can collectively embrace a more efficient, sustainable, and innovative future.
Realizing this vision calls for active collaboration among roboticists, construction managers, regulators, ethicists, and policymakers. Each group holds a vital piece of the puzzle, from designing robust mechatronic systems and shaping inclusive workforce policies to drafting guidelines that ensure data privacy and worker safety. Strategic partnerships across academia, industry, and policymakers will be instrumental in scaling pilot deployments into standardized solutions while addressing ethical considerations and workforce integration challenges. Through conscientious, iterative development, humanoid robots can ultimately stand as catalysts for a new era of resilient, high-quality construction, an era marked by safer job sites, improved project timelines, and greater adaptability to the rapidly evolving demands of the built environment.
Data availability
All data supporting the findings of this study are included in the article.
References
Cao, L. AI robots and humanoid AI: Review, perspectives and directions. Preprint at https://doi.org/10.48550/arXiv.2405.15775 (2024).
Gu, Z. et al. Humanoid locomotion and manipulation: Current progress and challenges in control, planning, and learning. Preprint at https://doi.org/10.48550/arXiv.2501.02116 (2025).
Honda Robotics. Utilize Technology to Help People (2025).
Boston Dynamics. Atlas® and Beyond: The World’s Most Dynamic Robots (2025).
Tesla. AI & Robotics: Tesla Optimus (2025).
Morio. Tokyo Motor Show 2011: ASIMO (version 2011). Cropped version, licensed under CC BY-SA (Wikimedia Commons, 2011).
Agility Robotics. A full-body image of the Digit humanoid robot (Agility Robotics, 2023).
Boston Dynamics. A figure of the Atlas robot (Boston Dynamics, 2024).
Deep Robotics (2024).
Tesla. A figure of the Tesla Optimus Gen-2 humanoid robot, as showcased in Tesla’s February 2024 release (Tesla, 2024).
Assafi, M. N., Hoque, M. I. & Hossain, M. M. Investigating the causes of construction delay on the perspective of organization-sectors involved in the construction industry of Bangladesh. Int. J. Build. Pathol. Adapt. 42, 788–817. https://doi.org/10.1108/IJBPA-10-2021-0137 (2022).
Hasan, A., Baroudi, B., Elmualim, A. & Rameezdeen, R. Factors affecting construction productivity: A 30 year systematic review. Eng. Constr. Arch. Manag. 25, 916–937. https://doi.org/10.1108/ECAM-02-2017-0035 (2018).
Alotaibi, N. O. L. Evaluating Factors Affecting Construction Labor Productivity in Mass Timber Building Projects. (2024).
De’Arman, K. J., Cordner, A. & Harrison, J. A. Workplace tentativity: Biophysical environmental impacts on outdoor work. Environ. Sociol. 10, 237–252. https://doi.org/10.1080/23251042.2023.2298017 (2024).
Pan, X. et al. in 2020 IEEE International Conference on Robotics and Automation (ICRA). 679–685.
Ashima, R. et al. Automation and manufacturing of smart materials in additive manufacturing technologies using Internet of Things towards the adoption of industry 4.0. Mater. Today Proc. 45, 5081–5088. https://doi.org/10.1016/j.matpr.2021.01.583 (2021).
Peruzzini, M., Prati, E. & Pellicciari, M. A framework to design smart manufacturing systems for Industry 5.0 based on the human-automation symbiosis. Int. J. Comput. Integr. Manuf. 37, 1426–1443. https://doi.org/10.1080/0951192X.2023.2257634 (2024).
Chen, Q., García de Soto, B. & Adey, B. T. Construction automation: Research areas, industry concerns and suggestions for advancement. Autom. Constr. 94, 22–38. https://doi.org/10.1016/j.autcon.2018.05.028 (2018).
Griffin, A. et al. Using advanced manufacturing technology for smarter construction. Proc. Inst. Civ. Eng. Civ. Eng. 172, 15–21. https://doi.org/10.1680/jcien.18.00051 (2019).
Yoshiike, T. et al. The experimental humanoid robot E2-DR: A design for inspection and disaster response in industrial environments. IEEE Robot. Autom. Mag. 26, 46–58. https://doi.org/10.1109/MRA.2019.2941241 (2019).
Kumagai, I. et al. Toward industrialization of humanoid robots: Autonomous plasterboard installation to improve safety and efficiency. IEEE Robot. Autom. Mag. 26, 20–29. https://doi.org/10.1109/MRA.2019.2940964 (2019).
Tong, Y., Liu, H. & Zhang, Z. Advancements in humanoid robots: A comprehensive review and future prospects. IEEE/CAA J. Automatica Sinica 11, 301–328. https://doi.org/10.1109/JAS.2023.124140 (2024).
Noreils, F. R. Humanoid robots at work: Where are we? (2024).
Sartore, C., Rapetti, L. & Pucci, D. Optimization of Humanoid Robot Designs for Human–Robot Ergonomic Payload Lifting (2022).
Kheddar, A. et al. Humanoid robots in aircraft manufacturing: The airbus use cases. IEEE Robot. Autom. Mag. 26, 30–45. https://doi.org/10.1109/MRA.2019.2943395 (2019).
Tsuru, M., Escande, A., Kumagai, I., Murooka, M. & Harada, K. Online multi-contact motion replanning for humanoid robots with semantic 3D voxel mapping: ExOctomap. Sensors 23, 8837. https://doi.org/10.3390/s23218837 (2023).
Liu, D. et al. (eds) Infrastructure Robotics: Methodologies, Robotic Systems and Applications 1st edn. (Wiley, 2024).
Nvidia. NVIDIA Announces Project GR00T Foundation Model for Humanoid Robots and Major Isaac Robotics Platform Update. NVIDIA Newsroom (2024).
OpenAI et al. GPT-4 Technical Report. arXiv:2303.08774 (2023). https://ui.adsabs.harvard.edu/abs/2023arXiv230308774O.
Boston Dynamics. Changing your idea of what robots can do (2025).
Ruchik Kashyapkumar, T. Advancements in forestry robotics: Autonomous navigation, sensing, and AI-driven applications for precision forestry and forest inventory management. https://doi.org/10.5281/zenodo.14001694 (2024).
Adiuku, N., Avdelidis, N. P., Tang, G. & Plastropoulos, A. Advancements in learning-based navigation systems for robotic applications in MRO hangar: Review. Sensors 24, 1377. https://doi.org/10.3390/s24051377 (2024).
Vianello, L. et al. Human–humanoid interaction and cooperation: A review. Curr. Robot. Rep. 2, 441–454. https://doi.org/10.1007/s43154-021-00068-z (2021).
Shah Bukhari, S. T., Anima, B. A., Feil-Seifer, D. & Qazi, W. M. in 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 7992–7998.
Javaid, M., Haleem, A., Singh, R. P. & Suman, R. Substantial capabilities of robotics in enhancing industry 4.0 implementation. Cogn. Robot. 1, 58–75. https://doi.org/10.1016/j.cogr.2021.06.001 (2021).
Ikuabe, M., Aigbavboa, C. & Kissi, E. Potential applications and benefits of humanoids in the construction industry: A South African perspective. Int. J. Build. Pathol. Adapt. 41, 254–268. https://doi.org/10.1108/IJBPA-04-2023-0042 (2023).
Callari, T. C., Vecellio Segate, R., Hubbard, E.-M., Daly, A. & Lohse, N. An ethical framework for human-robot collaboration for the future people-centric manufacturing: A collaborative endeavour with European subject-matter experts in ethics. Technol. Soc. 78, 102680. https://doi.org/10.1016/j.techsoc.2024.102680 (2024).
Skubis, I., Mesjasz-Lech, A. & Nowakowska-Grunt, J. Humanoid robots in tourism and hospitality, exploring managerial, ethical, and societal challenges. Appl. Sci. 14, 11823. https://doi.org/10.3390/app142411823 (2024).
Darvish, K. et al. Teleoperation of humanoid robots: A survey. IEEE Trans. Rob. 39, 1706–1727. https://doi.org/10.1109/TRO.2023.3236952 (2023).
Angelopoulos, G., Baras, N. & Dasygenis, M. Secure autonomous cloud brained humanoid robot assisting rescuers in hazardous environments. Electronics 10, 124. https://doi.org/10.3390/electronics10020124 (2021).
Mukherjee, S. et al. in 2022 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COM-IT-CON). 822–826.
Kaist. Hubo Debuts as a News Anchor. (2020).
Chestnutt, J. et al. in Proceedings of the 2005 IEEE International Conference on Robotics and Automation. 629–634.
Hirose, M. & Ogawa, K. Honda humanoid robots development. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 365, 11–19. https://doi.org/10.1098/rsta.2006.1917 (2006).
Park, I.-W., Kim, J.-Y., Lee, J. & Oh, J.-H. in 5th IEEE-RAS International Conference on Humanoid Robots. 321–326 (2005).
Yoshikawa, T. Identification of human walking balance controller based on COM-ZMP model of humanoid robot. Front. Robot. AI. https://doi.org/10.3389/frobt.2022.757630 (2022).
Piperakis, S. & Trahanias, P. in 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids). 202–209.
Kajita, S., Nagasaki, T., Kaneko, K. & Hirukawa, H. ZMP-based biped running control. IEEE Robot. Autom. Mag. 14, 63–72. https://doi.org/10.1109/MRA.2007.380655 (2007).
Zhang, Q. et al. in 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 11225–11231.
Özaln, R. et al. in 2019 IEEE International Symposium on INnovations in Intelligent SysTems and Applications (INISTA). 1–5.
Muzio, A. F. V., Maximo, M. R. O. A. & Yoneyama, T. Deep reinforcement learning for humanoid robot behaviors. J. Intell. Rob. Syst. 105, 12. https://doi.org/10.1007/s10846-022-01619-y (2022).
Hokayem, P. F. & Spong, M. W. Bilateral teleoperation: An historical survey. Automatica 42, 2035–2057. https://doi.org/10.1016/j.automatica.2006.06.027 (2006).
Schwartz, M. et al. in 2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids). 322–329.
Gorjup, G. et al. in 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 4103–4110.
Bao, R. et al. Integrated intelligent tactile system for a humanoid robot. Sci. Bull. 68, 1027–1037. https://doi.org/10.1016/j.scib.2023.04.019 (2023).
Pohtongkam, S. & Srinonchat, J. Object recognition for humanoid robots using full hand tactile sensor. IEEE Access 11, 20284–20297. https://doi.org/10.1109/ACCESS.2023.3249573 (2023).
Kappassov, Z., Corrales-Ramon, J. A. & Perdereau, V. Tactile sensing in dexterous robot hands—Review. Robot. Auton. Syst. 74, Part A, 195–220. https://doi.org/10.1016/j.robot.2015.07.015 (2015).
Schaal, S. Is imitation learning the route to humanoid robots? Trends Cogn. Sci. 3, 233–242. https://doi.org/10.1016/S1364-6613(99)01327-3 (1999).
Sasagawa, A., Fujimoto, K., Sakaino, S. & Tsuji, T. Imitation learning based on bilateral control for human–robot cooperation. IEEE Robot. Autom. Lett. 5, 6169–6176. https://doi.org/10.1109/LRA.2020.3011353 (2020).
Thomason, W. & Knepper, R. A. Recognizing unfamiliar gestures for human–robot interaction through zero-shot learning. In 2016 International Symposium on Experimental Robotics. Springer Proceedings in Advanced Robotics, vol. 1 (eds Kulić, D. et al.) 841–852 (Springer, Cham, 2017). https://doi.org/10.1007/978-3-319-50115-4_73
Zhang, B. & Soh, H. in 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 7961–7968 (2023).
Yao, Y. et al. AnyBipe: An end-to-end framework for training and deploying bipedal robots guided by large language models. https://doi.org/10.48550/arXiv.2409.08904 (2024).
Scicluna, A., Le Gentil, C., Sutjipto, S. & Paul, G. in 2024 IEEE 20th International Conference on Automation Science and Engineering (CASE). 920–925 (2024).
Roychoudhury, A., Khorshidi, S., Agrawal, S. & Bennewitz, M. Perception for humanoid robots. Curr. Robot. Rep. 4, 127–140. https://doi.org/10.1007/s43154-023-00107-x (2023).
Luo, A. et al. Surface recognition via force-sensory walking-pattern classification for biped robot. IEEE Sens. J. 21, 10061–10072. https://doi.org/10.1109/JSEN.2021.3059099 (2021).
Kong, W., Qu, Z., Liu, H. & Deng, L. in 2023 IEEE 6th International Conference on Automation, Electronics and Electrical Engineering (AUTEEE). 1031–1035 (2023).
Chatterjee, S., Zunjani, F. H. & Nandi, G. C. in 2020 6th International Conference on Control, Automation and Robotics (ICCAR). 202–208 (2020).
Chen, Y. et al. in 2019 IEEE/CVF International Conference on Computer Vision (ICCV). 8647–8656 (IEEE, 2019).
Aksoy, E. E., Aein, M. J., Tamosiunaite, M. & Wörgötter, F. in 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2875–2882 (2015).
Ramirez-Amaro, K., Beetz, M. & Cheng, G. Understanding the intention of human activities through semantic perception: Observation, understanding and execution on a humanoid robot. Adv. Robot. 29, 345–362. https://doi.org/10.1080/01691864.2014.1003096 (2015).
Ruscelli, F. Planning and Control Strategies for Motion and Interaction of the Humanoid Robot COMAN+. (2021).
Feng, S., Whitman, E., Xinjilefu, X. & Atkeson, C. G. in 2014 IEEE-RAS International Conference on Humanoid Robots. 120–127 (2014).
Torres-Pardo, A. et al. Legged locomotion over irregular terrains: state of the art of human and robot performance. Bioinspir. Biomim. 17, 061002. https://doi.org/10.1088/1748-3190/ac92b3 (2022).
Radosavovic, I., Kamat, S., Darrell, T. & Malik, J. Learning Humanoid Locomotion over Challenging Terrain. https://doi.org/10.48550/arXiv.2410.03654 (2024).
Hong, Y.-D. Capture point-based controller using real-time zero moment point manipulation for stable bipedal walking in human environment. Sensors 19, 3407. https://doi.org/10.3390/s19153407 (2019).
Varma, N., Sudheer, A. P. & Joy, M. L. Investigation on ZMP variation of 12-DoF biped robot in screw theory framework. In Advances in Systems Engineering (eds Saran, V. H. & Misra, R. K.) 587–596 (Springer, 2021). https://doi.org/10.1007/978-981-15-8025-3_56.
Peng, X. B., Berseth, G., Yin, K. & Van De Panne, M. DeepLoco: Dynamic locomotion skills using hierarchical deep reinforcement learning. ACM Trans. Graph. 36, 1–13. https://doi.org/10.1145/3072959.3073602 (2017).
Li, Z. et al. in 2021 IEEE International Conference on Robotics and Automation (ICRA). 2811–2817 (2021).
Kim, D. et al. Dynamic locomotion for passive-ankle biped robots and humanoids using whole-body locomotion control. Int. J. Robot. Res. 39, 936–956. https://doi.org/10.1177/0278364920918014 (2020).
Pratt, J. et al. Capturability-based analysis and control of legged locomotion, part 2: Application to M2V2, a lower-body humanoid. Int. J. Robot. Res. 31, 1117–1133. https://doi.org/10.1177/0278364912452762 (2012).
Mandil, W., Rajendran, V., Nazari, K. & Ghalamzan-Esfahani, A. Tactile-sensing technologies: Trends, challenges and outlook in agri-food manipulation. Sensors 23, 7362. https://doi.org/10.3390/s23177362 (2023).
Patel, R. B. A Unified Visual-haptic Fingertip Sensor for Advanced Robot Dexterity (University of Colorado at Boulder, 2019).
Bharadhwaj, H., Gupta, A., Kumar, V. & Tulsiani, S. in 2024 IEEE International Conference on Robotics and Automation (ICRA). 6904–6911 (2024).
Hettich, G., Lippi, V. & Mergner, T. Human-like sensor fusion implemented in the posture control of a bipedal robot. In Neurotechnology, Electronics and Informatics 29–45 (Springer, 2015). https://doi.org/10.1007/978-3-319-15997-3_3.
Gupta, S. & Kumar, A. A brief review of dynamics and control of underactuated biped robots. Adv. Robot. 31, 607–623. https://doi.org/10.1080/01691864.2017.1308270 (2017).
Ferreira, J. F. et al. Sensing and artificial perception for robots in precision forestry: A survey. Robotics 12, 139. https://doi.org/10.3390/robotics12050139 (2023).
Tang, Q., Liang, J. & Zhu, F. A comparative review on multi-modal sensors fusion based on deep learning. Signal Process. 213, 109165. https://doi.org/10.1016/j.sigpro.2023.109165 (2023).
Redmon, J., Divvala, S., Girshick, R. & Farhadi, A. in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 779–788 (2016).
Zou, Z., Chen, K., Shi, Z., Guo, Y. & Ye, J. Object detection in 20 years: A survey. Proc. IEEE 111, 257–276. https://doi.org/10.1109/JPROC.2023.3238524 (2023).
Nowruzi, F. E. et al. (arXiv, 2019).
Salari, A., Djavadifar, A., Liu, X. & Najjaran, H. Object recognition datasets and challenges: A review. Neurocomputing 495, 129–152. https://doi.org/10.1016/j.neucom.2022.01.022 (2022).
Melenbrink, N., Werfel, J. & Menges, A. On-site autonomous construction robots: Towards unsupervised building. Autom. Constr. 119, 103312. https://doi.org/10.1016/j.autcon.2020.103312 (2020).
Li, H., Deng, H. & Deng, Y. Towards worker-centric construction scene understanding: Status quo and future directions. Autom. Constr. 171, 106005. https://doi.org/10.1016/j.autcon.2025.106005 (2025).
Yu, S., Perera, N., Marew, D. & Kim, D. (arXiv, 2024).
Zhou, T., Zhu, Q., Shi, Y. & Du, J. Construction robot teleoperation safeguard based on real-time human hand motion prediction. J. Constr. Eng. Manag. 148, 04022040. https://doi.org/10.1061/(ASCE)CO.1943-7862.0002289 (2022).
Zhang, M., Xu, R., Wu, H., Pan, J. & Luo, X. Human–robot collaboration for on-site construction. Autom. Constr. 150, 104812. https://doi.org/10.1016/j.autcon.2023.104812 (2023).
Sheridan, T. B. Human–robot interaction: Status and challenges. Hum. Factors 58, 525–532. https://doi.org/10.1177/0018720816644364 (2016).
Tu, M., Yang, J., Qi, Q. & Ding, H. Efficient construction of an interference-free region and tool orientation planning for the robotic grinding of blisks. J. Manuf. Process. 131, 356–368. https://doi.org/10.1016/j.jmapro.2024.09.014 (2024).
Matteucci, P. & Cepolina, F. A robotic cutting tool for contaminated structure maintenance and decommissioning. Autom. Constr. 58, 109–117. https://doi.org/10.1016/j.autcon.2015.07.006 (2015).
Sun, Y., Jeelani, I. & Gheisari, M. Safe human-robot collaboration in construction: A conceptual perspective. J. Saf. Res. 86, 39–51. https://doi.org/10.1016/j.jsr.2023.06.006 (2023).
Paneru, S. & Jeelani, I. Computer vision applications in construction: Current state, opportunities & challenges. Autom. Constr. 132, 103940. https://doi.org/10.1016/j.autcon.2021.103940 (2021).
Zhao, C. et al. Robotic motion planning for autonomous in-situ construction of building structures. Autom. Constr. 171, 105993. https://doi.org/10.1016/j.autcon.2025.105993 (2025).
Xu, Q. et al. FEM-based real-time task planning for robotic construction simulation. Autom. Constr. 170, 105935. https://doi.org/10.1016/j.autcon.2024.105935 (2025).
Segate, R. V. & Daly, A. Encoding the enforcement of safety standards into smart robots to harness their computing sophistication and collaborative potential: A legal risk assessment for European Union policymakers. Eur. J. Risk Regul. 15, 665–704. https://doi.org/10.1017/err.2023.72 (2024).
Sanders, N. E., Şener, E. & Chen, K. B. Robot-related injuries in the workplace: An analysis of OSHA Severe Injury Reports. Appl. Ergon. 121, 104324. https://doi.org/10.1016/j.apergo.2024.104324 (2024).
Murtaza, I., Rashid, M. U. & Asad, M. Construction safety challenges and opportunities at design stage for building industry. Proc. Inst. Civ. Eng. Manag. Procurement Law 178, 3–15. https://doi.org/10.1680/jmapl.23.00010 (2025).
Shayesteh, S., Ojha, A., Liu, Y. & Jebelli, H. Human-robot teaming in construction: Evaluative safety training through the integration of immersive technologies and wearable physiological sensing. Saf. Sci. 159, 106019. https://doi.org/10.1016/j.ssci.2022.106019 (2023).
Robla-Gómez, S. et al. Working together: A review on safe human–robot collaboration in industrial environments. IEEE Access 5, 26754–26773. https://doi.org/10.1109/ACCESS.2017.2773127 (2017).
Davila Delgado, J. M. et al. Robotics and automated systems in construction: Understanding industry-specific challenges for adoption. J. Build. Eng. 26, 100868. https://doi.org/10.1016/j.jobe.2019.100868 (2019).
Yahya, M. Y. B. et al. The challenges of the implementation of construction robotics technologies in the construction. MATEC Web Conf. 266, 05012 (2019).
Waqar, A., Alrasheed, K. A. & Benjeddou, O. Enhancing construction management outcomes through the mitigation of robotics implementation barriers: A sustainable practice model. Environ. Chall. 16, 100989. https://doi.org/10.1016/j.envc.2024.100989 (2024).
El Asmar, M., Hanna, A. S. & Loh, W.-Y. Evaluating integrated project delivery using the project quarterback rating. J. Constr. Eng. Manag. 142, 04015046. https://doi.org/10.1061/(ASCE)CO.1943-7862.0001015 (2016).
Koch, I., Poljac, E., Müller, H. & Kiesel, A. Cognitive structure, flexibility, and plasticity in human multitasking—An integrative review of dual-task and task-switching research. Psychol. Bull. 144, 557–583. https://doi.org/10.1037/bul0000144 (2018).
Yao, C. et al. TAIL: A terrain-aware multi-modal SLAM dataset for robot locomotion in deformable granular environments. IEEE Robot. Autom. Lett. 9, 6696–6703. https://doi.org/10.1109/LRA.2024.3407411 (2024).
Xia, B., Li, B., Lee, J., Scutari, M. & Chen, B. The Duke Humanoid: Design and Control For Energy Efficient Bipedal Locomotion Using Passive Dynamics. (2024).
Rebolledo, M., Zeeuwe, D., Bartz-Beielstein, T. & Eiben, A. E. Co-optimizing for task performance and energy efficiency in evolvable robots. Eng. Appl. Artif. Intell. 113, 104968. https://doi.org/10.1016/j.engappai.2022.104968 (2022).
Kashiri, N. et al. An overview on principles for energy efficient robot locomotion. Front. Robot. AI 5, 129 (2018).
Wood, G., Vine, S. J. & Wilson, M. R. The impact of visual illusions on perception, action planning, and motor performance. Atten. Percept. Psychophys. 75, 830–834. https://doi.org/10.3758/s13414-013-0489-y (2013).
Gao, Y., Meng, J., Shu, J. & Liu, Y. BIM-based task and motion planning prototype for robotic assembly of COVID-19 hospitalisation light weight structures. Autom. Constr. 140, 104370. https://doi.org/10.1016/j.autcon.2022.104370 (2022).
Zeng, L., Guo, S., Wu, J. & Markert, B. Autonomous mobile construction robots in built environment: A comprehensive review. Dev. Built Environ. 19, 100484. https://doi.org/10.1016/j.dibe.2024.100484 (2024).
Brosque, C., Galbally, E., Khatib, O. & Fischer, M. in 2020 International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA). 1–8 (2020).
Huang, L., Cai, W., Zhu, Z. & Zou, Z. Dexterous manipulation of construction tools using anthropomorphic robotic hand. Autom. Constr. 156, 105133. https://doi.org/10.1016/j.autcon.2023.105133 (2023).
Kumar, C. N., Sakthivel, M., Elangovan, R. K. & Arularasu, M. Analysis of material handling safety in construction sites and countermeasures for effective enhancement. Sci. World J. https://doi.org/10.1155/2015/742084 (2015).
Alsharef, A., Albert, A., Awolusi, I. & Jaselskis, E. Severe injuries among construction workers: Insights from OSHA’s new severe injury reporting program. Saf. Sci. 163, 106126. https://doi.org/10.1016/j.ssci.2023.106126 (2023).
Vaz, J. C. & Oh, P. in 2020 IEEE International Conference on Robotics and Automation (ICRA). 9796–9801 (2020).
Henze, B., Roa, M. A. & Ott, C. Passivity-based whole-body balancing for torque-controlled humanoid robots in multi-contact scenarios. Int. J. Rob. Res. 35, 1522–1543. https://doi.org/10.1177/0278364916653815 (2016).
Yang, X., Amtsberg, F., Sedlmair, M. & Menges, A. Challenges and potential for human–robot collaboration in timber prefabrication. Autom. Constr. 160, 105333. https://doi.org/10.1016/j.autcon.2024.105333 (2024).
Pfeiffer, S. Robots, industry 4.0 and humans, or why assembly work is more than routine work. Societies 6, 16. https://doi.org/10.3390/soc6020016 (2016).
Shibata, M. & Hirai, S. Soft object manipulation by simultaneous control of motion and deformation. In Proceedings 2006 IEEE International Conference on Robotics and Automation, 2006. ICRA 2006, 2460–2465 (2006).
Navarro-Alarcon, D. et al. Automatic 3-D manipulation of soft objects by robotic arms with an adaptive deformation model. IEEE Trans. Rob. 32, 429–441. https://doi.org/10.1109/TRO.2016.2533639 (2016).
Arriola-Rios, V. E. et al. Modeling of deformable objects for robotic manipulation: A tutorial and review. Front. Robot. AI 7, 82 (2020).
Li, Y. et al. A survey of multifingered robotic manipulation: Biological results, structural evolvements, and learning methods. Front. Neurorobot. 16, 843267 (2022).
Burdea, G. & Zhuang, J. Dextrous telerobotics with force feedback—An overview. Part 1: Human factors. Robotica 9, 171–178. https://doi.org/10.1017/S0263574700010213 (1991).
Aviles, A. I., Alsaleh, S. M., Hahn, J. K. & Casals, A. Towards retrieving force feedback in robotic-assisted surgery: A supervised neuro-recurrent-vision approach. IEEE Trans. Haptics 10, 431–443. https://doi.org/10.1109/TOH.2016.2640289 (2017).
Zhu, A., Pauwels, P., Torta, E., Zhang, H. & De Vries, B. Data linking and interaction between BIM and robotic operating system (ROS) for flexible construction planning. Autom. Constr. 163, 105426. https://doi.org/10.1016/j.autcon.2024.105426 (2024).
Du, J., Zou, Z., Shi, Y. & Zhao, D. Simultaneous Data Exchange between BIM and VR for Collaborative Decision Making. (2017).
Görsch, C., Seppänen, O., Peltokorpi, A. & Lavikka, R. in Proc. 28th Annual Conference of the International Group for Lean Construction (IGLC). 937–948 (2020).
Li, Z., Zeng, J., Chen, S. & Sreenath, K. Autonomous navigation of underactuated bipedal robots in height-constrained environments. Int. J. Robot. Res. 42, 565–585. https://doi.org/10.1177/02783649231187670 (2023).
Mikolajczyk, T. et al. Recent advances in bipedal walking robots: Review of gait, drive, sensors and control systems. Sensors 22, 4440. https://doi.org/10.3390/s22124440 (2022).
Afsari, K. et al. in Construction Research Congress 2022 Proceedings 610–620 (2022).
Zhong, H. in Proceedings of the 41st International Symposium on Automation and Robotics in Construction 1073–1080 (International Association for Automation and Robotics in Construction (IAARC), 2024).
Halder, S. & Afsari, K. Robots in inspection and monitoring of buildings and infrastructure: A systematic review. Appl. Sci. 13, 2304. https://doi.org/10.3390/app13042304 (2023).
Halder, S., Afsari, K., Chiou, E., Patrick, R. & Hamed, K. A. Construction inspection & monitoring with quadruped robots in future human–robot teaming: A preliminary study. J. Build. Eng. 65, 105814. https://doi.org/10.1016/j.jobe.2022.105814 (2023).
Munawar, H. S., Ullah, F., Heravi, A., Thaheem, M. J. & Maqsoom, A. Inspecting buildings using drones and computer vision: A machine learning approach to detect cracks and damages. Drones 6, 5 (2022).
Chen, J., Lu, W., Fu, Y. & Dong, Z. Automated facility inspection using robotics and BIM: A knowledge-driven approach. Adv. Eng. Inform. 55, 101838. https://doi.org/10.1016/j.aei.2022.101838 (2023).
Halder, S., Afsari, K., Serdakowski, J. & DeVito, S. in Proceedings of the International Symposium on Automation and Robotics in Construction. 17–24.
Ohueri, C. C., Masrom, M. A. N. & Noguchi, M. Human–robot collaboration for building deconstruction in the context of construction 5.0. Autom. Constr. 167, 105723. https://doi.org/10.1016/j.autcon.2024.105723 (2024).
Xiao, W. et al. Development of an automatic sorting robot for construction and demolition waste. Clean Technol. Environ. Policy 22, 1829–1841. https://doi.org/10.1007/s10098-020-01922-y (2020).
Da, X., Harib, O., Hartley, R., Griffin, B. & Grizzle, J. From 2D design of underactuated bipedal gaits to 3D implementation: Walking with speed tracking. IEEE Access 4, 1–1. https://doi.org/10.1109/ACCESS.2016.2582731 (2016).
Mu, Z. et al. Intelligent demolition robot: Structural statics, collision detection, and dynamic control. Autom. Constr. 142, 104490. https://doi.org/10.1016/j.autcon.2022.104490 (2022).
Derlukiewicz, D. Application of a design and construction method based on a study of user needs in the prevention of accidents involving operators of demolition robots. Appl. Sci. 9, 1500 (2019).
Ye, Y., Cen, Y. & Xie, N. in 2015 IEEE Fifth International Conference on Big Data and Cloud Computing. 335–337 (2015).
Lv, H. et al. A compound planning algorithm considering both collision detection and obstacle avoidance for intelligent demolition robots. Robot. Auton. Syst. 181, 104781. https://doi.org/10.1016/j.robot.2024.104781 (2024).
Huang, B. J. The lab making robots walk through fire and ride Segways. Wired https://www.wired.com/story/the-lab-making-robots-walk-through-fire-and-ride-segways/ (2018).
Zhou, T., Zhu, Q., Ye, Y. & Du, J. Humanlike inverse kinematics for improved spatial awareness in construction robot teleoperation: Design and experiment. J. Constr. Eng. Manag. 149, 04023044. https://doi.org/10.1061/JCEMD4.COENG-13350 (2023).
Singh, J., Srinivasan, A. R., Neumann, G. & Kucukyilmaz, A. Haptic-guided teleoperation of a 7-DoF collaborative robot arm with an identical twin master. IEEE Trans. Haptics 13, 246–252. https://doi.org/10.1109/TOH.2020.2971485 (2020).
Zhu, Q., Zhou, T. & Du, J. Upper-body haptic system for snake robot teleoperation in pipelines. Adv. Eng. Inform. 51, 101532. https://doi.org/10.1016/j.aei.2022.101532 (2022).
Zhu, Q., Du, J., Shi, Y. & Wei, P. Neurobehavioral assessment of force feedback simulation in industrial robotic teleoperation. Autom. Constr. 126, 103674. https://doi.org/10.1016/j.autcon.2021.103674 (2021).
Zhou, T., Zhu, Q. & Du, J. Intuitive robot teleoperation for civil engineering operations with virtual reality and deep learning scene reconstruction. Adv. Eng. Inform. 46, 101170. https://doi.org/10.1016/j.aei.2020.101170 (2020).
Liu, Y., Habibnezhad, M. & Jebelli, H. Brain–computer interface for hands-free teleoperation of construction robots. Autom. Constr. 123, 103523. https://doi.org/10.1016/j.autcon.2020.103523 (2021).
Hong, Z., Zhang, Q., Su, X. & Zhang, H. Effect of virtual annotation on performance of construction equipment teleoperation under adverse visual conditions. Autom. Constr. 118, 103296. https://doi.org/10.1016/j.autcon.2020.103296 (2020).
Saidi, K. S., O'Brien, J. B. & Lytle, A. M. in Springer Handbook of Robotics (eds Siciliano, B. & Khatib, O.) 1079–1099 (Springer, Berlin, 2008).
Ragaglia, M. in Proceedings of the 35th International Symposium on Automation and Robotics in Construction (ISARC) 866–873 (International Association for Automation and Robotics in Construction (IAARC), 2018).
Yarovoi, A. & Cho, Y. K. Review of simultaneous localization and mapping (SLAM) for construction robotics applications. Autom. Constr. 162, 105344. https://doi.org/10.1016/j.autcon.2024.105344 (2024).
Cai, J., Du, A. & Li, S. in Computing in Civil Engineering 2021 Proceedings 34–41 (2022).
Lee, J., Kim, B., Sun, D., Han, C. & Ahn, Y. Development of unmanned excavator vehicle system for performing dangerous construction work. Sensors 19, 4853 (2019).
Thaker, R. Robotics in construction: A critical review of reinforcement learning, imitation learning, and industry-specific challenges for adoption. Int. J. Multidiscip. Res. https://doi.org/10.36948/ijfmr.2024.v06i02.29707 (2024).
Li, R. & Zou, Z. Enhancing construction robot learning for collaborative and long-horizon tasks using generative adversarial imitation learning. Adv. Eng. Inform. 58, 102140. https://doi.org/10.1016/j.aei.2023.102140 (2023).
Zhang, B. et al. Computer vision-based construction process sensing for cyber-physical systems: A review. Sensors 21, 5468 (2021).
Pan, M., Linner, T., Pan, W., Cheng, H. & Bock, T. Structuring the context for construction robot development through integrated scenario approach. Autom. Constr. 114, 103174. https://doi.org/10.1016/j.autcon.2020.103174 (2020).
Lafhaj, Z. et al. Complexity in construction projects: A literature review. Buildings 14, 680 (2024).
Kirsch, W. & Kunde, W. Moving further moves things further away in visual perception: Position-based movement planning affects distance judgments. Exp. Brain Res. 226, 431–440. https://doi.org/10.1007/s00221-013-3455-y (2013).
Gomez, C., Hernandez, A. C. & Barber, R. Topological frontier-based exploration and map-building using semantic information. Sensors 19 (2019).
Xu, X. et al. A review of multi-sensor fusion SLAM systems based on 3D LIDAR. Remote Sens. 14, 2835 (2022).
Zhang, C. et al. Map construction based on LiDAR vision inertial multi-sensor fusion. World Electr. Veh. J. 12, 261 (2021).
Guan, M., Hao, Z., Zhang, S. & Chen, S. A multi-sensor fusion framework for localization using LiDAR, IMU and RGB-D camera. Meas. Sci. Technol. https://doi.org/10.1088/1361-6501/ad601f (2024).
Liu, X. et al. Vision-IMU multi-sensor fusion semantic topological map based on RatSLAM. Measurement 220, 113335. https://doi.org/10.1016/j.measurement.2023.113335 (2023).
He, J., Fang, J., Xu, S. & Yang, D. Indoor robot SLAM with multi-sensor fusion. Int. J. Adv. Netw. Monit. Controls 9, 10–21. https://doi.org/10.2478/ijanmc-2024-0002 (2024).
Sabelhaus, A. P., Mehta, R. K., Wertz, A. T. & Majidi, C. In-situ sensing and dynamics predictions for electrothermally-actuated soft robot limbs. Front. Robot. AI 9 (2022).
Choi, Y., Yoon, S., Park, C.-Y. & Lee, K.-C. In-situ observation and calibration in building digitalization: Comparison of intrusive and nonintrusive approaches. Autom. Constr. 145, 104648. https://doi.org/10.1016/j.autcon.2022.104648 (2023).
Helm, V., Ercan Jenny, S., Gramazio, F. & Kohler, M. In-Situ Robotic Construction: Extending the Digital Fabrication Chain in Architecture. (2012).
Choi, B., Hwang, S. & Lee, S. What drives construction workers’ acceptance of wearable technologies in the workplace?: Indoor localization and wearable health devices for occupational safety and health. Autom. Constr. 84, 31–41. https://doi.org/10.1016/j.autcon.2017.08.005 (2017).
Marcy, L. & Iordanova, I. Slam and Beacon data for automation of indoor construction progress tracking. IOP Conf. Ser. Mater. Sci. Eng. 1218, 012009. https://doi.org/10.1088/1757-899X/1218/1/012009 (2022).
Rao, A. et al. Real-time monitoring of construction sites: Sensors, methods, and applications. Autom. Constr. 136, 104099. https://doi.org/10.1016/j.autcon.2021.104099 (2022).
Wang, Z., Li, H. & Yang, X. Vision-based robotic system for on-site construction and demolition waste sorting and recycling. J. Build. Eng. 32, 101769. https://doi.org/10.1016/j.jobe.2020.101769 (2020).
Arents, J., Greitans, M. & Lesser, B. Construction of a Smart Vision-Guided Robot System for Manipulation in a Dynamic Environment. In Artificial Intelligence for Digitising Industry – Applications, 1st ed. 205–220 (River Publishers, 2021). https://doi.org/10.1201/9781003337232-18.
Ding, Y., Khazoom, C., Chignoli, M. & Kim, S. in 2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids). 299–305 (2022).
Karkowski, P. & Bennewitz, M. in 2019 International Conference on Robotics and Automation (ICRA). 2517–2523 (2019).
Karkowski, P., Oßwald, S. & Bennewitz, M. Real-time footstep planning in 3D environments. In 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), 69–74 (2016).
Perrin, N., Stasse, O., Lamiraux, F. & Yoshida, E. Weakly Collision-Free Paths for Continuous Humanoid Footstep Planning. (2011).
Tak, S. & Ko, H.-S. A physically-based motion retargeting filter. ACM Trans. Graph. 24, 98–117. https://doi.org/10.1145/1037957.1037963 (2005).
Qiu, X. et al. Upright and crawling locomotion and its transition for a wheel-legged robot. Micromachines 13, 1252. https://doi.org/10.3390/mi13081252 (2022).
Arcos-Legarda, J., Cortes-Romero, J., Beltran-Pulido, A. & Tovar, A. Hybrid disturbance rejection control of dynamic bipedal robots. Multibody Syst. Dyn. 46, 281–306. https://doi.org/10.1007/s11044-019-09667-3 (2019).
Tao, C. et al. Gait optimization method for humanoid robots based on parallel comprehensive learning particle swarm optimizer algorithm. Front. Neurorobot. 14, 600885 (2021).
Poonja, H. A. et al. Walking algorithm using gait analysis for humanoid robot. Eng. Proc. 20, 35 (2022).
Yang, T., Tong, Y. & Zhang, Z. Flexible model predictive control for bounded gait generation in humanoid robots. Biomimetics 10, 30 (2025).
Wang, Z. et al. Hybrid bipedal locomotion based on reinforcement learning and heuristics. Micromachines 13, 1688 (2022).
Chu, B., Jung, K., Lim, M.-T. & Hong, D. Robot-based construction automation: An application to steel beam assembly (Part I). Autom. Constr. 32, 46–61. https://doi.org/10.1016/j.autcon.2012.12.016 (2013).
Stumm, S., Braumann, J., von Hilchen, M. & Brell-Cokcan, S. in Advances in Robot Design and Intelligent Control (eds Rodić, A. & Borangiu, T.) 583–592 (Springer, 2017).
Ding, L., Jiang, W., Zhou, Y., Zhou, C. & Liu, S. BIM-based task-level planning for robotic brick assembly through image-based 3D modeling. Adv. Eng. Inform. 43, 100993. https://doi.org/10.1016/j.aei.2019.100993 (2020).
Li, B., Mao, S. & Zhang, H. Laser attenuation and ranging correction in the coal dust environment based on Mie theory and phase ranging principle. Atmosphere 14, 845. https://doi.org/10.3390/atmos14050845 (2023).
Ryde, J. & Hillier, N. Performance of laser and radar ranging devices in adverse environmental conditions. J. Field Robot. 26, 712–727. https://doi.org/10.1002/rob.20310 (2009).
Alsayed, A., Yunusa-kaltungo, A., Quinn, M., Arvin, F. & Nabawy, M. Drone-assisted confined space inspection and stockpile volume estimation. Remote Sens. 13, 3356. https://doi.org/10.3390/rs13173356 (2021).
Zhou, T., Xia, P., Ye, Y. & Du, J. Embodied robot teleoperation based on high-fidelity visual-haptic simulator: Pipe-fitting example. J. Constr. Eng. Manag. 149, 04023129. https://doi.org/10.1061/JCEMD4.COENG-13916 (2023).
Zhu, G. et al. A bimanual robotic teleoperation architecture with anthropomorphic hybrid grippers for unstructured manipulation tasks. Appl. Sci. 10, 2086 (2020).
Saliba, M., Zarb, A. & Borg, J. Development of a Modular, Reconfigurable End Effector for the Plastics Industry: A Case Study. (2009).
A, M. et al. in 2023 2nd International Conference on Advancements in Electrical, Electronics, Communication, Computing and Automation (ICAECA). 1–6 (2023).
Kim, U., Jeong, H., Do, H., Park, J. & Park, C. Six-axis force/torque fingertip sensor for an anthropomorphic robot hand. IEEE Robot. Autom. Lett. 5, 5566–5572. https://doi.org/10.1109/LRA.2020.3009072 (2020).
Park, S. & Hwang, D. Three-axis flat and lightweight force/torque sensor for enhancing kinesthetic sensing capability of robotic hand. IEEE Trans. Ind. Electron. 71, 12707–12717. https://doi.org/10.1109/TIE.2023.3344833 (2024).
Nagano, H., Takenouchi, H., Cao, N., Konyo, M. & Tadokoro, S. Tactile feedback system of high-frequency vibration signals for supporting delicate teleoperation of construction robots. Adv. Robot. 34, 730–743. https://doi.org/10.1080/01691864.2020.1769725 (2020).
Grebenstein, M. et al. in 2011 IEEE International Conference on Robotics and Automation. 3175–3182 (2011).
Ruehl, S. et al. Experimental Evaluation of the Schunk 5-Finger Gripping Hand for Grasping Tasks. (2014).
Active8 Robots (2024).
Xu, Z. & Todorov, E. in 2016 IEEE International Conference on Robotics and Automation (ICRA). 3485–3492 (2016).
Escribà Montagut, G. InMoov robot: Building of the first open source 3D printed life-size robot. (2016).
Shadow Robot Company (2024).
Lin, Y.-Y., Raj, R. & Juang, J.-Y. A comprehensive review of dexterous robotic hands: Design, implementation, and evaluation. Bioinspir. Biomim. 20, 041003. https://doi.org/10.1088/1748-3190/ade7e1 (2025).
Villani, L. in Encyclopedia of Robotics (eds Ang, M. H., Khatib, O. & Siciliano, B.) 1–6 (Springer, Berlin, 2020).
Vladareanu, L. et al. in Proceedings of the 10th WSEAS International Conference on Mathematical and Computational Methods in Science and Engineering 384–389 (World Scientific and Engineering Academy and Society (WSEAS), Bucharest, Romania, 2008).
Huang, L., Zhu, Z. & Zou, Z. To imitate or not to imitate: Boosting reinforcement learning-based construction robotic control for long-horizon tasks using virtual demonstrations. Autom. Constr. 146, 104691. https://doi.org/10.1016/j.autcon.2022.104691 (2023).
Duan, K. & Zou, Z. Learning from Demonstrations: An Intuitive VR Environment for Imitation Learning of Construction Robots. (2023).
Weiner, P., Neef, C., Shibata, Y., Nakamura, Y. & Asfour, T. An embedded, multi-modal sensor system for scalable robotic and prosthetic hand fingers. Sensors 20, 101 (2020).
Park, S. et al. Multimodal sensing and interaction for a robotic hand orthosis. IEEE Robot. Autom. Lett. 4, 315–322. https://doi.org/10.1109/LRA.2018.2890199 (2019).
Zhang, D. et al. Design and Benchmarking of a Multi-modality Sensor for Robotic Manipulation with GAN-Based Cross-Modality Interpretation. (2025).
Huang, H., Lin, J., Wu, L., Wen, Z. & Dong, M. Trigger-based dexterous operation with multimodal sensors for soft robotic hand. Appl. Sci. 11, 8978 (2021).
Collyer, S. & Warren, C. Project management approaches for dynamic environments. Int. J. Proj. Manag. 27, 355–364. https://doi.org/10.1016/j.ijproman.2008.04.004 (2009).
Arslan, M., Cruz, C. & Ginhac, D. Visualizing intrusions in dynamic building environments for worker safety. Saf. Sci. 120, 428–446. https://doi.org/10.1016/j.ssci.2019.07.020 (2019).
Santos, S. R. B. d., Givigi, S. N. & Nascimento, C. L. in 2013 IEEE International Systems Conference (SysCon). 452–459 (2013).
Kaneko, K. et al. Humanoid robot HRP-5P: An electrically actuated humanoid robot with high-power and wide-range joints. IEEE Robot. Autom. Lett. 4, 1431–1438. https://doi.org/10.1109/LRA.2019.2896465 (2019).
Men, Y. et al. Policy fusion transfer: The knowledge transfer for different robot peg-in-hole insertion assemblies. IEEE Trans. Instrum. Meas. 72, 1–10. https://doi.org/10.1109/TIM.2023.3305709 (2023).
Zhou, S., Helwa, M. K., Schoellig, A. P., Sarabakha, A. & Kayacan, E. in 2019 18th European Control Conference (ECC). 1–8 (2019).
Liu, Y., Li, Z., Liu, H. & Kan, Z. Skill transfer learning for autonomous robots and human-robot cooperation: A survey. Robot. Auton. Syst. 128, 103515. https://doi.org/10.1016/j.robot.2020.103515 (2020).
Jaquier, N. et al. Transfer learning in robotics: An upcoming breakthrough? A review of promises and challenges. Int. J. Robot. Res. https://doi.org/10.1177/02783649241273565 (2024).
Vettoruzzo, A., Bouguelia, M.-R. & Rögnvaldsson, T. Meta-learning for efficient unsupervised domain adaptation. Neurocomputing 574, 127264. https://doi.org/10.1016/j.neucom.2024.127264 (2024).
Farahani, A., Voghoei, S., Rasheed, K. & Arabnia, H. A Brief Review of Domain Adaptation. (2020).
Nozza, D., Fersini, E. & Messina, E. in 2016 IEEE 28th International Conference on Tools with Artificial Intelligence (ICTAI). 184–189.
Parnami, A. & Lee, M. Learning from Few Examples: A Summary of Approaches to Few-Shot Learning. (2022).
Bahrami, M., Mansoorizadeh, M. & Khotanlou, H. in 2023 6th International Conference on Pattern Recognition and Image Analysis (IPRIA). 1–5.
Ye, H. J., Ming, L., Zhan, D. C. & Chao, W. L. Few-shot learning with a strong teacher. IEEE Trans. Pattern Anal. Mach. Intell. 46, 1425–1440. https://doi.org/10.1109/TPAMI.2022.3160362 (2024).
Yang, S. et al. Foundation Models for Decision Making: Problems, Methods, and Opportunities. ArXiv http://arxiv.org/abs/2303.04129 (2023).
Fu, L. et al. A touch, vision, and language dataset for multimodal alignment. arXiv preprint http://arxiv.org/abs/2402.13232 (2024).
Niedzwiecki, A. et al. Cloud-Based Digital Twin for Cognitive Robotics. (2024).
Watanobe, Y., Yaguchi, Y., Miyaji, T., Yamada, R. & Naruse, K. in 2019 IEEE 10th International Conference on Awareness Science and Technology (iCAST). 1–7.
Yu, H., Kamat, V. R. & Menassa, C. C. Cloud-based hierarchical imitation learning for scalable transfer of construction skills from human workers to assisting robots. J. Comput. Civ. Eng. 38, 04024019. https://doi.org/10.1061/JCCEE5.CPENG-5731 (2024).
Peng, X. B., Andrychowicz, M., Zaremba, W. & Abbeel, P. in 2018 IEEE International Conference on Robotics and Automation (ICRA). 3803–3810.
Valdivia, J. P., Hata, A. & Terra, A. in 2024 IEEE 29th International Conference on Emerging Technologies and Factory Automation (ETFA). 1–8.
Lesort, T. et al. Continual learning for robotics: Definition, framework, learning strategies, opportunities and challenges. Inf. Fus. 58, 52–68. https://doi.org/10.1016/j.inffus.2019.12.004 (2020).
Hajizada, E., Swaminathan, B. & Sandamirskaya, Y. Continual Learning for Autonomous Robots: A Prototype-Based Approach. (2024).
Kanazawa, M., Hatledal, L. I., Li, G. & Zhang, H. Co-simulation-based pre-training of a ship trajectory predictor. In Software Engineering and Formal Methods: SEFM 2021 Collocated Workshops (eds. A. Cerone et al.) 173–188 (Springer, 2022). https://doi.org/10.1007/978-3-031-12429-7_13.
Harris, P., Kagan, M., Krupa, J. D., Maier, B. & Woodward, N. S. Re-Simulation-based Self-Supervised Learning for Pre-Training Foundation Models. ArXiv http://arxiv.org/abs/2403.07066 (2024).
Somerville, A., Lynar, T., Joiner, K. & Wild, G. Use of simulation for pre-training of drone pilots. Drones 8, 640 (2024).
Wang, W. et al. Visual robotic manipulation with depth-aware pretraining. ArXiv http://arxiv.org/abs/2401.09038 (2024).
Villasevil, M. et al. Reconciling Reality Through Simulation: A Real-to-Sim-to-Real Approach for Robust Manipulation. (2024).
Gassert, P. & Althoff, M. Stepping Out of the Shadows: Reinforcement Learning in Shadow Mode. (2024).
Wulfmeier, M., Byravan, A., Bechtle, S., Hausman, K. & Heess, N. Foundations for transfer in reinforcement learning: A taxonomy of knowledge modalities. arXiv preprint http://arxiv.org/abs/2312.01939 (2023).
Taylor, M. E. & Stone, P. Transfer learning for reinforcement learning domains: A survey. J. Mach. Learn. Res. 10, 1633–1685 (2009).
Ju, H., Juan, R., Gomez, R., Nakamura, K. & Li, G. Transferring policy of deep reinforcement learning from simulation to reality for robotics. Nat. Mach. Intell. 4, 1077–1087. https://doi.org/10.1038/s42256-022-00573-6 (2022).
Yin, D., Farajtabar, M., Li, A., Levine, N. & Mott, A. Optimization and generalization of regularization-based continual learning: A loss approximation viewpoint. arXiv preprint http://arxiv.org/abs/2006.10974 (2020).
Li, H., Wu, J. & Braverman, V. Fixed Design Analysis of Regularization-Based Continual Learning. (2023).
Mirzadeh, S. I., Farajtabar, M., Pascanu, R. & Ghasemzadeh, H. Understanding the role of training regimes in continual learning. Adv. Neural. Inf. Process. Syst. 33, 7308–7320 (2020).
Cai, Z., Sener, O. & Koltun, V. in 2021 IEEE/CVF International Conference on Computer Vision (ICCV). 8281–8290.
Aich, A. Elastic Weight Consolidation (EWC): Nuts and Bolts. Preprint at https://arxiv.org/abs/2105.04093 (2021).
Kirkpatrick, J. et al. Overcoming catastrophic forgetting in neural networks. Proc. Natl. Acad. Sci. 114, 3521–3526. https://doi.org/10.1073/pnas.1611835114 (2017).
Sartore, C., Rapetti, L. & Pucci, D. in 2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids). 722–729.
Valsecchi, G., Vicari, A., Tischhauser, F., Garabini, M. & Hutter, M. in 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 13282–13288.
Goutham, M. & Stockar, S. in 2024 American Control Conference (ACC). 839–844.
Justo, V. B., Gupta, A., Umland, T. F. & Göhlich, D. Minimum energy utilization strategy for fleet of autonomous robots in urban waste management. Robotics 12, 159 (2023).
Wang, D. et al. Realizing high-capacity all-solid-state lithium-sulfur batteries using a low-density inorganic solid-state electrolyte. Nat. Commun. 14, 1895. https://doi.org/10.1038/s41467-023-37564-z (2023).
Sun, Y.-K. Emerging all-solid-state lithium–sulfur batteries: Holy grails for future secondary batteries. ACS Energy Lett. 9, 5092–5095. https://doi.org/10.1021/acsenergylett.4c02563 (2024).
Song, H. et al. All-solid-state Li–S batteries with fast solid–solid sulfur reaction. Nature 637, 846–853. https://doi.org/10.1038/s41586-024-08298-9 (2025).
Su, D. et al. Application of machine learning in fuel cell research. Energies. https://doi.org/10.3390/en16114390 (2023).
Mohamad, A., Abdul Rahim, N., Abdul Shukor, S. & Mohd Nasir, N. A Review of Fuel Cell Powered Mobile Robots. (2007).
Lü, X. et al. Performance optimization of fuel cell hybrid power robot based on power demand prediction and model evaluation. Appl. Energy 316, 119087. https://doi.org/10.1016/j.apenergy.2022.119087 (2022).
Kishore, S. C. et al. A critical review on artificial intelligence for fuel cell diagnosis. Catalysts. https://doi.org/10.3390/catal12070743 (2022).
Kundu, T. & Saha, I. in 2019 International Conference on Robotics and Automation (ICRA). 8599–8605.
Robbins, J., Thompson, A., Brennan, S. & Pangborn, H. Energy-Aware Predictive Motion Planning for Autonomous Vehicles Using a Hybrid Zonotope Constraint Representation. (2024).
Takemura, R. & Ishigami, G. in 2024 IEEE International Conference on Robotics and Automation (ICRA). 10103–10109.
Endsley, M. R. Toward a theory of situation awareness in dynamic systems. Hum. Factors 37, 32–64. https://doi.org/10.1518/001872095779049543 (1995).
Lee, J. S., Oh, S., Park, H. & Ham, Y. Feedback system for enhancing human-robot interaction performance in construction robots. In Construction Research Congress 2024 79–88. https://doi.org/10.1061/9780784485262.009 (2024).
NEURA Robotics GmbH. 003_4NE-1_sorting_tube.png (image, 1920×1008 px). https://media-neurarobotics.px.media/collections/65008614/media/1675469103/download (2024).
NEURA Robotics GmbH. 012_4NE-1_dangerous_work.jpg (image, 2500×1403 px). https://media-neurarobotics.px.media/collections/65008614/media/1386571715 (2024).
Wong Chong, O., Zhang, J., Voyles, R. M. & Min, B.-C. BIM-based simulation of construction robotics in the assembly process of wood frames. Autom. Constr. 137, 104194. https://doi.org/10.1016/j.autcon.2022.104194 (2022).
Wang, X., Yu, H., McGee, W., Menassa, C. C. & Kamat, V. R. Enabling building information model-driven human–robot collaborative construction workflows with closed-loop digital twins. Comput. Ind. 161, 104112. https://doi.org/10.1016/j.compind.2024.104112 (2024).
Moura, M. S., Rizzo, C. & Serrano, D. in 2021 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC). 12–18.
Venkatesh, V. & Bala, H. Technology acceptance model 3 and a research agenda on interventions. Decis. Sci. 39, 273–315. https://doi.org/10.1111/j.1540-5915.2008.00192.x (2008).
Venkatesh, V., Thong, J. & Xu, X. Unified theory of acceptance and use of technology: A synthesis and the road ahead. J. Assoc. Inf. Syst. 17, 328–376. https://doi.org/10.17705/1jais.00428 (2016).
Cardiell, L. “A robot is watching you”: Humanoid robots and the different impacts on privacy. Masaryk Univ. J. Law Technol. 15, 247–278 (2021).
Emaminejad, N. & Akhavian, R. Trustworthy AI and robotics: Implications for the AEC industry. Autom. Constr. 139, 104298. https://doi.org/10.1016/j.autcon.2022.104298 (2022).
Chauhan, H., Jang, Y. & Jeong, I. Predicting human trust in human–robot collaborations using machine learning and psychophysiological responses. Adv. Eng. Inform. 62, 102720. https://doi.org/10.1016/j.aei.2024.102720 (2024).
Chang, W.-C. A., Hasanzadeh, S., Esmaeili, S. & Behzad, D. I. Attributing responsibility for performance failure on worker-robot trust in construction collaborative tasks. In European Council on Computing in Construction.
Xing, B. & Marwala, T. in Smart Maintenance for Human–Robot Interaction: An Intelligent Search Algorithmic Perspective (eds Xing, B. & Marwala, T.) 3–19 (Springer, 2018).
Ma, X., Mao, C. & Liu, G. Can robots replace human beings?—Assessment on the developmental potential of construction robot. J. Build. Eng. 56, 104727. https://doi.org/10.1016/j.jobe.2022.104727 (2022).
Smith, D. The robots are coming: Probing the impact of automation on construction and society. Constr. Res. Innov. 10, 2–6. https://doi.org/10.1080/20450249.2019.1582938 (2019).
Mohapatra, B., Mohapatra, S. & Mohapatra, S. in Process Automation Strategy in Services, Manufacturing and Construction 217–228 (Emerald Publishing Limited, 2023).
Kurt, R. Industry 4.0 in terms of industrial relations and its impacts on labour life. Procedia Comput. Sci. 158, 590–601. https://doi.org/10.1016/j.procs.2019.09.093 (2019).
Praba, K. et al. in Advancements in Intelligent Process Automation 79–102 (IGI Global Scientific Publishing, 2025).
Tenakwah, E. S. & Watson, C. Embracing the AI/automation age: Preparing your workforce for humans and machines working together. Strat. Leadersh. 53, 32–48. https://doi.org/10.1108/SL-05-2024-0040 (2024).
Autor, D. H. Why are there still so many jobs? The history and future of workplace automation. J. Econ. Perspect. 29, 3–30. https://doi.org/10.1257/jep.29.3.3 (2015).
Michalos, G., Makris, S., Papakostas, N., Mourtzis, D. & Chryssolouris, G. Automotive assembly technologies review: Challenges and outlook for a flexible and adaptive approach. CIRP J. Manuf. Sci. Technol. 2, 81–91. https://doi.org/10.1016/j.cirpj.2009.12.001 (2010).
Cody, J. How labor manages productivity advances and crisis response: A comparative study of automotive manufacturing in Germany and the US. Report No. 32, (Global Labour University Working Paper, 2015).
Krzywdzinski, M. Automation approaches in the automotive industry: Germany, Japan and the USA in comparison. Int. J. Automot. Technol. Manag. 21, 180–199. https://doi.org/10.1504/IJATM.2021.116605 (2021).
Katoch, V. & Mohan, S. Design and fabrication of a safety frame for workers carrying out head lifting at construction sites. J. Eng. Des. Technol. 17, 1250–1265. https://doi.org/10.1108/JEDT-02-2019-0037 (2019).
Antwi-Afari, M. F. Evaluation of biomechanical risk factors for work-related musculoskeletal disorders and fall injuries among construction workers. (2019).
ISO 10218-1:2011 Robots and Robotic Devices—Safety Requirements for Industrial Robots—Part 1: Robots (ISO, 2011).
ANSI/RIA R15.06-2012 Industrial Robots and Robot Systems—Safety Requirements (ANSI/RIA, 2012).
Zhou, C., Fang, C., Wang, X., Li, Z. & Tsagarakis, N. in 2016 IEEE International Conference on Automation Science and Engineering (CASE). 1026–1033.
Nonomura, Y. Sensor technologies for automobiles and robots. IEEJ Trans. Electr. Electron. Eng. 15, 984–994. https://doi.org/10.1002/tee.23142 (2020).
Schäffner, V. Crash dilemmas and the ethical design of self-driving vehicles: Implications from metaethics and pragmatic road marks. AI Ethics https://doi.org/10.1007/s43681-024-00591-7 (2024).
Viktor, P. & Szeghegyi, Á. Safety of the introduction of self-driving vehicles in a logistics environment. Period. Polytech. Transp. Eng. 50, 387–399. https://doi.org/10.3311/PPtr.20006 (2022).
De Lombaert, T., Rijal, A., Costrasal, R. & Molè, M. Mass collection of workers’ data in warehouse facilities: Reflections on privacy and workforce well-being. Ital. Labour Law e-J. 17, 145–168. https://doi.org/10.6092/issn.1561-8048/20837 (2024).
Lowe, B. D., Hayden, M., Albers, J. & Naber, S. Case studies of robots and automation as health/safety interventions in small manufacturing enterprises. Hum. Factors Ergon. Manuf. Serv. Ind. 33, 69–103. https://doi.org/10.1002/hfm.20971 (2023).
Kyriakidis, M., Happee, R. & de Winter, J. C. F. Public opinion on automated driving: Results of an international questionnaire among 5000 respondents. Transp. Res. F: Traffic Psychol. Behav. 32, 127–140. https://doi.org/10.1016/j.trf.2015.04.014 (2015).
Friedman, C. Ethical concerns with replacing human relations with humanoid robots: An ubuntu perspective. AI Ethics 3, 527–538. https://doi.org/10.1007/s43681-022-00186-0 (2023).
Nguyen, C. T. et al. A comprehensive survey of enabling and emerging technologies for social distancing, part II: Emerging technologies and open issues. IEEE Access 8, 154209–154236. https://doi.org/10.1109/ACCESS.2020.3018124 (2020).
Shaik, A. K. Assessment of Cyber-Physical Vulnerabilities of Industrial Robotic Sensing Systems. (2023).
Timan, T. et al. Study on Safety of Non-embedded Software; Service, Data Access, and Legal Issues of Advanced Robots, Autonomous, Connected, and AI-Based Vehicles and Systems: Final Study Report Regarding CAD/CCAM and Industrial Robots (BEL, 2019).
Funding
Funding was provided by NVIDIA AI Technology Center, University of Florida.
Author information
Contributions
T.U., H.Y., and M.W. conducted the literature review and wrote the main manuscript text. K.S., E.S., and Z.R. developed the technological roadmap and edited the manuscript. S.L. provided substantial revisions. J.D. conceptualized the project, proposed the roadmap, edited the manuscript, and oversaw the manuscript preparation. All authors reviewed the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Uthai, T., You, H., Wang, M. et al. Opportunities, challenges and roadmap for humanoid robots in construction. Sci Rep 16, 905 (2026). https://doi.org/10.1038/s41598-025-30252-6