Nimitz Tech Hearing - 2-11-2026


⚡️ News Flash ⚡️

Building an AI-Ready America: Safer Workplaces Through Smarter Technology

House Education and Workforce

February 11, 2026 (recording linked here)

HEARING INFORMATION

Witnesses and Written Testimony (Linked):

  • Mr. Land, Samsara
  • Mr. Hoplin, distribution industry
  • Mr. Doug Parker, former OSHA official
  • Mr. Jeff Buczkiewicz, masonry industry

HEARING HIGHLIGHTS

QUICK SUMMARY

  • AI-enabled workplace safety tools were framed as both transformative and controversial. Industry witnesses described measurable reductions in crashes, injuries, and unsafe behaviors through in-cab cameras, predictive analytics, robotics, and wearables, arguing that AI shifts safety from reactive investigations to real-time prevention. Members supportive of deployment emphasized improved retention, cost savings, and life-saving interventions across transportation and construction sectors.

  • Surveillance and worker privacy dominated Democratic questioning. Members raised concerns that continuous monitoring systems—particularly in-cab recording—could capture private conversations and create psychological pressure tied to performance metrics. Witnesses acknowledged certain recording configurations were technically possible but said usage depended on employer settings and was designed for safety, not discipline.

  • Human oversight emerged as a central fault line. Republican members repeatedly pressed witnesses on whether AI could operate without a “human in the loop,” and industry representatives emphasized that their systems informed rather than replaced human decision-making. A former OSHA official warned that poorly implemented automation and robotics could create new hazards, citing a fatal example involving improperly reprogrammed equipment.

  • Regulatory philosophy divided the panel. Industry leaders cautioned that one-size-fits-all or overly complex regulations—especially across different states—could burden small businesses and slow adoption of safety-enhancing tools. In contrast, worker advocates argued that strong standards historically drive safer innovation and that agencies must build capacity to regulate AI-integrated systems effectively.

  • The broader tension centered on guardrails versus acceleration. Supporters of rapid AI deployment framed it as comparable to seatbelts or airbags in its safety potential, while critics argued that without transparency, enforcement, and worker input, automation could shift risks onto workers. Despite sharp disagreement over oversight, there was bipartisan acknowledgment that AI will shape workplace safety and that Congress will likely need to define clearer boundaries as adoption expands.

IN THEIR WORDS

“When we unleash technology without guardrails, workers will always pay the price.”

— Ranking Member Omar

“The question is not whether to regulate AI, it is how to prepare agencies to address the impacts of AI so that they can carry out their basic missions in an AI-enabled world.”

— Mr. Doug Parker, Witness

“Technology is a tool that can immediately change the landscape… but it must be used to increase employee safety and protection — not to punish, not to replace workers.”

— Mr. Jeff Buczkiewicz, Witness

SUMMARY OF OPENING STATEMENTS

  • Subcommittee Chair Mackenzie stated that the subcommittee was examining how AI and advanced technologies were improving workplace safety. He explained that AI-powered tools were shifting safety management from a reactive, incident-based approach to a preventive, data-driven model. He highlighted wearable sensors, predictive analytics, and hazard detection systems that could identify risks before accidents occurred. He emphasized that these technologies could reduce injuries, lower costs, and strengthen business operations, but required proper implementation and human oversight. He raised questions about validation, worker privacy, and transparency, stressing that policy must balance innovation with safeguards. He concluded that AI had the potential to make workplaces significantly safer if protection and progress were aligned.

  • Subcommittee Ranking Member Omar stated that automation was already embedded in workplace systems and was shaping working conditions without sufficient transparency or worker input. She argued that AI was often portrayed as an inevitable race requiring minimal government interference, but she maintained that guardrails were necessary to protect workers. She expressed concern that weakened federal protections and expanded corporate access to data increased risks to workers and communities. She asserted that responsible development of AI could improve workplace safety if supported by strong standards and full funding for safety agencies. She cited OSHA standards and state enforcement as examples of how regulation could foster both safety and innovation. She concluded that AI should be integrated in ways that ensured workers and communities benefited rather than bore the costs.

SUMMARY OF WITNESS STATEMENTS

  • Mr. Land testified that AI was already delivering measurable safety improvements in high-risk sectors such as transportation, construction, and public services. He explained that Samsara’s AI-powered sensors and video systems provided real-time insights that reduced crashes, distracted driving, and harsh driving events. He emphasized that AI functioned as a task-based, preventive tool in physical operations where risks were significant. He cited data showing substantial reductions in crashes and mobile phone usage, as well as improved worker retention after deployment of safety technologies. He stressed that pairing detection with targeted coaching strengthened safety culture and produced measurable outcomes. He concluded that thoughtful policy could help expand these proven technologies to protect more frontline workers.

  • Mr. Hoplin described the diversity of distribution centers and emphasized that safety culture was a common feature across the industry. He explained that distributors were deployers, not developers, of AI and were using the technology to enhance established safety protocols. He outlined four applications of AI in warehouses: environmental scanning, predictive maintenance, wearable alert systems, and automated retrieval technologies. He stressed that operational differences meant one-size-fits-all regulation would not be effective. He warned that excessive regulation could create financial burdens, particularly for small businesses seeking to adopt safety-enhancing technologies. He concluded that AI was already helping improve safety in warehouses and should be supported through flexible policy approaches.

  • Mr. Parker stated that AI had the potential to reduce workplace injuries but warned that it must be grounded in core safety principles and ethical frameworks. He argued that many AI tools focused on modifying worker behavior rather than eliminating hazards through engineering controls. He cautioned that using AI for disciplinary purposes undermined safety goals. He also warned that algorithmic management and surveillance tools could create physical and psychological risks, including stress and unsafe production pressures. He advocated for prevention-by-design strategies, risk assessments, auditing standards, and meaningful worker participation. He concluded that government agencies must build capacity to oversee and respond to AI’s impacts in the workplace.

  • Mr. Buczkiewicz explained that the masonry industry had developed a purpose-built AI system called “George” to enhance job-site safety. He stated that the system analyzed images and video to ensure compliance with wall bracing plans, PPE requirements, and best practices in real time. He emphasized that the technology shifted safety efforts from reacting after accidents to preventing them proactively. He noted that significant workforce retirements were expected in construction and argued that AI could improve productivity, extend careers, and attract new workers. He stressed that technology should not be used to punish or replace workers, but to protect and support them. He urged bipartisan collaboration to ensure responsible and beneficial deployment of AI in the industry.

SUMMARY OF KEY Q&A

  • Chair Walberg asked what major workforce challenges construction faced and what technologies could help with retention amid concerns that AI would eliminate jobs. Mr. Buczkiewicz said the industry faced an aging workforce and explained that assistive technologies such as the Mule and exoskeletons reduced physical strain, prevented injuries, and helped extend workers’ careers.
    Chair Walberg asked how frontline workers responded to the introduction of AI safety technology and what strategies helped secure buy-in. Mr. Land said worker response had been overwhelmingly positive, cited improved retention, and explained that buy-in came from personalized coaching that paired positive reinforcement with constructive feedback.
    Chair Walberg asked how disembodied AI, predictive AI, and human-centered AI worked together to improve warehouse safety and whether employees could override those systems. Mr. Hoplin said the technologies operated as a layered safety framework addressing different risks, and he indicated employees had input into implementation and could override certain features depending on system design.

  • Ranking Member Omar asked how AI-enabled monitoring and surveillance could create real health and safety risks for workers. Mr. Parker said constant surveillance could increase stress and anxiety, and he explained that perceived performance pressure could push workers to unsafe speeds and increase injury risk.
    Ranking Member Omar asked for an explanation of OSHA’s state plan program and what could happen if states were prevented from regulating AI-related workplace hazards. Mr. Parker said state plans were federally approved OSHA programs that enforced safety standards at the state level, and he argued that barring states from regulating AI-related hazards would be impractical because AI was integrated into the very systems agencies were required to oversee.

  • Rep. Grothman asked how human judgment should be incorporated into AI-driven workplace safety decisions and what could go wrong without humans in the loop. Mr. Hoplin said humans had to be involved from system design through operational decision-making and emphasized that his industry did not deploy AI that overrode human authority.
    Rep. Grothman asked whether AI safety technologies would become more affordable over time and what needed to occur for broader access. Mr. Land said hardware and computing costs were declining and stated that wider market adoption, supported by sound policy, would drive affordability.
    Rep. Grothman asked for examples of harm caused by insufficient human supervision of AI systems. Mr. Land said he was not aware of such examples within his experience and emphasized that his company’s systems informed human decision-making rather than acting autonomously.
    Rep. Grothman asked other witnesses for examples. Mr. Buczkiewicz said he could not cite a specific case but warned that lack of oversight would lead to errors, and Mr. Parker described a fatal incident in which a robotics device was moved without proper reprogramming and crushed a worker.

  • Rep. Casar asked whether in-cab monitoring systems could record drivers continuously during operation, continue recording after the vehicle stopped, and potentially capture private conversations such as calls to doctors, family members, or union representatives. Mr. Land said recording depended on installation and customer privacy settings, acknowledged continuous and post-stop recording were technically possible though uncommon and costly, and stated he was not aware of customers using the technology to capture private conversations. Rep. Casar said he supported AI for safety purposes but argued that he had not heard of clear guardrails that would prevent invasive monitoring practices.

  • Chair Mackenzie asked how AI-powered sensors and video systems translated real-time data into practical safety improvements. Mr. Land said in-vehicle cameras ran local models to detect unsafe behaviors such as close following or drowsiness and that cloud-based analysis identified broader safety patterns, which he described as transformative for reducing crashes and protecting workers.
    Chair Mackenzie asked how AI applications in construction safety could reduce operational costs. Mr. Buczkiewicz said the industry’s wall-bracing AI tool determined when internal or external bracing was required, guided correct placement without complex manual calculations, verified compliance through uploaded images, and reduced the risk of costly structural failures.

  • Rep. Fine asked how workers responded to AI-enabled coaching and what lessons employers learned from successful implementation. Mr. Land said large-scale data analysis enabled significant crash reductions and that workers responded positively because training was personalized and reinforced safe behavior.
    Rep. Fine asked how burdensome regulations could affect AI deployment in warehouses. Mr. Hoplin said most distributors were small businesses with limited resources, and he argued that complex or inconsistent regulations could increase compliance costs, create uncertainty, and delay or discourage adoption of safety-enhancing technologies.

ADD TO THE NIMITZ NETWORK

Know someone else who would enjoy our updates? Feel free to forward them this email and have them subscribe here.


© 2026 Nimitz Tech

415 New Jersey Ave SE, Unit 3
Washington, DC 20003, United States of America
