According to Iraf News Agency, citing ISNA, the humanoid robotics company “Realbotix” showcased at the CES 2026 exhibition one of the first fully autonomous, script-free conversations between two physical humanoid robots, powered entirely by embedded artificial intelligence.
The interaction, which took place live on the exhibition floor, featured two humanoid robots named “Aria” and “David”, which engaged in an uninterrupted dialogue lasting over two hours without human intervention, pre-scripted text, or remote operation.
According to the company, both robots ran Realbotix’s proprietary AI software locally on the device, rather than relying on cloud processing.
This demonstration was presented as an example of “embodied AI”: two embodied systems that understood, responded to, and adapted to each other in real time, rather than following pre-programmed conversations.
Andrew Kiguel, CEO of Realbotix, said: “Realbotix specializes in human-interactive robots. In this case, we demonstrated that our robots can interact with each other. This is a true demonstration of embodied AI in action, as this interaction was completely script-free and lasted over two hours.”
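Realbotix has not published details of its conversation stack, so the following is only a rough sketch of how such an unscripted exchange could be structured: two independent agents, each running its own local model, alternate turns by feeding the other's last utterance into their own context. The `LocalModel` class and its canned replies are hypothetical stand-ins for an on-device language model.

```python
import random
import time

class LocalModel:
    """Hypothetical stand-in for an on-device language model.

    A real system would run inference locally (no cloud round-trip);
    here a canned-phrase generator keeps the sketch self-contained.
    """

    def __init__(self, name: str):
        self.name = name
        self.history: list[str] = []  # each robot keeps its own context

    def reply(self, heard: str) -> str:
        self.history.append(heard)
        # Placeholder "generation": a real model would condition on history.
        return random.choice([
            f"Interesting point about '{heard[:20]}...'",
            "Could you elaborate on that?",
            "I see it differently; here is why...",
        ])

def converse(a: LocalModel, b: LocalModel, turns: int = 6) -> None:
    """Unscripted turn-taking: each utterance is produced in response
    to the other agent's previous one, with no fixed script."""
    utterance = "Hello, shall we talk about embodied AI?"
    speaker, listener = a, b
    for _ in range(turns):
        print(f"{speaker.name}: {utterance}")
        utterance = listener.reply(utterance)  # listener answers, then speaks next
        speaker, listener = listener, speaker
        time.sleep(0.1)  # stand-in for speech synthesis and playback time

converse(LocalModel("Aria"), LocalModel("David"))
```

The point of the structure is that neither agent holds a script: each utterance exists only as a response to the previous one, which is what distinguishes this kind of demonstration from a pre-programmed exchange.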
Multilingual Conversation and On-Device AI
During the demonstration, Aria and David conversed in multiple languages, including English, Spanish, French, and German.
Realbotix showcased not only the multilingual capabilities of its language models but also the flexibility of its embodied AI platform. The exchange unfolded naturally, with the robots taking turns responding to each other’s statements rather than following a fixed conversational structure.
However, observers noted that the live interaction had noticeable pauses, speech inconsistencies, and irregular pacing.
The robots’ visual and expressive performance remained limited, falling well short of prominent humanoid robots such as Ameca, whose facial expressions and fluidity are strikingly realistic.
Even compared with mainstream AI voice assistants such as GPT-4o, the Realbotix humanoids sounded more mechanical, with limited facial expression and stilted speech. Online viewers described them as “rubber mannequins with speakers.”
Despite the impressive achievement, the demonstration highlighted the current limitations of humanoid robotics and the challenges that remain in creating truly lifelike and interactive robots.
Vision System and Human Interaction Display
In addition to the robot-to-robot exchange, Realbotix presented a separate demonstration focusing on human-robot interaction. A third humanoid robot showcased the company’s patented vision system, embedded in the robot’s eyes.
During this display, the robot verbally interacted with participants, identified individuals, visually tracked them, and interpreted audio and facial cues to infer emotional states.
The company stated that this vision system enables the robot to naturally follow people and respond conversationally, highlighting advancements in real-time visual understanding and embodied social interaction.
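Realbotix’s patented vision system is proprietary, but the tracking behavior described above, detecting a face in a camera feed and following it from frame to frame, can be approximated with off-the-shelf tools. Below is a minimal sketch using OpenCV’s bundled Haar cascade; the identity and emotion-inference steps are out of scope and only indicated in a comment.

```python
import cv2

# OpenCV ships Haar cascade files with the library; this one detects frontal faces.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)  # default camera; a robot's eye camera is a similar stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # A robot would convert this box into a gaze / eye-motor target;
        # identity and emotion models would take the face crop as input (not shown).
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

A production system would replace the Haar cascade with a modern detector and add recognition and expression models, but the basic control loop, detect, track, act, stays the same.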
This demonstration showcased the robot’s ability to engage in dynamic, context-aware interactions, further blurring the line between human and robot communication.
Authenticity Surpasses Elegance
Although the conversation quality was not flawless, the significance of this demonstration lay elsewhere. Most humanoid robot demonstrations rely on highly controlled environments, remote operation, or pre-scripted dialogues to minimize errors. In contrast, Realbotix allowed its systems to operate openly, revealing limitations such as pauses, timing mismatches, and uneven speech delivery.
The company chose to showcase not a pre-programmed performance, but rather how two independent humanoid systems currently behave when interacting freely, in public, and for an extended period.
Realbotix designs and manufactures AI-powered humanoid robots for entertainment, customer service, and companionship. The company states that its robots are produced in the United States and utilize patented technologies to enable realistic facial expressions, movement, vision, and social interaction.
By presenting this demonstration at CES 2026, the company showcased its technology to industry leaders, investors, and media, providing an unfiltered look at the current state of embodied conversational AI.