Human Interaction

A demo-first look at natural language, real environments, and unscripted assistance.

PUBLIC DEMO

A conversation, not a script.

Natural language, real environment, no teleoperator, no task-specific pre-training, and no scripted skill set.

Agent with body

A general-purpose agent, connected to a body.

CyberBrain is not just a robot controller. It is a general-purpose LLM/VLM reasoning agent connected to a robotics framework. A minimal code sketch of this loop follows the four capability areas below.

Understands people

  • Language instructions
  • Intent extraction
  • Context interpretation

Reasons about environment

  • Object state
  • Spatial relationships
  • Constraints and memory

Acts through the body

  • Motion control
  • Manipulation
  • Navigation and assistance

Learns over time

  • Better models
  • More robot movement data
  • Capability growth
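
To make these four pillars concrete, here is a minimal Python sketch of a brain-centric agent loop. Every name in it (perceive, reason, execute, Memory) is an illustrative stand-in, not CyberBrain's actual API.

    from dataclasses import dataclass, field

    @dataclass
    class Memory:
        """Rolling record of prior interactions (hypothetical)."""
        events: list = field(default_factory=list)

        def recall(self) -> str:
            return "; ".join(self.events[-5:])  # recent context for the reasoner

        def store(self, event: str) -> None:
            self.events.append(event)

    def perceive() -> str:
        """Stand-in for VLM scene understanding."""
        return "empty bottle on table, bin near door"

    def reason(instruction: str, scene: str, context: str) -> list:
        """Stand-in for LLM/VLM reasoning: instruction + scene -> action plan."""
        return ["grasp(bottle)", "navigate(bin)", "release(bottle)"]

    def execute(action: str) -> None:
        """Stand-in for the body's motion control."""
        print(f"body executes: {action}")

    def handle(instruction: str, memory: Memory) -> None:
        scene = perceive()                                  # understands the scene
        plan = reason(instruction, scene, memory.recall())  # reasons, then plans
        for action in plan:                                 # acts through the body
            execute(action)
        memory.store(f"completed: {instruction}")           # learns over time

    handle("Take this empty bottle and throw it away.", Memory())

The key property of this shape is that the loop's intelligence lives in reason(): swapping in a better model upgrades the whole agent without touching the body.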

Interaction flow

From human instruction to physical action.

Example: "Take this empty bottle and throw it away."

1. Human gives instruction
2. Robot identifies the bottle
3. CyberBrain understands it is trash
4. CyberBrain plans a safe grasp
5. Robot moves to the bin
6. Robot throws it away
7. Memory updates context
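
As a rough sketch, the seven steps above could be represented as structured records that the brain emits and the body consumes. The phase labels and fields are illustrative, not CyberBrain's actual schema.

    from dataclasses import dataclass

    @dataclass
    class Step:
        phase: str    # which part of the flow this record belongs to
        detail: str   # human-readable summary of what happens

    bottle_flow = [
        Step("instruction",  "Take this empty bottle and throw it away."),
        Step("perception",   "bottle detected on table, pose estimated"),
        Step("reasoning",    "bottle is empty -> classified as trash"),
        Step("planning",     "safe top grasp selected, route to bin computed"),
        Step("navigation",   "base moves to the bin"),
        Step("manipulation", "bottle released into the bin"),
        Step("memory",       "outcome recorded: bottle disposed"),
    ]

    for step in bottle_flow:
        print(f"{step.phase:>12}: {step.detail}")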

Real-world instruction

An untrained human gives a natural request in a real environment.

Scene understanding

CyberBrain identifies objects, context, and constraints.

Reasoning

The brain reasons about what the human means and what the body should do.

Physical action

The body moves, grasps, hands over, or assists in the real world.

Memory update

The system stores context and outcomes to improve performance on similar tasks in the future.
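
One plausible shape for that memory update, sketched under the assumption that outcomes are keyed by task type so the next similar instruction starts with prior context (the structure is invented for illustration):

    from collections import defaultdict

    history = defaultdict(list)  # task type -> list of prior outcomes

    def update_memory(task_type: str, outcome: str) -> None:
        """Store the result of a completed task."""
        history[task_type].append(outcome)

    def context_for(task_type: str) -> list:
        """Recent outcomes handed to the reasoner on the next similar task."""
        return history[task_type][-3:]

    update_memory("disposal", "bottle -> bin near door; top grasp succeeded")
    print(context_for("disposal"))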

Brain vs action

Train the brain, not the body.

Most systems learn motion patterns. CyberBrain trains language, memory, spatial reasoning, planning, and decision-making, then lets the body execute.

VLA / Action-centric

  • Learn motion patterns
  • Depend on task-specific robot data
  • Struggle with new instructions
  • Limited reasoning

CyberBrain / Brain-centric

  • Understands language
  • Reasons about context
  • Remembers prior interactions
  • Plans before acting
  • Adapts across tasks

Language → Memory → Reasoning → Plan → Body execution
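
A minimal sketch of that split, assuming a hypothetical primitive set: the body exposes a small, fixed vocabulary of motions, and all task variety comes from the plans the brain produces.

    from typing import Callable, Dict, List, Tuple

    # The body's fixed vocabulary -- trained once, reused for every task.
    PRIMITIVES: Dict[str, Callable[[str], None]] = {
        "grasp":    lambda target: print(f"grasp {target}"),
        "navigate": lambda target: print(f"navigate to {target}"),
        "release":  lambda target: print(f"release {target}"),
        "handover": lambda target: print(f"hand over {target}"),
    }

    def run_plan(plan: List[Tuple[str, str]]) -> None:
        """Execute a brain-produced plan against the fixed primitives."""
        for name, target in plan:
            PRIMITIVES[name](target)  # no task-specific retraining of the body

    # A new instruction changes only the plan, never the primitives:
    run_plan([("grasp", "cup"), ("navigate", "user"), ("handover", "cup")])

Under this framing, improving the brain grows the plan space while the body stays unchanged.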

Interaction examples

How instructions become embodied tasks.

Household assistance

"Can you bring me the cup on the table?"

Designed to support everyday physical help through natural language.

Bottle disposal

"Take this empty bottle and throw it away."

Shows object understanding, intent reasoning, grasp planning, navigation, and memory update.

Lab support

"Move the sensor kit next to the test platform."

Intended for repeatable technical workflows in research environments.

Elderly support

"Help me pick up what I dropped."

Points toward safer, more supportive interaction flows.

Developer demo

"Stack the blocks by color."

Enables quick demos of reasoning + perception + manipulation loops.
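
To show how such a demo instruction could decompose, here is a hedged sketch that groups fabricated block detections by color and derives a stacking plan; the scene data and plan format are invented for this example.

    from collections import defaultdict

    # Fabricated detections: (block id, color)
    scene = [("block1", "red"), ("block2", "blue"),
             ("block3", "red"), ("block4", "blue")]

    stacks = defaultdict(list)
    for block, color in scene:
        stacks[color].append(block)  # perception output grouped by color

    plan = []
    for color, blocks in stacks.items():
        base, *rest = blocks
        for block in rest:
            plan.append(f"place {block} on {base}")  # build each color stack
            base = block

    print(plan)  # ['place block3 on block1', 'place block4 on block2']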

Why it matters

Why human interaction changes robotics.

  • Non-experts can use robots more naturally
  • Developers do not need to hardcode every task
  • Human instructions become the interface
  • Memory enables continuity across interactions
  • Reasoning enables adaptation in new contexts
  • Robots move toward assistant-like collaboration over fixed-purpose behavior

Build robots people can actually talk to.

CyberBrain turns embodied AI from isolated control policies into interactive reasoning agents with bodies.