PrismaX Blog

Tele-op: now and later

Written by PrismaX | Aug 19, 2025 5:08:20 AM

Congratulations! If you’re reading this article, that means you’re interested in contributing to the robotics revolution, which will reshape the way we work and interact with the physical world. More likely than not, you’re also interested in becoming (or already are!) a PrismaX Amplifier, which will let you operate a variety of robots around the world.

Let’s walk through a bit of what you can do now as an Amplifier, why that matters, and what you’ll be able to do in the future.

Right Now

Amplifier members can remotely operate a tabletop robot arm to complete simple pick-and-place tasks. By doing this, you’re:

  • Collecting data that teaches AI models how robots interact with the physical world - for example, pushing on objects makes them move, and closing the gripper sometimes grabs objects (and sometimes they slip out). We’re collecting the video feed from the cameras surrounding the robot as well as the angles and positions of the robot’s joints; during training, the AI model learns to predict what happens in the videos when the robot’s joints move.
  • Learning how to smoothly and accurately teleoperate the robot through the latency of the Internet and the limited number of cameras available.
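To make the first point concrete, here’s a minimal sketch of what one recorded teleoperation sample might look like. The field and class names are purely illustrative assumptions for this post, not PrismaX’s actual data schema:

```python
from dataclasses import dataclass, field

@dataclass
class TeleopSample:
    """One synchronized snapshot of the teleop session (hypothetical schema)."""
    timestamp_s: float               # seconds since the episode started
    camera_frames: dict             # camera name -> encoded frame, e.g. {"overhead": b"..."}
    joint_angles_rad: list          # one angle per robot joint
    gripper_closed: bool            # commanded gripper state at this instant

@dataclass
class TeleopEpisode:
    """A full task attempt; failed attempts are kept - mistakes are useful data."""
    task: str
    samples: list = field(default_factory=list)
    success: bool = False

# Build a tiny example episode with a single sample.
episode = TeleopEpisode(task="pick-and-place")
episode.samples.append(
    TeleopSample(
        timestamp_s=0.0,
        camera_frames={"overhead": b"...", "wrist": b"..."},
        joint_angles_rad=[0.0, -1.57, 1.2, 0.0, 0.3, 0.0],
        gripper_closed=False,
    )
)
```

During training, a model sees sequences like this and learns to predict how the camera frames change as the joint angles move - which is why even clumsy or failed episodes carry signal.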

Believe it or not, the second point is more important right now. It’s the reason the tasks are kept intentionally simple - if you’ve played with the teleop online a bit, you’ll find that it’s not trivial to operate the arm smoothly, since the cameras don’t provide depth information and the network connection has latency. A bunch of companies are trying to build technological solutions to this problem, but we’re strong believers in the most powerful neural network in the world - the one in your head - and its amazing capabilities to predict depth and compensate for latency.

The data we’re collecting right now will be distributed to research groups around the world to train better frontier models for robotics. Long form interaction data (including mistakes!) helps robots more robustly learn how the real world works.

Soon

We’re already seeing a lot of experience build up among the community, so we’ll be deploying new tasks on the same tabletop arms:

  • More complex tasks that require more precise manipulation: for example, stacking or hanging objects
  • Tasks that require longer, planned sequences of steps, for example, making a sandwich (out of fake food!)

The data collected through these tasks will enhance robots’ ability to reason about the environment, as well as perform more complex environmental interactions beyond just picking up and releasing objects.

With the help of the community, we’ll also be rolling out more varied scenarios for the robots to play in - top community members will have the opportunity to contribute their own setups for teleoperation. Community-contributed setups enrich the dataset and improve model robustness - even something as simple as a lighting change is incredibly helpful for model performance.

Upcoming

We’ve got a lot of exciting things coming up! This includes:

  • Faster, more powerful arms based on open-source hardware that can do more
  • Bimanual (two-armed) setups, which will greatly increase the tasks available
  • Fancier manipulators (hands) for the arms to be able to do things like pick up and use tools

A question we’ve gotten is “When will you integrate humanoids?” The short answer is “eventually”; more precisely, humanoids introduce a lot of intricacies and failure modes that need to be scrutinized first. Another way of looking at it: arms without legs are useful, legs without arms are very questionable, so we want to solve arms first.

A small addendum

Why is tabletop manipulation useful? Well, it turns out its value extends well beyond tabletop tasks. Picking up, moving, and dropping objects is a strong prior for other tasks - you can think of it as the equivalent of all the forum posts and marketing articles that get ingested into LLM training. The enormous quantity of tabletop data that can be collected in relatively little time helps robotics AI models pick up on the fundamental patterns that govern physical interaction, much as all that forum slop teaches LLMs how English works and how conversations flow.

This is why community involvement is crucial. The value of those forum posts comes from their immense diversity - with millions upon millions of people posting, nearly every topic receives some coverage. Similarly, the creativity of the community will allow our project to build diverse scenarios with broad coverage across tasks, objects, lighting, and environments, which ultimately increases the value of the data, the models, and the robots beyond the sum of each individual contribution.