AI-Powered Table Tennis Robot 'Ace' Triumphs Over Top Human Players, Sparking Ethical Debates

April 22, 2026
  • Ace, an autonomous table tennis robot powered by AI and advanced sensors, can compete with and beat elite human players under official competition rules.

  • In tests from spring 2025, Ace won three of five matches against top players, though it lost to seasoned professionals regularly competing in professional leagues.

  • By late 2025 and early 2026, Ace improved to beating some elite and several professional players, including Miyuu Kihara, showing rapid gains in speed, precision, and aggressive shot placement near the table edge.

  • The article provides direct links and a DOI to the main research article, a News and Views piece, and a companion video for further reading.

  • The research emphasizes real-time physical AI challenges—perception, state estimation from noisy sensors, and dealing with adversarial human interaction—rather than purely simulated environments.

  • Beyond sport, the piece flags broader security and military implications of ultra-fast autonomous systems, highlighting ethical considerations and real-world applicability concerns.

  • Experts acknowledge ongoing challenges in manipulating complex objects and integrating perception with robust mechanical design, underscoring that this breakthrough is part of a broader trajectory toward capable autonomous systems.

  • Ace relies on nine active pixel-sensor cameras to track ball position in 3D, plus additional cameras for velocity and spin, leveraging reinforcement learning and precision hardware.

  • The work suggests real-time, high-speed control AI could apply to other domains with human interaction, such as manufacturing and service robotics.

  • The control pipeline maps noisy observations to actions every 32 ms using a deep RL policy trained in simulation; raw actions are converted into dynamically feasible real-time trajectories via convex optimization, with a safe model-predictive-control (MPC) reset trajectory between exchanges.

  • Training took place almost entirely in simulation; the learned policies were transferred directly to the physical robot for real-world play, without retraining or fine-tuning on real data.

  • A genetic algorithm evolved a library of serves, each selected for high performance during training play.
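To make the 32 ms perception-to-action loop concrete, here is a minimal sketch of that structure. All names (`SketchPolicy`, `project_to_feasible`, the observation and action dimensions) are hypothetical, and the convex-optimization step that produces feasible trajectories is simplified to a box clamp for illustration; this is not the system's actual code.

```python
import numpy as np

CONTROL_DT = 0.032  # 32 ms per control step, as reported in the summary

class SketchPolicy:
    """Stand-in for the simulation-trained deep RL policy: maps a noisy
    observation vector to a raw action (hypothetical interface)."""
    def __init__(self, obs_dim, act_dim, rng):
        # Random linear layer as a placeholder for the learned network.
        self.w = rng.standard_normal((act_dim, obs_dim)) * 0.1

    def act(self, obs):
        return np.tanh(self.w @ obs)  # bounded raw action in [-1, 1]

def project_to_feasible(action, limit=1.0):
    """Placeholder for the convex-optimization stage that turns raw policy
    outputs into dynamically feasible trajectories; here just a clamp."""
    return np.clip(action, -limit, limit)

def control_step(policy, obs):
    """One 32 ms cycle: noisy observation -> policy -> feasible action."""
    raw = policy.act(obs)
    return project_to_feasible(raw)

rng = np.random.default_rng(0)
policy = SketchPolicy(obs_dim=12, act_dim=6, rng=rng)
obs = rng.standard_normal(12)   # stand-in for fused camera-based state estimate
action = control_step(policy, obs)
print(action.shape)             # (6,)
```

In the real pipeline the projection step would solve a small convex program subject to joint and velocity limits every cycle, rather than clamping componentwise.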
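The serve-library idea can likewise be sketched as a tiny genetic algorithm. The serve parameterization (speed, spin, placement) and the fitness function below are toy assumptions; the real system scored candidate serves by their performance during training play.

```python
import random

def fitness(serve):
    """Toy stand-in for scoring a serve by its in-play performance."""
    speed, spin, placement = serve
    return 0.5 * speed + 0.3 * spin + (1.0 - abs(placement - 0.8))

def random_serve(rng):
    # Each parameter normalized to [0, 1] (hypothetical encoding).
    return (rng.random(), rng.random(), rng.random())

def mutate(serve, rng, scale=0.1):
    # Perturb each parameter slightly, keeping it within [0, 1].
    return tuple(min(1.0, max(0.0, p + rng.uniform(-scale, scale)))
                 for p in serve)

def evolve_serve_library(pop_size=30, generations=50, keep=5, seed=0):
    """Evolve serves and return the top `keep` as the 'serve library'."""
    rng = random.Random(seed)
    pop = [random_serve(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elites = pop[:keep]  # keep the best serves unchanged
        # Refill the population with mutated copies of the elites.
        pop = elites + [mutate(rng.choice(elites), rng)
                        for _ in range(pop_size - keep)]
    pop.sort(key=fitness, reverse=True)
    return pop[:keep]

library = evolve_serve_library()
print(len(library))  # 5
```

Elitism plus mutation is only one GA variant; the article does not specify the operators actually used.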

Summary based on 10 sources

