Beihang Team Boosts Eye Surgery Safety with Viscoelastic-Aware Admittance Control

In the high-stakes world of robotic microsurgery—where a tremor invisible to the naked eye can mean the difference between sight and blindness—precision isn’t just desirable; it’s non-negotiable. Now, a team of engineers and biomedical researchers from Beihang University has taken a decisive step toward redefining that precision. Their breakthrough? A force-control strategy that doesn’t just react to tissue resistance, but anticipates it—by treating the eye not as a static spring, but as a living, breathing, viscoelastic entity.

The human cornea—thin, transparent, and astonishingly resilient—possesses a mechanical personality that defies classical physics. Press on it, and it yields. Hold the pressure, and it continues to relax. Release, and it rebounds—not instantly, but with a lag dictated by its molecular architecture. This time-dependent blend of elasticity and viscosity is what engineers call viscoelasticity. For decades, it’s been a footnote in surgical robotics: acknowledged in biomechanics papers, yet routinely oversimplified in control algorithms as a simple linear spring. That shortcut, the Beihang team argues, is no longer tenable—especially when forces measured in millinewtons dictate surgical success.

Enter the work of Zheng Yu, He Changyan, Lin Chuang, Han Shaofeng, Guang Chenhan, Chen Zilu, and Professor Yang Yang from the School of Mechanical Engineering and Automation at Beihang University. Their recently published study in the journal ROBOT doesn’t just refine existing methods—it reframes the entire control paradigm. By embedding a generalized Maxwell viscoelastic model directly into the controller’s logic, they’ve created a system that “listens” to the tissue’s mechanical language in real time, adjusting motion to deliver exactly the intended force—without overshoot, without guesswork, and without damaging delicate ocular structures.

This isn’t theory confined to simulation. The team validated their approach on ex vivo porcine eyes—the gold-standard surrogate for human ocular tissue in preclinical testing. Their results are striking: a steady-state error of just 4.6% in step-force delivery, a clean response time of 2.5 seconds, and—critically—zero observable overshoot. In a domain where overshoot means perforation, that last metric isn’t an academic footnote; it’s a patient-safety guarantee.

But what makes this work stand out isn’t just performance—it’s philosophy. While much of the field races toward AI-heavy solutions—deep neural networks trained on thousands of insertion trajectories, real-time vision-based force estimation, or complex multi-sensor fusion—the Beihang group has doubled down on first-principles physics. Their controller is elegant: a proportional–integral (PI) regulator, a workhorse of industrial automation, reimagined not as a brute-force corrector, but as a viscoelastic compensator. By replacing the ideal differentiator in conventional admittance control with a first-order dynamic element, they’ve smoothed out the low-frequency instability that plagues traditional approaches—particularly where robotic surgery operates: in the near-DC realm of steady, sub-hertz manipulations.

The implications ripple far beyond corneal suturing. Think vitreoretinal procedures, where a cannula must pierce the sclera and navigate the retina without dragging or buckling fragile vasculature. Think cataract surgery, where the anterior lens capsule—a film thinner than plastic wrap—must be torn with micron-level force control to avoid radial tears that compromise lens implantation. In each case, the tissue isn’t just resisting; it’s relaxing, creeping, recovering. Ignoring that behavior is like navigating a river by ignoring the current.

To grasp the significance of their modeling choice, consider this: when robotic tools press into the cornea, the initial peak force can be nearly double the eventual steady-state value—purely due to viscoelastic relaxation. A controller assuming instantaneous elasticity would misinterpret that relaxation as loss of contact, prompting it to push harder and risking excessive tissue deformation. The Beihang controller, by contrast, expects the force to decay and modulates displacement accordingly—holding position not rigidly, but intelligently.

Their parameter identification process underscores their commitment to biological fidelity. Using a high-resolution linear stage (0.5-micron resolution) and a nano-force sensor (2.93-mN resolution), they performed force-relaxation experiments on freshly harvested pig eyes, compressing the corneal apex with a 1-mm-diameter cylindrical probe at 1 mm/s to a depth of 3 mm—clinically relevant for suturing or incision prep. Over 450 seconds per trial, they captured the full decay curve, then fitted it to a three-term exponential:
f(t) = 0.095·exp(−t/3.63) + 0.077·exp(−t/25.27) + 0.072·exp(−t/196.9)
This isn’t an arbitrary curve. Each term corresponds to a distinct Maxwell element in their generalized model—a fast-relaxing component (τ ≈ 3.6 s), a medium one (τ ≈ 25 s), and a slow, long-term creep mode (τ ≈ 197 s). The fit is exceptional: a coefficient of determination (R²) of 99.94% and a root-mean-square error of just 0.0145 (normalized units). In engineering terms, that’s not just good—it’s surgical grade.
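The fitted model is simple enough to evaluate directly. As a minimal sketch (the function and variable names here are illustrative, not from the paper), the three-term Prony series recovers the separation of time scales the authors describe:

```python
import math

# Three-term fit from the paper's relaxation tests (normalized force units):
# f(t) = 0.095*exp(-t/3.63) + 0.077*exp(-t/25.27) + 0.072*exp(-t/196.9)
TERMS = [(0.095, 3.63), (0.077, 25.27), (0.072, 196.9)]

def relaxation_force(t: float) -> float:
    """Normalized relaxation force of the generalized Maxwell model at time t (s)."""
    return sum(a * math.exp(-t / tau) for a, tau in TERMS)

f_peak = relaxation_force(0.0)    # 0.095 + 0.077 + 0.072 = 0.244 at contact
f_end = relaxation_force(450.0)   # end of the 450 s trial window
print(f"peak {f_peak:.3f} -> t=450 s {f_end:.4f}")
```

By 450 seconds the two faster branches have fully relaxed, so the residual force is essentially the τ ≈ 197 s mode alone—exactly the long-term creep a naive elastic controller would misread as loss of contact.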

The real ingenuity, however, lies in how they use that model—not in feedforward prediction (which would require real-time integration of complex dynamics), but in shaping the controller’s frequency response. Classical admittance control uses a second-order mass-damper-spring filter to map force error into motion commands. But that filter, when cascaded with a viscoelastic plant, produces an open-loop transfer function whose magnitude plunges below −14 dB at low frequencies—guaranteeing force attenuation and sluggish response. The Beihang insight? Swap that rigid mechanical analogy for a tunable PI loop. Suddenly, the controller gains a crucial degree of freedom: the ratio kᵢ/kₚ becomes a design knob for low-frequency gain. By enforcing kᵢ/kₚ > ωₜ₅ (where ωₜ₅ ≈ 0.278 rad/s is the highest break frequency of the tissue model), they ensure positive gain at near-zero frequencies—turning attenuation into amplification where it matters most.
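The frequency-domain argument can be made concrete with a few lines of arithmetic. The sketch below (Python; the gains kp and ki are hypothetical placeholders, not values from the paper) checks the tuning rule and shows why the PI element rescues the low-frequency gain:

```python
import math

OMEGA_T5 = 0.278  # rad/s: highest break frequency of the tissue model (per the paper)

def pi_gain_db(kp: float, ki: float, omega: float) -> float:
    """Magnitude in dB of the PI element C(jw) = kp + ki/(jw)."""
    return 20.0 * math.log10(math.hypot(kp, ki / omega))

def satisfies_tuning_rule(kp: float, ki: float) -> bool:
    """The paper's low-frequency design condition: ki/kp must exceed omega_t5."""
    return ki / kp > OMEGA_T5

kp, ki = 1.0, 0.5                            # hypothetical: ki/kp = 0.5 > 0.278
print(satisfies_tuning_rule(kp, ki))         # rule holds for this gain pair
print(round(pi_gain_db(kp, ki, 0.01), 1))    # gain grows large near DC
```

Unlike the fixed second-order admittance filter, |C(jω)| here grows without bound as ω → 0—which is what turns the sub-−14 dB attenuation of the cascaded system into positive gain in the near-DC band where the surgery actually happens.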

And it works. In their step-force trials targeting 0.3 N (a realistic suturing load), the system settles smoothly—no ringing, no bounce, no dangerous spikes. Compare that to prior art: He et al.’s recurrent neural network approach reported a steady-state error of 17.64 mN (≈5.9%), while this physics-based method achieves 13.8 mN (4.6%)—with far lower computational overhead and no dependency on massive training datasets. In surgical robotics, where certification demands traceable, deterministic behavior, that simplicity is a strategic advantage.

Even the sinusoidal force tests—though not clinically dominant—reveal telling details. At 0.5 Hz, amplitude error sits at 23.7%; at 1 Hz, it climbs to 32.1%; at 5 Hz, it hits 50.9%. Crucially, the researchers don’t treat this as a flaw—they diagnose it. The phase lag and amplitude roll-off match the Bode plot predictions of their combined PI–viscoelastic system, confirming the model’s validity. More importantly, they contextualize it: “Ophthalmic surgery typically requires stable, constant forces—not oscillatory ones.” In other words, optimizing for steady-state accuracy, not bandwidth, is the right call.

Still, the team acknowledges room to grow. Future work, they note, will incorporate hyperelasticity—the nonlinear stiffening corneas exhibit under large strains—and extend the framework to multidimensional force control, essential for tasks like capsulorhexis, where shear and peeling forces dominate. They also plan to tailor model parameters for different tissues: sclera (stiffer, slower relaxation), lens capsule (ultra-thin, highly anisotropic), vitreous (gel-like, thixotropic). One model won’t fit all; adaptability is key.

This research arrives at a pivotal moment. Robotic ophthalmic platforms—from Preceyes’ CE-marked system to Johns Hopkins’ Steady-Hand Eye Robot—are moving from lab curiosities to operating-room collaborators. But their adoption hinges on trust: trust that the machine won’t slip, won’t overshoot, won’t fatigue. Trust built not on flashy autonomy, but on quiet, reliable competence. The Beihang team’s contribution is a masterclass in that ethos—leveraging deep biomechanical insight to make robots not smarter, but safer.

Consider the alternative paths the field could have taken. One is pure teleoperation: give surgeons better joysticks, higher-resolution displays, haptic feedback vests. It’s intuitive, but it doesn’t solve tremor or fatigue—it just relays them more faithfully. Another is full autonomy: let AI plan and execute maneuvers end-to-end. While promising for standardized tasks (e.g., intravitreal injections), it faces steep regulatory and ethical hurdles for complex, anatomy-variable procedures like membrane peeling.

The Beihang approach occupies the pragmatic middle ground: shared control. The surgeon remains in command, defining what force to apply and where; the robot handles how—translating intent into motion with biomechanical awareness. It’s assistance without abdication.

And the hardware demands? Surprisingly modest. Their setup uses off-the-shelf components: a SCARA robot, a linear stage, a commercial nano-force sensor. No custom MEMS arrays, no embedded OCT probes, no multi-camera tracking rigs. That accessibility matters. Hospitals won’t overhaul ORs for exotic tech; they’ll adopt upgrades that slot into existing workflows. A PI controller running at 200 Hz? That’s feasible on today’s embedded DSPs.

Critically, the team bridges the lab-to-clinic gap with concrete safety thresholds. They cite literature showing anterior capsule tears occur at forces around 20 mN—meaning their 13.8-mN error margin sits comfortably below the damage threshold. That’s not just data; it’s a risk assessment. Regulators (FDA, NMPA, EMA) crave such quantifiable safety margins—and this work delivers them.

Beyond ophthalmology, the methodology is portable. Any soft-tissue interaction—neurosurgery, fetal intervention, microvascular anastomosis—suffers from the same modeling gap: treating living matter as Hookean solids. The generalized Maxwell framework, parameterized via relaxation tests, offers a template for domain-specific adaptation. Pair it with a PI-based admittance loop, and you get a controller that’s both interpretable (engineers can tweak kₚ, kᵢ, τᵢ with physical intuition) and robust (no black-box neural nets to retrain when tissue properties shift).

One subtle but profound shift in their framing deserves attention: they don’t call their output a “force controller.” They call it a contact force controller. That distinction is telling. In robotics, “force control” often implies full environmental interaction—pushing, pulling, grinding. But in microsurgery, contact is the operative word. It’s about maintaining just enough engagement to perform a task—no more, no less. It’s a feather-light dialogue between tool and tissue, where silence (loss of contact) is as dangerous as shouting (excessive force). Their system masters that whisper.

Looking ahead, the integration path seems clear. First, embed the model parameters into surgical planning software—letting surgeons preview how tissue will respond to planned maneuvers. Second, couple it with micro-vibration strategies (a specialty of Professor Yang’s lab), where high-frequency dither reduces static friction and eases instrument insertion. Third, link force profiles to real-time imaging: if OCT detects unexpected tissue movement, dynamically adjust kᵢ/kₚ to maintain setpoint.

This isn’t about replacing surgeons. It’s about augmenting them—taking the physiological limits of human hands (tremor above 8 Hz, fatigue after 45 minutes, force resolution no finer than 10 mN) and transcending them with machines that are steadier, tireless, and exquisitely sensitive. As robotic assistance moves from “nice-to-have” to “standard-of-care,” approaches like this one—grounded in biomechanics, validated on biological tissue, and engineered for clinical pragmatism—will define the next generation of surgical excellence.

The eyes have it—and now, thanks to this work, robots are learning to listen to what they’re saying.

Zheng Yu, He Changyan, Lin Chuang, Han Shaofeng, Guang Chenhan, Chen Zilu, Yang Yang
School of Mechanical Engineering and Automation, Beihang University, Beijing 100191, China
ROBOT, Vol. 43, No. 3, May 2021
DOI: 10.13973/j.cnki.robot.200497