A Systematic Review and Meta-Analysis on the Impact of Proficiency-Based Progression Simulation Training on Performance Outcomes



Elio Mazzone, Stefano Puliatti, Marco Amato, Brendan Bunting, Bernardo Rocco, Francesco Montorsi, Alexandre Mottrie and Anthony G Gallagher


Objective: To analyze all published prospective, randomized, and blinded clinical studies on proficiency-based progression (PBP) training using objective performance metrics.
Background: The benefit of 'proficiency-based progression' (PBP) methodology to learning clinical skills in comparison to conventional training is not settled.
Methods: Search of PubMed, Cochrane Library's CENTRAL, EMBASE, MEDLINE and Scopus databases, from inception to 1st March 2020. Two independent reviewers extracted the data. The Medical Education Research Study Quality Instrument (MERSQI) was used to assess the methodological quality of the included studies. Results were pooled using bias-corrected standardized mean difference and ratio of means (ROM). Summary effects were evaluated using a series of fixed- and random-effects models. The primary outcome was the number of procedural errors performed, comparing PBP and non-PBP-based training pathways. Secondary outcomes were the number of procedural steps completed and the time to complete the task/procedure.
Results: From the initial pool of 468 studies, 12 randomized clinical studies with a total of 239 participants were included in the analysis. In comparison to non-PBP training, ROM results showed that PBP training reduced the number of performance errors by 60% (p < 0.001) and procedural time by 15% (p = 0.003), and increased the number of steps performed by 47% (p < 0.001).
Conclusions and Relevance: Our systematic review and meta-analysis confirms that, in comparison to conventional or quality-assured simulation-based training, PBP training improved trainees' performance by decreasing procedural errors and procedural time while increasing the number of correct steps taken.
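The ratio-of-means pooling named in the Methods can be sketched in a few lines: each study contributes a log ratio of means with an approximate variance, and the log ratios are combined with inverse-variance fixed-effect weights. This is a minimal illustration only; the per-study means, SDs, and sample sizes below are hypothetical and are not data from the review.

```python
import math

def rom_effect(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Log ratio of means (treatment/control) and its approximate variance."""
    log_rom = math.log(mean_t / mean_c)
    var = (sd_t ** 2) / (n_t * mean_t ** 2) + (sd_c ** 2) / (n_c * mean_c ** 2)
    return log_rom, var

def pool_fixed(effects):
    """Inverse-variance fixed-effect pooling of (log_rom, var) pairs."""
    weights = [1.0 / var for _, var in effects]
    pooled_log = sum(w * e for (e, _), w in zip(effects, weights)) / sum(weights)
    return math.exp(pooled_log)  # back-transform to a ratio of means

# Hypothetical error counts from three studies:
# (PBP mean, sd, n, control mean, sd, n)
studies = [(4.0, 2.0, 10, 10.0, 4.0, 10),
           (6.0, 3.0, 12, 14.0, 5.0, 12),
           (5.0, 2.5, 8, 12.0, 4.5, 8)]
effects = [rom_effect(*s) for s in studies]
pooled_rom = pool_fixed(effects)
print(f"pooled RoM: {pooled_rom:.2f}")  # < 1 means fewer errors with PBP
```

A pooled RoM below 1 for errors corresponds to the kind of relative reduction the review reports (e.g., a pooled RoM of 0.40 would be a 60% reduction).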

Objective Assessment of Intra-Operative Skills for Robot-Assisted Radical Prostatectomy (RARP): Results from the ERUS Scientific and Educational Working Groups Metrics Initiative



Alexandre Mottrie, Elio Mazzone, Peter Wiklund, Markus Graefen, Justin W. Collins,
Ruben De Groote, Paolo Dell’Oglio, Stefano Puliatti, Anthony G Gallagher


Background: Identifying objective performance metrics for surgical training in robotic surgery is imperative for patient safety. We therefore aimed to develop, and seek consensus from procedure experts on, the metrics that best characterize a reference robot-assisted radical prostatectomy (RARP), and to determine whether the metrics distinguished between the objectively assessed RARP performance of experienced and novice urologists.

Materials and methods: In Study 1, the metrics for a reference RARP, i.e., 12 phases of the procedure, 81 steps, 245 errors, and 110 critical errors, were developed and then presented to an international Delphi panel of 19 experienced urologists. In Study 2, 12 very experienced surgeons (VES) who had each performed > 500 RARPs and 12 novice urology surgeons each performed a RARP, which was video-recorded and assessed by two experienced urologists blinded to subject and group. Percentage agreement between experienced urologists was calculated for the Delphi meeting, and Mann-Whitney U and Kruskal-Wallis tests were used for construct validation of the newly identified RARP metrics.

Results: At the Delphi panel, consensus was reached on the appropriateness of the metrics for a reference RARP. In Study 2, the VES performed ~4% more procedure steps and made 72% fewer procedure errors than the novice group (p = 0.027). Phases 7a and 7b (i.e., neurovascular bundle dissection) best discriminated between the VES and novice surgeons. Limitations: VES whose performance was in the bottom half of their group demonstrated considerable error variability, making five times as many errors as the top half of the group (p = 0.006).

Conclusions: The international Delphi panel reached high-level consensus on the RARP metrics, which reliably distinguished between the objectively scored procedure performance of VES and novice RARP surgeons. Reliable and valid performance metrics for robotic prostatectomy are imperative for effective and quality-assured surgical training.
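The Mann-Whitney U test used above for construct validation compares two groups without assuming normally distributed scores: U counts, over all (group A, group B) pairs, how often a score from one group exceeds a score from the other, with ties counted as half. A minimal sketch of the statistic itself (p-values would need a reference distribution, e.g. via scipy.stats.mannwhitneyu); the error counts below are hypothetical, not the study's data.

```python
def mann_whitney_u(group_a, group_b):
    """U statistic for group_a: number of (a, b) pairs with a > b,
    counting ties as 0.5. Does not compute a p-value."""
    u = 0.0
    for a in group_a:
        for b in group_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

# Hypothetical per-case error counts: novices make more errors than VES
ves_errors    = [2, 3, 1, 4, 2, 3]
novice_errors = [8, 10, 7, 12, 9, 11]
u_ves = mann_whitney_u(ves_errors, novice_errors)
print(u_ves)  # 0.0: no VES error count exceeds any novice count
```

A U of 0 (or of n_a * n_b) indicates complete separation between the groups, which is the pattern an effective discriminating metric would show.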

Development and validation of the objective assessment of robotic suturing and knot tying skills for chicken anastomotic model



Stefano Puliatti, Elio Mazzone, Marco Amato, Ruben De Groote, Alexandre Mottrie & Anthony G. Gallagher


Background To improve patient safety, there is an imperative to develop objective performance metrics for basic surgical skills training in robotic surgery.

Objective To develop and validate (face, content, and construct) the performance metrics for robotic suturing and knot tying, using a chicken anastomotic model.

Design, setting and participants Study 1: In a procedure characterization, we developed the performance metrics (i.e., procedure steps, errors, and critical errors) for robotic suturing and knot tying, using a chicken anastomotic model. In a modified Delphi panel of 13 experts from four EU countries, we achieved 100% consensus on the five steps, 18 errors, and four critical errors of the task.

Study 2: Ten experienced surgeons and nine novice urology surgeons performed the robotic suturing and knot-tying chicken anastomotic task. The mean inter-rater reliability of the assessments by two experienced robotic surgeons was 0.92 (95% CI, 0.9–0.95). Novices took a mean of 18.5 min to complete the task versus 8.2 min for the experts (p = 0.00001), and made 74% more objectively assessed performance errors than the experts (p = 0.000343).

Conclusions We demonstrated face, content, and construct validity for a standard and replicable basic anastomotic robotic suturing and knot tying task on a chicken model.

Patient summary Validated, objective, and transparent performance metrics for robotic surgical suturing and knot-tying tasks are imperative for effective and quality-assured surgical training.
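Inter-rater reliability of the kind reported above is commonly computed in metric-based assessment as the proportion of scored items on which two blinded assessors agree. A minimal sketch under that assumption; the binary item scores below are hypothetical, not the study's assessment data.

```python
def percent_agreement(rater1, rater2):
    """Proportion of checklist items on which two raters give the same score."""
    if len(rater1) != len(rater2):
        raise ValueError("raters must score the same items")
    agree = sum(a == b for a, b in zip(rater1, rater2))
    return agree / len(rater1)

# Hypothetical binary item scores (1 = error observed) on ten checklist items
rater_a = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
rater_b = [1, 0, 1, 1, 1, 0, 1, 0, 0, 1]
irr = percent_agreement(rater_a, rater_b)
print(irr)  # 0.9: the raters disagree on one of ten items
```

Values near 1.0, like the 0.92 reported in the abstract, indicate that the metrics can be scored consistently by independent assessors.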

Orsi Consensus Meeting on European Robotic Training (OCERT): Results from the First Multispecialty Consensus Meeting on Training in Robot-assisted Surgery



Aude E. Vanlander, Elio Mazzone, Justin W. Collins, Alexandre M. Mottrie,
Xavier M. Rogiers, Henk G. van der Poel, Isabelle Van Herzeele, Richard M. Satava,
Anthony G. Gallagher


To improve patient outcomes in robotic surgery, robotic training and education need to be modernised and augmented. The skills and performance levels of trainees need to be objectively assessed before they operate on real patients. The main goal of the first Orsi Consensus Meeting on European Robotic Training (OCERT) was to establish the opinions of experts from different scientific societies on standardised robotic training pathways and training methodology. After a 2-day consensus conference, 36 experts identified 23 key statements allotted to three themes: training standardisation pathways, validation metrics, and implementation prerequisites and certification. After two rounds of Delphi voting, consensus was obtained for 22 of the 23 questions across these three categories. Participants agreed that societies should drive and support the implementation of benchmarked training using validated proficiency-based pathways. All courses should deliver an internationally agreed curriculum with performance standards, be accredited by universities or professional societies, and award trainees a certificate approved by professional societies and/or universities after successful completion of the robotic training courses. This OCERT meeting established a basis for bringing surgical robotic training out of the operating room by seeking input and consensus across surgical specialties for an objective, validated, and standardised training programme with transparent, metric-based training outcomes.