Next-Generation SCS Programming Platform: Enhancing ECAP Fidelity and Objectivity to Improve Patient Experience

Study Population

Data from two prospective studies evaluating the APM were analyzed: Freshwater (NCT04662905) and Rosella (NCT06057480). Both studies included patients with chronic, intractable trunk and/or limb pain.

Each study was designed, implemented and reported in accordance with the ICH Guidelines for Good Clinical Practice, with applicable local regulations, and with the ethical principles laid down in the Declaration of Helsinki. Ethical approval for each study was granted by the Ethics Committee and/or Institutional Review Board (IRB) for Freshwater (NCT04662905) and Rosella (NCT06057480) studies. All patients provided written (signed) informed consent prior to participation in each study.

Next-Generation APM Platform (EVA™, Saluda Medical, Minnetonka, MN, USA)

The APM consists of a streamlined patient-centric user interface that guides the programmer through one of two workflows:

1. New Program Assistant (NPA): a fully automated, four-step workflow in which the system independently establishes an optimal stimulation configuration based on the patient’s ECAP profile and interaction with the software (Fig. 1a). The NPA evaluates between four and eight distinct stimulation configuration candidates, each consisting of automatically selected anode/cathode locations and stimulation pulse widths. Stimulation candidates are evaluated in sequence via a simple press-and-hold button interface used to ramp the stimulation level up to maximal intensity; upon button release, stimulation is immediately ramped down. For each candidate, the NPA objectively evaluates multiple sensing configurations concurrently to ascertain neural activation and spinal cord sensitivity, which is used to predict and configure personalized therapeutic settings (Fig. 1b). Stimulation candidates are refined between ramps, and at most four candidates are short-listed for final evaluation by the patient. Short-listed candidates are interleaved, and any combination of candidates may be isolated and evaluated by the patient, significantly expanding the pool of available programs without increasing workflow complexity or duration. Patient preference then determines a subset of these stimulation candidates to deploy in the final program. Finally, the optimal closed-loop program is algorithmically determined and activated on the device.

2. ECAP Assistant (EA): in this mode, the APM facilitates the refinement of an existing stimulation program via a two-step workflow (steps 1 and 4 in Fig. 1a), offering automated adjustments derived from automated ECAP recordings and their corresponding activation plots.

Fig. 1

APM workflow and data collection. a Four-step APM New Program Assistant workflow: stimulation candidate testing, patient feedback refinement, coverage evaluation of shortlisted candidates, and closed-loop deployment. ECAP Assistant implements only testing and deployment steps. b Data from one stimulation candidate evaluation: stimulation current is increased to a level where the patient first reports discomfort, with ECAP recordings processed for six interleaved configurations, yielding six activation plots per stimulation candidate

The APM provides an automated workflow to assist the user when generating a new program (via the NPA) or to configure the neural measurement and closed-loop parameters of an existing program (via the EA). The dual-mode operation of APM makes it suitable for both initial programming sessions and subsequent adjustments as patient needs evolve over time. All APM programming parameters remain within on-label settings.

The APM performs parallel, on-the-fly processing of the interleaved sensing configurations. It introduces a new filter that is configured using the set of recorded signals from each data stream, producing a set of activation plots that are updated in real time with each incoming signal (Fig. 1b). The sensing configuration with the highest signal-to-noise ratio (SNR) (Fig. 2a) is selected as the optimal sensing configuration for the stimulation candidate under test; the optimal sensing configuration for closed-loop therapy is then the one with the highest SNR across all selected stimulation candidates.
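The selection step can be sketched as below. The function name, the dictionary layout, and the use of the maximum activation-plot amplitude as the signal estimate are illustrative assumptions, not the APM implementation:

```python
import numpy as np

def best_sensing_configuration(activation_plots, noise_sd):
    """Pick the sensing configuration with the highest SNR.

    activation_plots: dict mapping a (recording, reference) electrode pair to
    an array of feedback-variable (ECAP amplitude) samples from its activation
    plot. noise_sd: dict mapping the same keys to the noise standard deviation
    of that stream. Both structures are illustrative, not the device schema.
    """
    # Simplified SNR: peak activation-plot amplitude over noise SD per stream.
    snr = {cfg: np.max(amps) / noise_sd[cfg]
           for cfg, amps in activation_plots.items()}
    return max(snr, key=snr.get), snr

# Two hypothetical interleaved streams (amplitudes in µV)
plots = {
    (6, 8): np.array([0.0, 12.0, 30.0, 55.0]),
    (7, 9): np.array([0.0, 8.0, 21.0, 40.0]),
}
noise = {(6, 8): 2.0, (7, 9): 2.5}
best, snrs = best_sensing_configuration(plots, noise)  # best -> (6, 8)
```

The same comparison applied across all selected stimulation candidates would then yield the closed-loop sensing configuration.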

Fig. 2

Signal-to-noise ratio performance with the new APM filter. a SNR quantification method using feedback variable (ECAP amplitude) at 1.4 × threshold normalized by feedback variable noise standard deviation. b SNR statistics for the Freshwater (FW) cohort pre- and post-APM testing. The asterisk (\(\ast\)) denotes a statistically significant two-sample t-test (\(\alpha =0.05\)) for the within-patient pre (FW) versus post (APM) programming comparison. c Comparative SNR analysis between the APM filter and the Chakravarthy et al. method [12] for ECAPs recorded with 6 (7) recording electrode gap and 8 (9) reference electrode gap. The asterisk (\(\ast\)) indicates a statistically significant one-sample t-test (\(\alpha =0.05\)). d Graphic of (6, 8) and (7, 9) sensing electrode configurations with reference to the Chakravarthy et al. (7, 8) sensing configuration. Diamonds indicate population means for all box plots

APM Performance

Signal-to-Noise Ratio (SNR)

SNR is measured from activation plots collected for each patient as the ratio of the ECAP amplitude to the standard deviation of the noise present in the activation plot. To ensure fair comparison of SNR across different patients, ECAP amplitude is taken at 1.4× ECAP threshold (see Fig. 2a).
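This SNR definition can be expressed as a short sketch; the function name and the use of linear interpolation to read the activation plot at 1.4× threshold are assumptions for illustration:

```python
import numpy as np

def snr_at_1p4_threshold(currents_ma, ecap_uv, threshold_ma, noise_sd_uv):
    """SNR = ECAP amplitude at 1.4x ECAP threshold / noise SD (cf. Fig. 2a).

    currents_ma / ecap_uv form the activation plot (ECAP amplitude versus
    stimulation current); the amplitude at 1.4x threshold is obtained here
    by linear interpolation between measured points.
    """
    amp_uv = np.interp(1.4 * threshold_ma, currents_ma, ecap_uv)
    return amp_uv / noise_sd_uv

# Illustrative activation plot: threshold at 2.0 mA, linear growth above it
i = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
v = np.array([0.0, 0.0, 0.0, 25.0, 50.0])
snr = snr_at_1p4_threshold(i, v, threshold_ma=2.0, noise_sd_uv=2.0)
# amplitude at 2.8 mA is 20 µV, so snr -> 10.0
```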

SNRs for patients programmed manually in the Freshwater study were compared with SNRs for the programs generated by the APM. This comparative analysis was not possible in the Rosella study, as initial programming was performed using only APM workflows.

To demonstrate state-of-the-art filtering performance, SNR is compared against the method published by Chakravarthy et al. [12]. In their paper, the recording electrode was placed seven contacts from the end of the lead where stimulation occurred, and the reference electrode was placed eight contacts from the end of the lead, adjacent to the recording electrode. For brevity, we refer to this hereafter as a (7, 8) recording configuration. Recordings from a (7, 8) configuration were not collected as part of this work. However, data were collected using similar [(7, 9) and (6, 8)] configurations, which use the same recording and reference electrode locations but with a gap of one contact separating them (Fig. 2d). Data are pooled from the Freshwater and Rosella studies and limited to stimulation candidates where a tight tripole was used. Signals for the (7, 9) and (6, 8) recording configurations were reprocessed using the best-performing algorithm from Chakravarthy et al., named the Standard Method [12], which we refer to as the Chakravarthy filter henceforth.

Pairwise comparisons of SNR are performed by taking the ratio of the measured SNRs under each condition.

Artifact Rejection

To accurately demonstrate state-of-the-art artifact rejection, we compare the filtering method used by the APM against the Chakravarthy filter [12]. We implemented a new test methodology for filter comparison that improves upon the test method of Chakravarthy et al. [12] by directly measuring the impact of artifact leakage as opposed to estimating it from an activation plot collected in a single posture. Patients from the Freshwater study were stimulated at both half and one-fourth of the ECAP threshold current \(I_T\) while sitting and then standing (see Fig. 3a). Signals were recorded for at least 5 s for each of the four conditions (Fig. 3b): (\(I_T/2\), Sitting), (\(I_T/2\), Standing), (\(I_T/4\), Sitting), (\(I_T/4\), Standing). The use of subthreshold stimulation levels is intended to ensure that signals contain only artifact and no neural response, which was visually confirmed in the dataset prior to analysis.
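The per-condition Vrms distributions of Fig. 3b can be sketched as follows; the windowing scheme, sampling rate, and synthetic Gaussian "artifact" signals are assumptions for illustration only:

```python
import numpy as np

def vrms_per_window(signal, fs_hz, window_s=0.5):
    """Split an artifact-only recording into fixed windows and return the
    RMS voltage of each window, giving a Vrms distribution per condition."""
    n = int(fs_hz * window_s)
    windows = [signal[k:k + n] for k in range(0, len(signal) - n + 1, n)]
    return np.array([np.sqrt(np.mean(w ** 2)) for w in windows])

rng = np.random.default_rng(0)
fs = 1000.0  # assumed sampling rate, Hz
conditions = {}
# Synthetic stand-ins for the four (current, posture) recordings; the scale
# factors mimic artifact magnitude varying with posture and current.
for label, scale in [("IT/2 sitting", 1.0), ("IT/2 standing", 1.5),
                     ("IT/4 sitting", 0.5), ("IT/4 standing", 0.8)]:
    sig = scale * rng.normal(size=int(5 * fs))  # >= 5 s per condition
    conditions[label] = vrms_per_window(sig, fs)
```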

Fig. 3

Artifact rejection performance. a Experimental methodology, whereby each patient assumed a seated and a standing posture and experienced stimulation at two currents, \(I_T/2\) and \(I_T/4\). b Example epidural recordings for one patient under each of the four conditions and corresponding Vrms distributions that highlight the magnitude of artifact variability within a single patient. c Distributions of median filter outputs for the APM and Chakravarthy signal processing methods. d Rates of statistically significant artifact change at the output of the APM, Chakravarthy et al. and Vrms filters, measured as null-hypothesis acceptance rates for Kruskal–Wallis tests (\(\alpha =0.05\)) comparing feedback variable distributions across the four conditions for all n = 22 patients

The APM and Chakravarthy filters [12] are then applied to these artifact-only signals retrospectively, producing statistical samples for each of the four conditions (see example in Fig. 3b). A Kruskal–Wallis test (\(\alpha =0.05\)) is then applied to determine whether a statistically significant difference between the populations exists. The RMS voltage of the artifact signals is applied as a reference filtering method for comparison.
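The omnibus test on the four condition samples can be sketched with SciPy. The filter outputs here are synthetic Gaussian samples standing in for real data: the "good" filter rejects artifact (all four distributions alike), while the "leaky" filter lets posture-dependent artifact through:

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(1)

# Four (current, posture) conditions, 50 filter-output samples each.
good = [rng.normal(0.0, 1.0, 50) for _ in range(4)]             # artifact rejected
leaky = [rng.normal(m, 1.0, 50) for m in (0.0, 2.0, 0.0, 2.0)]  # posture leaks through

# Kruskal-Wallis: does at least one condition's distribution differ?
h_good, p_good = kruskal(*good)
h_leaky, p_leaky = kruskal(*leaky)

# Null accepted (p >= alpha) means no significant artifact change detected.
accept_good = p_good >= 0.05
accept_leaky = p_leaky >= 0.05
```

The null-hypothesis acceptance rate across patients (Fig. 3d) is then the fraction of patients for which the test fails to reject.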

Outcome Assessments

Freshwater Study

The Freshwater study, a first-in-human (FIH), prospective, multicenter, single-arm study, evaluated the feasibility of programming using APM in patients implanted with the EVOKE® System (Saluda Medical, Minnetonka, MN). Patients were initially programmed manually by trained representatives using the Clarity™ Programming Application and later programmed using the NPA. The EA workflow was not tested. Outcomes assessed included automation success rates and evaluation of the patient experience during programming.

Rosella Study

The Rosella study was a prospective, multicenter, single-arm trial designed to evaluate both automated programming workflows (i.e., NPA and EA) in both temporary trial and permanent implant phases of the EVOKE System. Outcomes included patient questionnaires regarding programming session experience, time required to successfully generate a closed-loop program, and objective neural metrics during both trial and postimplant phases.

Objective Device Metrics

Objective metrics obtained from the EVOKE system log files consist of various neurophysiologic dose metrics [6, 13, 14]. If no device log file was available for the visit, or the log file was incomplete (data missing for more than 70% of the analysis time range), the patient was excluded, as the purpose of the analysis was to investigate objective neural dose metrics from at-home use of the therapy. Variables and their definitions have been previously published [6, 13,14,15]. A summary is presented below:

Dose ratio: the ratio of current (mA) at the median ECAP level to the ECAP threshold current, normalizing for anatomical variations and individual spinal cord sensitivity (see Fig. 5a).

Dose accuracy: the root mean square error (RMSE) of the feedback variable in excess of expected system noise.

Therapy utilization: the percentage of time that measured neural activation was supra-ECAP threshold while the program was active.
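These three metrics can be sketched as below. The function name and signature are assumptions, not the EVOKE log-file schema, and the dose ratio is simplified to the median delivered current over the ECAP threshold current (rather than mapping the median ECAP level to a current via the activation plot):

```python
import numpy as np

def dose_metrics(currents_ma, fv_uv, target_uv,
                 ecap_threshold_ma, ecap_threshold_uv, system_noise_uv):
    # Dose ratio (simplified): median delivered current / ECAP threshold current.
    dose_ratio = np.median(currents_ma) / ecap_threshold_ma
    # Dose accuracy: RMSE of the feedback variable about its target,
    # in excess of expected system noise.
    rmse = np.sqrt(np.mean((fv_uv - target_uv) ** 2))
    dose_accuracy = max(rmse - system_noise_uv, 0.0)
    # Therapy utilization: percentage of active time with supra-threshold
    # neural activation.
    utilization = 100.0 * np.mean(fv_uv > ecap_threshold_uv)
    return dose_ratio, dose_accuracy, utilization

# Illustrative log-file samples
dr, da, tu = dose_metrics(
    currents_ma=np.array([3.0, 3.2, 2.8, 3.0]),
    fv_uv=np.array([22.0, 18.0, 25.0, 0.0]),
    target_uv=20.0, ecap_threshold_ma=2.0,
    ecap_threshold_uv=10.0, system_noise_uv=2.0,
)  # dr -> 1.5, tu -> 75.0
```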

Statistical Analysis

Continuous variables are reported as mean ± standard deviation (SD) [or median (IQR) when non-normal]. Categorical variables are summarized as n (%). Sample sizes for each comparison are given in the text and on figure axes.

The distributions of SNR ratios were inspected visually with violin/box plots and judged approximately symmetric, with no extreme skew; therefore, a parametric one-sample two-sided t-test (null hypothesis H0: mean SNR ratio = 1, \(\alpha =0.05\), scipy.stats.ttest_1samp) was deemed appropriate (Fig. 2c). For the within-patient SNR comparison (Fig. 2b), a paired two-sample t-test (null hypothesis H0: mean paired SNR difference = 0, \(\alpha =0.05\), scipy.stats.ttest_rel) was used.
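The two SciPy tests named above can be sketched as follows; the pre/post SNR values are illustrative, not study data:

```python
import numpy as np
from scipy.stats import ttest_1samp, ttest_rel

# Paired pre/post SNRs, one pair per patient (illustrative values)
pre = np.array([8.0, 10.0, 7.5, 12.0, 9.0, 11.0])
post = np.array([14.0, 15.0, 9.0, 18.0, 13.0, 16.0])

# Fig. 2c style: one-sample test of SNR ratios against a mean of 1
ratios = post / pre
t1, p1 = ttest_1samp(ratios, popmean=1.0)

# Fig. 2b style: paired within-patient comparison (H0: mean difference = 0)
t2, p2 = ttest_rel(post, pre)
improved = (p2 < 0.05) and (t2 > 0)
```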

For the artifact performance characterization (Fig. 3), unequal feedback variable variances across stimulation states were observed for some patients; hence the non-parametric Kruskal–Wallis omnibus test was used (\(\alpha =0.05\), scipy.stats.kruskal).

All analyses were performed with Python 3.12 (pandas 1.5, SciPy 1.15, NumPy 1.26). Graphs were created with seaborn/Matplotlib (seaborn 0.13, Matplotlib 3.8).
