A Physician-Created Platform to Speed Clinical Decision-Making and Referral Workflow

NEJM Catalyst

MedPearl News Team

September 2, 2024

Abstract

A core problem for busy primary care physicians (PCPs) and advanced practice providers (collectively referred to as clinicians) is making the best possible evaluation and management decisions for multiple problems within the space of a short office visit. When a specialty referral is necessary, it is difficult to rapidly determine what laboratory tests, imaging, first-line medications, and other interventions should occur before the patient is seen. Health care organizations need medically appropriate access: fewer referrals for problems that could be managed by a PCP empowered with next-best specialty care actions, and more referrals for the sickest patients most in need of timely specialty and procedural care. Medical specialists, for their part, have little capacity to provide automated support for PCP-based, first-line subspecialty care of lower-acuity problems in order to create the required space for the sickest patients to be seen quickly. To address these issues, Providence developed MedPearl, an electronic medical record (EMR)-integrated digital assistant and clinical knowledge platform for primary and urgent care clinicians. Frontline clinicians now consult MedPearl during patient visits to answer complex clinical questions, determine whether specialty care is necessary, and, if so, determine the next-best action to optimize the referral. The product provides concise, human-authored, algorithmic primary and subspecialty guidance on more than 700 conditions, with an average read time of 2 minutes or less. When a referral is needed, MedPearl supports a precise workup tailored to a specific patient’s medical conditions. Fidelity to human-centered design is the key to the effectiveness of MedPearl. Clinicians wanted an automated specialty e-consult for their most pressing clinical concerns. The tool needed to be tailored to the patient’s specific concerns and informed by the patient’s data from the EMR.
Using a jobs-to-be-done framework, a no-code platform was created. Clinician-driven design, thousands of hours of clinician-to-engineer feedback, and direct observations of the use of the tool in the clinical workflow were integrated throughout the development process. All content and algorithms were built in a proprietary no-code environment by clinicians. As of June 2024, MedPearl has been used by 6,967 Providence physicians, advanced practice providers, and other care team members who have conducted more than 189,000 point-of-care searches.

KEY TAKEAWAYS

A human-centered design process can support the development of complex tools and processes while ensuring a seamless and simple user experience.

Based on input from about 270 clinicians, MedPearl developers embraced a no-code, "by clinicians, for clinicians" approach in Providence’s journey to create its multispecialty-curated referral knowledge base and content creation platform.

The platform itself relies on embedded code and a large language model to facilitate a semantic search function to identify clinician-curated resources, but no coding knowledge or interaction is required for the user.

Timely point-of-care access to such information can improve clinical decision-making and optimize the referral workflow, thus ensuring that patients sent to specialists arrive with the correct preconsult workup while eliminating the need for referrals for those who do not need a specialist.

The Challenge

Dysfunction surrounding the process of referring patients from primary care to specialists negatively impacts integrated health systems in the form of suboptimal patient care, decreased patient satisfaction, increased total cost of care, patient leakage to out-of-network providers, and clinician distress. Patients often wait, untreated, for months to see a specialist, which contributes to patients experiencing emergency department (ED) visits and hospital admissions at a rate as much as 8.7 times higher than patients not awaiting appointments.1,2 Clinicians can experience frustration when referrals are delayed or denied, or when they cannot access current, evidence-based specialty guidance. Specialists, in turn, experience disruption through inappropriate or incomplete referrals: referred patients who could have been treated in primary care; urgent referrals for nonurgent conditions or routine referrals for urgent issues; and referrals made without workup or initiation of first-line treatments.

Increasing the amount of time specialists can spend providing surgical and procedural care is a goal of health systems to improve both patient access to care and system financial sustainability. In 2020, elective procedures alone accounted for US$48 billion to US$64 billion in net income to hospitals in the United States.3 Higher-quality referrals include the necessary imaging and laboratory testing to determine at the initial specialty visit whether or not a procedure is warranted. These referrals, we hypothesized, would have an overall greater downstream contribution margin, as a greater proportion of appropriate referrals would result in procedural care.

However, interventions to improve referrals from primary to specialty care have had suboptimal results; for example, in a 2008 Cochrane analysis, passive dissemination of referral guidelines was not effective.4 In 2024, health systems continue to use these passive strategies of sharing emails and PDF files, to little effect. The Cochrane study, however, found that electronic medical record (EMR)-integrated reminders and the inclusion of specialty providers in the workflow did show promise; MedPearl builds on these findings by operationalizing knowledge sharing at scale. As primary care clinicians struggle to access the expanding base of specialized knowledge and experience,5-7 they attempt to refer patients to specialists, often without the advantage of usable tools at the point of care. The frontline experience of primary care clinicians includes managing an expanding spectrum of clinical problems simply because there is no timely specialist access available. Traditional approaches to curtailing the number of specialist referrals for “easier” or lower-acuity problems have included hiring additional primary care clinicians, managing productivity, or creating barriers to referral within EMRs. None of these solutions solves the problem because none creates efficient knowledge-sharing practices tailored to the clinical workflow.

Providence Health System decided to develop its own technological solution to this referral dysfunction: a unique platform providing frontline clinicians with succinct clinical guides and tappable algorithms engineered to support their clinical decision-making and referral workflow. Although we have deployed it only in the Epic EMR, it is designed to be embedded into any EMR.

"As primary care clinicians struggle to access the expanding base of specialized knowledge and experience, they attempt to refer patients to specialists, often without the advantage of usable tools at the point of care."

The Goals

Providence’s goals for MedPearl were to create a dynamic platform that would improve patient services and outcomes relating to specialty referrals; give PCPs a simple tool to become more effective and efficient in treating patients; increase clinician job satisfaction; use specialist resources to better advantage; and minimize waste and errors. The platform needed to be highly trusted, useful within established workflows, and adopted voluntarily with no or minimal formal user training.

The Execution

MedPearl was founded in July 2021 by a Providence chief medical officer and practicing obstetrician and gynecologist (E.C.). The AskProv chatbot was created as the first iteration of a solution to enhance knowledge sharing and improve referrals in a multispecialty medical group.

From July 2021 to June 2022, the effort developed and expanded, using more than 300 meetings with executive leaders and practicing clinicians, to fine-tune the product and consolidate support for a full platform which, in July 2022, became known as MedPearl. In the early stages, from October 2021 to December 2021, a small team of six practicing physicians met weekly with the software development team in an iterative process of human-centered design to build the platform. This core team was focused on honing usability and minimizing cognitive strain and relied on 15 focus groups and 269 clinicians who provided thousands of feedback items to improve the technical build and help identify countless practice guidelines and job aids for inclusion in the core clinical content library. Expert software engineers and designers developed multiple prototypes that were then tested in the clinical setting. The technical team spent more than 350 hours interviewing clinicians and directly observing their work so that they could produce a no-code platform to support all of the guides and algorithms that were written, curated, and governed by the clinicians themselves. That is, the platform itself relies on embedded code, but no coding knowledge or interaction is required for the user.

In July 2022, MedPearl was launched for pilot testing to an initial group of 216 participants. Clinician adoption required strong clinical champions connected to the pilot participants.

The champions — many of whom also authored or peer-reviewed the clinical content and algorithms — committed to using the platform themselves at least four times per week, sharing their experiences and encouraging others to try MedPearl. A communication campaign was conducted, including personal follow-up for all feedback suggestions, in-person colleague-to-colleague rounding, weekly new-topic emails and FAQ sessions, and an active Teams chat where pilot participants could ask questions. In addition, area clinical champions were given weekly feedback on the engagement of their region’s pilot participants. We also offered continuing medical education credits (at 0.25 American Medical Association Physician’s Recognition Award Category 1 credits per unique topic opened) for pilot participants who answered both the pre- and postpilot surveys. In the course of the 12-week pilot, more than 14,000 searches were performed in MedPearl by the 216 participants.

"The technical team spent more than 350 hours interviewing clinicians and directly observing their work so that they could produce a no-code platform to support all of the guides and algorithms that were written, curated, and governed by the clinicians themselves."

Feedback from pilot participants validated specific elements of the human-centered design, including:

Evidence-based, peer-reviewed, and professionally medically edited clinical content presented succinctly, highlighting information important for users at the point of care.

Patient and clinician resources provided through a content library in the form of embedded videos, hyperlinks to major guidelines, and scannable quick response (QR) codes linking to evidence-based applications.

Signaling flags (also known as hoverable iconography) used to minimize cognitive fatigue while providing access in the content library to key and often underappreciated insights into health equity, integrative health, resource-limited scenarios, and value-based care, among others (Figure 1).

Figure 1. MedPearl-Embedded Icon

At the conclusion of the pilot, October 18, 2022, MedPearl was scaled up across Providence. The number of MedPearl users has grown from 216 initial pilot participants to 6,967 total unique users as of May 2024, with a core group of 4,314 counted as monthly active users making approximately 26,000 searches per month, with the heaviest use in primary and urgent care. We engage in continuous improvement via focus groups and clinician-suggested content improvements to ensure the platform’s ongoing utility in real-world primary care.

We were surprised at the uptake among specialist physicians because we felt they would not need such a tool. Even highly specialized surgeons found that algorithms were helpful in simplifying their common decision-making tasks (like management of aortic aneurysm). The specialists in our focus groups noted that the content was easier to use than their usual methods of knowledge management (static documents). In fact, several specialists in neurology, psychiatry, and pulmonology were so impressed with the platform’s ability to contextualize patient data that they created their own patient data finders for inpatient workflows. We are working on gathering more data and information on uptake by specialty and, in June 2024, we piloted these patient data finders among inpatient clinicians.

Clinician users provide feedback as they use the tool by sending notes to the platform, which are reviewed and acted on daily by a robust, multispecialty clinical content team. In addition to scheduled reviews of existing topics, the clinical content team constantly develops new hot topics based on user topic searches, user requests, or in response to emerging medical developments.

To hardwire continuous improvement, a product strategy and advisory group of 20 key clinicians throughout Providence continues to meet monthly and identify new, high-value features to enhance the platform. In early 2024, for instance, the advisory group directed the MedPearl team to focus on EMR contextualization. When clinicians are working in the EMR with an open chart, MedPearl understands the context (i.e., the patient chart) and pulls patient-specific data into the algorithm or guide. For example, a heart failure guide pulls the specific echocardiogram, electrocardiogram, and laboratory results needed for decision-making. This feature has been greatly appreciated, as it was developed by and with clinicians at every stage.

"The number of MedPearl users has grown from 216 initial pilot participants to 6,967 total unique users as of May 2024, with a core group of 4,314 counted as monthly active users making approximately 26,000 searches per month, with the heaviest use in primary and urgent care."

When using MedPearl, clinicians may enter symptoms, signs, a suspected diagnosis, the first line of the clinical note, or the patient’s own words into a search bar. The response generated by the algorithm is a fixed, physician-curated, easily navigable guide, presented on the left, with the patient’s specific data in a parallel column on the right (Figure 2).

Figure 2. MedPearl Algorithm (Left) and Patient Data Contextualized (Right)

Using Substitutable Medical Applications and Reusable Technologies on Fast Healthcare Interoperability Resources (SMART on FHIR) application programming interfaces, MedPearl pulls the standardized patient data from the EMR into a topic searched during a patient visit. Of note is that no patient data are stored in MedPearl. The pertinent laboratory results and studies are displayed alongside the clinical topic being reviewed, thus sparing the clinician the work of toggling back and forth between MedPearl and the patient’s charts looking for the relevant patient data in the EMR. This saves time and decreases the likelihood of reordering studies already done.
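The data pull described above can be illustrated with a minimal sketch of parsing a FHIR R4 Observation bundle. The bundle, LOINC code, values, and helper function below are illustrative assumptions, not MedPearl's actual implementation; in production, the bundle would arrive from an authorized EMR endpoint via SMART on FHIR and, as noted, nothing would be stored.

```python
# Minimal sketch: extracting lab results from a FHIR R4 Observation bundle.
# The bundle below is a toy example; real data would come from an EMR's
# SMART on FHIR endpoint. Results are read, displayed, and discarded --
# no patient data are stored.

def extract_labs(bundle, wanted_loinc_codes):
    """Return (code, value, unit) tuples for Observations matching LOINC codes."""
    results = []
    for entry in bundle.get("entry", []):
        obs = entry.get("resource", {})
        if obs.get("resourceType") != "Observation":
            continue
        coding = obs.get("code", {}).get("coding", [{}])[0]
        code = coding.get("code")
        if code in wanted_loinc_codes:
            qty = obs.get("valueQuantity", {})
            results.append((code, qty.get("value"), qty.get("unit")))
    return results

# Toy bundle with one serum urate result (LOINC 3084-1), as might be displayed
# alongside a gout guide.
bundle = {
    "resourceType": "Bundle",
    "entry": [
        {"resource": {
            "resourceType": "Observation",
            "code": {"coding": [{"system": "http://loinc.org", "code": "3084-1",
                                 "display": "Urate [Mass/volume] in Serum or Plasma"}]},
            "valueQuantity": {"value": 8.2, "unit": "mg/dL"},
        }},
    ],
}

print(extract_labs(bundle, {"3084-1"}))  # -> [('3084-1', 8.2, 'mg/dL')]
```

The same pattern generalizes to the display widgets described later: each widget maps a curated set of test names and codes to a contextualized display.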

In 2021–2022, a significant challenge was that search terms had to be added manually to each topic, and only an exact match resulted in a successful topic search. With the advent of ChatGPT by OpenAI, in 2023 we were able to replace this laborious process with semantic searching (also known as fuzzy matching). The first line of a clinical note is now enough input to route users to the most appropriate guides and algorithms. The most significant hurdle was structuring the entire knowledge base to be searchable by a large language model (LLM), a process called vectorization, in which each topic is converted into a numeric embedding vector.
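A toy sketch of vectorized semantic search, assuming precomputed embeddings: in production, an LLM embedding model would generate high-dimensional vectors for each topic and each query, and topics would be ranked by vector similarity. The 3-dimensional vectors here are purely illustrative.

```python
import math

# Toy semantic (vector) search: each knowledge-base topic is represented by
# an embedding vector; a query is embedded the same way, and topics are
# ranked by cosine similarity. Real embeddings come from an LLM embedding
# model; these tiny vectors are illustrative only.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

topic_vectors = {
    "Acute sensorineural hearing loss": [0.9, 0.1, 0.0],
    "Pulmonary nodule": [0.1, 0.9, 0.2],
    "Gout": [0.0, 0.2, 0.9],
}

def search(query_vector, k=1):
    ranked = sorted(topic_vectors,
                    key=lambda t: cosine(query_vector, topic_vectors[t]),
                    reverse=True)
    return ranked[:k]

# A query like "sudden one-sided hearing loss" would embed near the
# hearing-loss topic:
print(search([0.85, 0.15, 0.05]))  # -> ['Acute sensorineural hearing loss']
```

Unlike the earlier exact-match approach, nothing has to be added manually to each topic; any query that embeds near a topic's vector will find it.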

Hurdles

Here we identify four hurdles and briefly describe the approach we took to mitigate each obstacle.

Complex Platforms That Are Not Clinician Friendly: The creation of even a single order set in an EMR requires engineers or certified builders, which is far too slow to meet the speed at which medical knowledge is evolving. We developed an entirely no-code, clinician-facing platform. This enables ease of use and, importantly, timely content updates, which are difficult unless, as we have done with the MedPearl model, clinical guides and algorithms can be built, maintained, and governed by clinicians — not programmers.

Lack of a Robust Clinical Content Library: We established a library ideally suited to address common clinical questions and problems surrounding specialty referrals. We applied an LLM to Providence e-consult data (n=40,000 e-consults) to identify core topic areas relating to referrals, and 95% of those e-consults now have been mapped to MedPearl content. The library is expanding beyond its initial use case in response to direct clinician feedback, to include inpatient-focused patient data finders, emergency medicine content, and expanded subspecialty content.

Lack of Judicious Use of Advanced AI: We recognize the need for appropriate adoption and integration of AI tools. We use AI for semantic search, but it is not used on its own for content creation, where expert human curation is required. Extensive research with the Providence data science team and our clinical content team physicians created specific guardrails around the use of AI tooling. We do not copy and paste from LLMs as these outputs frequently have significant errors that would affect patient care. We have found several LLMs useful in quickly identifying new clinical studies, which then require human review to incorporate rigorous findings into the MedPearl library.

Patient Data Without Relevant Contextualization: The Providence team uncovered several major difficulties with pulling patient data from medical records for display within MedPearl content. The challenge of heterogeneous laboratory and imaging naming conventions required the team to create 545 unique widgets capable of pulling and displaying useful data. We did not limit the time period of data contextualization, electing to pull and curate all available data from the chart into our curated widgets. We began in Epic, where Providence has three separate instances of the EMR across seven states. We believe we have captured an extensive library of test names and codes, and will continue to improve our widget library as the platform deploys to additional EMRs.

"When clinicians are working in the electronic medical record with an open chart, MedPearl understands the context (i.e., the patient chart) and pulls patient-specific data into the algorithm or guide."

The Team

The team involved in developing the pilot program and facilitating the scale-up included the chief of virtual care and digital health, chief of content, chief medical officer, chief medical editor, chief product officer, and chief technology officer.

Metrics

First, we share examples of how the MedPearl system has functioned in selected use cases; then we describe some objective data from the initiative.

Use Case Examples

Example 1. Acute Unilateral Hearing Loss

A family physician is on patient number 40, in hour 13 of a 12-hour shift. A 54-year-old male patient presents with acute unilateral hearing loss. There is no otorhinolaryngologist (ear, nose, and throat specialist [ENT]) on call for the practice. The physician’s choices are to send the patient to the ED so that an ENT must see him, per the Emergency Medical Treatment and Active Labor Act (EMTALA), or to work out the next steps and call ENT offices in the morning until one agrees to see him. In MedPearl, in three clicks the physician was able to identify the appropriate timeline (within the next 3 days), the name of the condition (acute sensorineural hearing loss), the appropriate medication (oral [PO] steroids), and the right specialist (not neurology but ENT). The physician was able to advocate for the patient; he was seen the next day and recovered all of his hearing. In the absence of MedPearl, he might have waited months to see an ENT specialist, at which point his hearing would have been permanently lost and he would have been unable to return to work. The practice has now had three such cases in which the patient recovered hearing as a result of this knowledge being provided at the point of care (Figure 3).

Example 2. Lung Nodule

A 72-year-old female with deep-vein thrombosis (DVT) undergoes a chest computed tomography (CT) scan in the ED to rule out pulmonary embolus, and an incidental 5-mm lung nodule is discovered. The patient returns to her PCP, who refers this asymptomatic finding to a pulmonologist, but the wait is 6 months. The PCP then refers the patient to thoracic surgery, and the patient receives a letter stating that this referral has been denied. A new PCP uses the MedPearl pulmonary nodule referral algorithm and identifies that the patient does not require any further workup, as this finding is not concerning (Figure 4).

Figure 3. Screenshot: Hearing Loss

Example 3. Gout

An 81-year-old female presents to her PCP with possible gout symptoms. In the EMR, the PCP uses MedPearl to quickly see every relevant study. Contextualized patient data are automatically shown with purple highlighting on the right of the screen, and the best practice guideline and rheumatology pearls are shown on the left (Figure 5). In this case, the PCP can quickly see all of the laboratory test results that have and have not been done, and get to a clinical decision faster, having reviewed all of the data for this clinical presentation.

Next, we present metrics associated with the pilot phase and the postpilot period when the MedPearl tool was implemented at scale, across the entire Providence system.

MedPearl Pilot Phase: July–October 2022

During the pilot phase (July 24–October 18, 2022), clinicians were asked to voluntarily provide feedback every time they used MedPearl. We requested feedback on every use, specifically regarding whether or not the referral guide or algorithm was helpful in each of the following areas: management and workup, determining referral not needed, referral navigation (help with specialist selection and/or referral urgency), and plan validation. Out of the 981 unique clinician user responses, 72% stated that the clinical content was helpful in management and workup decisions; 41% reported that MedPearl helped them validate their care plan; 20% reported that the guide or algorithm helped identify that a referral was not needed; and 20% reported that the content helped them identify the correct specialist or referral urgency.

"We use AI for semantic search, but it is not used on its own for content creation, where expert human curation is required."

We believe the tool creates clinically appropriate access by keeping patients in their medical home (with the PCP) in cases that do not require a specialist, and by speeding the correct prespecialist workup and treatment(s) prior to an appropriate medical specialty referral when needed. We also used the feedback form to ask the clinicians how helpful the referral guide or algorithm was, using a five-point Likert range from 1 (horrible) to 5 (great); 80% of clinicians gave scores of 4 or 5. Finally, an open-response option was provided for users to offer suggestions on how we could improve the referral guide or algorithm (Table 1 and Table 2).

Table 1. Clinician Feedback: Helpfulness of Referral Guide/Algorithm Functions

In addition, MedPearl pilot users demonstrated statistically significant improvements in self-ratings of competence with respect to managing patient conditions and recognizing referral criteria, identifying optimal referral timing and urgency, and ordering relevant prespecialty workup (matched responses n=52; P<0.0001–0.001). It is important to note that the prepilot survey data had 162 responses, and these results were derived only from matched respondents (n=52) who completed the postpilot survey. The response numbers were not as robust as we would have hoped but do indicate an improvement in competence when managing referrals (Table 3).

Table 2. Clinician Feedback: Referral Guide/Algorithm Helpfulness Ratings


Table 3. MedPearl Pilot: Clinician Competence Impact, 2022

Hypotheses

MedPearl is designed to improve the efficiency, quality, and value of care provided. As such, we had several hypotheses about the association between MedPearl use and the clinical operational outcomes. Our primary hypothesis was that MedPearl use was associated with a decrease in the number of referrals to specialty care. Our secondary hypotheses were that MedPearl use was associated with an increase in relative value units (RVUs) produced by clinicians, and that MedPearl use was associated with an increase in clinician efficiency, as measured by both after-hours time spent in the EMR and the provider efficiency profile (PEP) score, a proprietary EMR-efficiency score.

Methods

MedPearl use was measured by topic open events, which occur when a user views a given topic. MedPearl use was modeled in four different ways, with the majority of analyses using the difference-in-differences approach, where changes in a given outcome over time for the top 20% of MedPearl users were compared with the change in the outcome for all other MedPearl users (Figure 6).

The MedPearl use dataset was matched to existing Providence clinical operational datasets at the clinician level. Categorical variables were compared with a chi-squared test, and continuous variables were compared using the t-test or multivariable linear regression models. The analyses were done in Python (v3.11) and RStudio (v2023.06).
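The difference-in-differences comparison described above can be sketched as follows. The referral numbers are invented for illustration and do not reproduce the study's results.

```python
# Sketch of the difference-in-differences comparison: the change in an
# outcome (e.g., mean monthly referrals per clinician) over time for the
# top 20% of MedPearl users versus the change for all other users.
# All numbers are fabricated for illustration.

def diff_in_diff(treat_pre, treat_post, control_pre, control_post):
    """DiD estimate = (treated change) - (control change)."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical mean monthly referrals per clinician:
top20_pre, top20_post = 30.0, 28.0      # top 20% of users by topic opens
others_pre, others_post = 30.5, 30.0    # all other users

effect = diff_in_diff(top20_pre, top20_post, others_pre, others_post)
print(effect)  # -> -1.5 (referrals per clinician per month, toy data)
```

The design nets out system-wide trends that affect both groups, leaving the change attributable to heavy MedPearl use (subject to the confounders discussed later).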

Figure 6. MedPearl Use and Clinical Operational Outcomes: Four Analytic Approaches

Metrics and Results

Figure 7. MedPearl Use and Change in Referral Rate Over Time

The primary end point of referrals was first analyzed in two ways. First, after adjusting for full-time equivalent (FTE) roles in the organization, the top 20% of MedPearl users’ change in referrals over time was compared with all other MedPearl users. Second, after adjusting for FTE and encounter count, the change in referral rate between July and October 2022 (the extent of the pilot period) versus between July and October 2023 (a logical convenience sample within the at-scale period, January 1 to December 31, 2023) was compared with MedPearl use as a continuous variable. No significant difference was seen with either analysis (P=0.6; Figure 7).

Figure 8. Primary Care MedPearl Use and Change in Referral Rate Over Time

Subgroup analyses were done to look for differential effects on referrals from primary care and urgent care. In primary care, MedPearl use was associated with an increase of 1.0 referrals per 100 topic opens between July–October 2022 and January–April 2024 (P<0.01; Figure 8).

In urgent care, MedPearl use was associated with a decrease of 3.4 referrals per 100 topic opens between July–October 2022 and January–April 2024 (P=0.048; Figure 9). An explanation for the difference is not certain but could be influenced by the fact that urgent care providers do not usually have an ongoing continuity of care relationship with patients, potentially driving up referrals for patients they have not seen before and for conditions they do not manage chronically. We hypothesize that urgent care may have a larger opportunity to optimize its referral patterns.

As a secondary end point, we examined the association between MedPearl use and FTE-adjusted RVU production changes between July–September 2022 (the pilot phase) and July–September 2023 (a logical convenience sample within the at-scale period). The top 20% of MedPearl users by topic open increased their monthly RVU production by 88.16 RVUs/month versus 67.48 RVUs/month for all other physicians and advanced practice clinicians, a 20.7 RVU/month difference (P<0.01; Figure 10).

Second, we examined the relationship between MedPearl use and clinician efficiency. We looked at the association between MedPearl use and changes in the number of hours spent working in the EMR outside normal working hours between July 1 and September 30, 2022, versus between July 1 and September 30, 2023. The top 20% of MedPearl users by topic opens decreased the number of hours spent outside working hours by 8.9 hours versus 2.8 hours in all other clinicians, a difference of 6.1 hours over the quarter, or approximately 2.0 hours per month (P<0.01; Figure 11).

Finally, we looked at the association between MedPearl use and the increases in the Epic proprietary PEP score, which ranges from 0 to 10, with higher numbers denoting better EMR efficiency. The clinicians in our system have a mean PEP score of 4.7 and a standard deviation of 1.7 PEP units. In regression analysis, 375 topic open events in MedPearl were associated with a 1.0 increase in average PEP score in the third quarter of 2023 versus the third quarter of 2022 (P<0.01; Figure 12).
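As a sketch of the regression relating topic opens to PEP-score change, the following fits an ordinary-least-squares slope to fabricated, perfectly linear data chosen so that the fitted slope mirrors the reported ratio of roughly 1 PEP unit per 375 topic opens. It is not the study's model, which used multivariable regression on real clinician data.

```python
# Sketch: fit an OLS slope relating MedPearl topic opens to year-over-year
# change in PEP score. The data points are fabricated so the fitted slope
# is exactly 1 PEP unit per 375 topic opens, mirroring the reported result.

def ols_slope(xs, ys):
    """Ordinary-least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

topic_opens = [0, 100, 200, 300, 375]
pep_change = [x / 375 for x in topic_opens]   # perfectly linear toy data

slope = ols_slope(topic_opens, pep_change)
print(round(375 * slope, 2))  # -> 1.0 PEP-unit increase per 375 topic opens
```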

Figure 9. Urgent Care MedPearl Use and Change in Referral Rate Over Time



Figure 10. Year-over-Year Change in Monthly Relative Value Unit Production by MedPearl Use

Figure 11. Year-over-Year Change in After-Hours Time Spent in the Electronic Medical Record



Figure 12. MedPearl Use and Provider Efficiency Over Time

Discussion

In the pilot of MedPearl, direct feedback from 981 unique clinician user responses provided the clearest report of behavior change when using MedPearl in clinical practice. Users were able to multiselect how the platform helped (or did not), so the response categories are not mutually exclusive, which is a weakness in our pilot data. The responses show that 20% of clinicians stated that the clinical content presented helped them determine that a referral was not needed. Likewise, 20% of clinicians reported that the tool helped them with referral urgency and/or referral navigation with respect to specialty selection, and 41% reported that MedPearl helped them validate their care plan. Also, 72% of respondents indicated that the content in MedPearl improved their clinical care plan with respect to management and workup, and 80% of the respondents gave the referral guide or algorithm a score of 4 or 5 on a scale of 1 to 5, where 5 is designated as great.

At scale (n=6,967 users), we encountered significant methodological problems and confounders in our large dataset. We were not able to control for the medical complexity of the patient populations before and after MedPearl was introduced and scaled across Providence. Although Providence has seen organization-wide improvements in its specialty contribution margin, with MedPearl as a major strategy in this effort, there were confounders, including efforts affecting surgical block utilization and FTE changes across the clinical workforce, that could not be rigorously accounted for.

The final ROI for the MedPearl investment is realized in greater PCP productivity and overall service line profitability enterprise-wide. This has been found to be the case despite multiple factors, such as surgical FTEs and block time allocations, confounding the direct relationship between MedPearl use and overall profitability.

"The top 20% of MedPearl users by topic opens decreased the number of hours spent outside working hours by 8.9 hours versus 2.8 hours in all other clinicians, a difference of approximately 2.0 hours per month."

We have not been able to reliably distinguish the medical specialty of each user outside the Providence Clinical Network, making the division of primary and specialty clinicians difficult, and further blunting the effect of the tool meant for primary care use in the referral workflow.

We continue to seek ways to improve our dataset, which is preliminary at this juncture. In the largest dataset at scale, MedPearl use (with the confounders noted above) is associated with a decrease in referrals from urgent care clinicians and a small increase in referrals in primary care. We have not been able to quantify the quality of referrals with available data. MedPearl use has been associated with a statistically nonsignificant improvement in Suboxone prescribing for opioid use disorder.

MedPearl use is associated with an increase in primary care productivity, as measured in RVUs (relative value units) per month per FTE over time, a decrease in after-hours EMR time, and an improvement in provider efficiency profile score over time.

As of early 2024, MedPearl matches the PCP-entered query to a relevant guide or algorithm 95% of the time; a suggested document is counted as relevant if the clinician opens it. We define "at scale" to mean that the MedPearl tool is available to all 7,500 primary care clinicians across the seven-state Providence system, a Renton, Washington–based integrated system with 34,000 total physicians, 38,000 nurses, 51 hospitals, and 1,000 clinics that collectively provide care to 2.6 million covered lives through 29 million patient visits annually.
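The match-rate metric above uses a behavioral proxy: a suggested guide counts as relevant only if the clinician actually opens it. A minimal sketch of that calculation, assuming a hypothetical query log (the `QueryEvent` structure and field names are illustrative, not MedPearl's actual schema):

```python
from typing import NamedTuple

class QueryEvent(NamedTuple):
    """One clinician search: the guide MedPearl suggested and whether it was opened."""
    suggested_guide: str
    opened: bool

def match_rate(events: list[QueryEvent]) -> float:
    """Proxy relevance metric: a suggestion counts as a match if the clinician opened it."""
    if not events:
        return 0.0
    return sum(e.opened for e in events) / len(events)

# Hypothetical log in which 19 of 20 suggested documents were opened.
events = [QueryEvent("chest-pain-algorithm", True)] * 19 + [QueryEvent("gout-guide", False)]
print(f"{match_rate(events):.0%}")  # -> 95%
```

Using document opens as the relevance signal avoids manual review at scale, at the cost of counting an opened-but-unhelpful guide as a match.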

Direct user feedback at scale yields a Net Promoter Score of 63, considered to be in the very good range; this is based on 98 responses from 956 surveys deployed to 956 unique users over a 2-week period between April 20 and May 4, 2024. Another critical metric is the stickiness of the platform, in other words, how many users of MedPearl use it again the following month. Overall stickiness defined in this way has consistently tracked around 75% from July 2023 to April 2024. For example, in a snapshot of April 2024, 1,211 of the 1,663 (72.8%) users who opened a topic in March returned to MedPearl to open another topic in April.
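The stickiness metric defined above is a month-over-month retention rate: the fraction of one month's active users who open a topic again the next month. A minimal sketch, assuming hypothetical sets of active user IDs (the user-naming scheme is illustrative only):

```python
def stickiness(prev_month_users: set[str], next_month_users: set[str]) -> float:
    """Fraction of one month's active users who are active again the next month."""
    if not prev_month_users:
        return 0.0
    returning = prev_month_users & next_month_users
    return len(returning) / len(prev_month_users)

# Hypothetical April 2024 snapshot: 1,663 users active in March,
# of whom 1,211 return in April (plus 400 users new in April).
march = {f"user{i}" for i in range(1663)}
april = {f"user{i}" for i in range(1211)} | {f"new{i}" for i in range(400)}
print(f"{stickiness(march, april):.1%}")  # -> 72.8%
```

Note that new users in the later month do not count toward stickiness; the denominator is only the earlier month's active cohort.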

Also important are the operational outcomes resulting from MedPearl use across Providence. We have found a statistically significant productivity improvement in MedPearl users compared with nonusers (P&lt;0.01). We continue to refine our outcomes data to identify confounders and drive enterprise-wide operational performance.
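A user-versus-nonuser productivity comparison of this kind is typically a two-sample test on per-clinician productivity figures. As a hedged illustration only, not the authors' actual analysis, a Welch two-sample t statistic (which does not assume equal variances between groups) can be computed with the standard library; the sample RVU figures below are invented:

```python
import math
from statistics import mean, variance

def welch_t(a: list[float], b: list[float]) -> tuple[float, float]:
    """Welch's t statistic and degrees of freedom for two independent samples."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)  # sample variances
    se2 = va / na + vb / nb            # squared standard error of the mean difference
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for degrees of freedom
    df = se2**2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Invented monthly RVU/FTE figures for illustration only.
users = [52.1, 49.8, 55.0, 50.6, 53.3]
nonusers = [47.9, 50.2, 48.4, 46.7, 49.1]
t, df = welch_t(users, nonusers)
print(f"t = {t:.2f}, df = {df:.1f}")
```

The t statistic and degrees of freedom would then be compared against the t distribution to obtain a P value; a real analysis at Providence's scale would also need to adjust for the confounders discussed above.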

Where to Start

Human-centered design is built on the real lives, struggles, and workflows of users. This approach, which is a key aspect of the MedPearl effort, requires four foundational steps:

Learn. Study the clinicians doing the work while they care for patients.

Ideate. Identify core problems as jobs to be done by the technology.

Prototype and test possible solutions. Iterate and be willing to start over.

Stay customer (clinician) obsessed. Welcome feedback and use it to improve. Admit weaknesses and own the core problems. Never blame the user.

When clinicians and technologists work together, that is where true human-centered design happens. Clinicians and technologists built and maintain MedPearl, a novel, no-code knowledge base uniting patient data with clinical knowledge at the point of care. This is the critical element missing in most health care technology. We recommend human-centered design for innovators seeking adoption by clinicians and health care organizations.

Notes

Eve Cunningham, Jessica Schlicher, Erin Longley, Rebecca Poage, and Adrian Yanes work for Providence and are participating in MedPearl commercialization in 2024–2025. At the time of this publication, none has equity in the company.

The authors wish to acknowledge the significant contributions to the success of this project made by Rod Hochman, MD, Amy Compton-Phillips, MD, David Kim, MD, Sara Vaezy, Meredith Sefa, Connie Bartlett, MD, Ruth Fischer-Wright, MD, Jeff Wolff Gee, MD, Heidi Bray, DNP, Anh Nguyen, MD, Todd Wise, MD, Scott Smitherman, MD, Adar Palis, Derek Williams, MD, Vikramsinh Dabhi, MD, Chris Celio, MD, Rhonda Smith, and Alice Okali of Providence; Tye Cook and Kaitlyn Torrence of Tegria Services Group and Tushar Sud and his team of Providence Global Center. The authors further wish to thank members of the engineering, clinical content and product teams including John Tapsell, Jose Lima Azeredo, Raul Murciano Quejido, Colleen Finnegan, MD, Rob Fearn, MD, Toyin Falola, MD, Nathan Schlicher, MD, Chris Dale, MD, Teri Renfrow, and the hundreds of clinicians and champions without whose contributions MedPearl would not exist.
