Normal saline (0.9% sodium chloride) has traditionally been considered the standard isotonic solution for fluid resuscitation and maintenance. Yet, substantial evidence indicates that large-volume administration of normal saline can produce a non-anion gap, hyperchloremic metabolic acidosis that disrupts both acid-base equilibrium and organ perfusion (1). Once regarded as a benign biochemical effect, this iatrogenic acidosis is now recognized as clinically significant, especially in perioperative and critical care settings.

The mechanism of saline-induced acidosis can be understood using the physicochemical, or Stewart, approach to acid-base balance. In this model, plasma pH depends on three independent variables: the partial pressure of carbon dioxide, the total concentration of weak acids, and the strong ion difference (SID), the numerical difference between fully dissociated cations and anions. Under normal physiological conditions, the SID of human plasma averages 38–42 mEq/L. The SID of normal saline, however, is essentially zero because it contains equal concentrations of sodium and chloride. Infusion of saline therefore reduces the plasma SID, increasing the hydrogen ion concentration and lowering pH, that is, producing acidosis (2). In contrast, balanced crystalloids, such as Ringer’s lactate or Plasma-Lyte, contain metabolizable anions (lactate, acetate, or gluconate) that are converted to bicarbonate and thus prevent the development of acidosis.
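The direction of this effect can be illustrated with simple mixing arithmetic. The sketch below is a hypothetical dilution model, not a physiological simulation: the extracellular volume, baseline SID, and bolus size are illustrative assumptions, and real acid-base behavior also depends on renal chloride handling and fluid redistribution.

```python
# Hypothetical dilution model of plasma strong ion difference (SID) after
# a 0.9% saline bolus. Volumes and baseline values are illustrative only.

def mix_sid(plasma_sid, plasma_volume_l, fluid_sid, fluid_volume_l):
    """Volume-weighted SID of the mixture, in mEq/L."""
    total_volume = plasma_volume_l + fluid_volume_l
    return (plasma_sid * plasma_volume_l
            + fluid_sid * fluid_volume_l) / total_volume

# Baseline plasma SID ~40 mEq/L; saline SID = 0 (Na 154 - Cl 154 = 0).
# Assume ~14 L of extracellular fluid and a 2 L saline bolus.
new_sid = mix_sid(40, 14, 0, 2)
print(round(new_sid, 1))  # prints 35.0: SID falls, so pH falls
```

Because a balanced crystalloid's metabolizable anions restore an effective SID closer to that of plasma, the same calculation with a higher `fluid_sid` shows a much smaller drop.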

In addition to its effects on acid-base balance, chloride influences kidney and circulatory function. Elevated plasma chloride levels (compared to baseline) cause the afferent arterioles in the kidneys to constrict, decreasing renal blood flow and glomerular filtration. Wilcox first demonstrated this mechanism in human and animal studies, showing that chloride itself (rather than sodium or osmotic factors) drives these changes (3). This reduction in renal perfusion provides a physiological explanation for the risk of kidney injury associated with chloride-rich fluids. Yunos and colleagues later confirmed the clinical importance of this mechanism by finding that adopting a chloride-restrictive fluid strategy in critically ill adults lowered the incidence of acute kidney injury and reduced the need for renal replacement therapy (4).

Clinically, saline-induced acidosis manifests as a gradual decrease in serum bicarbonate concentration accompanied by a proportional increase in chloride. The disturbance can develop rapidly after administration of several liters of saline, especially in patients with limited renal compensatory capacity. Although respiratory compensation through hyperventilation can offset part of the fall in pH, sustained metabolic acidosis can impair myocardial contractility, blunt catecholamine responsiveness, and worsen systemic inflammation. McFarlane and Lee observed that patients who received balanced crystalloids during surgery had more stable hemodynamic and acid-base profiles than patients who received normal saline, highlighting the clinical consequences of fluid selection (5).

Furthermore, the traditional characterization of normal saline as “physiologic” is misleading. Its chloride content (154 mmol/L) substantially exceeds the normal plasma concentration (approximately 98–106 mmol/L), creating an ionic environment that is not representative of human plasma. For this reason, many experts now recommend balanced crystalloids as the preferred resuscitation fluid for most clinical situations and reserve normal saline for conditions such as hyponatremia or traumatic brain injury, where hypotonic fluids may be contraindicated.

Acidosis resulting from normal saline administration is a well-characterized and preventable disturbance in acid-base homeostasis. The Stewart framework provides a clear mechanistic basis for understanding this effect, and clinical studies continue to demonstrate its relevance to renal and systemic outcomes. Appropriate fluid choice, guided by physiological principles rather than convention, remains essential to optimizing patient safety and therapeutic efficacy, especially when large volumes of fluid are needed.

References

  1. Kellum JA. Saline-induced hyperchloremic metabolic acidosis. Crit Care Med. 2002;30(1):259-261. doi:10.1097/00003246-200201000-00046
  2. Morgan TJ. The meaning of acid-base abnormalities in the intensive care unit: part III — effects of fluid administration. Crit Care. 2005;9(2):204-211. doi:10.1186/cc2946
  3. Wilcox CS. Regulation of renal blood flow by plasma chloride. J Clin Invest. 1983;71(3):726-735. doi:10.1172/jci110820
  4. Yunos NM, Bellomo R, Hegarty C, Story D, Ho L, Bailey M. Association between a chloride-liberal vs chloride-restrictive intravenous fluid strategy and kidney injury in critically ill adults. JAMA. 2012;308(15):1566-1572. doi:10.1001/jama.2012.13356
  5. McFarlane C, Lee A. A comparison of Plasmalyte 148 and 0.9% saline for intra-operative fluid replacement. Anaesthesia. 1994;49(9):779-781. doi:10.1111/j.1365-2044.1994.tb04450.x

The reversal of neuromuscular blockade during anesthesia ensures that patients regain safe respiratory function and airway protection following surgery. In the United States, anesthesiologists primarily rely on two drugs for this purpose: neostigmine, a long-established cholinesterase inhibitor, and sugammadex, a newer selective relaxant binding agent. Their usage reflects a balance of tradition, cost considerations, and emerging evidence on patient outcomes.

Neostigmine acts by inhibiting acetylcholinesterase, allowing acetylcholine to accumulate and compete with neuromuscular blocking agents at the receptor level. This indirect mechanism means that its effectiveness depends on partial spontaneous recovery having already occurred. As a result, reversal from deep blockade may be incomplete or delayed, increasing the risk of residual paralysis.

Sugammadex, in contrast, provides a more direct and predictable effect. By encapsulating rocuronium or vecuronium molecules, it rapidly clears them from the neuromuscular junction, leading to swift restoration of muscle function. Some clinical trials suggest a faster return to normal function with sugammadex compared with neostigmine, along with better recovery of respiratory muscle strength 1–7.

The side effect profiles of these drugs differ significantly. Neostigmine is almost always administered with an anticholinergic agent such as glycopyrrolate to counteract muscarinic effects like bradycardia, bronchospasm, and increased salivation. Despite this precaution, hemodynamic fluctuations remain a concern.

Sugammadex does not share this cholinergic mechanism, and patients treated with it often experience more stable cardiovascular responses. Although rare reports of hypersensitivity or anaphylaxis exist, the overall safety record of sugammadex is favorable, particularly in higher-risk populations such as those with cardiovascular disease or respiratory compromise 3,4,8–11.

Cost has been the most significant barrier to broader adoption of sugammadex in the United States. A single dose of sugammadex costs several times more than the combination of neostigmine and glycopyrrolate. As a result, many hospitals restrict its use to select scenarios, preserving neostigmine as the default agent for routine cases. However, when broader economic factors are considered, the value proposition shifts: sugammadex frequently reduces time to extubation, accelerates operating room turnover, and decreases postoperative pulmonary complications. After factoring in these downstream costs, replacing neostigmine with sugammadex in a portion of procedures could yield overall savings 12–17.
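The economic argument can be made concrete with a back-of-envelope model. Every figure below is an assumption chosen for illustration, not a published price or effect size; the point is only that downstream savings can offset a higher drug acquisition cost.

```python
# Back-of-envelope perioperative cost comparison. All numbers are assumed
# for illustration; they are not published prices or trial effect sizes.

def net_cost_per_case(extra_drug_cost, or_minutes_saved, or_cost_per_min,
                      risk_reduction, complication_cost):
    """Extra drug cost minus expected downstream savings per case."""
    downstream_savings = (or_minutes_saved * or_cost_per_min
                          + risk_reduction * complication_cost)
    return extra_drug_cost - downstream_savings

# Assumed: sugammadex costs $90 more per case than neostigmine plus
# glycopyrrolate, saves 5 OR minutes at $40/min, and lowers the absolute
# risk of a $10,000 pulmonary complication by 1 percentage point.
delta = net_cost_per_case(90, 5, 40, 0.01, 10_000)
print(delta)  # prints -210.0: a net saving under these assumptions
```

With different assumptions the sign flips, which is exactly why institutional cost-benefit analyses 12–17 reach different conclusions depending on local OR costs and baseline complication rates.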

In everyday practice in the United States, institutions often adopt a hybrid strategy, with neostigmine remaining a familiar and cost-effective option for patients at lower risk or when only shallow blockade is anticipated. Sugammadex, however, is increasingly favored in situations involving deeper blockade, higher-risk patients, or settings where efficiency and safety are prioritized 3,4,18,19.

As more outcome-driven evidence becomes available and hospitals refine cost–benefit considerations, the role of sugammadex is expected to expand, with its broader adoption ultimately depending on how institutions balance immediate drug costs against potential long-term gains in reduced complications, improved workflow, and enhanced patient safety.

References

1. Moss, A. P., Powell, M. F., Morgan, C. J. & Tubinis, M. D. Sugammadex versus neostigmine for routine reversal of neuromuscular blockade and the effect on perioperative efficiency. Proc (Bayl Univ Med Cent) 35, 599–603. DOI: 10.1080/08998280.2022.2079921

2. Maqusood, S., Bele, A., Verma, N., Dash, S. & Bawiskar, D. Sugammadex vs Neostigmine, a Comparison in Reversing Neuromuscular Blockade: A Narrative Review. Cureus 16, e65656. DOI: 10.7759/cureus.65656

3. Chandrasekhar, K., Togioka, B. M. & Jeffers, J. L. Sugammadex. in StatPearls (StatPearls Publishing, Treasure Island (FL), 2025).

4. Neely, G. A., Sabir, S. & Kohli, A. Neostigmine. in StatPearls (StatPearls Publishing, Treasure Island (FL), 2025).

5. Calvey, T. N., Wareing, M., Williams, N. E. & Chan, K. Pharmacokinetics and pharmacological effects of neostigmine in man. Br J Clin Pharmacol 7, 149–155 (1979). DOI: 10.1111/j.1365-2125.1979.tb00915.x

6. Nguyen-Lee, J. et al. Sugammadex: Clinical Pharmacokinetics and Pharmacodynamics. Curr Anesthesiol Rep 8, 168–177 (2018). DOI: 10.1007/s40140-018-0266-5

7. Panhuizen, I. F. et al. Efficacy, safety and pharmacokinetics of sugammadex 4 mg kg−1 for reversal of deep neuromuscular blockade in patients with severe renal impairment. British Journal of Anaesthesia 114, 777–784 (2015). DOI: 10.1093/bja/aet586

8. Herring, W. J. et al. A randomized trial evaluating the safety profile of sugammadex in high surgical risk ASA physical class 3 or 4 participants. BMC Anesthesiol 21, 259 (2021). DOI: 10.1186/s12871-021-01477-5

9. Mao, X. et al. A pharmacovigilance study of FDA adverse events for sugammadex. Journal of Clinical Anesthesia 97, 111509 (2024). DOI: 10.1016/j.jclinane.2024.111509

10. Luo, J., Chen, S., Min, S. & Peng, L. Reevaluation and update on efficacy and safety of neostigmine for reversal of neuromuscular blockade. Ther Clin Risk Manag 14, 2397–2406 (2018). DOI: 10.2147/TCRM.S179420

11. Hristovska, A.-M., Duch, P., Allingstrup, M. & Afshari, A. The comparative efficacy and safety of sugammadex and neostigmine in reversing neuromuscular blockade in adults. A Cochrane systematic review with meta-analysis and trial sequential analysis. Anaesthesia 73, 631–641 (2018). DOI: 10.1111/anae.14160

12. Neostigmine Prices, Coupons, Copay Cards & Patient Assistance. Drugs.com https://www.drugs.com/price-guide/neostigmine.

13. Benscheidt, A. & Daratha, K. B. Sugammadex versus Neostigmine: Operating Room Time and Cost. Books, Presentations, Posters, Etc. (2019).

14. Bartels, K., Fernandez-Bustamante, A. & Melo, M. F. V. Reversal of neuromuscular block: what are the costs? British Journal of Anaesthesia 131, 202–204 (2023). DOI: 10.1016/j.bja.2023.04.037

15. Lan, W. et al. Cost-Effectiveness of Sugammadex Versus Neostigmine to Reverse Neuromuscular Blockade in a University Hospital in Taiwan: A Propensity Score-Matched Analysis. Healthcare (Basel) 11, 240 (2023). DOI: 10.3390/healthcare11020240

16. Maddock, A. The ‘cost’ of sugammadex. Anaesthesia 72, 1558–1559 (2017). DOI: 10.1111/anae.14150

17. Bruceta, M., Singh, P. M., Bonavia, A., Carr, Z. J. & Karamchandani, K. Emergency use of sugammadex after failure of standard reversal drugs and postoperative pulmonary complications: A retrospective cohort study. J Anaesthesiol Clin Pharmacol 39, 232–238 (2023). DOI: 10.4103/joacp.joacp_289_21

18. Adeyinka, A. & Layer, D. A. Neuromuscular Blocking Agents. in StatPearls (StatPearls Publishing, Treasure Island (FL), 2025).

19. Nemes, R. & Renew, J. R. Clinical Practice Guideline for the Management of Neuromuscular Blockade: What Are the Recommendations in the USA and Other Countries? Curr Anesthesiol Rep 10, 90–98 (2020). DOI: 10.1007/s40140-020-00389-3

Anesthesia management for patients with a history of stem cell transplant poses unique clinical challenges due to the complex interplay of immunosuppression, systemic toxicity, and organ dysfunction commonly observed in this patient population. These patients often have significant comorbidities resulting from their underlying hematologic malignancies or the intensive chemoradiotherapy regimens they undergo before transplant. A comprehensive preoperative evaluation, meticulous intraoperative planning, and vigilant postoperative care are essential to optimizing outcomes and reducing complications.

The preoperative assessment should include a detailed history of the transplant process, including the conditioning regimen, the source of the graft (autologous or allogeneic), and post-transplant complications, such as graft-versus-host disease (GVHD). Pulmonary complications are common in this group and include idiopathic pneumonia syndrome, bronchiolitis obliterans, and infectious pneumonias. These complications can severely impair gas exchange and increase the risk of perioperative hypoxia (Ji et al., 2022). Thus, pulmonary function tests and imaging studies are often warranted preoperatively. Cardiovascular toxicity from chemotherapy agents such as anthracyclines may manifest as cardiomyopathy. Echocardiographic evaluation is necessary to assess ejection fraction and valvular function before anesthesia induction (Mahadeo et al., 2021).

The hematologic status of patients with stem cell transplant needing surgery is crucial for anesthesia and surgery planning. Profound cytopenias, including thrombocytopenia, may be present and increase the risk of perioperative bleeding. Platelet transfusions are frequently required and must be timed appropriately to ensure hemostatic efficacy during invasive procedures (Russell et al., 2024). Additionally, neutropenia increases the risk of perioperative infections, necessitating strict aseptic techniques and potentially prophylactic antibiotics.

Anesthetic medications must be carefully selected in patients with stem cell transplant. Hepatic and renal dysfunction, which are not uncommon after previous chemotherapy or GVHD-related organ damage, can alter drug metabolism and excretion. Agents such as propofol, which has a short context-sensitive half-time and relatively predictable kinetics, are often preferred. Volatile anesthetics, however, may exacerbate hepatic injury and require cautious use. Neuromuscular blockers that do not rely on hepatic or renal clearance, such as cisatracurium, are preferred in patients with end-organ impairment (Alodhaib et al., 2025).

Intraoperative monitoring should be comprehensive. Arterial lines should be considered for hemodynamic monitoring and blood gas analysis in any patients with cardiovascular instability or respiratory compromise. Temperature regulation is critical due to the patients’ immunocompromised state; even mild hypothermia can predispose them to infections and coagulopathy. Furthermore, fluid management must be judicious, especially in cases of capillary leak syndrome or hypoalbuminemia, both of which may occur post-transplant (Mahadeo et al., 2021).

Postoperative care includes aggressive pain management while minimizing the immunosuppressive and respiratory depressive effects of opioids. When not contraindicated by thrombocytopenia or coagulopathy, regional anesthesia techniques can provide superior analgesia and reduce systemic opioid use. However, the risks of bleeding must be weighed against the benefits of invasive blocks, especially in patients with platelet counts below 50,000/µL (Russell et al., 2024). Additionally, anesthesiologists must be alert to potential complications, such as septic shock or acute respiratory distress syndrome (ARDS), which can occur even during the immediate postoperative period. Due to the complex physiology of stem cell transplant recipients, even minor procedures require careful planning and execution.

Patients with stem cell transplants are at high risk for anesthesia complications due to multisystem involvement, immunosuppression, and the potential for rapid decompensation. Tailored anesthetic strategies that consider the transplant timeline and associated complications are critical for safe and effective perioperative care.

References

  1. Liang J, Chen Y, Zhou J, et al. Lung transplantation for bronchiolitis obliterans after hematopoietic stem cell transplantation: a retrospective single-center study. Ann Transl Med. 2022;10(12):659. doi:10.21037/atm-22-2517
  2. Ragoonanan D, Khazal SJ, Abdel-Azim H, et al. Diagnosis, grading and management of toxicities from immunotherapies in children, adolescents and young adults with cancer. Nat Rev Clin Oncol. 2021;18(7):435-453. doi:10.1038/s41571-021-00474-4
  3. Anthon CT, Pène F, Perner A, et al. Platelet transfusions in adult ICU patients with thrombocytopenia: A sub-study of the PLOT-ICU inception cohort study. Acta Anaesthesiol Scand. 2024;68(8):1018-1030. doi:10.1111/aas.14467
  4. Khan MA, Abu Esba LC, Yousef CC, et al. Practical challenges and considerations in adopting biosimilars in oncology clinical practice within a large healthcare system. Expert Rev Clin Pharmacol. 2025;18(6):323-331. doi:10.1080/17512433.2025.2492063
  5. Anthon CT, Pène F, Perner A, et al. Platelet transfusions and thrombocytopenia in intensive care units: Protocol for an international inception cohort study (PLOT-ICU). Acta Anaesthesiol Scand. 2022;66(9):1146-1155. doi:10.1111/aas.14124

As healthcare policy has evolved, healthcare delivery has become increasingly influenced by administrative procedures such as prior authorization. Although prior authorization was initially implemented to control healthcare costs and guarantee appropriate care, its effect on perioperative services, including anesthesia, is becoming a major concern. Delays caused by prior authorization requirements can affect operating room scheduling, anesthesia planning, and, ultimately, patient outcomes. Mounting evidence suggests that these delays introduce clinical inefficiencies and potential risks for patients requiring time-sensitive surgical interventions.

A significant barrier to timely anesthesia services is the delay in scheduling procedures due to pending insurance approval. This is particularly problematic for outpatient and elective procedures, where administrative hurdles may delay surgery by days or weeks. These delays can have more than just logistical consequences—they may lead to worsening patient conditions, the need to reassess preoperative plans, or even the progression of disease, which requires more complex anesthesia care. Although anesthesia itself is not typically the direct target of prior authorization, the services it supports, such as imaging, surgery, and certain pain management interventions, frequently are. As such, anesthesiology teams may experience indirect but significant disruptions in workflow and resource allocation (1).

Socioeconomic factors have been shown to correlate with longer wait times for procedures involving anesthesia, especially in pediatric populations. For instance, a study of pediatric outpatient MRIs found that patients with public insurance, those from minority racial groups, and those from non-English-speaking families had significantly longer wait times to imaging completion, a process that in many cases requires anesthesia. These findings suggest that systemic inequities linked to prior authorization disproportionately affect vulnerable populations, exacerbating healthcare disparities and potentially influencing anesthetic outcomes through delayed diagnosis or treatment (1).

The potential downstream effects of these delays on trauma and acute care cases have also been observed. Although trauma care is often exempt from prior authorization requirements due to its emergent nature, subsequent surgical planning may be delayed when post-acute interventions, such as imaging or certain medications, require authorization. In such cases, the anesthesia team may face uncertainty in scheduling and resource mobilization, which can affect the safety and effectiveness of perioperative care. Furthermore, since trauma cases can evolve quickly, delays in post-trauma surgical intervention can necessitate changes in anesthetic plans, including different pharmacological approaches or monitoring standards, due to progression in clinical status (2).

Another area in which prior authorization impacts anesthesia care is chronic pain management. Interventional pain procedures, particularly those requiring guided injections or radiofrequency ablations, often fall under the purview of the anesthesiology department. These services are commonly flagged for prior authorization, which can lead to frustration among providers and patients. Delays in these procedures can lead to prolonged opioid use, increased emergency department visits, or deterioration in patient function. All of these outcomes increase the burden on the anesthesia teams involved in the longitudinal care of these patients (3).

While definitive studies specifically quantifying anesthesia-related complications from prior authorization delays are still limited, the indirect evidence is compelling. Prior authorization often presents a significant operational and clinical hurdle. Reforming the system to prioritize efficiency and transparency could mitigate its negative impact on anesthesiology and surgical services.

References

  1. Noda SM, Alp Oztek M, Sullivan E, Otto RK, Stanford S, Iyer RS. The Effects of Race, Primary Language, Insurance and Other Factors on Time to Pediatric Outpatient MRI Completion: A Retrospective Cohort Study. Acad Radiol. 2024;31(11):4643-4649. doi:10.1016/j.acra.2024.08.042
  2. Scott JW, Groner JI, Jensen AR; ACS TQIP Mortality Reporting System Writing Group. Trauma Quality Improvement Program mortality reporting system case reports: Unanticipated mortality because of imaging-related delays in care. J Trauma Acute Care Surg. Published online July 3, 2025. doi:10.1097/TA.0000000000004691
  3. Slat S, Yaganti A, Thomas J, et al. Opioid Policy and Chronic Pain Treatment Access Experiences: A Multi-Stakeholder Qualitative Analysis and Conceptual Model. J Pain Res. 2021;14:1161-1169. Published 2021 Apr 27. doi:10.2147/JPR.S282228

Glucagon-like peptide-1 (GLP-1) receptor agonists (GLP-1 RAs) have transformed the therapeutic landscape for type 2 diabetes mellitus (T2DM) and obesity. These agents mimic endogenous GLP-1, which enhances glucose-dependent insulin secretion, delays gastric emptying, and promotes satiety. In randomized controlled trials (RCTs), GLP-1 RAs have demonstrated significant efficacy in improving glycemic control and reducing weight. However, RCTs often fail to capture the heterogeneity of real-world clinical populations. Real-world data are therefore essential for evaluating the broader clinical utility, comparative effectiveness, and safety of GLP-1 RAs across different patient demographics and comorbidity profiles.

A comprehensive study by Thomsen et al. used nationwide registries to evaluate real-world outcomes associated with GLP-1 agonists, particularly with newer agents such as semaglutide and liraglutide. Their analysis revealed consistent weight loss and glycemic improvement across various subgroups, including individuals with significant comorbid conditions, such as cardiovascular disease or renal impairment (1).

Real-world data suggest that GLP-1 agonists may offer advantages over traditional therapies, such as basal insulin. In another study, Yang et al. analyzed the cost-effectiveness and clinical outcomes of GLP-1 agonists versus insulin using population-level datasets. Their findings revealed that GLP-1 agonists not only resulted in similar or superior glycemic control but also significantly reduced body weight and the risk of hypoglycemia (2).

Safety is an important concern in long-term pharmacotherapy. Gastrointestinal (GI) adverse events are the most commonly reported side effect associated with GLP-1 receptor agonist use. Liu et al. analyzed real-world data from the FDA Adverse Event Reporting System (FAERS) to compare the incidence of GI events among different GLP-1 receptor agonists. They found that semaglutide was associated with a higher incidence of nausea and vomiting, while exenatide was associated with a higher incidence of diarrhea. Despite these associations, the events were largely self-limiting and rarely led to treatment discontinuation, particularly when slow dose escalation protocols were employed (3).
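FAERS disproportionality studies such as this one typically rely on measures like the reporting odds ratio (ROR), computed from a 2×2 table of report counts. The sketch below shows the standard ROR calculation with a log-scale Wald 95% confidence interval; the counts are hypothetical and are not taken from Liu et al. or FAERS.

```python
import math

# Standard reporting odds ratio (ROR) from a 2x2 table of report counts:
#   a: target event, target drug       b: other events, target drug
#   c: target event, comparator drugs  d: other events, comparator drugs
def reporting_odds_ratio(a, b, c, d):
    """Return (ROR, 95% CI lower, 95% CI upper) using a log-scale Wald CI."""
    ror = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(ror) - 1.96 * se)
    upper = math.exp(math.log(ror) + 1.96 * se)
    return ror, lower, upper

# Hypothetical counts (not FAERS data). A disproportionality signal is
# conventionally flagged when the lower confidence bound exceeds 1.
ror, lower, upper = reporting_odds_ratio(120, 880, 400, 8600)
print(f"ROR = {ror:.2f} (95% CI {lower:.2f}-{upper:.2f})")
```

Because such measures compare spontaneous reporting rates rather than incidence, an elevated ROR indicates a signal worth investigating, not a causal effect.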

Tumor-related adverse effects are a more serious but less frequent safety concern. Yang et al. analyzed FAERS data spanning nearly two decades to investigate potential oncogenic risks associated with GLP-1 agonist therapy. The study identified a small number of reports indicating possible associations with medullary thyroid carcinoma and pancreatic neoplasms. However, the authors cautioned that these findings were based on spontaneous reporting systems and did not establish causality. They emphasized the need for continued surveillance and mechanistic studies to clarify any potential biological links (4).

Renal and cardiovascular outcomes also merit consideration when evaluating GLP-1 agonists in the real world. In a meta-analysis of observational studies, Caruso et al. reported favorable renal outcomes among GLP-1 agonist users, including slower progression of albuminuria and a reduced incidence of acute kidney injury, compared with other glucose-lowering drugs. Cardiovascular outcomes also trended positively, consistent with findings from RCTs such as LEADER and SUSTAIN-6. However, the authors noted the need for longer follow-up and stratification by baseline renal function and cardiovascular risk to validate these findings in the general population (5).

In conclusion, the growing body of real-world evidence confirms that GLP-1 agonists effectively improve glycemic control and promote weight loss and are generally safe when used in appropriate populations. Their comparative advantage over insulin and other antidiabetic agents makes them an appealing choice in many clinical scenarios. Nevertheless, careful attention to adverse event profiles, particularly GI symptoms and potential tumor risks, is warranted. As their use continues to expand, real-world studies will remain indispensable in refining clinical guidelines and optimizing patient outcomes.

References

  1. Thomsen RW, Mailhac A, Løhde JB, Pottegård A. Real-world evidence on the utilization, clinical and comparative effectiveness, and adverse effects of newer GLP-1RA-based weight-loss therapies. Diabetes Obes Metab. 2025;27 Suppl 2(Suppl 2):66-88. doi:10.1111/dom.16364
  2. Yang CY, Chen YR, Ou HT, Kuo S. Cost-effectiveness of GLP-1 receptor agonists versus insulin for the treatment of type 2 diabetes: a real-world study and systematic review. Cardiovasc Diabetol. 2021;20(1):21. Published 2021 Jan 19. doi:10.1186/s12933-020-01211-4
  3. Liu L, Chen J, Wang L, Chen C, Chen L. Association between different GLP-1 receptor agonists and gastrointestinal adverse reactions: A real-world disproportionality study based on FDA adverse event reporting system database. Front Endocrinol (Lausanne). 2022;13:1043789. Published 2022 Dec 7. doi:10.3389/fendo.2022.1043789
  4. Yang Z, Lv Y, Yu M, et al. GLP-1 receptor agonist-associated tumor adverse events: A real-world study from 2004 to 2021 based on FAERS. Front Pharmacol. 2022;13:925377. Published 2022 Oct 25. doi:10.3389/fphar.2022.925377
  5. Caruso I, Cignarelli A, Sorice GP, et al. Cardiovascular and Renal Effectiveness of GLP-1 Receptor Agonists vs. Other Glucose-Lowering Drugs in Type 2 Diabetes: A Systematic Review and Meta-Analysis of Real-World Studies. Metabolites. 2022;12(2):183. Published 2022 Feb 15. doi:10.3390/metabo12020183

Intravenous (IV) access is essential in anesthesia and surgery for rapid and reliable administration of fluids and medications. While upper extremity veins are preferred due to lower complication rates and easier access, there are situations (such as trauma, burns, or inaccessible upper extremity veins) where IV access in the lower extremities is required to proceed with anesthesia and surgery.

The most common lower extremity sites for peripheral IV access are the great saphenous vein at the ankle and the lesser saphenous vein, chosen for their superficial locations and consistent anatomy. In emergency or critical care situations, the femoral vein is preferred for central venous access because it is large, easy to identify, and suitable for rapid infusion. However, IV access in the lower extremities carries unique risks, particularly a higher incidence of complications such as thrombophlebitis, deep vein thrombosis (DVT), and catheter-related infections compared to upper extremity sites. These risks are amplified in adults due to factors such as slower blood flow, increased limb dependence, and a higher prevalence of peripheral vascular disease. Because of these concerns, guidelines consistently recommend limiting the duration of lower extremity IV access and transitioning to upper extremity access as soon as possible (1).

Recent advances in ultrasound technology have significantly improved the safety and success rates of acquiring IV access in the lower extremities, making ultrasound an increasingly useful tool for anesthesia and surgery. Ultrasound guidance provides direct visualization of veins, which not only facilitates cannulation in patients with difficult vascular access but also reduces the risk of complications such as arterial puncture or multiple failed attempts. Several studies support the use of ultrasound for both peripheral and femoral vein cannulation, showing improvements in first-pass success rates and reductions in procedure-related complications (1). This is particularly relevant for central access via the femoral vein, which, while easy to access, is associated with higher rates of infection and thrombotic events compared to central access via the subclavian or internal jugular vein. In a landmark study, Merrer et al. found that femoral catheterization had higher rates of both infection and thrombosis than subclavian access, reinforcing the recommendation that femoral sites should be reserved for specific clinical situations or emergencies (2).

IV access in the lower extremities is generally considered safer and more feasible in pediatric patients than in adults because children have fewer vascular comorbidities and their veins are less prone to thrombosis. However, complications such as infiltration, infection, and phlebitis still occur and require regular evaluation and prompt intervention if problems arise (3). In adult patients, early removal and routine monitoring are essential strategies to reduce complications, especially in those with risk factors for thrombosis or infection.

To minimize risk, several key practices are recommended: use of ultrasound guidance, strict aseptic technique during insertion, regular inspection of the IV site, and prompt removal of lower extremity catheters when no longer needed. Providers should monitor for signs of DVT, such as swelling, pain, and erythema, as well as for local or systemic infection. In addition, when femoral access is required for central venous catheterization, careful attention to catheter care protocols and early transition to safer sites are critical to reducing adverse outcomes (4).

Although lower extremity IV access is sometimes unavoidable in anesthesia and surgery, its use is associated with increased risks. Modern techniques such as ultrasound have improved the safety profile of these procedures, but clinicians must remain vigilant for potential complications and limit the use of lower extremity access when possible. With careful management, lower extremity IV access can be a safe and effective option for patients with limited alternatives.

References

  1. Witting MD. IV access difficulty: incidence and delays in an urban emergency department. J Emerg Med. 2012;42(4):483-487. doi:10.1016/j.jemermed.2011.07.030
  2. Merrer J, De Jonghe B, Golliot F, et al. Complications of femoral and subclavian venous catheterization in critically ill patients: a randomized controlled trial. JAMA. 2001;286(6):700-707. doi:10.1001/jama.286.6.700
  3. Geerts WH, Code KI, Jay RM, Chen E, Szalai JP. A prospective study of venous thromboembolism after major trauma. N Engl J Med. 1994;331(24):1601-1606. doi:10.1056/NEJM199412153312401
  4. Dargin JM, Rebholz CM, Lowenstein RA, Mitchell PM, Feldman JA. Ultrasonography-guided peripheral intravenous catheter survival in ED patients with difficult access. Am J Emerg Med. 2010;28(1):1-7. doi:10.1016/j.ajem.2008.09.001

3D Printing for the OR

Three-dimensional (3D) printing, also known as additive manufacturing, has significantly impacted the surgical field by enabling the creation of patient-specific models, implants, and instruments that enhance surgical precision and outcomes. These advancements support more precise pre-surgical planning and personalized interventions, transforming how clinicians operate in the OR (Hoang et al., 2016).

Among its most impactful applications, 3D-printed anatomical models have been widely used in pre-surgical planning, particularly for complex procedures. Nearly every part of the human body has been replicated, providing surgeons with valuable tools for study and rehearsal before performing procedures (Ballard et al., 2018; Hoang et al., 2016). Traditionally, surgeons have relied on two-dimensional (2D) imaging and their clinical experience to plan operations. However, converting imaging data into a 3D model offers a more intuitive and comprehensive understanding of patient-specific anatomy (Ballard et al., 2018). Using 3D printing, lifelike models are created to enable surgeons to visualize and even rehearse surgical interventions, often leading to improved strategies and reduced surprises in the OR. Empirical studies have consistently demonstrated that integrating 3D models into surgical planning enhances outcomes, with reports showing improved accuracy and reduced operative times (Tack et al., 2016).

The efficiency of surgical procedures is notably increased when surgeons prepare using patient-specific models. In orthopedic trauma cases, research found that using 3D-printed models reduced average operative time by approximately 20% and decreased intraoperative blood loss by 25% compared to conventional planning methods (Morgan et al., 2020). Furthermore, a recent cost analysis indicated that using 3D-printed anatomical models saved 62 minutes of operating room time per case, equating to approximately $3,700 in cost savings per surgery (Ballard et al., 2020). These time savings not only enhance efficiency but also reduce the time patients spend under anesthesia, improving patient safety. Additionally, 3D models are effective tools for patient education, offering a tangible representation of the relevant anatomy and the planned procedure (Ballard et al., 2018).
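As a rough sanity check on those figures (an illustrative calculation, not taken from the cited study), the reported savings imply an operating room cost of roughly $60 per minute:

```latex
\$3{,}700 \div 62~\text{min} \approx \$59.70~\text{per minute of OR time}
```

This is consistent with commonly quoted OR cost estimates in the tens of dollars per minute, which is why even modest reductions in operative time translate into meaningful savings.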

3D printing technology also enables the design and fabrication of implants tailored precisely to an individual’s anatomical and pathological characteristics. Traditionally, surgeons have been constrained to using standard implants in limited sizes, which may not perfectly fit every patient’s unique anatomy. A prominent example is observed in craniofacial surgery, where computed tomography (CT) scans of a skull defect can be utilized to create a titanium implant that accurately restores the patient’s original skull contour (Martelli et al., 2016). Such personalized implants have been employed in complex reconstructions involving the head, spine, and limbs, particularly in cases where standard implants proved inadequate (Kadakia et al., 2020). In scenarios such as orthopedic oncology or severe trauma, 3D printing facilitates the replacement of substantial bone segments with implants that conform precisely to the patient’s unique geometry, thereby potentially enhancing functional outcomes and reducing the chance that the patient will need to return to the OR for further correction. Preliminary clinical reports are promising, indicating that custom implants have enabled limb-sparing surgeries and anatomically precise joint replacements that would have been challenging or unfeasible with conventional implants (Kadakia et al., 2020).

3D printing has become a transformative tool in the modern OR, offering unparalleled precision, customization, and efficiency. This technology enhances preoperative planning, reduces intraoperative risks, and improves surgical outcomes across multiple specialties by enabling the creation of patient-specific anatomical models, implants, and surgical guides. Challenges such as cost, regulatory considerations, and material limitations remain, but ongoing advancements in biocompatible materials, faster printing techniques, and regulatory frameworks will likely facilitate broader clinical adoption. As technology continues to evolve, 3D printing is poised to revolutionize surgery further, paving the way for increasingly personalized and efficient medical interventions.

References

  1. Ballard, D. H., Trace, A. P., Ali, S., Hodgdon, T., Zygmont, M. E., DeBenedectis, C. M., & Lenchik, L. (2018). Clinical applications of 3D printing: Primer for radiologists. Academic Radiology, 25(1), 52–65. https://doi.org/10.1016/j.acra.2017.08.004
  2. Ballard, D. H., Mills, P., Duszak, R. Jr., Weisman, J. A., Rybicki, F. J., & Woodard, P. K. (2020). Medical 3D printing cost-savings in orthopedic and maxillofacial surgery: Cost analysis of operating room time saved with 3D printed anatomic models and surgical guides. Academic Radiology, 27(8), 1103–1113. https://doi.org/10.1016/j.acra.2020.02.012
  3. Hajnal, B., Pokorni, A. J., Turbucz, M., Bereczki, F., Bartos, M., Lazáry, Á., & Eltes, P. E. (2025). Clinical applications of 3D printing in spine surgery: A systematic review. European Spine Journal, 34(2), 454–471. https://doi.org/10.1007/s00586-025-07241-9
  4. Hoang, D., Perrault, D., Stevanovic, M., & Ghiassi, A. (2016). Surgical applications of three-dimensional printing: A review of current literature & how to get started. Annals of Translational Medicine, 4(23), 456. https://doi.org/10.21037/atm.2016.12.18
  5. Kadakia, R. J., Wixted, C. M., Allen, N. B., Hanselman, A. E., & Adams, S. B. (2020). Clinical applications of custom 3D-printed implants in complex lower extremity reconstruction. 3D Printing in Medicine, 6(1), 29. https://doi.org/10.1186/s41205-020-00075-2
  6. Martelli, N., Serrano, C., van den Brink, H., Pineau, J., Prognon, P., Borget, I., & El Batti, S. (2016). Advantages and disadvantages of three-dimensional printing in surgery: A systematic review. Surgery, 159(6), 1485–1500. https://doi.org/10.1016/j.surg.2015.12.017
  7. Morgan, C., Khatri, C., Hanna, S. A., Ashrafian, H., & Sarraf, K. M. (2020). Use of three-dimensional printing in preoperative planning in orthopedic trauma surgery: A systematic review and meta-analysis. World Journal of Orthopaedics, 11(1), 57–66. https://doi.org/10.5312/wjo.v11.i1.57
  8. Tack, P., Victor, J., Gemmel, P., & Annemans, L. (2016). 3D-printing techniques in a medical setting: A systematic literature review. BioMedical Engineering OnLine, 15(1), 115. https://doi.org/10.1186/s12938-016-0236-4

The preparation of a patient’s skin before surgery is a critical step in reducing the risk of surgical site infections (SSIs), which are among the most common postoperative complications. When combined with sterile techniques, proper skin preparation helps to minimize the microbial load on the skin, thereby preventing pathogens from entering the surgical site. SSIs can lead to increased healthcare costs, prolonged hospital stays, and significant patient morbidity. Sterile technique—encompassing practices such as hand hygiene, sterile draping, and appropriate antiseptic application—is essential in maintaining a controlled environment free of harmful microorganisms. Preoperative skin preparation is particularly important as it aims to remove transient microorganisms and reduce resident flora to levels that are no longer pathogenic.

There are various protocols available for preoperative patient skin preparation, and the appropriate selection depends on factors such as the surgical site, patient sensitivities, and type of procedure. Two commonly used protocols are chlorhexidine-alcohol and povidone-iodine solutions. Chlorhexidine-alcohol, a combination of 2% chlorhexidine gluconate and 70% isopropyl alcohol, is widely recommended due to its broad-spectrum antimicrobial activity and prolonged residual effect. It is effective against both gram-positive and gram-negative bacteria, making it suitable for a wide range of surgeries. Povidone-iodine is another widely used antiseptic, particularly for patients sensitive to chlorhexidine or for procedures near mucous membranes. It offers broad antimicrobial coverage but has a shorter residual effect compared to chlorhexidine. Preoperative protocols may also include patient cleansing with chlorhexidine-based wipes or solutions at home before surgery, further reducing microbial load on the skin.

The step-by-step technique for surgical skin preparation involves a systematic approach to ensure thorough antisepsis of the surgical site. First, the healthcare provider assesses the patient and surgical site to select the appropriate antiseptic solution. They then don sterile gloves to maintain aseptic technique throughout the process. The prep begins at the planned incision site and moves outward in an ever-widening circular motion, covering an area larger than the anticipated surgical field. For alcohol-based solutions, the skin is gently wiped following the manufacturer’s instructions, typically using a back-and-forth motion for the prescribed time based on the body location. It is crucial to allow the solution adequate drying time, which for alcohol-based products can range from three minutes on hairless skin to up to an hour for areas with hair. Once dry, sterile drapes are applied to isolate the prepped area. For procedures involving multiple sites, such as grafts or abdominal-perineal surgeries, separate prep set-ups are used for each area to prevent cross-contamination. Throughout the process, care is taken to avoid touching non-sterile items and to prevent pooling of prep solution, which could lead to chemical burns. After patient skin preparation is complete, the surgical team maintains vigilance to ensure the sterile field remains intact until the wound is sealed and dressed.

The effectiveness of these disinfectants is due to their mechanisms of action. Chlorhexidine disrupts bacterial cell membranes, causing leakage of cellular contents and leading to cell death. It is highly effective against gram-positive bacteria like Staphylococcus aureus and gram-negative bacteria like Escherichia coli, though it has limited efficacy against fungi and viruses. Alcohol, often used in combination with other agents, works by denaturing proteins and rapidly killing microorganisms upon contact. Povidone-iodine penetrates microbial cell walls and disrupts metabolic pathways, making it effective against a wide range of bacteria, fungi, and some viruses.

Evidence from clinical studies highlights the importance of selecting appropriate skin prep solutions to reduce SSIs. For example, a systematic review found that 2–2.5% chlorhexidine in alcohol significantly reduced SSI rates compared to aqueous iodine solutions (relative risk 0.75). Another study demonstrated that chlorhexidine-alcohol outperformed povidone-iodine in bacterial eradication during foot and ankle surgeries. These findings underscore the importance of evidence-based practices when choosing antiseptic agents for preoperative skin preparation. By tailoring protocols to specific needs—such as anatomical location or patient sensitivities—healthcare providers can optimize outcomes and reduce the risk of SSIs.
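For readers less familiar with the metric, relative risk is simply the SSI rate in the chlorhexidine-alcohol group divided by the rate in the comparison (aqueous iodine) group; a value below 1 favors chlorhexidine-alcohol. With hypothetical rates chosen only for illustration:

```latex
RR = \frac{\text{SSI risk, chlorhexidine-alcohol}}{\text{SSI risk, aqueous iodine}}
\qquad \text{e.g., } \frac{3\%}{4\%} = 0.75
```

In other words, a relative risk of 0.75 corresponds to a 25% relative reduction in SSI incidence; the absolute reduction depends on the baseline infection rate.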

In conclusion, patient skin preparation protocols are a cornerstone of infection prevention in surgical settings. Adherence to sterile techniques and the use of effective disinfectants tailored to individual circumstances can significantly lower the incidence of SSIs. As research continues to refine these protocols, healthcare providers have more evidence-based tools at their disposal to ensure safer surgical outcomes.

References

  1. AORN Outpatient Surgery Magazine. The Essentials: Peerless Skin Prep: Here’s How. July 2023. Available at: https://www.aorn.org/outpatient-surgery/article/the-essentials-peerless-skin-prep-here-s-how
  2. American Academy of Orthopaedic Surgeons (AAOS). Surgical Site Preparation Toolkit. Accessed February 2025. Available at: https://www.aaos.org/quality/quality-programs/quality-toolkits/surgical-site-preparation/
  3. Outpatient Surgery Magazine. Your Updated Guide to Surgical Skin Preps. November 2022. Available at: https://www.aorn.org/outpatient-surgery/article/2010-May-your-updated-guide-to-surgical-skin-preps
  4. PubMed. A Systematic Review on Skin Preparation Solutions for SSI Prevention. August 2022. Available at: https://pubmed.ncbi.nlm.nih.gov/35985350/
  5. Centers for Disease Control and Prevention (CDC). Chemical Disinfectants. November 2023. Available at: https://www.cdc.gov/infection-control/hcp/disinfection-sterilization/chemical-disinfectants.html

Healthcare disparities in the perioperative period—including the preoperative, intraoperative, and postoperative phases—are often overlooked, despite their significant impact on patient outcomes. Many socioeconomic factors affect surgical access, quality of care, and patient outcomes; one important factor is language barriers. Approximately 22% of the U.S. population speaks a language other than English at home, and limited English proficiency is linked to poorer healthcare access, an increased risk of adverse events, worse patient care experiences, reduced understanding of medical instructions, and worse overall health outcomes in both inpatient and outpatient settings.1 Navigating language barriers with patients is an essential skill for anesthesiologists.

Even with interpretation services, limited English proficiency can affect the informed consent process, patient comprehension of perioperative instructions, and more. Many patients with limited English proficiency describe the process of finding and working with an interpreter as an uphill battle, with common feelings of betrayal and frustration regarding their healthcare experience.2 Patients can experience specific difficulties in the realm of anesthesia, especially regarding pain management. Communication barriers can lead to the misinterpretation of a patient’s pain level, an inadequate delivery of pain relief, or an inability to effectively induce anesthesia or analgesia, which can affect patient recovery, increase the risk of postoperative complications, and diminish patient satisfaction. As such, a comprehensive review of how language barriers impact anesthesia can guide large-scale efforts aimed at improving how anesthesiologists approach care for more vulnerable patients.3

Research on the impact of language barriers in obstetric anesthesia has found that Spanish-speaking women were significantly less likely to anticipate and use neuraxial anesthesia compared to English-speaking women, and these disparities persist even after adjusting for age, marital status, income, obstetric provider type (obstetrician or midwife), and labor type.4 One randomized controlled trial demonstrated increased use of labor epidurals by Latinx beneficiaries after the establishment of an education program. These findings suggest social determinants such as language barriers must be evaluated thoroughly to fully address health disparities.4

In a prospective survey study conducted at a Harvard-affiliated medical center, separate surveys were sent to the Department of Interpretation Services (DIS) and the Department of Anesthesia (DA). The questions sent to DIS staff explored the languages spoken and the relative experience of the interpreters, along with their level of training in and understanding of anesthesia-specific procedures and concepts. The questions sent to DA anesthesiologists and nurses asked about their past experiences with interpreters and about situations in which language barriers led to ineffective communication. Results showed that 97% of DIS staff had no anesthesia-specific training.5 Only 54% of interpreter respondents felt they had received adequate training regarding anesthesia consent, and only 25% reported complete understanding of and comfort with consent for common invasive monitoring, such as central lines and arterial lines. Despite these numbers, the interpreters reported feeling confident consenting for anesthesia on behalf of their patients, including for general, neuraxial, and regional anesthesia. Among DA providers, 58% were unsure whether their interpreters had sufficient training regarding anesthesia consent. Most DA staff were comfortable with the interpretation of neuraxial or regional anesthesia but reported difficulty explaining the difference between monitored anesthesia care and general anesthesia. The study identified increasing interpreters’ level of anesthesia-specific training as an important step toward helping anesthesiologists and patients navigate language barriers.5

Efficiently addressing healthcare disparities in the perioperative period, particularly those linked to language barriers, is critical for improving patient outcomes. Limited English proficiency poses significant challenges in communication, especially in informed consent and pain management, which can directly affect the quality of anesthesia and recovery. The evidence underscores the importance of targeted interventions, such as increasing the training of interpreters in anesthesia-related concepts and improving access to culturally competent care.

References

  1. Divi, Chandrika, et al. “Language Proficiency and Adverse Events in US Hospitals: A Pilot Study.” International Journal for Quality in Health Care, vol. 19, no. 2, Apr. 2007, pp. 60–67. https://doi.org/10.1093/intqhc/mzl069
  2. Steinberg, Emma M., et al. “The ‘Battle’ of Managing Language Barriers in Health Care.” Clinical Pediatrics, vol. 55, no. 14, Dec. 2016, pp. 1318–27. https://doi.org/10.1177/0009922816629760
  3. Joo, Hyundeok, et al. “Association of Language Barriers With Perioperative and Surgical Outcomes: A Systematic Review.” JAMA Network Open, vol. 6, no. 7, July 2023, p. e2322743. https://doi.org/10.1001/jamanetworkopen.2023.22743
  4. Ehie, Odinakachukwu, et al. “What Is the Role for Anesthesiologists and Anesthesia Practices in Ensuring Access, Equity, Diversity, and Inclusion?” ASA Monitor, vol. 85, no. S10, Oct. 2021, pp. 45–48. https://doi.org/10.1097/01.ASM.0000795196.13554.35
  5. Shapeton, Alexander, et al. “Anesthesia Lost in Translation: Perspective and Comprehension.” The Journal of Education in Perioperative Medicine: JEPM, vol. 19, no. 1, Jan. 2017, p. E505. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5327869/

Electronic Health Records (EHR) systems have become a cornerstone of modern healthcare, aiming to improve patient management, streamline operations, and enhance data-driven decision-making. While much attention is often given to EHR adoption in the United States, countries worldwide are developing and implementing their own systems, tailored to local needs, policies, and healthcare structures. EHR systems outside the United States reveal diverse approaches, successes, and challenges, providing valuable insights into global health informatics.

EHR Systems in Europe

Some of the highest rates of EHR adoption are found outside the US, with Nordic countries like Denmark, Sweden, and Finland boasting near-universal coverage and interoperability across healthcare providers. Centralized systems make it easy to share patient data between hospitals, clinics, and general practitioners, improving continuity of care. Denmark’s “Sundhed.dk,” a national health portal, allows patients to access their records, book appointments, and communicate with healthcare providers 1,2.

Despite successes in some places, many European countries still contend with fragmented systems. In Germany and France, decentralized governance, slow provider uptake, and repeated delays in national digitization projects have hindered the smooth nationwide implementation of EHR systems. Privacy concerns under the General Data Protection Regulation (GDPR) add complexity to data management and cross-border healthcare initiatives 3–5.

EHR Systems in Asia

China has rapidly scaled its EHR systems, driven by its National Health Informatization Plan. Large hospitals in urban areas have sophisticated EHRs, enabling efficient management of millions of patient records. However, rural regions lag behind, with limited access to digital healthcare infrastructure. Interoperability remains a challenge in connecting diverse systems across the country 6–8.

India’s National Digital Health Mission (NDHM) aims to establish a unified digital health system, including an EHR system linked to unique health identifications. However, the initiative faces significant obstacles, including low healthcare digitization in rural areas and varying literacy levels, on top of general concerns about data security and privacy 9.

EHR Systems in Africa

Africa’s EHR landscape reflects the continent’s diverse healthcare needs and economic disparities—countries like South Africa and Kenya have adopted EHRs in urban hospitals, while many rural areas rely on paper-based systems or basic digital tools. The lack of infrastructure, unreliable internet access, and funding constraints pose significant challenges to more widespread adoption in many African countries.

Innovative approaches, such as cloud-based EHRs and mobile health solutions, are gaining ground. For example, the open-source OpenMRS platform supports EHR implementation in resource-limited settings and has been widely deployed in Kenya 10–12.

EHR Systems in Latin America

Latin American countries are largely seeing gradual adoption of national EHR systems, spurred by governmental efforts and private sector investment. Countries like Brazil and Chile have implemented national health information systems, but uneven adoption persists due to economic disparities and limited interoperability. A notable success story is Uruguay’s Historia Clínica Electrónica Nacional (HCEN), a nationwide EHR system connecting public and private providers—demonstrating how strong government leadership and a collaborative approach can overcome systemic barriers 13–15.

Global Challenges and Lessons

Both inside and outside the US, one of the most persistent challenges facing the use of EHR systems is achieving interoperability—ensuring that systems can communicate seamlessly across different platforms, facilities, and borders. In addition, data privacy and cybersecurity are universal concerns 16,17.

References

1. Hägglund, M. et al. A Nordic Perspective on Patient Online Record Access and the European Health Data Space. J Med Internet Res 26, e49084 (2024). doi: 10.2196/49084
2. sundhed.dk – Få klare svar om din sundhed [Get clear answers about your health]. https://www.sundhed.dk/.
3. General Data Protection Regulation (GDPR) – Legal Text. https://gdpr-info.eu/.
4. Cuggia, M. & Combes, S. The French Health Data Hub and the German Medical Informatics Initiatives: Two National Projects to Promote Data Sharing in Healthcare. Yearb Med Inform 28, 195–202 (2019). doi: 10.1055/s-0039-1677917
5. Schmitt, T. Implementing Electronic Health Records in Germany: Lessons (Yet to Be) Learned. International Journal of Integrated Care 23 (2023). doi: 10.5334/ijic.6578
6. Translation: 14th Five-Year Plan for National Informatization – Dec. 2021. https://digichina.stanford.edu/work/translation-14th-five-year-plan-for-national-informatization-dec-2021/.
7. Liang, J. et al. Adoption of Electronic Health Records (EHRs) in China During the Past 10 Years: Consecutive Survey Data Analysis and Comparison of Sino-American Challenges and Experiences. J Med Internet Res 23, e24813 (2021). doi: 10.2196/24813
8. Li, C. et al. Implementation of National Health Informatization in China: Survey About the Status Quo. JMIR Med Inform 7, e12238 (2019). doi: 10.2196/12238
9. National Digital Health Mission | Make In India. https://www.makeinindia.com/national-digital-health-mission.
10. Kenya – OpenMRS Talk. https://talk.openmrs.org/c/local/kenya/35.
11. Katurura, M. C. & Cilliers, L. Electronic health record system in the public health care sector of South Africa: A systematic literature review. Afr J Prim Health Care Fam Med 10, 1746 (2018). doi: 10.4102/phcfm.v10i1.1746
12. Akanbi, M. O. et al. Use of Electronic Health Records in sub-Saharan Africa: Progress and challenges. J Med Trop 14, 1–6 (2012).
13. Historia Clínica Electrónica Nacional | Agencia de Gobierno Electrónico y Sociedad de la Información y del Conocimiento. https://www.gub.uy/agencia-gobierno-electronico-sociedad-informacion-conocimiento/node/312.
14. Capurro, D. et al. Chile’s National Center for Health Information Systems: A Public-Private Partnership to Foster Health Care Information Interoperability. Stud Health Technol Inform 245, 693–695 (2017).
15. Alves, T. F., Almeida, F. A., Brito, F. A., Tourinho, F. S. V. & de Andrade, S. R. Regulation and Use of Health Information Systems in Brazil and Abroad: Integrative Review. Comput Inform Nurs 40, 373–381 (2022). doi: 10.1097/CIN.0000000000000828
16. What is EHR Interoperability and why is it important? | HealthIT.gov. https://www.healthit.gov/faq/what-ehr-interoperability-and-why-it-important.
17. Basil, N. N., Ambe, S., Ekhator, C. & Fonkem, E. Health Records Database and Inherent Security Concerns: A Review of the Literature. Cureus 14, e30168 (2022). doi: 10.7759/cureus.30168