Ability of online drug databases to assist in clinical decision-making with infectious disease therapies

Abstract

Background

Infectious disease (ID) is a dynamic field with new guidelines being adopted at a rapid rate. Clinical decision support tools (CDSTs) have proven beneficial in selecting treatment options to improve outcomes. However, there is a dearth of information on the abilities of CDSTs, such as drug information databases. This study evaluated online drug information databases on their ability to answer infectious disease-specific queries.

Methods

Eight subscription drug information databases (American Hospital Formulary Service Drug Information [AHFS], Clinical Pharmacology [CP], Epocrates Online Premium [EOP], Facts & Comparisons 4.0 Online [FC], Lexi-Comp [LC], Lexi-Comp with AHFS [LC-AHFS], Micromedex [MM], and PEPID PDC [PPDC]) and six freely accessible databases (DailyMed [DM], DIOne [DIO], Epocrates Online Free [EOF], Internet Drug Index [IDI], Johns Hopkins ABX Guide [JHAG], and Medscape Drug Reference [MDR]) were evaluated for their scope (presence of an answer) and completeness (on a 3-point scale) in answering 147 infectious disease-specific questions. Questions were divided among five classifications: antibacterial, antiviral, antifungal, antiparasitic, and vaccination/immunization. Classifications were further divided into categories (e.g., dosage, administration, emerging resistance, synergy, and spectrum of activity). Databases were ranked based on scope and completeness scores. ANOVA and Chi-square tests were used to determine differences between individual databases and between subscription and free databases.

Results

Scope scores revealed three discrete tiers of database performance: Tier 1 (82-77%), Tier 2 (73-65%), and Tier 3 (56-41%), which were significantly different from each other (p < 0.05). The top-tier performers, MM (82%), MDR (81%), LC-AHFS (81%), AHFS (78%), and CP (77%), answered significantly more questions than the other databases (p < 0.05). The top databases for completeness were MM (97%), DM (96%), IDI (95%), and MDR (95%). Subscription databases performed better than free databases in all categories (p = 0.03). The databases provided 37 erroneous answers, for an overall error rate of 1.8%.

Conclusion

Drug information databases used in ID practice as CDSTs can be valuable resources. MM, MDR, LC-AHFS, AHFS, and CP were shown to be superior in their scope and completeness of information, and MM, AHFS, and MDR provided no erroneous answers. There is room for improvement in all evaluated databases.

Background

Timely access to clinical decision support tools (CDSTs) has proven beneficial in selecting appropriate treatment options that result in improved therapeutic outcomes [1-6]. The use of such aids as personal digital assistants (PDAs), computerized physician order entry (CPOE), electronic health records (EHRs), and electronic databases has shown a positive influence on patient morbidity and mortality, cost management and formulary compliance, and prevention of medication errors and related fatalities [3, 7-13]. Infectious disease (ID) is a complex and dynamic field, with new treatment guidelines being adopted and innovative pharmaceutical options being introduced at a rapid rate. As such, ID management has a great potential for medication errors [14-16]. In fact, ID has benefited greatly from these innovative tools [3, 17, 18], both as a method to keep abreast of these rapid changes and as a mechanism to assist practitioners in understanding and embracing the most contemporary and appropriate therapies.

There is a dearth of information in the literature providing guidance to ID healthcare providers on the abilities of CDSTs, such as drug information databases, to provide the information most needed in this specialized practice setting. To date, no evaluations of the ID content in online drug information databases have been published, and only one study has examined ID-specific drug information content in selected PDA programs. Miller et al. [19] evaluated four ID programs for PDAs: Epocrates ID, Johns Hopkins ABX Guide, Sanford's Guide to Antimicrobial Therapy, and Infectious Diseases and Antimicrobials Notes. That study focused on the salient features, advantages, disadvantages, and hardware and software requirements of each of the four databases. The evaluation of the scope and accuracy of the references' drug information was limited to a comparison of the monographs contained in each database with the information in the package insert for a single drug, fluconazole. Based on this narrow evaluation methodology, the study found that while each program contained the information physicians need most at the point of care, such as dosing, adverse events, and drug interactions for antimicrobials, all of the applications had limited pharmacological and pharmacokinetic information. Some of the databases omitted important topics such as pediatric dosing, contraindications, precautions, and adverse reactions. The authors concluded that, despite the identified shortcomings, the use of PDA applications may decrease prescription errors, improve patient outcomes, and reduce costs.

ID-specific PDA programs provide an important avenue for clinical decision support, especially for counsel needed at the point of care. The question is: do they provide enough information about all aspects of pharmaceutical prescribing and management that specialty practitioners require? Online subscription or free drug information databases offer an alternative to assist healthcare providers in making timely and accurate ID treatment determinations across the wide range of medication topics relevant to patient care and treatment outcomes. ID practitioners have several products from which to choose, and a multitude of factors must be weighed before deciding which online database best meets the user's needs. While several previous studies have examined the abilities of online databases to satisfy the general drug information needs of healthcare providers [20-23], this study helps to elucidate the differences among selected online drug databases and compares how effectively each CDST performs in an ID specialty setting. This study specifically aimed to evaluate the ability of online drug information databases to provide clinical decision support when answering infectious disease-specific queries.

Methods

Database selection

To be considered for inclusion, databases had to be accessible online and could either be designed as general drug information databases or ID-specific drug information databases. A list of references for consideration was compiled based on previous database studies [20-23], input from an expert panel, and utilization by current practitioners. Programs were included if the monographs were able to answer a diverse set of medication-related inquiries, including such topics as indications, adverse drug events, and drug interactions. Databases were excluded if they were intended exclusively for a particular practice setting other than ID, such as nursing, oncology, or pediatrics. Other references that were not designed primarily as drug information databases (e.g., Sanford's Guide to Antimicrobial Therapy and The 5 Minute Infectious Diseases Consult) were similarly excluded. Databases limited to answering a specific drug information question-type (e.g., drug interactions, compatibility/stability) were also omitted. Fourteen databases met the inclusion requirements: eight subscription and six freely accessible. Only one ID-specific database, the Johns Hopkins ABX Guide, met the inclusion criteria. The subscription databases included American Hospital Formulary Service Drug Information (AHFS), Clinical Pharmacology (CP), Epocrates Online Premium (EOP), Facts & Comparisons 4.0 Online (FC), Lexi-Comp (LC), Lexi-Comp with AHFS (LC-AHFS), Micromedex (MM), and PEPID PDC (PPDC). The six freely accessible databases included: DailyMed (DM), DIOne (DIO), Epocrates Online Free (EOF), Internet Drug Index–RxList.com (IDI), Johns Hopkins ABX Guide (JHAG), and Medscape Drug Reference (MDR). Table 1 provides publisher and website details for each database included in this study.

Table 1 Database publishers and website addresses

Related databases

Lexi-Comp, Inc. offers two versions of its drug information database: LC, which is a compilation of its standard drug monographs, and LC-AHFS, which contains the standard monographs plus the information available in AHFS. Because of the potential similarity between AHFS, LC, and LC-AHFS, a subgroup analysis was performed to assess statistical differences between these databases.

Category design

Five general treatment classifications were created based on a review of ID-related Anatomical Therapeutic Chemical (ATC) classification codes as listed on the 14th World Health Organization (WHO) Model List of Essential Medicines: antibacterial, antiviral, antifungal, antiparasitic, and vaccination/immunization. These classifications were weighted based on the importance of factors such as prevalence and incidence of infection type, morbidity and mortality data, and resistance patterns in the USA as reported by the Morbidity and Mortality Weekly Report (MMWR) and the WHO. Within each of these classifications, 16 drug information categories were designed, including dosage, interactions, emerging resistance, and spectrum of activity. These categories were also weighted based on their impact on direct patient care and their importance to patient safety. For example, dosage and administration were considered more clinically relevant, and thus weighted more heavily, than categories less influential on direct patient care and safety, such as cost and pharmacokinetics. Classification and category information, including the weighting of each, is shown in Table 2.

Table 2 Weighting of classifications and categories

Question development

In order to have a greater likelihood of detecting differences among the databases, a robust number of questions was needed. The most highly weighted categories were therefore assigned 15 questions each, and subsequent categories were populated with a stepwise reduction in the number of questions based on their weighted percentages. A set of 147 ID-specific question and answer pairs was developed and divided across the five ID classifications and the 16 drug information categories. Answers were determined using manufacturer package inserts and primary literature, as well as gold standard references including the MMWR [24], the Centers for Disease Control and Prevention (CDC) [25], Principles and Practice of Infectious Diseases [26], and the Natural Medicines Comprehensive Database [27]. The author-developed study design and question and answer set were reviewed by an external panel of ID physicians and pharmacists for accuracy and relevance to clinical practice. The question list was then finalized based on the panel's recommendations. A sample of questions and answers is provided in Table 3.

Table 3 Sample questions and answers used in evaluation
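
To make the weighting scheme concrete, the short sketch below shows one way the stepwise allocation of questions could be computed from category weights. It is a hypothetical illustration only: the category names, weights, and the exact scaling rule are assumptions and do not reproduce the study's actual values in Table 2.

```python
# Hypothetical sketch of weight-based question allocation.
# Categories with the heaviest weights receive 15 questions, and the rest
# are scaled down stepwise in proportion to their weighted percentages.
# The names and weights below are illustrative, not the values from Table 2.

ILLUSTRATIVE_WEIGHTS = {
    "dosage": 0.12,
    "administration": 0.12,
    "indications": 0.10,
    "drug interactions": 0.08,
    "spectrum of activity": 0.06,
    "pharmacokinetics": 0.03,
    "cost": 0.02,
}

MAX_QUESTIONS = 15  # number of questions given to the top-weighted categories


def allocate_questions(weights: dict, max_q: int) -> dict:
    """Scale each category's question count from its weight, relative to the
    heaviest weight, keeping at least one question per category."""
    top = max(weights.values())
    return {name: max(1, round(max_q * w / top)) for name, w in weights.items()}


if __name__ == "__main__":
    for name, n in allocate_questions(ILLUSTRATIVE_WEIGHTS, MAX_QUESTIONS).items():
        print(f"{name:>22}: {n} questions")
```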

Database assessment

Databases were evaluated for their ability to answer each of the 147 questions and for the completeness of the answers they provided. The presence or absence of the answer (scope) was determined first: a score of one was assigned if the database provided the answer and a score of zero if the answer was absent. Answer completeness was determined using a 3-point scale, with three being the most complete and one being the least complete. Questions were structured so that differences in completeness could be detected, often containing more than one part to the answer. Answers with only one part (e.g., Can valacyclovir be given to treat herpes encephalitis? No) received a three for completeness if the answer was present. If an answer had two components (e.g., What are the concerns with ceftriaxone administration in neonates? It may displace bilirubin and cannot be administered with calcium-containing solutions due to the risk of ceftriaxone-calcium precipitation), completeness was scored a two if one component was present or a three if both were present. For questions requiring three or more components for a complete answer, a completeness score of three was assigned if all components were present (e.g., What are the visual disturbances associated with voriconazole? Abnormal vision, color vision change, and/or photophobia). Completeness scores were assigned only when a database earned a score for scope. Assessments were made independently by at least two authors over two consecutive months ending in November 2007. In the three instances where scores were disparate, the authors reached a consensus on score assignment. Erroneous answers provided by the databases were also documented.
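
The rubric above can be summarized as a small scoring function. The sketch below is an interpretation rather than the authors' actual instrument: the partial-credit handling of multi-part answers in which only some components are found (scored here as 2, or 1 when a three-or-more-part answer yields a single component) is an assumption, since the text explicitly defines only the fully complete cases.

```python
def score_answer(components_expected: int, components_found: int):
    """Return (scope, completeness) for one question in one database.

    scope:        1 if the database provides any part of the answer, else 0
    completeness: 3-point scale, assigned only when a scope score is earned
    The partial-credit rules for multi-part answers are an interpretation of
    the rubric; the Methods text defines only the fully complete cases.
    """
    if components_found == 0:
        return 0, None                        # answer absent: no completeness score
    if components_found >= components_expected:
        return 1, 3                           # single-part or fully complete answer
    if components_expected >= 3 and components_found == 1:
        return 1, 1                           # assumed: least complete answer
    return 1, 2                               # assumed: partially complete answer


# Example: the two-part ceftriaxone-in-neonates question with only one
# component found in a database would score scope 1, completeness 2.
print(score_answer(components_expected=2, components_found=1))  # (1, 2)
```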

Statistical analysis

Data were summarized using descriptive statistics to obtain a rank order of databases based on scope and completeness scores. Inferential statistics (ANOVA and Chi-square tests, as appropriate) were used to determine differences between individual databases and between subscription and free databases. Tukey-Kramer multiple comparison post-hoc tests were used to differentiate among databases. Similar analyses were conducted to determine statistical differences between AHFS, LC, and LC-AHFS, as well as between the subscription and free versions of Epocrates. P values below 0.05 were considered statistically significant. This study was approved by the Health Professions Division Research Committee of Nova Southeastern University.
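
For illustration, the sketch below runs the same kinds of tests on placeholder data: a one-way ANOVA with a Tukey-Kramer post-hoc comparison on per-question completeness scores, and a Chi-square test on answered-versus-unanswered counts for subscription versus free databases. The data values are invented, and the actual analysis may have differed in software and exact groupings.

```python
import numpy as np
from scipy.stats import f_oneway, chi2_contingency
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Placeholder completeness scores (1-3), one per answered question, for three
# hypothetical databases; real data would come from the evaluation itself.
scores = {
    "MM": rng.integers(2, 4, size=120),
    "LC": rng.integers(1, 4, size=95),
    "PPDC": rng.integers(1, 3, size=60),
}

# One-way ANOVA across databases
f_stat, p_anova = f_oneway(*scores.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Tukey-Kramer post-hoc comparisons (handles unequal group sizes)
values = np.concatenate(list(scores.values()))
groups = np.concatenate([[name] * len(v) for name, v in scores.items()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))

# Chi-square test on scope: answered vs. unanswered questions (placeholder
# counts out of 147) for subscription vs. free databases.
contingency = np.array([
    [100, 47],  # subscription: answered, unanswered
    [90, 57],   # free:         answered, unanswered
])
chi2, p_chi, dof, _ = chi2_contingency(contingency)
print(f"Chi-square: chi2 = {chi2:.2f}, df = {dof}, p = {p_chi:.4f}")
```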

Results

Scope

Pairwise comparisons of scope scores revealed three discrete tiers of database performance: Tier 1 (scope 82-77%), Tier 2 (scope 73-65%), and Tier 3 (scope 56-41%), all of which were significantly different from each other (p < 0.05). The top-tier performers, MM (82%), MDR (81%), LC-AHFS (81%), AHFS (78%), and CP (77%), answered significantly more questions than the other databases (p < 0.05). The middle group of databases (Tier 2) comprised FC (73%), IDI (71%), DIO (65%), DM (65%), and LC (65%), and the lowest tier comprised JHAG (56%), EOP (47%), EOF (46%), and PPDC (41%). Full details of database scores for scope across all categories are included in Table 4.

Table 4 Scope of databases

Completeness

Similar to the scores for scope, results for completeness were stratified into three distinct tiers: Tier 1 (97-89%), Tier 2 (83-81%), and Tier 3 (74%), all of which were significantly different from each other (p < 0.05). The top-scoring databases for completeness were MM (97%), DM (96%), IDI (95%), MDR (95%), AHFS (94%), CP (94%), FC (94%), LC-AHFS (94%), and DIO (89%). Mid-ranking databases (Tier 2) were LC (83%), JHAG (82%), EOF (81%), and EOP (81%). PPDC (74%) scored in the lowest tier. Full results for completeness scores, including scores in each drug information category, are provided in Table 5.

Table 5 Completeness of databases

Categorical analysis

When examining only the ability to answer questions (scope) in the ID categories, subscription databases performed better than free databases within both the ID-specific and non-ID-specific categories (p = 0.03). ID-specific categories included emerging resistance, spectrum of activity, and synergy. However, no difference in scope was found between free and subscription online databases within individual categories (e.g., dosing, administration). Comparisons of scope and completeness scores between free and subscription databases are shown in Figures 1 and 2, respectively.

Figure 1 Scope comparison of drug information categories between subscription and free databases.

Figure 2 Completeness comparison of drug information categories between subscription and free databases.

Sub-analysis of related databases

While no differences in scope were found between AHFS and LC-AHFS, both databases answered more questions than LC alone (p < 0.05). Similar findings were seen for completeness: AHFS and LC-AHFS answered the questions more completely than LC alone (p < 0.05), with no difference between AHFS and LC-AHFS. When comparing the EOF and EOP databases, no differences in scope or completeness were seen (p > 0.05).

Errors

There were 37 erroneous answers found in this analysis, yielding an overall error rate of 1.8%. Of the fourteen databases evaluated, three had no errors (MM, AHFS, and MDR), four had two errors (DM, FC, LC, and LC-AHFS), four had three errors (CP, DIO, IDI, and JHAG), and both versions of Epocrates had five errors each. PPDC had seven wrong answers, significantly more than the other databases (p < 0.05). A summary of errors in each category is shown in Table 6, and a sample of erroneous answers with the potential to impact clinical outcomes and patient safety is provided in Table 7.

Table 6 Errors by database and errors per category
Table 7 Sample of erroneous answers discovered in databases
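
As a quick consistency check on the figures reported above, the snippet below tallies the stated per-database error counts and derives the overall rate, assuming the denominator is the 2,058 possible responses (14 databases × 147 questions).

```python
# Error counts per database as reported in the Results section.
errors = {
    "MM": 0, "AHFS": 0, "MDR": 0,
    "DM": 2, "FC": 2, "LC": 2, "LC-AHFS": 2,
    "CP": 3, "DIO": 3, "IDI": 3, "JHAG": 3,
    "EOP": 5, "EOF": 5,
    "PPDC": 7,
}

total_errors = sum(errors.values())      # 37
total_responses = len(errors) * 147      # 14 databases x 147 questions = 2058
print(f"{total_errors} errors / {total_responses} responses "
      f"= {total_errors / total_responses:.1%}")  # ~1.8%
```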

Discussion

In a field as dynamic and evolving as ID, the use of CDSTs, such as drug information databases, can improve patient safety and clinical outcomes. However, a reference is only as good as the information it contains and this study revealed that improvements are necessary. Several general drug information databases were able to provide superior depth and breadth of information, while other references did not perform as well. MM (82%), MDR (81%), LC-AHFS (81%), AHFS (78%), and CP (77%) were all top performers, but even the database with the highest score for scope (MM) was unable to answer nearly one-fifth of the evaluative questions, and the database with the lowest scope score (PPDC) fell short by almost 60%. This deficit, in practical terms, shows that MM would be unable to provide an answer to one out of every five drug information queries, and PPDC would be unable to provide an answer to three out of every five. This disparity between the information needed by the healthcare professional and the information provided by the resources could have a far-reaching negative effect on both practical utility and clinical outcomes.

An insufficient scope of information is less critical than inaccurate information. If an answer is not found, the healthcare provider can turn to another resource to locate it. But if a wrong answer is given, the provider may not realize that the information is unreliable and may use it to make a critical decision, potentially resulting in a negative outcome or even patient harm. The number of errors found among the databases (an overall error rate of 1.8%) was alarming. Of the 14 references evaluated, only three (MM, AHFS, and MDR) returned no erroneous answers. Of the drug information categories in which errors were found, two (dosing and indication) were deemed so important to direct patient care and patient safety that they were the top two weighted categories in this study, each with 15 questions (10.2%). While some of the errors were blatantly wrong, others were instances where the database gave information that was not in line with that provided by the manufacturer but may be acceptable in current clinical practice. CDSTs have been proven useful in making timely and accurate patient care decisions at all stages of the decision-making process, but this study has shown that CDSTs cannot be considered reliable all of the time. Healthcare providers, in all aspects of practice, are expected to be accurate 100% of the time in order to ensure patient safety. This goal cannot be achieved if the tools used in the course of practice are not held to that same high standard.

Limitations

Our study had several potential limitations. The evaluative questions used were intended to be a subset of all possible drug information questions. While we were careful to represent as many types of drug information questions as possible, inclusion of all clinical aspects was not feasible. Performing the same evaluation with a different set of questions could produce different results; however, because of the broad range of scenarios represented in the original question list, the results would most likely show little or no difference. Also, our study captured the data available from the databases at a set point in time, but the databases are tools that are updated with varying frequency. It is possible that changes have been made to the information contained in the databases since the time of this evaluation. Finally, some publishers offer additional components with their drug information databases (e.g., dose calculators). If these value-added features required an additional purchase, they were not used for this evaluation.

Conclusion

Drug information databases used in ID practices as CDSTs can be valuable resources for the healthcare provider. MM, MDR, LC-AHFS, AHFS, and CP were shown to be superior in their scope and completeness of information, and MM, AHFS, and MDR provided no erroneous answers. There is room for improvement in all databases evaluated in this study.

References

  1. Leape LL, Bates DW, Cullen DJ, Cooper J, Demonaco HJ, Gallivan T, Hallisey R, Ives J, Laird N, Laffel G, et al: Systems analysis of adverse drug events. ADE Prevention Study Group. JAMA. 1995, 274: 35-43. 10.1001/jama.274.1.35.

  2. Kohn LT, Corrigan JM, Donaldson MS, Institute of Medicine: To err is human: building a safer health system. 2000, Washington, DC: National Academy Press

  3. Sintchenko V, Iredell JR, Gilbert GL: Comparative impact of guidelines, clinical data, and decision support on prescribing decisions: an interactive web experiment with simulated cases. J Am Med Inform Assoc. 2004, 11: 71-7. 10.1197/jamia.M1166.

  4. Leape LL, Berwick DM: Five years after To Err Is Human: what have we learned?. JAMA. 2005, 293: 2384-90. 10.1001/jama.293.19.2384.

  5. Partin B: Preventing medication errors: an IOM Report. Nurse Pract. 2006, 31: 8. 10.1097/00006205-200612000-00002.

  6. Greenfield S: Medication error reduction and the use of PDA technology. J Nurs Educ. 2007, 46: 127-31.

  7. Bates DW, Kuperman GJ, Wang S, Gandhi T, Kittler A, Volk L, et al: Ten commandments for effective clinical decision support: making the practice of evidence-based medicine a reality. J Am Med Inform Assoc. 2003, 10: 523-30. 10.1197/jamia.M1370.

  8. Fernandopulle R, Ferris T, Epstein A, McNeil B, Newhouse J, Pisano G, et al: A research agenda for bridging the 'quality chasm.'. Health Aff (Millwood). 2003, 22: 178-90. 10.1377/hlthaff.22.2.178.

  9. Kaushal R, Shojania KG, Bates DW: Effects of computerized physician order entry and clinical decision support systems on medication safety: a systematic review. Arch Intern Med. 2003, 163: 1409-16. 10.1001/archinte.163.12.1409.

  10. Garg AX, Adhikari NK, McDonald H, Rosas-Arellano MP, Devereaux PJ, Beyene J, et al: Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA. 2005, 293: 1223-38. 10.1001/jama.293.10.1223.

  11. Teich JM, Osheroff JA, Pifer EA, Sittig DF, Jenders RA, The CDS Expert Review Panel: Clinical decision support in electronic prescribing: recommendations and an action plan: report of the joint clinical decision support workgroup. J Am Med Inform Assoc. 2005, 12: 365-76. 10.1197/jamia.M1822.

  12. Javitt JC, Rebitzer JB, Reisman L: Information technology and medical missteps: evidence from a randomized trial. J Health Econ. 2008, 27: 585-602. 10.1016/j.jhealeco.2007.10.008.

  13. Johnston D, Pan E, Middleton B, Walker J, Bates DW: The value of computerized provider order entry in ambulatory settings. [http://www.citl.org/research/ACPOE_Executive_Preview.pdf]

  14. Kanjanarat P, Winterstein AG, Johns TE, Hatton RC, Gonzalez-Rothi R, Segal R: Nature of preventable adverse drug events in hospitals: A literature review. Am J Health-Syst Pharm. 2003, 60: 1750-9.

  15. Winterstein AG, Johns TE, Rosenberg EI, Hatton RC, Gonzalez-Rothi R, Kanjanarat P: Nature and causes of clinically significant medication errors in a tertiary care hospital. Am J Health-Syst Pharm. 2004, 61: 1908-16.

  16. Kilbridge PM, Campbell UC, Cozart HB, Mojarrad MG: Automated surveillance for adverse drug events at a community hospital and an academic medical center. J Am Med Inform Assoc. 2006, 13: 372-7. 10.1197/jamia.M2069.

  17. Keystone JS, Kozarsky PE, Freedman DO: Internet and computer-based resources for travel medicine practitioners. Clin Infect Dis. 2001, 32: 757-765. 10.1086/319234.

  18. McGregor JC, Weekes E, Forrest GN, Standiford HC, Perencevich EN, Furuno JP, et al: Impact of a computerized clinical decision support system on reducing inappropriate antimicrobial use: a randomized controlled trial. J Am Med Inform Assoc. 2006, 13: 378-384. 10.1197/jamia.M2049.

  19. Miller SM, Beattie MM, Butt AA: Personal digital assistant infectious diseases applications for health care professionals. Clin Infect Dis. 2003, 36: 1018-29. 10.1086/368198.

  20. Belgado BS, Hatton RC, Doering PL: Evaluation of electronic drug information resources for answering questions received by decentralized pharmacists. Am J Health-Syst Pharm. 1997, 54: 2592-6.

  21. Clauson KA, Seamon MJ, Clauson AS, Van TB: Evaluation of drug information databases for personal digital assistants. Am J Health Syst Pharm. 2004, 61: 1015-24.

  22. Kupferberg N, Jones Hartel L: Evaluation of five full-text drug databases by pharmacy students, faculty, and librarians: do the groups agree?. J Med Libr Assoc. 2004, 92: 66-71.

  23. Clauson KA, Marsh WA, Polen HH, Seamon MJ, Ortiz BI: Clinical decision support tools: analysis of online drug information databases. BMC Med Inform Decis Mak. 2007, 7: 7. 10.1186/1472-6947-7-7.

  24. Morbidity and Mortality Weekly Report (MMWR). [http://www.cdc.gov/mmwR/]

  25. Centers for Disease Control and Prevention (CDC). [http://www.cdc.gov]

  26. Mandell GL, Bennett JE, Dolin R: Principles and Practice of Infectious Diseases. 2004, Oxford: Churchill Livingstone, 6

  27. Natural Medicines Comprehensive Database (NMCD). [http://www.naturaldatabase.com]

Acknowledgements

The authors wish to acknowledge the clinical insight and input provided by Raquel Mateo-Bibeau, MD and Jerri Jean Stambaugh, Pharm.D., as well as assistance with the statistical analysis and interpretation of results by William R. Wolowich, Pharm.D.

Author information

Corresponding author

Correspondence to Hyla H Polen.

Additional information

Competing interests

HHP, AZ, JJ, and MP declare that they have no competing interests. KAC has received grant support from Elsevier Science/Gold Standard, Inc. which produces Clinical Pharmacology.

Authors' contributions

HHP conceived the project, developed the study design, wrote the question list, performed data collection, and wrote and critically edited the manuscript. AZ contributed to the study design and question list, conducted the statistical analysis, and wrote and critically edited the manuscript. KAC contributed to the study design and wrote and critically edited the manuscript. JJ wrote the question list, assisted with data collection, and critically edited the manuscript. MP assisted in question development and critically edited the manuscript. All authors read and approved the final manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Polen, H.H., Zapantis, A., Clauson, K.A. et al. Ability of online drug databases to assist in clinical decision-making with infectious disease therapies. BMC Infect Dis 8, 153 (2008). https://doi.org/10.1186/1471-2334-8-153
