Navigating AMCP Dossier Submission: A Comprehensive Guide to Success

Healthcare decision-making demands a solid grasp of value communication and assessment. The AMCP (Academy of Managed Care Pharmacy) dossier is a key instrument for professionals in market access, health economics, and outcomes research: it ensures that payers and formulary committees, the healthcare decision-makers, understand the true value of what you have to offer. Whether you are a seasoned professional or new to the process, the following guidance will help you make an impact with your AMCP submission.

1. Grasp the Purpose

The AMCP dossier isn’t done solely for regulatory purposes; it’s a tool for strategic communication. It provides comprehensive, evidence-based information on the value of your product, including its clinical effectiveness, safety profile, and economic consequences.

2. Adhere to the AMCP Format

The AMCP requires a standardized format for dossier submissions, which is essential for maintaining consistency and enabling comparability across various products. Generally, dossiers are divided into three main parts:

  • Executive Summary: Provides a concise overview of the product’s value proposition, highlighting key clinical and economic outcomes.
  • Clinical Evidence: Contains an exhaustive synopsis of clinical trials, observational studies, and other relevant data demonstrating the product’s efficacy and safety.
  • Economic Evidence: Includes cost-effectiveness analyses (CEA), budget impact analyses (BIA), and other economic evaluations that underscore the financial implications of adopting the product.

3. Leverage Real-World Evidence

While randomized controlled trials (RCTs) remain the gold standard for clinical evidence, the importance of real-world evidence (RWE) in the AMCP dossier cannot be overstated. RWE offers valuable insights into how a product performs in routine clinical practice, enabling payers to assess its impact across diverse patient populations.

4. Customize the Value Proposition

Payers and decision-makers seek a comprehensive understanding of value beyond just efficacy. Customize your value proposition to address the specific concerns of your target audience. Emphasize outcomes that matter most to them, such as reducing hospitalizations, enhancing patient quality of life, or delivering cost savings.

5. Uphold Data Integrity and Transparency

The credibility of your AMCP submission relies on the quality and transparency of the data presented. Ensure that all data sources are reputable and that methodologies are thoroughly documented. Payers must have confidence that the evidence provided is both robust and impartial.

6. Stay Informed on AMCP Guidelines

As the healthcare landscape evolves, so do the guidelines for AMCP submissions. Regularly update yourself on the latest changes to the AMCP Format for Formulary Submissions, as these updates can significantly influence the evaluation of your dossier.

7. Leverage Health Economic Modelling

Health economic models, such as CEA and BIA, are vital elements of the AMCP dossier. These models offer quantitative insights into the product’s value in terms of both health outcomes and financial implications. Ensure your models are rigorously designed, transparent, and closely aligned with payer expectations.
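As a simple illustration, the two headline figures from these models reduce to straightforward arithmetic: a CEA yields an incremental cost-effectiveness ratio (ICER, the extra cost per extra unit of benefit, often a QALY), and a BIA yields the net change in payer spending. The sketch below uses entirely hypothetical inputs; real dossier models involve discounting, time horizons, and uncertainty analysis far beyond this.

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

def budget_impact(eligible_patients, uptake_rate, cost_new, cost_old):
    """Annual net budget impact of switching a share of patients to the new product."""
    switchers = eligible_patients * uptake_rate
    return switchers * (cost_new - cost_old)

# Hypothetical inputs: the new product costs $12,000 more per patient
# and yields 0.4 additional QALYs.
print(icer(50_000, 38_000, 1.9, 1.5))                 # approx. $30,000 per QALY gained
print(budget_impact(10_000, 0.25, 50_000, 38_000))    # annual net spend for the plan
```

Payers typically compare the ICER against a willingness-to-pay threshold, while the BIA speaks to affordability; a dossier needs both perspectives.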

8. Engage Stakeholders Early

Initiate early engagement with payers and key stakeholders during the development process. Gaining insight into their specific needs and concerns allows you to tailor your AMCP submission to effectively address their questions and challenges. This proactive approach also helps to build relationships and trust, which are essential when your dossier is under review.

9. Conduct a Thorough Review

Before finalizing your dossier, ensure it undergoes a comprehensive review. Focus on clarity, coherence, and consistency throughout the document. A meticulously organized and polished submission is more likely to make a strong, positive impression on decision-makers.

Conclusion

Mastering the AMCP dossier submission process is paramount to communicating your product’s value effectively to healthcare decision-makers. By following the required format, integrating real-world evidence, customizing the value proposition, and maintaining data integrity, you can develop a compelling dossier that differentiates your product in a competitive marketplace. Periodically refine your approach as the industry evolves to ensure your submissions remain impactful and aligned with changing standards.

Embracing AI in Health Technology Assessment: Insights from NICE’s Guidance

The integration of artificial intelligence (AI) into Health Technology Assessment (HTA) has become a focal point in the evolving landscape of evidence generation. The National Institute for Health and Care Excellence (NICE) has published a detailed position statement on the integration of AI in evidence generation for HTA. This guidance arrives at a pivotal time when AI’s potential to enhance evidence generation is being keenly explored across healthcare sectors. Below, we delve into the essential facets of NICE’s position and what it means for the future of HTA. Figure 1 summarizes the key points of NICE’s position on AI integration in HTA.

Figure 1. Key Points of NICE’s Position on AI Integration in Health Technology Assessment

1. Balancing Innovation with Risk

AI’s potential to transform evidence generation in HTA is undeniable. From automating literature reviews to enhancing the design of clinical trials, AI can significantly streamline processes. However, NICE emphasizes the importance of balancing these innovations against inherent risks, such as algorithmic bias, cybersecurity vulnerabilities, and transparency challenges. NICE suggests that AI should only be employed when it clearly adds value, ensuring that the benefits outweigh the potential downsides.

2. Human Oversight

AI should augment human decision-making, not replace it. NICE stresses the importance of keeping humans in the loop, ensuring that AI tools are used to enhance, rather than supplant, human expertise. This approach is crucial for maintaining ethical standards and ensuring the reliability of the evidence produced.

3. Ensuring Transparency and Accountability

NICE insists that organizations using AI in evidence generation must prioritize transparency. This means clearly justifying the use of AI, documenting methodologies, and ensuring that AI-driven results are understandable and accessible to all stakeholders. Transparency is vital to maintaining trust in the AI-generated evidence.

4. Early Engagement with NICE

Organizations considering the use of AI in their evidence generation processes are encouraged to engage with NICE early in the development process. This proactive approach can help ensure that AI methodologies are aligned with NICE’s expectations and regulatory standards from the outset, minimizing potential issues later on.

5. Building Trust through Compliance and Ethical Alignment

Any AI methods used must align with existing UK Government frameworks and ethical guidelines. This includes adherence to data protection laws and ensuring that AI applications are both scientifically and technically sound. Organizations are responsible for ensuring that all AI tools and methodologies meet these stringent standards.

6. Ongoing Monitoring and Adaptation

Given the rapidly evolving nature of AI technology, NICE plans to continually review and update its guidelines to reflect new developments and evidence. This commitment ensures that AI’s integration into HTA remains both innovative and responsible, ultimately supporting improved healthcare outcomes.

CONCLUDING REMARKS

NICE’s position on AI in HTA emphasizes the need for responsible, transparent, and ethical integration of AI technologies. While AI holds significant potential to transform HTA, NICE advocates for a cautious approach that safeguards the integrity of evidence generation and decision-making. By adhering to these guidelines, organizations can harness the power of AI to support and enhance evidence generation, ensuring that AI’s benefits are fully realized without compromising the quality or integrity of healthcare decisions.

References

  1. National Institute for Health and Care Excellence. (2024). Use of AI in evidence generation: NICE position statement. Retrieved from https://www.nice.org.uk/about/what-we-do/our-research-work/use-of-ai-in-evidence-generation–nice-position-statement

Understanding Tools and Checklists for Appraising Quality of Randomized Controlled Trials

Randomized controlled trials (RCTs) are considered the highest level of evidence for establishing causal associations in clinical research. As research questions have grown more complex, new methods have emerged and RCT designs have become more intricate.(1) It is therefore important to appraise the quality of these trials using appropriate tools and checklists. This article describes some of the most popular tools and checklists for assessing the risk of bias in publications reporting randomized trial results.

Cochrane risk-of-bias tool for randomized trials (RoB 2.0)

The RoB 2.0 tool is recommended for evaluating the risk of bias in publications presenting randomized trial results in systematic reviews. It is organized into specific domains of bias, each addressing a different aspect of trial design, conduct, and reporting. Within each domain, a series of ‘signalling questions’ gathers information about trial features relevant to the risk of bias. Based on these answers, an algorithm generates a proposed judgment for each domain: ‘Low’ risk of bias, ‘High’ risk of bias, or ‘Some concerns’.(2)
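The step from domain-level judgments to an overall rating follows a published aggregation rule, which can be sketched in a few lines of Python. This is an illustrative simplification: the per-domain algorithms involve many signalling questions, and the point at which multiple ‘Some concerns’ judgments escalate to ‘High’ is ultimately a reviewer judgment (the threshold of three below is an assumption for illustration, not part of the tool).

```python
def rob2_overall(domain_judgements):
    """Aggregate RoB 2 domain judgements into an overall risk-of-bias rating.

    Rule of thumb: 'High' if any domain is high risk (or several domains raise
    some concerns -- the threshold of 3 here is an illustrative assumption);
    'Low' only if every domain is low risk; otherwise 'Some concerns'.
    """
    judgements = [j.lower() for j in domain_judgements]
    if "high" in judgements or judgements.count("some concerns") >= 3:
        return "High"
    if all(j == "low" for j in judgements):
        return "Low"
    return "Some concerns"

# One judgement per RoB 2 domain: randomization, deviations from intended
# interventions, missing outcome data, outcome measurement, reported result.
print(rob2_overall(["Low", "Low", "Some concerns", "Low", "Low"]))
```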

CASP Randomized Controlled Trial Appraisal Tool

The CASP checklist offers a structured approach for appraising publications of RCTs.(3) It helps researchers systematically evaluate the validity, results, and relevance of trials, ensuring that critical aspects of the study design are explored.

Jadad Scale

The Jadad Scale is designed to assess the quality of RCT publications, focusing on three core elements: random assignment, blinding, and the flow of participants through the trial. It provides a simple numerical score (0–5) that informs meta-analyses and systematic reviews and helps identify high-quality studies to guide clinical practice.(4)
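Because the Jadad score is a short additive checklist, it translates directly into code. The sketch below implements the five published items (points for reported randomization, double-blinding, and a description of withdrawals, with a point added or deducted for an appropriate or inappropriate method); the function signature and argument names are my own.

```python
def jadad_score(randomized, randomization_appropriate,
                double_blind, blinding_appropriate,
                withdrawals_described):
    """Compute the Jadad score (0-5) from the five published items.

    The 'appropriate' arguments accept True (appropriate method described),
    False (clearly inappropriate method), or None (method not described).
    """
    score = 0
    if randomized:
        score += 1
        if randomization_appropriate is True:
            score += 1
        elif randomization_appropriate is False:
            score -= 1
    if double_blind:
        score += 1
        if blinding_appropriate is True:
            score += 1
        elif blinding_appropriate is False:
            score -= 1
    if withdrawals_described:
        score += 1
    return score

# A trial with appropriate randomization and blinding, dropouts described:
print(jadad_score(True, True, True, True, True))  # 5
```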

Centre for Evidence-Based Medicine (CEBM) – RCT Checklist

The CEBM RCT checklist is another critical appraisal tool designed to evaluate RCT publications. It covers essential aspects such as randomization, blinding, follow-up, and control of confounding variables, and appraises the reliability, importance, and applicability of clinical evidence.(5)

Joanna Briggs Institute (JBI) Checklist

The JBI checklist is a comprehensive tool for critically appraising RCT publications. It includes criteria for evaluating the trial design, conduct, and analysis, ensuring that all relevant factors are considered.(6) The checklist is part of JBI’s suite of evidence-based practice resources, widely used in healthcare research.

Scottish Intercollegiate Guidelines Network (SIGN)

SIGN provides a straightforward system for critically appraising RCT publications. Its methodology emphasizes assessing internal and external validity by identifying key factors such as bias and confounding. SIGN uses tailored checklists to evaluate RCTs, grading evidence strength on a scale from ‘++’ to ‘–’ and classifying recommendations from ‘A’ to ‘D’.(7)

NHLBI Study Quality Assessment of Controlled Intervention Studies

This tool evaluates publications of RCTs by addressing key study design, conduct, and reporting aspects. It includes criteria such as adequacy of randomization, allocation concealment, blinding, baseline similarity, dropout rates, adherence to protocols, outcome assessment, power calculation, prespecified outcomes, and intention-to-treat analysis. Each criterion is assessed with specific questions to identify potential bias, and studies are rated as ‘Good,’ ‘Fair,’ or ‘Poor’ based on the overall risk of bias.(8)

Conclusion

The tools presented above provide structured and systematic approaches for evaluating RCT publications. They help researchers critically assess the methodology and risk of bias in reports of RCT results, supporting more reliable conclusions about health outcomes.

References

  1. Zabor EC, Kaizer AM, Hobbs BP. Randomized Controlled Trials. Chest. 2020;158(1s):S79-s87.
  2. Cochrane Methods. Bias. RoB 2: A revised Cochrane risk-of-bias tool for randomized trials. 2024. Available from: https://methods.cochrane.org/bias/resources/rob-2-revised-cochrane-risk-bias-tool-randomized-trials. Last accessed on: 07 August 2024.
  3. UNC. Systematic Reviews: Step 6: Assess Quality of Included Studies. What are Quality Assessment tools? Randomized Controlled Trials (RCTs). 2024. Available from: https://guides.lib.unc.edu/systematic-reviews/assess-quality. Last accessed on: 07 August 2024.
  4. De Cassai A, Boscolo A, Zarantonello F, Pettenuzzo T, Sella N, Geraldini F, et al. Enhancing study quality assessment: an in-depth review of risk of bias tools for meta-analysis-a comprehensive guide for anesthesiologists. J Anesth Analg Crit Care. 2023;3(1):44.
  5. CEBM. Critical Appraisal tools. Randomised Controlled Trials (RCT) Critical Appraisal Sheet. 2024. Available from: https://www.cebm.ox.ac.uk/resources/ebm-tools/critical-appraisal-tools. Last accessed on: 07 August 2024.
  6. JBI. Checklist for Randomised Controlled trials. 2020. Available from: https://jbi.global/sites/default/files/2020-08/Checklist_for_RCTs.pdf. Last accessed on: 07 August 2024.
  7. Baker A, Young K, Potter J, Madan I. A review of grading systems for evidence-based guidelines produced by medical specialties. Clin Med (Lond). 2010;10(4):358-63.
  8. NIH. NHLBI. Study Quality Assessment Tools. 2021. Available from: https://www.nhlbi.nih.gov/health-topics/study-quality-assessment-tools. Last accessed on: 07 August 2024.

Unlocking Hidden Gems: Exploring Unconventional Databases to Enhance SLRs

Systematic literature reviews (SLRs) are essential for gathering comprehensive evidence to support decision-making. While PubMed, Cochrane, and Embase are well-known resources, there are lesser-known databases that offer valuable insights and data. Exploring these unconventional databases can uncover hidden gems that significantly enhance the robustness and breadth of your SLRs. Figure 1 summarises seven such underutilised resources.

Figure 1. Unconventional Databases to Enhance Systematic Literature Reviews

1. TRIP Database (Turning Research Into Practice)

The TRIP Database is a clinical search engine designed to allow health professionals to rapidly identify high-quality clinical evidence. It indexes evidence-based content to support clinical practice and decision-making, ensuring your SLRs are based on the highest quality evidence available.

2. SciELO (Scientific Electronic Library Online)

SciELO provides open access to scientific literature from Latin America, Spain, Portugal, and South Africa. It offers a treasure trove of articles in public health, social sciences, and health sciences, making it an excellent resource for regional studies and diverse perspectives.

3. Epistemonikos

A comprehensive database of health evidence, Epistemonikos aggregates SLRs and other types of evidence from multiple sources. Its user-friendly interface and focus on health make it a powerful tool for uncovering high-quality evidence across a range of medical and health-related topics.

4. International HTA Database

Maintained by the International Network of Agencies for Health Technology Assessment (INAHTA), this database provides reports from HTA agencies worldwide. It could be essential for finding grey literature and HTA reports that might not be indexed elsewhere.

5. Europe PMC (European PubMed Central)

Europe PMC offers access to a broad range of biomedical literature, including research articles, reviews, and patents. It integrates literature from PubMed as well as additional sources, providing a more extensive search for European studies and beyond.
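Europe PMC exposes its search through a RESTful web service, which makes it straightforward to script reproducible SLR queries. The sketch below only constructs a search URL (no request is sent); the endpoint and the `PUB_TYPE`/`OPEN_ACCESS` field names reflect the service as I understand it and should be checked against the current Europe PMC documentation before use.

```python
from urllib.parse import urlencode

# Base URL of the Europe PMC RESTful search service (assumed current at time of writing).
EUROPE_PMC_SEARCH = "https://www.ebi.ac.uk/europepmc/webservices/rest/search"

def build_search_url(query, page_size=25):
    """Construct a Europe PMC search URL that returns JSON results."""
    params = {"query": query, "format": "json", "pageSize": page_size}
    return f"{EUROPE_PMC_SEARCH}?{urlencode(params)}"

# Example: open-access reviews on heart failure (field names are assumptions).
print(build_search_url('heart failure AND PUB_TYPE:"review" AND OPEN_ACCESS:y'))
```

Scripted queries like this can be logged alongside the SLR protocol, making the search strategy auditable and easy to rerun at the update stage.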

6. LILACS (Latin American and Caribbean Health Sciences Literature)

LILACS, part of the Virtual Health Library (VHL), is the most important and comprehensive index of scientific and technical literature in Latin America and the Caribbean. It includes various types of documents, offering valuable insights into regional health issues.

7. CEA Registry (Cost-Effectiveness Analysis Registry)

The CEA Registry, managed by Tufts Medical Center, is a comprehensive database of cost-effectiveness analyses in health and medicine. It provides detailed information on the economic evaluations of health interventions, crucial for HEOR studies focusing on cost-effectiveness.

CONCLUDING REMARKS

Venturing beyond conventional databases can significantly improve the depth and quality of your SLRs. Expanding your toolkit with these free resources ensures a broader spectrum of evidence is considered, making your findings more robust and more reflective of the global research landscape.

References

  1. Stevens GA, Fitterling L, Kelly FV. Trip database: turning research into practice for evidence-based care. Medical Reference Services Quarterly. 2017 Oct 2;36(4):391-8.
  2. Rocha EM, Osaki TH, Kara N, Alves M, Moral C. SciELO 25 years: The Scientific Electronic Library Online celebrates its 25th anniversary. Arquivos Brasileiros de Oftalmologia. 2023 Dec 11;87(1):e2024-1007.
  3. Rada G, Perez D, Araya-Quintanilla F, Avila C, Bravo-Soto G, Bravo-Jeria R, Canepa A, Capurro D, Castro-Gutierrez V, Contreras V, Edwards J. Epistemonikos: a comprehensive database of systematic reviews for health decision-making. BMC medical research methodology. 2020 Dec;20:1-7.
  4. Bellemare CA, Dagenais P, Suzanne K, Béland JP, Bernier L, Daniel CÉ, Gagnon H, Legault GA, Parent M, Patenaude J. Ethics in health technology assessment: a systematic review. International Journal of Technology Assessment in Health Care. 2018 Jan;34(5):447-57.
  5. Europe PMC Consortium. Europe PMC: a full-text literature database for the life sciences and platform for innovation. Nucleic acids research. 2015 Jan 28;43(D1):D1042-8.
  6. Manriquez JJ. Searching the LILACS database could improve systematic reviews in dermatology. Archives of dermatology. 2009 Aug 1;145(8):947-68.
  7. Neumann PJ, Thorat T, Shi J, Saret CJ, Cohen JT. The changing face of the cost-utility literature, 1990–2012. Value in health. 2015 Mar 1;18(2):271-7.