
Contextual factors in maternal and newborn health evaluation: a protocol applied in Nigeria, India and Ethiopia

Abstract

Background

Understanding the context of a health programme is important in interpreting evaluation findings and in considering the external validity for other settings. Public health researchers can be imprecise and inconsistent in their usage of the word “context” and its application to their work. This paper presents an approach to defining context, to capturing relevant contextual information and to using such information to help interpret findings from the perspective of a research group evaluating the effect of diverse innovations on coverage of evidence-based, life-saving interventions for maternal and newborn health in Ethiopia, Nigeria, and India.

Methods

We define “context” as the background environment or setting of any programme, and “contextual factors” as those elements of context that could affect implementation of a programme. Through a structured, consultative process, contextual factors were identified, aiming to strike a balance between comprehensiveness and feasibility. Thematic areas included demographics and socio-economics, epidemiological profile, health systems and service uptake, infrastructure, education, environment, politics, policy and governance. We outline an approach for capturing and using contextual factors while maximizing use of existing data. Methods include desk reviews, secondary data extraction and key informant interviews. Outputs include databases of contextual factors and summaries of existing maternal and newborn health policies and their implementation. Use of contextual data will be qualitative in nature and may assist in interpreting findings in both quantitative and qualitative aspects of programme evaluation.

Discussion

Applying this approach was more resource intensive than expected, partly because routinely collected information was not consistently available across settings and more primary data collection was required than anticipated. The data were used only minimally, partly due to a lack of evaluation results that needed further explanation, but also because contextual data were not available for the precise units of analysis or time periods of interest. We would advise others to consider integrating contextual factors within other data collection activities, and to conduct regular reviews of maternal and newborn health policies. This approach and the learnings from its application could help inform the development of guidelines for the collection and use of contextual factors in public health evaluation.

Background

Context is an important consideration when interpreting public health evaluation findings or contemplating implementation of a public health programme in a new setting. However, there are several challenges, among which is the imprecise and inconsistent usage of the word “context”.

Contextual factors, which we define as those elements of context that could affect implementation of a programme, range from environmental disasters or political instability to health system weaknesses or the advent of another health programme. Collection and assessment of these contextual factors are critical to establishing the internal and external validity of study findings. The contextual factors most relevant to health programme evaluation are those that may confound or modify the effect of programmes on the outcome of interest, particularly for large-scale programme evaluations where randomised studies are neither feasible nor appropriate [1]. Without consideration of contextual factors, insights and explanations for outcomes seen or unseen could be lost, leading to incorrect conclusions about a programme’s value. Furthermore, there is a clear need to make evaluation findings useful not only to the programme in question, but also to provide information on transferability and applicability to inform decisions to scale up the programme or implement it elsewhere [2, 3].

There are several challenges faced by researchers incorporating context. The first is deciding which contextual factors are most relevant to capture. While all studies must be selective given the sheer magnitude of contextual factors that could be collected, one benefit of selecting a wide range of contextual factors is the ability to explore patterns in factors that were not originally hypothesized to be directly relevant. Unfortunately, it is often unclear what will be important until the analysis stage, when changes that could be attributed to the intervention are revealed. This process is further complicated by the limited availability of reliable secondary data sources at the desired unit of analysis and for the appropriate time frame.

Public health practitioners can draw on academic disciplines and theoretical foundations to incorporate context, including systems theory [4, 5], realist evaluation [6, 7], diffusion of innovations [8, 9], the normalisation process model [10], policy analysis [11] and anthropology. Several evaluation frameworks also incorporate context [12,13,14,15,16,17]. These academic traditions and frameworks influence evaluators’ perspectives on defining, collecting and using contextual information. For example, epidemiologists and more quantitatively orientated evaluators may view confounding and effect modification [see Table 1 for definitions] as the primary reason for collecting contextual factors, and this affects the types of analyses they prioritise. Some quantitative researchers use randomisation as a way to deal with contextual variability, although there are limitations to randomised controlled trials, and for complex or large-scale evaluations this approach may be neither feasible nor appropriate, as mentioned above [18, 19]. Victora et al. [1] argue for contextual factors to be considered in randomised studies, as randomisation reduces but does not eliminate the risk of confounding, and randomised studies are often conducted in atypical conditions. This issue is particularly acute for cluster-randomised trials with a small number of clusters. Furthermore, randomisation does not address the issue of external validity. More qualitatively orientated researchers may argue that context is so deeply embedded and intertwined with programmes that quantitative methods attempting to “control for context” are inherently invalid. Regardless of study design and paradigm, there is widespread agreement that evaluators should consider collecting contextual factors for use when interpreting their findings [20].

Table 1 Definitions of key epidemiological terms
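To make the distinction between confounding and effect modification concrete, the short simulation below is a hypothetical sketch in Python; it is not part of the study protocol and uses invented numbers. It assumes a single contextual factor, a concurrent health programme that is more common in intervention areas, which both confounds a crude between-arm comparison and modifies the size of the evaluated programme’s effect.

```python
import random

random.seed(1)

def coverage(concurrent_programme: bool, evaluated_programme: bool) -> float:
    """Simulated coverage (%) of a life-saving intervention in one cluster.

    Illustrative assumptions (not study data): a concurrent programme (a contextual
    factor) adds 10 percentage points on its own, and the evaluated programme adds
    15 points where the concurrent programme is absent but only 5 where it is
    present (effect modification).
    """
    value = 40.0
    if concurrent_programme:
        value += 10.0
    if evaluated_programme:
        value += 5.0 if concurrent_programme else 15.0
    return value + random.gauss(0.0, 2.0)  # cluster-level noise

# The concurrent programme is assumed more common in intervention areas
# (70% vs 30% of clusters), so it also confounds the crude comparison.
clusters = [(random.random() < (0.7 if arm else 0.3), arm)
            for arm in [True] * 200 + [False] * 200]
results = [(ctx, arm, coverage(ctx, arm)) for ctx, arm in clusters]

def mean(xs):
    return sum(xs) / len(xs)

crude = (mean([c for _, arm, c in results if arm])
         - mean([c for _, arm, c in results if not arm]))
stratified = {ctx: (mean([c for x, arm, c in results if x == ctx and arm])
                    - mean([c for x, arm, c in results if x == ctx and not arm]))
              for ctx in (True, False)}

# The crude difference mixes the two stratum-specific effects with roughly
# 10 * (0.7 - 0.3) = 4 points attributable to the uneven distribution of the
# concurrent programme rather than to the evaluated programme.
print(f"Crude difference between arms:             {crude:.1f}")
print(f"Effect where concurrent programme present: {stratified[True]:.1f}")
print(f"Effect where concurrent programme absent:  {stratified[False]:.1f}")
```

In this sketch the crude difference cannot be interpreted without the stratified view, which is precisely why information on such contextual factors needs to be collected alongside the evaluation data.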

There are examples of complex public health evaluation studies that have incorporated contextual analysis to varying degrees [1, 5, 16, 21,22,23,24,25,26,27,28,29,30,31,32]. Few of these studies include details of how contextual factors were selected or how these data were collected and analysed, instead reserving limited word space in publications for other evaluation components; this lack of attention to contextual data reflects an emphasis on other evaluation priorities. A recent workshop of researchers who evaluate complex public health programmes discussed the extent to which context had featured in their past work, and an overwhelming message from the proceedings was a call for renewed focus and emphasis on context [33].

Recently, the UK Medical Research Council (MRC) released guidelines for how to conduct process evaluations, expanding upon earlier guidelines for developing and evaluating complex health programmes that called for integrating process and outcome evaluation [34,35,36]. The new guideline defines context and process evaluation, highlighting their interrelated nature [see Table 2] [34]. This represents one of the most comprehensive efforts to date to set standards for how best to understand, measure and account for “context” in public health evaluation [35]. The MRC guideline embeds contextual analysis in process evaluation and represents a substantial step forward in supporting a more consistent and transparent meaning of “context” and how it is used in public health evaluations. However, more specific guidance would be useful, including a minimum set of contextual factors to collect for particular areas of public health, how to assess data quality, how to analyse the data and how to use the findings.

Table 2 MRC guideline definitions of context and process evaluation

In parallel, there have been efforts to make the process of adapting proven programmes to new contexts more rigorous. Gomm [37], a realist evaluator, developed a tool to analyse a programme’s context with a specific vision for how to adapt it to another context. Stirman et al. [38] developed a framework and coding system for documenting how programmes evolve during adaptation to new contexts. Bergstrom et al. [39] designed a tool to inform programme design in low- and middle-income countries that assesses organisational context through interviews with healthcare workers. Waters et al. [3] argue for the need to have a clear description of study context for the purposes of making systematic reviews more useful to decision makers. The Oxford Implementation Index aims to guide systematic review development by standardising the review of implementation data, including which contextual factors should be collected in public health trials. Scale-up and the role of contextual factors have also been studied at length in the health policy field [40,41,42,43].

Most of these resources do not provide explicit guidance around which contextual factors are relevant for a maternal and newborn health programme evaluation. Bryce et al. [44] developed an evaluation framework for maternal, newborn and child health programmes that includes some detail about which contextual factors are relevant to collect. While this framework is the most specific, it leaves room for further recommendations around methods for collection, analysis and use.

Here we present an approach for the collection and use of contextual data in the evaluation of large scale complex maternal and newborn health (MNH) programmes in Ethiopia, Gombe state in northeast Nigeria and Uttar Pradesh in India [45]. We aimed to be consistent in how we defined context and prescriptive in how we captured relevant contextual information. In addition, we discuss how such contextual information has been applied to the interpretation of evaluation findings thus far and consequently the changes we have made to how we collect and use contextual information moving forward.

Main text

Methods/design

Process outline

Informed Decisions for Actions to improve maternal and newborn health (IDEAS) developed research questions to test the Bill & Melinda Gates Foundation’s theory of change for its maternal, newborn and child health strategy [45]. These questions frame the domains of interest for contextual information and are shown in Fig. 1.

Fig. 1 IDEAS learning questions. Visual of the learning questions guiding the broader research project

Activities designed to answer these questions include: before-and-after household and facility-based surveys in intervention and comparison areas, capturing interactions between families and frontline health workers and coverage of life-saving interventions; community-based qualitative inquiry on interactions between families and frontline workers; and qualitative interviews with a range of stakeholders on scale-up. In each geography, we engaged local partners who managed data collection both for the evaluation studies and for contextual factors.

A scoping exercise was conducted in 2012 to assess the feasibility of collecting and using contextual data, informed by the experience of researchers involved in similar evaluations [1, 32, 44]. It was immediately clear that the quantity, availability and quality of secondary data sources varied considerably between geographies, and we therefore decided to take a more systematic approach that included primary contextual data collection to fill gaps; the methods are described below.

We established an internal advisory group to oversee implementation and maintain quality across the geographies. One staff member was assigned to work with each of the country-based teams collecting and reviewing these data, to maintain consistency in implementation, document implementation and engage the advisory group as needed.

Defining context and selecting contextual factors

This section contains an overview of the methods: a list of the steps involved can be found in Table 3.

Table 3 Contextual data protocol process

For the purposes of our work we defined context as the background environment, or setting for any programme. We defined contextual factors as those elements of context that could affect implementation of a programme. The list of contextual factors hypothesized to be relevant for evaluating coverage of evidence-based, life-saving interventions for maternal and newborn health fell into the following thematic areas:

  • Demographics and socio-economics

  • Epidemiological profile

  • Health systems

  • Health service uptake

  • Infrastructure

  • Education

  • Environment

  • Politics, policy and governance

  • Maternal and newborn health policy and implementation

Contextual factors were then split into two categories, with the intention of further focusing our time and effort by limiting data collection for category 1 relative to category 2 contextual factors. The categories are defined in the following way:

  • Category 1 (structural): factors unlikely to change rapidly over the course of the evaluation, for example the religion or ethnicity of people living in a given geography. Leichter [11] refers to these as “slow changing” or “structural” factors.

  • Category 2 (situational): factors hypothesized to change relatively quickly and thus requiring more frequent review; Leichter calls these “situational”, that is, particular to a specific point in time [11]. Factors of particular relevance to understanding maternal and newborn health outcomes were also included in this category. Examples include health programmes in the area, numbers of health care workers, vaccination campaigns, political instability and natural disasters.

The contextual factors were then classified according to whether they could feasibly be captured through secondary data extraction from existing reports or required primary data collection; these two methods are described in greater detail below. The intention was to repeat the secondary data extraction annually and the primary data collection every 2 years, as needed [see Tables 4 and 5 for the lists of contextual factors disaggregated by method of data collection].

Table 4 Contextual factors for secondary data extraction
Table 5 Primary data collection checklist
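To illustrate how the resulting classification might be held in a simple catalogue, the sketch below is a hypothetical Python representation; the factor names, geographies and review frequencies are placeholders for illustration, not the actual content of Tables 4 and 5. Each factor is tagged with its thematic area, Leichter-style category and intended collection method.

```python
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    STRUCTURAL = 1   # Category 1: slow-changing factors
    SITUATIONAL = 2  # Category 2: fast-changing factors, reviewed more often

class Method(Enum):
    SECONDARY_EXTRACTION = "secondary data extraction"
    PRIMARY_COLLECTION = "primary data collection"

@dataclass
class ContextualFactor:
    name: str
    thematic_area: str           # e.g. "Health systems", "Infrastructure"
    category: Category
    method: Method
    geography: str               # administrative unit the factor is reported for
    review_frequency_years: int  # intended repeat interval for data collection

# Hypothetical entries for illustration only; the real lists are in Tables 4 and 5.
catalogue = [
    ContextualFactor("Ethnic composition of population",
                     "Demographics and socio-economics",
                     Category.STRUCTURAL, Method.SECONDARY_EXTRACTION,
                     "Gombe state", 1),
    ContextualFactor("Number of frontline health workers",
                     "Health systems",
                     Category.SITUATIONAL, Method.PRIMARY_COLLECTION,
                     "Uttar Pradesh district", 2),
]

# Factors needing more frequent review can then be pulled out for follow-up.
situational = [f.name for f in catalogue if f.category is Category.SITUATIONAL]
print(situational)
```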

Secondary data extraction

In partnership with collaborators based in Nigeria, Ethiopia and India, contextual factors from the multi-geography list were adapted and expanded to specific country contexts. At a minimum for each country the contextual factors were adjusted to reflect the relevant administrative areas and cadres of health care and frontline workers. Where available, data were disaggregated by age and gender.

A desk review was conducted within each country to identify the availability of these contextual factors. Sources were identified, and their frequency of data collection and coverage relative to our geographic areas of interest were documented. Availability of sub-national data was noted, as the programmes being evaluated were not implemented in all areas within these countries. Frequency of data collection was particularly relevant for Category 2 contextual factors, as we needed to document change over the period of programme implementation. Where multiple options for a given contextual factor existed, team members familiar with the geographies, in conjunction with local partners, assessed source quality and noted which source was more widely used.

The timing of data collection relative to the availability of reports was an anticipated challenge, as ideally sources would collect data around the same time as the evaluation survey data were collected. However, even if data were collected at the same time, reporting delays could impede availability and therefore limit usability.

Following the desk review, the local partner extracted the data from these secondary sources for the time periods relevant to the evaluation studies (coinciding with baseline and midline surveys).
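A desk-review catalogue of candidate sources could be recorded along the lines of the hypothetical sketch below (Python; the field names, the example source and the screening rule are assumptions for illustration, not part of the protocol). It captures the attributes described above: which factors a source covers, when and how often it is collected, whether it reports at sub-national level, and how long its reporting lag is.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SecondarySource:
    """One desk-review record for a candidate secondary data source (hypothetical fields)."""
    name: str                      # e.g. a household survey or routine health information report
    contextual_factors: List[str]  # factors the source can supply
    collection_years: List[int]    # years in which the underlying data were collected
    subnational_coverage: bool     # reports at the sub-national units being evaluated?
    reporting_lag_years: float     # delay between data collection and published report
    widely_used: bool              # crude proxy for perceived quality and acceptance
    notes: Optional[str] = None

def usable_for(sources: List[SecondarySource], factor: str, survey_year: int,
               max_lag: float = 2.0) -> List[SecondarySource]:
    """Sources covering the factor at sub-national level, collected close to a survey round."""
    return [s for s in sources
            if factor in s.contextual_factors
            and s.subnational_coverage
            and s.reporting_lag_years <= max_lag
            and any(abs(y - survey_year) <= 1 for y in s.collection_years)]

# Illustrative use: screen sources for one factor around a hypothetical 2012 survey round.
sources = [
    SecondarySource("National household survey (hypothetical)",
                    ["antenatal care coverage", "ethnic composition"],
                    [2010, 2013], subnational_coverage=True,
                    reporting_lag_years=1.5, widely_used=True),
]
print([s.name for s in usable_for(sources, "antenatal care coverage", survey_year=2012)])
```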

Policy summary

Following the desk review and secondary data extraction, and prior to primary data collection, a maternal and newborn health policy summary was prepared by our local partners through a document review of existing policies, strategies and programmes. These policy memos included a brief introduction to each country, covering key demographic and maternal and newborn health statistics, followed by a paragraph- to half-page summary of each maternal and newborn health-related policy, strategy or programme. Each draft was reviewed internally for comprehensiveness and quality. The summary served as the reference point for documenting implementation of existing policies and any policy changes captured through the primary data collection checklist.

Primary data collection

Primary data collection was supported by local partners. It began following the secondary data extraction and the preparation of the draft policy summary. The purpose of primary data collection was to:

  1. Fill gaps in secondary data;

  2. Validate secondary data with the advisory group and key informants;

  3. Identify factors not originally hypothesized as being relevant; and

  4. Identify changes related to maternal and newborn health policies and document implementation of existing maternal and newborn health policies.

The finalised checklists (see Table 5) were populated with data from documents identified through the desk review and internet searches. The partially completed checklists were then circulated to the advisory group so that internal tacit knowledge was fully used before approaching key informants.

Key informants were identified by in-country staff and senior staff of our local partner. The key informants were asked to review the checklist and provide information where gaps persisted. This process started with centrally available individuals and snowballed to subnational levels. Approximately 5–10 key informants (one or two per subnational geographic area) were anticipated per country. These structured interviews took place primarily in person. For example, in India, after informal, unstructured interviews with state-level government employees to help identify appropriate secondary sources, we interviewed district-level government employees including the district programme manager, assistant chief medical officer and chief medical officer.
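One way to track which checklist items still need key informant input is sketched below (Python; the item names, statuses and fields are hypothetical illustrations of the workflow just described, not the actual Table 5 checklist).

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

class ItemStatus(Enum):
    FILLED_FROM_DOCUMENTS = "filled from desk review / internet search"
    FILLED_BY_ADVISORY_GROUP = "filled from internal tacit knowledge"
    GAP = "gap: to ask key informants"
    VALIDATED = "validated by key informant"

@dataclass
class ChecklistItem:
    contextual_factor: str
    geography: str
    value: Optional[str] = None
    source: Optional[str] = None
    status: ItemStatus = ItemStatus.GAP

def remaining_gaps(checklist: List[ChecklistItem]) -> List[ChecklistItem]:
    """Items still needing key informant input before interviews are scheduled."""
    return [item for item in checklist if item.status is ItemStatus.GAP]

# Hypothetical items; the real checklist content is summarised in Table 5.
checklist = [
    ChecklistItem("Ongoing vaccination campaigns", "Gombe state"),
    ChecklistItem("Recent natural disasters", "Uttar Pradesh district",
                  value="none reported for the study period",
                  source="state bulletin (hypothetical)",
                  status=ItemStatus.FILLED_FROM_DOCUMENTS),
]
print([item.contextual_factor for item in remaining_gaps(checklist)])
```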

Our approach combined secondary and primary data collection to address some of the limitations of existing data sources. In spite of these limitations, we believed that the contextual data would lend insight into public health programme implementation, and that documenting and using such data in a standardised way should be a best practice in public health programme evaluation.

Contextual data use

We outlined the following ways we intended to use this information in the testing of the specified theory of change:

  • Interpret patterns in the quantitative and qualitative data used to evaluate whether and how maternal and newborn health programmes increase coverage of life-saving interventions.

  • Gather supplemental information to further understand how and why scale-up happens and whether these scaled programmes increase coverage of life-saving interventions.

  • Provide an opportunity through targeted interviews to ask questions of key informants about the preliminary findings to assist in interpretation.

Given reliance on secondary data sources, we anticipated some misaligned data collection timeframes and overall problems with quantitative data quality. Therefore, the specific uses of contextual data were always intended to be qualitative in nature.

Following the secondary data extraction and review of survey data for patterns in need of clarification, a more detailed data analysis plan was to be developed for each geography that would also identify existing information gaps to address during the primary data collection phase.

Preliminary results

Implementation status

Implementation of this approach to collecting and using contextual factor data has been challenging. Each geography has had its own unique set of complications and solutions, leading to variability across geographies in how the approach was applied. Below is an illustrative example of our experience in Gombe, Nigeria.

In Gombe, few published reports are available, either at the federal level reflecting Gombe-specific data or within Gombe state itself. Limited data were extracted through in-person interactions with government staff managing state-level databases, and the rest were captured through expanded primary data collection. Costs were minimised by combining this with other data collection efforts for the MNH programme evaluation. After developing a first draft of the policy memo in Nigeria, we realised that a key point of interest, the degree of policy awareness and implementation, could only be ascertained through further primary data collection. Due to instability in Gombe, data collection to complete this part of the work was delayed.

The combination of limited secondary data availability and instability required adjustments to our approach and to the frequency of data collection. Based on our implementation experience in Gombe and elsewhere, the following changes to the approach were made:

  • Policy memo development was changed to a two-stage process, with a draft emerging from documentary review and a final version following primary data collection capturing policy awareness and implementation. This then evolved further, from a one-off general maternal and newborn health policy memo to a more specific dashboard. The dashboard lists key maternal and newborn health care services and interventions and documents supportive policies and the degree of implementation for each country (a sketch of one possible dashboard record follows this list). Please see Additional file 1: Annex 1 for the list of the specific services and interventions screened for inclusion in existing policies and strategies.

  • Secondary data extraction is no longer repeated annually. Given the limited secondary sources in Nigeria and the delays in accessing data in the other geographies, this frequency was impractical. Many of the data sources were available only in hard copy and some online platforms were not reliable; in the absence of updated central repositories it was not possible to implement annual extraction as envisioned.
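As flagged in the first bullet above, a single dashboard record might be structured as in the hypothetical sketch below (Python). The fields follow the description of Additional file 1: Annex 1, while the service, policy name and status values are invented placeholders rather than actual dashboard content.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Implementation(Enum):
    NOT_IMPLEMENTED = "not implemented"
    PARTIAL = "partially implemented"
    WIDESPREAD = "widely implemented"
    UNKNOWN = "unknown"

@dataclass
class DashboardEntry:
    """One row of the policy dashboard; fields mirror the description of Additional file 1."""
    service_or_intervention: str      # a maternal or newborn health service or intervention
    country: str
    supportive_policy: Optional[str]  # name of the supportive policy or strategy, if any
    policy_date: Optional[int]        # year the supportive policy was issued
    policy_excerpt: Optional[str]     # relevant excerpt from the policy text
    implementation: Implementation    # commentary condensed to a status
    implementation_notes: str = ""    # free-text commentary from documents and key informants

# Invented placeholder values for illustration only.
entry = DashboardEntry(
    service_or_intervention="chlorhexidine for newborn cord care",
    country="Nigeria",
    supportive_policy="national newborn health strategy (placeholder name)",
    policy_date=None,                 # left blank until confirmed by document review
    policy_excerpt=None,
    implementation=Implementation.UNKNOWN,
    implementation_notes="key informant interviews delayed by instability in Gombe",
)
print(entry.service_or_intervention, entry.implementation.value)
```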

Below are two illustrative examples of experiences using contextual data in Uttar Pradesh, India and Gombe State, Nigeria.

Uttar Pradesh, India

Researchers leading a facility readiness analysis conducted as part of the broader MNH evaluation suspected that supply chain variation between two types of facilities accounted for systematic differences in the stocking of key MNH commodities. To test this hypothesis, we added a related question to our checklist. Ultimately, respondents were unaware of any differences in supply chains that could account for this distinction.

Gombe State, Nigeria

Results from the evaluation surveys in Gombe State, Nigeria showed an increase in syphilis screening at health facilities. Initially this was believed to be the result of the grantee innovations, the programmes we were evaluating. However, upon further investigation we learned that another project focused on syphilis screening was being implemented in the same health facilities and was the more likely reason for the increase in syphilis testing. We learned of this programme through informal means, as our formal contextual data collection activities were delayed in Nigeria. This experience highlights the importance of collecting and using contextual data, including informal information from colleagues in-country, for accurate interpretation of evaluation data.

To date, these have been the clearest examples of our approach to using contextual data in practice.

Future plans

IDEAS was involved in the evaluation of community-based newborn care (CBNC), which built upon components of Bill & Melinda Gates Foundation-funded programmes in Ethiopia and therefore represents an example of a scaled-up programme. The CBNC evaluation in Ethiopia integrated a limited set of contextual factors into the baseline and midline surveys. While this was not originally part of our approach, it proved an easy and fruitful way to collect contextual data. One major advantage relative to the secondary contextual data extracted via the original approach is that the contextual factors collected in the surveys are explicitly linked to the relevant geographies and time periods of interest. Additionally, as the implementation of surveys requires authorisation from woreda (district) leaders, this presents an opportunity to ask questions and collect primary data using the checklist. Across all three countries the content of the policy memos proved to be of particular interest to the funders of this research, the Bill & Melinda Gates Foundation. We envision that our approach will continue to evolve to maximise the efficiency of data collection and the value of the findings.

Conclusions

Our intention in publishing this paper was to improve clarity on the potential use of contextual data and to share with other researchers a set of steps for collecting and using such data in complex maternal and newborn health evaluations. The contextual factor data collected have thus far been less valuable than anticipated, and we will continue to assess their utility and adjust our approach as warranted.

Past experiences with capturing contextual data for the Integrated Management of Childhood Illness (IMCI) [1] and Expanded Quality Management Using Information Power (EQUIP) [32] evaluations reveal that substantial time and effort can be invested in collecting contextual data, and that not all of those efforts will yield meaningful information. While our implementation experience showed that our approach needed to evolve, the learnings presented here could help inform the development of more specific guidelines for the use of contextual factors in public health evaluation with minimal resource use.

One of the major challenges this work faced was the decentralised nature of contextual factor data. National data repositories could ensure in-country ownership and improve use of such data. With the advent of the Sustainable Development Goals, national statistics offices may be resourced to meet the anticipated measurement mandate that extends well beyond the health sector. A knock-on effect of this would be higher quality, more easily accessible contextual data for public health evaluations.

In the meantime, in consultation with our government partners, we intend to make the data we have gathered publicly available in repositories, where feasible within the countries involved, for other health researchers who may have contextual factor needs that our efforts can address. While these contextual factors were selected with MNH programmes in mind, they would likely have applicability for other health programmes. In Ethiopia, work is underway to create a central data repository which may be an appropriate home for this data set. It is our hope that more countries will follow suit thus enabling greater local access to contextual factor data for MNH evaluations and beyond.

Abbreviations

CBNC: community-based newborn care

EQUIP: Expanded Quality Management Using Information Power

IDEAS: Informed Decisions for Actions to improve maternal and newborn health

MNH: maternal and newborn health

MRC: Medical Research Council

PROM: pre-term rupture of membranes

References

  1. Victora CG, Schellenberg JA, Huicho L, Amaral J, El Arifeen S, Pariyo G, Manzi F, Scherpbier RW, Bryce J, Habicht JP. Context matters: interpreting impact findings in child survival evaluations. Health Policy Plan. 2005;20(Suppl 1):i18–31.
  2. Burchett H, Umoquit M, Dobrow M. How do we know when research from one setting can be useful in another? A review of external validity, applicability and transferability frameworks. J Health Serv Res Policy. 2011;16:238–44.
  3. Waters E, Hall BJ, Armstrong R, Doyle J, Pettman TL, de Silva-Sanigorski A. Essential components of public health evidence reviews: capturing intervention complexity, implementation, economics and equity. J Public Health. 2011;33:462–5.
  4. Paina L, Peters DH. Understanding pathways for scaling up health services through the lens of complex adaptive systems. Health Policy Plan. 2012;27:365–73.
  5. Mutale W, Ayles H, Bond V, Chintu N, Chilengi R, Mwanamwenge MT, Taylor A, Spicer N, Balabanova D. Application of systems thinking: 12-month postintervention evaluation of a complex health system intervention in Zambia: the case of the BHOMA. J Eval Clin Pract. 2015;23:439–52.
  6. Pawson R, Tilley N. Realistic evaluation. Thousand Oaks: Sage; 1997.
  7. Kane SS, Gerretsen B, Scherpbier R, Dal Poz M, Dieleman M. A realist synthesis of randomised control trials involving use of community health workers for delivering child health interventions in low and middle income countries. BMC Health Serv Res. 2010;10:286.
  8. Rogers E. Diffusion of innovations. 5th ed. New York: Free Press; 2003.
  9. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004;82:581–629.
  10. May C, Finch T, Mair F, Ballini L, Dowrick C, Eccles M, Gask L, MacFarlane A, Murray E, Rapley T, et al. Understanding the implementation of complex interventions in health care: the normalization process model. BMC Health Serv Res. 2007;7:148.
  11. Leichter H. A comparative approach to policy analysis: health care policy in four nations. Cambridge: Cambridge University Press; 1979.
  12. Victora CG, Black RE, Bryce J. Evaluating child survival programmes. Bull World Health Organ. 2009;87:83.
  13. Victora CG, Black RE, Boerma JT, Bryce J. Measuring impact in the Millennium Development Goal era and beyond: a new approach to large-scale effectiveness evaluations. Lancet. 2011;377:85–95.
  14. Rog DJ. When background becomes foreground: toward context-sensitive evaluation practice. New Dir Eval. 2012;2012:25–40.
  15. Peters DH, Adam T, Alonge O, Agyepong IA, Tran N. Republished research: implementation research: what it is and how to do it. Br J Sports Med. 2014;48:731–6.
  16. Webster J, Chandramohan D, Hanson K. Methods for evaluating delivery systems for scaling-up malaria control intervention. BMC Health Serv Res. 2010;10:S8.
  17. Greene JC. Context. In: Encyclopedia of evaluation. Thousand Oaks: SAGE Publications, Inc.; 2005.
  18. Black N. Why we need observational studies to evaluate the effectiveness of health care. BMJ. 1996;312:1215–8.
  19. Victora CG, Habicht J-P, Bryce J. Evidence-based public health: moving beyond randomized trials. Am J Public Health. 2004;94:400–5.
  20. Bryce J, Victora CG. Ten methodological lessons from the multi-country evaluation of Integrated Management of Childhood Illness. Health Policy Plan. 2005;20:i94–105.
  21. Diaz T, Guenther T, Oliphant NP, Muniz M; iCCM Symposium impact outcome evaluation thematic group. A proposed model to conduct process and outcome evaluations and implementation research of child health programs in Africa using integrated community case management as an example. J Glob Health. 2014;4:020409.
  22. Bryce J, Requejo JH, Moulton LH, Ram M, Black RE; Population Health Implementation and Training—Africa Health Initiative Data Collaborative. A common evaluation framework for the African Health Initiative. BMC Health Serv Res. 2013;13(Suppl 2):S10.
  23. Belaid L, Ridde V. Contextual factors as a key to understanding the heterogeneity of effects of a maternal health policy in Burkina Faso? Health Policy Plan. 2015;30:309–21.
  24. Bryce J, Gilroy K, Jones G, Hazel E, Black RE, Victora CG. The Accelerated Child Survival and Development programme in west Africa: a retrospective evaluation. Lancet. 2010;375:572–82.
  25. Huicho L, Dávila M, Gonzales F, Drasbek C, Bryce J, Victora CG. Implementation of the Integrated Management of Childhood Illness strategy in Peru and its association with health indicators: an ecological analysis. Health Policy Plan. 2005;20:i32–41.
  26. Masanja H, Schellenberg JA, Mshinda HM, Shekar M, Mugyabuso JK, Ndossi GD, de Savigny D. Vitamin A supplementation in Tanzania: the impact of a change in programmatic delivery strategy on coverage. BMC Health Serv Res. 2006;6:142.
  27. Rawat R, Nguyen PH, Ali D, Saha K, Alayon S, Kim SS, Ruel M, Menon P. Learning how programs achieve their impact: embedding theory-driven process evaluation and other program learning mechanisms in Alive & Thrive. Food Nutr Bull. 2013;34:S212–25.
  28. Rowe AK, Onikpo F, Lama M, Osterholt DM, Deming MS. Impact of a malaria-control project in Benin that included the integrated management of childhood illness strategy. Am J Public Health. 2011;101:2333.
  29. Wendland KJ, Pattanayak SK, Sills EO. National-level differences in the adoption of environmental health technologies: a cross-border comparison from Benin and Togo. Health Policy Plan. 2015;30:145–54.
  30. de Savigny D, Webster J, Agyepong IA, Mwita A, Bart-Plange C, Baffoe-Wilmot A, Koenker H, Kramer K, Brown N, Lengeler C. Introducing vouchers for malaria prevention in Ghana and Tanzania: context and adoption of innovation in health systems. Health Policy Plan. 2012;27:iv32–43.
  31. Chandler C, DiLiberto D, Nayiga S, Taaka L, Nabirye C, Kayendeke M, Hutchinson E, Kizito J, Maiteki-Sebuguzi C, Kamya M, Staedke S. The PROCESS study: a protocol to evaluate the implementation, mechanisms of effect and context of an intervention to enhance public health centres in Tororo, Uganda. Implement Sci. 2013;8:113.
  32. Hanson C, Waiswa P, Marchant T, Marx M, Manzi F, Mbaruku G, Rowe A, Tomson G, Schellenberg J, Peterson S; EQUIP study team. Expanded Quality Management Using Information Power (EQUIP): protocol for a quasi-experimental study to improve maternal and newborn health in Tanzania and Uganda. Implement Sci. 2014;9:41.
  33. Institute of Medicine. Evaluation design for complex global initiatives: workshop summary. Washington: The National Academies Press; 2014.
  34. Moore G, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, Moore L, O’Cathain A, Tinati T, Wight D, Baird J. Process evaluation of complex interventions: Medical Research Council guidance. London: MRC Population Health Science Research Network; 2014.
  35. Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, Moore L, O’Cathain A, Tinati T, Wight D, Baird J. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015;350:h1258.
  36. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: new guidance. Medical Research Council; 2008. https://www.mrc.ac.uk/documents/pdf/complex-interventions-guidance/. Accessed 27 Oct 2015.
  37. Gomm R, Davies C, editors. Using evidence in health and social care. London: SAGE; 2000.
  38. Stirman SW, Miller CJ, Toder K, Calloway A. Development of a framework and coding system for modifications and adaptations of evidence-based interventions. Implement Sci. 2013;8:65.
  39. Bergstrom A, Skeen S, Duc D, Blandon E, Estabrooks C, Gustavsson P, Hoa D, Kallestal C, Malqvist M, Nga N, et al. Health system context and implementation of evidence-based practices—development and validation of the Context Assessment for Community Health (COACH) tool for low- and middle-income settings. Implement Sci. 2015;10:120.
  40. IDEAS. How to catalyse scale-up of maternal and newborn innovations in Ethiopia. 2013.
  41. IDEAS. How to catalyse scale-up of maternal and newborn innovations in Uttar Pradesh, India. 2013.
  42. IDEAS. How to catalyse scale-up of maternal and newborn innovations in North-Eastern Nigeria. 2013.
  43. IDEAS. Catalysing scale-up of maternal and newborn health innovations: lessons from a case study in North-Eastern Nigeria. 2016.
  44. Bryce J, Victora CG, Boerma T, Peters DH, Black RE. Evaluating the scale-up for maternal and child survival: a common framework. Int Health. 2011;3:139–46.
  45. Darmstadt G, Marchant T, Claeson M, Brown W, Morris S, Donnay F, Taylor M, Ferguson R, Voller S, Teela K, et al. A strategy for reducing maternal and newborn deaths by 2015 and beyond. BMC Pregnancy Childbirth. 2013;13:216.
  46. Bowling A, Ebrahim S. Handbook of health research methods: investigation, measurement and analysis. New York: McGraw-Hill Education; 2005.
  47. Carneiro I, Howard N, editors. Introduction to epidemiology. 2nd ed. London: London School of Hygiene and Tropical Medicine, Open University Press; 2005.


Authors’ contributions

KS prepared all drafts. All authors reviewed drafts and provided comments. JS and TM developed the initial concepts. All authors further developed these into the manuscript seen here. KS leads the contextual factors work for IDEAS. TM leads the household and facility survey design and analysis. NS leads the study of scale up. MG, DB and NU are the country coordinators involved with the data collection and review process. All authors read and approved the final manuscript.

Acknowledgements

The protocol described here is being adapted and implemented in conjunction with the following partners in Ethiopia, India and Nigeria: Anubrata Basu, Paresh Kumar and Kultar Singh of Sambodhi Communications, Ltd.; Shimiljash Braha, Seifu Tadesse, Birhanu Worku and Tsegahun Tessema of JaRco Consulting, Ltd.; and Dr. Umar Adamu Usman and Engr. Nuraddeen Sambo Umar of Data Resource Management Consulting. Many thanks to Deepthi Wickremasinghe for her review and to Shirine Voller for her assistance in preparing information related to ethical clearance and our funding.

Competing interests

The authors declare that they have no competing interests.

Availability of data and materials

At the time of submission, we do not have data ready for publication. However, we will be making datasets available in accordance with our funding requirements. The repository location is to be determined.

Consent for publication

Not applicable.

Ethics approval and consent to participate

The overall IDEAS protocol has ethical clearance from the London School of Hygiene and Tropical Medicine (Reference Number: 6088) and has been approved by the following institutions within each of the listed countries. Ethiopia: Ministry of Science and Technology, Reference Number: 3.10/038/2015, with Regional Health Bureau approval in Oromia, Amhara, SNNP and Tigray regions; submitted by JaRco Consulting. India: Health Ministry Screening Committee/Indian Council of Medical Research, Reference Number: HMSC/2012/08-HSR; IRB: SPECT-SRB, no reference number given; submitted by Sambodhi Research & Communications Pvt. Ltd. Nigeria: National Health Research Ethics Committee of Nigeria, Reference Number: NHREC/01/01/2007, submitted originally by CEO Childcare and Wellness Clinics with subsequent renewals by Data Research and Mapping Consult Ltd; state approval given by Gombe State, Reference Number: MOH/ADM/S/658/VOL. II/22, submitted by Data Research and Mapping Consult Ltd.

Funding

The research was supported by IDEAS - Informed Decisions for Actions to improve maternal and newborn health (http://ideas.lshtm.ac.uk), which is funded through a grant from the Bill & Melinda Gates Foundation to the London School of Hygiene & Tropical Medicine. The Foundation approved the overall IDEAS research protocol to test their theory of change and the research questions guiding this work (Grant No. OPP1017031).

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Author information


Corresponding author

Correspondence to Kate Sabot.

Additional file

Additional file 1: Annex 1.

Policy dashboard. An Excel table of the services (proxies for access to care, facility readiness and interventions) screened for inclusion in existing policies, strategies and programmes. Includes: date of supportive policy, name of policy, relevant excerpt from policy and commentary on systems and implementation (if available).

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Sabot, K., Marchant, T., Spicer, N. et al. Contextual factors in maternal and newborn health evaluation: a protocol applied in Nigeria, India and Ethiopia. Emerg Themes Epidemiol 15, 2 (2018). https://doi.org/10.1186/s12982-018-0071-0


Keywords

  • Context
  • Contextual factors
  • Contextual moderators
  • Contextual indicators
  • External validity
  • Programme evaluation
  • Impact evaluation
  • Public health
  • Maternal health
  • Newborn health