  • Research article
  • Open access

Knowledge-exchange in the Pacific: outcomes of the TROPIC (translational research for obesity prevention in communities) project

Abstract

Background

The Pacific TROPIC (Translational Research for Obesity Prevention in Communities) project aimed to design, implement and evaluate a knowledge-broking approach to evidence-informed policy making to address obesity in Fiji. This paper reports on the quantitative evaluation of the knowledge-broking intervention through assessment of participants’ perceptions of evidence use and development of policy/advocacy briefs.

Methods

Selected staff from six organizations - four government ministries and two nongovernment organizations (NGOs) - participated in the project. The intervention comprised workshops and supported development of policy/advocacy briefs. Workshops addressed obesity and policy cycles and developed participants’ skills in accessing, assessing, adapting and applying relevant evidence. A knowledge-broking team supported participants individually and/or in small groups to develop evidence-informed policy/advocacy briefs. A questionnaire survey that included workplace and demographic items and the self-assessment tool “Is Research Working for You?” (IRWFY) was administered pre- and post-intervention.

Results

Forty-nine individuals (55% female, 69% aged 21–40 years, 69% middle-senior managers) participated in the study. The duration and level of participant engagement with the intervention activities varied – just over half participated for 10+ months, just under half attended most workshops and approximately one third produced one or more policy briefs. There were few reliable changes on the IRWFY scales following the intervention; while positive changes were found on several scales, these effects were small (d < .2) and only one individual scale (assess) was statistically significant (p < .05). Follow-up N-of-1 analyses of individual-level change indicated that while 63% of participants reported increased research utilization post-intervention, this proportion did not differ from chance. Similar analysis using scores aggregated by organization also revealed no organizational-level change post-intervention.

Conclusions

This study empirically evaluated a knowledge-broking program that aimed to extend evidence-informed policy making skills and to develop a suite of national policy briefs designed to increase the enactment of obesity-related policies. The findings failed to indicate reliable improvements in research utilization at either the individual or organizational level. Factors associated with fidelity and intervention dose, as well as challenges related to organizational support and the measurement of research utilization, are discussed and recommendations for future research are presented.

Background

The global increase in obesity prevalence is a major public health concern requiring a multi-faceted systems approach [1]. An important aspect of addressing obesity is the development of a suite of evidence-informed policies to shape obesogenic attitudes and behaviors [2]. Integrating relevant research evidence into appropriate and effective public policy is challenging, given that researchers and policymakers frequently have different agendas, timelines and priorities that constrain the use of evidence to inform policies [3]. Effective exchange of knowledge between evidence-producers (researchers and others) and end-users (those who initiate, select, approve, implement and evaluate policy) is critical to evidence-based policy development [3].

Effective knowledge exchange is determined by: 1) researchers producing timely and relevant evidence; 2) policy-makers communicating their priorities and timelines; and 3) policy-makers having the skills and resources to utilize evidence to inform policies [4]. Focusing on the third of these points, effective use of evidence to inform policy is determined by the ability of policy makers to access and critically analyse the best available evidence and apply it to policy formulation. The best evidence is accessible (available, affordable, appropriately framed), relevant (to obesity and the local context) and timely [4]. Evidence-informed policy making is, however, complex and often requires a change in organizational culture to ensure that the evidence is actually identified, utilized and converted into policy. Therefore, researchers need to understand both policy making processes and timelines and the culture in which policy formulation occurs in order to optimize opportunities for evidence-informed policy making [5]. Indeed, organizational components are stronger predictors of evidence-informed policy making than individual factors [3, 5, 6]. It is also important that advocacy documents are informed by relevant evidence, given the potential for advocates to influence policy [7].

Strategies that enable evidence-informed policy making include: 1) producing relevant evidence that is aligned with policy cycles; 2) extending policy developers’ evidence-informed policy making skills and utilization of evidence; 3) working with policy developers and organizations to develop individual and organizational cultures that value and support evidence-informed policy making; 4) facilitating strong relationships between researchers, policy developers and policy making organizations [8, 9]; and 5) embedding evidence-informed policy making into policies and practices [4].

Expertise and resources to enable evidence-informed policy making are often limited, especially in low to middle income countries, and those with small populations. This study reports on the evaluation of the impact of a 3-year knowledge exchange project called Translational Research for Obesity Prevention in Communities (TROPIC) [4] conducted in the Republic of Fiji from July 2009 to April 2012. The specific objectives of the TROPIC project were to: 1) extend evidence-informed decision making skills in selected partner organizations; 2) use a knowledge-broking approach to increase the uptake of evidence from the Obesity Prevention in Communities (OPIC) project [10, 11] and other relevant sources in the development of obesity-related policy and embed this in policy and advocacy documents; and 3) facilitate changes in organizational culture so that organizational structures were supportive of evidence-informed policy making. Results of the qualitative evaluation of the TROPIC project have been reported previously [12, 13]. Consequently, this paper reports on the quantitative evaluation of the TROPIC project and specifically on changes in self-reported evidence-informed policy making skills developed among nominated employees in selected government ministries and nongovernment organizations (NGOs).

Methods

Design

The TROPIC project incorporated a pre-post intervention design. Specific details of the project have been reported previously [4, 12, 13].

Organizations and participants

We elected to recruit a maximum of six organizations in order for the small TROPIC knowledge-broking team to have sufficient resources to facilitate evidence-informed policy making skills for development of policy briefs. Eight government organizations and two NGOs were identified as potential participant-organizations based on: their potential to make or influence policies that improve food and/or physical activity environments; wide demographic (ethnic group; religion; urban/rural) representation to ensure a broad reach; capacity to release and support staff to participate in TROPIC activities; potential to share evidence-informed policy making knowledge and skills within their own and other organizations; and previous relationships with the research team [4, 5, 14]. High-level meetings subsequently took place with either government ministers or permanent secretaries (deputy ministers) in government organizations, or chief executive officers from NGOs [13]. Two of the six government organizations approached declined the invitation to participate in the study, citing lack of organizational resources (time, staff) and difficulty aligning the project to their policy cycles. Both of the NGOs that were approached agreed to participate [13]. Each participating organization nominated a senior staff member as a focal/contact person. In addition, each organization nominated between 5 and 12 staff members who were either currently engaged in policy development or likely to be in the near future, to participate in the study.

Intervention

In brief, the intervention involved a knowledge-broking team delivering a 12–18 month program per organization. The program comprised workshops that targeted evidence-informed policy making skills and practical support for developing evidence-informed policy briefs and advocacy statements to reduce obesity (we subsequently use the term ‘policy briefs’ to represent both of these) [4, 13]. The broad framework of the intervention was the same across organizations, but the specific content of the program was tailored to each of the six participating organizations. Knowledge-broking strategies included identifying policy/advocacy topics that could potentially reduce obesogenic environments, monitoring, evaluating and facilitating time-management skills, accommodating other organizational and individual priorities, delivering workshops, conducting meetings (whole group, small group, one-to-one) and supporting individual writing of policy briefs [4]. A number of these strategies were tailored by organization. For example, the policy/advocacy topics were negotiated so that they were aligned to the plans of individual organizations [4, 13]. The number and focus of the workshops were also tailored according to the needs of the organizations [4]. For example, additional workshops that provided an introduction to non-communicable diseases (NCDs), as well as the burden of NCDs and their social and economic impacts, were provided to non-health organizations. Similarly, understanding of what constituted a policy and the local policy cycle varied across organizations, meaning that the focus of the workshop activities also varied. Policy brief templates were also tailored for each organization - while many components of these templates were consistent, other components were specifically tailored according to organizational requirements [13]. One organization (with an existing policy unit) already had an internal policy brief template and was hesitant to use the TROPIC template. The internal template, however, had some deficiencies (e.g. no provision for evidence to support/justify the policy issue) and, after negotiation, it was resolved that the policy brief would be drafted using both templates.

Project measures

The intervention’s effectiveness in increasing the use of evidence in the development of policy briefs to reduce obesity was assessed through four forms of data. First, individual participants’ perceptions of their knowledge and experience with evidence and policy were elicited via semi-structured interviews prior to engagement in the study [13]. Second, a three-part survey questionnaire was used to capture information before and after the TROPIC program to enable quantitative assessment of any changes in the valuing and utilization of evidence-informed decision making at both individual and organizational levels. Third, individual and organizational perceptions of the TROPIC program were elicited via individual interviews conducted with two groups at the conclusion of the program: 1) all participants in the TROPIC program; and 2) a high-level officer from each participating organization. Fourth, the knowledge-broking team recorded individual and organizational processes in terms of time and resources for workshop participation and development of policy briefs. They also documented outputs arising from TROPIC, specifically the completion of policy briefs, as well as evidence of the embedding of evidence-informed decision making practice. Given that results obtained from both sets of interviews and the diary data have been reported previously [12, 13], this paper focuses on the quantitative survey data.

Survey instrument

A three-part survey questionnaire was used to collect information. The first two parts of the survey utilized the “Is Research Working for You? A Self-Assessment Tool and Discussion Guide for Health Services Management and Policy Organizations” (IRWFY). In consultation with a range of experienced researchers, we selected the IRWFY tool as one of the few available at project set-up that could provide a quantitative measure of both individual and organizational research utilization. The IRWFY self-assessment tool was developed by the Canadian Health Services Research Foundation [15] to assist health service delivery organizations to examine organizational strengths and weaknesses in evidence-informed decision making. The IRWFY tool measures perceptions of organizational culture (values; attitudes to evidence use; intentions to use evidence) and use of evidence (access; adapt; use in decision making; adopt) at both organizational (part one of survey) and individual (part two of survey) levels. Part one quantifies individual perceptions of how a participant’s organization 1) acquires and 2) uses research. Each of these two components is measured via a number of separate scales. The acquire component includes a single “acquire” scale (19 items and 1 open response item) made up of two subscales - an “ability/resources” subscale (10 items) and a “looking for research” subscale (9 items). The uses research component includes four scales: “assess” (6 items and 1 open response item), “adapt” (11 items and 1 open response item), “apply” (11 items and 1 open response item), and “decision making processes” (6 items and 1 open response item). Part two quantifies individuals’ perceptions of their own ability to use, understand and transfer research evidence. This component is measured via three scales: “looking for research” (9 items), “assess” (6 items), and “adapt” (8 items and 1 open response item). A 7-point response scale (1 = ‘strongly agree’ to 5 = ‘strongly disagree’; 6 = ‘don’t know’; 7 = ‘not applicable’) is used for each of the scale items. The IRWFY tool has demonstrated good usability, strong response variability and adequate discriminant validity [16]. The third part of the survey captured additional information about participants’ perceptions of challenges to the utilization of research evidence in their organization, strengths of their organization in the use of research evidence for decision making, and any other comments about the use of research evidence in their organization (three separate open response items). It also captured information about participants’ past and current employment in their organization/sector, training/education on how to do research, a self-rating of their own understanding of research (5-point response scale: 1 = ‘poor’; 5 = ‘excellent’), and general demographic information.
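
To make the instrument’s structure concrete, the scales, item counts and response coding described above can be summarized as follows. This is an illustrative sketch only (the study did not publish code); the dictionary keys and function name are hypothetical, and the treatment of ‘don’t know’/‘not applicable’ responses and reverse scoring follow the rules described under Data treatment and analysis below.

```python
import math

# Part one: organizational scales (closed-response items only, per the text above)
ORG_SCALES = {
    "acquire": 19,                    # "ability/resources" (10) + "looking for research" (9)
    "assess": 6,
    "adapt": 11,
    "apply": 11,
    "decision_making_processes": 6,
}

# Part two: individual scales
IND_SCALES = {
    "looking_for_research": 9,
    "assess": 6,
    "adapt": 8,
}

def recode_response(raw: int) -> float:
    """Recode one raw IRWFY response.

    Raw coding: 1 = 'strongly agree' ... 5 = 'strongly disagree',
    6 = 'don't know', 7 = 'not applicable'.
    'Don't know' and 'not applicable' are treated as missing; the 1-5 range is
    reverse scored so that higher values indicate 'better' research utilization.
    """
    if raw in (6, 7):
        return math.nan
    return float(6 - raw)  # 1 <-> 5, 2 <-> 4, 3 unchanged
```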

Consent and ethics approval

The TROPIC project was a three-year study, funded by the Australian Agency for International Development (AusAID) on an Australian Development Research Awards (ADRA0800148) grant. The project was approved by the Deakin University Human Research Ethics Committee (2009-142), the Fiji Health Research Committee, and the Fiji National Research Ethics Review Committee (2009-308). All participants provided informed written consent prior to engaging in the study.

Data treatment and analysis

Descriptive statistics (proportions) were used to summarize demographic and relevant background information. Participants’ level of engagement with the intervention was also summarized descriptively according to three separate measures – the duration of participation, workshop attendance, and policy/advocacy brief completion. The duration of participation was determined as the period from when the participant entered the study (by completing the consent requirement) through to either: 1) completion of their final policy brief, 2) the point where the participant ceased their involvement in the study, or 3) the termination of the overall project. Workshop attendance was calculated as the proportion of workshops attended by each participant relative to the number offered/available to the participant. Policy brief completion was determined from the number of policy briefs fully completed by individual participants - incomplete briefs were not included.

All IRWFY items were checked for missing and out-of-range values. Items were reverse scored to aid interpretability (i.e. higher scores representing ‘better’ research utilization) and ‘don’t know’ and ‘not applicable’ responses were specified as ‘missing’. Organizational and individual scale scores (means) were computed using responses from the relevant items. Pro-rated means were computed where scales included items with missing data, provided data were available for most (>70%) items. An overall ‘mean organizational’ and ‘mean individual’ score was also computed using all valid item scores from the five organizational scales and the three individual scales respectively; these scores represented a global measure of ‘research utilization’. A variable representing the overall level of participant engagement with the TROPIC project was generated by summing dichotomized versions of the workshop attendance (0 = <50% attendance; 1 = ≥ 50% attendance) and policy brief completion (0 = none; 1 = one or more) measures. Low engagement was indicated by a summed score of 0, moderate engagement by a score of 1 and high engagement by a score of 2.

Scale reliability (internal consistency) was assessed using Cronbach’s alpha and evaluated using the following guidelines: >.9 ‘excellent’, >.8 ‘good’, >.7 ‘acceptable’, >.6 ‘questionable’, >.5 ‘poor’, and <.5 ‘unacceptable’ [17]. Changes in pre- and post-intervention scores on the IRWFY organizational and individual scales and the self-rated understanding of research were then assessed. At the individual level, responses from all individuals with pre- and post-intervention data were used to compute individual-level means, and differences were tested using separate t-tests. Due to small numbers in the ‘engagement level’ subgroups, descriptive statistics (means, standard deviations) were used to examine whether participants’ level of project engagement influenced pre- and post-intervention differences on the five organizational and three individual scales.

Due to the limitations of sample size, intervention dose and sensitivity of the IRWFY instrument, we also assessed pre- and post-intervention change - at both the individual level (i.e. a series of N-of-1 ‘studies’) and the organizational level (i.e. case studies for the six organizations) - by comparing the overall mean individual and mean organizational values at post-intervention with the corresponding pre-intervention values. To generate the overall organization-level score, we aggregated the individual responses by organization and divided this value by the number of staff from the organization providing data at both time-points. Individual- and organizational-level change scores were computed by subtracting the pre-intervention score from the post-intervention score and these change scores were then plotted. A positive change score indicated an increase in research utilization, a negative change score a decrease, and a zero score no change. Visual analysis and tests for proportional change (Z statistic) were used to interpret the plotted change scores. All data entry, preparation and analysis were performed using SPSS V22 (IBM Inc.) and statistical significance was accepted at p < .05 for all effects.
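
The main scoring and testing steps described above could be implemented along the following lines. The analysis was actually performed in SPSS, so this Python sketch is an assumed re-implementation for illustration only; the function names are hypothetical, while the >70% pro-rating rule, the engagement dichotomization, Cronbach’s alpha and the paired t-test with an effect-size estimate mirror the description above.

```python
import numpy as np
from scipy import stats

def prorated_scale_mean(item_scores, min_answered=0.7):
    """Scale mean pro-rated over answered items, computed only when more than
    70% of the scale's items have valid (non-missing) responses."""
    x = np.asarray(item_scores, dtype=float)
    prop_answered = np.mean(~np.isnan(x))
    return np.nanmean(x) if prop_answered > min_answered else np.nan

def engagement_level(workshop_attendance, n_policy_briefs):
    """Summed engagement score: 0 = low, 1 = moderate, 2 = high."""
    return int(workshop_attendance >= 0.5) + int(n_policy_briefs >= 1)

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) array of complete cases."""
    x = np.asarray(items, dtype=float)
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

def pre_post_test(pre, post):
    """Paired t-test and Cohen's d (paired) for pre- vs post-intervention scale scores."""
    pre, post = np.asarray(pre, dtype=float), np.asarray(post, dtype=float)
    diff = post - pre
    t, p = stats.ttest_rel(post, pre)
    d = diff.mean() / diff.std(ddof=1)
    return t, p, d
```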

Results

TROPIC sample

A total of 49 individuals participated in the study. Characteristics of the sample participants at commencement are summarized in Table 1. In general, there was a relatively even spread of participants aged between 21 and 50 years and a slightly higher proportion of females. Almost half had worked in the ‘policy/advocacy sector’ for 12 or more years, about one third had been employed with their organization for 12 years or more, most held middle or senior management positions and had been in their current position for less than 3 years. Most participants reported having a ‘good’ to ‘very good’ understanding of research, and over half of the sample reported receiving research training/education through undergraduate classes and/or on-the-job training/collaboration. One participant reported no previous research training/education.

Table 1 Characteristics of TROPIC participants at pre-intervention (n = 49)

Engagement in TROPIC intervention

The profile of participant engagement is summarized in Table 2. Whilst almost half of the participants were engaged over the full duration of their organization’s involvement in the project, approximately one third were engaged for 3 months or less. Just under half of the participants attended 75% or more of the workshops available to them, and similarly, just under half attended less than 50% of the available workshops. Only a small proportion (8%) of participants completed more than one policy brief and about two thirds failed to complete a single brief. One organization elected for its participants to develop a single policy brief as a joint exercise. The pattern of engagement was generally similar across the six organizations, although there were some differences. For example, higher proportions of staff from Organization 3 (80%) and Organization 1 (75%) participated for the full project duration relative to the other organizations. Relative to staff from the other organizations, a higher proportion of staff from Organization 1 attended most of the available workshops (66%) and developed at least one policy brief (58%), while a higher proportion of staff from Organization 2 attended the fewest of the available workshops (75%) and developed no policy briefs (83%).

Table 2 Summary of individual- and organization-level of engagement with the TROPIC intervention

IRWFY instrument

Internal reliability coefficients for the five organizational and three individual scales of the IRWFY instrument, at pre- and post-intervention, were computed. Results are shown in Table 3 and indicate that the IRWFY scales had acceptable to excellent internal reliability [17].

Table 3 Scale statistics for IRWFY instrument pre- and post-intervention

Intervention effect

Participants’ perceptions of how well they use research and how well their organization uses research before and after the TROPIC intervention were indicated by scores on the IRWFY scales. Means for each of these scales are summarized in Table 4. There were few differences between pre- and post-intervention means across the five organizational scales. Furthermore, t-tests revealed no statistically significant differences (ie. pre-post intervention change) and effect sizes for each of the scales were small. It should be reiterated here that the analyzed scores are individuals’ perceptions of their organization and not the means of a collection of individuals from the same organization. Unfortunately, the small within-organization subsample numbers precluded this level of analyses (however, in the following section, we do present a summary of an overall score that is aggregated by organization). The means for the three individual scales all show an increase in research utilization at post-intervention. However, only the effect for the “assess” scale was statistically significant and effect sizes were in the small to moderate range. There was also no difference for the individual self-rating of understanding of research from pre- (M = 2.7, SD = 0.8) to post-intervention (M = 2.7, SD = 0.8) (t = .00, p = 1.00, d = .00).

Table 4 Descriptive (mean, standard deviation) and test statistics for IRWFY scales at pre- and post-intervention (n = 32)

Since the intervention dose (ie. individual engagement with the intervention activities) varied between individuals, we also examined the effect of level of dose on scores on the individual and organizational scales. Pre- and post-intervention means were computed for each IRWFY scale according to level of individual engagement – indicated by an aggregate measure captured by the workshop attendance and policy brief development variables (see Table 5). The small group numbers precluded formal statistical testing. Nevertheless, descriptive assessment of the pre- and post-intervention scores on the five organizational scales failed to indicate any possible effect of engagement. Similarly, there were few differences on the three individual scales, although the improvements for the moderate engagement group, and to a lesser extent high engagement group, were generally greater than those observed for the low engagement group.

Table 5 Descriptive statistics (mean, standard deviation) for IRWFY scales pre- and post-intervention by summary measure of engagement

Since grouped analysis of the data may conceal small but potentially important changes, we also examined differences across pre- and post-intervention by computing both individual and organizational (ie. aggregated scores of individuals from the same organization) change scores as shown in Figs. 1 and 2 respectively. Of the individual participants having both pre- and post-intervention scores, 63% indicated increased research utilization post-intervention (37% indicated decreased research utilization). This effect, however, was not significantly different from chance level proportions (Z = −1.01, p = .31). Of the six organizations, the aggregated individual staff responses indicated that at an organizational level, 50% of the participating organizations had increased their research utilization post-intervention whilst 50% had decreased their research utilization.

Fig. 1 Mean overall difference (post-intervention – pre-intervention) across all individual items for individual participants

Fig. 2 Mean overall difference (post-intervention – pre-intervention) across all organizational items for participants aggregated by organization
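
For readers interested in the form of the change-score analysis, the sketch below shows one common way to test whether the proportion of positive change scores differs from chance (0.5). The paper does not specify the exact computation behind the reported Z statistic, so this generic one-sample proportion test is an illustrative assumption and is not guaranteed to reproduce the reported value.

```python
from math import sqrt

def proportion_change_z(n_increased: int, n_decreased: int) -> float:
    """One-sample z test of the observed proportion of positive change scores
    against a chance level of 0.5 (zero-change cases excluded)."""
    n = n_increased + n_decreased
    p_hat = n_increased / n
    return (p_hat - 0.5) / sqrt(0.25 / n)
```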

Discussion

This paper reported the quantitative results of the effect of a knowledge-broking team on the use of obesity-related evidence in policy briefs. Specifically, it reported on changes in perceptions of evidence-informed policy making skills among employees in four selected government ministries and two NGOs. The findings indicated no changes in participants’ perceptions of their organization’s utilization of research; however, modest positive changes (increases) were observed on the three individual-level scales, although only one effect (“assess” scale) was significant. Follow-up descriptive analyses failed to reveal any ‘dose’ effect – there was no consistent evidence of greater (positive) change for those participants who engaged more strongly with the TROPIC program, although changes were marginally stronger for those categorized as moderately or highly engaged compared with those with low engagement. There was also no effect on individuals’ global self-rating of understanding of research. Analyses of perceptions at the (aggregated) organizational level and the individual level indicated that effects (i.e. changes in perceptions of research utilization) were not different from those expected by chance.

Overall, the quantitative findings reported herein generally failed to show any consistent effects for the TROPIC knowledge exchange research program. The only significant effect was for the individual “assess” scale, where scores improved, indicating that individual participants perceived that their ability to critically evaluate research methodology, assess the relevance of research and synthesize different pieces of research was better after the TROPIC intervention. These results are consistent with previously reported findings from the interview data [12]. While the results for the two other individual scales (i.e. “looking for research”, “adapt”) showed positive changes, these effects were smaller in magnitude and non-significant. The interview findings provide some explanation for the modest effects; participants indicated that accessing research was limited by resources and infrastructure (access to databases, computers, internet), as well as time constraints and, in some cases, a perceived lack of organizational support [12]. The failure of the TROPIC intervention to show any significant effects at the organizational level is not surprising, since it is unlikely that adjusting the capabilities and skills of a small number of personnel within any one organization would be sufficient to alter perceptions of research utilization capability at the organizational level. Even when individual data were aggregated by organization, no positive effects were discerned. Thus, while there was some indication of positive change in research utilization at an individual level, there was no evidence of perceived change at the organizational level, suggesting that the reach of the TROPIC intervention within the participating organizations was limited. It is well recognized that shifts in organizational culture require multi-faceted approaches over time [9, 18]. The factors that might facilitate or impede the reach of similar interventions need further exploration.

The application of the TROPIC knowledge exchange research program through a knowledge-broking approach is important given the cultural and behavioral factors that influence decision making. While Waqa, Mavoa et al. [12] have previously reported that many participants believed they had increased skills in acquiring, accessing, adapting and applying evidence following the TROPIC project, and that their reporting had become based more on credible research evidence than on perceptions and anecdotal “evidence”, the results of the quantitative evaluation failed to reveal consistent intervention effects at an individual level and showed no effect at an organizational level. A number of factors may account for this apparent inconsistency. First, while Waqa, Mavoa et al. [12] captured qualitative impressions of the TROPIC intervention, the present study reports only on quantitative outcomes. Given the myriad challenges faced when operationalizing the intervention, discerning (possible) intervention effects through quantitative assessment is problematic and unlikely to capture the more personalized perceptions of the benefits of the intervention. Second, a number of participants commented in their post-intervention interviews that they had overrated their knowledge of research when they entered the TROPIC program and completed the initial IRWFY survey. It was only when they gained an increased understanding of research and its application that they felt able to provide a realistic score. Third, high staff turnover in all six organizations compromised continuity of staff engagement in the program. Fourth, many participants had roles that required multiple tasks, resulting in fragmented and limited time to allocate to TROPIC activities. Natural disasters (i.e. a cyclone and two major floods) and concomitant public health emergencies further fragmented the time available to engage in TROPIC, as the necessary governmental and NGO responses diverted staff from attending workshops and completing policy briefs. Fifth, only about one third of the organizations received tangible high-level political support from the relevant ministers (indicated by the presence of permanent secretaries (deputy ministers) and directors during presentation of policy briefs to the organization). The importance of high-level organizational support was reflected in the higher number of policy briefs produced by these organizations.

A number of other organizational barriers also limited the impact of using evidence in the development of policy briefs. Whilst these barriers varied across the six participating organizations, they included the reallocation of participants to other activities despite their reported interest in the intervention activities and broader project, a lack of organizational support and incentives to persist with policy development work in the face of other organizational priorities, and a lack of information and technology resources (e.g. database software) to enable storage of evidence extracted from the scientific literature. Limitations of this nature have been reported elsewhere [19] and remain a challenge to embedding evidence-informed policy making into organizations. These challenges are heightened in low to middle income countries, which have limited economic and human resources and less capacity to either access or adapt evidence for policy documents [20], or to foster a culture that supports and extends evidence-informed policy making [4, 21]. Such limitations make it difficult to foster and sustain a culture, structures and processes that support evidence-informed policy making within organizations.

More generally, the relatively short duration of the TROPIC program (12–18 months per organization) may have been insufficient for the program activities to be integrated into individual participants’ working practices, and especially into the practices of the organizations within which the participants operated. Almost half of the participants were engaged in the TROPIC program for less than 9 months. It has been suggested that 15–18 months is the shortest duration that can be expected to produce change in evidence-informed decision making [8, 22]; however, competing priorities of participating organizations, the timing of entry into the project vis-a-vis organizational policy cycles, and the limited resources of the TROPIC team meant that it was not possible to provide each organization with the optimal duration of intervention. Future knowledge-broking programs may require a longer duration as well as greater staff engagement – that is, a higher proportion of staff within organizations need to be engaged, and staff need to be supported to continue their participation in all of the intervention activities in the face of other, seemingly more important, priorities. Consideration of these and other factors, including longer training periods, more individualized tailoring of intervention strategies, and evaluation designs that allow sufficient time for effects to permeate through the culture of organizations before post-intervention assessment, is needed before measurable shifts in the utilization of research can be discerned.

Another factor that may have contributed to the limited (quantitative) effects is our choice of survey instrument. We used the IRWFY instrument, one of the few available in 2009. The instrument has good usability, strong response variability and adequate discriminant validity [16]; however, it was not designed to assess intervention effects but rather for organizational self-assessment, to ‘scan’ and generate discussion about how research is used. Thus, it is possible that the IRWFY tool was not sufficiently sensitive to detect relatively small changes in individual or organizational knowledge and practices. This, in conjunction with the relatively small final sample at post-intervention, may further explain the limited and inconsistent findings. Nevertheless, even though the potential for finding a “statistically significant” intervention effect may have been low, the observed effects were small in any case and point to the more program-specific factors noted above.

While multiple competing priorities in most participating organizations limited the impact of using evidence in policy briefs, the knowledge-broking team offered a flexible schedule of activities, organized the workshops away from participants’ workplaces and in a block, and provided a sequence of policy brief writing retreats that were timed to be accessible for participants in each organization. In the interviews, participants indicated that barriers to evidence-informed policy making were not just individual lack of knowledge about data sources, but also organizational. Participants cited inadequate time to develop evidence-informed briefs and insufficient resources for accessing and managing evidence as barriers [12]. Embedding evidence-informed policy making within organizational structures requires a critical mass of people with skills to acquire, assess and adapt evidence to inform policy; the availability of timely, relevant evidence in language that resonates with policy-makers; an organizational culture with clear structures and processes that support evidence-informed decision making and that recognizes and rewards its use; and strong researcher-end-user relationships [21]. Organizations that participated in TROPIC are now well placed to build on: 1) excellent relationships with researchers, and 2) the growing number of personnel who have acquired evidence-informed decision making skills. The next challenge is to continue to develop a culture that builds a solid organizational infrastructure to support evidence-informed decision making across all policies that have potential health benefits. Post-TROPIC initiatives by at least one of the participating organizations to seek a similar experience for more personnel suggest that there is motivation to continue building a critical mass of staff with evidence-informed policy making skills.

The TROPIC study has provided some insights into knowledge-broking approaches to support evidence-informed policy development that are generic and can be transferred to other policy areas. The value placed on types of evidence within decision making contexts depends on individuals, the organizations in which they work and the systems in which they operate. Decision making processes are also context-dependent [23]. A supportive organizational environment is especially important for the transferability of skills in any low- or middle-income country with limited policy making resources. This observation is consistent with other studies [4, 20, 21].

This study has a number of strengths and is unique in several respects. The knowledge-broking team employed a number of complementary approaches to facilitate evidence-informed policy making, including specific workshops tailored to the needs of individual organizations as well as individual knowledge-broking sessions in which participants received personalized guidance about accessing and utilizing research evidence to inform the development of policy briefs. The knowledge-broking team also provided broader mentoring support for individual participants, assisted in aligning policy brief completion with policy timelines such as approval of annual plans, helped participants protect time, and sharpened their understanding of how to plan and draft policy briefs. The team also developed a policy brief template that guided writing, as well as a template for presenting briefs to higher decision making levels [13]. A number of limitations of the study are acknowledged. Participants’ lack of understanding of what constitutes evidence resulted in many participants overestimating their evidence-informed policy making skills when entering the intervention [13]. This overestimation, along with the disparity in participants’ basic skills and availability, required more individual mentoring than the team had anticipated. The project timelines and the capacity limits of the TROPIC team meant that it was not possible to individually tailor each of the knowledge-broking strategies. The duration of the intervention (≤ 9 months for almost half of the participants) may have been insufficient to demonstrate significant changes at an individual level and certainly at an organizational level. The project timelines also meant that there was a relatively brief period between the commencement (and conclusion) of the intervention activities and the post-intervention assessment. Dedicated embedding of the knowledge and skills gained within organizational structures was beyond the scope of this 3-year project. The attrition of participants from several organizations meant that a substantial proportion of participants engaged in limited intervention activities and were also unavailable for follow-up surveying. The IRWFY instrument used to assess quantitative changes in research utilization may have lacked the sensitivity to discern an intervention effect. The TROPIC program evaluation did not explore the barriers to evidence awareness, knowledge and uptake.

Conclusions

In conclusion, the findings of this quantitative analysis of the TROPIC program indicated inconsistent and relatively modest effects at the individual level and no detectable effects at the organizational level. These findings are inconsistent with previously reported qualitative results that indicated more positive effects arising from the same program. While design and measurement factors may partly account for the lack of quantitative effects, other contextual and program-specific factors may better explain the observed ineffectiveness and the apparent inconsistency with previous qualitative findings. Future research is needed to better understand the barriers to implementation of knowledge-broking interventions and the structural and organizational factors, including those relating to evidence awareness, knowledge and uptake, that may facilitate program effectiveness. Additionally, further work is needed to refine tools so they can more precisely quantify both individual and organizational research utilization, enabling better assessment of future knowledge-broking programs.

Abbreviations

IRWFY:

“Is Research Working for You?”

NCDs:

Non-communicable diseases

NGO:

Nongovernmental organization

OPIC:

Obesity Prevention in Communities

SPSS:

Statistical Package for the Social Sciences

TROPIC:

Translational Research for Obesity Prevention in Communities

WHO:

World Health Organization

References

  1. Gortmaker SL, Swinburn BA, Levy D, Carter R, Mabry PL, Finegood DT, Huang T, Marsh T, Moodie ML. Changing the future of obesity: science, policy, and action. Lancet. 2011;378:838–47.

  2. Swinburn BA, Sacks G, Hall KD, McPherson K, Finegood DT, Moodie ML, Gortmaker SL. The global obesity pandemic: shaped by global drivers and local environments. Lancet. 2011;378:804–14.

  3. Bowen S, Erickson T, Martens PJ, Crockett S. More than “using research”: the real challenges in promoting evidence-informed decision-making. Health Policy. 2009;4:87–102.

  4. Mavoa H, Waqa G, Moodie M, Kremer P, McCabe M, Snowdon W, Swinburn B. Knowledge exchange in the Pacific: the TROPIC (translational research into obesity prevention policies for Communities) project. BMC Public Health. 2012;12:552.

  5. Landry R, Lamari M, Amara N. The extent and determinants of the utilization of university research in government agencies. Public Adm Rev. 2003;63:192–205.

  6. Kothari A, Birch S, Charles C. “Interaction” and research utilisation in health policies and programs: does it work? Health Policy. 2005;71:117–25.

  7. Martin J. The role of advocacy. In: Waters E, Swinburn B, Seidell J, Uauy R, editors. Preventing childhood obesity: evidence, policy, and practice. West Sussex: Wiley-Blackwell; 2010. p. 192–9.

  8. Dobbins M, Hanna SE, Ciliska D, Manske S, Cameron R, Mercer SL, O'Mara L, DeCorby K, Robeson P. A randomized controlled trial evaluating the impact of knowledge translation and exchange strategies. Implement Sci. 2009;4:61.

  9. Dobbins M, Robeson P, Ciliska D, Hanna S, Cameron R, O'Mara L, DeCorby K, Mercer S. A description of a knowledge broker role implemented as part of a randomized controlled trial evaluating three knowledge translation strategies. Implement Sci. 2009;4:23.

  10. Kremer P, Waqa G, Vanualailai N, Schultz JT, Roberts G, Moodie M, Mavoa H, Malakellis M, McCabe MP, Swinburn BA. Reducing unhealthy weight gain in Fijian adolescents: results of the Healthy Youth Healthy Communities study. Obes Rev. 2011;12(Suppl 2):29–40.

  11. Swinburn BA, Millar L, Utter J, Kremer P, Moodie M, Mavoa H, Snowdon W, McCabe MP, Malakellis M, de Courten M, et al. The Pacific Obesity Prevention in Communities project: project overview and methods. Obes Rev. 2011;12(Suppl 2):3–11.

  12. Waqa G, Mavoa H, Snowdon W, Moodie M, Nadakuitavuki R, McCabe M, Swinburn B. Participants' perceptions of a knowledge-brokering strategy to facilitate evidence-informed policy-making in Fiji. BMC Public Health. 2013;13:1–9.

  13. Waqa G, Mavoa H, Snowdon W, Moodie M, Schultz J, McCabe M, Kremer P, Swinburn B. Knowledge brokering between researchers and policymakers in Fiji to develop policies to reduce obesity: a process evaluation. Implement Sci. 2013;8:1–11.

  14. Jansson SM, Benoit C, Casey L, Phillips R, Burns D. In for the long haul: knowledge translation between academic and nonprofit organizations. Qual Health Res. 2010;20:131–43.

  15. Canadian Health Services Research Foundation. Is research working for you? A self-assessment tool and discussion guide for health services management and policy organizations. Ottawa: Canadian Foundation for Healthcare Improvement; 2007.

  16. Kothari A, Edwards N, Hamel N, Judd M. Is research working for you? Validating a tool to examine the capacity of health organizations to use research. Implement Sci. 2009;4:46.

  17. George D, Mallery P. SPSS for windows step by step: a simple guide and reference, 11.0 update. 4th ed. Boston: Allyn and Bacon; 2003.

  18. Harvey G, Fitzgerald L, Fielden S, McBride A, Waterman H, Bamford D, Kislov R, Boaden R. The NIHR Collaborations for Leadership in Applied Health Research and Care (CLAHRC) for Greater Manchester: combining empirical, theoretical and experiential evidence to design and evaluate a large-scale implementation strategy. Implement Sci. 2011;6:96.

  19. van Kammen J, de Savigny D, Sewankambo N. Using knowledge brokering to promote evidence-based policy-making: the need for support structures. Bull World Health Organ. 2006;84:608–12.

  20. Mendis S. The policy agenda for prevention and control of non-communicable diseases. Br Med Bull. 2010;96:23–43.

  21. Mavoa H, Snowdon W, Waqa G. Embedding evidence-informed decision-making into policies that benefit health. Fiji Journal of Public Health. 2012;1:27–8.

  22. Mitton C, Adair CE, McKenzie E, Patten SB, Waye PB. Knowledge transfer and exchange: review and synthesis of the literature. Milbank Q. 2007;85:729–68.

  23. Armstrong R, Waters E, Moore L, Dobbins M, Pettman T, Burns C, Swinburn B, Anderson L, Petticrew M. Understanding evidence: a statewide survey to explore evidence-informed public health decision-making in a local government setting. Implement Sci. 2014;9:188.


Acknowledgements

The authors are grateful for the generous support of the Government of Fiji, the participating organizations and the participants who made this study possible. We are especially grateful for the wisdom and ongoing support of our Fiji advisors, who provided invaluable advice. We acknowledge the support of C-POND (Pacific Research Centre for the Prevention of Obesity and Non-Communicable Diseases), Fiji National University and Deakin University, where staff were based. We gratefully acknowledge AusAID, which funded this study through an Australian Development Research Awards grant. We thank the Canadian Health Services Research Foundation for allowing us to use the “Is Research Working for You?” tool. The authors also wish to acknowledge Radha Etheridge and Rigieta Nadakuitavuki, who assisted in the workshops and/or implementation of the survey. Waqa, Moodie and Swinburn are researchers in the NHMRC Centre for Excellence for Obesity Policy and Food Systems (#1041020).

Funding

The TROPIC project was funded by the Australian Agency for International Development (AusAID) on an Australian Development Research Awards grant (#ADRA0800148).

Availability of data and materials

The analyzed dataset for the study is not publicly available due to ethical and legal restrictions but is available from the corresponding author on reasonable request.

Authors’ contributions

BS, MMcC, HM and MM conceptualized the TROPIC project. HM and GW operationalized and conducted the intervention activities and collected the data. PK and HM conceptualized the study and carried out the primary writing of the manuscript. PK performed analysis and interpretation of the survey data. All co-authors had critical input, and have read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Consent for publication

Not applicable.

Ethics approval and consent to participate

The TROPIC project was approved by the Deakin University Human Research Ethics Committee (2009-142), the Fiji Health Research Committee, and the Fiji National Research Ethics Review Committee (2009-308). All participants provided informed written consent prior to engaging in the study.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Author information

Corresponding author

Correspondence to Peter Kremer.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Kremer, P., Mavoa, H., Waqa, G. et al. Knowledge-exchange in the Pacific: outcomes of the TROPIC (translational research for obesity prevention in communities) project. BMC Public Health 17, 362 (2017). https://doi.org/10.1186/s12889-017-4254-3


Keywords