Filling the data gaps: community-led research

Not just no data, but also the wrong type of data

Readers who have had anything to do with the Global Fund to Fight AIDS, Tuberculosis and Malaria will know that there is about to be a huge flurry of activity, as the Fund prepares to launch its new funding model after a hiatus of a couple of years.

Among many, many other things, the new model emphasizes the need to make sure that programmes are based on sound epidemiological data.  In principle this is a good thing: it is a way of making sure that programmes are designed to reach the right people – the people who, particularly in the context of HIV and AIDS, have been overwhelmingly neglected and marginalised.

But, as Stef Baral wrote on this site a few posts back, the same systems and prejudices that lead to the neglect of groups such as sex workers, men who have sex with men, transgender people, and people who use drugs, also operate to ensure that we don’t really have good data.  Perversely, we end up in a situation where no data = no programmes.

That’s not the only problem, however.  The types of data that are generally considered “robust” and valid for strategic and programming purposes tend to be fairly limited in the insights they provide: we’re talking about estimations of the sizes of different population (or “at risk”) groups, and estimations of HIV prevalence rates among these groups.  While this information, if available, can help to indicate where the majority of new infections are occurring in a given country or area, it tells us little or nothing about the reasons why, or about the challenges that highly affected groups are facing.  It reduces these groups to epidemiological risk factors.

Community-led research can help bridge some of this gap.  It may not provide the hard numbers that decision makers want as a basis for funding allocations, but it is an excellent way of reframing priorities that can help improve how services and programmes are designed.

Sex worker led research in Namibia

A few years ago, UNFPA and UNAIDS wanted to do some qualitative research to look in more detail at what was going on for sex workers in Namibia.  Very little research had been done on sex work there, and most of the programmes designed to support sex workers were framed around a very narrow HIV focus (information, condoms, cajoling or even coercing people to get tested and have STI check-ups), with no attention to issues like violence, discrimination and insecurity.

Although I’m a big user of epidemiological research (quantitative and qualitative), in this context it wasn’t particularly feasible (given the resources available) or appropriate to treat this as a classic research project, with publication in a peer-reviewed journal or changing national policies as the ultimate goal.  What seemed more important, given that a major new HIV programme aimed at sex workers was about to be launched, was to document some of the specific situations in the towns that the programme was going to target, to help influence the sorts of things that get addressed, and to identify and point out any gaps in the programme.  Moreover, there were quite a few sex workers in Namibia who were very involved in community work, whether in relation to HIV or more broadly, and we wanted to help them get even more involved.

So we decided to provide some introductory training on one qualitative method – focus group discussions – and got the team to think through what sorts of issues their colleagues might want to discuss.  We used those suggestions to develop a guide, and sent the team out to conduct their own research.  Unsurprisingly, when they developed the guide, they did not start with a list of questions about “condom negotiation” or “access to HIV testing”.  They started with questions about violence, abuse by the police, and discrimination in health services.  This is important because the whole idea behind the work we did in Namibia was to move away from the standard survey approaches, which ask sex workers the same standard questions about condom use and access to services, and to give sex workers space to talk, among other things, about the issues that HIV programmes aren’t helping them with and maybe even won’t help them with.

The report describes the results in detail.  It also describes the limitations, of which there are many.  Although I remain adamant that the purpose of this activity was never to extract data that would tell the whole story and represent the realities of sex workers throughout Namibia, some common themes come out of each of the five towns.  But there are also differences, and it’s the differences that interest me.  I wanted to give people an opportunity to discuss and think about what was going on in their own towns, and what, practically and immediately, might be done to fix some of the problems in each town.  And to an extent, I think that’s what we got.  It’s not generalisable; in fact the results from each town are probably very biased.  We know, for instance, that in most of the towns we failed to talk to any male or transgender sex workers.  But if the biases and their relevance to each town are recognised, and used to bring about positive change in each town, then that’s OK.

We also got a team with a new set of skills, who could do the same thing again, replicate it in other towns, or – why not – help other marginalised groups like men who have sex with men, migrants, or people living in slums do the same thing.  The team has, on a number of occasions, used the findings to frame its input into national discussions about HIV programming – so this research, in a very real way, helped to plug the data gap.

Maybe “research” is the wrong term to describe using research techniques in creative ways.  This participatory approach isn’t new to community development work: far from it.  It isn’t new to public health researchers either.  Practitioners have been advocating it for decades.  But it remains a marginal rather than a mainstream practice.  When there’s no data, however, this sort of research is an excellent starting point for filling the gap.


What does “first ever” research look like?

Starting from nothing

When we say that there is no data, it is not strictly true that there is NO data.  Even if information has not been collected, analysed or published, it still exists.  But collecting, analysing and publishing information is often an important step towards getting health problems better understood and dealt with effectively.

I was once asked to do some research on key populations for HIV – particularly sex workers and men who have sex with men – in a small country where no structured research had ever been carried out on these groups.  Of course, if you asked someone in the ministry of health, or even someone on the street, whether sex workers and men who have sex with men existed in the country, they would have an opinion – or indeed several opinions.  But if you want to start building programmes to work with these populations on issues such as human rights and HIV you need more than that.  Our initial brief was to carry out a population size estimate – to try to figure out roughly how many sex workers and MSM there were – but given the complete lack of any documented information, it was clear that this was not going to be possible.  So instead, we decided to try and find a few people to talk to and to write up a few case studies – just to document for the first time that there were indeed sex workers and MSM, to get a sense from them of the human rights and HIV situation, and possibly to lay the groundwork for some more formal research.

By word of mouth, and by identifying people through internet chat sites, we managed to find a handful of MSM who said they would be happy to talk.  Given the sensitivity of the topic and the fact that sex between men was illegal and highly stigmatised in the country, we arranged to meet in private locations like out-of-hours health facilities or hotel rooms, or even in one case in the back of a taxi.  Similar arrangements were made to speak with sex workers, although finding the way in to the group was less straightforward, as it involved working through local intermediaries.

We had an idea of how many people we wanted to speak to in the time available, and had developed both a questionnaire and a focus group discussion guide in order to be able to adapt to different situations.  We also prepared an informed consent statement and form, designed to make sure participants understood the purpose of the exercise and were happy to participate.

In reality, it is very hard to control what is going to happen in this sort of exercise, and nothing worked out quite the way we had planned.  Here are a few of the stumbling blocks we came up against:

  • Around half of the appointments were not kept.
  • Although participants were asked to come individually, a number came in pairs or groups of three, making it difficult to use either the interview or the focus group discussion guides.
  • A number of participants avoided responding to questions about sexual behaviour and sex partners – in some cases they declined to talk about themselves but talked about people they knew instead.
  • One interview had to be cut short because we were being devoured by mosquitoes.

Limited results; managing expectations

There are many good reasons why the process was so fraught. It is quite likely that participants were not reassured by the explanation of the research or the consent agreement, or that they did not feel able to talk openly despite the initial agreement.  Although the work did produce a few case studies, the amount of data and insight produced was fairly minimal.  And although it was not the only work we did during the trip, it ended up being relatively expensive for what it produced.

It may seem, given these challenges, that there is very little point in conducting this sort of research.  However, in situations where virtually nothing is known, and where the government or other funders use this fact to sidestep sensitive issues, it is important to start somewhere.  We were able to take the scant information to decision makers, to demonstrate that these groups existed and that, even though it was challenging, it was possible to work with them to find out more about their lives.  We also made a handful of contacts who helped out, much later, in developing a more structured piece of research.  Indeed, as the difficulties we faced suggest, it would have been impossible to conduct a more structured study straight off: we would not have been able to identify enough people or implement the research in a rigorous or acceptable way.

One of the biggest challenges in this process was, of course, that the information we were able to obtain fell far short of the client’s expectations.  In this case we realised before starting that this would happen, and were able to discuss it with the client and come to an agreement on what was realistic.  However, it is often not until research projects get going that challenges like these are identified.

What is the lesson?

The lesson here is that when trying to gather data on sensitive topics, it is often necessary to begin with very small and informal studies.  Being clear about the methods, and ensuring participants understand the process and consent to participate in it, remains essential.  But don’t set your expectations for results too high.  And in an environment where those who fund research are often some distance removed from realities on the ground, it is essential to be clear from the start about what is and isn’t possible.  If a client wants a large behavioural study or a population size estimate, but the starting point is similar to the one described above, it is important to develop some type of “roadmap” that plots out the different processes and sub-studies needed to get there.  We’ll post more about this “roadmap” idea on Where there is no data over time.