Filling the data gaps: community-led research

Not just no data, but also the wrong type of data

Readers who have had anything to do with the Global Fund to Fight AIDS, Tuberculosis and Malaria will know that there is about to be a huge flurry of activity, as the Fund prepares to launch its new funding model after a hiatus of a couple of years.

Among many, many other things, the new model emphasizes the need to make sure that programmes are based on sound epidemiological data.  In principle this is a good thing: it is a way of making sure that programmes are designed to reach the right people – the people who, particularly in the context of HIV and AIDS, have been overwhelmingly neglected and marginalised.

But, as Stef Baral wrote on this site a few posts back, the same systems and prejudices that lead to the neglect of groups such as sex workers, men who have sex with men, transgender people, and people who use drugs, also operate to ensure that we don’t really have good data.  Perversely, we end up in a situation where no data = no programmes.

That’s not the only problem, however.  The types of data that are generally considered “robust” and valid for strategic and programming purposes tend to be fairly limited in terms of the insights they provide: we’re talking about estimations of the sizes of different population (or “at risk”) groups, and estimations of HIV prevalence rates among these groups.  While this information, if available, can help to indicate where the majority of new infections are occurring in a given country or area, it tells us little or nothing about the reasons why, or the challenges that highly affected groups are facing.  It reduces these groups to a set of epidemiological risk factors.

Community-led research can help bridge some of this gap.  It may not provide the hard numbers that decision makers want as a basis for funding allocations, but it is an excellent way of reframing priorities that can help improve how services and programmes are designed.

Sex worker led research in Namibia

A few years ago, UNFPA and UNAIDS wanted to do some qualitative research to look in more detail at what was going on for sex workers in Namibia.  Very little research had been done on sex work there, and most of the programmes designed to support sex workers were framed around a very narrow HIV focus (information, condoms, and cajoling or even coercing people to get tested and have STI check-ups), with no attention to issues like violence, discrimination and insecurity.

Although I’m a big user of epidemiological research (quantitative and qualitative), in this context it wasn’t particularly feasible (given the resources available) or appropriate to see this as a classic research project, with publication in a peer-reviewed journal or changing national policies as the ultimate goal.  What seemed more important, given that a major new HIV programme aimed at sex workers was about to be launched, was to document some of the specific situations in the towns that the programme was going to target, to help influence the sorts of things that get addressed, and to identify and point out any gaps in the programme.  Moreover, there were quite a few sex workers in Namibia who were already very involved in community work, whether in relation to HIV or more broadly, and we wanted to help them get even more involved.

So we decided to provide some introductory training on one qualitative method – focus group discussions – and got them to think through what sorts of issues their colleagues might want to discuss.  We used those suggestions to develop a guide, and sent them out to conduct their own research.  Unsurprisingly, when they developed the guide, they did not start with a list of questions about “condom negotiation” or “access to HIV testing”.  They started with questions about violence, abuse by the police, and discrimination in health services.  This is important because the whole idea behind the work we did in Namibia was to move away from the standard survey approaches which ask sex workers the same standard questions about condom use and access to services, and to give sex workers space to talk, among other things, about the issues that HIV programmes aren’t helping them with and maybe even won’t help them with.

The report describes the results in detail.  It also describes the limitations, of which there are many.  Although I remain adamant that the purpose of this activity was never to extract data that would tell the whole story or represent the realities of sex workers throughout Namibia, some common themes come out of each of the five towns.  But there are also differences.  It’s the differences that interest me.  I wanted to give people an opportunity to discuss and think about what was going on in their own towns, and what, practically, immediately, might be done to fix some of the problems in each town.  And to an extent, I think that’s what we got.  It’s not generalisable; in fact the results from each town are probably very biased.  We know, for instance, that in most of the towns, we failed to talk to any male or transgender sex workers.  But if the biases and their relevance to each town are recognised, and used to get positive change in each town, then that’s OK.

We also got a team with a new set of skills, who could do the same thing again, or can replicate it in other towns, or – why not – help other marginalised groups like men who have sex with men, migrants, or people living in slums do the same thing.  The team has, on a number of occasions, used the findings to frame their input into national discussions about HIV programming – so this research, in a very real way, helped to plug the data gap.

Maybe “research” is the wrong term to describe using research techniques in creative ways.  This participatory approach isn’t new to community development work: far from it.  It isn’t new to public health researchers either.  Practitioners have been advocating it for decades.  But it remains a marginal rather than a mainstream practice.  When there’s no data, however, this sort of research is an excellent starting point for filling the gap.

Getting practical: learning to work where there is no data

Post by Jamie Uhrig

A hostess café, Ethiopia. Image, (c) J.Uhrig

Implementing HIV prevention activities with sex workers in Ethiopia means learning to work where there is no data.

The HIV epidemics in Ethiopia are unique to this multi-ethnic federal state. Three Demographic and Health Surveys have been conducted since the beginning of the millennium. HIV testing was performed as part of the survey process so that a broad outline of the epidemics can be obtained. Adult seroprevalence among those aged fifteen to forty-nine in the country in 2011 was 1.5%. Prevalence is higher in women than men, higher in urban areas than rural, and highest among those who fall into the highest wealth bracket.

What about HIV among key affected populations? Here there is no data. There are no reports of people injecting drugs in the country and there is a deafening silence on the topic of men who have sex with men.

But it is common to talk about sex work. Sex work itself is not illegal in Ethiopia, and the harassment of sex workers by police that occurs in former British colonies elsewhere in the region is almost never reported by sex workers here. It is possible for HIV prevention programme implementers to work closely and openly with sex workers with the full support of local authorities. This makes the remarkable lack of data even more astounding.

The largest programme for sex workers in the country is currently being implemented by the local nongovernmental organisation Timret Le Hiwot. Outreach workers talk with sex workers during recruitment for regular training sessions on sexual health and over ten thousand women attend two-day trainings each year. Staff also listen to women during traditional Ethiopian ‘coffee ceremonies’ when female sex workers have a chance to talk with their peers about whatever is on their minds: violence, condom sizes, boyfriends… One thousand coffee ceremonies were held last year. That is a lot of peer support.

There has been no sentinel surveillance among sex workers performed in a long time. The Ethiopia HIV/AIDS Prevention and Control office and the World Bank noted five years ago that “routine and detailed surveillance of this high-risk population has not been carried out for fifteen years.” They also pulled no punches in writing at that time: “Clearly, new studies need to be initiated to monitor and measure the progress of the epidemic amongst this vulnerable population.”

HIV rates among female sex workers who undergo voluntary counselling and testing can be used to measure effectiveness and efficiency of HIV testing programmes but are not useful in determining prevalence.  There is a tantalizing piece of information that can be obtained by reading between the lines of the last Demographic and Health Survey. Among women who had ten or more lifetime partners, HIV seroprevalence was 23%. Included in this group are some women who practise sex work – but we don’t know what proportion.

If HIV prevalence and incidence cannot be calculated, then what about measuring behaviour trends? The funder of Timret Le Hiwot, DKT Ethiopia, regularly tracks changes in behaviour using Behaviour Change Impact Surveys among female sex workers. Consistent condom use rates with clients are regularly reported by sex workers to be over 90%. This figure has not changed in ten years. And yet, ironically this consistently high rate of reported condom use makes measuring significant changes in behaviour difficult.
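The difficulty described here can be put in rough numbers. Using a standard two-proportion sample-size formula (a sketch with illustrative figures, not figures from the Behaviour Change Impact Surveys themselves), detecting even a five-point improvement from a 90% baseline requires surveys of several hundred respondents per round – and the closer reported use sits to 100%, the less room there is for a detectable change at all:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Approximate respondents needed per survey round to detect a
    change from proportion p1 to p2 (two-sided test)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_b = NormalDist().inv_cdf(power)          # critical value for power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Detecting a rise in reported consistent condom use from 90% to 95%
print(sample_size_two_proportions(0.90, 0.95))  # 435 respondents per round
```

Smaller changes need far larger samples (a two-point shift from 90% to 92% pushes the requirement into the thousands), which is one reason year-on-year survey rounds keep reporting "no significant change".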

What about coverage of HIV prevention programmes for sex workers? To determine coverage you need to have a population estimate. There has never been a scientific national population size estimate of sex workers in the country. UNAIDS/WHO-recommended “capture/recapture” methods have been used to make estimates in Addis Ababa and many of the largest cities in the country. Vigorous and acrimonious debate has followed. It took three years for a population estimate in the Kenyan capital Nairobi to be published in a peer-reviewed journal. Do not expect to see an estimate for Addis Ababa in public soon.
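For readers unfamiliar with the method mentioned above: in a capture-recapture exercise, a first round of outreach "tags" the people encountered (for instance by handing out a memorable object), a second independent round records how many of those encountered were already tagged, and the overlap is used to estimate the total population. A minimal sketch using Chapman's bias-corrected version of the Lincoln-Petersen estimator (the numbers below are hypothetical, not from the Addis Ababa exercise):

```python
def chapman_estimate(n1, n2, m):
    """Chapman's bias-corrected version of the Lincoln-Petersen
    capture-recapture population size estimator.

    n1 -- people identified and 'tagged' in the first round
    n2 -- people encountered in the second round
    m  -- people in the second round who were also tagged in the first
    """
    if m < 0 or m > n2:
        raise ValueError("recaptures cannot exceed the second-round count")
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical two-round exercise in one city: 400 people tagged,
# 350 encountered in round two, of whom 90 were recaptured.
estimate = chapman_estimate(400, 350, 90)
print(round(estimate))  # 1546 sex workers in the sampled sites
```

The arithmetic is simple; the acrimony comes from everything around it – whether the two rounds were truly independent, whether the sampled sites cover the whole population, and what the resulting number gets used for.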

There is good news. The Ethiopian Health and Nutrition Research Institute is working with the US Centers for Disease Control to conduct HIV sentinel surveillance among sex workers. Data is being collected as this piece is being written. Still, even when the report is published, we will be left thinking: “Is HIV prevalence going up or down among sex workers? And what about incidence?”

Even one HIV sentinel surveillance report can inform programming immediately. If prevalence is found to be higher in some places in the country, more programme resources can be devoted to these areas. Until prevalence rates are known, all that can be done is to assume that there is equal prevalence among all sex workers and ensure maximal reach of programmes in as many cities in the country as resources allow.

Onwards.

__

Jamie Uhrig is an independent consultant who provides technical assistance to Timret Le Hiwot and DKT Ethiopia in providing high quality HIV prevention services for female sex workers throughout Ethiopia. http://www.linkedin.com/in/jamieuhrig/

He moderates the English language section of the website HIV Information for Myanmar [him] http://www.hivinfo4mm.org/ and can also be followed on Twitter @himmoderator.

Eight lessons on finding the money

Post by Owen Ryan


Where there is too much data… (Image courtesy of Salvatore Vuono/FreeDigitalPhotos.net)

Much of what we have discussed so far on Where There Is No Data is about collecting information where none or very little exists. But I focus on a type of research that is characterized by a slightly different obstacle: too much data. Anyone who has conducted any type of budget monitoring or program implementation analysis will be able to relate closely to this issue.

Collecting epidemiological information on marginalized groups can often feel like walking into a dark cave with a very small torch, but collecting budget and policy data on the same populations more closely resembles searching for a grain of salt on a beach of sand.  You know the real data are out there but they are surrounded by – and obscured by – a lot of what you don’t need.

A much better writer than me referred to this as “the signal and the noise” – the signal is what you’re looking for and the noise is everything that distracts you from it. Donors and national governments publish thousands of pages of documents on national AIDS programs but each of these varies tremendously in terms of quality and reliability (for an assessment of donor reporting see here). Finding the signal buried inside reams of noise can be almost impossible, but that information is a crucial advocacy tool and is worth searching for.

Budget and program data, when combined with up-to-date epidemiological data, can form a strong basis for effective advocacy with donors and national governments.  Recent examples show how efforts to describe the needs of marginalized groups, combined with budget information, can result in increased funding and services for these populations.

In recent years, budget monitoring and accountability have become much more prevalent. There are fantastic civil society organizations all over the world conducting this work (find them here). The International Budget Project has a very comprehensive guide and other resources that can help civil society advocates who are new to this work as well as those who are more experienced.

Researching how funding for HIV is allocated or what types of programs are being implemented is not a perfect science. My team at amfAR (the Foundation for AIDS Research) and I have spent years searching through donor reporting and budget spreadsheets to describe how national AIDS programs respond to the needs of sex workers, people who inject drugs, men who have sex with men and transgender individuals. Here are eight lessons we learned from our own mistakes:

  1. Start with the end in mind. If you are planning on writing a short paper on your findings, you will need to limit how much information you are looking for. If you are looking to develop a database on national AIDS spending, you may want to spend a lot more time and effort on your data collection. Take the time to define what you want your finished product to look like and let that guide your research.
  2. Define what you’re looking for. Do you want to know more about how much money the government spends preventing HIV among sex workers? Are you interested in the availability of condom compatible lubricant for MSM? Be clear in the beginning what you are trying to track and write specific questions for each. This will help guide and focus your research.
  3. Don’t reinvent the wheel. Take time to understand what others have written about your subject so you can build off of that knowledge without repeating the same research.
  4. Start with official reporting then quickly move on. Government and donor reporting is a good place to start learning more about a country’s national AIDS response, but it can also take too much time (e.g. PEPFAR’s 2012 plan for South Africa is 663 pages; the current National Strategic Plan is 83 pages). Read the pieces you need to understand, but quickly move on to other sources.
  5. Sometimes you can get a lot more out of a phone call or meeting than you can out of an official report. Many donor agencies and national governments have a contact person who is available to talk to civil society organizations interested in learning more about their national AIDS response. Getting a meeting with them can take a lot of patience but their information is often more accurate and reliable than official reporting.
  6. Be rigorous with your data collection. You will quickly find yourself knee-deep in a lot of data. Use a system to collect this information and keep track of your sources. Stick to that system. That way, you can accurately describe what you found when you’re finished. If people raise questions about your findings, you can quickly point to your data.
  7. Keep talking to your sources. You will find people in government and at donor agencies who are willing to help you with your research. Keep talking to them, and when you find information that they know something about, check with them to get their opinion on it.
  8. Put the same level of effort into communication that you did for data collection. Your findings are no good if they sit on a shelf. Talk to others about the best (and, depending on the context, safest) ways to make sure the right stakeholders hear what you have to say. This is never accomplished with one meeting or one presentation. Keep talking about it even months after your work is finished.

It is worth repeating – this is not a perfect science, but it is an essential element in describing the needs of marginalized groups.

Owen Ryan is Deputy Director for Public Policy at amfAR, the Foundation for AIDS Research where he focuses on key populations, budget monitoring for HIV, and new prevention technologies. Owen is also an Alternate Board Member to the Global Fund to Fight AIDS, Tuberculosis and Malaria.

What does “first ever” research look like?

Starting from nothing

When we talk about where there is no data, it is not strictly true that there is NO data.  Even if information has not been collected or analysed or published, it still exists.  But collecting, analysing and publishing information is often an important step to getting health problems to be better understood and to dealing with them effectively.

I was once asked to do some research on key populations for HIV – particularly sex workers and men who have sex with men – in a small country where no structured research had ever been carried out on these groups.  Of course, if you asked someone in the ministry of health, or even someone on the street, whether sex workers and men who have sex with men existed in the country, they would have an opinion – or indeed several opinions.  But if you want to start building programmes to work with these populations on issues such as human rights and HIV you need more than that.  Our initial brief was to carry out a population size estimate – to try to figure out roughly how many sex workers and MSM there were – but given the complete lack of any documented information, it was clear that this was not going to be possible.  So instead, we decided to try and find a few people to talk to and to write up a few case studies – just to document for the first time that there were indeed sex workers and MSM, to get a sense from them of the human rights and HIV situation, and possibly to lay the groundwork for some more formal research.

By word of mouth, and by identifying people through internet chat sites, we managed to find a handful of MSM who said they would be happy to talk.  Given the sensitivity of the topic and the fact that sex between men was illegal and highly stigmatised in the country, we arranged to meet in private locations like out-of-hours health facilities or hotel rooms, or even in one case in the back of a taxi.  Similar arrangements were made to speak with sex workers, although finding the way in to the group was less straightforward, as it involved working through local intermediaries.

We had an idea of how many people we wanted to speak to in the time available, and had developed both a questionnaire and a focus group discussion guide in order to be able to adapt to different situations.  We also prepared an informed consent statement and form, designed to make sure participants understood the purpose of the exercise and were happy to participate.

In reality, it is very hard to control what is going to happen in this sort of exercise, and nothing worked out quite the way we had planned.  Here are a few of the stumbling blocks we came up against:

  • Around half of the appointments were not kept.
  • Although participants were asked to come individually, a number came in pairs or groups of three, making it difficult to use either the interview or the focus group discussion guides.
  • A number of participants avoided responding to questions about sexual behaviour and sex partners – in some cases they declined to talk about themselves but talked about people they knew instead.
  • One interview had to be cut short because we were being devoured by mosquitoes.

Limited results; managing expectations

There are many good reasons why the process was so fraught. It is quite likely that participants were not reassured by the explanation of the research or the consent agreement, or that they did not feel able to talk openly despite the initial agreement.  Although the work did produce a few case studies, the amount of data and insight produced was fairly minimal.  And although it was not the only work we did during the trip, it ended up being relatively expensive for what it produced.

It may seem, given these challenges, that there is very little point in conducting this sort of research.  However, in situations where virtually nothing is known, and where the government or other funders use this fact to sidestep sensitive issues, it is important to start somewhere.  We were able to take the scant information to decision makers, to demonstrate that these groups existed and that even though it was challenging, it was possible to work with them to find out more about their lives.  We also made a handful of contacts who helped out, much later, in developing a more structured piece of research.  Conversely, as the difficulties we faced suggest, it would have been impossible to conduct a more structured piece straight off: we would not have been able to identify enough people and implement the research in a rigorous or acceptable way.

One of the biggest challenges in this process was, of course, that the information we were able to obtain fell far short of the client’s expectations.  In this case we realised before starting that this would happen, and were able to discuss it with the client and come to an agreement on what was realistic.  However, it is often not until research projects get going that challenges like these are identified.

What is the lesson?

The lesson here is that when trying to gather data on sensitive topics, it is often necessary to begin with very small and informal studies.  Being clear about the methods, and ensuring participants understand the process and consent to participate in it remains essential.  But don’t set your expectations in terms of results too high.  And in an environment where those who fund research are often removed from realities by some distance, it is essential to be clear from the start about what is and isn’t possible.  If a client wants a large behavioural study or a population size estimate, but the starting point is similar to the one described above, it is important to develop some type of “roadmap” that plots out the different processes and sub-studies needed to get there.  We’ll post more about this “roadmap” idea on Where there is no data over time.

What is data for health?

The first few posts on Where There Is No Data have set the scene by discussing the challenges faced by marginalised populations in HIV programmes.  We’ve also talked about the fact that having good data isn’t always enough – policy decisions tend to be influenced as much by politics and prejudices as they are by evidence.  Nonetheless, while data isn’t everything, getting better data is important.  It can begin to shine a light on problems that have been ignored; and when there is political commitment to tackling these problems, it can help make sure that health programmes are properly designed.

Policy makers, funding agencies and NGOs often talk about different types of data and research, but not everyone is familiar with these.  When we talk about data for health programmes what do we mean?

  • Data that describes the burden of health problems including the prevalence and incidence of these conditions, and how they are distributed among different sections of the population.  In most countries, data on HIV prevalence (i.e., the percentage of the population that is infected with HIV) is derived from a range of sources: surveys of a representative sample of households during which respondents are asked to participate in anonymous HIV testing; surveys of specific population groups such as pregnant women, men who have sex with men, sex workers (these are often called “sentinel surveillance” surveys); and routine data – for instance, the proportion of people volunteering for HIV testing who are found to be HIV positive.  None of these methods provides a complete picture of HIV prevalence but, if all of them are used fairly regularly, they can help give a good overview of what is happening.  In some countries, special studies are carried out to measure HIV incidence (i.e. the rate of growth of the epidemic in a given period of time such as a year), but this type of study is rarely carried out at national level.
  • Prevalence and incidence studies can also help to identify how a health problem is distributed in the population – whether it affects men or women more, whether some age groups are disproportionately affected, and which behaviours or sub-populations are most affected.  A lot of this information can be gained from household surveys and routine data.  However, because in many societies people are reluctant to talk openly about sex and sexuality, and in particular because of the stigma against behaviours such as sex work, these groups are often under-represented.  Moreover, knowing how a health problem is distributed in the population is not just a matter of knowing the prevalence of the problem in each sub-population group.  It also means knowing the size of each of these sub-population groups.  This is useful for planning programmes and ensuring that resources are allocated in the right places.  Once again, though, there are particular challenges in estimating the numbers of marginalised populations.  Sex between men, sex work, and drug use are behaviours, not identities, and people with these behaviours often have good reasons to avoid being counted or included in surveys.  In many locations there has been little or no research on these groups, and informal, community-level research is needed before any formal surveys can be carried out.
  • Information about what makes people vulnerable or at risk.  The same surveys described above can help provide an indication of why some people are more vulnerable or at risk than others.  In the context of HIV, researchers often ask respondents about their sexual behaviour, condom use and so on – although once again, people’s responses to these questions are not always reliable.  Moreover it is not enough to know whether people have good enough knowledge about HIV or whether they use condoms or whether they have access to health care.  It is also important to know why these things happen.  Is it because programmes are not reaching them? What role do stigma and discrimination play?  Getting the answers to these questions often requires a different approach: one that engages much more with the people concerned, and ideally one that is led by them.  These factors also often vary from place to place and can change over time, so it is important to have mechanisms that enable communities to collect, understand and act on local data in a regular way.
  • Data on what types of programme or “intervention” are effective.  Many policy makers rely on experimental research methods to provide evidence on the effectiveness of programmes or interventions for preventing or treating health problems.  Although they are expensive to conduct, these studies help provide an estimate of how effective different approaches are.  There is considerable debate surrounding the reliability of this sort of study for evaluating social change programmes, although their use in this area is growing.  In any case, because they are experiments, generally conducted in controlled conditions, the approaches they test will not necessarily be as effective or work in the same way when implemented at scale.  For this reason programmes have to find ways of continually monitoring their impact and of identifying any unintended consequences.  Once again, a combination of large-scale survey and routine data and more qualitative, community-based approaches is needed.
  • There is increasing interest in good quality programme related data – for instance, data on the coverage of programmes (how many people they reach and who these people are); data on how programmes are funded (where does the money come from? Communities? The government? Donors?); and data on what it costs to implement different types of programme.  Good data on costs, combined with good data on effectiveness can be used to make sure the most cost-effective programmes get funded – in other words that available resources have the biggest possible impact.
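The coverage calculation in the last point can be made concrete: coverage is simply the number of people a programme reaches divided by the estimated population size, so any uncertainty in the size estimate carries straight through to the coverage figure. A minimal sketch with made-up numbers:

```python
def coverage_range(people_reached, size_low, size_high):
    """Programme coverage as a (low, high) range, reflecting
    uncertainty in the population size estimate."""
    if not 0 < size_low <= size_high:
        raise ValueError("size bounds must be positive, with low <= high")
    # Dividing by the larger size gives the lower coverage bound.
    return people_reached / size_high, people_reached / size_low

# Hypothetical: a programme reaches 10,000 sex workers, and the
# population size is estimated at between 20,000 and 40,000.
low, high = coverage_range(10_000, 20_000, 40_000)
print(f"coverage is somewhere between {low:.0%} and {high:.0%}")
# prints: coverage is somewhere between 25% and 50%
```

A range this wide is common where size estimates are contested, which is why coverage claims without a stated population denominator should be treated with caution.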

Readers working on HIV and AIDS may be familiar with concepts such as “know your epidemic”, “know your response” and “strategic investment”.  The descriptions of the different types of data above are somewhat simplistic; however, they are the basic building blocks behind these concepts.  It is particularly useful for community actors and key or marginalised populations to know about these types of data and concepts, so that they can clearly articulate the gaps when speaking to policy makers.

We will explain all of these types of data in more detail on this site – including how reliable they are, what sorts of skills are required to collect them, and the methods and costs of collecting them.  There are a number of free online courses and resources where you can learn more about different research techniques, and we will post links to these too.  Finally, we will talk about the importance of data generated by communities and the challenge of trying to get this sort of data taken seriously.

The “data paradox”

In this short post we discuss further some of the challenges faced in planning HIV programmes for key populations.  The post is based on discussions that were originally posted on our Facebook page.


A write-up of a discussion among sex workers about where they are most likely to face violence in their community, and what they can do about it. By FIMIZORE, Madagascar network of sex workers (c).

Stef Baral

So why are we in a situation where we have no data on some of the groups most affected by HIV? To characterize the widespread and generalized HIV epidemics found in mostly southern and eastern Africa, we have relied on surveillance systems that do not meaningfully assess the very populations known to be most affected by HIV in most parts of the world, including sex workers, gay men and other men who have sex with men, people who use drugs, and trans populations. I understand the design and use of these systems – and there is no doubt that having a sampling frame made up of households is the most effective tool for population-based sampling to retrieve population-based estimates of HIV.

However, I also understand that for there to be household-based transmission of HIV, there has to be a sero-discordant relationship. And that tells me that there has to have been a primary infection event outside of the household. It is those risk factors for the extra-household primary infection events that we still seem to know very little about. And it seems to me that we may not have invested in this since maybe we don’t want to hear the answer.

Maybe it is because all around the world there are people who sell sex – as there always have been and always will be. All around the world, there are people who do not fit into heteronormative social expectations for sexuality and simple binary expressions of gender. And all around the world there are people who use drugs – some for fun and some because of untreated or undertreated mental health. And each of these people has specific needs for the prevention and treatment of the acquisition and transmission of HIV. But how can we fully understand these needs if we don’t want to admit that these populations exist everywhere? So we still, 30-some-odd years later, find ourselves in a data paradox. A paradox where we know less about the needs of diverse populations in settings with the most stigma.

Does that mean we should not respond until we have “the data”? It took me a while to grasp how brilliant a strategy this is: argue that the lack of data is a reason not to launch any programs or funding that might generate data. In other words, the lack of data feeds itself.

Matt Greenall

So the “data paradox” is this: decision-makers deny that most affected populations exist, or that they are relevant to the epidemic; so no research gets done on these populations; the lack of data feeds the denial; and so on.

But the other challenge we face is that decision-makers don’t always put resources towards programmes with key populations even when good data is available. Experience tells us that politicians, in particular, are very capable of ignoring data when it suits them to do so – and there are many countries where the quality of the data is reasonable enough but the investments in programming with the most affected groups are still all wrong. So while it is important to encourage better national-level research and data that can eventually be plugged into modelling and strategic planning exercises, we should also be aware that they won’t necessarily resolve everything.

Although there is no doubt that having the right laws and supportive leadership from the government or Ministry of Health makes a huge difference to how different problems get addressed, not all change comes from the top down. Similarly, the stigma and marginalisation that key populations face does not only come from the national level – it also has a lot to do with the attitudes and behaviours of health care workers, law enforcement officers, and community members at the local level. These need to be addressed too. This discussion at the recent International AIDS Conference in Kuala Lumpur is an example of how research and programmes can be initiated at the local level even without strong support from policy makers. I’ll also post something in a few days, with a link here, about local, community-led research and how it can help groups get organised at the local level.

Setting the scene


I have been working in public health and human rights for nearly 15 years, and for around half of that time I have been involved independently as a researcher, analyst and facilitator.  I specialise in HIV and AIDS, and within that a large proportion of my work is on issues related to “key populations” such as men who have sex with men, transgender individuals, and sex workers.

These groups are often much more affected by HIV than any other group, and yet there are all sorts of barriers that get in the way of making sure they are involved in effective programmes.  They are often stigmatised or marginalised by society, and even criminalised.  Moreover, although we often use the terms “key populations” or communities to refer to these groups, the terms are misleading: although there may be groups or communities of men who have sex with men, transgender people or sex workers, none of them is necessarily a homogenous or self-identified community.  Although categories are useful in epidemiology and other types of research, the people who fall under these categories are diverse, and the categories aren’t always that useful from a programming point of view.

Decision makers and people with political authority have often used this fact as an excuse for ignoring the need to protect the human rights and health of key populations.  Their existence is often denied, and because of this, no effort is made to find out more about them and how they are affected by HIV and AIDS and human rights violations.  And because there is no data, decision makers – whether in national governments, NGOs or donor organisations – are reluctant to enact policy changes or make funding available.

So this is where Where there is no data is going to start: by talking about data on key populations in the context of HIV.  The next post on Where there is no data, entitled “The data paradox” by Stefan Baral, explains this in more detail.  Many readers will recognise these problems from their own experience.

As well as discussing the challenges and paradoxes, however, we are even more interested in discussing and sharing how to respond to them, in a practical way.  In the next few weeks we hope to publish posts discussing the following questions:

  • If no research has ever been done on a key population in a country, where do you start? What does the first ever research look like?  What comes next?
  • What types of study about key populations and HIV exist?  What do they tell us, and what do they not tell us?  Where can you find out about existing studies in a given country?  How do you go about running these studies – what sorts of skills and resources are needed?
  • How do we know when data is “good enough” for decision makers to use?  What makes research credible?
  • What are the challenges and risks involved in doing studies on these topics?
  • How can marginalised and stigmatised populations get involved in research and in using data?
  • Is it enough to have good data? What if decision makers are determined to ignore it?

On Where there is no data, our aim is not just to discuss these questions in theory but also to give examples, describe experiences, and point readers to where they can find more information and resources.  We welcome comments and discussions, and contributions from readers, as our How to get involved section describes.  And as the About section describes, while we are beginning by discussing HIV and key populations, we are keen to expand our focus.  There’s also plenty of room for discussion on the forum we have set up on Facebook and on our Google+ page.

What do you think of our plans?  Do you have any suggestions?  Please tell us about them using the comment field below, or via the form on the Contact us page.

We look forward to welcoming you back when we publish our next post.