CBVCT Services and Organization Needs

Monitoring and evaluation

Monitoring and evaluation (M&E) and quality improvement are management tools.

Monitoring uses data to track progress on activities. An evaluation is a systematic and objective review of the relevance, effectiveness, efficiency and impact of activities in the light of the specific objectives determined as part of programme planning.

M&E results provide information for improving services and for comparing performance with other, similar services. They can also support advocacy for CBVCT services alongside health care-based HIV testing and counselling (HTC) services, and provide evidence of activities and impact when seeking funding.

Please note:

  • M&E of CBVCT at the individual service level requires the allocation of resources such as personnel time and logistical support.
  • Consider an appropriate balance between the M&E workload at the level of the CBVCT service and at the level of the national HIV/AIDS prevention, treatment and care programme.
  • Check the feasibility of introducing, conducting and sustaining M&E efforts over time.
  • A short list of M&E indicators relevant to the core goal, objectives and targets of CBVCT service activities is recommended.

The aims and objectives of special studies or research projects that may be conducted within CBVCT services generally require much more extensive data collection than routine M&E activities, and often more sophisticated data collection methods. Avoid confusing them with routine CBVCT M&E activities, which have to be sustainable in the long term.

Quality relates to the achievement of objectives and outcomes in a manner consistent with current professional knowledge and standards.

This section covers the aspects of monitoring and evaluation that are most relevant to CBVCT services. For more details on the topic of quality, please go to Quality improvement.

Get some insights from different Checkpoints in Europe in our video from the Workshop in Ljubljana (Russian subtitles included).

Checklist columns: Item | Does your CBVCT have this in place? | Is there a documented standard, guideline, plan, policy, procedure, contract or agreement? | Is it adapted to local needs and conditions? | Is it working as intended? | Action
Collecting activity data
Description

Activity data are basic operational data that show whether the CBVCT is operating as intended and may also include client satisfaction ratings and qualitative comments.

 

Guidance

For a comprehensive picture of CBVCT operations, you need to count all activities and client contacts, including those that do not result in an actual test being carried out.

Example: Athens Checkpoint reporting variables

  1. Number of testing and counselling sessions
  2. Percentage of appointments for testing / counselling / confirmatory testing / test results / referrals
  3. Percentage of men and women
  4. Percentage of age groups
  5. Percentage of population groups

(…)

  1. Percentage of client satisfaction as regards accessibility to the Service
  2. Percentage of client satisfaction as regards working hours
  3. Percentage of client satisfaction as regards their appointment waiting period
  4. Percentage of client satisfaction as regards the overall feeling of the Service
  5. Percentage of client satisfaction as regards employees’ manners and friendliness
  6. Percentage of client satisfaction as regards confidentiality
  7. Percentage of client satisfaction as regards the duration of the testing
  8. Percentage of client satisfaction as regards the pre-test and post-test counselling phases
  9. Percentage of client satisfaction as regards choosing Ath Checkpoint for future HIV tests

In addition to data on tests conducted (see the next topic, Monitoring HIV diagnoses), activity data include the number of telephone, social media and face-to-face enquiries, website hits and outreach sessions, as well as the numbers of condoms, safer drug use equipment and information materials distributed.

Data collection instruments may include client contact forms, activity reports and stock take forms filled in by staff and volunteers as part of routine operations, and website hosting reports received from contractors.

These data can then be compared to targets agreed during the planning process and monitored over time.
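If activity data are kept electronically, the comparison against targets can be automated with a short script. A minimal sketch in Python, where all activity names, counts, targets and the 90% threshold are hypothetical examples rather than figures from any real CBVCT:

```python
# Minimal sketch: comparing monthly activity counts against agreed targets.
# All activity names, numbers and the 90% "on track" threshold are invented.

monthly_targets = {
    "testing_sessions": 120,
    "phone_enquiries": 80,
    "outreach_sessions": 4,
    "condoms_distributed": 2000,
}

monthly_actuals = {
    "testing_sessions": 97,
    "phone_enquiries": 102,
    "outreach_sessions": 3,
    "condoms_distributed": 2450,
}

for activity, target in monthly_targets.items():
    actual = monthly_actuals.get(activity, 0)
    pct = 100 * actual / target
    status = "on track" if pct >= 90 else "review"
    print(f"{activity}: {actual}/{target} ({pct:.0f}% of target) - {status}")
```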

Collecting client satisfaction ratings requires asking for feedback through routine paper or electronic evaluation forms, or through periodic surveys conducted at longer intervals. Qualitative information such as complaints, comments and suggestions (verbal and recorded by staff or written) can be very useful in identifying possible reasons for trends in activity data.

The time clients are asked to spend providing feedback, and the effort involved in collecting and analysing the data and in reporting and reacting to the results, must remain proportional to the scope of the CBVCT's operations. Only collect data that you have the capacity to analyse and use.

Activity data and the results of client satisfaction surveys are important inputs for quality improvement activities.

The Correlation Network’s ‘Effective evaluation: An introduction for grass-root organisations’ (http://www.correlation-net.org/index.php/products-correlation) is a straightforward guide to developing evaluation plans, including data collection.

 

Adaptation

Which reach and satisfaction data can be collected depends on the activities included in CBVCT operations (e.g. mobile units or outreach sessions, condom distribution) as well as on local conditions (e.g. some clients might be too concerned about privacy to fill in any evaluation forms).

Example: Athens Checkpoint Evaluation Form (excerpt)

  1. Was the Ath Checkpoint setting friendly (reception, testing room)?

     not at all   (1)   (2)   (3)   (4)   (5)   very much

  2. Was the staff as friendly as you expected?

     not at all   (1)   (2)   (3)   (4)   (5)   very much

  3. In case of a positive test result, did it cross your mind that confidentiality might be breached by the Ath Checkpoint employees?

     not at all   (1)   (2)   (3)   (4)   (5)   very much
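To turn responses from a form like this into the ‘percentage of client satisfaction’ variables listed earlier, each 1–5 scale needs to be summarised. A minimal sketch, where the responses are invented and the convention of counting ratings of 4 or 5 as ‘satisfied’ is an assumption, not a documented Athens Checkpoint rule:

```python
# Sketch: summarising 1-5 rating-scale responses as satisfaction percentages.
# The responses are invented, and counting 4-5 as "satisfied" is an assumption.

responses = {
    "friendly setting": [5, 4, 4, 3, 5, 2, 4, 5],
    "staff friendliness": [5, 5, 4, 4, 3, 5, 4, 4],
}

for question, ratings in responses.items():
    satisfied = sum(1 for r in ratings if r >= 4)
    pct = 100 * satisfied / len(ratings)
    print(f"{question}: {pct:.0f}% rated 4 or 5 (n={len(ratings)})")
```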

Options for reducing costs:

  • Harmonise activity data collected for mandatory reports – e.g. to funders and public health authorities – with the activity data needed for internal purposes.
  • Allocate data collection to staff and volunteers who are best placed to record activity as part of routine tasks, e.g. receptionists recording telephone enquiries as they take calls or outreach volunteers recording condom deliveries received and stocks remaining.

Example: Thessaloniki Checkpoint client contact form

 

Quality Improvement

Tools such as Succeed and QIP (www.quality-action.eu), EQUIHP (European Quality Instrument for Health Promotion, www.ec.europa.eu/health/ph_projects/2003/action1/docs/2003_1_15_a10_en.pdf) and Quint-Essenz (http://www.quint-essenz.ch/en) include questions to encourage discussion and creative ideas for improving reach and satisfaction data collection and analysis.

 

Action plan

This Action Plan helps you to work directly on the items identified as priorities (yellow and/or red fields in the Checklist). Please list actions that are as specific as possible. You can download your finished Action Plan for each section as an .xlsx document and print it afterwards. The Action Plans form the basis for your further planning, implementation and evaluation.

The Action Plan shows a sequence of steps to be taken, or activities to be performed for a strategy to succeed. The Action Plan has four major elements: (1) what will be done (specific tasks), (2) by whom (responsibility), (3) by when (timeframe), and (4) how the implementation of the task will be monitored.
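A minimal sketch of how a single action-plan entry might be recorded, covering the four elements above; the content is a made-up example:

```python
# Sketch: one action-plan entry capturing the four elements named above.
# The content is invented for illustration.

action = {
    "what": "Introduce a one-page client feedback form",
    "who": "Service coordinator",
    "when": "End of Q3",
    "monitoring": "Completed forms reviewed at the monthly team meeting",
}

for element, value in action.items():
    print(f"{element}: {value}")
```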

 


Monitoring HIV diagnoses
Description

For CBVCT services, monitoring HIV diagnoses means assessing the extent to which activities increase early HIV diagnosis among key populations, in this case MSM.

 

Guidance

The documented success of CBVCTs in detecting a large proportion of new HIV diagnoses is a strong argument for funding support. Monitoring HIV diagnosis is essential for supplying this information.

The ‘Guidelines for Data Collection for Monitoring and Evaluation of Community Based Voluntary Counselling and Testing (CBVCT) for HIV in the COBATEST Network’ are designed to improve the quality and consistency of the data collected at the CBVCT service level for M&E purposes. This enhances the conclusions that can be drawn at national and European level. Standardised monitoring and evaluation allows for comparability of data.

These guidelines are built around a set of core indicators:

Level 1 (Core indicators to monitor HIV diagnosis in CBVCT services)

  1. Number of clients tested for HIV with a screening test
  2. Proportion of clients who reported to have been previously tested for HIV
  3. Proportion of clients who reported to have been tested for HIV during the preceding 12 months
  4. Proportion of clients who reported to have been tested for HIV at the same CBVCT facility during the preceding 12 months
  5. Proportion of clients with a reactive HIV screening test result
  6. Proportion of clients tested for HIV with a screening test who received the results
  7. Proportion of clients with a reactive HIV screening test result who received post-result counselling
  8. Proportion of clients with a reactive HIV screening test result who were tested with a confirmatory HIV test
  9. Proportion of clients with a positive confirmatory HIV test result
  10. Proportion of clients with a positive confirmatory HIV test result who received the conclusive confirmatory HIV test result at the CBVCT facility
  11. Proportion of clients with a positive confirmatory HIV test result who received post-result counselling at the CBVCT facility

Level 2 (Optional CBVCT indicators)

  1. Proportion of clients who received a pre-test discussion, pre-test counselling or pre-result counselling and were tested for HIV with a screening test
  2. Proportion of clients with a non-reactive screening HIV test result who received post-result counselling
  3. Proportion of clients with a negative confirmatory HIV test result who received the conclusive confirmatory HIV test result at the CBVCT facility
  4. Cost per client tested
  5. Cost per HIV diagnosis

Level 3 (Optional core CBVCT indicators)

  1. Proportion of clients testing HIV positive at CBVCT sites who were linked to health care
  2. Proportion of clients testing HIV positive at CBVCT sites who were diagnosed late
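To illustrate how these indicators reduce to simple proportions over client-level records, here is a minimal Python sketch. The field names and all figures are invented; the official COBATEST data collection form and Excel indicator calculator (linked below) are the authoritative tools:

```python
# Sketch: calculating a few of the indicators above from client-level records.
# Field names and all figures are invented examples.

clients = [
    {"reactive": False, "result_received": True,  "confirmed_positive": False},
    {"reactive": True,  "result_received": True,  "confirmed_positive": True},
    {"reactive": False, "result_received": False, "confirmed_positive": False},
    {"reactive": True,  "result_received": True,  "confirmed_positive": False},
]

n_tested = len(clients)                                        # Level 1, indicator 1
n_reactive = sum(1 for c in clients if c["reactive"])          # indicator 5 numerator
n_received = sum(1 for c in clients if c["result_received"])   # indicator 6 numerator
n_diagnosed = sum(1 for c in clients if c["confirmed_positive"])

print(f"Clients tested with a screening test: {n_tested}")
print(f"Proportion reactive: {100 * n_reactive / n_tested:.0f}%")
print(f"Proportion receiving results: {100 * n_received / n_tested:.0f}%")

# Level 2 cost indicators, using a made-up annual testing budget:
budget = 30000.0
print(f"Cost per client tested: {budget / n_tested:.2f}")
print(f"Cost per HIV diagnosis: {budget / n_diagnosed:.2f}")
```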

You can find the full document at https://eurohivedat.eu/

 

You can find a data sheet template (Data Collection Form COBATEST Network_Questionnaire) at https://eurohivedat.eu/

An Excel file for calculating the M&E indicators for CBVCT activities in the COBATEST network is also available at https://eurohivedat.eu/

 

Adaptation

Readily available tools and instructions support the core indicators and data collection and analysis methods described in the ‘Guidelines for Data Collection for Monitoring and Evaluation of Community Based Voluntary Counselling and Testing (CBVCT) for HIV in the COBATEST Network’.

They are an essential monitoring system for CBVCTs and have been tested in practice in a variety of settings.

Because they are already specifically adapted to CBVCTs, it makes sense to implement them fully. Participating in the COBATEST network builds comparable data sets at the national and European level over time.

Support is available from the COBATEST network to implement the guidelines. Please contact CEEISCAT (lflopez@iconcologia.net) for further assistance.

Example: Athens Checkpoint reporting data (excerpt)

(…)

  1. Percentage of clients who received pre-test counselling
  2. Percentage of clients whose informed consent was requested for the test
  3. Percentage of clients who were tested for HIV antibodies
  4. Percentage of clients who received post-test counselling
  5. Percentage of negative and preliminary positive results
  6. Percentage of clients who tested positive and were booked an appointment for confirmatory testing
  7. Percentage of clients who tested positive and were given written directions on how and where to take a confirmatory clinical test
  8. Percentage of clients who tested positive and were proposed to be escorted to a health clinic for confirmatory testing
  9. Percentage of clients who tested positive and requested an extra counselling session
  10. Percentage of clients who tested positive and were referred to a psychologist
  11. Percentage of information sources regarding the Service
  12. Percentage of clients who had been tested for HIV before
  13. Percentage of clients regarding when they had last taken an HIV test
  14. Percentage of clients regarding where they had last taken an HIV test
  15. Percentage of clients regarding whether they had been tested for other STIs before
  16. Percentage of clients regarding where they had been tested for other STIs before
  17. Percentage of clients who had been tested for other STIs and for which STIs they have been tested
  18. Percentage of clients who have been vaccinated against Hepatitis B

(…)

 

Quality Improvement

Question eleven of the Euro HIV EDAT Self-evaluation Grids focuses specifically on evaluation.

Comprehensive quality improvement tools for prevention and health promotion projects such as Succeed and QIP, both available at www.quality-action.eu, as well as EQUIHP (European Quality Instrument for Health Promotion, www.ec.europa.eu/health/ph_projects/2003/action1/docs/2003_1_15_a10_en.pdf) and Quint-Essenz (http://www.quint-essenz.ch/en) cover monitoring and evaluation overall, including questions to check whether the CBVCT collects all the data needed to monitor and evaluate its goals and objectives.

 


Regular data analysis and reporting
Description

Regular data analysis means using statistical and thematic methods to identify results and trends. Reporting means summarising the findings, comparing them to any existing targets and writing them up in relation to goals and objectives.

 

Guidance

You can analyse quantitative activity data using basic statistical tools such as proportions and averages. There are a number of online resources on analysing activity data (reach and satisfaction). The EURO HIV EDAT project has also produced a self-evaluation assessment tool for CBVCT:

Euro HIV EDAT self-evaluation assessment

Basic thematic analysis of qualitative data (complaints, comments and suggestions), such as simply counting the number of times a suggestion is made or a topic is mentioned (see also the Enquiries and Complaints Register method in the PQD toolkit, available on www.quality-action.eu), can guide quality improvement and innovation. Even if such counts are not in themselves representative or conclusive, they indicate what the relevant topics are. You can use participatory methods such as Rapid Assessment or Focus Group, also available as step-by-step guides in the PQD tool, to investigate them further.
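Once staff have read each comment and assigned it a topic, the counting itself is trivial. A minimal sketch with invented topic codes:

```python
# Sketch: counting how often each topic appears in coded client comments.
# The topic codes are invented; assigning them to comments is a manual step.
from collections import Counter

coded_comments = [
    "opening hours", "waiting time", "opening hours",
    "more STI information", "waiting time", "opening hours",
]

for topic, count in Counter(coded_comments).most_common():
    print(f"{topic}: mentioned {count} times")
```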

It is important to report evaluation results not only to funders, management, boards, advisory committees and other governance structures, but also to the key population, especially to those who contribute to the CBVCT, and to survey respondents. Feedback loops can be created by posting short summaries of evaluation results online, publishing articles in the gay press, and of course by integrating results into the CBVCT (e.g. ‘60% of you liked our short pre-test information session and 40% wanted more information about syphilis: so here it is!’). Such feedback loops highlight the participation of the key population and can contribute to a sense of collective ownership of the CBVCT.

Key questions are:

  • Whom do we have to report to and whom do we want to report to?
  • What can we report back to MSM and how can we make it meaningful?
  • What do the evaluation results tell us and what do they mean for the future?

Charts and graphs communicate quantitative results visually and selected quotes link the themes emerging from qualitative data back to the lived experience of survey respondents.
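A hedged example of producing such a chart, assuming matplotlib is available and using made-up satisfaction results:

```python
# Sketch: a simple bar chart of satisfaction results for a report.
# All values are invented; requires matplotlib (pip install matplotlib).
import matplotlib.pyplot as plt

questions = ["Accessibility", "Working hours", "Waiting period", "Confidentiality"]
pct_satisfied = [88, 76, 69, 95]

plt.bar(questions, pct_satisfied)
plt.ylabel("% of clients satisfied")
plt.ylim(0, 100)
plt.title("Client satisfaction (example data)")
plt.tight_layout()
plt.savefig("satisfaction_results.png")
```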

 

Adaptation

The effort (financial, human and time resources) that goes into analysis and reporting as part of monitoring and evaluation must be in proportion to the resources dedicated to achieving the targets of the CBVCT. It depends on a range of local factors, e.g.:

  • Non-negotiable requirements in funding agreements
  • Legal requirements
  • Whether the CBVCT is a new or established project
  • Available evaluation expertise
  • The prevailing culture regarding clients providing data and giving feedback.

To match the expectations of different stakeholders – including funders, staff, clients and the MSM community – with a meaningful and realistic level of monitoring and evaluation, it is important to collectively plan and agree on the indicators, the data collection methods, and the depth and frequency of analysis and reporting.

Options for reducing costs:

  • Offer students the opportunity to analyse the data as part of their course work (e.g. social science, nursing, public health management and related courses)
  • Combine reporting requirements, e.g. of funders, governance structures (e.g. boards, management committees, supervisors) and regulators so that one set of reports can be used to provide accountability to a range of stakeholders
  • Participate in networks (e.g. the COBATEST network) that provide a web-based data entry tool, giving participating CBVCT services access to a database of all their testing data and estimates of the M&E indicators.
  • Collaborate with research centres and public health agencies.

 

Quality Improvement

The Circles of Influence method in the PQD toolkit (www.quality-action.eu) offers a way of analysing the level of participation of each relevant stakeholder group in the decision-making processes of the project. Because analysis and reporting are key inputs into decisions, this method can also be used to identify which level of analysis and reporting is appropriate for which group of stakeholders.

Comprehensive quality improvement tools for prevention and health promotion projects such as Succeed and QIP, both available at www.quality-action.eu, as well as EQUIHP (European Quality Instrument for Health Promotion, www.ec.europa.eu/health/ph_projects/2003/action1/docs/2003_1_15_a10_en.pdf) and Quint-Essenz (http://www.quint-essenz.ch/en) cover monitoring and evaluation overall, including questions to check whether the CBVCT analyses and reports in a way that allows it to assess and improve quality.

 


Impact evaluation
Description

Impact evaluation assesses the intended and unintended changes resulting from an intervention. Beyond stated activity targets, it looks for changes in outcomes that are attributable to the intervention.

 

Guidance

CBVCTs targeting MSM aim to contribute to the first step in the HIV continuum of care: the proportion of MSM living with HIV that has been diagnosed. This is reflected in the WHO programme indicators for HIV testing (WHO/HIV/2015.32, available at http://apps.who.int/iris/handle/10665/179870).

Because several testing services may contribute to the numerator for this indicator (the number of MSM living with HIV who have been diagnosed and received their results), and the denominator (the number of MSM living with HIV) is based on an estimate, attributing impact to a CBVCT service depends on the proportion of HIV diagnoses among MSM it contributes. Some existing CBVCTs have already been able to demonstrate that they contribute more than a third of HIV diagnoses at the national level.
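To make the arithmetic concrete, here is a small worked sketch. Every number is invented; real figures would come from national surveillance data and modelling estimates:

```python
# Sketch: estimating a CBVCT's contribution to the first step of the
# continuum of care. Every number below is invented for illustration.

msm_living_with_hiv = 5000      # denominator: national modelling estimate
msm_diagnosed_total = 3900      # numerator: diagnosed via all services combined
new_diagnoses_national = 150    # new diagnoses among MSM this year, all services
new_diagnoses_at_cbvct = 60     # of which made at this CBVCT

print(f"Proportion diagnosed: {100 * msm_diagnosed_total / msm_living_with_hiv:.0f}%")
print(f"CBVCT share of new diagnoses: "
      f"{100 * new_diagnoses_at_cbvct / new_diagnoses_national:.0f}%")
```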

Empowerment, quality of life and community involvement are also desired impacts of the CBVCT service. A holistic medicine/harm reduction concept (mind, body, social environment) seeks to empower individuals to take an active role in their health and to take responsibility for maintaining it, by educating clients and the key population on how to make the best choices for their health.

Evaluating these impacts requires different indicators and longer-term follow-up, as changes may take some time to appear. Examples from the field of international development suggest that the key population themselves need to define what changes are meaningful to them. A participatory approach to impact evaluation uses SPICED indicators to complement the SMART indicators commonly used to measure operational objectives:

SPICED: Subjective – Participatory – Interpreted and communicable – Cross-checked and compared – Empowering – Diverse and disaggregated

Subjective: Informants have a special position or experience that gives them unique insights, which may yield a very high return on the investigator's time. In this sense, what others see as ‘anecdotal’ becomes critical data because of the source's value.

Participatory: Objectives and indicators should be developed together with those best placed to assess them. This means involving a project’s ultimate beneficiaries, but it can also mean involving local staff and other stakeholders.

Interpreted and communicable: Locally defined objectives/indicators may not mean much to other stakeholders, so they often need to be explained.

Cross-checked and compared: The validity of assessment needs to be cross-checked, by comparing different objectives/indicators and progress, and by using different informants, methods, and researchers.

Empowering: The process of setting and assessing objectives/indicators should be empowering in itself and allow groups and individuals to reflect critically on their changing situation.

Diverse and disaggregated: There should be a deliberate effort to seek out different objectives/indicators from a range of groups, especially men and women. This information needs to be recorded in such a way that these differences can be assessed over time.

From ‘Equal Access Participatory Monitoring and Evaluation toolkit, Module 2: Setting objectives and indicators’, available at http://betterevaluation.org/toolkits/equal_access_participatory_monitoring

 

Adaptation

Describing the impact of a CBVCT service on the proportion of MSM living with HIV that has been diagnosed depends on collaborative data collection and analysis at the national or regional level over longer periods of time.

It is best to determine the frequency and form that impact evaluation takes by collaborating with all local stakeholders, including NGO providers of CBVCT.

Evaluating the impact on empowerment may be achieved through social research rather than programme monitoring and evaluation. Please also see the section on research projects.

Options for reducing costs:

  • Offer students the opportunity to carry out impact evaluations as part of their course work (e.g. social science, nursing, public health management and related courses).
  • Combine impact evaluation with interventions targeting MSM conducted by partner organisations and carry it out at longer intervals to increase the chances of finding significant changes.
  • Collaborate with research centres and public health agencies.

 

Quality Improvement

Quality improvement at the impact level (which usually means at the national or sub-national level) requires coordination and long-term planning. Shift is a comprehensive quality improvement tool developed specifically for this purpose. Among many other inputs, it depends on this kind of impact evaluation to inform the assessment and improvement of the overall response to HIV, including the continuum of care, and can help improve impact evaluation for the future.

The tool, including comprehensive supporting materials, is available at www.quality-action.eu.

 


Research projects
Description

Research projects use a range of quantitative and qualitative scientific methods to answer specific questions.

 

Guidance

Research projects, especially when they are conducted by a CBVCT in partnership with a research institution, e.g. a public health institute or university, can be used for a range of purposes that are important for establishing health promotion and prevention services, including CBVCT services, for key populations such as MSM. They can:

  • Establish baseline knowledge, e.g. describing the size and characteristics of the key population
  • Serve as a needs assessment that can be used as evidence in funding applications
  • Investigate topics arising from CBVCT operations, e.g. barriers to testing
  • Describe long-term impacts of CBVCT services in particular, and health promotion and prevention interventions in general.

Research projects can also gain exemptions from legal barriers to CBVCT and/or particular testing technologies and serve as pilot projects to build evidence for scaling up.

For example, Copenhagen Checkpoint asked researchers to look at linkage to care and viral suppression results in the cohort diagnosed at the checkpoint, and to publish the results in a scientific journal article. The article is then used as evidence in funding applications.

Research projects take considerably more time to complete than needs assessments or programme evaluations – usually several years – and are therefore suited to more fundamental questions.

 

Adaptation

Community-driven research and reports can contribute to key population empowerment and assess results in the context of lived experience. Research projects depend on external funding sources and details are negotiated accordingly.

 

For example, the formative research conducted as part of the HIV-COBATEST, Euro HIV EDAT and other European projects brought together the European Commission, national partners, academics, NGOs and the community in strengthening the CBVCT approach and built evidence and guidance to scale it up across Europe.

 

Options for reducing costs:

  • Collaborate with universities and conduct research studies as masters or doctoral theses (e.g. with social science, nursing, public health management and related postgraduates).

 

Quality Improvement

Involvement in research is a quality criterion in quality improvement not only because it increases the knowledge base for the project or programme, but also because collaborative research transfers skills between partners, e.g. research skills to community workers and participatory and facilitation skills to researchers.

Using structured quality improvement tools (e.g. those available on www.quality-action.eu) may generate questions that can then be prioritised for research projects.

 


Action Plan

CBVCT Services and Organization Needs: Monitoring and evaluation

Item | What will be done? | Who will do it? | When? | How will we monitor it?

Copy your items from the Checklist above into the Action Plan table.