CARE Food Manual (CARE, 1998, 355 p.)

Chapter 12 - Monitoring Project Sites



I. Site Monitoring

A. Reasons for Distribution Site Monitoring

Two types of monitoring generally take place for all projects: impact and systems monitoring. Monitoring for impact involves the tracking of project-specific variables directly related to final objectives, such as nutritional status, consumption patterns and household income. CARE and donors want to know who is receiving benefits from the program, in what way and to what degree relative to the costs involved, and why the program is or is not having the intended impact. This type of evaluation requires baseline information.

This manual focuses on monitoring systems of food management, by reviewing internal controls and verifying the documentation for individual transactions. This information relates to management of assets and compliance with donor regulations.

The monitoring process seeks to reduce the risk that registered beneficiaries are not receiving their intended rations and that food management systems are not operating as intended.

Monitoring data should satisfy management information needs covering receipt, storage and distribution of food. Monitoring should:

· Verify that registered beneficiaries are receiving the intended quantity and quality of food.

· Determine if distribution staff are following procedures as stipulated in agreements.

· Determine if control procedures are adequate at each stage of the distribution to prevent corruption and misappropriation.

· Determine losses and whether timely action has been taken to pursue claims against responsible parties.

· Provide project management with suggestions to improve procedures.

· Verify amounts of food in possession of counterparts by reconciling stock records and physical inventories.

CARE often provides support to on-going government or other counterpart programs by procuring food, arranging for transport and delivery of food, and providing advisory or technical support to the counterpart’s program activities. Counterparts often manage all other aspects of project implementation, including food handling and distribution activities.

Whether or not CARE directly implements a program, effective monitoring systems and procedures must be in place for any program using food resources.

In developing monitoring systems, refer to the CARE Program Manual Chapter Five - Monitoring and Evaluation and the Data Collection Handbook: Tools for Evaluation, March 1991, and more specifically to the Food Security Unit’s (formerly Food Program Unit) Evaluation Module, March 1993.

B. Ways of Collecting Information

Information about systems at the site level is collected in several ways. First, there is required reporting based on recordkeeping. Project management may require all sites to submit daily, weekly, monthly or quarterly reports. Regular site reports are the main source of information regarding total amount of food received and distributed to beneficiaries, inventories in storage sites, extent of losses, adequacy of food management systems, staff training needs, and the number of project beneficiaries. Second, there are site visits to improve performance of sites not operating adequately. The visits, regardless of the information produced, have a positive impact on site management. Third, and the focus of this chapter, is monitoring a sample of sites, based on mathematical laws of probability which state that a small number of sites randomly selected from all the sites will demonstrate the characteristics of the whole. The goal of statistical sampling is to achieve maximum objectivity, representativeness and efficiency.

C. Use of Information

Aggregate information collected from the regular site reports is compared with information drawn from the monitoring sample. If the sample is reliable, discrepancies between the two could indicate serious control problems at the site level. For example, every month 95% of the sites may report that they distribute the full authorized ration to the precise number of authorized beneficiaries. Monitoring reports, however, show that 85% of the sites visited are serving an average of 50% more beneficiaries than authorized or reported. There is clearly a widespread distortion between the site reports and the monitoring reports.
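Such comparisons can be made site by site as well as in aggregate. The sketch below flags sites where the monitor's count of beneficiaries differs sharply from the site's own report; the site names, figures and the 25% threshold are assumptions used only for illustration.

    # Flag sites whose monitored beneficiary counts diverge from their own reports.
    site_reports = {"Site A": 200, "Site B": 150, "Site C": 400}    # beneficiaries per site reports
    monitor_counts = {"Site A": 205, "Site B": 240, "Site C": 610}  # beneficiaries counted by monitors

    THRESHOLD = 0.25   # flag differences larger than 25% of the reported figure

    for site, reported in site_reports.items():
        observed = monitor_counts[site]
        change = (observed - reported) / reported
        if abs(change) > THRESHOLD:
            print("%s: reported %d, observed %d (%+.0f%%) - investigate"
                  % (site, reported, observed, change * 100))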

Comparative analysis has both programmatic and administrative implications; the under-reporting or over-reporting of beneficiaries may require a change in the number of sites, better targeting and registration, changes in the planning of allocations, different types of foods, or adjustments in the distribution mode to ensure that the target population receives the intended ration.

If the center reports do not match the monitoring reports, possible causes of the discrepancies include:

· Misappropriation
· Lack of training
· Poorly designed reporting formats
· Fear of site personnel to report honestly and freely on distribution activities/problems
· Collusion involving transporters and individual(s) responsible for receipt at the center
· Receipt of short-weight deliveries from CARE warehouses or transporters.

Project managers, Food and Logistics staff and others in country offices must regularly review and compare distribution site reports with information received during visits by field monitors to determine whether there are discrepancies.

II. Sampling

All sites are monitored only when the number of sites is very small. In most cases, conclusions extrapolated from visits to a sample of sites can be used to validate the accuracy of the information provided by the site reports. Statistical sampling strikes a balance between the impracticality of completely examining the performance and transactions of all sites on the one hand, and the selection of a sample whose margin of error falls within an acceptable range on the other. It is important that sample sites are selected from the master list of approved distribution sites and that sites are selected in such a way that every site has an equal chance of being selected.

A. Types of Statistical Sampling

The following general information and suggestions on selecting sample sizes, choosing a methodology and interpreting data can be augmented by further assistance from regional managers, Technical Assistance Group, CARE’s Internal Audit Department and other consultants.

Country offices must reach agreement with local donor representatives on methods of sampling, selecting sample sizes and interpreting data.

1. Unrestricted Random Sampling

This method assumes that each site has an equal chance of being part of the sample selected. Make a list of all project sites, perhaps in alphabetical order, and give every project site a number. Once the total number of sites is known, decide how many sites are required for the sample (see Selecting the Sample Size below). Use a table of random numbers to decide which site is selected first and the pattern for selecting sites thereafter. For instance, the table might tell you to start with Site #4 and select every 6th site after that until a sample of 20 sites has been selected.
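The selection can also be done with a short script instead of a printed table of random numbers. The sketch below is illustrative only; the size of the master list (120 sites) and the sample size of 20 are assumptions, not figures from this manual.

    # Unrestricted random sampling: every site has an equal chance of selection.
    import random

    sites = ["Site %03d" % n for n in range(1, 121)]   # master list of 120 approved sites
    SAMPLE_SIZE = 20

    random.seed(42)                               # fixed seed so the selection can be re-checked
    sample = random.sample(sites, SAMPLE_SIZE)    # selection without replacement
    print(sorted(sample))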

Random sampling is not always the most convenient method of choosing a sample. If there are a great many sites and the number of sites selected is small, the random method will almost always produce a sample scattered across many different regions and types of terrain. It may not be physically possible, given the number of monitors, vehicles and fuel available, to visit all the randomly selected sites in a prescribed time frame. For example, it is unrealistic to expect a monitor to witness distributions at two sites per day if s/he must travel hundreds of miles by motor bike or public transportation. Other types of sampling, such as stratified random or systematic sampling, may be more appropriate.

2. Stratified Random Sampling

This method of sampling is sometimes used if there are wide variations in site performance within a certain geographic location or type of distribution site (e.g., health centers or schools). All the sites are grouped into segments, each having some uniform, easily identifiable characteristics. Each segment is sampled separately using unrestricted random sampling methods. For instance, there might be a sample taken of all the school distribution sites and another sample taken of all the health centers. Within the segment, each site must have the same probability of being selected as any other site. At the end of the examination of each segment, the results from all segments are jointly evaluated.
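A minimal sketch of the same idea in code is shown below. The strata (schools and health centers) and the per-stratum sample sizes are assumptions used only for illustration.

    # Stratified random sampling: group sites by an identifiable characteristic,
    # then draw an unrestricted random sample within each group.
    import random

    strata = {
        "schools":        ["School %d" % n for n in range(1, 41)],
        "health centers": ["Health Center %d" % n for n in range(1, 26)],
    }
    sample_sizes = {"schools": 6, "health centers": 5}

    random.seed(7)
    for name, group in strata.items():
        chosen = random.sample(group, sample_sizes[name])  # equal chance within the stratum
        print(name, sorted(chosen))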

3. Systematic Sampling

In systematic sampling, the selection plan is established by selecting a random start and setting a sampling interval that would result in choosing a previously specified sample size. For example, the third site on the list may be the first site monitored and thereafter every tenth site will be included in the sample.
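The example below sketches this procedure; the list of 100 sites, the sample size of 10 and the resulting interval of 10 are assumptions chosen for illustration.

    # Systematic sampling: pick a random start, then take every k-th site
    # until the previously specified sample size is reached.
    import random

    sites = ["Site %03d" % n for n in range(1, 101)]   # 100 sites on the master list
    SAMPLE_SIZE = 10
    interval = len(sites) // SAMPLE_SIZE                # sampling interval k = 10

    random.seed(3)
    start = random.randrange(interval)                  # random start within the first interval
    sample = sites[start::interval][:SAMPLE_SIZE]
    print(sample)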

B. Interpreting Statistical Data

1. Precision

Project management must draw conclusions from the results of the sample. Because the sample may not show the true characteristics of the entire population of sites, a certain risk is involved in all sampling. Precision expresses how much variation to expect in an estimate as a result of this risk under given conditions, e.g., ± 2%.

2. Margin of Error

There are two types of error: sampling and non-sampling error. Non-sampling errors include listing errors and omissions, response and measurement errors, and coding and data entry errors. Sampling error refers to error that is attributable to the fact that estimates are made from a sample rather than from an examination of the entire universe.

3. Confidence Level

This has to do with the percentage chance of drawing a correct conclusion from the sample. For example, a 95% confidence level means that there is a 95% chance that the true value of whatever is being measured lies within the specified precision. In other words, there is a 5% chance that the true value for the population does not lie within the specified precision. Usually a larger sample size allows a higher confidence level for the same precision.
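The relationship between sample size, precision and confidence level can be illustrated with a short calculation. The sketch below uses the standard normal approximation for a proportion; the observed proportion of 85% and the sample of 30 sites are assumptions, not CARE or donor figures.

    # Margin of error for an estimated proportion at a given confidence level.
    import math

    z = 1.96        # z-score for a 95% confidence level (about 1.28 for 80%)
    p = 0.85        # proportion of sampled sites found operating acceptably
    n = 30          # number of sites in the monitoring sample

    margin = z * math.sqrt(p * (1 - p) / n)
    print("Estimate: %.0f%% +/- %.1f%% at 95%% confidence" % (p * 100, margin * 100))
    # With 30 sites the estimate is roughly 85% +/- 12.8%; a larger sample narrows it.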

C. Selecting the Sample Size

There are a number of factors to consider when determining an adequate sample size.

First is a determination of the number of variables or factors which are expected to have a significant influence on systems management. Variables may include:

· Available staff and support infrastructure (health posts vs. health centers)
· Accessibility of site to supervision and supplies (urban vs. rural)
· Type of institution (private vs. public, MCH vs. school feeding, community based or government)
· Size of catchment area, i.e., geographical area and population served by the site
· Amount of food and other resources being used in a project
· Estimated amount of loss or current inventory in sites.

The actual number of sample sites to select will depend on what is being measured.

1. Estimating Values

If information on the actual amount of loss or inventory is required, sample sizes may be developed using the table below. Determination of this sample size is based on the general rule that the sample size must be large enough to allow for representation of each value to be estimated.

Sampling Guidance

Number of sites in the project     Number of sites in the sample
Up to 10                           Each site
11 - 100                           10 drawn at random
More than 100                      The square root (approximately) of the total number of
                                   sites, drawn at random according to a suitable scheme

Adapted from Table 10, Food Storage Manual, World Food Programme, 1992. Note that the sampling fraction varies with the population. For example, if there are 10 sites, all ten sites or 100% of the sites should be monitored. For 25 sites, 5 sites or 20% of the sites should be monitored. For 100, 10% of the sites, and so on.
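The rule in the table can be written as a small helper function. This is only a sketch of the guidance above; the example site counts are arbitrary.

    # Suggested sample size following the sampling guidance table.
    import math

    def suggested_sample_size(total_sites):
        if total_sites <= 10:
            return total_sites                 # monitor every site
        if total_sites <= 100:
            return 10                          # 10 sites drawn at random
        return round(math.sqrt(total_sites))   # approximately the square root

    for total in (8, 50, 144, 400):
        print(total, "sites ->", suggested_sample_size(total), "in the sample")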

2. Attributes Sampling

Attributes sampling is a method used to estimate the proportion of specific attributes in a population. This proportion is called the occurrence rate and is the ratio of the attributes to the total number of the population. For example, country offices may be interested in knowing the percentage of centers complying with reporting requirements. Attributes samples vary only slightly with population size. For example, the sample size for a population of 500 is almost the same as the sample size for a population of 2000.

This distinction is important because it may determine just how large a sample size must be drawn. If there are specific needs to look at, such as the actual size of a loss or the amount of damaged food shipped to centers, the total number of centers must be taken into account. On the other hand, for attributes sampling, a smaller sample size can provide managers with sufficient information to make informed decisions about how well distribution sites are complying with reporting requirements.
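The point that attributes samples vary only slightly with population size can be checked with the usual sample-size formula for a proportion. The confidence level (95%), expected occurrence rate (50%) and desired precision (± 10%) below are assumptions for illustration, not donor requirements.

    # Sample size for attributes (proportion) sampling with a finite population correction.
    import math

    def attributes_sample_size(population, z=1.96, p=0.5, error=0.10):
        n0 = (z ** 2) * p * (1 - p) / error ** 2   # size needed for a very large population
        n = n0 / (1 + (n0 - 1) / population)       # finite population correction
        return math.ceil(n)

    for population in (500, 2000):
        print(population, "centers ->", attributes_sample_size(population), "in the sample")
    # 500 centers -> 81 and 2000 centers -> 92: the required samples differ only slightly.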

D. Cost Effectiveness

Early in the development of monitoring systems, country offices must consider practical questions about the cost of monitoring activities, including staff time, travel and staff support. Consideration must be given to:

· Salaries and other personnel costs - program management, field staff, clerical and consultants
· Travel
· Office rent in the field
· Vehicle purchases and maintenance
· Supplies and equipment
· Administration - printing, postage, telephone
· Other costs - overhead.

Country offices must assure themselves that sample sizes are not larger than they can afford. If country offices do not have adequate personnel and resources to monitor the sample size required to ensure a 95% confidence level, a lower confidence level, such as 80%, may have to be set. In these cases, country offices should inform regional managers and reach agreement with local donor representatives to assure that donor requirements on monitoring and sampling are satisfied.

III. Using Field Monitors

Monitors’ recommendations may be the best method of determining the exact causes of problems and the steps needed to overcome site-level difficulties.

A. Role of Field Monitors

Monitors must check compliance with CARE and donor program requirements and accountability standards. To maintain objectivity, monitors should not be the same people responsible for management or supervision.

Field monitors must be trained in the following areas:

· Principles of internal control
· Basic food inventory accounting
· How to do physical counts of inventory in stock, proper warehouse and storage practices
· Monitoring dispatch/distribution systems, and reviewing beneficiary records
· How to detect the possibility of fraud and theft
· How and when to fill in basic food control forms
· Crowd control guidance
· Sampling of sites for inspection
· Sampling of food packages to assess quality
· Sampling of documentation for review
· Observation of actual distribution of food, such as scooping procedures and measures.

Before field visits, monitors should review site reports, information on food dispatches, and previous monitoring reports. Where practicable, monitors should take previous monitoring reports with them when they visit sites.

B. Monitoring the Monitors

The performance of monitoring staff should also be examined by the project management. If one monitor or group of monitors under a particular supervisor continually submits reports that are inconsistent with the other sites’ performances, there may be a problem with training of the monitors or collusion. For example, if one monitor reports 100% of sample sites had monitoring reports that were 99% accurate, but all other monitors reported accuracy percentages of 75% - 80%, the problem may be with the monitor and should be investigated.

Some ways to prevent distortion in site reporting include:

· Provide monitors with standardized formats.

· Establish a schedule of surprise visits by project managers or others.

· Ensure that no one monitors the same center consecutively. Check the names of those who performed the last two monitoring visits and the results of those visits.

· Advise monitors as late as possible about the sites on their visitation schedule.

Monitors must be closely supervised and a sample of their reports periodically re-validated by supervisory personnel. Project managers should assess monitoring programs and their coverage on a regular basis. Special attention should be paid to each monitor’s findings and recommendations for distribution sites and the steps site personnel have taken to address problems. Programs may consider developing a spreadsheet or large wall chart with the name of each site, problems identified, and actions taken to correct problems, with dates.
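A spreadsheet of this kind can be kept very simply; the sketch below writes a follow-up log as a CSV file. The site names, problems and dates are hypothetical, and the column layout is only a suggestion.

    # Minimal follow-up log: one row per site with problems identified and actions taken.
    import csv

    follow_up = [
        {"site": "Site 014", "problem identified": "ledger not up to date",
         "date identified": "1998-03-02", "action taken": "refresher training",
         "date of action": "1998-03-20"},
        {"site": "Site 027", "problem identified": "torn sacks in storage",
         "date identified": "1998-03-05", "action taken": "sacks repaired and re-stacked",
         "date of action": "1998-03-12"},
    ]

    with open("site_follow_up.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(follow_up[0].keys()))
        writer.writeheader()
        writer.writerows(follow_up)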

IV. Information To Be Collected

A. Developing a Data Collection Plan

The plan should be designed in the field and reflect the cultural differences, program objectives and operating conditions that have an impact on local management “realities.”

1. Determine Objectives

This has to do with how the information is to be used and by whom. Data has no intrinsic value unless it can be used to achieve some end. Do not collect data without specifying the action system it will serve. If people keep reporting information and never see any results, they will begin to lose trust.

2. Determine the Data to be Collected and the Format

The important test questions are:

· How is the data to be used?
· When is it needed?
· What level of detail is needed?
· What format is most useful for presentation?

Monitors should have a standardized CARE format to capture all necessary information. Suggestions are provided below. The information should be mostly objective and easily quantifiable, such as physical counts, document verification, the absence or presence of storage and distribution materials. Monitoring staff should not be required to perform complex calculations, since errors could lead to information distortions.

Some subjective observations and recommendations are an important link for project management to field conditions and operations, and space for such should be provided on standard formats. However, subjective data should be limited to the degree possible.

Examples of Subjective and Objective Questions

Subjective: Is the storage area clean?
Objective:
· Is there visible rodent excrement on the floor or bags?
· Are there flying insects, or insects outside or inside of bags?
· Are there damaged or torn sacks?
· Is there evidence of garbage?

Subjective: Is the food properly stacked?
Objective:
· Are pallets used?
· Are the stacks interlaced or bonded?
· Is there distance between the stacks and walls and other stacks?

3. Select the Sample Sites

· Map the location and number of the sites
· Set up the monitors’ schedule.

4. Collect the Data

· A representative sample of the waybills should be checked against inventory ledgers showing receipts and dispatches. The current balance shown in the inventory ledgers should be validated by a physical count of food in the warehouse.

· The quantities of food actually received (as counted/weighed by the receiving site) should be compared to the quantities on the waybills and discrepancies noted.

· Food removed from inventory as “unfit for human consumption” or “stolen” must be validated by examination of the loss reports and documents showing destruction of food. If any of the documents are considered to be suspect, the monitor must contact the issuing authorities to verify the documents.

Field monitors must be able to trace all transactions of food movement from primary and secondary warehouses to the beneficiaries, and validate documented information on distributions, inventory, accounting, and the identity and eligibility of recipients. Monitors must periodically witness actual food distributions for propriety, actual ration size distributed and inspection of storage areas.

5. Summarize the Data

Determine the percentage of sites that are operating acceptably and the percentage operating unacceptably, according to the sample.

6. Look for Relationships and Differences

Compare the results of the sample with the data from regular site reporting.
Look for discrepancies.

B. Suggested Information to Collect

1. General Information

· Project name and number
· Type of program: (such as school feeding, MCH, FFW, general distribution)
· Date of visit
· Site address and/or code
· Name of institution
· Province, district or community
· Name of person(s) in charge
· Name of person(s) authorized to receive food
· Date of site agreement
· Date center opened
· Date of last monitoring visit
· Is this a (circle one): regular visit/follow-up visit
· Was a distribution observed?

2. Project Participants

a. Beneficiary Records

· Number of participants registered to receive food

· Number of participants listed as having received food for a sample of five days since the last visit

· Difference between the two. (In reality there will always be differences between registered numbers and numbers actually receiving food.)

· If great differences are found, expand the sample to 20 days.

b. Beneficiary Interviews

Whenever possible, monitors should select a sample of beneficiaries to interview. The following are some questions that monitors can ask:

· Their names (Confirm that the names actually match those of approved beneficiaries.)
· Whether they visited the project site during the last distribution
· Whether they received food during the distribution period
· Whether the food they received was the usual amount
· Whether they have an individual ration card or other card
· Whether ration card is filled in.

c. Interviews with Distribution Staff

· Is there up-to-date information from CARE or counterparts on distribution schedules and ration sizes? If so, look at documentation.

· Does food arrive on a timely basis? Are there missed or late deliveries? Look at documentation.

· Have days for food distributions been canceled?

· Amount of food missed or late

· Are distributions reaching approved beneficiary levels?

d. Ration Sizes

The following questions can be included on a monitoring form.

Food Information (record a separate column for each commodity, e.g., beans, peas, wheat)

a. The quantity of food distributed on the day of the inspection
b. The quantity of food that the site is authorized to distribute
c. The difference between a. and b.
d. Percentage of difference between a. and b.
e. The number of people at the distribution site on the day of the visit
f. Average ration size per beneficiary (line a. divided by line e.)
g. The approved ration size per individual
h. Percentage of difference between f. and g.
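The arithmetic behind lines c., d., f. and h. is shown in the sketch below for a single commodity. All quantities are hypothetical and assumed to be in kilograms.

    # Worked example of the ration-size calculations on the monitoring form.
    distributed = 450.0      # a. quantity distributed on the day of the inspection (kg)
    authorized = 500.0       # b. quantity the site is authorized to distribute (kg)
    people_present = 200     # e. people at the distribution site on the day of the visit
    approved_ration = 2.5    # g. approved ration size per individual (kg)

    difference = distributed - authorized                     # c.
    pct_difference = difference / authorized * 100            # d.
    average_ration = distributed / people_present             # f.
    ration_pct_diff = (average_ration - approved_ration) / approved_ration * 100   # h.

    print("c = %.0f kg, d = %.1f%%" % (difference, pct_difference))
    print("f = %.2f kg per person, h = %.1f%%" % (average_ration, ration_pct_diff))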




3. Center Management

a. Ledger Review

Identify distribution days from the records and confirm through interviews with beneficiaries that feeding actually occurred.

b. Inventory Records


Commodity Information (record a separate column for each commodity, e.g., beans, peas, wheat)

a. Balance from last inspection physical count
b. Quantity of food delivered to the center since the last inspection (from waybills received since last inspection)
c. Total amount of losses, itemized by type of loss (for example: stolen, wet, infested)
d. Total amount of food available for distribution (lines a + b - c)
e. Total amount of food distributed since the last inspection according to site records
f. Total quantity remaining (lines d - e)
g. Physical inventory to the nearest quarter bag or nearest half can
h. Inventory per center records
i. Difference between physical inventory and center's records (lines f - g)




Field monitors should get the signature of the person responsible for the distribution site indicating agreement with the physical inventory.
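The reconciliation in lines a. to i. above can be illustrated with a short calculation for one commodity. The figures below are hypothetical and assumed to be in kilograms.

    # Worked example of the inventory reconciliation on the monitoring form.
    opening_balance = 1200.0   # a. balance from last inspection physical count
    received = 2000.0          # b. delivered since last inspection (from waybills)
    losses = 150.0             # c. total losses (stolen, wet, infested)
    distributed = 2600.0       # e. distributed since last inspection per site records
    physical_count = 430.0     # g. physical inventory counted by the monitor
    ledger_balance = 450.0     # h. inventory per center records

    available = opening_balance + received - losses   # d. food available for distribution
    remaining = available - distributed               # f. quantity the records say should remain
    discrepancy = remaining - physical_count          # i. records vs. physical count (lines f - g)

    print("d = %.0f kg, f = %.0f kg, i = %.0f kg" % (available, remaining, discrepancy))
    # The ledger's own stated balance (line h.) can be compared with f. and g. in the same way.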

c. Center Documentation

· Are copies of all receiving waybills on file and accessible for inspection?
· Are all copies of the site’s monthly reports on file and accessible for inspection?
· Is the site ledger up-to-date?

d. Storage Site

· Is there ventilation?

· Are the roof, walls and doors structurally sound?

· Is the food stored in a secure area with restricted access?

· Is there a key to the warehouse?

· Are the persons responsible for authorizing and recording dispatches different from the person holding keys to the warehouse?

· Is the area free from visible rodent excrement on the floor or bags?

· Is the area free from insects (flying, inside or outside bags)?

· Are all sacks in sound condition (not damaged or torn)?

· Is the area free from garbage?

· Are pallets used?

· Are the stacks interlaced or bonded?

· Is there sufficient space between the stacks and walls and other stacks?

e. Sale/Disposition of Containers

· If sites are selling or giving away empty bags or containers, are they following CARE procedures?

· If bags or containers are being sold, how much money is being collected, how is it being recorded, what is being done with the money, are reports going to CARE?

V. Monitoring Reports

Results from monitoring visits should be summarized in a standardized monitoring report. The reports should be kept on file and made available to counterparts and donors as required.

A. Information in Reports

1. General Site Information

· Project name and number
· The names and/or identification number of all sites that were visited
· Location of sites monitored
· Date of previous site visit
· Average interval between monitoring visits

2. Project Participants

· The difference between approved number of beneficiaries and actual attendance, according to distribution site records

· All centers that had at least one participant/beneficiary on the attendance record who could not be verified or who responded to interviews in such a way that the validity of the center’s attendance records is in doubt

· If distributions were taking place during visits, the percentage difference between the approved number of beneficiaries and the number counted by the monitor. Reasons for any differences should also be noted.

3. Distribution Site Management

· All sites by percentage difference in inventory balances (ledger balances minus physical counts)

· All sites that had at least one deficiency in the storage area

· All sites by percentage difference between the actual ration distributed and the approved ration for distribution. (This can depend on whether the site takes attendance, how the food is distributed, and the amount of food on hand to distribute.)

· All sites that could not distribute food because of late or missed deliveries, by location

4. Recommendations

Recommendations may include increasing or decreasing the amount of food provided to each site, a review of past problem areas, progress on implementing previous recommendations, and any recommendations for imposing sanctions.

B. Scoring and Follow-up

To assist in the management of distribution sites and determine which sites are operating more effectively than others, country offices should establish standards of operating acceptability with counterparts. A rating system can be adopted which summarizes the performance of the center, based on the monitor's examination and the adequacy of the site's reports. For example, criteria could be established for five categories, ranging from a "very good" rating to "very inadequate".

· Very Good = (letter of congratulations from CARE)
· Good = (letter of congratulations from CARE)
· Adequate = (additional training)
· Inadequate = (training, warning and follow-up visit)
· Very Inadequate = (suspension or de-selection)

While random sampling of sites will still be required to monitor activities at distribution sites, establishing a rating system such as the one above may help country offices target resources more efficiently to sites with problems or make decisions to terminate activities.
