
APPLICATION OF EPIDEMIOLOGY IN EVALUATION OF HEALTH SERVICES

By: Dr. Kavita Yadav

MPH 1st yr

Moderator: Dr. Kavitha HS

JSSMC, Mysore

FRAMEWORK
Introduction to epidemiology
Uses of epidemiology
Definition and need of evaluation
Steps in evaluation
Application of epidemiological studies in the evaluation of health services
Various study designs
Merits and demerits
Conclusion
References

EPIDEMIOLOGY
The study of the distribution and determinants of health-related states or events in specified populations, and the application of this study to the control of health problems.

EPIDEMIOLOGICAL METHODS
Observational studies
 Descriptive studies: ecological, cross-sectional
 Analytical studies: case-control, cohort
Experimental studies: RCTs, field trials, community trials

USES OF EPIDEMIOLOGY
To study historically the rise and fall of disease in a population
Community diagnosis
Planning and evaluation
Evaluation of the individual's risks and chances
Syndrome identification
Completing the natural history of disease
Searching for causes and risk factors

EVALUATION OF HEALTH SERVICES
A systematic process to assess the achievement of the stated objectives of a programme, its adequacy, its efficiency, and its acceptance by all parties involved.

MONITORING
A planned, systematic process of observation that closely follows a course of activities and compares what is happening with what was expected to happen.

MONITORING VS EVALUATION

Monitoring:
1) Determines programme efficiency
2) Establishes standards of performance at the activity level
3) Alerts management to discrepancies
4) Identifies strong and weak points of programme operation

Evaluation:
1) Determines programme effectiveness
2) Identifies inconsistencies between programme objectives and activities
3) Suggests changes in programme procedures and objectives
4) Identifies possible side effects of the programme

NEED FOR EVALUATION OF HEALTH SERVICES
To review the implementation of services provided by health programmes, so as to identify problems and recommend necessary revisions of the programme.
To assess progress towards the desired health status at national or state level and identify the reasons for any gap.

CONTD.
To contribute towards better health planning.
To document the results achieved by a project funded by donor agencies.
To know whether the desired health outcomes are being achieved and to identify remedial measures.
To improve health programmes and the health infrastructure.
To guide the allocation of resources to current and future programmes.
To render health activities more relevant, more efficient and more effective.

WHO DOES EVALUATION
Policy makers (those responsible for programme development and implementation)
Ad hoc research groups
The community (students, NGOs)

TYPES OF EVALUATION

STEPS IN EVALUATION
1) Determine what is to be evaluated
2) Establish standards and criteria
3) Plan the methodology
4) Gather information
5) Analyse the results
6) Take action
7) Re-evaluate

1) WHAT IS TO BE EVALUATED

2) ESTABLISH STANDARDS & CRITERIA
Structural criteria: physical facilities and equipment (cost-benefit, cost-effectiveness).
Process criteria: for example, every pregnant woman must receive 6 antenatal check-ups.
Outcome criteria: alterations in patient health status or behaviour resulting from health care.

INDICATORS: WHAT ARE THEY?
An indicator is a standardized, objective measure that allows:
A comparison among health facilities
A comparison among countries
A comparison between different time periods
A measure of the progress toward achieving programme goals

CHARACTERISTICS OF INDICATORS
Valid: should actually measure what they are supposed to measure.
Reliable: the answer should be the same if measured by different people under the same conditions.
Sensitive: responsive to changes in the situation.
Specific: reflect changes only in the situation concerned.
Feasible: the data needed can be obtained.
Relevant: contribute to understanding the phenomenon of interest.

COMPONENTS OF EVALUATION
Relevance
Adequacy
Accessibility
Acceptability
Effectiveness
Efficiency
Impact

Efficacy: a measure of the effect of an agent in a situation in which all conditions are controlled to maximize that effect.
Effectiveness: if we administer the agent in a "real-life" situation, is it effective? (cost-effectiveness)
Efficiency: is it possible to achieve our goals in a cheaper and better way? Cost includes not only money but also discomfort, pain, absenteeism, disability and social stigma. (cost-benefit ratio)
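The distinction between cost-effectiveness and cost-benefit can be made concrete with a small worked sketch. The figures, modality names and outcome counts below are purely illustrative assumptions, not programme data.

```python
# Minimal sketch with hypothetical figures: comparing two modalities of care
# on cost-effectiveness (cost per case averted) and a simple benefit-cost ratio.
# All numbers are illustrative assumptions.

def cost_per_case_averted(total_cost, cases_averted):
    """Cost-effectiveness expressed as cost per case averted (lower is better)."""
    return total_cost / cases_averted

def benefit_cost_ratio(monetary_benefit, total_cost):
    """Ratio of monetised benefits to costs (>1 suggests net benefit)."""
    return monetary_benefit / total_cost

# Hypothetical comparison: domiciliary vs. hospital-based treatment.
modalities = {
    "domiciliary": {"cost": 200_000.0, "cases_averted": 400, "benefit": 900_000.0},
    "hospital":    {"cost": 650_000.0, "cases_averted": 430, "benefit": 950_000.0},
}

for name, m in modalities.items():
    cea = cost_per_case_averted(m["cost"], m["cases_averted"])
    bcr = benefit_cost_ratio(m["benefit"], m["cost"])
    print(f"{name}: cost per case averted = {cea:.0f}, benefit-cost ratio = {bcr:.2f}")
```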

3) PLANNING THE METHODOLOGY
The purpose of the evaluation and the standards and criteria to be used must be included.

4) GATHERING THE INFORMATION
The data required may include political, cultural, economic, environmental and administrative factors influencing the health situation, as well as mortality and morbidity statistics.
The focus is on the health service as the independent variable, with a reduction in adverse health effects as the anticipated outcome (dependent variable) if the modality of care is effective.

EPIDEMIOLOGICAL EVALUATION

VARIOUS STUDY DESIGNS
Randomized design
Non-randomized designs
Before-after design
Simultaneous non-randomized design
Combination designs
Case-control studies

RANDOMIZED DESIGN
Eliminates the problem of selection bias.
For ethical and practical reasons, randomizing patients to receive no care is not considered; instead, different types of care are assigned at random and then evaluated.
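A minimal sketch of such an allocation is shown below; the arm names, participant IDs and seed are illustrative assumptions, not part of any actual trial protocol.

```python
import random

# Minimal sketch: randomly allocate participants to one of two modalities of
# care (no "no-care" arm, per the slide). Arm names, IDs and seed are
# illustrative assumptions.
random.seed(42)  # fixed seed so the allocation list is reproducible

participants = [f"P{i:03d}" for i in range(1, 21)]
random.shuffle(participants)

half = len(participants) // 2
allocation = {pid: "domiciliary_care" for pid in participants[:half]}
allocation.update({pid: "hospital_care" for pid in participants[half:]})

for pid in sorted(allocation):
    print(pid, allocation[pid])
```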

EXAMPLES
The chemotherapy of tuberculosis trial in India, which demonstrated that domiciliary treatment of pulmonary TB was as effective as the costlier hospital or sanatorium treatment. The results gained international acceptance and ushered in a new era.
The evaluation of multiphasic screening in South East London led to the withholding of the vast outlay of resources that would have been required to mount a national programme.

DEMERITS OF RANDOMIZED DESIGN
RCTs are logistically complex and extremely expensive.
Ethical problems.
Long time to completion, so relevance may be questionable.
Alternative approach: outcome research.

OUTCOME RESEARCH
Denotes studies comparing the effects of two or more health care interventions or modalities, such as treatments, forms of health care organization, or the type and extent of insurance coverage and provider reimbursement, on health or economic outcomes.
Uses data from large data sets derived from large populations.

ADVANTAGES
Refers to real-world populations, so the issue of representativeness or generalizability is minimized.
As the data already exist, analysis can be completed and results generated rapidly.
Sample size is not a problem, except when smaller sub-groups are examined.
Cost-effective.

DISADVANTAGES
Data gathered for fiscal and administrative purposes may not suit the research questions addressed in the study.
New questions (as knowledge is more complete now) would not have been framed when the data were collected.
Data on the independent and dependent variables may be limited.
Data relating to possible confounders may be inadequate or absent.
Certain variables that are relevant today were not included in the original data set.

CONTD.
The investigator may create surrogate variables or may change the original question that he wanted to address.
The investigator becomes progressively more removed from the individuals being studied.

NON-RANDOMIZED DESIGNS
Before-after design
Simultaneous non-randomized design
 a) Comparison of utilizers and non-utilizers
 b) Comparison of eligible and non-eligible populations
Combination designs
Case-control studies

BEFORE-AFTER DESIGN
Data obtained in the two periods may not be comparable in terms of quality and completeness.
Is the difference due to the programme, or to other factors that changed over time, such as housing, nutrition and lifestyle?
The problem of selection exists.
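The core calculation is simply a comparison of rates in the two periods, as in the sketch below; all counts are hypothetical assumptions, and the caveat above applies to the interpretation.

```python
# Minimal sketch with hypothetical counts: before-after comparison of an
# adverse-outcome rate in the same population. All counts are illustrative.

before_events, before_population = 90, 10_000   # period before the programme
after_events, after_population = 60, 10_000     # period after the programme

rate_before = before_events / before_population * 1_000  # per 1,000
rate_after = after_events / after_population * 1_000

print(f"Rate before: {rate_before:.1f} per 1,000")
print(f"Rate after:  {rate_after:.1f} per 1,000")
print(f"Change:      {rate_after - rate_before:+.1f} per 1,000")
# The change may be due to the programme, or to other factors (housing,
# nutrition, lifestyle) that also changed between the two periods.
```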

SIMULTANEOUS NON-RANDOMIZED DESIGN (PROGRAMME vs. NO PROGRAMME)
Essentially a cohort study in which the type of health care being studied represents the "exposure".
The problem is how to select the exposed and non-exposed groups for the study.
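Treating the programme as the exposure, such a comparison can be summarized as a cohort-style risk ratio, sketched below with hypothetical counts; the counts and the interpretive note are assumptions for illustration.

```python
# Minimal sketch with hypothetical counts: "received the programme" is the
# exposure, and the risk of an adverse outcome is compared across groups.

def risk(events, total):
    """Cumulative incidence of the adverse outcome in a group."""
    return events / total

programme_events, programme_total = 30, 1_000    # exposed (programme) group
comparison_events, comparison_total = 60, 1_000  # non-exposed group

risk_ratio = risk(programme_events, programme_total) / risk(comparison_events, comparison_total)
print(f"Risk ratio (programme vs. no programme) = {risk_ratio:.2f}")
# A ratio below 1 is consistent with a protective programme effect, but in a
# non-randomized design it may also reflect selection: differences in the
# groups' baseline prognostic profiles.
```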

COMPARISON OF UTILIZERS AND NON-UTILIZERS
Compares a group of people who use a health service with a group who do not.
The problem of self-selection exists: we cannot tell someone not to utilize the programme.
This problem can be addressed in part by characterizing the prognostic profile of the people in both groups.

COMPARISON OF ELIGIBLE AND NON-ELIGIBLE POPULATIONS
The assumption is that eligibility or non-eligibility is not related to either prognosis or outcome, so no selection bias is introduced.
Eligibility may be defined, for example, by employer or by census tract of residence, which may relate to socioeconomic status.

COMBINATION DESIGNS
All of the above designs that compare morbidity levels in people who receive care and people who do not assume that the original levels of morbidity in the two groups at time T1 were comparable before the care was provided.
A combination design combines both approaches: before and after, programme and no programme.

CASE-CONTROL STUDIES
The case-control design has been applied primarily to etiologic studies; when appropriate data are obtainable, it can serve as a useful, but limited, surrogate for randomized trials.
Because this design requires the definition and specification of cases, it is most applicable to studies of the prevention of specific diseases. The "exposure" is then the specific preventive or other health measure being assessed.
As in most health services research, stratification by disease severity and by other possible prognostic factors is essential for appropriate interpretation of the findings.
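The measure of association such a design yields is the odds ratio, sketched below with hypothetical counts; the counts, and the reading of the result, are purely illustrative assumptions.

```python
# Minimal sketch with hypothetical counts: case-control evaluation of a
# preventive health measure, where "exposure" = having received the measure.

# Exposure among cases (with the disease) and controls (without the disease).
cases_exposed, cases_unexposed = 40, 160
controls_exposed, controls_unexposed = 120, 280

# Odds ratio = (a*d) / (b*c) for the 2x2 table.
odds_ratio = (cases_exposed * controls_unexposed) / (cases_unexposed * controls_exposed)
print(f"Odds ratio = {odds_ratio:.2f}")
# An odds ratio below 1 suggests the measure is associated with lower odds of
# disease; stratification by severity and other prognostic factors is still
# needed before reading this as a programme effect.
```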

5) ANALYSIS OF RESULTS
Should take place within the shortest time feasible.
The results should be discussed.

6) TAKING ACTION
For evaluation to be truly effective, the emphasis should be on actions: to support, strengthen or modify the services.
This may call for shifting priorities, revising objectives or developing new programmes to meet previously unidentified needs.

7) RE-EVALUATION
Aims at rendering health services more relevant, more efficient and more effective.

APPLICATION
The five-year RCH phase II programme was launched in 2005 with a vision of achieving the outcomes envisioned in the Millennium Development Goals, the National Population Policy 2000 (NPP 2000), the Tenth Plan, the National Health Policy 2002 and Vision 2020 India. It seeks to minimize regional variations in RCH and population stabilization through an integrated, focused, participatory programme that meets the unmet needs of the target population and provides assured, equitable, responsive, quality services.

THE ACTION PLAN FOR CARRYING OUT RCH SERVICES EVALUATION
Goal: "Health for All"
Objective: population stabilization by 2045
Programme: comprehensive RCH services
Monitoring & evaluation: RCH indicators / feedback data

ACCESSIBILITY INDICATORS
No. of eligible couples registered per ANM
No. of antenatal care sessions held as planned
% of sub-centres with no ANM
% of sub-centres with working ANC equipment
% of ANMs/TBAs without the requisite skills
% of sub-centres with an infant weighing machine
% of sub-centres with vaccine supplies
% of sub-centres with ORS packets
% of sub-centres with FP supplies

QUALITY INDICATORS
% of pregnancies registered before 12 weeks
% of ANC cases with 5 visits
% of ANC cases receiving all RCH services
% of high-risk cases referred
% of high-risk cases followed up
% of deliveries conducted by ANM/TBA
% of PNC cases with 3 PNC visits
% of PNC cases receiving all counselling
% of PNC complications referred
% of eligible couples offered FP choices
% of women screened for RTIs/STDs
% of eligible couples counselled on prevention of RTIs/STDs
% of ARI cases treated
% of children fully immunized

IMPACT INDICATORS
% of deaths from maternal causes
Maternal mortality ratio
Prevalence of maternal morbidity
% of low birth weight babies
Neonatal mortality rate
Prevalence of postnatal maternal morbidity
% of babies breastfed within 6 hours of delivery
Couple protection rate
Prevalence of terminal methods (sterilization)
Prevalence of spacing methods
% of abortion-related morbidity
Prevalence of RTIs/STDs
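For reference, the two headline indicators in this list are conventionally computed as follows; these are the standard definitions, stated here as a reminder rather than taken from the slides.

```latex
\[
\text{Maternal mortality ratio} =
  \frac{\text{maternal deaths in a given period}}{\text{live births in the same period}} \times 100{,}000
\]
\[
\text{Neonatal mortality rate} =
  \frac{\text{deaths of infants aged under 28 days in a given period}}{\text{live births in the same period}} \times 1{,}000
\]
```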

REFERENCES
Gordis L. Epidemiology. 4th ed. Philadelphia: Elsevier Saunders; 2009.
Park K. Park's Textbook of Preventive and Social Medicine. 22nd ed.
WHO/UNFPA. Programme Manager's Planning, Monitoring & Evaluation Toolkit. Division for Oversight Services; August 2004.
UNICEF. "A UNICEF Guide for Monitoring and Evaluation: Making a Difference?" Evaluation Office, New York; 1991.
CDC. "Framework for Program Evaluation in Public Health"; 1999. Available in English at http://www.cdc.gov/eval/over.htms

