
BOISE STATE UNIVERSITY

Defense Language Institute’s Persian-Farsi Headstart

Program Evaluation Report

Farnoush H. Davis

12/7/2010

EDTECH 505-4172

This is a formative evaluation of the DLI Persian-Farsi Headstart Program, undertaken to determine its effectiveness and usability in order to improve its value as a primary tool for providing quick instruction to military personnel who require basic language skills. It is a "virtual" evaluation in that it collects data from simulated DLI soldiers who completed the first task of the first module of the course. The findings show that the program is effective on most levels and provides a well-designed, user-focused process. However, while it utilizes modern technology as a delivery tool, the content and presentation remain mostly in the traditional format of language instruction.


Table of Contents

Learning Reflection

Executive Summary

Purpose of the Evaluation
    Main Purpose of the Evaluation
    Central Questions of the Evaluation
    Most Impacted Stakeholders of the Evaluation

Background Information
    Program Rationale
    Program Goals
    Previous Program
    People Involved in the Program
    Program Description

Description of Evaluation Design
    Evaluation Questions
    Data Sources
    Samples
    Data Analysis
    Audience

Results

Discussion of the Results

Conclusions and Recommendations
    Immediate Conclusions
    Long-Range Planning
    Evaluation Insights

References

Appendices
    Program Screen Captures
    Survey
    Test Evaluation


Learning Reflection

For me, the term "evaluation" always carried the general meaning of "test" or "exam" – the process of evaluating a student's learning. And at the beginning of this course (Evaluation for Educational Technologists), when it came to assessing a program, project, or product, I had the concept of "criticism" in mind: a way to analyze the weaknesses and drawbacks of something.

Throughout the course we had opportunities to learn about the concept of evaluation theoretically through the text, to understand and recognize evaluation in research, and to participate in practical activities and group projects that put our newly acquired knowledge into practice. Every activity I was assigned was like a small jigsaw puzzle piece that helped me complete the big picture of this Evaluation Report. In my new understanding, evaluation is a systematic process: it has a plan with set goals, methods to evaluate, tools to collect data, and a variety of evaluation types to meet different purposes. A test – what I used to think evaluation meant – is only one evaluation instrument, alongside surveys, interviews, and questionnaires.

Taking this course also helped me consider other job possibilities for my future career. As an educational technologist, knowing the education process from a teacher's perspective and the structure of instruction from a designer's point of view enables me to put all of this input into perspective and evaluate a program in order to actually learn more about it. Evaluating is not just a matter of accepting or rejecting a program, or criticizing it from a pessimistic, disapproving standpoint. Instead, evaluation can be an instructive instrument for problem-solving within a program. Observing a program as a whole is like looking at the surface of water; with evaluation, you can sink beneath that surface to discover the core of a problem.

Evaluating the Headstart program was indeed an informative experience and allowed me to: a) understand and explain the reasons for the existing shortcomings, b) reinforce its strengths, and c) impartially provide constructive recommendations to reduce the drawbacks and increase the productive outcomes. Since no program is flawless and one hundred percent productive, an effective and reliable evaluation must be undertaken to find ways to improve its results. I am confident that the recommendations provided will help the Headstart program progress and accomplish its goals and objectives based on the learners' needs.


Executive Summary

During the 1990s, the Defense Language Institute (DLI), Presidio of Monterey, California, began developing programs to improve students' probability of success by preparing them with introductory lessons taken before their language classes started. Students often have to wait from the time they arrive in Monterey – from a few weeks to a few months – before their classes begin. DLI began using this time for one-on-one or small-group tutoring by advanced students. This was the original Head Start.

After 9/11, there was a high demand for soldiers with language skills. DLI trains specialist linguists, mostly for intelligence jobs. Now, with two wars, the Defense Department needed soldiers in every field who had some language skill, even if just at a basic level. DLI began tailoring the programs it was developing for delivery to its students in a computerized format, to meet the needs of soldiers in every military branch and every job specialty. This new program, Headstart and Headstart2, would be used by deploying soldiers, and by those already in country, to quickly learn key phrases in the new languages.

This report is based on an evaluation to determine the usability, appropriateness of content, and level of independent use of the current version of Headstart. More specifically, the evaluation was designed to provide feedback and recommendations to the DLI Persian Language department. It involved the actual completion of the first module of the course by participants who had no prior Persian language skill, as well as some participants who had experienced older versions of DLI's Headstart programs. It also involved faculty from other departments to see if the applications and exercises presented the content in an appropriate manner.

The consensus was that, overall, this is a very good program for an introduction to the language. It was easy to navigate, led the student from one step to the next, and had good instructions in the tutorials on how to use the program. The students all progressed, even from the first lesson. But the evaluation was not about student learning – that would have required a much larger sample, and participants would have had to complete much more of the course. This was an evaluation of the program itself, with the goal of finding areas that could be improved in both the long and short term. The report concludes with short- and long-term recommendations based on the feedback provided by the focused survey given after the module evaluation. There are several areas where the current program can be improved immediately with changes to the design, and many long-term improvements that would provide more interactivity between the students and the avatars used in the program, creating a more "virtual" environment and less of a traditional classroom "memorization" format.


Purpose of the Evaluation

Main Purpose of the Evaluation

The main purpose of this evaluation is to determine whether the program presents the educational material in an effective way that enables the student, with little to no outside assistance, to learn basic communication skills in a foreign language. Specifically, this evaluation tried to determine whether this technology-oriented program effectively increases the learner's initial Persian language ability by itself, given that the target audience has no prior exposure to the program or the language. The expectation is that educational software designed for self-study by learners with little to no prior knowledge of the content should be user-friendly, to ease the learning process. This evaluation also determined whether there were technical issues that could be improved.

Central Questions of the Evaluation

This particular evaluation was not intended to assess student learning beyond an initial level, because the target of the evaluation is the program itself. The focus of the evaluation was whether the program included all of the aspects of effective instruction and whether it considered the needs and learning styles of the students for a better outcome.

The evaluation was centered on questions based on the purpose and objectives of the evaluation:

1. Is the program user-friendly and easy to work with?
2. Is all the information suitable for a learner with no prior Persian language knowledge?
3. Are all aspects of self-study considered (i.e., is it teacher-proof)?

Most Impacted Stakeholders of the Evaluation

This evaluation involves two groups of primary and secondary stakeholders:

Primary stakeholders:

• The learners, who are US military service members. They are the participants, and their success will also show the success of the program.

• The team of instructional designers, who are responsible for designing the course, developing the evaluation plans, and performing the formative evaluation.

Secondary stakeholders:

• The faculty of the Curriculum Development department, the designers of the program for all language departments.

• The Persian Language department faculty and educational supervisory faculty. They are accountable for content effectiveness and student achievement in accordance with institutional requirements, and will monitor all results.


Background Information

Program Rationale

In the military, when you enlist you are sent to basic training. After completing this training, you are sent to training for the specific job you are assigned. Every week of the year, many soldiers are sent to different locations for training in positions such as cook, firefighter, mechanic, infantry, or linguist. The Defense Language Institute (DLI), Monterey, California, receives new soldiers every week. Language classes do not start until they have the required number of students, between 40 and 60. Some high-demand languages, such as Arabic, Persian, Korean, or Chinese, fill up very fast, while others, such as Uzbek, may take longer. As a result, the wait between a soldier's arrival and the start of class may range from one week to a couple of months. In the meantime, the soldiers are assigned various work around base. However, given the costs of housing and supporting these soldiers, DLI designed various head-start programs to give students exposure to their new languages and to help them prepare to be successful in their classes while waiting.

The first head-start programs were conducted in a physical classroom with a teacher (usually an advanced student), books, and paper and pencil. The teachers or tutors had typically been in class for six months to a year. They introduced the new soldiers to the alphabet, numbers, and simple phrases. Although this was more beneficial for the new students than simply waiting for their classes to start, it was not enough to make them successful in their classes; many of the hardest languages had only a 60% graduation rate. Additionally, with the increased need for soldiers with language abilities in the countries involved in America's new wars, a method was required to give soldiers a quick and effective tool for learning the basic phrases they would need when deployed. Hence, DLI decided to create a new, computer-based Headstart program in which students would progress from the very basic level of the language to the lower intermediate level with very little assistance, if any.

Program Goals

The Defense Language Institute initially developed the "Headstart" program to help deploying troops gain skills in the Arabic, Pashto and Dari languages spoken in Iraq and Afghanistan. With conflicts ongoing in these two nations, there is a need for at least some soldiers to have knowledge of the languages spoken there.

The Defense Language Institute's Headstart program is one path that can help Soldiers develop language skills. Headstart is a computer-based, self-directed language learning program aimed at military members getting ready to deploy.

The self-guided program takes 80 to 100 hours to complete. After completing the course, soldiers should be able to hit the ground in a new country with enough language skills to conduct business and have limited communication with civilians in the local language, according to the DLI commandant.


Previous Program

The computerized version of DLI Headstart was first developed for soldiers deploying to Iraq and Afghanistan to learn basic phrases to help them in their jobs. It initially began with three languages – Iraqi, Dari, and Pashto – and it was not just for DLI students but was available to any military members who needed quick language instruction. Over time, the program expanded both in content and in number of languages. There are now Headstart programs in 11 languages: Chinese (Mandarin), Dari, French, Iraqi, Korean, Pashto, Persian-Farsi, Russian, Spanish, Urdu, and Uzbek. They are all downloadable programs, available online and on DVD, which are the formats used to deliver the program to different military stations.

People Involved in the Program

Each program shares some of the same stakeholders, along with some specific to each language. As each Headstart program is the same in design, subjects covered, and length, the DLI Command Faculty, as well as the faculty of the Curriculum Development department, has an interest in the success of all of the programs, regardless of language. Each language department has a stake in the success of its particular Headstart program. Because languages differ in writing systems, alphabet structures, and grammatical rules, some of the programs are more difficult than others, so each department is concerned with the progress its students will make using Headstart, as it varies with difficulty level. In this evaluation, the Persian department, its faculty, and the students are all stakeholders.

Program Description

Headstart2 consists of ten modules, each including two Sound and Script lessons and five Military Tasks. Sound and Script teaches the basics of the target language's writing system. Each Military Task focuses on fifteen language drills based on a given topic or theme, such as greetings and introductions or gathering intelligence. Headstart2 also features over 100 PDFs with writing drills that give the user the opportunity to practice writing the target script. Other features include a writing tool, a sound recorder, a glossary, and a cultural resources section. Headstart2 exposes users to 750 key terms and phrases and provides the important communication tools they need to prepare for deployment.

The Persian-Farsi Headstart program was developed by the Defense Language Institute Foreign Language Center. The program is divided into two categories – Persian-Farsi Sound and Script and Military Tasks – each consisting of ten modules, with each module containing two tasks. Sound and Script introduces the Persian-Farsi writing system, while Military Tasks introduce important words and phrases related to military operations.

The program is delivered in Flash. It is a large file and requires the user to complete a registration. The home page has five boxes in a three-column layout, containing – starting from the top left – the "last visited" and "language tools" pods, then the "welcome" pod in the middle with a brief introduction of the program accompanied by a beautiful picture of the Alborz Mountains. The language tools pod has a menu of Cultural Resources, Sound Recorder, Glossary, and Pronunciation Guide. On the top right side of the page is "course content," which shows the modules and tasks. On the bottom right side is the "certificate" pod.


Once learners complete the course, they are able to print the course certificate. The course has tutorials on how to use the program. Screen captures of the program are included in the Appendices.

Module 1 of the Persian-Farsi language program introduces students to the letters of the alphabet, breaking those characters down into letters that are similar to the English alphabet and letters that require students to learn a new sound. Subsequent modules introduce country names, telling time, weather, making appointments, and topography. The lessons are broken into different interactive games involving word-matching using the Persian script.

To practice writing, the program provides printable PDF files with the relevant exercises and patterns for the learner to practice writing the script. For example, in Step 4 of Task 1 of Module 1 of the Sound and Script section, each vocabulary item has a PDF button that opens a printable page. See the sample in the Appendices section.


Description of Evaluation Design

This evaluation looked at whether the program meets the identified needs of the learners. In other words, it examined the ways the program addressed those needs, and the elements and components it employed to support learners in acquiring the Persian language. The goal-free model is appropriate for this evaluation because it does not focus on the Headstart program's own goals and objectives; instead, it examines whether all the delivery aspects of the Headstart program are performing properly. The intended outcome of this evaluation, based on the goal-free model, is to learn about the usefulness and impact of the Headstart program. This is accomplished through data collection and data analysis. As the evaluator, I am not part of the program, which helps keep the report unbiased. I attempted to discover the actual effects of the Headstart program relative to the learners' needs by concentrating on what is actually happening in the program.

Based on the goal-free model, I relied on data-gathering techniques including surveys and observations.

Evaluation Questions

Answering the evaluation questions stated earlier is a critical step toward ensuring that the results lead to program improvement.

Is the program user-friendly and easy to work with? Based on the survey, 80% of the participants (4 out of 5) found the different sections of the program easy to navigate. Observation of the participants while they took the course also showed that the different sections could be completed independently, without a significant increase in overall completion time.

Is all the information suitable for a learner with no prior Persian language knowledge? I also conducted the survey with independent faculty outside of the Persian Department to see whether the amount of information was too much for the learners' language level. 95% of the faculty who took the survey identified the content as suitable for the target users.

Are all aspects of self-study considered (i.e., is it teacher-proof)? The aspects of an effective self-study program are as follows:

1. Help the learner identify the starting point of the educational program and determine related tasks.
2. Be a manager of the learning experience rather than an information provider.
3. Offer different ways of achieving objectives for a successful performance.
4. Provide various examples of acceptable work.
5. Develop high-quality learning guides.
6. Recognize learners' learning styles.
7. Provide rich, comprehensive explanations of instruction.
8. Provide opportunities for learners to reflect on what they are learning.
9. Promote learning interactions between learner and learner, and between learner and instructor.


The survey was conducted to measure the level of independence the program supports. While the evaluation covered only the first task of the first module, it still provided valid information on how well the program is designed for individual, student-paced progression and whether the directions are clear enough to create a teacher-proof set of instructions.

Data Sources

Data collection for this evaluation was done through surveys, observations, and tests. For the purpose of this evaluation, the end user (the learner) reviewed and tried out the Headstart program. The evaluation used a one-on-one trial with each participant. Each participant was asked to open the Headstart program and complete the first Task of the first Module of "Sound and Script." This process was supervised and completed in an estimated 1 to 2 hours, in keeping with the DLI requirement of one module consisting of two activities per day. There was a test at the end of the program to assess the participant's learning. Given the short amount of learning time and the use of their notes, the test results show the level of comprehension after completing the beginning activities. However, the main focus of this evaluation was how the design and organization of the program facilitated or hindered the learning process. This is reflected in the outcomes of the learning tests, which are part of the Headstart program, and shown primarily by the surveys given to all participants. Although this evaluation relied on participants who were not DLI students, the results can still be used as a valid source of data. See the Appendices for a sample test.

The participants took the survey after completing the course. They also provided feedback and comments on how the program worked for them as English-speaking learners with no prior knowledge of the Persian language. See the Appendices for a sample survey.

The same survey was also used to gather information from a group of (simulated) independent faculty members who were not affiliated with the Persian Department. Although the Headstart program uses the same format across languages, there are differences from language to language, and having other language instructors review the program is a way to highlight areas that may be confusing, incomplete, or presented in an inappropriate sequence. This survey was also used to check whether the Headstart program included all of the characteristics of a self-directed program requiring no instructor assistance.

The survey and test forms are PDF files for easy recording and tracking of the information.

Samples

This evaluation was completed through the collaboration of five individuals who, as simulated learners, participated in taking the Headstart program. The participants ranged in age from 22 to 40, and their education levels ranged from high school diploma to master's degree – a mix similar to the composition of DLI classes. All of the participants were native English speakers. Three had no prior knowledge of the Persian language, while the other two knew the target language at a post-intermediate level but had not worked with the Headstart2 program before. As a result, all five participants were new to the Headstart2 program in terms of its usability, content, and learner-oriented level.

In terms of technology literacy and computer skills, the participants shared roughly the same intermediate level of knowledge.

Again, using participants with different background knowledge of the Persian language enabled me to keep the focus of this evaluation on the program itself rather than on assessing the level of learning.

Data Analysis

The data collected from each individual provided the source of the numerical information. To analyze the collected data in a meaningful way, I converted scores to percentages, since percentages provide a better grasp of the data gathered.

The survey asked 18 questions, each addressing a particular aspect of the evaluation. For the purposes of the evaluation, three areas were determined to provide the information needed to show the effectiveness of the program and to answer the questions initially asked. The survey consists of:

• Usability of the program: 9 questions
• Level of initial information: 2 questions
• Characteristics of a self-directed program: 7 questions

To get a general view of the overall program's performance, I calculated each participant's feedback on the survey questions (Table 1). The answers to the questions under each criterion were averaged to produce a mean score for that criterion. Each question used a scale of 1 to 5, so the mean was divided by 5 to normalize it. For instance, participant #4's average score for the usability of the program was 4.5; dividing 4.5 by 5 (the maximum of the scale) gives a normalized score of 0.9, or 90%. See Table 1 in the Results section.
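
To make the calculation concrete, the short sketch below reproduces this normalization step. The response values are hypothetical placeholders, not the survey answers actually collected for this evaluation; only the method – averaging each criterion's 1-to-5 answers and dividing by 5 – mirrors the procedure described above, with the criteria grouped by the 9/2/7 question split listed earlier.

```python
# Minimal sketch of the survey-score normalization described above.
# The response values are hypothetical placeholders, NOT the actual
# data collected for this evaluation; only the method is mirrored.

SCALE_MAX = 5  # every survey question uses a 1-to-5 scale

# One participant's answers, grouped by criterion (9 + 2 + 7 = 18 questions).
responses = {
    "usability": [5, 4, 5, 4, 5, 4, 5, 4, 5],   # 9 questions
    "suitable_information": [4, 5],              # 2 questions
    "self_directed": [4, 4, 3, 4, 5, 4, 4],      # 7 questions
}

for criterion, answers in responses.items():
    mean_score = sum(answers) / len(answers)  # average raw score
    normalized = mean_score / SCALE_MAX       # e.g., 4.5 / 5 = 0.9
    print(f"{criterion}: {normalized:.0%}")  # report as a percentage
```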

To analyze the participants' input specifically on the questioned areas of the survey, I also calculated the average score for each criterion across participants, shown in Figure 1.

Audience

This evaluation was conducted for the Curriculum Development department of the Defense Language Institute in order to provide impartial feedback for the improvement of the program. However, a copy of this report will also be sent to the Administrative Faculty of the Defense Language Institute Foreign Language Center (DLIFLC) and the Persian language department.


Results

According to the data collected from the evaluation participants (simulated learners and faculty members) through surveys and observations, the Headstart program presented a usable and generally effective computer application for the purpose of language learning. The designers applied content-related design principles, giving the program a simple and pleasant appearance. This helped users concentrate on the content with less distraction. Three out of five participants gave high scores for ease of use; surprisingly, one of the five scored accessibility and ease of navigation very low.

The following table presents the results of the survey based on the three main questions of the evaluation: a) Is the program user-friendly and easy to work with? b) Is all the information suitable for a learner with no prior Persian language knowledge? c) Are all aspects of self-study considered (i.e., is it teacher-proof)?

Table 1

Participant       Program's            Suitable Information   Aspects of Self-
                  User-friendliness    for Learner's Level    directed Program
Participant #1    56%                  70%                    80%
Participant #2    100%                 70%                    86%
Participant #3    100%                 90%                    86%
Participant #4    90%                  90%                    72%
Participant #5    100%                 100%                   88%

The results regarding the characteristics of a self-directed program, along with my observations, show that using the Headstart program does not eliminate the need for instructor assistance. Feedback from the participants showed that not all the directions were descriptive enough to complete the activities. Additionally, the participants found that the vowels and some letter symbols were not explained properly, and the first task of the first module lacks comprehensive information on categorizing letters with similar sounds, which may confuse a new learner.


The Headstart program lacks a thorough introduction to the presented language before moving on to the activities. Unfortunately, new learners start the program with a massive amount of information and little assistance in organizing and managing it. The program fails to present the information in an order that would decrease the memory load; apparently "memorizing" is still the program's traditional mode of instruction. One reason for the increased memory load is the lack of visuals and images.

Figure 1

[Bar chart of the average survey score per criterion – Program's user-friendliness: 94.7%; Suitable info for learner's level: 82%; Aspects of self-directed program: 82.2%]

The chart above presents the results of the survey questions on the "user-friendliness of the program," "appropriate information for learner's level," and "characteristics of a self-directed program," which comprise 9, 2, and 7 survey questions, respectively. These results suggest that the amount of information, for a learner with no prior Persian language knowledge, is overwhelming. The directions provided are also not sufficient to support the self-directed nature of the program. As the Headstart program is designed for self-study purposes, the low percentage on this aspect is certainly a drawback for the program.


Discussion of the Results

How good were the results of the program? The focus of the evaluation was on usability, design quality, and enhancement of the learning experience. The results of the evaluation provided adequate information about the program's strong points and the areas that need improvement. Using the opinions and feedback gathered in the surveys of the students and the faculty – both of whom went through the same unit of instruction in the Headstart program – I was able to determine that, overall, the Headstart program proved both user-friendly and strong in educational content.

Another critical area was the program's ability to be navigated and used independently by students without assistance. This area is not as easy to evaluate, because language learning requires a "two-sided" exchange of ideas – that is what language is. While it is possible to guide a student through the basics of pronunciation and memorizing phrases, actual language production requires another "person" to communicate with. Headstart tries to overcome this by using avatars that show proper mouth movement for reproducing sounds and words, and that even say short phrases on screen to the learners. However, there is no real communication, because the avatar is just repeating the words the student sees on the screen, not actually interacting or communicating with the student.

While the results were mostly similar across participants, the small sample size made the accuracy of the results less than desirable. Having a larger sample to work with, more time to complete the given unit, and the chance to move on through additional lessons would have made the final results more reliable. Still, the information I gathered provided a good foundation for both short-term and long-term recommendations.

How did the program results compare with what might have been expected had there been no program? The Headstart program exists because DLI needed to introduce new learners to, and prepare them for, their new language. It fills the gap between the time the soldiers arrive at base and the start of their regular classes. It also gives deploying soldiers a tool to learn some basic language skills before they go overseas.

The Headstart program enhanced new learners' acquaintance with their assigned language and has been shown to significantly increase a student's chance of success and graduation. The actual courses move at such a fast pace that the skills the Headstart program gives students are like having a month of class before the class starts. It really is a head start, and if students use it as a way to get ahead and stay ahead in class, they are more likely to succeed and graduate. Therefore, having a program like this is much better than having no program at all.

However, just having a program does not mean that it cannot be improved. By assessing the program and gathering student feedback, a more effective program can be developed for the long term, while short-term adjustments and fixes will help keep the program effective, hold student interest, and deliver the needed information. Now that Headstart has moved into the educational technology world, there are almost limitless ways to experiment with delivery and design. Much as software and operating systems change every few years, the Headstart program can also evolve into newer versions, keeping the content the same but changing the ways in which that content is delivered. DLI has moved from cassette tapes to CDs, to MP3s, and now to the iPod touch. Not only can Headstart change, but the actual classes can also change to use more and more technology.

Are there any other possible explanations of the program results? Overall, the results showed that the participants were satisfied with the program. The participants came from different educational backgrounds, ages, and ethnic backgrounds. With this diverse group, the closeness of their responses gives confidence in the accuracy of the final results. However, it is possible that those who had gone through DLI before had that experience reflected in their answers. It is interesting to note that of the two participants who attended DLI, one gave some of the most positive feedback and the other gave the most critical.

As far as student learning, all of the participants did quite well on the final assessment. This shows that the program is effective and weakens alternative explanations based on educational level or prior language learning experience.

At DLI, much larger groups of students are using the program, and they have found that it does give some advantage to those who use it before beginning class. However, once the class progresses beyond what is taught in Headstart, the material is all new, and it is then up to the students and their skills to keep going and not fall behind. This program is a good tool for the basics, but it is not designed to be a complete course.


Conclusions and Recommendations

The following are the conclusions that I, as the evaluator, have developed as a result of the data collected from the participants and faculty:

Immediate Conclusions

• Provide clear and specific directions for each activity to take away any ambiguity and confusion.

• Employ visuals and images for the glossary to decrease memory load.

• Provide instructions for vowels in the Persian language and more fully explain their differences from English vowels.

• Provide adequate explanation – not overwhelming but indeed comprehensive – for the groups of letters with similar sounds.

• Place the evaluation test in an area of the program where the learner cannot go back to the lesson to find the answers.

Long-Range Planning

• Supply a means of interaction between the learner and an instructor via email, chat, or discussion forum for instructional assistance and facilitation during individual study.

• Supply a means of interaction between the learner and technical support via email, chat, or discussion forum to expedite and ease the troubleshooting process.


• Create a fully interactive environment that, instead of utilizing rollovers and stationary avatars, fully interfaces with the student to provide virtual-reality learning experiences.

• Create interactive games that engage the learner with the concepts actively, rather than a plain, basic learning context.

• Employ videos of real situations to make the dialogue phrases more meaningful and practical, instead of a memorization exercise.

Evaluation Insights

If I had to do this evaluation over again, there are many things that I could learn from and improve on based on the experience I had through this process, and hopefully I would not repeat the same mistakes twice. The first thing I would do for a future evaluation would be to have DLI hire me – and pay me well – to evaluate and manage their program! Seriously, one of the biggest drawbacks I faced was not having access to the hundreds of DLI students who use this program every day. Much of the data I had to simulate, or find ways to work around, would be easy to obtain if I could survey the students currently using the program. So I am only half joking when I say DLI should hire me: it really would give me better data to work with and, therefore, better results. Also, using faculty outside of the Persian department would have given better insight into the educational content. I was lucky to have two participants with DLI experience: one was a military instructor and a former DLI student, and the other was twice a student at DLI and a language manager in the military.

Another thing that did not go as planned was the timeline. Conducting this evaluation properly requires giving the participants enough time to engage and connect with the activities, concepts, and brand-new information they are receiving. The lessons of the Headstart program are very thorough, with sufficient activities for students to complete, so participants should be given enough time for the information to sink in. Once participants make connections with the program, they find it second nature; for a reliable outcome, allowing enough time is critical.

I would also like to consider feedback from learners who had already used the same or older versions of the Headstart program. Their insight would certainly be valuable for discovering the strengths and weaknesses of the program: the usefulness of the activities, the appropriateness of the presented information, the usability of the program, and the thoroughness of the instructions supporting learner independence.


References

Boulmetis, J., & Dutwin, P. (2005). The ABCs of Evaluation. San Francisco, CA: Jossey-Bass.

Dick, W., & Carey, L. M. Formative Evaluation. In Instructional Design (pp. 227-267).

Lowry, C. M. (1989). Supporting and Facilitating Self-Directed Learning. Retrieved from ERIC: http://www.eric.ed.gov/PDFS/ED312457.pdf

Lynch, B. K. (1996). Language Program Evaluation: Theory and Practice. Cambridge University Press.


Appendices

Persian-Farsi Headstart2 Screen Captures

Headstart Home Page


Sound and Script category, Module 1, Task 1


Step 1 of Task 1


Step 2 of Task 1


Step 3 of Task 1


Video Captures of Steps 4 & 5

Step 4 of Task 1 of Module 1

Step 5 of Task 1 of Module 1



Step 4: Writing Practice PDF file


Survey


Evaluation Test of Alphabet
