Welcome to the E-portfolio
This blog is an e-portfolio for assignments and research in this course. We hope you enjoy our e-portfolio!
Course instructor: Dr. Alaa Sadik
Student IDs: u065932 & u061563
Sunday, May 10, 2009
Evaluation of Online Learning
Find the whole article at this link:
http://www.scribd.com/doc/15151281/EductionalFourmEvalution
Thursday, May 7, 2009
Models of evaluation in educational technology
This link is a PowerPoint presentation about a proposal for evaluating the Omani e-portal using the action model.
Tuesday, May 5, 2009
Comparative and non-comparative evaluation in educational technology
Comparative Study
- Title: Comparative Analysis of Learner Satisfaction and Learning Outcomes in Online and Face-to-Face Learning Environments
- Type of comparative study: learners' perceptions and performance
- Problem: Because of the sharp growth of online programs in recent years, there is a need for research to assess the capabilities and efficacy of online programs.
- Purpose of evaluation: to compare an online course with an equivalent course taught in a traditional face-to-face format.
- Questions: What differences exist between students enrolled in online versus face-to-face learning environments in:
* satisfaction with the learning experience?
* student perceptions of student/instructor interaction, course structure, and course support?
* learning outcomes (i.e., perceived content knowledge, quality of course projects, and final course grades)?
- Participants: Students enrolled in an instructional design course for human resource development professionals.
* 19 students were taught in a traditional face-to-face format.
* 19 students were taught fully online.
- Instruments applied:
1) Some items from the Instructor and Course Evaluation System (ICES)
2) Course Interaction, Structure, and Support (CISS) instrument
3) Course projects
4) The final course grades
5) A self-assessment instrument
- Advantages of the study:
1. Equivalence of the groups: the same number of students, the same instructor and course, delivered by the same department, and requiring the same content, activities, and projects.
2. All data were collected at or near the end of the semester
3. To ensure instrument validity, the researchers used several approaches:
a. To develop the CISS, the researchers contacted the authors of the DOLES and DDE instruments to obtain copies and the necessary permission to use them.
b. Content experts reviewed the items of the instrument.
c. The instrument was pilot-tested by 68 students.
d. Factor analysis procedures were used to establish the construct validity of the CISS instrument.
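The study does not reproduce the factor analysis procedure itself. The following is a minimal sketch of an exploratory factor analysis in Python on simulated pilot data; the item count, loadings, and responses are invented for illustration, and scikit-learn's FactorAnalysis stands in for whatever software the authors actually used.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulated pilot data: 68 respondents (as in the pilot test) answering
# 12 hypothetical CISS items on a 1-5 Likert scale.
rng = np.random.default_rng(0)
latent = rng.normal(size=(68, 3))              # 3 underlying constructs
loadings = rng.uniform(0.5, 1.0, size=(3, 12))
items = np.clip(np.rint(latent @ loadings + 3), 1, 5)

# Extract three factors and inspect the loadings; items that load
# strongly on the same factor are taken to measure the same construct.
fa = FactorAnalysis(n_components=3, random_state=0)
fa.fit(items)
print(np.round(fa.components_, 2))  # factor-by-item loading matrix
```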
- Disadvantages of the study:
o The small sample size makes it difficult to interpret the results.
o The CISS instrument is still in an early developmental stage and has not undergone a full analysis to ensure reliability and validity.
- Results of the study:
1. Student satisfaction: both groups provided positive ratings of instructor quality and course quality.
2. Perceptions of course interaction, structure, and support: overall, both groups of students had positive perceptions, with the face-to-face students holding significantly more positive views of interaction and support.
3. Student Learning Outcomes:
* Blind review of course projects: the difference in project ratings between the two groups was not significant.
* Course grades: the grades were, for the most part, equally distributed between the two groups.
* Self-assessment: significant differences were found on only five of the 29 items on the self-assessment instrument.
Note: At the end of the research, the researchers stated: "These results support the argument that online instruction can be designed to be as effective as traditional face-to-face instruction."
- Reference:
Johnson, S., Aragon, S., Shaik, N., & Palma-Rivas, N. (2000). Comparative Analysis of Learner Satisfaction and Learning Outcomes in Online and Face-to-Face Learning Environments. USA.
Non-Comparative Study
- Title: Virtual interactivity: design factors affecting student satisfaction and perceived learning in asynchronous online courses
- Purpose: This is a non-comparative study. It investigates factors affecting student satisfaction with, and perceived learning from, asynchronous online learning.
- Participants: Approximately 3,800 students enrolled in 264 courses offered through SLN; about 1,406 of them returned the survey. In addition, the design features of 73 courses, with 1,108 students enrolled, were examined.
- Evaluation instruments:
This study used the online SLN Student Survey as its data collection tool. The survey consisted mostly of multiple-choice, forced-answer questions eliciting demographic information and information about students' satisfaction, perceived learning, and activity in the courses they were taking. It also included open-ended items so respondents could add comments.
Course Design Data: Two of the researchers separately examined each of the 73 courses and rated their content on twenty-two variables using Likert-type scaling. Ratings for each course were checked for agreement, and disagreements were resolved by consensus with reference to the courses themselves; a sketch of such an agreement check appears below.
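The paper does not name the agreement statistic used. The following is a minimal sketch of one common choice, Cohen's weighted kappa, applied to invented ratings from two raters:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical Likert ratings (1-5) by two raters for one design variable
# across a handful of courses; the study rated 22 variables over 73 courses.
rater_a = [5, 4, 3, 4, 2, 5, 3]
rater_b = [5, 3, 3, 4, 2, 4, 3]

# Quadratically weighted kappa penalizes large disagreements more heavily,
# which suits ordinal Likert-type scales.
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"Weighted kappa: {kappa:.2f}")

# Flag courses whose ratings disagree so they can be resolved by consensus,
# as the researchers did.
disagreements = [i for i, (a, b) in enumerate(zip(rater_a, rater_b)) if a != b]
print("Courses needing consensus review:", disagreements)
```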
- Advantages:
* Used more than one instrument for collecting data.
* Gives some information about why the findings turned out as they did.
- Disadvantages:
* No clear information about the participants.
- Results:
The study shows high levels of satisfaction with, and perceived learning from, SLN courses in the Spring 1999 semester. The findings also indicate that most students believed their level of interaction with the course materials, with their instructor, and with their peers was as high as or higher than in traditional face-to-face courses.
The study concludes that three factors contribute significantly to the success of online courses: a clear and consistent course structure, an instructor who interacts frequently and constructively with students, and a valued and dynamic discussion.
- Reference:
Swan, K. (2001). Virtual interactivity: design factors affecting student satisfaction and perceived learning in asynchronous online courses. Distance Education, 22, (2), 306-331.
Levels and techniques of evaluation in educational technology
- Purpose of the study:
To examine how one aspect of virtual computing, the virtual lab, effectively addresses many of the challenges of teaching web application development.
- Approaches:
The subjects for this study were drawn from two related graduate courses. One course (Principles of Web Design) had 13 respondents, while the other (Web Application Development) had 10 respondents.
This study uses statistical methods. Likert-scale survey questions were used to evaluate hypotheses H1-H4, while hypotheses H5-H8 were tested with the non-parametric two-sample Kolmogorov-Smirnov test, which compares two distributions and is suitable for small sample sizes. Regression was then used to identify the factors that contribute most to the variance in the dependent variables, the perceived ease of use and usefulness of the virtual lab; the independent variable is the students' programming experience (in years). A small sketch of these two techniques appears below.
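The paper does not include its analysis scripts. Below is a minimal sketch of the two techniques on invented data (the group sizes follow the study; the scores and experience values are made up):

```python
from scipy import stats

# Hypothetical 1-5 Likert scores from the two course groups
# (13 and 10 respondents, as in the study; values invented here).
web_design = [4, 5, 3, 4, 4, 5, 2, 4, 3, 5, 4, 4, 3]
web_app_dev = [3, 4, 4, 2, 3, 4, 3, 5, 3, 4]

# Two-sample Kolmogorov-Smirnov test: compares the two groups'
# score distributions; appropriate for small samples.
ks_stat, ks_p = stats.ks_2samp(web_design, web_app_dev)
print(f"KS statistic = {ks_stat:.3f}, p = {ks_p:.3f}")

# Simple regression: does programming experience (years) predict
# perceived usefulness of the virtual lab?
experience = [0, 1, 1, 2, 3, 3, 4, 5, 6, 8]
usefulness = [3, 3, 4, 4, 3, 4, 4, 5, 4, 5]
reg = stats.linregress(experience, usefulness)
print(f"slope = {reg.slope:.2f}, R^2 = {reg.rvalue**2:.2f}, p = {reg.pvalue:.3f}")
```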
- Levels of Evaluation:
The evaluation in this study was done at the program level. It found that students rated the virtual lab with a mean usefulness of 3.75 out of 5 and a mean ease of use of 3.45 out of 5.
- Instruments:
To collect the data for evaluating the virtual lab, a questionnaire was used as the research instrument. The questions were derived from standard TAM questions (Meso & Liegle, 2005; Gallivan, 2001; Chircu et al., 2000; Straub et al., 1997) and further included standard demographic questions. Finally, data about the specifications of the primary computer used by each student and how, if at all, they had configured their web server were also collected.
- Reference:
Liegle, J., & Meso, P. (2005). Evaluation of a Virtual Lab Environment for Teaching Web-Application Development. Dept. of CIS, Georgia State University, Atlanta, GA 30319, USA.
Evaluation strategies
1. Content Quality
2. Learning Goal Alignment
3. Feedback and Adaptation
4. Motivation
5. Presentation Design
6. Interaction Usability
7. Accessibility
8. Reusability
9. Standards Compliance
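One simple way to apply criteria like these is as a scoring rubric. The following is a minimal sketch, with invented 1-5 ratings for a single learning object:

```python
# Hypothetical rubric: score a learning object 1-5 on each of the
# nine criteria above, then average for an overall quality score.
CRITERIA = [
    "Content Quality", "Learning Goal Alignment", "Feedback and Adaptation",
    "Motivation", "Presentation Design", "Interaction Usability",
    "Accessibility", "Reusability", "Standards Compliance",
]

scores = {  # invented example ratings for one learning object
    "Content Quality": 4, "Learning Goal Alignment": 5,
    "Feedback and Adaptation": 3, "Motivation": 4, "Presentation Design": 4,
    "Interaction Usability": 3, "Accessibility": 2, "Reusability": 4,
    "Standards Compliance": 5,
}

overall = sum(scores[c] for c in CRITERIA) / len(CRITERIA)
print(f"Overall quality: {overall:.2f} / 5")
for c in CRITERIA:
    if scores[c] <= 2:
        print(f"Needs attention: {c}")
```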
Evaluation of CAI
http://www.surveyconsole.com/console/TakeSurvey?id=559030
Evaluation in educational technology
First study
- Title: The Integrated Studies of Educational Technology: A Formative Evaluation of the E-Rate Program
- Purpose: This study focused on answering these two main questions:
- To what extent is the E-Rate helping to equalize access to the types of digital technology eligible for program discounts?
- Are schools and teachers able to use the technology that E-Rate supports? How is it being used in the classroom?
- Instruments:
- ISET surveys conducted during school year 2000–2001, including the following surveys:
- a survey of technology coordinators in all 50 states and the District of Columbia,
- The Survey of District Technology Coordinators
- The Survey of School Principals
- The Survey of Classroom Teachers
- E-Rate administrative data covering all E-Rate applications and funded commitments through January 2000.
- Reference:
Puma, M. J., Chaplin, D. D., Olson, K. M., & Pandjiris, A. C. (October 2002). The Integrated Studies of Educational Technology: A Formative Evaluation of the E-Rate Program. The Urban Institute, Washington, DC 20037-1207.
Second Study
- Title: Statistics E-learning Platforms Evaluation: Case Study
- Purpose: This study aims to examine the effect of using e-learning platforms in a statistics course, considering the course of study, gender, previous experience with e-learning, understanding of statistics, study level, structured format, self-study, the flexibility and freedom in dealing with the course, the interactive environment, and e-learning problems.
- Evaluated features of the e-learning platforms:
- Flexibility: it is important that students can access the system at any point. This feature is evaluated using the question: "Do you think that e-learning platforms give you the freedom and flexibility to work with the course? For example, MM*Stat or Moodle." Responses are re-classified into four categories: yes, partial, no, I do not know.
- Structured format: a well-designed e-learning platform can support student learning by helping students develop their insights without getting bogged down in the mathematics. Students were asked: "What is your opinion about the structured format of the e-learning platform you have used?" Responses are re-classified into four categories: good, acceptable, bad, I do not know.
- Self-study: e-learning platforms offer students the possibility to learn on their own. Students were asked: "Do you think that e-learning platforms improve and increase your self-study?" Responses are re-classified into four categories: yes, partial, no, I do not know.
- Interactive environment: students were asked: "What is your opinion about the interactive environment of e-learning platforms?" Responses are re-classified into four categories: good, acceptable, bad, I do not know. A sketch of this reclassification step appears after this list.
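The paper does not show the reclassification step itself. The following is a minimal sketch, assuming raw free-text survey answers are mapped into the four categories and tallied:

```python
from collections import Counter

# Hypothetical raw responses to the flexibility question; the mapping
# below mirrors the paper's four-category reclassification.
raw = ["Yes", "yes, mostly", "Partially", "No", "not sure", "YES", "partial"]

def recode(answer: str) -> str:
    a = answer.strip().lower()
    if a.startswith("yes"):
        return "yes"
    if "partial" in a:
        return "partial"
    if a == "no" or a.startswith("no,") or a.startswith("no "):
        return "no"
    return "I do not know"  # anything unclassifiable falls here

counts = Counter(recode(r) for r in raw)
print(counts)  # category frequencies ready for the cross-tabulations
```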
- Reference:
Ahmad, T., & Härdle, W. (August 31, 2008). Statistics E-learning Platforms Evaluation: Case Study. CASE - Center for Applied Statistics and Economics, Humboldt-Universität zu Berlin, Spandauerstrasse 1, 10178 Berlin, Germany. Retrieved on 14/2/2008 from: http://www.econbiz.de/archiv1/2008/58778_statistics_elearning_platforms.pdf
Note: Find a PowerPoint presentation about these two studies on SlideShare.net at this link:
http://www.slideshare.net/u065932/evalution-in-et