Interactive software usage for e-learning of business statistics

Document Type


Publication Title

Competitiveness Review


Purpose – The purpose of this paper is to examine students’ academic progress in a course of business statistics through interactive software usage, which helps these students build, during their e-learning, two basic competences: solving problems and interpreting results.

Design/methodology/approach – A random sample of 278 students is used to test two research hypotheses through the χ²-test. The sample data correspond to the students’ final grades reported during the academic period from Spring 2004 to 2008 at an American state university.

Findings – The population proportion of students who earned passing or better final grades in a course of business statistics depends on whether these students are classified as users or non-users of interactive software (p-value = 0.029). Among the students who earned passing or better grades, the proportion of interactive software users (86.8 percent) is greater than the proportion of non-users (76.4 percent). This conclusion is confirmed at a significance level α = 0.05 via the z-test for two proportions (p-value = 0.030), where p1 = 105/121 and p2 = 120/157; that is, the data support the first research hypothesis (HA1). However, the same data set does not support the second research hypothesis (HA2) about the effect of gender on the students’ academic performance (measured via their final grades); that is, the proportion of students under interactive software training who earned passing or better final grades does not depend on how these students are classified by gender (female or male), where the corresponding statistics are a χ² p-value = 0.221 and a z-test p-value = 0.219.

Research limitations/implications – The sample is restricted to students attending the fall and spring semesters. Summer students are not included, since the schedule (eight hours of instruction per week over five weeks) and the grading policies differ from the fall and spring periods (three hours of instruction per week over 14.5 weeks).

Practical implications – The practical implications of this study include the need to standardize the summer syllabi. For example, during the fall and spring semesters the students are required to complete 60 software certificates, while for the summer the students are required to complete only 30 (50 percent) of the software certificates, which represents fewer assignments and less academic effort for summer students.

Originality/value – The study extends understanding regarding the responsibility of instructors/professors, whose priority is to provide high-quality teaching, maintain an environment of professional ethics, and apply and fulfill the approved academic syllabi.
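The two-proportion comparison reported in the Findings can be checked directly from the counts the abstract gives (105 of 121 software users versus 120 of 157 non-users with passing or better grades). The following sketch, in standard-library Python, computes the pooled two-proportion z-statistic and its two-sided p-value; the function names are ours for illustration, not the paper's.

```python
from math import erf, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int):
    """Two-sided z-test for the difference of two proportions (pooled standard error)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)          # pooled success proportion
    se = sqrt(p_pool * (1.0 - p_pool) * (1.0 / n1 + 1.0 / n2))
    z = (p1 - p2) / se
    p_value = 2.0 * (1.0 - norm_cdf(abs(z)))
    return z, p_value

# Counts from the abstract: 105 of 121 software users vs 120 of 157
# non-users earned passing or better final grades.
z, p = two_proportion_z_test(105, 121, 120, 157)
print(f"z = {z:.3f}, two-sided p-value = {p:.3f}")
```

For a 2×2 contingency table, the Pearson χ² statistic equals z² for this pooled test, so the χ²-test and the z-test give the same two-sided p-value, consistent (up to rounding) with the 0.029 and 0.030 reported in the abstract.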

First Page


Last Page




Publication Date


This document is currently not available here.