Annals of DAAAM & Proceedings, 2008. ISSN 1726-9679. DAAAM International Vienna.

Examination software for technical subject education.


Podhorsky, Stefan ; Maronek, Milan


1. INTRODUCTION

An integral part of every educational process, regardless of its form, is the assessment procedure used to determine the achieved study results. Besides traditional means of verifying acquired knowledge, a computer-based form can be used, offering a higher level of efficiency, objectivity and reliability in the assessment process (Hrmo, 2005). Computer-based assessment can be used not only for final tests but also for gradual and formative assessment, with the major benefit of immediate feedback. In computer-based testing the test questions must be formulated so as to yield definite answers, so methods based on selecting the correct answer from a given list, e.g. multiple-choice, multiple-response and true/false questions, are most commonly used (JISC, 2007). Although this method of testing is quick and easy to implement, it cannot properly verify a student's in-depth knowledge, especially in technical education. This was the main reason for developing our own system of computer-based testing and assessment at the Institute of Industrial Engineering at the Faculty of Materials Science and Technology.

2. THE eTest APPLICATION

eTest is a software package designed for computer-aided preparation of didactic tests and for computer-based testing of students' knowledge. It consists of a few modules. The eTest Database module is used for item bank management. The eTest Builder module generates the didactic tests and, in the case of the paper form of testing, prints the test sheets. eTest Student and eTest Supervisor are the basic client applications for the computer-based form of testing. The following requirements had been defined before software development started:

* various types of test questions (suited to computer-based assessment of answers),

* individual test building for every tested student,

* possible limitation of test generation to a selected part of the study subject (e.g. topic selection),

* a count of generated test questions in every topic proportional to the extent of that part in the curriculum,

* consideration of the questions' difficulty level,

* possible inclusion of graphics (figures) in the test questions,

* integration of multimedia files into didactic tests,

* scoring of students' answers based on the weight of the question's significance and on the completeness of the student's answer (balanced scoring) (Huba et al., 2003).

Item bank data can be prepared in rich text format (RTF) using a common text processor, or a specialized tool can be used. Data is stored in the application's own format; the IMS Question and Test Interoperability (QTI) specification is not supported internally, but data can be imported from and exported to other systems in the IMS QTI format. This seems to be a common practice in computer-based systems (Sclater & Low, 2002).

3. TOPICS

When a new test question is inserted into the item bank, it must be assigned to a topic of the study subject. Therefore, a structured list of topics is an integral part of each item bank of test questions. The list of topics has to reflect the structure of the lectured matter. Each topic can be divided into smaller parts on a lower hierarchical level, similarly to the way chapters of a textbook are split into subheadings.

The assignment of test questions to topics is meaningful: it serves not only the easy categorisation of test questions, but its primary purpose is to guarantee a uniform distribution of the generated test questions across every part of the subject matter. To this end, the structured list of topics records each topic's extent in minutes. Based on these data and on the known total count of test questions, the software can determine how many questions should be picked from the item bank within each topic.
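
As an illustration of this allocation rule, the following minimal Python sketch distributes a given number of test questions proportionally to the topics' extent in minutes. The function name, data layout and largest-remainder rounding are our assumptions for illustration; the paper does not describe the actual algorithm.

    from math import floor

    def allocate_questions(topic_minutes, total_questions):
        """Split total_questions across topics proportionally to their extent in minutes."""
        total_minutes = sum(topic_minutes.values())
        exact = {t: total_questions * m / total_minutes for t, m in topic_minutes.items()}
        counts = {t: floor(x) for t, x in exact.items()}
        # Hand out the questions lost to rounding to the largest fractional parts.
        remainder = total_questions - sum(counts.values())
        for t in sorted(exact, key=lambda t: exact[t] - counts[t], reverse=True)[:remainder]:
            counts[t] += 1
        return counts

    # E.g. 12 questions over topics lectured for 90, 135 and 45 minutes:
    print(allocate_questions({"Casting": 90, "Welding": 135, "Joining": 45}, 12))
    # {'Casting': 4, 'Welding': 6, 'Joining': 2}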

4. TEST QUESTIONS

A test question is a record in the item bank database. To achieve higher variability of the generated tests, several variants of the same question should be inserted into the item bank. The term "question variant" means a different formulation of the question, a different way of entering the answer, a different question type and so on, while the subject of the question stays the same. The software ensures that only one variant of a given question is present in each generated test.
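
One possible data layout for this constraint is sketched below: variants share a common question identifier, and the generator draws at most one variant per identifier. The field names are illustrative assumptions, not the real eTest schema.

    import random

    # Hypothetical layout: variants of the same question share a question id.
    item_bank = [
        {"qid": "Q1", "variant": "a", "text": "Define the term 'weldability'."},
        {"qid": "Q1", "variant": "b", "text": "Which material property does 'weldability' describe?"},
        {"qid": "Q2", "variant": "a", "text": "Name the main zones of a fusion weld."},
    ]

    def pick_one_variant_per_question(bank):
        """Group items by question id and draw one random variant from each group."""
        by_qid = {}
        for item in bank:
            by_qid.setdefault(item["qid"], []).append(item)
        return [random.choice(variants) for variants in by_qid.values()]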

Besides the topic specification, it is necessary to specify the type of the question after a new record is created. The question type determines the way the student will answer and, in addition, the form of the answer entry in the appropriate field of the record. There are eight types of test questions in the present version of the software. According to the way the answer is entered, the question types can be classified into three groups:

* the student has to check the correct item or items in a list of provided answers,

* the student has to enter a character string into an input field (a word, a few words, a sentence),

* the student has to enter a numeric value.

Some fields in the item bank database can contain formatted text, enabling the complex formatting of the question text required in technical subject education (special characters, the Greek alphabet, superscripts, subscripts etc.). The form of the answer notation is specific to every type of question and constitutes the record used for automatic assessment of the student's answer. Graphics can be attached to test questions as bitmaps or vector graphics. Besides graphics, multimedia files (video or audio) can be attached for the electronic form of testing. The following numeric fields are present in the question record (a sketch of a complete record follows the list):

* the time in seconds available for the student to work out the answer (time limit),

* the difficulty index of the question,

* the rating in marks the student can gain for a completely answered question.
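
Collecting the fields named above, one item bank record could take roughly the following shape. All field names and defaults here are illustrative assumptions, not the real eTest database schema.

    from dataclasses import dataclass, field

    @dataclass
    class TestQuestion:
        topic: str                # topic assignment (section 3)
        qtype: str                # one of the eight question types
        text: str                 # formatted question text (special characters etc.)
        answer_notation: str      # type-specific record for automatic assessment
        attachments: list = field(default_factory=list)  # figures, audio, video files
        time_limit_s: int = 60    # time available to work out the answer
        difficulty: int = 1       # difficulty index of the question
        marks: float = 1.0        # marks for a completely answered question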

4.1 Types of test questions

The type of a test question determines the way the student answers it. Along with that, the question type defines the construction and form of the text entry used for automated answer assessment; specific syntax rules apply to every type of test question. The algorithm of answer assessment and the way marks are assigned (weighted scoring) also depend on the question type. Multiple-choice test questions are the most common ones. There are three question types in this group, in which the student is presented with a choice (a possible scoring sketch follows the list):

* Alternative: there is exactly one correct answer in the test question; the student gains full marks for the correct choice, otherwise no marks.

* Subset: any count of items in the test question may be correct, including none (a multiple-response question); the count of correct items is not revealed to the student; balanced scoring is used.

* Combination: there are at least two correct items in the test question; the count of items the student has to check follows from the question; balanced scoring is used.
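
The paper does not give the balanced-scoring formula, only the reference (Huba et al., 2003). The sketch below shows one plausible reading, in which credit grows with the completeness of the answer and wrong checks cancel correct ones; treat the formula itself as an assumption.

    def balanced_score(checked, correct, marks):
        """Hypothetical balanced scoring for Subset/Combination questions."""
        if not correct:                     # a Subset question may have no correct item
            return marks if not checked else 0.0
        hits = len(checked & correct)
        misses = len(checked - correct)
        return marks * max(hits - misses, 0) / len(correct)

    def alternative_score(choice, correct, marks):
        """Alternative questions are all-or-nothing."""
        return marks if choice == correct else 0.0

    print(balanced_score({"a", "c"}, {"a", "b", "c"}, 3.0))  # 2.0: two of three correct items found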

The next group of test question types comprises those that require text input as the answer. In these cases, appropriate keywords have to be specified for every test question; these are then sought in the student's answer. The student's answer is taken as correct only if all the specified keywords are found within it. This method of automated assessment brings some complexity. One issue is the inflection of words in many languages. The application solves this problem by searching only for the base part of the keyword rather than the whole word form. Therefore, keywords are written with the tilde character "~" separating the invariable part of the keyword from the suffix. Another problem is the real possibility of the student misspelling; a student should not be penalised for a mistake in entering the answer (Cook & Perli, 2007). Therefore, diacritic marks are omitted when the keywords are sought in the student's answer, and the difference between the characters "i" and "y" is ignored as well. Furthermore, the student can self-check the automatic assessment. There are three types of test questions utilizing the method of keyword seeking (a matching sketch follows the list):

* Keywords: one or a few keywords are sought in the text of the student's answer; the student gains full marks if all the keywords are found in the answer, otherwise no marks.

* Enumeration: the student's task is to give an enumeration of items; the order of the items in the answer is not relevant; balanced scoring is used.

* Completion: a text is displayed with some words omitted, and the student's task is to complete the text with the appropriate words; balanced scoring is used.
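
The keyword matching described above can be sketched as follows. The concrete keyword spelling (e.g. "stabil~ita") and the normalisation details are assumptions derived from the rules in the text (stem matching, dropped diacritics, "i"/"y" folding).

    import unicodedata

    def normalize(text):
        """Drop diacritics, lowercase, and fold 'y' into 'i' (per the rules above)."""
        stripped = "".join(c for c in unicodedata.normalize("NFD", text)
                           if not unicodedata.combining(c))
        return stripped.lower().replace("y", "i")

    def keyword_found(keyword, answer):
        """'stabil~ita' matches any word form starting with the stem 'stabil'."""
        stem = keyword.split("~")[0]
        return normalize(stem) in normalize(answer)

    def all_keywords_found(keywords, answer):
        return all(keyword_found(k, answer) for k in keywords)

    # A differently inflected answer typed without diacritics still matches:
    print(all_keywords_found(["austenit~icky", "stabil~ita"],
                             "Austeniticka struktura zvysuje stabilitu"))  # True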

Besides the mentioned types, the following test question types can be used in the item bank as well:

* Number: a numeric value is required from the student; a range of allowable values can be specified for the correct answer (see the sketch after this list).

* Manual: the student answers the test question with some text input that the software does not assess; the answer is assessed by the examiner.
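
For the Number type, the allowable range mentioned above reduces to an interval check, e.g. (names and example values assumed):

    def number_correct(answer, low, high):
        """The Number type accepts any value inside the specified allowable range."""
        return low <= answer <= high

    print(number_correct(7850.0, 7800.0, 7900.0))  # density of steel in kg/m^3 -> True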

5. THE TEST BUILDING MODULE

The test-building module is used to define the set of rules and conditions applied when tests are generated. The defined complex of rules and conditions is saved into a file, separately for each subject. Furthermore, several such files can be created within one subject, e.g. for continuous (credit) tests and for the final test. Each generated test is unique from the viewpoint of its question combination, i.e. each student at an examination gets a different set of questions. Creating a configuration for test building requires a few steps: 1. selection of the desired item bank of test questions; 2. specification of a source set of test questions (database filtering); 3. setting of preferences.
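
One saved rules-and-conditions file per subject and test kind could look like the sketch below. Every field name here is illustrative, since the paper does not describe the actual file format.

    from dataclasses import dataclass, field

    @dataclass
    class TestConfig:
        item_bank: str                               # step 1: item bank selection
        topics: list = field(default_factory=list)   # step 2: source-set filter
        min_difficulty: int = 1                      # step 2: database filtering
        max_difficulty: int = 5
        total_questions: int = 20                    # step 3: preferences
        time_limit_min: int = 45

    final_test = TestConfig(item_bank="welding.bank",
                            topics=["Arc welding", "Laser welding"],
                            total_questions=30, time_limit_min=60)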

6. CONCLUSION

The main benefit of the eTest application is the high efficiency of the assessment of students' knowledge, especially with large numbers of students. Notable time savings can be achieved without detriment to the quality and objectivity of the assessment process. If there is a sufficient count of test questions in the item bank, high variability of the generated tests is guaranteed. By optimizing the time allotted to every question in the test, the examination becomes not only shorter, but at the same time students do not have enough time for prospective cheating. In addition, the system of electronic testing eliminates the subjective influence of the assessing person.

When the project of software development started, the application was intended for technical education only. Considering the versatility of the designed concept, however, the software can be utilized in any field of education after an appropriate item bank of questions is built. Item banks for the subjects Technology of Foundry, Technology of Welding and Materials Jointing have been worked out at the Institute of Production Technologies. The server part of the application is currently being developed; in the next academic term we plan to test the computer-based form of assessment in the education process. The paper was realized with the support of the KEGA 3/4157/06 grant.

7. REFERENCES

Cook, J. & Perli, R. (2007). Writing assessment questions for online delivery: Principles and guidelines, Available from: http://www.ltss.bris.ac.uk/caa/writing_e-assessments/, Accessed: 2008-06-05.

Hrmo, R. (2005). Innovation of vocational technical subjects teachers pedagogical training, Trendy technického vzdělávání 2005 (Trends in Technical Education 2005), pp. 7-16, ISBN 80-7220-227-8, Olomouc, June 2005, Votobia, Praha.

Huba, M.; Zakova, K. & Bistak, P. (2003). WWW a vzdelávanie (WWW and Education), STU, ISBN 80-227-1999-4, Bratislava.

Sclater, N. & Low, B. (2002). IMS Question and Test Interoperability: An Idiot's Guide, Available from: www.scrolla.ac.uk/resources/s2/idiots_guide.pdf.

The Joint Information Systems Committee (JISC) (2007). Effective Practice with e-Assessment, Available from: http://www.online-conference.net/jisc/content2007/JISC%20effective_e-assess.pdf, Accessed: 2007-03-12.