Pola Grammar for Automated Marking of Malay Short Answer Essay-Type Examination
Ab Aziz, Mohd Juzaiddin (2008) Pola Grammar for Automated Marking of Malay Short Answer Essay-Type Examination. PhD thesis, Universiti Putra Malaysia.
Efforts to mark English essay-type examinations automatically began in the 1960s, but there have been few attempts to mark Malay essay-type examinations automatically. One earlier work marked essays for the History subject, focusing on the temporal values of the essays rather than on the sentence structure of the Malay language. The subjective nature of sentence construction makes it difficult to identify the important points addressed in the essays. A short answer essay-type examination requires students to answer the questions with sentences in a short paragraph. When marking examination scripts manually, lecturers or teachers must identify the similarity between the sentences in the answer scripts and those in the answer scheme; the answer scripts must be read carefully and understood by the examiner in order to award fair marks. Sentence similarity is defined here as sentences that have similar meaning but differ in the words used or in sentence structure. This research addresses these problems using pola grammar techniques, in which sentence similarity is identified through a representation of Malay language structure and a thesaurus of Malay verb synonyms. Pola grammar produces a Grammatical Relations (GRs) representation. The technique is an enhancement of the four basic Malay sentence representations: Noun Phrase + Noun Phrase (NP+NP), Noun Phrase + Verb Phrase (NP+VP), Noun Phrase + Preposition Phrase (NP+PP), and Noun Phrase + Adjective Phrase (NP+AP). To recognize the sentence structure, a finite state automaton (FSA) is constructed from the pola grammar rules. The effectiveness of the FSA is evaluated in an application known as the Automatic Marking System for Short Answer Essay-type examinations (AMS-SAE).
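The idea of recognizing the four basic sentence patterns with an FSA can be sketched as follows. This is a minimal illustration, not the thesis implementation: it assumes the input has already been chunked into phrase-level tags (NP, VP, PP, AP), and the state names are invented for the example.

```python
# Sketch of an FSA accepting the four basic Malay sentence patterns:
# NP+NP, NP+VP, NP+PP, NP+AP. Input is a sequence of phrase tags.

# Transition table: state -> {phrase tag -> next state}
TRANSITIONS = {
    "START": {"NP": "SUBJ"},                       # sentence must open with a noun phrase
    "SUBJ": {"NP": "ACCEPT", "VP": "ACCEPT",       # predicate may be any of the
             "PP": "ACCEPT", "AP": "ACCEPT"},      # four phrase types
}

def accepts(tags):
    """Return True if the tag sequence matches one of the basic patterns."""
    state = "START"
    for tag in tags:
        state = TRANSITIONS.get(state, {}).get(tag)
        if state is None:          # no transition defined: reject
            return False
    return state == "ACCEPT"

print(accepts(["NP", "VP"]))   # True  (e.g. subject + verb phrase)
print(accepts(["VP", "NP"]))   # False (a verb phrase cannot open the sentence)
```

A fuller automaton for pola grammar would add states for complex and conjoined sentences, but the accept/reject mechanism is the same.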
Two tests were conducted using AMS-SAE. First, 78 short answer essays in the form of simple, complex, and conjoined sentences were scored for similarity. The results show that the average score difference from human markers is 0.032 for simple sentences, 0.113 for complex sentences, and 0.042 for conjoined sentences. Second, the answers to three questions from a compiler examination were recorded and marked by both AMS-SAE and human examiners. For each question, which had 30 to 45 answers in short essay form, the Mann-Whitney test and the t-test showed a strong significant relationship between the two sets of marks, indicating that AMS-SAE can be accepted as producing marks similar to a human examiner's.
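The Mann-Whitney comparison used above can be illustrated with a small hand-rolled computation of the U statistic. The marks below are hypothetical, invented for the example; they are not the thesis data.

```python
# Illustrative sketch: the Mann-Whitney U statistic for comparing marks
# awarded by an automatic marker with marks awarded by a human examiner.

def mann_whitney_u(sample_a, sample_b):
    """Return the U statistic for sample_a versus sample_b (ties get average ranks)."""
    combined = sorted((value, group)
                      for group, sample in enumerate((sample_a, sample_b))
                      for value in sample)
    rank_sum_a = 0.0
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1                         # j is one past the tie group
        avg_rank = (i + 1 + j) / 2         # average of 1-based ranks i+1 .. j
        for k in range(i, j):
            if combined[k][1] == 0:        # value came from sample_a
                rank_sum_a += avg_rank
        i = j
    n_a = len(sample_a)
    return rank_sum_a - n_a * (n_a + 1) / 2

# Hypothetical marks for one question (out of 10):
system_marks = [7, 8, 6, 9, 7, 8]
human_marks = [7, 9, 6, 8, 7, 8]
u = mann_whitney_u(system_marks, human_marks)
# u would then be compared against the critical value (or converted to a
# p-value) to test whether the two mark distributions differ significantly.
```

In practice a library routine such as SciPy's `mannwhitneyu` would be used; the manual version just makes the rank-sum mechanics visible.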