Case Slicing Technique for Feature Selection


Citation

Shiba, Omar A. (2004) Case Slicing Technique for Feature Selection. PhD thesis, Universiti Putra Malaysia.

Abstract / Synopsis

One of the problems addressed by machine learning is data classification. Finding a good classification algorithm is an important component of many data mining projects. Since the 1960s, many algorithms for data classification have been proposed, and data mining researchers often use classifiers to identify important classes of objects within a data repository. This research undertakes two main tasks. The first is to introduce a slicing technique for feature subset selection. The second is to enhance classification accuracy based on the first task, so that objects or cases can be classified using only the selected relevant features. This new approach is called the Case Slicing Technique (CST). Applying this technique to the classification task can further enhance case classification accuracy. CST helps identify the subset of features used in computing the similarity measures needed by classification algorithms. CST was tested on nine datasets from the UCI machine learning repository and domain theories. The maximum and minimum accuracies obtained are 99% and 96%, respectively, based on the evaluation approach. The most commonly used evaluation technique, k-fold cross-validation, was used in this thesis with k = 10 to evaluate the proposed approach. CST was compared to other classification methods based on feature subset selection, namely the Induction of Decision Tree algorithm (ID3), the K-Nearest Neighbour algorithm (k-NN), and the Naïve Bayes algorithm (NB). All of these approaches were implemented with the RELIEF feature selection approach.
The classification accuracy obtained with the CST method is also compared to that of other selected classification methods: the Value Difference Metric (VDM), Per-Category Feature Importance (PCF), Cross-Category Feature Importance (CCF), the Instance-Based algorithm (IB4), decision tree algorithms such as the Induction of Decision Tree algorithm (ID3) and the base learning algorithm C4.5, rough set methods such as Standard Integer Programming (SIP) and Decision Related Integer Programming (DRIP), and neural network methods such as the Multilayer method.
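The thesis itself is not reproduced on this page, so the details of CST are not available here. As a rough illustration of the pipeline the abstract describes — select a relevant feature subset, classify with a nearest-neighbour similarity measure over those features only, and evaluate with 10-fold cross-validation — the following sketch uses a toy stand-in selection criterion (per-class mean difference), not the actual Case Slicing Technique; all function names and the synthetic data are illustrative assumptions.

```python
# Hedged sketch of the abstract's pipeline: feature subset selection,
# then 1-NN classification over the selected features, evaluated by
# 10-fold cross-validation. The selection criterion below is a toy
# stand-in, NOT the thesis's Case Slicing Technique.
import math
import random

def select_features(X, y, k):
    """Rank features by the spread of their per-class means (toy criterion)."""
    n_features = len(X[0])
    classes = sorted(set(y))
    scores = []
    for j in range(n_features):
        means = [sum(row[j] for row, lbl in zip(X, y) if lbl == c) /
                 sum(1 for lbl in y if lbl == c) for c in classes]
        scores.append(max(means) - min(means))
    return sorted(range(n_features), key=lambda j: -scores[j])[:k]

def knn_predict(train_X, train_y, x, feats):
    """1-NN with Euclidean distance computed over the selected features only."""
    def dist(a, b):
        return math.sqrt(sum((a[j] - b[j]) ** 2 for j in feats))
    best = min(range(len(train_X)), key=lambda i: dist(train_X[i], x))
    return train_y[best]

def cross_validate(X, y, feats, k_folds=10):
    """k-fold cross-validation accuracy (k = 10, as used in the thesis)."""
    idx = list(range(len(X)))
    random.Random(0).shuffle(idx)
    folds = [idx[i::k_folds] for i in range(k_folds)]
    correct = 0
    for fold in folds:
        held_out = set(fold)
        tr_X = [X[i] for i in idx if i not in held_out]
        tr_y = [y[i] for i in idx if i not in held_out]
        correct += sum(knn_predict(tr_X, tr_y, X[i], feats) == y[i]
                       for i in fold)
    return correct / len(X)

# Synthetic two-class data: feature 0 is informative, feature 1 is pure noise.
rng = random.Random(1)
X = ([[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(50)] +
     [[rng.gauss(4, 1), rng.gauss(0, 1)] for _ in range(50)])
y = [0] * 50 + [1] * 50

feats = select_features(X, y, k=1)
print("selected features:", feats)
print("10-fold CV accuracy: %.2f" % cross_validate(X, y, feats))
```

On this synthetic data the selector keeps only the informative feature, and the nearest-neighbour classifier then scores high cross-validated accuracy — the same measurement protocol (10-fold CV accuracy on held-out cases) the abstract reports for CST.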


Download File

PDF
FSKTM_2004_6(1-24).pdf

Download (1MB)

Additional Metadata

Item Type: Thesis (PhD)
Subject: Machine learning
Subject: Classification
Subject: Data mining
Call Number: FSKTM 2004 6
Chairman Supervisor: Associate Professor Hj. Md. Nasir Sulaiman, PhD
Divisions: Faculty of Computer Science and Information Technology
Depositing User: Nur Izyan Mohd Zaki
Date Deposited: 05 May 2010 08:55
Last Modified: 27 May 2013 07:25
URI: http://psasir.upm.edu.my/id/eprint/5838
Statistic Details: View Download Statistic
