Open Access Minireview Article

An Appraisal of Content-Based Image Retrieval (CBIR) Methods

J. O. Olaleke, A. O. Adetunmbi, B. A. Ojokoh, Iroju Olaronke

Asian Journal of Research in Computer Science, Volume 3, Issue 2, Page 1-15
DOI: 10.9734/ajrcos/2019/v3i230089

Background: Content-Based Image Retrieval (CBIR) is an aspect of computer vision and image processing that finds images similar to a given query image in a large-scale database using the visual contents of images, such as colour, texture, shape, and the spatial arrangement of regions of interest (ROIs), rather than manually annotated textual keywords. A CBIR system represents an image as a feature vector and measures the similarity between that image and other images in the database in order to retrieve similar images with minimal human intervention. CBIR systems have been deployed in several fields such as fingerprint identification, biodiversity information systems, digital libraries, architectural and engineering design, crime prevention, historical research and medicine. There are several steps involved in the development of CBIR systems; typical examples include feature extraction and selection, indexing and similarity measurement.

Problem: Each of these steps can be accomplished by several different methods; however, there is no universally accepted method for retrieving similar images in CBIR.

Aim: Hence, this study examines the diverse methods used in CBIR systems, with the aim of revealing the strengths and weaknesses of each method.

Methodology: Literature related to the subject matter was sought in three scientific electronic databases, namely CiteSeerX, Science Direct and Google Scholar. The Google search engine was used to search for documents and web pages appropriate to the study.

Results: The result of the study revealed that three main features are usually extracted during CBIR: colour, texture and shape. The study also revealed that diverse methods can be used for extracting each of these features. For instance, colour space, colour histogram, colour moments, geometric moments, and colour correlogram can be used for extracting colour features. The commonly used methods for texture feature extraction include statistical, model-based, and transform-based methods, while the edge method, Fourier transform and Zernike moments can be used for extracting shape features.
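As a concrete illustration of colour-feature extraction, the sketch below builds a quantised colour histogram from raw RGB pixels and compares two images with histogram intersection. It is a minimal, self-contained Python example, not drawn from any of the surveyed systems; the bin count and pixel representation are assumptions.

```python
from collections import Counter

def colour_histogram(pixels, bins_per_channel=4):
    """Quantise each (r, g, b) pixel into coarse bins and count occurrences.
    Returns a normalised histogram (the image's colour feature vector)."""
    step = 256 // bins_per_channel
    counts = Counter((r // step, g // step, b // step) for r, g, b in pixels)
    total = len(pixels)
    return {colour_bin: n / total for colour_bin, n in counts.items()}

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]: the sum of per-bin minima."""
    return sum(min(h1.get(k, 0.0), h2.get(k, 0.0)) for k in set(h1) | set(h2))
```

In a full CBIR system the query image's histogram would be compared against every stored feature vector and the highest-scoring images returned.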

Contributions: The paper highlights the benefits and challenges of the diverse methods used in CBIR, with the aim of revealing the methods that are more efficient for CBIR.

Conclusion: Each of the CBIR methods has its own advantages and disadvantages. However, further work is needed to validate the reliability and efficiency of each method.

Open Access Original Research Article

Meta-Heuristics Approach to Knapsack Problem in Memory Management

Emmanuel Ofori Oppong, Stephen Opoku Oppong, Dominic Asamoah, Nuku Atta Kordzo Abiew

Asian Journal of Research in Computer Science, Volume 3, Issue 2, Page 1-10
DOI: 10.9734/ajrcos/2019/v3i230087

Knapsack problems are among the simplest integer programs that are NP-hard. Problems in this class are typically concerned with selecting, from a set of given items, each with a specified weight and value, a subset of items whose total weight does not exceed a prescribed capacity and whose total value is maximum. The classical 0-1 knapsack problem arises when there is one knapsack and one item of each type. This paper considers the application of the classical 0-1 knapsack problem with a single constraint to computer memory management. The goal is to achieve higher efficiency in memory management in computer systems.

This study focuses on using Simulated Annealing and a Genetic Algorithm to solve knapsack problems in optimizing computer memory. It is shown that Simulated Annealing performs better than the Genetic Algorithm for a large number of processes.
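As a rough sketch of how simulated annealing can be applied to the 0-1 knapsack problem described above (the paper's actual implementation is not reproduced here; the linear cooling schedule, step count, and single-bit-flip neighbourhood are illustrative assumptions):

```python
import math
import random

def knapsack_sa(weights, values, capacity, steps=20000, t0=10.0, seed=0):
    """Simulated annealing for the 0-1 knapsack problem.
    A state is a bit vector over the items; neighbours differ by one
    flipped bit. Infeasible states (weight > capacity) are rejected."""
    rng = random.Random(seed)
    n = len(weights)
    state = [0] * n                          # start with an empty knapsack
    value = weight = best_value = 0
    best = state[:]
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9   # linear cooling schedule
        i = rng.randrange(n)
        delta_w = weights[i] * (-1 if state[i] else 1)
        delta_v = values[i] * (-1 if state[i] else 1)
        if weight + delta_w > capacity:
            continue                         # infeasible neighbour
        # accept improvements always, worsenings with Boltzmann probability
        if delta_v >= 0 or rng.random() < math.exp(delta_v / t):
            state[i] ^= 1
            weight += delta_w
            value += delta_v
            if value > best_value:
                best, best_value = state[:], value
    return best, best_value
```

In the memory-management setting of the paper, the "items" would be processes with memory demands (weights) and priorities (values), and the capacity would be the available memory.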

Open Access Original Research Article

Solving Nurse Scheduling Problem Using Constraint Programming (CP) Technique

Oluwaseun M. Alade, Akeem O. Amusat, Oluyinka T. Adedeji

Asian Journal of Research in Computer Science, Volume 3, Issue 2, Page 1-8
DOI: 10.9734/ajrcos/2019/v3i230088

Staff scheduling is a universal problem encountered in many organizations, such as call centers, educational institutions, industry, hospitals, and other public services. It is one of the most important aspects of workforce management strategy. It is prone to errors, as there are many entities that must be addressed, such as staff turnover, employee availability, time between rotations, unusual periods of activity, and even last-minute shift changes. In this paper, a constraint programming (CP) algorithm was developed to solve the nurse scheduling problem. The algorithm was implemented using the Python programming language and evaluated with varying numbers of nurses. Experimental results confirmed that the CP algorithm was able to solve the nurse scheduling problem with promising results.
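The constraint-based approach can be illustrated with a minimal backtracking solver in Python. This is a sketch only, not the paper's model: the two constraints shown (at most one shift per nurse per day, and no morning shift immediately after a night shift) are assumed for illustration.

```python
from itertools import product

def schedule_nurses(nurses, days, num_shifts=3):
    """Backtracking search assigning one nurse to each (day, shift) slot.
    Shift 0 is morning; shift num_shifts - 1 is night."""
    slots = list(product(range(days), range(num_shifts)))
    assignment = {}

    def consistent(slot, nurse):
        day, shift = slot
        # a nurse works at most one shift per day
        for s in range(num_shifts):
            if assignment.get((day, s)) == nurse:
                return False
        # no morning shift after the previous day's night shift
        if shift == 0 and assignment.get((day - 1, num_shifts - 1)) == nurse:
            return False
        return True

    def backtrack(i):
        if i == len(slots):
            return True
        for nurse in nurses:
            if consistent(slots[i], nurse):
                assignment[slots[i]] = nurse
                if backtrack(i + 1):
                    return True
                del assignment[slots[i]]   # undo and try the next nurse
        return False

    return assignment if backtrack(0) else None
```

A production CP solver adds constraint propagation and clever variable ordering on top of this basic search, which is what makes the approach scale to realistic rosters.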

Open Access Original Research Article

Fuzzy Rule-based System for Corruption Control in Nigerian Police Force

Aliyu B. Salaat, I. Manga, Jerome M. Gumpy

Asian Journal of Research in Computer Science, Volume 3, Issue 2, Page 1-12
DOI: 10.9734/ajrcos/2019/v3i230090

This paper develops an artificial intelligence model based on fuzzy logic for the control of corruption in the Nigerian Police Force. The researchers employed a fuzzy rule-based inference system methodology with four input variables: funding, logistics and operational equipment (FLOE); condition of service, remuneration and motivation (CSRM); recruitment, training and promotion (RTP); and confidence and support by the community (CSC). These were used to determine the corruption severity level. A single output variable, corruption severity level (CSL), was adopted for the model. The simulation was carried out using MATLAB 2015 for Windows. The results revealed the conditions under which Nigeria can have a Police Force whose corruption severity level is low: (i) condition of service, remuneration and motivation has to be excellent; (ii) funding, logistics and operational equipment has to be adequate; (iii) recruitment, training and promotion has to be excellent; and (iv) confidence and support by the community has to be very high.
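The mechanics of this kind of fuzzy rule-based inference can be sketched with two of the four inputs. The membership function shapes, the 0-10 scales, the two rules, and the singleton defuzzification below are toy assumptions for illustration, not the authors' MATLAB model.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def corruption_severity(csrm, floe):
    """Toy Mamdani-style inference with two inputs on a 0-10 scale.
    Rule 1: IF CSRM is poor OR FLOE is inadequate THEN severity is high.
    Rule 2: IF CSRM is excellent AND FLOE is adequate THEN severity is low."""
    poor_csrm = tri(csrm, -1, 0, 5)          # hypothetical membership shapes
    excellent_csrm = tri(csrm, 5, 10, 11)
    inadequate_floe = tri(floe, -1, 0, 5)
    adequate_floe = tri(floe, 5, 10, 11)
    high = max(poor_csrm, inadequate_floe)   # fuzzy OR  -> max
    low = min(excellent_csrm, adequate_floe) # fuzzy AND -> min
    # weighted-average defuzzification over singleton outputs 9 (high), 1 (low)
    if high + low == 0:
        return 5.0
    return (9 * high + 1 * low) / (high + low)
```

The full model would use all four inputs, a richer rule base, and MATLAB's Mamdani machinery, but the pattern of fuzzification, rule firing, aggregation and defuzzification is the same.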

Open Access Original Research Article

Performance Assessment of Principal Component Analysis and Kernel Principal Component Analysis Using TOAM Database

Madandola, Tajudeen Niyi, Gbolagade, Kazeem Alagbe, Yusuf-Asaju Ayisat Wuraola

Asian Journal of Research in Computer Science, Volume 3, Issue 2, Page 1-10
DOI: 10.9734/ajrcos/2019/v3i230091

Face recognition algorithms can be classified into appearance-based (linear and non-linear) and model-based algorithms. Principal Component Analysis (PCA) is an example of a linear appearance-based method that performs linear dimension reduction, while Kernel Principal Component Analysis (KPCA) is an example of a non-linear appearance-based method. The study focuses on the performance assessment of PCA and KPCA face recognition techniques. The assessment is carried out based on computational time (measured as testing time) and recognition accuracy on a purpose-built database identified as the TOAM database. The database was created specifically for this research and contains 120 frontal face images of 40 persons, with 3 images of each individual under different lighting, facial expressions, occlusions, environments and times. The findings reveal an average testing time of 1.5475 seconds for PCA and 67.0929 seconds for KPCA, indicating a longer computational time for KPCA than for PCA. They also reveal that PCA achieves 72.5% recognition accuracy while KPCA achieves 80.0%, indicating that KPCA outperforms PCA in terms of recognition accuracy.
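The linear dimension reduction that PCA performs can be sketched in a few lines for two-dimensional data, where the 2x2 covariance matrix has a closed-form eigendecomposition. This is an illustrative sketch unrelated to the TOAM experiments; real face recognition applies the same idea to very high-dimensional image vectors.

```python
import math

def pca_first_component(points):
    """First principal component (unit vector) of 2-D data, via the
    closed-form eigendecomposition of the 2x2 covariance matrix."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # largest eigenvalue of [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    lam = tr / 2 + math.sqrt(tr ** 2 / 4 - det)
    # corresponding eigenvector; handle axis-aligned data where sxy == 0
    vx, vy = (sxy, lam - sxx) if sxy else (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    norm = math.hypot(vx, vy)
    return vx / norm, vy / norm
```

Projecting each point onto this vector gives its one-dimensional PCA representation; KPCA performs the analogous computation after mapping the data into a feature space via a kernel, which is where its extra computational cost comes from.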