https://journalajrcos.com/index.php/AJRCOS/issue/feed
Asian Journal of Research in Computer Science
2024-03-18T12:46:33+00:00
contact@journalajrcos.com
Open Journal Systems
<p style="text-align: justify;"><strong>Asian Journal of Research in Computer Science (ISSN: 2581-8260)</strong> aims to publish high-quality papers in all areas of computer science, information technology, and related subjects. By not excluding papers on the basis of novelty, the journal facilitates research and publishes any paper that is technically correct and scientifically motivated. It also encourages the submission of useful reports of negative results. This is a quality-controlled, open peer-reviewed, open-access international journal.</p>
https://journalajrcos.com/index.php/AJRCOS/article/view/436
Empirical Study of Agile Software Development Methodologies: A Comparative Analysis
2024-02-27T12:00:14+00:00
Samuel Gbli Tetteh (samuel.gtetteh@gmail.com)
<p>The comparative analysis of software development models, collectively framed by the Software Development Life Cycle (SDLC), is an everyday discourse among software engineers, reflecting the dynamic nature of the field. Various software development methodologies, such as prototyping, spiral development, and Rapid Application Development, have been established and recognised for their unique approaches to software creation. In recent years, Agile methodologies have emerged as prominent contenders, offering flexibility, adaptability, and efficiency in delivering high-quality software within designated timeframes. Among the array of Agile methodologies, including Dynamic Systems Development Method (DSDM), Scrum, Feature-Driven Development (FDD), Extreme Programming (XP), Kanban, Adaptive Software Development (ASD), Mendix, Lean, and Crystal, several have garnered significant attention in the software development community.
Specifically, ASD, DSDM, XP, FDD, Kanban, and Scrum have emerged as prominent choices among Agile methods utilised by software developers. This study conducts a comprehensive examination and comparison of these six Agile software models, aiming to elucidate their functionalities, strengths, and weaknesses. The findings of this comparative analysis seek to provide valuable insights for software industries, enabling informed decision-making when selecting software development models for upcoming projects. By understanding each Agile methodology's nuanced differences and capabilities, software developers and industry stakeholders can align their project requirements with the most suitable software development approach, ultimately optimising project outcomes and software quality.</p>
2024-02-27T00:00:00+00:00
Copyright (c) 2024 Author(s). The licensee is the journal publisher. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
https://journalajrcos.com/index.php/AJRCOS/article/view/448
Student Academic Record Systems and their Security Issues
2024-03-15T11:41:34+00:00
Yakubu Abdul-Wahab Nawusu (nabdul-wahab@tatu.edu.gh), Abukari Abdul Aziz Danaa, Diyawu Mumin, Shiraz Ismail
<p>Student academic records are an invaluable resource for universities and colleges worldwide, yet their safekeeping is still a major concern for many of these institutions. The security measures currently applied to protect academic record systems are deemed unsatisfactory. This paper surveys the security issues relating to students’ academic records, delving into existing student academic record systems, cybersecurity challenges, and measures to deal with the security concerns.
The paper proposes a practical security model, the PIEM model, to address these security issues, aimed at empowering academic institutions to build and maintain strongly protected academic record systems for capturing, storing, and retrieving student academic data. The work underscores the urgent need for educational institutions to exert extra effort to maintain students’ academic records securely while facilitating easy access and use by the various educational actors.</p>
2024-03-15T00:00:00+00:00
Copyright (c) 2024 Author(s). The licensee is the journal publisher. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
https://journalajrcos.com/index.php/AJRCOS/article/view/446
The Role of ChatGPT in Exploring Science and Its Applications: A Case Study at the “Citizen” Program
2024-03-12T11:04:34+00:00
Tej Kumar Nepal (tejkumarnepal97@gmail.com)
<p>From the citizen science perspective, this study investigates the multidimensional role that ChatGPT, an innovative language model developed by OpenAI, plays in the field. Using its capacity to generate text that is remarkably similar to human writing, ChatGPT shows a great deal of promise in enabling and improving many facets of collaborative scientific activities. While the paper illustrates these potential uses, it also discusses the limitations and difficulties of integrating ChatGPT into citizen science programs.</p>
2024-03-12T00:00:00+00:00
Copyright (c) 2024 Author(s). The licensee is the journal publisher.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
https://journalajrcos.com/index.php/AJRCOS/article/view/433
Computing the Minimum Polynomial, the Function and the Drazin Inverse of a Matrix with Matlab
2024-02-23T10:54:58+00:00
Nikolaos Halidias (nick@aegean.gr)
<p>Aims/Objectives: In this note we discuss the known (but not widely known) problem of finding the minimum polynomial and a function of a matrix, providing the simplest proofs for undergraduate students. We explain with fairly simple arguments how to compute the minimum polynomial of a matrix, giving also the Matlab code for its symbolic computation. Next we describe the (symbolic) computation of a function of a matrix via the Hermite interpolation method, which seems to be the simplest method for undergraduate students. Finally we see how to compute the Drazin inverse given the nth power of a matrix A.</p>
2024-02-23T00:00:00+00:00
Copyright (c) 2024 Author(s). The licensee is the journal publisher. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
https://journalajrcos.com/index.php/AJRCOS/article/view/434
A Systematic Performance Review of Security Methods for the Cyberworld
2024-02-23T12:51:54+00:00
Samuel Gbli Tetteh (samuel.gtetteh@gmail.com)
<p>Governments and organisations globally increasingly recognise the importance of cybersecurity as a critical measure against cyber threats in our highly interconnected society. Establishing robust security measures has become paramount in safeguarding sensitive information and infrastructure.
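The minimum-polynomial computation described in the Matlab note above can be sketched in dependency-free Python. This is an illustrative stand-in for the paper's symbolic Matlab code, not a reproduction of it: using exact rational arithmetic, it looks for the first power of A that is an exact linear combination of the lower powers.

```python
from fractions import Fraction

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def _solve_exact(cols, target):
    # Try to write `target` as an exact linear combination of `cols`;
    # returns the coefficient list, or None if no exact solution exists.
    m, k = len(target), len(cols)
    aug = [[Fraction(cols[j][i]) for j in range(k)] + [Fraction(target[i])]
           for i in range(m)]
    row, pivots = 0, []
    for col in range(k):
        piv = next((r for r in range(row, m) if aug[r][col] != 0), None)
        if piv is None:
            continue
        aug[row], aug[piv] = aug[piv], aug[row]
        d = aug[row][col]
        aug[row] = [x / d for x in aug[row]]
        for r in range(m):
            if r != row and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[row])]
        pivots.append(col)
        row += 1
    if any(aug[r][k] != 0 for r in range(row, m)):
        return None  # inconsistent: target is not in the span of cols
    coeffs = [Fraction(0)] * k
    for r, c in enumerate(pivots):
        coeffs[c] = aug[r][k]
    return coeffs

def minimal_polynomial(A):
    """Monic minimal polynomial of A as coefficients [c0, c1, ..., 1]."""
    n = len(A)
    flat = lambda M: [M[i][j] for i in range(n) for j in range(n)]
    powers = [[[Fraction(int(i == j)) for j in range(n)] for i in range(n)]]
    cur = [[Fraction(x) for x in row] for row in A]
    while True:  # terminates by Cayley-Hamilton at degree <= n
        sol = _solve_exact([flat(P) for P in powers], flat(cur))
        if sol is not None:
            return [-c for c in sol] + [Fraction(1)]
        powers.append(cur)
        cur = mat_mul(cur, A)
```

For example, `minimal_polynomial([[2, 0], [0, 2]])` yields the coefficients of x − 2, and a nilpotent matrix such as `[[0, 1], [0, 0]]` yields x².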
Biometric systems have emerged as critical components in various sectors, including industry, civilian applications, and rhetoric, to enhance security measures. This paper provides an overview of diverse security approaches in the cyber world and examines several research works, highlighting their strengths and weaknesses in implementation. We critically analyse existing methodologies and address potential shortcomings, aiming to improve the effectiveness of security measures. Additionally, we explore emerging trends and novel research directions in the field of biometric and rhetorical security. The study delves into contemporary biometric toolkits, examining their functionalities and applications across domains. Furthermore, we discuss digital ornamental models, evaluating their efficacy in enhancing cybersecurity measures. Through comparative analysis, we identify key differences and areas for improvement in existing security frameworks. In conclusion, this paper proposes a generic computer security model tailored to address the evolving challenges of cybersecurity. We highlight potential applications of this model in society, emphasising the importance of proactive measures to mitigate cyber threats effectively. Through comprehensive analysis and innovative approaches, we aim to contribute to advancing cybersecurity practices in contemporary society.</p>
2024-02-23T00:00:00+00:00
Copyright (c) 2024 Author(s). The licensee is the journal publisher.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
https://journalajrcos.com/index.php/AJRCOS/article/view/435
Maximizing Penetration Testing Success with Effective Reconnaissance Techniques Using ChatGPT
2024-02-24T11:20:35+00:00
Sheetal Temara (stemara22276@ucumberlands.edu)
<p><strong>Background/Objective:</strong> The study investigates the integration of ChatGPT, a generative pre-trained transformer language model, into the reconnaissance phase of penetration testing. The research aims to enhance the efficiency and depth of information gathering during critical security assessments, offering potential improvements over traditional approaches.</p> <p><strong>Research Problem:</strong> The study addresses the challenge of optimizing the reconnaissance phase in penetration testing. It explores the capabilities of ChatGPT in extracting valuable data, such as various aspects of the digital footprint or infrastructure of a system or an organization. The scope of the research lies in demonstrating how ChatGPT can contribute to the planning phase of penetration testing, guiding the selection of tactics, tools, and techniques for identifying and mitigating potential risks, thereby helping to secure the Internet-accessible assets of a system or an organization.</p> <p><strong>Methodology:</strong> The research adopts a case study methodology to assess the effectiveness of ChatGPT in reconnaissance. Tailored questions are formulated to extract specific information relevant to penetration testing.
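To make the "tailored questions" idea concrete, a hypothetical prompt-engineering helper (the template and names below are ours, not the paper's) might generate one narrowly scoped question per reconnaissance artifact:

```python
# Hypothetical template; the paper's actual prompts are not reproduced here.
RECON_PROMPT = (
    "Acting as a security analyst with written authorization to assess "
    "{target}, list publicly known {artifact} for that organization and "
    "note the kind of open source each item would come from."
)

def build_recon_prompts(target, artifacts):
    # One narrowly scoped question per artifact keeps answers usable,
    # which is the point the methodology makes about prompt construction.
    return [RECON_PROMPT.format(target=target, artifact=a) for a in artifacts]
```

For instance, `build_recon_prompts("example.org", ["domain names", "IP address ranges"])` produces two separately scoped questions rather than one broad one.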
The study highlights the importance of prompt engineering, emphasizing the need for carefully constructed questions to ensure usable results.</p> <p><strong>Results:</strong> The research showcases the ability of ChatGPT to provide diverse and insightful reconnaissance information. The extracted data include IP address ranges, domain names, vendor technologies, SSL/TLS ciphers, and network protocols. This information gathering improves the efficiency of the reconnaissance phase, aiding penetration testers in planning subsequent phases of the assessment.</p> <p><strong>Discussion:</strong> The study extends to the broader field of cybersecurity, where artificial intelligence language models can play a valuable role in enhancing the success of reconnaissance in penetration testing. The research suggests that integrating ChatGPT into penetration testing can bring about positive changes in the efficiency and depth of information obtained during reconnaissance.</p> <p><strong>Conclusion:</strong> The results of the study indicate that incorporating ChatGPT in the reconnaissance phase significantly benefits penetration testers by offering valuable insights and streamlining subsequent assessment planning. The results affirm ChatGPT as a pivotal tool in maximizing success in penetration testing, contributing to ongoing advancements in cybersecurity practices.</p>
2024-02-24T00:00:00+00:00
Copyright (c) 2024 Author(s). The licensee is the journal publisher.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
https://journalajrcos.com/index.php/AJRCOS/article/view/437
Smart and Sustainable City Experience on Smart Campus: A Case Study from Hassan Ist University
2024-03-01T10:49:31+00:00
Doha Malki (doha.malki@uhp.ac.ma)
<p>The desire to meet the demands of regional competition requires cities to adopt new patterns of urban management. In this sense, the global trend is to apply smart city standards by investing in the digital transformation of the various public services in cities. In recent years, many Moroccan cities have made significant efforts to improve their attractiveness, upgrading and enhancing their territories through major infrastructure projects and the digitalization of urban administrations.</p> <p>This study examines the feasibility of a smart campus as a small-scale smart city. It relies on a quantitative survey administered via a questionnaire (in person and online), supplemented by a qualitative survey based on focus group methodology. This gave us a broader perspective on the realities of campus life. The study also showed that the university must make many further efforts, alongside developing its plan, to advance Hassan I University to the ranks of smart universities.</p> <p>This article is part of a study proposing a strategy for developing the campus of Hassan I University of Settat into a smart one.</p>
2024-03-01T00:00:00+00:00
Copyright (c) 2024 Author(s). The licensee is the journal publisher.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
https://journalajrcos.com/index.php/AJRCOS/article/view/438
ReT-ELBa: A Novel Algorithm for Efficient Load Balancing in Cloud Computing
2024-03-02T10:35:14+00:00
Yusif Rashid, Callistus Ireneous Nakpih (cnakpih@cktutas.edu.gh)
<p>This paper presents a novel load-balancing algorithm for the cloud environment, which we call the Response Time Efficient Load Balancer (ReT-ELBa). Load balancing involves a dynamic and even distribution of workloads among processors or Virtual Machines to achieve better resource utilisation. The study gives an insight into the design, implementation and evaluation of an enhanced load-balancing algorithm that allocates tasks across Virtual Machines on the cloud, minimising Response Time and increasing resource utilisation. ReT-ELBa distributes tasks based on their sizes, the requirements for their execution, and the state of the Virtual Machines. The Cloud Analyst simulation tool was employed to evaluate various cases for ReT-ELBa in comparison with the Throttled and Round Robin algorithms. The study revealed that ReT-ELBa outperformed the two algorithms in terms of Response Time. ReT-ELBa also outperformed the two algorithms in terms of Data Centre Processing Time for all simulated cases except one (Case 3), which recorded a small difference of 0.07 ms. Although Response Time was the primary focus of this research, the Data Centre Processing Time of the algorithms was also analysed to present a more elaborate picture of their performance.</p>
2024-03-02T00:00:00+00:00
Copyright (c) 2024 Author(s). The licensee is the journal publisher.
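ReT-ELBa itself is specified in the paper above; as a rough illustration of the general idea of response-time-aware allocation (not the authors' algorithm), a greedy scheduler can send each task to the virtual machine with the smallest estimated finish time. Task sizes and VM speeds here are hypothetical inputs:

```python
def assign_tasks(task_sizes, vm_speeds):
    """Greedy sketch: each task goes to the VM with the smallest estimated
    finish time (current backlog plus this task's runtime on that VM)."""
    backlog = [0.0] * len(vm_speeds)   # seconds of queued work per VM
    assignment = []
    for size in task_sizes:
        # Evaluate the candidate finish time on every VM and pick the best;
        # ties go to the lowest-indexed VM.
        vm = min(range(len(vm_speeds)),
                 key=lambda v: backlog[v] + size / vm_speeds[v])
        backlog[vm] += size / vm_speeds[vm]
        assignment.append(vm)
    return assignment
```

With two equal-speed VMs and tasks of size 4, 2, 2, the first task goes to VM 0 and the smaller tasks fill up VM 1 until the backlogs even out; a faster VM attracts proportionally more work.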
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
https://journalajrcos.com/index.php/AJRCOS/article/view/439
Influence of Variation in Tie Reinforcement Diameter on the Ductility of Reinforced Concrete Columns
2024-03-05T11:02:14+00:00
Samsul Abdul Rahman Sidik Hasibuan (samsulrahman@staff.uma.ac.id)
<p>Reinforced concrete columns are crucial structural elements in ensuring the strength and stability of buildings. Column ductility, the ability to absorb energy and undergo deformation before failure, is a primary concern in structural engineering, especially under extreme external loading such as earthquakes. This study evaluates the influence of variation in tie reinforcement diameter on the ductility of reinforced concrete columns using Xtract software. The research method involves creating column structure models with specified dimensions and specifications, followed by the gradual application of axial loads to each model. Three models were created, with tie reinforcement diameters of 10 mm, 12 mm, and 14 mm. Structural analysis was conducted to examine each structure's response to the applied axial and moment loads, including evaluation of stresses, deformations, and column capacities. The analysis shows differences in ductility levels among the models. The model with a 10 mm tie reinforcement diameter achieves higher ductility at low axial loads but fails to meet the requirements of the SNI 1726:2019 standard at higher axial loads. The models with tie reinforcement diameters of 12 mm and 14 mm exhibit a similar pattern, with good ductility at low axial loads but failure to meet the requirements at higher axial loads.
In conclusion, variations in tie reinforcement diameter affect the ductility of reinforced concrete columns. To ensure full ductility under various axial load conditions, adjustments to the design or specifications of the reinforced concrete column structure are necessary. This research contributes to understanding the factors influencing the ductility of reinforced concrete columns and can serve as a basis for the development of more effective design methods in the future.</p>
2024-03-05T00:00:00+00:00
Copyright (c) 2024 Author(s). The licensee is the journal publisher. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
https://journalajrcos.com/index.php/AJRCOS/article/view/440
Detecting Change in a Volatile Curve United States Stock Market (US SM) with the Use of Automated Decomposition for Time Series Components
2024-03-06T07:14:37+00:00
Ajare Emmanuel Oloruntoba (ajareoloruntoba@gmail.com), Adefabi Adekunle, Olorunpomi Temitope Olubunmi
<p>The main aim of this investigation is to compare the manual identification of time series components with two types of automated decomposition, automated BFTSC (Break For Time Series Components) and automated GFTSC (Group For Time Series Components), in detecting change in the volatile curve of the United States stock market (US SM) and in identifying the time series components present in the seasonal US stock market data. The data are monthly observations from January 2001 until December 2018, a total of 18 years, and are available as secondary data at the DataBank of the Universiti Utara Malaysia Library. The weaknesses of BFAST were corrected by the extension of BFAST to BFTSC and GFTSC.
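BFTSC and GFTSC extend BFAST and are beyond a short sketch, but the underlying idea of splitting a series into trend, seasonal, and irregular components can be illustrated with a classical additive decomposition. This is a simplified stand-in (odd periods only), not the paper's method:

```python
def decompose(series, period):
    """Classical additive decomposition into trend, seasonal and irregular
    components, using a centered moving average of length `period`."""
    assert period % 2 == 1, "this sketch handles odd periods only"
    n, half = len(series), period // 2
    # Trend: centered moving average, undefined at the edges.
    trend = [None] * n
    for i in range(half, n - half):
        trend[i] = sum(series[i - half:i + half + 1]) / period
    # Seasonal: average detrended value per phase, centered to sum to zero.
    phases = [[] for _ in range(period)]
    for i in range(half, n - half):
        phases[i % period].append(series[i] - trend[i])
    means = [sum(p) / len(p) for p in phases]
    overall = sum(means) / period
    seasonal = [m - overall for m in means]
    # Irregular: whatever trend and seasonal leave unexplained.
    irregular = [series[i] - trend[i] - seasonal[i % period]
                 for i in range(half, n - half)]
    return trend, seasonal, irregular
```

On a synthetic series with a linear trend and a period-3 seasonal pattern, the recovered seasonal effects match the pattern and the irregular component is zero, which is the sanity check one would expect before trusting any change-detection step built on top.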
Both were created to capture the cyclical and irregular components that are not captured by the BFAST technique, and both were included in the methodology of this study. BFTSC and GFTSC were considered to provide a combined picture of all four time series components, while GFTSC had the additional advantage of automatically providing equations for the components. Evaluation using simulation data and empirical data confirmed the accuracy of BFTSC and GFTSC on less volatile, linear-trend data. They are effective and better than BFAST because they were able to identify 100% of the monthly data containing the four basic time series components, and both techniques detect 99% of all the components in linear-trend time series data.</p>
2024-03-06T00:00:00+00:00
Copyright (c) 2024 Author(s). The licensee is the journal publisher. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
https://journalajrcos.com/index.php/AJRCOS/article/view/441
Data Governance in AI-Enabled Healthcare Systems: A Case of the Project Nightingale
2024-03-08T06:53:03+00:00
Aisha Temitope Arigbabu, Oluwaseun Oladeji Olaniyi (oolaniyi00983@ucumberlands.edu), Chinasa Susan Adigwe, Olubukola Omolara Adebiyi, Samson Abidemi Ajayi
<p>The study investigates data governance challenges within AI-enabled healthcare systems, focusing on Project Nightingale as a case study to elucidate the complexities of balancing technological advancement with patient privacy and trust. Utilizing a survey methodology, data were collected from 843 healthcare service users employing a structured questionnaire designed to measure perceptions of AI in healthcare, trust in healthcare providers, concerns about data privacy, and the impact of regulatory frameworks on the adoption of AI technologies.
The reliability of the survey instrument was confirmed with a Cronbach's Alpha of 0.81, indicating high internal consistency. The multiple regression analysis revealed significant findings: a positive relationship between awareness of technological projects and trust in healthcare providers, countered by a negative impact of privacy concerns on trust. Additionally, familiarity with and perceived effectiveness of regulatory frameworks were positively correlated with trust in data, while perceptions of regulatory constraints and data governance issues were identified as significant barriers to the effective adoption of AI technologies in healthcare. The study highlights the critical need for enhanced transparency, public awareness, and robust data governance frameworks to navigate the ethical and privacy concerns associated with AI in healthcare. The study recommends adopting flexible, principle-based regulatory approaches and fostering multi-stakeholder collaboration to ensure the ethical deployment of AI technologies that prioritize patient welfare and trust.</p>
2024-03-08T00:00:00+00:00
Copyright (c) 2024 Author(s). The licensee is the journal publisher.
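The internal-consistency check reported above is the standard Cronbach's alpha; a small helper shows the formula (the scores below are made up, and this is not code from the study):

```python
def cronbach_alpha(items):
    """Cronbach's alpha: `items` is one list of respondent scores per
    questionnaire item (all lists the same length)."""
    k = len(items)          # number of items
    n = len(items[0])       # number of respondents
    def var(xs):            # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    # Each respondent's total score across all items.
    totals = [sum(item[r] for item in items) for r in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))
```

Two perfectly parallel items give alpha = 1.0, and alpha falls as item scores diverge; values around 0.8, as in the study, are conventionally read as high internal consistency.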
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
https://journalajrcos.com/index.php/AJRCOS/article/view/442
Evolving Access Control Paradigms: A Comprehensive Multi-Dimensional Analysis of Security Risks and System Assurance in Cyber Engineering
2024-03-08T08:11:17+00:00
Nanyeneke Ravana Mayeke, Aisha Temitope Arigbabu, Oluwaseun Oladeji Olaniyi (oolaniyi00983@ucumberlands.edu), Olalekan Jamiu Okunleye, Chinasa Susan Adigwe
<p>This study evaluates the effectiveness of the traditional access control paradigms of Role-Based Access Control (RBAC), Policy-Based Access Control (PBAC), and Attribute-Based Access Control (ABAC) against ransomware threats in critical infrastructures, and examines the potential benefits of integrating machine learning (ML) and artificial intelligence (AI) technologies. Utilizing a quantitative research design, the investigation collected data from 383 cybersecurity professionals across various sectors through a systematically structured questionnaire. The questionnaire, which demonstrated excellent internal consistency with a reliability score of 0.81, featured Likert scale questions aimed at assessing perceptions and experiences concerning the efficacy of different access control models in combating ransomware. Employing multiple regression analysis, the study explored the relationship between access control paradigms and their capability to mitigate ransomware risks, while also considering the impact of cybersecurity awareness among employees. The findings indicate that traditional access control methods are less effective against the dynamic nature of ransomware attacks, primarily due to their static configurations.
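The "static configuration" point is easy to see in a toy RBAC check, where every role-to-permission mapping must be enumerated ahead of time and nothing adapts at run time (the roles and actions below are hypothetical):

```python
# Minimal RBAC sketch: permissions are fixed per role, which is the
# rigidity the study contrasts with adaptive ML/AI-driven control.
ROLE_PERMISSIONS = {
    "operator": {"read_telemetry"},
    "engineer": {"read_telemetry", "update_config"},
    "admin":    {"read_telemetry", "update_config", "manage_users"},
}

def is_allowed(role, action):
    # Unknown roles get no permissions; known roles get exactly the
    # statically configured set, regardless of context or behaviour.
    return action in ROLE_PERMISSIONS.get(role, set())
```

A ransomware-compromised "engineer" account keeps its `update_config` right no matter how anomalous its behaviour becomes, which is precisely the gap an ML/AI layer on top of such a model is meant to close.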
In contrast, the integration of ML and AI into access control systems significantly enhances their adaptability and effectiveness in detecting and preventing ransomware incidents. Additionally, the study highlights the crucial role of cybersecurity awareness and training among employees in fortifying critical infrastructures against cyber threats. The adoption of a layered security strategy, incorporating advanced technological solutions and comprehensive cybersecurity practices, was found to markedly improve the resilience of critical infrastructures against ransomware attacks. Based on these insights, the study recommends embracing ML and AI technologies in access control systems, prioritizing cybersecurity training for all organizational members, and implementing a multifaceted security approach to better defend against the evolving threat of ransomware. These strategies are essential for safeguarding the continuity and reliability of essential services in an increasingly digital and interconnected world.</p>
2024-03-08T00:00:00+00:00
Copyright (c) 2024 Author(s). The licensee is the journal publisher. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
https://journalajrcos.com/index.php/AJRCOS/article/view/443
First Fit Algorithm: A Graph Coloring Approach to Conflict-Free University Course Timetabling
2024-03-09T11:17:33+00:00
Stella Kehinde Ogunkan, Peter Olalekan Idowu (poidowu@pgschool.lautech.edu.ng), Elijah Olusayo Omidiora, Christopher Akin Oyeleye
<p><strong>Aims: </strong>Tackling scheduling problems with an optimal graph coloring algorithm has consistently posed significant difficulties.
The university scheduling problem can be expressed as a graph coloring problem, where courses are depicted as vertices and the connections between courses that share students or teachers are represented as edges. The task is then to color the vertices with the minimum number of colors. To accomplish this, this paper presents a graph coloring approach to conflict-free university course timetabling using the first-fit algorithm.</p> <p><strong>Methodology:</strong> The conflict graph is partitioned into a set of independent color classes to be assigned time slots and transformed into a conflict-free timetable. The Ladoke Akintola University of Technology (LAUTECH) course timetabling data were adopted. Venues are allocated to the allotted time slots using the first-fit packing algorithm. The proposed model is implemented in the Python programming language, with courses represented as vertices and their conflicts as edges. The course conflict graph was created from the acquired dataset using a vertex-edge relationship diagram. The implemented model is evaluated in terms of the Halstead complexity metrics: Program Volume (PV), Program Length (PL), Program Effort (PE), Program Difficulty (PD) and Execution Time (ET). The PV, PL, PE, PD and ET values obtained for the implemented model are 18.45 kbits, 0.51, 1037684, 1.97 and 20.45 s, respectively.</p> <p><strong>Conclusion:</strong> The proposed model shows a significant improvement over existing models by producing a conflict-free course timetable with better evaluation results. This work will be highly useful in solving various scheduling, optimization and NP-hard computational problems.</p>
2024-03-09T00:00:00+00:00
Copyright (c) 2024 Author(s). The licensee is the journal publisher.
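The greedy idea behind the timetabling approach above, giving each course the smallest time slot not used by any conflicting course, can be sketched as a generic first-fit coloring (an illustration, not the authors' implementation):

```python
def first_fit_coloring(conflicts):
    """`conflicts` maps each course to the courses it clashes with
    (shared students or teachers); returns course -> time-slot index.
    Greedy first-fit: correct (no clash shares a slot) but not optimal."""
    color = {}
    for course in conflicts:          # visit courses in insertion order
        used = {color[nb] for nb in conflicts[course] if nb in color}
        slot = 0
        while slot in used:           # first slot no neighbour occupies
            slot += 1
        color[course] = slot
    return color
```

A path of three courses needs only two slots, while three mutually conflicting courses force three: the number of slots grows with the size of the largest clique of conflicts, which is exactly why minimizing colors is the hard part of the problem.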
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
https://journalajrcos.com/index.php/AJRCOS/article/view/444
Digital Collaborative Tools, Strategic Communication, and Social Capital: Unveiling the Impact of Digital Transformation on Organizational Dynamics
2024-03-11T09:14:33+00:00
Oluwaseun Oladeji Olaniyi (oolaniyi00983@ucumberlands.edu), Jennifer Chinelo Ugonnia, Folashade Gloria Olaniyi, Aisha Temitope Arigbabu, Chinasa Susan Adigwe
<p>The rapid advance of digitalization has significantly changed how organizations operate, with collaborative tools like Asana, Trello, and Slack becoming integral to fostering internal communication, collaboration, and social capital. Despite their widespread adoption, the comprehensive impact of these tools on organizational culture, employee engagement, and social capital remains underexplored. This study aims to bridge this gap by examining the potential of digital collaborative tools to enhance organizational social capital and by exploring the role of strategic communication and digital inclusivity in this process. Utilizing a quantitative research approach, data were collected from 557 professionals through a survey distributed via LinkedIn. The questionnaire, designed with Likert scale closed-ended questions, focused on the utilization of collaborative tools and their perceived impact on communication, collaboration, and social capital within organizations. Multiple regression analysis was employed to test four hypotheses related to the effects of digital tool usage on organizational dynamics. The findings reveal that digital tools significantly enhance communication and collaboration within organizations, with the frequency and purpose of tool usage being key factors.
Moreover, digital tool usage for networking and the quality of interactions facilitated by these tools were found to positively impact the development of social capital. However, challenges in integrating digital tools were shown to hinder this development, though technical support and training could mitigate these effects. Interestingly, the introduction of a 'social score' feature within digital tools was perceived positively, indicating its potential to further engage users and enrich organizational social capital. This study underscores the importance of a strategic approach to implementing digital tools, emphasizing their role beyond operational efficiency to include the enhancement of social interactions and organizational culture. Recommendations include fostering digital tool usage for networking, providing adequate technical support and training, and exploring the integration of 'social score' features to incentivize engagement and collaboration.</p>
2024-03-11T00:00:00+00:00
Copyright (c) 2024 Author(s). The licensee is the journal publisher. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
https://journalajrcos.com/index.php/AJRCOS/article/view/445
Exploring the Role of Dimensionality Reduction in Enhancing Machine Learning Algorithm Performance
2024-03-11T10:56:38+00:00
John Kamwele Mutinda (jkmutinda@aimsammi.org), Amos Kipkorir Langat
<p>In this study, we delve into the pivotal role of dimensionality reduction techniques in influencing the performance of machine learning algorithms for heart disease prediction.
Through a comprehensive exploration of a dataset encompassing crucial features such as age, sex, chest pain type, blood pressure, cholesterol levels, and more, we investigate the impact of different techniques, namely Principal Component Analysis (PCA), Kernel Principal Component Analysis (KPCA), and Linear Discriminant Analysis (LDA), on classification algorithm effectiveness. The classification algorithms considered were Logistic Regression, Support Vector Machine (SVM), k-Nearest Neighbors (KNN), Naive Bayes, and Deep Neural Network (DNN). We used K-fold cross-validation to train and validate the classification algorithms. The performance of these algorithms was assessed using a range of key metrics, including accuracy, F1-score, precision, recall, and specificity. The results reveal that Linear Discriminant Analysis consistently emerged as a potent method, remarkably enhancing algorithm performance across all assessed metrics. We also identified Naive Bayes and Logistic Regression as standout algorithms, demonstrating notable resilience and reliability across diverse scenarios. These findings collectively shed light on the intricate interplay between dimension reduction techniques and algorithm selection, offering critical insights for crafting more accurate and robust strategies in the prediction of heart disease.</p>2024-03-11T00:00:00+00:00
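The evaluation setup this abstract describes, i.e. dimension reduction followed by classification under K-fold cross-validation, can be sketched as follows. This is a minimal illustration using scikit-learn and synthetic stand-in data; it is not the study's heart-disease dataset or code, and the feature counts are assumptions chosen only to mimic a small tabular clinical problem.

```python
# Compare a classifier with and without LDA dimension reduction under
# stratified K-fold cross-validation (synthetic stand-in data).
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic binary-classification data standing in for tabular clinical features.
X, y = make_classification(n_samples=300, n_features=13, n_informative=6,
                           random_state=0)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

baseline = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
with_lda = make_pipeline(StandardScaler(),
                         LinearDiscriminantAnalysis(n_components=1),
                         LogisticRegression(max_iter=1000))

acc_base = cross_val_score(baseline, X, y, cv=cv, scoring="accuracy").mean()
acc_lda = cross_val_score(with_lda, X, y, cv=cv, scoring="accuracy").mean()
print(f"baseline accuracy: {acc_base:.3f}")
print(f"with LDA:          {acc_lda:.3f}")
```

Note that LDA projects onto at most (number of classes − 1) dimensions, so a binary problem yields a single discriminant axis; the same pipeline pattern applies with PCA or KPCA swapped into the reduction step.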
https://journalajrcos.com/index.php/AJRCOS/article/view/447Ballots and Padlocks: Building Digital Trust and Security in Democracy through Information Governance Strategies and Blockchain Technologies2024-03-15T05:28:27+00:00Oluwaseun Oladeji Olaniyi oluwaseun.o.olaniyi@gmail.com<p>This research explores the integration of Information Governance (IG) strategies and Blockchain Technologies (BT) in enhancing digital trust and security within democratic processes. Amid concerns about the integrity and vulnerability of electoral systems in the digital era, this study examines how these technologies can collectively safeguard democracy. The analysis utilized Partial Least Squares Structural Equation Modeling (PLS-SEM), bootstrapping analysis for mediation effects, and the Fornell-Larcker Criterion for discriminant validity, and was conducted on data from 934 participants involved in the electoral process. Key findings demonstrate that IG strategies significantly impact digital trust, indicating the importance of robust data management, legal compliance, and privacy measures for public confidence in electoral systems. Blockchain Technologies positively affect the security of democratic processes due to their decentralized and immutable characteristics. Furthermore, digital trust is identified as a critical mediator between IG strategies, BT, and the security of democratic processes, highlighting the importance of trust in the effectiveness of these technologies.
Based on the insights gained, three actionable recommendations are proposed: electoral authorities should adopt comprehensive IG frameworks to enhance data integrity and transparency; pilot blockchain projects should be expanded to refine and understand the broader implementation implications for election security; and efforts should be increased to foster digital literacy and trust among the electorate, emphasizing the role of these technologies in securing electoral integrity.</p>2024-03-15T00:00:00+00:00https://journalajrcos.com/index.php/AJRCOS/article/view/449Exploring a Pragmatic and Exponential Advancement in the Use of Machine Learning and Artificial Intelligence Systems2024-03-18T09:30:43+00:00Chinedu Chukwuemeka Mazi Gregory Anichebe Ogechi Ifeoma Anya Andrew Chinonso Nwanakwaugwu <p>With the advent of the Internet of Things (IoT) and its sensors and connected devices, data generation is growing at an unprecedented pace. However, energy consumption is also on the rise and is still largely drawn from traditional sources such as fossil fuels. This is not sustainable: it could harm the environment and is expensive, e.g., when powering sensor-based irrigation systems. In this context, using data as an energy source for future machines could be a promising solution to mitigate the energy crisis and reduce the carbon footprint. The concept of data as a new form of energy will be discussed, examining the benefits and challenges associated with this method.
This paper also proposes other potential applications for using data as an energy source, including powering self-driving cars, drones, and smart irrigation systems through a data-driven approach.</p>2024-03-18T00:00:00+00:00https://journalajrcos.com/index.php/AJRCOS/article/view/450Sarcasm Detection in Pidgin Tweets Using Machine Learning Techniques2024-03-18T12:46:33+00:00Khadijat T Ladoja ktladoja35@gmail.com Ruth T Afape <p>Detecting sarcasm in social media is of growing importance for applications such as monitoring, consumer feedback, and sentiment analysis. However, detecting sarcasm in Pidgin tweets poses unique challenges due to the blend of English and Pidgin languages, along with local cultural references. Existing sarcasm detection models target English, and appropriately annotated data for Pidgin is scarce. This scarcity hinders the development of effective machine learning models. This research aims to address these challenges and create a model for accurate sarcasm detection in Pidgin tweets. Logistic Regression, XGBoost, Random Forest, and Vanilla Artificial Neural Network (ANN) classifiers were assessed on a curated and pre-processed dataset of Nigerian Pidgin tweets, using accuracy, precision, recall, and F1-score metrics. The XGBoost model demonstrated notable performance, attaining an accuracy of 85.78%, precision of 88.57%, recall of 94.44%, and F1-score of 91.41%. These outcomes underscored the model's prowess in discerning sarcastic and non-sarcastic expressions.
By unravelling the intricacies of language in the Nigerian context, this research into sarcasm identification in Nigerian Pidgin text introduced a comprehensive pipeline encompassing data curation, exploratory analysis, culturally tailored pre-processing, model training, evaluation, and prediction.</p>2024-03-18T00:00:00+00:00
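A pipeline of the kind this abstract describes, i.e. vectorized tweets fed to one of the assessed classifiers and scored on the four reported metrics, can be sketched as follows. This is a minimal illustration using scikit-learn's Random Forest (one of the study's classifier lineup) over TF-IDF features; the tweets below are invented placeholders, not the study's Nigerian Pidgin dataset, and the study's exact pre-processing and XGBoost configuration are not reproduced here.

```python
# Toy sarcasm-detection pipeline: TF-IDF features + Random Forest,
# evaluated with the four metrics reported in the abstract.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Invented placeholder tweets (1 = sarcastic, 0 = not sarcastic).
texts = [
    "oh nice, another traffic jam, just what I needed",
    "wow, fuel price don rise again, we too enjoy",
    "great, NEPA don take light again, best day ever",
    "lovely, rain don spoil my shoe, so wonderful",
    "fantastic, my network don slow again, I love it",
    "perfect, salary never come and bills dey wait",
    "the match was exciting and we won",
    "I enjoyed the jollof rice at the party",
    "my exam went well today, thank God",
    "the new road don make my journey fast",
    "I dey happy say my friend graduate today",
    "this phone camera dey take fine picture",
]
labels = [1] * 6 + [0] * 6

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=4, stratify=labels, random_state=0)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    RandomForestClassifier(n_estimators=100, random_state=0),
)
model.fit(X_train, y_train)
pred = model.predict(X_test)

acc = accuracy_score(y_test, pred)
prec = precision_score(y_test, pred, zero_division=0)
rec = recall_score(y_test, pred, zero_division=0)
f1 = f1_score(y_test, pred, zero_division=0)
print(f"accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f} f1={f1:.2f}")
```

In practice the culturally tailored pre-processing the abstract mentions (handling code-switching, slang, and local references) would replace the bare TF-IDF step, and the held-out split would come from the curated tweet corpus rather than a toy list.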