ISSN: 3029-2786 (Online)

   Journal Abbreviation: Comput. Artif. Intell.

   Publication Frequency: Bi-annual

   Publishing Model: Open Access

 

About the Journal

Computing and Artificial Intelligence (CAI) is a peer-reviewed, open access journal of computer science and Artificial Intelligence. The journal welcomes submissions from researchers and practitioners worldwide in the field of Artificial Intelligence, including original research articles, review articles, editorials, case reports, and commentaries.

Latest Articles

  • Open Access

    Article

    Article ID: 450

    Medical image classification using a quantified hazard ratio and a multilayer fuzzy approach

    by Kishore Kumar Akula, Monica Akula, Alexander Gegov

    Computing and Artificial Intelligence, Vol.2, No.1, 2024; 130 Views, 113 PDF Downloads

    We previously developed two AI-based medical automatic image classification tools using a multi-layer fuzzy approach (MFA and MCM) to convert image-based abnormality into a quantity. However, there is currently limited research on using diagnostic image assessment tools to statistically predict the hazard due to a disease. The present study introduces a novel approach that addresses a substantial research gap: the identification of the hazard or risk associated with a disease using an automatically quantified image-based abnormality. The method employed to ascertain hazard from an image-based quantified abnormality was the Cox proportional hazards (PH) model, a standard tool in medical research for identifying hazard related to covariates. MFA was first used to quantify the abnormality in CT scan images, and hazard plots were used to visually represent the hazard risk over time. Hazards corresponding to image-based abnormality were then computed for the variables ‘gender’, ‘age’, and ‘smoking status’. This integrated framework potentially minimizes false negatives, identifies patients at the highest mortality risk, and facilitates timely initiation of treatment. By utilizing pre-existing patient images, this method could reduce the considerable costs associated with public health research and clinical trials. Furthermore, understanding the hazard posed by widespread global diseases such as COVID-19 aids medical researchers in prompt decision-making regarding treatment and preventive measures.

    show more
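The Cox PH model named in the abstract estimates covariate effects on hazard by maximizing a partial likelihood, with the hazard ratio for a one-unit covariate increase given by exp(β). As an illustrative sketch only (not the authors' implementation; it assumes a single covariate and no tied event times), the partial log-likelihood can be written as:

```python
import math

def cox_partial_loglik(beta, times, events, x):
    """Cox PH partial log-likelihood for one covariate.

    times:  observation times; events: 1 if the event was observed,
            0 if censored; x: covariate value per subject.
    Assumes no tied event times (Breslow/Efron corrections omitted).
    """
    n = len(times)
    ll = 0.0
    for i in sorted(range(n), key=lambda i: times[i]):
        if not events[i]:
            continue  # censored subjects contribute only via risk sets
        # risk set: subjects still under observation at time t_i
        risk = [j for j in range(n) if times[j] >= times[i]]
        ll += beta * x[i] - math.log(sum(math.exp(beta * x[j]) for j in risk))
    return ll
```

Maximizing this over β (e.g., by a 1-D line search) yields the estimate whose exponential is the quantified hazard ratio; at β = 0 each event term reduces to minus the log of its risk-set size.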
  • Open Access

    Article

    Article ID: 441

    Identifying voices using convolutional neural network models AlexNet and ResNet

    by Abdulaziz Alhowaish Luluh, Muniasamy Anandhavalli

    Computing and Artificial Intelligence, Vol.2, No.1, 2024; 69 Views, 46 PDF Downloads

    Deep learning (DL) techniques, which implement deep neural networks, have become popular with the growth of high-performance computing facilities. DL achieves greater power and flexibility through its ability to process many features when dealing with unstructured data. A DL algorithm passes the data through several layers; each layer progressively extracts features and passes them to the next layer. Initial layers extract low-level features, and succeeding layers combine features to form a complete representation. This research attempts to utilize DL techniques for identifying sounds. Developments in DL models have extensively covered classification and verification of objects through images. However, there have been no notable findings concerning the identification and verification of an individual’s voice among other individuals using DL techniques. Hence, the proposed research aims to develop DL techniques capable of isolating the voice of an individual from a group of other sounds and classifying it using the convolutional neural network models AlexNet and ResNet for voice identification. The classification accuracies achieved by the ResNet and AlexNet models for voice identification were 97.2039% and 65.95%, respectively, with the ResNet model achieving the best result.

    show more
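AlexNet and ResNet are built from stacked convolutional layers that extract progressively higher-level features, exactly the layer-by-layer behavior the abstract describes. As a minimal illustration of the core operation only (a 1-D valid convolution followed by ReLU, not the authors' 2-D models):

```python
def conv1d_relu(signal, kernel):
    """Valid 1-D cross-correlation followed by ReLU: the basic
    building block that CNNs such as AlexNet and ResNet stack
    (in 2-D, with many learned kernels per layer)."""
    n = len(signal) - len(kernel) + 1
    out = [sum(signal[i + k] * kernel[k] for k in range(len(kernel)))
           for i in range(n)]
    # ReLU non-linearity keeps only positive responses
    return [max(0.0, v) for v in out]
```

Early layers with small kernels respond to local patterns in the waveform or spectrogram; deeper layers combine those responses, and ResNet additionally adds skip connections so very deep stacks remain trainable.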
  • Open Access

    Article

    Article ID: 440

    A comparison of cepstral and spectral features using recurrent neural network for spoken language identification

    by Irshad Ahmad Thukroo, Rumaan Bashir, Kaiser Javeed Giri

    Computing and Artificial Intelligence, Vol.2, No.1, 2024; 78 Views, 159 PDF Downloads

    Spoken language identification is the process of confirming labels regarding the language of an audio slice regardless of features such as length, ambiance, duration, topic or message, age, gender, region, and emotion. Language identification systems are of great significance in natural language processing, more specifically in multilingual machine translation, language recognition, and automatic routing of voice calls to nodes handling a particular language. In this paper, we compare results based on various cepstral and spectral feature techniques, such as Mel-frequency cepstral coefficients (MFCC), relative spectral-perceptual linear prediction coefficients (RASTA-PLP), and spectral features (roll-off, flatness, centroid, bandwidth, and contrast), in the process of spoken language identification using a recurrent neural network with long short-term memory (RNN-LSTM) as a sequence-learning procedure. The system has been implemented for six languages: Ladakhi and the five official languages of Jammu and Kashmir (Union Territory). The dataset used in experimentation consists of TV audio recordings for the Kashmiri, Urdu, Dogri, and Ladakhi languages, together with the standard corpora IIIT-H and VoxForge containing English and Hindi audio data. Pre-processing of the dataset is done by removing different types of noise using a Spectral Noise Gate (SNG) and then slicing the audio into bursts of 5 seconds’ duration. Performance is evaluated using standard metrics: F1 score, recall, precision, and accuracy. The experimental results showed that spectral features, MFCC, and RASTA-PLP achieved average accuracies of 76%, 83%, and 78%, respectively. Therefore, MFCC proved to be the most suitable feature for language identification using a recurrent neural network long short-term memory classifier.

    show more
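Two of the spectral features compared above, the centroid and the flatness, have simple closed forms over a magnitude spectrum. A hedged sketch of those formulas (illustrative only; in practice a library such as librosa computes these per frame):

```python
import math

def spectral_centroid(freqs, mags):
    """Magnitude-weighted mean frequency of a spectrum (its
    'center of mass'), a coarse correlate of brightness."""
    return sum(f * m for f, m in zip(freqs, mags)) / sum(mags)

def spectral_flatness(mags):
    """Geometric mean over arithmetic mean of the magnitudes:
    close to 1.0 for noise-like (flat) spectra, near 0 for tonal."""
    n = len(mags)
    geo = math.exp(sum(math.log(m) for m in mags) / n)
    return geo / (sum(mags) / n)
```

MFCCs go further by warping the spectrum onto the Mel scale, taking logs, and decorrelating with a discrete cosine transform, which is one reason they carry more language-discriminative information than raw spectral summaries.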
  • Open Access

    Article

    Article ID: 416

    Revolutionizing Neurosurgery and Neurology: The transformative impact of artificial intelligence in healthcare

    by Habib Hamam

    Computing and Artificial Intelligence, Vol.2, No.1, 2024; 97 Views, 45 PDF Downloads

    The integration of artificial intelligence (AI) has brought about a paradigm shift in the landscape of Neurosurgery and Neurology, revolutionizing various facets of healthcare. This article meticulously explores seven pivotal dimensions where AI has made a substantial impact, reshaping the contours of patient care, diagnostics, and treatment modalities. AI’s exceptional precision in deciphering intricate medical imaging data expedites accurate diagnoses of neurological conditions. Harnessing patient-specific data and genetic information, AI facilitates the formulation of highly personalized treatment plans, promising more efficacious therapeutic interventions. The deployment of AI-powered robotic systems in neurosurgical procedures not only ensures surgical precision but also introduces remote capabilities, mitigating the potential for human error. Machine learning models, a core component of AI, play a crucial role in predicting disease progression, optimizing resource allocation, and elevating the overall quality of patient care. Wearable devices integrated with AI provide continuous monitoring of neurological parameters, empowering early intervention strategies for chronic conditions. AI’s prowess extends to drug discovery by scrutinizing extensive datasets, offering the prospect of groundbreaking therapies for neurological disorders. The realm of patient engagement witnesses a transformative impact through AI-driven chatbots and virtual assistants, fostering increased adherence to treatment plans. Looking ahead, the horizon of AI in Neurosurgery and Neurology holds promises of heightened personalization, augmented decision-making, early intervention, and the emergence of innovative treatment modalities. This narrative is one of optimism and collaboration, depicting a synergistic partnership between AI and healthcare professionals to propel the field forward and significantly enhance the lives of individuals grappling with neurological challenges. 
This article provides an encompassing view of AI’s transformative influence in Neurosurgery and Neurology, highlighting its potential to redefine the landscape of patient care and outcomes.

    show more
  • Open Access

    Article

    Article ID: 535

    Enhancing user experience in large language models through human-centered design: Integrating theoretical insights with an experimental study to meet diverse software learning needs with a single document knowledge base

    by Yuchen Wang, Yin-Shan Lin, Ruixin Huang, Jinyin Wang, Sensen Liu

    Computing and Artificial Intelligence, Vol.2, No.1, 2024; 258 Views, 70 PDF Downloads, 0 Supp. file Downloads

    This paper begins with a theoretical exploration of the rise of large language models (LLMs) in Human-Computer Interaction (HCI), their impact on user experience (UX), and related challenges. It then discusses the benefits of Human-Centered Design (HCD) principles and the possibility of their application within LLMs, subsequently deriving six specific HCD guidelines for LLMs. Following this, a preliminary experiment is presented as an example to demonstrate how HCD principles can be employed to enhance user experience within GPT by using a single document input to GPT’s Knowledge base as a new knowledge resource to control the interactions between GPT and users, aiming to meet the diverse needs of hypothetical software learners as far as possible. The experimental results demonstrate the effect of the forms and organizational methods of different elements in the document, as well as GPT’s relevant configurations, on the effectiveness of interaction between GPT and software learners. A series of trials was conducted to explore better methods of realizing text and image display and jump actions. Two template documents are compared in terms of their performance across the four interaction modes. Through continuous optimization, an improved version of the document was obtained to serve as a template for future use and research.

    show more
  • Open Access

    Article

    Article ID: 1258

    The AI spectrum under the doctrine of necessity: Towards the flexibility of the future legal praxis

    by Lambrini Seremeti, Ioannis Kougias

    Computing and Artificial Intelligence, Vol.2, No.1, 2024; 25 Views, 16 PDF Downloads

    Society is rapidly changing into an implicitus one. The main factor leading to this societal transition is the integration of artificial intelligence (AI), which influences all aspects of the anthropocentric legal order. The deep concern to safeguard fundamental human rights under unforeseeable circumstances threatening hypostasis leads those involved in legal praxis to reorganize the legal system to ensure its functional continuity. To this purpose, a reliable extra-legal tool, the doctrine of necessity, is proposed to validate AI development that falls outside the purview of any legal process while being necessary for society’s prosperity.

    show more
  • Open Access

    Article

    Article ID: 570

    Clustering data analytics of urban land use for change detection

    by C. Rajabhushanam

    Computing and Artificial Intelligence, Vol.2, No.1, 2024; 36 Views, 25 PDF Downloads

    In this study, the author proposes and details a workflow for the spatial-temporal demarcation of urban areal features in 8 cities of Tamil Nadu, India. During the inception phase, functional requirements and non-functional parameters are analyzed and designed within a suitable pixel-area and object-oriented derived paradigm. Land use categories are defined from OpenStreetMap (OSM) related works with the scope of conducting climate change studies, using multispectral sensors onboard the Landsat series. Furthermore, we augment the band dataset with the Scale-Invariant Feature Transform (SIFT), Normalized Difference Vegetation Index (NDVI), Normalized Difference Built-Up Index (NDBI), Leaf Area Index (LAI), and texture-based indices, as a means of spatially integrating auto-covariance to stationarity patterns. In doing so, change detection can be pursued by scaling up the segmentation of regional/zonal boundaries in a multi-dimensional environment, with the aid of Wide Area Network (WAN) cluster computers such as BEOWULF/Google Earth Engine clusters. Geoanalytical measures are analyzed in the design of local and zonal spatial models (GRID, RASTER, DEM, IMAGE COLLECTION). Finally, multivariate geostatistical analyses are carried out for precision and recall in predictive data analytics. The author proposes reusing machine learning tools (filtering by attribute-based indexing in PaaS clouds) for pattern recognition and visualization of features and feature collections.

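The NDVI and NDBI indices named in the abstract are standard normalized band ratios over multispectral reflectances. A minimal sketch of the textbook formulas (not the author's processing pipeline):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red)/(NIR + Red).
    High over healthy vegetation, near zero or negative over bare
    ground, water, and built-up surfaces."""
    return (nir - red) / (nir + red)

def ndbi(swir, nir):
    """Normalized Difference Built-Up Index: (SWIR - NIR)/(SWIR + NIR).
    Tends to be positive over built-up land, negative over vegetation."""
    return (swir - nir) / (swir + nir)
```

Applied per pixel across Landsat scenes from different dates, the change in these indices is one common signal for urban land-use change detection.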
    show more
  • Open Access

    Article

    Article ID: 1388

    Utilizing emotion recognition technology to enhance user experience in real-time

    by Yuanyuan Xu, Yin-Shan Lin, Xiaofan Zhou, Xinyang Shan

    Computing and Artificial Intelligence, Vol.2, No.1, 2024; 2 Views, 1 PDF Downloads

    In recent years, advancements in human-computer interaction (HCI) have led to the emergence of emotion recognition technology as a crucial tool for enhancing user engagement and satisfaction. This study investigates the application of emotion recognition technology in real-time environments to monitor and respond to users’ emotional states, creating more personalized and intuitive interactions. The research employs convolutional neural networks (CNN) and long short-term memory networks (LSTM) to analyze facial expressions and voice emotions. The experimental design includes an experimental group that uses an emotion recognition system, which dynamically adjusts learning content based on detected emotional states, and a control group that uses a traditional online learning platform. The results show that real-time emotion monitoring and dynamic content adjustments significantly improve user experiences, with the experimental group demonstrating better engagement, learning outcomes, and overall satisfaction. Quantitative results indicate that the emotion recognition system reduced task completion time by 14.3%, lowered error rates by 50%, and increased user satisfaction by 18.4%. These findings highlight the potential of emotion recognition technology to enhance user experiences. However, challenges such as the complexity of multimodal data integration, real-time processing capabilities, and privacy and data security issues remain. Addressing these challenges is crucial for the successful implementation and widespread adoption of this technology. The paper concludes that emotion recognition technology, by providing personalized and adaptive interactions, holds significant promise for improving user experience and offers valuable insights for future research and practical applications.

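The study analyzes both facial expressions and voice; a common way to combine two modality-specific classifiers is weighted late fusion of their per-class probabilities. A hypothetical sketch (the weight, label set, and function name are illustrative assumptions, not taken from the paper):

```python
def fuse_emotions(face_probs, voice_probs, labels, face_weight=0.6):
    """Weighted late fusion of per-class probabilities from two
    modalities (e.g., a CNN on faces and an LSTM on voice);
    returns the label with the highest fused score."""
    fused = [face_weight * f + (1.0 - face_weight) * v
             for f, v in zip(face_probs, voice_probs)]
    return labels[fused.index(max(fused))]
```

The fused label could then drive the kind of dynamic content adjustment the study evaluates, e.g., simplifying material when "frustrated" dominates.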
    show more
  • Open Access

    Review

    Article ID: 1141

    A review of quantum algorithms for prediction of hazardous asteroids

    by Priya Pareshbhai Bhagwakar, Chirag Suryakant Thaker, Hetal A. Joshiara

    Computing and Artificial Intelligence, Vol.2, No.1, 2024; 68 Views, 31 PDF Downloads

    Quantum computing (QC) and quantum machine learning (QML), two emerging technologies, have the potential to completely change how we approach solving difficult problems in physics and astronomy, among other fields. Potentially hazardous asteroids (PHAs) can produce a variety of damaging phenomena that put biodiversity and human life at serious risk. Although PHAs have been identified through the use of machine learning (ML) techniques, current approaches have reached a point of saturation, signaling the need for further innovation. This paper provides an in-depth examination of various ML and QML techniques for precisely identifying potentially hazardous asteroids. The study attempts to provide information to improve the efficiency and accuracy of asteroid categorization by combining QML techniques such as deep learning with a variety of ML algorithms, such as Random Forest and support vector machines. The study highlights weaknesses in existing approaches, including feature selection and model assessment, and suggests directions for further investigation. The results highlight the significance of developing QML techniques to improve the prediction of asteroid hazards, thereby supporting enhanced risk assessment and space exploration efforts.

    show more
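For context, the conventional (non-ML) criterion for a potentially hazardous asteroid combines orbital proximity with size: an Earth minimum orbit intersection distance (MOID) of at most 0.05 au and an absolute magnitude H of 22.0 or brighter. A sketch of this baseline rule, which the ML/QML classifiers surveyed above aim to improve upon with richer orbital and physical features:

```python
def is_pha(moid_au, abs_magnitude_h):
    """Baseline PHA rule: Earth MOID <= 0.05 au (a close approach is
    geometrically possible) and H <= 22.0 (roughly >= 140 m diameter,
    assuming a typical albedo)."""
    return moid_au <= 0.05 and abs_magnitude_h <= 22.0
```

Learned models replace this hard threshold with a decision boundary over many features (eccentricity, inclination, approach velocity, etc.), which is where feature selection and model assessment, the weaknesses the review identifies, become critical.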
View All Issues