ISSN: 3029-2786 (Online)

   Journal Abbreviation: Comput. Artif. Intell.

   Publication Frequency: Bi-annual

   Publishing Model: Open Access

 

About the Journal

Computing and Artificial Intelligence (CAI) is a peer-reviewed, open access journal of computer science and artificial intelligence. The journal welcomes submissions from researchers and practitioners worldwide in the field of artificial intelligence, including original research articles, review articles, editorials, case reports, and commentaries.

Latest Articles

  • Open Access

    Article

    Article ID: 545

    Plant leaf disease classification using FractalNet

    by Hmidi Alaeddine, Malek Jihene

    Computing and Artificial Intelligence, Vol.2, No.2, 2024; 138 Views, 72 PDF Downloads

    In this work, an effort is made to apply the FractalNet model to plant disease classification. The proposed model was trained and tested on the “PlantVillage” plant disease image dataset in a central processing unit (CPU) environment for 300 epochs, producing an average classification accuracy of 99.9632% on the test dataset. The experimental results demonstrate the efficiency of the proposed model and show that it achieves the highest values compared to other deep learning models on the PlantVillage dataset.
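FractalNet builds very deep networks from a simple recursive expansion rule. The sketch below illustrates that rule only; the "conv" here is a placeholder that counts layers (so the depth of the deepest path can be verified), and the join, which averages branches in the real network, is replaced by a max over path lengths. The paper's actual configuration is not reproduced here.

```python
# Sketch of the FractalNet expansion rule:
#   f_1(x)     = conv(x)
#   f_{C+1}(x) = join( conv(x), f_C(f_C(x)) )
# "conv" below is a placeholder that adds one to a layer counter, and the
# join tracks the deepest path, so make_fractal(C)(0) returns the depth of
# the longest path through f_C, which grows as 2^(C-1).

def make_fractal(depth):
    """Return a function mapping a layer count to the count after f_depth."""
    conv = lambda n: n + 1             # placeholder: one layer of processing
    if depth == 1:
        return conv
    deep = make_fractal(depth - 1)     # long branch: f_{C-1} applied twice
    return lambda n: max(conv(n), deep(deep(n)))  # deepest-path join

for c in (1, 2, 3, 4):
    print(c, make_fractal(c)(0))       # deepest paths: 1, 2, 4, 8
```

The doubling of the deepest path per expansion step is what lets FractalNet reach large depths while shallow sub-paths remain trainable.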

  • Open Access

    Article

    Article ID: 1427

    The computational analysis of COVID-19-induced socio-economic, environmental, and climatic disruptions on the Indian food production system

    by Adya Aiswarya Dash, Sonu Sharma

    Computing and Artificial Intelligence, Vol.2, No.2, 2024; 81 Views, 29 PDF Downloads

    COVID-19 affected all sectors of the Indian economy, yet its impact on Indian agricultural production was surprisingly small (−2.7%). The increase in crop yield can be attributed to environmental, climatic, and socio-demographic factors. The study examines the relationship between the increase in crop yield in India during the first wave of COVID-19, the rise in infection counts, and the land under cultivation, together with supporting factors, during 2020. The relationship is modelled using ordinary least squares (OLS) and geographically weighted regression (GWR). The distribution of the increase in crop yield across India is analyzed against COVID-19 infections along with other dominant factors; studying the spatial relationships between them yields useful insights into crop yield. The geographically weighted regression shows a higher R² value than the global OLS model, and its Akaike information criterion is also lower, indicating that GWR is the better model of the two. The combination of variables affecting agricultural yield in India is found to vary geographically as well as with the type of crop.
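The abstract compares a global OLS fit against GWR using R² and the Akaike information criterion (AIC), where a lower AIC indicates the better model. The sketch below shows only the global OLS side of that comparison on synthetic data; the study's actual yield and COVID-19 variables, and the GWR fit itself (typically done with a dedicated package), are not reproduced.

```python
import numpy as np

# Hedged sketch: fit a global OLS model and compute its Gaussian-likelihood
# AIC, the criterion the study uses to compare OLS against GWR.
# The data below are synthetic placeholders, not the study's variables.

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # intercept + 2 predictors
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.3, size=n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS estimate
rss = np.sum((y - X @ beta_hat) ** 2)             # residual sum of squares
k = X.shape[1]                                    # number of fitted parameters
aic = n * np.log(rss / n) + 2 * k                 # lower AIC = better model

print(beta_hat.round(2), round(aic, 1))
```

A GWR fit would produce one such local model per location; comparing its AIC against this global value is the model-selection step the abstract describes.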

  • Open Access

    Article

    Article ID: 1409

    Predicting manipulated regions in deepfake videos using convolutional vision transformers

    by Mohan Bhandari, Sushant Shrestha, Utsab Karki, Santosh Adhikari, Rajan Gaihre

    Computing and Artificial Intelligence, Vol.2, No.2, 2024; 310 Views, 84 PDF Downloads

    Deepfake technology, which uses artificial intelligence to create and manipulate realistic synthetic media, poses a serious threat to the trustworthiness and integrity of digital content. Deepfakes can be used to generate, swap, or modify faces in videos, altering the appearance, identity, or expression of individuals. This study presents an approach for deepfake detection based on a convolutional vision transformer (CViT), a hybrid model that combines convolutional neural networks (CNNs) and vision transformers (ViTs). The proposed study uses a 20-layer CNN to extract learnable features from face images, and a ViT to classify them into real or fake categories. The study also employs MTCNN, a multi-task cascaded convolutional neural network, to detect and align faces in videos, improving the accuracy and efficiency of the face extraction process. The method is assessed using the FaceForensics++ dataset, which comprises 15,800 images sourced from 1,600 videos. With an 80:10:10 split ratio, the experimental results show that the proposed method achieves an accuracy of 92.5% and an AUC of 0.91. Gradient-Weighted Class Activation Mapping (Grad-CAM) visualization is used to highlight the distinctive image regions on which decisions are based. The proposed method demonstrates a high capability of detecting and distinguishing between genuine and manipulated videos, contributing to the enhancement of media authenticity and security.
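The 80:10:10 split over the 15,800 FaceForensics++ face crops mentioned above can be sketched as a plain index partition. The random seed and shuffling scheme here are illustrative assumptions, not the study's actual protocol.

```python
import numpy as np

# Hedged sketch: partition 15,800 image indices into 80% train, 10%
# validation, 10% test, as in the abstract's split ratio.
n_images = 15_800
rng = np.random.default_rng(42)        # illustrative seed, not the study's
idx = rng.permutation(n_images)

n_train = n_images * 80 // 100         # 12,640
n_val = n_images * 10 // 100           # 1,580
train, val, test = np.split(idx, [n_train, n_train + n_val])

print(len(train), len(val), len(test))  # 12640 1580 1580
```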

  • Open Access

    Article

    Article ID: 1364

    Software cost estimation tool: An app-based application to estimate the cost of software projects

    by Ajay Jaiswal, Piyush Malviya, Lucky Parihar, Rani Pathak, Kuldeep Rajput

    Computing and Artificial Intelligence, Vol.2, No.2, 2024; 135 Views, 62 PDF Downloads

    This paper presents the design and implementation of a software cost estimation tool integrated into a mobile application developed using Flutter. The tool incorporates various techniques for software cost estimation, including expert judgment, function point analysis, 3D point analysis, and the COCOMO model. The purpose of the program is to give software engineers and project managers a practical and effective tool for calculating the time and money needed for software development projects. The paper provides a thorough explanation of each estimation technique’s implementation, along with a discussion of the app’s main features and functionalities. Because of the app’s intuitive and user-friendly design, users can quickly enter project data and get precise cost estimates. The tool’s efficacy is assessed using case studies and contrasts with other software cost estimation methods currently in use. The outcomes show that the app can produce trustworthy and precise cost estimates, which makes it an important resource for software development projects.
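Among the techniques listed, the COCOMO model has a compact closed form. Below is a minimal sketch of Basic COCOMO with Boehm's classic coefficients; the app's exact coefficients, calibration, and its other techniques (expert judgment, function point analysis) are not given in the abstract and are not reproduced here.

```python
# Hedged sketch of Basic COCOMO (Boehm, 1981 coefficients):
#   effort (person-months)  = a * KLOC^b
#   schedule (months)       = c * effort^d

COCOMO_BASIC = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    """Return (effort in person-months, schedule in months) for a project size in KLOC."""
    a, b, c, d = COCOMO_BASIC[mode]
    effort = a * kloc ** b
    schedule = c * effort ** d
    return effort, schedule

effort, months = basic_cocomo(10.0)    # a 10 KLOC organic-mode project
print(round(effort, 1), round(months, 1))  # → 26.9 8.7
```

Intermediate and Detailed COCOMO, which such tools often also implement, multiply the effort by cost-driver ratings; the structure of the calculation stays the same.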

  • Open Access

    Article

    Article ID: 1467

    Enhancing user experience and trust in advanced LLM-based conversational agents

    by Yuanyuan Xu, Weiting Gao, Yining Wang, Xinyang Shan, Yin-Shan Lin

    Computing and Artificial Intelligence, Vol.2, No.2, 2024; 97 Views, 35 PDF Downloads

    This study explores the enhancement of user experience (UX) and trust in advanced Large Language Model (LLM)-based conversational agents such as ChatGPT. The research involves a controlled experiment comparing participants using an LLM interface with those using a traditional messaging app with a human consultant. The results indicate that LLM-based agents offer higher satisfaction and lower cognitive load, demonstrating the potential for LLMs to revolutionize various applications from customer service to healthcare consultancy and shopping assistance. Despite these positive findings, the study also highlights significant concerns regarding transparency and data security. Participants expressed a need for clearer understanding of how LLMs process information and make decisions. The perceived opacity of these processes can hinder user trust, especially in sensitive applications such as healthcare. Additionally, robust data protection measures are crucial to ensure user privacy and foster trust in these systems. To address these issues, future research and development should focus on enhancing the transparency of LLM operations and strengthening data security protocols. Providing users with clear explanations of how their data is used and how decisions are made can build greater trust. Moreover, specialized applications may require tailored solutions to meet specific user expectations and regulatory requirements. In conclusion, while LLM-based conversational agents have demonstrated substantial advantages in improving user experience, addressing transparency and security concerns is essential for their broader acceptance and effective deployment. By focusing on these areas, developers can create more trustworthy and user-friendly AI systems, paving the way for their integration into diverse fields and everyday use.

  • Open Access

    Article

    Article ID: 1447

    Exploring other clustering methods and the role of Shannon Entropy in an unsupervised setting

    by Erin Chelsea Hathorn Mshi, Ahmed Abu Halimeh

    Computing and Artificial Intelligence, Vol.2, No.2, 2024; 17 Views, 7 PDF Downloads

    In the ever-expanding landscape of digital technologies, the exponential growth of data in information science and health informatics presents both challenges and opportunities, demanding innovative approaches to data curation. This study focuses on evaluating various feasible clustering methods within the Data Washing Machine (DWM), a novel tool designed to streamline unsupervised data curation processes. The DWM integrates Shannon entropy into its clustering process, allowing for adaptive refinement of clustering strategies based on entropy levels observed within data clusters. Rigorous testing of the DWM prototype on various annotated test samples revealed promising outcomes, particularly in scenarios with high-quality data. However, challenges arose when dealing with poor data quality, emphasizing the importance of data quality assessment and improvement for successful data curation. To enhance the DWM’s clustering capabilities, this study explored alternative unsupervised clustering methods, including spectral clustering, autoencoders, and density-based clustering methods such as DBSCAN. The integration of these alternative methods aimed to augment the DWM’s ability to handle diverse data scenarios effectively. The findings demonstrated the practicability of constructing an unsupervised entity resolution engine with the DWM, highlighting the critical role of Shannon entropy in enhancing unsupervised clustering methods for effective data curation. This study underscores the necessity of innovative clustering strategies and robust data quality assessments in navigating the complexities of modern data landscapes. The paper is organized into the following sections: Introduction, Methodology, Results, Discussion, and Conclusion.
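The Shannon entropy the abstract refers to can be computed directly from a cluster's value distribution: a homogeneous cluster has low entropy, a mixed one has high entropy. The sketch below shows that computation only; the DWM's actual refinement thresholds and heuristics are not public in the abstract and are not assumed here.

```python
import math
from collections import Counter

# Hedged sketch: Shannon entropy (in bits) of the empirical distribution
# of values in a cluster, the kind of signal an entropy-driven clustering
# loop can use to decide whether a cluster needs further refinement.

def shannon_entropy(items):
    """Entropy in bits of the empirical distribution of `items`."""
    counts = Counter(items)
    total = len(items)
    return sum((c / total) * -math.log2(c / total) for c in counts.values())

print(shannon_entropy(["a", "a", "a", "a"]))   # → 0.0  (homogeneous cluster)
print(shannon_entropy(["a", "b", "a", "b"]))   # → 1.0  (maximally mixed pair)
```

An adaptive strategy of the kind described would, for example, re-cluster or split any cluster whose entropy exceeds a chosen threshold.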

  • Open Access

    Article

    Article ID: 1443

    Validation of the practicability of logical assessment formula for evaluations with inaccurate ground-truth labels: An application study on tumour segmentation for breast cancer

    by Yongquan Yang, Hong Bu

    Computing and Artificial Intelligence, Vol.2, No.2, 2024; 29 Views, 3 Supp.file Downloads, 5 PDF Downloads

    The logical assessment formula (LAF) is a new theory proposed for evaluations with inaccurate ground-truth labels (IAGTLs) to assess predictive models for artificial intelligence applications. However, the practicability of LAF for evaluations with IAGTLs has not yet been validated in real-world practice. In this paper, we applied LAF to two tasks of tumour segmentation for breast cancer (TSfBC) in medical histopathology whole slide image analysis (MHWSIA) for evaluations with IAGTLs. Experimental results and analysis show that LAF-based evaluations with IAGTLs could not confidently match usual evaluations with accurate ground-truth labels (AGTLs) on the easier TSfBC task, but could reasonably match usual evaluations with AGTLs on the more difficult TSfBC task. These results reflect the potential of LAF applied to MHWSIA for evaluations with IAGTLs. This paper presents the first practical validation of LAF for evaluations with IAGTLs in a real-world application.

  • Open Access

    Review

    Article ID: 1279

    Applications of reinforcement learning, machine learning, and virtual screening in SARS-CoV-2-related proteins

    by Yasunari Matsuzaka, Ryu Yashiro

    Computing and Artificial Intelligence, Vol.2, No.2, 2024; 74 Views, 47 PDF Downloads

    Like all coronaviruses, SARS-CoV-2 uses the S glycoprotein to enter host cells; the protein contains two functional subunits, S1, which carries the receptor-binding domain (RBD), and S2. The S proteins on the surface of the SARS-CoV-2 virion recognize angiotensin-converting enzyme 2 (ACE2). SARS-CoV-2 causes COVID-19, and some mutations in the RBD of the S protein markedly enhance its binding affinity to ACE2. Searching for new compounds against COVID-19 is an important initial step in drug discovery and materials design, but this search traditionally requires trial-and-error experiments, which are costly and time-consuming. In automatic molecular design based on deep reinforcement learning, molecules with optimized physical properties can be designed by combining a newly devised coarse-grained representation of molecules with deep reinforcement learning. Structure-based virtual screening uses protein 3D structure information to evaluate the binding affinity between proteins and compounds based on physicochemical interactions such as van der Waals forces, Coulomb forces, and hydrogen bonds, and to select drug candidate compounds. In addition, AlphaFold can predict a protein's 3D structure from its amino acid sequence. Ensemble docking, in which multiple protein structures are generated using molecular dynamics and docking calculations are performed against each, accounts for protein flexibility that a single-structure docking calculation misses. In the future, the AlphaFold algorithm can be used to predict various protein structures related to COVID-19.
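The physicochemical interactions mentioned above are typically summed over protein-ligand atom pairs. The sketch below shows two such pairwise terms, a Lennard-Jones van der Waals term and a Coulomb electrostatic term; the parameters are illustrative, not taken from any real force field or from the review.

```python
import math

# Hedged illustration of two pairwise terms used in structure-based
# scoring functions: Lennard-Jones (van der Waals) plus Coulomb
# electrostatics. Parameters are illustrative, not a real force field.

COULOMB_K = 332.06  # kcal*angstrom/(mol*e^2), common in molecular mechanics

def pair_energy(r, epsilon, sigma, q1, q2):
    """LJ + Coulomb energy (kcal/mol) for one atom pair at distance r (angstroms)."""
    lj = 4 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)
    coulomb = COULOMB_K * q1 * q2 / r
    return lj + coulomb

# The LJ term is exactly zero at r = sigma and reaches its minimum of
# -epsilon at r = 2^(1/6) * sigma:
print(pair_energy(3.5, 0.1, 3.5, 0.0, 0.0))               # → 0.0
print(pair_energy(2 ** (1/6) * 3.5, 0.1, 3.5, 0.0, 0.0))  # ≈ -0.1 (well depth)
```

A docking score then aggregates such terms (plus hydrogen-bond and solvation contributions) over all atom pairs of a candidate pose.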

View All Issues