Prof. Shaohua Wan
University of Electronic Science and Technology of China, China
Description
Computing and Artificial Intelligence (CAI) is a peer-reviewed, open-access journal dedicated to the dissemination of cutting-edge research in the fields of computer science and artificial intelligence. The journal aims to bridge the gap between theoretical research and practical applications by providing a platform for scholars, researchers, and industry professionals to share their insights and findings. CAI is published twice a year, ensuring a regular flow of new research findings and discussions. All papers published in CAI can be accessed, read, and downloaded freely, with the aim of making research freely available to the public and fostering greater collaboration and knowledge exchange within the scientific community.
The journal welcomes submissions from researchers and practitioners worldwide in the field of artificial intelligence, including original research articles, review articles, editorials, case reports, and commentaries. Authors are encouraged to adhere to the submission guidelines provided on the journal's website to ensure a smooth review process.
Latest Articles
Open Access
Article
Article ID: 1498
Generative artificial intelligence (GAI): From large language models (LLMs) to multimodal applications towards fine tuning of models, implications, investigations
by Zarif Bin Akhtar
Computing and Artificial Intelligence, Vol.3, No.1, 2024; 144 Views, 78 PDF Downloads, 3 Supp. file Downloads
This research explores the transformative integration of artificial intelligence (AI), robotics, and language models, with a particular emphasis on the PaLM-E model. The exploration aims to assess PaLM-E’s decision-making processes and adaptability across various robotic environments, demonstrating its capacity to convert textual prompts into precise robotic actions. In addition, the research investigates Parameter-Efficient Fine-Tuning (PEFT) techniques, such as Low-Rank Adaptation (LoRA) and Quantized Low-Rank Adaptation (QLoRA), providing a historical overview of PEFT and highlighting their significance in enhancing task performance while reducing the number of trainable parameters. The broader scope of Generative AI is examined through an analysis of influential models like GPT-3, GPT-4, Copilot, Bard, LLaMA, Stable Diffusion, Midjourney, and DALL-E. These models’ abilities to process natural language prompts and generate a wide range of outputs are thoroughly investigated. The research traces the historical evolution of AI, from its roots in science fiction to its practical applications today, with a focus on the rise of Generative AI in the 21st century. Furthermore, the research delves into the various modalities of Generative AI, covering applications in text, code, images, and more, and assesses their real-world impact on robotics, planning, and business intelligence. The implications of synthetic data generation for business analytics are also explored. The research examines both the software and hardware landscapes, comparing local deployment on consumer-grade hardware with cloud-based services, and underscores the benefits of local model deployment in terms of privacy protection, intellectual property security, and censorship resistance. Ethical considerations are central to this research, addressing concerns related to privacy, security, societal impact, biases, and misinformation.
The research proposes ethical guidelines for the responsible development and deployment of AI technologies. Ultimately, this work reveals the deep interconnections between vision, language, and robotics, pushing the boundaries of AI capabilities and providing crucial insights for future AI model development and technological innovation. These findings are intended to guide the field through the emerging challenges of the rapidly evolving Generative AI landscape.
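The LoRA technique highlighted in this abstract can be illustrated with a minimal sketch: instead of updating a full weight matrix, only two small low-rank factors are trained. All shapes, values, and names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal sketch of the Low-Rank Adaptation (LoRA) idea: a frozen weight
# matrix W (d_out x d_in) is adapted by training only A (r x d_in) and
# B (d_out x r), with rank r much smaller than the layer dimensions.
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 512, 512, 8, 16

W = rng.normal(size=(d_out, d_in))      # frozen pre-trained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                # zero-initialised: no change at start

def lora_forward(x):
    """Forward pass with the scaled low-rank update folded into W."""
    return x @ (W + (alpha / r) * (B @ A)).T

full_params = W.size
lora_params = A.size + B.size
print(f"trainable params: {lora_params} vs full fine-tuning: {full_params}")
```

With these illustrative shapes, the trainable parameter count drops from 262,144 to 8,192, i.e. to roughly 3% of full fine-tuning, which is the efficiency gain the abstract refers to.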
Open Access
Article
Article ID: 1450
Pre-trained models for linking process in data washing machine
by Bushra Sajid, Ahmed Abu-Halimeh, Nuh Jakoet
Computing and Artificial Intelligence, Vol.3, No.1, 2024; 57 Views, 24 PDF Downloads
Entity Resolution (ER) has been investigated for decades in various domains as a fundamental task in data integration and data quality. The emerging volume of heterogeneously structured and even unstructured data challenges traditional ER methods. This research focuses on the Data Washing Machine (DWM), which was developed in the NSF DART Data Life Cycle and Curation research theme and helps to detect and correct certain types of data quality errors automatically. It also performs unsupervised entity resolution to identify duplicate records. However, it uses traditional methods driven by algorithmic pattern rules, such as Levenshtein edit distance and matrix comparators. The goal of this research is to assess the replacement of these rule-based methods with machine learning and deep learning methods to improve the effectiveness of the processes, using 18 sample datasets. The DWM has several processes for improving data quality, and this work focuses on the scoring and linking processes. To integrate a machine learning model into the DWM, different pre-trained models were tested to find one that produces accurate vectors for calculating the similarity between records. After trying several pre-trained models, distilroberta was chosen to produce the embeddings, and cosine similarity was then used to compute the similarity scores. This allowed the machine learning model to be assessed within the DWM, where it produced results close to those of the existing scoring matrix. The model performed well overall, likely because it captured the important features relevant to the entity-matching process.
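The scoring step described here (embed two records, then compare with cosine similarity) can be sketched as follows. The paper used distilroberta sentence embeddings; to keep this example self-contained, a simple character-trigram count vector stands in for the embedding model, and the record strings are invented.

```python
import numpy as np
from collections import Counter

# Stand-in embedding: character-trigram counts over a shared vocabulary.
def trigram_vector(text, vocab):
    grams = Counter(text[i:i + 3] for i in range(len(text) - 2))
    return np.array([grams[g] for g in vocab], dtype=float)

# Cosine similarity between two vectors, guarding against zero norms.
def cosine_similarity(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

# Two hypothetical records that likely refer to the same entity.
rec_a = "john smith 42 main st"
rec_b = "jon smith 42 main street"
vocab = sorted({rec_a[i:i + 3] for i in range(len(rec_a) - 2)} |
               {rec_b[i:i + 3] for i in range(len(rec_b) - 2)})
score = cosine_similarity(trigram_vector(rec_a, vocab),
                          trigram_vector(rec_b, vocab))
print(f"similarity: {score:.2f}")  # a high score suggests a duplicate pair
```

In the actual pipeline, a pre-trained transformer would replace `trigram_vector`, and pairs scoring above a threshold would be passed to the linking process.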
Open Access
Article
Article ID: 1485
Harnessing artificial intelligence (AI) for cybersecurity: Challenges, opportunities, risks, future directions
by Zarif Bin Akhtar, Ahmed Tajbiul Rawol
Computing and Artificial Intelligence, Vol.2, No.2, 2024; 109 Views, 31 PDF Downloads
The integration of artificial intelligence (AI) into cybersecurity has brought about transformative advancements in threat detection and mitigation, yet it also introduces new vulnerabilities and potential threats. This research systematically investigates the critical issues surrounding AI within cybersecurity, focusing on specific vulnerabilities and the potential for AI systems to be exploited by malicious actors. The research aims to address these challenges by surveying and analyzing existing methodologies designed to mitigate such risks. Through a detailed exploration of modern scientific research, this manuscript identifies the dual-edged impact of AI on cybersecurity, emphasizing both the opportunities and the dangers. The findings highlight the need for strategic solutions that not only enhance digital security and user privacy but also address the ethical and regulatory aspects of AI in cybersecurity. Key contributions include a comprehensive analysis of emerging trends and challenges and the development of AI-driven cybersecurity frameworks. The research also provides actionable recommendations for the future development of robust, reliable, and secure AI-based systems, bridging current knowledge gaps and offering valuable insights for academia and industry alike.
Open Access
Article
Article ID: 1481
The biofeedback-based approach versus ECG for evaluating heart rate variability during the maximal exercise protocol among healthy individuals
by Sara Pouriamehr, Valiollah Dabidi Roshan, Somayeh Namdar Tajari
Computing and Artificial Intelligence, Vol.2, No.2, 2024; 51 Views, 16 PDF Downloads
Although biofeedback devices are in widespread use, they are applied mainly for clinical purposes. This study therefore evaluated whether biofeedback devices could be used to estimate heart rate variability (HRV) in healthy populations. 60 individuals (46 ± 5 years; 30 women) performed a maximal exercise protocol (MEP). At pre- and post-MEP status, HRV indexes were collected by two devices: 1) an electrocardiogram device (ECG); 2) a biofeedback device (BIO). At pre-exercise status, all HRV parameters were significantly correlated, ranging from low (r = 0.241) to high (r = 0.779). At post-exercise status, significant correlations were also found for some of the HRV measures, ranging from low (r ≤ 0.29) to moderate (0.3 ≤ r ≤ 0.49). To our knowledge, this study is the first attempt to evaluate HRV with biofeedback devices in healthy individuals, and it shows that they can also be applied as a swift method to examine HRV in healthy individuals, especially at rest.
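The agreement analysis reported here rests on Pearson correlations between HRV indexes from the two devices. A minimal sketch of that computation, using synthetic stand-in values rather than the study's data (the variable names and noise model are assumptions):

```python
import numpy as np

# Synthetic stand-in for one HRV index (e.g., RMSSD in ms) measured on
# 30 participants by two devices: an ECG reference and a biofeedback
# device modelled as the reference plus measurement noise.
rng = np.random.default_rng(1)
ecg_rmssd = rng.normal(40, 10, size=30)
bio_rmssd = ecg_rmssd + rng.normal(0, 6, size=30)

# Pearson correlation coefficient between the two device readings.
r = np.corrcoef(ecg_rmssd, bio_rmssd)[0, 1]
print(f"Pearson r = {r:.3f}")
```

A value near the study's high end (r ≈ 0.78) would indicate that the biofeedback device tracks the ECG reference closely for that index.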
Open Access
Review
Article ID: 1279
Applications of reinforcement learning, machine learning, and virtual screening in SARS-CoV-2-related proteins
by Yasunari Matsuzaka, Ryu Yashiro
Computing and Artificial Intelligence, Vol.2, No.2, 2024; 149 Views, 71 PDF Downloads
Like all coronaviruses, SARS-CoV-2 uses the S glycoprotein to enter host cells; the protein contains two functional domains, S1 and S2, with the receptor-binding domain (RBD) located in S1. Angiotensin-converting enzyme 2 (ACE2) is recognized by the S proteins on the surface of the SARS-CoV-2 virus. SARS-CoV-2 causes COVID-19, and some mutations in the RBD of the S protein markedly enhance its binding affinity to ACE2. Searching for new compounds against COVID-19 is an important initial step in drug discovery and materials design, but this search traditionally requires trial-and-error experiments, which are costly and time-consuming. In automatic molecular design based on deep reinforcement learning, molecules with optimized physical properties can be designed by combining a newly devised coarse-grained representation of molecules with deep reinforcement learning. In addition, structure-based virtual screening uses protein 3D structure information to evaluate the binding affinity between proteins and compounds based on physicochemical interactions such as van der Waals forces, Coulomb forces, and hydrogen bonds, and to select drug candidate compounds. AlphaFold can predict 3D protein structures given the amino acid sequence and the protein building blocks. Ensemble docking, in which multiple protein structures are generated using molecular dynamics and docking calculations are performed against each, is also commonly employed. In the future, the AlphaFold algorithm could be used to predict various protein structures related to COVID-19.
Open Access
Article
Article ID: 1443
Validation of the practicability of logical assessment formula for evaluations with inaccurate ground-truth labels: An application study on tumour segmentation for breast cancer
by Yongquan Yang, Hong Bu
Computing and Artificial Intelligence, Vol.2, No.2, 2024; 97 Views, 23 PDF Downloads, 2 Supplementary materials Downloads
The logical assessment formula (LAF) is a new theory proposed for evaluations with inaccurate ground-truth labels (IAGTLs) to assess predictive models for artificial intelligence applications. However, the practicability of LAF for evaluations with IAGTLs has not yet been validated in real-world practice. In this paper, we applied LAF to two tasks of tumour segmentation for breast cancer (TSfBC) in medical histopathology whole slide image analysis (MHWSIA) for evaluations with IAGTLs. Experimental results and analysis show that LAF-based evaluations with IAGTLs could not confidently substitute for usual evaluations with accurate ground-truth labels (AGTLs) on the easier TSfBC task, but could reasonably do so on the other, more difficult TSfBC task. These results and analysis reflect the potential of LAF applied to MHWSIA for evaluations with IAGTLs. This paper presents the first practical validation of LAF for evaluations with IAGTLs in a real-world application.
Announcements
Research: Enhancing user experience in large language models through human-centered design: Integrating theoretical insights with an experimental study to meet diverse software learning needs with a single document knowledge base
The surge of artificial intelligence (AI) technology is delivering benefits across a spectrum of industries, with one of the most notable applications being the evolution and use of ChatGPT. This tool has become an integral part of text editing, content creation, and even code generation. Articles published in both Nature and Computing and Artificial Intelligence reveal its value, underlying technological logic, and development.