Vol. 3 No. 2 (2025)

  • Open Access

    Article

    Article ID: 2485

    Predict and estimate the current stock prices by using Adaptive Neuro-Fuzzy Inference System

    by Ying Bai, Dali Wang

    Computing and Artificial Intelligence, Vol.3, No.2, 2025;

    Accurately predicting stock prices to maximize profit is a challenging task, and it is critically important to financial institutions in today’s volatile markets. In this study, we apply a popular AI method, the Adaptive Neuro-Fuzzy Inference System (ANFIS), to predict and estimate current and future stock prices. Combined with appropriate data-preprocessing techniques, current stock prices can be estimated accurately and quickly with these models. Our contribution is a normalization preprocessing step for the training and testing data that improves prediction accuracy. An ANFIS model is designed and built to help decision-makers in financial institutions predict current stock prices conveniently. Prediction accuracy was evaluated with the root-mean-square error (RMSE); the minimum training and checking RMSE values for the ANFIS model were 0.103842 and 0.0651076, respectively. Because the method predicts stock prices well, other issuers can also adopt it.
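
    The paper’s code is not included in the abstract, but its two quantitative elements are straightforward to illustrate: the normalization preprocess and the RMSE accuracy measure. Below is a minimal sketch, assuming min-max scaling (a common choice; the paper does not specify its exact scheme) and illustrative price values:

    ```python
    import numpy as np

    def min_max_normalize(x, lo=None, hi=None):
        """Scale features to [0, 1]; reuse the training bounds for test data."""
        lo = x.min(axis=0) if lo is None else lo
        hi = x.max(axis=0) if hi is None else hi
        return (x - lo) / (hi - lo), lo, hi

    def rmse(y_true, y_pred):
        """Root-mean-square error, the accuracy measure reported above."""
        return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

    # Illustrative usage: normalize training prices, then apply the SAME bounds
    # to the checking data so both live on a comparable [0, 1] scale.
    train_prices = np.array([[101.2], [103.5], [99.8], [104.1]])
    check_prices = np.array([[102.0], [100.5]])
    train_n, lo, hi = min_max_normalize(train_prices)
    check_n, _, _ = min_max_normalize(check_prices, lo, hi)
    ```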

  • Open Access

    Article

    Article ID: 2514

    An efficient ray tracing algorithm and its implementation based on adaptive octree decomposition

    by Chunlong Dong, Shengjun Xue, Limin Zhao

    Computing and Artificial Intelligence, Vol.3, No.2, 2025;

    This paper proposes a ray tracing algorithm based on adaptive octree decomposition to address the low efficiency of computing intersections between light rays and complex surfaces in the optical simulation of vehicle lights. The algorithm discretizes the complex surface into a series of polygonal facets and uses bilinear interpolation to optimize the light-refraction calculation, significantly improving the efficiency of the intersection computation. Experiments show that the algorithm greatly reduces computation time on vehicle-light models of varying complexity; on complex surfaces in particular, it improves performance by nearly 50%. The algorithm has been successfully applied to optical-performance simulation in vehicle-light design, where it provides an efficient solution.
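
    The culling step the abstract describes (subdivide space adaptively, then test the ray only against facets in cells it crosses) can be sketched compactly. This is a hedged illustration with hypothetical names and a coarse bounding-box overlap test; the paper’s exact subdivision criterion and its bilinear-interpolation refraction step are not reproduced here:

    ```python
    import numpy as np

    def ray_aabb_hit(origin, direction, box_min, box_max, eps=1e-12):
        """Slab test: does the ray intersect this axis-aligned box?"""
        inv = 1.0 / np.where(np.abs(direction) < eps, eps, direction)
        t1, t2 = (box_min - origin) * inv, (box_max - origin) * inv
        t_near = np.max(np.minimum(t1, t2))
        t_far = np.min(np.maximum(t1, t2))
        return t_far >= max(t_near, 0.0)

    def facet_overlaps(facet, lo, hi):
        """Coarse cell test via the facet's bounding box (facet = vertex array)."""
        return bool(np.all(facet.max(axis=0) >= lo) and np.all(facet.min(axis=0) <= hi))

    class OctreeNode:
        """Adaptive cell: subdivide only while the cell still holds many facets."""
        def __init__(self, lo, hi, facets, max_facets=8, depth=0, max_depth=8):
            self.lo, self.hi, self.facets, self.children = lo, hi, facets, []
            if len(facets) > max_facets and depth < max_depth:
                mid = 0.5 * (lo + hi)
                for i in range(8):
                    clo = np.where([i & 1, i & 2, i & 4], mid, lo)
                    chi = np.where([i & 1, i & 2, i & 4], hi, mid)
                    inside = [f for f in facets if facet_overlaps(f, clo, chi)]
                    if inside:
                        self.children.append(
                            OctreeNode(clo, chi, inside, max_facets, depth + 1, max_depth))
                self.facets = []  # interior node: facets live in the children

        def candidates(self, origin, direction):
            """Return only facets in cells the ray actually passes through."""
            if not ray_aabb_hit(origin, direction, self.lo, self.hi):
                return []
            if not self.children:
                return self.facets
            return [f for c in self.children for f in c.candidates(origin, direction)]
    ```

    In practice, facets that span several cells would be deduplicated before the exact ray-facet intersection test; the adaptivity (stopping subdivision once a cell holds few facets) is what keeps the tree shallow over simple regions of the surface.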

  • Open Access

    Article

    Article ID: 2220

    Offensive and defensive cybersecurity solutions in healthcare

    by Cheryl Ann Alexander, Lidong Wang

    Computing and Artificial Intelligence, Vol.3, No.2, 2025;

    Healthcare services usually implement defensive data strategies; however, offensive data strategies offer new opportunities because they focus on improving profitability or revenues. Offensive data strategies also help develop new medicines, diagnoses, and treatments because they favor data sharing over data control and other restrictions. Balancing defensive and offensive data strategies means balancing data control against flexibility, which is a challenge; sometimes one must be favored over the other, depending on the situation. A robust cybersecurity program is contingent on the availability of resources in healthcare organizations and on the cybersecurity management staff. In this paper, a cybersecurity system with the functions of both defensive and offensive cybersecurity in a medical center is proposed based on big data and artificial intelligence (AI)/machine learning (ML)/deep learning (DL).

  • Open Access

    Article

    Article ID: 2923

    Multifidelity Bayesian optimization for hyperparameter tuning of deep reinforcement learning algorithms

    by Eduardo C. Garrido-Merchán, Martin Molina, Gonzalo Martínez

    Computing and Artificial Intelligence, Vol.3, No.2, 2025;

    This research focuses on comparing standard Bayesian optimization and multifidelity Bayesian optimization for hyperparameter search to improve the performance of reinforcement learning algorithms in OpenAI Gym environments such as LunarLander and CartPole. The primary goal is to determine whether multifidelity Bayesian optimization provides significant improvements in solution quality compared to standard Bayesian optimization. To address this question, several Python implementations were developed, evaluating solution quality using the mean of the total rewards obtained as the objective function. Various experiments were conducted for each environment and version using different seeds, ensuring that the results were not merely due to the inherent randomness of reinforcement learning algorithms. The results demonstrate that multifidelity Bayesian optimization outperforms standard Bayesian optimization in several key aspects. In the LunarLander environment, multifidelity optimization achieved better convergence and more stable performance, yielding a higher average reward than the standard version. In the CartPole environment, although both methods quickly reached the maximum reward, multifidelity did so with greater consistency and in less time. These findings highlight the ability of multifidelity optimization to tune hyperparameters more efficiently, using fewer resources and less time while achieving superior performance.
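
    The comparison rests on a standard Bayesian optimization loop whose objective is the mean total reward of a trained agent. The following is a minimal, self-contained sketch of such a loop (a Gaussian process surrogate with an expected-improvement acquisition); the toy objective stands in for agent training, and all names here are illustrative assumptions, not the authors’ code:

    ```python
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def expected_improvement(mu, sigma, best, xi=0.01):
        """EI acquisition (maximization): expected gain over the incumbent."""
        sigma = np.maximum(sigma, 1e-9)
        z = (mu - best - xi) / sigma
        return (mu - best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

    def bayes_opt(objective, bounds, n_init=5, n_iter=20, seed=0):
        """Minimal BO loop: GP surrogate over (hyperparams -> mean total reward)."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array([b[0] for b in bounds]), np.array([b[1] for b in bounds])
        X = rng.uniform(lo, hi, size=(n_init, len(bounds)))
        y = np.array([objective(x) for x in X])
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        for _ in range(n_iter):
            gp.fit(X, y)
            cand = rng.uniform(lo, hi, size=(512, len(bounds)))
            mu, sigma = gp.predict(cand, return_std=True)
            x_next = cand[np.argmax(expected_improvement(mu, sigma, y.max()))]
            X, y = np.vstack([X, x_next]), np.append(y, objective(x_next))
        return X[np.argmax(y)], y.max()

    def toy_objective(x):
        # Stand-in for "mean total reward of an agent trained with hyperparams x"
        # (e.g., learning rate and discount factor); peak at x = (0.3, 0.3).
        return -np.sum((x - 0.3) ** 2)

    best_x, best_y = bayes_opt(toy_objective, bounds=[(0.0, 1.0), (0.0, 1.0)])
    ```

    A multifidelity variant would additionally make a fidelity input (for example, the training-episode budget) part of the surrogate’s input and spend most evaluations at cheap, low-fidelity settings, which is where the time and resource savings reported above come from.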

  • Open Access

    Article

    Article ID: 3104

    Lightweight weighted average ensemble model for pneumonia detection in chest X-ray images

    by Suresh Babu Nettur, Shanthi Karpurapu, Unnati Nettur, Likhit Sagar Gajja, Sravanthy Myneni, Akhil Dusi, Lalithya Posham

    Computing and Artificial Intelligence, Vol.3, No.2, 2025;

    Pneumonia is a leading cause of illness and death in children, underscoring the need for early and accurate detection. In this study, we propose a novel lightweight ensemble model for detecting pneumonia in children using chest X-ray images. Our main contribution is a novel weighted-average ensemble model that combines two lightweight pre-trained convolutional neural networks (CNNs), MobileNetV2 and NASNetMobile, a combination that has not previously been explored for deep learning image classification. These models were selected for their balance of computational efficiency and accuracy, fine-tuned on a pediatric chest X-ray dataset, and combined to enhance classification performance. The proposed ensemble model achieved a classification accuracy of 98.63%, significantly outperforming the individual models MobileNetV2 (97.10%) and NASNetMobile (96.25%) in terms of accuracy, precision, recall, and F1 score. Moreover, the ensemble outperformed state-of-the-art architectures, including ResNet50, InceptionV3, and DenseNet201, while maintaining computational efficiency. The proposed lightweight weighted-average ensemble model is a highly effective and resource-efficient solution for pneumonia detection, making it particularly suitable for deployment in resource-constrained settings.
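
    The ensemble itself is simple to express. Below is a hedged Keras sketch of a weighted average of the two backbones’ predicted pneumonia probabilities; the 0.6/0.4 weights, the binary sigmoid head, and the 224x224 input size are illustrative assumptions, not the paper’s tuned values:

    ```python
    import tensorflow as tf

    # Illustrative ensemble weights; the paper tunes the weighting itself.
    W_MOBILENET, W_NASNET = 0.6, 0.4

    def build_branch(base_cls, input_shape=(224, 224, 3)):
        """Lightweight ImageNet backbone with a binary (pneumonia/normal) head."""
        base = base_cls(include_top=False, weights="imagenet",
                        input_shape=input_shape, pooling="avg")
        inputs = tf.keras.Input(shape=input_shape)
        outputs = tf.keras.layers.Dense(1, activation="sigmoid")(base(inputs))
        return tf.keras.Model(inputs, outputs)

    mobilenet = build_branch(tf.keras.applications.MobileNetV2)
    nasnet = build_branch(tf.keras.applications.NASNetMobile)

    def ensemble_predict(x):
        """Weighted average of the two branches' pneumonia probabilities."""
        return W_MOBILENET * mobilenet.predict(x) + W_NASNET * nasnet.predict(x)
    ```

    Each branch would first be fine-tuned on the pediatric chest X-ray data; the weighted average then lets the stronger branch dominate while the second corrects some of its errors, at roughly the combined inference cost of two mobile-scale networks.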

  • Open Access

    Article

    Article ID: 2300

    Semantic backpropagation: Extending symbolic network effects to achieve non-linear scaling in semantic systems

    by Andy E. Williams

    Computing and Artificial Intelligence, Vol.3, No.2, 2025;

    Addressing humanity’s most complex challenges, such as poverty, climate change, and systemic inequality, requires solutions that scale non-linearly with their key variables. Traditional symbolic-level backpropagation algorithms, which power neural networks, achieve non-linear scaling through hierarchical feature extraction. However, these algorithms are constrained by their reliance on symbolic representations and numeric optimization, limiting their applicability to context-rich, real-world systems. This paper introduces semantic backpropagation, a novel extension of symbolic backpropagation designed to operate on semantic representations that encode richer contextual and relational information. We hypothesize that (1) symbolic-level network effects can be generalized and replicated at the semantic level through semantic backpropagation algorithms, and (2) the non-linear scaling observed in symbolic backpropagation can also be achieved in semantic systems. To test these hypotheses, we developed a simulation framework that dynamically constructs, evaluates, and optimizes networks of interventions, such as value chains, using semantic query loops and iterative fitness optimization. The results indicate that semantic backpropagation has the potential to replicate symbolic-level network effects and achieve non-linear scaling through cooperative semantic interactions. Collaborative idea generation within this framework produced an exponential increase in the number and impact of business ideas compared to independent idea generation, providing initial evidence of semantic backpropagation’s potential to address multi-dimensional challenges. This work bridges the paradigms of symbolic precision and semantic richness, offering a powerful new tool for designing decentralized collective intelligence systems and solving global problems at scale. Semantic backpropagation provides a theoretical and practical foundation for leveraging semantic-level network effects to exponentially enhance the impact of human and AI collaboration. This work does not claim final empirical validation; rather, it defines and tests a generative framework whose full implementation lies beyond current infrastructure, and it proposes a theory of recursive semantic coherence whose feasibility must be evaluated not by external metrics alone but by its ability to generate conceptual resolution and future testable models across domains.
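
    The simulation framework is not specified at the code level in the abstract, so the following is only a deliberately abstract sketch of the iterative fitness loop it describes: candidate ideas are recombined cooperatively and selected by fitness. Everything here is hypothetical; `propose_refinement` stands in for the paper’s semantic query step (e.g., a human contributor or a language model):

    ```python
    import random

    def propose_refinement(idea_a, idea_b):
        # Hypothetical stand-in for a semantic query that merges two ideas into
        # a richer one; here, simply the union of their constituent parts.
        return {"parts": idea_a["parts"] | idea_b["parts"]}

    def fitness(idea):
        # Stand-in fitness: richer, more connected ideas score higher.
        return len(idea["parts"])

    def optimize(seed_ideas, rounds=10, keep=8, rng=random.Random(0)):
        pool = list(seed_ideas)
        for _ in range(rounds):
            a, b = rng.sample(pool, 2)
            pool.append(propose_refinement(a, b))                   # cooperative recombination
            pool = sorted(pool, key=fitness, reverse=True)[:keep]   # iterative fitness selection
        return pool[0]

    best = optimize([{"parts": frozenset({i})} for i in range(5)])
    ```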

  • Open Access

    Article

    Article ID: 2601

    Resources management and execution mechanisms for thinking operating system

    by Ping Zhu, Pohua Lv, Weiming Zou, Xuetao Jiang, Jin Shi, Yang Zhang, Yirong Ma

    Computing and Artificial Intelligence, Vol.3, No.2, 2025;

    To achieve interpretable machine intelligence that surpasses human cognitive levels and to realize the ultimate goal of co-evolutionary human-computer interaction, this article analyzes related aspects such as the human-computer interaction process, knowledge-base construction, visual-programming tool development, and thinking-operating-system design. It proposes a method for simulating human thinking processes on a computer. First, it lays out the route, starting from the "teaching and learning" mode, i.e., the human-computer interactive computing mode, which enables the gradual accumulation of knowledge and data and establishes the thinking knowledge base. Second, it builds human-thinking simulation mechanisms on the thinking operating system, including state perception, common-sense judgment, error rollback, static logic-structure analysis of programs, and dynamic execution-path analysis. Third, it discusses in detail the computer implementation of the thinking operating system and its applications, using mechanisms such as autonomous enumeration and rule induction over input-data features, common-sense-judgment rollback, automatic error self-healing, online self-programming, and system adaptation (generalized pattern matching), all of which are commonly used in human thinking. Finally, it summarizes the article and proposes future research directions.
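
    Of the mechanisms listed, common-sense judgment with error rollback is the easiest to make concrete. A toy sketch follows, with hypothetical steps and check; the article’s actual mechanisms operate on its thinking knowledge base, not on a plain dictionary:

    ```python
    import copy

    def run_with_rollback(state, steps, sanity_check):
        """Toy 'common-sense judgment + error rollback' loop: snapshot the
        state before each step and restore it if the check fails."""
        for step in steps:
            snapshot = copy.deepcopy(state)   # state perception before acting
            step(state)
            if not sanity_check(state):       # common-sense judgment on the result
                state.clear()
                state.update(snapshot)        # error rollback to the last good state
        return state

    # Illustrative usage: the second step would drive the balance negative,
    # so the common-sense check rejects it and the state is rolled back.
    state = {"balance": 10}
    steps = [lambda s: s.update(balance=s["balance"] + 5),
             lambda s: s.update(balance=s["balance"] - 100)]
    result = run_with_rollback(state, steps, lambda s: s["balance"] >= 0)
    # result == {"balance": 15}
    ```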

  • Open Access

    Review

    Article ID: 2258

    The potentially fractal nature of intelligence

    by Andy E. Williams

    Computing and Artificial Intelligence, Vol.3, No.2, 2025;

    This article examines the hypothesis that intelligence may exhibit fractal properties. The concept of Nth order intelligence is introduced, emphasizing its implications for problem-solving scalability and contrasting the limitations of centralized systems with the potential of decentralized collective intelligence. The analysis explores the limitations of first-order AI systems in addressing non-linear problem scaling, particularly in the context of AI safety, and critiques the inherent risks of centralization in accelerating control-oriented trajectories. In contrast, decentralized collective intelligence is proposed as a scalable framework capable of optimizing problem-solving across diverse participants. The stakes of these competing trajectories are profound: one path leads to escalating centralization, potentially culminating in irreversible and misaligned control, while the other fosters collaboration through decentralized structures that ensure alignment. This work emphasizes the necessity of prioritizing decentralized, semantic-level approaches to intelligence to address existential challenges and ensure alignment with collective human interests.
