
Prof. Shaohua Wan
University of Electronic Science and Technology of China, China
Open Access
Article
Article ID: 2342
by Haopeng Mu, Ling Guo, Xiaozhou Zhang
Computing and Artificial Intelligence, Vol.3, No.3, 2025;
In recent years, skeleton data has been widely applied in action recognition for fields such as autonomous driving and intelligent security, owing to its strong spatial information correlation, small data volume, and high computational efficiency. However, in practical applications, an attacker only needs to apply a small perturbation to the input skeleton data to cause the attacked model to misclassify the corresponding action, resulting in a significant drop in recognition accuracy and potentially serious consequences in high-risk scenarios such as autonomous driving. To address this problem, many attack methods have been proposed, such as attacks that limit the angle changes between bones or attacks that alter bone lengths. These methods can, to a certain extent, increase the attack success rate against action recognition models, but most of them perturb the skeleton data while disregarding the influence that each joint node has on the overall action. In this paper, we propose a new adversarial attack method that perturbs the coordinate data of all skeletal joint nodes. Our method introduces the concept of joint weights and designs a time cropping translation attack based on these weights to improve the attack success rate. Experimental results show that our attack success rate remains stable at over 60%.
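The abstract does not specify how the joint weights are computed or applied; as one plausible reading, joints that move more contribute more to the recognized action, so a perturbation budget can be allocated in proportion to each joint's motion. The sketch below illustrates that idea only — the function names, the motion-magnitude weighting, and the translation scheme are assumptions, not the authors' actual method.

```python
import numpy as np

def joint_weights(skeleton):
    """Weight each joint by its total motion magnitude across frames.
    skeleton: (T, J, 3) array of joint coordinates over T frames."""
    motion = np.linalg.norm(np.diff(skeleton, axis=0), axis=-1)  # (T-1, J)
    w = motion.sum(axis=0)                                       # (J,)
    return w / (w.sum() + 1e-8)                                  # normalize to sum 1

def weighted_translation_attack(skeleton, direction, eps=0.01):
    """Translate every joint along `direction`, scaled by its weight,
    so higher-influence joints receive larger perturbations.
    The per-joint displacement never exceeds eps."""
    w = joint_weights(skeleton)                                  # (J,)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)                                    # unit direction
    pert = eps * w[None, :, None] * d[None, None, :]             # (1, J, 3)
    return skeleton + pert

rng = np.random.default_rng(0)
skel = rng.normal(size=(30, 25, 3))   # 30 frames, 25 joints (e.g. NTU-style)
adv = weighted_translation_attack(skel, direction=[1.0, 0.0, 0.0])
```

Because the weights are normalized, the largest displacement any joint receives stays within the budget `eps`, which is the usual imperceptibility constraint in adversarial attacks on skeleton data.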
Open Access
Article
Article ID: 3781
by Michael Mncedisi Willie
Computing and Artificial Intelligence, Vol.3, No.3, 2025;
Artificial Intelligence (AI) has emerged as a transformative enabler across strategic management, qualitative research, and crowdsourced operational systems. However, adoption is shaped by human judgement, organisational processes, and socio-technical factors. Existing literature often examines AI applications in isolation, overlooking integrative approaches that balance technical capability with human and ethical oversight. This study systematically synthesises evidence to examine AI’s impact across multiple domains, identifying patterns, limitations, and opportunities, and proposes a human-centred framework for responsible deployment. A systematic integrative review was conducted, encompassing peer-reviewed journals, technical reports, and policy documents. Data extraction focused on AI capabilities, human-AI interaction, governance, methodological rigour, and socio-technical integration. Thematic analysis identified recurring patterns and gaps across domains. This study reveals that AI-driven decision-support systems enhance predictive analytics, scenario planning, and resource allocation, yet require managerial expertise, governance, and interpretive oversight to translate insights into actionable strategy. Furthermore, AI-assisted tools improve thematic analysis, coding, and data synthesis efficiency, but human interpretation remains critical to maintain contextual depth, methodological rigour, and ethical integrity. Lastly, platforms such as Waze and Google Maps demonstrate real-time operational value, yet outcomes are contingent on data quality, user engagement, and trust, highlighting the socio-technical dependencies of AI deployment. The Triadic AI Integration Framework (TAIF) operationalises these insights by linking AI capabilities, human interpretation, and organisational processes within a human-centred, ethically governed structure.
Effective AI adoption requires interpretive oversight, socio-technical alignment, and cross-domain integration to maximise strategic, research, and operational impact. Future research should empirically test TAIF, explore socio-technical adaptation, and examine long-term organisational and societal outcomes.
Open Access
Article
Article ID: 3914
by Zaryab Rahman, Mattia Ottoborgo
Computing and Artificial Intelligence, Vol.3, No.3, 2025;
Current paradigms in Self-Supervised Learning (SSL) achieve state-of-the-art results through complex, heuristic-driven pretext tasks like contrastive learning or masked image modeling. We propose a departure from these heuristics by reframing SSL through the fundamental Minimum Description Length (MDL) principle. We introduce the MDL-Autoencoder (MDL-AE), learning visual representations by optimizing a Vector Quantized Variational AutoEncoder (VQ-VAE)-based objective for efficient, discrete compression of visual data. Through rigorous experiments on the Canadian Institute for Advanced Research 10 (CIFAR-10), we demonstrate that this compression-driven objective learns a rich vocabulary of local visual concepts. However, we uncover a critical architectural insight: despite learning a visibly superior, higher-fidelity vocabulary, a more powerful tokenizer fails to improve downstream performance. We show that the MDL-AE learns holistic object parts rather than generic, composable primitives. Consequently, a sophisticated Vision Transformer (ViT) head consistently fails to outperform a simple linear probe on the flattened feature map. This architectural mismatch reveals that the nature of the learned representation dictates the optimal downstream architecture. To validate this, we demonstrate that a dedicated self-supervised alignment task, based on Masked Autoencoding of the discrete tokens, resolves this mismatch and dramatically improves performance, bridging the gap between generative fidelity and discriminative utility. Our work provides a compelling case study on co-designing objectives and downstream architectures.
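The abstract frames SSL as compression under the Minimum Description Length principle: a VQ-VAE-style bottleneck maps continuous encoder outputs to discrete tokens, and the description length of an image is the bit cost of its token sequence. The sketch below shows only those two ingredients — nearest-neighbour quantization and the bit accounting under a uniform prior over codes; it is an illustrative reading, not the MDL-AE training objective itself, and all names are assumptions.

```python
import numpy as np

def quantize(latents, codebook):
    """Nearest-neighbour vector quantization, as in a VQ-VAE bottleneck.
    latents: (N, D) encoder outputs; codebook: (K, D) learned code vectors.
    Returns the quantized vectors and their discrete token indices."""
    d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (N, K)
    idx = d2.argmin(axis=1)                                           # (N,)
    return codebook[idx], idx

def description_length_bits(n_tokens, codebook_size):
    """Cost of transmitting the token sequence under a uniform prior:
    each of the n_tokens tokens costs log2(K) bits."""
    return n_tokens * np.log2(codebook_size)

rng = np.random.default_rng(1)
z = rng.normal(size=(64, 8))      # 64 latent vectors from an encoder
cb = rng.normal(size=(512, 8))    # codebook of K = 512 entries
zq, idx = quantize(z, cb)
bits = description_length_bits(len(idx), 512)  # 64 tokens * 9 bits = 576.0
```

Under this view, shrinking the code's bit cost while keeping reconstruction error low is exactly the MDL trade-off, and the learned token vocabulary is what the downstream probe or ViT head then consumes.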
Open Access
Article
Article ID: 4185
by Dina Darwish, Nehal Khaled Ahmed, Soha Mohamed Abd Allah, Reham Adel Ali
Computing and Artificial Intelligence, Vol.3, No.3, 2025;
Gamification is a relatively new educational concept that can change the way we learn and teach. Gamification refers to the use of game design elements in non-game contexts in order to increase motivation or engagement. In simpler terms, gamification is the process of incorporating rewards, badges, leaderboards, and points into non-game activities, such as learning, to make them engaging and fun. Because it is grounded in motivational psychology, gamification is an effective method of achieving a wide variety of learning outcomes. The paper explores the cognitive and social effects of playing video games, such as increased engagement in learning, reduced anxiety, increased self-esteem, and collaboration among players. Gamification is anticipated to combine with emerging technologies such as virtual reality and artificial intelligence in the future to achieve a more individualized learning experience. For gamification to reach its full potential in increasing equitable global access to learning that is more fun and motivating, a student-centered approach needs to become central to the education system, and teachers need to collaborate to achieve this goal. This paper discusses theoretical frameworks, a literature review, tools and technologies, methods of implementation, challenges, conclusions, and future perspectives related to AI-enhanced gamification for collaborative learning.