A CONCEPTUAL FRAMEWORK FOR AI SELF-HEALING FOR BIAS MITIGATION: A PROACTIVE ARCHITECTURAL PROPOSAL
DOI: https://doi.org/10.61397/tla.v3i2.509

Keywords: AI Bias, AI Self-healing, AI Ethics

Abstract
As the adoption of Artificial Intelligence (AI) continues to expand across various sectors, the issue of bias in training data has emerged as a significant ethical and technical challenge. AI systems are commonly trained using large-scale datasets collected from digital environments such as the internet, social media, and public databases. These datasets often contain historical inequalities, stereotypes, and unbalanced representations of certain demographic groups. Consequently, AI models may unintentionally replicate and amplify these biases in their predictions or decisions. This situation becomes particularly concerning when AI is used in high-stakes domains such as recruitment, healthcare, financial services, and public policy. Most existing bias mitigation strategies rely on reactive approaches, such as adjusting model outputs or modifying datasets after bias has already been identified. While these methods can reduce certain forms of discrimination, they often require significant manual intervention and may not effectively address bias in dynamic data environments. This research proposes a conceptual framework for an AI self-healing system designed to autonomously detect and correct bias in training data before it influences model outcomes. The proposed framework integrates four key modules: Data Monitoring, Bias Analysis, Automated Bias Correction, and a Feedback Loop and Validation mechanism. Together, these components create a continuous workflow that allows the system to identify bias patterns, apply corrective strategies, and verify fairness before data is used for model training. This framework offers a proactive and sustainable approach to bias mitigation while supporting the development of more ethical, robust, and accountable AI systems.
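The four-module workflow described above can be sketched in code. The following is a minimal, illustrative Python sketch, not the authors' implementation: the function names, the representation-parity metric, the oversampling correction strategy, and the `DISPARITY_THRESHOLD` value are all assumptions introduced here to make the Data Monitoring → Bias Analysis → Automated Bias Correction → Feedback Loop and Validation cycle concrete.

```python
# Hypothetical sketch of the proposed self-healing cycle. All names,
# metrics, and thresholds are illustrative assumptions, not the paper's
# actual design.
from collections import Counter

DISPARITY_THRESHOLD = 0.2  # assumed tolerance for deviation from group parity


def monitor(records, group_key):
    """Data Monitoring: tally incoming records per demographic group."""
    return Counter(r[group_key] for r in records)


def analyze(counts):
    """Bias Analysis: flag groups whose share deviates from parity."""
    total = sum(counts.values())
    parity = 1 / len(counts)
    return {g: c / total - parity
            for g, c in counts.items()
            if abs(c / total - parity) > DISPARITY_THRESHOLD}


def correct(records, group_key):
    """Automated Bias Correction: oversample under-represented groups
    (a simple rebalancing stand-in for the corrective strategies)."""
    counts = monitor(records, group_key)
    target = max(counts.values())
    balanced = list(records)
    for group, count in counts.items():
        pool = [r for r in records if r[group_key] == group]
        # Duplicate existing records from this group until it reaches parity.
        balanced.extend(pool[i % len(pool)] for i in range(target - count))
    return balanced


def validate(records, group_key):
    """Feedback Loop and Validation: re-check fairness before training."""
    return analyze(monitor(records, group_key)) == {}


def self_heal(records, group_key):
    """Run the cycle: detect bias, correct it, and verify the batch
    is fair before releasing it for model training."""
    if analyze(monitor(records, group_key)):
        records = correct(records, group_key)
    assert validate(records, group_key), "batch still biased; withhold from training"
    return records
```

For example, a batch skewed 8-to-2 between two groups fails the analysis step, is rebalanced by the correction step, and then passes validation before being handed to training:

```python
records = [{"group": "A"}] * 8 + [{"group": "B"}] * 2
healed = self_heal(records, "group")
# Both groups are now equally represented (8 records each).
```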
License
Copyright (c) 2026 Harianja Harianja, Elgamar Syam, Alawiyah Abd Wahab, Huda Ibrahim, Hapini Awang, Nur Suhaili Mansor, Adi Permana Sidik

This work is licensed under a Creative Commons Attribution 4.0 International License.