Panama Life Health Insurance Case
Abstract
This case study explores how algorithmic bias affected stakeholders at Panama Life Health Insurance following the deployment of an AI-enhanced claims processing system. Using a grounded theory approach, the analysis examines patterns of inequitable outcomes, revealing higher denial rates for minority policyholders and reduced preventive care for women. The crisis stemmed from three main factors: structural bias embedded in historical training data, organizational gaps in artificial intelligence (AI) governance, and cultural assumptions that automation is objective. Institutional vulnerability emerged from unmonitored automation, weak cross-functional communication, and a lack of fairness controls, all of which contributed to discriminatory outcomes. The study draws on organizational change frameworks, such as Kotter's eight-step model and Schein's cultural analysis, to emphasize the importance of aligning culture, incentives, and ethical imperatives within the organization. The case illustrates how grounded theory can illuminate the sociotechnical dynamics that produce inequitable algorithmic outcomes, contributing to the literature on responsible AI governance. To restore ethical integrity, stakeholder trust, and operational accountability, healthcare organizations need to follow a structured path.
KEYWORDS: Algorithmic Bias, Enterprise Risk Management (ERM), Health Equity, Organizational Change
Published
2025-12-24
How to Cite
Jones, H. I. (2025). Panama Life Health Insurance Case. SCIENTIA MORALITAS - International Journal of Multidisciplinary Research, 10(2), 282-297. Retrieved from https://www.scientiamoralitas.com/index.php/sm/article/view/356
Section
Articles

This work is licensed under a Creative Commons Attribution 4.0 International License.