Challenges and barriers to implementing artificial intelligence in healthcare IT

Authors

Kiran Kumar Maguluri
IT Systems Architect, Cigna, Plano, Texas, United States

Synopsis

Implementing AI in healthcare IT faces challenges such as data privacy concerns, integration with existing systems, regulatory hurdles, and lack of standardized protocols. Overcoming these barriers is essential for maximizing AI's potential in improving patient care, enhancing operational efficiency, and ensuring ethical and legal compliance in healthcare systems.

Keywords

Barriers, Data Privacy, Healthcare IT, Integration, Regulation, Standardization

11.1. Introduction

Artificial intelligence (AI) systems have an increasing impact on many fields, including healthcare. AI in healthcare can transform diagnostics, treatment, care delivery, and the management of the healthcare system. To unlock AI's full potential, however, one of the first steps is to build and deploy AI-enabled systems within healthcare information technology (IT) environments. This requires careful study and analysis, because healthcare IT is an intricate environment known for complex and poorly interoperable systems. In this work, we carry out a comparative analysis of the challenges: the AI- and healthcare-related characteristics on which they depend, the ethical considerations they raise, the forms in which they manifest, and the difficulty of overcoming them, relating each where possible to a concrete class of AI system. Challenges to implementing AI-enabled systems in healthcare IT can arise in several domains; when one appears, it may affect not only technical aspects but also the ethical and organizational environment, and this breadth increases the complexity of overcoming it. The purpose of this chapter is to identify the challenges and barriers that may arise when a healthcare provider seeks to adopt AI applications for quality improvement, and to analyze how they can be overcome (Syed, 2023).

11.1.1. Background of AI in Healthcare IT

From their earliest research origins to their incorporation into more recent technologies, artificial intelligence (AI) and its equally important counterpart, machine learning, have touched many facets of healthcare IT over the past few decades. The implementation of machine learning algorithms, deep learning algorithms, and data analytics platforms has changed the rate and scope of technology adoption in healthcare, spanning patient care operations, diagnostic assistance, clinical decision support, research and the discovery of new treatments, as well as the business operations of a health system. With so many different use cases across such a broad range of topics, the following is a breakdown of AI technology use cases in healthcare.

Among the different families of predictive models, predictive analytics is gaining traction on the premise that it can steer clinical decision-making toward the best possible patient outcomes while minimizing harm and using resources efficiently. Of its many possible applications in healthcare, AI is making its most significant inroads in healthcare IT as a tool for electronic health records, telemedicine, teleradiology, and personal health management software. Reflecting broader uses of AI in healthcare, these areas all concern patient interaction, which is deeply interconnected with institutional performance and efficiency. Perhaps this is why it is estimated that a significant percentage of patient interactions will eventually be underpinned by real-time healthcare AI capabilities that personalize communication, treatment, and medicine. Electronic health records are already widely adopted and central to medical care, and their operational efficiency improves when AI capabilities are included, allowing the full benefits of an enterprise-wide healthcare system to be captured. As AI helps integrate siloed patient information and improve operational efficiency, it is gaining a meaningful place within a rapidly evolving healthcare system. Applying AI thoughtfully to both the technology and the people who use it is key to a deep understanding of healthcare and its use of technology (Nampalli, 2023).

Fig 11.1: Challenges for Implementing AI in Healthcare IT.

11.1.2. Significance of the Study

AI is a promising field whose application has increased markedly in recent years. In the healthcare domain, AI has the potential to improve the operational efficiency of the sector, enable faster and more accurate diagnosis and patient matching, significantly augment patients' quality of care, improve patient outcomes, and create entirely new clinical opportunities. AI, particularly its machine learning (ML) subset, is gradually permeating the healthcare IT field. To understand and address the emerging challenges that AI poses for healthcare IT, however, it is important to be aware of the forces shaping the field and defining firms' technological frontiers. Given the possible gains from AI in healthcare, it is important to understand the many challenges and barriers to implementing the technology; learning how to navigate and address them effectively is not only pertinent but essential to informed decision-making among the stakeholders affected by the adoption of these new technologies. Little research probes the technical requirements, the potential challenges of healthcare IT, or the experience of healthcare organizations, and existing analysis rarely makes a compelling case for what will be lost if these challenges are overlooked. Strategies remain underdeveloped, and health information managers often lack a practical toolkit for dealing with the complexities presented by differing levels of knowledge, attitudes, behaviors, and information and data relationships. Failing to name and address these challenges, particularly in healthcare, risks forfeiting growth while discouraging discourse and practical applications of AI. We argue that knowing these challenges, understanding the frequently complex or opaque relationships among them, and offering practical insights that could inform practice are crucial.

11.2. Technological Challenges

Healthcare providers make use of a wide range of health IT systems for management and patient care delivery. AI can potentially be implemented in these systems to assist healthcare professionals in making clinical decisions and, therefore, improving the quality of care. However, there are a number of challenges and barriers to successfully implementing AI in healthcare IT, and it is important to identify these obstacles so that we can begin to tackle them. The barriers and challenges can be divided into technological, ethical, legal, professional, and organizational issues (Danda, 2023). This section focuses on the technological barriers, as they concern the inability to apply clinical AI tools directly to data produced by healthcare IT systems.

One problem arises from the fact that relatively few AI-based clinical decision support systems are fully operational and available directly to healthcare providers. Data sources with which to evaluate the technological aspects of AI for clinical practice are therefore scarce. Furthermore, a large number of IT systems are available, either as standalone systems or as hospital-wide EHR systems provided by different vendors. As a result, technological standards vary greatly, which produces discrepancies in data quality and constrains the evaluation of AI and the validation of algorithms. In general, these difficulties in AI assessment apply to any of the proposed technologies of concern in the area of clinical decision support. Although there are algorithm-specific characteristics that are important to take into account, the dependence on accurate and reliable data is common to most AI systems. That demand is easy to state but much more difficult to satisfy, especially in a healthcare setting. One crucial area of concern is data quality, particularly the representativeness of the data used to train AI algorithms. The time, date, and recording frequency of variables have been shown to vary from one EHR to another, and data registries and their completeness may only be reliable for a given period, field, or geographical region. In the context of AI, these issues in turn raise the challenge of data availability: the vision of expanding the potential applications of AI far exceeds the data sources that currently support it. Several data sources may not be available in electronic format or may not exist yet, and even available data may not be suitable for direct use in AI systems because of data quality issues. Finally, the interoperability of data representation is a technical barrier to using broad training sets across multiple IT systems and sites: many current EHR and electronic data systems do not communicate in a standardized data format, even when they nominally use a common data standard for the records.
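To make the data-representation problem above concrete, the following is a minimal sketch (in Python, with hypothetical site names, column names, and units, none of which come from this chapter) of the kind of harmonization step that becomes necessary when two EHR extracts record the same measurement with different date formats and units.

# A minimal sketch (not from the chapter) of the normalization step described
# above: two hypothetical EHR extracts record the same observation with
# different date formats, units, and column names.
import pandas as pd

# Hypothetical exports from two sites; column names and units are assumptions.
site_a = pd.DataFrame({
    "patient_id": ["A1", "A2"],
    "obs_date": ["2023-05-01", "2023-05-03"],        # ISO dates
    "glucose_mg_dl": [110.0, 95.0],                   # mg/dL
})
site_b = pd.DataFrame({
    "mrn": ["B7", "B9"],
    "recorded": ["01/05/2023", "03/05/2023"],         # day/month/year
    "glucose_mmol_l": [6.1, 5.3],                     # mmol/L
})

# Map both extracts onto one common representation before any model training.
common_a = pd.DataFrame({
    "patient_id": site_a["patient_id"],
    "timestamp": pd.to_datetime(site_a["obs_date"], format="%Y-%m-%d"),
    "glucose_mg_dl": site_a["glucose_mg_dl"],
})
common_b = pd.DataFrame({
    "patient_id": site_b["mrn"],
    "timestamp": pd.to_datetime(site_b["recorded"], format="%d/%m/%Y"),
    "glucose_mg_dl": site_b["glucose_mmol_l"] * 18.016,  # mmol/L to mg/dL
})

training_table = pd.concat([common_a, common_b], ignore_index=True)
print(training_table)

Even in this toy case, the harmonization logic must be written per source system; a shared data standard pushes that work to the point of data capture instead of the point of model training.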

Equation 1: Interoperability Challenges

11.2.1. Data Quality and Availability

When considering the application of AI algorithms in healthcare settings, several challenges must be navigated. This subsection focuses specifically on examining the impact of data quality and availability on health data analytics. Clearly, accurate, comprehensive, and timely data are essential as they serve as the raw material required for most AI prediction engines. If there are limitations in data quality, the interpretation of an AI prediction engine’s results can be misleading at best and directly impact a patient’s treatment and care or operational outcomes in any healthcare organization. Consequently, future developments must address these issues and ensure that we as a society can not only understand them but also manage them.

With regard to establishing the challenges and limitations of AI within healthcare, data quality and availability are fundamental issues. Inaccuracy, incompleteness, and inconsistency in the data lead to errors in analytics and in the results derived from them. Missing data is one of the leading causes of data quality difficulties and is very prevalent in practice. A lack of availability of, and access to, required data may also hinder the application of AI, especially in clinical settings where timely decisions are of the utmost importance (Syed, 2023). Cost can also limit the data that can be accessed, because legacy systems may only be able to capture a limited amount of data while still having to meet the ongoing requirements of clinicians, management, and service users. These limitations increase the risk of bias and reduce what AI can realistically achieve.
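As a simple illustration of the missingness and consistency issues discussed above, the sketch below (Python, with an entirely hypothetical extract and invented column names) profiles the share of missing values per variable and applies one plausibility rule; it is an assumption-laden example, not a prescribed method.

# A minimal sketch showing how missingness and basic consistency problems can
# be surfaced before any AI model is trained on an EHR extract.
import pandas as pd
import numpy as np

# Hypothetical extract; values and columns are invented for illustration.
ehr_extract = pd.DataFrame({
    "age": [54, 61, np.nan, 47, 200],          # 200 is an implausible value
    "systolic_bp": [128, np.nan, 141, np.nan, 119],
    "hba1c": [6.2, 7.1, np.nan, np.nan, np.nan],
})

# Share of missing values per variable: a first-pass completeness report.
missing_report = ehr_extract.isna().mean().sort_values(ascending=False)
print(missing_report)

# A simple plausibility rule as an example of a consistency check.
implausible_age = ehr_extract.query("age < 0 or age > 120")
print(f"{len(implausible_age)} record(s) with implausible age")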

11.2.2. Interoperability

According to the ISO, interoperability is understood as the ability of multiple health information technology systems or components to exchange information and to use the information that has been exchanged. When technological interoperability does not exist, clinicians are often left with fragmented care data that are not available when they need them. This is particularly important when implementing an AI system designed either to process data to support diagnosis or to act autonomously on specific problems. A number of clinicians and health IT organizations now stress the importance of working with industry to improve healthcare by developing AI, while at the same time expressing concerns and skepticism about currently available systems. One of the main barriers facing developers of AI tools, such as imaging diagnostics, is the lack of access to the relevant data, which may be recorded in any one of hundreds of different systems. Without proper interoperability, AI has to be installed separately in the existing systems used within each healthcare organization. In UK healthcare, one such system is Cerner Millennium, which is used, for example, by North Bristol NHS Trust.

The National Data Guardian for Health and Care highlights two types of barriers to interoperability in technology: 'legislative, organizational and cultural barriers' and 'technological barriers'. Technological barriers to interoperability include the range of different standards and protocols for accessing and then handling patient record data, which are worked on by consensus in the NHS Digital Interoperability Board together with users and developers. This matters because increasing data flows within a system built as a web of point-to-point connections is expensive and unreliable. In addition, many of the more advanced algorithms required to deal with very high-dimensional relationships between data already exist in distributed systems in, for example, the securities and insurance sectors, but not yet in patient-based health systems, given the relative immaturity of AI in healthcare. Consensus standards and protocols also act as an incentive for UK-based software companies across all areas to connect to the NHS as an exemplar 'big client', and they provide a framework for inspection by the Care Quality Commission. Concentrating effort on developing IT systems based on interoperable principles, without purchasing new hardware or disposing of old systems, is also a comparatively 'green' way to improve data flow.
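One widely used consensus standard in this space is HL7 FHIR. The following minimal sketch (the resource content is hypothetical and not tied to any particular NHS system) shows why such standards help: once data arrive as a FHIR Observation, the same parsing code can extract the fields an AI pipeline needs regardless of which vendor's EHR produced the record.

# A minimal sketch of vendor-independent parsing of an HL7 FHIR Observation.
# The resource values below are hypothetical.
import json

fhir_observation = json.loads("""
{
  "resourceType": "Observation",
  "status": "final",
  "code": {"coding": [{"system": "http://loinc.org", "code": "2339-0",
                        "display": "Glucose [Mass/volume] in Blood"}]},
  "subject": {"reference": "Patient/example-123"},
  "effectiveDateTime": "2023-05-01T09:30:00Z",
  "valueQuantity": {"value": 110, "unit": "mg/dL"}
}
""")

# Pull out the fields an AI pipeline would typically need, independent of vendor.
coding = fhir_observation["code"]["coding"][0]
value = fhir_observation["valueQuantity"]
print(coding["code"], coding["display"])
print(fhir_observation["subject"]["reference"],
      fhir_observation["effectiveDateTime"],
      value["value"], value["unit"])

In practice, resources would be retrieved through a FHIR API rather than embedded as literal JSON, but the point stands: a shared representation removes the per-vendor parsing burden described above.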

11.3. Ethical and Legal Challenges

Implementing AI in healthcare IT is accompanied by a number of ethical and legal challenges. Patient rights are among the leading considerations, especially regarding the integrity of patient data, since the filtering, evaluation, and interpretation functions of AI all depend on data quality. In resolving the challenges related to patient data in healthcare IT, the question widens to cover classes of ethical problems as well as compliance with legal regulation. Moreover, within model-based decision-making, any AI algorithm that embeds ethical and sociopolitical considerations is subject to strong critique in principle, since decisions grounded in ethical theories become doubtful once they are reduced to rules of action for implementation in clinical practice (Nampalli, 2022). AI-based decision-making amplifies these doubts.

Trust in new technology must be earned, and in a trustworthy environment AI has the potential to be transformational. Healthcare AI carries a general responsibility for maintaining society's trust and for the proper observance of legal safeguards such as patient confidentiality and data protection. The right framework balances incentives for innovation with assurances of conformity with patient rights and public trust in the healthcare system; without it, there is a risk of heavy-handed or poorly targeted regulatory intervention. Among the challenges are identifying who is responsible for AI-related issues and what legal liability is assumed when AI-led decision-making fails. This requires, first, applying ethical criteria to assign responsibility, and then developing a clear yet multi-dimensional ethical framework that helps those using AI find the right balance in each situation. Such a framework will be a dynamic tool, updated and corrected over time, and this ethical perspective then needs to be developed into a robust legal framework that can readily be transposed into various legal environments.

11.3.1. Data Privacy and Security

Data privacy and security, especially in the healthcare domain, are critical concerns. The rapidly increasing collection of healthcare data and the integrated nature of different digital healthcare services underscore the need to secure private and sensitive information. Trust is foundational in healthcare relationships and can be destroyed in seconds by a breach of patient confidentiality or by unauthorized access to patient records. Regulatory guidelines and legal frameworks establish the requisite privacy boundaries around patient data. In digital healthcare applications, where health IT infrastructure and connected services transmit and receive sensitive patient-related information, there is an onus not only to protect the confidentiality of the data but also to preserve its integrity, that is, to ensure it is not tampered with or changed in any unauthorized or malicious manner.

Security breaches continue to proliferate in the digital healthcare environment, from insider threats to external hacking incidents and ransomware. Moreover, some algorithms, particularly AI-based systems, are seen as "black box" systems, making it difficult to understand their outcomes or the trade-offs they make, which complicates efforts to prevent data breaches. Nevertheless, some AI techniques can potentially aid in the security of patient data, for example using AI for anomaly detection when illegitimate access to data or unauthorized modification of data occurs. Further, AI methods for maintaining patient data privacy, such as federated learning or differential privacy, show promise, but they have shortcomings and may raise new privacy issues of their own, since computation is still performed on patient data. Consequently, an in-depth knowledge of these trends, challenges, and potential solutions is needed to inform the development of robust strategies for ensuring security and privacy in the deployment of AI in health IT.
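As a hedged illustration of the anomaly-detection idea mentioned above, the sketch below trains an Isolation Forest on synthetic audit-log features (hour of access, records opened per session, presence of a care relationship; all data and feature choices are assumptions) and flags an out-of-hours bulk access as anomalous. It is a toy example, not a deployed security control.

# A minimal sketch: flagging unusual access to patient records from simple
# audit-log features using an Isolation Forest on synthetic data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Features per access event: hour of day, records opened in the session, and
# whether the user had a prior care relationship with the patient (hypothetical).
normal = np.column_stack([
    rng.normal(13, 3, 500),          # daytime access
    rng.poisson(3, 500),             # a handful of records per session
    np.ones(500),                    # existing care relationship
])
suspicious = np.array([[3.0, 250.0, 0.0]])   # 3 a.m., bulk access, no relationship

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))      # -1 means flagged as anomalous

A production system would of course need richer features, audit trails, and human review of flagged events before any action is taken.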

Overall, data privacy and security are critical factors in the successful adoption of AI in healthcare. Any modern, efficient, and adaptable health AI system needs to be designed with a deep understanding of both barriers. Healthcare systems must strike a balance among availability, usability, integrity, and confidentiality while still providing the security on which trust rests. Only in a well-designed AI-empowered health IT system that resolves these critical issues will end users and healthcare organizations place their trust.

11.3.2. Bias and Fairness

The intersection of AI and healthcare raises critical issues of bias and fairness. A significant portion of this bias stems from the training data used to develop AI systems: because most of the historical data fed into these systems comes from real health settings, the biases embedded in those settings can carry over into AI applications. Historical healthcare practice has treated certain populations unjustly; for example, symptoms of several diseases in women and non-white patients were historically dismissed, and homosexuality remained a diagnostic category for physicians until a little over forty years ago. Failing to account for systematic bias in the data used to develop AI can therefore further deepen injustices that healthcare institutions have long overlooked (Tulasi et al., 2022).

A considerable portion of the health AI literature tends to minimize bias scrutiny by relying on non-patient data on the assumption that it is less biased; this runs the risk of making clinical care even less fair. Bias in healthcare AI can be mitigated in a variety of ways, and different methods exist to measure bias in AI-based systems. Some researchers have proposed using Simpson's paradox to detect whether a particular decision-making system increases disparities among population subgroups. Others propose quantifying fairness through justice theory, which essentially holds that any inequity in society must be corrected to benefit its least advantaged members. These scholars recommend distinguishing fair from unfair AI applications by measuring 'unpreventable inequalities', which contribute in varying degrees to the unfairness of AI-based systems.
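To make one of these measurement ideas concrete, the following minimal sketch (synthetic labels and predictions, hypothetical group labels) computes true-positive rates separately for two patient subgroups and reports the gap between them, one simple disparity measure among the many discussed in the fairness literature.

# A minimal sketch of a subgroup disparity check: comparing true-positive
# rates between two patient groups on synthetic data.
import numpy as np

def true_positive_rate(y_true, y_pred):
    positives = y_true == 1
    return (y_pred[positives] == 1).mean() if positives.any() else float("nan")

# Hypothetical ground truth, model predictions, and group membership.
y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 0, 1, 0, 0, 0, 1])
group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

tpr_a = true_positive_rate(y_true[group == "a"], y_pred[group == "a"])
tpr_b = true_positive_rate(y_true[group == "b"], y_pred[group == "b"])
print(f"TPR group a: {tpr_a:.2f}, group b: {tpr_b:.2f}, gap: {abs(tpr_a - tpr_b):.2f}")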

Fig 11.2: Privacy-preserving artificial intelligence in healthcare.

Some work also focuses on designing the decision process of AI systems to be fair, suggesting a procedural form of fairness that shifts our attention from assessing the behavior of the AI system to the underlying decision-making process that the AI is automating. This work also carries broader implications, suggesting that ethics, intrinsic to any research or development process of any AI system, must be fundamentally procedural. This could translate into the development of a standard to conduct an independently verified ethical review at critical stages in any AI systems used in a healthcare environment. The AI community asserts the criticality of certifying all algorithms, not just those used for significant decision-making involving human welfare. There is an emphasis on bias reporting, which includes increased transparency, monitoring, and continuous evaluation to build trust while maintaining common standards.

11.4. Organizational Challenges

Organizations often develop routines based on established norms, values, and assumptions. Healthcare providers have established their routines based on the norms in the medical world. Some healthcare providers also resist any change, as it can create a disruption and threaten their status. Furthermore, the established norms are often the result of various institutional dynamics in healthcare settings. Therefore, in order to implement AI in healthcare, a better understanding of the cultural and institutional dynamics is needed. Additionally, one of the biggest challenges in realizing these opportunities includes the lack of data science and AI expertise on the part of healthcare professionals. Their level of expertise is still low even though they have recognized the importance of digital healthcare. In countries with significant health worker shortages, capacity limitations may also constrain the ability of physicians and nurses to incorporate and use AI. Moreover, healthcare organizations need to learn how AI can be effectively integrated into traditional workflows. Adding a new system or subsystem, especially one with fundamentally new capabilities and applications, to an organization’s operational environment can require changes in the existing processes or in the organizational structures to which it connects. AI is fundamentally a set of interconnected hardware and software components in which automated learning and decision-making capabilities are embedded. To ensure the effective integration of AI, healthcare organizations may need to examine how best to link new learning capabilities to existing organizational memory. Healthcare providers and medical professionals should also work on improving AI capabilities, either by changing organizational structures or by providing training opportunities to facilitate professional skills development. Patient data also needs to be utilized to enhance research in AI and foster new product development in the health IT industry (Venkata et al., 2022). Healthcare organizations need to gather large amounts of data in order for AI to be effective. Without this data, AI cannot be trained effectively.

11.4.1. Resistance to Change

During the planning and deployment phases, when potential AI use cases in healthcare are being assessed, the psychology of change and people's resistance to it are important factors to identify and address. Resistance to change can be significant at both the individual and the organizational level. Identifying, acknowledging, and resolving these issues can smooth the process and help move from potential use cases to value realization and benefits. Here, the psychological and sociological barriers encountered during these early phases are introduced and explored using qualitative data.

Fig 11.3: Challenges and Barriers to Implementing AI in Healthcare IT and Organizational Challenges

Fear of losing one's job is understandable and is frequently cited as one of the barriers. AI systems and robots are feared for their potential to disrupt the workforce, leading to increased unemployment and potentially destabilizing communities. It is not just about losing jobs, but also about maintaining a comfortable and acceptable standard of living. Given the rapid pace of technological change, it is not clear whether people who lose their jobs to AI could be re-employed in areas where AI cannot yet compete effectively, particularly if substantial retraining would be required. Anxiety about job loss is an understandable human response to this uncertainty, and it is anticipated even among the most highly skilled and educated workers. To compound the problem, it is often unclear whether a specific AI technology causes job loss through faster automation, increased centralization, loss of expertise, or because patients choose AI over real people. Many saw tele-radiology as important and as having the potential to reduce barriers to care; however, all noted the importance of staffing local radiologists to lead the review of patient data, provide expert local contextual interpretation, and be immediately available for hands-on services to other health professionals, thus providing an embedded clinical teacher and supervisor presence in clinics. Success will depend on obtaining stakeholder input and buy-in, capacity building, succession planning, and the existence of a fully integrated provincial digital health record with fully trained and conscientious clerical support. All stakeholders agreed that good diagnostic accuracy from AI, or from any diagnostic method, is important but not sufficient to efficiently and effectively connect people with personal health professionals and provide the added value of a person-to-person relationship.

11.4.2. Lack of AI Expertise

One central challenge in the implementation of AI in healthcare is the limited number of healthcare employees with AI expertise. Many healthcare professionals are not trained to reason and solve problems using quantitative data, and are therefore unable to understand or use AI tools no matter how user-friendly those tools are. Although larger, more sophisticated healthcare organizations may employ IT professionals and data scientists with AI expertise to partner with, smaller and more remote organizations that serve many populations may not be able to afford such employees. The implementation of AI in healthcare is likely to be hindered by this shortage of qualified staff: AI technologies may not be used to their full capacity, and their maximum benefit may not be realized (Pandugula et al., 2024).

With the healthcare industry experiencing an ever-increasing shortage of employees, cross-training or educating the current workforce in the use of AI may prove to be one of the most practical solutions. The challenge is not finding an AI solution, but rather finding healthcare professionals who understand the scope of the healthcare problem and how to integrate AI technology effectively into the existing healthcare system to address it. The convergence of AI, IT, and advanced healthcare has given birth to the new fields of clinical informatics, bioinformatics, physiomics, predictive medicine, mobile health, and digital therapeutics. Collaboration and partnership between healthcare providers and IT professionals are needed to make AI technology more meaningful and effective in the healthcare sector.

11.5. Conclusion

We started with the research question: what are the most significant challenges and barriers impeding healthcare organizations and health IT vendors from implementing AI in healthcare IT? Across a range of stakeholder groups and areas of expertise, we identified twelve subthemes for challenges and thirteen subthemes for barriers. This synthesis of different stakeholders' and experts' views presents a broad-ranging, multilateral study of the challenges and barriers that need to be addressed to get healthcare AI solutions into use in the healthcare system (Kalisetty et al., 2023).

The main finding is that the challenges to implementing and using AI in healthcare IT are multifaceted and complex, with subthemes covering technological, organizational, ethical, and social factors. The challenges are interconnected and need to be addressed across multiple dimensions. Given their range and complexity, healthcare organizations and IT vendors must act to address them if they are to take full advantage of the benefits that AI can bring to healthcare delivery. Of course, healthcare is a complex, multidimensional system that cannot be reduced to a technological fix; it needs input and change management from a range of stakeholders for AI to be used responsibly, and the situations, challenges, and barriers in every health and care system are unique. Further research is required to explore which barriers and challenges are most significant in a given context, and how and why they evolve over time.

11.5.1. Summary of Key Challenges

This chapter identified a range of challenges and barriers to the implementation of AI in healthcare IT. No individual challenge stands in isolation; rather, the challenges are interconnected. In summary, data quality, interoperability, and regulatory compliance are three key technological challenges for AI implementation. Data quality is closely related to data privacy and security, indicating the interconnectedness of ethical and technological challenges, and data bias and unfairness relate to both ethical and technical challenges. Collectively, these challenges reflect the complexity of AI implementation in healthcare IT. The rapid evolution of artificial intelligence underlines the need for care and caution as practitioners move from development in the lab to use in the real world. Change is urgently required to ensure that concerns can be addressed and that systems will make the NHS more efficient and deliver the outcomes the public needs. The most consequential challenges identified, broken down by dimension, are those most likely to slow down or halt efforts to implement AI in healthcare IT, and it is these that potential solutions should focus on. Although organizations have different contexts and may prioritize some of the challenges identified in this work over others, the single most important factor for successfully implementing AI in the healthcare workplace appears to be organizational commitment, support, and resources for addressing challenges and facilitating implementation.

Fig 11.4: Challenges and Barriers to Implementing AI in Healthcare IT.

Equation 2: Bias in AI Models

11.5.2. Recommendations for Future Research

This section offers several recommendations for future research arising from this study. First, research should explore innovative solutions for addressing the low quality and quantity of data, detailing, comparing, and specifying these methods for both native and legacy systems so that the wide range of healthcare IT systems currently in place is accounted for. We also recommend more research into how to encourage the adoption of best practices in interoperability, as well as further work modeling the pathways and workflows in integrated care contexts and how these might be utilized in AI systems. Specifying and comparing ethical principles and normative requirements for AI in healthcare is another high-priority recommendation; beyond this, further work could be conducted on learning health systems. There are also high-priority recommendations on how to encourage best practices in the management of organizational change so as to address resistance to AI adoption in the workplace; these approaches should be detailed, compared, and made directly relevant to those responsible for implementing large-scale AI programs. Additionally, further research is recommended into how to evaluate AI for the support of diagnosis, preliminary examination, and initial treatment. All of these points build directly on the research in the current study, and it is hoped that addressing them in the future will strengthen the positive effects of the healthcare AI systems being developed (Sondinti et al., 2023).

Organizational Change and Leadership: Research should be conducted to model and determine the features that make a change process off-putting or inviting, covering recruitment, implementation, resourcing, user support, and continuous learning, with a particular focus on AI implementation. Areas in need of further inquiry identified by the study include (1) understanding and addressing enablers of, and barriers to, successful AI implementation and use, both practically and ethically; (2) how to develop the organizational capabilities required to make best practices in AI implementation routine, even when they challenge some users or decision-makers; (3) how to foster understanding of, and trust in, the outputs and recommendations of complex and opaque AI tools, and then how to encourage the use of these tools in care provision; (4) how to measure progress with an AI program, in other words how to assess the effectiveness of an AI tool or system, both its anticipated and unanticipated outcomes, and how assessments of performance can be used to refine the AI in the future; and (5) the design of learning health systems, which build inquiry and data analysis into the natural course of care delivery, where learning is used to create value for society, patients, and providers.

References

Danda, R. R. (2023). Innovations in Agricultural Machinery: Assessing the Impact of Advanced Technologies on Farm Efficiency. In Journal of Artificial Intelligence and Big Data (Vol. 2, Issue 1, pp. 64–83). Science Publications (SCIPUB). https://doi.org/10.31586/jaibd.2022.1156

Kalisetty, S., Pandugula, C., & Mallesham, G. (2023). Leveraging Artificial Intelligence to Enhance Supply Chain Resilience: A Study of Predictive Analytics and Risk Mitigation Strategies. In Journal of Artificial Intelligence and Big Data (Vol. 3, Issue 1, pp. 29–45). Science Publications (SCIPUB). https://doi.org/10.31586/jaibd.2023.1202

Nampalli, R. C. R. (2022). Neural Networks for Enhancing Rail Safety and Security: Real-Time Monitoring and Incident Prediction. In Journal of Artificial Intelligence and Big Data (Vol. 2, Issue 1, pp. 49–63). Science Publications (SCIPUB). https://doi.org/10.31586/jaibd.2022.1155

Nampalli, R. C. R. (2023). Moderlizing AI Applications In Ticketing And Reservation Systems: Revolutionizing Passenger Transport Services. In Journal for ReAttach Therapy and Developmental Diversities. Green Publication. https://doi.org/10.53555/jrtdd.v6i10s(2).3280

Pandugula, C., Kalisetty, S., & Polineni, T. N. S. (2024). Omni-channel Retail: Leveraging Machine Learning for Personalized Customer Experiences and Transaction Optimization. Utilitas Mathematica, 121, 389-401.

Sondinti, L. R. K., Kalisetty, S., Polineni, T. N. S., & Abhireddy, N. (2023). Towards Quantum-Enhanced Cloud Platforms: Bridging Classical and Quantum Computing for Future Workloads. In Journal for ReAttach Therapy and Developmental Diversities. Green Publication. https://doi.org/10.53555/jrtdd.v6i10s(2).3347

Syed, S. (2023). Advanced Manufacturing Analytics: Optimizing Engine Performance through Real-Time Data and Predictive Maintenance.

Syed, S. (2023). Big Data Analytics In Heavy Vehicle Manufacturing: Advancing Planet 2050 Goals For A Sustainable Automotive Industry.

Tulasi Naga Subhash Polineni, Kiran Kumar Maguluri, Zakera Yasmeen, & Andrew Edward. (2022). AI-Driven Insights Into End-Of-Life Decision-Making: Ethical, Legal, And Clinical Perspectives On Leveraging Machine Learning To Improve Patient Autonomy And Palliative Care Outcomes. Migration Letters, 19(6), 1159–1172. Retrieved from https://migrationletters.com/index.php/ml/article/view/11497

Venkata Obula Reddy Puli, & Kiran Kumar Maguluri. (2022). Deep Learning Applications In Materials Management For Pharmaceutical Supply Chains. Migration Letters, 19(6), 1144–1158. Retrieved from https://migrationletters.com/index.php/ml/article/view/11459
