Policies and regulations of artificial intelligence in healthcare, finance, agriculture, manufacturing, retail, energy, and transportation industry

Authors

Nitin Liladhar Rane
Vivekanand Education Society's College of Architecture (VESCOA), Mumbai, India
Mallikarjuna Paramesha
Construction Management, California State University, Fresno
Jayesh Rane
Pillai HOC College of Engineering and Technology, Rasayani, India
Suraj Kumar Mallick
Shaheed Bhagat Singh College, University of Delhi, New Delhi 110017, India

Synopsis

The rapid evolution of artificial intelligence (AI) technologies has created a need for standardized policies and regulations that ensure safety, accountability, and equity across industries. This study reviews the field to identify the AI policy and regulatory frameworks currently being implemented and the most pressing issues they raise. AI is entering - or in many cases already embedded in - everything from healthcare to finance, and governments and regulators around the world are now weighing the need for innovation against the requirement for oversight. Notable programmes include the European Union's AI Act, which classifies AI systems by risk level and imposes more stringent requirements on high-risk applications, and emerging guidelines in the United States centred on transparency, accountability, and bias mitigation. The analysis reinforces the importance of public-private partnerships (PPPs) and a shared approach to formulating agile and future-proof regulations. Up-and-coming trends include regulatory sandboxes, which allow AI innovations to be piloted in a controlled environment, and AI ethics boards that steer corporate AI practices. The study also reflects on the effects these regulations may have on innovation and market dynamics, suggesting that regulation, although difficult, is necessary to support public trust and secure the sustainable development of AI technologies.

Keywords: Policy, Regulation, Artificial intelligence, Decision making, Decision support system, Machine learning, Deep learning

Citation: Rane, N. L., Paramesha, M., Rane, J., & Mallick, S. K. (2024). Policies and regulations of artificial intelligence in healthcare, finance, agriculture, manufacturing, retail, energy, and transportation industry. In Artificial Intelligence and Industry in Society 5.0 (pp. 67-81). Deep Science Publishing. https://doi.org/10.70593/978-81-981271-1-2_4

4.1 Introduction

The rapid pace of development in artificial intelligence (AI) technologies has driven profound changes across industries, and this roll-out is naturally accompanied by a co-evolving policy and regulatory landscape (Wischmeyer, & Rademacher, 2020; Hoffmann-Riem, 2020; de Almeida et al., 2021). As AI integrates into the healthcare, financial, manufacturing, and transportation sectors, among others, so does the need for solid regulatory frameworks to oversee its deployment and reduce the associated risks (Erdélyi, & Goldsmith, 2018; Lauterbach, 2019; Taeihagh, 2021). The twin challenge for policymakers is to encourage innovation at an early stage while ensuring that AI systems meet ethical and safety standards and engender public trust (Wischmeyer, & Rademacher, 2020; Hoffmann-Riem, 2020; Paramesha et al., 2024a). Responding creatively to both challenges requires a good grasp of AI's technological possibilities as well as the socio-economic effects of its deployment. Historically, regulatory environments around AI have been reactive, setting standards in response to specific issues rather than proactively. These approaches have led to a fragmented regulatory landscape with high inconsistency across jurisdictions and sectors. International organizations and national governments have recently undertaken various initiatives to clarify AI policies, recognizing the urgency of harmonized and forward-looking regulatory frameworks. Such frameworks should address central issues including data privacy, algorithmic transparency, accountability, and the ways AI may deepen pre-existing inequalities (Manheim, & Kaplan, 2019; Capraro, et al., 2024; Rane et al., 2024a). Research on the policy and regulation of AI within the academic community has accelerated, producing an extensive body of literature across legal studies, ethics, economics, and technology (Cath, 2018; Wong, 2021; Paramesha et al., 2024b; Rane et al., 2024b). Researchers have probed the impact of different kinds of regulation on AI innovation, and the practicality of various policy approaches, using a range of methodologies. This chapter contributes to the ongoing discourse by conducting a careful literature review on AI policy and regulation in industry.

Contributions of the present study:

  • This study provides a comprehensive synthesis of the literature available on AI policy and regulation, highlighting themes of broad consensus and divergence.
  • This study uses sophisticated text-mining techniques to identify popular topics under discussion and their interrelations, attaining a granular understanding of the current discourse.
  • This study uses statistical methods to identify distinct clusters of related studies, pointing out emerging trends likely to receive significant attention in the future.

4.2 Methodology

This research approaches policy and regulation issues relevant to AI in industry through a comprehensive review of the literature. Academic articles, policy papers, and industry reports were identified through databases such as Google Scholar, Scopus, and Web of Science. The literature search was guided by keywords such as "artificial intelligence," "AI policy," "AI regulation," "industrial AI," and "AI governance." The resulting literature data were then fed into the bibliometric software VOSviewer for co-occurrence analysis, establishing how often keywords appear together and how strongly they relate. Cluster analysis of the co-occurrence network further divides the literature into distinct groups based on thematic similarity. Each cluster represents a particular aspect of AI policy and regulation and allows detailed examination of subtopics such as ethical issues, regulatory frameworks, and industry-specific challenges. This methodological approach provides a systematic and structured review of the existing body of knowledge, deepening understanding of the complex nature of AI governance in industry. A minimal sketch of this workflow is shown below.
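To make the workflow concrete, the following Python sketch reproduces the core of the analysis on a toy corpus: it counts keyword co-occurrences, builds a weighted network, and detects thematic clusters. The keyword lists are hypothetical stand-ins for records exported from Scopus or Web of Science; VOSviewer performs equivalent steps at scale.

```python
# Minimal sketch of keyword co-occurrence and cluster analysis (toy data).
from itertools import combinations
from collections import Counter

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Each inner list holds the author keywords of one article (hypothetical data).
records = [
    ["artificial intelligence", "ai policy", "ai governance"],
    ["artificial intelligence", "machine learning", "decision making"],
    ["ai regulation", "ai policy", "data privacy"],
    ["machine learning", "decision support system", "decision making"],
]

# Count how often each pair of keywords appears in the same record.
pair_counts = Counter()
for keywords in records:
    for a, b in combinations(sorted(set(keywords)), 2):
        pair_counts[(a, b)] += 1

# Build a weighted co-occurrence network and detect thematic clusters.
graph = nx.Graph()
for (a, b), weight in pair_counts.items():
    graph.add_edge(a, b, weight=weight)

clusters = greedy_modularity_communities(graph, weight="weight")
for i, cluster in enumerate(clusters, start=1):
    print(f"Cluster {i}: {sorted(cluster)}")
```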

 

4.3 Results and discussions

Co-occurrence and cluster analysis of the keywords

The broader structure of the network demonstrates that "artificial intelligence" is central to this research frontier. Various nodes semantically related to the central node are connected to it, as shown in Fig. 4.1, illustrating the applications of AI and its interdisciplinary nature. An important cluster in the network deals with "decision making" and "decision support system". This cluster is tightly integrated with AI and covers a bundle of related concepts such as machine learning, deep learning, neural networks, and reinforcement learning. These connections illustrate why AI is critical to decision-making processes in a wide array of industries: when machine learning and deep learning algorithms are integrated into decision support systems, the accuracy of predictions and the efficiency of problem-solving increase manifold. Another large and significant group relates to ethics, privacy, and regulation. This cluster demonstrates the increasing attention to ethics in AI, data privacy, and regulation, and the links between these terms and AI reflect the ongoing dialogue and inquiry into making AI technologies more trustworthy.

Fig. 4.1 Co-occurrence analysis of the keywords in literature

Another prominent cluster centres on governance and the social implications of AI, as evidenced by the presence of terms such as "public policy", "policy making", and "laws and legislation". The diagram also connects "sustainable development" and "energy policy" to artificial intelligence through themes including energy efficiency, energy consumption, and sustainability. These connections are cases of how AI can make a real impact on sustainability by helping to save energy and supporting green policy; integrating AI into energy management systems can go a long way towards addressing sustainability objectives. Another notable bundle of topics sits at the intersection of "healthcare policy", "public health", and "covid-19". Terms in this cluster relate to healthcare delivery, information processing, and clinical knowledge, all relevant to artificial intelligence. The prevalence of such terminology reflects how prominently AI has come to the forefront in the healthcare sector, even more so in the era of the COVID-19 pandemic: AI has advanced diagnostic tools, streamlined the management of health data, and provided invaluable support to public health projects.

The network diagram additionally shows a cluster around genetics, gene expression, and computational biology, focused on AI applications in biological and medical research. The relationships between these keywords and "artificial intelligence" reinforce the role of AI in elucidating genetic information and its regulation: AI-based algorithms help make sense of complex biological data and sit close to communities such as genomics and personalized medicine. In addition, the presence of the terms "internet," "social media," and "students" in the network hints at the rapid spread of AI into digital technology and education, where AI technologies have begun to change how information is stored and shared, affecting most areas of society. Overall, the network diagram reveals the significance of cross-disciplinary research in AI. The interactions among fields such as ethics, healthcare, energy policy, and computational biology clearly situate AI in its broadest contexts alongside its practical applications. A single-discipline approach will not work for the multi-faceted problems that AI presents; the interdisciplinary approach is the one our societies need to adopt.

 

Policy and regulations of artificial intelligence in industry

Global regulatory landscape

The regulatory landscapes of AI differ from country to country and region to region because they reflect very different legal, cultural, and economic environments. The proposed AI Act establishes a broad framework for regulating AI within the European Union. The legislation categorizes AI applications by risk - as minimal, limited, high, or unacceptable - and applies more stringent requirements to the high-risk ones, including strict testing, documentation, and transparency requirements to reduce possible harms. In the United States, AI regulation is more sector-specific and less centralized. Several federal agencies, such as the Food and Drug Administration (FDA) and the Federal Trade Commission (FTC), are responsible for AI applications in their domains, and the National Institute of Standards and Technology (NIST) is developing a voluntary framework to guide the development and deployment of AI, outlining principles such as transparency, fairness, and accountability. China has pursued a more hands-on approach to AI policymaking. This approach is embodied in China's New Generation Artificial Intelligence Development Plan, which sets ambitious targets for AI leadership by 2030, coupled with regulations governing data security, algorithmic transparency, and ethical standards.
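The AI Act's risk-based logic can be illustrated with a short sketch. The tier names follow the Act's published categories, while the example applications and the one-line obligation summaries below are simplified, illustrative placeholders rather than legal definitions.

```python
# Illustrative sketch of a risk-tier classification in the spirit of the
# EU AI Act. Applications and obligation texts are simplified placeholders.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, documentation, human oversight"
    LIMITED = "transparency obligations (e.g., disclose AI interaction)"
    MINIMAL = "no additional obligations"

# Hypothetical mapping from application type to risk tier.
EXAMPLE_TIERS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "credit scoring": RiskTier.HIGH,
    "medical diagnosis support": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def obligations_for(application: str) -> str:
    """Return the (simplified) obligations attached to an application's tier."""
    tier = EXAMPLE_TIERS.get(application, RiskTier.MINIMAL)
    return f"{application}: {tier.name} risk -> {tier.value}"

for app in EXAMPLE_TIERS:
    print(obligations_for(app))
```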

Ethical and safety considerations

AI policy-making necessarily grapples with ethical concerns (Vesnic-Alujevic et al., 2020; de Almeida et al., 2021; Paramesha et al., 2024c; Rane et al., 2024c). The ability of AI to perpetuate existing biases has brought increased scrutiny to algorithmic fairness, and regulations now often require developers to detect and mitigate bias in AI systems. Under the EU's AI Act, for example, periodic evaluations and documentation of activities are required to ensure conformance with ethical standards. Another significant concern is safety, particularly for high-stakes applications such as autonomous vehicles and healthcare. Regulatory bodies mandate stringent testing and validation so that AI systems perform reliably under varied scenarios and conditions. The FDA, in its guidance on AI-based medical devices, emphasizes the need for continued monitoring through post-market surveillance, so that opportunities for improvement can be identified and acted upon in a timely manner.
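As a concrete example of one bias check that such evaluations often include, the following sketch computes the demographic parity difference - the gap in positive-outcome rates between two groups. The decisions and group labels are hypothetical.

```python
# Minimal sketch of a demographic parity check on hypothetical decisions.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive prediction rates between group 0 and group 1."""
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return abs(rate_0 - rate_1)

# Hypothetical binary decisions (1 = approve) and a protected attribute.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

gap = demographic_parity_difference(y_pred, group)
print(f"Demographic parity difference: {gap:.2f}")  # 0.60 vs 0.40 -> 0.20
```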

Data privacy and security

Since AI systems often require vast amounts of personal data, data privacy and security have become central to AI regulation (Tschider, 2018; Saura et al., 2022; Rane et al., 2024d). The European Union sets high standards for data protection under the General Data Protection Regulation (GDPR), which governs how AI systems may collect, store, and process personal data. Compliance with the GDPR requires effective data management practices, impact assessments, and freely given user consent from any organization. In the United States, the California Consumer Privacy Act (CCPA) sets a similar benchmark for data privacy and has likewise affected AI. These laws give consumers control over their data, from access and deletion rights to opting out of the sharing of personal data. A minimal sketch of consent-aware data handling follows.
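In the spirit of these regimes, the sketch below releases records only when the data subject has consented to the requested processing purpose; the record fields and purpose names are hypothetical.

```python
# Minimal sketch of consent- and purpose-aware data handling (hypothetical).
from dataclasses import dataclass, field

@dataclass
class PersonalRecord:
    subject_id: str
    data: dict
    consented_purposes: set = field(default_factory=set)

def collect_for_purpose(records, purpose):
    """Yield only records whose subjects consented to this purpose."""
    for record in records:
        if purpose in record.consented_purposes:
            yield record.subject_id, record.data

records = [
    PersonalRecord("u1", {"age": 34}, {"model_training", "analytics"}),
    PersonalRecord("u2", {"age": 52}, {"analytics"}),
]

# Only u1 consented to model training, so only u1 is released.
print(list(collect_for_purpose(records, "model_training")))
```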

Accountability and transparency

AI accountability and transparency are prerequisites if these systems are to earn public trust (de Almeida et al., 2021; Novelli et al., 2023; Rane et al., 2024e). New directives from regulators call for AI developers to clearly explain how their systems make decisions; this is critical in areas such as banking, where AI-driven decisions affect people's personal lives. The notion of "explainable AI", or XAI for short, is gaining ground, with regulatory frameworks putting their weight behind AI systems whose output is understandable and interpretable (Ebers, 2020; Zednik, 2021; Rane et al., 2024f). This helps stakeholders, end users, and regulators understand the reasoning behind AI decisions and allows for improved control and accountability. One widely used explanation technique is sketched below.
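The sketch applies permutation importance, a model-agnostic explanation technique that measures how much shuffling each input feature degrades the model's score. It uses scikit-learn's implementation on synthetic data, purely for illustration.

```python
# Minimal sketch of a model-agnostic explanation via permutation importance.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic, illustrative data: 500 samples with 4 features.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature 10 times and record the average score drop.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```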

International cooperation and standardization

Because AI is global in nature, it requires international collaboration to harmonize regulatory standards (Erdélyi, & Goldsmith, 2018; de Almeida, 2021; von Ingersleben‐Seip, 2023; Rane et al., 2024g). Organizations such as the International Organization for Standardization (ISO) and the Institute of Electrical and Electronics Engineers (IEEE) are working to develop global AI standards. These would provide interoperability across borders, with safety and ethical considerations on a par, promoting efficient international trade and cooperation. The OECD AI Principles, adopted by dozens of countries, offer a framework for the responsible development and use of AI. These principles set human-centred values, transparency, robustness, and accountability as guiding principles for national policy while offering direction for international cooperation.

Industry-specific regulations

Challenges and risks associated with AI differ across industries, which calls for different regulatory approaches. In the health sector, regulation centres on patient safety, protection of health data, and efficacy. The FDA has opened a regulatory pathway for AI-based medical devices in which manufacturers are expected to provide evidence of safety and effectiveness and to commit to post-market surveillance that tracks device performance over time. For autonomous vehicles in the automotive sector, the focus is on safe and secure operation. Standards for testing and deployment therefore rest on rigorous simulations and real-world trials to prove that vehicles can operate safely in different environments. The regulatory bodies ensuring this include, among others, the National Highway Traffic Safety Administration (NHTSA) in the US and the European New Car Assessment Programme (Euro NCAP).

Challenges in AI regulation

Probably one of the most critical difficulties in AI policy and regulation is ensuring transparency and accountability (Novelli et al., 2023; Taeihagh, 2021; Paramesha et al., 2024d). AI systems, particularly those using machine learning, are often "black box" mechanisms that obscure how decisions are made. This opacity is especially problematic in domains such as criminal justice, health, and finance, where AI is already in use; without clear explanations, affected persons cannot contest or appeal AI decisions, which erodes trust in these systems. Policymakers should therefore establish frameworks that bind AI developers to make their systems transparent and to clarify how decisions within them are made. Another critical challenge is bias and fairness in AI (Lauterbach, 2019; de Almeida et al., 2021). The large training datasets behind most AI systems can be imbued with historical biases that mirror prejudices already present in society. If such biases are not kept in check, the AI algorithm may amplify them and cause discriminatory effects; AI systems for hiring, lending, and law enforcement, for example, have been shown to treat certain demographic groups worse than others. It is essential to develop regulations that make fairness and equality obligatory in AI systems. This entails not only technical standards for data handling and algorithmic fairness but also diversity within the teams that develop these technologies, so that a broader perspective anchors AI design and implementation.

Privacy is another prominent area in AI policy debates (Manheim, & Kaplan, 2019; Saura et al., 2022; Paramesha et al., 2024e; Rane et al., 2024h). Most AI systems require vast amounts of data to run effectively, raising concerns over the collection, storage, and usage of personal information. High-profile data breaches and cases of unauthorized use have heightened public sensitivity around privacy. Policymakers need to balance the benefits of AI-driven data analysis against the need to protect individual privacy rights. This means putting stringent data protection policies in place, such as the European Union's General Data Protection Regulation, giving individuals increased control over personal data and imposing heavy sanctions for non-compliance. The rapid pace at which AI develops is another regulatory challenge. Traditional regulatory processes, often slow and deliberative, find it hard to keep up with the fast-moving AI landscape. This mismatch can produce regulations that are outdated on arrival or that hinder innovation by laying burdensome requirements on emerging technologies. Policymakers must develop more agile regulatory approaches that can adapt to technological advances, for instance flexible regulatory sandboxes within which new AI applications can be tried out under a relaxed regulatory regime.

Other critical concerns are the economic impacts of AI and the future of work. AI can disrupt labour markets very quickly by automating tasks hitherto performed by human beings. While AI can increase productivity and spur economic growth, it also raises the prospect of job displacement and widening economic inequality. Managing this transition will require policymakers to promote education and training programs that equip workers with skills suited to an AI-shaped economy, and to develop social safety nets for workers affected by job displacement. Moreover, the global nature of AI development and deployment complicates policy and regulation. The development, application, and operation of AI technologies are international business activities, creating a need for global cooperation and harmonization of regulations. Disparate national solutions can produce regulatory fragmentation, which imposes complex compliance costs on multinational businesses and risks a race to the bottom on standards. International organizations and coalitions must therefore work toward coherent frameworks that ensure consistent standards of practice worldwide. Ethical considerations must also inform AI development and use (Roberts et al., 2021; Vesnic-Alujevic et al., 2020; Rane et al., 2024i). As AI systems become increasingly autonomous, questions arise about how they make moral and ethical decisions; for autonomous vehicles, for instance, decisions about the actions an AI must take in life-critical situations give rise to complex ethical dilemmas. Policymakers must engage ethicists, technologists, and the public in creating guidelines to address these moral questions.

Another area of serious concern is the security risks associated with AI (Hoffmann-Riem, 2020; Fortes et al., 2022). AI is vulnerable to many types of attack, including data poisoning, adversarial attacks, and model theft, which can undermine its integrity and reliability; in critical infrastructure such as health, energy, or finance, this can have severe implications. Policymakers should set high security standards and promote best practices to protect AI systems from malicious activity. A further problem is the control and governance of AI. With researchers working to make AI ever more powerful and capable, ensuring that future AI systems remain under human control and that their actions align with human values will be imperative. This includes human control mechanisms with distinct lines of accountability and fail-safes to prevent AI from acting outside its intended parameters. A toy illustration of one such attack class follows.
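The sketch implements the fast gradient sign method (FGSM) against a simple logistic model in plain NumPy; the weights and input are hypothetical, and the point is only how a small crafted perturbation can flip a decision.

```python
# Minimal FGSM sketch: a small adversarial perturbation flips a prediction.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([1.5, -2.0, 0.5])   # hypothetical trained weights
b = 0.1
x = np.array([0.2, -0.4, 0.9])   # a correctly classified input, label y = 1
y = 1.0

p = sigmoid(w @ x + b)
# Gradient of the cross-entropy loss with respect to the input: (p - y) * w.
grad_x = (p - y) * w

# FGSM step: move the input in the direction that increases the loss.
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad_x)

print(f"original prediction:    {p:.3f}")                    # ~0.839
print(f"adversarial prediction: {sigmoid(w @ x_adv + b):.3f}")  # ~0.413
```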

Table 4.1 provides a structured overview of the various costs and benefits associated with regulatory frameworks, as well as their ultimate impacts on society. The cost side comprises several categories: direct costs, covering direct compliance and hassle costs; enforcement costs, related to monitoring, adjudication, and enforcement activities; and indirect costs, relating to broader compliance costs and other more general costs. On the benefits side, the table separates direct benefits, such as improved well-being and market efficiency, from indirect benefits comprising wider macroeconomic effects and other non-monetizable benefits. The ultimate impacts of regulation show up in well-being, happiness, life satisfaction, environmental quality, economic growth, living standards, and employment. Policymakers need this comprehensive analysis to appreciate the full regulatory implications, so that informed decisions can balance costs against societal benefits; a toy illustration of that balancing exercise follows the table.

Table 4.1 Structured overview of the various costs and benefits associated with regulatory frameworks, as well as their ultimate impacts on society

Regulatory costs
  • Direct costs: direct compliance costs, hassle costs
  • Enforcement costs: monitoring, adjudication, enforcement
  • Indirect costs: indirect compliance costs, other costs

Regulatory benefits
  • Direct benefits: improved well-being, market efficiency
  • Indirect benefits: indirect compliance benefits, wider macroeconomic effects, other non-monetizable benefits

Ultimate impacts
  • Well-being, happiness, and life satisfaction
  • Environmental quality
  • Economic growth and living standards
  • Employment
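As a toy illustration of the balancing exercise the table implies, the sketch below sums monetized regulatory costs and benefits into a net figure. All amounts are hypothetical, and as the table notes, some benefits resist monetization entirely.

```python
# Toy cost-benefit tally over the categories of Table 4.1 (hypothetical amounts).
costs = {
    "direct compliance": 120.0,   # e.g., conformity assessments
    "enforcement": 40.0,          # monitoring and adjudication
    "indirect compliance": 25.0,
}
benefits = {
    "improved well-being": 90.0,
    "market efficiency": 70.0,
    "wider macroeconomic effects": 60.0,
}

net_benefit = sum(benefits.values()) - sum(costs.values())
print(f"Total costs:    {sum(costs.values()):.1f}")
print(f"Total benefits: {sum(benefits.values()):.1f}")
print(f"Net benefit:    {net_benefit:.1f}")
```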

 

Future directions in AI policy and regulations

One of the significant areas on which future AI policy must focus is setting clearly defined ethical guidelines and frameworks (Petit, 2017; Wong, 2021). Since AI systems are increasingly integrated into daily life, it is essential to ensure they function in ways that respect human rights and societal values. Ethical guidelines for AI are being developed in response to issues of bias, privacy, and the transparency of AI decision-making processes. Beyond ethics, strong safety standards are needed in AI development (Reed, 2018; de Almeida et al., 2021). Advanced AI carries potential risks through unintended consequences and malicious use, which demands that comprehensive safety protocols be put in place. Policymakers increasingly recognize the need for standards ensuring that AI systems are robust, reliable, and secure, minimizing risks while maximizing benefits.

The economic implications of AI comprise another critical area shaping future policy directions (George, 2023; Bhat, 2023). AI has the capacity to improve productivity immensely and boost economic growth; however, the risks of job displacement and widening economic inequality are considerable. In that regard, governments and international organizations are assessing policies to secure fair growth, particularly by upskilling and reskilling workers for the new roles these shifts create. There is also pressure for policies that support innovation while ensuring AI's economic benefits are widely shared, including funding for AI research and development, incentives for ethical AI startups, and regulation against monopolistic practices in the technology industry.

Data governance forms part of the core regulation of AI, since AI systems depend on vast volumes of data. Future AI policy will further tighten rules around data collection, storage, and usage to preserve privacy and avoid data monopolies. Another relevant trend in AI policy is the increasing emphasis on international cooperation and standardization (von Ingersleben‐Seip, 2023; Laux, et al., 2024). AI development is not confined to one country; it is a global effort, and divergent regulations across countries create problems for international collaboration and innovation. Organizations are developing international guidelines and standards for AI, working to harmonize rules across borders so that AI technologies can be developed safely and beneficially worldwide. In the future, more international treaties and agreements on AI governance are likely, leading to a more converged approach to the global challenges AI poses.

Future policy directions will also be shaped by the role of AI in critical sectors such as healthcare, finance, and transportation (Rane, 2023). These sectors have specific regulatory requirements to ensure that the development and application of AI guarantee safety, reliability, and fairness. In healthcare, for example, the use of AI for diagnostics and treatment planning requires stringent validation and oversight to prevent harm to patients; similarly, AI-driven trading algorithms and credit scoring systems need regulations capable of preventing bias and guaranteeing transparency. Policymakers are increasingly working out sector-specific guidelines to deal with the particular issues AI raises in each area. Public trust is another key driver of future AI policy. For AI technologies to diffuse and gain widespread acceptance, people must have confidence in their fairness, reliability, and transparency. Policymakers are therefore focusing on measures to increase public trust, such as mandatory openness in AI decision-making processes and mechanisms for accountability and redress, including AI ethics committees, public consultation, and broad stakeholder involvement in AI policymaking, to instil trust and ensure that these technologies serve the public good.

Education and awareness will be another essential component of future AI policy. With the pervasiveness of AI, public understanding of its implications is necessary. AI literacy policies range from programs that prioritize AI teaching at all levels of education to public campaigns that make citizens aware of the benefits and risks of these emerging technologies. This involves integrating AI education into curricula and providing resources for lifelong learning so that every citizen can navigate an AI-driven world. AI policy and regulation will most likely have to adopt adaptive and iterative approaches, since faster rates of innovation call for more flexible regulations responsive to emerging challenges and opportunities. Supervisory bodies increasingly embrace regulatory sandboxes and pilot programs that permit AI technologies to be tested within controlled settings. These approaches give regulators the wherewithal to collect data, assess impacts, and fine-tune policies, ensuring that regulations stay relevant and effective amid a fast-evolving technological landscape.

 

4.4 Conclusions

The regulation and policy of AI in industry has become a focus for many governments and organizations worldwide. As AI becomes ubiquitous in sectors ranging from healthcare and finance to manufacturing, the need for strong regulatory measures has never been greater. The European Union's proposed AI Act classifies AI systems by risk level and imposes a series of obligations on high-risk applications; this legislative framework aims to ensure transparency, accountability, and respect for fundamental rights, and to become a global standard for the regulation of AI. Meanwhile, increased regulatory activity in the United States includes the National AI Initiative Act, which seeks to improve trustworthiness in AI by funding research, workforce development, and interagency coordination. China is aggressively developing AI governance, pursuing technological breakthroughs alongside stringent regulation of data use and algorithmic transparency; through this dual strategy it aims to secure both innovation and control over AI advancements. Challenges remain, however. The ongoing argument revolves around how AI can be regulated without stifling innovation. With rapidly growing global interest in AI as both an economic and a strategic good, a parallel emphasis is developing on cooperative approaches to harmonizing AI regulations across borders in order to build an ecosystem for AI ethics and safety. Collaborative action, supported by agile approaches to regulation, will be crucial as nations and sectors negotiate these challenges on the way to a future increasingly centred on AI and its demands for security, fairness, and justice.

References

Bhat, I. A. (2023). Artificial Intelligence and its Impact on Indian Economy. Research Trends in Multidisciplinary Subjects, 94.

Capraro, V., Lentsch, A., Acemoglu, D., Akgun, S., Akhmedova, A., Bilancini, E., ... & Viale, R. (2024). The impact of generative artificial intelligence on socioeconomic inequalities and policy making. PNAS nexus, 3(6).

Cath, C. (2018). Governing artificial intelligence: ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180080.

de Almeida, P. G. R., dos Santos, C. D., & Farias, J. S. (2021). Artificial intelligence regulation: a framework for governance. Ethics and Information Technology, 23(3), 505-525.

Ebers, M. (2020). Regulating explainable AI in the European Union: An overview of the current legal framework(s). In L. Colonna & S. Greenstein (Eds.), Nordic Yearbook of Law and Informatics.

Erdélyi, O. J., & Goldsmith, J. (2018, December). Regulating artificial intelligence: Proposal for a global solution. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (pp. 95-101).

Fortes, P. R. B., Baquero, P. M., & Amariles, D. R. (2022). Artificial intelligence risks and algorithmic regulation. European Journal of Risk Regulation, 13(3), 357-372.

George, A. S. (2023). Future Economic Implications of Artificial Intelligence. Partners Universal International Research Journal, 2(3), 20-39.

Hoffmann-Riem, W. (2020). Artificial intelligence as a challenge for law and regulation. Regulating artificial intelligence, 1-29.

Lauterbach, A. (2019). Artificial intelligence and policy: quo vadis?. Digital Policy, Regulation and Governance, 21(3), 238-263.

Laux, J., Wachter, S., & Mittelstadt, B. (2024). Three pathways for standardisation and ethical disclosure by default under the European Union Artificial Intelligence Act. Computer Law & Security Review, 53, 105957.

Manheim, K., & Kaplan, L. (2019). Artificial intelligence: Risks to privacy and democracy. Yale JL & Tech., 21, 106.

Novelli, C., Taddeo, M., & Floridi, L. (2023). Accountability in artificial intelligence: what it is and how it works. AI & SOCIETY, 1-12.

Paramesha, M., Rane, N. L., & Rane, J. (2024a). Artificial Intelligence, Machine Learning, Deep Learning, and Blockchain in Financial and Banking Services: A Comprehensive Review. Partners Universal Multidisciplinary Research Journal, 1(2), 51-67.

Paramesha, M., Rane, N., & Rane, J. (2024b). Trustworthy Artificial Intelligence: Enhancing Trustworthiness Through Explainable AI (XAI). Available at SSRN 4880090.

Paramesha, M., Rane, N., & Rane, J. (2024c). Artificial intelligence in transportation: applications, technologies, challenges, and ethical considerations. Available at SSRN 4869714.

Paramesha, M., Rane, N., & Rane, J. (2024d). Generative artificial intelligence such as ChatGPT in transportation system: A comprehensive review. Available at SSRN 4869724.

Paramesha, M., Rane, N., & Rane, J. (2024e). Big data analytics, artificial intelligence, machine learning, internet of things, and blockchain for enhanced business intelligence. Available at SSRN 4855856.

Petit, N. (2017). Law and regulation of artificial intelligence and robots-conceptual framework and normative implications. Available at SSRN 2931339.

Rane, N. L. (2023). Multidisciplinary collaboration: key players in successful implementation of ChatGPT and similar generative artificial intelligence in manufacturing, finance, retail, transportation, and construction industry.

Rane, N., Choudhary, S., & Rane, J. (2024a). Integrating deep learning with machine learning: technological approaches, methodologies, applications, opportunities, and challenges. Available at SSRN 4850000.

Rane, N., Choudhary, S., & Rane, J. (2024b). Artificial Intelligence (AI), Internet of Things (IoT), and blockchain-powered chatbots for improved customer satisfaction, experience, and loyalty (May 29, 2024). http://dx.doi.org/10.2139/ssrn.4847274

Rane, N., Choudhary, S., & Rane, J. (2024c). Artificial intelligence and machine learning for resilient and sustainable logistics and supply chain management. Available at SSRN 4847087.

Rane, N., Choudhary, S., & Rane, J. (2024d). Artificial intelligence, machine learning, and deep learning for sentiment analysis in business to enhance customer experience, loyalty, and satisfaction. Available at SSRN 4846145.

Rane, N., Choudhary, S., & Rane, J. (2024e). Artificial Intelligence and Machine Learning in Business Intelligence, Finance, and E-commerce: a Review (May 27, 2024).

Rane, N., Paramesha, M., Choudhary, S., & Rane, J. (2024f). Business Intelligence through Artificial Intelligence: A Review. Available at SSRN 4831916.

Rane, N., Choudhary, S., & Rane, J. (2024g). Artificial Intelligence and Machine Learning in Renewable and Sustainable Energy Strategies: A Critical Review and Future Perspectives. Partners Universal International Innovation Journal, 2(3), 80–102. https://doi.org/10.5281/zenodo.12155847

Rane, N., Paramesha, M., Choudhary, S., & Rane, J. (2024h). Artificial Intelligence, Machine Learning, and Deep Learning for Advanced Business Strategies: A Review. Partners Universal International Innovation Journal, 2(3), 147–171. https://doi.org/10.5281/zenodo.12208298

Rane, N., Paramesha, M., Choudhary, S., & Rane, J. (2024i). Machine Learning and Deep Learning for Big Data Analytics: A Review of Methods and Applications. Partners Universal International Innovation Journal, 2(3), 172–197. https://doi.org/10.5281/zenodo.12271006

Reed, C. (2018). How should we regulate artificial intelligence?. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2128), 20170360.

Roberts, H., Cowls, J., Morley, J., Taddeo, M., Wang, V., & Floridi, L. (2021). The Chinese approach to artificial intelligence: an analysis of policy, ethics, and regulation. Ethics, governance, and policies in artificial intelligence, 47-79.

Saura, J. R., Ribeiro-Soriano, D., & Palacios-Marqués, D. (2022). Assessing behavioral data science privacy issues in government artificial intelligence deployment. Government Information Quarterly, 39(4), 101679.

Taeihagh, A. (2021). Governance of artificial intelligence. Policy and society, 40(2), 137-157.

Tschider, C. A. (2018). Regulating the internet of things: discrimination, privacy, and cybersecurity in the artificial intelligence age. Denv. L. Rev., 96, 87.

Vesnic-Alujevic, L., Nascimento, S., & Polvora, A. (2020). Societal and ethical impacts of artificial intelligence: Critical notes on European policy frameworks. Telecommunications Policy, 44(6), 101961.

von Ingersleben‐Seip, N. (2023). Competition and cooperation in artificial intelligence standard setting: Explaining emergent patterns. Review of Policy Research, 40(5), 781-810.

Wischmeyer, T., & Rademacher, T. (Eds.). (2020). Regulating artificial intelligence. Cham: Springer.

Wong, A. (2021). Ethics and regulation of artificial intelligence. In Artificial Intelligence for Knowledge Management: 8th IFIP WG 12.6 International Workshop, AI4KM 2021, Held at IJCAI 2020, Yokohama, Japan, January 7–8, 2021, Revised Selected Papers 8 (pp. 1-18). Springer International Publishing.

Zednik, C. (2021). Solving the black box problem: A normative framework for explainable artificial intelligence. Philosophy & technology, 34(2), 265-288.
