Introduction

Generative AI (GAI), particularly Large Language Models (LLMs), is reshaping industries by enabling the creation of content such as text, images, and music through pattern recognition over large datasets. GAI refers to any machine-learning system that learns a data distribution and then produces novel content: text, images, video, audio, or code. LLMs, by contrast, are a subset of GAI that focus exclusively on natural-language data: they use transformer architectures with hundreds of millions to trillions of parameters to predict the next token in a sequence and thereby generate coherent text (Wei et al. 2022, 2023). Both GAI and LLMs rely on self-supervised learning over massive corpora, employ attention mechanisms to capture long-range dependencies, and are fine-tuned on task-specific data to boost performance (Perlow 2024). But while GAI encompasses multimodal systems, such as diffusion models that render realistic images or audio generators that compose music (Cole Stryker 2024), LLMs are confined to text and code, often serving as language backbones inside broader multimodal stacks. In Natural Language Processing (NLP), LLMs represent a significant leap forward: trained on diverse datasets and equipped with millions or billions of parameters, they replicate complex linguistic structures (Bender et al. 2021) and, using conditional probability and the chain rule (Wei et al. 2022, 2023), produce human-like text characterized by semantic coherence and adaptability. These technologies have become integral to industry, driving advances in automation, customer service, and content generation, and their applications in industrial processes demonstrate their potential to enhance operational efficiency and innovation (Bender et al. 2021; Wei et al. 2023).
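Formally, this next-token objective applies the chain rule of probability: the joint likelihood of a token sequence factorizes into a product of conditionals, each of which the transformer is trained to estimate:

```latex
P(w_1, \dots, w_n) = \prod_{t=1}^{n} P\left(w_t \mid w_1, \dots, w_{t-1}\right)
```

Generation then proceeds token by token, sampling each w_t from the learned conditional given the tokens produced so far.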

However, a notable gap in existing scholarship is the lack of a comparative, sector-specific understanding of how industrial organizations govern GAI/LLMs—particularly how they address the multifaceted risks these technologies pose—a gap this study aims to fill. To address it, our study investigates three focused research questions: (1) What governance themes—ethical, legal, operational—emerge from corporate GAI/LLM policies? (2) How do these themes differ across 14 industries and global regulatory environments? (3) Which governance practices are most resilient, and where do policy gaps persist? To answer these questions, we systematically analyzed 160 publicly available policy documents and statements spanning 14 industry sectors. We applied TF-IDF text mining and K-Means clustering to identify recurring governance motifs and conducted comparative analyses across sectors and regions. Our objectives are to map patterns in corporate policy, reveal cross-sector and cross-geographic variations, and derive practical guidelines for responsible GAI deployment. While LLMs can enhance workflows, they also risk perpetuating biases and creating accountability gaps if not effectively managed; addressing these issues requires thoughtful policies tailored to diverse industry contexts.

This study explores the evolving role of GAI and LLMs in industrial sectors, examining their benefits, risks, and governance implications. Through a systematic review of policies and practices, it provides actionable insights and recommendations to align innovation with ethical accountability, fostering responsible and effective deployment across industries. Within industry, the integration of LLMs and other GAI technologies has introduced unprecedented opportunities and challenges. These models have shown remarkable capabilities in generating human-like text, aiding in tasks such as customer support, automated content creation, and product recommendations. However, their usage also raises significant operational, ethical, safety, and legal considerations that necessitate careful evaluation and guidelines for responsible deployment (Jiao et al. 2024a, b).

The increasing relevance and widespread use of GAI/LLMs in industry contexts have prompted diverse responses and adaptations from companies worldwide. According to McKinsey & Company’s global 2025 “Superagency” report, which surveyed senior executives across industries, the finding that 21% of organizations have implemented formal policies governing generative AI refers to firms actively scaling such tools beyond pilot phases; the report further notes that just 32% are addressing inaccuracy risk, based on criteria including model validation and output monitoring (Wallace 2024; Lamarre et al. 2024). Many organizations remain unprepared for the business risks these technologies may bring, such as intellectual property infringement and cybersecurity vulnerabilities.

Moreover, Salesforce’s October 2023 survey of over 14,000 workers across 14 countries found that while 28% were currently using generative AI at work, 55% did so without formal employer approval; the report also details that nearly 64% admitted to passing off AI-generated work as their own—a reminder of the gap between policy and user behavior (More than Half of Generative AI Adopters Use Unapproved Tools at Work 2023). As companies increasingly explore generative AI applications in product development and customer service, a McKinsey survey (April 2024) shows that 33–78% of organizations are now using AI in at least one business function (Wallace 2024). Despite the rapid adoption, there are concerns about the lack of comprehensive policies guiding AI use, underscoring the need for industry-level frameworks to address ethical risks and regulatory compliance.

This study seeks to delve into the multifaceted landscape of GAI, with a primary focus on LLMs, within industry settings. In this context, our study bridges the gap between corporate AI uptake and policy readiness by presenting a nuanced analysis of how companies govern GAI/LLMs—highlighting areas of strong practice and critical vulnerabilities—and offering actionable recommendations for building robust, context-aware AI governance frameworks. To underscore why a cross-industry lens matters beyond filling a research gap, we frame the analysis in terms of sector-specific risk and value creation. Industries face different dominant hazards and therefore prioritize distinct governance levers: Healthcare/Pharma emphasize consent, traceability, and life-cycle safety validation for higher-risk uses such as diagnostics (Ditsche et al. 2023; Clearsight Advisors 2020); Finance/Banking stress model-risk control, board-level accountability, auditability, and regulator-backed sandboxes to limit systemic exposure (Tilo 2023; Falconi 2023); Publishing/Media focus on IP protection, disclosure of AI use, and content provenance to safeguard creators and readers (Havaianas 2023; Mares 2022; Sanofi 2024); Social Media/Telecom balance privacy with explainability to sustain user trust at platform scale (HSBC 2023; Penguin Random House 2024); Design/Entertainment adopt augmentation-first practices (clear labeling, human review) to protect creative integrity while harnessing productivity gains (Knack and Powell 2023; Electronic Arts 2022; Tencent 2020). Our sector-aware findings therefore yield (1) a map of governance themes by dominant risk vector (physical safety, financial contagion, reputational harm), (2) a modular policy template that couples universal baselines (privacy, explainability, fairness) with industry-specific “risk modules” (Deloitte 2023; Africa International Advisors 2024; TCS 2024), and (3) pragmatic recommendations for how firms can translate principles into controls that are proportionate to context. This positions the study’s motivation squarely in its practical implications for industry development, not merely the existence of a gap.

Related work

LLMs have advanced rapidly—from early warnings about their social and environmental costs to demonstrations of “emergent” abilities that elude linear scaling laws—yet scholarship still lacks a systematic, sector-level account of how industry governs these tools. The following review synthesizes recent research on three fronts: (1) technical evolution and risks, (2) ethical-governance discourse, and (3) empirical evidence on organizational adoption and policy gaps.

Technical evolution and associated risks

Bender et al. first sounded the alarm that ever-larger models could amplify bias, misinformation, and ecological harms, dubbing them “stochastic parrots” that mimic training data without true reasoning (Bender et al. 2021). Subsequent scaling work showed that abilities such as arithmetic or chain-of-thought reasoning suddenly “emerge” once parameter counts cross certain thresholds, making model behavior less predictable for developers and regulators alike (Wei et al. 2022). Comprehensive reviews confirm that today’s LLMs handle diverse modalities and tasks but warn that assurance remains difficult: models inherit dataset artefacts, display calibration errors, and can exhibit non-linear scaling behaviors that complicate prediction and regulation (Bender et al. 2021; Wei et al. 2022, 2023). Multimodal stacks widen application scope but transmit hallucination, bias, and opacity into products, motivating governance that is both technical (evals, red teaming, post-deployment monitoring) and organizational (clear risk ownership, audit trails) (Wei et al. 2022; Afroogh et al. 2024).

Ethical and governance scholarship

Researchers now catalog domain-specific hazards—from hallucinations to privacy leakage—and call for dynamic auditing and interdisciplinary oversight (Jiao et al. 2024a). Systematic surveys of university and corporate guidelines reveal fragmented, rapidly changing governance landscapes that vary by jurisdiction and sector (Jiao et al. 2024b). Trust-centric studies propose taxonomies of technical (robustness, safety) and axiological (fairness, legality) metrics, arguing that public confidence will determine adoption trajectories (Afroogh et al. 2024). Works on embedded ethics show how value-sensitive design can be integrated into high-risk contexts such as disaster management, highlighting the need for context-aware policies rather than one-size-fits-all principles (Afroogh et al. 2023). On the governance side, scholarship converges on privacy-by-design, explainability, fairness, and continuous risk management, yet documents fragmentation across jurisdictions and institutions (Jiao et al. 2024c; Tilo 2023; Africa International Advisors 2024). Playbooks such as the NIST AI Risk-Management Framework operationalize principles into auditable controls (Govern→Map→Measure→Manage), and sector regulators (e.g., banking and health) layer risk-tiered obligations and sandbox trials for high-impact uses (Falconi 2023).

Empirical evidence on adoption and policy gaps

Industry surveys corroborate the scholarly concern over uneven governance. McKinsey’s 2024 global executive poll reports that only 21% of firms have formal GAI/LLM policies and a mere 32% actively mitigate output inaccuracies, despite steep growth in use cases (McKinsey 2024). Salesforce’s 2023 workforce study shows 28% of employees already employ generative-AI tools at work, and more than half do so without managerial approval (Salesforce 2023). News commentary further warns that public trust hinges on demonstrable, real-world benefits rather than hype alone (Ibrahim 2025). Critically, comparative industry work indicates that governance priorities diverge systematically by sector: finance foregrounds model validation and board oversight (Elite Translations Africa 2024); healthcare centers on consent, traceability, and post-market surveillance; publishing/media codify disclosure and IP rights (Sanofi 2024); and social platforms stress content authenticity and user remedies (Penguin Random House 2024; Shimizu 2023). However, few studies quantify such differences across many firms. Our contribution addresses this gap by analyzing 160 corporate policies across 14 sectors with text mining and cross-sector comparisons.

Data resources enabling comparative analysis and synthesis

To support evidence-based policy design, new benchmark corpora—such as IGGA (industrial guidelines) and AGGA (academic guidelines)—compile hundreds of documents spanning multiple regions and sectors, facilitating text-mining approaches like those adopted in the present study (Jiao et al. 2024b). Such corpora enable reproducible, large-scale synthesis (tokenization → TF-IDF → clustering → validation), allowing policy vocabulary to be linked to sectoral and regional contexts (Jiao et al. 2024c, 2025). We build on these resources to derive sector-calibrated governance modules grounded in observed thematic patterns.

Together, these streams show that while technical capabilities race ahead, governance remains patchy and uneven, creating a clear research gap: few studies map how corporate policies differ across industries and regulatory regimes, or how those differences align with documented ethical risks. Our work addresses that gap by applying large-scale text analytics to 160 industrial policies, linking thematic patterns to sectoral and geographic variables in order to recommend context-aware governance frameworks.

Methodology

In this research, a detailed comparative study was conducted on data from 160 companies to evaluate workplace policies and guidelines for employing GAIs and LLMs in industrial environments. The methodology was rigorously structured to ensure an in-depth assessment of these guidelines (see Table 1). We deliberately chose companies representing a wide range of global institutions, including leading firms from various countries and continents renowned for their industrial prowess. Furthermore, the selection criteria spanned 14 different industrial sectors, such as healthcare, finance, and publishing, to encompass a comprehensive array of viewpoints. The collection process explicitly mapped each company’s policy against regional regulations—such as the EU’s GDPR and forthcoming AI Act, and the U.S. Executive Order on AI—so as to capture how national legal frameworks influenced corporate policy wording, scope, and enforcement (Financial Times 2024). Moreover, while the prevailing focus of guidelines pertains to LLMs, this investigation explores overarching principles pertinent to all generative AIs, with the objective of fostering inclusivity. For example, European-headquartered firms frequently included GDPR-aligned clauses on data minimization, user consent, and profiling restrictions, while Chinese-based companies incorporated explicit content-moderation requirements, mandatory ideologically aligned outputs (e.g., ‘Core Socialist Values’), and cooperation clauses with governmental supervision agencies. The dataset for this study is available online on Harvard Dataverse: IGGA: A Dataset of Industrial Guidelines and Policy Statements for Generative AIs (Jiao et al. 2024d).

Table 1 Workflow of the analysis of GAI/LLM industrial guidelines and policy statements.

In exploring the complex landscape of industrial policies governing Large Language Models (LLMs) and generative AI, it is crucial to employ effective categorization criteria to systematically analyze diverse aspects. Two primary criteria emerge as particularly pertinent for this research: (1) Geographic Location and (2) Industry Sector, the latter coded with the 2022 North American Industry Classification System (NAICS) six-digit taxonomy to ensure cross-study comparability. The choice of Geographic Location allows for a comprehensive understanding of how different countries and regions approach regulation, promotion, and professional considerations associated with LLMs and generative AI, while NAICS-based sector tagging recognizes the varied applications and domain-specific risks of these technologies. Together, these criteria provide a structured lens through which to unravel the nuanced dynamics of industrial policies, offering insights into both regional diversity and sector-specific considerations.

Moreover, navigating the intricate landscape of industrial policies for LLMs and generative AI involves a detailed examination of industry sub-sectors. In healthcare (NAICS 62), sub-categories encompass pharmaceuticals (3254), medical research (5417), and health-tech platforms (5182), each with tailored guidance addressing ethical review, patient-safety validation, and HIPAA/GDPR alignment. In finance and banking (NAICS 52), we distinguish between traditional banking (5221), fintech platforms (5223), and investment services (5239), where policies are scrutinized for anti-money-laundering controls, model-risk oversight, and consumer-data safeguards. Technology & IT services (NAICS 54 + 51) are split into software development (5415), cloud computing (5182), and data analytics consultancies (5416), allowing us to compare how intellectual-property clauses or open-source commitments vary across sub-domains. This structured approach enables meaningful cross-sector comparisons and reveals how divergent regulatory frameworks shape operational guidance—for instance, the stronger “medical-device” wording in EU health-tech guidelines versus the algorithmic liquidity-risk metrics required by U.S. banking regulators.
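To make this coding scheme concrete, the sketch below shows one way the sector taxonomy can be represented in the analysis pipeline; the data structure is illustrative, the company name is hypothetical, and the codes mirror those cited above.

```python
# Illustrative NAICS-style codebook; the structure is a sketch, the company
# assignment below is hypothetical, and codes mirror those cited in the text.
SECTOR_CODEBOOK = {
    "healthcare": {"major": "62",
                   "subsectors": {"pharmaceuticals": "3254",
                                  "medical_research": "5417",
                                  "health_tech_platforms": "5182"}},
    "finance_banking": {"major": "52",
                        "subsectors": {"traditional_banking": "5221",
                                       "fintech_platforms": "5223",
                                       "investment_services": "5239"}},
    "technology_it": {"major": "54+51",
                      "subsectors": {"software_development": "5415",
                                     "cloud_computing": "5182",
                                     "data_analytics": "5416"}},
}

def tag_document(company, sector, subsector):
    """Attach NAICS sector metadata to one policy-document record."""
    entry = SECTOR_CODEBOOK[sector]
    return {"company": company,
            "naics_major": entry["major"],
            "naics_code": entry["subsectors"][subsector]}

print(tag_document("ExamplePharmaCo", "healthcare", "pharmaceuticals"))
```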

For clarity, we explicitly define document types as follows: a “guideline” is a non-binding, advisory document that sets recommended principles or best practices, whereas a “policy statement” is an approved, enforceable directive that prescribes mandatory rules and compliance obligations (Idenhaus 2022). Only documents that self-identify with one of these labels, are publicly accessible between Jan 2022 and May 2024, exceed 250 words, and are issued by companies with ≥100 employees were retained. Where an official document was absent, we accepted CEO- or General-Counsel interviews published in reputable outlets as “policy statements” only if they contained explicit, prescriptive language (e.g., “our employees must encrypt all training data”). All other mention-only press releases were excluded to preserve methodological rigor. These inclusion criteria yielded a stratified sample of 160 organizations—roughly 10–12 per NAICS major sector—ensuring representativeness across geographies and industries while filtering out marketing-only artefacts.
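A minimal sketch of this inclusion screen, assuming a simple per-document record with hypothetical field names, is:

```python
# A minimal sketch of the document-inclusion screen; field names and the
# example record are hypothetical.
from datetime import date

MIN_WORDS = 250
MIN_EMPLOYEES = 100
WINDOW = (date(2022, 1, 1), date(2024, 5, 31))

def is_eligible(doc: dict) -> bool:
    """Apply the inclusion criteria described above to one candidate document."""
    if doc["doc_type"] not in {"guideline", "policy_statement", "interview"}:
        return False
    # Interviews count as policy statements only with explicit prescriptive language.
    if doc["doc_type"] == "interview" and not doc.get("prescriptive", False):
        return False
    return (WINDOW[0] <= doc["published"] <= WINDOW[1]
            and doc["word_count"] >= MIN_WORDS
            and doc["company_employees"] >= MIN_EMPLOYEES)

print(is_eligible({"doc_type": "guideline", "published": date(2023, 6, 1),
                   "word_count": 1200, "company_employees": 5000}))  # True
```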

In the initial phase of data collection, the selection concentrated on identifying top-tier companies across various sectors that had established official guidelines specifically addressing the deployment of GAI and LLMs. This focused approach was chosen to ensure methodological consistency and to maintain the reliability of our analysis. When official guidelines were absent, we included documents found on company websites that articulated their policies on using LLMs, which we treated as policy statements or workplace policies. Additionally, when there was no official guideline or policy available on their websites, interviews with company heads featured in prominent media were also considered valid sources of information. Companies without any form of guidelines or policy statements were intentionally omitted from our survey to uphold the integrity of our findings. To offset these exclusions, we selected alternative companies representing the same 14 industrial sectors from various countries and continents, ensuring a balanced and representative sample that enhances the credibility and relevance of our results.

Following the data collection phase, a review process entailing systematic categorization and in-depth analysis of the guidelines was put in place for the chosen companies. Each set of guidelines was thoroughly examined for its scope, specific recommendations, ethical and safety considerations, and implications for using LLMs and GAI in business practices. This detailed scrutiny allowed us to delineate key themes, identify prevailing trends, and pinpoint areas where practices diverged across different geographic and industrial contexts. This comprehensive examination provided us with a nuanced understanding of the current landscape regarding corporate guidelines on GAI and LLM usage.

Next, a text-mining analysis was conducted on the AI usage guidelines collected from 160 company documents spanning 14 distinct industries. This analysis began with tokenizing the collected text data: each document was first divided into sentences and then further segmented into individual words. We first started with a curated list of firms having public-facing GAI/LLM policies (sourced from 2024 repositories and industry reports), then performed stratified random sampling—selecting approximately 10–12 companies per industry across 14 sectors—yielding the 160-company dataset. We then tagged each guideline document with the relevant regulatory environment—e.g., “EU-GDPR”, “EU-AI-Act”, “China-GenAI”, “US-EO-AI”—to facilitate comparative frequency analysis by region. We assign geography by issuing locus rather than global footprint: documents are coded to the legal domicile of the issuing entity named in the policy (e.g., a Siemens AG global policy is coded Germany; a policy issued by a U.S. subsidiary is coded United States). When a document targets a specific jurisdiction (e.g., GDPR/AI Act) and declares a limited scope, we retain the domicile code and add a region tag (e.g., “EU-AI-Act”). Joint or multi-issuer statements receive multi-label tags; in country-level statistics, we apply soft weights (an equal split across labels). We verified robustness by re-running analyses with (1) HQ-only and (2) region-of-applicability codings; headline patterns were unaffected. Firms without any formal guidelines, policy statements, or credible media interviews were excluded and replaced to maintain sectoral balance and methodological consistency. This granularity enabled detailed scrutiny of the text. For tokenization, we utilized the Natural Language Toolkit’s (NLTK) sent_tokenize and word_tokenize functions. To focus on substantive content, we removed stopwords—commonly used words with minimal semantic impact—using NLTK’s stopwords list, reducing noise in the dataset and emphasizing meaningful content.
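A condensed sketch of these ingestion steps (regulatory tagging with equal-split soft weights, then NLTK tokenization and stopword removal) might look as follows; the example document and its regime tags are hypothetical:

```python
# A condensed sketch of the corpus-ingestion steps described above.
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import sent_tokenize, word_tokenize

nltk.download("punkt", quiet=True)
nltk.download("stopwords", quiet=True)

STOPWORDS = set(stopwords.words("english"))

def region_weights(labels):
    """Equal-split soft weights for joint or multi-issuer statements."""
    return {label: 1.0 / len(labels) for label in labels}

def preprocess(text):
    """Sentence-split, word-tokenize, lowercase, and drop stopwords/punctuation."""
    tokens = []
    for sentence in sent_tokenize(text):
        for word in word_tokenize(sentence):
            w = word.lower()
            if w.isalpha() and w not in STOPWORDS:
                tokens.append(w)
    return tokens

doc = {
    "text": "Employees must not submit personal data to generative AI tools.",
    "regimes": ["EU-GDPR", "EU-AI-Act"],  # multi-label tag for a joint EU statement
}
print(region_weights(doc["regimes"]))  # {'EU-GDPR': 0.5, 'EU-AI-Act': 0.5}
print(preprocess(doc["text"]))         # stopword-free, lowercased unigrams
```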

Following tokenization, stemming and lemmatization techniques were applied. Stemming, executed through NLTK’s PorterStemmer, reduced words to their base form, thus consolidating different inflections of the same word. Lemmatization, using NLTK’s WordNetLemmatizer, then mapped each token to its dictionary lemma, providing context-sensitive normalization that avoids the over-stemming artefacts common with purely rule-based stemmers. Together these steps collapse surface variants (e.g., “regulates,” “regulation,” “regulatory”) into a single semantic unit, increasing the reliability of the unigram frequency counts that underpin our statistical comparisons. In short, the unigram-centric pipeline was retained because it keeps the feature space small enough to fit on a single 32 GB-RAM workstation without sacrificing the statistical power needed for our cross-industry comparisons; by contrast, adding bigrams and longer n-grams expands the dimensionality exponentially, degrading both memory efficiency and statistical reliability. Our pilot experiments confirmed that adding bigrams and trigrams would have multiplied the matrix size by 10–40× and triggered sparsity problems: a trigram TF-IDF matrix exceeded 250 M non-zero entries—well beyond our workstation’s capacity—while offering only marginal performance gains in downstream clustering. To mitigate residual semantic duplication after lemmatization, we merged synonyms using embeddings: we computed Sentence-BERT embeddings and clustered nearest-neighbor graphs with HDBSCAN to form synonym sets (e.g., regulation/compliance/governance → “governance”; explainability/interpretability → “explainability”), with two coders adjudicating edge cases [93]–[96]. Canonicalization reduces vocabulary size and improves topic coherence and cluster stability relative to a word2vec/fastText baseline (robustness reported in the Appendix), addressing semantic-similarity concerns that lemmatization alone cannot resolve.
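The normalization and synonym-merging steps can be sketched as follows, assuming the sentence-transformers and hdbscan packages; the embedding model named here (all-MiniLM-L6-v2) is an illustrative choice, not a reported implementation detail:

```python
# A minimal sketch of normalization and embedding-based synonym merging.
import nltk
import hdbscan
from nltk.stem import PorterStemmer, WordNetLemmatizer
from sentence_transformers import SentenceTransformer

nltk.download("wordnet", quiet=True)

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()
print(stemmer.stem("regulates"))                   # 'regul' (over-stemmed)
print(lemmatizer.lemmatize("regulates", pos="v"))  # 'regulate'

def merge_synonyms(vocabulary):
    """Map near-synonymous terms onto one canonical token via embedding clusters."""
    model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice
    embeddings = model.encode(vocabulary)
    labels = hdbscan.HDBSCAN(min_cluster_size=2).fit_predict(embeddings)
    canonical = {}
    for label in set(labels) - {-1}:   # -1 = noise; such terms keep their own form
        members = [t for t, l in zip(vocabulary, labels) if l == label]
        head = members[0]              # in the study, coders adjudicated the heads
        canonical.update({term: head for term in members})
    return canonical

# e.g., merge_synonyms(["regulation", "compliance", "governance", "privacy"])
# could yield {'regulation': 'governance', 'compliance': 'governance', ...}
```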

With the cleaned and normalized dataset, a Term Frequency–Inverse Document Frequency (TF-IDF) model was constructed using sklearn.feature_extraction.text.TfidfVectorizer with default parameters (norm = ‘l2’, use_idf = True, smooth_idf = True, sublinear_tf = False, stop_words = ‘english’), which apply smoothing and L2 normalization. This model evaluated word relevance relative to each document, balancing a term’s frequency within the document (term frequency) against its distribution across the dataset (inverse document frequency). To test the influence of geography, we performed chi-squared tests comparing term-frequency distributions across regulatory contexts. The TF-IDF model identified distinctive and contextually significant terms within the guidelines, highlighting industry-specific vocabulary. Next, the K-Means clustering algorithm was applied from sklearn.cluster with explicit parameter settings—n_clusters = 8, init = ‘k-means++’, n_init = ‘auto’, max_iter = 300, and tol = 1e−4—matching scikit-learn’s defaults. We tuned k ∈ {2,…,12} and selected k = 8 based on three diagnostics: (1) the elbow in inertia flattens after k = 8, (2) average silhouette achieves a local maximum at k = 8 among k ∈ {7, 8, 9}, and (3) the Calinski–Harabasz index likewise peaks near k = 8. Bootstrap stability on 80% resamples preserved ≥75% of top terms per cluster with an adjusted Rand index of 0.7. These diagnostics, together with improved coherence under synonym merging, support k = 8 as a parsimonious and stable segmentation. Finally, by clustering terms based on their TF-IDF values, we revealed the prominent themes reported below.
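The following is a sketch of the TF-IDF and K-Means pipeline with the k-selection diagnostics described above; load_documents is a hypothetical helper standing in for the preprocessed IGGA corpus:

```python
# A sketch of the TF-IDF + K-Means pipeline with k-selection diagnostics.
from scipy.stats import chi2_contingency
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import silhouette_score, calinski_harabasz_score

corpus = load_documents()  # hypothetical helper: 160 preprocessed policy texts
vectorizer = TfidfVectorizer(stop_words="english")  # l2 norm + smooth idf defaults
X = vectorizer.fit_transform(corpus)

# Diagnostics over k = 2..12: inertia elbow, silhouette, Calinski-Harabasz.
for k in range(2, 13):
    km = KMeans(n_clusters=k, init="k-means++", n_init="auto",
                max_iter=300, tol=1e-4, random_state=0).fit(X)
    print(k, round(km.inertia_, 1),
          round(silhouette_score(X, km.labels_), 3),
          round(calinski_harabasz_score(X.toarray(), km.labels_), 1))

# Final model at the selected k = 8; top TF-IDF terms per centroid name the themes.
km = KMeans(n_clusters=8, init="k-means++", n_init="auto", random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()
for c in range(8):
    top = km.cluster_centers_[c].argsort()[::-1][:10]
    print(f"Cluster {c}:", [terms[i] for i in top])

# Geography effect: chi-squared test on a (term x regulatory-regime) count table,
# e.g., chi2, p, dof, _ = chi2_contingency(term_by_region_counts)
```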

Qualitative findings and resultant themes

The integration of Large Language Models (LLMs) and Generative Artificial Intelligence (GAI) has precipitated a significant paradigm shift not only within academic realms (Jiao et al. 2024b, c) but across various industrial sectors as well. As these technologies carve deeper inroads into domains such as healthcare, finance, education, and legal services, they manifest a dual-edged spectrum of profound opportunities and formidable challenges. This juxtaposition of potential and peril underscores the necessity for a rigorous examination of how different industries govern the deployment of these advanced tools through professional guidelines and policies.

The advent of LLMs and GAIs has been met with varying degrees of enthusiasm and trepidation across sectors. In industries like finance, healthcare, and technology, there is a palpable excitement about the potential of these technologies to revolutionize efficiency, enhance decision-making, and drive innovation. Conversely, in sectors such as manufacturing and utilities, the integration of AI technologies is approached with more caution. Concerns center around issues such as job displacement, data security, and the erosion of essential human oversight. In these contexts, professional guidelines are crafted with a focus on preserving workplace integrity and ensuring that technological advances do not outstrip the regulatory landscape. We first summarize the eight themes surfaced by clustering and then interpret them in a real-world context. The themes are: (1) Data Governance & Privacy (data, privacy, consent, minimization, DPIA); (2) Safety & Human Oversight (guardrails, validation, escalation); (3) Security & Abuse Prevention (adversarial, exfiltration, monitoring); (4) IP & Content Integrity (copyright, authorship, provenance, disclosure); (5) Transparency & Explainability (audit, logs, accountability); (6) Risk Management & Compliance (risk-tiering, impact assessment, governance controls); (7) Workforce & Change Management (training, usage policy, roles); and (8) Innovation & Sandboxes (pilots, experimentation, rollback). We chose theme names by inspecting top TF-IDF terms (post-synonym merge), testing sector enrichment (hypergeometric over-representation), and then synthesizing coder labels against a simple codebook; disagreements were resolved by discussion to avoid over-interpretation.
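The sector-enrichment check behind theme naming can be illustrated with a hypergeometric over-representation test; all counts below are hypothetical:

```python
# Does a term appear in a sector's documents more often than chance predicts?
# All counts are hypothetical illustrations.
from scipy.stats import hypergeom

M = 160        # total documents in the corpus
n_success = 46 # documents (all sectors) containing the term "provenance"
N_draw = 12    # documents in the Publishing/Media sector
k = 9          # Publishing/Media documents containing the term

# P(X >= k): survival function at k-1 under the hypergeometric null.
p_value = hypergeom.sf(k - 1, M, n_success, N_draw)
print(f"over-representation p = {p_value:.4f}")
```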

In the interpretive subsections that follow, we explicitly tie each industry to its dominant themes and practical levers. For instance, Healthcare/Pharma map chiefly to (1), (2), and (6)—consent, safety validation, clinician-override protocols, and post-market monitoring (Ditsche et al. 2023; Shiseido 2017)—while Finance/Banking align with (3), (5), and (6)—model-risk controls, auditability, and regulator-backed sandboxing (TCS 2024; Infosys 2024). Publishing/Media concentrate on (4) and (5), operationalizing disclosure, provenance, and IP protection (Havaianas 2023; Mares 2022; Sanofi 2024); Social Media/Telecom balance (1) and (5) via tiered explainability and privacy-preserving transparency; Design/Entertainment emphasize (7) and (8) with augmentation-first workflows and labeled AI assistance to safeguard creative integrity (Knack and Powell 2023; Nusca 2023). This framing makes the industry implications explicit and shows how firms can adapt universal baselines to their context-specific risk profiles. Companies play a pivotal role in establishing and implementing guidelines that govern responsible use. Recognizing the transformative impact of these technologies on operations, decision-making, and innovation, companies worldwide face the dual challenge of maximizing their benefits while managing potential risks. This section examines a diverse range of industry-specific guidelines aimed at regulating GAI/LLM usage across various sectors. Based on an in-depth analysis of 160 official guidelines from companies spanning 14 different industries across seven continents, this study offers a comprehensive view of the global landscape of AI governance in industry (see Appendix A).

Health counseling

Artificial intelligence (AI) in the healthcare counseling sector is being used to enhance efficiency, personalize care, and improve patient outcomes. AI technologies, including generative models like ChatGPT, are being explored for applications such as real-time medical advice and for streamlining diagnostic and administrative tasks. For example, platforms like 1Doc3 (1DOC3 2019) provide personalized, AI-driven healthcare guidance to millions of Spanish-speaking users, helping patients make better-informed decisions about their healthcare. Furthermore, companies like Chugai Pharmaceutical are utilizing AI to accelerate drug discovery and improve patient outcomes through the development of digital biomarkers (Chugai Pharmaceutical Co. 2023).

Industry examples further illustrate the application of AI in healthcare. CSL Behring (CSL Limited 2023), for example, employs AI to improve patient safety and pharmacovigilance, using natural language processing—a machine-learning technique—to analyze real-world data and better identify safety signals, thus supporting more accurate and timely healthcare decisions. Procaps Group (Procaps Group 2023), on the other hand, showcases the use of AI in pharmaceutical manufacturing, where digitalization initiatives improve efficiency and quality control, helping maintain high standards in drug production and patient safety.

Despite its potential, integrating AI into healthcare counseling presents challenges, including technical and regulatory hurdles. Hardian Health (Hardian Health 2022) highlighted that regulatory bodies like the FDA and MHRA are developing frameworks to accommodate the continuous updates necessary for AI systems while ensuring compliance. DokiLink (DokiLink 2019), meanwhile, stresses that ethical considerations remain crucial as AI evolves, requiring organizations to adopt ethical guidelines for AI development and deployment. This ensures that AI is used responsibly, with a focus on transparency, fairness, and a human-centric approach.

Currently, AI models like ChatGPT are not yet suitable for medical use. The development of these systems requires a clearer definition of their intended use, because risks to patient well-being and uncertainty about the trustworthiness of medical devices can have catastrophic consequences. Even so, as companies improve their medical services, current usage highlights the potential of AI to improve healthcare outcomes and reduce disparities globally.

Information technology

The adoption of AI across industries such as telecom, travel, and software development is redefining how big tech and its business ecosystem operate. In the telecom industry, the integration of AI promises enhanced operational efficiency but introduces new risks. In general, large-scale telecom companies emphasize the necessity of trustworthy AI, requiring human agency and oversight so that operators can intervene if AI-controlled systems threaten safety. Transparency through Explainable AI (XAI) techniques makes AI decisions easier to understand. Privacy and data ownership are protected through measures like differential privacy, in line with regulations such as GDPR. Fallback mechanisms and layered protection against adversarial attacks are treated as baseline requirements by companies training and maintaining reliable and safe AI systems. Many guidelines highlighted establishing AI governance bodies to oversee the ethical design, deployment, and monitoring of AI systems, alongside regular assessment of AI risks and systematic testing for fairness, transparency, and safety. Managing the broader impacts of AI on the workforce, sustainability, and privacy is also a key concern, with the aim of mitigating potential negative effects while maximizing the benefits AI can offer.

In the software industry, companies like SAP (SAP 2023) emphasize the ethical use of AI by adhering to established principles, such as those outlined by UNESCO, which focus on transparency, privacy, human oversight, and fairness. These principles guide the development and deployment of AI solutions, ensuring they are aligned with societal values and regulatory requirements. Microsoft’s (Microsoft 2023) commitment to ethical AI is operationalized through a dedicated steering committee that oversees AI processes, ensuring alignment with ethical standards and continuous engagement with the evolving landscape of AI ethics.

In telecom, Ericsson has publicly emphasized the incorporation of Explainable AI (XAI) methods in network operations—such as root-cause analysis and human-in-the-loop notifications—to enhance trust and allow timely interventions when AI-driven decisions affect service continuity. Altman Solon’s survey of over 100 telecom executives found that generative AI governance—encompassing formal teams, ethical frameworks, and tool-based processes—is widely seen as essential yet underdeveloped, prompting several telcos to establish such structures in 2024.

Companies like Amadeus (Amadeus IT 2023) are exploring the transformative potential of Generative AI (GAI) to enhance the traveler experience at every stage, from planning to post-trip interactions. While GAI offers exciting opportunities for personalized and scalable customer engagement, there is a strong focus on implementing guardrails to address potential risks, such as misinformation, bias, and data breaches. Ensuring compliance with regulations, such as the EU AI Act, and adhering to ethical AI principles are crucial for maintaining trust and fostering innovation.

Atlassian’s (Atlassian 2023) approach to AI reflects a commitment to security, transparency, and scalability. Its AI-powered features are built on a trusted platform that ensures data privacy and respects user permissions. By incorporating responsible technology principles, Atlassian ensures that data is used only for intended purposes, without being shared or used for training models across different customers. Compliance with data protection regulations, such as GDPR, is maintained, and users are provided with controls to manage data usage, ensuring a secure and trusted AI environment.

Finance and banking

The finance and banking industry is rapidly evolving, largely driven by the integration of artificial intelligence (AI) and machine learning (ML) technologies. Major institutions such as JPMorgan Chase, Wells Fargo, Mizuho Financial Group, State Bank of India (SBI), HSBC, and Itaú Unibanco are increasingly leveraging these technologies to enhance efficiency, improve customer experiences, and ensure compliance with regulatory frameworks.

JPMorgan Chase (JPMorgan Chase 2023) has restricted the use of ChatGPT among its staff as a precautionary measure to safeguard sensitive financial information, highlighting a growing concern over the security and regulatory implications of AI. Similarly, banks in developing economies, such as Banco de la Nacion Argentina (Banco de la Nacion Argentina 2023), while embracing the future of AI, are taking precautions to limit the use of OpenAI’s tools to protect proprietary data. These moves reflect a broader trend in the financial sector towards cautious adoption of AI, ensuring that security and compliance remain paramount.

Similarly, Itaú Unibanco (Itaú Unibanco Holding 2023) is also cautiously approaching the adoption of generative AI, focusing on research and development while monitoring regulatory developments. The bank’s methodical approach underscores the importance of understanding the implications and risks of AI before full-scale deployment, ensuring that ethical, security, and bias concerns are adequately addressed.

In contrast, Wells Fargo (Wells Fargo 2023) is actively embracing AI to transform its operations, leveraging AI for customer interactions, risk management, and to enhance operational efficiencies. The bank’s virtual assistant, Fargo™, powered by Google’s conversational AI, is a prime example of how AI can streamline customer service. Wells Fargo’s strategic focus on AI is part of a decade-long investment in technology, aiming to integrate AI into various business applications, ensuring alignment with regulatory oversight, and maintaining a commitment to responsible technology use.

Mizuho Financial Group (Mizuho Financial Group 2023) is another institution exploring the potential of generative AI. The company is providing access to Microsoft’s Azure OpenAI service to thousands of its employees, reflecting a more open approach to AI adoption compared to its peers. Mizuho aims to use AI to improve efficiency and explore innovative solutions, although it remains vigilant about the associated risks, such as privacy concerns and data security.

SBI (State Bank of India 2023) is leveraging AI and ML to overhaul its banking operations, focusing on areas such as cybersecurity, fraud detection, customer service through chatbots, and credit assessment. The bank’s approach illustrates how AI can enhance decision-making processes and operational efficiency, ensuring a safer and more customer-centric banking experience.

HSBC (HSBC Holdings Plc 2023) emphasizes the ethical use of AI, guided by its principles to ensure integrity, protect privacy, and maintain transparency. The bank’s focus on preventing unfair bias, ensuring accountability, and adapting governance frameworks to meet emerging needs reflects a commitment to using AI responsibly and ethically. This approach aligns with broader industry efforts to develop best practices in the ethical use of AI.

Wells Fargo’s long-standing strategic investment in AI is illustrated by the launch of its Fargo virtual assistant (in collaboration with Google Dialogflow), which handled over 21 million customer interactions in 2023 and is being expanded to Spanish-language support. Their CFO recently noted that AI is extending across “nearly every part of the company” — from call centers to internal analyst workloads — with dozens of AI use-cases being piloted while maintaining a human-centric and regulation-aligned deployment strategy.

In the broader context, the financial industry is increasingly relying on intelligent automation to improve efficiency and customer service. Institutions like Lloyds Banking Group use robotics and AI to handle repetitive tasks, respond to customer queries, and process large volumes of transactions, demonstrating how automation can support both operational and customer-facing roles. As the use of AI and automation grows, it will be crucial for financial institutions to balance innovation with security, ethical considerations, and regulatory compliance.

Publication industry

The publishing industry is navigating the rapidly changing landscape brought about by the integration of artificial intelligence (AI), with a strong emphasis on ethical use, transparency, and protection of intellectual property. Major publishing companies such as Elsevier, Penguin Random House, and Harper Collins are exploring AI to enhance their processes and offerings while maintaining stringent guidelines to protect authorship and content integrity.

Elsevier (Elsevier 2023), a leading publisher of academic and scientific content, has implemented a policy that governs the use of generative AI and AI-assisted technologies in the writing process. The policy stipulates that while AI can be used to improve the readability and language of works, it should not replace core tasks such as producing scientific insights or drawing conclusions. Authors are required to disclose any use of generative AI tools (e.g., ChatGPT), AI assistance is strictly limited to language refinement rather than analysis, human oversight is mandated, and AI cannot be credited with authorship. Elsevier also prohibits using AI tools to create or alter images in submitted manuscripts, except when explicitly part of the research design, such as AI-assisted imaging in biomedical research. This careful approach reflects a broader commitment to upholding ethical standards and accuracy in scholarly publishing.

Penguin Random House (Penguin Random House 2024), another major player in the publishing world, sees AI as a tool to potentially boost book sales and streamline operations. It has implemented guardrails on AI use, including enforcing principles that support creativity and IP protection, embedding AI in supply-chain tools (e.g., e-book pricing and initial print-run forecasts via machine learning), and updating copyright pages to forbid its works from being used for AI training—one of the first publisher-led initiatives to explicitly opt content out of training datasets. CEO Nihar Malaviya has expressed hope that AI will facilitate the sale of more book titles without necessitating a significant increase in staff. Additionally, after a 2024 incident in which AI-generated marketing images led Penguin to introduce stricter content controls, the company affirmed that all third-party creative agencies must now follow its AI policies, ensuring both author consent and oversight in promotional content. The company’s exploration of AI comes in the context of broader industry challenges, including cost-cutting measures and attempts to expand market share, as seen in its bid to acquire Simon & Schuster.

Wolters Kluwer (Kluwer 2023) is also looking into AI applications, particularly in producing AI-narrated audiobooks, which could expedite the release of translated novels. Similarly, AI’s role in moderating online content is also gaining traction, as seen with News24’s (News24 2020) plan to reintroduce comments on its platform using AI tools to filter out hateful and discriminatory comments. Such innovations demonstrate how AI is being used to enhance accessibility and reach new audiences, albeit with careful consideration to avoid undermining the role of human narrators and translators.

The use of AI in the publishing industry extends beyond text generation to include ethical considerations in content creation and management. The Global Principles for AI, established by Grupo Editorial Record (Grupo Editorial Record 2023), is a set of guidelines adopted by publishers that emphasizes the importance of intellectual property rights, transparency, and fairness. These principles ensure that AI systems are developed and deployed under frameworks that respect publishers’ investments in original content. The principles advocate for the responsible use of AI, ensuring that content is used lawfully and creators are compensated fairly. Transparency in AI processes, from content training to deployment, is crucial for maintaining trust and integrity in published works.

Overall, the publishing industry is harnessing AI to improve operational efficiency, content accessibility, and user interaction. To strengthen policy coherence, several publishers, including Elsevier and Springer Nature, now enforce strict guidelines for reviewers and editors, prohibiting the uploading of manuscript content into external AI tools during peer review to preserve confidentiality and uphold integrity. Measures include mandatory AI disclosures in author manuscripts, copyright-page clauses preventing AI model training on publisher content, a ban on AI-generated figures unless they are explicitly part of reproducible research, and confidentiality rules protecting manuscripts during peer review. In conclusion, while AI adoption in publishing—spanning editing, narration, and commentary—continues, the industry’s policy frameworks make clear that AI must serve as a tool to support, not replace, human creativity, ethical responsibility, and judicious oversight.

Language translation services

The language translation services industry is evolving rapidly with the integration of artificial intelligence (AI), emphasizing efficiency, accuracy, and ethical considerations. Leading companies such as TransPerfect, SDL, and Lionbridge are leveraging AI to enhance translation processes while maintaining rigorous standards to ensure quality and reliability.

TransPerfect (TransPerfect 2024), a major player in translation and localization services, has incorporated generative AI into its operations to optimize business processes and reduce costs, focusing on three main areas: technology, services, and solutions. The company reports that its AI.NOW system handled over 50 million translated segments in 2024, achieving latency reductions of 40% while maintaining quality benchmarks through continuous human-evaluation loops. TransPerfect’s AI.NOW division prioritizes data security and compliance, ensuring that AI-driven translations adhere to strict privacy and confidentiality standards. The company also emphasizes the importance of human oversight in AI applications, using AI to assist rather than replace translators. This approach allows TransPerfect to offer customized solutions that meet the specific needs of its clients, ensuring both efficiency and quality.

SDL (SDL plc 2020), known for its AI-based Natural Language Understanding platform, has partnered with Expert System to integrate machine translation capabilities into its offerings. This collaboration aims to create a comprehensive multilingual content understanding platform that supports industries like life sciences and government. SDL’s focus on secure, scalable, and flexible AI solutions enables it to meet the diverse needs of its global clients. By combining AI with human expertise, SDL ensures that translations are not only accurate but also culturally relevant and contextually appropriate.

Lionbridge (Lionbridge 2023), another key player in the translation industry, utilizes AI to optimize translator assignments and improve content quality. It has developed machine-learning tools such as the Domain Detector and Customer Affinity, which match translators with projects based on their subject-area expertise and previous experience; the recently launched Domain Detector analyzes client content in real time, leading to a 20% increase in first-pass accuracy and faster turnaround. Lionbridge’s approach emphasizes a balance between automation and human oversight, ensuring that AI-driven processes enhance rather than diminish the role of skilled translators. This strategy enables Lionbridge to deliver high-quality translations efficiently while maintaining the integrity of the original content.

Elite Translations Africa (Elite Translations Africa 2023) likewise uses AI in ways that extend beyond automating text translation, emphasizing the preservation of African cultural heritage and intellectual-property protection. Meanwhile, companies like TransPerfect, SDL, and Lionbridge are committed to transparency in their AI processes, ensuring that content creators are compensated fairly and that translations adhere to ethical standards.

As AI continues to transform the translation industry, companies are exploring innovative applications to enhance accessibility and user engagement. For example, AI-powered tools are being used to streamline the production of multilingual content, improve real-time translation capabilities, and support complex localization projects. These advancements demonstrate how AI can be leveraged to meet the growing demand for high-quality, culturally sensitive translations in a globalized world.

Construction and urban planning

Collected policies from various construction and urban planning firms highlight the growing adoption of AI technologies to enhance productivity, efficiency, safety, and decision-making across the industry. Several companies, including Bechtel (Bechtel 2019), Turner Construction (Turner Construction 2023), Hyundai E&C (Hyundai 2022), and Shimizu Corporation (Shimizu 2023), are leveraging AI and machine learning to optimize construction processes, such as project scheduling, safety management, and structural design. Bechtel, for instance, has implemented AI-powered computer-vision systems on-site to detect safety hazards like missing PPE and alert supervisors in real time. Turner Construction deploys predictive scheduling tools to anticipate delays and optimize resource allocation. Hyundai E&C integrates AI-based structural analysis to improve earthquake resilience in building designs. Shimizu Corporation applies generative design algorithms for more efficient material usage.

Innovation is a key theme throughout the policies and guidelines, with companies like VINCI (Vinci 2023), Bouygues (Bouygues 2021), and Grupo ACS (Grupo ACS 2019) establishing dedicated platforms and initiatives to drive AI-driven innovation across their subsidiaries. These innovations span various applications, including predictive maintenance, virtual and augmented reality, and geolocation data collection. The policies also emphasize the importance of AI in addressing the construction industry’s productivity challenges. Lendlease (Patten 2023) highlights how AI can improve the quality of contract bids and streamline construction processes, ultimately leading to increased productivity and efficiency; its AI-guided bid estimations have led to a 15% faster pre-construction process. VINCI’s Leonard platform includes AI-based predictive maintenance, logistics optimization, AR-powered training, and underground utility mapping—such as the Exodigo pilot that identified 57% more utility lines than prior maps. The platform leverages IoT sensor data and machine learning to reduce equipment downtime by detecting anomalies before failure occurs. Grupo ACS, for its part, integrates AR for on-site worker training and hazard simulation.

Lastly, the UAE National Strategy for AI (Arab Contractors n.d.) provides an overarching framework for the development and adoption of AI across various sectors, including construction and urban planning. The strategy outlines the UAE’s vision to become a global leader in AI by 2031, focusing on key enablers such as talent attraction, data infrastructure, and responsible governance (UAE Government 2021). Challenges nonetheless remain: privacy concerns arise with site-wide facial recognition and worker tracking, so many policies now include explicit consent and opt-in mechanisms; data interoperability between BIM and AI platforms remains a key integration barrier; and regulatory frameworks around autonomous machinery still vary widely across jurisdictions, requiring case-by-case risk assessments.

In conclusion, the construction and urban planning industry is increasingly embracing AI technologies to tackle challenges related to productivity, safety, and efficiency. Companies are investing in innovation and collaborating with stakeholders to drive the responsible adoption of AI, while national strategies, such as the UAE’s, provide a supportive framework for the industry’s transformation.

Consulting and management

The policies and guidelines from various consulting and management companies, including McKinsey (Tilo 2023), Deloitte (Deloitte 2023), TCS (TCS 2024), Infosys (Infosys 2024), KPMG (Ditsche et al. 2023), Clearsight Advisors (Clearsight Advisors 2020), Falconi (Falconi 2023), and Africa International Advisors Group (AIA) (Africa International Advisors 2024), highlight the growing adoption and impact of AI and generative AI in business processes and decision-making. These companies are leveraging AI to enhance productivity, drive innovation, and deliver value to their clients. McKinsey’s Insights AI platform has processed over 25 million enterprise data points, enabling clients to identify potential efficiency gains worth up to 15% in non-technology sectors within six months.

A common theme across the collection is the importance of responsible AI adoption. Companies are developing guidelines and frameworks to ensure AI is implemented transparently, ethically, and in compliance with regulations (Infosys 2024; TCS 2024). TCS, for example, offers a comprehensive portfolio of services covering the entire AI lifecycle, helping enterprises implement AI solutions in an unbiased and trustworthy manner; it has formalized an AI ethics board that reviews 100% of internal AI pilots, mandating bias audits and human oversight for any system impacting customer behavior or employee outcomes. Another key aspect is the integration of people and technology. AIA’s Futures Practice emphasizes the importance of cultivating an environment in which people can adapt and excel while harnessing the potential of emerging technologies (Africa International Advisors 2024). Nous Group showcases its use of generative AI to support consulting work (Nous Group 2023).

The collection also highlights the transformative potential of AI across various sectors. KPMG analyzes the impact of ChatGPT on businesses, emphasizing the need for top managers to address the potential of generative AI urgently to harness its benefits and avoid being left behind by competitors. Falconi presents a case study of a client who achieved significant improvements in production performance through the application of AI, demonstrating the promising prospects of AI in driving operational excellence. Security emerges as another critical aspect of AI adoption. Accenture discusses the dual role of AI in both automating hacking and bolstering security, emphasizing the need for businesses to approach AI security with a multifaceted strategy, implementing robust measures, conducting risk assessments, and collaborating with experts to stay ahead of emerging threats.

In conclusion, the consulting and management companies featured in this collection are at the forefront of guiding their clients through the complexities of the evolving AI landscape. By promoting responsible AI adoption, integrating people and technology, harnessing the transformative potential of AI across sectors, and prioritizing security, these companies are helping businesses navigate the challenges and opportunities presented by AI and generative AI.

Design and fashion technology

The design and fashion industry collection highlights the growing adoption and impact of AI across various aspects of the industry, from product design and content creation to supply chain management and customer experience.

Nike (Zaytsev 2023) employs AI to enhance customer experience through personalized recommendations, supply chain optimization, and IT operations. The company collaborates with partners like Cognizant to modernize its infrastructure and drive connected commerce; its AI-powered demand forecasting system reduced overstock by 12% in the 2023–2024 cycle, optimizing global inventory flows across its affiliate manufacturing units. Ralph Lauren (Johnston 2023) is testing generative AI across various business functions, including copy editing, graphics, and computer programming, to improve productivity and drive better outcomes. The company is also exploring the use of NFTs and Web3 technologies to engage with customers. Shiseido (Shiseido 2017) is using AI to drive gradual transformation across its brands, focusing on consumer intimacy and data-driven decision-making. The company has established internal initiatives like the Shiseido Digital Centre of Excellence and SHISEIDO+ to foster a culture of innovation and upskill employees. Shiseido's Digital Centre of Excellence introduced a style-cohort detection model that segments consumer images into style archetypes with 85% accuracy, informing targeted marketing and reducing content waste.

LVMH (LVMH 2024) has partnered with Stanford University’s Institute for Human-Centered AI to explore AI applications in customer experience, product design, marketing, manufacturing, and supply chain management. The collaboration aims to develop human-centered AI solutions that complement human creativity and expertise. Kering (Kering n.d.) is leveraging AI across its value chain, from trend prediction and demand planning to supply chain optimization and pricing. The company has prioritized AI projects and established a dedicated AI team to drive innovation and efficiency.

Havaianas (Havaianas n.d.) utilized Sitation’s AI-powered content creation tool, RoughDraftPro, to generate accurate and consistent product descriptions for its Amazon listings, overcoming challenges related to incomplete product records and inconsistent branding. Falabella (Mares 2022) hired Amelia, a conversational AI assistant, to provide 24/7 IT support for its 100,000 employees, optimizing the handling of support tickets and freeing up resources for more complex issues. Cotton On (Cotton On 2024) uses Dash Hudson’s visual AI technology to create high-performing content, leverage data-backed insights for influencer collaborations, and craft strategic campaigns across marketing channels. The AI-driven approach has significantly improved the brand’s engagement rates and content creation process.

In conclusion, the design and fashion industry is increasingly embracing AI to drive innovation, improve operational efficiency, and enhance customer experiences. While challenges such as job displacement and bias in AI models persist, companies are collaborating with academic institutions and technology partners to develop responsible and human-centered AI solutions that complement human creativity and expertise.

Entertainment and game development

The entertainment and game development collection highlights the growing adoption and impact of AI across various aspects of the industry, from game design and content creation to player experience and safety (Knack and Powell 2023; KUKA 2023; Nusca 2023).

Electronic Arts (Arts 2022) explores how AI is transforming entertainment and culture, focusing on the ethical and esthetic considerations when using AI in social contexts. The company emphasizes the importance of purpose-driven, sustainable outcomes and responsible AI adoption. Tencent (Tencent 2020) applies AI across its businesses, including content recommendation, social interactions, and gameplay experience. The company also pursues key applications in industries such as medical, agriculture, industrial, and manufacturing, aiming to help enterprises achieve digital upgrades through AI. Square Enix’s President (Enix 2024) discusses the company’s AI initiatives, highlighting the potential of generative AI to reshape content creation and fundamentally change programming processes. The company is investing in AI, blockchain entertainment, and the cloud to adapt to the changing business environment and drive innovation. Ubisoft’s Ghostwriter (Ubisoft 2019), an in-house AI tool, assists scriptwriters by generating first drafts of barks, allowing them to focus on polishing the narrative. The tool demonstrates how AI can augment human creativity and streamline game development processes.

Supercell (Kahn 2020) is backing a venture capital fund dedicated to investing in early-stage AI startups. The fund, run by Air Street Capital, aims to identify and nurture AI companies that are developing innovative business models and solutions. Wildlife (Wildlife 2024) prioritizes player safety by using AI scanning to monitor user-generated content and partnering with law enforcement and child protection agencies when necessary. The company’s Trust and Safety team ensures the well-being of players through a combination of AI systems and human review. Globant’s AI Manifesto (Globant 2024) outlines the company’s principles for responsible AI adoption, emphasizing augmented intelligence, respectful data, fairness, transparency, social contribution, and sustainable AI. The manifesto also lists applications that Globant will not support, such as misinformation, malicious use, and reckless AI.

Tencent's content recommender uses generative AI to dynamically produce in-game narrative variations, improving user session length by 18% in early 2025 beta tests. Ubisoft's Ghostwriter has drafted more than 200,000 lines of dialogue across five titles, with 70% of the content subsequently approved by human writers—signifying substantial creative augmentation.

In conclusion, the entertainment and game development industry is increasingly embracing AI to drive innovation, enhance player experiences, and streamline production processes. While challenges related to responsible AI adoption and ethical considerations persist, companies are collaborating with partners and developing guidelines to ensure the safe and beneficial integration of AI into their products and services.

Journalism and news media

The journalism and news media industry is increasingly adopting AI to innovate, improve efficiency, and enhance audience engagement. AI is being used to generate articles, particularly for hyperlocal news (David 2024; Khan 2023), personalize content recommendations, enhance accessibility, and improve user experience (Buckingham-Jones 2024; Khan 2023). However, human oversight remains crucial for maintaining accuracy and journalistic standards (Khan 2023).

To ensure responsible AI use, many organizations have developed principles and guidelines focusing on fairness, security, privacy, intellectual property rights, human oversight, and accountability (Cote 2024; Thomson Reuters 2023). News24’s AI moderation engine reduced harmful comments by 45%, reinstating its comment section in mid-2024 with adjustable flags and safety thresholds for user-generated content. Data protection and regulatory compliance are significant challenges (Cote 2024; Openda and Kimanthi 2024), with governments and organizations working to implement safeguards and policies (Openda and Kimanthi 2024).

News organizations are collaborating with technology companies, academic institutions, and other stakeholders to advance AI capabilities and address challenges (Buckingham-Jones 2024; Figueroa 2023). A collaboration between Spotify and Reuters in 2023 led to the adoption of an AI summarization tool that processes daily press releases in <30 s, enabling more timely news curation. While AI is seen as a powerful tool, there’s an emphasis on maintaining human creativity, insight, and oversight in news production (Cote 2024; Khan 2023). As AI evolves, news organizations must adapt to a complex regulatory landscape (Cote 2024; Openda and Kimanthi 2024), balancing AI benefits with core journalistic values of accuracy, transparency, and public trust.

Pharmaceutical research and development

The pharmaceutical industry is increasingly adopting AI to innovate across drug discovery, development, manufacturing, supply chain management, and patient care. AI is being used to accelerate drug discovery and development (Eisai 2024; Johnson and Johnson 2024; Pfizer 2024; Roche 2021), identify drug targets, generate molecular structures, predict drug-target interactions, and optimize clinical trials (Eisai 2024; Roche 2021). Companies are establishing ethical principles and governance structures to ensure responsible AI use, focusing on fairness, transparency, accountability, privacy, security, and human oversight (Johnson and Johnson 2024; Lawrence 2024; Novartis 2024; Pfizer 2024; Sanofi 2024; Takeda 2020).

The industry emphasizes using AI to augment human capabilities rather than replace them (Johnson and Johnson 2024; Lawrence 2024; Novartis 2024; Takeda 2020), applying it to improve patient care, enhance diagnosis and treatment, and enable personalized medicine (Johnson and Johnson 2024; Patrick 2020; Roche 2021). Chugai’s AI-guided discovery pipeline contributed to 3 new digital biomarker initiatives in 2023, accelerating candidate selection timelines by nearly 25%. Companies are also addressing the environmental impact of AI (Lawrence 2024; Novartis 2024) and partnering with sustainable technology platforms.

As pharmaceutical companies navigate this complex landscape, they are developing governance frameworks, investing in employee training, and collaborating with stakeholders to ensure responsible and sustainable AI deployment in healthcare.

Social media and telecommunications

Social media and telecommunications companies are establishing ethical AI principles and guidelines. LinkedIn focuses on economic opportunity, trust, fairness, inclusion, transparency, and accountability (LinkedIn 2023). WPP has developed comprehensive AI policies and ethics principles (WPP 2019). Privacy and data protection are prioritized, as seen in X’s policy on synthetic media (X 2024) and Skype’s translation feature (Microsoft 2024). Transparency and explainability are key concerns, with companies like Spotify (Kaput 2024) and LinkedIn (LinkedIn 2023) striving to explain their AI systems clearly. Addressing AI biases and promoting fairness is a common theme (LinkedIn 2023; X 2024). Companies are leveraging AI to enhance customer experiences, including Spotify’s personalized recommendations (Kaput 2024) and Mercado Libre’s e-commerce enhancements (PYMNTS 2024). AI is also being used to boost operational efficiency and drive innovation. Examples include Alibaba Cloud’s Tongyi Qianwen model (Alibaba 2023), América Móvil and Carlos Slim Foundation’s health monitoring tools (Foundation 2022), and Accel’s focus on AI applications for business productivity (Teare 2023).

LinkedIn’s fairness-testing infrastructure audited over 10,000 content-serving models in 2024, ensuring <1% variance in algorithmic reach across gender and race groups. Alibaba Cloud’s Tongyi Qianwen model processed over 2 billion API calls in Q1 2025, with built-in adversarial testing and integrated API-level explainability controls.

In conclusion, the industry trend is towards comprehensive AI policies that address ethical concerns, promote responsible development, and leverage AI’s potential while balancing innovation with necessary regulations.

Advertising and marketing

AI is recognized as a transformative force in the advertising and marketing sector. Companies like Ogilvy (Ogilvy 2023), Publicis Groupe (Publicis 2024), and LegalZoom (LegalZoom 2023) are leveraging AI to revolutionize their industries. There’s a consensus that AI will augment rather than replace human workers, with Ogilvy (Ogilvy 2023) and Publicis Groupe (Publicis 2024) emphasizing AI’s role in enhancing human creativity and productivity.

AI is enabling unprecedented personalization and efficiency at scale. Omnicom (Beaule 2023) and Havas Media Group (Joseph 2023) use AI for targeted marketing campaigns and optimized media buying. Companies like Dentsu (Dentsu 2024) and Publicis Groupe (Publicis 2024) are integrating AI across multiple business functions. The impact on workforce dynamics is stressed, with the ABC resource guide (ABC 2023) noting potential job automation and new job creation. Legal and regulatory aspects of AI are still evolving, as evidenced by Hakuhodo DY Holdings’ cautious approach (DY 2022) and LegalZoom’s beta launch of AI-assisted services (LegalZoom 2023). Omnicom’s AI-driven media-buying platform dynamically adjusted bids across 500+ channels, increasing ROI by 18% and reducing client campaign overspend by 12% in 2024. LegalZoom’s 2024 beta of its AI assistant supported ~50,000 legal queries monthly, with a 92% user satisfaction score and leading to a 15% increase in upsell conversions.

In conclusion, organizations are embracing AI for its transformative potential while recognizing the need for ethical considerations, responsible use, and workforce adaptability. Ongoing collaboration and adaptation will be crucial as AI technology continues to evolve rapidly.

Legal technology and services

AI within legal technology and services focuses on ethical considerations and emerging regulations. A consistent theme is balancing AI's transformative potential with responsible implementation. Thomson Reuters (Thomson Reuters 2023), Rocket Lawyer (Lawyer 2024), and Demarest Advogados (Demarest 2022) emphasize key ethical principles for AI development and use, including beneficence, non-maleficence, autonomy, justice, and explicability. These inform practical guidelines for AI workplace policies and governance structures outlined by Cyril Amarchand Mangaldas, Rocket Lawyer, and Allen & Overy (Lawyer 2024; Mangaldas 2023; Shearman 2023).

Zegal and Allen & Overy highlight AI's impact on legal practices, revolutionizing areas like legal research, document review, and contract analysis (Post 2024; Shearman 2023). While noting efficiency gains, they stress the need for human oversight. Recommendations for responsible AI practices across multiple documents include developing AI-focused organizational cultures, conducting impact assessments, ensuring diverse teams, providing staff training, creating robust data policies, and implementing ongoing auditing processes (Demarest 2022; Lawyer 2024; Mangaldas 2023). Others discuss the emerging regulatory landscape for AI, including the EU AI Act and proactive efforts by companies to establish AI governance policies. This reflects a growing trend towards "responsible AI" as global regulatory scrutiny increases (Africa 2024; Chambers and Partners 2024; Lawpath 2021; Wentzel 2023). Overall, the collection emphasizes AI's power to revolutionize industries, particularly law, while stressing the critical importance of ethical considerations and robust governance structures.

Cross-sectional comparisons and analysis

Although companies across all 14 sectors are embracing AI for its transformative potential, three sectors stand out in how they reconcile that potential with ethical considerations, responsible use, and workforce adaptability: finance, social media, and construction. Finance operates under the heaviest legal exposure because AI errors can trigger systemic losses; banks, therefore, favor tight, regulator-backed "gatekeeping" controls. HSBC's published AI principles hard-wire board-level accountability, bias monitoring, and model explainability into every deployment (HSBC n.d.), while Mizuho's roll-out of Azure OpenAI to 45,000 staff was permitted only after internal red-team tests and Microsoft's enterprise safeguards (Tilo 2023). The UK Financial Conduct Authority is even piloting a super-sandbox so banks can stress-test fraud-detection models on real data without breaching live-market rules—a model of risk-segmented experimentation that other high-stakes sectors could emulate (Makortoff 2025).

Social-media platforms, by contrast, confront reputational (rather than systemic-financial) risk and rely on voluntary, principle-based codes. LinkedIn publicly anchors its AI program to five values—economic opportunity, trust, fairness, transparency, and accountability—yet gives product teams latitude to interpret them case-by-case (Baruch 2023). X's "synthetic and manipulated media" policy shows how content platforms prioritize authenticity labeling and user reporting over pre-deployment audits, illustrating a governance skew toward speech rights and brand trust rather than formal compliance regimes (Authenticity 2025). Crucially, both finance and social media cite GDPR, but from opposite angles: banks frame it as a data-security mandate, whereas platforms debate how much explainability they can reveal before jeopardizing proprietary algorithms (HSBC 2023)—a divergence that exposes a regulatory gray zone over algorithmic transparency.

Construction and urban-planning firms face physical-safety risks and long project cycles, so their playbook blends national strategy alignment with operational pilots. The UAE's AI 2031 plan mandates ethics reviews and carbon-impact tracking for geospatial AI, providing a macro policy scaffold (UAE Government, 2021). On the ground, Bechtel's computer-vision safety monitors cut response time to site hazards, VINCI's predictive-maintenance models lower equipment downtime (Cordier 2023; Leonard 2024), and Bouygues' risk-zone mapping alerts foremen to near-misses in real time (Cordier 2023). Together these cases illustrate a "human-override first" norm—AI augments, but never replaces, professional judgment—mirroring finance's caution yet differing from social media's user-centric emphasis.

Synthesizing across the three sectors uncovers two convergent themes—privacy/traceability safeguards and staged sandbox testing—but also two clear divergences: (1) who wields oversight (regulators in finance vs. corporate trust teams in social media vs. mixed public–private councils in construction), and (2) the primary risk vector (monetary contagion, reputational harm, or physical safety). Recognizing these patterns suggests a layered policy architecture: universal baseline duties (audit trails, bias logs, disclosure triggers) coupled with sector-specific "risk modules" calibrated to the dominant hazard class. Such a schema both promotes ethical accountability and keeps access equitable by letting lower-resource sectors adopt only the modules they need, thereby advancing responsible GAI/LLM development without stifling innovation.
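To make the layered architecture concrete, the following minimal sketch encodes universal baseline duties plus opt-in risk modules as plain data structures. It is purely illustrative: the module names, duties, and hazard classes are hypothetical examples drawn from the three sectors above, not a proposed standard.

```python
# Illustrative sketch only: shared baseline duties plus hypothetical modules.
from dataclasses import dataclass, field

@dataclass
class GovernancePolicy:
    # Universal baseline duties applied to every sector
    baseline: tuple = ("audit trails", "bias logs", "disclosure triggers")
    # Sector-specific "risk modules" opted into as needed
    modules: list = field(default_factory=list)

# Hypothetical modules keyed by the dominant hazard class of each sector
RISK_MODULES = {
    "monetary contagion": ["model-risk approval", "sandbox stress-tests"],
    "reputational harm": ["authenticity labeling", "user reporting channels"],
    "physical safety": ["human-override first", "site-hazard monitoring"],
}

def policy_for(hazard: str) -> GovernancePolicy:
    """Compose a policy from the shared baseline and one risk module."""
    return GovernancePolicy(modules=RISK_MODULES[hazard])

print(policy_for("physical safety"))
```

Under this schema, a lower-resource sector would adopt only the baseline plus the single module matching its hazard class, which is precisely the equity property argued for above.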

Quantitative findings and resultant patterns

Our analysis employed text mining techniques, ranging from tokenization to visualization, to thoroughly explore AI guidelines across diverse industries. We conducted a Qualitative Semantic Analysis to identify ten key concepts within 14 industrial sectors, focusing on both frequently mentioned terms (e.g., ‘content,’ ‘data,’ and ‘risk’) and those of unique relevance to specific sectors (e.g., conflict, fashion, treatment). Following this, a TF-IDF Heatmap Analysis measured the significance of terms within each industry, producing a cosine similarity matrix to visually compare thematic alignment across sectors. This integrated approach of qualitative depth and quantitative rigor illuminated shared priorities and sector-specific focuses, offering a foundational framework for enhancing and refining AI guidelines. Before turning to frequency maps and Sankey flows, we report model-selection diagnostics for transparency. Across k = 2–12, the elbow in inertia occurs at k ≈ 8 with diminishing returns beyond; the average silhouette attains a local maximum at k = 8 (slightly above k = 7 and k = 9), and Calinski–Harabasz likewise prefers k = 8. A bootstrap stability check (80% subsamples, 100 runs) preserves a large majority of top terms per cluster and maintains a high adjusted Rand against the full solution. Importantly, the synonym-merged features modestly increase silhouette and stability relative to lemmatization alone, indicating that semantic normalization sharpens thematic boundaries (Thomson Reuters 2023; Figueroa 2023). These results validate k = 8 as the best-performing and most interpretable setting for our cross-industry segmentation.
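For readers who wish to reproduce these diagnostics, the sketch below shows one plausible implementation with scikit-learn, assuming a list `docs` holding the 160 policy texts; the vectorizer settings and subsample scheme are illustrative stand-ins rather than the exact study configuration.

```python
# Hedged sketch of the k-selection diagnostics, assuming scikit-learn.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import (adjusted_rand_score, calinski_harabasz_score,
                             silhouette_score)

docs = ["..."]  # placeholder: the 160 policy documents, one string each

# TF-IDF features (synonym-merged tokens would be substituted upstream)
X = TfidfVectorizer(stop_words="english", max_features=5000).fit_transform(docs)

# Scan k = 2..12, recording inertia (elbow), silhouette, and Calinski-Harabasz
for k in range(2, 13):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    sil = silhouette_score(X, km.labels_)
    ch = calinski_harabasz_score(X.toarray(), km.labels_)
    print(f"k={k:2d}  inertia={km.inertia_:.1f}  silhouette={sil:.3f}  CH={ch:.1f}")

# Bootstrap stability at k = 8: refit on 80% subsamples, compare via adjusted Rand
rng = np.random.default_rng(0)
full = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X)
for run in range(100):
    idx = rng.choice(X.shape[0], size=int(0.8 * X.shape[0]), replace=False)
    sub = KMeans(n_clusters=8, n_init=10, random_state=run).fit(X[idx])
    print("ARI vs full solution:", adjusted_rand_score(full.labels_[idx], sub.labels_))
```

A consistently high adjusted Rand index across subsamples is what supports the stability claim; the elbow, silhouette, and Calinski-Harabasz curves jointly motivate the k = 8 setting.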

Qualitative semantic analysis

We conducted a qualitative semantic analysis by compiling key concepts from our literature review and identifying the ten most significant concepts for each of the 14 categories. Each bar chart displays the frequency of these ten terms across the 14 industry sectors. The selection of these key concepts was based on two criteria: their frequency of mention in the guidelines, indicating a consensus on their importance (e.g., ‘content,’ ‘data,’ and ‘risk’), and their unique occurrence in the discourse, highlighting their specific significance despite their contextual nature (e.g., conflict, fashion, treatment). Figure 1 shows the frequency analysis of these qualitative findings.
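As a simple illustration of the counting step behind these bar charts, the sketch below tallies ten key concepts for one sector; the concept list mixes the frequent and unique examples named above, and the corpus is a placeholder for the study's curated guideline texts.

```python
# Minimal frequency count behind the Fig. 1 bar charts (illustrative inputs).
import re
from collections import Counter

KEY_CONCEPTS = ["content", "data", "risk", "privacy", "ethical",
                "integrity", "transparency", "conflict", "fashion", "treatment"]

def concept_frequencies(texts, concepts):
    tokens = Counter()
    for text in texts:
        tokens.update(re.findall(r"[a-z]+", text.lower()))
    return {c: tokens[c] for c in concepts}

sector_docs = {"Legal Tech & Legal Services": ["Privacy and data risk ..."]}
for sector, texts in sector_docs.items():
    print(sector, concept_frequencies(texts, KEY_CONCEPTS))
```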

Fig. 1: Key Concepts. The frequency of key concepts in the major nine themes.

Industry-wise Sankey diagram keyword co-occurrence analysis

We conducted a quantitative semantic analysis using the top 10 key concepts from Fig. 1, matching co-occurring words and usage patterns in our guideline texts across the 14 industries and visualizing the overlaps with Sankey diagrams (see Fig. 2). These diagrams represent the flow and thematic alignment of AI-related guideline topics, highlighting the proportional emphasis each industry places on its areas of focus.
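The sketch below shows how one such diagram could be assembled with Plotly's Sankey trace, assuming a prepared table of (key concept, co-occurring term, count) links; the link values echo counts reported in the synthesis below but are otherwise placeholders.

```python
# Hedged sketch: rendering one industry's co-occurrence flows as a Sankey.
import plotly.graph_objects as go

# (key concept, co-occurring term, raw co-occurrence count) -- illustrative
links = [("privacy", "data", 412), ("privacy", "AI", 333), ("ethical", "AI", 135)]

labels = sorted({w for a, b, _ in links for w in (a, b)})
index = {w: i for i, w in enumerate(labels)}

fig = go.Figure(go.Sankey(
    node=dict(label=labels, pad=20),
    link=dict(
        source=[index[a] for a, _, _ in links],  # left-hand key concepts
        target=[index[b] for _, b, _ in links],  # right-hand co-occurring terms
        value=[c for _, _, c in links],          # counts set the flow widths
    ),
))
fig.write_html("sankey_one_industry.html")  # or fig.show() in a notebook
```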

Fig. 2: AI Integration Guidelines. Industry-wise Sankey diagram analysis for AI integration guidelines.

Each industry-specific Sankey diagram captures the distribution and flow of keyword co-occurrences across various thematic categories, such as ethical complexities, innovation, decision-making paradigms, and collaborative creativity, highlighted by our industry-specific keywords. The diagrams also reveal commonalities and differences in priorities, providing a comparative perspective on how sectors integrate AI guidelines into their operational frameworks.

Figure 2 illustrates these Sankey diagrams for industries ranging from healthcare and finance to journalism and legal services. By mapping these flows, we gain insights into the unique thematic concentrations and shared concerns in AI adoption across sectors. This analysis underscores key areas for future development, including addressing ethical dilemmas, ensuring fairness, and fostering innovation while balancing traditional approaches with emerging technologies.

Discussion and synthesis

In this section, we synthesize these findings to propose recommendations for refining industry-specific AI integration guidelines. This includes fostering cross-sector collaboration to address shared challenges, enhancing safety and responsibility protocols, and navigating the complex interplay between technological optimism and ethical caution in AI adoption. The eight themes outline a modular governance architecture that couples universal duties—privacy-by-design, auditability, explainability—with risk-calibrated modules tuned to sectoral hazards, aligning with widely cited frameworks (OECD principles, EU risk-tiering, NIST AI RMF) (TCS 2024; Falconi 2023). Concretely, finance can hard-wire model-risk approvals and sandbox stress-tests; healthcare can require consent traceability and human overrides; publishing/media can mandate AI-use disclosure and provenance; platforms can deploy tiered explainability that protects privacy while enabling accountability; and creative industries can institutionalize augmentation-first co-creation. This sector-aware view clarifies why themes differ and how they translate into development roadmaps for each industry.

Table 2 maps the structure of this "Discussion and synthesis" section and confirms that the ten roles are collectively exhaustive: every subsection (Sections "Synthesis of quantitative and qualitative insights", "Collaborative and adaptive governance framework", "Cross-sectional comparison and synthesis", "Dynamic regulation in healthcare and pharmaceuticals: balancing innovation and safety", "Augmentation-first guidelines for creative and linguistic AI", "Transparent AI in social media: trade-off between privacy and accountability", "Human-AI collaboration in design and entertainment: bridging creativity and ethics", "Legal technology and AI: proactive governance for emerging risks", "The risks of marketing hype in AI guidelines and policies") now has a distinct strategic takeaway. The roles remain largely, but not perfectly, mutually exclusive: transparency (Section "Transparent AI in social media: trade-off between privacy and accountability") and risk-based oversight (Sections "Dynamic regulation in healthcare and pharmaceuticals: balancing innovation and safety" and "Legal technology and AI: proactive governance for emerging risks") inevitably intersect, yet no two roles advance the same prescription. Together, the revised set captures a coherent, risk-calibrated, human-centered trajectory for AI governance. Healthcare and pharma adopt adaptive, tiered regulations to keep low-risk chatbots nimble while guarding high-risk diagnostics (Section "Dynamic regulation in healthcare and pharmaceuticals: balancing innovation and safety"). Creative and linguistic industries enshrine "augmentation-first" policies that require human editors and full disclosure of generative-AI use (Section "Augmentation-first guidelines for creative and linguistic AI"). Social-media platforms pilot tiered explainability that squares user trust with GDPR-style privacy (Section "Transparent AI in social media: trade-off between privacy and accountability"). Design and entertainment articulate collaboration rules that label AI-generated content and safeguard IP (Section "Human-AI collaboration in design and entertainment: bridging creativity and ethics"). Legal-tech firms build pre-deployment bias simulations and cultural-competence training (Section "Legal technology and AI: proactive governance for emerging risks"). Finally, Section "The risks of marketing hype in AI guidelines and policies" warns that marketing hype can erode public trust and calls for third-party-audited evidence before proclaiming "revolutionary" AI breakthroughs. This cross-sector spine—anchored by the EU AI Act's risk tiers and NIST's "Govern-Map-Measure-Manage" cycle—illustrates how organizations can balance innovation with accountability, embedding humans where judgment matters and grounding every claim in verifiable data.

Table 2 Cross-sector AI governance roles and their core takeaways.

Synthesis of quantitative and qualitative insights

Our semantic analysis of key concepts, combined with the keyword-frequency counts and the Sankey diagrams, visualizes keyword co-occurrence percentages across the 14 industries. It reveals that certain terms, despite appearing less frequently, are essential for future guidelines, and it offers insights into thematic overlaps and industry-specific focal points of GAI and LLMs.

Privacy is a high-priority concept across multiple industries, with Legal Tech & Legal Services showing 227 instances of “privacy,” underlining its critical role in protecting user information. The visualization confirms this, showing “privacy” as one of the more frequently co-occurring words with “data” (2.6%, 412 occurrences) and “AI” (2.1%, 333 occurrences) in the legal domain, which highlights its prominence in the industry’s focus on compliance and protection. However, disclosure, with only 14 mentions across all documents, is underrepresented, despite its significance in promoting transparency. The Sankey diagram for Journalism, News & Media also reflects limited emphasis on transparency-related terms, with “communication” and “reporting” appearing weakly (0.1%, 3 occurrences). Emphasizing “disclosure” in all industry guidelines would help users understand AI capabilities and limitations, fostering accountability and informed decision-making.
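One plausible reading of these paired figures is that each percentage is the share of a focus term's windowed co-occurrences accounted for by a given partner term; a minimal computation under that assumption, with sentence-level windows and a placeholder corpus, might look as follows.

```python
# Illustrative co-occurrence percentages for a focus term (sentence windows).
import re
from collections import Counter

def cooccurrence(texts, focus, top=10):
    pairs = Counter()
    for text in texts:
        for sentence in re.split(r"[.!?]", text.lower()):
            words = set(re.findall(r"[a-z]+", sentence))
            if focus in words:
                for w in words - {focus}:
                    pairs[w] += 1
    total = sum(pairs.values()) or 1  # guard against an absent focus term
    return [(w, n, 100 * n / total) for w, n in pairs.most_common(top)]

legal_docs = ["Privacy obligations shape how data flows through AI systems. ..."]
for term, count, pct in cooccurrence(legal_docs, "privacy"):
    print(f"privacy ~ {term}: {count} occurrences ({pct:.1f}%)")
```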

In the Ethical Complexities & Human-Centric Usage category, "ethical" appears frequently, with 216 instances across industries like Consulting & Management and Healthcare & Counseling, emphasizing its importance. The diagrams show "ethical" connecting strongly to "AI" (4.8%, 135 occurrences) in Consulting & Management, reflecting its centrality in AI-driven solutions. In contrast, "human-centric" is mentioned only twice, suggesting a gap in guidelines promoting inclusivity and accessibility. The Social Media & Telecommunication sector, with its strong user interaction (381 mentions of "content" and 52 mentions of "interaction"), could benefit from a greater emphasis on "human-centric" approaches. Expanding guidelines to promote inclusivity on high-interaction platforms would enhance trust and accessibility, particularly as the diagrams reveal a focus on "user" (1.5%, 68 occurrences) and "service" (0.5%, 21 occurrences) but limited emphasis on equity-driven practices.

In the Balancing Innovation & Integrity category, "integrity" appears frequently (211 instances), especially in sectors where ethical standards are essential, like Pharmaceutical Research & Development. The visualization reinforces this, showing "integrity" linked strongly to "AI" (4.8%, 281 occurrences) and "system" (2.0%, 115 occurrences), reflecting its critical role in ensuring ethical drug development. Meanwhile, "alternative methods" are referenced only 47 times across industries, indicating an area where innovation could be encouraged. Sectors like Education and Healthcare & Counseling could benefit from exploring "alternative methods" such as adaptive learning and AI-driven diagnostic tools, balancing innovation with ethical standards. Additionally, the analysis reveals a low frequency of critical concepts like misinformation (19 instances) and skepticism (only 5 mentions), particularly in Journalism, News & Media, and Social Media & Telecommunication. This is reflected in the weak connections to terms like "compliance" and "truthfulness" in their respective Sankey diagrams. Expanding guidelines on misinformation detection and skepticism could strengthen truthfulness and risk mitigation strategies, fostering trust in AI systems. This is especially relevant in Journalism, News & Media, where "content" appears prominently (2.8%, 93 occurrences), but guidelines around responsible information sharing remain limited.

In the Collaborative Creativity & Co-Designing category, assistance is frequently noted (41 instances), whereas democratization appears just once across all guidelines, highlighting a significant gap. The diagrams emphasize this gap, particularly in Entertainment & Game Development, where “platform” is referenced 213 times, showing the industry’s reliance on AI-driven systems. However, democratization efforts such as open-source tools and collaborative development platforms are not evident. Broadening these efforts could make AI more accessible and foster co-design with diverse user groups. Initiatives like educational resources and co-creation practices would enhance accessibility and equity in AI innovation.

Predictive analytics, referenced in Finance & Banking (120 mentions of "investment" and 133 of "market"), plays a significant role in empowering decision-making. The Sankey diagram for Finance & Banking confirms this by showing strong links between "AI" (2.9%, 109 occurrences) and investment-related terms like "banking" and "market." Leveraging "predictive analytics" in educational settings or data-driven industries could provide insights for better resource allocation and student support, optimizing outcomes. For instance, Healthcare & Counseling, with 183 instances of "support" and 109 of "patient," could integrate predictive tools for patient care and resource management. Additionally, promoting critical thinking in AI-powered learning tools encourages analytical reasoning and problem-solving skills, benefiting students and professionals alike across fields.

Lastly, the analysis shows that support-focused guidelines are more prevalent, while those for employees and management are comparatively underrepresented. For example, Consulting & Management guidelines contain 134 instances of "strategy," showing a strong emphasis on organizational direction. However, the diagrams reveal limited co-occurrences of terms like "leadership" (0.1%, 2 occurrences), indicating a gap in addressing educational and managerial roles. Developing discipline-specific guidelines tailored to fields like Healthcare (183 instances of "support" and 109 of "patient") and Legal Services (227 mentions of "privacy") could ensure AI usage aligns with each field's unique challenges and ethical considerations. Working with domain experts to refine industry-specific practices, as visualized in the Sankey diagrams, will enable responsible and effective AI implementation across diverse sectors.

Collaborative and adaptive governance framework

The complexity of AI technologies necessitates a shift from static, top-down guidelines to a dynamic and modular co-creation framework. This approach emphasizes continuous collaboration among developers, regulators, users, and ethicists to treat guidelines as “living documents” that evolve with real-world feedback and technological advancements. AI-driven tools, such as sentiment analysis, can further support updates by identifying emerging gaps and concerns, fostering trust and accountability (Afroogh et al. 2024, 2023).

A key component is AI-enhanced meta-governance, where AI audits evaluate adherence to ethical principles by analyzing public outputs and exposing inconsistencies. This incentivizes transparent and actionable guidelines. Complementing this, modular governance structures combine universal ethical principles with sector-specific and operational standards, allowing industries to adapt efficiently while maintaining accountability. By enabling organizations to “opt-in” to tailored modules, this flexible system ensures guidelines remain relevant and practical across diverse contexts. AI guidelines often emphasize human-centric design superficially, overlooking the critical role of embedding values during the pre-design and in-design phases. To address this, we propose mandating participatory design practices involving diverse stakeholders—end-users, ethicists, and domain experts—early in development to ensure AI systems align with societal and cultural contexts. For example, healthcare AI should integrate cultural and emotional intelligence metrics to deliver not just accurate but empathetic and context-sensitive recommendations.

To further drive this commitment, governments and regulators should incentivize responsible innovation through grants, tax breaks, and certifications like “Ethical AI Leader” awards. These measures reward companies prioritizing bias mitigation, ethical innovation, and open-source contributions, fostering accountability and competition in ethical practices. By embedding values early and offering strong incentives, this integrated approach ensures AI systems enhance societal well-being while promoting a culture of ethical responsibility and long-term innovation (Jiao et al. 2024a).

Complementing this, sandbox environments provide controlled spaces to test AI systems and guidelines under realistic conditions. Stakeholders can evaluate tools across diverse user populations and regulatory contexts, refining both technology and governance mechanisms before large-scale deployment. For instance, a healthcare sandbox could assess AI compliance and effectiveness across varying demographics and resource levels. Sandboxes also support testing novel oversight models, ensuring governance frameworks are evidence-based and adaptable. By integrating mediation tools and sandbox models, this approach bridges conflicting interests and proactively evaluates guidelines, creating a governance framework that is inclusive, accountable, and aligned with real-world demands.

Cross-sectional comparison and synthesis

Across the 14 sectors we reviewed, a remarkably consistent regulatory "spine" has emerged: leading frameworks—from the OECD's five high-level AI Principles that foreground human-centered values, transparency, and robustness (BIS 2024), through the EU AI Act's tiered, risk-based obligations and mandatory impact assessments (Hernández de Cos 2024)—converge on privacy-by-design, explainability, and continuous risk-management as non-negotiable baselines. Operational playbooks such as NIST's AI Risk-Management Framework (Govern-Map-Measure-Manage) translate those values into auditable controls that any industry can adopt (Crisanto et al. 2024), while the new ISO/IEC 42001 management-system standard turns them into certifiable processes for day-to-day governance (Crisanto et al. 2024). At a global level, UNESCO's Recommendation on the Ethics of AI embeds human-rights and equity checkpoints, and Singapore's Model AI Governance Framework operationalizes them via adaptive "regulatory sandboxes" that let firms pilot novel uses under supervision (BIS Representative Office for the Americas 2025). Cross-industry bodies are pushing in the same direction: the World Economic Forum's guidelines tie responsible AI directly to inclusive economic growth (Yuen 2024); the FDA's Good Machine-Learning Practice principles add life-cycle monitoring for medical AI; the Financial Stability Board calls for proportionality and clear accountability lines in banking algorithms; and IEEE's Ethically Aligned Design urges every sector to hard-wire well-being and public benefit into system requirements from day one (scrut.io 2025). Synthesizing these instruments yields a common blueprint: (1) embed privacy, safety, and fairness at design time; (2) calibrate oversight to application-level risk; (3) require explainability and transparent audit trails; (4) keep models, data, and policies under continuous review through AI-assisted audits and sandboxes; and (5) invest in inclusive stakeholder participation so benefits—and accountability—are shared equitably. Aligning corporate guidelines with this cross-sector spine can speed up innovation while giving regulators and the public clear, harmonized benchmarks for ethical accountability and equitable access.

Furthermore, three cross-cutting governance pillars consistently surface among all 14 sectors—privacy-by-design, algorithmic accountability, and continuous risk monitoring—yet each industry operationalizes them differently in response to the intensity of domain-specific harms and the maturity of external regulation. For example, most healthcare and legal-tech guidelines pair strict consent and audit provisions with real-time monitoring dashboards, mirroring the stringent patient-safety and client-confidentiality mandates codified in the EU AI Act draft and HIPAA/GDPR regimes (scrut.io 2025). Finance and banking frameworks, facing high systemic-risk exposure, layer these same principles onto “cautious-innovation” sandboxes that require pre-deployment stress-tests and tiered model approvals, echoing the risk-weighted methodology advocated by NIST’s cross-sector AI Risk-Management Framework (Workforce 2023). Conversely, creative industries—design, entertainment, and publishing—emphasize disclosure and human-in-the-loop co-creation over rigid audits, aligning with the OECD’s flexible, innovation-friendly AI principles (oecd.ai 2025). These patterns suggest a spectrum of governance logics: high-stakes, tightly regulated sectors internalize external law through prescriptive controls, whereas lower-stakes, fast-moving sectors rely on softer norms such as transparency labels and opt-in user controls. Recognizing this gradient enables policymakers to craft “modular” frameworks that anchor every domain to the same ethical baseline while permitting sector-specific enforcement levers—thereby promoting equitable access to trustworthy AI without throttling context-appropriate innovation.

Dynamic regulation in healthcare and pharmaceuticals: balancing innovation and safety

The healthcare and pharmaceutical industries are often constrained by rigid regulatory frameworks, which can stifle the rapid adoption of AI. While stringent regulations prioritize patient safety and compliance, they also inhibit innovation and the timely integration of cutting-edge technologies. One approach could be to implement adaptive regulatory frameworks that dynamically adjust compliance requirements based on real-time risk assessments of AI systems. For example, AI-driven tools for low-risk applications like patient scheduling could face relaxed oversight compared to high-risk systems like AI-assisted diagnostics. The trade-off is ensuring that adaptive frameworks do not inadvertently compromise safety or create regulatory loopholes. Industry deployments illustrate what is at stake: platforms like 1Doc3 (1DOC3 2019) provide personalized, AI-driven healthcare guidance to millions of Spanish-speaking users, helping patients make better-informed decisions about their care, and companies like Chugai Pharmaceutical are utilizing AI to accelerate drug discovery and improve patient outcomes through the development of digital biomarkers (Chugai Pharmaceutical Co. 2023).

Industry examples further illustrate the application of AI in healthcare. CSL Behring (CSL Limited 2023), for example, employs AI to improve patient safety and pharmacovigilance, using natural language processing to analyze real-world data and better identify safety signals, thus supporting more accurate and timely healthcare decisions. Procaps Group (Procaps Group 2023), meanwhile, showcases the use of AI in pharmaceutical manufacturing, where digitalization initiatives improve efficiency and quality control, helping maintain high standards in drug production and patient safety.

Despite its potential, integrating AI into healthcare counseling presents challenges, including technical and regulatory hurdles. Hardian Health (Hardian Health 2022) highlighted that regulatory bodies like the FDA and MHRA are developing frameworks to accommodate the continuous updates necessary for AI systems while ensuring compliance. DokiLink (DokiLink 2019) stresses that ethical considerations remain crucial as AI evolves, requiring organizations to adopt ethical guidelines to steer AI development and deployment. The key lies in developing AI systems that not only meet adaptive regulatory requirements but also enhance regulators' capacity to monitor evolving risks using AI-driven predictive models.

Augmentation-first guidelines for creative and linguistic AI

In creative and linguistic fields, AI is best positioned as a tool for augmentation rather than replacement. Tasks like translating literature, where cultural nuance and contextual relevance are crucial, highlight the insufficiency of fully automated AI outputs. Industries should adopt Augmentation-First Guidelines that mandate human review and refinement of AI-generated content. For instance, AI-translated works should be edited by linguistic experts to preserve cultural integrity and idiomatic expressions. Elsevier's policy (Elsevier 2023) requires authors to disclose any use of generative AI tools (e.g., ChatGPT) in the writing process, strictly limiting AI assistance to language refinement rather than analysis, mandating human oversight and full disclosure, and barring AI from author attribution or figure generation unless it is part of the documented research design. Penguin Random House (Penguin Random House 2024), another major player in the publishing world, sees AI as a tool to potentially boost book sales and streamline operations. It is implementing guardrails on AI use, including enforcing principles that support creativity and IP protection; embedding AI in supply-chain tools (e.g., e-book pricing and initial print-run forecasts via machine learning); and updating copyright pages to forbid the use of its works for AI training, marking one of the first publisher-led initiatives to explicitly opt content out of training datasets.

Moreover, continual training pipelines are necessary for AI models to keep pace with evolving language trends, including slang and colloquialisms. TransPerfect (TransPerfect 2024), a major player in translation and localization services, has incorporated generative AI into its operations to optimize business processes and reduce costs. The company reports that its AI.NOW system handled over 50 million translated segments in 2024, achieving latency reductions of 40% while maintaining quality benchmarks through continuous human evaluation loops. Lionbridge (Lionbridge 2023), another key player in the translation industry, utilizes AI to optimize translator assignments and improve content quality. The company has also recently launched a Domain Detector powered by machine learning, which analyzes client content in real time to match translators by subject-area expertise—leading to a 20% increase in first-pass accuracy and faster turnaround. While this process might slow deployment timelines, it prioritizes quality and sensitivity, ensuring AI enhances human creativity and understanding rather than producing substandard or culturally insensitive results.

Transparent AI in social media: trade-off between privacy and accountability

In the social media sector, transparency initiatives such as Explainable AI (XAI) aim to make algorithmic decision-making comprehensible to users. While transparency fosters trust and accountability, it may inadvertently conflict with privacy regulations such as GDPR and CCPA, as revealing too much information about algorithms might expose sensitive data or proprietary technologies. A solution could involve tiered transparency mechanisms, where different levels of algorithmic explanations are provided based on stakeholder roles. For example, general users could receive high-level summaries, while regulators and auditors access detailed algorithmic insights. However, this approach requires careful calibration to avoid undermining user privacy or exposing companies to competitive risks. Balancing these trade-offs would necessitate the creation of robust AI audit systems that ensure algorithmic accountability without compromising proprietary or personal data.
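One way to operationalize tiered transparency is a role-gated disclosure map, sketched below; the three roles and the disclosure fields are hypothetical illustrations, not an existing platform API.

```python
# Hypothetical sketch of role-gated, tiered algorithmic disclosure.
from dataclasses import dataclass

@dataclass
class Explanation:
    summary: str              # high-level rationale, safe for any audience
    feature_weights: dict     # model internals, competitively sensitive
    training_data_notes: str  # provenance details, privacy-sensitive

TIERS = {
    "user": ["summary"],
    "auditor": ["summary", "feature_weights"],
    "regulator": ["summary", "feature_weights", "training_data_notes"],
}

def disclose(expl: Explanation, role: str) -> dict:
    """Release only the fields permitted for the requesting stakeholder role."""
    return {f: getattr(expl, f) for f in TIERS[role]}

expl = Explanation("Recommended because you follow similar topics",
                   {"topic_affinity": 0.61, "recency": 0.24},
                   "Engagement logs, 2023-2024; sensitive attributes excluded")
print(disclose(expl, "user"))       # high-level summary only
print(disclose(expl, "regulator"))  # full detail for supervised review
```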

Human-AI collaboration in design and entertainment: bridging creativity and ethics

In creative sectors like design, fashion, and entertainment, AI tools are often celebrated for enhancing productivity and innovation but criticized for potentially eroding human creativity. To bridge this gap, a novel proposal is to establish Human-AI Collaboration Guidelines that define ethical boundaries for AI use in creative processes. As the entertainment and game development review above showed, companies such as Electronic Arts, Tencent, Square Enix, and Ubisoft already treat AI as a creative collaborator, pairing generative tools like Ghostwriter with human writers who retain final editorial control.

Guidelines could build on these practices by mandating that AI-generated designs are clearly labeled and that human creators retain intellectual property rights over AI-augmented work. While these measures can protect human creativity and ethical integrity, they may also limit the full potential of AI's capabilities. Striking a balance requires fostering an ecosystem where AI serves as a tool for augmenting human creativity rather than replacing it, supported by training programs that equip creators to work effectively with AI.

Legal technology and AI: proactive governance for emerging risks

In legal technology, the use of AI for tasks like document review and contract analysis offers tremendous efficiency gains but raises ethical concerns about bias and autonomy. As outlined in the legal technology and services review above, firms stress human oversight, impact assessments, diverse teams, robust data policies, and ongoing auditing as global regulatory scrutiny increases. A proactive governance approach would go further, creating pre-emptive risk models that simulate the potential biases and ethical dilemmas of AI systems before they are deployed. For example, legal tech firms could use generative AI to simulate real-world use cases and identify scenarios where biases might emerge, such as favoring certain demographics in contract analysis. While this proactive approach can reduce downstream risks, it also requires significant investment in simulation tools and expertise. Collaboration with academic institutions and ethics researchers could help offset these costs while ensuring robust and impartial governance.
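A pre-emptive risk model of this kind could be as simple as a synthetic-case simulation that compares selection rates across demographic groups before deployment. The sketch below uses a stub model with a deliberate skew and the common four-fifths disparate-impact heuristic as its flag; all names and thresholds are illustrative.

```python
# Hypothetical pre-deployment bias simulation for a contract-analysis model.
import random

random.seed(0)

def stub_model(case):
    # Placeholder with a deliberate skew so the flag fires; a real audit
    # would call the production model here instead.
    return random.random() < (0.60 if case["group"] == "A" else 0.45)

def selection_rates(model, groups, n=10_000):
    rates = {}
    for group in groups:
        cases = ({"group": group, "contract_value": random.uniform(1e4, 1e6)}
                 for _ in range(n))
        rates[group] = sum(model(c) for c in cases) / n
    return rates

rates = selection_rates(stub_model, ["A", "B"])
impact_ratio = min(rates.values()) / max(rates.values())
print(rates)
print("FLAG for review" if impact_ratio < 0.8 else "within tolerance",
      f"(four-fifths ratio = {impact_ratio:.2f})")
```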

The risks of marketing hype in AI guidelines and policies

AI guidelines and policy statements often exaggerate the capabilities and impacts of AI systems, driven by marketing agendas rather than evidence-based assessments. Companies frequently describe their AI tools as “revolutionary” or “transformative,” promising universal solutions while failing to provide empirical support. For example, healthcare AI is often claimed to “transform global health outcomes,” yet adoption and efficacy vary significantly across diverse and under-resourced settings. Similarly, financial institutions tout the precision of AI in fraud detection without transparent metrics to substantiate these claims.

This hype-driven approach creates unrealistic expectations, misallocates resources toward promotional efforts, and risks eroding public trust when exaggerated benefits fail to materialize. Furthermore, a troubling disconnect exists between companies' claims of "responsible AI" and the actual implementation of ethical practices, with many systems lacking transparency or exhibiting bias.

To counter these issues, companies must adopt evidence-based reporting, distinguishing between potential and demonstrated AI capabilities. Transparent evaluations, independent audits, and third-party validations should become standard, ensuring that policy statements align with verifiable outcomes. By reducing hyperbolic rhetoric, organizations can build trust, credibility, and genuine progress in ethical AI deployment.

Concluding remarks and future directions

The rapid evolution and integration of Generative AI (GAI) and Large Language Models (LLMs) have ushered in transformative opportunities across diverse industrial sectors while simultaneously introducing significant ethical, operational, and regulatory challenges. This study provided a comprehensive analysis of 160 guidelines and policy statements across 14 industrial sectors, offering critical insights into the governance of GAI and LLMs. Our findings reveal three cross-sector opportunities: sharper operational efficiency, accelerated innovation, and stronger data-driven decision making—especially where our analysis shows high frequencies for “privacy,” “integrity,” and “predictive analytics.” By mining 160 policies across 14 industries, we uncovered clear priorities (e.g., privacy leadership in Legal Tech, integrity in Pharma, predictive tools in Finance) and equally clear gaps—namely under-emphasis on disclosure, human-centric design, and democratization. These findings indicate that a single, static rulebook is inadequate; instead, industries need dynamic, modular governance that pairs universal ethical principles with sector-specific “plug-in” standards. For instance, Finance can adapt the banking sector’s cautious-innovation playbook, while Journalism must strengthen misinformation safeguards that our Sankey diagrams show are weakly represented. Similarly, creative fields should adopt “augmentation-first” practices that keep humans in the loop, reflecting the low mention of “human-centric” in their guidelines.

The cross-industry keyword landscape signals converging ethical priorities but divergent risk postures. Privacy dominates every sector’s discourse—most intensely in Legal Tech—revealing a shared, institution-driven norm around data stewardship and signaling strong regulatory isomorphism (e.g., GDPR “spill-over” into non-EU domains). Conversely, the scarcity of “disclosure” and “human-centric” terms suggests a systemic transparency gap: firms acknowledge data protection yet offer few mechanisms for explaining or co-designing AI systems with users, leaving accountability diffuse. Sector-specific peaks expose risk perceptions shaped by domain stakes: heavy “integrity” talk in Pharma reflects stringent safety regimes, whereas Finance’s dense “predictive-analytics/market” cluster betrays a strategic tolerance for model uncertainty when gains outweigh compliance costs. Media’s weak “misinformation/skepticism” signal, juxtaposed with high “content,” highlights a regulatory blind spot where velocity of output eclipses veracity safeguards. Policy-wise, these patterns argue for a tiered governance architecture: (1) universal mandates on disclosure and human-centric design to close the transparency gap; (2) domain-calibrated risk controls—e.g., pre-market safety trials for clinical AI, stress-testing protocols for financial models; and (3) cross-sector audit exchanges that let privacy-mature fields mentor transparency-lagging ones. Theoretically, the results extend socio-technical governance models by showing how issue salience (privacy) drives convergence, while perceived opportunity cost (innovation vs. error) drives divergence, illuminating where harmonized regulation is feasible and where bespoke statutes remain essential.

Despite the progress achieved through this analysis, gaps remain in the current understanding and governance of GAI and LLMs. Concepts such as democratization, alternative methods, and skepticism are underrepresented in existing guidelines, highlighting the need for future efforts to prioritize inclusivity, critical thinking, and innovation. Furthermore, the hype-driven narrative surrounding AI capabilities continues to obscure the practical limitations of these systems, necessitating a shift toward evidence-based evaluations and transparent reporting. In the next iteration of this research, we plan to enrich our linguistic pre-processing beyond stemming and lemmatization by introducing synonym-merging and concept-normalization layers built on transformer embeddings. Pre-trained contextual models (e.g., BERT and its domain variants) generate dense vectors in which semantically related words lie close together, enabling the automatic grouping of surface-different but meaning-equivalent tokens such as "regulation", "compliance", and "governance" into single concepts before TF-IDF weighting. Recent studies show that embedding-based synonym detection cuts vocabulary size while boosting topic-coherence scores in legal guidance corpora and patent texts (Reimers and Gurevych 2019); similar gains are reported for healthcare-policy mining with BioBERT and ClinicalBERT, which capture domain terminology better than bag-of-words methods (Zhang et al. 2022). We will therefore (1) compute Sentence-BERT embeddings and cluster them with agglomerative clustering or HDBSCAN to learn synonym sets dynamically (Rasmy et al. 2021); (2) use these sets to replace individual tokens with canonical forms, yielding a semantically normalized document-term matrix; and (3) re-run the chi-square and K-means analyses to assess whether regional regulatory signals emerge more sharply once lexical variance is collapsed. This enhancement will deepen our ability to trace how, for example, GDPR-aligned vocabulary maps onto Asian or U.S. policies, and will mitigate the sparsity that currently blurs low-frequency but policy-critical terms such as "disclosure" and "human-centricity".
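A minimal sketch of this planned synonym-merging step appears below, assuming the sentence-transformers and scikit-learn libraries; the model name, distance threshold, and canonical-form rule are illustrative choices, not final design decisions.

```python
# Sketch of the planned synonym-merging step: embed vocabulary terms with
# Sentence-BERT, cluster them by cosine distance, and map each term to a
# canonical form before TF-IDF weighting. Model and threshold are assumptions.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

vocab = ["regulation", "compliance", "governance", "privacy", "disclosure"]

model = SentenceTransformer("all-MiniLM-L6-v2")   # any SBERT variant would do
embeddings = model.encode(vocab, normalize_embeddings=True)

# Agglomerative clustering with a cosine-distance cut-off; the threshold
# controls how aggressively near-synonyms are merged (scikit-learn >= 1.2).
clusterer = AgglomerativeClustering(
    n_clusters=None, distance_threshold=0.4, metric="cosine", linkage="average"
)
labels = clusterer.fit_predict(embeddings)

# Canonical form here is simply the first term seen per cluster; the real
# pipeline might instead pick the most frequent surface form in the corpus.
canonical_by_cluster, term_to_canonical = {}, {}
for term, label in zip(vocab, labels):
    canonical_by_cluster.setdefault(label, term)
    term_to_canonical[term] = canonical_by_cluster[label]

print(term_to_canonical)
```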

Our analysis has several important limitations. First, because the 160-document corpus was assembled from firms that publish guidelines in, or translate them into, English, it inevitably under-represents policies issued only in other languages, potentially skewing our frequency counts and thematic clusters toward anglophone priorities. Second, the sample is biased toward large, brand-visible companies whose public statements sometimes double as public-relations material; such documents may overstate ethical commitments or under-report operational shortcomings, limiting their probative value as evidence of actual practice. Third, reliance on publicly available sources means that sectors or regions with lower disclosure norms may be missing, further constraining generalizability and cross-regional comparisons; in future work, we plan to mitigate these biases by supplementing the dataset with crowd-sourced guideline submissions, applying multi-coder reliability checks, and using bias-detection toolkits to surface unbalanced language before analysis. To strengthen validity, the main analysis incorporated a lightweight synonym-merging layer atop lemmatization, and robustness checks with embedding-based merging showed higher topic coherence and cluster stability than a lexical baseline (Cote 2024; Figueroa 2023). Future work will deepen this concept-normalization approach (e.g., broader SBERT variants and hierarchical clustering) to further reduce sparsity and sharpen low-frequency but policy-critical terms.
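One simple way to operationalize the cluster-stability comparison mentioned above is the mean pairwise adjusted Rand index across repeated K-Means restarts, sketched below. X_lex and X_merged stand in for the TF-IDF matrices produced by the lexical and synonym-merged pipelines; k, the run count, and the seed are arbitrary illustrative choices.

```python
# Illustrative cluster-stability check: a higher mean pairwise adjusted Rand
# index across random K-Means restarts indicates a more stable clustering.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def kmeans_stability(X, k=5, runs=10, seed=0):
    """Mean pairwise adjusted Rand index over `runs` random restarts."""
    rng = np.random.RandomState(seed)
    labelings = [
        KMeans(n_clusters=k, n_init=10,
               random_state=rng.randint(1 << 30)).fit_predict(X)
        for _ in range(runs)
    ]
    scores = [
        adjusted_rand_score(labelings[i], labelings[j])
        for i in range(runs) for j in range(i + 1, runs)
    ]
    return float(np.mean(scores))

# X_lex and X_merged are hypothetical document-term matrices from the two
# pipelines; the synonym-merged matrix should score higher if merging helps.
# print(kmeans_stability(X_lex), kmeans_stability(X_merged))
```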

Looking ahead, we argue that the most effective path toward trustworthy GAI and LLM deployment is a multi-layered governance ecosystem that combines adaptive sandboxes, AI-assisted audits, and participatory design from the very first sprint of system development. Applied early in the development cycle, these mechanisms convert today's policy gaps into accountable, evidence-based standards while balancing universal ethical principles with sector-specific "plug-in" modules for finance, health, media, and beyond. By embedding diverse stakeholders (end-users, ethicists, domain regulators) directly into participatory design workshops, organizations can ensure that cultural values and societal priorities shape model objectives and evaluation metrics before code is shipped.

Equally critical is a renewed commitment to transparency and public education. Tiered Explainable-AI (XAI) dashboards, paired with role-based transparency levels, allow everyday users, compliance officers, and auditors to see the decision logic they need—no more, no less—thereby reconciling privacy requirements with algorithmic accountability. To future-proof this progress, AI Innovation Hubs—co-funded by public-private partnerships—should stress energy-efficient model design and carbon-neutral data centers, ensuring that rapid scaling does not compromise environmental goals.
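As a toy illustration of how role-based transparency levels might be wired into such a dashboard, the sketch below maps stakeholder roles to the explanation artifacts they are entitled to see; the role names and artifacts are hypothetical assumptions, not a standardized taxonomy.

```python
# Toy sketch: role-based transparency tiers for a tiered XAI dashboard.
# Roles and artifacts are illustrative assumptions, not a standard taxonomy.
TRANSPARENCY_TIERS = {
    "end_user": ["plain-language rationale", "confidence indicator"],
    "compliance_officer": ["feature attributions", "data-lineage summary"],
    "auditor": ["full model card", "training-data provenance", "evaluation logs"],
}

def explanations_for(role: str) -> list[str]:
    """Return only the explanation artifacts a given role may view."""
    return TRANSPARENCY_TIERS.get(role, ["plain-language rationale"])

print(explanations_for("compliance_officer"))
# ['feature attributions', 'data-lineage summary']
```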

Finally, independent audits, evidence-based model report cards, and real-time misinformation detectors must become standard practice to counter hype and align policy claims with measurable performance, especially in sectors where exaggerated marketing currently obscures limitations. Together with the transparency, education, and sustainability measures outlined above, these efforts will foster a governance ecosystem that promotes ethical, inclusive, and sustainable innovation, ensuring the responsible deployment of GAI and LLMs across diverse industries.

By advancing these interlocking priorities, researchers, policymakers, and industry leaders can co-create the next generation of GAI/LLM governance: one that enhances human well-being, expands equitable access, and sustains the highest standards of ethical accountability. Ultimately, the responsible integration of GAI and LLMs into industry will depend not only on technological advances but also on our collective commitment to fostering innovation that is both inclusive and sustainable.