The Ethical Case for Telling People When AI Is Involved
If AI is shaping the information someone receives, influencing a decision that affects them, or standing in for a person they expected to be dealing with, do they have a right to know?
Part One is a five-minute read. Part Two has the evidence for anyone who wants to dig deeper.
Part One: The ethical case for telling people when AI is involved
In late 2025, McDonald's Netherlands released a holiday advertisement produced entirely with generative AI. They didn't mention this. Social media identified it within hours, branded it "AI slop," and the company pulled the ad within three days. Around the same time, Coca-Cola ran AI-generated holiday ads that were also controversial, but Coca-Cola was open about AI's role in producing them: in press materials, behind-the-scenes content, and public communications. The reception was far from universally positive, but the sentiment was measurably better. The difference wasn't the quality of the creative. It was the transparency.
One company was upfront about how the work was made. The other wasn't. That single decision, to disclose or not, shaped the entire public response.
This pattern is playing out across industries, and it raises a question that goes beyond marketing or compliance: if AI is shaping the information someone receives, influencing a decision that affects them, or standing in for a person they expected to be dealing with, do they have a right to know?
The answer should be straightforward. Yes, they do.
The gap between what companies do and what people expect
The research on this is remarkably consistent. Somewhere between seven and eight out of ten consumers say they want to know when AI is being used. A large majority say they would abandon or restrict their use of a company that can't explain how AI and their data are being used. And the trust numbers in the DACH region are especially stark: only about one in five German consumers say they trust AI companies, and roughly the same proportion trust AI itself.
These aren't hypothetical preferences. When companies get caught using AI without disclosing it, the consequences are tangible and disproportionate. The share of major companies disclosing AI as a material risk has surged from roughly one in ten to nearly three-quarters in just two years, a signal that boards and investors now see undisclosed AI use as a genuine threat to reputation and valuation. The cost of being discovered is consistently worse than the cost of disclosing upfront. Every time.
And yet, disclosure remains the exception. Among large European companies, only about one in five has a formal AI policy in place. The gap between what people expect and what organisations actually do is enormous, and it's not closing fast enough.
The usual explanation is that companies are waiting for regulation to tell them what to do. The EU AI Act's transparency requirements don't reach full applicability until August 2026. Germany hasn't even designated its national authorities yet. Switzerland has explicitly rejected a comprehensive AI law in favour of a sector-specific approach. So the argument goes: why move before you have to?
Because this isn't really about regulation. It's about whether your organisation is the kind that tells people the truth about how decisions affecting them are made.
What disclosure actually means, and what it doesn't
One of the reasons companies hesitate is that "AI disclosure" sounds like it means confessing to something. It doesn't. It means being transparent about how your organisation uses technology that affects the people you work with: customers, employees, partners.
There's an important distinction here. Not everything needs to be disclosed. Spell-checking, spam filtering, internal research assistance: these are background operations where AI involvement is immaterial. Nobody expects a label on every autocorrected email. But when AI is interacting directly with a customer, when it's influencing hiring decisions or credit assessments, when it's generating content that reaches people without meaningful human review, those are situations where the person on the receiving end has a legitimate interest in knowing.
The research actually offers a useful framework for thinking about this. It comes down to three questions. First, could the AI's output materially affect someone's rights, finances, or wellbeing? If yes, disclose. Second, would a reasonable person expect human involvement in this interaction? If yes, disclose. Third, would discovery of undisclosed AI use cause more reputational damage than proactive disclosure? If yes (and the answer is almost always yes), disclose.
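To make the three-question test concrete, here is a minimal sketch in Python. Everything in it (the class, the field names, the example cases) is an illustrative assumption layered on the framework described above, not code from any cited source.

```python
from dataclasses import dataclass

@dataclass
class AIUseContext:
    """One AI use case under review (illustrative fields)."""
    materially_affects_person: bool        # rights, finances, or wellbeing at stake?
    human_involvement_expected: bool       # would a reasonable person expect a human here?
    discovery_worse_than_disclosure: bool  # would exposure hurt more than telling?

def should_disclose(ctx: AIUseContext) -> bool:
    """Three-question test: a 'yes' to any question means disclose."""
    return (
        ctx.materially_affects_person
        or ctx.human_involvement_expected
        or ctx.discovery_worse_than_disclosure
    )

# A customer-facing chatbot answering billing questions: disclose.
chatbot = AIUseContext(
    materially_affects_person=True,        # pricing answers affect finances
    human_involvement_expected=True,       # many customers assume a human agent
    discovery_worse_than_disclosure=True,  # being caught hiding it hurts more
)
assert should_disclose(chatbot)

# An internal spell-checker with full human oversight: no label needed.
spellcheck = AIUseContext(False, False, False)
assert not should_disclose(spellcheck)
```

The point of the sketch is how little nuance the first pass needs: any single "yes" tips the case into disclosure, and edge cases go to a human conversation, not a more elaborate rule.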
The research also shows that how you disclose matters as much as whether you do. Bare mechanical labels ("this was AI-generated") can actually reduce trust rather than build it. One study found that one-line disclosures consistently outperform detailed explanations, which cause information overload. Another found that AI labels can paradoxically decrease the credibility of true information while increasing the credibility of false claims.
The lesson isn't to avoid disclosure. It's to avoid doing it badly. A brief, honest statement about how AI was used, paired with clarity about human oversight, works better than either exhaustive labelling or silence. The European privacy experience with cookie banners, where legally mandated disclosures became meaningless ritual clicks, is a cautionary example of what happens when disclosure is designed for legal completeness rather than human comprehension.
The front-page test
There's a useful thought experiment that applies to any organisation using AI. If it became public tomorrow that your company used AI in this particular way without telling anyone, would you be comfortable explaining why you didn't disclose it?
If the answer is no, you should be disclosing now.
The examples keep accumulating. An airline deployed a chatbot that gave a customer incorrect information about bereavement fares, and when challenged, the company tried to argue the chatbot was a separate legal entity. A hiring platform's AI automatically rejected applicants based on age, leading to a $365,000 settlement. Investment advisers were fined for claiming AI-driven processes that didn't exist. In each case, the technology did what it was built to do. What failed was the organisation's willingness to be honest about what it was doing, and the consequences were worse because the disclosure came from someone other than the company itself.
The pattern is consistent enough to be a principle: the organisations that get into trouble with AI are rarely the ones that disclosed too much. They're the ones that disclosed too little, too late.
Disclosure is the start, not the finish
There's a temptation to treat disclosure as a solved problem once you've added the right labels. We told them. Job done. But disclosure without a feedback channel is a monologue, not a conversation. And the people on the receiving end of AI-driven decisions know the difference.
Think about what happens when a customer gets an AI-generated response that's wrong: a chatbot that gives incorrect pricing, a recommendation engine that suggests something irrelevant, an automated assessment that misclassifies their application. If the only thing you've done is disclose that AI was involved, you've told them who to blame but not what to do about it. The frustration isn't just about the error. It's about the absence of any obvious path to fix it.
A simple reporting mechanism (a way for customers, employees, or partners to flag when AI-generated content or decisions seem wrong, unfair, or harmful) changes the dynamic fundamentally. It moves from "we told you" to "we're listening." It gives people agency over technology that affects them. And it gives the organisation something it badly needs: a signal when AI systems are failing in ways that internal monitoring might not catch. The people experiencing the output are often the first to notice when something has gone wrong.
This doesn't need to be elaborate. A dedicated email address. A flag button alongside AI-generated content. A line in the disclosure itself: "This response was assisted by AI. If something doesn't look right, here's how to tell us." What matters isn't the mechanism; it's the message behind it. You're telling people that their experience matters more than the efficiency of your automation, and that a human will look at what they raise.
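As a sketch of how lightweight such a channel can be, here is a minimal illustration in Python. The disclosure wording, the `FlagReport` fields, and the in-memory queue are all assumptions for illustration; a real implementation would persist reports and route them to a named human reviewer.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

DISCLOSURE_NOTE = (
    "This response was assisted by AI. "
    "If something doesn't look right, use the flag button to tell us."
)

@dataclass
class FlagReport:
    """One user report that an AI output seems wrong, unfair, or harmful."""
    content_id: str
    reason: str
    reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# In-memory stand-in for a real queue that a human reviews regularly.
review_queue: list[FlagReport] = []

def flag_ai_output(content_id: str, reason: str) -> FlagReport:
    """Record a user flag so a human can review the underlying AI output."""
    report = FlagReport(content_id=content_id, reason=reason)
    review_queue.append(report)
    return report

# A customer flags an AI chat answer that quoted the wrong price.
flag_ai_output("chat-0142", "Quoted a price that doesn't match the website")
print(f"{len(review_queue)} report(s) awaiting human review")
```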
The EU AI Act actually requires something like this for high-risk systems: users must be able to understand, interpret, and where necessary challenge AI outputs. But waiting for the regulation to mandate it misses the point. If you believe people deserve to know when AI is involved, the natural next step is to give them a voice when it gets things wrong.
What this looks like in practice
For an SME in the DACH region, building an ethical disclosure practice doesn't require a compliance department or an AI ethics board. It requires a decision, made by leadership and communicated to every team, about what your organisation considers appropriate transparency, and a commitment to hearing back from the people it affects.
That decision has two realistic paths.
Disclose by principle. Decide now, before anyone asks and before the regulation compels it, what your organisation's disclosure standards are. Map where AI is being used: most companies that do this discover far more AI tools in operation than they expected. One documented case found 23 AI tools across an 85-person firm, including 17 separate ChatGPT accounts. Establish a simple, clear framework: always disclose when AI touches customers or makes consequential decisions; consider disclosing when AI shapes client deliverables even with human review; no disclosure needed for internal productivity tools with full human oversight (a minimal sketch of this tiering appears after the two paths). Build a feedback channel from day one: give the people affected by your AI a way to tell you when it gets things wrong, and make sure a human reviews what comes in. Communicate all of this to your teams. Make it part of how you operate, not a box you tick.
Disclose by deadline. Wait for the EU AI Act's August 2026 enforcement date. Scramble to identify what AI your teams are using. Map it against the regulatory requirements. Build disclosure practices under time pressure, likely with external help. Bolt on a reporting mechanism because the regulation requires one for high-risk systems. End up with something that's technically compliant but feels exactly like what it is: a response to a deadline, not a commitment to the people you serve.
The first path costs roughly 60 to 80 hours of distributed effort over 90 days, based on documented implementation cases. It doesn't require new headcount. It requires leadership attention and a genuine commitment to being honest with the people your organisation serves.
The second path costs more: in consulting fees, in rushed implementation, and in the credibility gap between organisations that chose to be transparent and organisations that were made to be.
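To make the "map and tier" step from the first path concrete, here is a minimal sketch. The tool names and tier assignments are invented for illustration; the three tiers simply mirror the framework described above.

```python
from enum import Enum

class DisclosureTier(Enum):
    ALWAYS = "always disclose"        # AI touches customers or consequential decisions
    CONSIDER = "consider disclosing"  # AI shapes deliverables, with human review
    NOT_NEEDED = "no disclosure"      # internal tools with full human oversight

# Illustrative inventory: tool -> (where it is used, assigned tier).
ai_inventory = {
    "support chatbot": ("customer-facing answers", DisclosureTier.ALWAYS),
    "CV screening tool": ("hiring shortlists", DisclosureTier.ALWAYS),
    "report drafting assistant": ("client deliverables, human-reviewed", DisclosureTier.CONSIDER),
    "meeting-notes summariser": ("internal summaries only", DisclosureTier.NOT_NEEDED),
}

for tool, (use, tier) in ai_inventory.items():
    print(f"{tool:26} {use:38} -> {tier.value}")
```

A spreadsheet does the same job; the value is in the exercise of listing every tool and assigning it a tier, not in the tooling.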
Why this matters more in the DACH market
German consumers are among the most AI-sceptical populations in Europe. Trust in AI companies sits at around 21%. Trust in AI itself is roughly the same. In a market where nearly nine out of ten companies consider the country of origin of their AI provider important, and where the overwhelming majority of those prefer German solutions, trustworthiness isn't a nice-to-have. It's a prerequisite for doing business.
This scepticism is sometimes framed as a barrier to AI adoption. It's more accurately understood as a quality filter. DACH buyers, whether consumers or B2B procurement teams, don't reject AI. They reject AI they can't trust. And trust starts with honesty about what you're doing.
Austria has already moved, establishing a dedicated KI-Servicestelle within its telecoms regulator in early 2024 (one of the EU's first operational AI information and oversight bodies) and introducing mandatory AI labelling in government services. Germany is catching up, with draft legislation designating national authorities. Switzerland sits outside the EU framework, but its companies are bound extraterritorially by the EU AI Act when they sell AI systems into European markets. The regulatory direction is unambiguous. But the ethical direction should have been clear long before the regulation was written.
The bottom line
The question every leadership team should be asking isn't whether they're legally required to disclose AI use. It's whether they'd be comfortable if their customers, their employees, and their partners found out on their own.
Disclosure isn't a liability. Silence is. And disclosure without a way for people to respond isn't transparency; it's a broadcast. The organisations that will carry the most trust into the next decade are the ones that told people the truth and then gave them a voice when things went wrong. Not because it was required, but because it was right. And in a market as trust-sensitive as the DACH region, that commitment isn't just ethical. It's the foundation everything else gets built on.
Join the conversation
Does your organisation disclose when AI is involved, or are you waiting for the regulation to decide for you? I'd love to hear how you're approaching this. Join the discussion on LinkedIn.
Everything in Part One is grounded in specific research. This section lays out the data for anyone who wants to verify the claims, challenge the numbers, or take this to their board with sources attached.
Part Two: The Evidence
What consumers and business buyers actually expect
The demand for AI disclosure is not speculative: it's measured, consistent, and growing across every major survey conducted in 2025 and 2026.
Capgemini's 2026 consumer trends study, surveying 12,000 consumers across 12 countries, found that 76% want clear rules for when AI assistants can act on their behalf [1]. Seventy-one per cent expressed concern about how generative AI tools use their data, and two-thirds said they trust AI more when it explains its reasoning [1]. An Emplifi survey of frequent social media users found 83% want disclosure when AI is being used, with roughly half saying an "AI-powered" label would increase their trust in the brand [2].
The consequences of failing to meet these expectations are severe. Relyance AI's December 2025 survey found that 82% of consumers view loss of control over their data in AI systems as a serious personal threat. More critically, 84% said they would take action, by abandoning or restricting their use of a company, when that company cannot explain how their data is being used [3]. Fifty-seven per cent said they would stop using the product entirely [3].
B2B expectations mirror the consumer data. Recent research shows 71% of B2B decision-makers avoid suppliers lacking clear, transparent information, and 66% of B2B buyers now use AI tools for supplier research, meaning your transparency practices are increasingly visible to procurement teams before they even contact you [4]. The Stanford Foundation Model Transparency Index 2025 found that B2B-oriented companies like IBM (scoring 95 out of 100) are leaning into transparency as a differentiator, even as the industry average sits at just 41 out of 100 [4].
The DACH trust deficit, and the disclosure opportunity
The DACH market amplifies these dynamics. The Nuremberg Institute for Market Decisions (NIM), surveying 1,000 respondents each in the US, UK, and Germany, found that only 21% of German consumers trust AI companies and their promises, and only 20% trust AI itself [5]. These are among the lowest trust levels in any major European market.
Bitkom's 2025 survey of 604 German companies reinforces this from the business side: 88% consider the country of origin of their AI provider important, and 93% of those prefer German solutions [6]. This preference isn't purely about data sovereignty; it's a proxy for trustworthiness. In a market with this level of scepticism, transparency is a competitive prerequisite, not a differentiator.
Glass Lewis's 2025 research on European corporate governance found that only 20.7% of large-cap European companies had formal AI policies in place, with 61.3% having no AI policy disclosed at all [7]. Switzerland led at 40% adoption, notable given its decision not to pursue comprehensive AI legislation. The gap between consumer expectations (80%+ want disclosure) and corporate readiness (roughly 20% have policies) is arguably the largest trust asymmetry in the European technology landscape.
The McDonald's lesson and the paradox of disclosure design
The McDonald's Netherlands case from December 2025 is one of the clearest data points on how non-disclosure backfires. The company released a 45-second AI-generated holiday advertisement without disclosing the AI involvement. Social media identified it within hours, branded it "AI slop," and McDonald's pulled the ad within three days [8]. Around the same period, Coca-Cola ran AI-generated holiday campaigns that were also controversial, but the company was open about AI's role in press materials and public communications. The sentiment data showed a measurably more positive reception, with one analysis recording 61% positive sentiment [9].
The difference was not quality. It was honesty. But the research on disclosure design adds an important nuance that prevents this from being a simple "just disclose" story.
Schilke and Reimann's 2025 study in Organizational Behavior and Human Decision Processes, spanning 13 preregistered experiments with over 3,000 participants, identified what they call the "Transparency Dilemma." Actors who voluntarily disclose AI usage are initially trusted less than those who do not, an effect operating through reduced perceptions of legitimacy. However (and this is the critical finding), third-party exposure of undisclosed AI use has an even stronger negative effect than voluntary disclosure [10]. The implication is not that companies should avoid disclosure, but that how they frame it determines whether it builds or erodes trust.
A 2026 study on AI disclosure in news content found that detailed disclosures led to reduced trust through information overload, while one-line disclosures did not produce the same negative effect: they maintained trust levels comparable to no disclosure at all [11]. A separate study published in the Journal of Science Communication found that AI labels can paradoxically decrease the credibility of true information while increasing the credibility of false claims, a "truth-falsity crossover effect" driven by negative attitudes toward AI [12].
These findings converge on a practical conclusion: disclosure needs to be brief, proportionate, and designed for human comprehension. The European privacy experience with cookie banners, where legally mandated disclosures became meaningless ritual clicks, is the cautionary precedent. The IAB's January 2026 AI Transparency and Disclosure Framework addresses this with a materiality test: disclosure is required only when AI "materially affects authenticity, identity, or representation in ways that could mislead" [13]. Not everything needs a label. But the things that do need a label need a good one.
The legal landscape: cases that set the precedent
Several enforcement actions and legal rulings have established that non-disclosure of AI use carries real financial and legal consequences.
In February 2024, the British Columbia Civil Resolution Tribunal ruled that Air Canada was liable for incorrect bereavement fare information provided by its AI chatbot. The airline's argument that the chatbot was a "separate legal entity" was rejected; the tribunal held the company fully responsible for information its AI tools provide to customers [14].
The US Equal Employment Opportunity Commission settled its first AI discrimination case when iTutorGroup agreed to pay $365,000 after its AI hiring software automatically rejected female applicants over 55 and male applicants over 60, screening out more than 200 qualified candidates based on age [15]. In March 2024, the SEC fined two investment advisers a combined $400,000 for falsely claiming AI-driven investment processes, the agency's first "AI washing" enforcement action [16].
The S&P 500 has responded. According to The Conference Board and ESGAUGE, 72% of S&P 500 companies now disclose at least one material AI risk in their 10-K filings, up from just 12% in 2023. Reputational risk is the number-one concern, cited by 38% of disclosing companies [17]. The surge from 12% to 72% in two years signals that boards and investors now view undisclosed AI use as a material governance risk.
The regulatory timeline for DACH
The EU AI Act (Regulation 2024/1689) entered into force on 1 August 2024 with phased implementation. Prohibited AI practices and AI literacy requirements became enforceable on 2 February 2025. Article 50's transparency obligations, which require that AI systems interacting with people inform users they are engaging with AI, that AI-generated content is machine-readably marked, and that deepfakes and AI-generated text on public interest matters are disclosed, reach full applicability on 2 August 2026 [18]. Penalties for non-compliance reach up to €35 million or 7% of global annual turnover for prohibited-practice violations, with SMEs receiving proportionally lower fines [18].
Austria established its KI-Servicestelle within the RTR telecoms regulator in January 2024, one of the EU's first operational AI oversight bodies [19]. Germany's draft KI-MIG designates the Federal Network Agency as the central market surveillance authority, though the country missed the August 2025 designation deadline, partly due to its 2025 federal election. Switzerland, on 12 February 2025, explicitly rejected a comprehensive horizontal AI law in favour of a sector-specific regulatory approach, though Swiss companies selling AI systems into EU markets must comply with the EU AI Act extraterritorially [20].
Practical implementation: what it actually takes
The concern that AI governance is too burdensome for SMEs is not supported by the implementation data. A ResultSense blueprint for SME AI governance outlines a four-phase framework achievable in 60 to 80 hours over 90 days, distributed across IT, management, legal, and operations teams without additional headcount [21].
ISO/IEC 42001:2023, the world's first certifiable AI management system standard, provides formal scaffolding for organisations that want external validation of their governance practices, covering 38 controls across 9 objectives including transparency, risk management, and data governance [22]. The EU AI Act itself includes SME accommodations: priority sandbox access free of charge, simplified documentation requirements, proportionate fees for conformity assessments, and dedicated compliance communication channels [18].
The OECD AI Principles, updated in May 2024, note that "disclosure should be made with proportion to the importance of the interaction", explicitly acknowledging that the growing ubiquity of AI may influence the feasibility of disclosure in some cases [23]. This proportionality principle aligns with the materiality-based framework: always disclose when AI affects people materially; consider disclosing when it shapes deliverables; no labels needed on routine background tools.
References
| Ref | Source | Published | Used for |
|---|---|---|---|
| 1 | Capgemini, What Matters to Today's Consumer 2026 | January 2026 | 76% want clear AI rules, 71% concerned about data use, two-thirds trust AI more with explanations |
| 2 | Emplifi, AI in Social Media 2025: What Consumers Want | 2025 | 83% want AI disclosure, ~50% say AI-powered label increases trust |
| 3 | Relyance AI, AI Data Ultimatum Consumer Survey | December 2025 | 82% view data loss-of-control as serious threat, 84% would abandon or restrict companies over AI opacity |
| 4 | Stanford HAI, Foundation Model Transparency Index 2025 | December 2025 | IBM scores 95/100, industry average 41/100; B2B transparency as differentiator |
| 5 | NIM, Transparency Without Trust | 2025 | 21% of German consumers trust AI companies, 20% trust AI itself |
| 6 | Bitkom, Breakthrough in Artificial Intelligence 2025 | 2025 | 88% of German companies consider AI provider country of origin important, 93% prefer German solutions |
| 7 | Glass Lewis, Board AI Policies and Oversight in Europe 2025 | 2025 | 20.7% of European large-cap companies have formal AI policies; Switzerland leads at 40% |
| 8 | NBC News, McDonald's AI-generated Christmas advert pulled after backlash | December 2025 | McDonald's Netherlands AI ad pulled within three days of release |
| 9 | Decision Marketing, Coke AI ad triggers mass debate but most still love it | 2025 | Coca-Cola AI holiday ads received 61% positive sentiment with open AI communication |
| 10 | Schilke & Reimann, The Transparency Dilemma: How AI Disclosure Erodes Trust, OBHDP | May 2025 | 13 experiments, 3,000+ participants: voluntary disclosure reduces trust, but third-party exposure reduces it more |
| 11 | Full Disclosure, Less Trust? AI Disclosure in News Writing | January 2026 | Detailed AI disclosures reduce trust; one-line disclosures maintain trust levels |
| 12 | Lin, Visible sources and invisible risks, JCOM | 2026 | AI labels decrease credibility of true information, increase credibility of false claims |
| 13 | IAB, AI Transparency and Disclosure Framework | January 2026 | Materiality test: disclosure required when AI materially affects authenticity, identity, or representation |
| 14 | ABA, BC Tribunal Confirms Companies Liable for AI Chatbot Information | February 2024 | Air Canada liable for chatbot misinformation; "separate legal entity" defence rejected |
| 15 | EEOC, iTutorGroup Settlement | August 2023 | $365,000 settlement for AI age discrimination in hiring; 200+ applicants rejected |
| 16 | SEC, AI Washing Enforcement Action | March 2024 | $400,000 combined fines for falsely claiming AI-driven investment processes |
| 17 | Fortune / Conference Board, S&P 500 AI Risk Disclosure | October 2025 | 72% of S&P 500 disclose material AI risk (up from 12% in 2023); reputational risk cited by 38% |
| 18 | EU AI Act, Article 50: Transparency Obligations | Ongoing | Article 50 transparency requirements applicable 2 August 2026; penalty framework |
| 19 | RTR, KI-Servicestelle | January 2024 | Austria's AI Service Centre established January 2024 within RTR |
| 20 | Swiss Federal Council, Sector-Specific AI Regulatory Approach | February 2025 | Switzerland rejected comprehensive AI law 12 February 2025; sector-specific approach adopted |
| 21 | ResultSense, Shadow AI Governance Framework for SMEs | October 2025 | 60-80 hour implementation over 90 days; four-phase SME governance blueprint |
| 22 | ISO, ISO/IEC 42001:2023 AI Management Systems | December 2023 | World's first certifiable AI management system standard; 38 controls across 9 objectives |
| 23 | OECD, AI Principles | May 2024 (updated) | Disclosure proportionate to importance of interaction; acknowledges feasibility constraints |
This article is part of the Strategic Insights series at alexandrebally.ch, where we explore the operational realities behind business transformation and AI adoption for SMEs in the DACH region.