European Parliament, Artificial Intelligence Act, March 2024
(29) AI-enabled manipulative techniques can be used to persuade persons to engage in
unwanted behaviours, or to deceive them by nudging them into decisions in a way that
subverts and impairs their autonomy, decision-making and free choices. The placing on
the market, the putting into service or the use of certain AI systems with the objective of, or the effect of, materially distorting human behaviour, whereby significant harms, in particular those having sufficiently important adverse impacts on physical or psychological health or on financial interests, are likely to occur, are particularly dangerous and should
therefore be forbidden. Such AI systems deploy subliminal components, such as audio, image or video stimuli that persons cannot perceive because those stimuli are beyond human perception, or other manipulative or deceptive techniques that subvert or impair a person's autonomy, decision-making or free choice in ways of which people are not consciously aware or, even where they are aware, by which they can still be deceived or which they are not able to control or resist. This
could be facilitated, for example, by machine-brain interfaces or virtual reality as they
allow for a higher degree of control of what stimuli are presented to persons, insofar as
they may materially distort their behaviour in a significantly harmful manner. In
addition, AI systems may also otherwise exploit the vulnerabilities of a person or a
specific group of persons due to their age, disability within the meaning of Directive (EU)
2019/882 of the European Parliament and of the Council (17), or a specific social or economic situation that is likely to make those persons more vulnerable to exploitation, such as persons living in extreme poverty or ethnic or religious minorities.
(17) Directive (EU) 2019/882 of the European Parliament and of the Council of 17 April 2019 on the accessibility requirements for products and services (OJ L 151, 7.6.2019, p. 70).

Such AI systems can be placed on the market, put into service or used with the objective
of, or the effect of, materially distorting the behaviour of a person and in a manner that
causes or is reasonably likely to cause significant harm to that or another person or groups
of persons, including harms that may accumulate over time, and should therefore be prohibited. It may not be possible to assume that there is an intention to distort behaviour where the distortion results from factors external to the AI system which are outside the
control of the provider or the deployer, namely factors that may not be reasonably
foreseeable and therefore not possible for the provider or the deployer of the AI system
to mitigate. In any case, it is not necessary for the provider or the deployer to have the
intention to cause significant harm, provided that such harm results from the
manipulative or exploitative AI-enabled practices. The prohibitions of such AI practices are complementary to the provisions contained in Directive 2005/29/EC of the European Parliament and of the Council (18), under which, in particular, unfair commercial practices leading to economic or financial harm to consumers are prohibited under all circumstances,
irrespective of whether they are put in place through AI systems or otherwise. The
prohibitions of manipulative and exploitative practices in this Regulation should not
affect lawful practices in the context of medical treatment, such as the psychological treatment of a mental disease or physical rehabilitation, when those practices are carried out in accordance with the applicable law and medical standards, for example with the explicit consent of the individuals or their legal representatives. In addition, common and
legitimate commercial practices, for example in the field of advertising, that comply with
the applicable law should not, in themselves, be regarded as constituting harmful
manipulative AI practices.