We need to chat about ethics in AI
If artificial intelligence is just a child now, we don't want to raise a delinquent.
Since the launch of ChatGPT, the technology media can’t get enough of the rise of artificial intelligence. Last month, for my podcast “Towards Society 5.0”, I interviewed virtual-space pioneer David A. Smith from Croquet. He had a dire warning about the way AI has been programmed to date:
“It's sort of, that's a game, and AI is very, very good at games. When we look at AR and VR, this AI that I talked about inside of the AR space that's working for me and for you, it can't be one of those. It has to be an AI that not only helps us, but also defends us against those others. It's gotta be, call it, a prophylactic AI.
It's gotta be the thing whose job is to help you solve your problem, but also keep you safe. And I don't think there's any work being done on that kind of AI right now. We see it as essential, because can you imagine if that AI was owned by a social network, and that AI was gonna have a very strong urge to tell you to do things that may not be what you really need and aren't in your best interest?”
In other words: where are the boundaries? We need a strong, heroic AI to defend against the nefarious AI that may seek to harm us. We don’t send children out into the streets to raise themselves, yet we are using all corners of the internet, both light and dark, to train narrow AI. That is as neglectful as it is dangerous.
So why is this important?
For starters, as in all facets of life, diversity matters. As a transformative technology, we have to make sure that no one gets left behind.
AI systems are designed to make decisions based on data and algorithms, but these systems can also perpetuate biases and discriminate against certain groups of people. For example, facial recognition technology has been shown to be less accurate for people with darker skin tones, and hiring algorithms can perpetuate gender and racial biases.
The consequences of biased AI can be significant. Biased AI can lead to unfair treatment of individuals or entire groups of people, perpetuating systemic inequalities. In the case of hiring algorithms, biased AI can result in a less diverse and less qualified workforce.
To address these ethical concerns, it's important to ensure that AI systems are designed with ethics in mind. This means considering the potential biases that may be present in the data used to train AI models and taking steps to mitigate those biases. It also means involving diverse perspectives in the development and implementation of AI systems to ensure that these systems do not perpetuate inequalities.
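One concrete way teams audit for the biases described above is to compare a model's selection rates across demographic groups. The sketch below is a minimal, hypothetical illustration of that idea: the group labels, data, and the 0.8 "four-fifths" rule of thumb are illustrative assumptions, not a complete fairness methodology.

```python
# Hypothetical sketch: auditing a hiring model's outcomes for disparate impact.
# Groups "A"/"B" and the sample decisions are made-up illustrative data.

def selection_rates(decisions):
    """decisions: list of (group, hired) pairs; returns hire rate per group."""
    totals, hires = {}, {}
    for group, hired in decisions:
        totals[group] = totals.get(group, 0) + 1
        hires[group] = hires.get(group, 0) + int(hired)
    return {g: hires[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group selection rate."""
    return min(rates.values()) / max(rates.values())

decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
rates = selection_rates(decisions)
print(rates)                    # {'A': 0.75, 'B': 0.25}
print(disparate_impact(rates))  # ~0.33, well below the common 0.8 threshold
```

A ratio this far below the threshold would flag the model for review; in practice, audits like this use real demographic categories, much larger samples, and several fairness metrics, since no single number captures bias.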
Additionally, it's important to consider the impact that AI systems may have on individuals' privacy and autonomy. As AI systems become more integrated into our daily lives, it's important to ensure that individuals have control over their data and are able to make informed decisions about how their data is used.
To promote ethical AI, organisations should establish clear ethical guidelines for the development and deployment of AI systems. These guidelines should be transparent and should involve input from a variety of stakeholders, including those who may be impacted by the AI systems. Additionally, organisations should regularly evaluate their AI systems to ensure that they are operating in an ethical manner and are not perpetuating biases.
When AI systems are developed with input from diverse perspectives, they are more likely to be fair and equitable. Diverse perspectives can help to identify potential biases in the data used to train AI models and can offer insights into how those biases may impact different groups of people.
Moreover, AI systems are used by people from different cultures and backgrounds, so it is important that they are designed to be inclusive and accessible for all. Having a diverse team working on AI programming can help to ensure that the systems are developed in a way that considers the needs and perspectives of different groups.
Diverse voices can also help to identify potential unintended consequences of AI systems. For example, an AI system designed to optimize traffic flow may inadvertently increase pollution levels in certain neighborhoods. A diverse team can help to identify these potential consequences and work to mitigate them.
It is also important for creative industries.
To paraphrase the classic Australian film “The Castle”, it comes down to “the vibe”. This year alone there have been several IP cases where artists have had to take legal action because an AI generated art based on their “vibe”.
Generative art AI involves using algorithms and data to create unique works of art. These works of art can be incredibly beautiful and inspiring, but they can also raise questions about who owns the copyright for these works.
In many cases, the copyright for generative art AI is owned by the person or organisation that developed the AI system. However, this is not always the case. In some instances, the copyright may be owned by the individual who created the data used to train the AI system, or by the person who commissioned the creation of the generative art AI.
Regardless of who owns the copyright, it's important to ensure that copyright holders are protected. This means taking steps to prevent the unauthorised use or reproduction of generative AI art, as well as ensuring that copyright holders are properly credited for their work.
One potential solution is to use blockchain technology to create a record of ownership for generative AI art. By creating a tamper-proof record of ownership, blockchain technology can help to prevent the unauthorised use or reproduction of generative AI art and can help to ensure that copyright holders are properly credited for their work.
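To make the tamper-proof idea concrete, here is a minimal sketch of a hash-chained ownership ledger. The field names and structure are illustrative assumptions (this is not a real blockchain or NFT standard); the point is only that each record's hash covers the previous record's hash, so editing history breaks the chain.

```python
import hashlib
import json

def make_record(prev_hash, artwork_id, owner):
    """Create an ownership record linked to the previous one by its hash."""
    record = {
        "prev_hash": prev_hash,   # ties this record to the one before it
        "artwork_id": artwork_id,
        "owner": owner,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    return record

def verify_chain(chain):
    """Recompute every hash; any edit to an earlier record breaks the links."""
    prev = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        if body["prev_hash"] != prev:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != record["hash"]:
            return False
        prev = record["hash"]
    return True

genesis = make_record("0" * 64, "artwork-001", "alice")
transfer = make_record(genesis["hash"], "artwork-001", "bob")
chain = [genesis, transfer]
print(verify_chain(chain))     # True
chain[0]["owner"] = "mallory"  # tamper with the ownership history...
print(verify_chain(chain))     # False: the chain no longer verifies
```

A real system would add signatures, timestamps, and distributed consensus, but even this toy version shows why a hash chain makes retroactive edits detectable.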
Another solution is to establish clear guidelines for the use and reproduction of generative art AI. These guidelines should be transparent and should consider the needs of both copyright holders and users of the generative art AI.
These are some of the issues to consider.
The irony is that ChatGPT co-wrote this blog, and DALL·E created the cover image.
Until next time, friends: remember to stay curious, and that the future is user friendly.