Article

Bots vs Beings – the impacts of AI on life and work

Discussions of artificial intelligence (AI) are inseparable from questions of ethics. In 1942, Isaac Asimov famously popularised the idea that intelligent machines should adhere to a moral code set by humans: his fictional Three Laws of Robotics have been influential not just in science fiction but in the real-world research and development of technologies such as robotics and AI.

Today, the increased use of AI impacts everything from employment to the environment, and the ethical considerations are wide-reaching.

At the Bots vs Beings panel discussion on 13 June 2023, experts from the University of Waikato considered the question: How will AI impact your life and work? This article includes short videos of the experts and explores some of the themes raised during their panel discussion.

Jobs for machines 

Robots and AI doing work once done by humans isn’t science fiction, points out Professor Mike Duke. It’s already happening and will only become more common. It could be said that much of the work now done by robots and AI is work that humans prefer not to do.

Robotic machines versus human workers

Machines doing jobs once done by humans is nothing new, explains Professor Mike Duke from the School of Engineering Teaching and Research at the University of Waikato. Not only have factories been switching from humans to mechanised labour for decades, but sectors such as Aotearoa New Zealand’s commercial fruit and wine industries are finding more and more uses for artificial intelligence and machine automation.

Learn about some of the advances in AI and robotics in industry, including the horticulture sector in New Zealand, and meet Archie – a machine being developed to replace humans in some vineyard work.

Questions for discussion

  • What do you think are some of the advantages of using robots and AI to replace humans?

  • What do you think Mike means when he says Archie is not sentient?

  • Can you think of some jobs that have been replaced or supplemented by machines in your lifetime? For example, supermarket self-checkouts.

Rights: The University of Waikato Te Whare Wānanga o Waikato

While many jobs will be lost to AI, it's also worth considering how many jobs will be created in a fast-growing industry. Roles such as prompt engineer, AI auditor and, of course, AI ethicist were unheard of a few years ago but are becoming common. How this work is sourced and remunerated, though, is still an open question, as we'll see.

AI won’t replace humans. People using AI will replace people not using AI.

Dr Amanda Williamson

You may keep your job, but you are not going to be well paid for it. So your job may remain, but expect to be poor.

Professor Nick Agar

The need for human understanding 

Dr Amanda Williamson is a Senior Lecturer in Innovation and Strategy and a Manager in AI & Data Consultancy. Amanda says it's important to teach and learn about the limitations of AI. The industry term for AI-generated material that sounds confident and plausible but isn't true is 'hallucination'.

Limitations of AI

Artificial intelligence (AI) can do things that not long ago would have seemed like miracles or science fiction. Dr Amanda Williamson, Senior Lecturer in Innovation and Strategy at the University of Waikato and a Manager in AI & Data Consultancy at Deloitte, cautions us to look closer at the limits of these tools – limits shaped by how they were created and by whom. We need to be aware of these limitations if we don't want to see AI tools misused.

Questions for discussion

  • Amanda gives an example of biased data. Can you think of an example of how data could be biased?

  • How do you think we could help ensure generative AI applications are not biased?


Dr Williamson points out that generative AIs are also trained on data containing implicit human biases. Image generators will often assume that anyone doing a powerful or complex job should be represented by a middle-aged Pākehā male, for instance.

In 2016, Microsoft released an AI chatbot named Tay that was designed to learn from its interactions with humans on social media. The company soon took the chatbot offline after those interactions taught it to make hateful and bigoted remarks.

Another growing problem is the use of AI image generators to misinform and harass. False stories can be given credence through the AI-generated endorsement of celebrities or journalists. Faked images of individuals, including schoolchildren and young people, are spread online to harass and extort.

These are just a few examples of how generative AI needs to be designed and monitored so as not to replicate humans’ worst biases or facilitate antisocial behaviour. Human overseers of AI need to be deliberate in what data is and isn’t included in the datasets – and to include diverse teams capable of highlighting potential misuses before they can become real-world issues.

Environmental impacts of AI 

Everything we do on computers comes with an energy cost, and AI is no exception. The servers needed to train and maintain online generative tools like ChatGPT consume significant amounts of power.

Environmental impacts of AI

It’s easy to forget that, for every online task that makes our lives easier, there’s a cost in terms of energy used and carbon released into the atmosphere. Dr Amanda Williamson discusses the climate impact of artificial intelligence.

Did you know?

According to a study by researchers at the University of Massachusetts, training a large language model with 1.75 billion parameters can emit up to 284 tonnes of carbon dioxide, which is equivalent to the emissions from five cars over their lifetimes. 

Another study by researchers at the Alan Turing Institute suggests that training a model such as GPT-4 generates as much carbon dioxide in 90–100 days as 60 average humans would emit over a year.

In an analysis conducted by OpenAI, it was determined that the computing power used to train the largest AI models has been doubling every 3–4 months since 2012.
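The scale of that growth is easy to underestimate. The quick calculation below is a sketch only – the 3.5-month doubling period is an illustrative assumption taken from the middle of the 3–4 month range above:

```python
# Rough growth implied by "compute doubles every 3-4 months".
# The 3.5-month doubling period is an illustrative assumption,
# not a figure reported in this article.

def growth_factor(months: float, doubling_period_months: float = 3.5) -> float:
    """How many times compute grows over `months`, given a doubling period."""
    return 2 ** (months / doubling_period_months)

one_year = growth_factor(12)    # roughly a tenfold increase each year
five_years = growth_factor(60)  # over 100,000-fold in five years
print(f"~{one_year:.0f}x per year, ~{five_years:.0e}x over five years")
```

In other words, a doubling every few months compounds into roughly a tenfold increase every year – which is why the energy costs discussed above grow so quickly.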

Questions for discussion

  • Do you think the energy cost of building a model such as ChatGPT is justified by the applications people will find for the technology?

  • What ways can you think of to offset or reduce the carbon cost of AI?


It’s estimated that training a tool of GPT-3’s scale generates a similar carbon footprint in a few months as five cars do over their entire lifetimes. The size and complexity of our AI projects will only increase – but climate change isn’t going away either.

Microsoft has announced plans to power its AI network with a complementary network of small, next-generation nuclear reactors, arguing that this is the cleanest and most sustainable way of maintaining an AI infrastructure of the planned scale.

Whose data trains the AIs? 

Dr Te Taka Keegan is a computer scientist and Māori language expert. He calls the new large language models’ proficiency in te reo Māori “scarily good”, teasing out its implications while pointing out that the language data used to train the models belongs to Māori and was used without permission.

ChatGPT, AI and Māori data sovereignty

Associate Professor Te Taka Keegan (Waikato-Maniapoto, Ngāti Porou, Ngāti Whakaue) is a computer scientist and Māori language advocate at the University of Waikato. He talks about an experience exploring ChatGPT with his students and discusses the implications of large language models (LLMs) like ChatGPT on the push for Māori data sovereignty.

Question for discussion

  • What positive applications can you imagine for AI language models to support and protect te reo Māori?


The call for Māori data sovereignty is part of a growing worldwide discussion about indigenous data governance. The CARE Principles for Indigenous Data Governance were first drafted in 2018 by a panel that included experts from Aotearoa, Australia, Africa and the Americas.

As Dr Keegan points out, huge overseas operations like ChatGPT scraping te reo Māori data from social media is the opposite of Māori data sovereignty. Risks include international companies profiting from the Māori language and the possibility that control of the use and evolution of te reo Māori is gradually transferred from iwi to AIs.

Ethics and regulation: where to from here? 

Philosopher Nick Agar has written widely on the role of technology in our human future. In discussing the ethical ramifications of the AI boom, Professor Agar and Dr Williamson draw parallels with another technology that’s changed our world: the rise and prevalence of social media.

AI – ethical and regulatory concerns

Artificial intelligence (AI) comes with a wealth of possibilities – and many ethical concerns. Dr Amanda Williamson (University of Waikato and Deloitte) and Professor Nick Agar and Associate Professor Te Taka Keegan from the University of Waikato discuss some reasons to be wary of the new technology, the feasibility of laws regulating the industry and some suggestions for individuals navigating this new world.

Did you know?

The Cambridge Analytica data scandal, which came to light in 2018, involved the harvesting of personal data from millions of Facebook users without their consent. The data was used for targeted political campaigning.

Questions for discussion

  • How do you define ‘social cohesion’?

  • How do you think the rise of social media has impacted social cohesion?

  • What new jobs or new learning for current roles will be needed to regulate and safeguard against the abuse of AI?

  • Nick suggests that AI and generative language models will be more transformative than social media. How do you think social media has transformed society? How do you think that AI and language models will transform society?

  • Some of the speakers suggest ways to reduce the possible harms of AI and generative language models, such as a ‘right to be forgotten’ – having our data removed when we request it. What regulations or brakes would you like to see applied?


Generative AI is making its public debut at a time when the regulation of data usage is a huge international issue. While tech workers in wealthy countries campaign for a living wage, much of the hard mahi of training tomorrow’s AI tools is outsourced to developing countries where workplace oversight is largely absent and pay is often only a few dollars a day.

The work can be physically and psychologically gruelling as well as precarious. A huge amount of the human work required for AI to function is outsourced to countries like the Philippines, where climate change is already having a disastrous effect.

Dr Keegan stresses the importance, in an AI-connected world, of meeting and working kanohi ki te kanohi (face to face). It will be hard to pre-emptively regulate against the potential threats posed by AI, he argues, pointing out that traffic laws weren’t drafted until automobile fatalities demanded it.

“Question what you’re seeing and hearing and believing,” he says, advising that fostering in-person connections can counteract AI’s threats to social cohesiveness.

What do you think?

Professor Mike Duke showed a video of a robot pruner named Archie to which he’d added a simulation of the robot describing its job and joking about its superiority to humans. As Mike is careful to note, AIs don’t really “think” in this way at all. Is AI easier to understand if we personify it as Professor Duke has done? What are the advantages – or risks – of doing this?

Dr Amanda Williamson discussed some of the environmental costs of large-scale AI work, and we saw how one potential solution involves networks of small-scale nuclear reactors. Can you think of other possible ways of offsetting or minimising AI’s carbon footprint?

We’ve seen some of the potential misuses of AI such as reinforcing human biases, spreading misinformation and enabling antisocial behaviour. How might education, diversity and in-person connections counteract these dangerous effects as Dr Williamson and Dr Te Taka Keegan advise?

Nature of science and technology

AI represents a massive commercial application of advanced STEM research. Discussing these technologies and their ramifications helps us to explore the impact science and technology have on our world and lives.

Related content

The article Artificial intelligence provides an overview of how we’re starting to see AI explored and employed today.

Professor Albert Bifet’s article ChatGPT – generating text and ethical concerns goes into more depth about some of the ethical questions raised by large language models (LLMs) such as ChatGPT.

The article ChatGPT and Māori data sovereignty explores some of the cautions and promises that Dr Te Taka Keegan sees in the future of LLMs.

The Connected article Emotional robots asks us to consider what constitutes intelligence and what it might mean to attribute it to a machine or computer program.

The citizen science project AI4Mars offers students an opportunity to help train AI for scientific mahi that humans currently can’t do.

Useful links

Dr Karaitiana Taiuru explains his role as a Māori Data and Emerging Technology Ethicist.

Explore the resources in the Artificial intelligence section on the Office of the Prime Minister’s Chief Science Advisor website.

Download from the Royal Society of New Zealand Te Apārangi Summary: The Age of Artificial Intelligence in Aotearoa. This 2019 report looks at what artificial intelligence is, how it is or could be used in New Zealand and the risks that need to be managed so that all New Zealanders can prosper in an AI world.

ChatGPT and other LLMs require significant input from humans and rely on our feedback to improve the technology. This article looks at LLMs from a sociological perspective.

Better Images of AI is a non-profit collaboration that examines clichéd images used to illustrate AI concepts and how these hinder our understanding of AI. A common example of an inaccurate image used to illustrate AI is the ‘thinking humanoid robot’.

The Royal Society Te Apārangi Mana Raraunga Data Sovereignty 2023 report outlines what data sovereignty is and why it matters in Aotearoa New Zealand. Listen to this RadioNZ interview with Professor Tahu Kukutai as she breaks down concepts like Big Data and Māori data sovereignty.

Acknowledgements

Professor Mike Duke is the Dean of Engineering and Dr John Gallagher Chair in Engineering at the University of Waikato. Mike is a founding member of the Waikato Robotics, Automation and Sensing (WaiRAS) research group.

Dr Amanda Williamson is a Senior Lecturer in Innovation and Strategy at the University of Waikato and a Manager in AI & Data Consultancy at Deloitte.

Professor Nick Agar is a philosopher and Professor of Ethics at the University of Waikato.

Associate Professor Te Taka Keegan (Waikato-Maniapoto, Ngāti Porou, Ngāti Whakaue) is an Associate Professor of Computer Science, the Associate Dean Māori for Te Wānanga Pūtaiao (Division of HECS) and a co-director of Te Ipu Mahara (University of Waikato’s AI Institute).

Published: 21 November 2023