Karen Walstra

Ethics and AI in Education - how do we address this?

Updated: Feb 2, 2023



New inventions to explore, legislation in some instances, and technology being developed all the time. How do we as educators address this complex issue?


I love new tech. I think we should expose learners to these developments so that they can succeed in the changing world. As much as I love AI, it also scares me, and I think we need to stay informed.


For example, some exciting new tech: LaMDA (Language Model for Dialogue Applications) is a generative language model, and Google’s LaMDA 2 is now available to the public. Generative language models are able to produce text about almost anything. They do this by recognizing patterns in language and predicting what comes next.
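To make "recognizing patterns and predicting what comes next" concrete for learners, here is a tiny Python sketch of the idea. It is only a toy word-pair model, not how LaMDA itself works (LaMDA uses a large neural network), and the sample corpus is made up for illustration:

```python
import random
from collections import defaultdict

# A toy "language model": learn which word tends to follow which.
# Real systems like LaMDA use huge neural networks, but the core idea
# (learn patterns in text, then predict the next word) is the same.
corpus = (
    "learners love coding . learners love robotics . "
    "teachers love learning . learners explore coding ."
).split()

# Learn the patterns: for each word, remember every word that followed it.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start_word, length=6):
    """Generate text by repeatedly predicting a plausible next word."""
    words = [start_word]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:  # no pattern learned for this word, so stop
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("learners"))  # e.g. "learners love coding . learners explore coding"
```

Even this toy version shows learners both the power and the limits of prediction: the output sounds plausible, but the model has no understanding of what it is saying.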


LaMDA 2, the AI (artificial intelligence) chatbot from Google that one of its engineers claimed was able to perceive or feel things (i.e. was sentient), is being made available to the public. See the AI Business article: Google fires engineer who claimed AI was ‘sentient’.

Register your interest at Google's AI Test Kitchen (https://aitestkitchen.withgoogle.com/) to test LaMDA. The app is designed to help Google gather feedback on its AI tech. It is currently only open to small groups of users in the US.


There are three demo options on the application.

  • “Imagine It”, which allows users to name a place that the chatbot will then describe (see the video clip at https://aitestkitchen.withgoogle.com/)

  • “Talk About It”, where users can converse with the bot about a specific prompt

  • “List It”, where users offer a topic and the system breaks it down into bullet points.


In 2016, the Partnership on AI was formed by Google, Facebook, Amazon, IBM and Microsoft. The partnership's intent was to advance the public's understanding of AI, as well as to formulate standards for future AI researchers to abide by. However, in practical terms this hasn't been achieved. In 2019, Google announced a new external AI ethics board, which received much criticism. There seem to be various AI ethics boards.


On 26 June 2019, the US House of Representatives Committee on Science, Space, and Technology held a hearing on Artificial Intelligence: Societal and Ethical Implications, with various people presenting views; the idea was to feed this information into guidelines for the ethical development of artificial intelligence.


In November 2021, Minister Khumbudzo Ntshavheni, speaking on regulating artificial intelligence while encouraging innovation, reminded us that in 2019 the AU Ministers responsible for Communications and ICTs adopted the Sharm El Sheikh Declaration, in which they agreed, amongst other things, to establish a Working Group on AI, based on existing initiatives and in collaboration with African institutions, to address the following:

  • The creation of a common African stance on AI;

  • The development of an Africa-wide AI capacity-building framework;

  • The establishment of an AI think tank to assess and recommend projects for collaboration, in line with Agenda 2063 and the SDGs.

  • In Nairobi, Kenya, Strathmore University established the @iLabAfrica Research Centre, which seeks to promote cutting-edge research in AI, among other emerging technologies.

  • In Nigeria, the University of Lagos recently opened an AI Hub that will focus on deep learning and tools to collect data for AI purposes.

The South African Government's Report of the Presidential Commission on the 4th Industrial Revolution also addresses the development and advancement of AI as a key focus area in South Africa's digital economic development strategy.


The EU's Ethics Guidelines for Trustworthy AI (the website has, ironically, been archived and moved to the new Futurium platform) focus on building trust in human-centric AI. The report and draft guidelines are available there. The guidelines list seven key requirements that AI systems should meet in order to be trustworthy:

  1. Human agency and oversight

  2. Technical robustness and safety

  3. Privacy and data governance

  4. Transparency

  5. Diversity, non-discrimination and fairness

  6. Societal and environmental well-being

  7. Accountability



AI is being used in the apps and websites we interact with, often without us even realizing it.


As AI develops and the technological world changes, I feel strongly that, as educators, we should be teaching our learners to be responsible technology users and creators.


I strongly agree with the Harvard Business Review article Why You Need an AI Ethics Committee.

As educators we should be lifelong learners, and learning about technology as it changes should be part of that journey.


In 2018, ACER Education wrote a comment about "How Artificial Intelligence can impact on the schools of tomorrow".


Consider the following for your school or education community. What if we stand together and begin forming and implementing the following ideas:

  • Create an AI ethics committee at school among learners

  • Hold debates about AI ethics among learners, so that they research the pros and cons

  • Ensure teachers are informed about the benefits of AI for teaching, and learn about its potential dangers in education

  • Develop learners into the responsible, ethical coders of the future.

The World Health Organisation's Global Forum on Bioethics in Research (GFBR) is holding a two-day meeting in Cape Town, South Africa in November 2022 on the theme “Ethics of artificial intelligence in global health research”. We should keep an eye on this meeting's findings and decisions.


Business in South Africa is using AI, and universities are experimenting and exploring.

Let's begin in our schools as well. As we teach our children to code, how do we also teach them about AI and ML?
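As one possible starting point, a sketch of a classroom activity rather than a prescribed method: learners who already code in Python could build a tiny machine-learning classifier by hand and then discuss where it goes wrong and why. The fruit data below is invented for illustration; learners could collect their own measurements:

```python
# A first machine-learning lesson: classify fruit from two measurements.
# The training data is made up; a class could gather real measurements.
training_data = [
    # (weight in grams, skin smoothness from 0 to 10, label)
    (150, 9, "apple"),
    (170, 8, "apple"),
    (140, 9, "apple"),
    (120, 2, "orange"),
    (130, 3, "orange"),
    (115, 2, "orange"),
]

def classify(weight, smoothness):
    """Predict a label using the single nearest training example."""
    def distance(example):
        w, s, _ = example
        return (w - weight) ** 2 + (s - smoothness) ** 2
    nearest = min(training_data, key=distance)
    return nearest[2]  # the label of the closest known fruit

print(classify(160, 9))  # expected: apple
print(classify(125, 3))  # expected: orange
```

A short exercise like this opens exactly the ethics questions this post is about: what happens when the training data is unbalanced, or when the thing being classified is a person rather than a fruit?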


Exploring an AI discussion in education - interested? Complete this form: https://tinyurl.com/AIinEdudiscuss



I look forward to hearing from you.


When having discussions about responsible coding, ethics and values, you are also assisting in supporting the UN Sustainable Development Goals (UNSDGs).



Regards, Karen.




