Background and Opinion
Since the release of ChatGPT, Large Language Models (LLMs) of Generative AI (Gen AI, GAI) have had a rapid uptake across all fields, most notably in creative industries (writing, image, audio, and video generation in content production); in the health and pharmaceuticals industries (medtech), where large amounts of data can be analysed and synthesised, and insights drawn efficiently from the data; in life, earth and space sciences; military, transport and logistics; in myriad customer service applications across a vast array of business types, where customer communications are often repetitive; and in education, where the promise is to reduce the workloads of teachers and improve student learning outcomes.
A recent Morgan Stanley research paper (2023) explores how Gen AI is projected to transform teaching and learning in the education sector and to drive $200 billion into the global value of education. While our concern is, and should be, with the education field, the impact that Gen AI has on a wide range of industries needs to be considered in our preparation of students for future learning and work.
To gain a greater understanding, I began experimenting with AI tools and reading emerging research during 2023. Although the field is evolving rapidly, with changes being implemented briskly at the product development, application, and political levels, I have found both positive and negative experiences, along with occasional helpful outcomes (one or two of which are demonstrated in the writing of this paper). In the following, I will attempt to briefly document some of these findings.
ChatGPT as the 'go to'
The emergence of ChatGPT in November 2022 heralded the fastest uptake of any software application in history (Gordon 2023), and the effects on workplaces are beginning to be felt across the world as many companies experiment with replacing writers and content creators with AI tools (Board 2023).
A February 15 article on the ABC (Purtill 2024) points out how ChatGPT has impacted copywriters and content creators through fee reductions and loss of job value. However, there is an upside: some copywriting roles have shifted towards a consultant/educator role, training clients in the use of the application.
As Sean Makin points out, "AI algorithms can only generate designs based on pre-existing templates and design rules. They cannot replace the creativity and originality that human designers bring to the table" (Makin 2023). Quite clearly, human understanding, initiative and capability are required, because an AI algorithm can only generate an output based on what already exists. By its very definition, a creative act brings into being that which previously did not exist. Where AI has its strength is in the automation of repetitive and time-consuming tasks.
A March 2023 report from Goldman Sachs estimates that "one fourth of current work tasks could be automated by AI in the US and Europe". They suggest that 27% of employment in the Education Instruction and Library field could be exposed to automation by AI (Briggs et al. 2023, 7). The authors point out that their view of future work involves AI in a capacity to "replace sometimes, complement often". The positive power of AI, then, seems to be about complementing work and delivering productivity gains through workload improvements, and this aligns with the many claims of AI tools for education, such as generating teaching resources, attending to email communications, and planning lesson programs.
In January, OpenAI released the GPT Store, a platform aimed at capitalising on ChatGPT's consumer success "as a place for users to discover and build GPTs, or AI customized for tasks like teaching math or designing stickers" (Tong 2024).
Even prior to this release, application developers, such as AgentGPT, Microsoft's Bing, and others, were using ChatGPT as an engine for their apps. Tong notes, however, that part of the motivation behind OpenAI's GPT Store lay in the fact that "growth declined when some schools were out of session and the chatbot's novelty wore off".
As part of that same product launch, OpenAI introduced ChatGPT Teams, which it described as "a version of ChatGPT that companies pay for so their employees can use ChatGPT at work. ChatGPT Teams segregates a company's data, so any information entered into the chatbot remains private to the company", which is not what happens in its consumer model.
In a recent upgrade, "OpenAI has released a memory feature that will enable ChatGPT to remember past conversations so users don't have to repeat information during chats; useful for generating the right formatting preferences for content" (Crowley and Lawson 2024c). Crowley and Lawson note that this feature is excluded from ChatGPT Teams and Enterprise accounts, leaving some concerns about privacy and data management unaddressed.
On January 30, OpenAI announced a partnership with Common Sense Media to create guidelines and educational materials aimed at preventing harm to children, although Common Sense Media had already published a guide (Masood 2023). Common Sense Media reviews and ranks the suitability of technology for children, using a 'nutrition-like' label to indicate age-based and family-use suitability for the products.
AI Tool Report indicates that this is in response to the US Federal Trade Commission investigation into ChatGPT's breaching of data and security laws (Crowley and Lawson 2024a). TechCrunch reports that OpenAI is seeking to win the trust of parents and policymakers -- and, we can assume, educators too -- by working to minimise harms to children in the use of their platform.
"An Impact Research poll commissioned by Common Sense Media late last year found that 58% of students aged 12 to 18 have used ChatGPT compared to 30% of parents of school-aged children" (Wiggers 2024). It is interesting how the data shows students as early adopters of the technology, with an assumed prospect for advanced or accelerated learning.
Context
It is worth noting that the vast majority of reportage and data is USA-centric, and while some data and reports have emerged from the European zone, and there are trends from Canada and Great Britain, there is little comparable information that is specifically Australian.
Inquiry into The Use of Generative Artificial Intelligence in the Education System
An inquiry into The Use of Generative Artificial Intelligence in the Education System was adopted by the House Standing Committee on Employment, Education and Training on 24 May 2023.
The inquiry will include consideration of:
The strengths and benefits of generative AI tools for children, students, educators and systems and the ways in which they can be used to improve education outcomes;
The future impact generative AI tools will have on teaching and assessment practices in all education sectors, the role of educators, and the education workforce generally;
The risks and challenges presented by generative AI tools, including in ensuring their safe and ethical use and in promoting ongoing academic and research integrity;
How cohorts of children, students and families experiencing disadvantage can access the benefits of AI;
International and domestic practices and policies in response to the increased use of generative AI tools in education, including examples of best practice implementation, independent evaluation of outcomes, and lessons applicable to the Australian context; and
Recommendations to manage the risks, seize the opportunities, and guide the potential development of generative AI tools including in the area of standards. (Commonwealth Parliament 2023).
Before its first public hearing, Committee Chair, Ms Lisa Chesters, said, "Submissions to the inquiry have identified issues that we want to explore further, including the risks AI poses to academic integrity, and the potential it offers to personalise learning and address educational disadvantage."
To date, 12 public hearings have been held in Canberra (ACT), Pymble (NSW), Clayton (Vic), and Ultimo (NSW), and a further hearing is scheduled for 6 March in Canberra. The inquiry is currently accepting written submissions.
Sample Submissions to the Inquiry
I took a snapshot of a few samples of submissions to gauge the scope of issues and concerns.
1. AATE
The following is drawn from the Australian Association for the Teaching of English (AATE) submission to the inquiry. The submission draws on contributions from experienced English teacher educators, researchers, and former teachers. It addresses four specific items from the inquiry's terms of reference, which they argue are related to English teaching.
Strengths and Benefits of AI Tools: AI tools, particularly generative AI, offer opportunities to enhance teaching and learning in English, especially in writing. They can assist with textual generation, simplify writing processes, and support literacy tasks.
Future Impact on Teaching and Assessment: AI tools may be used for generating assessment tasks and providing feedback, but there are concerns about their impact on teaching processes and the potential for students to rely too heavily on AI-generated content.
Risks and Challenges: Ethical issues such as bias, privacy, and academic integrity are highlighted, along with concerns about the commercial nature of AI products and the potential for widening inequalities in education.
International and Domestic Practices: There is a call for comprehensive regulation and guidance on the use of AI tools in education, as well as the need for resources to support teachers in integrating these tools effectively while ensuring student safety and learning outcomes (2023).
Overall, the submission emphasises the need for informed engagement with Generative AI in education, highlighting potential benefits and ethical considerations that must be addressed.
2. Moodle
In his submission, Moodle CEO, Dr Martin Dougiamas, warns of the dangers of EdTech being too much under the control of big tech firms such as Microsoft, Amazon, Apple, Meta etc, and their significant influence, often through the infrastructure systems that are in common use, to control and distort what gets taught, and how it gets taught.
Along with inaccuracies of information, he highlights how cultural biases, and knowledge curation inherent in the design of existing Generative AI tools could have negative consequences for Australian students (2023).
3. Curtin University
Curtin University's submission was a collaborative document from a number of staff across the university. Among the strengths and benefits for children, they see that AI might potentially provide individualised tutoring and feedback on learning through a 24/7 virtual assistant; that it can be used for multimodal content such as images, videos, avatars, and audio; that it can act as a writing, proofreading, and communication aid for students with low levels of literacy or English language skills, since LLMs can not only correct errors but also explain why; that it offers potential help with research, such as finding relevant papers for a literature review; and that it could engage and help support neurodivergent students.
Into the future, Curtin sees three possible uses: as a tool, as a collaborator, and as a supervisor.
"Currently, people see AI more like a tool, but in the future, we will see Gen-AI more like a collaborator that helps promote self-reflection in learning. ... For education, this will present us with a challenge of how to help equip the future workforce that can collaborate with AI/Gen-AI to create value".
Among the risks and challenges for educators, the university expresses a sense that without adequate short-term training to prevent unethical use, and long-term training in how to develop the technology, educators are at risk of being left behind.
They point out that the school system is adopting the technology more quickly than higher education and that a "different cohort of students is coming through that are adept at using these tools, more personalised assessments, personalised feedback".
However, because school systems have different policies, and there is a lack of consistency across the sector in adoption and use, students may present to university with widely varying skill levels in the technology's usage, leaving staff at a disadvantage in their own capabilities (2023).
4. University of Melbourne
The University of Melbourne's submission to the House Standing Committee inquiry, authored by Gregor Kennedy, points out that tools such as ChatGPT are known to produce convincing but false information, referred to by developers as "hallucinating".
The capacity for these tools to rapidly produce text, images, and other content in response to simple user prompts poses a threat to academic integrity: responses can appear authentic, making it difficult to know whether students have actually achieved learning outcomes.
One major concern is the efficacy of assessments and the challenge of eliminating the potential for cheating, pointing to a need for research to help guide "the creation of assessments that students perceive as meaningful, authentic and relevant, and for which the motivation to cheat is reduced." Privacy is also a concern:
Generative AI is trained on massive amounts of data scraped from the internet, such as books, articles, websites and posts. This data contains personal information that has been obtained without consent and without "contextual integrity", raising serious privacy concerns. This data also includes text that is copyrighted or proprietary, creating issues around intellectual property when users re-appropriate this content in an educational or research context (Kennedy 2024, 7--8).
Kennedy goes on to say that further privacy issues are raised by the way in which ChatGPT stores users' prompts. OpenAI states it will review user prompts, "As part of our commitment to safe and responsible AI ... to improve our systems and to ensure the content complies with our policies and safety requirements" (OpenAI 2024, Point 5). A recent announcement, however, seems to indicate a change in this policy (Crowley and Lawson 2024c).
Generative AI models can only produce answers that reflect and express the data they are trained on. This data is drawn mainly from affluent Western societies, and mainly from the USA (upwards of 80%), and will naturally reflect the biases of those societies. The University of Melbourne submission includes examples to illustrate these patterns and the dangers associated with them.
For example, Stable Diffusion, an AI-powered image generator, was found to largely produce images of white men when asked to create images of people in high-paying jobs. Conversely, it overwhelmingly produced images of women or people of colour when asked to create images of people in low-paying jobs. ... Such biases could also pose serious issues for the education sectors depending on their use. Concerns about bias in generative AI must become part of the education and training of students and staff generally (Kennedy 2024, 8).
5. Department for Education, South Australia
This submission makes a number of interesting points about the possible benefits of AI in education that are worthy of some reflection. Its first main point is a focus on equity and the potential for the technology to give students access to a level of learning support that may not ordinarily be available to them. "Giving everyone access, the skills to write effective prompts, and tools to think critically about responses could be a driver of equity in education" (2024).
The second important consideration is giving students possible learning opportunities, particularly for those who struggle with traditional learning systems, and perhaps those with learning difficulties. The flexibility of AI could help implement "programs that can adapt to a student's learning style, making it easier for them to understand complex concepts ... and identify their strengths and weaknesses and work on them accordingly".
The submission posits potential benefits, including personalised content, learning support, targeted feedback, and streamlining tasks for both teachers and students.
Generative AI can help support equity in education by providing access to learning support for students who may not have access to other resources. It can explain information in different ways, summarise complex information, create study timetables, test student knowledge, and assist in synthesising and paraphrasing information.
The submission highlights the need for age-appropriate access, controlled usage, and the capability for students to write prompts and evaluate responses.
In addition to the equity issue and building personalised learning experiences for students with different needs, it suggests uses for lesson planning, rewording information, summarising research or concepts, generating ideas or questions, and tailoring tasks or learning experiences to enhance student engagement.
They are at pains, however, to underline the importance of educator-led instruction and that critical thinking and creativity should still be developed through teacher-led learning.
To support schools in exploring generative AI, the South Australian government is working on integrating the technology into its Microsoft Azure tenancy, which allows for greater control over data, access, and content for teaching and learning purposes. The department is currently conducting a trial of this technology in several secondary government schools to gather insights and inform future implementation.
5.1 SA Dept. for Education Public Hearing
Led by CEO Prof. Martin Westwell, the SA Department for Education presented to a hearing in Canberra on Monday 5 February, 2024.
In 2019, we engaged with Andreas Schleicher, who is the director of education at the OECD; he visited Adelaide and spoke with some of our educators about AI's potential impact. That led us to make some changes in our thinking around our junior and secondary curriculum. When ChatGPT came out in November 2022, we didn't ban it, but we wanted to step in and focus on supporting the safe, responsible and ethical use of AI as part of teaching and learning. We very much adopted a safety-first approach. It was an ambitious approach but always cautious in order to make sure that we were realising some of the potential of AI and exploring that with our teachers and students, while making sure that we were doing that in a safe way. We see that, in at least the medium and probably the long term, AI won't replace teachers, but a teacher with AI will replace a teacher without AI, and we wanted to explore what that might look like and how we could learn about the use of AI (Hansard 2024).
Westwell describes setting out to provide a proof of concept that involved eight schools, 110 educators and 1500 students. Working with Microsoft, they built a model on ChatGPT, which they called EdChat, and proceeded to test the tool's capability in terms of student access and of monitoring student behaviour in its use.
His colleague, Julia Oakley describes one instance they responded to, involving a teacher and a student using artificial intelligence.
The teacher set a task for the student. The student had autism spectrum disorder. They were able to use EdChat to reframe the task, in terms that were much better for the student, and also translate the task into the student's first language. It highlighted for both teacher and student that, in a matter of seconds, a task could be adapted for a student who otherwise would have had significant challenges with their learning. There were incredibly positive results. We continue to monitor closely in phase 2 of our proof of concept, and we're looking forward to what else we can learn from this phase (Hansard 2024).
To date, 100 submissions to the House Standing Committee have been received from a range of organisations across Australia with an interest in how AI affects teaching and learning, spanning primary, secondary, and tertiary education, copyright interests, and private education services. Submissions from Western Australia include those from Curtin University and Moodle, mentioned above, as well as one from Edith Cowan University and one from the Association for Academic Language and Learning.
The above samples demonstrate some of the complexities in applications of Gen AI to education.
Department of Education Framework for Generative AI
In November 2023, the Commonwealth Department of Education released the Australian Framework for Generative Artificial Intelligence (GAI) in Schools in which it:
seeks to guide the responsible and ethical use of generative AI tools in ways that benefit students, schools, and society. The Framework supports all people connected with school education, including school leaders, teachers, support staff, service providers, parents, guardians, students and policy makers (Department of Education 2024).
It is intended that this framework will be reviewed twelve months after its publication, and every twelve months thereafter, as more becomes known about Gen AI and the opportunities and challenges it poses.
The framework contains six principles and 25 guiding statements that support three main goals: educational outcomes, ethical practices, and equity and inclusion.
The six principles of the framework
Teaching and Learning. Generative AI tools are used to support and enhance teaching and learning.
Human and Social Well-being. Generative AI tools are used to benefit all members of the school community.
Transparency. School communities understand how generative AI tools work, how they can be used, and when and how these tools are impacting them.
Fairness. Generative AI tools are used in ways that are accessible, fair, and respectful.
Accountability. Generative AI tools are used in ways that are open to challenge and retain human agency and accountability for decisions.
Privacy, Security, and Safety. Students and others using generative AI tools have their privacy and data protected.
This framework recognises that education ministers from all Australian states and territories have agreed that working towards an effective response to the risks, and to harnessing opportunities from generative AI technologies, is a national education priority.
Privacy, Bias and a Lack of Originality
Concerns with security, data breaches, and the well-being of children are major issues that are being addressed by legislation in the USA and the European Union.
Australian Commonwealth Minister for Science and Industry, Ed Husic, "has tasked an expert panel to help decide how the country should respond and monitor the most high-risk AI technologies" (Karvellas 2024). Husic's concerns cover the prospects of people's futures being affected by "AI that is tethered to bad data" and it is his intention to install "mandatory guardrails" for the ethical use of Artificial Intelligence, with which companies offering AI technologies in Australia must comply.
While Husic was mostly discussing effects on people's work prospects, it is easy to see how bad data might impact the use of Gen AI in education, and nothing has more influence on the futures of children than our education system.
In a collaborative opinion paper, "So what if ChatGPT wrote it?", the authors set out to investigate, firstly, what the opportunities, challenges, and implications of Gen AI technologies such as ChatGPT may be in the context of education, business, and society; and, secondly, what the most important research questions regarding GAI technologies should be in those domains (Dwivedi et al. 2023).
To the latter question, they found one significant challenge to be ChatGPT's lack of originality. The inaccuracies produced, the lack of logical flow, the mimicry, and lack of critical evaluation might, in one view, be put down to software that is still being tested, and with time, and improved training, the output might be more acceptable.
Dwivedi et al. argue, however, that it is not clear whether the model will become better and "lead to more meaningful outcomes". The lack of originality can be explained by the way LLMs work, and occurs because "the input material is drawn from a very large collection of online documents: accurate, inaccurate, hypothetical, polemic" (Kennedy 2024, 4).
The data is stored in fragments and compiled based on which word is predicted to follow the previous word. These, and other fragments, are collapsed and stored together in the LLM, which makes it impossible to identify the source of any fragment. Moreover, bringing together two fragments that are individually true can easily produce an inaccuracy "without the LLM having any awareness of that fact".
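Kennedy's point can be illustrated with a toy bigram model, a deliberately minimal sketch in Python (real LLMs use neural networks over sub-word tokens, not word counts, so this is an analogy rather than an implementation). Every word-to-word transition below is drawn from two individually accurate sentences, yet chaining the most likely continuations stitches fragments from both into a fluent statement that is false:

```python
from collections import Counter, defaultdict

# Two individually accurate "training" sentences.
corpus = ("the inquiry was adopted in may . "
          "the framework was released in november .")

# For each word, count which words follow it. The model keeps no
# record of which source sentence a transition came from.
following = defaultdict(Counter)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def generate(start, length=5):
    """Repeatedly emit the most likely next word after the last one."""
    out = [start]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

# Fragments that are individually true combine into a falsehood:
# the framework was released in November, not adopted in May.
print(generate("framework"))  # framework was adopted in may .
```

The model has no representation of truth, only of which word tends to follow which, so the false output is indistinguishable, from the model's side, from an accurate one.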
OpenAI makes a point on ChatGPT's help pages that responses can't necessarily be relied upon and "may produce harmful instructions or biased content" (OpenAI 2024, Point 4).
It is interesting to note how singer-songwriter Nick Cave called ChatGPT an exercise in "replication as travesty" after he received a song lyric, produced using ChatGPT, from a fan named Mark in New Zealand (Cain 2023). In his response to Mark, Cave said, "With all the love and respect in the world, this song is bullshit, a grotesque mockery of what it is to be human, and, well, I don't much like it."
It is quite easy to imagine having a similar response to student work that is created using GAI, when it presents plausibly, but without originality or critical evaluation.
Understanding Educational Tools in AI
UNESCO
In its July 2023 report, *Generative AI and the future of education*, UNESCO highlights a number of opportunities and risks associated with educational AI tools for teaching and learning, many of which have been discussed above.
Among the key points is one that encourages education systems to "return agency to learners and remind young people that we remain at the helm of technology" (Giannini 2023, 4). Giannini expresses concern that the checks and balances ordinarily applied to teaching materials may not be rigorously applied when implementing GAI, and that inconsistent application across school systems may give rise to inequities in future student success.
She stresses that, while AI tools can simultaneously strengthen educational practices and create new prospects for learning, they also have the potential to undermine the authority and status of teachers, and she calls for proper research to be conducted. These sentiments are similar to those expressed in some of the submissions to the House Standing Committee inquiry noted above. According to Giannini:
AI is forcing us to ask questions about the 'known-world' that we usually take as a starting point for education. Many of our old assumptions and norms, especially those concerning knowledge and learning, appear unlikely to sustain the 'weight' of this new technology. We can no longer just ask 'How do we prepare for an AI world?' We must go deeper: 'What should a world with AI look like? What roles should this powerful technology play? On whose terms? Who decides?'(Giannini 2023).
IS Scholars
The authors of "So what if ChatGPT wrote it?" are a collection of Information Systems (IS) scholars, 73 in total, from around the world (Dwivedi et al. 2023). Their collaborative opinion paper acknowledges that generative AI is one in a long line of disruptive technologies in education, comparing it to past technologies like calculators and email.
They argue that GAI has the potential to transform teaching and research practices, and the advent of ChatGPT has sparked widespread discussions on the promises and pitfalls of educational applications with the technology. While concerns about academic integrity exist, they argue that closing off access to generative AI based on these concerns would be a mistake.
Instead, the authors suggest incorporating the technology into teaching practices with an open mindset of experience and experimentation, and propose using the concept of IT Mindfulness to engage students and provide guidance on the ethical implications and boundaries of its use (2023, 20).
They describe IT Mindfulness as "including four elements: 1) alertness to distinction, 2) awareness of multiple perspectives, 3) openness to novelty, and 4) orientation in the present." According to Nicholas Roberts and colleagues:
Alertness to distinction involves developing new ideas and ways of looking at things. Specifically, mindful individuals can distinguish how things are the same or different. Mindfulness also involves an openness to novelty, i.e. the active pursuit of new and various kinds of stimuli. Orientation in the present refers to a heightened level of awareness and involvement in whatever particular situation an individual faces. Finally, mindful individuals invoke multiple perspectives and recognize that each perspective holds value. Thus, they are flexible and open-minded when approaching any particular situation (Roberts, Thatcher, and Klein 2006, 4--5).
Dwivedi et al. argue that instructors can use this framework to help engage students in exploring technology tools. Clearly, for the high school teacher, there is an application in IT learning and mastery of digital literacies for students.
One of the authors, Giampaolo Viglia, suggests that if ChatGPT is used in a compulsive way it poses threats for both instructors and students.
For students, who are already suffering from a lower attention span and a significant reduction in book reading intake, the risk is going into a lethargic mode. For teachers, the ability to think critically is a prerequisite for teaching critical thinking. Only by being very prepared on the topic with the right training, teachers might be able to disentangle the work of a student from the work of an AI bot (Dwivedi et al. 2023, 25).
Viglia goes on to say that he doesn't consider that "increasing rules and enforcement" is beneficial; that it is more appropriate to use this advancement to facilitate learning and knowledge, stressing the value of independent thinking, which is what, he suggests, makes us better at being human. He worries that "if ChatGPT does everything or many things for students and professors, it may also kill creativity and critical thinking".
This is similar to sentiments expressed by Jiahui (Jess) Luo, whose review of university policies across the world found that the predominant concern lies in the "originality of students' work" (2024, 10). Yet the notion of what constitutes originality in an evolving digital space, and the historicity of knowledge as layers built upon the knowledge of others, occupies a silence. She argues that "Rather than stressing originality from a surveillance angle, policies can place more emphasis on the available support to students in producing original work that is meaningful to their learning" (2024, 11).
The contribution in Dwivedi et al. by Ramakrishnan Raman, Gareth H. Davies and Abbas Mardani points to how AI has been applied in education to provide personalised feedback on writing assignments, citing a 2018 study (pre-ChatGPT) that used a neural network model to analyse student essays and provide feedback on grammar and organisation, a role for GAI that was also flagged by the AATE submission to the Standing Committee inquiry.
The contributors further cite a range of opportunities that could be explored by Generative AI:
Basic educational material: ChatGPT can be used to provide basic educational materials that would otherwise be created by searching the internet.
Personalised feedback: ChatGPT can be used to provide personalised feedback on writing assignments, such as essays and research papers. The model can analyse student writing and provide feedback on grammar, organisation, and content.
Automating administrative tasks: ChatGPT can be used to automate administrative tasks such as grading assessments and answering frequently asked basic questions. It can help to free up teachers' time to focus on other aspects of teaching and research.
Language learning support: ChatGPT can be used to support language learners by providing personalised feedback on grammar and vocabulary, and by assisting with language translation in a classroom setting. It can support language learners by giving them extra practice and feedback on their language abilities.
Enhancing online education: ChatGPT can be used to enhance online learning by giving students more tools and resources, and by making learning experiences more interesting and participatory.
Individualised Support: ChatGPT can be used to provide one-on-one tutoring for students, by answering questions and providing explanations on various subjects. It may determine the student's comprehension level and offer explanations and tasks that are suitable for them (2023, 26).
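To make the "personalised feedback" item above concrete, the sketch below (my own illustration, not from Dwivedi et al.) shows how a student essay and a marking rubric might be assembled into a single prompt for any chat-based model; the function and rubric names are assumptions, not a real product interface.

```python
# A minimal sketch of wiring up "personalised feedback": the essay and a
# rubric are combined into one prompt that could then be sent to a chat-based
# LLM API. Names here are illustrative assumptions.

RUBRIC = ["grammar", "organisation", "content"]  # the three areas cited above

def build_feedback_prompt(student_essay: str, rubric: list[str] = RUBRIC) -> str:
    """Assemble a feedback request covering each rubric area in turn."""
    criteria = "\n".join(f"- {area}" for area in rubric)
    return (
        "You are a supportive writing tutor. Give the student specific, "
        "constructive feedback on the essay below, addressing each of these "
        f"areas:\n{criteria}\n\nEssay:\n{student_essay}"
    )

prompt = build_feedback_prompt("The industrial revolution changed how people worked...")
# The prompt would then be passed to a model of the teacher's choosing.
```

Keeping the rubric explicit in the prompt is what makes the feedback "personalised" in the sense the contributors describe: the same essay can be assessed against different criteria for different tasks.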
I will explore some of the listed opportunities in greater depth further on. However, among the concerns they raise, there are many parallels with those raised previously in this paper, namely: data quality and bias, privacy and security, academic integrity, and ethical concerns, along with some that have not been previously raised, including: interpreting and understanding the model's output, limited explanation capability, and human-computer interaction.
Useful Frameworks
Dwivedi et al. is a considerable study into the uses, opportunities, challenges, and applications of Gen AI in education (and other work areas). I have endeavoured to highlight contributions that may have the best applicability to the high school environment. However, there are two other ideas I would like to draw attention to before moving on to experiments and investigations that demonstrate some of the strengths and weaknesses outlined above.
Prohibit, Allow, or Encourage
Cornell University's Center for Teaching and Learning released a report in September 2023 calling for instructors to adopt one of three policies: prohibit, allow with attribution, or encourage GAI use (Bala and Colvin 2023). While this is geared to Higher Education, the principles remain largely the same for the School System.
In their executive summary, the authors argue that "Educators must take generative artificial intelligence (GAI) into account when considering the learning objectives for their classes, since these technologies will not only be present in the future workplace, but are already being used by students" (2023, 1).
They point to the risk that GAI tools can circumvent learning, but also to their capacity to hide biases, inaccuracies, and ethical problems, including violations of privacy and intellectual property.
Among their recommendations are to:
Rethink learning outcomes and "focus student education on higher-level learning objectives, critical thinking, and the skills and knowledge that they will need in the future".
Address safety and ethics. "Instructors must educate their students about the pitfalls of current technology and teach them to approach GAI critically and to validate GAI-produced information rigorously".
Explicitly state policies for use of GAI. While decisions around the permitted use and application of the technology may be task-specific, there are still many "foundational skills that will still need to be developed without the use of Gen AI. In such cases, instructors must directly explain to students why the process of achieving the specified learning outcomes for a class, without reliance on tools that create 'shortcuts,' is integral to a student's academic and personal growth" (2023, 2).
As policy outcomes, they suggest prohibiting GAI when "it interferes with the student developing foundational understanding, skills, and knowledge needed for future courses and careers"; to allow with attribution where it can be shown to be a useful resource, "but the instructor needs to be aware of its use by the student, and the student must learn to take responsibility for accuracy and correct attribution of GAI-generated content"; and to encourage GAI use in circumstances where students can "leverage GAI to focus on higher-level learning objectives, explore creative ideas, or otherwise enhance learning" (2023, 2).
I am of the opinion that these three policies and three guidelines provide a reasonable base from which to consider the applicability of using Generative AI in classrooms.
Technological Pedagogical Content Knowledge (TPACK)
In a 2022 paper, "The Landscape of Teaching Resources for AI Education", Stefania Druga, Nancy Otero, and Amy J. Ko conducted a "systematic analysis of existing online resources for AI education, investigating what learning and teaching affordances these resources have to support AI education" (2022, 96).
This study was conducted prior to the release of ChatGPT and other GAI tools, and its discussion is therefore limited to tools that predate the user-interface simplicity of OpenAI and other GAI systems. What is of particular interest in this study, however, is the framework the authors used to assess a corpus of 50 AI classroom and teaching resources.
The Technological Pedagogical Content Knowledge (TPACK) describes a framework of teacher knowledge for technology integration (Koehler and Mishra 2009). According to Koehler and Mishra, good teaching with technology requires three core components: content, pedagogy, and technology, along with the relationships that exist and arise among and between them.
The TPACK framework and its knowledge components.
We can see from the image above how technological knowledge must both draw on and contribute to pedagogical knowledge and content knowledge in order to successfully deploy technology in the classroom environment.
As a sequence, this looks like: "(1) the use of appropriate technology (2) in a particular content area (3) as part of a pedagogical strategy (4) within a given educational context (5) to develop students' knowledge of a particular topic or meet an educational objective or student need" (Druga, Otero, and Ko 2022, 97).
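Read as a planning checklist, the five-step sequence quoted above can be sketched as a simple data structure (my own illustration; the field names merely mirror the quoted steps, and the sample plan is hypothetical):

```python
from dataclasses import dataclass

@dataclass
class TPACKPlan:
    """One lesson plan checked against the five-step TPACK sequence."""
    technology: str    # (1) the appropriate technology
    content_area: str  # (2) the particular content area
    pedagogy: str      # (3) the pedagogical strategy
    context: str       # (4) the given educational context
    objective: str     # (5) the topic, objective, or student need

    def missing_steps(self) -> list[str]:
        """Return the names of any steps left unspecified."""
        return [name for name, value in vars(self).items() if not value.strip()]

plan = TPACKPlan(
    technology="ChatGPT",
    content_area="Year 9 English",
    pedagogy="peer review of AI-drafted paragraphs",
    context="1:1 laptop classroom",
    objective="evaluate and revise persuasive writing",
)
assert plan.missing_steps() == []  # all five steps are accounted for
```

The point of the sketch is that all five components must be filled in together; a plan that names a technology but no pedagogy or objective fails the TPACK test by construction.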
Druga et al. argue that, because knowledge of how the technology can be deployed is scarce among people outside of computer science, explorations of AI applications in education can be challenging.
While this may still be true, the introduction of a chat-box-like user interface (UI) by OpenAI, followed by other Gen AI applications, means the technology is now accessed in much the same way as the rest of modern computing. By contrast, many of the 50 AI applications the authors examined were not nearly so accessible: they reported that a number lacked adequate user instructions and had problematic usability.
They do point out, however, that "AI education is considered a vital part of computational thinking and there are arguments to include AI literacy in primary and secondary education curricula" (2022, 96).
In unpacking the TPACK framework, the authors refer to the AI4K12 guidelines, which are organised around "the Five Big Ideas in AI" (AI4K12, n.d.). The guidelines define what every student should know about AI and what they should be able to do with it, serving as a "framework to assist standards writers and curriculum developers on AI concepts, essential knowledge, and skills by grade band" (AI4K12 2021).
The website lists the Five Big Ideas in association with "grade bands" indicating a sequence of learning in "progression charts that span K-2, 3-5, 6-8, and 9-12 grade bands."
The Five Big Ideas in AI.
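For quick reference, the Five Big Ideas and the grade bands named above can be captured as a small lookup (a sketch of my own; the idea names are those listed on the AI4K12 site, and the helper function is an assumption for illustration):

```python
# The Five Big Ideas as named by AI4K12, paired with the grade bands its
# progression charts span (K encoded as grade 0).
FIVE_BIG_IDEAS = [
    "Perception",
    "Representation & Reasoning",
    "Learning",
    "Natural Interaction",
    "Societal Impact",
]

GRADE_BANDS = {"K-2": (0, 2), "3-5": (3, 5), "6-8": (6, 8), "9-12": (9, 12)}

def band_for_grade(grade: int) -> str:
    """Return the AI4K12 grade band containing a given grade."""
    for band, (low, high) in GRADE_BANDS.items():
        if low <= grade <= high:
            return band
    raise ValueError(f"no AI4K12 band covers grade {grade}")
```

A curriculum writer could use such a lookup to check which progression-chart band a given year level falls into before selecting activities for each Big Idea.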
In their concluding comments, Druga et al. found that the resources they tested did provide guidance on intended use, but that direction was often hard to find, or required obscure technical knowledge to locate and comprehend.
In terms of content, they found that many of the AI4K12 big ideas were covered, but most did not cover all five; *Social Impact* was the most frequently overlooked, and *Curricular* the most frequently included.
In pedagogy, most resources supported direct instruction and active learning combinations, though few were responsive to learners' prior knowledge, a critical aspect in equity concerns.
In terms of educational context, most resources had some form of device dependency, constraining the learning and IT contexts in which they could be used (2022, 101).
While it is apparent that many of the resources tested in this study did not fully meet educators' needs in terms of the TPACK framework, the study does reveal an emerging opportunity: these guidelines and frameworks can provide a common language that helps educators balance accessibility with suitable learning outcomes, progressing from observable details of objects, to machines learning from data, to humans learning from what the machines have learned.
Works cited
AATA. 2023. "A Submission in Response to the House Standing Committee on Employment, Education and Training Inquiry into AI in Education."
AgentGPT, and Kevin Price. 2024a. "AgentGPT: SCSA and ACARA Specification Access." February 24, 2024. https://agentgpt.reworkd.ai/.
---------. 2024b. "AgentGPT: Email to Parents." March 1, 2024. https://agentgpt.reworkd.ai/.
AI4K12. 2021. "Grade Band Progression Charts." AI4K12. June 29, 2021. https://ai4k12.org/gradeband-progression-charts/.
---------. n.d. "AI4K12." AI4K12. Accessed February 25, 2024. https://ai4k12.org/.
Australian Government. n.d. "Dictionaries: An Indispensable Guide for Writing and Style | Style Manual." Accessed March 2, 2024. https://www.stylemanual.gov.au/blog/dictionaries-indispensable-guide-writing-and-style.
Bacchi, Carol. 2009. *Analysing Policy: What's the Problem Represented to Be?* AU: Pearson Higher Education. https://scholar.google.com/scholar_lookup?hl=en&publication_year=2009&author=C.+Bacchi&title=Analysing+Policy%3A+What%E2%80%99s+the+Problem+Represented+to+Be%3F.
Bala, Kavita, and Alex Colvin. 2023. "CU Committee Report: Generative Artificial Intelligence for Education and Pedagogy | Center for Teaching Innovation." July 18, 2023. https://teaching.cornell.edu/generative-artificial-intelligence/cu-committee-report-generative-artificial-intelligence-education#Section1.
Bedington, Andelyn, Emma F. Halcomb, Heidi A. McKee, Thomas Sargent, and Adler Smith. 2024. "Writing with Generative AI and Human-Machine Teaming: Insights and Recommendations from Faculty and Students." *Computers and Composition* 71 (March): 102833. https://doi.org/10.1016/j.compcom.2024.102833.
Board, The Conference. 2023. "Survey: Majority of US Workers Are Already Using Generative AI Tools--But Company Policies Trail Behind." September 13, 2023. https://www.prnewswire.com/news-releases/survey-majority-of-us-workers-are-already-using-generative-ai-toolsbut-company-policies-trail-behind-301925743.html.
Briggs, Joseph, Devesh Kodnani, Jan Hatzius, and Giovanni Pierdomenico. 2023. "The Potentially Large Effects of Artificial Intelligence on Economic Growth." Global Economics Analysis. USA: Goldman Sachs. https://www.gspublishing.com/content/research/en/reports/2023/03/27/d64e052b-0f6e-45d7-967b-d7be35fabd16.html.
Cain, Sian. 2023. "'This Song Sucks': Nick Cave Responds to ChatGPT Song Written in Style of Nick Cave." *The Guardian*, January 17, 2023, sec. Music. https://www.theguardian.com/music/2023/jan/17/this-song-sucks-nick-cave-responds-to-chatgpt-song-written-in-style-of-nick-cave.
Chahar. 2024. "Education Copilot Contact?" February 15, 2024. https://answers.microsoft.com/en-us/bing/forum/all/education-copilot-contact/6032e889-9a97-46c1-9905-9189b7051811.
ChatGPT. 2024. "Education Specifications Query." February 24, 2024. https://chat.openai.com.
ChatGPT, and Kevin Price. 2024. "ChatGPT Maths Problem Generation." February 12, 2024. https://chat.openai.com.
Commonwealth Parliament, Canberra. 2023. "Inquiry into the Use of Generative Artificial Intelligence in the Australian Education System." Australia. May 24, 2023. https://www.aph.gov.au/Parliamentary_Business/Committees/House/Employment_Education_and_Training/AIineducation.
Crowley, Martin, and Liam Lawson. 2024a. "OpenAI Partners with Common Sense." AI Tool Report. January 30, 2024. https://aitoolreport.beehiiv.com/p/openai-partners-with-common-sense.
---------. 2024b. "Softbank to Rival NVIDIA with $100B AI Project - Kevin@logorythm.Com.Au - Logorythm.Com.Au Mail," February 19, 2024. https://mail.google.com/mail/u/0/#label/AI+Tool+Report/FMfcgzGxRnbMjLddvWmQtfqXpzvvqmJB.
---------. 2024c. "New Memory Feature for ChatGPT." AI Tool Report. February 22, 2024. https://aitoolreport.beehiiv.com/p/new-memory-feature-for-chatgpt.
Curtin University. 2023. "Curtin University Responses for The House Standing Committee on Employment, Education and Training Report on the Use of Generative Artificial Intelligence in the Australian Education System."
Day, Katherine, Renée Otmar, Rose Michael, and Sharon Mullins. 2024. "Can ChatGPT Edit Fiction? 4 Professional Editors Asked AI to Do Their Job -- and It Ruined Their Short Story." The Conversation. February 12, 2024. http://theconversation.com/can-chatgpt-edit-fiction-4-professional-editors-asked-ai-to-do-their-job-and-it-ruined-their-short-story-216631.
Department of Education, Canberra. 2024. "Australian Framework for Generative Artificial Intelligence (AI) in Schools." education.gov.au. January 31, 2024. https://www.education.gov.au/schooling/resources/australian-framework-generative-artificial-intelligence-ai-schools.
Dougiamas, Martin. 2023. "Submission to Standing Committee on Generative AI from Martin Dougiamas, Moodle."
Druga, Stefania, Nancy Otero, and Amy J. Ko. 2022. "The Landscape of Teaching Resources for AI Education." In *Proceedings of the 27th ACM Conference on Innovation and Technology in Computer Science Education Vol. 1*, 96--102. ITiCSE '22. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3502718.3524782.
Dwivedi, Yogesh K., Nir Kshetri, Laurie Hughes, Emma Louise Slade, Anand Jeyaraj, Arpan Kumar Kar, Abdullah M. Baabdullah, et al. 2023. "Opinion Paper: 'So What If ChatGPT Wrote It?' Multidisciplinary Perspectives on Opportunities, Challenges and Implications of Generative Conversational AI for Research, Practice and Policy." *International Journal of Information Management* 71 (August): 102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642.
Education Copilot. n.d. "AI Lesson Planner." Education Copilot. Accessed March 2, 2024. https://educationcopilot.com/.
European Commission. 2022. "DigComp Framework - European Commission." 2022. https://joint-research-centre.ec.europa.eu/digcomp/digcomp-framework_en.
Georgia Tech. 2024. "Developing Student Learning Outcome Statements | Office of Academic Effectiveness." 2024. https://academiceffectiveness.gatech.edu/assessment-toolkit/developing-student-learning-outcome-statements.
Giannini, Stefania. 2023. "Generative AI and the Future of Education - UNESCO Digital Library." July 2023. https://unesdoc.unesco.org/ark:/48223/pf0000385877.
Gordon, Cindy. 2023. "ChatGPT Is The Fastest Growing App In The History Of Web Applications." Forbes. February 2, 2023. https://www.forbes.com/sites/cindygordon/2023/02/02/chatgpt-is-the-fastest-growing-ap-in-the-history-of-web-applications/.
Habib, Sabrina, Thomas Vogel, Xiao Anli, and Evelyn Thorne. 2024. "How Does Generative Artificial Intelligence Impact Student Creativity?" *Journal of Creativity* 34 (1): 100072. https://doi.org/10.1016/j.yjoc.2023.100072.
Hansard. 2024. "Inquiry into the Use of Generative Artificial Intelligence in the Australian Education System." Australia. https://parlinfo.aph.gov.au/parlInfo/search/display/display.w3p;query=Id%3A%22committees%2Fcommrep%2F27676%2F0000%22.
Karvellas. 2024. "'Risks Have Been Identified': Govt Announces AI Expert Group." ABC Listen. February 14, 2024. https://www.abc.net.au/listen/programs/radionational-breakfast/-risks-have-been-identified-govt-announces-ai-expert-group-/103468914.
Kennedy, Gregor. 2024. "Submission to the House Standing Committee Inquiry into the Use of Generative AI in the Education System." University of Melbourne.
Koehler, Matthew, and Punya Mishra. 2009. "What Is Technological Pedagogical Content Knowledge (TPACK)?" *Contemporary Issues in Technology and Teacher Education* 9 (1): 60--70. https://www.learntechlib.org/primary/p/29544/.
Korzynski, Pawel, Grzegorz Mazurek, Pamela Krzypkowska, and Kurasinski. 2023. "Artificial Intelligence Prompt Engineering as a New Digital Competence: Analysis of Generative AI Technologies Such as ChatGPT." *Entrepreneurial Business and Economics Review* 11 (3): 25--38. https://www.ceeol.com/search/article-detail?id=1205908.
Laurillard, Diana. 2012. *Teaching as a Design Science: Building Pedagogical Patterns for Learning and Technology*. New York, NY: Routledge. https://doi.org/10.4324/9780203125083.
Luo, Jiahui (Jess). 2024. "A Critical Review of GenAI Policies in Higher Education Assessment: A Call to Reconsider the 'Originality' of Students' Work." *Assessment & Evaluation in Higher Education* 0 (0): 1--14. https://doi.org/10.1080/02602938.2024.2309963.
Makin. 2023. "Will AI Replace Graphic Designers?" April 14, 2023. https://www.linkedin.com/pulse/ai-replace-graphic-designers-sean-makin.
Markauskaite, Lina, Rebecca Marrone, Oleksandra Poquet, Simon Knight, Roberto Martinez-Maldonado, Sarah Howard, Jo Tondeur, et al. 2022. "Rethinking the Entwinement between Artificial Intelligence and Human Learning: What Capabilities Do Learners Need for a World with AI?" *Computers and Education: Artificial Intelligence* 3 (January): 100056. https://doi.org/10.1016/j.caeai.2022.100056.
Masood, Raisa. 2023. "Guide to ChatGPT for Parents and Caregivers | Common Sense Media." August 30, 2023. https://www.commonsensemedia.org/articles/guide-to-chatgpt-for-parents-and-caregivers.
Microsoft Education Team. 2023. "Expanding Microsoft Copilot Access in Education." Microsoft Education Blog. December 14, 2023. https://educationblog.microsoft.com/en-us/2023/12/expanding-microsoft-copilot-access-in-education.
Morgan Stanley. 2023. "Generative AI Is Set to Shake Up Education." Morgan Stanley. December 22, 2023. https://www.morganstanley.com/ideas/generative-ai-education-outlook.
Open AI. 2024. "What Is ChatGPT? | OpenAI Help Center." February 2024. https://help.openai.com/en/articles/6783457-what-is-chatgpt.
Purtill. 2024. "AI Killed Leanne's Copywriting Business. Now She Earns a Living Teaching How to Use ChatGPT." *ABC News*, February 14, 2024. https://www.abc.net.au/news/science/2024-02-15/freelance-copywriters-artificial-intelligence-ai-automate-work/103413972.
Roberts, Nicholas, Jason Bennett Thatcher, and Richard Klein. 2006. "Mindfulness in the Domain of Information Systems." *DIGIT 2006 Proceedings*, January. https://aisel.aisnet.org/digit2006/2.
SA Dept. of Education. 2024. "Submission to the House Standing Committee Inquiry into the Use of Generative AI in the Education System." Department for Education South Australia.
Selber, Stuart A. 2004. *Multiliteracies for a Digital Age*. SIU Press.
Teachflow. 2023. "Home - Teachflow.AI." May 11, 2023. https://teachflow.ai/.
Tong, Anna. 2024. "OpenAI Launches GPT Store to Capitalize on ChatGPT's Consumer Success | Reuters." January 11, 2024. https://www.reuters.com/technology/openai-launches-gpt-store-capitalize-chatgpts-consumer-success-2024-01-10/.
UTAS. 2018. "How to Write ILOs - Teaching & Learning." Asset Listing. Teaching & Learning - University of Tasmania, Australia. 2018. https://www.teaching-learning.utas.edu.au/ilo/writing.
Vuorikari, Riina, Stefano Kluzer, and Yves Punie. 2022. "DigComp 2.2: The Digital Competence Framework for Citizens - With New Examples of Knowledge, Skills and Attitudes." JRC Publications Repository. March 17, 2022. https://doi.org/10.2760/115376.
Waddell. 2024. "The Right Tool for the Right Job: Understanding Where AI Helps (and Where It Can't)." PerfectIt. February 27, 2024. https://www.perfectit.com/blog/the-right-tool-for-the-right-job-understanding-where-ai-helps-and-where-it-cant.
Wiggers, Kyle. 2024. "OpenAI Partners with Common Sense Media to Collaborate on AI Guidelines." *TechCrunch* (blog). January 29, 2024. https://techcrunch.com/2024/01/29/openai-partners-with-common-sense-media-to-collaborate-on-ai-guidelines/.