THANK YOU FOR SUBSCRIBING
Be first to read the latest tech news, Industry Leader's Insights, and CIO interviews of medium and large enterprises exclusively from Education Technology Insights
The advent of generative AI in higher education presents new challenges to the way we teach digital skills. A recent report written for the UK Council of Professors and Heads of Computing calls on educators to redevelop teaching, assessment and educational practice in light of generative AI's impact on the computing field. This article explores that call to action.
A former colleague tells the story of the early days of writing a computer programme. First he would write the programme out on paper. Next he would convert it manually to machine code using a hard-bound reference manual. He would then punch the resulting binary code into a set of physical cards. The cards were collected and sorted, put in an envelope, addressed, sealed and sent to the university’s one computer. Days, or weeks, later the results were returned, again by mail. If he had made an error, the process would begin again.
My former colleague, a computational linguist, is now retired from academia. Throughout his career, the digital skills he was required to use evolved from sending programmes through the post, to mainframe and then desktop computing, to the global adoption of the internet and smart devices as a means of information dissemination. He kept up with the pace of change, but many fall behind. Is it any wonder, when in the span of a single career the required digital skills can change so vastly?
Our students today do not need to punch holes in cards to interact with technology. Far from it: they sit on the threshold of a new technological era, one as revolutionary as any before it. Generative AI is set to change the way that businesses, and even society, function. Need to look something up? Ask the personal assistant on your smart device. Got a few leftovers in the fridge and don’t know a recipe? No problem. Need a summary of the important tasks in your emails? Instantaneous.
These abilities sound like the stuff of a science fiction film. Indeed, many of these capabilities are pushing the boundaries of what we once thought possible with technology. However, as with any technology, an untrained user will get poor or sub-optimal results. Generative AI chatbots are designed as informational tools, and learning how to plumb the information lake requires careful practice and the crafting of prompts that truly reflect a user’s informational intent. Chatbots are designed to follow instructions, but often these instructions are missed or followed imprecisely. For example, asking a chatbot to respond in a certain number of words or sentences will often lead to an erroneous result. Chatbots are also trained to minimise harmful or toxic information and misuse. This is clearly a good thing, but it may lead to users colliding with safety filters if they do not properly understand their purpose.
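To make the word-count point concrete, here is a minimal sketch (our illustration, not any vendor's API) of a post-hoc check a user might run on a model's reply. Because language models generate text token by token rather than counting words, a "respond in exactly N words" instruction is frequently violated, and verifying the reply yourself is a sensible habit. The function name and parameters are hypothetical.

```python
def meets_word_limit(response: str, requested_words: int, tolerance: int = 0) -> bool:
    """Check whether a chatbot's reply is within `tolerance` words of the
    count the prompt asked for (tolerance=0 means an exact match)."""
    actual = len(response.split())
    return abs(actual - requested_words) <= tolerance


# A chatbot asked for a 10-word summary might plausibly return 14 words:
reply = ("Generative AI is reshaping how businesses and society access "
         "and summarise information every single day.")
print(meets_word_limit(reply, 10))               # strict check fails
print(meets_word_limit(reply, 10, tolerance=5))  # a looser check passes
```

In practice a user would simply re-prompt when the check fails; the point is that the constraint must be verified rather than assumed.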
Beyond these usability issues, there are several less-discussed harms that users must also be aware of. Chatbots are trained on billions of words of freely available text taken from the internet. There is no filter for truth or bias in the training data, and no criticality as the chatbot consumes new articles. The vast majority of text a chatbot is exposed to necessarily comes from Western writers with stable internet connections and enough education to be publishing content online – not to mention suppositions about the age, race or gender of this group. Of course, the creators of a chatbot do not let it work unfiltered, but even drastic limits on a chatbot's output necessarily reflect the inherent biases of the creator and the societal, legal and governmental pressures they face. Second, the technology underlying chatbots is known to produce false information. There is no constraint of truthfulness in the current iteration of language models. Instead, the model prioritises producing language snippets that appear to be appropriate responses – regardless of their factuality. Ask a language model to write your biography and see what it makes up about you. Finally, we have no information on longitudinal harms. In the same way that smoking, thalidomide or social media have been shown to have long-term negative effects, we may find that the use of generative AI has some unknown harm on our mental, physical or social health.
Using a personal assistant powered by generative AI (such as OpenAI's ChatGPT, Google's Bard, Anthropic's Claude, or many more that we could name) is a vital digital skill for new graduates. AI is here to stay. To be digitally inclusive educators, we must teach AI literacy first, foremost and throughout the curriculum. We expect those immersed in a technology environment to come ready-equipped with the digital skills they need to succeed: how to use a keyboard and mouse, how to navigate the internet, how to format a Word document. But what if all these skills are about to become just as obsolete as sealing an envelope full of punch-cards?