Don Welch, Adjunct Professor, New York University

While discussing regulation with my students the other evening, I was asked, “What is the right amount of regulation for AI?” This was in the context of AI's strategic importance militarily, economically, and in cybersecurity. Only easy questions, right?
So, I immediately fell back on Thomas Sowell, who said, “There are no solutions. There are only trade-offs.”
This is an incredibly important question that will have a major impact on the world well into the future. I have heard many pundits say that we must get out in front of AI with regulation before AI advances so far that we cannot control it. I have heard others in the industry say it would be foolish to restrict our efforts when we are in an existential AI race with other nations. The best solution probably lies somewhere in between.
Another question is who should determine whether and how we regulate AI. In the United States, our elected leaders are the ones who will decide. As far as I know, none of them have any real expertise in AI. They all do, however, have very smart staff who are versed in a variety of topics. These staffers will hear from many different sources and perspectives about what should be done. In a perfect world, our elected leaders would come close to the right trade-off when it comes to AI regulation. We can dream, can’t we?
Even when our elected officials come to the right conclusions, those conclusions must be perceived as reasonable by their constituencies. Right now, there are many demands about what they should do, and unfortunately many of those demands are not coming from people with knowledge or experience.
“When lawmakers decide what regulation is appropriate, they will need public support. Instead of a public that gets its understanding of AI from extreme and oversimplified sources, we can help by learning and then helping others to understand.”
The first question is what we mean by AI. How we define it sets the stage for how we might consider regulating it. There is a lot of hype surrounding large language models and other generative AI products. There are also many AI programming techniques that are more mature and already in productive use. I believe the most powerful programs of the future will combine numerous AI techniques, domain-specific models and generative AI interfaces.
This is a complexity that will be very difficult to regulate. I do think the fears of AI gaining abilities beyond our control make better science fiction than a basis for regulation, but we may still want to put guardrails in place.
Trying to regulate the thirst for knowledge is hard not only because nations will take different approaches, but also because it is difficult to stop someone from discovering what no one else knows. How do we do this without halting all the benefits that will come from the majority of AI research and development? Europe has taken a step by regulating what AI can be used for rather than what it is.
Regulation works best in domains that are mature and understood. There is little debate about what constitutes a fire hazard and what is acceptable. Buildings that are fire-safe may be a bit more expensive and could theoretically put us at a competitive disadvantage against nations that do not have similar protections. Yet no one argues that we should relax fire regulations to be more competitive. We are nowhere near that level of understanding or consensus when it comes to AI, so we cannot regulate it in the same way.
Where we have attempted to regulate social media, we have not been very successful. Is this because we do not fully understand it, or because we acted too late? Could we have effectively regulated social media earlier? At what point was there broad understanding of what social media would become, enabling legislators to create effective regulation that the public would accept? This may be a better example of the challenges we will face with AI.
Regulating what AI can be used for, rather than what it does or how it works, may seem like the most practical approach. We put speed limits on roads, but we do not prevent companies from building cars that can go 200 mph. Automakers still build supercars, but most research focuses on safety, comfort and efficiency. It may be possible to forbid AI in certain domains, which would limit research investment in those areas. Of course, this would work only if there were universal agreement on which uses should be restricted. We would need a regulatory body through which nations could create standards everyone would support.
We have done this before.
The internet grew through meetings of networking experts representing vendors, consumers and service providers. The United States and later other governments provided support, but they did not control these governance groups. The internet had the advantage of requiring interoperability, which constrained efforts to move away from consensus. AI does not have any natural constraints like that. I believe the most important factor was that the true experts did the work. Today, the experts are often drowned out by people with strong opinions but limited understanding.
Hopefully we are moving toward better understanding and more effective regulation of AI. Right now, I do not think the best voices are the ones being heard. Many opinions come from people with strong financial interests, while others come from people without a realistic grasp of AI or technology in general. As IT professionals, we must do our best to understand AI and how it fits into our technology landscape. Only then can we help others understand the technology and its impact on society.
I do not mean only that we should try to influence decision-makers, though that is helpful when possible. When lawmakers decide what regulation is appropriate, they will need public support. Instead of a public that gets its understanding of AI from extreme and oversimplified sources, we can help by learning and then helping others to understand.
However, if you would like to share the information in this article, you may use the link below:
www.educationtechnologyinsightseurope.com/cxoinsights/don-welch-nid-3596.html