‘AI has transformed from an interesting tech toy to something that merits a serious policy discussion’
Last year may very well have been “The year of artificial intelligence (AI).” The public takeoff of ChatGPT in early 2023 and the shock over its coherence, expressiveness, and seeming ability to “reason” through complicated problems (including bar exam questions) have caused bouts of moral panic and self-reflection in many fields. In our little corner of legal academia, some teachers are fretting about how AI-generated text should change how we evaluate students. There is also disquiet over how AI tools can either enhance or disrupt the practice of law.
Cracks have begun to appear in AI’s facade as a neutral technology – and this has played out in both corporate drama over OpenAI’s leadership, as well as in high-stakes litigation over copyright to the data that imbues ChatGPT and similar platforms with apparent creativity. AI has transformed from an interesting tech toy to something that merits a serious policy discussion.
As of this writing, several AI-related bills are pending before the 19th Congress: House Bills 7913 and 7983 by Representative Keith Micah Tan, House Bill 7396 by Representative Robert Ace Barbers, and House Bill 9448 by Representative Juan Carlos Atayde. Like most new policy proposals in this field, the bills may have incomplete or overinclusive definitions of AI. We can’t blame Congress for this one. In the absence of clear-cut definitions, AI has been muddled by industry’s overblown promises and the shifting priorities of research grants. The confusion in the literature has seeped into legislative drafting.
HB 7983, for example, defines artificial intelligence as the “simulation of human intelligence in machines that are programmed to think like humans and mimic their actions,” which pegs AI to the sophistication of its programming. Admittedly, many AI systems have been inspired by approximations of human thought processes. At the core of most modern AI, however, are simple algorithms. What gives these systems their emergent, “human-like” behavior is not just the programming but the sheer volume of human-generated training data.
HB 9448, on the other hand, defines AI based on methodologies – machine learning, deep learning, and access to extensive data sets – as well as specific capabilities: perception, goal-driven behavior, and adaptation. Curiously, it also puts automation under the same regulatory regime. The proposal prohibits the use of either AI or automation to displace workers. Automation is defined as “the use of technology and machinery to perform tasks or processes with minimal human intervention” – a category broad enough to cover factory robots and washing machines, computers and the Internet. Some clarification may be needed as to whether businesses will have to account for any employment opportunities lost to anything with an integrated circuit (or even a well-planned set of gears).
The bills confront the most apparent perils of AI: discrimination, unemployment, even death delivered by drones. These are all very dramatic, but may also distract us from other important questions. Many of these systems would not be possible without access to our data: from the genius of our creative strivings, to the detritus of our digital lives – all these have been abstracted into the models that give AI systems their uncanny abilities.
Should giant tech companies, who now use this data with neither meaningful consent nor compensation, be the only ones to reap the economic benefits? Traditional contract and intellectual property law may not have the proper tools to recognize – much less solve – these problems. This leaves a gap where legislative innovation will be needed.
Given the far-reaching importance of AI, is a corporate-led model for its design and operation the only option? Private enterprise is not the only sector capable of shouldering the tremendous cost of building these systems. Congress can fund a public option not only to make the technology accessible, but to ensure that we retain the technological base from which we can shape the future of this technology on our own terms.
Emerson S. Bañez is an Assistant Professor, University of the Philippines College of Law, and an LL.D. Candidate, Kyushu University Graduate School of Law. This article was originally published on Rappler.