I have often wondered which will prove superior in the coming years, Human Intelligence or Artificial Intelligence, and a key question on this subject is which would take control of the other. We have heard experts repeatedly explain how deeply Artificial Intelligence has been integrated into our daily lives, and that any attempt to thwart this integration, if at all remotely possible, would be akin to a reversion to the dark ages.
To give us a glimpse of what we could expect, many movies have stretched our imagination on the extent to which this technology will affect everything and anything. A recent movie quite related to this subject is the Hollywood blockbuster Godzilla vs. Kong. The movie, directed by Adam Wingard, depicts a battle for supremacy between two alpha mythical beings called “Titans”. A group called APEX, in a bid to prove that humans must remain superior, created a giant robot (Mechagodzilla) to battle these titans, and we know how the robot fared in the end.
The boggling questions, then, are these: are we superior? If yes, can we stay superior? If no, how can we come back on top? In all of this, one critical role among key stakeholders that must not be overlooked and cannot be overstressed is that of Regulators.
This piece critically discusses the concepts of intelligence vis-a-vis the meanings of both artificial and human intelligence, the interconnection that underlies their functionality in the general scale of our existence, the grapple for supremacy and what it portends for the future, as well as the indispensable role of regulators in finding a workable balance between them.
What is Intelligence?
Biologically speaking, a narrow and simplistic definition of intelligence is the ability to think, to learn from experience, to solve problems, and to adapt to new situations. Think about our world and the several unique creatures in it. We can agree, with little bias, that humans are the most complexly intelligent creatures currently roaming this planet. We have even figured out a way to measure our intelligence, and that of other animals, just to prove that we are more intelligent. There are significant gaps in intelligence and complexity between the brains of a worm, a chicken, an ape, and an adult human. In recent times, however, we see a potential contender: Artificial Intelligence (AI).
What is Artificial Intelligence (AI)?
AI, like every abstract concept, lacks a uniform definition. In simple terms, however, Investopedia concisely defines it as “the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions.” Let us agree, for our purposes, that “intelligence” is purely about information processing: the faster and more efficiently an entity can process information into a precise result, the more intelligent that entity is.
An undeniable attribute of this human creation called Artificial Intelligence is that it already performs specific operations more efficiently and accurately than humans can, and many are convinced that these specific creations, pieced together, will in time surpass human intelligence. We are simply building the tools that will lay the foundation for the true, “Super Intelligent” AI.
For a comprehensive understanding, it helps to note that AI comes in different forms of varying complexity.
These forms are broadly divided into Strong AI and Weak AI. The software and algorithms created under these categories have in turn led to AI being classified as Reactive Machines, Limited Memory, Theory of Mind, Self-Aware, Artificial Narrow Intelligence (where we currently are), Artificial General Intelligence, and Artificial Superintelligence.
I will not argue whether or not we are on the brink of an “Age of Ultron” scenario; I would, however, agree that we are slowly reaching the level of creating such an intricately mesmerising entity. With Machine Learning at its full potential, we will witness AI on a seamless autopilot. But before we get there, our imaginations can only run wild on the extra opium fed to them by Hollywood thriller movies.
Limiting our experience to what we have today, mostly Artificial Narrow Intelligence, we see our computers increasingly perform simple tasks better than we normally would. Take, for instance, Google Maps: it practically knows every street name, how to get there, and approximately how long the journey will take. Relying on similar information databases are self-driving cars and semi-autonomous drones, technologies that have in some cases proven to make better judgements than their human counterparts.
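The route-finding behind such systems is, at its core, classic shortest-path search over a weighted graph of roads. Below is a minimal sketch of that idea, using Dijkstra's algorithm and a hypothetical road network of my own invention (the place names and travel times are illustrative, not from any real mapping service):

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: find the cheapest travel time from start to goal."""
    queue = [(0, start, [start])]  # (minutes so far, current node, path taken)
    seen = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, cost in graph.get(node, {}).items():
            if neighbour not in seen:
                heapq.heappush(queue, (minutes + cost, neighbour, path + [neighbour]))
    return None  # no route exists

# Hypothetical road network: edge weights are travel times in minutes.
roads = {
    "Home":   {"MainSt": 5, "Bypass": 12},
    "MainSt": {"Market": 7},
    "Bypass": {"Market": 2},
    "Market": {"Office": 4},
}

print(shortest_route(roads, "Home", "Office"))
# → (16, ['Home', 'MainSt', 'Market', 'Office'])
```

The same principle, scaled up to millions of road segments and with weights updated from live traffic data, is what lets a navigation app estimate how long a journey will take before you set off.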
We must not fail to mention AlphaGo, a highlight of AI development history. AlphaGo, a Google DeepMind project, is an AI agent specialised to play Go (an ancient Chinese strategy board game) against human competitors. After initial training on records of human games, the AI beat the human world champion, Ke Jie. The next, even more significant, step was the creation of AlphaGo Zero. This AI taught itself to play the game with no human examples at all and, after just 72 hours of practice against itself, beat its predecessor.
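AlphaGo Zero's defining trick, improving purely through games against itself, can be illustrated in miniature. The sketch below is my own toy construction, not DeepMind's method: a simple take-1-to-3-stones game instead of Go, and plain Monte Carlo value learning instead of deep networks and tree search. One shared value table plays both sides and learns solely from each game's outcome:

```python
import random

def play_and_learn(values, pile=10, episodes=5000, lr=0.1, eps=0.2):
    """Self-play: both sides share one value table and learn from the result.

    Game: players alternately remove 1-3 stones; whoever takes the last stone wins.
    values[n] estimates the chance that the player *to move* wins with n stones left.
    """
    for _ in range(episodes):
        n, history = pile, []
        while n > 0:
            moves = [m for m in (1, 2, 3) if m <= n]
            if random.random() < eps:
                move = random.choice(moves)  # explore
            else:
                # Exploit: leave the opponent in the worst-looking position.
                # Taking everything (m == n) ends the game with a win, value 0 for the opponent.
                move = min(moves, key=lambda m: 0.0 if m == n else values.get(n - m, 0.5))
            history.append(n)
            n -= move
        # The player who just moved took the last stone and won; the win/loss
        # result alternates as we back it up through the game's positions.
        result = 1.0
        for state in reversed(history):
            values[state] = values.get(state, 0.5) + lr * (result - values.get(state, 0.5))
            result = 1.0 - result

random.seed(0)
values = {}
play_and_learn(values)
```

After a few thousand self-played games the table discovers, with no human input, the known theory of this game: positions that are multiples of 4 are losing for the player to move (values[4] drifts low), while the neighbouring positions are winning (values[5] drifts high). AlphaGo Zero applies the same self-play principle at vastly greater scale.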
Another significant example of this type of AI is Sophia. Sophia is a realistic humanoid robot capable of displaying human-like expressions and interacting with people. While a few writers have oversimplified its existence to be like a chatbot with a face, it truly is a wonder to observe. It is even a citizen of Saudi Arabia. I like to call it the marriage between AI and Robotics [maybe a pre-evolved Ultron].
Perhaps the most significant development in this AI form is GPT-3, created by OpenAI, a research business co-founded by Elon Musk, and described as the most important and useful advance in AI in years. This AI can generate an essay in seconds, answer questions, generate music, create designs, build software components, and translate to and from a variety of languages; it knows billions of words and is even capable of coding. Because of all the data GPT-3 has at hand, it requires no further training to fulfil language tasks. Put simply, it is an AI that is better at creating content with a language structure, human or machine language, than anything that has come before it.
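GPT-3's scale cannot be reproduced here, but the statistical principle it scales up, predicting the next word from the words that came before, can be shown with a toy bigram model. The corpus and all names below are my own illustration, not OpenAI code:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count word-to-word transitions: a toy stand-in for what a language model learns."""
    model = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, length=8, seed=42):
    """Extend a prompt by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    output = [start]
    for _ in range(length):
        options = model.get(output[-1])
        if not options:
            break  # dead end: the last word never had a successor in training
        output.append(rng.choice(options))
    return " ".join(output)

corpus = "the law shapes technology and technology shapes the law in turn"
model = train_bigrams(corpus)
print(generate(model, "technology"))
```

GPT-3 replaces this word-count table with 175 billion learned parameters and conditions on far more than one preceding word, but the task is the same: continue the text with what is statistically likely to come next.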
The Risks and Control Mechanisms
Stephen Hawking and Elon Musk shared similar thoughts on Superintelligent AI: it would be the greatest feat man could ever attain, yet it could also be the ultimate cause of our extinction if not properly checked and managed.
As Elon Musk puts it, “AI will be the best or worst thing ever for humanity.” The reassuring thing about the level of AI development we have today is that it is not yet capable of engineering itself and is nowhere near human intelligence. We are still far away from what could evolve into Artificial Super Intelligence; what we have today is only narrow AI, able to process huge amounts of data and provide results on specific tasks with little or no human intervention. Consequently, I believe the technology is in its incubation stage. It is therefore imperative that this technology, along with its infinite possibilities, be properly understood and, in the best way, regulated, so as to prevent the risk of a rogue or runaway Super Intelligence. In other words, control it from its infancy.
On preventing the technology from eventually causing our extinction, several solutions have been theorised. One such solution is to isolate the technology and keep it confined in a “box”, which could mean cutting it off from the internet and limiting its contact with the outside world. The problem this inevitably creates is a drastic reduction in the technology's ability to perform the very functions for which it was created.
Another solution is to design a “theoretical containment algorithm” to ensure that an artificial intelligence “cannot harm people under any circumstances.” However, an analysis of the current computing paradigm has shown that no such single algorithm can be created. A more radical proposal, currently being explored, is to implant the technology in human brains, the idea being that a direct interface would facilitate efficient control of the technology.
Regulations and the Roles of Regulators
Whichever way we decide to solve this problem, one key aspect must not be overlooked: the role of Regulators. We have seen good attempts by various jurisdictions to keep up with regulating emerging trends in technology, and more particularly Artificial Intelligence. Some countries have managed to develop regulations and guidelines aimed at keeping the fast-spreading reach of AI under close watch while still adapting it to meet crucial and ever-expanding human needs as the world dives into the future.
In June 2018, the European Commission set up the independent High-Level Expert Group on AI to provide guidelines on how AI can achieve a high degree of trustworthiness.
In light of these guidelines, the Commission published its own ‘White Paper on Artificial Intelligence – A European approach to excellence and trust’ on 19 February 2020. The White Paper outlined that the most alarming risks posed by AI are those to fundamental rights, data privacy, safety and effective performance, and the identification of liability. The Commission insists that the optimal method of regulation should be risk-based, to ensure that reactions to AI development are proportionate to the risk and do not stifle innovation.
Rather than proposing detailed regulations at that stage, the Commission laid out certain legal requirements which must be captured by any regulatory framework to ensure that AI remains dependable and subject to the values and principles of the European Union.
Following this White Paper, the EU announced that its draft regulation would be released on 21 April 2021.
On 21 April 2021, the European Commission adopted a proposal for a regulation of AI Systems, one which it describes as “the first-ever legal framework on AI.” The AI Regulation will impose significant obligations impacting businesses across many, if not all, sectors of the economy. The AI Regulation will prove controversial, touching off a legislative battle lasting at least until 2022.
The proposed AI Regulation will join other ambitious EU initiatives in the digital sector, such as the Data Governance Act, Digital Services Act and Digital Markets Act, currently working their way through the EU legislative process, as well as the forthcoming Data Act and the ongoing reform of EU antitrust policy. Some of the AI Regulation provisions read across to related provisions in other measures – for example, the practices prohibited for all AI systems (see below) are related to the Digital Services Act measures to combat harmful content on the Internet.
The AI Regulation defines “AI systems” broadly and imposes tailored obligations on actors at different parts of the value chain, from “providers” of AI systems to manufacturers, importers, distributors and users. The AI Regulation imposes especially strict obligations concerning “high-risk AI systems.”
On the other hand, the AI Regulation includes several provisions intended to promote the development and uptake of AI systems in the European Union (EU). The AI Regulation also creates a new regulatory framework, with a European Artificial Intelligence Board overseeing and coordinating enforcement. The AI Regulation envisages a two-year period for application following adoption and publication of the final regulation, meaning that the new requirements could apply as early as 2024.
United States lawmakers and regulators have so far addressed AI mainly in the area of autonomous or self-driving vehicles. The Department of Transportation is investigating what elements must be considered in drafting regulations for the use of such vehicles, including multi-vehicle convoys, and several states have adopted legislation and regulations allowing for the testing of autonomous vehicles. Recent federal legislation has also tasked part of the Department of Defense with crafting policies for the development and deployment of AI systems as they concern national defence.
In the 115th Congress, thirty-nine bills have been introduced that have the phrase “artificial intelligence” in the text of the bill. Four of these bills have been enacted into law. Section 238 of the John S. McCain National Defense Authorization Act for Fiscal Year 2019 directs the Department of Defense to undertake several activities regarding AI.
General discussions on the regulation of AI in the United States have covered topics such as the timeliness of regulating AI; the nature of the federal regulatory framework to govern and promote AI, including which agency should lead and what regulatory and governing powers that agency should have; how to update regulations in the face of rapidly changing technology; and the roles of state governments and courts.
In January 2019, following an Executive Order on Maintaining American Leadership in Artificial Intelligence, the White House’s Office of Science and Technology Policy released a draft Guidance for Regulation of Artificial Intelligence Applications, which includes ten principles for United States agencies when deciding whether and how to regulate AI.
In May 2018, the Trump Administration held a summit on AI technologies for industry, academia and government participants. At this conference, White House officials outlined the following four core goals of the U.S. government concerning AI technologies: (i) maintaining American AI leadership; (ii) supporting American workers; (iii) an increased focus on research and development; and (iv) removing barriers to innovation. In February 2019, President Trump followed up on these previously stated goals by signing an executive order to create the “American AI Initiative”, which, amongst other things, directs heads of federal agencies to budget an “appropriate” amount of funding for AI research and development.
Australia has been, and remains, a leading participant in the ongoing discourse on the regulation of AI, attracting various bodies seeking to comment on the optimal model for AI regulation.
The Australian Human Rights Commission published a White Paper in 2019 calling for comments on a proposed method of regulation. The paper proposes that a separate regulatory body be established, either out of an existing organisation or as an entirely new body, to be known as a ‘Responsible Innovation Organisation’. This body would be saddled with the mandate of guiding Australia's approach to AI and would likely be conferred with enforcement powers to ensure that AI is put to appropriate use, in line with Australian law and certain vital governing principles.
As it stands, however, Australia has no specific regulatory framework for the development and use of AI and therefore relies heavily on existing legislation and standards pending the birth of new ones. Given the depth of its discussion papers and the existence of a current set of AI principles, Australia may not be far off an AI-specific regulatory and legislative instrument.
The UK has likewise taken affirmative steps towards the development of AI, with a concentration on encouraging innovation in the sector.
In February 2020, the Committee on Standards in Public Life published ‘Artificial Intelligence and Public Standards’, commenting on the role of public standards in the AI sector. According to the Committee, the tools and principles currently established in the UK are sufficient to address the risks that come with AI development: it is not a matter of establishing new regulatory bodies and laws, but of clarifying and tweaking current laws and standards so they can be more clearly applied to circumstances involving AI.
The UK government recently established the Centre for Data Ethics and Innovation (CDEI) as a specific statutory body aimed at researching issues of AI and its regulation. The CDEI often publishes papers and reports on the status of AI regulation within the UK on its website.
In July 2017, China’s State Council released the Next Generation Artificial Intelligence Development Plan (the “Development Plan”). The Development Plan sets forth long-term strategic goals for AI development in China, concluding in 2030, and contains “guarantee measures” for promoting AI development, such as developing a regulatory system and strengthening intellectual property protection. Regulation of the ethical and legal aspects of AI development is nascent, but policy ensures state control of Chinese companies and of valuable data, including the storage of data on Chinese users within the country and the mandatory use of the People’s Republic of China’s national standards for AI, big data, cloud computing, and industrial software. The Development Plan comprises three stages, concluding in 2020, 2025, and 2030 respectively, and sets goals for building a regulatory and ethics framework at each stage.
Turning to Nigeria, an in-depth analysis exposes an interaction between Artificial Intelligence and the Copyright Act. AI-related applications will usually run on software, yet it is important to note that there are no express provisions for the protection of software under the Copyright Act in Nigeria.
Copyright protection arguably extends only to the original documented expression of the software, not to its functionality. Protection is limited to the blueprint, whether in audio, written, or any other form the Copyright Act recognises as protectable.
Any sufficiently transformative technology will require new laws. The rapidly advancing area of artificial intelligence will therefore require a new field of law and new regulations. Artificial intelligence is developing fast and needs to be regulated; as it stands, there is little or nothing on the regulation of AI in the world, and in Nigeria particularly. Notably, efforts have been made in the European Union to create a new category of persons, ‘electronic persons’, to cater for artificial intelligence and robots. This is laudable, as it would create a platform for attributing rights and obligations to AI systems and robots. The application of Artificial Intelligence in Nigeria is still at an infant stage, but the deployment of AI in the administration of Intellectual Property and other spheres of law in Nigeria may happen sooner than we think. Such regulation may be centred on the Law of Torts, to handle liabilities arising from the wrongful acts of these machines, as well as fundamental rights, contract law, Data Protection, Anti-Terrorism, and Technology Law, among others.
But like any other technology, AI is a double-edged sword. According to futurist Ray Kurzweil, “if the technological singularity happens, then there won’t be a machine takeover. Instead, we’ll be able to co-exist with AI in a world where machines reinforce human abilities”. Speaking on AI’s existential threat to humanity, Elon Musk said, “AI is a rare case where I think we need to be proactive in regulation than be reactive.” And again: “I am not normally an advocate of regulation and oversight…I think one should generally err on the side of minimizing those things…but this is a case where you have a very serious danger to the public.”
Though we are years away from ASI, researchers predict that the leap from AGI to ASI will be a short one. No one knows when the first sentient computer life form is going to arrive. But as Narrow AI gets increasingly sophisticated and capable, we can begin to envision a future that is driven by both machines and humans; one in which we are much more intelligent, conscious, and self-aware.
In my opinion, one way forward in regulating AI, and ultimately preventing it from causing harm to man, is to proactively create comprehensive regulations, both locally and globally, to control the creation and application of this technology. The key control issue is responsibility: we cannot always count on expert groups to stay responsible with their creations, so we must timeously create rules that balance the interests of the public and of innovators. A globally agreed set of rules must not be overlooked either. An international body in which all countries are represented could be charged with creating such rules, the ultimate goal being to keep us in control of our creation.
Clockwise Software, “GPT-3 Is Only the Beginning: Intro to Language Model’s Capabilities” (Clockwise Software, December 2020) <https://clockwise.software/blog/what-is-gpt-3/> Last accessed 14 April 2021
Bernard Marr, “What is GPT-3 and why is it revolutionising Artificial Intelligence?” (Forbes, October 2020) <https://www.forbes.com/sites/bernardmarr/2020/10/05/what-is-gpt-3-and-why-is-it-revolutionizing-artificial-intelligence/?sh=1fd1c3f7481a> Last accessed 15 April 2021
Norton Rose Fulbright, “EU proposes new Artificial Intelligence Regulation” (Norton Rose Fulbright, April 2021) <https://www.nortonrosefulbright.com/en/knowledge/publications/fdfc4c27/eu-to-propose-new-artificial-intelligence-regulation> Last accessed 23 April 2021
Library of Congress, “Regulation of Artificial Intelligence: The Americas and the Caribbean” (LOC, December 2020) <https://www.loc.gov/law/help/artificial-intelligence/americas.php#_ftn67> Last accessed 17 April 2021
Lee Tiedrich, “AI Update: White House Issues 10 Principles for Artificial Intelligence Regulation” (Inside Tech Media, January 2020) <https://www.insidetechmedia.com/2020/01/14/ai-update-white-house-issues-10-principles-for-artificial-intelligence-regulation> Last accessed 17 April 2021
Nathan Greene, David Higbee, Brett Schlossberg, “AI, Machine Learning & Big Data 2020 | USA” (Global Legal Insights, 2020) <https://www.globallegalinsights.com/practice-areas/ai-machine-learning-and-big-data-laws-and-regulations/usa> Last accessed 17 April 2021
The Library of Congress, “Regulation of Artificial Intelligence: East/South Asia and the Pacific” (LOC, December 2020) <https://www.loc.gov/law/help/artificial-intelligence/asia-pacific.php#_ftn30> Last accessed 17 April 2021
Tanya D. Jajal, “Distinguishing between Narrow AI, General AI and Super AI” (Medium, May 2018) <https://medium.com/mapping-out-2050/distinguishing-between-narrow-ai-general-ai-and-super-ai-a4bc44172e22> Last accessed 17 April 2021