Godfather of artificial intelligence warns against future growth of AI

Generative AI is a quantum leap from conventional artificial intelligence

Science fiction writers have long portrayed doomsday scenarios of machines taking over from humans and ruling the world.

However, when someone like Geoffrey Hinton, who has devoted more than a decade to artificial intelligence research and is known as "The Godfather of AI", resigns from Google reportedly to warn people about the dangers of AI's growth, it is time to sit up and take notice.

Geoffrey Hinton, who once championed the AI technology that led to ChatGPT, suddenly seems to have got cold feet and is fearful of the repercussions of artificial intelligence once machines actually become "intelligent".


He clearly does not see the big multinational companies like Microsoft and Google, each pouring billions of dollars into AI as they compete with one another, producing something beneficial for human beings. Rather, he feels that AI, if not checked now, could lead to serious harm to humanity.

For more than ten years, Hinton was involved in creating the technology that gave rise to today's high-quality artificial intelligence systems.

The bone of contention is "generative artificial intelligence", the technology behind ChatGPT, which has left even its creators scared.

Dr. Hinton has left his job to spread the message about the risks of AI, a warning that will carry great weight coming from him.

Generative AI is a quantum leap from conventional artificial intelligence because it can produce high-quality text, imagery, and audio so authentic that one can be completely convinced it was created by real people.

With the coming of generative adversarial networks (GANs), a type of machine learning algorithm, generative AI got a tremendous boost around 2014.
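The core idea of a GAN is adversarial: a generator tries to produce data that looks real, while a discriminator tries to tell real from fake, and each improves by competing with the other. The following is a deliberately tiny, hypothetical sketch of that dynamic on a single number, not a real GAN with neural networks; all names and update rules here are illustrative assumptions.

```python
import random

random.seed(0)
REAL_MEAN = 5.0  # "real" data lives near this value


def real_sample():
    # stand-in for a sample of real data (e.g. a real photo)
    return random.gauss(REAL_MEAN, 0.1)


class Discriminator:
    """Scores how 'real' a value looks, based on a running
    estimate of where real data lives."""

    def __init__(self):
        self.estimate = 0.0

    def score(self, x):
        return -abs(x - self.estimate)  # higher = looks more real

    def train(self, real_x):
        # move the estimate toward the real data it observes
        self.estimate += 0.5 * (real_x - self.estimate)


class Generator:
    """Produces values, nudged toward whatever the discriminator
    currently scores as 'real' -- i.e. it learns to fool it."""

    def __init__(self):
        self.theta = 0.0

    def sample(self):
        return self.theta

    def train(self, disc):
        # finite-difference step uphill on the discriminator's score
        eps = 0.01
        grad = (disc.score(self.theta + eps)
                - disc.score(self.theta - eps)) / (2 * eps)
        self.theta += 0.1 * grad


disc, gen = Discriminator(), Generator()
for step in range(500):
    disc.train(real_sample())  # discriminator learns what real data looks like
    gen.train(disc)            # generator learns to imitate it

print(round(gen.theta, 1))  # ends up near REAL_MEAN
```

The adversarial loop drives the generator's output toward the real data distribution, which is why GAN outputs can become hard to distinguish from the real thing.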

Even at that time, critics warned that digitally forged images or videos imitating real people could be used for criminal activities.

With generative AI, a machine can generate content, including creative literary works, realistic photos, paintings, and videos, that is almost indistinguishable from human work.

It can automatically create images from a text description or generate text captions by looking at images, tasks once thought to be uniquely human.

The line between what a human can do and what a machine can do (particularly in the so-called higher thinking faculties) is slowly getting blurred.

The fear is that generative AI can be used as a tool for misinformation.

Obviously, like nuclear energy, which can be used for both good and evil, one cannot blame the technology itself. But in view of AI's potential for harm, sufficient safeguards must be built so that machines do not overtake humans.

Dr. Hinton knows that the AI technology they built may be neutral but has the potential to be used for committing crimes or creating fake data that looks real.

He says that while nuclear energy cannot be developed in secret, one cannot know which country or company is secretly developing AI.

Dr. Hinton primarily worked on building neural networks, taking inspiration from how the brain's networks of neurons function. The AI is taught to learn skills on its own by analyzing data, somewhat as the human brain does.

In 2012, Hinton, along with two students, built a neural network that could analyze thousands of photos and teach itself to identify common objects such as flowers, dogs, and cars.
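Learning to classify from labeled examples can be illustrated, in a heavily simplified and hypothetical form, with a single-layer perceptron trained on toy four-pixel "images". This is only a sketch of the general idea; Hinton's actual 2012 system was a deep convolutional network, vastly larger and more sophisticated.

```python
import random

random.seed(1)


def make_image(bright):
    # a toy 4-"pixel" image: bright images have high pixel values
    base = 0.8 if bright else 0.2
    return [base + random.uniform(-0.1, 0.1) for _ in range(4)]


# labeled training data, like the photo collections such networks learn from
data = [(make_image(True), 1) for _ in range(50)] + \
       [(make_image(False), 0) for _ in range(50)]
random.shuffle(data)

weights = [0.0] * 4
bias = 0.0
lr = 0.1


def predict(img):
    s = sum(w * p for w, p in zip(weights, img)) + bias
    return 1 if s > 0 else 0


# perceptron rule: nudge the weights whenever a prediction is wrong,
# so the network "teaches itself" from its mistakes
for _ in range(10):
    for img, label in data:
        err = label - predict(img)
        if err:
            for i in range(4):
                weights[i] += lr * err * img[i]
            bias += lr * err

print(predict(make_image(True)), predict(make_image(False)))
```

After training, the tiny network correctly labels new bright and dark images it has never seen, the same learn-from-examples principle that, scaled up enormously, lets a deep network recognize flowers, dogs, and cars.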

Google spent $44 million to acquire the company started by Dr. Hinton and his two students.

It is their neural network systems that helped build powerful technologies such as ChatGPT and Google Bard, AI chatbots that, if asked a question or given a prompt, will give you an answer.

Dr. Hinton received the Turing Award in 2018 for his work on neural networks.

Dr. Hinton initially liked the idea of neural network machines being able to "understand" and "learn" bits of language and come up with correct answers. But when he saw machines imbibe huge amounts of data, leaving even humans far behind, he understood that they had the potential to be very dangerous.

The fact that machines could far outstrip the amount of "knowledge" a human brain can contain, and may not remain under human control, is a chilling concept.

It is almost like creating a Frankenstein's monster.

It is scary to think that humans, with their biologically evolved brains, could become inferior to a machine.

According to Hinton, these sophisticated AI systems will seriously affect the job market. Who would need a human brain if one had a more advanced "intelligent" machine?

Hinton is not in favour of further scaling up of AI and wants sufficient control and regulation of AI.

A large number of internationally renowned scientists have already called for caution in an open letter. They want regulation of the growth of AI.

Prof. Stephen Hawking, the theoretical physicist and cosmologist, said that "efforts to create thinking machines pose a threat to our very existence."

He feared the consequences of creating something that can match or surpass humans. He said that humans, being limited by slow biological evolution, could not compete with machines and would be superseded.

Elon Musk, CEO of SpaceX, Tesla, and Twitter, has warned that AI is "our biggest existential threat".

Other renowned CEOs and professors warning about the growth of AI include Steve Wozniak, co-founder of Apple; Max Tegmark, professor of physics at the MIT Center for Artificial Intelligence & Fundamental Interactions and president of the Future of Life Institute; and Christof Koch.

The open letter says that AI systems with human-competitive intelligence can pose “profound risks to society and humanity”.

Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?

Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable, the letter said.

The United Nations Educational, Scientific and Cultural Organization (UNESCO) has called on countries to immediately implement its global ethical framework for AI, following pleas by more than a thousand tech workers for a pause in the training of the most powerful artificial intelligence systems.

“The world needs stronger ethical rules for artificial intelligence: this is the challenge of our time. UNESCO’s Recommendation on the ethics of A.I. sets the appropriate normative framework.”

UNESCO has urged that these strategies and regulations be implemented at the national level. It said it guides countries on how to maximize the tool's benefits and reduce its risks, providing policy recommendations alongside values and principles.
