Corvids are a family of birds known for being remarkably clever: they demonstrate self-awareness and solve problems using tools. These traits were once thought to be incredibly rare in the animal kingdom, as only we and a handful of other species can do all of this. However, we would never for a moment mistake a corvid for a human being: we acknowledge that they are intelligent, just not intelligent in the way we are, or not to the same degree.
And the same goes for artificial intelligence, the most talked-about topic in computing and technology today. While some areas, such as generative AI video, have advanced at an incredible pace, nothing produced by ChatGPT, Stable Diffusion, or Copilot makes us feel like we are looking at true human intelligence. Generally classified as weak or narrow AI, these systems are not self-aware and do not solve problems in any general sense; they are essentially massive probability calculators, relying heavily on the data sets used to train them.
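To see what "massive probability calculator" means in practice, here is a deliberately tiny sketch in Python. It is a toy bigram word predictor, not a model of how ChatGPT or any real product actually works, and all names in it are made up; but it illustrates the core idea that such a system only echoes statistical patterns from its training text and has nothing to say about anything outside it.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in the training text.
def train_bigrams(text):
    words = text.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

# Pick the most frequently observed follower. No reasoning, no understanding:
# just a lookup into learned frequencies.
def predict_next(counts, word):
    followers = counts.get(word)
    if not followers:
        return None  # the model knows nothing outside its training data
    return followers.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" -- seen most often after "the"
print(predict_next(model, "dog"))  # None -- "dog" never appeared in training
```

Real systems use vastly larger models and training sets, but the underlying principle (predicting likely outputs from learned statistics) is the same, which is why the article calls them probability calculators rather than thinkers.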
The scientific community has struggled for centuries to pin down exactly what the term human intelligence means, but in general we can say it is the ability to acquire information, or deduce it from various sources, and then use it to plan, create, or solve problems through logical reasoning or abstract thinking. Humans do all of this remarkably well and can apply it in situations where we have no prior experience or knowledge.
Making a computer do all of the same things is the ultimate goal of researchers in the field of artificial general intelligence (AGI): to create a system that can perform cognitive tasks as well as any human, and perhaps even better.
In other words, a computer program that can plan, organize, create, reason, and solve problems, just like a human being.
The magnitude of such a challenge is difficult to comprehend, as an AGI would have to do far more than just crunch numbers. Human intelligence draws on language, culture, emotions, and the physical senses to perceive and analyze problems and produce solutions. The human brain is also fallible and easily swayed, and can make all sorts of mistakes under stress.
But sometimes such conditions lead to remarkable achievements. How many of us have pulled off impressive feats of intelligence in exams, stressful as those experiences can be? You might be thinking by now that all of this is impossible to achieve, and that in reality no one can program a system to apply an understanding of culture, use sight or hearing, or recall a traumatic event in order to solve a problem.
Yet this is a challenge taken up by corporations and academic institutions around the world, with OpenAI, Google DeepMind, the Blue Brain Project, and the recently concluded Human Brain Project among the most prominent examples of work toward AGI. Meanwhile, research continues on the technologies that will eventually become components of an AGI system: deep learning, generative AI, natural language processing, computer vision and hearing, and even robotics.
As for the potential benefits AGI could offer, they are fairly obvious. It could improve both medicine and education, increasing the speed and accuracy of diagnoses and identifying the most effective learning program for a given student. It could make decisions in complex, multifaceted situations, such as economics and politics, that are rational and fair to everyone. It may seem trivial to bring games into such a subject, but imagine, in the long run, playing against an AGI-driven opponent that reacts and plays like a real person, with all the positive traits (camaraderie, humor, sportsmanship) and none of the negative ones.
Not everyone is convinced that AGI is possible. Philosopher John Searle wrote a paper several decades ago arguing that artificial intelligence can take two forms, strong AI and weak AI: the former genuinely is a conscious mind, while the latter only appears to be one. To the end user there would be no visible difference, but the underlying systems are not the same.
The way AGI research is currently progressing puts it somewhere in the middle, though closer to weak than to strong. While this may seem like a matter of semantics, it can also be argued that if a computer merely appears to have human-like intelligence, it cannot be considered truly intelligent and ultimately lacks what we would call a mind.
AI critic Hubert Dreyfus argued that computers can only process symbolically stored information, whereas human subconscious knowledge (the things we know but never think about directly) is not stored symbolically, so a true AGI can never exist.
A full-fledged AGI is also not without risk. At the very least, its widespread adoption in fast-moving sectors would cause significant unemployment. We have already seen cases of companies, both large and small, replacing human workers with generative AI systems. Computers capable of taking on the same tasks as a human brain could potentially replace managers, politicians, triage nurses, teachers, designers, musicians, authors, and more.
Perhaps the biggest fear about AGI concerns safety. Current research in the field is divided on the issue, with some projects openly dismissing it. It could be argued that a brain that is artificial yet extraordinarily intelligent might simply consider many of the crises facing humanity to be insignificant compared to answering questions about life and the universe itself.
Building an AGI for the benefit of humanity is the stated purpose of every major project right now.
Despite the advances of recent years in deep learning and generative AI, we are still far from having a system that computer scientists and philosophers would universally agree constitutes artificial general intelligence. Current AI models are confined to very narrow domains and cannot automatically apply what they have learned to other domains.
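This narrow-domain limitation can be illustrated with a deliberately simple toy in Python. It is not a real AI model (the vocabulary and function names below are invented for illustration), but it shows the same failure mode: a "classifier" whose learned associations come entirely from one domain, movie reviews, has literally no signal to offer when asked about anything else.

```python
# Toy "sentiment model" trained only on movie-review vocabulary.
# +1 means a positive association was learned, -1 a negative one.
MOVIE_SENTIMENT = {"thrilling": +1, "masterpiece": +1, "boring": -1, "flop": -1}

def score(text):
    """Average the learned sentiment of any recognized words."""
    words = text.lower().split()
    known = [MOVIE_SENTIMENT[w] for w in words if w in MOVIE_SENTIMENT]
    if not known:
        return None  # out of domain: the model simply has nothing to say
    return sum(known) / len(known)

print(score("a thrilling masterpiece"))      # 1.0 -- inside its domain
print(score("the soup was salty and thin"))  # None -- no transfer to cooking
```

A human reader instantly knows the second sentence is a mild complaint; the toy model cannot, because nothing it learned carries over. An AGI, by definition, would have to bridge such gaps on its own.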
Generative AI systems do not genuinely express themselves through art, music, and writing: they simply produce output from a given input, based on probability maps built up through training.
Whether the end result is Skynet or HAL 9000, Jarvis or TARS, AGI is still a long way from reality, and may never arrive in our lifetime. That would be a great relief to many people, but a source of frustration to many others, and the race to achieve it is well and truly on. If you have been impressed or repulsed by the current state of generative AI, you haven't seen anything yet.
Nick's love affair with gaming and computers began in 1981, with a Sinclair ZX81 kit and a book on ZX Basic. He eventually became a physics and IT teacher, but in the late 1990s it was time to start writing for a long-defunct UK tech site. He then did the same at Madonion, helping to write the support files for 3DMark and PCMark. After a brief stint at Beyond3D.com, Nick joined Futuremark (MadOnion rebranded) full-time, as editor-in-chief of its gaming and hardware section, YouGamers. After that site closed, he went back to teaching engineering and computer science for many years, but the taste for writing never left him. Four years at TechSpot.com and over a hundred long-form articles on anything and everything later, he openly admits to being far too obsessed with GPUs and open-world RPGs, but who isn't these days?
PC Gamer is part of Future plc, an international media group and leading digital publisher. Visit our corporate site.