Tech Tent: Have we glimpsed the long-term future of AI?

Or is it the latest example of hype running ahead of reality? In this week’s Tech Tent, we take a look at something called GPT-3.

OpenAI is a Californian company founded in 2015 with an ambitious mission: to ensure that artificial general intelligence systems capable of outperforming humans at most jobs benefit all of humanity.

It was set up as a non-profit organisation, funded by charitable donations from Elon Musk among others, but later became a for-profit business, with Microsoft making a $1bn investment.

Now it has launched GPT-3, a product that has had social media, or at least the part of it obsessed with new technology, buzzing with excitement in recent days.

It is an AI, or to be more precise, a machine learning tool that appears to have astonishing capabilities. Essentially, it is a text generator, but users have found it can do anything from writing an essay in the style of Jerome K Jerome to answering medical questions or even writing software code.

So far, it has only been made available to a small number of people who applied to sign up for the private beta, including Michael Tefula. He works for a London-based venture capital fund, but describes himself as a technology enthusiast rather than a developer or computer scientist.

He explains that what makes GPT-3 so powerful is the sheer amount of data it has ingested from the internet compared with an earlier version of the program: “This thing is a beast in terms of how much bigger it is than GPT-2.”

So, what can you do with it?

“It’s down to your creativity with the tool. You can basically tell it what you would like it to do, and it will generate results based on that prompt.”

Michael wanted to see whether the tool could take complex legal documents and translate them into something more understandable.

“I gave it a couple of paragraphs that came from a legal document.

“And I also gave it a couple of examples of what a simplified version of those paragraphs would look like.”

Given that training, GPT-3 could then deliver simplified versions of other documents.
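Roughly speaking, the few-shot prompting Tefula describes would look something like the sketch below, written against the beta-era OpenAI Python completions API (openai.Completion.create with the “davinci” engine). The legal clauses, their simplified versions and the settings here are invented placeholders for illustration, not his actual prompts.

```python
# Sketch of few-shot "simplify this legal text" prompting with GPT-3.
# Assumes the beta-era OpenAI Python library; the example pairs are
# invented placeholders, not Michael Tefula's real documents.
import openai

openai.api_key = "YOUR_API_KEY"  # issued with private beta access

prompt = """Original: The party of the first part shall indemnify and hold
harmless the party of the second part against any and all claims.
Simple: If something goes wrong, the first party will cover the second
party's losses.

Original: This agreement shall be governed by and construed in accordance
with the laws of England and Wales.
Simple: English and Welsh law applies to this contract.

Original: Termination of this agreement shall be without prejudice to any
accrued rights or remedies of either party.
Simple:"""

response = openai.Completion.create(
    engine="davinci",      # the largest GPT-3 model in the beta
    prompt=prompt,
    max_tokens=60,
    temperature=0.3,       # keep the rewrite close to the examples
    stop=["\nOriginal:"],  # stop before it invents another example pair
)

print(response.choices[0].text.strip())
```

The low temperature and the stop sequence keep the model close to the pattern set by the example pairs, so it completes the final “Simple:” line rather than drifting into new legal text of its own.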

He then looked to see whether it could learn his style of writing and generate emails that sounded like him, and again the results were impressive.

Which brings us to one of the problems with this technology. Last year OpenAI, mindful of its mission to protect humanity, said it would not release a full version of GPT-2 because of safety concerns.

In the age of fake news, an algorithm that can generate passages that sound like a prominent politician could be dangerous.

So why, some critics asked, is the more powerful GPT-3 any different? Among them was Facebook’s head of artificial intelligence, Jerome Pesenti, who tweeted: “I don’t see how we went from gpt2 being too big a risk to humanity to be openly released, to gpt3 being ready to tweet, chat to customers or execute shell commands.”

He raised the issue of the algorithm generating toxic language that reflects biases in the data it is fed, noting that when prompted with words such as “Jew” or “women” it produced anti-Semitic or misogynistic tweets.

OpenAI co-founder Sam Altman was keen to allay those fears, tweeting: “We share your concern about bias and safety in language models, and it’s a big part of why we’re starting off with a beta and have a safety review before apps can go live.”

But the other question is whether, risk to humanity or not, GPT-3 is really as clever as it seems. Michael Wooldridge, the computer scientist who heads artificial intelligence research at Oxford University, is sceptical.

He told me that while the technical achievement is impressive, it is clear that GPT-3 does not understand what it is doing, so talk of it rivalling creative human intelligence is premature: “It is an interesting technical advance, and it will be used to do some very interesting things, but it is not a step towards general AI. Human intelligence is much more than a huge amount of data and a big neural network.”

That may well be the case, but I am still looking forward to trying it out. Watch out for evidence of blogs or radio scripts written by a robot in the coming weeks.
