The Air Force has taken a giant step toward creating an artificial intelligence algorithm that would never, in a million years, turn against humanity, unlike the nemesis "Skynet" of the first two Terminator films, which are the only ones that matter.
Recently, an artificial intelligence algorithm called ARTU, perhaps a reference to Star Wars' R2-D2, carried out tasks aboard a U-2 Dragon Lady spy plane that are normally performed by humans, the Air Force announced Wednesday.
"After takeoff, sensor control was positively handed off to ARTU, which then manipulated the sensor based on insights learned from more than a million computer-simulated training iterations," according to a press release from the humans leading the Air Force (for now, anyway). "The pilot and AI successfully teamed up to share the sensor and achieve the mission objectives."
The algorithm handled the aircraft's tactical navigation while an Air Force major with the call sign "Vudu" flew the U-2, which is assigned to the 9th Reconnaissance Wing at Beale Air Force Base, California, the press release said.
In short: man and machine successfully carried out a reconnaissance mission during a simulated missile strike.
"ARTU's primary responsibility was finding enemy launchers while the pilot was on the lookout for threatening aircraft, both sharing the U-2's radar," according to the press release, which seemed not to consider that this could merely be the birth cries of a new form of superintelligence.
Air Force officials praised the success of the experiment, because if science fiction has taught us anything, it’s that when computers start making human decisions, nothing can go wrong.
"Putting artificial intelligence in command of a U.S. military system for the first time ushers in a new era of man-machine teaming and algorithmic competition," Dr. Will Roper, assistant secretary of the Air Force for acquisition, technology, and logistics, said in a statement. "Failing to realize the full potential of AI will mean ceding decision advantage to our adversaries."
In July, Task & Purpose asked Nand Mulchandani, then the acting director of the Pentagon's Joint Artificial Intelligence Center, about the prospect of self-aware military AI. Mulchandani said he had no idea how to engineer an algorithm to become self-aware, and that the Defense Department also had to abide by artificial intelligence policies and laws.
"Here at the DoD, things are taken very seriously in terms of the systems that get deployed, so there is this negative connotation that the DoD is slow, the government is slow, and so on," Mulchandani told reporters at a Pentagon press conference. "Well, it's slow for a reason. There is a maturity of engineering that you have to put into it: there is testing and evaluation that is taken very, very seriously here."
Of course, that’s exactly what Skynet would like us to think.
© 2021 Brookline Media. All rights reserved.