When used in the context of videos and memes, deepfakes can be a source of entertainment.
But they are also a growing concern. In the age of fake news and misinformation, deepfakes (photos, videos, or audio files generated by AI) can be used to confuse and deceive people.
Microsoft, however, has ideas.
On Tuesday, the company announced two new technologies, each of which aims to give readers the tools they need to determine what is genuine and what is not.
The first, Microsoft Video Authenticator, analyzes photos and videos to give “a percentage chance, or confidence score, that the media is artificially manipulated,” according to a blog post on Microsoft’s official website. The tool works by detecting blended elements of an image that the human eye might miss, such as subtle fading, grayscale elements, and blending boundaries.
The second technology, which will be available as a component of the Microsoft Azure cloud service, allows creators to add hashes and digital certificates to photos or videos. These then live in the metadata as the media travels online. A companion reader checks those hashes, providing viewers with information about who originally created the content and whether or not it has been altered since.
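Microsoft has not published implementation details, but the general hash-and-verify flow described above can be sketched roughly as follows. This is a minimal illustration only: the function names are hypothetical, SHA-256 is an assumed hash choice, and the digital-certificate/signing layer a real provenance system would need is omitted.

```python
import hashlib


def fingerprint(media_bytes: bytes) -> str:
    """Compute a content hash of the raw media bytes."""
    return hashlib.sha256(media_bytes).hexdigest()


def publish(media_bytes: bytes) -> dict:
    """At creation time, attach the hash as provenance metadata."""
    return {
        "media": media_bytes,
        "metadata": {"content_hash": fingerprint(media_bytes)},
    }


def verify(package: dict) -> bool:
    """A reader recomputes the hash and compares it with the stored one."""
    return fingerprint(package["media"]) == package["metadata"]["content_hash"]


original = publish(b"example video bytes")
print(verify(original))  # True: the content still matches its recorded hash

# Any alteration to the media after publication breaks the match.
tampered = dict(original, media=b"example video bytes, altered")
print(verify(tampered))  # False
```

The key property is that even a one-byte change to the media produces a completely different hash, so a reader can flag tampering without needing to understand the content itself.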
So, problem solved, right?
Well, not exactly. As Microsoft acknowledges in its blog post, deepfake generation is becoming more sophisticated, which means its AI detection tool will also need to be updated. And the ability to add and check media hashes is only as useful as the number of people who actually do so.
But in the fight to separate fact from fiction online, it's a start.
It should be noted that Microsoft is not the only company working on a solution to the deepfake problem. After banning them in January, Facebook recently detailed its own efforts to detect deepfakes, while Twitter has begun labeling them as “manipulated media.” Reddit has instituted its own ban.