Adobe’s plans for an online content attribution standard could have big implications for misinformation


Adobe is still in the early stages of building a technical solution to address online misinformation at scale, but the project is taking some major steps toward its lofty goal of becoming an industry standard.

The project was first announced last November, and the team has now published a white paper describing its system, known as the Content Authenticity Initiative (CAI). Beyond the new white paper, the next step in developing the system is a proof-of-concept implementation, which Adobe plans to build into Photoshop later this year.

TechCrunch spoke with Andy Parsons, Adobe’s director of the CAI, about the project, which aims to develop a system of “robust content attribution” that embeds attribution data into images and other media from the moment of creation, starting with Adobe’s own industry-standard image editing software.

“We think we can provide users who really want to check a really digestible provenance history for whatever media they’re looking at,” Parsons said.

Adobe highlights the system’s appeal in two ways. First, it gives content creators another robust way to keep their names attached to their work. But even more compelling is the idea that the project could provide a technical solution to image-based misinformation. As we’ve written before, manipulated and even simply out-of-context images play a huge role in misleading people online. A way to verify the authenticity – or “provenance” – of the images and videos we see online is a layer of trust the internet currently lacks.

“Eventually, you can imagine a social feed or a news site that allows you to filter out things that are likely to be inauthentic,” Parsons said. “But the CAI steers clear of making those judgment calls – we’re just about providing that level of transparency and verifiable data.”

Of course, internet users are exposed to plenty of misleading content every day that isn’t visual at all. And even if you know where a piece of media comes from, the claims it makes or the scene it captures can still mislead without editorial context.

The CAI was first announced in partnership with Twitter and The New York Times, and Adobe is now working to establish broader partnerships, including with other social platforms. Generating interest has not been difficult: Parsons describes a “wide range of enthusiasm” for solutions that determine where images and videos come from.

Beyond EXIF

While Adobe’s description might make the CAI sound like a variation on EXIF data – the embedded metadata that records things like which lens a photographer used and GPS information about where a photo was shot – Adobe plans for the CAI to be far more robust.

Parsons said, “Adobe’s own XMP standard, widely used in tools and hardware, is editable, not verifiable, and thus relatively easily broken for what we’re talking about.”

“When we talk about trust, we think about, ‘Is this the data the person asserted when they captured or created an image? Can that data be verified?’ And in the case of traditional metadata, including EXIF, it can’t – any number of tools can alter the bytes and change what EXIF claims. When we’re talking about, you know, verifiable things like identity and provenance and asset history, cryptography is essential.”
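To make that contrast concrete, here is a minimal sketch in Python (using the widely available cryptography library) of how a cryptographically signed attribution manifest differs from editable metadata like EXIF. The field names and signing flow below are hypothetical illustrations of the general principle, not the CAI’s actual design:

```python
# Minimal sketch: signed metadata is tamper-evident, unlike EXIF/XMP.
# Illustrative only; this is not the CAI's actual manifest format or scheme.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The capture device or editing tool holds a private key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Hypothetical attribution claims recorded at capture time.
claims = {"creator": "Jane Doe", "captured_at": "2020-08-04T12:00:00Z", "edits": []}
payload = json.dumps(claims, sort_keys=True).encode()  # deterministic serialization
signature = private_key.sign(payload)

def verify(data: bytes, sig: bytes) -> bool:
    """A viewer later checks the claims against the signature."""
    try:
        public_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(verify(payload, signature))                    # True: claims are intact
tampered = payload.replace(b"Jane Doe", b"Someone")  # any byte-level edit...
print(verify(tampered, signature))                   # False: ...breaks verification
```

Unlike the EXIF case Parsons describes, altering even a single byte of the signed claims causes verification to fail, which is what makes the asserted history checkable rather than merely present.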

The idea is that, over time, such a system becomes ubiquitous – a goal Adobe is uniquely positioned to pursue. In the future, an app like Instagram could have its own “CAI implementation,” allowing the platform to extract data about where an image originated and surface it to the user.

The eventual solution would likely employ hashing techniques, a kind of pixel-level cross-check that acts like a digital fingerprint for an image. Techniques like this are already widely used in systems that detect child exploitation imagery and other illegal content online.
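The article does not specify which hashing scheme the CAI would adopt, but a toy perceptual “average hash” conveys the fingerprint idea. This is a minimal sketch assuming the Pillow imaging library is installed; production systems use far more robust algorithms:

```python
# A toy "average hash" perceptual fingerprint, illustrating pixel-level hashing.
# Real detection systems are far more sophisticated; this is not the CAI's algorithm.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Reduce the image to a small grayscale grid, then record which pixels
    are brighter than the mean as a 64-bit fingerprint."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits; near-duplicate images yield small distances."""
    return bin(a ^ b).count("1")

# Usage: fingerprints of an original and a lightly edited copy stay close,
# so a platform could match a circulating image back to its attributed source.
# h1 = average_hash("original.jpg")
# h2 = average_hash("recompressed_copy.jpg")
# print(hamming_distance(h1, h2))  # a small value indicates a likely match
```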

As Adobe works to bring partners on board to support the CAI standard, it is also creating a website that can read any image’s CAI data, filling the gap until the solution sees widespread adoption.

“… you can drag any asset into the tool and view the data in a very transparent way, and that divorces us from any reliance on a particular platform.”

For photographers, adding this kind of data is opt-in, and to some extent modular. A photographer might embed data about their editing process, for example, while declining to attach their identity in situations where doing so could endanger them.
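As a rough illustration of that modularity, a manifest could simply omit any claim the creator withholds. The schema below is entirely hypothetical, not the CAI’s:

```python
# Sketch of modular, opt-in claims: edit history included, identity omitted.
# Field names are hypothetical and not drawn from the CAI's actual schema.
from typing import Optional

def build_manifest(edits: list[str], identity: Optional[str] = None) -> dict:
    """Assemble only the claims the creator consents to share."""
    manifest = {"edits": edits}
    if identity is not None:  # identity stays out unless explicitly provided
        manifest["identity"] = identity
    return manifest

# A photographer shares their edit history but withholds their name.
print(build_manifest(edits=["crop", "exposure +0.3"]))
```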

Thoughtful implementation is key

While the project’s main applications are geared toward making the internet a better place, the idea of an embedded data layer used to trace an image’s origins does evoke digital rights management (DRM), famous for its use in the entertainment industry. DRM has plenty of industry-friendly upsides, but it is a user-hostile system – in the United States, its enforcement through the Digital Millennium Copyright Act has stifled innovation and threatened countless individuals with outsized legal consequences for reasonable actions.

Since photographers and videographers are often individual content creators, ideally they – and not some kind of corporate gatekeeper – would be the ones to benefit from what the CAI proposes. But those concerns are worth raising about any such system, however nascent. Adobe emphasizes the benefits to individual creatives, though it is worth noting that systems like this can sometimes be misused by corporate interests in unexpected ways.

Whatever comes of it, the spread of misinformation makes it clear that the way we share information online right now is deeply broken. With content regularly divorced from its source going viral on social media, platforms and journalists are often left trying to clean up after the fact. A technical solution, if thoughtfully implemented, could at least begin to match the scale and breadth of the problem.


