Google to defend generative AI users from copyright claims
October 13, 2023 at 11:53
Photo courtesy of Annegret Hilse, Reuters
Article courtesy of Blake Brittain, Reuters
WASHINGTON - Google (GOOGL.O) said on Thursday that it will defend users of generative artificial-intelligence systems in its Google Cloud and Workspace platforms if they are accused of intellectual property violations, joining Microsoft (MSFT.O), Adobe (ADBE.O) and other companies that have made similar pledges.
Major technology companies like Google have been investing heavily in generative AI and racing to incorporate it into their products. Prominent writers, illustrators and other copyright owners have said in several lawsuits that both the use of their work to train the AI systems and the content the systems create violate their rights.
"To our knowledge, Google is the first in the industry to offer a comprehensive, two-pronged approach to indemnity" that specifically covers both types of claims, a company spokesperson said.
Google said its new policy applies to software including its Vertex AI development platform and Duet AI system, which generates text and images in Google Workspace and Cloud programs. The press release did not mention Google's better-known generative AI chatbot, Bard.
The company also said the indemnity does not apply if users "intentionally create or use generated output to infringe the rights of others."
The new wave of lawsuits over generative AI has generally targeted the companies that own the systems, including Google, and not individual end users.
AI defendants have said that using data scraped from the internet to train their systems qualifies as fair use under U.S. copyright law.
EU opens probe into X in test of new tech rules, pressure on TikTok, Meta
Photo courtesy of Dado Ruvic, Reuters
Article courtesy of Reuters
BRUSSELS/DALLAS - EU industry chief Thierry Breton on Thursday opened an investigation into Elon Musk's X, the first under new EU tech rules, after earlier reprimanding the social media platform, TikTok and Meta for not doing enough to tackle the spread of disinformation following Hamas' attack on Israel.
All three platforms have seen a surge of false content about the Israel-Hamas conflict, with disinformation appearing to be most prevalent on X, social media researchers told Reuters.
Breton's move ramps up the pressure on TikTok and Meta to remove illegal and harmful content from their platforms in order to comply with the Digital Services Act (DSA).
The DSA, which entered into force in November last year, forces very large online platforms and search engines to do more to tackle illegal content and risks to public security and protect their services against manipulative techniques.
X CEO Linda Yaccarino said earlier on Thursday the platform had removed hundreds of Hamas-affiliated accounts and taken action to remove or label tens of thousands of pieces of content since the attack, in response to a letter from Breton.
"We have sent @X a formal request for information, a first step in our investigation to determine compliance with the DSA," Breton said in a posting on X.
X declined to comment.
X has until Oct. 18 to provide details on how its crisis response protocol is activated and functions, and until Oct. 31 on other issues.
A move by Musk to cut off free academic access to a data tool earlier this year is making it more challenging to track keywords and hashtags, forcing researchers to manually sift through content to trace disinformation, researchers said.
Since taking over Twitter, Musk has slashed the workforce to roughly 1,500 from 7,500 employees to cut costs, including many who worked on content moderation, identifying and taking down coordinated propaganda campaigns and curating reliable content.
X has also lost two heads of trust and safety and one head of brand safety, who worked to prevent ads from appearing next to harmful content. The company risks fines of as much as 6% of its global turnover if found guilty of DSA violations.
The Frenchman earlier on Thursday gave TikTok CEO Shou Zi Chew 24 hours to step up efforts to remove illegal and harmful content from the short video app.
Breton's warning in a letter to Chew, first seen by Reuters, follows similar letters sent earlier this week to Musk, owner of X, formerly Twitter, and to Meta Platforms' Mark Zuckerberg. Breton subsequently posted the letter on social media platform Bluesky.
Breton said in the letter to TikTok, owned by Chinese conglomerate ByteDance, that he had indications that it was being used to disseminate illegal content and disinformation in the EU after the Hamas attacks.
"Given that your platform is extensively used by children and teenagers, you have a particular obligation to protect them from violent content depicting hostage taking and other graphic videos which are reportedly widely circulating on your platform without appropriate safeguards," he said.