Friday, November 8th

    Google experts warn that AI may distort reality, and AI Overviews repel mobile users

    A new research report warns that generative artificial intelligence (AI) could breed distrust of all information by churning out low-quality, spammy, and malicious synthetic content.

    As debate continues over whether generative artificial intelligence could harm humanity, a new research report warns that AI's churning out of "low-quality, spammy and malicious synthetic content" could breed distrust of all information. This is not surprising. AI-generated "errors" can also lead to fatigue, as we humans must constantly fact-check what we read, see and hear online (the alternative, no fact-checking, is even worse). "This contamination of publicly available data with AI-generated content can potentially impede information seeking and distort collective understanding of sociopolitical reality or scientific consensus," the six researchers say in their June paper, Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data. "We are already seeing cases of liar's dividend, where high profile individuals are able to explain away unfavorable evidence as AI-generated, shifting the burden of proof in costly and inefficient ways."

    Distorting reality? Sowing distrust? Sadly, that isn't surprising either: misinformation and disinformation have long been part of our media diet, even before AI made it possible to generate virtually any text, image or video. For example, a third of Americans still believe the 2020 presidential election was rigged (it wasn't).

    What's surprising about this new study is who wrote it: the 29-page report was co-authored by researchers from across Google, including the DeepMind artificial intelligence research lab, the philanthropic arm Google.org, and Jigsaw, a technology incubator focused on security and societal threats. Billions of people use Google's search engine and other services every day, and Google is one of the biggest tech companies investing heavily in the future of artificial intelligence. It's good that these researchers pointed to real-world examples of how AI can be misused and reminded us all that we still don't know much about the potential risks as the technology continues to develop at a rapid pace. If you don't have time to read or scan the report, at least look at the introduction and the top three findings. First, most misuse is aimed at deceiving people, manipulating their opinions, or making money. "Manipulation of human likeness and falsification of evidence underlie the most prevalent tactics in real-world cases of misuse. Most of these were deployed with a discernible intent to influence public opinion, enable scam or fraudulent activities, or to generate profit," the researchers wrote.

    Second, you don't have to be a tech whiz to use these tools for evil. "The majority of reported cases of misuse do not consist of technologically sophisticated uses of GenAI systems or attacks on them. Instead, we are predominantly seeing an exploitation of easily accessible GenAI capabilities requiring minimal technical expertise."

    Third - and I find this the most worrying - many instances of misuse are neither overtly malicious nor in explicit violation of these tools' content policies or terms of service, which puts the onus on the humans who build these tools and the guardrails they put in place. This brings me to what I believe to be a fundamental principle of technology development: just because you can do something with a technology doesn't mean you should.

    Case in point: Google's AI Overviews, which the company presented at its developer conference in May. The feature uses artificial intelligence to automatically generate answers to certain Google search queries by summarizing or citing legitimate, reliable sources from around the web. Unfortunately, the launch of AI Overviews didn't go as planned, with some users reporting that the system suggested adding glue to pizza sauce to help the cheese stick to the crust. That prompted Google to say in late May that it would scale back its use of AI Overviews after seeing that "some odd, inaccurate or unhelpful AI Overviews certainly did show up."

    But overall, Google has defended AI Overviews — even as publishers have argued that it can undercut their ability to fund editorial work — saying the feature is intended to give users helpful information and allow Google "to do the Googling for you."

    Well, one study suggests that users may not find AI Overviews all that helpful. The release of AI Overviews "coincided with a significant drop in mobile searches," according to a study by search industry expert Rand Fishkin that was reported on by Search Engine Journal. The study examined Google searches by users in the United States and the European Union. Search Engine Journal reports that Fishkin saw a "slight increase" in desktop searches in May but a "significant decline" in mobile searches, even though mobile devices account for nearly two-thirds of all Google queries. "This finding suggests that users may have been less inclined to search on their mobile devices when confronted with AI-generated summaries," the publication noted.

    But that doesn't mean AI Overviews is a failure. Search Engine Journal noted that users who did "engage" with the AI summaries still clicked on results at a similar or higher rate than they had on other search results. As in all things, we'll have to wait and see how Google's all-in approach to AI develops. Let's hope Google CEO Sundar Pichai and his team have read the GenAI misuse report and are already adjusting their plans going forward based on what their researchers found.

