Thursday, November 7th

    The biggest risks of using gen AI like ChatGPT, Google Gemini, Microsoft Copilot, and Apple Intelligence in your private life

    Generative artificial intelligence (AI) tools are increasingly being used by consumers for personal and business purposes, but many overlook the potential privacy implications.

    Many consumers are enthusiastic about generative artificial intelligence and are using the new tools for all kinds of personal and business tasks. But many overlook the potential privacy ramifications, which can be significant. From OpenAI's ChatGPT to Google Gemini to Microsoft Copilot to the new Apple Intelligence, consumer AI tools are easy to access and use. However, the tools carry different privacy policies around the use and retention of user data. In many cases, consumers are not aware of how their data is, or could be, used.

    This is where being an informed consumer becomes exceedingly important. Exactly what you can control differs from tool to tool, said Jodi Daniels, managing director and privacy consultant at Red Clover Advisors, which advises companies on privacy issues. "There isn't a one-size-fits-all opt-out across all the tools," Daniels said.

    The proliferation of AI tools, and their integration into so much of what consumers do on their PCs and smartphones, makes these questions even more pressing. A few months ago, for example, Microsoft made good on its promise and released its first Surface PCs with a dedicated Copilot button on the keyboard for quick access to the chatbot. Apple, for its part, outlined its AI vision last month, which centers on several smaller models that run on Apple's devices and chips. Company executives have spoken publicly about the importance the company places on privacy, which can be a challenge with AI models.

    Here are several ways consumers can protect their privacy in the new age of generative AI.

    Ask the privacy questions AI tools must be able to answer

    Before choosing a tool, consumers should read the associated privacy policies carefully. How is your information used, and how might it be used? Is there an option to turn off data sharing? Is there a way to limit what data is used and how long it is retained? Can the data be deleted? Do users have to jump through hoops to find the opt-out settings?

    Privacy experts say that if you can't easily answer these questions, or can't find the answers in a service provider's privacy policy, that should raise a red flag. "A tool that cares about privacy is going to tell you," Daniels said.

    And if it doesn't, "you have to take ownership of it," Daniels added. "You can't just assume the company is going to do the right thing. Every company has different values, and every company makes money differently."

    She cited the example of Grammarly, an editing tool used by many consumers and businesses, where the company clearly explains how data is used in several places on its site.

    Keep sensitive data away from large language models

    Some people are very trusting when it comes to putting sensitive data into generative AI models, but Andrew Frost Moroz, founder of privacy-focused Aloha Browser, advises against entering any kind of sensitive data into them, because it isn't really clear how the data will be used, or potentially misused.

    That applies to all types of information people might enter, whether personal or work-related. Many companies have expressed concern about employees using the models for work, because employees may not consider how the model could use that information for training purposes. If you enter a confidential document, the AI model now has access to it, which can raise all sorts of problems. That's why many companies will only approve custom versions of gen AI tools that keep a firewall between proprietary information and large language models.
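    Conceptually, such a firewall can be as simple as a filter that redacts sensitive strings before a prompt ever leaves the company's systems. Below is a minimal, hypothetical Python sketch of the idea; the patterns and the redact() helper are illustrative assumptions, not any vendor's actual tooling, and real enterprise filters are far more robust.

    ```python
    # Hypothetical pre-LLM "firewall": scrub obviously sensitive patterns
    # from a prompt before it is sent to a third-party model.
    import re

    SENSITIVE_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def redact(text: str) -> str:
        """Replace each match of a sensitive pattern with a labeled placeholder."""
        for label, pattern in SENSITIVE_PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    prompt = "Email jane.doe@example.com (SSN 123-45-6789) about the merger."
    print(redact(prompt))
    # Email [EMAIL REDACTED] (SSN [SSN REDACTED]) about the merger.
    ```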

    Individuals should also exercise caution, and shouldn't use AI models for anything that isn't public or that you wouldn't want shared with others in any way, Frost Moroz said. Awareness of how you're using AI is important. If you're using it to summarize a Wikipedia article, that might not be an issue. But if you're using it to compose a personal legal document, for example, that isn't advisable. Or say you have an image of a document and want to copy out a particular passage. You can ask the AI to read the text so you can copy it, but the AI model will then know the contents of the document, and consumers need to keep that in mind, he said.
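    One privacy-preserving alternative in that scenario is to extract the text locally rather than uploading the image to a cloud AI model. Below is a minimal sketch, assuming the open-source Tesseract engine is installed along with the pytesseract and Pillow Python packages; "contract.png" is a hypothetical file name.

    ```python
    # Extract text from a document image entirely on the local machine,
    # so the document's contents never reach a cloud AI model.
    # Assumes Tesseract OCR is installed, plus: pip install pytesseract pillow
    from PIL import Image
    import pytesseract

    text = pytesseract.image_to_string(Image.open("contract.png"))  # hypothetical file
    print(text)
    ```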

    Opt out with OpenAI, Google

    Each gen AI tool has its own privacy policy and may offer opt-out options. Gemini, for example, lets users set a retention period and delete certain data, among other activity controls. Users can opt out of having their data used for model training by ChatGPT. To do so, navigate to the profile icon in the bottom-left corner of the page and select "Data Controls" under the "Settings" heading, then disable the feature that says "Improve the model for everyone." While this is disabled, new conversations won't be used to train ChatGPT's models, according to an FAQ on OpenAI's website.
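    For what it's worth, OpenAI has also said that data sent through its developer API, unlike consumer ChatGPT conversations, isn't used to train its models by default. Below is a minimal sketch of API usage with the official openai Python package; the model name and prompt are placeholders, not a recommendation.

    ```python
    # Querying OpenAI through the developer API rather than the ChatGPT app.
    # OpenAI's stated policy is that API inputs and outputs are not used
    # for model training by default. Assumes OPENAI_API_KEY is set and
    # the package is installed: pip install openai
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": "Summarize this public article: ..."}],
    )
    print(response.choices[0].message.content)
    ```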

    There's no real upside for consumers in allowing gen AI to train on their data, and there are risks that are still being studied, said Jacob Hoffman-Andrews, a senior staff technologist at the Electronic Frontier Foundation, an international nonprofit digital rights group. If personal information is improperly posted online, consumers can have it removed, and it will then disappear from search engines. But getting an AI model to unlearn data is a whole different matter, he said. Once data is in an AI model, there may be ways to limit the use of certain information, but it's not foolproof, and how to do so effectively is an area of active research.

    Opt in, such as with Microsoft Copilot, only for good reasons

    Companies are incorporating gen AI into the everyday tools people use in their personal and professional lives. Copilot for Microsoft 365, for example, works within Word, Excel, and PowerPoint to help users with tasks such as analysis, idea generation, and organization.

    For those tools, Microsoft has said it won't share consumer data with third parties without permission, and it won't use customer data to train Copilot or its AI features without consent. Users can opt in, if they choose, by signing into the Power Platform admin center, selecting Settings, then Tenant settings, turning on data sharing for Dynamics 365 Copilot and Power Platform Copilot AI features, and saving the change. Advantages of opting in include the ability to make existing features more effective. The downside, however, is that consumers lose some control over how their data is used, an important consideration, privacy experts say. The good news is that consumers who have opted in with Microsoft can withdraw their consent at any time: go back to the Tenant settings page under Settings in the Power Platform admin center and turn off data sharing for Dynamics 365 Copilot and Power Platform Copilot AI features.

    Set a short retention period when using gen AI for search

    Consumers might not think twice before seeking out information using AI, treating it like a search engine to generate information and ideas. However, even searching for certain types of information using gen AI can be intrusive to a person's privacy, so there are best practices when using the tools for that purpose, too. If possible, set a short retention period for the gen AI tool, and delete chats after you've gotten the information you need, so queries aren't stored any longer than necessary, privacy experts advise.
