The meteoric rise of generative artificial intelligence has created the perfect technology sensation with user-centric products such as OpenAI’s ChatGPT, Dall-E, and Lensa. However, the boom in user-friendly AI has come in conjunction with users seemingly ignoring or being left in the dark about the privacy risks these projects pose.

Amidst all the hype, however, international governments and major tech figures are starting to sound the alarm. Citing privacy and security concerns, Italy has just temporarily banned ChatGPT, potentially inspiring a similar block in Germany. In the private sector, hundreds of AI researchers and technology leaders, including Elon Musk and Steve Wozniak, signed an open letter insisting on a six-month moratorium on AI development beyond GPT-4.

The relatively swift action to try to rein in the irresponsible development of AI is commendable, but the wider array of threats AI poses to privacy and data security goes beyond a single model or developer. While no one wants to rain on the parade of AI's paradigm-shifting capabilities, it is necessary to address its shortcomings head-on to avoid catastrophic consequences.

The AI privacy storm

While it would be easy to say that OpenAI and other big-tech AI projects are solely responsible for AI’s privacy problem, the topic was discussed long before it entered the mainstream. Data privacy scandals in AI happened before this crackdown on ChatGPT—mostly out of the public eye.

Just last year, Clearview AI, an artificial intelligence company reportedly used by thousands of government and law enforcement agencies with limited public knowledge, was banned from selling facial recognition technology to private businesses in the United States. Clearview was also fined $9.4 million in the UK for an illegal facial recognition database. Who says consumer-focused visual AI projects like Midjourney or others can’t be used for similar purposes?

The problem is, they already have been. The spate of recent deepfake pornography and fake news scandals created through consumer-grade AI products has only increased the urgency to protect users from the misuse of AI. Such misuse takes the hypothetical concept of digital mimicry and makes it a very real threat to ordinary people and influential public figures alike.

Related: Elizabeth Warren wants the police at your door in 2024

Generative AI models fundamentally rely on new and existing data to build and enhance their capabilities and usability. It’s one of the reasons why ChatGPT is so impressive. That being said, a model that relies on new data inputs needs to get that data somewhere, and part of that will inevitably include the personal data of the people using it. And this amount of data can be easily abused if centralized entities, governments or hackers get their hands on it.

So what can companies and users working with these products do now, given the limited scope of comprehensive regulation and conflicting views on AI development?

What companies and users can do

The fact that governments and other developers are raising flags around AI now actually indicates progress from the glacial pace of regulation for Web2 applications and crypto. But raising flags is not the same as oversight, so maintaining a sense of urgency without being alarmist is critical to creating effective regulations before it’s too late.

Italy’s ChatGPT ban is not the first strike governments have taken against AI. The EU and Brazil are passing laws to sanction certain types of AI use and development. Likewise, generative AI’s potential to commit data breaches prompted early legislative action by the Canadian government.

The problem of AI data breaches is serious enough that OpenAI itself has had to intervene. If you opened ChatGPT a few weeks ago, you may have noticed that the chat history feature was disabled. OpenAI temporarily disabled the feature because of a serious privacy flaw that exposed users’ chat histories to strangers and compromised payment information.

Related: Don’t be surprised if the AI tries to sabotage your crypto

While OpenAI has effectively put out this fire, it can be hard to trust programs spearheaded by Web2 giants that are cutting their ethical AI teams to preemptively do the right thing.

At an industry-wide level, an AI development strategy that focuses more on federated machine learning would also improve data privacy. Federated learning is a collaborative AI technique that trains models without any single party having access to the raw data: multiple independent sources each train the shared algorithm on their own local datasets and contribute only model updates.
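
As a rough illustration, here is a minimal sketch of the federated averaging idea using a simple linear model in plain NumPy. Everything in it is hypothetical, invented for this example rather than taken from any particular framework; production systems rely on dedicated tooling, and typically secure aggregation, on top of this basic loop.

```python
# A minimal sketch of the federated averaging (FedAvg) idea using plain
# NumPy and a linear model. Illustrative only: real deployments use
# frameworks such as TensorFlow Federated or Flower, plus secure
# aggregation, rather than a hand-rolled loop like this.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    # Train on one client's private data. Only the updated weights
    # leave the device; X and y never do.
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

# Simulate three clients, each holding its own private dataset.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

# The server coordinates rounds but only ever sees model weights.
global_w = np.zeros(2)
for _ in range(20):
    local_weights = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_weights, axis=0)  # average the updates

print("learned weights:", global_w)  # approaches [2.0, -1.0]
```

The privacy-relevant property is in the training loop: each client’s raw dataset never leaves its own environment, and the coordinating server only ever sees averaged model weights.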

As far as users are concerned, becoming an AI Luddite and giving up these programs entirely is pointless, and will likely soon be impossible anyway. But there are ways to be smarter about which generative AI you give access to in your everyday life. For companies and small businesses that incorporate AI products into their operations, it’s even more important to be careful about what data you feed into the algorithm.
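
For instance, one low-effort safeguard is to scrub obvious personal data from text before it ever reaches a third-party model. The sketch below is a minimal, assumption-laden illustration using simple regular expressions; the patterns and placeholder tags are invented for this example, and real pipelines should use purpose-built PII-detection tools rather than regexes alone.

```python
# A minimal, illustrative sketch of scrubbing obvious PII from text
# before it is sent to a third-party generative AI API. Real pipelines
# should use dedicated tools (e.g., Microsoft Presidio); regexes alone
# will miss plenty.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    # Replace matched PII with placeholder tags before the prompt
    # leaves your infrastructure.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize this ticket from jane.doe@example.com, tel +1 (555) 010-2030."
print(redact(prompt))
# -> "Summarize this ticket from [EMAIL], tel [PHONE]."
```

A filter like this runs locally, so even when the downstream AI product retains prompts for training, the retained text contains placeholders rather than customer data.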

The evergreen saying that when a product is free, your personal information is the product still applies to AI. Keeping this in mind can help you rethink which AI projects you spend your time on and what you actually use them for. If you’ve participated in every single social media trend that involves uploading your own photos to a shady AI-powered website, consider sitting the next one out.

ChatGPT reached 100 million users just two months after its launch, a staggering number that clearly indicates our digital future will be powered by AI. However, despite these numbers, AI is not yet ubiquitous. Regulators and companies should use this to their advantage and proactively create frameworks for responsible and safe AI development, rather than scrambling to rein in projects once they get too big to control. Currently, the development of generative AI is not balanced between protection and progress, but there is still time to find the right path to ensure that users’ information and privacy remain at the forefront.

Ryan Patterson is the president of Unplugged. Prior to taking the reins at Unplugged, he served as founder, president and CEO of IST Research from 2008 to 2020. He left IST Research with the sale of the company in September 2020. He also served two tours with the Defense Advanced Research Projects Agency and 12 years in the United States Marine Corps.

Erik Prince is an entrepreneur, philanthropist and Navy SEAL veteran with business interests in Europe, Africa, the Middle East and North America. He served as founder and chairman of Frontier Resource Group and as founder of Blackwater USA, a provider of global security, training and logistics solutions to the U.S. government and others, prior to selling the company in 2010.

This article is for general informational purposes and is not intended and should not be construed as legal or investment advice. The views, thoughts and opinions expressed herein are solely those of the author and do not necessarily reflect or represent the views and opinions of Cointelegraph.


