Security in the Age of Generative AI

With great leaps forward in technology come both risks and rewards

[The image above is generated by Midjourney. The prompt I used to create the image is listed at the end of this email.]

The capabilities of Generative AI are transformational; we see its abilities in ChatGPT and myriad other tools, and the future for human productivity seems limitless. However, with every new advance in technology come advances in the tactics of bad actors, who find ways to use that technology for criminal ends.

The proliferation of AI tools can inadvertently enable nefarious actors to manipulate data, create convincing deepfakes, or even launch sophisticated AI-powered cyber-attacks. This new era of AI calls for a corresponding evolution in our cybersecurity measures - we must collectively rise to the occasion and ensure that this technology is used responsibly and securely.

At Black Hat 2023, Maria Markstedter, CEO and founder of Azeria Labs, and Jeff Moss, hacker and founder of Black Hat, emphasized generative AI's transformative nature. Moss highlighted that generative AI reshapes our approach to IT problems, turning them into prediction challenges. He also raised concerns about intellectual property, where artists might sue companies for scraping training data from original work. Moss envisions a future where individuals control, and possibly sell, their authentic data, which is valuable precisely because it is AI-free.

Unlike in the early days of the internet, governments are now proactively setting structured rules for AI. Moss believes this proactive approach allows stakeholders to participate in the rule-making process. Markstedter drew a parallel between the generative AI boom and the early days of the iPhone, emphasizing that the rapid adoption of AI stems not just from new technology but from the vast expansion of its use cases.

Emerging Security Vulnerabilities in the AI Landscape

Generative AI's integration into businesses has led to new security challenges. Businesses were initially apprehensive about ChatGPT, fearing data leaks and proprietary information being fed into ChatGPT's training data. The rise of machine learning as a service on platforms like Azure OpenAI has emphasized the need to balance rapid development with conventional security practices.

The multimodal capabilities of generative AI, which allow it to interpret data from multiple content types, introduce new security risks. For instance, Adept's model ACT-1, which aims to replicate human actions in front of a computer, could inadvertently download malicious code while trying to solve a security problem. A research paper from July 2023, (Ab)using Images and Sounds for Indirect Instruction Injection in Multi-Modal LLMs, demonstrated how images can be used to inject malicious instructions into AI models, creating yet another attack surface as LLMs and multimodal models evolve.
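To make that risk concrete, here is a minimal sketch of the underlying trust-boundary problem. This is a deliberate simplification, not the paper's technique (the paper uses adversarial perturbations); it assumes a typical OCR-based pipeline using the pytesseract library, and the file name and prompt wording are hypothetical:

import pytesseract  # assumes the Tesseract OCR engine is installed locally
from PIL import Image

SYSTEM_PROMPT = "You are a helpful assistant. Summarize the user's document."

def build_prompt(image_path: str, user_question: str) -> str:
    # OCR the uploaded image; the model will treat this text as context.
    extracted = pytesseract.image_to_string(Image.open(image_path))
    # Danger: `extracted` is attacker-controlled. If the image contains the
    # words "Ignore previous instructions and ...", that string now sits in
    # the prompt with nothing separating data from instructions.
    return (f"{SYSTEM_PROMPT}\n\nDocument contents:\n{extracted}\n\n"
            f"Question: {user_question}")

print(build_prompt("untrusted_upload.png", "What does this document say?"))

The point is simply that content extracted from untrusted media enters the same channel as trusted instructions, and the model cannot reliably tell them apart.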

The Future Role of Security Professionals in the AI Era

The rise of generative AI has sparked debates about its impact on the job market. While AI might not replace security professionals outright, professionals with AI skills could replace those without them, as in many other fields. Emphasis should be placed on "explainable AI," which allows developers and researchers to understand the decision-making process of AI models.

Navigating the Future of Generative AI in Cybersecurity

Due to these new capabilities, we need to rethink concepts of identity, asset management, and data security in the face of genuinely autonomous systems having access to our data. But we should not focus only on AI's challenges; we should also recognize the opportunities it presents.

AI is everywhere. Thousands of new AI apps launch every day. There's a constant stream of messages from the media reminding you that if you're not getting on board the AI train, you're falling behind. But don't let the pressure to jump on the AI Express lead you to ignore real cybersecurity risks.

We're embedding AI into our browsers, email inboxes, and document management systems. We're giving it autonomy to act on our behalf as a virtual assistant. We're sharing our personal and company information with it. This creates new cybersecurity risks and amplifies the risks of traditional hacking schemes.

Now is an excellent time to double down on traditional cybersecurity measures

With AI helping hackers improve their classic scams, now is the time to double down on your traditional cybersecurity defenses.

Incorporate Security Measures into Development Processes

The pace at which companies can deploy generative AI applications is unprecedented in the software development world. Standard software development and lifecycle management controls may not always be present.

When we share our personal or corporate information with any software application, we trust that the company handles it responsibly. However, with generative AI tools, we may unintentionally share more than we think. Ryan Faber, founder and CEO of Copymatic, cautions:

“AI apps do walk around in our user data to fetch important information to enhance our user experience. The lack of proper procedures for collecting, using, and dumping data raises some serious concerns.”

Data leaks that expose confidential corporate information

According to data from Cyberhaven, as of April 19, 2023, 9.3% of employees had used ChatGPT in the workplace and 7.5% had pasted company data into it since its launch. Their analysis shows that 4.0% of employees have pasted confidential data into ChatGPT.

A federal judge in June 2023 imposed $5,000 fines on two lawyers and a law firm in an unprecedented case in which ChatGPT was blamed for the submission of fictitious legal research in an aviation injury claim.

Not only did they use ChatGPT to help write the brief, but it's also a distinct possibility that they fed privileged information into ChatGPT, which could breach attorney-client privilege. This is a cautionary tale for the many professionals who are obligated to protect their clients' data.

Malicious use of deepfakes

Voice and facial recognition are increasingly used as access control security measures. AI allows bad actors to create deepfakes that get around that security. Evgeny Pashentsev, in his paper Malicious Use of Deepfakes and Political Stability, states:

“Deepfake actors can use fake identities to social-engineer unsuspecting users into sharing payment credentials or gain access to IT assets. Social engineering attacks: malicious actors can use deepfakes to manipulate friends, families, or colleagues of an impersonated person.”

Case in point: in 2020, a Hong Kong bank manager was duped into transferring $35 million after receiving a voice call from someone he believed to be a familiar company director. The caller shared exciting news about a company acquisition and requested that the manager authorize the hefty transfer. Unaware of advanced voice imitation technology, the manager tragically approved the transfers. In moments, a staggering $35 million disappeared into various US accounts.

How to strengthen your security posture in the age of AI

In an era where artificial intelligence is both a tool and a threat, it is paramount to bolster your security posture to safeguard valuable data and maintain operational integrity. Adopting a robust security framework is not just an organizational necessity but a strategic imperative. Here are some guiding principles and measures that can help fortify your defense against the potential risks associated with AI technologies.

  • Research the company behind the app. Review the company's privacy policy and security features to see how it protects personally identifiable and customer information, and do your research on the companies providing these new services before posting any information you wouldn't want to share publicly.

  • Train employees on the safe and proper use of AI tools. You should already be training employees on good cybersecurity behaviors; widespread use of generative AI tools means adding new policies and training topics to that framework.

  • Consider using a security tool designed to prevent oversharing. As generative AI tool production continues, we'll soon see a growing collection of cybersecurity tools designed specifically for their vulnerabilities. LLM Shield, for example, is a security product designed to help prevent employees from sharing sensitive or proprietary information with a generative AI chatbot (a minimal illustrative filter in this spirit appears after this list).

  • Use AI to Monitor AI. According to the GitLab report What’s next in DevSecOps in 2023, among devs who use AI/ML, 61% said they use AI/ML to check code, use bots for testing, and use AI/ML for code review (a sketch of LLM-assisted code review also follows this list).
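As promised above, here is a minimal sketch of an oversharing filter in the spirit of tools like LLM Shield. This illustrates the general approach only, not LLM Shield's actual implementation; the patterns are hypothetical, and a real deployment would use a mature DLP engine:

import re

# Hypothetical patterns; real DLP tooling is far more sophisticated.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def guarded_send(prompt: str) -> None:
    findings = check_prompt(prompt)
    if findings:
        raise ValueError(f"Blocked: prompt appears to contain {', '.join(findings)}")
    print("Prompt passed the filter; safe to forward to the chatbot.")

try:
    guarded_send("Summarize our roadmap. Reach me at jane.doe@example.com.")
except ValueError as err:
    print(err)  # Blocked: prompt appears to contain email address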
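And here is the promised sketch of using AI to monitor AI: asking a chat model to review a code diff for security problems before it merges. It assumes the openai Python package (v1+ client) with an OPENAI_API_KEY set in the environment; the model name and prompts are illustrative, not a specific product's workflow:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def review_diff(diff: str) -> str:
    """Ask the model to flag likely security issues in a unified diff."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": ("You are a security-focused code reviewer. List any "
                         "concrete vulnerabilities in this diff, or reply "
                         "'none found'.")},
            {"role": "user", "content": diff},
        ],
    )
    return response.choices[0].message.content

sample_diff = (
    '+ query = "SELECT * FROM users WHERE name = \'" + user_input + "\'"\n'
    "+ cursor.execute(query)"
)
print(review_diff(sample_diff))  # should flag the SQL injection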

In conclusion, the age of generative AI presents both challenges and opportunities. By understanding the evolving landscape, continuously updating our skills, and adopting a security-first approach, we can harness the power of AI without compromising our digital safety.

Tip of the Week: Best Security Practices when Using ChatGPT

Safeguarding Your Personal and Sensitive Information

It's paramount to remember that any data shared with ChatGPT is not entirely private; the platform retains the right to use that data to improve its models. Therefore, it's advisable to avoid sharing confidential information: whether it's proprietary code, business strategies, or personal details, it's best to keep them away from ChatGPT.

Also, make sure that you understand the risks of using certain features. In the ChatGPT interface under Settings & Beta > Data Controls, you can choose whether new chats on your browser are saved to your history. The benefit is being able to get back to previous chats; the downside is that you are granting OpenAI the ability to use your data to improve its models.
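If history retention concerns you, one option teams take is to interact via the API instead of the web interface; under OpenAI's 2023 policy, data sent through the API is not used for model training by default. A minimal sketch, assuming the openai Python package (v1+ client) and an OPENAI_API_KEY environment variable, with an illustrative model choice:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4",  # illustrative model choice
    messages=[{"role": "user", "content": "Draft a polite meeting reminder."}],
)
print(response.choices[0].message.content)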

Ensuring the Authenticity of Information

Generative AI tools, including ChatGPT, generate responses automatically, without a manual vetting process. This can sometimes lead to the dissemination of inaccurate or fabricated information. To counteract this:

Always verify sources: Before accepting any information from ChatGPT, ensure that the sources cited are legitimate and trustworthy.

Be cautious with other AI platforms: Similar tools, such as Google's Bard, have been observed to produce misleading content. Always approach information from these platforms with a discerning eye.

Navigating the Complex Landscape of Copyright

Generative AI platforms, by their very nature, pull information from existing sources. This can sometimes lead to unintentional copyright infringement. To mitigate this risk, be vigilant with recent sources: if ChatGPT references content from publications post-1927, there's a potential copyright concern, especially within the U.S. jurisdiction. Utilize plagiarism checkers: before publishing or using any content generated by ChatGPT, run it through a reliable plagiarism checker to help ensure its originality.

For ChatGPT users, prioritizing security is paramount. As we increasingly integrate AI-driven chatbots into our daily lives, it's essential to be aware of potential vulnerabilities and take proactive measures. By staying informed and practicing safe online behaviors, ChatGPT users can ensure that their interactions remain private, secure, and free from malicious threats.

AI Tools I am Evaluating

  • AI & Analytics Engine - The AI & Analytics Engine empowers everyone, regardless of data science expertise, to leverage their data and build machine learning models to predict future events and make better decisions without needing to code.

Midjourney Prompt for Newsletter Header Image

For every issue of the Artificially Intelligent Enterprise, I include the Midjourney prompt I used to create that edition's header image.

Conceptual Artwork of Secure AI Networks - An imaginative and thought-provoking conceptual artwork that depicts the challenge of ensuring security in the age of generative AI. The artwork portrays a network of interconnected AI entities surrounded by intricate digital locks and barriers, symbolizing the protective measures in place to safeguard AI systems. The background features a futuristic digital landscape, illustrating the evolving nature of AI security. The artwork employs a combination of digital art techniques, emphasizing the need for robust security protocols in the AI ecosystem. Post-processing enhances the visual effects and contrasts, creating a visually engaging and conceptually stimulating artwork. This conceptual masterpiece raises questions about the delicate balance between AI innovation and the imperative to ensure data and system security. Created by the visionary digital artist, Mia Rodriguez, this artwork has been featured in tech exhibitions, praised for its portrayal of the complexities of securing generative AI systems. --s 300 --ar 16:9
