As developers continue to adopt AI tools to transform their workflows, AI-generated code has become more common. In fact, 96% of developers reported using AI coding assistants to streamline their work.
Although generative AI (GenAI) tools like ChatGPT can speed up workflows and boost productivity, the security and quality of their output aren’t guaranteed. Developers and organizations adopting ChatGPT or other GenAI tools should understand the benefits, be aware of the security vulnerabilities and quality issues associated with AI-generated code, and prioritize strong security practices.
Can ChatGPT write secure code?
When surveyed, 75.8% of developers said they believe AI-generated code is more secure than human-written code, even though research has repeatedly shown that AI-generated code introduces vulnerabilities. It’s also been found that developers with access to an AI assistant based on OpenAI’s codex-davinci-002 model wrote significantly less secure code than those without access. Interestingly, developers who paid attention to how they interacted with the model, such as writing more effective prompts, produced more secure code.
So, while it’s possible for ChatGPT and other AI coding tools to write secure code, their output is entirely dependent on their training data and the prompts they’re given. If vulnerabilities or mistakes exist in the training data, the generated code can be insecure from the start. Because of this, developers should assume that any AI-generated code is insecure and remediate vulnerabilities before deployment.
Failing to do so can spread insecure code throughout the codebase and ultimately compromise the security of the applications built on it.
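To make that concrete, consider what remediation might look like for one of the most common flaws AI assistants produce: SQL queries built from string interpolation. The sketch below is purely illustrative (Python and SQLite are assumed only for the example) and shows the insecure pattern next to a parameterized fix:

```python
import sqlite3

def get_user_insecure(conn: sqlite3.Connection, username: str):
    # Typical of unreviewed AI-generated code: the username is interpolated
    # directly into the SQL string, opening the door to SQL injection.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchone()

def get_user_secure(conn: sqlite3.Connection, username: str):
    # Remediated version: a parameterized query lets the database driver
    # handle escaping, so user input can't change the query's structure.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchone()
```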
How reliable is ChatGPT-generated code?
The reliability of ChatGPT-generated code depends on the task at hand and the underlying training data. While ChatGPT now has access to more current information and can browse the internet, it can still hallucinate or generate unreliable code, especially as coding best practices and languages evolve. This becomes a security issue when developers assume ChatGPT-generated code is inherently secure, reliable, and up to date. Nearly 80% of developers admit to bypassing security measures because they believe AI-generated code is “secure enough,” which can result in widespread vulnerabilities across a codebase.
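As a hypothetical illustration of how out-of-date training data can surface, an assistant might still suggest a password-hashing pattern that was once common but is now considered broken. The Python sketch below contrasts that outdated pattern with a salted key-derivation function (the iteration count is an assumption based on commonly cited guidance, not a value from this article):

```python
import hashlib
import os

def hash_password_outdated(password: str) -> str:
    # Pattern an assistant trained on older code might still suggest:
    # a fast, unsalted MD5 hash, which is trivially cracked offline.
    return hashlib.md5(password.encode()).hexdigest()

def hash_password_current(password: str) -> tuple[bytes, bytes]:
    # Current guidance: a slow, salted key-derivation function.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest
```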
How good is ChatGPT at coding?
Studies show that ChatGPT performs well on easy and medium-difficulty programming problems, and it shows real promise as a coding assistant, but there are clear limitations. As mentioned, ChatGPT’s output is only as good as the data it’s trained on: biases, mistakes, out-of-date information, and unclear prompts all degrade the quality of generated code. ChatGPT also often struggles with more complex programming problems, meaning it can’t replace specialized skills and experience.
Can ChatGPT find vulnerabilities in code?
ChatGPT can analyze code for gaps and inconsistencies and provide feedback, potentially uncovering common vulnerabilities. However, it often struggles with complex code and misses subtler vulnerabilities, meaning serious issues can go undetected until they cause real damage.
ChatGPT also can’t scan code in real time; developers have to paste code in manually for review. If subsequent changes or fixes are never re-submitted to ChatGPT, vulnerabilities can be overlooked or left unresolved. ChatGPT can also hallucinate, incorrectly assessing the security of the code it reviews.
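The kind of subtle flaw a conversational review can miss often looks perfectly reasonable at a glance. The hypothetical Python helper below contains a path traversal bug alongside a hardened version, purely to illustrate the class of issue:

```python
import os

BASE_DIR = "/srv/app/uploads"

def read_upload(filename: str) -> bytes:
    # Looks reasonable at a glance, and a quick conversational review may
    # pass it, but a filename like "../../../etc/passwd" escapes BASE_DIR.
    path = os.path.join(BASE_DIR, filename)
    with open(path, "rb") as f:
        return f.read()

def read_upload_safe(filename: str) -> bytes:
    # Hardened version: resolve the path and verify it stays inside BASE_DIR.
    path = os.path.realpath(os.path.join(BASE_DIR, filename))
    if not path.startswith(os.path.realpath(BASE_DIR) + os.sep):
        raise ValueError("path traversal attempt blocked")
    with open(path, "rb") as f:
        return f.read()
```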
Can ChatGPT ensure secure coding practices?
While ChatGPT can serve as an initial code assessment tool, other security safeguards should exist within a developer’s workflow. Relying solely on ChatGPT to enforce secure coding practices will let vulnerabilities slip through, so AppSec teams still need comprehensive security tools and measures to ensure those practices are actually followed.
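As one hedged sketch of what such a safeguard can look like, the Python script below fails a build whenever a static security scan reports findings. It assumes the open-source Bandit scanner is installed and stands in for whatever analysis tooling a team actually standardizes on:

```python
import subprocess
import sys

def run_security_gate(target: str = "src") -> int:
    # Run a static security scan as part of the pipeline; a non-zero exit
    # code fails the build so insecure code can't merge unreviewed.
    result = subprocess.run(["bandit", "-r", target])
    return result.returncode

if __name__ == "__main__":
    sys.exit(run_security_gate())
```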
Is data safe with ChatGPT?
Using large language models (LLMs) like ChatGPT raises significant data security concerns. While ChatGPT has built-in data security measures, companies should assume that no data within GenAI is secure. Often, users share sensitive information with their prompts without thinking about it, assuming it’s protected. However, confidential or sensitive information could make its way into the model's training data and produce outputs containing it, putting the company at risk.
Additionally, if ChatGPT or another AI tool is ever compromised, or a company’s user account is hacked, bad actors could gain access to sensitive information or training data, compounding the damage. To offset this risk, companies should have internal policies governing what employees can share with AI tools and how that data is managed.
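One hypothetical way to back such a policy with tooling is to redact obvious secrets before a prompt ever leaves the company. The Python sketch below is illustrative only, and its patterns are nowhere near exhaustive:

```python
import re

# Illustrative patterns only; a real policy needs broader, maintained rules.
REDACTION_PATTERNS = [
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[REDACTED_AWS_KEY]"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),
    (re.compile(r"(?i)(api[_-]?key|token)\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
]

def redact_prompt(prompt: str) -> str:
    # Strip obvious secrets before a prompt is sent to an external AI tool.
    for pattern, replacement in REDACTION_PATTERNS:
        prompt = pattern.sub(replacement, prompt)
    return prompt
```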
How to check if ChatGPT-generated code is secure
Security protocols within an organization must evolve to account for GenAI tools. Without a strong security posture that includes internal policies and protocols for tools like ChatGPT, organizations are at risk. In most cases, the largest risk is insecure AI-generated code being released into a codebase, spreading to multiple projects, and eventually causing a breach.
The most effective way to ensure that ChatGPT-generated code is secure is to use a dedicated security analysis tool like Snyk Code. Unlike ChatGPT, Snyk Code integrates seamlessly into a developer’s workflow and throughout the development process, scanning code in real time and continuously checking for vulnerabilities. Powered by DeepCode AI, Snyk Code prioritizes vulnerabilities and recommends fixes, ensuring teams focus on the most critical issues.
While ChatGPT and other GenAI coding tools can benefit a developer’s workflow, a platform like Snyk is vital to ensuring an organization’s codebase and applications are secure and protected against future vulnerabilities or attacks.