Recent Posts
Archives

Posts Tagged ‘LLMs’

PostHeaderIcon [DefCon32] Threat Modeling in the Age of AI

As artificial intelligence (AI) reshapes technology, Adam Shostack, a renowned threat modeling expert, explores its implications for security. Speaking at the AppSec Village, Adam examines how traditional threat modeling adapts to large language models (LLMs), addressing real-world risks like biased hiring algorithms and deepfake misuse. His practical approach demystifies AI security, offering actionable strategies for researchers and developers to mitigate vulnerabilities in an AI-driven world.

Foundations of Threat Modeling

Adam introduces threat modeling’s four-question framework: what are we working on, what can go wrong, what are we going to do about it, and did we do a good job? This structured approach, applicable to any system, helps identify vulnerabilities in LLMs. By creating simplified system models, researchers can map AI components, such as training data and inference pipelines, to pinpoint potential failure points, ensuring a proactive stance against emerging threats.

AI-Specific Security Challenges

Delving into LLMs, Adam highlights unique risks stemming from their design, particularly the mingling of code and data. This architecture complicates secure deployment, as malicious inputs can exploit model behavior. Real-world issues, such as AI-driven resume screening biases or facial recognition errors leading to wrongful arrests, underscore the urgency of robust threat modeling. Adam notes that while LLMs excel at specific mitigation tasks, broad security questions yield poor results, necessitating precise queries.

Leveraging AI for Security Solutions

Adam explores how LLMs can enhance security practices. By generating mitigation code or test cases for specific vulnerabilities, AI can assist developers in fortifying systems. However, he cautions against over-reliance, as generic queries produce unreliable outcomes. His approach involves using AI to streamline threat identification while maintaining human oversight, ensuring that mitigations address tangible risks like data leaks or model poisoning.

Future Directions and Real-World Impact

Concluding, Adam dismisses apocalyptic AI fears but stresses immediate concerns, such as deepfake proliferation and biased decision-making. He advocates integrating threat modeling into AI development to address these issues early. By fostering a collaborative community effort, Adam encourages researchers to refine AI security practices, ensuring that LLMs serve as tools for progress rather than vectors for harm.

Links:

PostHeaderIcon [DevoxxGR2024] Meet Your New AI Best Friend: LangChain at Devoxx Greece 2024 by Henry Lagarde

At Devoxx Greece 2024, Henry Lagarde, a senior software engineer at Criteo, introduced audiences to LangChain, a versatile framework for building AI-powered applications. With infectious enthusiasm and live demonstrations, Henry showcased how LangChain simplifies interactions with large language models (LLMs), enabling developers to create context-aware, reasoning-driven tools. His talk, rooted in his experience at Criteo, a leader in retargeting and retail media, highlighted LangChain’s composability and community-driven evolution, offering a practical guide for AI integration.

LangChain’s Ecosystem and Composability

Henry began by defining LangChain as a framework for building context-aware reasoning applications. Unlike traditional LLM integrations, LangChain provides modular components—prompt templates, LLM abstractions, vector stores, text splitters, and document loaders—that integrate with external services rather than hosting them. This composability allows developers to switch LLMs seamlessly, adapting to changes in cost or performance without rewriting code. Henry emphasized LangChain’s open-source roots, launched in late 2022, and its rapid growth, with versions in Python, TypeScript, Java, and more, earning it the 2023 New Tool of the Year award.

The ecosystem extends beyond core modules to include LangServe for REST API deployment, LangSmith for monitoring, and a community hub for sharing prompts and agents. This holistic approach supports developers from prototyping to production, making LangChain a cornerstone for AI engineering.

Building a Chat Application

In a live demo, Henry showcased LangChain’s simplicity by recreating a ChatGPT-like application in under 10 lines of Python code. He instantiated an OpenAI client using GPT-3.5 Turbo, implemented chat history for context awareness, and used prompt templates to define system and human messages. By combining these components, he enabled streaming responses, mimicking ChatGPT’s real-time output without the $20 monthly subscription. This demonstration highlighted LangChain’s ability to handle memory, input/output formatting, and LLM interactions with minimal effort, empowering developers to build cost-effective alternatives.

Henry noted that LangChain’s abstractions, such as strong typing and output parsing, eliminate manual prompt engineering, ensuring robust integrations even when APIs change. The demo underscored the framework’s accessibility, inviting developers to experiment with its capabilities.

Creating an AI Agent for PowerPoint Generation

Henry’s second demo illustrated LangChain’s advanced features by building an AI agent to generate PowerPoint presentations. Using TypeScript, he configured a system prompt from LangSmith’s community hub, defining the agent’s tasks: researching a topic via the Serper API and generating a structured PowerPoint. He defined tools with Zod for runtime type checking, ensuring consistent outputs, and integrated callbacks for UI tracing and monitoring.

The agent, powered by Anthropic’s Claude model, performed internet research on Google Cloud, compiled findings, and generated a presentation with sourced information. Despite minor delays, the demo showcased LangChain’s ability to orchestrate complex workflows, combining research, data processing, and content creation. Henry’s use of LangSmith for prompt optimization and monitoring highlighted the framework’s production-ready capabilities.

Community and Cautions

Henry emphasized LangChain’s vibrant community, which drives its multi-language support and rapid evolution. He encouraged attendees to contribute, noting the framework’s open-source ethos and resources like GitHub for further exploration. However, he cautioned against over-reliance on LLMs, citing their occasional laziness or errors, as seen in ChatGPT’s simplistic responses. LangChain, he argued, augments developer workflows but requires careful integration to ensure reliability in production environments.

His vision for LangChain is one of empowerment, enabling developers to enhance applications incrementally while maintaining control over AI-driven processes. By sharing his demo code on GitHub, Henry invited attendees to experiment and contribute to LangChain’s growth.

Conclusion

Henry’s presentation at Devoxx Greece 2024 was a compelling introduction to LangChain’s potential. Through practical demos and insightful commentary, he demonstrated how the framework simplifies AI development, from basic chat applications to sophisticated agents. His emphasis on composability, community, and cautious integration resonated with developers eager to explore AI. As LangChain continues to evolve, Henry’s talk serves as a blueprint for harnessing its capabilities in real-world applications.

Links: