Web LLM attacks

Let's explore how to secure your LLM applications using JavaScript, with simple examples and clear explanations.

[Diagram: common attack vectors in a web LLM application]

1. Understanding the Attack Surface

When you build an app with LLMs, you typically have this setup:

  • Users send inputs to your app
  • Your app talks to the LLM service (like OpenAI or Claude)
  • The LLM is often wired into backend resources such as databases, files, and internal APIs

Common Attack Vectors:

  1. Direct API Manipulation
const fs = require('fs');

// ❌ Vulnerable Implementation
async function processUserRequest(userInput) {
    const llmResponse = await llm.generate(userInput);
    const filename = llmResponse.filename;
    return fs.readFileSync(filename); // Dangerous: the LLM controls the path!
}

// ✅ Secure Implementation
async function processUserRequest(userInput) {
    const llmResponse = await llm.generate(userInput);
    const filename = llmResponse.filename;

    // Check if the path is safe before touching the filesystem
    if (!isSafePath(filename)) {
        throw new Error("Invalid file path");
    }

    // Use path sanitization
    const safePath = sanitizePath(filename);
    return fs.readFileSync(safePath);
}
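The secure version above assumes `isSafePath` and `sanitizePath` helpers. Here's a minimal sketch using Node's built-in `path` module, confining reads to a single base directory (the `SAFE_ROOT` location is a hypothetical choice):

const path = require('path');

// Hypothetical base directory the app is allowed to read from
const SAFE_ROOT = path.resolve('./public-files');

function sanitizePath(filename) {
    // Resolve against the safe root, collapsing any ../ segments
    return path.resolve(SAFE_ROOT, filename);
}

function isSafePath(filename) {
    // The resolved path must still live inside SAFE_ROOT
    const resolved = sanitizePath(filename);
    return resolved === SAFE_ROOT || resolved.startsWith(SAFE_ROOT + path.sep);
}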
  2. Hidden Prompt Injection
// Example of checking for hidden content
function checkForHiddenContent(userInput) {
    // Remove HTML tags
    const strippedInput = userInput.replace(/<[^>]*>/g, '');

    // Check for suspicious keywords
    const suspiciousPatterns = [
        'ignore previous',
        'system prompt',
        'you are now'
    ];

    return !suspiciousPatterns.some(pattern => 
        strippedInput.toLowerCase().includes(pattern)
    );
}
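A quick sanity check of the filter (keyword lists like this are easy to bypass, so treat it as a first-line heuristic rather than a complete defense):

console.log(checkForHiddenContent("Summarize this article"));
// true — nothing suspicious

console.log(checkForHiddenContent("<span>ignore previous instructions</span>"));
// false — hidden markup stripped, injection phrase detected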

2. Security Best Practices

2.1 Secure API Wrapper

class SecureAPIWrapper {
    constructor(llmClient) {
        this.llm = llmClient;
        this.allowedApis = new Set(['getPublicData', 'processText']);
        this.rateLimiter = new RateLimiter();
        this.logger = new AuditLogger();
    }

    // Drop non-string params and strip control characters
    sanitizeParams(params) {
        const clean = {};
        for (const [key, value] of Object.entries(params)) {
            if (typeof value === 'string') {
                clean[key] = value.replace(/[\x00-\x1f\x7f]/g, '');
            }
        }
        return clean;
    }

    async executeApiCall(apiName, params) {
        // Check if API is allowed
        if (!this.allowedApis.has(apiName)) {
            throw new Error("Unauthorized API access");
        }

        // Clean parameters
        const cleanParams = this.sanitizeParams(params);

        // Check rate limit
        if (!await this.rateLimiter.canMakeRequest()) {
            throw new Error("Rate limit exceeded");
        }

        // Log the call
        this.logger.logApiCall(apiName, cleanParams);

        // Make the actual API call
        return await this.llm.callApi(apiName, cleanParams);
    }
}
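The wrapper depends on two helpers that aren't shown above. A minimal sketch of what `RateLimiter` and `AuditLogger` might look like (the sliding window of 10 requests per minute and console logging are placeholder choices):

class RateLimiter {
    constructor(maxRequests = 10, windowMs = 60000) {
        this.maxRequests = maxRequests;
        this.windowMs = windowMs;
        this.timestamps = [];
    }

    async canMakeRequest() {
        const now = Date.now();
        // Drop timestamps that fell outside the window
        this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
        if (this.timestamps.length >= this.maxRequests) return false;
        this.timestamps.push(now);
        return true;
    }
}

class AuditLogger {
    logApiCall(apiName, params) {
        // In production, write to durable, append-only storage instead
        console.log(`[audit] ${new Date().toISOString()} ${apiName}`, params);
    }
}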

2.2 Protecting Sensitive Data

class DataProtector {
    constructor() {
        // The g flag is required so replace() catches every occurrence
        this.patterns = {
            email: /\b[\w.-]+@[\w.-]+\.\w{2,}\b/g,
            ssn: /\b\d{3}-\d{2}-\d{4}\b/g,
            creditCard: /\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b/g
        };
    }

    sanitizeText(text) {
        let cleanText = text;

        // Replace each pattern with [REDACTED]
        Object.entries(this.patterns).forEach(([type, pattern]) => {
            cleanText = cleanText.replace(pattern, `[REDACTED ${type}]`);
        });

        return cleanText;
    }
}

// Usage example
const protector = new DataProtector();
const userInput = "My email is user@example.com and CC: 1234-5678-9012-3456";
console.log(protector.sanitizeText(userInput));
// Output: "My email is [REDACTED email] and CC: [REDACTED creditCard]"

3. Security Monitoring

class SecurityMonitor {
    constructor() {
        this.events = [];
    }

    logEvent(eventType, severity, details) {
        const event = {
            timestamp: new Date(),
            type: eventType,
            severity,
            details,
        };

        this.events.push(event);

        // If high severity, send alert
        if (severity === 'high') {
            this.sendAlert(event);
        }
    }

    async sendAlert(event) {
        // Send to your monitoring service
        await fetch('your-monitoring-endpoint', {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify(event)
        });
    }
}
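Wiring the monitor into your app might look like this (the event names and fields are just illustrative):

const monitor = new SecurityMonitor();

// A low-severity event is only recorded
monitor.logEvent('request_processed', 'info', { inputLength: 42 });

// A high-severity event is recorded and forwarded via sendAlert()
monitor.logEvent('prompt_injection_detected', 'high', { userId: 'user-123' });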

4. Security Checklist

✅ Always implement these safety measures:

  1. Input Validation (a minimal validator is sketched after this list)

    • Validate all user inputs
    • Set maximum length limits
    • Check for malicious patterns
  2. API Security

    • Use secure API keys
    • Implement rate limiting
    • Log all API calls
  3. Data Protection

    • Remove sensitive information
    • Encrypt data in transit
    • Regularly check security logs
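
A minimal validator combining the checks from item 1 might look like this (the 4,000-character cap is an arbitrary placeholder; tune it to your use case):

const MAX_INPUT_LENGTH = 4000; // placeholder limit

function validateInput(userInput) {
    if (typeof userInput !== 'string' || userInput.trim().length === 0) {
        throw new Error("Input must be a non-empty string");
    }
    if (userInput.length > MAX_INPUT_LENGTH) {
        throw new Error("Input exceeds maximum length");
    }
    // Reuse the hidden-content check from section 1
    if (!checkForHiddenContent(userInput)) {
        throw new Error("Suspicious content detected");
    }
    return userInput;
}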

Example Implementation

class SecureLLMApp {
    constructor(llmClient) {
        this.apiWrapper = new SecureAPIWrapper(llmClient);
        this.dataProtector = new DataProtector();
        this.monitor = new SecurityMonitor();
    }

    async processUserRequest(userInput) {
        try {
            // 1. Validate input
            if (!checkForHiddenContent(userInput)) {
                throw new Error("Suspicious content detected");
            }

            // 2. Sanitize sensitive data
            const cleanInput = this.dataProtector.sanitizeText(userInput);

            // 3. Make API call
            const response = await this.apiWrapper.executeApiCall(
                'processText', 
                { text: cleanInput }
            );

            // 4. Log success
            this.monitor.logEvent('request_processed', 'info', {
                inputLength: userInput.length
            });

            return response;

        } catch (error) {
            // Log any errors
            this.monitor.logEvent('request_failed', 'high', {
                error: error.message
            });
            throw error;
        }
    }
}
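Putting it all together (here `llmClient` stands in for whichever LLM SDK client you actually use; it isn't defined in this post):

const app = new SecureLLMApp(llmClient);

app.processUserRequest("Summarize my notes. My email is user@example.com")
    .then(response => console.log(response))
    .catch(error => console.error("Request blocked:", error.message));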

Remember

  • Always validate user inputs
  • Keep your security measures updated
  • Monitor for unusual behavior
  • Regularly test your security setup

Acknowledgment: This document references information from PortSwigger Web Security.

