In the rapidly evolving landscape of artificial intelligence (AI), ensuring the integrity and confidentiality of Large Language Models (LLMs) is paramount. The concept of a "trustless" LLM, one that operates without the need to trust any single party, has become increasingly significant. By deploying LLMs within Trusted Execution Environments (TEEs), we can achieve tamper-proof execution and maintain data confidentiality, empowering individuals and businesses to create custom LLMs with enhanced security. A natural question arises: why use TEEs over other verifiable computing technologies? Unlike better-known approaches such as zero-knowledge proofs (ZKPs) and fully homomorphic encryption (FHE), which are optimistically 3-5 years away from practical implementations, TEE-based infrastructure is ready today.
Understanding Trusted Execution Environments (TEEs)
As David Attermann puts it: "Rethinking the use of Trusted Execution Environments (TEEs), transforming them from a 'trusted' environment into a 'trusted due to verification', or trustless, execution environment is essential. This is achieved by integrating smart contracts, cryptographic primitives, and widely used network technologies, leveraging the TEE's embedded security to control operations within the TEE and ensure compliance through blockchain consensus mechanisms. By combining blockchain technology and TEEs, Web3 and AI can ensure trustless computing environments where computations are secure, verifiable, and transparent."
A Trusted Execution Environment (TEE) is a secure area within a processor that protects the confidentiality and integrity of the code and data loaded inside it. This means that unauthorized entities cannot access or alter the data and code within the TEE. TEEs provide isolated execution, integrity of applications, and confidentiality of their assets, making them essential for secure computing.
The Role of TEEs in AI
Integrating TEEs into AI workflows offers several advantages:
Confidentiality: Data processed within a TEE remains confidential, ensuring that sensitive information is protected during AI model training and inference.
Integrity: TEEs guarantee that the AI model and its computations cannot be tampered with, preserving the model's integrity.
Verifiability: Operations within a TEE can be attested, providing proof that computations were executed as intended.
TEEs achieve this security and verifiability through memory encryption and remote attestation (diagram source: Phoenixnap).
These properties are crucial for developing trustless LLMs, as they ensure that the model operates securely and as expected.
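To make the attestation idea concrete, here is a minimal, self-contained sketch of the pattern. It is illustrative only: all names are hypothetical, and real TEEs such as Intel SGX or TDX use hardware-fused keys and a dedicated quoting mechanism rather than a shared secret.

```python
# Illustrative sketch of TEE-style attestation: the enclave "measures" the code
# it loaded (a hash), binds that measurement with a key only the hardware
# holds, and a remote verifier checks both the binding and the expected
# measurement. Names are hypothetical; real TEEs use hardware-rooted signing.
import hashlib
import hmac

ATTESTATION_KEY = b"hardware-rooted-secret"  # stand-in for a CPU-fused key

def measure(code: bytes) -> str:
    """Hash the loaded code, like a TEE's launch measurement (e.g. MRENCLAVE)."""
    return hashlib.sha256(code).hexdigest()

def produce_quote(code: bytes) -> dict:
    """Enclave side: bind the measurement with a MAC from the hardware key."""
    m = measure(code)
    sig = hmac.new(ATTESTATION_KEY, m.encode(), hashlib.sha256).hexdigest()
    return {"measurement": m, "signature": sig}

def verify_quote(quote: dict, expected_measurement: str) -> bool:
    """Verifier side: check the quote's integrity and that the measured code
    matches the code we expected the TEE to be running."""
    sig = hmac.new(ATTESTATION_KEY, quote["measurement"].encode(),
                   hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, quote["signature"])
            and quote["measurement"] == expected_measurement)

model_code = b"def infer(prompt): ..."
quote = produce_quote(model_code)
print(verify_quote(quote, measure(model_code)))        # genuine code: True
print(verify_quote(quote, measure(b"tampered code")))  # wrong measurement: False
```

The key property is that the verifier never has to trust the operator of the machine, only the measurement and the hardware key behind the quote.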
Oasis Network's Contribution to Trustless LLMs
The Oasis Network leverages TEEs to enhance AI model security and verifiability. By integrating TEEs with the Oasis Runtime Offchain Logic (ROFL) framework, developers can create decentralized AI models with verifiable provenance. This integration ensures that AI models are not only secure but also transparent in their development and deployment processes.
An experiment has already demonstrated this in practice: a GPU-enabled TEE was integrated with the Oasis Network to fine-tune an LLM entirely inside a TEE.
Runtime Offchain Logic (ROFL) Framework
ROFL (Runtime OFf-chain Logic) is a framework that adds support for off-chain components to runtimes like Oasis Sapphire, enabling non-deterministic behavior and access to remote network resources. ROFL allows off-chain components to communicate seamlessly with the on-chain realm, enabling full composability across different blockchain platforms and off-chain computation stacks.
The ROFL framework enables the execution of off-chain logic within TEEs, allowing for complex computations that are both verifiable and confidential. This is particularly beneficial for AI workloads, as it allows for the training and deployment of LLMs in a secure environment, ensuring that the models remain tamper-proof and that the data they process is kept confidential.
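The division of labor ROFL describes can be sketched as follows. This is a hypothetical simulation of the pattern, not the real Oasis ROFL SDK (which is Rust-based): the off-chain worker does non-deterministic work inside the TEE, and the on-chain side performs only deterministic checks before accepting the result.

```python
# Hypothetical sketch of the ROFL pattern: an off-chain component runs
# non-deterministic work inside a TEE and posts an attested result that the
# on-chain side can verify deterministically. All class and method names are
# illustrative; they are not the real Oasis ROFL API.
import hashlib
import random

class OffchainWorker:
    """Runs inside the TEE: free to use randomness, network I/O, or GPUs."""
    def __init__(self, enclave_measurement: str):
        self.measurement = enclave_measurement

    def run_inference(self, prompt: str) -> dict:
        # Non-deterministic work that could never run directly on-chain.
        answer = f"response-to:{prompt}:{random.randint(0, 9)}"
        digest = hashlib.sha256(answer.encode()).hexdigest()
        return {"answer": answer, "digest": digest,
                "measurement": self.measurement}

class OnchainContract:
    """On-chain side: deterministic checks only."""
    def __init__(self, trusted_measurement: str):
        self.trusted_measurement = trusted_measurement
        self.accepted = []

    def submit(self, result: dict) -> bool:
        ok = (result["measurement"] == self.trusted_measurement
              and hashlib.sha256(result["answer"].encode()).hexdigest()
                  == result["digest"])
        if ok:
            self.accepted.append(result["digest"])
        return ok

contract = OnchainContract(trusted_measurement="mrtd-abc123")
worker = OffchainWorker(enclave_measurement="mrtd-abc123")
print(contract.submit(worker.run_inference("hello")))  # accepted: True
```

The design point is that the chain never re-executes the LLM; it only checks that the result came from an enclave whose measurement it trusts.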
Trustless Agents and Autonomous AI
Building upon the foundation of TEEs and the ROFL framework, the concept of trustless agents emerges. These are autonomous AI entities that operate without human intervention, with their actions and decisions being verifiable and secure. By deploying LLMs within TEEs, these agents can perform tasks with a high degree of trustworthiness, making them suitable for applications that require stringent security and confidentiality measures.
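One way such an agent's actions can be made verifiable is a hash-chained audit log: each autonomous decision is appended to a chain, so anyone can later check the exact sequence of actions the agent took inside the TEE. The sketch below is a minimal illustration of that idea, with hypothetical names throughout.

```python
# Hypothetical sketch of a trustless agent's audit trail: each action is
# appended to a hash chain, so the full decision history can be verified
# against a single published head hash. Names are illustrative.
import hashlib
import json

class TrustlessAgent:
    def __init__(self):
        self.log = []
        self.head = "0" * 64  # genesis hash

    def act(self, action: str) -> str:
        """Record an action, chaining it to the previous log head."""
        entry = json.dumps({"prev": self.head, "action": action})
        self.head = hashlib.sha256(entry.encode()).hexdigest()
        self.log.append(entry)
        return self.head

def verify_log(log, expected_head) -> bool:
    """Replay the chain and confirm it ends at the published head."""
    head = "0" * 64
    for entry in log:
        record = json.loads(entry)
        if record["prev"] != head:
            return False
        head = hashlib.sha256(entry.encode()).hexdigest()
    return head == expected_head

agent = TrustlessAgent()
agent.act("fetch market data")
head = agent.act("place order")
print(verify_log(agent.log, head))  # intact log: True
```

Combined with attestation of the agent's code, this lets outside observers audit what an autonomous agent did without trusting its operator.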
With the introduction of TDX to ROFL for its TEE implementation, the range of what's possible inside a ROFL application is greatly expanded. Intel Trust Domain Extensions (Intel TDX) is Intel's newest confidential computing technology, announced in February 2022 and offered in public preview by the cloud providers Microsoft Azure and Google Cloud Platform.
This hardware-based trusted execution environment (TEE) facilitates the deployment of trust domains (TD), which are hardware-isolated virtual machines (VM) designed to protect sensitive data and applications from unauthorized access.
Intel TDX is enabled by a CPU-measured Intel TDX module. This software module runs in a new CPU Secure Arbitration Mode (SEAM) as a peer to the virtual machine manager (VMM), and supports TD entry and exit using the existing virtualization infrastructure. The module is hosted in a reserved memory space identified by the SEAM Range Register (SEAMRR).
Intel TDX uses hardware extensions for managing and encrypting memory, and protects both the confidentiality and integrity of the TD CPU state from non-SEAM mode.
TDX operates as a virtualization-based confidential computing environment, which results in better performance and fewer memory constraints. It allows for the straightforward deployment of legacy applications without requiring changes to the programming model.
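As a small practical aside, code can heuristically check whether it is running inside a TDX guest. The sketch below assumes a Linux guest; the device path is the one exposed by recent Linux TDX guest drivers and may vary by kernel, and real deployments should rely on the full TDX attestation flow rather than this heuristic.

```python
# Minimal heuristic (Linux-only assumption) for detecting a TDX trust domain:
# the TDX guest driver exposes a device node used for quote requests. This is
# a convenience check, not a security guarantee; attestation is authoritative.
import os

def probably_inside_tdx_guest() -> bool:
    """Return True if a known TDX guest device node is present."""
    return any(os.path.exists(p) for p in ("/dev/tdx_guest", "/dev/tdx-guest"))

print(probably_inside_tdx_guest())
```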
Conclusion
The integration of LLMs within TEEs, facilitated by frameworks like ROFL on the Oasis Network, represents a significant advancement in creating trustless AI systems. This approach ensures that AI models are secure, verifiable, and capable of handling sensitive data, thereby empowering individuals and businesses to develop custom LLMs with confidence.