
Rethinking the Future of Secure Computation
In the last few years, we have seen many Web3 companies focusing on privacy-enhancing technologies (PETs) like secure multiparty computation (MPC), fully homomorphic encryption (FHE), and trusted execution environments (TEEs). This dedicated focus on advancing each technique has driven significant breakthroughs in research and development, enabling real-world applications that even big-tech companies like Apple, Google, and Meta are now incorporating into their products. However, this is not a competition to determine which PET takes precedence. Each PET excels at different use cases and addresses a unique point within the performance-security tradeoff spectrum. This is why, at Nillion, we have developed the Orchestration Layer, a system designed to seamlessly combine different PETs to deliver enhanced security, performance, and functionality. We view PETs as complementary rather than competitive.
Nowhere is this perceived competition more evident than in the contrast between hardware-based solutions (i.e., CPU and GPU TEEs) and software-based solutions (i.e., MPC and FHE). TEEs rely on trusted hardware and offer superior performance, while MPC and FHE rely on cryptography and offer stronger security models. In this blog post, we argue that this apparent binary choice is misleading and explore how these two worlds (and their corresponding technologies) can be combined to achieve a superior balance of performance, security, and flexibility for real-world applications. Before diving into the benefits of our hybrid approach, let’s take a closer look at the security models of these techniques.
Computational & Security Models of Software-based solutions (MPC, FHE) and Hardware-based solutions (CPU and GPU TEEs)
1. Software-based solutions (MPC, FHE)
MPC encompasses a set of techniques that allow multiple parties to collaborate using protocols designed to selectively hide, reveal, and transform secret pieces of information. These protocols achieve the same results as plaintext execution but keep inputs and outputs protected by processing shards, or secret shares, of the data. Different MPC protocols are suited to specific needs and offer unique trade-offs. For example, as we saw in our recent Curl and Wave Hello to Privacy blog posts, MPC can be used for large language model inference while protecting both the client’s inputs and the model itself, but it introduces overheads compared to computing directly over clear data (i.e., with no privacy). MPC is natively both distributed and decentralized: it involves several nodes, and each node typically executes the same role – there is no concept of a leader or centralized authority. A key assumption in MPC is that as long as the nodes do not collude and do not reveal their secret shares to each other, the data stays private and secure.
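To make the secret-sharing idea concrete, here is a minimal additive-sharing sketch in Python (a toy illustration with a hypothetical field modulus and party count, not Nillion’s protocol): each share looks random on its own, yet parties can add their shares locally, and only recombining the shares reveals the result.

```python
# Minimal sketch of additive secret sharing over a toy prime field.
# Illustration only; real MPC frameworks use richer protocols.
import secrets

PRIME = 2**61 - 1  # toy field modulus (an assumption for illustration)

def share(value: int, num_parties: int = 3) -> list[int]:
    """Split a value into additive shares: each is random, together they encode the secret."""
    shares = [secrets.randbelow(PRIME) for _ in range(num_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Recombine shares to recover the secret."""
    return sum(shares) % PRIME

# Each party adds its own shares locally; no party ever sees the raw inputs.
salary_a, salary_b = 52_000, 61_000
shares_a, shares_b = share(salary_a), share(salary_b)
sum_shares = [(a + b) % PRIME for a, b in zip(shares_a, shares_b)]
assert reconstruct(sum_shares) == salary_a + salary_b  # 113_000
```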
Fully homomorphic encryption (FHE) is yet another PET with its own trade-offs. Traditionally, FHE involves two parties, a client and a server: the client encrypts their data locally and uploads it to an untrusted server, which computes on the data without ever seeing it and sends an encrypted result back to the client, who decrypts it to learn the result. Our recent work on Ripple used this paradigm to accelerate privacy-preserving machine learning tasks. However, FHE can also be viewed as a building block for MPC solutions, especially in Web3, where multiple parties publish their encrypted data and another set of parties holds shares of the decryption key and can only jointly (under MPC) decrypt the result. As with MPC, as long as enough of the parties are honest (depending on the threshold used), private data remains secure.
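To illustrate the client/server workflow, the sketch below uses the python-paillier (phe) package. Paillier is only additively homomorphic rather than fully homomorphic, so it stands in here purely to show the data flow; it is not the scheme used in Ripple.

```python
# Sketch of the client/server homomorphic workflow using python-paillier (phe).
# Paillier is additively homomorphic only -- a stand-in to illustrate the FHE data flow.
from phe import paillier

# Client side: generate keys, encrypt data locally, send only ciphertexts.
public_key, private_key = paillier.generate_paillier_keypair()
readings = [12.5, 14.1, 9.8]
ciphertexts = [public_key.encrypt(x) for x in readings]

# Server side: computes on encrypted values without ever seeing them.
encrypted_total = ciphertexts[0]
for c in ciphertexts[1:]:
    encrypted_total = encrypted_total + c        # ciphertext + ciphertext
encrypted_scaled = encrypted_total * 2           # ciphertext * plaintext scalar

# Client side: decrypt the result returned by the server.
print(private_key.decrypt(encrypted_scaled))     # 72.8
```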
2. Hardware-based solutions (CPU and GPU TEEs)
Unlike the aforementioned distributed approaches, TEEs are hardware-based solutions that establish a secure enclave within an individual machine – TEEs are not natively distributed or decentralized. The enclave provides a secure, isolated portion of the machine in which all software executed inside it maintains the confidentiality and integrity of the data. In practice, this translates to encrypting a portion of memory so that its contents remain inaccessible to external readers. The scope of TEEs varies depending on the underlying technology: application enclaves, such as Intel SGX, containerize and protect the workload of individual applications while strictly limiting potential misbehavior. Conversely, the current trend is toward confidential virtual machines (CVMs), complete operating systems running inside the enclave. While CVMs offer greater flexibility than application enclaves, they present substantially larger attack surfaces. Lastly, NVIDIA offers its own GPU TEE, Confidential Computing, focused on keeping AI models secure, compliant, and uncompromised. GPU TEEs work hand in hand with CPU TEEs, and the data transfers between them are also encrypted.
Compared to cryptographic techniques like MPC and FHE, operations performed inside a TEE deal directly with clear data within the secure enclave (rather than secret shares or encryptions), resulting in near-plaintext computation speeds. This near-plaintext performance has significantly increased the popularity of TEEs. At the same time, however, this dependence on hardware has been the Achilles heel of TEEs: it requires trusting the hardware manufacturer, and it can introduce vulnerabilities such as side-channel attacks and supply chain risks.
Nillion’s Approach
The focus on individual software-based techniques has sometimes led to the perception that certain approaches are inherently superior to others and that they are incompatible with each other (forcing a choice of one or the other). This can overshadow an important reality: single-technique solutions (e.g., those relying exclusively on MPC, FHE, or TEEs) often face performance and scalability challenges that hybrid approaches can better address. Similarly, hardware-based solutions are frequently highlighted for their versatility and performance, but their security model and larger attack surfaces have hindered widespread adoption.
At Nillion, we believe that combining software-based and hardware-based solutions unlocks the next era of privacy-preserving computation. Traditionally, the primary benefit of combining software-based solutions (MPC or FHE) and TEEs has been to enhance security. TEEs enable semi-honest protocols to become resistant to malicious adversaries without relying on complex cryptographic primitives. In other words, the software solution protects your data, and the hardware guarantees that the software is running correctly through software attestation. This is one of the use cases where we combine TEEs with software-based solutions. But what if the performance of a software-based solution is already insufficient for a real-time application?
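To make the attestation idea concrete, here is a deliberately simplified sketch: the hardware signs a measurement (hash) of the loaded software, and a remote verifier checks that measurement against the build it expects before trusting the enclave. The helper names and the symmetric device key are hypothetical; real TEEs use asymmetric keys and vendor-specific evidence formats such as SGX DCAP or SEV-SNP attestation reports.

```python
# Highly simplified illustration of attestation (hypothetical helpers, toy symmetric key).
import hashlib
import hmac

DEVICE_KEY = b"toy-device-key"  # real hardware uses vendor-provisioned asymmetric keys

def enclave_quote(software_image: bytes) -> tuple[bytes, bytes]:
    """Enclave side: measure the loaded code and sign the measurement."""
    measurement = hashlib.sha256(software_image).digest()
    signature = hmac.new(DEVICE_KEY, measurement, hashlib.sha256).digest()
    return measurement, signature

def verify_quote(measurement: bytes, signature: bytes, expected_image: bytes) -> bool:
    """Verifier side: check the signature and that the code matches the expected build."""
    expected_measurement = hashlib.sha256(expected_image).digest()
    sig_ok = hmac.compare_digest(
        hmac.new(DEVICE_KEY, measurement, hashlib.sha256).digest(), signature
    )
    return sig_ok and hmac.compare_digest(measurement, expected_measurement)

image = b"mpc-node binary v1.2.3"
m, s = enclave_quote(image)
print(verify_quote(m, s, expected_image=image))        # True: run the protocol
print(verify_quote(m, s, expected_image=b"tampered"))  # False: refuse to send shares
```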
Understanding the synergy between MPC and TEEs requires recognizing that their strengths and weaknesses complement each other not only in security (i.e., using a TEE to lift a semi-honest protocol to malicious security) but also in performance. MPC excels at providing distributed trust, ensuring that no single entity can compromise the privacy and confidentiality of a computation; at the same time, performing certain computations under MPC, such as nonlinearities or data-structure operations, is inefficient. TEEs are particularly effective in scenarios where performance is critical, but relying solely on a TEE means accepting its trust model.
At Nillion, we have been exploring a hybrid approach that processes portions of the computation under MPC/FHE and other portions within a TEE. Starting the computation in an MPC cluster keeps the data confidential under the stronger security model, while a later part of the computation can be reconstructed within a secure enclave to deliver competitive real-world performance. Crucially, the MPC computation obfuscates the data before it is reconstructed within the TEE, so that even if an attack against the TEE succeeds, the data remains protected. Taking this one step further, multiple TEEs can be used in a decentralized fashion, where each TEE only receives shards of the data, further restricting the amount of data a potential attacker could access. We can view this framework as a series of software-based stages followed by a stack of decentralized hardware stages, followed by software-based stages again, and so on, as shown below:
What’s interesting here is that which parts are executed in software versus hardware can be adjusted depending on the use case at hand. Imagine a slider: at one extreme everything runs in a TEE, at the other extreme everything runs under MPC, and in between you get varying mixes of the two components. Let’s take a closer look at this next by exploring it in the context of AI agents.
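As a minimal sketch of this slider, assuming a hypothetical pipeline of named stages (an illustration of the orchestration idea, not Nillion’s API), a single split index decides how much of the computation runs under MPC versus inside a TEE:

```python
# Sketch of the "slider": partition a pipeline between an MPC backend and a TEE backend.
# Stage names and sensitivity labels are hypothetical.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    sensitivity: str  # "high" for raw inputs, "low" for obfuscated intermediates

PIPELINE = [
    Stage("embed_inputs", "high"),
    Stage("early_linear_layers", "high"),
    Stage("remaining_linear_layers", "low"),
    Stage("attention_and_nonlinearities", "low"),
    Stage("decode_output", "low"),
]

def plan(split: int) -> dict[str, list[str]]:
    """split = 0 -> everything in the TEE; split = len(PIPELINE) -> everything under MPC."""
    return {
        "mpc": [s.name for s in PIPELINE[:split]],
        "tee": [s.name for s in PIPELINE[split:]],
    }

print(plan(0))              # pure TEE: best performance, hardware trust model
print(plan(len(PIPELINE)))  # pure MPC: strongest trust model, highest overhead
print(plan(2))              # hybrid: sensitive early stages under MPC, the rest in a TEE
```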
AI Agents in Privacy-Sensitive Applications – A Combination of Software-based and Hardware-based Solutions
AI agents are becoming increasingly common, taking on roles in sensitive environments like finance, where they may act as stock market traders with access to private information. Protecting this data from unintended exposure is critical, while also ensuring the agents can operate in real time. These scenarios require solutions that balance performance and privacy. While MPC offers the strong privacy guarantees essential for handling sensitive data, its cryptographic operations introduce significant latency for AI agents, which need to run models with billions of parameters in under a second. On the other hand, TEEs excel in efficiency and are better suited to the compute-intensive demands of AI workloads, but, as mentioned above, relying solely on hardware may not achieve the level of privacy required for sensitive financial data. Neither technique alone is sufficient, due to these security and performance trade-offs, and that’s why at Nillion we are developing techniques that leverage both MPC and TEEs as part of our Blind Modules and the Petnet! The input, which is the most sensitive information, can be fed to some of the machine-learning model layers and processed under MPC using Curl in Nillion’s nilAI (i.e., the blue part in the picture above). The intermediate result is then less sensitive: it has been obfuscated and cannot be reversed to deduce the inputs, so it can be transferred to a TEE (or multiple TEEs) via Nillion’s nilAI for the remaining steps. Keep an eye out for more details of our work in this area!
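As a toy end-to-end sketch of this flow (simulated nodes and enclave, a hypothetical two-layer model, not the nilAI API), the first and most sensitive layer runs on additive shares across simulated MPC nodes, and only the obfuscated intermediate activation is reconstructed inside the enclave, where the remaining layers run at native speed:

```python
# Toy hybrid inference: layer 1 on additive secret shares (simulated MPC nodes),
# remaining layers on the reconstructed intermediate (simulated enclave).
# Real-valued shares are used purely for illustration; real MPC works over finite fields.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 2))  # toy model weights
x = rng.normal(size=4)                                     # sensitive client input

# --- MPC stage: 3 simulated nodes each hold one additive share of the input ---
shares = [rng.normal(size=4) for _ in range(2)]
shares.append(x - sum(shares))                             # shares sum to x
partials = [s @ W1 for s in shares]                        # each node computes locally

# --- TEE stage: only the intermediate activation is ever reconstructed ---
hidden = sum(partials)                                     # = x @ W1, inside the enclave
output = np.maximum(hidden, 0) @ W2                        # nonlinearity + final layer

# Sanity check: the hybrid split matches running the whole model in the clear.
assert np.allclose(output, np.maximum(x @ W1, 0) @ W2)
```

The sensitive input never leaves the MPC nodes; the enclave only ever sees the intermediate activation, which cannot be reversed to recover the original input.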
This approach, when applied to AI agents, provides a more general and flexible architectural framework for private computation. Privacy remains consistent, while security measures can be dynamically adjusted based on computational requirements and data sensitivity. Like a slider, we can balance performance and security, ensuring the solution meets users’ needs.
What’s next?
The combination of MPC and TEEs represents a paradigm shift in the approach to secure computation, leveraging the complementary strengths of these technologies to balance performance and privacy. Keep an eye out for our work on the hybrid approach for AI agents, which generalizes to applications in sensitive domains like healthcare, finance, and beyond. We have already released multiple Blind Modules: nilVM, focusing on general-purpose MPC and threshold signatures, and nilDB, focusing on MPC storage and analytics. Our upcoming nilAI will allow large language models to run securely inside CPU and GPU TEEs. Next up, we are extending our nilAI Blind Module with the aforementioned framework that combines MPC with multiple TEEs to enhance security while maintaining cutting-edge performance.
Follow @BuildOnNillion on X/Twitter for more updates like these
