Call for Research
1. Floating-Point Support for CoFHE
Integer-only message spaces in CoFHE enable quantized model inference, but supporting floating-point arithmetic could unlock CoFHE's potential for LLM training and for inference on non-quantized models. This paper invites research into integrating floating-point operations within class group encryption schemes, leveraging the multiple message subgroups that class groups support.
2. Verifiability in Privacy-Preserving AI
The absence of verifiability in remotely hosted LLM computations presents a significant challenge. This paper calls for research on developing efficient zero-knowledge proof systems for CoFHE, enabling verifiable operations in privacy-preserving AI.
3. Re-Encryption Protocols for CoFHE Systems
CoFHE’s threshold computation capabilities present a unique opportunity to implement secure re-encryption protocols.
4. GPU-Optimized Library for CoFHE
This paper invites research on designing a GPU-optimized implementation of CoFHE, focusing on big-number arithmetic for class group encryption. The work would contribute to a practical and scalable CoFHE library, enabling real-time privacy-preserving computation for AI and other high-performance applications.
5. Decentralized ChatGPT for Models Exceeding 50 Billion Parameters
Supporting LLM inference for models exceeding 50 billion parameters requires leveraging CoFHE for privacy-preserving operations. This research invites the development of a scalable framework, utilizing open-source models like DeepSeek, to build the first privacy-preserving sovereign AI system that balances performance and security.
6. Decentralized Privacy-Preserving Search Engines
Centralized search engines dominate the market, posing privacy concerns. This paper invites research on building a decentralized and privacy-preserving search engine using CoFHE. The work focuses on implementing vector search and encrypted data operations to rival existing centralized solutions.
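The core primitive behind the vector search mentioned above is a similarity score built entirely from additions and multiplications, the operations a homomorphic scheme like CoFHE can evaluate over ciphertexts. The plaintext sketch below is a hypothetical illustration of that primitive over integer-quantized embeddings (matching CoFHE's integer message space); the encryption layer itself is omitted.

```python
# Plaintext sketch of the scoring primitive an encrypted vector
# search would evaluate homomorphically: integer dot products
# (sums of products only -- both HE-friendly operations).
# Quantization targets an integer message space; SCALE is illustrative.

SCALE = 1000

def quantize(vec):
    """Map a real-valued embedding to integers."""
    return [round(x * SCALE) for x in vec]

def dot(q, d):
    """Integer dot product: only additions and multiplications."""
    return sum(qi * di for qi, di in zip(q, d))

def top_k(query, docs, k=1):
    """Rank documents by (scaled) dot-product similarity."""
    q = quantize(query)
    scores = [(dot(q, quantize(d)), i) for i, d in enumerate(docs)]
    scores.sort(reverse=True)
    return [i for _, i in scores[:k]]

docs = [[0.9, 0.1], [0.1, 0.9], [0.7, 0.7]]
assert top_k([1.0, 0.0], docs) == [0]   # closest to the first doc
```

In an encrypted deployment, the query (and possibly the index) would be ciphertexts, and the ranking step itself is a further research question, since comparisons are expensive under homomorphic encryption.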
7. Encrypted LLM Training on CoFHE
Current LLM training paradigms rely on plaintext datasets, limiting privacy and data owner incentives. This research explores training LLMs on encrypted datasets using CoFHE, enabling data owners to maintain privacy while receiving rewards for dataset usage. The paper seeks practical approaches to integrate privacy-preserving incentives into decentralized AI systems.
8. Sovereign LLMs On Decentralized Networks
Bitcoin exemplifies sovereign money by operating on a decentralized network of nodes, granting users true ownership of their assets without reliance on centralized intermediaries. Similarly, achieving sovereignty in AI requires eliminating centralized points of control. Current AI agents, hosted through API providers or within Trusted Execution Environments (TEEs), lack this sovereignty. This paper explores the design and development of an encrypted LLM using CoFHE, optimized for test-time computation and continual evolution on the OpenVector network. The research would aim to create a scalable, decentralized AI model that adapts dynamically, contributing a potential pathway toward achieving AGI.
9. Privacy-Preserving Vision Models for Embodied AI
Embodied AI systems like robotics and wearable devices (e.g., AR glasses) are the next big platform demanding privacy-preserving vision models. This research calls for integrating CoFHE to enable encrypted operations on vision models, ensuring user privacy while maintaining real-time processing capabilities. The work could pioneer advancements in privacy-preserving robotics and augmented reality applications.
10. New Transformer Architectures with CoFHE-Optimized Activation Functions
Current transformer architectures rely on activation functions that are computationally expensive to implement within homomorphic encryption frameworks. This paper invites research into developing CoFHE-optimized activation functions, paving the way for a new transformer architecture that seamlessly integrates with privacy-preserving computation. The work focuses on designing activations that are both cryptographically efficient and capable of maintaining or improving the model's performance, enabling secure and scalable AI computations.
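To make concrete why standard activations are expensive: homomorphic schemes natively evaluate only additions and multiplications, so an HE-friendly activation must reduce to a low-degree polynomial. The sketch below evaluates such polynomials with Horner's rule (one multiplication and one addition per coefficient, keeping multiplicative depth low). The coefficients shown are illustrative stand-ins, not a proposed CoFHE activation.

```python
# HE-friendly activations must be low-degree polynomials, since
# homomorphic schemes support only add/mul. Horner's rule keeps the
# multiplicative depth at (degree) multiplications.

def poly_activation(x, coeffs):
    """Evaluate c0 + c1*x + ... + cn*x^n using only add/mul."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

# The "square" activation x^2 used in early encrypted-inference work
# (e.g., CryptoNets) is the degree-2 special case:
square = [0.0, 0.0, 1.0]
assert poly_activation(3.0, square) == 9.0

# A degree-2 polynomial roughly tracking ReLU on [-1, 1]
# (coefficients chosen for illustration only):
relu_ish = [0.125, 0.5, 0.25]
assert abs(poly_activation(1.0, relu_ish) - 0.875) < 1e-9
```

The research question posed above is whether activations designed under this constraint from the start can match or exceed the accuracy of transformers built around GELU or SiLU.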
11. LLM-Driven Operating Systems
Current operating systems are not designed to learn the way LLMs do, yet for robotic systems this capability is essential. Integrating Large Language Models (LLMs) into humanoid robotics represents a significant advancement in creating intelligent, adaptable, and interactive systems. By embedding LLMs within the operating systems of humanoid robots, these machines can interpret and generate human-like language, facilitating more natural and intuitive human-robot interactions.
12. MPC Wallets for Crypto AI Agents
Once decentralized, AI agents will require private keys to execute crypto transactions autonomously. Threshold ECDSA, with the key distributed across an MPC network, can generate signatures on behalf of decentralized AI agents without any single node ever holding the full key.
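The key-distribution idea can be sketched as n-of-n additive secret sharing of the signing key modulo the curve's group order, so that no single node holds the key. This is only the first building block: a real threshold-ECDSA protocol (e.g., in the style of GG20) also produces signatures interactively without ever reconstructing the key, which is omitted here. The secp256k1 order is shown for concreteness.

```python
# Minimal sketch of key distribution for an MPC wallet: an ECDSA
# private key additively secret-shared mod the curve order, so no
# single node holds it. Interactive threshold signing is omitted;
# this is an illustration, not a production protocol.

import secrets

# Group order of secp256k1 (the curve used by Bitcoin and Ethereum).
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def split_key(sk: int, n_parties: int):
    """n-of-n additive sharing: the shares sum to sk modulo N."""
    shares = [secrets.randbelow(N) for _ in range(n_parties - 1)]
    shares.append((sk - sum(shares)) % N)
    return shares

def reconstruct(shares):
    """Recombine shares (done implicitly, in-protocol, when signing)."""
    return sum(shares) % N

sk = secrets.randbelow(N)
shares = split_key(sk, 5)
assert reconstruct(shares) == sk
```

Each share individually is uniformly random and reveals nothing about the key; a t-of-n threshold variant would replace additive shares with Shamir secret sharing.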
If you are interested in any of the above research opportunities, please email support@openvector.ai. We would love to fund and collaborate with you.