
Benchmarks

To showcase the performance improvements so far, we have run CPU-based benchmarks. MatSum denotes the addition of two encrypted matrices, MatMulA denotes the multiplication of an encrypted matrix with a plaintext matrix, and MatMulB denotes the multiplication of two encrypted matrices. Below we compare the CoFHE, Microsoft SEAL (CKKS), and Zama Concrete ML (TFHE) libraries on input matrices of size 64×64 for MatSum and 8×64 for MatMulA and MatMulB, with a 64×64 weight matrix in all cases. The table shows the amortized cost of each operation over 50 sequential operations. "–" indicates a memory overflow.

| Framework | MatSum Latency | MatMulA Latency | MatMulB Latency |
| --- | --- | --- | --- |
| CoFHE | 4.06 ms | 66.6 ms | 4224 ms |
| Microsoft SEAL | 0.04 ms | 369.17 s | – |
| Zama Concrete ML | 480 ms | 52 s | – |
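For reference, the sketch below reproduces the measurement methodology on plaintext NumPy arrays: each operation is run 50 times sequentially and the total wall-clock time is divided by the number of runs, using the same matrix shapes as the benchmark. This is a minimal illustration of the amortized-latency calculation only; the real benchmark performs these operations on ciphertexts through the respective FHE libraries, and none of the names here are part of the CoFHE API.

```python
import time
import numpy as np

def amortized_latency(op, runs=50):
    """Average wall-clock latency of `op` over `runs` sequential executions."""
    start = time.perf_counter()
    for _ in range(runs):
        op()
    return (time.perf_counter() - start) / runs

# Matrix shapes from the benchmark, as plaintext NumPy stand-ins.
a = np.random.rand(64, 64)  # MatSum operand (encrypted in the real benchmark)
b = np.random.rand(64, 64)  # MatSum operand (encrypted)
x = np.random.rand(8, 64)   # MatMulA/MatMulB input (encrypted)
w = np.random.rand(64, 64)  # weight: plaintext in MatMulA, encrypted in MatMulB

print("MatSum :", amortized_latency(lambda: a + b), "s")  # 64x64 + 64x64
print("MatMulA:", amortized_latency(lambda: x @ w), "s")  # (8x64) @ (64x64)
print("MatMulB:", amortized_latency(lambda: x @ w), "s")  # same shapes, both operands encrypted in CoFHE
```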

Since no bootstrapping is required, the amortized latency of our proposed MatSum and MatMul operations can be reduced significantly further by leveraging the parallel computing capabilities of GPUs.
