# Benchmarks

To showcase the performance improvements so far, we ran CPU-based benchmarks on three operations: MatSum, the addition of two encrypted matrices; MatMulA, the multiplication of an encrypted matrix by a plaintext matrix; and MatMulB, the multiplication of two encrypted matrices.

Below we compare the CoFHE, Microsoft SEAL (CKKS), and Zama Concrete (TFHE) libraries for an input matrix of size 64\*64 for MatSum and 8\*64 for MatMulA and MatMulB, with a weight matrix of size 64\*64 in all cases. The table shows the amortized cost per operation over 50 sequential operations. "-" indicates a memory overflow.

<table><thead><tr><th>Framework</th><th width="207">MatSum Latency</th><th width="161">MatMulA Latency</th><th>MatMulB Latency</th></tr></thead><tbody><tr><td>CoFHE</td><td>4.06 ms</td><td>66.6 ms</td><td>4224 ms</td></tr><tr><td>Microsoft SEAL</td><td>0.04 ms</td><td>369.17 s</td><td>-</td></tr><tr><td>Zama ConcreteML</td><td>480 ms</td><td>52 s</td><td>-</td></tr></tbody></table>
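The amortized figures above follow the usual definition: total wall-clock time for a run of sequential operations divided by the number of operations. A minimal sketch of that measurement is below; the `amortized_latency` helper and the plain NumPy matrix addition standing in for an encrypted MatSum are illustrative assumptions, not the actual CoFHE API.

```python
import time
import numpy as np

def amortized_latency(op, n_ops=50):
    """Run `op` sequentially n_ops times and return the per-operation
    latency in milliseconds (total wall-clock time / n_ops)."""
    start = time.perf_counter()
    for _ in range(n_ops):
        op()
    total_s = time.perf_counter() - start
    return (total_s / n_ops) * 1000.0

# Hypothetical stand-in workload: plaintext 64x64 matrix addition,
# in place of an encrypted MatSum.
a = np.random.rand(64, 64)
b = np.random.rand(64, 64)
print(f"MatSum amortized latency: {amortized_latency(lambda: a + b):.4f} ms")
```

Running more sequential operations tightens the amortized estimate by averaging out per-call overhead such as timer resolution and cache warm-up.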

Since no bootstrapping is required, the amortized latency of our proposed MatSum and MatMul operations can be reduced significantly by leveraging the parallel computing capabilities of GPUs.
