Announcing Concrete ML v0.6

January 11, 2023 - Benoit Chevallier-Mames

Today, Zama announces the release of a new version of Concrete-ML. This release adds exciting new features such as fast, high-precision linear models and support for 16-bit accumulators in both built-in and custom neural networks. It also includes complete tutorials showing that 16-bit wide encrypted values provide a huge boost to deep neural networks on computer vision datasets.

Fast, high-precision built-in linear models. Linear models compute a linear combination of parameters and inputs; their generalized variants (GLMs) also apply a non-linear link function such as log or logit. Applying the link function on the client side lets the encrypted inference avoid costly programmable bootstrapping (PBS) operations. In this release, all Concrete-ML linear models have been optimized to perform only linear computations on encrypted data. Thanks to improvements in the Concrete stack, they can now execute with arbitrarily high precision (more than 30 bits) and very low latency (typically around 10 milliseconds, depending on the model), as shown in this tutorial.
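
As a rough sketch, the snippet below shows what this workflow looks like. The scikit-learn-style classes are part of Concrete-ML, but the execute_in_fhe predict flag is the v0.6-era name and may differ in other releases.

```python
# A minimal sketch of encrypted inference with a built-in linear model.
# LogisticRegression mirrors its scikit-learn namesake; the sigmoid link
# function runs client-side, so the encrypted part stays purely linear.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

from concrete.ml.sklearn import LogisticRegression

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)   # training happens on cleartext data
model.compile(X_train)        # compile the model to an FHE circuit

# Encrypted inference; execute_in_fhe is the v0.6-era flag name.
y_pred = model.predict(X_test, execute_in_fhe=True)
```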

Large accumulators for neural networks (16 bits). Support for 16-bit encrypted values has been added to Concrete-ML, which vastly improves the accuracy of neural networks and allows them to tackle more complex tasks. Both built-in and custom neural networks can use this new feature. In this tutorial, MNIST images are classified with high accuracy using easy-to-use, built-in neural nets, without requiring any understanding of quantization.
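
For illustration, here is a hedged sketch of configuring a built-in neural net with 16-bit accumulators; the module__n_accum_bits parameter and its companions are assumed from the v0.6-era skorch-based API.

```python
# A hedged sketch of a built-in neural net with 16-bit accumulators;
# the module__* quantization parameters reflect the v0.6-era API.
import numpy as np
from sklearn.datasets import load_digits

from concrete.ml.sklearn import NeuralNetClassifier

X, y = load_digits(return_X_y=True)         # small, MNIST-like digit images
X, y = X.astype(np.float32), y.astype(np.int64)

model = NeuralNetClassifier(
    module__input_dim=X.shape[1],
    module__n_outputs=10,
    module__n_layers=3,
    module__n_w_bits=4,        # weight bit width
    module__n_a_bits=4,        # activation bit width
    module__n_accum_bits=16,   # new: 16-bit encrypted accumulators
    max_epochs=10,
)
model.fit(X, y)                # quantization-aware training, handled for you
model.compile(X)               # compile to an FHE circuit
```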

Examples of challenging computer vision models. Custom, FHE-ready neural networks, especially convolutional neural networks, strongly benefit from using 16-bit encrypted values. With this feature, Concrete-ML can now handle more complicated computer vision tasks. Three new use-case examples are provided in the open-source repository, showing classification on the CIFAR10 and CIFAR100 datasets:

  • First, a VGG-like CNN tutorial, which trains a quantized model from scratch on CIFAR10 using quantization-aware training (QAT) and 16-bit wide encrypted accumulators. Trained this way, the quantized network reaches 88.7% accuracy.
  • Second, a fine-tuning tutorial based on the original VGG ImageNet model. Using fine-tuning and QAT, it builds a CIFAR100 classifier that reaches 68.2% Top-1 accuracy.
  • Finally, an 8-bit VGG-like model for which the input layer is computed on the client side in the clear. Keeping accumulators at 8 bits gives superior computation speed, but some layers, especially the input layer, require a larger bit width. This tutorial explains how to split a model into two parts: the first executed in the clear on the client, the second computed in the encrypted domain on the server (see the sketch after this list). This approach achieves 62% accuracy when training from scratch on the CIFAR10 dataset.
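
To make the split concrete, here is a hypothetical sketch: the ClearInput and EncryptedRest modules are illustrative stand-ins, and compile_torch_model compiles only the encrypted half. The real tutorial uses Brevitas QAT to keep the encrypted accumulators within 8 bits; plain post-training quantization with a small n_bits is shown here for simplicity.

```python
# A hypothetical sketch of the split-model approach: the input layer runs
# client-side in the clear; the remaining layers are compiled to FHE.
import torch
import torch.nn as nn
from concrete.ml.torch.compile import compile_torch_model

class ClearInput(nn.Module):            # client side, plaintext
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
    def forward(self, x):
        return torch.relu(self.conv(x))

class EncryptedRest(nn.Module):         # server side, encrypted
    def __init__(self):
        super().__init__()
        self.pool = nn.AvgPool2d(32)    # CIFAR10 images are 32x32
        self.fc = nn.Linear(8, 10)
    def forward(self, x):
        return self.fc(torch.flatten(self.pool(x), 1))

clear_part, encrypted_part = ClearInput(), EncryptedRest()
x = torch.randn(8, 3, 32, 32)           # CIFAR10-shaped calibration batch
with torch.no_grad():
    features = clear_part(x)            # computed in the clear, client-side

# Only the second half is compiled to an FHE circuit.
quantized_module = compile_torch_model(encrypted_part, features, n_bits=3)
```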

Better simulation and debugging features. FHE simulation lets users evaluate how a model behaves in FHE without running slow FHE computations on their (potentially large) test dataset. The simulation is exact: running the compiled circuit on cleartext data produces results identical to the FHE execution, so the accuracy measured in simulation matches that of the FHE-powered version. Since PBS operations have a small, configurable error probability, the ability to simulate PBS errors has also been added to the Virtual Library. Furthermore, the compiler and its optimizer now report more detailed information about the compiled FHE circuit and its cryptographic parameters.
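
As an example, simulation can be requested at compile time; the use_virtual_lib flag shown below is assumed to be the v0.6-era switch for the Virtual Library and may be named differently in later releases.

```python
# A hedged sketch of FHE simulation; the use_virtual_lib compile flag is
# assumed from the v0.6-era API.
from sklearn.datasets import make_classification

from concrete.ml.sklearn import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Tree-based models rely heavily on PBS, making simulation especially useful.
model = DecisionTreeClassifier(max_depth=4)
model.fit(X, y)

# Compile for simulation: predictions follow the compiled circuit but run
# on cleartext data, so even large test sets are evaluated quickly.
model.compile(X, use_virtual_lib=True)
y_sim = model.predict(X, execute_in_fhe=True)
```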

Concrete-ML live demo. The first-ever Hugging Face Space for Concrete-ML was released this quarter, showing how to analyze short encrypted texts and infer their sentiment polarity. The demo lets you see FHE working in real time and is accompanied by a blog post explaining how it works under the hood.

Read more related posts

  • Zama product announcement - January 2023. With these releases, Zama continues to build its suite of products to make fully homomorphic encryption accessible, easy, and fast.
  • Announcing TFHE-rs: a fast, pure Rust implementation of TFHE. TFHE-rs is a pure Rust implementation of TFHE for booleans and small integer arithmetic over encrypted data.
  • Announcing Concrete Numpy v0.9. A major compiler update improving performance, 16-bit integer table lookups, and many other new features.