Concrete ML v1.6: Bigger Neural Networks and Pre-trained Tree-based Models

July 5, 2024
Andrei Stoian

Concrete ML v1.6 improves latency on large neural networks, adds support for pre-trained tree-based models, and eases collaborative computation by introducing DataFrame schemas and by simplifying the deployment of logistic regression training. GPU support will be available in Concrete ML very soon; some early latency results are given below.

Pre-trained tree-based models

Concrete ML has long supported the conversion of pre-trained linear models through [.c-inline-code]from_sklearn[.c-inline-code] and pre-trained neural networks through [.c-inline-code]compile_torch_model[.c-inline-code]. Pre-trained models are popular since training models from scratch is error-prone and requires more in-depth machine learning knowledge. Furthermore, implementing specific training algorithms, such as federated learning, requires separate specialized toolkits, yet secure deployment of the trained models remains essential. Concrete ML v1.6 now supports importing pre-trained tree-based models using the [.c-inline-code]from_sklearn[.c-inline-code] function. The default import settings ensure that accuracy on encrypted data matches that of the original model running in the clear. Refer to the documentation for more information.
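As a minimal sketch of this workflow, the snippet below trains a scikit-learn decision tree in the clear and then shows, in comments, how the pre-trained model could be imported into Concrete ML. The Concrete ML lines are an assumption based on the [.c-inline-code]from_sklearn[.c-inline-code] name mentioned above; check the documentation for the exact signature.

```python
# Train a scikit-learn tree in the clear, then import it into Concrete ML.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=8, random_state=0)
clear_model = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

# Import the pre-trained tree into Concrete ML (assumed API, per the post;
# the import path and signature may differ -- see the documentation):
# from concrete.ml.sklearn import DecisionTreeClassifier as ConcreteDTC
# fhe_model = ConcreteDTC.from_sklearn(clear_model, X)  # calibrates on X
# fhe_model.compile(X)
# y_enc = fhe_model.predict(X, fhe="execute")  # inference on encrypted data

print(clear_model.score(X, y))
```

The key point is that the model is trained entirely with standard scikit-learn tooling; only the final import and compile steps involve Concrete ML.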

Latency improvements

Two new notebooks showcase the latency improvements for pre-trained neural networks in Concrete ML v1.6:

  • Deep MLP model: This notebook demonstrates a 20-layer deep MLP. It compiles a pre-trained model and executes on encrypted data with a latency of 1 second on an AWS hpc7 instance, 20x faster than the previous results from the Zama whitepaper.
  • ResNet18 model: This notebook demonstrates a ResNet18 model on ImageNet, executing on encrypted 256-pixel-wide images. With a latency of approximately 56 minutes on GPUs, this model shows a 4x improvement over the state of the art in TFHE-based ML. The next release will enable end-user GPU support.

Deployment Enhancements

With Concrete ML v1.6, developers can easily deploy logistic regression training as a client-server service. As in previous versions, developers can parametrize the training system, selecting the number of features to train on and the training hyper-parameters. In addition, Concrete ML v1.6 allows packaging the training circuit for deployment to the cloud. See the encrypted training documentation for more details.
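To make the client-server split concrete, here is an illustrative sketch of the message flow in such a service. All class and method names below are hypothetical stand-ins, not Concrete ML APIs; the point is only that the server evaluates the packaged training circuit on ciphertexts, while encryption and decryption happen exclusively on the client.

```python
# Hypothetical sketch of an encrypted-training message flow.
# The server never sees plaintext data or weights.
from dataclasses import dataclass


@dataclass
class TrainingServer:
    """Hosts the packaged training circuit; operates only on ciphertexts."""
    circuit_package: bytes  # the packaged circuit deployed to the cloud

    def run_iteration(self, enc_batch: bytes, enc_weights: bytes) -> bytes:
        # One homomorphic SGD step (placeholder ciphertext for illustration).
        return b"enc_updated_weights"


@dataclass
class TrainingClient:
    """Holds the keys; encrypts batches and decrypts the final weights."""

    def encrypt_batch(self, features, labels) -> bytes:
        return b"enc_batch"  # placeholder: encrypt features and labels

    def decrypt_weights(self, enc_weights: bytes) -> list:
        return [0.0]  # placeholder: decrypt the trained model weights


client = TrainingClient()
server = TrainingServer(circuit_package=b"...")
enc_w = server.run_iteration(client.encrypt_batch([[0.0]], [0]), b"enc_w0")
weights = client.decrypt_weights(enc_w)
```

In a real deployment, the loop over [.c-inline-code]run_iteration[.c-inline-code] would be driven by the client until training converges, with only ciphertexts crossing the network.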

DataFrame schemas

The updated DataFrame API in Concrete ML v1.6 reduces the size of stored DataFrames and lets users manually control the schema of the DataFrames they encrypt. Schemas describe the encrypted data, enabling multiple users to make their data compatible with each other. The feature is demonstrated in the encrypted DataFrame notebook.
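The idea behind shared schemas can be sketched without any FHE machinery: two parties agree on column names and value ranges before encrypting, so their encrypted DataFrames can later be combined. The schema format below is a hypothetical illustration, not the Concrete ML on-disk format.

```python
# A shared schema both parties agree on before encrypting their data.
schema = {
    "user_id": {"min": 0, "max": 2**16 - 1},
    "score":   {"min": 0, "max": 100},
}

def conforms(rows, schema):
    """Check each row against the agreed schema before encryption."""
    for row in rows:
        if set(row) != set(schema):
            return False  # column names must match exactly
        for col, spec in schema.items():
            if not spec["min"] <= row[col] <= spec["max"]:
                return False  # values must fit the agreed range
    return True

alice = [{"user_id": 1, "score": 88}]
bob = [{"user_id": 2, "score": 101}]  # score 101 is out of range

print(conforms(alice, schema), conforms(bob, schema))
```

Catching a mismatch like Bob's out-of-range value before encryption is the point: once the data is encrypted, neither party can inspect it to diagnose an incompatibility.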

The new features and improvements in this release enhance both the performance and the usability of Concrete ML. The forthcoming GPU support will bring even greater speed-ups in the near future. Stay tuned!
