Encrypted Image Filtering Using Homomorphic Encryption

February 23, 2023
Roman Bredehoft

Zama has created a new Hugging Face space to apply filters over images homomorphically using the developer-friendly tools in Concrete-Numpy and Concrete-ML. This means the data is encrypted both in transit and during processing.

Concrete-Numpy is a Python library that allows computation directly on encrypted data without needing to decrypt it first. Its API makes it straightforward to convert regular Python functions into their FHE-equivalent circuits and then deploy them within a client-server interface. Similarly, Concrete-ML lets you use machine learning models in FHE settings without any prior knowledge of cryptography.

Here, you’ll use the utility functions in Concrete-ML to build image processing filters from Torch models. This opens the door to machine-learning-based filters, such as image enhancement or noise reduction.


Create and activate a virtual environment using Python 3.8.15.

python -m venv .venv 
source .venv/bin/activate (Linux)
.venv\Scripts\activate (Windows)

Install the libraries: Concrete-ML 0.6.1, which comes with Concrete-Numpy 0.9.0.

pip install --upgrade pip
pip install concrete-ml==0.6.1


Using a simple Torch module offers a convenient way of building filters, mainly through the use of convolution operators. You can also use an interface for integrating additional image processing computations built through machine learning models.

Convolution is an important operator because the most commonly used filters (blurring, sharpening) are built from 2D kernels. Torch’s convolution does not follow the same shape conventions as those usually found in Numpy arrays, so some reshape operations are added before and after the operator. These operations are executed in FHE alongside the convolution.

Build a filter Torch model using a 2D kernel:

def forward(self, x):
    """Forward pass with a single convolution using a 2D kernel."""
    # Expand the 2D kernel to the weight shape expected by conv2d:
    # (n_out_channels, n_in_channels // groups, kernel_height, kernel_width)
    kernel = self.kernel.expand(
        self.n_out_channels,
        self.n_in_channels // self.groups,
        self.kernel.shape[0],
        self.kernel.shape[1],
    )

    # Reshape the input to (1, 3, Height, Width)
    x = x.transpose(2, 0).unsqueeze(axis=0)

    # Apply the convolution
    x = nn.functional.conv2d(x, kernel, groups=self.groups)

    # Reshape the output to (Height, Width, 3)
    x = x.transpose(1, 3).reshape((x.shape[2], x.shape[3], self.n_out_channels))

    return x

Create a sharpen filter by defining the kernel:

kernel = [
    [0, -1, 0],
    [-1, 5, -1],
    [0, -1, 0],
]

sharpen_filter = TorchConv(kernel, n_out_channels=3, groups=3)
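To see what this kernel computes, here is a minimal clear-domain sketch in plain NumPy. The helper `conv2d_single_channel` is hypothetical and only illustrative (the real filter uses Torch's `conv2d`, as above); it shows why a flat region passes through the sharpen kernel unchanged: the kernel's weights sum to 1.

```python
import numpy as np

def conv2d_single_channel(image, kernel):
    """Apply a 2D kernel to one channel with 'valid' padding (illustrative)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1), dtype=np.int64)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Weighted sum of the kernel-sized neighborhood
            out[i, j] = np.sum(image[i : i + kh, j : j + kw] * kernel)
    return out

sharpen_kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])

# A constant 5x5 channel stays constant: 5*10 - 4*10 = 10 at every position
flat = np.full((5, 5), 10, dtype=np.int64)
print(conv2d_single_channel(flat, sharpen_kernel))
```

Edges and isolated bright pixels, by contrast, get amplified, which is what produces the sharpening effect.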

Images and kernels must be made of integers only, because the FHE implementation used in Concrete-Numpy currently supports only integer computations.
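For a filter whose natural kernel is made of floats (a normalized blur, for example), one possible approach is to scale the kernel to integers before building the circuit and divide the result back in the clear after decryption. A small sketch; the `scale` bookkeeping here is an assumption of this example, not a Concrete-ML API:

```python
import numpy as np

# A normalized 3x3 box-blur kernel is made of floats...
float_kernel = np.full((3, 3), 1 / 9)

# ...so scale it to integers before building the FHE circuit
scale = 9
int_kernel = np.round(float_kernel * scale).astype(np.int64)
print(int_kernel)

# After decryption, divide the filter's output by `scale` in the clear
```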

Simpler cases, such as inverting the colors of an RGB image, can be implemented as shown below. The Hugging Face Space demo includes more examples.

class TorchInverted(nn.Module):
   """Torch inverted model."""

   def forward(self, x):
       """Forward pass for inverting an image's colors."""
       return 255 - x

These computations are executed in FHE. Input and output images also require some processing in the clear. Output values, for instance, need to be clipped to the valid RGB range, as the filters themselves don't enforce this constraint.

def post_processing(output_image):
   """Apply post-processing to the encrypted output images."""

   output_image = output_image.clip(0, 255)
   return output_image


Once the filters are defined, compile them to their equivalent FHE circuit in order to apply them on encrypted images. Concrete-ML provides tools that allow any Torch model to be compiled in Concrete-Numpy. 

First, import the tools:

import numpy as np
import torch

from concrete.numpy.compilation.compiler import Compiler

from concrete.ml.common.utils import generate_proxy_function
from concrete.ml.torch.numpy_module import NumpyModule

The compiler only considers static shapes, so this tutorial uses RGB images of shape (100, 100, 3). If this shape has to change, the filters need to be compiled again.

INPUT_SHAPE = (100, 100, 3)

The compilation process considers a representative set of inputs (see the documentation). The images in this set indicate to the compiler the representative ranges of values needed for computing the cryptographic parameters.

This input set is generated by creating synthetic RGB images of random integers (3 channels, with values from 0 to 255). It is large enough to ensure the images cover different ranges of values. Alternatively, the input set could be composed of several existing images, as long as they are all equal in size.

Compile the `sharpen_filter`:

# Generate the input set made of random RGB images
inputset = tuple(
    np.random.randint(0, 256, size=INPUT_SHAPE, dtype=np.int64) for _ in range(100)
)

# Convert the Torch module to its Numpy equivalent
numpy_module = NumpyModule(
    sharpen_filter,
    dummy_input=torch.from_numpy(inputset[0]),
)

# Create a proxy function with named parameters for the compiler
numpy_filter_proxy, parameters_mapping = generate_proxy_function(
    numpy_module.numpy_forward, ["inputs"]
)

# Compile the module and retrieve the FHE circuit
compiler = Compiler(
    numpy_filter_proxy,
    {parameters_mapping["inputs"]: "encrypted"},
)
fhe_circuit = compiler.compile(inputset)


Concrete-Numpy offers tools for creating a Client-Server interface using FHE circuits, including the ability to save and load serialized client and server instances as well as encrypt inputs, execute a FHE circuit, and decrypt outputs.

You can then build a simple Client-Server interface that handles these filters using Concrete-Numpy’s API.

Development interface.

Save the necessary files for developing the interface:

def save(filter, path_dir):
    """Export all needed artifacts for the client and server interfaces."""

    # Save the circuit for the server
    path_circuit_server = path_dir / "server.zip"
    filter.fhe_circuit.server.save(path_circuit_server)

    # Save the circuit for the client
    path_circuit_client = path_dir / "client.zip"
    filter.fhe_circuit.client.save(path_circuit_client)

Load the client and server interfaces:

import concrete.numpy as cnp   

# Load the server
server = cnp.Server.load(path_dir / "server.zip")

# Load the client and indicate where to store the private keys
client = cnp.Client.load(path_dir / "client.zip", key_dir)

Client interface.

There are two main steps in the client interface: encrypting the inputs and decrypting the outputs. As with the filter, some preprocessing is required before encryption. Then, these encrypted images need to be serialized to be sent to or received from the server.

Pre-process, encrypt, and serialize an input image before sending it to the server:

def encrypt_serialize(client, filter, input_image):
    """Pre-process, encrypt and serialize the input image."""

    # Pre-process the image in the clear
    preprocessed_image = filter.pre_processing(input_image)

    # Encrypt the image
    encrypted_image = client.encrypt(preprocessed_image)

    # Serialize the encrypted image to be sent to the server
    serialized_encrypted_image = client.specs.serialize_public_args(encrypted_image)
    return serialized_encrypted_image

Deserialize, decrypt, and post-process an output received from the server:

def deserialize_decrypt_post_process(client, filter, serialized_encrypted_output_image):
    """Deserialize, decrypt and post-process the output image in the clear."""

    # Deserialize the encrypted image
    encrypted_output_image = client.specs.unserialize_public_result(
        serialized_encrypted_output_image
    )

    # Decrypt the image
    output_image = client.decrypt(encrypted_output_image)

    # Post-process the image
    post_processed_output_image = filter.post_processing(output_image)

    return post_processed_output_image

Generate the keys and retrieve the serialized evaluation keys before sending them to the server:

# Generate the private and evaluation keys
client.keys.generate()

# Retrieve the serialized evaluation keys
serialized_evaluation_keys = client.evaluation_keys.serialize()

Server interface.

In order to execute the filter in FHE, the server needs to load the FHE circuit and run it over the encrypted input image using the evaluation keys. As with the client interface, Concrete-Numpy’s API lets you easily serialize and deserialize these objects.

def run(server, serialized_encrypted_image, serialized_evaluation_keys):
    """Run the filter on the server over an encrypted image."""

    # Deserialize the encrypted input image and the evaluation keys
    encrypted_image = server.client_specs.unserialize_public_args(
        serialized_encrypted_image
    )
    evaluation_keys = cnp.EvaluationKeys.unserialize(serialized_evaluation_keys)

    # Execute the filter in FHE
    encrypted_output = server.run(encrypted_image, evaluation_keys)

    # Serialize the encrypted output image
    serialized_encrypted_output = server.client_specs.serialize_public_result(
        encrypted_output
    )

    return serialized_encrypted_output


Check out the Hugging Face space to see the above steps in action. You can upload an image and then pick a filter. Uploaded images are automatically cropped and resized to a (100, 100) shape under the hood.
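The cropping and resizing happen in the clear before encryption. Under stated assumptions, it can be sketched as a center-crop to a square followed by a nearest-neighbor resize; the actual demo may use a different resampling method, so treat `crop_resize` as a hypothetical helper:

```python
import numpy as np

def crop_resize(image, size=100):
    """Center-crop an RGB array to a square, then nearest-neighbor resize."""
    h, w, _ = image.shape
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    cropped = image[top : top + side, left : left + side]
    # Nearest-neighbor resize: map each output index back to a source index
    idx = (np.arange(size) * side) // size
    return cropped[idx][:, idx]

image = np.random.randint(0, 256, size=(120, 160, 3))
print(crop_resize(image).shape)  # (100, 100, 3)
```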

You then generate the keys and encrypt the image. 

Next, send the encrypted image to the server, which executes the chosen filter in FHE, and receive the encrypted output. The server has access to neither the input image nor the output result: everything is encrypted from start to finish. Execution takes no more than a few seconds per image on a machine with 8 vCPUs.

The demo displays the encrypted output’s representation. This is simulated by generating a random private key and using it to decrypt the output. As the real key would be unknown to an attacker, nothing more than an image with random pixels can be seen. The legitimate user, however, retains access. 

Currently, both the client and the server run on the same machine for technical reasons independent of the framework. Future demos will run them on separate machines.

Finally, decrypt the output and compare it to the original image.
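As a stand-in for the demo's side-by-side display, here is a hypothetical clear-domain comparison using the color-inversion filter from earlier (plain NumPy, no FHE involved), which also sanity-checks that inverting twice recovers the original:

```python
import numpy as np

original = np.random.randint(0, 256, size=(100, 100, 3))

# What a decrypted output of the inverted-colors filter would look like
decrypted_output = (255 - original).clip(0, 255)

# Inverting twice recovers the original image exactly
assert ((255 - decrypted_output) == original).all()
print("Round-trip inversion matches the original image")
```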


This Hugging Face space allows anyone to interact with the fundamental principles of FHE as applied to image processing. Zama first introduced encrypted image filtering as dummy code in the 6-minute introduction to homomorphic encryption. Now, in real time, you see how to build common image processing filters using Torch models. With the Concrete-Numpy library and Concrete-ML, you can easily convert these models into their equivalent FHE circuit and then deploy the circuit with a Client-Server interface. This interface could be used to create a service that lets users apply any image filter while keeping the image private with respect to the server. It could also be possible to create filters based on machine learning models, such as image enhancement or noise reduction.
