Privacy Preserving Machine Learning for Healthcare using CrypTFlow

Pratik Bhatu
9 min read · Mar 1, 2021


Machine learning in healthcare is a very exciting and active research area with great potential to improve the healthcare landscape. It can be used to assist medical professionals in tasks like segmentation of tumors, detection of pathologies, and prognosis of diseases. There are various active projects, like Microsoft’s InnerEye and Stanford’s CheXpert, that are building ML models to help with these tasks. While a lot of work is being done to improve these ML models, there is a hurdle when it comes to practically deploying them in the real world.

Pathology detection from chest X-rays (Image from CheXpert: A Large Chest X-Ray Dataset and Competition)

A common scenario is as follows. The radiology lab shares a patient’s data, say a chest x-ray, with a company that specializes in machine learning for healthcare. The company (the ML provider) then does the computation and sends the inferred prognosis back to the radiology lab.

Medical prognosis using machine learning (Image by author using [1], [2], [3], [4])

However, a patient’s medical data is sensitive and needs to be handled in a confidential and secure manner. It is subject to strict privacy regulations that vary across jurisdictions. For the radiology lab to share data with an ML provider, both parties have to enter into a complex legal agreement, which might also require the patient’s consent. Even after this, there is a chance that the data leaks, since one more party, the ML provider, now has access to this sensitive data; this increases the attack surface. A solution could be for the ML provider to share its technology with the radiology lab, but since the technology is proprietary, the provider may be unwilling to do so.

So is it possible to do this computation without the radiology lab ever sharing the patient’s sensitive data and without the ML provider sharing its proprietary model?

Using CrypTFlow

CrypTFlow[9][10] is an open-source system that converts TensorFlow and ONNX models into Secure Multi-Party Computation (MPC) protocols. MPC is a powerful cryptographic technique that allows mutually distrusting parties to compute a publicly known joint function of their secret inputs in a way that the parties learn nothing about each other’s inputs beyond what is revealed by their (possibly different) outputs. While we will look at an example from the healthcare domain below, CrypTFlow is also applicable to any domain where data is private, such as finance.
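To build some intuition for how a computation can be split across parties without revealing the inputs, here is a toy Python sketch of additive secret sharing over a 64-bit ring. This is only an illustration of the general idea, not the actual SCI protocol that CrypTFlow uses.

import secrets

MOD = 2 ** 64  # work in the ring of 64-bit integers

def share(x):
    # Split x into two additive shares that individually look like random noise.
    r = secrets.randbelow(MOD)
    return r, (x - r) % MOD

# Party A secret-shares its input, Party B does the same.
a0, a1 = share(42)      # e.g. a pixel value
b0, b1 = share(1000)    # e.g. a model weight

# Each party adds its local shares; no single share reveals 42 or 1000.
s0 = (a0 + b0) % MOD
s1 = (a1 + b1) % MOD

# Combining the result shares yields the sum of the secrets.
assert (s0 + s1) % MOD == 42 + 1000

Real MPC protocols extend this idea to multiplications and non-linear operations, which is where most of the cryptographic machinery (and cost) lies.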

The computation we want to perform is inference on the ML model. The data private to the radiology lab is the patient’s x-ray, and the data private to the ML provider is the model weights. The computation itself is known to both parties, so the radiology lab is aware of the model architecture. For this, the provider needs to share a pruned version of the model with the radiology lab, one that does not contain the model weights (its secret sauce).

Stripping of model weights (Image by author using [3])

Once both parties have this pruned model they can use CrypTFlow to generate MPC protocols for it.

CrypTFlow Compiler. (Image by author using [3], [5], [6], [7], [8])

The radiology lab additionally needs to preprocess the x-ray as required by the model; the details for that can be shared by the ML provider with the radiology lab. The processed x-ray is then converted from floating point to fixed point, as all computations in CrypTFlow use fixed-point values.

Pre-processing of client data. (Image by author using [1])

Now we are ready for the multi-party computation to begin. MPC protocols are interactive in nature, with both parties exchanging multiple messages as the computation progresses. However, these messages are random and provably reveal nothing about the private inputs.

Run time execution of MPC Protocol. (Image by author using [2], [5], [6])

At the end of the computation, the radiology lab obtains the prognosis report.

Let’s walk through the steps involved in using CrypTFlow. You can download and set up CrypTFlow following the instructions from its GitHub. CrypTFlow takes as input TensorFlow frozen graphs in the .pb protobuf format or .onnx models.

We will compile a ShuffleNet V2 model that makes a prognosis of COVID-19 from chest x-rays. We leveraged Azure Machine Learning and Custom Vision to train the model using data from here. One can upload training data to Custom Vision, and it will automatically train a model for the task. One can then export the TensorFlow model and the inference scripts using the export button and choosing the Docker option. This downloads a zip file which contains a model.pb file and an inference script, predict.py. If you don’t want to train the model from scratch, you can instead clone the cryptflow-demo repository for running the demo.

Clone the CrypTFlow repository and follow the setup instructions from here. Then load the virtual environment that was installed while setting up CrypTFlow.

source path/to/EzPC/mpc_venv/bin/activate

Next, we clone the cryptflow-demo repository and follow the steps in its requirements section. We can view the model.pb it contains using Netron and see that the model weights are embedded in the nodes. A portion of this model looks like:

Screenshot of a ShuffleNet V2 tensorflow .pb model viewed in Netron.

The model takes as input a chest x-ray resized to 224x224 and outputs a vector containing the likelihoods of [COVID19, Normal, Pneumonia]. The Placeholder node is the input node, where the ? in its shape represents a variable batch size. Since we are running inference on a single image, we will set the batch size to 1 while compiling. From the cryptflow-demo directory (ensure the mpc_venv is activated), to run the model on the covid19positive image do:

python run_model.py covid19positive.jpeg

This preprocesses the input image by resizing it and then runs the TensorFlow model on it. It outputs:

Output =  [[  6.9056687  -4.3557773 -18.939848 ]]

We can see that the 0th index has the highest value, so the inferred label for this image is COVID19, which is correct.
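For intuition, here is a minimal sketch of what a script like run_model.py does in the clear: load the frozen graph, preprocess the image, and run a session on the Placeholder input and fc/fc output nodes. This is only an approximation; the actual script exported by Custom Vision may normalize or reorder channels differently.

import numpy as np
import tensorflow as tf  # uses the tf.compat.v1 API for frozen graphs
from PIL import Image

# Load the frozen graph.
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.compat.v1.import_graph_def(graph_def, name="")

# Preprocess: resize the x-ray to the 224x224 input the model expects.
img = Image.open("covid19positive.jpeg").convert("RGB").resize((224, 224))
x = np.asarray(img, dtype=np.float32)[np.newaxis, ...]  # shape (1, 224, 224, 3)

# Run inference using the input/output node names of this model.
with tf.compat.v1.Session(graph=graph) as sess:
    out = sess.run("fc/fc:0", feed_dict={"Placeholder:0": x})

print("Output =", out)
print("Predicted:", ["COVID19", "Normal", "Pneumonia"][int(np.argmax(out))])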

Now we will compile this model with CrypTFlow. To compile the model, from the cryptflow-demo directory do:

python path/to/EzPC/Athos/CompileTFGraph.py --config config.json --role server

where the contents of config.json are:

{
  "model_name": "model.pb",
  "input_tensors": { "Placeholder": "1,224,224,3" },
  "target": "SCI",
  "scale": 20,
  "output_tensors": [ "fc/fc" ]
}

We specify the target as SCI[10], as that is the two-party computation protocol in CrypTFlow. The role is specified as server since we own the model. The input_tensors field expects the name of the input node in the TensorFlow graph along with its shape, and the output_tensors field expects the name of the output node. The scale is used for converting floating point to fixed point (see the appendix for further information). This generates the following files:

  • model_SCI_OT.out: The compiled MPC program.
  • model_input_weights_fixedpt_scale_20.inp: The model weights in fixed point.
  • client.zip: A zip file that contains the pruned model, metadata and the config.json.

We can see that the pruned model in client.zip (optimised_model.pb) has been stripped of weights, which are instead replaced with “Variable” nodes that read input from the user.

Model after weight stripping. (Screenshot of model viewed in Netron)
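If you want to double-check the stripping locally, you can count the node types in the original and the pruned graph. Below is a minimal sketch assuming TensorFlow and the file names used above; the exact op name of the inserted variable nodes may differ from what is shown in Netron.

from collections import Counter
import tensorflow as tf

def op_counts(path):
    # Return a count of the node op types in a frozen .pb graph.
    graph_def = tf.compat.v1.GraphDef()
    with tf.io.gfile.GFile(path, "rb") as f:
        graph_def.ParseFromString(f.read())
    return Counter(node.op for node in graph_def.node)

# Weights in a frozen graph show up as Const nodes; the pruned graph should
# have far fewer of them and variable-style nodes in their place.
print("original:", op_counts("model.pb").most_common(5))
print("pruned:  ", op_counts("optimised_model.pb").most_common(5))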

This client.zip is to be shared with the client, who can then extract the files and compile the model with:

python path/to/EzPC/Athos/CompileTFGraph.py --config config.json --role client

This generates the same model_SCI_OT.out program for the client. The client now needs to pre-process its x-ray data according to the inference script and additionally convert it to fixed point. The relevant parts of the predict.py script from Custom Vision have been extracted out, and we can dump the processed image as a numpy array using pre_process.py. We can then convert this to fixed point using convert_np_to_fixedpt.py:

python pre_process.py covid19positive.jpeg
python path/to/EzPC/Athos/CompilerScripts/convert_np_to_fixedpt.py --inp xray.npy --config config.json

This dumps an xray_fixedpt_scale_20.inp file, which is to be used by the client as input. We are now ready to run the computation. The server runs the program and feeds it the model weights:

./model_SCI_OT.out r=1 p=12345 < model_input_weights_fixedpt_scale_20.inp

And the client runs the program and feeds it the x-ray:

./model_SCI_OT.out r=2 ip=SERVER_IP_ADDRESS p=12345 < xray_fixedpt_scale_20.inp > output.txt

Here p specifies a port number and ip is the IP address of the server. If the client and server are running on the same machine, you can use 127.0.0.1. When the computation finishes, the inference result is stored in fixed point in output.txt on the client’s machine, and the server learns nothing. To retrieve the result in floating point, the client should then run:

python path/to/EzPC/Athos/CompilerScripts/get_output.py output.txt config.json

This dumps the floating-point output in model_output.npy as a flattened numpy array. Next, we need to verify the results. To get the original TensorFlow output, do:

python run_tf.py xray.npy 

This dumps the output in tensorflow_output.npy. Next, we compare it with our generated output:

python path/to/EzPC/Athos/CompilerScripts/comparison_scripts/compare_np_arrs.py -v -i tensorflow_output.npy model_output.npy 

And we get the output: Arrays matched upto 1 decimal points.

We can see that the actual values do in fact match up to 1 decimal point:

Tensorflow: [  6.9056687  -4.3557773 -18.939848 ]
CrypTFlow: [ 6.9075231 -4.3768930 -18.942830 ]
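The same check can be done directly with numpy. Here is a minimal sketch that loads both arrays, measures their difference, and maps the secure result back to a class label, assuming the label ordering stated earlier:

import numpy as np

tf_out = np.load("tensorflow_output.npy").flatten()
mpc_out = np.load("model_output.npy").flatten()

# The two outputs should agree to roughly one decimal place.
print("max abs difference:", np.max(np.abs(tf_out - mpc_out)))

# Map the secure inference result back to a label.
labels = ["COVID19", "Normal", "Pneumonia"]
print("Predicted label:", labels[int(np.argmax(mpc_out))])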

So in this post we saw how we can use CrypTFlow to securely run a trained ShuffleNet V2 model that makes a prognosis of COVID-19 from chest x-rays. The ML provider shares the model architecture with the radiology lab, and then both parties compile this model and execute the generated MPC protocol. The radiology lab does not share the patient’s private x-ray, and the ML provider does not share its proprietary model weights. Together they compute the output, which is then revealed only to the radiology lab. Cryptography guarantees that the radiology lab learns nothing about the model weights beyond what is revealed by the output, and the ML provider learns nothing about the image.

Stay tuned for a future blog post that will show an end-to-end system using Custom Vision to train models and deploy them on the Azure Machine Learning platform with CrypTFlow.

Appendix

  1. Floating point to fixed point.
    CrypTFlow represents a 32-bit floating-point number 𝑟 by a 64-bit integer ⌊𝑟 ⋅ 2ˢ⌋ for a precision or scale 𝑠. Operations on 32-bit floating-point numbers are then simulated by operations on 64-bit integers. For example, 𝑟₁ * 𝑟₂ is simulated as (⌊𝑟₁ ⋅ 2ˢ⌋ * ⌊𝑟₂ ⋅ 2ˢ⌋) / 2ˢ. A large s causes integer overflows, and a small s leads to accuracy loss. Model owners need to test different scaling factors with their validation data; it is better to test with the debug CPP target as that runs much faster. The scaling factor can be specified in config.json, with the default being 12 (see CompileTFGraph.py --help). A small example of the arithmetic is sketched after this list.
  2. MPC Protocols.
    CrypTFlow comes with multiple cryptographic backends. In the example above we use SCI[10] for doing a 2PC computation. The same computation can be done faster if a third party is available. That party does not get access to either the model weights or the patient data but helps with the computation. To use it, specify the target as PORTHOS[9] in config.json. See this for more instructions.
  3. Some Demos.
    We have various demos for networks like ResNet and DenseNet available on the repository. Check them out here.
  4. Massaging models.
    If your model contains a node that CrypTFlow does not support, say a Sigmoid at the end of the network, you can use the remove_tf_nodes script to remove that node. The client can then locally compute the sigmoid on the output after getting the result of the secure computation.
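To make the fixed-point encoding from point 1 concrete, here is a small Python sketch with hypothetical helper names. CrypTFlow’s own conversion is handled by convert_np_to_fixedpt.py and the compiled program; this is only an illustration of the arithmetic.

import math

SCALE = 20  # the scale used in config.json above

def to_fixed(r, s=SCALE):
    # Encode a float r as the integer floor(r * 2^s).
    return math.floor(r * (1 << s))

def to_float(x, s=SCALE):
    return x / (1 << s)

a, b = 6.9056687, -4.3557773

# Multiplication is simulated as (floor(a*2^s) * floor(b*2^s)) / 2^s,
# which stays an integer but loses a little precision.
prod_fixed = (to_fixed(a) * to_fixed(b)) >> SCALE
print("fixed point:", to_float(prod_fixed))
print("float:      ", a * b)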

References

  1. Mikael Häggström, Normal posteroanterior (PA) chest radiograph (X-ray).jpg, distributed under CC0 1.0.
  2. IconTrack, Medical Report Icon, distributed under CC BY 3.0.
  3. Trevor Dsouza, Machine Learning Icon, distributed under CC BY 3.0.
  4. Stockio.com, Hospital Icon, free for personal and commercial use.
  5. Med Marki, Machine Code Icon, distributed under CC BY 3.0.
  6. IBM-Design, Secure Icon, distributed under CC BY 4.0.
  7. Google Inc, Tensorflow Logo, “TensorFlow, the TensorFlow logo and any related marks are trademarks of Google Inc.”
  8. Open Neural Network Exchange, ONNX Logo, distributed under X11.
  9. CrypTFlow: Secure TensorFlow Inference, Nishant Kumar, Mayank Rathee, Nishanth Chandran, Divya Gupta, Aseem Rastogi, Rahul Sharma, IEEE S&P 2020
  10. CrypTFlow2: Practical 2-Party Secure Inference, Deevashwer Rathee, Mayank Rathee, Nishant Kumar, Nishanth Chandran, Divya Gupta, Aseem Rastogi, Rahul Sharma, ACM CCS 2020

This blog post is part of a series of articles on the EzPC project at Microsoft Research India. Please contact the EzPC team (Pratik Bhatu, Nishanth Chandran, Divya Gupta, Aseem Rastogi, Rahul Sharma) here for further information.

Thanks to Rahul Raina, Sr. Data Scientist, Microsoft Canada for contributing to this project and blog series.
