By Evan Sultanik

Many machine learning (ML) models are Python pickle files under the hood, and it makes sense. The use of pickling conserves memory, enables start-and-stop model training, and makes trained models portable (and, thereby, shareable). Pickling is easy to implement, is built into Python without requiring additional dependencies, and supports serialization of custom objects. It’s little wonder that pickling is such a popular persistence choice among Python programmers and ML practitioners.

Pre-trained models are typically treated as “free” byproducts of ML, since they allow the valuable intellectual property that produced them, such as proprietary algorithms and training corpora, to remain private. This gives many people the confidence to share their models over the internet, particularly for reusable computer vision and natural language processing classifiers. Websites like PyTorch Hub facilitate model sharing, and some libraries even provide APIs to download models from GitHub repositories automatically.

Here, we discuss the underhanded antics that can occur simply from loading an untrusted pickle file or ML model. In the process, we introduce a new tool, Fickling, that can help you reverse engineer, test, and even create malicious pickle files. If you are an ML practitioner, you’ll learn about the security risks inherent in standard ML practices. If you are a security engineer, you’ll learn about a new tool that can help you construct and forensically examine pickle files. Either way, by the end of this article, pickling will hopefully leave a sour taste in your mouth.

Do you know how pickles are stored? It’s jarring!

Python pickles are compiled programs run in a unique virtual machine called a Pickle Machine (PM). The PM interprets the pickle file’s sequence of opcodes to construct an arbitrarily complex Python object. Python pickle is also a streaming format, allowing the PM to incrementally build the resulting object as portions of the pickle are downloaded over the network or read from a file.

The PM uses a Harvard architecture, segregating the program opcodes from writable data memory, thus preventing self-modifying code and memory corruption attacks. It also lacks support for conditionals, looping, or even arithmetic. During unpickling, the PM reads in a pickle program and performs a sequence of instructions. It stops as soon as it reaches the STOP opcode, and whatever object is on top of the stack at that point is the final result of unpickling.
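For example, the standard library’s pickletools module will disassemble a pickle into exactly this opcode stream, which is an easy way to see the PM’s program for yourself:

import pickle
import pickletools

# Disassemble a small pickle; each output line is one PM opcode, and the
# program always ends with the STOP opcode described above.
pickletools.dis(pickle.dumps([1, "2", {3: 4}]))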

From this description, one might reasonably conclude that the PM is not Turing-complete. How could this format possibly be unsafe? To corrode the words of Mishima’s famous aphorism:

Computer programs are a medium that reduces reality to abstraction for transmission to our reason, and in their power to corrode reality inevitably lurks the danger of the weird machines.

The PM contains two opcodes that can execute arbitrary Python code outside of the PM, pushing the result onto the PM’s stack: GLOBAL and REDUCE. GLOBAL is used to import a Python module or class, and REDUCE is used to apply a set of arguments to a callable, typically previously imported through GLOBAL. Even if a pickle file does not use the REDUCE opcode, the act of importing a module alone can and will execute arbitrary code in that module, so GLOBAL alone is dangerous.

For example, one can use a GLOBAL to import the exec function from __builtins__ and then REDUCE to call exec with an arbitrary string containing Python code to run. Likewise for other sensitive functions like os.system and subprocess.call. Python programs can optionally limit this behavior by defining a custom unpickler; however, none of the ML libraries we inspected do so. Even if they did, these protections can almost always be circumvented; there is no guaranteed way to safely load untrusted pickle files, as is highlighted in this admonition from the official Python 3.9 Pickle documentation:

Warning: The pickle module is not secure. Only unpickle data you trust.

It is possible to construct malicious pickle data which will execute arbitrary code during unpickling. Never unpickle data that could have come from an untrusted source, or that could have been tampered with.

Consider signing data with hmac if you need to ensure that it has not been tampered with.

Safer serialization formats such as JSON may be more appropriate if you are processing untrusted data.

We are not aware of any ML file formats that include a checksum of the model, either. (Some libraries, like TensorFlow, can verify download checksums, but verification is disabled by default and relies on a checksum embedded in the filename, which can easily be forged.)

The dangers of Python pickling have been known to the computer security community for quite some time.
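To make the GLOBAL/REDUCE mechanism concrete, here is a minimal sketch of a malicious pickle built around a class whose __reduce__ method returns a call to os.system (the payload command is just an innocuous echo for illustration):

import pickle

class Payload:
    # __reduce__ tells pickle to serialize this object as the call
    # os.system("echo pwned"), which becomes the GLOBAL/REDUCE machinery
    # described above in the resulting byte stream.
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))

# Protocol 0 keeps the opcodes human-readable: GLOBAL is 'c', REDUCE is 'R'.
malicious_bytes = pickle.dumps(Payload(), protocol=0)

# Anyone who unpickles these bytes runs the command immediately:
pickle.loads(malicious_bytes)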

Introducing Fickling: A decompiler, static analyzer, and bytecode rewriter for pickle files

Fickling has its own implementation of the Pickle Machine (PM), and it is safe to run on potentially malicious files because it symbolically executes code rather than overtly executing it.

Let’s see how Fickling can be used to reverse engineer a pickle file by creating an innocuous pickle containing a serialized list of basic Python types:

$ python3 -c "import sys, pickle; \
  sys.stdout.buffer.write(pickle.dumps([1, '2', {3: 4}]))" \
  > simple_list.pickle
$ python3 -m pickle simple_list.pickle
[1, '2', {3: 4}]

Running fickling on the pickle file will decompile it and produce a human-readable Python program equivalent to the code that the real PM would run during deserialization:

$ fickling simple_list.pickle
result = [1, '2', {3: 4}]

In this case, since it’s a simple serialized list, the code is neither surprising nor very interesting. By passing the --trace option to Fickling, we can trace the execution of the PM:

$ fickling --trace simple_list.pickle
PROTO
FRAME
EMPTY_LIST
    Pushed []
MEMOIZE
    Memoized 0 -> []
MARK
    Pushed MARK
BININT1
    Pushed 1
SHORT_BINUNICODE
    Pushed '2'
MEMOIZE
    Memoized 1 -> '2'
EMPTY_DICT
    Pushed {}
MEMOIZE
    Memoized 2 -> {}
BININT1
    Pushed 3
BININT1
    Pushed 4
SETITEM
    Popped 4
    Popped 3
    Popped {}
    Pushed {3: 4}
APPENDS
    Popped {3: 4}
    Popped '2'
    Popped 1
    Popped MARK
STOP
    result = [1, '2', {3: 4}]
    Popped [1, '2', {3: 4}]

You can run Fickling’s static analyses to detect certain classes of malicious pickles by passing the --check-safety option:

$ fickling --check-safety simple_list.pickle
Warning: Fickling failed to detect any overtly unsafe code,
but the pickle file may still be unsafe.
Do not unpickle this file if it is from an untrusted source!

What would it look like if the pickle file were malicious? Well, why not make one! We can do that by injecting arbitrary Python code into the pickle file:

$ fickling --inject 'print("Hello World!")' simple_list.pickle > simple_list.pwn3d
$ python3 -m pickle simple_list.pwn3d
Hello World!
[1, '2', {3: 4}]

It works! So let’s see Fickling’s decompilation:

$ fickling simple_list.pwn3d
_var0 = eval('print("Hello World!")')
result = [1, '2', {3: 4}]

and its analysis:

$ fickling --check-safety simple_list.pwn3d
Call to `eval('print("Hello World!")')` is almost certainly
evidence of a malicious pickle file

Fickling can also be used as a Python library; it has a programmatic interface to decompile, analyze, modify, and synthesize pickle files. It is open source, and you can install it by running:

pip3 install fickling

Making Malicious ML Models

Since the majority of ML models use pickling extensively, they present a potential attack surface for weight/neuron perturbations, including fault injections, live trojans, and weight poisoning attacks, among others. For example, during deserialization, code injected into the pickle could programmatically make changes to the model depending on the local environment, such as the time of day, timezone, hostname, system locale/language, or IP address. These changes could be subtle, like a bitflip attack, or more overt, like injecting arbitrary delays into deserialization to deny service.
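As a purely hypothetical illustration of such an environment-dependent payload (the hostname below is made up), the injected code could be as simple as:

# Hypothetical injected payload: stall deserialization for an hour, but only
# on a targeted production host, so local testing appears perfectly normal.
import socket
import time

if socket.gethostname() == "prod-inference-01":  # made-up target hostname
    time.sleep(3600)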

Fickling has a proof-of-concept based on the official PyTorch tutorial that injects arbitrary code into an existing PyTorch model. This example shows how loading the generated model into PyTorch will automatically list all of the files in the current directory (presumably containing proprietary models and code) and exfiltrate them to a remote server.

This is concerning for services like Microsoft’s Azure ML, which supports running user-supplied models in its cloud instances. A malicious, “Fickled” model could cause a denial of service and/or achieve remote code execution in an environment that Microsoft likely assumed would be proprietary. If multiple users’ jobs are not adequately compartmentalized, there is also the potential to exfiltrate other users’ proprietary models.

How do we dill with it?

The ideal solution is to avoid pickling altogether. Several other encodings, such as JSON, CBOR, and ProtoBuf, are much safer than pickling and are sufficient for encoding these models. In fact, PyTorch already includes state_dict and load_state_dict functions that save and load model weights into a dictionary, which can easily be serialized to JSON. To fully load a model, the model structure (how many layers, layer types, etc.) is also required; if PyTorch implemented serialization/deserialization methods for the model structure as well, the entire model could be encoded into JSON files far more safely.
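As a rough sketch of the weights half of that approach (the tiny model below is a made-up example, not PyTorch’s API for whole-model serialization), a state_dict can round-trip through JSON without pickle ever being involved:

import json
import torch
import torch.nn as nn

class SmallNet(nn.Module):  # hypothetical stand-in for a real model
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

model = SmallNet()

# Save: state_dict() maps parameter names to tensors; converting each tensor
# to nested lists makes the whole dictionary JSON-serializable.
weights = {name: tensor.tolist() for name, tensor in model.state_dict().items()}
with open("weights.json", "w") as f:
    json.dump(weights, f)

# Load: rebuild tensors from the lists and feed them to load_state_dict().
# No pickle opcodes are executed at any point.
with open("weights.json") as f:
    loaded = {name: torch.tensor(values) for name, values in json.load(f).items()}

restored = SmallNet()
restored.load_state_dict(loaded)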

Outside of PyTorch, there are other frameworks that avoid using pickle for serialization. For example, the Open Neural Network Exchange (ONNX) aims to provide a universal standard for encoding AI models to improve interoperability. The ONNX specification uses ProtoBuf to encode its model representations.
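For example, a PyTorch model can be exported to and reloaded from ONNX without pickle being involved (a minimal sketch; the bare linear layer stands in for a real trained model):

import torch
import onnx

model = torch.nn.Linear(4, 2)      # stand-in for a real trained model
dummy_input = torch.randn(1, 4)

# Export the model's graph and weights into ONNX's ProtoBuf-based format.
torch.onnx.export(model, dummy_input, "model.onnx")

# Loading the ONNX file parses ProtoBuf messages; no arbitrary Python code runs.
onnx_model = onnx.load("model.onnx")
onnx.checker.check_model(onnx_model)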

ReSpOnSiBlE DiScLoSuRe

We reported our concerns about sharing ML models to the PyTorch and PyTorch Hub maintainers on January 25th and received a reply two days later. The maintainers said that they would consider adding additional warnings to PyTorch and PyTorch Hub. They also explained that models submitted to PyTorch Hub are vetted for quality and utility, but that they neither perform background checks on the people publishing the models nor carefully audit the code for security before adding a link to the GitHub repository on the PyTorch Hub indexing page. The maintainers do not appear to be following our recommendation to switch to a safer form of serialization; they say the onus is on users to ensure the provenance and trustworthiness of third-party models.

We do not believe this is sufficient, particularly in the face of increasingly prevalent typosquatting attacks (see those against pip and npm). Moreover, a supply chain attack could very easily inject malicious code into a legitimate model, even if the associated source code appears benign. The only way to detect such an attack would be to manually inspect the model using a tool like Fickling.

Conclusions

As ML continues to grow in popularity and the majority of practitioners rely on generalized frameworks, we must ensure that these frameworks are secure. Many users do not have a background in computer science, let alone computer security, and may not understand the dangers of trusting model files of unknown provenance. Moving away from pickling as a form of data serialization is relatively straightforward for most frameworks and is an easy win for security. We relish the thought of a day when pickling will no longer be used to deserialize untrusted files. In the meantime, try out Fickling and let us know how you use it!

Acknowledgements

Many thanks go out to our team for their hard work on this effort: Sonya Schriner, Sina Pilehchiha, Jim Miller, Suha S. Hussain, Carson Harmon, Josselin Feist, and Trent Brunson.