PyTorch Developer Podcast
Mobile selective build
Episode Summary
What is mobile selective build? Why are we so obsessed with reducing binary size? How does selective build work? Why doesn't static linking just work? Why can't you just read out the ops used in a TorchScript model to determine what operators you actually need? What are the tradeoffs of statically determining the operator dependency graph versus tracing? What's up with the SELECTIVE_NAME macro? How the heck does selective build work at all when you have multiple mobile apps in a single Buck build system? What takeaways should I have as a regular PyTorch developer?
Episode Notes
Further reading:
Liner notes:
binary size is at a premium; ship only what you actually need
big idea:
- get the ops your model needs -> apply that to the build of PyTorch
get the ops your model needs
- TorchScript ~> read the op list directly out of the model itself
- but what if ops use other ops?
- need a dependency graph: computed by an LLVM static analysis pass (Jiakai's) ~> a (possibly inaccurate) YAML is checked in for an easy kickstart if you don't want to run the pass (it was updated by a bot that hasn't been operational since Feb; recommend regenerating from scratch if you run into trouble); see the closure sketch after this list
- other possibility: dynamic tracing
- pro: no need for a dependency graph, just look at what was actually called; also works for dtypes
- con: you need representative inputs, and if there's control flow you might not cover everything
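
To make the dependency-graph step concrete, here is a minimal C++ sketch of the closure computation: start from the root ops the model calls (in Python these can be exported from a TorchScript model, e.g. with torch.jit.export_opnames) and walk an "op may call op" graph until you reach a fixed point. The op names and edges below are made up for illustration; the real graph is the YAML produced by the static-analysis pass.

```cpp
#include <queue>
#include <set>
#include <string>
#include <unordered_map>
#include <vector>

// Op -> ops it may call internally. Stand-in for the checked-in YAML;
// the edges here are hypothetical.
using DepGraph = std::unordered_map<std::string, std::vector<std::string>>;

// Close the model's root op set over the dependency graph: everything
// transitively reachable has to be kept in the build.
std::set<std::string> close_over_deps(
    const std::vector<std::string>& root_ops, const DepGraph& deps) {
  std::set<std::string> needed(root_ops.begin(), root_ops.end());
  std::queue<std::string> work;
  for (const auto& op : root_ops) work.push(op);
  while (!work.empty()) {
    std::string op = work.front();
    work.pop();
    auto it = deps.find(op);
    if (it == deps.end()) continue;
    for (const auto& callee : it->second) {
      if (needed.insert(callee).second) {  // first time we've seen this op
        work.push(callee);
      }
    }
  }
  return needed;
}

int main() {
  DepGraph deps = {
      {"aten::matmul", {"aten::mm", "aten::bmm"}},  // hypothetical edges
      {"aten::mm", {"aten::copy_"}},
  };
  // Root ops read out of the model; with dynamic tracing you'd take the
  // recorded calls instead (and would miss branches your inputs never hit).
  auto needed = close_over_deps({"aten::matmul", "aten::relu"}, deps);
  // needed == {aten::bmm, aten::copy_, aten::matmul, aten::mm, aten::relu}
}
```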
apply this to the build of PyTorch
- ordinarily: static linking ensures stuff that isn't used gets pruned
- but this doesn't work with distributed operator registration based on static initializers (see the first sketch after this list)
- how?
- codegen'd registrations: just don't generate them for unselected ops
- non-codegen'd (hand-written) registrations: the SELECTIVE_NAME macro - tricky because C++ doesn't support compile-time dispatch on a string inside a macro (see the second sketch after this list)
- build system integration
- buck constraint: only one library
- therefore: generate multiple copies of the glue library
- alt: atomize the library into one library per operator. Caffe2 used to do this; each library takes a long time to build (~1 min) and Xcode crashes because there are too many of them
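
Two sketches to make the build-side mechanics concrete. First, why plain static linking doesn't prune operators: registration happens via static initializers, so nothing in the app references a kernel by name, and the whole archive has to be force-linked just to keep the initializers; once an initializer is kept, the kernel it points to is reachable. The registry and names here are illustrative, not the real PyTorch registration API.

```cpp
#include <functional>
#include <string>
#include <unordered_map>

// Toy registry; the real one (the PyTorch dispatcher) is much richer.
std::unordered_map<std::string, std::function<void()>>& registry() {
  static std::unordered_map<std::string, std::function<void()>> r;
  return r;
}

struct Registrar {
  Registrar(const char* name, std::function<void()> fn) {
    registry()[name] = std::move(fn);
  }
};

// Each operator's .cpp drops one of these at global scope. The only
// reference to add_kernel is this initializer, so to make registration
// work the archive must be linked whole (e.g. --whole-archive) -- and
// keeping the initializer keeps add_kernel alive. Dead-code elimination
// therefore never removes an op, used or not; selective build has to cut
// the registration off at the source level instead.
void add_kernel() { /* kernel body */ }
static Registrar add_registrar("aten::add", add_kernel);

int main() { registry().at("aten::add")(); }
```

Second, the SELECTIVE_NAME idea for hand-written registrations: gate each registration on a compile-time allowlist so that unselected kernels end up with no references and can be stripped. This is a simplified C++17 sketch using constexpr string_view comparison; the real macro has to jump through extra hoops precisely because a plain string can't be turned into a distinct compile-time entity inside a macro.

```cpp
#include <array>
#include <cstdio>
#include <string_view>

// In the real build this table would be generated from the model's op list.
constexpr std::array<std::string_view, 2> kSelectedOps = {"aten::add",
                                                          "aten::relu"};

constexpr bool op_selected(std::string_view name) {
  for (auto op : kSelectedOps) {
    if (op == name) return true;
  }
  return false;
}

void register_kernel(const char* name) { std::printf("registered %s\n", name); }

int main() {
  // Compile-time gate: the registration call for an unselected op is never
  // emitted, so its kernel has nothing referencing it and the linker can
  // drop it from the final binary.
  if constexpr (op_selected("aten::relu")) {
    register_kernel("aten::relu");  // selected: stays in
  }
  if constexpr (op_selected("aten::mul")) {
    register_kernel("aten::mul");   // not selected: discarded at compile time
  }
}
```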
common hiccups
- you modify some implementation detail and an op is/isn't called anymore ~> error! usually it just means some YAML needs regenerating. PyTorch Edge developers are very friendly and can help