- Tensor computation (like NumPy) with strong GPU acceleration.
- Deep neural networks built on a tape-based autograd system.
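The tape-based autograd system mentioned above records tensor operations as they run and replays the tape in reverse to compute gradients. A minimal sketch (the values here are illustrative, not from the article):

```python
import torch

# Operations on tensors with requires_grad=True are recorded on a "tape";
# backward() replays the tape in reverse to compute gradients.
x = torch.tensor(2.0, requires_grad=True)
y = x ** 3
y.backward()
print(x.grad)  # dy/dx = 3 * x**2 = 12 at x = 2
```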
Originally developed at Idiap Research Institute, NYU, NEC Laboratories America, Facebook, and DeepMind Technologies, with contributions from the Torch and Caffe2 projects, PyTorch now has a thriving open source community. PyTorch 1.10, released in October 2021, has 426 contributors, and the repository currently has 54,000 stars.
This article is an overview of PyTorch, including new features in PyTorch 1.10 and a quick guide to getting started with PyTorch. I previously reviewed PyTorch 1.0.1 and compared TensorFlow and PyTorch. I suggest you read the review for an in-depth discussion of the PyTorch architecture and how the library works.
The evolution of PyTorch
In the beginning, scientists and researchers gravitated to PyTorch because it was easier than TensorFlow for developing models with graphics processing units (GPUs). PyTorch defaults to eager execution mode, meaning that its API calls execute when invoked, rather than being added to a graph to be run later. TensorFlow has since improved its support for eager execution, but PyTorch is still popular in academia and research.
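Eager execution means each operation runs immediately and returns a concrete tensor, so you can inspect intermediate values with ordinary Python. A quick illustration:

```python
import torch

# In eager mode, every line executes on call -- no graph compilation or
# session step is needed to see the result.
x = torch.tensor([1.0, 2.0, 3.0])
y = x * 2 + 1
print(y)  # tensor([3., 5., 7.])
```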
At this point, PyTorch is production-ready, allowing you to transition easily between eager and graph modes with TorchScript and accelerate the path to production with TorchServe. The torch.distributed back end enables scalable distributed training and performance optimization in research and production, and a rich ecosystem of tools and libraries extends PyTorch and supports development in computer vision, natural language processing, and more. Finally, PyTorch is well supported on major cloud platforms, including Alibaba, Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. Cloud support provides frictionless development and easy scaling.
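Moving from eager mode to graph mode is a one-line change with TorchScript. A minimal sketch, using a toy module I made up for illustration:

```python
import torch

class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.linear(x))

# torch.jit.script compiles the eager-mode module into a TorchScript graph;
# the scripted module can be serialized with .save() and run without Python.
scripted = torch.jit.script(TinyNet())
out = scripted(torch.randn(3, 4))
```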
What’s new in PyTorch 1.10
According to the PyTorch blog, the PyTorch 1.10 updates focused on improving training and performance as well as developer usability. See the PyTorch 1.10 release notes for details. Here are a few highlights of this release:
- CUDA Graphs application programming interfaces are integrated to reduce CPU overheads for CUDA workloads.
- Several front-end APIs, such as FX, torch.special, and nn.Module parametrization, were moved from beta to stable. FX is a Pythonic platform for transforming PyTorch programs; torch.special implements special functions such as gamma and Bessel functions.
- A new LLVM-based JIT compiler supports automatic fusion on CPUs as well as GPUs. The LLVM-based JIT compiler can fuse together sequences of torch library calls to improve performance.
- Android NNAPI support is now available in beta. NNAPI (the Android Neural Networks API) allows Android apps to run computationally intensive neural networks on the most powerful and efficient parts of the chips that power mobile phones, including GPUs and dedicated neural processing units (NPUs).
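Of the stabilized APIs above, torch.special is the easiest to try directly; it mirrors SciPy's scipy.special interface. A quick sketch with values I picked for illustration:

```python
import torch

# torch.special, stable as of PyTorch 1.10, provides special functions.
x = torch.tensor([1.0, 2.0, 4.0])
log_gamma = torch.special.gammaln(x)  # log of the gamma function
bessel = torch.special.i0(x)          # modified Bessel function of the first kind, order 0
print(log_gamma)
```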
PyTorch 1.10 includes more than 3,400 commits, which indicates a project that is active and focused on improving performance through a variety of methods.
How to get started with PyTorch
Reading the release notes won't tell you much if you don't understand the basics of the project or how to start using it, so let's fill those in.
The PyTorch tutorials page offers two tracks: one for those familiar with other deep learning frameworks and one for newbies. If you need the newbie track, which introduces tensors, datasets, autograd, and other key concepts, I suggest that you follow it and use the Run in Microsoft Learn option, as shown in Figure 1.
If you're already familiar with deep learning concepts, then I suggest starting with the quickstart notebook shown in Figure 2. You can click on Run in Microsoft Learn or Run in Google Colab, or you can run the notebook locally.
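The heart of the quickstart material is a training loop. A minimal sketch in that spirit, using random toy regression data of my own invention rather than the tutorial's dataset:

```python
import torch
from torch import nn

# Toy model and data; in the real quickstart you would load FashionMNIST.
model = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 1))
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

X = torch.randn(64, 10)
y = torch.randn(64, 1)

for epoch in range(5):
    pred = model(X)              # forward pass
    loss = loss_fn(pred, y)      # compute loss
    optimizer.zero_grad()        # clear stale gradients
    loss.backward()              # backpropagate
    optimizer.step()             # update weights
```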
PyTorch projects to watch
As shown on the left side of the screenshot in Figure 2, PyTorch has many recipes and tutorials. There are also many models and examples of how to use them, usually as notebooks. Three projects in the PyTorch ecosystem strike me as particularly interesting: Captum, PyTorch Geometric (PyG), and skorch.
Captum
As noted in this project's GitHub repository, the word captum means comprehension in Latin. As described on the repository page and elsewhere, Captum is "a model interpretability library for PyTorch." It contains a variety of gradient and perturbation-based attribution algorithms that can be used to interpret and understand PyTorch models. It also has quick integration for models built with domain-specific libraries such as torchvision, torchtext, and others.
Figure 3 shows all attribution algorithms currently maintained by Captum.
PyTorch Geometric (PyG)
PyTorch Geometric (PyG) is a library that data scientists and others can use to write and train graph neural networks for applications related to structured data. As explained on its GitHub repository page:
PyG provides methods for deep learning on graphs and other irregular structures, also known as geometric deep learning. In addition, it consists of easy-to-use mini-batch loaders for operating on many small and single giant graphs, multi-GPU support, distributed graph learning via Quiver, a large number of common benchmark datasets (based on simple interfaces to create your own), the GraphGym experiment manager, and helpful transforms, both for learning on arbitrary graphs as well as on 3D meshes or point clouds.
Figure 4 is an overview of the PyTorch Geometric architecture.
skorch
skorch is a scikit-learn-compatible neural network library that wraps PyTorch. The goal of skorch is to make it possible to use PyTorch with sklearn. If you are familiar with sklearn and PyTorch, you don't have to learn any new concepts, and the syntax should be familiar. Additionally, skorch abstracts away the training loop, making a lot of boilerplate code obsolete. A simple net.fit(X, y) is enough, as shown in Figure 5.
Overall, PyTorch is one of a handful of top-tier frameworks for deep neural networks with GPU support. You can use it for model development and production, you can run it on-premises or in the cloud, and you can find many pre-built PyTorch models to use as a starting point for your own models.
Copyright © 2022 IDG Communications, Inc.