PyTorch or TensorFlow?

The answer seems to be "yes". In practice, it depends on the model you want to use (if you're reusing or building on someone else's work).

Here's my current understanding of PyTorch (site, repo) and TensorFlow (site, repo).

PyTorch is said to be more popular in research. It's had a more stable API and is closer to the way things are done in Python. TensorFlow (hailing from the lands of Google) has gone through a bunch of non-backwards-compatible API changes.
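To illustrate the "closer to Python" point: a PyTorch model is just a regular Python class, and you run it by calling it like a function. A minimal sketch (the `TinyNet` name and layer sizes are my own invented example, not from any particular project):

```python
import torch
from torch import nn

class TinyNet(nn.Module):
    """A toy two-layer network; models are ordinary Python classes."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 8)
        self.fc2 = nn.Linear(8, 2)

    def forward(self, x):
        # Plain Python control flow works here (loops, ifs, prints).
        x = torch.relu(self.fc1(x))
        return self.fc2(x)

model = TinyNet()
out = model(torch.randn(3, 4))  # call the model like a function
print(out.shape)  # torch.Size([3, 2])
```

Nothing here requires a separate graph-compilation step; you can drop a breakpoint into `forward` and debug it like any other Python code, which is a big part of PyTorch's appeal in research.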

However, TensorFlow seems to be popular for business applications, because a mature ecosystem of production-oriented tooling exists around it and it tends to integrate well with business solutions.

There's the matter of allegiance as well. TensorFlow is close to Google, while PyTorch is loyal to Facebook. It makes sense that AWS, for example, would be more inclined to offer a PyTorch solution rather than indirectly help Google's cause.

Now, TensorFlow and PyTorch aren't the only players either. Just looking at what major tools in the space support (take NVIDIA's Triton Inference Server as an example), there are a bunch more "major framework backends": TensorRT and ONNX Runtime, for example. But maybe I'm mixing stuff up at this point.

My takeaway is that you need to stay flexible and not get too focused on a single tool for working with deep learning models.

UPDATE: I found this article which provides a more in-depth perspective on this comparison.

Hi! I'm Vladislav, I help companies deploy GPU-heavy AI products to Kubernetes. If you're interested in this topic, make sure to sign up for the newsletter.