Technology has always bred quasi-religious conflicts. After a few beers, discussions about the advantages and disadvantages of various operating systems, cloud service providers, or deep learning frameworks tend to lose touch with the facts. Instead, people defend their technology of choice as if it were the holy grail.
Consider the never-ending debate over IDEs. Some people favor Visual Studio, others prefer IntelliJ, while still others stick to more traditional editors like Vim. What your preferred text editor says about your personality is, of course, a perpetual and only half-ironic topic of debate.
Similar conflicts appear to be escalating between PyTorch and TensorFlow. Both sides have a sizable following. Furthermore, each side has solid justifications for why their preferred deep learning framework is superior.
Having said that, the data reveals a fairly obvious truth. Currently, TensorFlow is the most widely used deep learning framework. Every month, it receives nearly twice as many questions on StackOverflow as PyTorch does.
TensorFlow's numbers, on the other hand, have barely grown since around 2018, while PyTorch had been steadily gaining popularity right up until the day this post was published.
I’ve also included Keras in the figure below for completeness’ sake. It was released around the same time as TensorFlow, but has clearly tanked in recent years. The simplest explanation is that Keras is too slow and overly simplistic for the needs of most deep learning practitioners.

TensorFlow’s StackOverflow traffic may not be waning quickly, but it is waning nonetheless. And there are reasons to think that this decline will worsen over the following few years, especially in the Python community.
PyTorch has a more pythonic feel.
TensorFlow, created by Google, was among the first frameworks to arrive at the deep learning party when it launched in late 2015. However, as with the first version of almost any software, it was rather difficult to use.
Because of this, Meta (then Facebook) began developing PyTorch to provide nearly the same functionality as TensorFlow while being far more user-friendly.
As soon as the creators of TensorFlow realized this, they incorporated many of PyTorch’s most well-liked features into TensorFlow 2.0.
As a generalization, anything you can accomplish in PyTorch you can also accomplish in TensorFlow; you’ll just have to work twice as hard to write the code. Even today, TensorFlow lacks intuitiveness and feels very un-pythonic.
On the other hand, if you enjoy using Python, PyTorch feels very natural to use.
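To make that "pythonic" claim concrete, here is a minimal sketch (the model and its layer sizes are arbitrary illustrations, not from the original post): in PyTorch, a model is an ordinary Python class, and plain Python control flow runs eagerly inside the forward pass.

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """A toy two-layer network; the sizes are purely illustrative."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 8)
        self.fc2 = nn.Linear(8, 2)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        # Ordinary Python control flow works here, executed eagerly,
        # with no separate graph-construction or session step.
        if x.sum() > 0:
            return self.fc2(x)
        return -self.fc2(x)

net = TinyNet()
out = net(torch.randn(3, 4))  # a batch of 3 inputs -> shape (3, 2)
```

Debugging works the same way: you can drop a breakpoint or a `print` into `forward` and inspect tensors like any other Python object.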
More models are accessible in PyTorch.
Many businesses and academic institutions lack the powerful computing resources required to create sizable models. But when it comes to machine learning, size really does matter; the bigger the model, the more impressive its performance.
Engineers can use HuggingFace to quickly and easily integrate large, trained, and tuned models into their pipelines. Astonishingly, roughly 85% of these models can only be used with PyTorch, while just 8% are TensorFlow-exclusive. The remainder work with both frameworks.
This means that if you intend to use big models, you should either avoid using TensorFlow or spend a lot of money on computing power to build your own model.
PyTorch is superior for research and education.
PyTorch has a reputation for being better liked by academics, and the reputation is justified: three out of four research papers use PyTorch. Even among researchers who started out with TensorFlow—remember, it joined the deep learning party earlier—the majority have by now switched to PyTorch.
These trends persist even though Google maintains a sizable presence in AI research and primarily uses TensorFlow, which makes them all the more alarming for the framework.
Perhaps more strikingly, research shapes teaching, which in turn defines what students learn. Professors who use PyTorch in their research are more likely to teach it in their lectures than those who don’t: they are more comfortable explaining it and fielding questions about it, and they may also believe more strongly in its future.
Therefore, college students may learn much more about PyTorch than TensorFlow. Furthermore, given that today’s college students will likely be tomorrow’s workers, you can probably guess where this trend is headed.
The ecosystem of PyTorch has expanded more quickly.
In the end, software frameworks matter only insofar as they are participants in an ecosystem. Both PyTorch and TensorFlow have quite advanced ecosystems, with repositories of trained models beyond HuggingFace, data management tools, mechanisms for preventing failures, and more.
It’s important to note that TensorFlow currently has a slightly more advanced ecosystem than PyTorch. Remember, though, that PyTorch arrived later and has seen a sizable increase in users over the past few years. As a result, it is reasonable to assume that PyTorch’s ecosystem will eventually surpass that of TensorFlow.
TensorFlow provides the better deployment infrastructure.
TensorFlow may be difficult to code, but once it is written, it is much simpler to deploy than PyTorch. Deployment to the cloud, servers, mobile devices, and IoT devices can be accomplished quickly thanks to tools like TensorFlow Serving and TensorFlow Lite.
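As a hedged sketch of that workflow (the toy model and the directory name `toy_savedmodel` are my own illustrations, and I'm assuming a recent TensorFlow, roughly 2.13 or later, where `Model.export` is available): a trained Keras model can be exported to the SavedModel format, which is exactly what TensorFlow Serving consumes and what the TensorFlow Lite converter accepts as input.

```python
import os
import tensorflow as tf

# A toy model standing in for a trained one; the layer sizes are illustrative.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2),
])

# Export in the SavedModel format, the lingua franca of TensorFlow's
# deployment stack (TensorFlow Serving, TensorFlow Lite, TensorFlow.js).
model.export("toy_savedmodel")

# The exported directory holds the serialized graph and the weights.
assert os.path.exists(os.path.join("toy_savedmodel", "saved_model.pb"))
```

From here, a TensorFlow Serving container can serve the directory as-is, and `tf.lite.TFLiteConverter.from_saved_model("toy_savedmodel")` would turn it into a mobile-ready model.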
On the other hand, PyTorch has a history of releasing deployment tools slowly. Nevertheless, it has recently been rapidly catching up to TensorFlow.
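One example of that catching-up is TorchScript. The sketch below (again with an arbitrary toy model and a file name of my choosing) traces a model into a serialized form that the C++ runtime, libtorch, can load and run without a Python interpreter.

```python
import torch
import torch.nn as nn

# A toy model standing in for whatever you trained; sizes are illustrative.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

example = torch.randn(1, 4)
# Trace the model with an example input to produce a TorchScript module,
# which is deployable from C++ via libtorch, with no Python required.
traced = torch.jit.trace(model, example)
traced.save("toy_model.pt")

# Reload the serialized module and confirm it matches the original.
reloaded = torch.jit.load("toy_model.pt")
with torch.no_grad():
    assert torch.allclose(model(example), reloaded(example))
```

Tracing records the operations for one concrete input, so models with data-dependent control flow would need `torch.jit.script` instead.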
Although it’s difficult to say at this point, PyTorch may eventually catch up to or even surpass TensorFlow’s deployment infrastructure.
TensorFlow code will probably remain in place for some time because switching frameworks after deployment is expensive. However, it’s quite possible that more and more modern deep learning applications will be created and implemented using PyTorch.
Python is only a small part of TensorFlow.
TensorFlow is still alive. Just not as well-known as it once was.
The main cause of this is the widespread adoption of PyTorch by users of Python for machine learning.
But Python is not the only language available for machine learning; it is simply the “O.G.” of the field, which is why TensorFlow’s creators centered their support on it first.
TensorFlow is now compatible with Java, C++, and JavaScript. The community is also beginning to develop support for other languages, including Haskell, Julia, Scala, and Rust.
PyTorch, on the other hand, is heavily centered on Python, which is precisely why it feels so pythonic. It does offer a C++ API, but TensorFlow provides roughly twice as much support for other languages.
It’s entirely possible that PyTorch will surpass TensorFlow in the Python ecosystem. TensorFlow, on the other hand, will continue to be a key player in deep learning thanks to its impressive ecosystem, deployment features, and support for other languages.
How much you love Python will largely determine whether you choose TensorFlow or PyTorch for your upcoming project.