Performance and User-friendliness of low-level vs higher-level deep neural network frameworks

» https://github.com/jgoeszoom/framework-overhead

By Joseph Gozum, Jack Gu, Justin Lam

During my third year of undergraduate study, I took a class on using Nvidia GPUs to accelerate computations such as vector multiplication. The final assignment was an open-ended project exploring their uses, and at the time, deep neural network frameworks were expanding and becoming very popular.

So my group decided to compare deep neural network frameworks at varying levels of abstraction: cuDNN (lowest), TensorFlow (middle), and Keras (highest). We examined the performance differences between them as well as their user-friendliness, as the sketch below illustrates.
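To give a feel for the abstraction gap, here is a minimal sketch (not the project's actual code) showing the same small fully connected network written at two of the three levels. It assumes TensorFlow 2.x with its bundled Keras API; the layer sizes are illustrative.

```python
import tensorflow as tf

# High level: Keras declares each layer in a single line.
keras_model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Lower level: plain TensorFlow manages the weights and the forward pass by hand.
w1 = tf.Variable(tf.random.normal([784, 128]))
b1 = tf.Variable(tf.zeros([128]))
w2 = tf.Variable(tf.random.normal([128, 10]))
b2 = tf.Variable(tf.zeros([10]))

def forward(x):
    # Explicit matrix multiplies and activations replace the Dense layers above.
    h = tf.nn.relu(tf.matmul(x, w1) + b1)
    return tf.nn.softmax(tf.matmul(h, w2) + b2)

# cuDNN sits below both: the same network would require C/CUDA code with
# explicit descriptor setup and manual GPU memory management.
```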

The main metrics we looked at were the number of lines of code and the time needed to compute an inference (i.e., a forward pass through the neural network).
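As a rough illustration of the second metric, the following is a minimal sketch (not the project's measurement harness) of how inference latency might be timed in the Keras case. The batch size, input shape, and number of timed runs are illustrative assumptions.

```python
import time
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

batch = np.random.rand(64, 784).astype("float32")

# Warm-up run so graph construction and GPU initialization are not timed.
model.predict(batch, verbose=0)

runs = 100
start = time.perf_counter()
for _ in range(runs):
    model.predict(batch, verbose=0)
elapsed = (time.perf_counter() - start) / runs
print(f"average inference time: {elapsed * 1000:.2f} ms per batch")
```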