With their capability to recognize complex patterns in data, deep learning models are rapidly becoming the most prominent set of tools for a broad range of data science tasks, from image classification to natural language processing. This trend is reinforced by the availability of deep learning software platforms and modern hardware environments. We propose a declarative benchmarking framework to evaluate the performance of combinations of software and hardware systems. We further use our framework to analyze three deep learning software frameworks on multiple hardware setups, over a representative set of deep learning workloads and the corresponding neural network architectures.
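To make the idea of a declarative benchmark concrete, the following is a minimal illustrative sketch, not the paper's actual API: each run is described by *what* to measure (model, framework, device, batch size), and a generic expansion step enumerates the concrete software/hardware combinations to execute. All names and fields here are assumptions made for illustration.

```python
# Hypothetical declarative benchmark specification (illustrative only,
# not the framework proposed in the paper).
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class BenchmarkSpec:
    model: str        # e.g. "resnet50" (hypothetical value)
    framework: str    # e.g. "tensorflow", "pytorch", "mxnet"
    device: str       # e.g. "cpu", "gpu"
    batch_size: int

def expand_grid(models, frameworks, devices, batch_sizes):
    """Enumerate every combination declared in the spec as one concrete run."""
    return [BenchmarkSpec(m, f, d, b)
            for m, f, d, b in product(models, frameworks, devices, batch_sizes)]

specs = expand_grid(["resnet50"], ["tensorflow", "pytorch"], ["gpu"], [32, 64])
# 1 model x 2 frameworks x 1 device x 2 batch sizes = 4 benchmark runs.
```

The declarative style separates the description of the experiment space from the code that executes each run, so adding a new framework or hardware target only changes the specification, not the runner.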