Exploring Sparsity in Recurrent Neural Networks | Synced

In order to deploy Recurrent Neural Networks (RNNs) efficiently, we propose a technique to reduce the parameters of a network by pruning weights during the initial training of the network.
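The pruning idea can be sketched as a gradual magnitude-based mask applied while training is still running: the fraction of zeroed weights ramps up from zero to a target sparsity over a window of training steps. This is an illustrative sketch only, not the authors' exact threshold schedule; the function name, schedule parameters, and NumPy stand-in for a real RNN layer are assumptions for demonstration.

```python
import numpy as np

def gradual_prune_mask(weights, step, start, end, final_sparsity):
    """Return a binary mask zeroing the smallest-magnitude weights.

    Sparsity ramps linearly from 0 at `start` to `final_sparsity`
    at `end`, so pruning happens during initial training rather
    than as a separate post-training pass. (Illustrative schedule,
    not the paper's exact one.)
    """
    if step < start:
        return np.ones_like(weights)
    frac = min(1.0, (step - start) / (end - start))
    k = int(final_sparsity * frac * weights.size)
    if k == 0:
        return np.ones_like(weights)
    # Threshold at the k-th smallest absolute value; already-pruned
    # (zero) weights sort first, so they stay pruned.
    threshold = np.sort(np.abs(weights).ravel())[k - 1]
    return (np.abs(weights) > threshold).astype(weights.dtype)

# Toy stand-in for a recurrent weight matrix updated each step.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
for step in range(0, 101, 10):
    mask = gradual_prune_mask(W, step, start=20, end=80,
                              final_sparsity=0.9)
    W = W * mask  # pruned weights remain zero in later steps
```

In a real training loop the mask would be applied after each gradient update, so surviving weights keep learning while pruned ones stay at zero.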


Source: Synced | AI Technology & Industry Review