The computational complexity of deep neural networks is a major obstacle in many application scenarios driven by low-power devices, including federated learning. A recent finding shows that random sketches can substantially reduce model complexity without sacrificing prediction accuracy.
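To make the idea concrete, the sketch below illustrates one common form of random sketching: count-sketch-style weight sharing, where a small vector of trainable parameters is expanded into a full weight matrix through a fixed random hash and random signs. This is a minimal illustration of the general technique, not the specific method of any particular paper; all function names are ours.

```python
import numpy as np

def make_sketched_layer(in_dim, out_dim, k, seed=0):
    """Create a linear layer whose out_dim*in_dim virtual weights
    are backed by only k trainable parameters (a random sketch)."""
    rng = np.random.default_rng(seed)
    # Fixed random hash: each virtual weight (i, j) maps to one of k buckets.
    idx = rng.integers(0, k, size=(out_dim, in_dim))
    # Fixed random signs, as in a count sketch, to decorrelate collisions.
    sign = rng.choice([-1.0, 1.0], size=(out_dim, in_dim))
    # The only parameters that need to be stored, trained, or communicated.
    params = rng.standard_normal(k) / np.sqrt(in_dim)
    return params, idx, sign

def sketched_forward(x, params, idx, sign):
    # Expand the virtual weight matrix on the fly and apply it.
    W = sign * params[idx]
    return W @ x

params, idx, sign = make_sketched_layer(in_dim=256, out_dim=64, k=32)
x = np.random.default_rng(1).standard_normal(256)
y = sketched_forward(x, params, idx, sign)
# A dense layer would store 256 * 64 = 16384 weights; here only 32 are stored.
```

In a federated setting, only the k sketch parameters (plus the shared random seed) need to be communicated, which is where the complexity reduction pays off.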