CRCV Events
Speaker: Dr. Aritra Dutta
From: University of Southern Denmark

Abstract

When the training data or the deep neural network itself is very large, distributed parallel training becomes essential, taking the form of either data or model parallelism. In both cases, parallelism introduces various overheads, and network communication is one of the most significant in large-scale distributed deep learning. To mitigate …
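The abstract above is truncated on the source page. As a rough illustration of the communication step it describes, and not material from the talk itself, the sketch below simulates data-parallel training on CPU: each worker holds an identical model replica, computes gradients on its own data shard, and then all workers average those gradients with an all-reduce. The volume of that all-reduce grows with model size, which is the overhead the abstract refers to. The worker count, model, and port are illustrative choices.

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

WORLD_SIZE = 4  # number of simulated workers (illustrative choice)

def worker(rank: int, world_size: int) -> None:
    # Minimal single-machine setup using PyTorch's "gloo" CPU backend.
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    torch.manual_seed(0)                      # identical model replica on every worker
    model = torch.nn.Linear(10, 1)

    g = torch.Generator().manual_seed(rank)   # each worker sees a different data shard
    x = torch.randn(32, 10, generator=g)
    y = torch.randn(32, 1, generator=g)

    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()                           # local gradients on the local shard

    # Communication step: one all-reduce per parameter tensor. Its cost is
    # proportional to the number of model parameters, which is why large
    # models make this exchange a dominant overhead.
    for p in model.parameters():
        dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
        p.grad /= world_size                  # average gradients across workers

    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(worker, args=(WORLD_SIZE,), nprocs=WORLD_SIZE)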