The convolution operation is the most computationally expensive operation in deep convolutional neural networks: most of a deep model's computation (FLOPs) is spent on convolutions. In this paper, we propose a novel skip convolution operation (SkipConv) that requires significantly less computation than the traditional convolution without sacrificing model accuracy. Skip convolution produces structured sparsity in the output feature maps, without requiring sparsity in the model parameters, to reduce computation. The existing convolution operation performs redundant computation when representing object features, whereas the proposed convolution skips this redundant computation. Our empirical evaluation of various deep models (VGG, ResNet, MobileNet, and Faster R-CNN) on several benchmark datasets (CIFAR-10, CIFAR-100, ImageNet, and MS-COCO) shows that skip convolution reduces computation significantly while preserving the representational capacity of the features. The proposed approach is model-agnostic and can be applied to any architecture. It does not require a pretrained model and trains from scratch; hence we achieve significant computation reduction at both training and test time. Skip convolution also reduces computation in already compact models such as MobileNet. We further show empirically that the proposed convolution works well for other tasks such as object detection. Therefore, SkipConv can be a widely applicable and efficient way of reducing computation in deep CNN models.
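To make the idea concrete, the following is a minimal sketch (not the paper's actual method) of a convolution that evaluates the kernel only at a structured subset of output positions, leaving the skipped positions zero. The function name `skip_conv2d` and the checkerboard mask are illustrative assumptions; the paper's selection rule may differ.

```python
import numpy as np

def skip_conv2d(x, w, keep_mask):
    """Illustrative 'skip convolution' sketch: compute a valid 2-D
    convolution only where keep_mask is True, leaving skipped output
    positions as zeros. The weights stay dense; the structured sparsity
    lives in the output feature map, so multiply-accumulates are saved
    in proportion to the number of skipped positions."""
    H, W = x.shape
    kh, kw = w.shape
    oh, ow = H - kh + 1, W - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            if keep_mask[i, j]:  # skip redundant positions entirely
                out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

# A checkerboard mask skips half the output positions (~2x fewer MACs).
x = np.arange(36, dtype=float).reshape(6, 6)
w = np.ones((3, 3))
mask = (np.add.outer(np.arange(4), np.arange(4)) % 2 == 0)
y = skip_conv2d(x, w, mask)
```

Any structured mask (checkerboard, strided grid, block pattern) plays the same role: the fraction of False entries directly sets the fraction of convolution computation avoided.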