

ResNet

ResNet (Residual Network) is a deep convolutional neural network (CNN) architecture introduced by Kaiming He and colleagues at Microsoft Research in the 2015 paper "Deep Residual Learning for Image Recognition". ResNet addresses the vanishing-gradient and degradation problems that plague very deep networks, making it possible to train much deeper models without a loss in accuracy.

Key Features

  • Residual Learning – ResNet is built from residual blocks. Instead of learning a desired mapping H(x) directly, each block learns the residual function F(x) = H(x) − x (the difference between the desired output and the input), and the block's output is F(x) + x.
  • Skip Connections – The architecture employs shortcut connections that bypass layers, allowing gradient flow and enabling much deeper networks without performance degradation.
  • Deep Architecture – Standard ResNet variants range from 18 to 152 layers, maintaining high performance even at extreme depths.
  • Bottleneck Architecture – The deeper variants use bottleneck blocks, in which 1×1 convolutions reduce and then restore the channel dimension around a 3×3 convolution, cutting the parameter count while preserving performance.
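The residual-learning and skip-connection ideas above can be sketched in a few lines. This is an illustrative, fully connected toy version (real ResNet blocks use convolutions and batch normalization); the weight shapes and initialization scale here are arbitrary choices for the demo:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """Toy residual block: the layers compute F(x), and the
    identity skip connection adds x back, so the block
    outputs relu(F(x) + x) instead of learning H(x) directly."""
    f = w2 @ relu(w1 @ x)   # F(x): the residual the layers must learn
    return relu(f + x)      # skip connection: add the input back

rng = np.random.default_rng(0)
d = 8
x = rng.standard_normal(d)
w1 = rng.standard_normal((d, d)) * 0.01  # small weights: F(x) near zero
w2 = rng.standard_normal((d, d)) * 0.01

y = residual_block(x, w1, w2)
# With near-zero weights, F(x) is tiny and the block is close to
# the identity (after relu) -- the property that keeps gradients
# flowing in very deep stacks.
print(np.allclose(y, relu(x), atol=1e-2))
```

Because a block with small weights defaults to (approximately) the identity, stacking many of them does not degrade the signal, which is why depth stops hurting.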

Common Variants

  • ResNet-18, ResNet-34, ResNet-50, ResNet-101, ResNet-152
  • ResNet-110 (used for smaller-scale benchmarks such as CIFAR-10)
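The deeper variants (ResNet-50 and up) rely on the bottleneck design described above. A rough back-of-the-envelope comparison, assuming the paper's 256-channel setting with a 1×1 reduction to 64 channels (biases and batch-norm parameters ignored), shows why:

```python
def conv_params(k, c_in, c_out):
    """Weight count of a k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

# Basic block: two 3x3 convolutions at 256 channels.
basic = conv_params(3, 256, 256) + conv_params(3, 256, 256)

# Bottleneck block: 1x1 reduce to 64, 3x3 at 64, 1x1 restore to 256.
bottleneck = (conv_params(1, 256, 64)
              + conv_params(3, 64, 64)
              + conv_params(1, 64, 256))

print(basic)       # 1179648
print(bottleneck)  # 69632
```

At this width the bottleneck block needs roughly 17× fewer weights than a basic block, which is what makes 100+ layer networks practical.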

Applications

  • Image classification and ImageNet benchmarks
  • Object detection as backbone networks
  • Semantic segmentation
  • Face recognition
  • Medical imaging analysis

FAQ

What is ResNet?

ResNet is a deep CNN architecture introduced in 2015 by Kaiming He and colleagues to tackle vanishing gradients in very deep networks. It uses residual learning, in which layers learn the difference (residual) from the input, enabling much deeper models without accuracy degradation.