Khuyen Le
1 min read · Jul 30, 2023


Dear Daniel,

Thank you for your comment. The model introduced in this blog is Inception v1 (a.k.a. GoogLeNet), built on the original article "Going Deeper with Convolutions" [Szegedy et al., 2015]. As you know, batch normalization and the regularization techniques discussed below were not applied in this original model. There have been subsequent advancements and variations such as Inception v2, v3, v4, and Inception-ResNet.

Inception v2 introduced batch normalization and a factorization method in which the 5×5 convolutional layer in the Inception block is factorized into two 3×3 convolutional layers, reducing the computational cost (see the first sketch below).

Inception v3 built upon the success of Inception v2 while addressing some of its limitations: it factorizes the 7×7 convolutional layers into three consecutive 3×3 convolutions, and it adds batch normalization to make training more stable and to accelerate convergence; batch normalization also acts as a form of regularization, which can lead to better generalization. Furthermore, label smoothing regularization is applied in this model to make training more robust against noisy labels (https://arxiv.org/pdf/1512.00567v3.pdf); the second sketch below illustrates it.

Besides, there are also other versions of the Inception network, such as Inception v4 and Inception-ResNet (you can refer to the following paper for more detail: https://arxiv.org/pdf/1602.07261.pdf).
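To make the factorization idea concrete, here is a minimal PyTorch sketch (my own illustration, not code from the blog or the papers): a single 5×5 convolution compared against two stacked 3×3 convolutions with batch normalization, which cover the same 5×5 receptive field with fewer parameters. The channel sizes are arbitrary assumptions.

```python
import torch
import torch.nn as nn

in_ch, out_ch = 64, 96  # arbitrary channel sizes, for illustration only

# Inception-v1 style branch: one 5x5 convolution.
conv5x5 = nn.Conv2d(in_ch, out_ch, kernel_size=5, padding=2)

# Inception-v2 style factorization: two 3x3 convolutions (with batch
# normalization) cover the same 5x5 receptive field with fewer weights.
factorized = nn.Sequential(
    nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
    nn.BatchNorm2d(out_ch),
    nn.ReLU(inplace=True),
    nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1, bias=False),
    nn.BatchNorm2d(out_ch),
    nn.ReLU(inplace=True),
)

def n_params(module):
    return sum(p.numel() for p in module.parameters())

print(n_params(conv5x5))    # 153,696 parameters
print(n_params(factorized)) # 138,624 parameters

# Both branches map a feature map to the same output shape.
x = torch.randn(1, in_ch, 32, 32)
assert conv5x5(x).shape == factorized(x).shape
```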
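And here is a small sketch of label smoothing regularization (again my own illustration, using the formulation from the paper: the one-hot target q(k) is replaced by q'(k) = (1 − ε)·q(k) + ε/K for K classes; the class count and ε are assumed values):

```python
import torch
import torch.nn.functional as F

num_classes, eps = 10, 0.1             # assumed values for illustration
logits = torch.randn(4, num_classes)   # dummy batch of 4 predictions
targets = torch.tensor([2, 7, 1, 9])   # dummy ground-truth labels

# Manual label smoothing: each wrong class gets eps/K probability mass,
# and the true class gets the remainder, (1 - eps) + eps/K.
smoothed = torch.full((4, num_classes), eps / num_classes)
smoothed.scatter_(1, targets.unsqueeze(1), 1.0 - eps + eps / num_classes)
loss_manual = -(smoothed * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

# PyTorch (>= 1.10) exposes the same behavior directly:
loss_builtin = F.cross_entropy(logits, targets, label_smoothing=eps)
assert torch.allclose(loss_manual, loss_builtin)
```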

Sincerely,

Khuyen Le