In previous posts, we explored several popular convolutional neural networks (CNNs) such as LeNet, AlexNet, VGG, NiN, and GoogLeNet, whose performance increases with the number of layers. One may ask: can models always learn better with more layers? In general, no. The vanishing gradient phenomenon in models with many layers harms their convergence from the beginning of training. The following figure gives concrete evidence for this: the model with 56 layers underperforms the one with 20 layers.
To overcome this drawback, a researcher…
GoogLeNet is a deep convolutional neural network proposed by Szegedy et al. This network won the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC-2014), where it achieved a classification performance of 92.3%. In particular, the model was designed with a special architecture that allows increasing the depth and width of the network while keeping the computational cost under control.
The GoogLeNet model has 22 layers in total and is composed of 9 Inception blocks. Each Inception block consists of four parallel paths, in which convolution layers with different kernel sizes are applied [Figure 1]:
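To make the four parallel paths concrete, here is a minimal sketch of one Inception block in Keras. This is an illustration under assumptions, not the full GoogLeNet: the filter counts passed to `inception_block` are placeholders, and the helper function name is my own.

```python
# Illustrative sketch of one Inception block (GoogLeNet-style) in Keras.
# Filter counts are placeholders, not the values from the original paper.
from tensorflow.keras import Input, Model, layers

def inception_block(x, f1, f3_reduce, f3, f5_reduce, f5, pool_proj):
    # Path 1: a 1x1 convolution
    p1 = layers.Conv2D(f1, 1, padding="same", activation="relu")(x)
    # Path 2: a 1x1 reduction followed by a 3x3 convolution
    p2 = layers.Conv2D(f3_reduce, 1, padding="same", activation="relu")(x)
    p2 = layers.Conv2D(f3, 3, padding="same", activation="relu")(p2)
    # Path 3: a 1x1 reduction followed by a 5x5 convolution
    p3 = layers.Conv2D(f5_reduce, 1, padding="same", activation="relu")(x)
    p3 = layers.Conv2D(f5, 5, padding="same", activation="relu")(p3)
    # Path 4: 3x3 max-pooling followed by a 1x1 projection
    p4 = layers.MaxPooling2D(3, strides=1, padding="same")(x)
    p4 = layers.Conv2D(pool_proj, 1, padding="same", activation="relu")(p4)
    # The four paths are concatenated along the channel axis
    return layers.Concatenate()([p1, p2, p3, p4])

inp = Input(shape=(28, 28, 192))
out = inception_block(inp, 64, 96, 128, 16, 32, 32)
model = Model(inp, out)
print(model.output_shape)  # (None, 28, 28, 256): 64 + 128 + 32 + 32 channels
```

Because every path uses `padding="same"` and stride 1, the spatial dimensions are preserved and only the channel dimension grows after concatenation.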
This post aims to briefly introduce two classic convolutional neural networks, VGG16 and NiN (a.k.a. Network in Network). We are going to explore their architectures as well as their implementations in Keras. You can refer to my previous blogs for related topics: convolutional neural networks, and the LeNet and AlexNet models.
VGG is a deep convolutional neural network proposed by Karen Simonyan and Andrew Zisserman. VGG is an acronym for their group's name, the Visual Geometry Group at the University of Oxford. This model secured 2nd place in the ILSVRC-2014 competition, where it achieved a classification performance of 92.7%. The…
AlexNet is a convolutional neural network designed by Alex Krizhevsky in collaboration with Ilya Sutskever and Geoffrey Hinton. In the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2010, this network was trained to classify 1.2 million high-resolution images into 1000 different classes. It achieved top-1 and top-5 error rates of 37.5% and 17.0%, outperforming the state-of-the-art methods of the time.
The designs of AlexNet and LeNet are very similar, but AlexNet is much deeper, with more filters per layer. It consists of eight layers: five convolutional layers (some of them followed by max-pooling layers), two fully connected hidden…
LeNet (or LeNet-5) is a convolutional neural network structure proposed by Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner in 1998. The original purpose of this network was to recognize handwritten digits in images. It was successfully applied to identifying handwritten zip code numbers provided by the US Postal Service.
In this article, we are going to explore the architecture of this network as well as its application to MNIST handwritten digit images.
LeNet consists of 2 parts:
Activation functions are mathematical functions attached to each neuron in a network that determine whether the neuron should be activated or not. Typically, in each layer, a neuron first performs a linear transformation on its inputs using the weights and bias:

z = w1*x1 + w2*x2 + … + wn*xn + b
Then, an activation function f is applied to this result:

a = f(z)
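The two steps above can be sketched in a few lines of numpy. The input, weight, and bias values here are arbitrary, and the sigmoid is used only as one common choice of activation function:

```python
# Minimal illustration of a neuron's linear step followed by an activation.
import numpy as np

def sigmoid(z):
    # Sigmoid activation: squashes any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.0, 2.0])   # inputs (arbitrary example values)
w = np.array([0.4, 0.3, 0.1])    # weights
b = 0.1                          # bias

z = np.dot(w, x) + b             # linear transformation: z = w·x + b
a = sigmoid(z)                   # activation applied to the result
print(z, a)                      # 0.2 and roughly 0.55
```

Other popular choices for f include the ReLU, tanh, and softmax functions, each with different output ranges and gradient behavior.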
A convolutional neural network (CNN) is a class of deep neural network commonly applied to processing structured arrays of data such as images. CNNs are widely used in computer vision, with many applications in image and video recognition, image classification, natural language processing, etc.
A convolutional neural network includes an input layer, hidden layers, and an output layer.
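As a small illustration of that layering, here is a toy CNN in Keras (the platform used elsewhere in these posts). The input shape, filter counts, and layer sizes are placeholders chosen for a 28x28 grayscale input with 10 classes, not a recommended architecture:

```python
# Toy CNN sketch: an input layer, hidden layers (conv, pooling, dense),
# and an output layer. All sizes are illustrative placeholders.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),           # input layer
    layers.Conv2D(8, 3, activation="relu"),   # hidden convolutional layer
    layers.MaxPooling2D(2),                   # hidden pooling layer
    layers.Flatten(),                         # flatten feature maps to a vector
    layers.Dense(32, activation="relu"),      # hidden fully connected layer
    layers.Dense(10, activation="softmax"),   # output layer (10 classes)
])
print(model.output_shape)  # (None, 10)
```

The convolutional and pooling layers form the feature-extraction part, while the dense layers at the end act as the classifier.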
Machine learning engineers need not only good programming skills, but also some skills of a data scientist, such as collecting and managing data, the skills of a statistician for analyzing data, and the skills of a mathematician, among others. This is because a machine learning project involves many steps, from managing data, to building and evaluating machine learning models, to applying these models to predict new data in the testing set. In this article, we are going to walk through all these steps, including:
As you know, applications of machine learning appear everywhere in our lives. For example, when you search for something on Google, machine learning algorithms recommend the results most relevant to your keywords. Facebook, YouTube, and Amazon also use recommendation systems to suggest products to their users. Apple develops machine learning algorithms for face and fingerprint recognition, so you can activate your devices without typing a password. Thanks to machine learning, our lives have become much more convenient.
In this article, we will study various types of Machine learning algorithms and their use-cases. …