groupNormalizationLayer
Description
A group normalization layer normalizes a mini-batch of data across grouped subsets of channels for each observation independently. To speed up training of convolutional neural networks and reduce the sensitivity to network initialization, use group normalization layers between convolutional layers and nonlinearities, such as ReLU layers.
After normalization, the layer scales the input with a learnable scale factor γ and shifts it by a learnable offset β.
Creation
Syntax
layer = groupNormalizationLayer(numGroups)
layer = groupNormalizationLayer(numGroups,Name,Value)
Description
layer = groupNormalizationLayer(numGroups) creates a group normalization layer.
layer = groupNormalizationLayer(numGroups,Name,Value) creates a group normalization layer and sets the optional Epsilon, Parameters and Initialization, Learning Rate and Regularization, and Name properties using one or more name-value arguments. You can specify multiple name-value arguments. Enclose each property name in quotes.
Input Arguments
numGroups — Number of groups
positive integer | 'all-channels' | 'channel-wise'
Number of groups into which to divide the channels of the input data, specified as one of the following:
- Positive integer – Divide the incoming channels into the specified number of groups. The specified number of groups must divide the number of channels of the input data exactly.
- 'all-channels' – Group all incoming channels into a single group. This operation is also known as layer normalization. Alternatively, use layerNormalizationLayer.
- 'channel-wise' – Treat all incoming channels as separate groups. This operation is also known as instance normalization. Alternatively, use instanceNormalizationLayer.
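Each option maps onto a familiar normalization scheme. A minimal sketch of all three:
% Divide the input channels into 8 groups (8 must divide the channel count)
layer1 = groupNormalizationLayer(8);
% One group of all channels: equivalent to layer normalization
layer2 = groupNormalizationLayer('all-channels');
% One group per channel: equivalent to instance normalization
layer3 = groupNormalizationLayer('channel-wise');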
Properties
Group Normalization
Epsilon — Constant to add to mini-batch variances
1e-5 (default) | positive scalar
Constant to add to the mini-batch variances, specified as a positive scalar.
The software adds this constant to the mini-batch variances before normalization to ensure numerical stability and avoid division by zero.
Before R2023a: Epsilon must be greater than or equal to 1e-5.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
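For example, you can specify a larger constant at creation (the value here is illustrative):
% Specify a larger epsilon than the 1e-5 default (illustrative value)
layer = groupNormalizationLayer(4,Epsilon=1e-4);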
NumChannels — Number of input channels
"auto" (default) | positive integer
This property is read-only.
Number of input channels, specified as one of the following:
- "auto" — Automatically determine the number of input channels at training time.
- Positive integer — Configure the layer for the specified number of input channels. NumChannels and the number of channels in the layer input data must match. For example, if the input is an RGB image, then NumChannels must be 3. If the input is the output of a convolutional layer with 16 filters, then NumChannels must be 16.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | char | string
Parameters and Initialization
ScaleInitializer — Function to initialize channel scale factors
'ones' (default) | 'zeros' | 'narrow-normal' | function handle
Function to initialize the channel scale factors, specified as one of the following:
- 'ones' – Initialize the channel scale factors with ones.
- 'zeros' – Initialize the channel scale factors with zeros.
- 'narrow-normal' – Initialize the channel scale factors by independently sampling from a normal distribution with a mean of zero and a standard deviation of 0.01.
- Function handle – Initialize the channel scale factors with a custom function. If you specify a function handle, then the function must be of the form scale = func(sz), where sz is the size of the scale. For an example, see Specify Custom Weight Initialization Function.
The layer only initializes the channel scale factors when the Scale property is empty.
Data Types: char | string | function_handle
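A custom initializer receives the size of the scale array and returns an array of that size. A minimal sketch (the constant 0.5 is an arbitrary illustrative choice):
% Hypothetical custom initializer: constant scale factors of 0.5
scaleInit = @(sz) 0.5*ones(sz);
layer = groupNormalizationLayer(4,ScaleInitializer=scaleInit);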
OffsetInitializer — Function to initialize channel offsets
'zeros' (default) | 'ones' | 'narrow-normal' | function handle
Function to initialize the channel offsets, specified as one of the following:
- 'zeros' – Initialize the channel offsets with zeros.
- 'ones' – Initialize the channel offsets with ones.
- 'narrow-normal' – Initialize the channel offsets by independently sampling from a normal distribution with a mean of zero and a standard deviation of 0.01.
- Function handle – Initialize the channel offsets with a custom function. If you specify a function handle, then the function must be of the form offset = func(sz), where sz is the size of the offset. For an example, see Specify Custom Weight Initialization Function.
The layer only initializes the channel offsets when the Offset property is empty.
Data Types: char | string | function_handle
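The same pattern applies to the offsets. For example, to draw the initial offsets from the narrow normal distribution described above:
% Initialize offsets by sampling from N(0, 0.01^2)
layer = groupNormalizationLayer(4,OffsetInitializer="narrow-normal");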
Scale — Channel scale factors
[] (default) | numeric array
Channel scale factors γ, specified as a numeric array.
The channel scale factors are learnable parameters. When you train a network using the trainnet function or initialize a dlnetwork object, if Scale is nonempty, then the software uses the Scale property as the initial value. If Scale is empty, then the software uses the initializer specified by ScaleInitializer.
Depending on the type of layer input, the trainnet and dlnetwork functions automatically reshape this property to have one of the following sizes:
Layer Input | Property Size |
---|---|
feature input | NumChannels-by-1 |
vector sequence input | NumChannels-by-1 |
1-D image input (since R2023a) | 1-by-NumChannels |
1-D image sequence input (since R2023a) | 1-by-NumChannels |
2-D image input | 1-by-1-by-NumChannels |
2-D image sequence input | 1-by-1-by-NumChannels |
3-D image input | 1-by-1-by-1-by-NumChannels |
3-D image sequence input | 1-by-1-by-1-by-NumChannels |
Data Types: single | double
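You can also supply initial values directly. A minimal sketch, assuming 2-D image input with 16 channels, so the software expects 1-by-1-by-16 arrays (the Offset property described next works the same way):
% Set explicit initial scale factors and offsets for 16 channels
layer = groupNormalizationLayer(4);
layer.Scale = ones(1,1,16);    % 1-by-1-by-NumChannels for 2-D image input
layer.Offset = zeros(1,1,16);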
Offset — Channel offsets
[] (default) | numeric array
Channel offsets β, specified as a numeric array.
The channel offsets are learnable parameters. When you train a network using the trainnet function or initialize a dlnetwork object, if Offset is nonempty, then the software uses the Offset property as the initial value. If Offset is empty, then the software uses the initializer specified by OffsetInitializer.
Depending on the type of layer input, the trainnet and dlnetwork functions automatically reshape this property to have one of the following sizes:
Layer Input | Property Size |
---|---|
feature input | NumChannels-by-1 |
vector sequence input | NumChannels-by-1 |
1-D image input (since R2023a) | 1-by-NumChannels |
1-D image sequence input (since R2023a) | 1-by-NumChannels |
2-D image input | 1-by-1-by-NumChannels |
2-D image sequence input | 1-by-1-by-NumChannels |
3-D image input | 1-by-1-by-1-by-NumChannels |
3-D image sequence input | 1-by-1-by-1-by-NumChannels |
Data Types: single | double
Learning Rate and Regularization
ScaleLearnRateFactor — Learning rate factor for scale factors
1 (default) | nonnegative scalar
Learning rate factor for the scale factors, specified as a nonnegative scalar.
The software multiplies this factor by the global learning rate to determine the learning rate for the scale factors in a layer. For example, if ScaleLearnRateFactor is 2, then the learning rate for the scale factors in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
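For example, to double the learning rate for both learnable parameters (OffsetLearnRateFactor is described next):
% Double the learning rate for the layer's scale factors and offsets
layer = groupNormalizationLayer(4, ...
    ScaleLearnRateFactor=2,OffsetLearnRateFactor=2);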
OffsetLearnRateFactor — Learning rate factor for offsets
1 (default) | nonnegative scalar
Learning rate factor for the offsets, specified as a nonnegative scalar.
The software multiplies this factor by the global learning rate to determine the learning rate for the offsets in a layer. For example, if OffsetLearnRateFactor is 2, then the learning rate for the offsets in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
ScaleL2Factor — L2 regularization factor for scale factors
1 (default) | nonnegative scalar
L2 regularization factor for the scale factors, specified as a nonnegative scalar.
The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the scale factors in a layer. For example, if ScaleL2Factor is 2, then the L2 regularization for the scale factors in the layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
OffsetL2Factor — L2 regularization factor for offsets
1 (default) | nonnegative scalar
L2 regularization factor for the offsets, specified as a nonnegative scalar.
The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the offsets in a layer. For example, if OffsetL2Factor is 2, then the L2 regularization for the offsets in the layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
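For example, to leave the offsets unregularized while keeping the global rate for the scale factors:
% Disable L2 regularization for the offsets only
layer = groupNormalizationLayer(4,ScaleL2Factor=1,OffsetL2Factor=0);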
Layer
Name — Layer name
"" (default) | character vector | string scalar
Layer name, specified as a character vector or a string scalar. If Name is "", then the software automatically assigns a name at training time.
Data Types: char | string
NumInputs — Number of inputs
1 (default)
This property is read-only.
Number of inputs to the layer, returned as 1. This layer accepts a single input only.
Data Types: double
InputNames — Input names
{'in'} (default)
This property is read-only.
Input names, returned as {'in'}. This layer accepts a single input only.
Data Types: cell
NumOutputs — Number of outputs
1 (default)
This property is read-only.
Number of outputs from the layer, returned as 1. This layer has a single output only.
Data Types: double
OutputNames — Output names
{'out'} (default)
This property is read-only.
Output names, returned as {'out'}. This layer has a single output only.
Data Types: cell
Examples
Create Group Normalization Layer
Create a group normalization layer that normalizes incoming data across three groups of channels. Name the layer groupnorm.
layer = groupNormalizationLayer(3,Name="groupnorm")
layer =
  GroupNormalizationLayer with properties:

           Name: 'groupnorm'
    NumChannels: 'auto'

   Hyperparameters
      NumGroups: 3
        Epsilon: 1.0000e-05

   Learnable Parameters
         Offset: []
          Scale: []

Use properties method to see a list of all properties.
Include a group normalization layer in a Layer array. Normalize the incoming 20 channels in four groups.
layers = [
    imageInputLayer([28 28 3])
    convolution2dLayer(5,20)
    groupNormalizationLayer(4)
    reluLayer
    maxPooling2dLayer(2,Stride=2)
    fullyConnectedLayer(10)
    softmaxLayer]
layers =
  7x1 Layer array with layers:

     1   ''   Image Input           28x28x3 images with 'zerocenter' normalization
     2   ''   2-D Convolution       20 5x5 convolutions with stride [1 1] and padding [0 0 0 0]
     3   ''   Group Normalization   Group normalization
     4   ''   ReLU                  ReLU
     5   ''   2-D Max Pooling       2x2 max pooling with stride [2 2] and padding [0 0 0 0]
     6   ''   Fully Connected       10 fully connected layer
     7   ''   Softmax               softmax
More About
Group Normalization Layer
A group normalization layer divides the channels of the input data into groups and normalizes the activations across each group. To speed up training of convolutional neural networks and reduce the sensitivity to network initialization, use group normalization layers between convolutional layers and nonlinearities, such as ReLU layers.
You can also use a group normalization layer to perform layer normalization or instance normalization. Layer normalization combines and normalizes activations across all channels in a single observation. Instance normalization normalizes the activations of each channel of the observation separately.
The layer first normalizes the activations of each group by subtracting the group mean and dividing by the group standard deviation. Then, the layer shifts the input by a learnable offset β and scales it by a learnable scale factor γ.
Group normalization layers normalize the activations and gradients propagating through a neural network, making network training an easier optimization problem. To take full advantage of this fact, you can try increasing the learning rate. Since the optimization problem is easier, the parameter updates can be larger and the network can learn faster. You can also try reducing the L2 and dropout regularization.
You can use a group normalization layer in place of a batch normalization layer. Doing so is particularly useful when training with small batch sizes, as it can increase the stability of training.
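As a sketch of the advice above, a hypothetical training setup might raise the initial learning rate and lighten L2 regularization relative to the defaults:
% Hypothetical settings: larger initial learning rate, lighter L2 regularization
options = trainingOptions("adam", ...
    InitialLearnRate=0.01, ...     % above the 0.001 default for adam
    L2Regularization=1e-5);        % below the 1e-4 default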
Algorithms
Group Normalization Layer
The group normalization operation normalizes the elements xi of the input by first calculating the mean μG and variance σG² over spatial, time, and grouped subsets of the channel dimensions for each observation independently. Then, it calculates the normalized activations as

x̂i = (xi − μG) / √(σG² + ϵ),

where ϵ is a constant that improves numerical stability when the variance is very small. To allow for the possibility that inputs with zero mean and unit variance are not optimal for the operations that follow group normalization, the group normalization operation further shifts and scales the activations using the transformation

yi = γ x̂i + β,

where the offset β and scale factor γ are learnable parameters that are updated during network training.
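A minimal numeric sketch of this computation on a 2-D image batch, using plain MATLAB arrays with γ = 1 and β = 0 (not the layer implementation itself):
% Group normalization math on an h-by-w-by-c-by-n array with numGroups groups
X = randn(4,4,6,2);                            % spatial, spatial, channel, batch
numGroups = 3; epsilon = 1e-5;
[h,w,c,n] = size(X);
Xg = reshape(X,h,w,c/numGroups,numGroups,n);   % split channels into groups
mu = mean(Xg,[1 2 3]);                         % per-group, per-observation mean
sigma2 = var(Xg,1,[1 2 3]);                    % per-group variance
Xhat = (Xg - mu)./sqrt(sigma2 + epsilon);      % normalize each group
Y = reshape(Xhat,h,w,c,n);                     % gamma = 1, beta = 0 here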
Layer Input and Output Formats
Layers in a layer array or layer graph pass data to subsequent layers as formatted dlarray objects. The format of a dlarray object is a string of characters in which each character describes the corresponding dimension of the data. The formats consist of one or more of these characters:
- "S" — Spatial
- "C" — Channel
- "B" — Batch
- "T" — Time
- "U" — Unspecified
For example, you can describe 2-D image data that is represented as a 4-D array, where the first two dimensions correspond to the spatial dimensions of the images, the third dimension corresponds to the channels of the images, and the fourth dimension corresponds to the batch dimension, as having the format "SSCB" (spatial, spatial, channel, batch).
You can interact with these dlarray objects in automatic differentiation workflows, such as those for developing a custom layer, using a functionLayer object, or using the forward and predict functions with dlnetwork objects.
This table shows the supported input formats of GroupNormalizationLayer objects and the corresponding output format. If the software passes the output of the layer to a custom layer that does not inherit from the nnet.layer.Formattable class, or a FunctionLayer object with the Formattable property set to 0 (false), then the layer receives an unformatted dlarray object with dimensions ordered according to the formats in this table. The formats listed here are only a subset. The layer may support additional formats such as formats with additional "S" (spatial) or "U" (unspecified) dimensions.
Input Format | Output Format |
---|---|
"CB" (channel, batch) | "CB" (channel, batch) |
"SCB" (spatial, channel, batch) | "SCB" (spatial, channel, batch) |
"SSCB" (spatial, spatial, channel, batch) | "SSCB" (spatial, spatial, channel, batch) |
"SSSCB" (spatial, spatial, spatial, channel, batch) | "SSSCB" (spatial, spatial, spatial, channel, batch) |
"CBT" (channel, batch, time) | "CBT" (channel, batch, time) |
"SCBT" (spatial, channel, batch, time) | "SCBT" (spatial, channel, batch, time) |
"SSCBT" (spatial, spatial, channel, batch, time) | "SSCBT" (spatial, spatial, channel, batch, time) |
"SSSCBT" (spatial, spatial, spatial, channel, batch, time) | "SSSCBT" (spatial, spatial, spatial, channel, batch, time) |
In dlnetwork objects, GroupNormalizationLayer objects also support additional input and output format combinations, such as the equivalent formats without a batch ("B") dimension.
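A minimal sketch of passing a formatted dlarray through the layer in a dlnetwork (the layer sizes here are illustrative):
% Build a small dlnetwork and check that the format is preserved
layers = [imageInputLayer([8 8 4],Normalization="none")
          groupNormalizationLayer(2)];
net = dlnetwork(layers);
X = dlarray(rand(8,8,4,1),"SSCB");
Y = predict(net,X);
dims(Y)     % returns 'SSCB': group normalization preserves the format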
References
[1] Wu, Yuxin, and Kaiming He. “Group Normalization.” Preprint submitted June 11, 2018. https://arxiv.org/abs/1803.08494.
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
GPU Code Generation
Generate CUDA® code for NVIDIA® GPUs using GPU Coder™.
Version History
Introduced in R2020b
R2023a: Epsilon supports values less than 1e-5
The Epsilon option also supports positive values less than 1e-5.
R2023a: Layer supports 1-D image sequence data
GroupNormalizationLayer objects support normalizing 1-D image sequence data (data with one spatial and one time dimension).