complexFullyConnectedLayer
Description
A complex fully connected layer multiplies the input by a complex weight matrix and then adds a complex bias vector.
Creation
Syntax
Description
layer = complexFullyConnectedLayer(outputSize) returns a complex fully connected layer and specifies the OutputSize property.
layer = complexFullyConnectedLayer(outputSize,dim) also sets the OperationDimension property.
layer = complexFullyConnectedLayer(outputSize,Name=Value) sets optional properties using one or more name-value arguments.
Input Arguments
Output size for the complex fully connected layer, specified as a positive integer.
Example: 10
Operation dimension, specified as one of these values:
"spatial-channel"— Flatten the"S"(spatial) and"C"(channel) dimensions of the input data, then multiply by the weights matrix and add the bias vector for each element in the"B"(batch),"T"(time), and"U"(unspecified) dimensions, independently.positive integer — Use the specified dimension of the layer input data
Xas the inner dimension of the matrix multiplicationWeights*Xin the layer operation, and apply the operation independently for each of the remaining dimensions.
This argument sets the OperationDimension property.
Data Types: single | double | char | string
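As a minimal sketch of the two forms of this argument, based on the syntax described above:

```matlab
% Create a complex fully connected layer with output size 10 that
% flattens the spatial and channel dimensions (the default behavior).
layer1 = complexFullyConnectedLayer(10,"spatial-channel");

% Create a layer that instead uses dimension 1 of the input data as the
% inner dimension of the matrix multiplication Weights*X.
layer2 = complexFullyConnectedLayer(10,1);
```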
Name-Value Arguments
Specify optional pairs of arguments as
Name1=Value1,...,NameN=ValueN, where Name is
the argument name and Value is the corresponding value.
Name-value arguments must appear after other arguments, but the order of the
pairs does not matter.
Example: complexFullyConnectedLayer(10,Name="cfc1") creates a
complex fully connected layer with an output size of 10 and the name
'cfc1'.
Function to initialize the weights, specified as one of the following:
"complex-glorot-normal"– Initialize the weights with the complex normal Glorot initializer [1]. The complex normal Glorot initializer independently samples real and imaginary parts of the weights from a normal distribution with zero mean and variance1/(InputSize + OutputSize)."complex-glorot-uniform-square"– Initialize the weights with the complex uniform square Glorot initializer [1]. The complex uniform square Glorot initializer independently samples real and imaginary parts of the weights from a uniform distribution in the interval(-sqrt(3/(InputSize + OutputSize)), sqrt(3/(InputSize + OutputSize)))."complex-he-normal"– Initialize the weights with the complex normal He initializer [1]. The complex normal He initializer samples real and imaginary parts of the weights from a normal distribution with zero mean and variance1/InputSize."complex-he-uniform-square"– Initialize the weights with the complex uniform square He initializer [1]. The complex uniform square He initializer samples real and imaginary parts of the weights from a uniform distribution in the interval(-sqrt(3/InputSize), sqrt(3/InputSize))."complex-narrow-normal"– Initialize the weights by independently sampling the real and imaginary parts of the weights from a normal distribution with zero mean and standard deviation 0.01."zeros"– Initialize the weights with a real array of zeros."ones"– Initialize the weights with a real array of ones.Function handle — Initialize the weights with a custom function. If you specify a function handle, then the function syntax must be of the form
weights = func(sz), whereszis the size of the weights. For an example, see Specify Custom Weight Initialization Function.
The layer only initializes the weights when the Weights
property is empty.
Data Types: char | string | function_handle
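As a sketch of the function-handle option, a custom initializer that samples complex-valued weights could look like the following. The function name and the scaling factor are illustrative, not one of the built-in initializers.

```matlab
% Custom weight initializer: independently sample the real and imaginary
% parts from a zero-mean normal distribution. sz is the size of the
% weights, [outputSize inputSize].
function weights = initComplexWeights(sz)
    sigma = 0.01;  % illustrative standard deviation
    weights = sigma*randn(sz) + 1i*sigma*randn(sz);
end
```

You can then pass the handle when you create the layer, for example `layer = complexFullyConnectedLayer(10,WeightsInitializer=@initComplexWeights)`.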
Function to initialize the biases, specified as one of these values:
"zeros"— Initialize the biases with zeros."ones"— Initialize the biases with ones."complex-narrow-normal"— Initialize the biases by independently sampling real and imaginary parts of the biases from a normal distribution with a mean of zero and a standard deviation of 0.01.Function handle — Initialize the biases with a custom function. If you specify a function handle, then the function must have the form
bias = func(sz), whereszis the size of the biases.
The layer only initializes the biases when the Bias
property is empty.
Data Types: char | string | function_handle
Initial layer weights, specified as a matrix.
The layer weights are learnable parameters. You can specify the initial value of
the weights directly using the Weights property of the layer.
When you train a network, if the Weights property
of the layer is nonempty, then the trainnet
function uses the Weights property as the initial
value. If the Weights property is empty, then the
software uses the initializer specified by the WeightsInitializer property of the layer.
At training time, Weights is an
OutputSize-by-InputSize matrix.
Data Types: single | double
Complex Number Support: Yes
Initial layer biases, specified as a matrix.
The layer biases are learnable parameters. When you train a neural network, if
Bias is nonempty, then the trainnet
function uses the Bias property as the initial
value. If Bias is empty, then the software uses the
initializer specified by BiasInitializer.
At training time, Bias is an
OutputSize-by-1 matrix.
Data Types: single | double
Complex Number Support: Yes
Learning rate factor for the weights, specified as a nonnegative scalar.
The software multiplies this factor by the global learning rate to determine
the learning rate for the weights in this layer. For example, if
WeightLearnRateFactor is 2, then the
learning rate for the weights in this layer is twice the current global learning
rate. You can specify the global learning rate by using the trainingOptions function.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
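For example, to train the weights of this layer at twice the global learning rate (a minimal sketch; the learning rate value here is illustrative):

```matlab
% Weights in this layer train at twice the global learning rate.
layer = complexFullyConnectedLayer(10,WeightLearnRateFactor=2);

% With this global learning rate, the effective learning rate for the
% weights of this layer is 2 * 0.001 = 0.002.
options = trainingOptions("adam",InitialLearnRate=0.001);
```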
Learning rate factor for the biases, specified as a nonnegative scalar.
The software multiplies this factor by the global learning rate to determine
the learning rate for the biases in this layer. For example, if
BiasLearnRateFactor is 2, then the
learning rate for the biases in the layer is twice the current global learning
rate. You can specify the global learning rate by using the trainingOptions function.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
L2 regularization factor for the weights, specified as a nonnegative scalar.
The software multiplies this factor by the global
L2 regularization factor to
determine the L2 regularization for the
weights in this layer. For example, if WeightL2Factor is
2, then the L2
regularization for the weights in this layer is twice the global
L2 regularization factor. You can
specify the global L2 regularization
factor by using the trainingOptions function.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
L2 regularization factor for the biases, specified as a nonnegative scalar.
The software multiplies this factor by the global
L2 regularization factor to
determine the L2 regularization for the
biases in this layer. For example, if BiasL2Factor is
2, then the L2
regularization for the biases in this layer is twice the global
L2 regularization factor. You can
specify the global L2 regularization
factor by using the trainingOptions function.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
Properties
Complex Fully Connected
Output size for the complex fully connected layer, specified as a positive integer.
Example: 10
This property is read-only after object creation. To set this property, use the
dim input argument when you create the
ComplexFullyConnectedLayer object.
Operation dimension, specified as one of these values:
"spatial-channel"— Flatten the"S"(spatial) and"C"(channel) dimensions of the input data, then multiply by the weights matrix and add the bias vector for each element in the"B"(batch),"T"(time), and"U"(unspecified) dimensions, independently.positive integer — Use the specified dimension of the layer input data
Xas the inner dimension of the matrix multiplicationWeights*Xin the layer operation, and apply the operation independently for each of the remaining dimensions.
The ComplexFullyConnectedLayer object stores this property as a character vector or double type.
Data Types: double | char
Input size for the complex fully connected layer, specified as a positive integer
or 'auto'. If InputSize is
'auto', then the software automatically determines the input size
during training.
Parameters and Initialization
Function to initialize the weights, specified as one of the following:
"complex-glorot-normal"– Initialize the weights with the complex normal Glorot initializer [1]. The complex normal Glorot initializer independently samples real and imaginary parts of the weights from a normal distribution with zero mean and variance1/(InputSize + OutputSize)."complex-glorot-uniform-square"– Initialize the weights with the complex uniform square Glorot initializer [1]. The complex uniform square Glorot initializer independently samples real and imaginary parts of the weights from a uniform distribution in the interval(-sqrt(3/(InputSize + OutputSize)), sqrt(3/(InputSize + OutputSize)))."complex-he-normal"– Initialize the weights with the complex normal He initializer [1]. The complex normal He initializer samples real and imaginary parts of the weights from a normal distribution with zero mean and variance1/InputSize."complex-he-uniform-square"– Initialize the weights with the complex uniform square He initializer [1]. The complex uniform square He initializer samples real and imaginary parts of the weights from a uniform distribution in the interval(-sqrt(3/InputSize), sqrt(3/InputSize))."complex-narrow-normal"– Initialize the weights by independently sampling the real and imaginary parts of the weights from a normal distribution with zero mean and standard deviation 0.01."zeros"– Initialize the weights with a real array of zeros."ones"– Initialize the weights with a real array of ones.Function handle — Initialize the weights with a custom function. If you specify a function handle, then the function syntax must be of the form
weights = func(sz), whereszis the size of the weights. For an example, see Specify Custom Weight Initialization Function.
The layer only initializes the weights when the Weights
property is empty.
Data Types: char | string | function_handle
Function to initialize the biases, specified as one of these values:
"zeros"— Initialize the biases with zeros."ones"— Initialize the biases with ones."complex-narrow-normal"— Initialize the biases by independently sampling real and imaginary parts of the biases from a normal distribution with a mean of zero and a standard deviation of 0.01.Function handle — Initialize the biases with a custom function. If you specify a function handle, then the function must have the form
bias = func(sz), whereszis the size of the biases.
The layer initializes the biases only when the Bias property
is empty.
Data Types: char | string | function_handle
Layer weights, specified as a matrix.
The layer weights are learnable parameters. You can specify the initial value of the weights
directly using the Weights property of the layer. When
you train a network, if the Weights property of the layer
is nonempty, then the trainnet
function uses the Weights property as the initial value.
If the Weights property is empty, then the software uses
the initializer specified by the WeightsInitializer
property of the layer.
At training time, Weights is an
OutputSize-by-InputSize matrix.
Data Types: single | double
Complex Number Support: Yes
Layer biases, specified as a matrix.
The layer biases are learnable parameters. When you train a neural network, if Bias is nonempty, then the trainnet
function uses the Bias property as the initial value. If
Bias is empty, then the software uses the initializer
specified by the BiasInitializer property.
At training time, Bias is an
OutputSize-by-1 matrix.
Data Types: single | double
Complex Number Support: Yes
Learning Rate and Regularization
Learning rate factor for the weights, specified as a nonnegative scalar.
The software multiplies this factor by the global learning rate to determine the learning rate for the weights in this layer. For example, if WeightLearnRateFactor is 2, then the learning rate for the weights in this layer is twice the current global learning rate. The software determines the global learning rate based on the settings you specify using the trainingOptions function.
Data Types: double
Learning rate factor for the biases, specified as a nonnegative scalar.
The software multiplies this factor by the global learning rate to determine the learning rate for the biases in this layer. For example, if BiasLearnRateFactor is 2, then the learning rate for the biases in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings you specify using the trainingOptions function.
The ComplexFullyConnectedLayer object stores this property as a
double.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
L2 regularization factor for the weights, specified as a nonnegative scalar.
The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the weights in this layer. For example, if WeightL2Factor is 2, then the L2 regularization for the weights in this layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.
Data Types: double
L2 regularization factor for the biases, specified as a nonnegative scalar.
The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the biases in this layer. For example, if BiasL2Factor is 2, then the L2 regularization for the biases in this layer is twice the global L2 regularization factor. The software determines the global L2 regularization factor based on the settings you specify using the trainingOptions function.
The ComplexFullyConnectedLayer object stores this property as a
double.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
Layer
This property is read-only.
Number of inputs to the layer, stored as 1. This layer accepts a
single input only.
Data Types: double
This property is read-only.
Input names, stored as {'in'}. This layer accepts a single input
only.
Data Types: cell
This property is read-only.
Number of outputs from the layer, stored as 1. This layer has a
single output only.
Data Types: double
This property is read-only.
Output names, stored as {'out'}. This layer has a single output
only.
Data Types: cell
Examples
Create a complex fully connected layer with an output size of 10 and the name 'cfc1'.
layer = complexFullyConnectedLayer(10,Name="cfc1")
layer =
ComplexFullyConnectedLayer with properties:
Name: 'cfc1'
Hyperparameters
InputSize: 'auto'
OutputSize: 10
OperationDimension: 'spatial-channel'
Learnable Parameters
Weights: []
Bias: []
Create a layer array including a sequence input layer and two complex fully connected layers separated by a zReLU layer.
layers = [
    sequenceInputLayer(100)
    complexFullyConnectedLayer(10)
    zreluLayer
    complexFullyConnectedLayer(1)];
Convert the layer array to a dlnetwork object.
net = dlnetwork(layers);
Create random sample complex sequence data.
data = randn(1,100) + 1i * randn(1,100);
Compute the output of the untrained network.
predict(net,data)
ans = single
-0.2372 + 0.5038i
Algorithms
Complex fully connected layers support real-valued and complex-valued inputs. The output is always complex, even if the inputs, weights, and biases are real.
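As a minimal sketch of this behavior, passing real-valued input through an untrained network containing the layer still yields a complex-valued output, because the default initializers produce complex weights:

```matlab
% Real-valued input data still produces complex-valued output.
layers = [
    featureInputLayer(4)
    complexFullyConnectedLayer(2)];
net = dlnetwork(layers);

X = dlarray(rand(4,1),"CB");   % real-valued input, 4 channels, 1 observation
Y = predict(net,X);            % Y is complex-valued
```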
Most layers in a layer array or layer graph pass data to subsequent layers as formatted
dlarray objects.
The format of a dlarray object is a string of characters in which each
character describes the corresponding dimension of the data. The format consists of one or
more of these characters:
"S"— Spatial"C"— Channel"B"— Batch"T"— Time"U"— Unspecified
For example, you can describe 2-D image data that is represented as a 4-D array, where the
first two dimensions correspond to the spatial dimensions of the images, the third
dimension corresponds to the channels of the images, and the fourth dimension
corresponds to the batch dimension, as having the format "SSCB"
(spatial, spatial, channel, batch).
You can interact with these dlarray objects in automatic differentiation
workflows, such as those for:
Developing a custom layer
Using a functionLayer object
Using the forward and predict functions with dlnetwork objects
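A short sketch of creating a formatted dlarray, following the "SSCB" example described above:

```matlab
% 2-D image data: 28-by-28 pixels, 3 channels, batch of 16 observations.
X = rand(28,28,3,16);
dlX = dlarray(X,"SSCB");   % spatial, spatial, channel, batch
dims(dlX)                  % query the data format of the dlarray
```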
This table shows the supported input formats of ComplexFullyConnectedLayer objects and the
corresponding output format. If the software passes the output of the layer to a custom
layer that does not inherit from the nnet.layer.Formattable class, or to
a FunctionLayer object with the Formattable property set
to 0 (false), then the layer receives an unformatted
dlarray object with dimensions ordered according to the formats in this
table. The formats listed here are only a subset of the formats that the layer supports. The
layer might support additional formats, such as formats with additional
"S" (spatial) or "U" (unspecified)
dimensions.
| Input Format | Output Format |
|---|---|
References
[1] Barrachina, Jose Agustin, Chengfang Ren, Gilles Vieillard, Christelle Morisseau, and Jean-Philippe Ovarlez. "Theory and Implementation of Complex-Valued Neural Networks." Preprint, submitted February 16, 2023. https://arxiv.org/abs/2302.08286.
[2] Trabelsi, Chiheb, Olexa Bilaniuk, Ying Zhang, Dmitriy Serdyuk, Sandeep Subramanian, João Felipe Santos, Soroush Mehri, Negar Rostamzadeh, Yoshua Bengio, and Christopher J Pal. "Deep Complex Networks". Preprint, submitted February 25, 2018. https://arxiv.org/abs/1705.09792.
Version History
Introduced in R2026a