
complexFullyConnectedLayer

Complex fully connected layer

Since R2026a

    Description

    A complex fully connected layer multiplies the input by a complex weight matrix and then adds a complex bias vector.
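    In other words, for input data X the layer computes Weights*X + Bias, where the weights, the bias, and the data can all be complex-valued. As a minimal sketch of the underlying arithmetic, with illustrative sizes and independent of the layer object:

    % Complex fully connected operation Y = W*X + b
    % with illustrative sizes outputSize = 3 and inputSize = 5.
    W = randn(3,5) + 1i*randn(3,5);   % complex weight matrix
    b = randn(3,1) + 1i*randn(3,1);   % complex bias vector
    X = randn(5,1) + 1i*randn(5,1);   % complex input vector
    Y = W*X + b;                      % complex 3-by-1 output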

    Creation

    Description

    layer = complexFullyConnectedLayer(outputSize) returns a complex fully connected layer and specifies the OutputSize property.

    layer = complexFullyConnectedLayer(outputSize,dim) also sets the OperationDimension property.

    layer = complexFullyConnectedLayer(outputSize,Name=Value) sets optional properties using one or more name-value arguments.


    Input Arguments


    outputSize — Output size for the complex fully connected layer, specified as a positive integer.

    Example: 10

    dim — Operation dimension, specified as one of these values:

    • "spatial-channel" — Flatten the "S" (spatial) and "C" (channel) dimensions of the input data, then multiply by the weights matrix and add the bias vector for each element in the "B" (batch), "T" (time), and "U" (unspecified) dimensions, independently.

    • positive integer — Use the specified dimension of the layer input data X as the inner dimension of the matrix multiplication Weights*X in the layer operation, and apply the operation independently for each of the remaining dimensions.

    This argument sets the OperationDimension property.

    Data Types: single | double | char | string
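    For example, to create a complex fully connected layer with an output size of 10 that uses the second dimension of its input data as the inner dimension of the matrix multiplication:

    layer = complexFullyConnectedLayer(10,2);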

    Name-Value Arguments


    Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

    Example: complexFullyConnectedLayer(10,Name="cfc1") creates a complex fully connected layer with an output size of 10 and the name 'cfc1'.

    WeightsInitializer — Function to initialize the weights, specified as one of these values:

    • "complex-glorot-normal" – Initialize the weights with the complex normal Glorot initializer [1]. The complex normal Glorot initializer independently samples real and imaginary parts of the weights from a normal distribution with zero mean and variance 1/(InputSize + OutputSize).

    • "complex-glorot-uniform-square" – Initialize the weights with the complex uniform square Glorot initializer [1]. The complex uniform square Glorot initializer independently samples real and imaginary parts of the weights from a uniform distribution in the interval (-sqrt(3/(InputSize + OutputSize)), sqrt(3/(InputSize + OutputSize))).

    • "complex-he-normal" – Initialize the weights with the complex normal He initializer [1]. The complex normal He initializer samples real and imaginary parts of the weights from a normal distribution with zero mean and variance 1/InputSize.

    • "complex-he-uniform-square" – Initialize the weights with the complex uniform square He initializer [1]. The complex uniform square He initializer samples real and imaginary parts of the weights from a uniform distribution in the interval (-sqrt(3/InputSize), sqrt(3/InputSize)).

    • "complex-narrow-normal" – Initialize the weights by independently sampling the real and imaginary parts of the weights from a normal distribution with zero mean and standard deviation 0.01.

    • "zeros" – Initialize the weights with a real array of zeros.

    • "ones" – Initialize the weights with a real array of ones.

    • Function handle — Initialize the weights with a custom function. If you specify a function handle, then the function syntax must be of the form weights = func(sz), where sz is the size of the weights. For an example, see Specify Custom Weight Initialization Function.

    The layer initializes the weights only when the Weights property is empty.

    Data Types: char | string | function_handle
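    As a sketch of the function handle form, a custom initializer that samples both the real and imaginary parts of the weights from a narrow normal distribution might look like this (the scale factor 0.02 is illustrative, not a recommended value):

    % Hypothetical custom initializer: complex values with real and
    % imaginary parts drawn from N(0, 0.02^2).
    initFcn = @(sz) 0.02*(randn(sz) + 1i*randn(sz));
    layer = complexFullyConnectedLayer(10,WeightsInitializer=initFcn);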

    BiasInitializer — Function to initialize the biases, specified as one of these values:

    • "zeros" — Initialize the biases with zeros.

    • "ones" — Initialize the biases with ones.

    • "complex-narrow-normal" — Initialize the biases by independently sampling real and imaginary parts of the biases from a normal distribution with a mean of zero and a standard deviation of 0.01.

    • Function handle — Initialize the biases with a custom function. If you specify a function handle, then the function must have the form bias = func(sz), where sz is the size of the biases.

    The layer initializes the biases only when the Bias property is empty.

    Data Types: char | string | function_handle

    Weights — Initial layer weights, specified as a matrix.

    The layer weights are learnable parameters. You can specify the initial value of the weights directly using the Weights property of the layer. When you train a network, if the Weights property of the layer is nonempty, then the trainnet function uses the Weights property as the initial value. If the Weights property is empty, then the software uses the initializer specified by the WeightsInitializer property of the layer.

    At training time, Weights is an OutputSize-by-InputSize matrix.

    Data Types: single | double
    Complex Number Support: Yes

    Bias — Initial layer biases, specified as a matrix.

    The layer biases are learnable parameters. When you train a neural network, if Bias is nonempty, then the trainnet function uses the Bias property as the initial value. If Bias is empty, then the software uses the initializer specified by BiasInitializer.

    At training time, Bias is an OutputSize-by-1 matrix.

    Data Types: single | double
    Complex Number Support: Yes

    WeightLearnRateFactor — Learning rate factor for the weights, specified as a nonnegative scalar.

    The software multiplies this factor by the global learning rate to determine the learning rate for the weights in this layer. For example, if WeightLearnRateFactor is 2, then the learning rate for the weights in this layer is twice the current global learning rate. You can specify the global learning rate by using the trainingOptions function.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
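    For example, to create a layer whose weights train at twice the global learning rate:

    layer = complexFullyConnectedLayer(10,WeightLearnRateFactor=2);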

    BiasLearnRateFactor — Learning rate factor for the biases, specified as a nonnegative scalar.

    The software multiplies this factor by the global learning rate to determine the learning rate for the biases in this layer. For example, if BiasLearnRateFactor is 2, then the learning rate for the biases in the layer is twice the current global learning rate. You can specify the global learning rate by using the trainingOptions function.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

    WeightL2Factor — L2 regularization factor for the weights, specified as a nonnegative scalar.

    The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the weights in this layer. For example, if WeightL2Factor is 2, then the L2 regularization for the weights in this layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor by using the trainingOptions function.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

    BiasL2Factor — L2 regularization factor for the biases, specified as a nonnegative scalar.

    The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the biases in this layer. For example, if BiasL2Factor is 2, then the L2 regularization for the biases in this layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor by using the trainingOptions function.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

    Name — Layer name, specified as a character vector or a string scalar. For Layer array input, the trainnet and dlnetwork functions automatically assign names to unnamed layers.

    This argument sets the Name property.

    Data Types: char | string

    Properties


    Complex Fully Connected

    OutputSize — Output size for the complex fully connected layer, specified as a positive integer.

    Example: 10

    OperationDimension — Operation dimension

    This property is read-only after object creation. To set this property, use the dim input argument when you create the ComplexFullyConnectedLayer object.

    The operation dimension is specified as one of these values:

    • "spatial-channel" — Flatten the "S" (spatial) and "C" (channel) dimensions of the input data, then multiply by the weights matrix and add the bias vector for each element in the "B" (batch), "T" (time), and "U" (unspecified) dimensions, independently.

    • positive integer — Use the specified dimension of the layer input data X as the inner dimension of the matrix multiplication Weights*X in the layer operation, and apply the operation independently for each of the remaining dimensions.

    The ComplexFullyConnectedLayer object stores this property as a character vector or double type.

    Data Types: double | char

    InputSize — Input size for the complex fully connected layer, specified as a positive integer or 'auto'. If InputSize is 'auto', then the software automatically determines the input size during training.
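    For example, assuming the default "spatial-channel" operation dimension, initializing a network with a sequence input layer that has 100 channels sets the input size to 100:

    layers = [
        sequenceInputLayer(100)
        complexFullyConnectedLayer(10)];
    net = dlnetwork(layers);     % initialization infers the input size
    net.Layers(2).InputSize      % now 100 rather than 'auto'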

    Parameters and Initialization

    WeightsInitializer — Function to initialize the weights, specified as one of these values:

    • "complex-glorot-normal" – Initialize the weights with the complex normal Glorot initializer [1]. The complex normal Glorot initializer independently samples real and imaginary parts of the weights from a normal distribution with zero mean and variance 1/(InputSize + OutputSize).

    • "complex-glorot-uniform-square" – Initialize the weights with the complex uniform square Glorot initializer [1]. The complex uniform square Glorot initializer independently samples real and imaginary parts of the weights from a uniform distribution in the interval (-sqrt(3/(InputSize + OutputSize)), sqrt(3/(InputSize + OutputSize))).

    • "complex-he-normal" – Initialize the weights with the complex normal He initializer [1]. The complex normal He initializer samples real and imaginary parts of the weights from a normal distribution with zero mean and variance 1/InputSize.

    • "complex-he-uniform-square" – Initialize the weights with the complex uniform square He initializer [1]. The complex uniform square He initializer samples real and imaginary parts of the weights from a uniform distribution in the interval (-sqrt(3/InputSize), sqrt(3/InputSize)).

    • "complex-narrow-normal" – Initialize the weights by independently sampling the real and imaginary parts of the weights from a normal distribution with zero mean and standard deviation 0.01.

    • "zeros" – Initialize the weights with a real array of zeros.

    • "ones" – Initialize the weights with a real array of ones.

    • Function handle — Initialize the weights with a custom function. If you specify a function handle, then the function syntax must be of the form weights = func(sz), where sz is the size of the weights. For an example, see Specify Custom Weight Initialization Function.

    The layer initializes the weights only when the Weights property is empty.

    Data Types: char | string | function_handle

    BiasInitializer — Function to initialize the biases, specified as one of these values:

    • "zeros" — Initialize the biases with zeros.

    • "ones" — Initialize the biases with ones.

    • "complex-narrow-normal" — Initialize the biases by independently sampling real and imaginary parts of the biases from a normal distribution with a mean of zero and a standard deviation of 0.01.

    • Function handle — Initialize the biases with a custom function. If you specify a function handle, then the function must have the form bias = func(sz), where sz is the size of the biases.

    The layer initializes the biases only when the Bias property is empty.

    Data Types: char | string | function_handle

    Weights — Layer weights, specified as a matrix.

    The layer weights are learnable parameters. You can specify the initial value of the weights directly using the Weights property of the layer. When you train a network, if the Weights property of the layer is nonempty, then the trainnet function uses the Weights property as the initial value. If the Weights property is empty, then the software uses the initializer specified by the WeightsInitializer property of the layer.

    At training time, Weights is an OutputSize-by-InputSize matrix.

    Data Types: single | double
    Complex Number Support: Yes

    Bias — Layer biases, specified as a matrix.

    The layer biases are learnable parameters. When you train a neural network, if Bias is nonempty, then the trainnet function uses the Bias property as the initial value. If Bias is empty, then the software uses the initializer specified by the BiasInitializer property.

    At training time, Bias is an OutputSize-by-1 matrix.

    Data Types: single | double
    Complex Number Support: Yes

    Learning Rate and Regularization

    WeightLearnRateFactor — Learning rate factor for the weights, specified as a nonnegative scalar.

    The software multiplies this factor by the global learning rate to determine the learning rate for the weights in this layer. For example, if WeightLearnRateFactor is 2, then the learning rate for the weights in this layer is twice the current global learning rate. The software determines the global learning rate based on the settings you specify using the trainingOptions function.

    Data Types: double

    BiasLearnRateFactor — Learning rate factor for the biases, specified as a nonnegative scalar.

    The software multiplies this factor by the global learning rate to determine the learning rate for the biases in this layer. For example, if BiasLearnRateFactor is 2, then the learning rate for the biases in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings you specify using the trainingOptions function.

    The ComplexFullyConnectedLayer object stores this property as double type.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

    WeightL2Factor — L2 regularization factor for the weights, specified as a nonnegative scalar.

    The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the weights in this layer. For example, if WeightL2Factor is 2, then the L2 regularization for the weights in this layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.

    Data Types: double

    BiasL2Factor — L2 regularization factor for the biases, specified as a nonnegative scalar.

    The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the biases in this layer. For example, if BiasL2Factor is 2, then the L2 regularization for the biases in this layer is twice the global L2 regularization factor. The software determines the global L2 regularization factor based on the settings you specify using the trainingOptions function.

    The ComplexFullyConnectedLayer object stores this property as double type.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

    Layer

    Name — Layer name, specified as a character vector. For Layer array input, the trainnet and dlnetwork functions automatically assign names to unnamed layers.

    Data Types: char

    This property is read-only.

    NumInputs — Number of inputs to the layer, stored as 1. This layer accepts a single input only.

    Data Types: double

    This property is read-only.

    InputNames — Input names, stored as {'in'}. This layer accepts a single input only.

    Data Types: cell

    This property is read-only.

    NumOutputs — Number of outputs from the layer, stored as 1. This layer has a single output only.

    Data Types: double

    This property is read-only.

    OutputNames — Output names, stored as {'out'}. This layer has a single output only.

    Data Types: cell

    Examples


    Create a complex fully connected layer with an output size of 10 and the name 'cfc1'.

    layer = complexFullyConnectedLayer(10,Name="cfc1")
    layer = 
      ComplexFullyConnectedLayer with properties:
    
                      Name: 'cfc1'
    
       Hyperparameters
                 InputSize: 'auto'
                OutputSize: 10
        OperationDimension: 'spatial-channel'
    
       Learnable Parameters
                   Weights: []
                      Bias: []
    
      Show all properties
    
    

    Create a layer array including a sequence input layer and two complex fully connected layers separated by a zReLU layer.

    layers = [...
        sequenceInputLayer(100), ...
        complexFullyConnectedLayer(10), ...
        zreluLayer, ...
        complexFullyConnectedLayer(1), ...
        ];

    Convert the layer array to a dlnetwork object.

    net = dlnetwork(layers);

    Create a random sample of complex sequence data.

    data = randn(1,100) + 1i * randn(1,100);

    Compute the output of the untrained network.

    predict(net,data)
    ans = single
    
    -0.2372 + 0.5038i
    


    References

    [1] Barrachina, Jose Agustin, Chengfang Ren, Gilles Vieillard, Christelle Morisseau, and Jean-Philippe Ovarlez. "Theory and Implementation of Complex-Valued Neural Networks". Preprint, submitted February 16, 2023. https://arxiv.org/abs/2302.08286.

    [2] Trabelsi, Chiheb, Olexa Bilaniuk, Ying Zhang, Dmitriy Serdyuk, Sandeep Subramanian, João Felipe Santos, Soroush Mehri, Negar Rostamzadeh, Yoshua Bengio, and Christopher J Pal. "Deep Complex Networks". Preprint, submitted February 25, 2018. https://arxiv.org/abs/1705.09792.

    Version History

    Introduced in R2026a