Main Content

clippedReluLayer

Clipped Rectified Linear Unit (ReLU) layer

Description

A clipped ReLU layer performs a threshold operation, where any input value less than zero is set to zero and any value above the clipping ceiling is set to that clipping ceiling.

This operation is equivalent to:

f(x) = 0,        x < 0
f(x) = x,        0 ≤ x < ceiling
f(x) = ceiling,  x ≥ ceiling

This clipping prevents the output from becoming too large.
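As a language-neutral illustration of the formula above (a NumPy sketch for exposition only; it is not part of the MATLAB API), the clipping is the composition of an elementwise maximum with 0 and an elementwise minimum with the ceiling:

```python
import numpy as np

def clipped_relu(x, ceiling):
    """Clipped ReLU: 0 below zero, identity on [0, ceiling), ceiling above."""
    return np.minimum(np.maximum(x, 0), ceiling)

x = np.array([-5.0, 3.0, 10.0, 42.0])
print(clipped_relu(x, 10))  # [ 0.  3. 10. 10.]
```

Values below zero map to zero, values in [0, ceiling) pass through unchanged, and values at or above the ceiling saturate at the ceiling.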

Creation

Description

layer = clippedReluLayer(ceiling) returns a clipped ReLU layer with the clipping ceiling equal to ceiling.

layer = clippedReluLayer(ceiling,'Name',Name) sets the optional Name property.


Properties


Clipped ReLU

Ceiling — Ceiling for input clipping
positive scalar

Ceiling for input clipping, specified as a positive scalar.

Example: 10

Layer

Name — Layer name
character vector | string scalar

Layer name, specified as a character vector or string scalar. For Layer array input, the trainnet and dlnetwork functions automatically assign names to layers with the name "".

The ClippedReLULayer object stores this property as a character vector.

Data Types: char | string

This property is read-only.

NumInputs — Number of inputs
1

Number of inputs to the layer, returned as 1. This layer accepts a single input only.

Data Types: double

This property is read-only.

InputNames — Input names
{'in'}

Input names, returned as {'in'}. This layer accepts a single input only.

Data Types: cell

This property is read-only.

NumOutputs — Number of outputs
1

Number of outputs from the layer, returned as 1. This layer has a single output only.

Data Types: double

This property is read-only.

OutputNames — Output names
{'out'}

Output names, returned as {'out'}. This layer has a single output only.

Data Types: cell

Examples


Create Clipped ReLU Layer

Create a clipped ReLU layer with the name 'clip1' and the clipping ceiling equal to 10.

layer = clippedReluLayer(10,Name="clip1")
layer = 
  ClippedReLULayer with properties:

       Name: 'clip1'

   Hyperparameters
    Ceiling: 10

Include Clipped ReLU Layer in Layer Array

Include a clipped ReLU layer in a Layer array.

layers = [ ...
    imageInputLayer([28 28 1])
    convolution2dLayer(5,20)
    clippedReluLayer(10)
    maxPooling2dLayer(2,Stride=2)
    fullyConnectedLayer(10)
    softmaxLayer]
layers = 
  6x1 Layer array with layers:

     1   ''   Image Input       28x28x1 images with 'zerocenter' normalization
     2   ''   2-D Convolution   20 5x5 convolutions with stride [1  1] and padding [0  0  0  0]
     3   ''   Clipped ReLU      Clipped ReLU with ceiling 10
     4   ''   2-D Max Pooling   2x2 max pooling with stride [2  2] and padding [0  0  0  0]
     5   ''   Fully Connected   10 fully connected layer
     6   ''   Softmax           softmax



Extended Capabilities

C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.

GPU Code Generation
Generate CUDA® code for NVIDIA® GPUs using GPU Coder™.

Version History

Introduced in R2017b