
deepDreamImage

Visualize network features using deep dream

Description

I = deepDreamImage(net,layer,channelIdx) returns an array of images that strongly activate the channels channelIdx of the layer in the network net, where layer is given by its numeric index or name. These images highlight the features learned by a network.

I = deepDreamImage(___,Name,Value) returns an image with additional options specified by one or more name-value arguments.


Examples


Load a pretrained network.

net = imagePretrainedNetwork("squeezenet");

Visualize the 16 features learned by the convolutional layer fire3-squeeze1x1 using deepDreamImage. Set PyramidLevels to 1 so that the images are not scaled.

layer = "fire3-squeeze1x1";
channels = 1:16;

I = deepDreamImage(net,layer,channels, ...
    PyramidLevels=1, ...
    Verbose=0);

tiledlayout("flow")
for i = 1:numel(channels)
    nexttile
    imshow(I(:,:,:,i))
end

Figure: tiled layout of the 16 generated images, one for each visualized channel of fire3-squeeze1x1.

Input Arguments


Trained network, specified as a dlnetwork object.

deepDreamImage only supports networks with an image input layer.
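For example, the following sketch (not part of the deepDreamImage interface) checks this requirement explicitly before calling the function; it assumes the squeezenet model used in the example above.

% Minimal sketch: confirm that the dlnetwork has an image input layer
% before calling deepDreamImage. Assumes the squeezenet model from the
% example above.
net = imagePretrainedNetwork("squeezenet");
hasImageInput = any(arrayfun(@(l) isa(l,"nnet.cnn.layer.ImageInputLayer"), ...
    net.Layers));
assert(hasImageInput,"deepDreamImage requires an image input layer.")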

Layer to visualize, specified as a numeric index, a character vector, or a string scalar. Specify layer as the index or the name of the layer whose activations you want to visualize. To visualize classification layer features, select the last fully connected layer.

Tip

Selecting ReLU or dropout layers for visualization may not produce useful images because of the effect that these layers have on the network gradients.

Channel index, specified as a scalar or a vector of channel indices. If channelIdx is a vector, the layer activations for each channel are optimized independently. The possible choices for channelIdx depend on the selected layer.
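As an illustration of the layer and channelIdx arguments together, the sketch below visualizes class-level features using the googlenet model (an assumption; it requires a support package) and its last fully connected layer, loss3-classifier.

% Minimal sketch, assuming the googlenet model (requires a support package).
% The last fully connected layer, "loss3-classifier", has one channel per
% class, so these images show class-level features.
net = imagePretrainedNetwork("googlenet");
layer = "loss3-classifier";    % last fully connected layer
channels = [1 2 3];            % arbitrary class indices
I = deepDreamImage(net,layer,channels,Verbose=0);
montage(I)                     % requires Image Processing Toolbox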

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

Example: deepDreamImage(net,layer,channels,NumIterations=100,ExecutionEnvironment="gpu") generates images using 100 iterations per pyramid level and uses the GPU.

Image to initialize the deep dream algorithm, specified as a numeric array. Use this option to see how an image is modified to maximize network layer activations. The minimum height and width of the initial image depend on all the layers up to and including the selected layer:

  • For layers towards the end of the network, the initial image must be at least the same height and width as the image input layer.

  • For layers towards the beginning of the network, the height and width of the initial image can be smaller than the image input layer. However, it must be large enough to produce a scalar output at the selected layer.

  • The number of channels of the initial image must match the number of channels in the image input layer of the network.

If you do not specify an initial image, the software uses a random image with pixels drawn from a standard normal distribution. For more information, see PyramidLevels.
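For example, the sketch below seeds the algorithm with an existing image instead of a random one. The peppers.png image and the 227-by-227-by-3 input size of squeezenet are assumptions carried over from the example above.

% Minimal sketch, assuming the squeezenet model (227-by-227-by-3 input)
% and the peppers.png image that ships with MATLAB.
net = imagePretrainedNetwork("squeezenet");
initImg = imresize(imread("peppers.png"),[227 227]);   % match the input layer height and width
I = deepDreamImage(net,"fire3-squeeze1x1",3, ...
    InitialImage=initImg, ...
    PyramidLevels=1, ...       % keep the output the same size as the initial image
    Verbose=0);
imshow(I)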

Number of multi-resolution image pyramid levels to use to generate the output image, specified as a positive integer. Increase the number of pyramid levels to produce larger output images at the expense of additional computation. To produce an image of the same size as the initial image, set the number of levels to 1.

Example: PyramidLevels=3

Scale between each pyramid level, specified as a scalar greater than 1. Reduce the pyramid scale to incorporate fine-grained details into the output image. Adjusting the pyramid scale can help generate more informative images for layers at the beginning of the network.

Example: PyramidScale=1.4

Number of iterations per pyramid level, specified as a positive integer. Increase the number of iterations to produce more detailed images at the expense of additional computation.

Example: NumIterations=10
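The sketch below combines the pyramid and iteration options on the network from the example above; the particular values are illustrative rather than recommendations.

% Minimal sketch, assuming the squeezenet model from the example above.
% More levels and iterations give larger, more detailed images at the cost
% of extra computation; a smaller pyramid scale keeps finer detail.
net = imagePretrainedNetwork("squeezenet");
I = deepDreamImage(net,"fire3-squeeze1x1",5, ...
    PyramidLevels=3, ...
    PyramidScale=1.2, ...
    NumIterations=50, ...
    Verbose=0);
imshow(I)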

Type of scaling to apply to output image, specified as one of these values.

Value        Description
"linear"     Scale output pixel values so that they are in the interval [0,1]. The output image corresponding to each layer channel, I(:,:,:,channel), is scaled independently.
"clipped"    Clip the image to the range [0, 255], and then scale to the interval [0,1]. This option often produces more vibrant images.
"none"       Disable output scaling.

Scaling the pixel values can cause the network to misclassify the output image. If you want to classify the output image, set the OutputScaling value to "none".

Example: OutputScaling="linear"
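To classify a generated image, disable output scaling as described above. The sketch below is an illustration built on assumptions: it uses the squeezenet model, its final 1-by-1 convolution layer conv10 (whose channels correspond to the output classes), and a formatted dlarray to pass the image back to predict.

% Minimal sketch, assuming the squeezenet model. OutputScaling="none" keeps
% pixel values in the range the network expects.
[net,classNames] = imagePretrainedNetwork("squeezenet");
I = deepDreamImage(net,"conv10",3, ...
    OutputScaling="none", ...
    PyramidLevels=1, ...                        % keep the image at the input size
    Verbose=0);
scores = predict(net,dlarray(single(I),"SSCB"));   % class probabilities
[~,idx] = max(extractdata(scores));
classNames(idx)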

Indicator to display progress information in the command window, specified as 1 (true) or 0 (false). The displayed information includes the pyramid level, the iteration number, and the activation strength.

Example: Verbose=0

Data Types: logical

Hardware resource, specified as one of these values:

  • "auto" — Use a GPU if one is available. Otherwise, use the CPU.

  • "gpu" — Use the GPU. Using a GPU requires a Parallel Computing Toolbox™ license and a supported GPU device. For information about supported devices, see GPU Computing Requirements (Parallel Computing Toolbox). If Parallel Computing Toolbox or a suitable GPU is not available, then the software returns an error.

  • "cpu" — Use the CPU.

Output Arguments


Output image, returned as a 4-D array containing a sequence of grayscale or RGB images. The function concatenates the images along the fourth dimension of I such that the image that maximizes the output of channel channelIdx(k) is I(:,:,:,k). You can display the output images using the imshow (Image Processing Toolbox) function.
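For instance, assuming I and channels from the example above, the k-th generated image is the k-th slice along the fourth dimension:

% Minimal sketch, assuming I and channels from the example above.
k = 5;
imshow(I(:,:,:,k))   % image that maximizes channels(k); requires Image Processing Toolbox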

Algorithms

This function implements a version of deep dream that uses a multi-resolution image pyramid and Laplacian Pyramid Gradient Normalization to generate high-resolution images. For more information on Laplacian Pyramid Gradient Normalization, see this blog post: DeepDreaming with TensorFlow.
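The sketch below illustrates only the multi-resolution idea and is not the toolbox implementation: the function name is hypothetical, Laplacian Pyramid Gradient Normalization is replaced by a simple RMS normalization for brevity, and dlresize (Image Processing Toolbox) is assumed for resizing formatted dlarray data.

% Conceptual sketch of multi-resolution gradient ascent; not the toolbox
% implementation. Assumes the first layer of net is the image input layer
% and that the network accepts variable spatial input sizes.
function img = deepDreamSketch(net,layer,channel,levels,scale,numIter)
    inputSize = net.Layers(1).InputSize;             % e.g. [227 227 3]
    img = dlarray(randn(inputSize,"single"),"SSCB"); % random initial image
    fullSize = inputSize(1:2);
    for level = 1:levels
        % Coarse to fine: resize the current image for this pyramid level.
        levelSize = round(fullSize * scale^(level-levels));
        img = dlresize(img,OutputSize=levelSize);    % Image Processing Toolbox
        for iter = 1:numIter
            % Gradient ascent on the mean activation of the chosen channel.
            [~,grad] = dlfeval(@channelObjective,net,img,layer,channel);
            grad = grad ./ (sqrt(mean(grad.^2,"all")) + 1e-8);  % crude normalization
            img = img + grad;                        % ascend to increase the activation
        end
    end
    img = extractdata(img);
end

function [obj,grad] = channelObjective(net,img,layer,channel)
    act = predict(net,img,Outputs=layer);   % activations of the target layer
    obj = mean(act(:,:,channel),"all");     % mean activation of one channel
    grad = dlgradient(obj,img);             % gradient with respect to the image
end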

By default, when you train a neural network using the trainnet function, the software performs the computations using single-precision, floating-point arithmetic. The trainnet function returns a network with single-precision learnables and state parameters.

When you use prediction or validation functions with a dlnetwork object with single-precision learnable and state parameters, the software performs the computations using single-precision, floating-point arithmetic.

When you use prediction or validation functions with a dlnetwork object with double-precision learnable and state parameters:

  • If the input data is single precision, the software performs the computations using single-precision, floating-point arithmetic.

  • If the input data is double precision, the software performs the computations using double-precision, floating-point arithmetic.
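A quick way to check which precision applies is to inspect the learnable parameters, as in the sketch below (assuming the squeezenet model from the example above).

% Minimal sketch, assuming the squeezenet model from the example above.
net = imagePretrainedNetwork("squeezenet");
underlyingType(net.Learnables.Value{1})   % typically 'single'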

References

[1] DeepDreaming with TensorFlow. https://github.com/tensorflow/docs/blob/master/site/en/tutorials/generative/deepdream.ipynb

Version History

Introduced in R2017a
