What does Inherit via Back Propagation mean?

What is the difference between, and how do, Inherit via Back Propagation and Inherit via Internal Rule work?

Accepted Answer

Andy Bartlett on 2022-11-3
Edited: Andy Bartlett on 2022-11-3
Inherit via Back Propagation
The purpose of the Inherit via Back Propagation setting is to let a block have its output data type set from outside the block.
This is a powerful capability that allows the creation of subsystems that react to incoming data types in ways desired by the author of the subsystem.
As a simple example, look under the mask of the Data Type Conversion Inherited block from the base Simulink library.
The Conversion block has its Output Data Type parameter set to Inherit via Back Propagation.
Using the Inherit via Back Propagation setting in combination with the Data Type Duplication block will automatically set the data type of the Conversion block output to be the same as the Inport block named DTS Reference.
The Data Type Conversion Inherited block is equivalent to MATLAB's powerful cast 'like' capability.
y = cast(u,'like',DTS_reference);
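Concretely, this 'like' behavior can be tried in MATLAB itself (a minimal sketch assuming Fixed-Point Designer for fi and numerictype; the reference type and values below are made up for illustration):
DTS_reference = fi(0, numerictype(1,16,8)); % reference signal that defines the data type
u = 3.14159;                                % incoming value, a double here
y = cast(u, 'like', DTS_reference);         % y picks up numerictype(1,16,8)
disp(y)                                     % 3.140625, quantized to the reference type
disp(class(y))                              % 'embedded.fi'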
Any Simulink block that offers the Inherit via Back Propagation option, not just Data Type Conversion, can provide this 'like' capability.
In addition, the data type can be set via many mechanisms, not just Data Type Duplication. The Data Type Propagation block is just one of the many other mechanisms that can be used. This puts a lot of flexibility in the hands of the subsystem author to create custom data type propagation rules.
To understand more about the flow of data type information between blocks, please watch this video.
Inherit via Internal Rule
Let's try to get a sense of the Inherit via Internal Rule at a conceptual level. The goal here is not to describe every detail of the rule for every block. The goal is to just give a deeper sense of the high level intent of the rule.
Let's use a specific example.
The Product block documentation for Inherit via internal rule says:
"Simulink chooses a data type to
balance numerical accuracy, performance, and generated code size,
while taking into account the properties of the embedded target hardware."
To determine the "balance of numerical accuracy, performance, and generated code size", the block considers
  • Attributes of the inputs, especially their data types
  • Parameters specifying operation of the block
  • Production embedded target as specified on the model's Hardware Implementation pane, especially ASIC/FPGA vs. microprocessor
Double dominates, then Single, then everything else
Using this information, the Product block will first give priority to floating-point.
  • If any input is double, then the output is double
  • Otherwise, if any input is single, then the output is single
Note: you can expect double then single to be the dominant rule on other blocks supporting an inherit via internal rule.
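As a rough MATLAB sketch (an illustration of the dominance order only, not a Simulink API; the helper name dominantFloatClass is hypothetical), the floating-point part of the rule looks like this:
function outClass = dominantFloatClass(inClasses)
% inClasses is a cell array of input class names, e.g. {'double','int16'}.
if any(strcmp(inClasses, 'double'))
    outClass = 'double';        % any double input makes the output double
elseif any(strcmp(inClasses, 'single'))
    outClass = 'single';        % otherwise any single input makes the output single
else
    outClass = 'fixed-point';   % otherwise the fixed-point rules below apply
end
end
For example, dominantFloatClass({'single','int16'}) returns 'single'.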
Fixed-Point and Integer
If floating-point does not "win", then fixed-point rules will be applied. Note, the rule treats integers as fixed-point types that just happen to have trivial scaling.
Numerical Accuracy Starting Point: First Determine Ideal Full Precision Type
As an example, let's consider the ideal product of int8 times int16.
------------------------------
Input 1 representable extremes.
Data type is numerictype(1,16,0)
Real World Value = Binary Point Notation
-32768 = 1000000000000000.
-1 = 1111111111111111.
0 = 0000000000000000.
1 = 0000000000000001.
32767 = 0111111111111111.
------------------------------
Input 2 representable extremes.
Data type is numerictype(1,8,0)
Real World Value = Binary Point Notation
-128 = 10000000.
-1 = 11111111.
0 = 00000000.
1 = 00000001.
127 = 01111111.
------------------------------
Product of Input 1 and 2 extremes.
Data type is numerictype(1,24,0)
Real World Value = Binary Point Notation
-4194176 = 110000000000000010000000.
-1 = 111111111111111111111111.
0 = 000000000000000000000000.
1 = 000000000000000000000001.
4194304 = 010000000000000000000000.
------------------------------
Example product hi times hi.
Type                 Real World Value = Binary Point Notation
numerictype(1,16,0) 32767 = 0111111111111111.
numerictype(1,8,0) 127 = 01111111.
numerictype(1,24,0) 4161409 = 001111110111111110000001.
------------------------------
Example product lo times lo.
Type                 Real World Value = Binary Point Notation
numerictype(1,16,0) -32768 = 1000000000000000.
numerictype(1,8,0) -128 = 10000000.
numerictype(1,24,0) 4194304 = 010000000000000000000000.
------------------------------
Example product hi times lo.
Type                 Real World Value = Binary Point Notation
numerictype(1,16,0) 32767 = 0111111111111111.
numerictype(1,8,0) -128 = 10000000.
numerictype(1,24,0) -4194176 = 110000000000000010000000.
The just-big-enough type that perfectly represents all possible products without overflow or loss of precision is 24-bit signed, numerictype(1,24,0), equivalently fixdt(1,24,0).
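This ideal product type can be confirmed in MATLAB with fi objects (a sketch assuming Fixed-Point Designer; the default fimath has ProductMode set to 'FullPrecision'):
a = fi(32767, 1, 16, 0);  % numerictype(1,16,0), like Input 1
b = fi(127,   1,  8, 0);  % numerictype(1,8,0),  like Input 2
p = a * b;                % full-precision product
disp(p)                   % 4161409
disp(numerictype(p))      % numerictype(1,24,0): word lengths add, no overflow, no rounding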
If we did the same with non-trivial fixed-point scaling, the bit patterns and sizes would look exactly the same except for scaling. Say signed 8 bits with 4 bits to the right of the binary point multiplied by signed 16 bits with 3 bits to the right of the binary point.
------------------------------
Input 1 representable extremes.
Data type is numerictype(1,16,3)
Real World Value = Binary Point Notation
-4096 = 1000000000000.000
-0.125 = 1111111111111.111
0 = 0000000000000.000
0.125 = 0000000000000.001
4095.875 = 0111111111111.111
------------------------------
Input 2 representable extremes.
Data type is numerictype(1,8,4)
Real World Value = Binary Point Notation
-8 = 1000.0000
-0.0625 = 1111.1111
0 = 0000.0000
0.0625 = 0000.0001
7.9375 = 0111.1111
------------------------------
Product of Input 1 and 2 extremes.
Data type is numerictype(1,24,7)
Real World Value = Binary Point Notation
-32767 = 11000000000000001.0000000
-0.0078125 = 11111111111111111.1111111
0 = 00000000000000000.0000000
0.0078125 = 00000000000000000.0000001
32768 = 01000000000000000.0000000
------------------------------
Example product hi times hi.
Type                 Real World Value = Binary Point Notation
numerictype(1,16,3) 4095.875 = 0111111111111.111
numerictype(1,8,4) 7.9375 = 0111.1111
numerictype(1,24,7) 32511.0078125 = 00111111011111111.0000001
------------------------------
Example product lo times lo.
Type                 Real World Value = Binary Point Notation
numerictype(1,16,3) -4096 = 1000000000000.000
numerictype(1,8,4) -8 = 1000.0000
numerictype(1,24,7) 32768 = 01000000000000000.0000000
------------------------------
Example product hi times lo.
Type                 Real World Value = Binary Point Notation
numerictype(1,16,3) 4095.875 = 0111111111111.111
numerictype(1,8,4) -8 = 1000.0000
numerictype(1,24,7) -32767 = 11000000000000001.0000000
The just-big-enough type that perfectly represents all possible products without overflow or loss of precision is again 24-bit signed, but with 3 + 4 = 7 bits to the right of the binary point. So the full-precision type is numerictype(1,24,7), equivalently fixdt(1,24,7).
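The same check with the non-trivially scaled inputs (again a sketch assuming Fixed-Point Designer defaults):
a = fi(4095.875, 1, 16, 3);  % numerictype(1,16,3), like Input 1
b = fi(7.9375,   1,  8, 4);  % numerictype(1,8,4),  like Input 2
p = a * b;                   % full-precision product
disp(p)                      % 32511.0078125, the hi times hi row above
disp(numerictype(p))         % numerictype(1,24,7): 16+8 bits, 3+4 fraction bits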
If Production Target is ASIC/FPGA
Given the full-precision type, the next step is to consider the production target. If that target is ASIC or FPGA, then a 24-bit data type is perfectly natural and can be supported efficiently. So if the model's Hardware Implementation pane says the production target is ASIC/FPGA, then the output will be 24 bits and exactly match the full-precision type.
If Production Target is a microprocessor, low cost case
If the Hardware Implementation pane says the production target is a microcontroller such as ARM Cortex-M, and support for long long is turned on, then the target provides four integer types native to the C compiler: 8, 16, 32, and 64 bits.
The full-precision type needs 24 bits, which is less than the biggest integer available (64 bits), so handling 24 bits is low cost and the output will keep full precision. But the output will be put in the smallest of the four native integer sizes that can hold the ideal size. So the 24-bit ideal will be "puffed up" to 32 bits native, with the extra padding bits put on the most significant end.
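A hypothetical MATLAB sketch of that container choice (an illustration only, not a Simulink API; the helper name smallestNativeContainer is made up, and long long support is assumed on):
function containerBits = smallestNativeContainer(idealBits)
% Pick the smallest native C integer size that can hold the ideal word length.
native = [8 16 32 64];                                 % native sizes with long long enabled
containerBits = native(find(native >= idealBits, 1));
if isempty(containerBits)
    error('Ideal word length exceeds the largest native integer; bits must be trimmed.');
end
end
For example, smallestNativeContainer(24) returns 32, which is why: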
numerictype(1,24,0) will puff to numerictype(1,32,0).
numerictype(1,24,7) will puff to numerictype(1,32,7).
For that latter example, this is what the "puffing up" looks like.
Product of Input 1 and 2 extremes.
Type                 Real World Value = Binary Point Notation
numerictype(1,24,7) -32767 = 11000000000000001.0000000
numerictype(1,32,7) -32767 = 1111111111000000000000001.0000000
numerictype(1,24,7) -0.0078125 = 11111111111111111.1111111
numerictype(1,32,7) -0.0078125 = 1111111111111111111111111.1111111
numerictype(1,24,7) 0 = 00000000000000000.0000000
numerictype(1,32,7) 0 = 0000000000000000000000000.0000000
numerictype(1,24,7) 0.0078125 = 00000000000000000.0000001
numerictype(1,32,7) 0.0078125 = 0000000000000000000000000.0000001
numerictype(1,24,7) 32768 = 01000000000000000.0000000
numerictype(1,32,7) 32768 = 0000000001000000000000000.0000000
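The padding is lossless, as a quick fi check shows (sketch assuming Fixed-Point Designer):
v24 = fi(-32767, numerictype(1,24,7));  % a value in the ideal 24-bit type
v32 = fi(v24,    numerictype(1,32,7));  % "puffed up" into the 32-bit container
isequal(double(v24), double(v32))       % true: only headroom was added, the value is unchanged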
If Production Target is a microprocessor, too expensive case
If the ideal product needs more bits than the biggest integer provided by the compiler of the production hardware, then supporting that large type is deemed to have too big of a negative impact on performance and code size. So the ideal product type will be trimmed down. Overflows would have the worst impact on numeric accuracy, so bits will NOT be trimmed from the most significant range end. Instead bits will be trimmed from the least significant precision end.
Let's consider an example of signed 32 bits times signed 8 bits, so the ideal product is signed 40 bits. Now let's assume the biggest integer size available on the production embedded target is 32 bits. So 8 precision bits from the least significant end of the ideal product type will be discarded.
Example
nt_input1 = numerictype(1,32,30);
nt_input2 = numerictype(1,8,4);
nt_ideal_product = numerictype(1,40,34);
nt_out_internal_rule = numerictype(1,32,26);
Example values, ideal vs. actual internal rule output:
Product of Input 1 and 2 extremes.
Type                 Real World Value = Binary Point Notation
numerictype(1,40,34) -15.999999992549419 = 110000.0000000000000000000000000010000000
numerictype(1,32,26) -15.999999985098839 = 110000.00000000000000000000000001
numerictype(1,40,34) 5.8207660913467407e-11 = 000000.0000000000000000000000000000000001
numerictype(1,32,26) 0 = 000000.00000000000000000000000000
numerictype(1,40,34) 16 = 010000.0000000000000000000000000000000000
numerictype(1,32,26) 16 = 010000.00000000000000000000000000
As you can see, 8 precision bits have been dropped, leading to modest rounding errors. But the good news is that no big overflow errors have occurred due to using a smaller-than-ideal output data type.
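A fi sketch of the first row of that table (Fixed-Point Designer assumed; 'Nearest' rounding is chosen here purely for illustration, since the block's rounding setting governs the actual result):
ideal   = fi(-15.999999992549419, numerictype(1,40,34));                % ideal product value
trimmed = fi(ideal, numerictype(1,32,26), 'RoundingMethod', 'Nearest'); % 8 LSBs dropped
disp(double(ideal))    % -15.999999992549419
disp(double(trimmed))  % -15.999999985098839: small precision loss, but no overflow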
When trimming bits, ideal integer may get non-trivial fixed-point scaling
When trimming bits is necessary, built-in integer inputs can produce a fixed-point output.
nt_input1 = numerictype(1,32, 0);
nt_input2 = numerictype(1, 8, 0);
nt_ideal_product = numerictype(1,40, 0);
nt_out_internal_rule = numerictype(1,32, -8);
Notice that the output type has non-trivial scaling (Slope = 256) because the 8 least significant bits of the ideal product were dropped. Because the Slope is greater than 1, binary point notation breaks down. This is easy to handle by switching from "point" notation to "scientific" notation. But instead of switching from decimal-point notation to decimal power-of-10 scientific notation, we are switching from binary-point notation to binary power-of-2 scientific notation.
Type                 Real World Value = Integer Mantissa * 2^Exponent
numerictype(1,40,0) 272730423169 = 0011111101111111111111111111111110000001 * 2^0
numerictype(1,32,-8) 272730423040 = 00111111011111111111111111111111 * 2^8
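A fi sketch of that case (Fixed-Point Designer assumed; 'Floor' rounding is used so the dropped bits are simply truncated, matching the bit patterns above):
a = fi(2147483647, 1, 32, 0);   % int32 max held in numerictype(1,32,0)
b = fi(127,        1,  8, 0);   % int8 max held in numerictype(1,8,0)
ideal = a * b;                  % full precision: numerictype(1,40,0), value 272730423169
out = fi(ideal, numerictype(1,32,-8), 'RoundingMethod', 'Floor');  % keep the 32 most significant bits
disp(double(out))               % 272730423040: stored integer times 2^8, the 8 LSBs (value 129) are gone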
Summary on Internal Rule
Hopefully, these examples have given a better sense of what is meant by
Internal Rule Goal: balance numerical accuracy, performance, and generated code size
Similar principles would apply to other blocks. Double wins, then Single, then full-precision is left alone, puffed-up, or trimmed-down depending on the target. For integer and fixed-point, avoid overflows and give up precision only if too costly to keep.
