Conversion of a Unet from .pt to .onnx not being correct due to skip connections

  • Thread starter: Miquel (Guest)
I am converting my UNet to .onnx, but the resulting segmentation does not match the output of the original PyTorch model. My model is a typical UNet:

Code:
model = unet.UNet(in_channels=1, out_classes=1, out_channels_first_layer=32, padding=1)

And the code to convert it is the following:

Code:
import torch

model = torch.load('my_unet.pt')
model.to('cuda')
model.eval()  # put the model in inference mode before tracing
x = torch.ones((1, 1, 256, 128)).cuda()
ONNX_FILE_PATH = 'segmentation_model.onnx'
torch.onnx.export(model, x, ONNX_FILE_PATH, input_names=['input'],
                  output_names=['output'], export_params=True, opset_version=11)

The warnings, though, indicate that there is some problem with the skip connections:

Code:
/home/lib/python3.8/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at  /pytorch/c10/core/TensorImpl.h:1156.)
  return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
/home/lib/python3.8/site-packages/unet/decoding.py:143: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
  skip_shape = torch.tensor(skip_connection.shape)
/home/lib/python3.8/site-packages/unet/decoding.py:144: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
  x_shape = torch.tensor(x.shape)
/home/lib/python3.8/site-packages/torch/_tensor.py:575: UserWarning: floor_divide is deprecated, and will be removed in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values.
To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). (Triggered internally at  /pytorch/aten/src/ATen/native/BinaryOps.cpp:467.)
  return torch.floor_divide(self, other)
/home/lib/python3.8/site-packages/unet/decoding.py:150: TracerWarning: Converting a tensor to a Python list might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  skip_connection = F.pad(skip_connection, pad.tolist())

So far I have tried adding verbose=True to torch.onnx.export. I suspect the problem lies in the decoder, but I cannot pinpoint it from the information printed.
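For context on the TracerWarnings: the decoder computes how much to pad the skip connection from tensor shapes, and the tracer freezes those values as constants, so the exported graph is tied to whatever sizes appeared during tracing. A hypothetical pure-Python sketch of that kind of shape arithmetic (the function name and exact convention are mine, not the library's):

```python
def skip_padding(skip_shape, x_shape):
    """Left/right/top/bottom padding so x matches skip spatially.

    skip_shape, x_shape: (N, C, H, W) tuples of plain Python ints.
    When traced, values like these become baked-in constants, which is
    exactly what the TracerWarnings above are flagging.
    """
    dh = skip_shape[2] - x_shape[2]
    dw = skip_shape[3] - x_shape[3]
    # floor division, matching torch.div(..., rounding_mode='floor')
    top, bottom = dh // 2, dh - dh // 2
    left, right = dw // 2, dw - dw // 2
    return left, right, top, bottom

print(skip_padding((1, 32, 64, 64), (1, 32, 62, 62)))  # (1, 1, 1, 1)
```

Since the export above uses a fixed input size of (1, 1, 256, 128), constants computed from that size are only a problem if the ONNX model is later run on different spatial dimensions.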

Solution found!

It turned out to work after making a deep copy of the model first, unet_model_copy = copy.deepcopy(model), and then exporting the copy with the same call as before. I am not sure why, but it worked.
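One plausible (unconfirmed) explanation for the workaround: copy.deepcopy builds a completely independent object graph, so any mutable state that earlier forward passes or tracing attached to the loaded module stays isolated from the copy that gets exported. A minimal stdlib sketch of that independence (the class is a toy stand-in, not the UNet):

```python
import copy

class TracedModule:
    """Toy stand-in for a module that accumulates state when used."""
    def __init__(self):
        self.trace_cache = []            # mutable state left by earlier runs

    def forward(self, x):
        self.trace_cache.append(x)       # e.g. a hook recording inputs
        return x * 2

m = TracedModule()
m.forward(3)                             # earlier use mutates the module
fresh = copy.deepcopy(m)                 # fully independent copy
fresh.trace_cache.clear()                # clearing it does not touch m

print(len(m.trace_cache), len(fresh.trace_cache))  # 1 0
```

If the deepcopy trick had not worked, another common route is exporting from a freshly constructed model with reloaded weights (model.load_state_dict(...)) rather than an unpickled module object.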