
Upsampling and the various inverse (reverse) operations in PyTorch, explained

2020-05-11 09:37  一只tobey  Python

This post walks through upsampling and the various inverse (reverse) operations in PyTorch. It should make a good reference; hopefully it helps. Let's take a look.

import torch.nn.functional as F
import torch.nn as nn

F.upsample(input, size=None, scale_factor=None, mode='nearest', align_corners=None)

r"""Upsamples the input to either the given :attr:`size` or the given
:attr:`scale_factor`
The algorithm used for upsampling is determined by :attr:`mode`.
Currently temporal, spatial and volumetric upsampling are supported, i.e.
expected inputs are 3-D, 4-D or 5-D in shape.
The input dimensions are interpreted in the form:
`mini-batch x channels x [optional depth] x [optional height] x width`.
The modes available for upsampling are: `nearest`, `linear` (3D-only),
`bilinear` (4D-only), `trilinear` (5D-only)
Args:
  input (Tensor): the input tensor
  size (int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int]):
    output spatial size.
  scale_factor (int): multiplier for spatial size. Has to be an integer.
  mode (string): algorithm used for upsampling:
    'nearest' | 'linear' | 'bilinear' | 'trilinear'. Default: 'nearest'
  align_corners (bool, optional): if True, the corner pixels of the input
    and output tensors are aligned, and thus preserving the values at
    those pixels. This only has effect when :attr:`mode` is `linear`,
    `bilinear`, or `trilinear`. Default: False
.. warning::
  With ``align_corners = True``, the linearly interpolating modes
  (`linear`, `bilinear`, and `trilinear`) don't proportionally align the
  output and input pixels, and thus the output values can depend on the
  input size. This was the default behavior for these modes up to version
  0.3.1. Since then, the default behavior is ``align_corners = False``.
  See :class:`~torch.nn.Upsample` for concrete examples on how this
  affects the outputs.
"""

nn.ConvTranspose2d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1)

"""
Parameters:
  in_channels (int) – Number of channels in the input image
  out_channels (int) – Number of channels produced by the convolution
  kernel_size (int or tuple) – Size of the convolving kernel
  stride (int or tuple, optional) – Stride of the convolution. Default: 1
  padding (int or tuple, optional) – kernel_size - 1 - padding zero-padding will be added to both sides of each dimension in the input. Default: 0
  output_padding (int or tuple, optional) – Additional size added to one side of each dimension in the output shape. Default: 0
  groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
  bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
  dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
"""

Output size computation (with the default dilation = 1):

H_out = (H_in - 1) * stride[0] - 2 * padding[0] + kernel_size[0] + output_padding[0]
W_out = (W_in - 1) * stride[1] - 2 * padding[1] + kernel_size[1] + output_padding[1]
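A quick sketch to check the formula; the channel counts and input size below are made up for illustration.

import torch
import torch.nn as nn

# kernel_size=3, stride=2, padding=1, output_padding=1 exactly doubles
# the spatial size: (14 - 1) * 2 - 2 * 1 + 3 + 1 = 28
deconv = nn.ConvTranspose2d(in_channels=16, out_channels=8,
                            kernel_size=3, stride=2, padding=1, output_padding=1)

x = torch.randn(1, 16, 14, 14)    # H_in = W_in = 14
y = deconv(x)
print(y.shape)                    # torch.Size([1, 8, 28, 28])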

Definition: nn.MaxUnpool2d(kernel_size, stride=None, padding=0)

Forward call:

def forward(self, input, indices, output_size=None):
  return F.max_unpool2d(input, indices, self.kernel_size, self.stride,
             self.padding, output_size)
r"""Computes a partial inverse of :class:`MaxPool2d`.
:class:`MaxPool2d` is not fully invertible, since the non-maximal values are lost.
:class:`MaxUnpool2d` takes in as input the output of :class:`MaxPool2d`
including the indices of the maximal values and computes a partial inverse
in which all non-maximal values are set to zero.
.. note:: `MaxPool2d` can map several input sizes to the same output sizes.
     Hence, the inversion process can get ambiguous.
     To accommodate this, you can provide the needed output size
     as an additional argument `output_size` in the forward call.
     See the Inputs and Example below.
Args:
  kernel_size (int or tuple): Size of the max pooling window.
  stride (int or tuple): Stride of the max pooling window.
    It is set to ``kernel_size`` by default.
  padding (int or tuple): Padding that was added to the input
Inputs:
  - `input`: the input Tensor to invert
  - `indices`: the indices given out by `MaxPool2d`
  - `output_size` (optional) : a `torch.Size` that specifies the targeted output size
Shape:
  - Input: :math:`(N, C, H_{in}, W_{in})`
  - Output: :math:`(N, C, H_{out}, W_{out})` where
    :math:`H_{out} = (H_{in} - 1) \times \text{stride}[0] - 2 \times \text{padding}[0] + \text{kernel\_size}[0]`
    :math:`W_{out} = (W_{in} - 1) \times \text{stride}[1] - 2 \times \text{padding}[1] + \text{kernel\_size}[1]`
Example: see the sketch below
"""

[Figure in the original post: MaxUnpool2d shape formula and example]
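A minimal sketch of the module pair in practice: pooling with return_indices=True keeps the argmax locations, and MaxUnpool2d puts the maxima back while zeroing every non-maximal position. The 4x4 input is only illustrative.

import torch
import torch.nn as nn

pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)

x = torch.tensor([[[[ 1.,  2.,  3.,  4.],
                    [ 5.,  6.,  7.,  8.],
                    [ 9., 10., 11., 12.],
                    [13., 14., 15., 16.]]]])

pooled, indices = pool(x)           # pooled: (1, 1, 2, 2), plus the argmax indices
restored = unpool(pooled, indices)  # back to (1, 1, 4, 4); non-maxima are zero
print(restored)
# tensor([[[[ 0.,  0.,  0.,  0.],
#           [ 0.,  6.,  0.,  8.],
#           [ 0.,  0.,  0.,  0.],
#           [ 0., 14.,  0., 16.]]]])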

F.max_unpool2d(input, indices, kernel_size, stride=None, padding=0, output_size=None)

Its usage is the same as the module version above.

def max_unpool2d(input, indices, kernel_size, stride=None, padding=0,
         output_size=None):
  r"""Computes a partial inverse of :class:`MaxPool2d`.
  See :class:`~torch.nn.MaxUnpool2d` for details.
  """
  pass
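The stub above only shows the signature; the functional form behaves just like the module. A short sketch pairing it with F.max_pool2d follows; the output_size argument is optional and resolves the ambiguity noted in the docstring.

import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 4, 4)
pooled, indices = F.max_pool2d(x, kernel_size=2, stride=2, return_indices=True)

# Passing output_size pins down the exact inverse size when several input
# sizes could have produced the same pooled size
restored = F.max_unpool2d(pooled, indices, kernel_size=2, stride=2,
                          output_size=x.size())
print(restored.shape)    # torch.Size([1, 1, 4, 4])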

That is everything in this post on upsampling and the various inverse (reverse) operations in PyTorch. I hope it gives you a useful reference, and thank you for supporting 服務器之家.

Original article: https://blog.csdn.net/zz2230633069/article/details/83279626
