
【TVM Tutorial】How to Use TVM Pass Instrument

Published on 2025-06-16 17:26

Apache TVM is an end-to-end deep learning compiler framework for CPUs, GPUs, and a variety of machine learning accelerators. More TVM documentation in Chinese is available at https://tvm.hyper.ai/

Author: Chi-Wei Wang

As more and more passes are implemented, it becomes increasingly important to instrument pass execution, analyze the effect of each pass, and observe various events.

Passes can be instrumented by providing a list of tvm.ir.instrument.PassInstrument instances to tvm.transform.PassContext. TVM ships a built-in pass instrument for collecting timing information (tvm.ir.instrument.PassTimingInstrument), and custom instruments can be created through the extension mechanism via the tvm.instrument.pass_instrument() decorator.

This tutorial demonstrates how developers can use PassContext to instrument passes. See also the Pass Infrastructure documentation.
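Before touching TVM, it may help to see the instrument protocol in isolation. The sketch below is plain Python with no TVM dependency; RecordingInstrument and run_with_instrument are illustrative names, not TVM APIs. It only mimics the order in which a PassContext drives an instrument's hooks around each pass, which matches the call sequence printed later in this tutorial:

```python
class RecordingInstrument:
    """Illustrative stand-in for a pass_instrument-decorated class.

    Records which hooks fire, and in what order (not a TVM API).
    """

    def __init__(self):
        self.events = []

    def enter_pass_ctx(self):
        self.events.append("enter_pass_ctx")

    def should_run(self, mod, info):
        self.events.append("should_run")
        return True

    def run_before_pass(self, mod, info):
        self.events.append("run_before_pass")

    def run_after_pass(self, mod, info):
        self.events.append("run_after_pass")

    def exit_pass_ctx(self):
        self.events.append("exit_pass_ctx")


def run_with_instrument(inst, passes, mod):
    """Simplified model of how a PassContext drives instrument hooks."""
    inst.enter_pass_ctx()
    try:
        for run_pass, name in passes:
            if inst.should_run(mod, name):
                inst.run_before_pass(mod, name)
                mod = run_pass(mod)
                inst.run_after_pass(mod, name)
    finally:
        # exit_pass_ctx runs even if a pass (or hook) raises.
        inst.exit_pass_ctx()
    return mod


inst = RecordingInstrument()
run_with_instrument(inst, [(lambda m: m, "Identity")], {"main": None})
print(inst.events)
# ['enter_pass_ctx', 'should_run', 'run_before_pass', 'run_after_pass', 'exit_pass_ctx']
```

The real PassContext adds more machinery (nesting, multiple instruments, exception handling, which the later sections explore), but the enter/before/after/exit ordering is the same.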

import tvm
import tvm.relay as relay
from tvm.relay.testing import resnet
from tvm.contrib.download import download_testdata
from tvm.relay.build_module import bind_params_by_name
from tvm.ir.instrument import (
    PassTimingInstrument,
    pass_instrument,
)

Create an Example Relay Program

We use the predefined ResNet-18 network in Relay.

batch_size = 1
num_of_image_class = 1000
image_shape = (3, 224, 224)
output_shape = (batch_size, num_of_image_class)
relay_mod, relay_params = resnet.get_workload(num_layers=18, batch_size=batch_size, image_shape=image_shape)
print("Printing the IR module...")
print(relay_mod.astext(show_meta_data=False))

Output:

Printing the IR module...
#[version = "0.0.5"]
def @main(%data: Tensor[(1, 3, 224, 224), float32] /* ty=Tensor[(1, 3, 224, 224), float32] */, %bn_data_gamma: Tensor[(3), float32] /* ty=Tensor[(3), float32] */, %bn_data_beta: Tensor[(3), float32] /* ty=Tensor[(3), float32] */, %bn_data_moving_mean: Tensor[(3), float32] /* ty=Tensor[(3), float32] */, %bn_data_moving_var: Tensor[(3), float32] /* ty=Tensor[(3), float32] */, %conv0_weight: Tensor[(64, 3, 7, 7), float32] /* ty=Tensor[(64, 3, 7, 7), float32] */, %bn0_gamma: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %bn0_beta: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %bn0_moving_mean: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %bn0_moving_var: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit1_bn1_gamma: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit1_bn1_beta: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit1_bn1_moving_mean: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit1_bn1_moving_var: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit1_conv1_weight: Tensor[(64, 64, 3, 3), float32] /* ty=Tensor[(64, 64, 3, 3), float32] */, %stage1_unit1_bn2_gamma: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit1_bn2_beta: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit1_bn2_moving_mean: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit1_bn2_moving_var: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit1_conv2_weight: Tensor[(64, 64, 3, 3), float32] /* ty=Tensor[(64, 64, 3, 3), float32] */, %stage1_unit1_sc_weight: Tensor[(64, 64, 1, 1), float32] /* ty=Tensor[(64, 64, 1, 1), float32] */, %stage1_unit2_bn1_gamma: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit2_bn1_beta: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit2_bn1_moving_mean: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit2_bn1_moving_var: Tensor[(64), float32] /* 
ty=Tensor[(64), float32] */, %stage1_unit2_conv1_weight: Tensor[(64, 64, 3, 3), float32] /* ty=Tensor[(64, 64, 3, 3), float32] */, %stage1_unit2_bn2_gamma: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit2_bn2_beta: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit2_bn2_moving_mean: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit2_bn2_moving_var: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit2_conv2_weight: Tensor[(64, 64, 3, 3), float32] /* ty=Tensor[(64, 64, 3, 3), float32] */, %stage2_unit1_bn1_gamma: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage2_unit1_bn1_beta: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage2_unit1_bn1_moving_mean: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage2_unit1_bn1_moving_var: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage2_unit1_conv1_weight: Tensor[(128, 64, 3, 3), float32] /* ty=Tensor[(128, 64, 3, 3), float32] */, %stage2_unit1_bn2_gamma: Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage2_unit1_bn2_beta: Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage2_unit1_bn2_moving_mean: Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage2_unit1_bn2_moving_var: Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage2_unit1_conv2_weight: Tensor[(128, 128, 3, 3), float32] /* ty=Tensor[(128, 128, 3, 3), float32] */, %stage2_unit1_sc_weight: Tensor[(128, 64, 1, 1), float32] /* ty=Tensor[(128, 64, 1, 1), float32] */, %stage2_unit2_bn1_gamma: Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage2_unit2_bn1_beta: Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage2_unit2_bn1_moving_mean: Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage2_unit2_bn1_moving_var: Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage2_unit2_conv1_weight: Tensor[(128, 128, 3, 3), float32] /* ty=Tensor[(128, 128, 3, 3), float32] */, %stage2_unit2_bn2_gamma: 
Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage2_unit2_bn2_beta: Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage2_unit2_bn2_moving_mean: Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage2_unit2_bn2_moving_var: Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage2_unit2_conv2_weight: Tensor[(128, 128, 3, 3), float32] /* ty=Tensor[(128, 128, 3, 3), float32] */, %stage3_unit1_bn1_gamma: Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage3_unit1_bn1_beta: Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage3_unit1_bn1_moving_mean: Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage3_unit1_bn1_moving_var: Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage3_unit1_conv1_weight: Tensor[(256, 128, 3, 3), float32] /* ty=Tensor[(256, 128, 3, 3), float32] */, %stage3_unit1_bn2_gamma: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, %stage3_unit1_bn2_beta: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, %stage3_unit1_bn2_moving_mean: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, %stage3_unit1_bn2_moving_var: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, %stage3_unit1_conv2_weight: Tensor[(256, 256, 3, 3), float32] /* ty=Tensor[(256, 256, 3, 3), float32] */, %stage3_unit1_sc_weight: Tensor[(256, 128, 1, 1), float32] /* ty=Tensor[(256, 128, 1, 1), float32] */, %stage3_unit2_bn1_gamma: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, %stage3_unit2_bn1_beta: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, %stage3_unit2_bn1_moving_mean: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, %stage3_unit2_bn1_moving_var: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, %stage3_unit2_conv1_weight: Tensor[(256, 256, 3, 3), float32] /* ty=Tensor[(256, 256, 3, 3), float32] */, %stage3_unit2_bn2_gamma: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, %stage3_unit2_bn2_beta: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, 
%stage3_unit2_bn2_moving_mean: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, %stage3_unit2_bn2_moving_var: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, %stage3_unit2_conv2_weight: Tensor[(256, 256, 3, 3), float32] /* ty=Tensor[(256, 256, 3, 3), float32] */, %stage4_unit1_bn1_gamma: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, %stage4_unit1_bn1_beta: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, %stage4_unit1_bn1_moving_mean: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, %stage4_unit1_bn1_moving_var: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, %stage4_unit1_conv1_weight: Tensor[(512, 256, 3, 3), float32] /* ty=Tensor[(512, 256, 3, 3), float32] */, %stage4_unit1_bn2_gamma: Tensor[(512), float32] /* ty=Tensor[(512), float32] */, %stage4_unit1_bn2_beta: Tensor[(512), float32] /* ty=Tensor[(512), float32] */, %stage4_unit1_bn2_moving_mean: Tensor[(512), float32] /* ty=Tensor[(512), float32] */, %stage4_unit1_bn2_moving_var: Tensor[(512), float32] /* ty=Tensor[(512), float32] */, %stage4_unit1_conv2_weight: Tensor[(512, 512, 3, 3), float32] /* ty=Tensor[(512, 512, 3, 3), float32] */, %stage4_unit1_sc_weight: Tensor[(512, 256, 1, 1), float32] /* ty=Tensor[(512, 256, 1, 1), float32] */, %stage4_unit2_bn1_gamma: Tensor[(512), float32] /* ty=Tensor[(512), float32] */, %stage4_unit2_bn1_beta: Tensor[(512), float32] /* ty=Tensor[(512), float32] */, %stage4_unit2_bn1_moving_mean: Tensor[(512), float32] /* ty=Tensor[(512), float32] */, %stage4_unit2_bn1_moving_var: Tensor[(512), float32] /* ty=Tensor[(512), float32] */, %stage4_unit2_conv1_weight: Tensor[(512, 512, 3, 3), float32] /* ty=Tensor[(512, 512, 3, 3), float32] */, %stage4_unit2_bn2_gamma: Tensor[(512), float32] /* ty=Tensor[(512), float32] */, %stage4_unit2_bn2_beta: Tensor[(512), float32] /* ty=Tensor[(512), float32] */, %stage4_unit2_bn2_moving_mean: Tensor[(512), float32] /* ty=Tensor[(512), float32] */, %stage4_unit2_bn2_moving_var: Tensor[(512), 
float32] /* ty=Tensor[(512), float32] */, %stage4_unit2_conv2_weight: Tensor[(512, 512, 3, 3), float32] /* ty=Tensor[(512, 512, 3, 3), float32] */, %bn1_gamma: Tensor[(512), float32] /* ty=Tensor[(512), float32] */, %bn1_beta: Tensor[(512), float32] /* ty=Tensor[(512), float32] */, %bn1_moving_mean: Tensor[(512), float32] /* ty=Tensor[(512), float32] */, %bn1_moving_var: Tensor[(512), float32] /* ty=Tensor[(512), float32] */, %fc1_weight: Tensor[(1000, 512), float32] /* ty=Tensor[(1000, 512), float32] */, %fc1_bias: Tensor[(1000), float32] /* ty=Tensor[(1000), float32] */) -> Tensor[(1, 1000), float32] {
  %0 = nn.batch_norm(%data, %bn_data_gamma, %bn_data_beta, %bn_data_moving_mean, %bn_data_moving_var, epsilon=2e-05f, scale=False) /* ty=(Tensor[(1, 3, 224, 224), float32], Tensor[(3), float32], Tensor[(3), float32]) */;
  %1 = %0.0 /* ty=Tensor[(1, 3, 224, 224), float32] */;
  %2 = nn.conv2d(%1, %conv0_weight, strides=[2, 2], padding=[3, 3, 3, 3], channels=64, kernel_size=[7, 7]) /* ty=Tensor[(1, 64, 112, 112), float32] */;
  %3 = nn.batch_norm(%2, %bn0_gamma, %bn0_beta, %bn0_moving_mean, %bn0_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 64, 112, 112), float32], Tensor[(64), float32], Tensor[(64), float32]) */;
  %4 = %3.0 /* ty=Tensor[(1, 64, 112, 112), float32] */;
  %5 = nn.relu(%4) /* ty=Tensor[(1, 64, 112, 112), float32] */;
  %6 = nn.max_pool2d(%5, pool_size=[3, 3], strides=[2, 2], padding=[1, 1, 1, 1]) /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %7 = nn.batch_norm(%6, %stage1_unit1_bn1_gamma, %stage1_unit1_bn1_beta, %stage1_unit1_bn1_moving_mean, %stage1_unit1_bn1_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 64, 56, 56), float32], Tensor[(64), float32], Tensor[(64), float32]) */;
  %8 = %7.0 /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %9 = nn.relu(%8) /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %10 = nn.conv2d(%9, %stage1_unit1_conv1_weight, padding=[1, 1, 1, 1], channels=64, kernel_size=[3, 3]) /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %11 = nn.batch_norm(%10, %stage1_unit1_bn2_gamma, %stage1_unit1_bn2_beta, %stage1_unit1_bn2_moving_mean, %stage1_unit1_bn2_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 64, 56, 56), float32], Tensor[(64), float32], Tensor[(64), float32]) */;
  %12 = %11.0 /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %13 = nn.relu(%12) /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %14 = nn.conv2d(%13, %stage1_unit1_conv2_weight, padding=[1, 1, 1, 1], channels=64, kernel_size=[3, 3]) /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %15 = nn.conv2d(%9, %stage1_unit1_sc_weight, padding=[0, 0, 0, 0], channels=64, kernel_size=[1, 1]) /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %16 = add(%14, %15) /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %17 = nn.batch_norm(%16, %stage1_unit2_bn1_gamma, %stage1_unit2_bn1_beta, %stage1_unit2_bn1_moving_mean, %stage1_unit2_bn1_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 64, 56, 56), float32], Tensor[(64), float32], Tensor[(64), float32]) */;
  %18 = %17.0 /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %19 = nn.relu(%18) /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %20 = nn.conv2d(%19, %stage1_unit2_conv1_weight, padding=[1, 1, 1, 1], channels=64, kernel_size=[3, 3]) /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %21 = nn.batch_norm(%20, %stage1_unit2_bn2_gamma, %stage1_unit2_bn2_beta, %stage1_unit2_bn2_moving_mean, %stage1_unit2_bn2_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 64, 56, 56), float32], Tensor[(64), float32], Tensor[(64), float32]) */;
  %22 = %21.0 /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %23 = nn.relu(%22) /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %24 = nn.conv2d(%23, %stage1_unit2_conv2_weight, padding=[1, 1, 1, 1], channels=64, kernel_size=[3, 3]) /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %25 = add(%24, %16) /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %26 = nn.batch_norm(%25, %stage2_unit1_bn1_gamma, %stage2_unit1_bn1_beta, %stage2_unit1_bn1_moving_mean, %stage2_unit1_bn1_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 64, 56, 56), float32], Tensor[(64), float32], Tensor[(64), float32]) */;
  %27 = %26.0 /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %28 = nn.relu(%27) /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %29 = nn.conv2d(%28, %stage2_unit1_conv1_weight, strides=[2, 2], padding=[1, 1, 1, 1], channels=128, kernel_size=[3, 3]) /* ty=Tensor[(1, 128, 28, 28), float32] */;
  %30 = nn.batch_norm(%29, %stage2_unit1_bn2_gamma, %stage2_unit1_bn2_beta, %stage2_unit1_bn2_moving_mean, %stage2_unit1_bn2_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 128, 28, 28), float32], Tensor[(128), float32], Tensor[(128), float32]) */;
  %31 = %30.0 /* ty=Tensor[(1, 128, 28, 28), float32] */;
  %32 = nn.relu(%31) /* ty=Tensor[(1, 128, 28, 28), float32] */;
  %33 = nn.conv2d(%32, %stage2_unit1_conv2_weight, padding=[1, 1, 1, 1], channels=128, kernel_size=[3, 3]) /* ty=Tensor[(1, 128, 28, 28), float32] */;
  %34 = nn.conv2d(%28, %stage2_unit1_sc_weight, strides=[2, 2], padding=[0, 0, 0, 0], channels=128, kernel_size=[1, 1]) /* ty=Tensor[(1, 128, 28, 28), float32] */;
  %35 = add(%33, %34) /* ty=Tensor[(1, 128, 28, 28), float32] */;
  %36 = nn.batch_norm(%35, %stage2_unit2_bn1_gamma, %stage2_unit2_bn1_beta, %stage2_unit2_bn1_moving_mean, %stage2_unit2_bn1_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 128, 28, 28), float32], Tensor[(128), float32], Tensor[(128), float32]) */;
  %37 = %36.0 /* ty=Tensor[(1, 128, 28, 28), float32] */;
  %38 = nn.relu(%37) /* ty=Tensor[(1, 128, 28, 28), float32] */;
  %39 = nn.conv2d(%38, %stage2_unit2_conv1_weight, padding=[1, 1, 1, 1], channels=128, kernel_size=[3, 3]) /* ty=Tensor[(1, 128, 28, 28), float32] */;
  %40 = nn.batch_norm(%39, %stage2_unit2_bn2_gamma, %stage2_unit2_bn2_beta, %stage2_unit2_bn2_moving_mean, %stage2_unit2_bn2_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 128, 28, 28), float32], Tensor[(128), float32], Tensor[(128), float32]) */;
  %41 = %40.0 /* ty=Tensor[(1, 128, 28, 28), float32] */;
  %42 = nn.relu(%41) /* ty=Tensor[(1, 128, 28, 28), float32] */;
  %43 = nn.conv2d(%42, %stage2_unit2_conv2_weight, padding=[1, 1, 1, 1], channels=128, kernel_size=[3, 3]) /* ty=Tensor[(1, 128, 28, 28), float32] */;
  %44 = add(%43, %35) /* ty=Tensor[(1, 128, 28, 28), float32] */;
  %45 = nn.batch_norm(%44, %stage3_unit1_bn1_gamma, %stage3_unit1_bn1_beta, %stage3_unit1_bn1_moving_mean, %stage3_unit1_bn1_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 128, 28, 28), float32], Tensor[(128), float32], Tensor[(128), float32]) */;
  %46 = %45.0 /* ty=Tensor[(1, 128, 28, 28), float32] */;
  %47 = nn.relu(%46) /* ty=Tensor[(1, 128, 28, 28), float32] */;
  %48 = nn.conv2d(%47, %stage3_unit1_conv1_weight, strides=[2, 2], padding=[1, 1, 1, 1], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 14, 14), float32] */;
  %49 = nn.batch_norm(%48, %stage3_unit1_bn2_gamma, %stage3_unit1_bn2_beta, %stage3_unit1_bn2_moving_mean, %stage3_unit1_bn2_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 256, 14, 14), float32], Tensor[(256), float32], Tensor[(256), float32]) */;
  %50 = %49.0 /* ty=Tensor[(1, 256, 14, 14), float32] */;
  %51 = nn.relu(%50) /* ty=Tensor[(1, 256, 14, 14), float32] */;
  %52 = nn.conv2d(%51, %stage3_unit1_conv2_weight, padding=[1, 1, 1, 1], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 14, 14), float32] */;
  %53 = nn.conv2d(%47, %stage3_unit1_sc_weight, strides=[2, 2], padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 14, 14), float32] */;
  %54 = add(%52, %53) /* ty=Tensor[(1, 256, 14, 14), float32] */;
  %55 = nn.batch_norm(%54, %stage3_unit2_bn1_gamma, %stage3_unit2_bn1_beta, %stage3_unit2_bn1_moving_mean, %stage3_unit2_bn1_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 256, 14, 14), float32], Tensor[(256), float32], Tensor[(256), float32]) */;
  %56 = %55.0 /* ty=Tensor[(1, 256, 14, 14), float32] */;
  %57 = nn.relu(%56) /* ty=Tensor[(1, 256, 14, 14), float32] */;
  %58 = nn.conv2d(%57, %stage3_unit2_conv1_weight, padding=[1, 1, 1, 1], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 14, 14), float32] */;
  %59 = nn.batch_norm(%58, %stage3_unit2_bn2_gamma, %stage3_unit2_bn2_beta, %stage3_unit2_bn2_moving_mean, %stage3_unit2_bn2_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 256, 14, 14), float32], Tensor[(256), float32], Tensor[(256), float32]) */;
  %60 = %59.0 /* ty=Tensor[(1, 256, 14, 14), float32] */;
  %61 = nn.relu(%60) /* ty=Tensor[(1, 256, 14, 14), float32] */;
  %62 = nn.conv2d(%61, %stage3_unit2_conv2_weight, padding=[1, 1, 1, 1], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 14, 14), float32] */;
  %63 = add(%62, %54) /* ty=Tensor[(1, 256, 14, 14), float32] */;
  %64 = nn.batch_norm(%63, %stage4_unit1_bn1_gamma, %stage4_unit1_bn1_beta, %stage4_unit1_bn1_moving_mean, %stage4_unit1_bn1_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 256, 14, 14), float32], Tensor[(256), float32], Tensor[(256), float32]) */;
  %65 = %64.0 /* ty=Tensor[(1, 256, 14, 14), float32] */;
  %66 = nn.relu(%65) /* ty=Tensor[(1, 256, 14, 14), float32] */;
  %67 = nn.conv2d(%66, %stage4_unit1_conv1_weight, strides=[2, 2], padding=[1, 1, 1, 1], channels=512, kernel_size=[3, 3]) /* ty=Tensor[(1, 512, 7, 7), float32] */;
  %68 = nn.batch_norm(%67, %stage4_unit1_bn2_gamma, %stage4_unit1_bn2_beta, %stage4_unit1_bn2_moving_mean, %stage4_unit1_bn2_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 512, 7, 7), float32], Tensor[(512), float32], Tensor[(512), float32]) */;
  %69 = %68.0 /* ty=Tensor[(1, 512, 7, 7), float32] */;
  %70 = nn.relu(%69) /* ty=Tensor[(1, 512, 7, 7), float32] */;
  %71 = nn.conv2d(%70, %stage4_unit1_conv2_weight, padding=[1, 1, 1, 1], channels=512, kernel_size=[3, 3]) /* ty=Tensor[(1, 512, 7, 7), float32] */;
  %72 = nn.conv2d(%66, %stage4_unit1_sc_weight, strides=[2, 2], padding=[0, 0, 0, 0], channels=512, kernel_size=[1, 1]) /* ty=Tensor[(1, 512, 7, 7), float32] */;
  %73 = add(%71, %72) /* ty=Tensor[(1, 512, 7, 7), float32] */;
  %74 = nn.batch_norm(%73, %stage4_unit2_bn1_gamma, %stage4_unit2_bn1_beta, %stage4_unit2_bn1_moving_mean, %stage4_unit2_bn1_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 512, 7, 7), float32], Tensor[(512), float32], Tensor[(512), float32]) */;
  %75 = %74.0 /* ty=Tensor[(1, 512, 7, 7), float32] */;
  %76 = nn.relu(%75) /* ty=Tensor[(1, 512, 7, 7), float32] */;
  %77 = nn.conv2d(%76, %stage4_unit2_conv1_weight, padding=[1, 1, 1, 1], channels=512, kernel_size=[3, 3]) /* ty=Tensor[(1, 512, 7, 7), float32] */;
  %78 = nn.batch_norm(%77, %stage4_unit2_bn2_gamma, %stage4_unit2_bn2_beta, %stage4_unit2_bn2_moving_mean, %stage4_unit2_bn2_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 512, 7, 7), float32], Tensor[(512), float32], Tensor[(512), float32]) */;
  %79 = %78.0 /* ty=Tensor[(1, 512, 7, 7), float32] */;
  %80 = nn.relu(%79) /* ty=Tensor[(1, 512, 7, 7), float32] */;
  %81 = nn.conv2d(%80, %stage4_unit2_conv2_weight, padding=[1, 1, 1, 1], channels=512, kernel_size=[3, 3]) /* ty=Tensor[(1, 512, 7, 7), float32] */;
  %82 = add(%81, %73) /* ty=Tensor[(1, 512, 7, 7), float32] */;
  %83 = nn.batch_norm(%82, %bn1_gamma, %bn1_beta, %bn1_moving_mean, %bn1_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 512, 7, 7), float32], Tensor[(512), float32], Tensor[(512), float32]) */;
  %84 = %83.0 /* ty=Tensor[(1, 512, 7, 7), float32] */;
  %85 = nn.relu(%84) /* ty=Tensor[(1, 512, 7, 7), float32] */;
  %86 = nn.global_avg_pool2d(%85) /* ty=Tensor[(1, 512, 1, 1), float32] */;
  %87 = nn.batch_flatten(%86) /* ty=Tensor[(1, 512), float32] */;
  %88 = nn.dense(%87, %fc1_weight, units=1000) /* ty=Tensor[(1, 1000), float32] */;
  %89 = nn.bias_add(%88, %fc1_bias, axis=-1) /* ty=Tensor[(1, 1000), float32] */;
  nn.softmax(%89) /* ty=Tensor[(1, 1000), float32] */
}

Create PassContext with Instruments

To run all passes with an instrument, pass it to the PassContext constructor via the instruments argument. PassTimingInstrument is a built-in pass instrument that profiles the execution time of each pass.

timing_inst = PassTimingInstrument()
with tvm.transform.PassContext(instruments=[timing_inst]):
    relay_mod = relay.transform.InferType()(relay_mod)
    relay_mod = relay.transform.FoldScaleAxis()(relay_mod)
    # Get the profile results before exiting the context.
    profiles = timing_inst.render()
print("Printing results of timing profile...")
print(profiles)

Output:

Printing results of timing profile...
InferType: 6628us [6628us] (46.29%; 46.29%)
FoldScaleAxis: 7691us [6us] (53.71%; 53.71%)
        FoldConstant: 7685us [1578us] (53.67%; 99.92%)
                InferType: 6107us [6107us] (42.65%; 79.47%)

Use Current PassContext with Instruments

It is also possible to use the current PassContext and register PassInstrument instances with the override_instruments method. Note that if any instruments already exist, override_instruments calls their exit_pass_ctx method first; it then switches to the new instruments and calls their enter_pass_ctx method. See the sections below and tvm.instrument.pass_instrument() for details on these methods.

cur_pass_ctx = tvm.transform.PassContext.current()
cur_pass_ctx.override_instruments([timing_inst])
relay_mod = relay.transform.InferType()(relay_mod)
relay_mod = relay.transform.FoldScaleAxis()(relay_mod)
profiles = timing_inst.render()
print("Printing results of timing profile...")
print(profiles)

Output:

Printing results of timing profile...
InferType: 6131us [6131us] (44.86%; 44.86%)
FoldScaleAxis: 7536us [4us] (55.14%; 55.14%)
        FoldConstant: 7532us [1549us] (55.11%; 99.94%)
                InferType: 5982us [5982us] (43.77%; 79.43%)

Register an empty list to clear the existing instruments.

Note that exit_pass_ctx of PassTimingInstrument is called here. The profiles are cleared, so nothing will be printed.

cur_pass_ctx.override_instruments([])
# Uncomment the call to .render() below to see a warning like:
# Warning: no passes have been profiled, did you enable pass profiling?
# profiles = timing_inst.render()

Create a Customized Instrument Class

A customized instrument class can be created using the tvm.instrument.pass_instrument() decorator.

Let's create an instrument class that counts the change in the number of occurrences of each operator caused by each pass. We can look up op.name before and after a pass to find the name of each operator, and then compute the difference.

@pass_instrument
class RelayCallNodeDiffer:
    def __init__(self):
        self._op_diff = []
        # Passes can be nested.
        # Use a stack to make sure before/after pairs match up correctly.
        self._op_cnt_before_stack = []

    def enter_pass_ctx(self):
        self._op_diff = []
        self._op_cnt_before_stack = []

    def exit_pass_ctx(self):
        assert len(self._op_cnt_before_stack) == 0, "The stack is not empty. Something is wrong."

    def run_before_pass(self, mod, info):
        self._op_cnt_before_stack.append((info.name, self._count_nodes(mod)))

    def run_after_pass(self, mod, info):
        # Pop the latest recorded pass.
        name_before, op_to_cnt_before = self._op_cnt_before_stack.pop()
        assert name_before == info.name, "name_before: {}, info.name: {} doesn't match".format(
            name_before, info.name
        )
        cur_depth = len(self._op_cnt_before_stack)
        op_to_cnt_after = self._count_nodes(mod)
        op_diff = self._diff(op_to_cnt_after, op_to_cnt_before)
        # Only record passes that caused a difference.
        if op_diff:
            self._op_diff.append((cur_depth, info.name, op_diff))

    def get_pass_to_op_diff(self):
        """
        return [
          (depth, pass_name, {op_name: diff_num, ...}), ...
        ]
        """
        return self._op_diff

    @staticmethod
    def _count_nodes(mod):
        """Count the number of occurrences of each operator in the module"""
        ret = {}

        def visit(node):
            if isinstance(node, relay.expr.Call):
                if hasattr(node.op, "name"):
                    op_name = node.op.name
                else:
                    # Some CallNodes, e.g. relay.Function, have no "name".
                    return
                ret[op_name] = ret.get(op_name, 0) + 1

        relay.analysis.post_order_visit(mod["main"], visit)
        return ret

    @staticmethod
    def _diff(d_after, d_before):
        """Calculate the difference of two dictionary along their keys.
        The result is values in d_after minus values in d_before.
        """
        ret = {}
        key_after, key_before = set(d_after), set(d_before)
        for k in key_before & key_after:
            tmp = d_after[k] - d_before[k]
            if tmp:
                ret[k] = d_after[k] - d_before[k]
        for k in key_after - key_before:
            ret[k] = d_after[k]
        for k in key_before - key_after:
            ret[k] = -d_before[k]
        return ret
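As a quick sanity check of the _diff logic above, here is the same per-key difference computed on two small hand-written operator-count dictionaries. This is plain Python with no TVM required; diff, before, and after are illustrative names mirroring RelayCallNodeDiffer._diff:

```python
def diff(d_after, d_before):
    """Per-key difference d_after - d_before; keys with zero change are dropped.

    Mirrors the RelayCallNodeDiffer._diff static method above.
    """
    ret = {}
    key_after, key_before = set(d_after), set(d_before)
    for k in key_before & key_after:
        if d_after[k] != d_before[k]:
            ret[k] = d_after[k] - d_before[k]
    for k in key_after - key_before:
        ret[k] = d_after[k]           # op introduced by the pass
    for k in key_before - key_after:
        ret[k] = -d_before[k]         # op removed by the pass
    return ret


# Hand-made counts, loosely shaped like a ConvertLayout-style pass:
before = {"nn.conv2d": 20, "nn.bias_add": 1, "add": 8}
after = {"nn.conv2d": 20, "add": 9, "layout_transform": 23}
print(diff(after, before))
# {'add': 1, 'layout_transform': 23, 'nn.bias_add': -1}  (key order may vary)
```

Unchanged ops (nn.conv2d here) are dropped, so a pass that leaves the module untouched yields an empty dict and, in run_after_pass above, is not recorded at all.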

Apply Passes and Multiple Instrument Classes

Multiple instrument classes can be used together in a PassContext. However, note that instrument methods are executed in the order of the instruments argument, so for instruments like PassTimingInstrument, the execution time of other instrument classes' methods is inevitably counted into the final profile result.

call_node_inst = RelayCallNodeDiffer()
desired_layouts = {
    "nn.conv2d": ["NHWC", "HWIO"],
}
pass_seq = tvm.transform.Sequential(
    [
        relay.transform.FoldConstant(),
        relay.transform.ConvertLayout(desired_layouts),
        relay.transform.FoldConstant(),
    ]
)
relay_mod["main"] = bind_params_by_name(relay_mod["main"], relay_params)
# timing_inst is placed after call_node_inst.
# So the execution time of `call_node_inst.run_after_pass()` is counted as well.
with tvm.transform.PassContext(opt_level=3, instruments=[call_node_inst, timing_inst]):
    relay_mod = pass_seq(relay_mod)
    profiles = timing_inst.render()
# Uncomment the next line to see the timing-profile results.
# print(profiles)

Output:

/workspace/python/tvm/driver/build_module.py:268: UserWarning: target_host parameter is going to be deprecated. Please pass in tvm.target.Target(target, host=target_host) instead.
  "target_host parameter is going to be deprecated. "

We can see how many CallNodes of each op type were added or removed.

from pprint import pprint

print("Printing the change in number of occurrences of each operator caused by each pass...")
pprint(call_node_inst.get_pass_to_op_diff())

Output:

Printing the change in number of occurrences of each operator caused by each pass...
[(1, 'CanonicalizeOps', {'add': 1, 'nn.bias_add': -1}),
 (1, 'ConvertLayout', {'expand_dims': 1, 'layout_transform': 23}),
 (1, 'FoldConstant', {'expand_dims': -1, 'layout_transform': -21}),
 (0, 'sequential', {'add': 1, 'layout_transform': 2, 'nn.bias_add': -1})]

Exception Handling

The following demonstrates in detail what happens when a PassInstrument method raises an exception.

Define PassInstrument classes that raise exceptions when entering/exiting the PassContext:

class PassExampleBase:
    def __init__(self, name):
        self._name = name

    def enter_pass_ctx(self):
        print(self._name, "enter_pass_ctx")

    def exit_pass_ctx(self):
        print(self._name, "exit_pass_ctx")

    def should_run(self, mod, info):
        print(self._name, "should_run")
        return True

    def run_before_pass(self, mod, pass_info):
        print(self._name, "run_before_pass")

    def run_after_pass(self, mod, pass_info):
        print(self._name, "run_after_pass")

@pass_instrument
class PassFine(PassExampleBase):
    pass

@pass_instrument
class PassBadEnterCtx(PassExampleBase):
    def enter_pass_ctx(self):
        print(self._name, "bad enter_pass_ctx!!!")
        raise ValueError("{} bad enter_pass_ctx".format(self._name))

@pass_instrument
class PassBadExitCtx(PassExampleBase):
    def exit_pass_ctx(self):
        print(self._name, "bad exit_pass_ctx!!!")
        raise ValueError("{} bad exit_pass_ctx".format(self._name))

If an exception occurs in enter_pass_ctx, PassContext disables pass instrumentation. It then runs exit_pass_ctx on every PassInstrument whose enter_pass_ctx completed successfully.

In the example below, we can see that exit_pass_ctx of PassFine_0 is executed after the exception.

demo_ctx = tvm.transform.PassContext(
    instruments=[
        PassFine("PassFine_0"),
        PassBadEnterCtx("PassBadEnterCtx"),
        PassFine("PassFine_1"),
    ]
)
try:
    with demo_ctx:
        relay_mod = relay.transform.InferType()(relay_mod)
except ValueError as ex:
    print("Catching", str(ex).split("\n")[-1])

Output:

PassFine_0 enter_pass_ctx
PassBadEnterCtx bad enter_pass_ctx!!!
PassFine_0 exit_pass_ctx
Catching ValueError: PassBadEnterCtx bad enter_pass_ctx

The exception inside a PassInstrument instance causes all instruments of the current PassContext to be cleared, so nothing is printed when override_instruments is called.

demo_ctx.override_instruments([])  # No PassFine_0 exit_pass_ctx...etc printed.

If an exception occurs in exit_pass_ctx, pass instrumentation is disabled and the exception is then propagated. That means PassInstrument instances registered after the one that raised do not get their exit_pass_ctx executed.

demo_ctx = tvm.transform.PassContext(
    instruments=[
        PassFine("PassFine_0"),
        PassBadExitCtx("PassBadExitCtx"),
        PassFine("PassFine_1"),
    ]
)
try:
    # PassFine_1 executes enter_pass_ctx, but not exit_pass_ctx.
    with demo_ctx:
        relay_mod = relay.transform.InferType()(relay_mod)
except ValueError as ex:
    print("Catching", str(ex).split("\n")[-1])

Output:

PassFine_0 enter_pass_ctx
PassBadExitCtx enter_pass_ctx
PassFine_1 enter_pass_ctx
PassFine_0 should_run
PassBadExitCtx should_run
PassFine_1 should_run
PassFine_0 run_before_pass
PassBadExitCtx run_before_pass
PassFine_1 run_before_pass
PassFine_0 run_after_pass
PassBadExitCtx run_after_pass
PassFine_1 run_after_pass
PassFine_0 exit_pass_ctx
PassBadExitCtx bad exit_pass_ctx!!!
Catching ValueError: PassBadExitCtx bad exit_pass_ctx

Exceptions raised in should_run, run_before_pass, and run_after_pass are not handled explicitly; we rely on the context manager (the with syntax) to exit the PassContext safely.

We use run_before_pass as an example:

@pass_instrument
class PassBadRunBefore(PassExampleBase):
    def run_before_pass(self, mod, pass_info):
        print(self._name, "bad run_before_pass!!!")
        raise ValueError("{} bad run_before_pass".format(self._name))

demo_ctx = tvm.transform.PassContext(
    instruments=[
        PassFine("PassFine_0"),
        PassBadRunBefore("PassBadRunBefore"),
        PassFine("PassFine_1"),
    ]
)
try:
    # All exit_pass_ctx methods will be called.
    with demo_ctx:
        relay_mod = relay.transform.InferType()(relay_mod)
except ValueError as ex:
    print("Catching", str(ex).split("\n")[-1])

Output:

PassFine_0 enter_pass_ctx
PassBadRunBefore enter_pass_ctx
PassFine_1 enter_pass_ctx
PassFine_0 should_run
PassBadRunBefore should_run
PassFine_1 should_run
PassFine_0 run_before_pass
PassBadRunBefore bad run_before_pass!!!
PassFine_0 exit_pass_ctx
PassBadRunBefore exit_pass_ctx
PassFine_1 exit_pass_ctx
Catching ValueError: PassBadRunBefore bad run_before_pass

Note that pass instrumentation is not disabled in this case. So if override_instruments is called, the exit_pass_ctx of the previously registered PassInstruments is invoked.

demo_ctx.override_instruments([])

Output:

PassFine_0 exit_pass_ctx
PassBadRunBefore exit_pass_ctx
PassFine_1 exit_pass_ctx

If the pass execution is not wrapped with the with syntax, exit_pass_ctx is not called. Let's try this with the current PassContext:

cur_pass_ctx = tvm.transform.PassContext.current()
cur_pass_ctx.override_instruments(
    [
        PassFine("PassFine_0"),
        PassBadRunBefore("PassBadRunBefore"),
        PassFine("PassFine_1"),
    ]
)

Output:

PassFine_0 enter_pass_ctx
PassBadRunBefore enter_pass_ctx
PassFine_1 enter_pass_ctx

Then run the passes. After the exception, exit_pass_ctx is not executed.

try:
    # No ``exit_pass_ctx`` got executed.
    relay_mod = relay.transform.InferType()(relay_mod)
except ValueError as ex:
    print("Catching", str(ex).split("\n")[-1])

Output:

PassFine_0 should_run
PassBadRunBefore should_run
PassFine_1 should_run
PassFine_0 run_before_pass
PassBadRunBefore bad run_before_pass!!!
Catching ValueError: PassBadRunBefore bad run_before_pass
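What happened here reduces to plain control flow: without a with block (or try/finally), any cleanup placed after the failing call is simply never reached. A stdlib-only sketch (the function and names are hypothetical, not TVM's API):

```python
def run_passes_without_context(names, log):
    """Enter instruments, run a failing pass, then (try to) exit them."""
    for n in names:
        log.append(f"{n} enter_pass_ctx")
    # The pass raises, so execution never reaches the cleanup loop below.
    raise ValueError("bad run_before_pass")
    for n in names:  # unreachable without with/finally
        log.append(f"{n} exit_pass_ctx")


log = []
try:
    run_passes_without_context(["PassFine_0", "PassBadRunBefore"], log)
except ValueError as ex:
    log.append(f"Catching ValueError: {ex}")

print(log)  # no exit_pass_ctx entries appear
```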

Clear the instruments.

cur_pass_ctx.override_instruments([])

Output:

PassFine_0 exit_pass_ctx
PassBadRunBefore exit_pass_ctx
PassFine_1 exit_pass_ctx

Download the Python source code: use_pass_instrument.py

Download the Jupyter Notebook: use_pass_instrument.ipynb
