
ByteDance Data Platform: ClickHouse-based Complex Query Implementation and Optimization

Techplur

ClickHouse is today one of the most popular column-oriented database management systems (DBMS). A rising star in the field, it has led a new wave of analytical databases with its impressive performance, executing queries much faster than most comparable systems.

While ClickHouse can manage large volumes of enterprise business data, it is susceptible to query exception problems in complex queries, which may adversely impact the regular operation of the business.

In this article, we invited Mr. Dong Yifeng, a senior research and development engineer of ByteDance, to introduce how his team solves ClickHouse's complex queries. As the largest user of ClickHouse in China, ByteDance has acquired significant technical experience in this versatile database management system.


Project background

ClickHouse's execution model is similar to that of big data engines such as Druid and ES, and its basic query model can be divided into two phases. First, the Coordinator sends the query to the corresponding Worker nodes. Once the Workers have completed their computations, the Coordinator aggregates and processes the data received from them and returns the results.
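As a toy illustration (not ClickHouse internals), the two-phase model amounts to workers producing partial aggregates that a single coordinator then merges; all names here are hypothetical:

```python
# Illustrative sketch of two-stage execution: workers aggregate their
# local shards (stage 1), the coordinator merges the partials (stage 2).
from collections import Counter

def worker_partial_agg(rows):
    """Stage 1: each worker aggregates its local shard."""
    counts = Counter()
    for key, value in rows:
        counts[key] += value
    return counts

def coordinator_merge(partials):
    """Stage 2: the coordinator merges all partial results."""
    total = Counter()
    for partial in partials:
        total.update(partial)
    return dict(total)

shards = [[("a", 1), ("b", 2)], [("a", 3), ("c", 4)]]
partials = [worker_partial_agg(shard) for shard in shards]
result = coordinator_merge(partials)
# result == {"a": 4, "b": 2, "c": 4}
```

The single-point merge in stage 2 is exactly where the Coordinator becomes the bottleneck described below.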

A two-stage execution model is better suited for many common business scenarios today, such as large wide-table queries, where ClickHouse excels. ClickHouse offers simplicity and efficiency; in most cases, simplicity means efficiency. However, as the enterprise business grows, increasingly complex business scenarios present ClickHouse with three different types of challenges.


  • Challenge No.1:

When the first stage returns a large volume of data and the second-stage computation is complex, the Coordinator comes under heavy pressure and can quickly become the query bottleneck. For example, some heavyweight aggregation operators, such as Count Distinct, use hash tables for de-duplication; in the second stage, the Coordinator must merge the hash tables from every Worker on its own host. This computation is very time-consuming and cannot be parallelized.


  • Challenge No.2:

Because the current ClickHouse model does not support Shuffle, the right table of a Join must hold the complete data set. For both regular and Global Joins, when the right table is large and kept in memory, OOM may occur. Spilling the data to disk avoids the memory problem, but query performance suffers from the cost of disk IO and serialization/deserialization. Note also that with Hash Join, a large right table takes longer to build. The community has therefore recently optimized parallel building of the right table: the data is split by Join key so that multiple hash tables can be built simultaneously, although this requires an additional Split operation on both the left and right tables.


  • Challenge No.3:

The third challenge is complex queries (multi-table Joins, nested subqueries, window functions, etc.). ClickHouse does not support such scenarios well because it cannot spread data via Shuffle to increase execution parallelism, and the pipeline it generates may not be fully parallel. As a result, in some scenarios the cluster's full capabilities cannot be utilized.
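The right-table split mentioned under Challenge No.2 — partitioning the build side by Join key so several hash tables are built independently — can be sketched roughly as follows (a toy model, not ClickHouse's implementation):

```python
# Toy sketch: split the right (build) table by join key so multiple hash
# tables can be built independently; probing routes each left row to the
# matching partition. A real engine builds these in parallel threads.
def partition_by_key(rows, num_parts):
    parts = [[] for _ in range(num_parts)]
    for key, value in rows:
        parts[hash(key) % num_parts].append((key, value))
    return parts

def build_hash_tables(right_rows, num_parts):
    tables = [{} for _ in range(num_parts)]
    for i, part in enumerate(partition_by_key(right_rows, num_parts)):
        for key, value in part:
            tables[i].setdefault(key, []).append(value)
    return tables

def probe(left_rows, tables, num_parts):
    out = []
    for key, lval in left_rows:
        for rval in tables[hash(key) % num_parts].get(key, []):
            out.append((key, lval, rval))
    return out

tables = build_hash_tables([(1, "r1"), (2, "r2")], num_parts=4)
joined = probe([(1, "l1"), (3, "l3")], tables, num_parts=4)
# joined == [(1, "l1", "r1")]
```

The extra `partition_by_key` pass on both sides is the additional Split cost the text mentions.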

With the increasing complexity of enterprise business, the need for complex queries, especially multiple rounds of distributed Joins combined with many aggregate computations, grows stronger. Note that not every query needs to follow the pattern ClickHouse favors, namely generating large wide tables through an upstream ETL process; the ETL process itself is costly and may introduce data redundancy.

In this case, since enterprise cluster resources are limited, we hope to fully use machine resources to handle increasingly complex business scenarios and SQL queries. Our objective is to be able to support complex queries efficiently using ClickHouse.


Solutions

We have adopted a multi-stage execution approach for ClickHouse, replacing its existing two-stage execution for complex queries. As in other distributed database engines such as Presto, a complex query is divided into multiple Stages at data-exchange boundaries, and Exchanges move data between the Stages. There are three primary forms of data exchange between Stages:

  • Shuffle: redistribute data across nodes based on one or more keys.

  • Gather: collect data from one or more nodes onto a single node.

  • Broadcast: copy the same data to multiple nodes.
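A toy model of the three exchange forms (hash-based routing is an assumption here; ClickHouse's actual exchange is columnar and asynchronous):

```python
def shuffle(rows, key_fn, num_nodes):
    """Shuffle: route each row to a node chosen from its key."""
    out = [[] for _ in range(num_nodes)]
    for row in rows:
        out[hash(key_fn(row)) % num_nodes].append(row)
    return out

def gather(per_node_rows):
    """Gather: collect rows from all nodes onto a single node."""
    return [row for rows in per_node_rows for row in rows]

def broadcast(rows, num_nodes):
    """Broadcast: copy the same rows to every node."""
    return [list(rows) for _ in range(num_nodes)]

rows = [(1, "a"), (2, "b"), (3, "c")]
parts = shuffle(rows, key_fn=lambda r: r[0], num_nodes=2)
assert sorted(gather(parts)) == sorted(rows)  # nothing lost or duplicated
assert broadcast(rows, 3) == [rows, rows, rows]
```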

When executing a single Stage, we continue to reuse ClickHouse's existing underlying execution. Development is divided into modules by function, and each module defines its interfaces in advance to reduce dependencies and coupling: changing a module or its internal logic does not affect the others. The plug-in architecture also provides flexible configuration, allowing a variety of policies to be implemented for different business scenarios.

Upon receiving a complex query, the Coordinator inserts ExchangeNodes and generates a distributed plan based on the current syntax tree, node types, and data distribution. Afterward, the Coordinator splits the plan at the ExchangeNodes, creating an execution plan for each Stage.

After that, the Coordinator calls the SegmentScheduler to send each Stage's PlanSegment to the Workers. Upon receiving a PlanSegment, the InterpreterPlanSegment reads and executes the data, with data interaction completed through the ExchangeManager. Finally, the Coordinator retrieves the data of the last Stage from the ExchangeManager and returns it to the client.

The SegmentScheduler schedules the different PlanSegments. Based on the scheduling policy, upstream/downstream dependencies, data distribution, Stage parallelism, and Worker distribution and status, each PlanSegment is sent to the appropriate Worker node.

We currently use two main distribution and scheduling strategies.

First, dependency scheduling defines the topology from Stage dependencies, generates a DAG, and schedules Stages following that DAG. For a two-table Join, the Stages that read the left and right tables are scheduled first, and the Join Stage, which depends on them, is scheduled afterward.

The second strategy is the AllAtOnce approach, which calculates the information about each Stage before scheduling them all simultaneously.
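The dependency-scheduling order amounts to a topological sort of the Stage DAG; a minimal sketch using Python's standard library (Stage names are illustrative):

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Stage -> the stages it depends on: the Join stage waits for both
# table-reading stages, and the final aggregation waits for the Join.
deps = {
    "join": {"read_left", "read_right"},
    "final_agg": {"join"},
}
order = list(TopologicalSorter(deps).static_order())
# the two reads come first, then "join", then "final_agg"
```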

These two strategies differ in fault tolerance, resource usage, and latency. The first, dependency scheduling, offers better fault tolerance: ClickHouse data can be stored in multiple replicas, so if some nodes are unreachable, their replica nodes can be tried instead. Subsequent dependent Stages do not need to be aware of how the earlier Stages were executed.

A non-Source Stage does not depend on data placement, so it is even more fault-tolerant, provided enough nodes remain operational to satisfy the Stage's parallelism; in extreme cases, the Stage's parallelism can be reduced to keep the query running. The drawback is that scheduling is dependency-ordered and cannot be fully parallel, which increases scheduling time; with many Stages, the scheduling delay can account for a significant share of the overall SQL latency. This can be mitigated by optimization: Stages without dependencies can be scheduled with maximum parallelism, both across independent Stages and among nodes within the same Stage.

The AllAtOnce strategy greatly reduces scheduling delay through parallelism; to avoid spawning a large number of network IO threads, the thread count can be controlled through asynchronization. Its disadvantage is weaker fault tolerance than dependency scheduling: the Workers of every Stage are fixed before scheduling, and the entire query fails if any Worker hits a connection exception during scheduling.

Another issue is that a Stage scheduled before its upstream data is ready runs longer than necessary. For example, a Final aggregation Stage has no data to work on until Partial aggregation completes; even though we have made optimizations to release idle CPU, the Stage still consumes some resources and occupies execution threads.

ClickHouse nodes normally interact by exchanging SQL between them. After splitting a query into Stages, we need dedicated support for executing an individual PlanSegment. The central role of InterpreterPlanSegment is therefore to accept a serialized PlanSegment and run the entire PlanSegment's logic on a Worker node.

In addition, ByteDance has made functional and performance enhancements, such as supporting a single Stage that handles multiple Joins, reducing the number of Stages and unnecessary transfers so that an entire multi-Join can complete within one Stage. If an exception occurs, it is reported to the SegmentScheduler, which cancels execution on the Workers of the query's other Stages.

The ExchangeManager is the medium for PlanSegment data exchange and balances upstream and downstream data-processing capacity. Overall, our design uses push with queues: the upstream actively pushes data downstream when it is ready, with backpressure support on top.

Throughout the process, both upstream and downstream use queues to optimize sending and reading, each side with its own queue. When the upstream produces data faster than the downstream can consume it, backpressure throttles the upstream's execution speed.

Since push and queuing are used, a relatively unusual situation must be considered: in some cases, the downstream Stage does not need to read all the upstream data. With LIMIT 100, for example, the downstream only needs to read 100 rows, while the upstream may generate a large amount of data. Therefore, once the downstream Stage has read enough data, it should be able to cancel the execution of the upstream Stage and empty the queue.
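The push-with-queue design, backpressure, and early cancellation for LIMIT can be sketched with a bounded queue (a toy model; the real ExchangeManager is asynchronous and columnar):

```python
# Sketch: a bounded queue throttles a fast producer (backpressure), and
# the consumer cancels the producer once it has enough rows (LIMIT-style).
import queue
import threading

def producer(q, stop, n):
    for i in range(n):
        if stop.is_set():
            return
        q.put(i)  # blocks when the queue is full -> backpressure

def consume_limited(q, stop, limit):
    rows = [q.get() for _ in range(limit)]
    stop.set()            # cancel the upstream stage
    while not q.empty():  # empty the queue
        q.get_nowait()
    return rows

q = queue.Queue(maxsize=8)  # small buffer forces backpressure
stop = threading.Event()
t = threading.Thread(target=producer, args=(q, stop, 1_000_000))
t.start()
rows = consume_limited(q, stop, 100)
t.join()
# rows holds exactly the first 100 items; the producer never ran to 1M
```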

The ExchangeManager has further optimization points, such as fine-grained memory control at multiple levels (instance, query, segment, etc.) to avoid OOM. There are also longer-term considerations for scenarios with large data volumes and relaxed latency requirements. The first is to reduce memory usage by spilling data to disk.

The second is to merge small blocks and split large ones to improve transfer efficiency. In the Sort scenario, for example, the network transmission between Partial Sort and Merge Sort must preserve order: if the transmitted data arrives out of order, Merge Sort cannot proceed correctly and the results will be affected.

The third aspect is connection reuse and network optimization. For example, when upstream and downstream are on the same node, the exchange should go through memory as much as possible rather than the network, reducing network overhead and serialization/deserialization costs. In addition, ClickHouse optimizes computation very effectively, so memory bandwidth can become the bottleneck in some instances; for specific ExchangeManager scenarios, zero-copy and similar optimizations can reduce memory copies.

The fourth is exception handling and monitoring. Compared with a single host, abnormalities in the distributed case are more complex and harder to detect. By retrying queries, nodes with short-term high load or transient exceptions can be prevented from affecting the query. Proper monitoring allows problems to be detected and corrected quickly, and enables more targeted optimization.


Optimization and diagnosis

The first is multiple implementations and optimizations of Join. Depending on the size and distribution of the data, an appropriate Join implementation can be chosen.

Currently, the most commonly used method is the Shuffle Join.

Broadcast Join suits large-table-join-small-table scenarios: the right table is broadcast to all Worker nodes holding the left table, which avoids transferring the large left table.

Colocate Join applies when the left and right tables are already distributed by the Join key: no data exchange is needed, so data transfer is reduced to a minimum.

The essence of network connection optimization is to reduce the number of connections established and used. Especially when data must be Shuffled, every node in the next Stage has to pull data from every node in the upstream Stage; clusters with many nodes and complex queries therefore establish a very large number of connections.

At present, ByteDance's ClickHouse clusters are huge. Under two-stage execution, the high concurrency means a single host may establish tens of thousands of connections. Network connection optimization is therefore necessary, in particular connection reuse, where the Stages of multiple queries run over each connection. By reusing connections as much as possible, a fixed number of connections is established between nodes and shared by different Queries and Stages, so the connection count does not grow with the number of Queries and Stages.
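Connection reuse can be modeled as a fixed-size pool per peer, multiplexed across (query, stage) streams — a hypothetical sketch, not ByteDance's implementation:

```python
class ConnectionPool:
    """A fixed number of connections per peer, shared by many streams."""
    def __init__(self, peers, conns_per_peer=2):
        self.conns = {p: [f"{p}#conn{i}" for i in range(conns_per_peer)]
                      for p in peers}

    def acquire(self, peer, query_id, stage_id):
        # Multiplex: deterministically map a (query, stage) stream onto
        # one of the peer's connections instead of opening a new one.
        conns = self.conns[peer]
        return conns[(query_id * 31 + stage_id) % len(conns)]

pool = ConnectionPool(["workerA", "workerB"])
used = {pool.acquire("workerA", q, s) for q in range(100) for s in range(5)}
# 500 streams, but only 2 distinct connections to workerA
```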

Network transfer optimization is another aspect. In a data center, remote direct memory access (RDMA) is a technology that bypasses the kernel of the remote host's operating system to access data in memory. Because it does not go through the operating system, it saves a large amount of CPU resources, increases system throughput, and reduces network communication latency, making it especially suitable for massively parallel computing clusters.

ClickHouse does a significant amount of optimization at the computational level, and network bandwidth is generally smaller than memory bandwidth; in scenarios with considerable data volumes, network transfer can become the bottleneck. To improve network transmission efficiency and data-exchange throughput, compression can be introduced to reduce the amount of transmitted data, and RDMA can be used to cut overhead. Testing shows significant gains in some scenarios with high data-transfer volumes.

Many other databases also use Runtime Filter as an optimization. The Join operator is usually the most time-consuming operator in an OLAP engine, and there are two ways to optimize it. One is to improve the performance of the Join operator itself: for Hash Join, for example, by optimizing the hash-table implementation or using a better, parallelizable hashing algorithm.

The other, when the operator itself is time-consuming and heavyweight, is to reduce the amount of data the operator processes. Runtime Filter fits cases where a fact table joins multiple dimension tables: the fact table is typically massive, while most filtering conditions sit on the dimension tables.

By filtering out, at the Probe side of the Join, input data that cannot satisfy the Join conditions, Runtime Filter significantly reduces data transfer and computation in the Join, and thus overall execution time. For this reason, we also support Runtime Filter in complex queries; currently we mainly support Min, Max, and Bloom Filters.

If the runtime filter's column (the Join column) carries an index (primary key, skip index, ...), the pipeline must be regenerated, because hitting the index may reduce the data read and thereby change the pipeline's parallelism and the range of data each task processes. If the runtime filter's columns are unrelated to any index, the filter condition can be pre-generated at planning time as an empty placeholder, replaced with the actual condition when the runtime filter is delivered. Thus, even if the runtime filter times out and the PlanSegment has already begun executing, subsequent data can still be filtered as long as execution has not ended.

However, Runtime Filter is a targeted optimization: it applies when the right table is not large and the filter is selective on the left table. If the right table is large, building the runtime filter takes longer, or the filter barely reduces the left table's data, adding query time and overhead. It is therefore crucial to decide whether to enable this optimization based on the characteristics and size of the data.
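A rough illustration of a Min/Max-plus-membership runtime filter applied on the probe side (a Python set stands in for the Bloom filter; false positives aside, the filtering logic is the same):

```python
def build_runtime_filter(right_keys):
    """Built from the Join keys of the right (build) side."""
    keys = set(right_keys)
    return {"min": min(keys), "max": max(keys), "set": keys}

def probe_side_filter(left_rows, rf):
    """Drop left-side rows that cannot possibly match the Join."""
    return [row for row in left_rows
            if rf["min"] <= row[0] <= rf["max"] and row[0] in rf["set"]]

rf = build_runtime_filter([2, 3, 5])
kept = probe_side_filter([(1, "a"), (3, "b"), (9, "c")], rf)
# kept == [(3, "b")] -- only rows that can join survive the probe side
```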

Performance diagnosis and analysis are critical for complex queries, since the multi-stage model makes the pattern of SQL execution more complex. Optimization begins with enriching all sorts of Metrics, including query execution time, per-Stage execution time, start and end times, the amount of IO data processed, data processed per operator, execution status, and Profile Events (such as the Runtime Filter's build time, filtered data volume, and other metrics).

Second, we record the backpressure information and the upstream and downstream queue lengths to infer the execution and bottlenecks of the Stage.

It can be generally assumed that:

When the input and output queues are both low or both high, the Stage is either processing normally or being backpressured by its downstream; the backpressure information can determine which.

When the input and output queue lengths differ, this may be an intermediate state of backpressure propagation, or the Stage itself may be the source of backpressure.

A Stage with a large output queue that is frequently backpressured is usually being held up by its downstream; it can be ruled out as the backpressure source, and attention should shift downstream.

A Stage with a low output queue but a high input queue is likely the source of backpressure; optimization then aims to improve that Stage's processing capacity.
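The four heuristics above can be condensed into a small classifier over queue occupancy (thresholds here are illustrative, not ByteDance's actual values):

```python
def classify_stage(input_q, output_q, high=0.8, low=0.2):
    """Infer a stage's role from its input/output queue occupancy (0..1)."""
    if input_q >= high and output_q <= low:
        return "likely backpressure source"
    if output_q >= high:
        return "backpressured by downstream"
    if input_q <= low and output_q <= low:
        return "processing normally"
    return "intermediate state; check backpressure info"
```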

SQL scenarios are diverse, and complex cases sometimes require deep engine knowledge to produce optimization suggestions. ByteDance's current goal is to further improve these experiences and the Metrics-and-analysis pipeline, continuously reducing the Oncall burden and providing more accurate optimization recommendations in certain scenarios.


Perspectives and effectiveness

The current execution model has the three flaws described above, and we have optimized complex queries through the new model. Below, we test whether the new model resolves these problems:

  • The second-stage computation is complex, and the first stage returns a large volume of data.

  • Hash Join with a large right table.

  • Multi-table Join simulating a complex query.

The data set is SSB 1T, and the environment is an 8-node cluster.

Case 1 - The second stage is computationally complex, with the relatively heavyweight operator UniqExact, the default count distinct implementation, which de-duplicates via a hash table. With complex query mode, execution time drops from 8.5 seconds to 2.198 seconds. The second-stage merge of uniqExact aggregate states was originally a single-point merge on the Coordinator; it can now be done in parallel on multiple nodes via a Shuffle by the group-by key, relieving the Coordinator's merge pressure.

Case 2 - The right table is large. Since ClickHouse's multi-table optimization is still at an early stage, a subquery is used here to push down the filter conditions. Lineorder is a large table; in complex query mode, execution time improves from 17 seconds to 1.7 seconds. Because Lineorder is large, its data can be Shuffled to the Worker nodes by Join key, reducing the pressure of building the right table.

Case 3 - Multi-table Join. With complex query mode, execution time improves from 8.58 seconds to 4.464 seconds, since all the right tables can be processed and built simultaneously. Enabling the Runtime Filter here performs even better than the existing schema.

In fact, the optimizer also significantly improves performance for complex queries. RBO rules, such as common predicate push-down and correlated-subquery handling, enhance the efficiency of SQL execution. With the optimizer in complex query mode, users need not perform these rewrites themselves; the optimizer applies them automatically.

Furthermore, the choice of Join implementation significantly affects performance: if the Join key distribution requirement is met, a Colocate Join avoids the cost of shuffling the left and right tables. For multi-table Joins, the Join order and implementation affect execution time far more than in two-table Joins; using statistical information, CBO optimization can find a better execution plan.

Using the optimizer, business departments can write any SQL according to their business logic. The engine automatically calculates the relatively optimal SQL plan for them and executes it, thus speeding up the entire process.

In summary, ClickHouse's current execution model performs well in many single-table scenarios, so we mainly optimized for complex scenarios by implementing a multi-Stage model and data transfer between Stages, and by applying engineering practices to improve execution and network transmission performance. We hope to lower the threshold of SQL analysis and tuning by enhancing Metrics and intelligent diagnosis. The first step has been achieved, but much work remains for ByteDance in the future.

The priority is to continue improving the performance of execution and the Exchange process. The focus is not on general engine optimization (such as index or operator optimization) but on optimizing the complex-query schema. Stage reuse can help in scenarios where SQL results are reused repeatedly, such as multi-table Joins and CTEs. We already support Stage reuse, but it applies in few scenarios today; we aim to make it more flexible and general in the future.

The Metrics and smart diagnosis will also be enhanced. SQL's high flexibility makes it difficult to diagnose and tune some complex queries without Metrics. In the future, ByteDance will continue to make efforts in this direction with its data platform.


Guest Introduction

Mr. Dong Yifeng, senior R&D engineer for the ByteDance data platform. Dong is responsible for ByteDance's enterprise-class experiment platform team and is committed to building the industry's most advanced and valuable experiment platform, transforming A/B testing into infrastructure for business growth. His contributions include creating Libra, a new middle platform for ByteDance that serves more than 500 business lines, including Douyin, TikTok, and Toutiao, and launching products such as DataTester and BytePlus Optimize.

Editor: Pang Guiyu | Source: 51CTO