digraph G {
  0 [labelType="html" label="<br><b>AdaptiveSparkPlan</b><br><br>"];
  1 [labelType="html" label="<b>Execute InsertIntoHadoopFsRelationCommand</b><br><br>task commit time: 1 ms<br>number of written files: 1<br>job commit time: 7 ms<br>number of output rows: 23,336<br>number of dynamic part: 0<br>written output: 508.3 KiB"];
  2 [labelType="html" label="<br><b>WriteFiles</b><br><br>"];
  3 [labelType="html" label="<b>Exchange</b><br><br>shuffle records written: 23,336<br>local merged chunks fetched: 0<br>shuffle write time: 4 ms<br>remote merged bytes read: 0.0 B<br>local merged blocks fetched: 0<br>corrupt merged block chunks: 0<br>remote merged reqs duration: 0 ms<br>remote merged blocks fetched: 0<br>records read: 23,336<br>local bytes read: 1169.7 KiB<br>fetch wait time: 0 ms<br>remote bytes read: 0.0 B<br>merged fetch fallback count: 0<br>local blocks read: 1<br>remote merged chunks fetched: 0<br>remote blocks read: 0<br>data size: 3.9 MiB<br>local merged bytes read: 0.0 B<br>number of partitions: 1<br>remote reqs duration: 0 ms<br>remote bytes read to disk: 0.0 B<br>shuffle bytes written: 1169.7 KiB"];
  subgraph cluster4 {
    isCluster="true";
    label="WholeStageCodegen (1)\n \nduration: 343 ms";
    5 [labelType="html" label="<b>Scan ExistingRDD</b><br><br>number of output rows: 23,336"];
  }
  1->0;
  2->1;
  3->2;
  5->3;
}
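The DAG above describes a single-partition Parquet export: 23,336 rows are scanned from an existing RDD, shuffled into one partition (1169.7 KiB shuffled, 3.9 MiB in-memory data size), and written as a single 508.3 KiB file. What follows is a minimal Java sketch of the kind of caller that could produce such a plan; the class, method, and path names are hypothetical and the real AbsExportExecutor code is not reproduced here.

import java.util.List;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.StructType;

// Illustrative sketch only (hypothetical names), not the actual AbsExportExecutor code.
public class ParquetExportSketch {

    public static void export(SparkSession spark, List<Row> rows, StructType schema, String outputDir) {
        // createDataFrame(...) is what appears as "Scan ExistingRDD"
        // (MapPartitionsRDD ... at createDataFrame at AbsExportExecutor.java:55).
        Dataset<Row> df = spark.createDataFrame(rows, schema);

        df.repartition(1)                  // "Exchange SinglePartition, REPARTITION_BY_NUM"
          .write()
          .format("parquet")               // Parquet file format in the insert command
          .mode(SaveMode.ErrorIfExists)    // matches the ErrorIfExists argument
          .save(outputDir);                // e.g. file:///data/output/export/parquet/<uuid>
    }
}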
AdaptiveSparkPlan isFinalPlan=true
+- Execute InsertIntoHadoopFsRelationCommand file:/data/output/export/parquet/ada463f1-7fe5-4e1f-8f49-e31949ae5745, false, Parquet, [path=file:///data/output/export/parquet/ada463f1-7fe5-4e1f-8f49-e31949ae5745], ErrorIfExists, [N°-Facture, Montant-Facture, Magasin, Magasin-Code, Nb-Article, Vendeur, Date-Facture, N°-Client, Marge]
   +- WriteFiles
      +- Exchange SinglePartition, REPARTITION_BY_NUM, [plan_id=1587]
         +- WholeStageCodegen (1)
            +- Scan ExistingRDD[N°-Facture#9734,Montant-Facture#9735,Magasin#9736,Magasin-Code#9737,Nb-Article#9738,Vendeur#9739,Date-Facture#9740,N°-Client#9741,Marge#9742]
== Physical Plan ==
AdaptiveSparkPlan (9)
+- == Final Plan ==
   Execute InsertIntoHadoopFsRelationCommand (5)
   +- WriteFiles (4)
      +- ShuffleQueryStage (3), Statistics(sizeInBytes=3.9 MiB, rowCount=2.33E+4)
         +- Exchange (2)
            +- * Scan ExistingRDD (1)
+- == Initial Plan ==
   Execute InsertIntoHadoopFsRelationCommand (8)
   +- WriteFiles (7)
      +- Exchange (6)
         +- Scan ExistingRDD (1)
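The split into a Final Plan and an Initial Plan comes from Adaptive Query Execution: the Exchange introduced by the single-partition repartition is materialized as ShuffleQueryStage (3), and the rest of the plan is finalized using its runtime statistics (3.9 MiB, roughly 23.3 thousand rows). Below is a small, self-contained sketch (hypothetical app name, toy data) that prints a numbered operator tree and per-node details in this same formatted-explain layout.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class PlanInspectionSketch {
    public static void main(String[] args) {
        // AQE is enabled by default in recent Spark releases; the config is set
        // here only to make the dependency explicit (illustrative).
        SparkSession spark = SparkSession.builder()
                .appName("plan-inspection")            // hypothetical name
                .master("local[*]")
                .config("spark.sql.adaptive.enabled", "true")
                .getOrCreate();

        Dataset<Row> df = spark.range(1000).toDF("id").repartition(1);

        // Prints "== Physical Plan ==", a numbered operator tree, and per-node
        // Input/Output/Arguments blocks like the ones below. The write command
        // itself (InsertIntoHadoopFsRelationCommand) only appears in the Spark
        // UI's SQL tab once a save is actually executed.
        df.explain("formatted");

        spark.stop();
    }
}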
(1) Scan ExistingRDD [codegen id : 1]
Output [9]: [N°-Facture#9734, Montant-Facture#9735, Magasin#9736, Magasin-Code#9737, Nb-Article#9738, Vendeur#9739, Date-Facture#9740, N°-Client#9741, Marge#9742]
Arguments: [N°-Facture#9734, Montant-Facture#9735, Magasin#9736, Magasin-Code#9737, Nb-Article#9738, Vendeur#9739, Date-Facture#9740, N°-Client#9741, Marge#9742], MapPartitionsRDD[422] at createDataFrame at AbsExportExecutor.java:55, ExistingRDD, UnknownPartitioning(0)

(2) Exchange
Input [9]: [N°-Facture#9734, Montant-Facture#9735, Magasin#9736, Magasin-Code#9737, Nb-Article#9738, Vendeur#9739, Date-Facture#9740, N°-Client#9741, Marge#9742]
Arguments: SinglePartition, REPARTITION_BY_NUM, [plan_id=1587]

(3) ShuffleQueryStage
Output [9]: [N°-Facture#9734, Montant-Facture#9735, Magasin#9736, Magasin-Code#9737, Nb-Article#9738, Vendeur#9739, Date-Facture#9740, N°-Client#9741, Marge#9742]
Arguments: 0

(4) WriteFiles
Input [9]: [N°-Facture#9734, Montant-Facture#9735, Magasin#9736, Magasin-Code#9737, Nb-Article#9738, Vendeur#9739, Date-Facture#9740, N°-Client#9741, Marge#9742]

(5) Execute InsertIntoHadoopFsRelationCommand
Input: []
Arguments: file:/data/output/export/parquet/ada463f1-7fe5-4e1f-8f49-e31949ae5745, false, Parquet, [path=file:///data/output/export/parquet/ada463f1-7fe5-4e1f-8f49-e31949ae5745], ErrorIfExists, [N°-Facture, Montant-Facture, Magasin, Magasin-Code, Nb-Article, Vendeur, Date-Facture, N°-Client, Marge]

(6) Exchange
Input [9]: [N°-Facture#9734, Montant-Facture#9735, Magasin#9736, Magasin-Code#9737, Nb-Article#9738, Vendeur#9739, Date-Facture#9740, N°-Client#9741, Marge#9742]
Arguments: SinglePartition, REPARTITION_BY_NUM, [plan_id=1580]

(7) WriteFiles
Input [9]: [N°-Facture#9734, Montant-Facture#9735, Magasin#9736, Magasin-Code#9737, Nb-Article#9738, Vendeur#9739, Date-Facture#9740, N°-Client#9741, Marge#9742]

(8) Execute InsertIntoHadoopFsRelationCommand
Input: []
Arguments: file:/data/output/export/parquet/ada463f1-7fe5-4e1f-8f49-e31949ae5745, false, Parquet, [path=file:///data/output/export/parquet/ada463f1-7fe5-4e1f-8f49-e31949ae5745], ErrorIfExists, [N°-Facture, Montant-Facture, Magasin, Magasin-Code, Nb-Article, Vendeur, Date-Facture, N°-Client, Marge]

(9) AdaptiveSparkPlan
Output: []
Arguments: isFinalPlan=true
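The per-node details show that the initial and final plans target the same output directory and the same nine columns; only the Exchange's plan_id changes (1580 vs 1587) because AQE re-creates the node when it materializes the shuffle stage. As a quick sanity check after such an export, the written files can be read back; this sketch assumes the job above completed and uses the path from the plan, with the 23,336 rows reported by the write metrics as the expected count.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class ExportReadBackCheck {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("export-read-back")           // hypothetical name
                .master("local[*]")
                .getOrCreate();

        // Path taken from the plan above; adjust to the actual export directory.
        String exportDir = "file:///data/output/export/parquet/ada463f1-7fe5-4e1f-8f49-e31949ae5745";

        Dataset<Row> readBack = spark.read().parquet(exportDir);
        readBack.printSchema();                        // should list the 9 exported columns
        System.out.println(readBack.count());          // expected: 23336 ("number of output rows")

        spark.stop();
    }
}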