Greenplum secrets🎩
A channel about best-practice coding for Greenplum. By @smartyru
Secret 21 (Divide and Conquer, Again)
This secret has its roots in emulating MPP on PostgreSQL 9 on one of the BigData pilot projects at Auchan.
I was tasked with refactoring an old legacy pipeline in which the French side transferred 0.5 billion rows daily from Oracle to FTP as CSV, which was then loaded into PG. The trouble was that the extract was non-incremental, and over time the volume only grew.
Together with DevOps I set up an FDW as a direct Oracle => PG channel, which was already 2-3 times faster, but the SLA of having the data ready for top management by 7 am was still not met. Enough backstory, now to the point.
I wondered: what if we split the data flow from Oracle into subsets, starting a separate shell script on the PG side for each subset to pull its share of the data? I remember well that the idea proved viable, and after tuning, 16 threads, matching the number of CPUs on the server, turned out to be the optimal degree of parallelism.

Today I decided to check whether the same approach works for the PXF + Oracle bundle in GP, and the very first test convinced me that the idea of parallelizing a query with hard-coded filters is viable and worth adopting.

I will not burden you with SQL scripts this time; I will only convey the essence of the experiment and the conclusion.
Take an external table X in Oracle with 35 fields and a query Q from the production flow that regularly unloads a period of data from X into a stg table in GP; today it yields about 15 million rows in 5 minutes 13 seconds on average.
Find a low-cardinality field in X, say code, which in our case has 15 distinct short values such as '00', '19', etc., and build a histogram of row counts per code.

Suppose the filter F code in ('00', '19') matches 6 million rows in total, while the remaining 13 codes match 9 million rows.
Run two copies of Q in parallel, each with its own filter F: they finish in 2 min 7 sec and 2 min 50 sec on average, respectively, i.e. the total time for the two parallel threads is 2 min 50 sec, a 46% speedup.
It is possible to further increase the parallelization coefficient, but this is a topic for a separate study.
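To make the setup concrete, here is a minimal sketch of the two parallel threads, assuming a PXF external table ext_x over the Oracle table X and a target table stg.x in GP (all names and code values here are illustrative); each INSERT is launched in its own session, so the extracts run concurrently:

-- session 1: the two heaviest codes (~6 million rows)
insert into stg.x
select * from ext_x
where code in ('00', '19');

-- session 2, run concurrently in a separate connection: the remaining 13 codes (~9 million rows)
insert into stg.x
select * from ext_x
where code in ('01', '02', '03');  -- illustrative: list the actual remaining codes here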
If the table has no field suitable for such a split, you can create a view with a computed field on the source side that divides the whole set into the required number of roughly equal parts; for example, it could be the code of the first character of the Oracle rowid taken modulo 10, if we want to unload the external data in 10 parallel streams.
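A hedged sketch of such a source-side view, in Oracle syntax; the view name, the bucket column and the assumption that the first rowid character splits the data roughly evenly are all illustrative:

create or replace view v_x_bucketed as
select x.*,
       mod(ascii(substr(rowidtochar(rowid), 1, 1)), 10) as bucket_id  -- 10 roughly equal parts
from x;

-- on the GP side, each of the 10 parallel sessions reads its own bucket through PXF, e.g.:
-- insert into stg.x select ... from ext_v_x_bucketed where bucket_id = 0;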
Note that the results are averaged over two runs; the individual runs are close to the average.

Conclusion: Unloading data from an external table can be significantly accelerated if you parallelize it using your own tools.
Are you using the PXF acceleration approach described in Secret 21?
Anonymous Poll
48%
Y
30%
N
22%
We use a different approach
Is the filter NOT IN ('1','2') pushed down when reading an external Oracle table over JDBC via PXF?
As for the quiz about NOT IN: VMware claims that IN is not pushed down to Oracle over JDBC either, but tests on GP 6.25 (the Arenadata build) say the opposite. The result is statistically significant.
VMware has been blocked; the Greenplum documentation is no longer available. The link above was still accessible yesterday.
Secret 777 (Trust, but verify!)

In Secret 19 we touched on selecting the latest versions of records in an SCD2 table.
Indeed, this task is a routine operation in our DWH, solved by a typical query:

select *
from public.foo a
join (
    select n, max(version_id) version_id
    from public.foo
    where version_id between :v_min and :v_max
    group by 1
) c using (n, version_id)


At some point I noticed that unloading an increment from a 50-billion-row transaction table to S3 this way produces a 20 TB spill at best, and at worst fails with the error 'workfile per query size limit exceeded'.

The post-mortem showed that the version filter has to be applied twice: both in the subquery that computes the current versions of the keys and on the table the subquery is joined to, which was far from obvious:


select *
from public.foo a
join (
    select n, max(version_id) version_id
    from public.foo
    where version_id between :v_min and :v_max
    group by 1
) c using (n, version_id)
where a.version_id between :v_min and :v_max


This measure reduced the spill to zero, and execution time dropped by a factor of 31.

The photo shows the query plans for the curious (the optimal one is on the right): nothing special at first glance, just a small addition.
If I ever write my own database compiler, I promise to account for this fail so that the condition is propagated automatically unless stated otherwise.
What is the difference in hours?
select to_timestamp(449611200) at time zone 'europe/moscow'
union all select to_timestamp(449611200 + 3600) at time zone 'europe/moscow'
Anonymous Poll
10%
0
81%
1
9%
2
The correct answer: 2. Thanks to everyone who knew why and didn't give the solution away in the comments.
In fact, in some cases 0 would also be a correct answer, but in the context of the question the clocks in Russia were moved to summer time on April 1, 1984, i.e. one hour ahead:

1984-03-31 23:00:00.000000
1984-04-01 01:00:00.000000
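For those who want to see where the two hours come from, a minimal check (the commented output is what I would expect): rendered in UTC the two instants are exactly one hour apart, but the Moscow offset changes from UTC+3 to UTC+4 across the 1984-04-01 switch to summer time, so local time jumps by two hours.

select to_timestamp(449611200)        at time zone 'UTC';  -- 1984-03-31 20:00:00, i.e. 23:00 MSK at UTC+3
select to_timestamp(449611200 + 3600) at time zone 'UTC';  -- 1984-03-31 21:00:00, i.e. 01:00 MSK at UTC+4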
I wonder: how many paid IT courses have you taken (or are taking) over the past year?
Anonymous Poll
57%
0
9%
1
8%
2
2%
> 2
11%
So far only free ones
11%
At the expense of my employer
1%
No time, diving into GP takes up all my time
Secret 22 (The miser pays twice)

Often a developer chooses a data type that is guaranteed to be enough for every occasion: for example, if we need to encode a person's gender as digits (0 = F, 1 = M, 2 = other), smallint is quite enough.
Similarly, if we want to encode a person's ID within the city of their birth, 4 billion int values are enough, and we are unlikely to want int8 in the physical model, since the unique key CITY + ID will certainly cover all possible needs.

However, it turns out that if for some reason two DWH tables use int and int8 for such an ID, then even with consistent distribution hashes a join of these tables produces a Redistribute.

A clear example already exists here.
Let's create 2 test tables with a single id field, of type int and int8 respectively, and distribute the data by this id field:

create table tst_join_types_int (id int) 
with (appendonly=true , orientation=row, compresstype=zstd, compresslevel=1)
DISTRIBUTED BY (id);

create table tst_join_types_int8 (id int8)
with (appendonly=true , orientation=row, compresstype=zstd, compresslevel=1)
DISTRIBUTED BY (id);

insert into tst_join_types_int
select gen
from generate_series(1,1000000) gen;

insert into tst_join_types_int8
select gen
from generate_series(1,1000000) gen;

Let's look at the query plan:
explain analyze 
select *
from tst_join_types_int t1
join tst_join_types_int8 t2 using(id);

Gather Motion 864:1 (slice2; segments: 864) (cost=0.00..1002.34 rows=1000000 width=8) (actual time=150.920..339.981 rows=1000000 loops=1)
-> Hash Join (cost=0.00..975.58 rows=250000 width=8) (actual time=149.745..304.020 rows=250659 loops=1)
Hash Cond: (tst_join_types_int8.id = (tst_join_types_int.id)::bigint)
Extra Text: (seg2) Hash chain length 1.3 avg, 6 max, using 199357 of 524288 buckets.
-> Seq Scan on tst_join_types_int8 (cost=0.00..434.02 rows=250000 width=8) (actual time=0.243..36.083 rows=250659 loops=1)
-> Hash (cost=438.46..438.46 rows=250000 width=4) (actual time=149.033..149.033 rows=250659 loops=1)
-> Redistribute Motion 864:864 (slice1; segments: 864) (cost=0.00..438.46 rows=250000 width=4) (actual time=1.700..67.532 rows=250659 loops=1)
Hash Key: (tst_join_types_int.id)::bigint
-> Seq Scan on tst_join_types_int (cost=0.00..433.48 rows=250000 width=4) (actual time=0.548..51.956 rows=250659 loops=1)
Planning time: 9.074 ms
(slice0) Executor memory: 151K bytes.
(slice1) Executor memory: 220K bytes avg x 864 workers, 220K bytes max (seg0).
(slice2) Executor memory: 20840K bytes avg x 864 workers, 20840K bytes max (seg0). Work_mem: 5875K bytes max.
Memory used: 128000kB
Optimizer: Pivotal Optimizer (GPORCA)
Execution time: 399.750 ms

Secret 22 (conclusion)
As we can see, the first table was redistributed across the cluster anyway, even though its distribution key is consistent with the key of the second table.
But that's not all: it turns out that the zstd-compressed table with int weighs 116% more than the same table with int8 (see photo).
My thanks to the most active subscriber @GaRin_1979 for highlighting the int vs int8 size phenomenon and sharing the link that gave this post its idea. Special respect to the author, who carried out a large study and published its results on Habr.
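A hedged follow-up to the test above, assuming the simplest remedy is to align the types; the hypothetical table tst_join_types_int8_b below mirrors the int table but with int8, and the expectation (not a guarantee) is that the Redistribute Motion disappears from the plan once both join sides carry the same type:

create table tst_join_types_int8_b (id int8)
with (appendonly=true, orientation=row, compresstype=zstd, compresslevel=1)
DISTRIBUTED BY (id);

insert into tst_join_types_int8_b
select gen
from generate_series(1,1000000) gen;

-- with int8 on both sides the distribution keys are directly hash-compatible
explain
select *
from tst_join_types_int8_b t1
join tst_join_types_int8 t2 using(id);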
Stumbled upon this by chance, not an advertisement. It is nice that such events are becoming more frequent. Catch up with and overtake Silicon Valley :-)
Forwarded from BigData ninja (=))
Autumn big-data meetup from sql-ninja.ru
November 30 at 10:20, near the Taganskaya / Marksistskaya metro stations, Taganka Atrium business center

Meta stack!
Flink, Spark, Debezium, Iceberg, StreamHouse, production operation of databases and, most importantly, networking 🥹
Yes, there will be an online stream 😊

buy a ticket

erid:CQH36pWzJqDgK9RNHwhSjbuJD6So5abX7aDH9bXgpAsMEa
Advertisement, sole proprietor "Kim Lestat Albertovich", INN 183403475084
Secret 25 (Bitcoin 2.0, here we go again: do it yourself if you want it done well)
We have talked a lot about the importance of pre-materialization using SCD2 as an example; meet a new example with a join-key transition. I thought for a long time how to stretch the owl over the globe here, and the curve led me to crypto.

Preamble:
In a parallel universe someone invented the crypto-currency Greencoin, which works by the following simple rules.

The creator of the idea receives 1 GRC; every n-th referral (by the time their address was generated in the network) receives 1/n GRC.
/* Unlike Satoshi Nakamoto's idea, the emission is unbounded, since this infinite series does diverge, but that's beside the point -) */
The founder can have at most 12 referrals, and the same restriction applies at every level of the pyramid; in other words, each father has at most 12 sons.
The fee for a crypto transfer is 0.01% of the transferred amount and is credited by the system to the father of the referral, i.e. the person from whom they learned about the system.

Problem:
Consider the moment of evolution when the system has 100 million participants.
Let each of them make 1 transfer to anyone, for any amount within their balance.
The question is: what is each participant's reward collected as transfer fees?
Let's model the data:
System IDs
create table seq_100m
as select generate_series(1,1e8::int) n;

Father-son relationship table + balances
create table grc_origin   WITH (appendonly=true,orientation=column,compresstype=zstd,compresslevel=1)
as select n, trunc(n - (random()*12)) up, 1.0/n bal from seq_100m distributed by(n);

Bring the founders' forefathers back into the range of IDs allocated to the system, in case random() accidentally sent them into oblivion
update grc_origin
set up = 1
where up < 1;

Table of transactions (transfers)
create table grc_pmt   WITH (appendonly=true,orientation=column,compresstype=zstd,compresslevel=1)
as select n n_from, (random()*1e8) n_to, bal * random() amt from grc_origin distributed by(n_from);

For reference: total GRC in the system
select sum(bal) from grc_origin -- 18.99

The head-on answer to the question posed:
explain analyze
select x.n, bal, t.fee, 1.0 * t.fee / bal from grc_origin x
join (
select up, sum(amt * 0.0001) fee from grc_origin, grc_pmt
where n = n_from
group by 1) t
on x.n = t.up
order by 4 desc;
Execution time: 102615.943 ms(131005.427ms)

But where is the point? — the listener can't take it anymore, as in the joke about the camel caravan. What is the secret of Damascus steel??
Answer: the variant above is not optimal.
How do we improve it?
Pre-materialize the subquery t and use the stored result:
 
create table tst_snap WITH (appendonly=true,orientation=column,compresstype=zstd,compresslevel=1)
as select up, sum(amt * 0.0001) fee from grc_origin, grc_pmt
where n = n_from
group by 1 distributed by(up);
--64,796,937 rows affected in 6 s 375 ms

Let's see how the plan has changed:
explain analyze
select x.n, bal, t.fee, 1.0 * t.fee / bal from grc_origin x
join tst_snap t
on x.n = t.up
order by 4 desc;
Execution time: 90525.993 ms(94188.913 ms)

The figures in parentheses are the results of a second run, also taken from the plans, which as usual are given below.
Conclusion: pre-materializing a CTE when the join key changes in the next slice can give a gain of 6 to 30%.
Plan of the head-on query:
Gather Motion 864:1  (slice3; segments: 864)  (cost=0.00..4451.38 rows=46321098 width=28) (actual time=8658.929..98572.735 rows=64796937 loops=1)
Merge Key: ((('1'::double precision * (sum((grc_pmt.amt * '0.000100000000000000005'::double precision)))) / (grc_origin.bal)::double precision))
-> Sort (cost=0.00..1591.13 rows=53613 width=28) (actual time=7470.019..7478.919 rows=75895 loops=1)
Sort Key: ((('1'::double precision * (sum((grc_pmt.amt * '0.000100000000000000005'::double precision)))) / (grc_origin.bal)::double precision))
Sort Method: quicksort Memory: 9726048kB
-> Result (cost=0.00..1457.41 rows=53613 width=28) (actual time=6431.395..7224.114 rows=75895 loops=1)
-> Hash Join (cost=0.00..1455.91 rows=53613 width=24) (actual time=6426.155..6859.024 rows=75895 loops=1)
Hash Cond: ((grc_origin.n)::double precision = grc_origin_1.up)
" Extra Text: (seg302) Hash chain length 1.3 avg, 8 max, using 57270 of 131072 buckets.Hash chain length 4.7 avg, 15 max, using 16244 of 16384 buckets; total 9 expansions."
""
-> Redistribute Motion 864:864 (slice1; segments: 864) (cost=0.00..442.66 rows=115741 width=16) (actual time=0.049..114.590 rows=116893 loops=1)
Hash Key: (grc_origin.n)::double precision
-> Seq Scan on grc_origin (cost=0.00..433.42 rows=115741 width=16) (actual time=0.565..390.875 rows=116911 loops=1)
-> Hash (cost=962.67..962.67 rows=53613 width=16) (actual time=6419.276..6419.276 rows=75895 loops=1)
-> HashAggregate (cost=0.00..962.67 rows=53613 width=16) (actual time=6231.267..6319.833 rows=75895 loops=1)
Group Key: grc_origin_1.up
" Extra Text: (seg302) Hash chain length 4.7 avg, 15 max, using 16244 of 16384 buckets; total 9 expansions."
""
-> Redistribute Motion 864:864 (slice2; segments: 864) (cost=0.00..948.09 rows=115741 width=16) (actual time=153.612..6002.775 rows=117301 loops=1)
Hash Key: grc_origin_1.up
-> Hash Join (cost=0.00..942.29 rows=115741 width=16) (actual time=1413.348..3148.457 rows=116911 loops=1)
Hash Cond: (grc_origin_1.n = grc_pmt.n_from)
" Extra Text: (seg822) Hash chain length 1.3 avg, 7 max, using 93416 of 262144 buckets."
-> Seq Scan on grc_origin grc_origin_1 (cost=0.00..433.42 rows=115741 width=12) (actual time=0.598..559.136 rows=116911 loops=1)
-> Hash (cost=433.16..433.16 rows=115741 width=12) (actual time=1390.377..1390.377 rows=116911 loops=1)
-> Seq Scan on grc_pmt (cost=0.00..433.16 rows=115741 width=12) (actual time=0.630..502.400 rows=116911 loops=1)
Planning time: 64.511 ms
(slice0) Executor memory: 18213K bytes.
" (slice1) Executor memory: 340K bytes avg x 864 workers, 356K bytes max (seg644)."
" (slice2) Executor memory: 19068K bytes avg x 864 workers, 19084K bytes max (seg644). Work_mem: 4567K bytes max."
" (slice3) Executor memory: 20958K bytes avg x 864 workers, 20958K bytes max (seg0). Work_mem: 11257K bytes max."
Memory used: 540672kB
Optimizer: Pivotal Optimizer (GPORCA)
Execution time: 102615.943 ms
Plan of the optimized query:
Gather Motion 864:1  (slice2; segments: 864)  (cost=0.00..5135.44 rows=64796936 width=28) (actual time=4121.411..86304.398 rows=64796937 loops=1)
Merge Key: ((('1'::double precision * tst_snap.fee) / (grc_origin.bal)::double precision))
-> Sort (cost=0.00..1134.33 rows=74997 width=28) (actual time=3499.849..3512.324 rows=75895 loops=1)
Sort Key: ((('1'::double precision * tst_snap.fee) / (grc_origin.bal)::double precision))
Sort Method: quicksort Memory: 9726048kB
-> Result (cost=0.00..941.51 rows=74997 width=28) (actual time=344.682..3222.810 rows=75895 loops=1)
-> Hash Join (cost=0.00..939.41 rows=74997 width=24) (actual time=338.428..2445.560 rows=75895 loops=1)
Hash Cond: ((grc_origin.n)::double precision = tst_snap.up)
" Extra Text: (seg302) Hash chain length 1.3 avg, 8 max, using 57270 of 131072 buckets."
-> Redistribute Motion 864:864 (slice1; segments: 864) (cost=0.00..442.66 rows=115741 width=16) (actual time=0.118..1403.245 rows=116893 loops=1)
Hash Key: (grc_origin.n)::double precision
-> Seq Scan on grc_origin (cost=0.00..433.42 rows=115741 width=16) (actual time=0.555..30.152 rows=116911 loops=1)
-> Hash (cost=432.24..432.24 rows=74997 width=16) (actual time=338.056..338.056 rows=75895 loops=1)
-> Seq Scan on tst_snap (cost=0.00..432.24 rows=74997 width=16) (actual time=1.638..178.638 rows=75895 loops=1)
Planning time: 39.724 ms
(slice0) Executor memory: 17457K bytes.
" (slice1) Executor memory: 340K bytes avg x 864 workers, 356K bytes max (seg644)."
" (slice2) Executor memory: 20983K bytes avg x 864 workers, 20983K bytes max (seg0). Work_mem: 11257K bytes max."
Memory used: 540672kB
Optimizer: Pivotal Optimizer (GPORCA)
Execution time: 90525.993 ms
Do the NOT IN and NOT EXISTS operators produce an identical dataset in GP?
Anonymous Quiz
30%
Y
70%
N
Secret 26 (Don't leave the crumbs unattended)

We have already covered the problems that arise with empty tables or with missing statistics.

This time a trivial join of 3 tables almost brought our server down, all because one of them had no statistics.
The query below, executed several times a day, produced a 31 TB spill:

select b.deal_fee_rk, b.invalid_id, b.effective_date, max(b.version_id) version_id
from big b
join medium d
on d.c_comiss_arr_rk = b.collection_rk
join tiny as f on f.type_debt_rk = b.c_debt_rk
where (b.version_id between 1 and 2732523)
and b.valid_flg is true
group by b.deal_fee_rk, b.invalid_id, b.effective_date;


where the AOCO zstd tables big, medium and tiny contained 1.5 billion, 1 million and 5 rows, respectively.
The last one had no statistics.


If you look at the query plan below, you will see 2 Broadcast Motions: the 1st one, as expected, replicates the tiny crumb to all segments, while the 2nd one unexpectedly duplicates the result of the join of big and tiny to all segments as well.
Why GPORCA uses the 2nd Broadcast instead of a Redistribute, given that the key of medium in the next slice is known, I can only guess.
Which makes this secret all the more valuable.


I also checked what happens if the statistics are missing only for the medium table. Things are much better there: the plan has only 1 Broadcast and there is no spill.

The bottom line: if the tables in your DWH are created by a PL/pgSQL function and gp_autostats_mode_in_functions = on_change (set explicitly or at the GUC level), the galaxy of your data (or of your finances, if you rent a cloud) is in danger if you stop there and do not take care of the statistics of negligibly small tables.


The moral: collect statistics on small tables, at least those up to 10,000 rows, so that such mines stay outside the perimeter of your data platform.
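A minimal housekeeping sketch under those assumptions; the explicit ANALYZE right after loading a small table is the essential part, and the catalog query is just one possible way to spot user tables that have never been analyzed (pg_stat_last_operation records the last ANALYZE per relation in Greenplum):

-- analyze the crumb explicitly right after it is (re)loaded, e.g. inside the same PL/pgSQL function
analyze tiny;

-- one possible check for user tables with no recorded ANALYZE
select n.nspname, c.relname
from pg_class c
join pg_namespace n on n.oid = c.relnamespace
left join pg_stat_last_operation o
  on o.objid = c.oid
 and o.staactionname = 'ANALYZE'
where c.relkind = 'r'
  and n.nspname not in ('pg_catalog', 'information_schema', 'gp_toolkit')
  and o.objid is null;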


Have a good Friday the 13th, everyone, and optimal computations!
Plan of the original query (when only the tiny table has no statistics):
Gather Motion 864:1  (slice4; segments: 864)  (cost=0.00..1371.49 rows=1 width=28) (actual time=1871747.423..1875490.705 rows=22129384 loops=1)
-> GroupAggregate (cost=0.00..1371.49 rows=1 width=28) (actual time=1871747.210..1871768.905 rows=26077 loops=1)
" Group Key: big.deal_fee_rk, big.invalid_id, big.effective_date"
-> Sort (cost=0.00..1371.49 rows=1 width=28) (actual time=1871747.155..1871750.100 rows=26217 loops=1)
" Sort Key: big.deal_fee_rk, big.invalid_id, big.effective_date"
Sort Method: quicksort Memory: 2482272kB
-> Redistribute Motion 864:864 (slice3; segments: 864) (cost=0.00..1371.49 rows=1 width=28) (actual time=1475307.979..1871714.814 rows=26217 loops=1)
" Hash Key: big.deal_fee_rk, big.invalid_id, big.effective_date"
-> Hash Join (cost=0.00..1371.49 rows=1 width=28) (actual time=1413328.254..1514773.884 rows=35025 loops=1)
Hash Cond: (medium.c_comiss_arr_rk = big.collection_rk)
Extra Text: (seg0) Initial batch 0:
(seg0) Wrote 10810674K bytes to inner workfile.
(seg0) Wrote 23K bytes to outer workfile.
(seg0) Overflow batches 1..255:
"(seg0) Read 15508008K bytes from inner workfile: 60816K avg x 255 nonempty batches, 236543K max."
"(seg0) Wrote 4697335K bytes to inner workfile: 36987K avg x 127 overflowing batches, 192573K max."
"(seg0) Read 23K bytes from outer workfile: 1K avg x 253 nonempty batches, 1K max."
"(seg0) Hash chain length 50.8 avg, 3476 max, using 4210779 of 33554432 buckets.Initial batch 0:"
""
Extra Text: (seg575) Initial batch 0:
(seg575) Wrote 10810674K bytes to inner workfile.
(seg575) Wrote 24K bytes to outer workfile.
(seg575) Overflow batches 1..255:
"(seg575) Read 15507743K bytes from inner workfile: 60815K avg x 255 nonempty batches, 236532K max."
"(seg575) Wrote 4697070K bytes to inner workfile: 36985K avg x 127 overflowing batches, 192563K max."
"(seg575) Read 24K bytes from outer workfile: 1K avg x 252 nonempty batches, 1K max."
"(seg575) Hash chain length 50.8 avg, 3476 max, using 4210779 of 33554432 buckets."
-> Seq Scan on medium (cost=0.00..431.01 rows=1204 width=8) (actual time=0.998..1.228 rows=1305 loops=1)
-> Hash (cost=940.25..940.25 rows=1 width=36) (actual time=1413176.714..1413176.714 rows=213719427 loops=1)
-> Broadcast Motion 864:864 (slice2; segments: 864) (cost=0.00..940.25 rows=1 width=36) (actual time=444.691..616240.052 rows=213719427 loops=1)
-> Hash Join (cost=0.00..940.20 rows=1 width=36) (actual time=609.326..2707.949 rows=261266 loops=1)
Hash Cond: (big.c_debt_rk = tiny.type_debt_rk)
" Extra Text: (seg427) Hash chain length 1.0 avg, 1 max, using 5 of 262144 buckets."
-> Seq Scan on big (cost=0.00..486.84 rows=111417 width=44) (actual time=1.457..775.750 rows=299164 loops=1)
Filter: ((version_id >= 1) AND (version_id <= 2732523) AND (valid_flg IS TRUE))
-> Hash (cost=431.01..431.01 rows=1 width=8) (actual time=375.789..375.789 rows=5 loops=1)
-> Broadcast Motion 864:864 (slice1; segments: 864) (cost=0.00..431.01 rows=1 width=8) (actual time=0.352..375.764 rows=5 loops=1)
-> Seq Scan on tiny (cost=0.00..431.00 rows=1 width=8) (actual time=10.545..10.553 rows=1 loops=1)
Planning time: 131.145 ms
(slice0) Executor memory: 2247K bytes.
" (slice1) Executor memory: 172K bytes avg x 864 workers, 172K bytes max (seg0)."