Posted: 2021-07-01 10:21:17
The Greenplum cluster consists of 2 master hosts (master + standby) and 7 segment hosts.
The system was just about to go into production as a statistics database and was being used to cross-check the accuracy of data in a Hadoop cluster, so detail (record-level) data had been loaded into this Greenplum database.
Later analysis found that some of the CREATE TABLE statements were flawed: they neither specified a column as the distribution key nor declared the table randomly distributed, so by default the first column became the distribution key, which caused severe data skew.
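For reference, the problem class above is avoided by always declaring the distribution policy explicitly, and skew can be measured per segment. A minimal sketch; the table and column names are hypothetical (not from this incident), and `cems` is the database name that appears in the process list later in this article:

```shell
# Hedged sketch: explicit distribution policies and a skew check.
# t_detail / call_id are illustrative names only.
psql -d cems -c "CREATE TABLE t_detail (call_id bigint, caller text, called text)
DISTRIBUTED BY (call_id);"
# Explicit round-robin distribution when no good key exists:
psql -d cems -c "CREATE TABLE t_detail_rnd (LIKE t_detail) DISTRIBUTED RANDOMLY;"
# Measure skew: row counts per segment should be roughly even.
psql -d cems -c "SELECT gp_segment_id, count(*) FROM t_detail GROUP BY 1 ORDER BY 2 DESC;"
```

With a skewed key, the segment at the top of that last query's output holds disproportionately many rows, which is exactly how a single segment's /data filesystem fills up first.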
The database was found to be extremely slow, almost unusable, so the first step was to check the state of the Greenplum cluster.
1. Check the status of the Greenplum database
gpadmin@mdw:~> gpstate
20150227:15:20:13:007202 gpstate:mdw:gpadmin-[INFO]:-Starting gpstate with args:
20150227:15:20:13:007202 gpstate:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.3.3.1 build 1'
20150227:15:20:13:007202 gpstate:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.2.15 (Greenplum Database 4.3.3.1 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on Oct 10 2014 14:31:50'
20150227:15:20:13:007202 gpstate:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20150227:15:20:13:007202 gpstate:mdw:gpadmin-[INFO]:-Gathering data from segments...
......
20150227:15:20:19:007202 gpstate:mdw:gpadmin-[INFO]:-Greenplum instance status summary
20150227:15:20:19:007202 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20150227:15:20:19:007202 gpstate:mdw:gpadmin-[INFO]:- Master instance = Active
20150227:15:20:19:007202 gpstate:mdw:gpadmin-[INFO]:- Master standby = smdw
20150227:15:20:19:007202 gpstate:mdw:gpadmin-[INFO]:- Standby master state = Standby host passive
20150227:15:20:19:007202 gpstate:mdw:gpadmin-[INFO]:- Total segment instance count from metadata = 56
20150227:15:20:19:007202 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20150227:15:20:19:007202 gpstate:mdw:gpadmin-[INFO]:- Primary Segment Status
20150227:15:20:19:007202 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20150227:15:20:19:007202 gpstate:mdw:gpadmin-[INFO]:- Total primary segments = 28
20150227:15:20:19:007202 gpstate:mdw:gpadmin-[INFO]:- Total primary segment valid (at master) = 24
20150227:15:20:19:007202 gpstate:mdw:gpadmin-[WARNING]:-Total primary segment failures (at master) = 4 <<<<<<<<
20150227:15:20:19:007202 gpstate:mdw:gpadmin-[INFO]:- Total number of postmaster.pid files missing = 0
20150227:15:20:19:007202 gpstate:mdw:gpadmin-[INFO]:- Total number of postmaster.pid files found = 28
20150227:15:20:19:007202 gpstate:mdw:gpadmin-[INFO]:- Total number of postmaster.pid PIDs missing = 0
20150227:15:20:19:007202 gpstate:mdw:gpadmin-[INFO]:- Total number of postmaster.pid PIDs found = 28
20150227:15:20:19:007202 gpstate:mdw:gpadmin-[INFO]:- Total number of /tmp lock files missing = 0
20150227:15:20:19:007202 gpstate:mdw:gpadmin-[INFO]:- Total number of /tmp lock files found = 28
20150227:15:20:19:007202 gpstate:mdw:gpadmin-[WARNING]:-Total number postmaster processes missing = 4 <<<<<<<<
20150227:15:20:19:007202 gpstate:mdw:gpadmin-[INFO]:- Total number postmaster processes found = 24
20150227:15:20:19:007202 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20150227:15:20:20:007202 gpstate:mdw:gpadmin-[INFO]:- Mirror Segment Status
20150227:15:20:20:007202 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20150227:15:20:20:007202 gpstate:mdw:gpadmin-[INFO]:- Total mirror segments = 28
20150227:15:20:20:007202 gpstate:mdw:gpadmin-[INFO]:- Total mirror segment valid (at master) = 21
20150227:15:20:20:007202 gpstate:mdw:gpadmin-[WARNING]:-Total mirror segment failures (at master) = 7 <<<<<<<<
20150227:15:20:20:007202 gpstate:mdw:gpadmin-[INFO]:- Total number of postmaster.pid files missing = 0
20150227:15:20:20:007202 gpstate:mdw:gpadmin-[INFO]:- Total number of postmaster.pid files found = 28
20150227:15:20:20:007202 gpstate:mdw:gpadmin-[INFO]:- Total number of postmaster.pid PIDs missing = 0
20150227:15:20:20:007202 gpstate:mdw:gpadmin-[INFO]:- Total number of postmaster.pid PIDs found = 28
20150227:15:20:20:007202 gpstate:mdw:gpadmin-[INFO]:- Total number of /tmp lock files missing = 0
20150227:15:20:20:007202 gpstate:mdw:gpadmin-[INFO]:- Total number of /tmp lock files found = 28
20150227:15:20:20:007202 gpstate:mdw:gpadmin-[WARNING]:-Total number postmaster processes missing = 4 <<<<<<<<
20150227:15:20:20:007202 gpstate:mdw:gpadmin-[INFO]:- Total number postmaster processes found = 24
20150227:15:20:20:007202 gpstate:mdw:gpadmin-[WARNING]:-Total number mirror segments acting as primary segments = 4 <<<<<<<<
20150227:15:20:20:007202 gpstate:mdw:gpadmin-[INFO]:- Total number mirror segments acting as mirror segments = 24
20150227:15:20:20:007202 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
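The warnings above show four primary segments down, with their mirrors promoted and acting as primaries. Once the underlying disk problem is resolved, the usual path back to a healthy state is segment recovery. A sketch, assuming a reachable master and free space on the failed hosts:

```shell
gpstate -e        # list only segments with error conditions
gprecoverseg -a   # incremental recovery of the failed segments, no prompt
gpstate -m        # check mirror status after recovery completes
gprecoverseg -r   # rebalance primaries back to their preferred roles
```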
2. Check the database servers
gpadmin@mdw:/> gpssh -h sdw1 -h sdw2 -h sdw3 -h sdw4 -h sdw5 -h sdw6 -h sdw7 "df -hT"
[sdw4] df: "/root/.gvfs": Permission denied
[sdw4] Filesystem Type Size Used Avail Use% Mounted on
[sdw4] /dev/sda2 ext3 99G 5.7G 88G 7% /
[sdw4] devtmpfs devtmpfs 32G 448K 32G 1% /dev
[sdw4] tmpfs tmpfs 95G 100K 95G 1% /dev/shm
[sdw4] /dev/sda1 ext3 9.9G 220M 9.2G 3% /boot
[sdw4] /dev/sda5 ext3 197G 188M 187G 1% /home
[sdw4] /dev/sdb xfs 4.6T 3.8T 865G 82% /data
[sdw5] Filesystem Type Size Used Avail Use% Mounted on
[sdw5] /dev/sda2 ext3 60G 5.5G 51G 10% /
[sdw5] devtmpfs devtmpfs 32G 448K 32G 1% /dev
[sdw5] tmpfs tmpfs 32G 88K 32G 1% /dev/shm
[sdw5] /dev/sda1 ext3 9.9G 220M 9.2G 3% /boot
[sdw5] /dev/sda5 ext3 785G 197M 745G 1% /home
[sdw5] /dev/sdb xfs 4.6T 2.4T 2.2T 53% /data
[sdw6] df: "/root/.gvfs": Permission denied
[sdw6] Filesystem Type Size Used Avail Use% Mounted on
[sdw6] /dev/sda2 ext3 99G 910M 93G 1% /
[sdw6] devtmpfs devtmpfs 32G 448K 32G 1% /dev
[sdw6] tmpfs tmpfs 47G 100K 47G 1% /dev/shm
[sdw6] /dev/sda1 ext3 9.9G 220M 9.2G 3% /boot
[sdw6] /dev/sda5 ext3 197G 188M 187G 1% /home
[sdw6] /dev/sda3 ext3 63G 5.0G 55G 9% /usr
[sdw6] /dev/sdb xfs 4.6T 4.5T 93G 99% /data
[sdw6] /dev/sr0 iso9660 3.1G 3.1G 0 100% /media/SLES-11-SP2-DVD-x86_6407551
[sdw7] df: "/root/.gvfs": Permission denied
[sdw7] Filesystem Type Size Used Avail Use% Mounted on
[sdw7] /dev/sda2 ext3 60G 440M 56G 1% /
[sdw7] devtmpfs devtmpfs 32G 244K 32G 1% /dev
[sdw7] tmpfs tmpfs 95G 112K 95G 1% /dev/shm
[sdw7] /dev/sda1 ext3 9.9G 180M 9.2G 2% /boot
[sdw7] /dev/sda5 ext3 197G 188M 187G 1% /home
[sdw7] /dev/sda6 ext3 9.9G 264M 9.1G 3% /opt
[sdw7] /dev/sda8 ext3 9.9G 151M 9.2G 2% /srv
[sdw7] /dev/sda7 ext3 9.9G 162M 9.2G 2% /tmp
[sdw7] /dev/sda9 ext3 40G 4.8G 33G 13% /usr
[sdw7] /dev/sda10 ext3 9.9G 358M 9.0G 4% /var
[sdw7] /dev/sdb xfs 4.6T 3.7T 943G 80% /data
[sdw1] df: "/root/.gvfs": Permission denied
[sdw1] Filesystem Type Size Used Avail Use% Mounted on
[sdw1] /dev/sda1 ext3 99G 5.7G 88G 7% /
[sdw1] devtmpfs devtmpfs 32G 444K 32G 1% /dev
[sdw1] tmpfs tmpfs 95G 100K 95G 1% /dev/shm
[sdw1] /dev/sda2 ext3 7.9G 216M 7.3G 3% /boot
[sdw1] /dev/sda3 ext3 197G 188M 187G 1% /home
[sdw1] /dev/sdb xfs 4.6T 3.6T 1.1T 78% /data
[sdw2] df: "/root/.gvfs": Permission denied
[sdw2] Filesystem Type Size Used Avail Use% Mounted on
[sdw2] /dev/sda1 ext3 99G 5.7G 88G 7% /
[sdw2] devtmpfs devtmpfs 32G 444K 32G 1% /dev
[sdw2] tmpfs tmpfs 95G 100K 95G 1% /dev/shm
[sdw2] /dev/sda2 ext3 7.9G 216M 7.3G 3% /boot
[sdw2] /dev/sda3 ext3 197G 188M 187G 1% /home
[sdw2] /dev/sdb xfs 4.6T 4.6T 2.0G 100% /data
[sdw3] df: "/root/.gvfs": Permission denied
[sdw3] Filesystem Type Size Used Avail Use% Mounted on
[sdw3] /dev/sda2 ext3 60G 5.5G 51G 10% /
[sdw3] devtmpfs devtmpfs 32G 448K 32G 1% /dev
[sdw3] tmpfs tmpfs 32G 100K 32G 1% /dev/shm
[sdw3] /dev/sda1 ext3 9.9G 220M 9.2G 3% /boot
[sdw3] /dev/sda5 ext3 785G 197M 745G 1% /home
[sdw3] /dev/sdb xfs 4.6T 3.8T 856G 82% /data
This shows that the /data filesystems on segment hosts sdw2 (100% used, only 2.0G free) and sdw6 (99% used) are essentially full.
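Rather than eyeballing every row of df output across nine hosts, the nearly full filesystems can be filtered mechanically. A minimal sketch, assuming plain `df -hT`-style body lines (without the `[sdwN]` prefix that gpssh adds):

```shell
# df_full: print "mountpoint use%" for lines at or above a usage threshold.
# Expects df -hT body lines on stdin (Use% in field 6, mount point in field 7).
df_full() {
  awk -v t="$1" '$6 ~ /%$/ { p = $6; sub(/%/, "", p); if (p + 0 >= t) print $7, $6 }'
}

# Example with lines captured from this incident:
printf '%s\n' \
  '/dev/sdb xfs 4.6T 4.6T 2.0G 100% /data' \
  '/dev/sdb xfs 4.6T 4.5T 93G 99% /data' \
  '/dev/sda2 ext3 99G 5.7G 88G 7% /' | df_full 95
# -> /data 100%
#    /data 99%
```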
3. Drop the detail table
Connected to the database with psql and ran DROP TABLE on the table. After waiting more than half a day with no response, gpssh -f was used to check I/O on the servers: the segment hosts showed no read or write activity at all, which indicates the master could no longer dispatch the command down to the segments.
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sr0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sr0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sr0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sr0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
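Zero I/O on every segment while a DROP TABLE hangs suggests the statement is stuck in dispatch or waiting on a lock rather than doing work. On this version (Greenplum 4.3, PostgreSQL 8.2 catalogs) the session and lock state can be inspected from the master; a sketch, assuming the cems database seen in the process list later:

```shell
# Sessions and what they are running (8.2-era column names):
psql -d cems -c "SELECT procpid, waiting, query_start, current_query
FROM pg_stat_activity ORDER BY query_start;"
# Ungranted locks, which would show what the DROP is waiting on:
psql -d cems -c "SELECT locktype, relation::regclass, mode, granted, pid
FROM pg_locks WHERE NOT granted;"
```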
4. Log in to sdw2 and sdw6
#cd /data/primary/gpseg4/gp_log
#ls -ltrh
The directory contains gpdb-<date>.csv log files; delete them directly:
#rm -rf *.csv
The same deletion was performed in the gp_log directory of every gpseg instance, but after it finished, very little space had been freed.
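Since deleting the csv logs recovered almost nothing, it helps to find what is actually consuming the space before deciding what else can go. A small self-contained helper; the /data/primary path in the comment is illustrative:

```shell
# top_dirs: list the ten largest first-level entries under a directory,
# biggest first (sizes in KB).
top_dirs() {
  du -sk "$1"/* 2>/dev/null | sort -rn | head -n 10
}

# On a segment host one would run, e.g.:
#   top_dirs /data/primary
```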
=> df -hT
[sdw4] df: "/root/.gvfs": Permission denied
[sdw4] Filesystem Type Size Used Avail Use% Mounted on
[sdw4] /dev/sda2 ext3 99G 5.7G 88G 7% /
[sdw4] devtmpfs devtmpfs 32G 448K 32G 1% /dev
[sdw4] tmpfs tmpfs 95G 100K 95G 1% /dev/shm
[sdw4] /dev/sda1 ext3 9.9G 220M 9.2G 3% /boot
[sdw4] /dev/sda5 ext3 197G 188M 187G 1% /home
[sdw4] /dev/sdb xfs 4.6T 3.7T 868G 82% /data
[sdw5] Filesystem Type Size Used Avail Use% Mounted on
[sdw5] /dev/sda2 ext3 60G 5.5G 51G 10% /
[sdw5] devtmpfs devtmpfs 32G 448K 32G 1% /dev
[sdw5] tmpfs tmpfs 32G 88K 32G 1% /dev/shm
[sdw5] /dev/sda1 ext3 9.9G 220M 9.2G 3% /boot
[sdw5] /dev/sda5 ext3 785G 197M 745G 1% /home
[sdw5] /dev/sdb xfs 4.6T 2.4T 2.2T 53% /data
[sdw6] df: "/root/.gvfs": Permission denied
[sdw6] Filesystem Type Size Used Avail Use% Mounted on
[sdw6] /dev/sda2 ext3 99G 911M 93G 1% /
[sdw6] devtmpfs devtmpfs 32G 448K 32G 1% /dev
[sdw6] tmpfs tmpfs 47G 100K 47G 1% /dev/shm
[sdw6] /dev/sda1 ext3 9.9G 220M 9.2G 3% /boot
[sdw6] /dev/sda5 ext3 197G 188M 187G 1% /home
[sdw6] /dev/sda3 ext3 63G 5.0G 55G 9% /usr
[sdw6] /dev/sdb xfs 4.6T 4.5T 96G 98% /data
[sdw6] /dev/sr0 iso9660 3.1G 3.1G 0 100% /media/SLES-11-SP2-DVD-x86_6407551
[sdw7] df: "/root/.gvfs": Permission denied
[sdw7] Filesystem Type Size Used Avail Use% Mounted on
[sdw7] /dev/sda2 ext3 60G 440M 56G 1% /
[sdw7] devtmpfs devtmpfs 32G 244K 32G 1% /dev
[sdw7] tmpfs tmpfs 95G 112K 95G 1% /dev/shm
[sdw7] /dev/sda1 ext3 9.9G 180M 9.2G 2% /boot
[sdw7] /dev/sda5 ext3 197G 188M 187G 1% /home
[sdw7] /dev/sda6 ext3 9.9G 264M 9.1G 3% /opt
[sdw7] /dev/sda8 ext3 9.9G 151M 9.2G 2% /srv
[sdw7] /dev/sda7 ext3 9.9G 162M 9.2G 2% /tmp
[sdw7] /dev/sda9 ext3 40G 4.8G 33G 13% /usr
[sdw7] /dev/sda10 ext3 9.9G 359M 9.0G 4% /var
[sdw7] /dev/sdb xfs 4.6T 3.7T 945G 80% /data
[sdw1] df: "/root/.gvfs": Permission denied
[sdw1] Filesystem Type Size Used Avail Use% Mounted on
[sdw1] /dev/sda1 ext3 99G 5.7G 88G 7% /
[sdw1] devtmpfs devtmpfs 32G 444K 32G 1% /dev
[sdw1] tmpfs tmpfs 95G 100K 95G 1% /dev/shm
[sdw1] /dev/sda2 ext3 7.9G 216M 7.3G 3% /boot
[sdw1] /dev/sda3 ext3 197G 188M 187G 1% /home
[sdw1] /dev/sdb xfs 4.6T 3.6T 1.1T 78% /data
[sdw2] df: "/root/.gvfs": Permission denied
[sdw2] Filesystem Type Size Used Avail Use% Mounted on
[sdw2] /dev/sda1 ext3 99G 5.7G 88G 7% /
[sdw2] devtmpfs devtmpfs 32G 444K 32G 1% /dev
[sdw2] tmpfs tmpfs 95G 100K 95G 1% /dev/shm
[sdw2] /dev/sda2 ext3 7.9G 216M 7.3G 3% /boot
[sdw2] /dev/sda3 ext3 197G 188M 187G 1% /home
[sdw2] /dev/sdb xfs 4.6T 4.6T 21G 100% /data
[sdw3] df: "/root/.gvfs": Permission denied
[sdw3] Filesystem Type Size Used Avail Use% Mounted on
[sdw3] /dev/sda2 ext3 60G 5.5G 51G 10% /
[sdw3] devtmpfs devtmpfs 32G 448K 32G 1% /dev
[sdw3] tmpfs tmpfs 32G 100K 32G 1% /dev/shm
[sdw3] /dev/sda1 ext3 9.9G 220M 9.2G 3% /boot
[sdw3] /dev/sda5 ext3 785G 198M 745G 1% /home
[sdw3] /dev/sdb xfs 4.6T 3.8T 859G 82% /data
[smdw] df: "/root/.gvfs": Permission denied
[smdw] Filesystem Type Size Used Avail Use% Mounted on
[smdw] /dev/sda2 ext3 60G 5.8G 51G 11% /
[smdw] devtmpfs devtmpfs 32G 448K 32G 1% /dev
[smdw] tmpfs tmpfs 32G 100K 32G 1% /dev/shm
[smdw] /dev/sda1 ext3 9.9G 220M 9.2G 3% /boot
[smdw] /dev/sda5 ext3 40G 23G 16G 60% /home
[smdw] /dev/sda6 xfs 757G 3.3G 754G 1% /data
[smdw] /dev/sr0 iso9660 3.1G 3.1G 0 100% /media/SLES-11-SP2-DVD-x86_6407551
[ mdw] df: "/root/.gvfs": Permission denied
[ mdw] Filesystem Type Size Used Avail Use% Mounted on
[ mdw] /dev/sda2 ext3 60G 6.4G 50G 12% /
[ mdw] devtmpfs devtmpfs 32G 456K 32G 1% /dev
[ mdw] tmpfs tmpfs 32G 100K 32G 1% /dev/shm
[ mdw] /dev/sda1 ext3 9.9G 220M 9.2G 3% /boot
[ mdw] /dev/sda5 ext3 40G 281M 38G 1% /home
[ mdw] /dev/sda6 xfs 757G 6.4G 751G 1% /data
5. Stop the database
Run gpstop.
It reported: -gpstop failed (Reason='FATAL: the database system is shutting down') exiting ...
gpstop could not stop the database.
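For the record, gpstop supports progressively more forceful shutdown modes, which are the usual next step when the default (smart) mode hangs; a sketch:

```shell
gpstop -a -M fast        # roll back active transactions, then shut down
gpstop -a -M immediate   # abort immediately without cleanup; last resort
```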
6. Check the processes
gpssh -h sdw1 -h sdw2 -h sdw3 -h sdw4 -h sdw5 -h sdw6 -h sdw7 -h mdw -h smdw "ps -ef |grep postgres"
The database processes on the segments had not stopped. Below are the processes on one of the segment hosts:
[sdw4] gpadmin 15168 31843 0 Feb26 ? 00:00:14 postgres: port 40000, cems cems 198.168.11.11(52166) con6959 seg12 idle in transaction
[sdw4] gpadmin 15170 31838 0 Feb26 ? 00:00:15 postgres: port 40001, cems cems 198.168.11.11(38065) con6959 seg13 idle in transaction
[sdw4] gpadmin 15172 31841 0 Feb26 ? 00:00:16 postgres: port 40002, cems cems 198.168.11.11(3175) con6959 seg14 idle in transaction
[sdw4] gpadmin 15174 31837 0 Feb26 ? 00:00:16 postgres: port 40003, cems cems 198.168.11.11(31152) con6959 seg15 idle in transaction
[sdw4] gpadmin 15176 31843 0 Feb26 ? 00:00:04 postgres: port 40000, cems cems 198.168.11.11(52194) con6959 seg12 idle
[sdw4] gpadmin 15178 31838 0 Feb26 ? 00:00:04 postgres: port 40001, cems cems 198.168.11.11(38093) con6959 seg13 idle
[sdw4] gpadmin 15180 31841 0 Feb26 ? 00:00:04 postgres: port 40002, cems cems 198.168.11.11(3203) con6959 seg14 idle
[sdw4] gpadmin 15182 31837 0 Feb26 ? 00:00:04 postgres: port 40003, cems cems 198.168.11.11(31180) con6959 seg15 idle
[sdw4] gpadmin 15204 31843 0 Feb26 ? 00:00:16 postgres: port 40000, cems cems 198.168.11.11(52298) con6949 seg12 idle in transaction
[sdw4] gpadmin 15206 31838 0 Feb26 ? 00:00:15 postgres: port 40001, cems cems 198.168.11.11(38197) con6949 seg13 idle in transaction
[sdw4] gpadmin 15208 31841 0 Feb26 ? 00:00:16 postgres: port 40002, cems cems 198.168.11.11(3307) con6949 seg14 idle in transaction
[sdw4] gpadmin 15210 31837 0 Feb26 ? 00:00:15 postgres: port 40003, cems cems 198.168.11.11(31284) con6949 seg15 idle in transaction
[sdw4] gpadmin 31836 1 0 Feb04 ? 00:00:04 /usr/local/greenplum-db-4.3.3.1/bin/postgres -D /data/mirror/gpseg27 -p 50003 -b 57 -z 28 --silent-mode=true -i -M quiescent -C 27
[sdw4] gpadmin 31837 1 0 Feb04 ? 00:10:13 /usr/local/greenplum-db-4.3.3.1/bin/postgres -D /data/primary/gpseg15 -p 40003 -b 17 -z 28 --silent-mode=true -i -M quiescent -C 15
[sdw4] gpadmin 31838 1 0 Feb04 ? 00:10:11 /usr/local/greenplum-db-4.3.3.1/bin/postgres -D /data/primary/gpseg13 -p 40001 -b 15 -z 28 --silent-mode=true -i -M quiescent -C 13
[sdw4] gpadmin 31839 1 0 Feb04 ? 00:00:05 /usr/local/greenplum-db-4.3.3.1/bin/postgres -D /data/mirror/gpseg5 -p 50001 -b 35 -z 28 --silent-mode=true -i -M quiescent -C 5
[sdw4] gpadmin 31840 1 0 Feb04 ? 00:00:03 /usr/local/greenplum-db-4.3.3.1/bin/postgres -D /data/mirror/gpseg2 -p 50002 -b 32 -z 28 --silent-mode=true -i -M quiescent -C 2
[sdw4] gpadmin 31841 1 0 Feb04 ? 00:10:20 /usr/local/greenplum-db-4.3.3.1/bin/postgres -D /data/primary/gpseg14 -p 40002 -b 16 -z 28 --silent-mode=true -i -M quiescent -C 14
[sdw4] gpadmin 31842 1 0 Feb04 ? 00:00:04 /usr/local/greenplum-db-4.3.3.1/bin/postgres -D /data/mirror/gpseg8 -p 50000 -b 38 -z 28 --silent-mode=true -i -M quiescent -C 8
[sdw4] gpadmin 31843 1 0 Feb04 ? 00:10:26 /usr/local/greenplum-db-4.3.3.1/bin/postgres -D /data/primary/gpseg12 -p 40000 -b 14 -z 28 --silent-mode=true -i -M quiescent -C 12
[sdw4] gpadmin 31844 31841 0 Feb04 ? 00:01:15 postgres: port 40002, logger process
[sdw4] gpadmin 31845 31838 0 Feb04 ? 00:01:21 postgres: port 40001, logger process
[sdw4] gpadmin 31846 31839 0 Feb04 ? 00:01:07 postgres: port 50001, logger process
[sdw4] gpadmin 31847 31836 0 Feb04 ? 00:01:13 postgres: port 50003, logger process
[sdw4] gpadmin 31848 31843 0 Feb04 ? 00:01:19 postgres: port 40000, logger process
[sdw4] gpadmin 31849 31842 0 Feb04 ? 00:01:07 postgres: port 50000, logger process
[sdw4] gpadmin 31850 31837 0 Feb04 ? 00:01:25 postgres: port 40003, logger process
[sdw4] gpadmin 31851 31840 0 Feb04 ? 00:01:04 postgres: port 50002, logger process
[sdw4] gpadmin 31868 31836 0 Feb04 ? 00:08:55 postgres: port 50003, mirror process
[sdw4] gpadmin 31871 31837 0 Feb04 ? 00:09:13 postgres: port 40003, primary process
[sdw4] gpadmin 31873 31841 0 Feb04 ? 00:08:56 postgres: port 40002, primary process
[sdw4] gpadmin 31874 31868 0 Feb04 ? 01:10:10 postgres: port 50003, mirror receiver process
[sdw4] gpadmin 31876 31868 0 Feb04 ? 00:34:17 postgres: port 50003, mirror consumer process
[sdw4] gpadmin 31877 31868 0 Feb04 ? 00:12:24 postgres: port 50003, mirror consumer writer process
[sdw4] gpadmin 31878 31868 0 Feb04 ? 00:36:29 postgres: port 50003, mirror consumer append only process
[sdw4] gpadmin 31879 31838 0 Feb04 ? 00:08:55 postgres: port 40001, primary process
[sdw4] gpadmin 31881 31868 0 Feb04 ? 00:07:44 postgres: port 50003, mirror sender ack process
[sdw4] gpadmin 31882 31868 0 Feb04 ? 00:00:03 postgres: port 50003, mirror verification process
[sdw4] gpadmin 31883 31871 0 Feb04 ? 00:07:12 postgres: port 40003, primary receiver ack process
[sdw4] gpadmin 31885 31871 0 Feb04 ? 01:15:22 postgres: port 40003, primary sender process
[sdw4] gpadmin 31886 31871 0 Feb04 ? 00:07:00 postgres: port 40003, primary consumer ack process
[sdw4] gpadmin 31887 31871 0 Feb04 ? 00:15:41 postgres: port 40003, primary recovery process
[sdw4] gpadmin 31888 31842 0 Feb04 ? 00:09:02 postgres: port 50000, mirror process
[sdw4] gpadmin 31889 31871 0 Feb04 ? 00:01:08 postgres: port 40003, primary verification process
[sdw4] gpadmin 31892 31873 0 Feb04 ? 00:07:19 postgres: port 40002, primary receiver ack process
[sdw4] gpadmin 31893 31873 0 Feb04 ? 01:10:48 postgres: port 40002, primary sender process
[sdw4] gpadmin 31894 31873 0 Feb04 ? 00:06:47 postgres: port 40002, primary consumer ack process
[sdw4] gpadmin 31895 31873 0 Feb04 ? 00:15:34 postgres: port 40002, primary recovery process
[sdw4] gpadmin 31896 31873 0 Feb04 ? 00:01:13 postgres: port 40002, primary verification process
[sdw4] gpadmin 31898 31839 0 Feb04 ? 00:09:08 postgres: port 50001, mirror process
[sdw4] gpadmin 31900 31840 0 Feb04 ? 00:09:03 postgres: port 50002, mirror process
[sdw4] gpadmin 31901 31879 0 Feb04 ? 00:07:05 postgres: port 40001, primary receiver ack process
[sdw4] gpadmin 31902 31879 0 Feb04 ? 01:07:19 postgres: port 40001, primary sender process