
[20170204] UNIFORM SIZE=10M index insert split.txt

--Last night I read https://savvinov.com/2017/01/31/long-running-insert/; at work I repeated the test to see for myself:

1. Environment:

SCOTT@book> @ &r/ver1

PORT_STRING                    VERSION        BANNER
------------------------------ -------------- --------------------------------------------------------------------------------
x86_64/Linux 2.4.xx            11.2.0.4.0     Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production

CREATE TABLESPACE TEA DATAFILE
  '/mnt/ramdisk/book/tea01.dbf' SIZE 100M AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
LOGGING
ONLINE
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 10M
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT MANUAL
FLASHBACK ON;

2. Build the test case:

drop table mytest purge;
create table mytest(x number) tablespace tea;
create index i_mytest_x on mytest(x) tablespace tea;
insert into  mytest select level from dual connect by level <= 200126;
commit;

--//Note: the author's article contains a few typos; the 216 mentioned later should be 126. With this count, the insert exactly fills the index blocks.

SCOTT@book> select object_id,data_object_id from dba_objects where owner=user and object_name='I_MYTEST_X';
OBJECT_ID DATA_OBJECT_ID
---------- --------------
     89714          89714

SCOTT@book> alter session set events 'immediate trace name treedump level 89714';
Session altered.

--//Examine the dump:
----- begin tree dump
branch: 0x1800581 25167233 (0: nrow: 399, level: 1)
   leaf: 0x1800582 25167234 (-1: nrow: 540 rrow: 540)
   leaf: 0x1800583 25167235 (0: nrow: 533 rrow: 533)
   leaf: 0x1800584 25167236 (1: nrow: 533 rrow: 533)
   leaf: 0x1800585 25167237 (2: nrow: 533 rrow: 533)
...
   leaf: 0x180070d 25167629 (394: nrow: 500 rrow: 500)
   leaf: 0x180070e 25167630 (395: nrow: 500 rrow: 500)
   leaf: 0x180070f 25167631 (396: nrow: 500 rrow: 500)
   leaf: 0x1800710 25167632 (397: nrow: 500 rrow: 500)
----- end tree dump

--//Note: this exactly fills the index leaf blocks.

delete mytest where x<=199000;
commit;

SCOTT@book> alter session set events 'immediate trace name treedump level 89714';
Session altered.

--//Examine the dump:
*** 2017-02-04 11:49:46.458
----- begin tree dump
branch: 0x1800581 25167233 (0: nrow: 399, level: 1)
   leaf: 0x1800582 25167234 (-1: nrow: 540 rrow: 0)
   leaf: 0x1800583 25167235 (0: nrow: 533 rrow: 0)
   leaf: 0x1800584 25167236 (1: nrow: 533 rrow: 0)
   leaf: 0x1800585 25167237 (2: nrow: 533 rrow: 0)
   leaf: 0x1800586 25167238 (3: nrow: 533 rrow: 0)
   leaf: 0x1800587 25167239 (4: nrow: 533 rrow: 0)
   leaf: 0x1800588 25167240 (5: nrow: 533 rrow: 0)
...
   leaf: 0x180070c 25167628 (393: nrow: 500 rrow: 0)
   leaf: 0x180070d 25167629 (394: nrow: 500 rrow: 0)
   leaf: 0x180070e 25167630 (395: nrow: 500 rrow: 126)
   leaf: 0x180070f 25167631 (396: nrow: 500 rrow: 500)
   leaf: 0x1800710 25167632 (397: nrow: 500 rrow: 500)
----- end tree dump

--//Note: all the leading key values have been deleted; only the trailing keys remain.
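A quick arithmetic check of that treedump (Python; the numbers come straight from the output above):

```python
# Sanity check on the treedump counts: 200126 keys were inserted,
# then keys 1..199000 were deleted.
total_keys = 200126
remaining = total_keys - 199000          # delete mytest where x <= 199000
# The dump shows rrow=0 everywhere except the last three leaves:
# 126 + 500 + 500 live rows.
assert remaining == 126 + 500 + 500 == 1126
```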

SCOTT@book> insert into mytest with x as (select level i from dual connect by level <= 199000) select i from x where mod(i, 250) = 0;
796 rows created.

SCOTT@book> commit ;
Commit complete.

--//Note: insert some rows, but not consecutive ones: 250, 500, 750, ..., 198750, 199000. Check the index now:
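A quick check of what that statement actually inserts (a Python sketch of the same `mod(i, 250) = 0` predicate):

```python
# Keys reinserted by: where mod(i, 250) = 0, i in 1..199000.
vals = [i for i in range(1, 199001) if i % 250 == 0]
assert len(vals) == 796                  # matches "796 rows created."
assert vals[0] == 250 and vals[-1] == 199000
```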

SCOTT@book> alter session set events 'immediate trace name treedump level 89714';
Session altered.

*** 2017-02-04 11:50:42.745
----- begin tree dump
branch: 0x1800581 25167233 (0: nrow: 399, level: 1)
   leaf: 0x1800582 25167234 (-1: nrow: 2 rrow: 2)
   leaf: 0x1800583 25167235 (0: nrow: 2 rrow: 2)
   leaf: 0x1800584 25167236 (1: nrow: 2 rrow: 2)
   leaf: 0x1800585 25167237 (2: nrow: 2 rrow: 2)
   leaf: 0x1800586 25167238 (3: nrow: 2 rrow: 2)
   leaf: 0x1800587 25167239 (4: nrow: 2 rrow: 2)
   leaf: 0x1800588 25167240 (5: nrow: 2 rrow: 2)
   leaf: 0x1800589 25167241 (6: nrow: 3 rrow: 3)
...
   leaf: 0x180070b 25167627 (392: nrow: 2 rrow: 2)
   leaf: 0x180070c 25167628 (393: nrow: 2 rrow: 2)
   leaf: 0x180070d 25167629 (394: nrow: 2 rrow: 2)
   leaf: 0x180070e 25167630 (395: nrow: 128 rrow: 128)
   leaf: 0x180070f 25167631 (396: nrow: 500 rrow: 500)
   leaf: 0x1800710 25167632 (397: nrow: 500 rrow: 500)
----- end tree dump

--//Ouch!! Oracle did not concentrate the inserts into a single block; they are scattered, basically 2 rows per block. My feeling is that the problem may be that the level-0 branch entries still record these leaves, and the insert still looks them up.

3. Insert one row:
SCOTT@book> alter system flush buffer_cache;
System altered.

SCOTT@book> alter system flush buffer_cache;
System altered.

--//Note: INSERT INTO MYTEST VALUES (200127); inserts a value above the current maximum key.
SCOTT@book> @ &r/10046on 12
old   1: alter session set events '10046 trace name context forever, level &1'
new   1: alter session set events '10046 trace name context forever, level 12'
Session altered.

SCOTT@book> INSERT INTO MYTEST VALUES (200127);
1 row created.

SCOTT@book> @ &r/10046off
Session altered.

SCOTT@book> commit ;
Commit complete.

--//Examine the 10046 trace:
=====================
PARSING IN CURSOR #139853026757984 len=34 dep=0 uid=83 oct=2 lid=83 tim=1486180291162867 hv=1063204715 ad='634b0b08' sqlid='5zb5uy0zpydvb'
INSERT INTO MYTEST VALUES (200127)
END OF STMT
PARSE #139853026757984:c=2000,e=2215,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=1486180291162862
WAIT #139853026757984: nam='Disk file operations I/O' ela= 41 FileOperation=2 fileno=6 filetype=2 obj#=89713 tim=1486180291163153
WAIT #139853026757984: nam='db file sequential read' ela= 13 file#=6 block#=128 blocks=1 obj#=89713 tim=1486180291163204
WAIT #139853026757984: nam='db file sequential read' ela= 11 file#=6 block#=433 blocks=1 obj#=89713 tim=1486180291163289
WAIT #139853026757984: nam='db file sequential read' ela= 20 file#=6 block#=1409 blocks=1 obj#=89714 tim=1486180291163450
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
WAIT #139853026757984: nam='db file sequential read' ela= 13 file#=6 block#=1808 blocks=1 obj#=89714 tim=1486180291163524
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
WAIT #139853026757984: nam='Disk file operations I/O' ela= 30 FileOperation=2 fileno=3 filetype=2 obj#=0 tim=1486180291163799
WAIT #139853026757984: nam='db file sequential read' ela= 22 file#=3 block#=256 blocks=1 obj#=0 tim=1486180291163853
WAIT #139853026757984: nam='db file sequential read' ela= 11 file#=3 block#=1788 blocks=1 obj#=0 tim=1486180291163927
WAIT #139853026757984: nam='db file sequential read' ela= 22 file#=6 block#=1408 blocks=1 obj#=89714 tim=1486180291164121
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
WAIT #139853026757984: nam='db file sequential read' ela= 11 file#=3 block#=144 blocks=1 obj#=0 tim=1486180291164240
WAIT #139853026757984: nam='db file sequential read' ela= 14 file#=6 block#=1805 blocks=1 obj#=89714 tim=1486180291164315
WAIT #139853026757984: nam='db file sequential read' ela= 11 file#=6 block#=1410 blocks=1 obj#=89714 tim=1486180291164380
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1805 blocks=1 obj#=89714 tim=1486180291164453
WAIT #139853026757984: nam='db file sequential read' ela= 13 file#=6 block#=1804 blocks=1 obj#=89714 tim=1486180291164523
WAIT #139853026757984: nam='db file sequential read' ela= 11 file#=6 block#=1803 blocks=1 obj#=89714 tim=1486180291164587
WAIT #139853026757984: nam='db file sequential read' ela= 14 file#=6 block#=1802 blocks=1 obj#=89714 tim=1486180291164661
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1801 blocks=1 obj#=89714 tim=1486180291164724
WAIT #139853026757984: nam='db file sequential read' ela= 12 file#=6 block#=1800 blocks=1 obj#=89714 tim=1486180291164807
WAIT #139853026757984: nam='db file sequential read' ela= 11 file#=6 block#=1799 blocks=1 obj#=89714 tim=1486180291164877
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1798 blocks=1 obj#=89714 tim=1486180291164942
WAIT #139853026757984: nam='db file sequential read' ela= 13 file#=6 block#=1797 blocks=1 obj#=89714 tim=1486180291165008
WAIT #139853026757984: nam='db file sequential read' ela= 12 file#=6 block#=1796 blocks=1 obj#=89714 tim=1486180291165076
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1795 blocks=1 obj#=89714 tim=1486180291165172
WAIT #139853026757984: nam='db file sequential read' ela= 13 file#=6 block#=1794 blocks=1 obj#=89714 tim=1486180291165240
WAIT #139853026757984: nam='db file sequential read' ela= 14 file#=6 block#=1793 blocks=1 obj#=89714 tim=1486180291165309
WAIT #139853026757984: nam='db file sequential read' ela= 12 file#=6 block#=1792 blocks=1 obj#=89714 tim=1486180291165386
WAIT #139853026757984: nam='db file sequential read' ela= 12 file#=6 block#=1791 blocks=1 obj#=89714 tim=1486180291165453
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1790 blocks=1 obj#=89714 tim=1486180291165514
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1789 blocks=1 obj#=89714 tim=1486180291165578
WAIT #139853026757984: nam='db file sequential read' ela= 13 file#=6 block#=1788 blocks=1 obj#=89714 tim=1486180291165647
WAIT #139853026757984: nam='db file sequential read' ela= 11 file#=6 block#=1787 blocks=1 obj#=89714 tim=1486180291165712
WAIT #139853026757984: nam='db file sequential read' ela= 12 file#=6 block#=1786 blocks=1 obj#=89714 tim=1486180291165792
WAIT #139853026757984: nam='db file sequential read' ela= 11 file#=6 block#=1785 blocks=1 obj#=89714 tim=1486180291165853
WAIT #139853026757984: nam='db file sequential read' ela= 13 file#=6 block#=1784 blocks=1 obj#=89714 tim=1486180291165919
WAIT #139853026757984: nam='db file sequential read' ela= 11 file#=6 block#=1783 blocks=1 obj#=89714 tim=1486180291165986
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1782 blocks=1 obj#=89714 tim=1486180291166050
WAIT #139853026757984: nam='db file sequential read' ela= 11 file#=6 block#=1781 blocks=1 obj#=89714 tim=1486180291166112
WAIT #139853026757984: nam='db file sequential read' ela= 11 file#=6 block#=1780 blocks=1 obj#=89714 tim=1486180291166174
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1779 blocks=1 obj#=89714 tim=1486180291166233
...
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1423 blocks=1 obj#=89714 tim=1486180291188373
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1422 blocks=1 obj#=89714 tim=1486180291188431
WAIT #139853026757984: nam='db file sequential read' ela= 13 file#=6 block#=1421 blocks=1 obj#=89714 tim=1486180291188492
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1420 blocks=1 obj#=89714 tim=1486180291188552
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1419 blocks=1 obj#=89714 tim=1486180291188610
WAIT #139853026757984: nam='db file sequential read' ela= 11 file#=6 block#=1418 blocks=1 obj#=89714 tim=1486180291188669
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1417 blocks=1 obj#=89714 tim=1486180291188727
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1416 blocks=1 obj#=89714 tim=1486180291188798
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1415 blocks=1 obj#=89714 tim=1486180291188859
WAIT #139853026757984: nam='db file sequential read' ela= 12 file#=6 block#=1414 blocks=1 obj#=89714 tim=1486180291188919
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1413 blocks=1 obj#=89714 tim=1486180291188976
WAIT #139853026757984: nam='db file sequential read' ela= 11 file#=6 block#=1412 blocks=1 obj#=89714 tim=1486180291189036
WAIT #139853026757984: nam='db file sequential read' ela= 10 file#=6 block#=1411 blocks=1 obj#=89714 tim=1486180291189100
EXEC #139853026757984:c=25995,e=26545,p=405,cr=3,cu=821,mis=0,r=1,dep=0,og=1,plh=0,tim=1486180291189522
STAT #139853026757984 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL  (cr=3 pr=405 pw=0 time=26450 us)'
WAIT #139853026757984: nam='SQL*Net message to client' ela= 4 driver id=1650815232 #bytes=1 p3=0 obj#=89714 tim=1486180291189722

*** 2017-02-04 11:51:37.369
WAIT #139853026757984: nam='SQL*Net message from client' ela= 6180190 driver id=1650815232 #bytes=1 p3=0 obj#=89714 tim=1486180297369960
CLOSE #139853026757984:c=0,e=10,dep=0,type=0,tim=1486180297370090
=====================

--//Only pay attention to the lines with obj#=89714, especially the lines underlined with ~.
SCOTT@book> select * from dba_extents where segment_name='I_MYTEST_X';
OWNER  SEGMENT_NAME         SEGMENT_TYPE       TABLESPACE_NAME  EXTENT_ID    FILE_ID   BLOCK_ID      BYTES     BLOCKS RELATIVE_FNO
------ -------------------- ------------------ --------------- ---------- ---------- ---------- ---------- ---------- ------------
SCOTT  I_MYTEST_X           INDEX              TEA                      0          6       1408   10485760       1280            6

SCOTT@book> @ &r/convrdba 6 1409
RDBA16               RDBA
-------------- ----------
       1800581   25167233
--//The index root block.

SCOTT@book> @ &r/convrdba 6 1808
RDBA16               RDBA
-------------- ----------
       1800710   25167632
--//This is the corresponding index block, i.e. the last leaf, where a 90-10 split occurs. Oracle visits dba=6,1408 (the segment header) looking for a free block, and then the tragedy happens: to find a free block it scans the entire freelist.
--//It first scans dba=6,1410, then dba=6,1805, then walks downward one block at a time, finishing at dba=6,1411.

SCOTT@book> @ &r/convrdba 6 1805
RDBA16               RDBA
-------------- ----------
       180070d   25167629
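The `convrdba` script evidently just packs file# and block# into a relative DBA: in the standard rdba layout the top 10 bits are the relative file number and the low 22 bits the block number. A minimal re-implementation (an assumption about what the script does internally, but the outputs match):

```python
def to_rdba(file_no: int, block_no: int) -> int:
    # relative DBA = 10-bit relative file number | 22-bit block number
    return (file_no << 22) | block_no

# The values converted above:
assert to_rdba(6, 1409) == 25167233 == 0x1800581   # index root
assert to_rdba(6, 1808) == 25167632 == 0x1800710   # last leaf, 90-10 split
assert to_rdba(6, 1805) == 25167629 == 0x180070d
```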

--//The key is finally inserted into dba=6,1809; with that, the insert completes. Oddly, the trace never shows a read of that block, dba=6,1809.
--//Dump the tree again:

SCOTT@book> alter session set events 'immediate trace name treedump level 89714';
Session altered.

*** 2017-02-04 15:05:53.475
branch: 0x1800581 25167233 (0: nrow: 400, level: 1)
   leaf: 0x1800582 25167234 (-1: nrow: 2 rrow: 2)
   leaf: 0x1800583 25167235 (0: nrow: 2 rrow: 2)
   leaf: 0x1800584 25167236 (1: nrow: 2 rrow: 2)
....
   leaf: 0x180070d 25167629 (394: nrow: 2 rrow: 2)
   leaf: 0x180070e 25167630 (395: nrow: 128 rrow: 128)
   leaf: 0x180070f 25167631 (396: nrow: 500 rrow: 500)
   leaf: 0x1800710 25167632 (397: nrow: 500 rrow: 500)
   leaf: 0x1800711 25167633 (398: nrow: 1 rrow: 1)
----- end tree dump

4. Continue inserting:
SCOTT@book> alter system flush buffer_cache;
System altered.

SCOTT@book> alter system flush buffer_cache;
System altered.

SCOTT@book> @ &r/10046on 12
old   1: alter session set events '10046 trace name context forever, level &1'
new   1: alter session set events '10046 trace name context forever, level 12'
Session altered.

SCOTT@book> INSERT INTO MYTEST VALUES (200129);
1 row created.
--//Because this time no index split occurs.

SCOTT@book> @ &r/10046off
Session altered.

=====================
PARSING IN CURSOR #140276912372776 len=34 dep=0 uid=83 oct=2 lid=83 tim=1486193188666785 hv=1435680765 ad='7d67c060' sqlid='b0s8xtjat5gzx'
INSERT INTO MYTEST VALUES (200129)
END OF STMT
PARSE #140276912372776:c=1999,e=1353,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=1486193188666783
WAIT #140276912372776: nam='Disk file operations I/O' ela= 54 FileOperation=2 fileno=6 filetype=2 obj#=89713 tim=1486193188667043
WAIT #140276912372776: nam='db file sequential read' ela= 14 file#=6 block#=128 blocks=1 obj#=89713 tim=1486193188667099
WAIT #140276912372776: nam='db file sequential read' ela= 11 file#=6 block#=433 blocks=1 obj#=89713 tim=1486193188667173
WAIT #140276912372776: nam='db file sequential read' ela= 11 file#=3 block#=2653 blocks=1 obj#=0 tim=1486193188667249
WAIT #140276912372776: nam='db file sequential read' ela= 11 file#=6 block#=1409 blocks=1 obj#=89714 tim=1486193188667357
WAIT #140276912372776: nam='db file sequential read' ela= 14 file#=6 block#=1809 blocks=1 obj#=89714 tim=1486193188667426
EXEC #140276912372776:c=0,e=588,p=5,cr=1,cu=4,mis=0,r=1,dep=0,og=1,plh=0,tim=1486193188667501
STAT #140276912372776 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL  (cr=1 pr=5 pw=0 time=552 us)'
WAIT #140276912372776: nam='SQL*Net message to client' ela= 5 driver id=1650815232 #bytes=1 p3=0 obj#=89714 tim=1486193188667641

*** 2017-02-04 15:26:35.242
WAIT #140276912372776: nam='SQL*Net message from client' ela= 6574399 driver id=1650815232 #bytes=1 p3=0 obj#=89714 tim=1486193195242075
CLOSE #140276912372776:c=0,e=12,dep=0,type=0,tim=1486193195242203
PARSE #140276912401472:c=0,e=14,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,plh=0,tim=1486193195242284
EXEC #140276912401472:c=0,e=396,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,plh=0,tim=1486193195242710

5. Additional notes:
If the tablespace is created as:
CREATE TABLESPACE TEA DATAFILE
  '/mnt/ramdisk/book/tea01.dbf' SIZE 10M AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
LOGGING
ONLINE
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT MANUAL
FLASHBACK ON;

or

CREATE TABLESPACE TEA DATAFILE
  '/mnt/ramdisk/book/tea01.dbf' SIZE 10M AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
LOGGING
ONLINE
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT MANUAL
FLASHBACK ON;

--//With either of these I could not reproduce the problem. The issue seems to lie with EXTENT MANAGEMENT LOCAL UNIFORM SIZE 10M; digging further is beyond me.

6. Test an ASSM tablespace:
CREATE TABLESPACE TEA DATAFILE
  '/mnt/ramdisk/book/tea01.dbf' SIZE 100M AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED
LOGGING
ONLINE
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 10M
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT AUTO
FLASHBACK ON;

--Not repeating the same content:

=====================
PARSING IN CURSOR #140388084963712 len=34 dep=0 uid=83 oct=2 lid=83 tim=1486194070117967 hv=1063204715 ad='7b4957a0' sqlid='5zb5uy0zpydvb'
INSERT INTO MYTEST VALUES (200127)
END OF STMT
PARSE #140388084963712:c=2000,e=1233,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=1486194070117962
WAIT #140388084963712: nam='db file sequential read' ela= 16 file#=6 block#=149 blocks=1 obj#=89720 tim=1486194070118162
WAIT #140388084963712: nam='db file sequential read' ela= 11 file#=6 block#=148 blocks=1 obj#=89720 tim=1486194070118239
WAIT #140388084963712: nam='db file sequential read' ela= 11 file#=6 block#=130 blocks=1 obj#=89720 tim=1486194070118295
WAIT #140388084963712: nam='db file sequential read' ela= 10 file#=6 block#=286 blocks=1 obj#=89720 tim=1486194070118350
WAIT #140388084963712: nam='db file sequential read' ela= 11 file#=3 block#=272 blocks=1 obj#=0 tim=1486194070118422
WAIT #140388084963712: nam='db file sequential read' ela= 11 file#=3 block#=14742 blocks=1 obj#=0 tim=1486194070118523
WAIT #140388084963712: nam='db file sequential read' ela= 16 file#=6 block#=1430 blocks=1 obj#=89721 tim=1486194070118613
WAIT #140388084963712: nam='db file sequential read' ela= 11 file#=6 block#=1836 blocks=1 obj#=89721 tim=1486194070118681
WAIT #140388084963712: nam='db file sequential read' ela= 14 file#=3 block#=256 blocks=1 obj#=0 tim=1486194070118850
WAIT #140388084963712: nam='db file sequential read' ela= 12 file#=3 block#=2966 blocks=1 obj#=0 tim=1486194070118915
WAIT #140388084963712: nam='db file sequential read' ela= 12 file#=6 block#=1429 blocks=1 obj#=89721 tim=1486194070119060
WAIT #140388084963712: nam='db file sequential read' ela= 9 file#=6 block#=1428 blocks=1 obj#=89721 tim=1486194070119118
WAIT #140388084963712: nam='db file sequential read' ela= 9 file#=6 block#=1413 blocks=1 obj#=89721 tim=1486194070119165
WAIT #140388084963712: nam='db file sequential read' ela= 11 file#=6 block#=1754 blocks=1 obj#=89721 tim=1486194070119213
WAIT #140388084963712: nam='db file sequential read' ela= 11 file#=6 block#=1758 blocks=1 obj#=89721 tim=1486194070119285
WAIT #140388084963712: nam='db file sequential read' ela= 11 file#=6 block#=1762 blocks=1 obj#=89721 tim=1486194070119346
WAIT #140388084963712: nam='db file sequential read' ela= 11 file#=6 block#=1766 blocks=1 obj#=89721 tim=1486194070119406
WAIT #140388084963712: nam='db file sequential read' ela= 11 file#=6 block#=1770 blocks=1 obj#=89721 tim=1486194070119463
WAIT #140388084963712: nam='db file sequential read' ela= 10 file#=6 block#=1774 blocks=1 obj#=89721 tim=1486194070119524
WAIT #140388084963712: nam='db file sequential read' ela= 9 file#=6 block#=1778 blocks=1 obj#=89721 tim=1486194070119581
WAIT #140388084963712: nam='db file sequential read' ela= 13 file#=6 block#=1782 blocks=1 obj#=89721 tim=1486194070119658
WAIT #140388084963712: nam='db file sequential read' ela= 10 file#=6 block#=1786 blocks=1 obj#=89721 tim=1486194070119727
WAIT #140388084963712: nam='db file sequential read' ela= 12 file#=6 block#=1790 blocks=1 obj#=89721 tim=1486194070119800
WAIT #140388084963712: nam='db file sequential read' ela= 11 file#=6 block#=1731 blocks=1 obj#=89721 tim=1486194070119862
WAIT #140388084963712: nam='db file sequential read' ela= 10 file#=6 block#=1735 blocks=1 obj#=89721 tim=1486194070119918
...
WAIT #140388084963712: nam='db file sequential read' ela= 10 file#=6 block#=1812 blocks=1 obj#=89721 tim=1486194070124952
WAIT #140388084963712: nam='db file sequential read' ela= 11 file#=6 block#=1816 blocks=1 obj#=89721 tim=1486194070125007
WAIT #140388084963712: nam='db file sequential read' ela= 10 file#=6 block#=1820 blocks=1 obj#=89721 tim=1486194070125061
WAIT #140388084963712: nam='db file sequential read' ela= 10 file#=6 block#=1824 blocks=1 obj#=89721 tim=1486194070125115
WAIT #140388084963712: nam='db file sequential read' ela= 11 file#=6 block#=1840 blocks=1 obj#=89721 tim=1486194070125183
EXEC #140388084963712:c=6999,e=7283,p=114,cr=2,cu=242,mis=0,r=1,dep=0,og=1,plh=0,tim=1486194070125353
STAT #140388084963712 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL  (cr=2 pr=114 pw=0 time=7236 us)'
WAIT #140388084963712: nam='SQL*Net message to client' ela= 4 driver id=1650815232 #bytes=1 p3=0 obj#=89721 tim=1486194070125510

*** 2017-02-04 15:41:14.533
WAIT #140388084963712: nam='SQL*Net message from client' ela= 4407807 driver id=1650815232 #bytes=1 p3=0 obj#=89721 tim=1486194074533350
CLOSE #140388084963712:c=0,e=17,dep=0,type=0,tim=1486194074533515
=====================

--//A similar pattern still appears. The difference: MSSM scans more blocks.
--//ASSM
$ grep "obj#=89721" /u01/app/oracle/diag/rdbms/book/book/trace/book_ora_14295.trc | grep block#=1|wc
    106    1378   12921

--//MSSM
$ grep "obj#=89714"  /u01/app/oracle/diag/rdbms/book/book/trace/book_ora_13231.trc | grep blocks=1 |wc
    400    5200   48770
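The two grep|wc counts can also be collected with a small parser over the 10046 trace files; a sketch (the trace-line format is as shown above; the helper name is mine):

```python
import re

def count_index_reads(trace_lines, obj_id):
    """Count single-block reads ('blocks=1') against one object
    in 10046 trace output."""
    pat = re.compile(r"db file sequential read.*file#=\d+ block#=\d+ "
                     r"blocks=1 .*obj#=%d\b" % obj_id)
    return sum(1 for line in trace_lines if pat.search(line))

# Toy fragment in the same shape as the real trace file:
sample = [
    "WAIT #1: nam='db file sequential read' ela= 10 file#=6 block#=1423 blocks=1 obj#=89714 tim=1",
    "WAIT #1: nam='db file sequential read' ela= 10 file#=3 block#=144 blocks=1 obj#=0 tim=2",
]
assert count_index_reads(sample, 89714) == 1
```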

--Quoting the relevant content from the author's post directly:
https://savvinov.com/2017/01/31/long-running-insert/

What happens here is that the last insert causes a so-called 90-10 split, which, unlike the usual 50-50 split, requires
the new row to be inserted into a new empty block. In order to save space, Oracle first tries to reuse one of the blocks
below the high watermark which it assumes free (as they are on the freelist). If you trace it, you'll see that Oracle
ends up examining (and taking them off the freelist) the entire range of blocks cleared by the delete statement one by
one.

I've also tried the partial workaround provided by Saibabu: enabling event 45051, and it worked ("partial" because the
amount of I/O remains anomalously high, but fortunately it's mostly repeated reads of the same blocks, so very little
physical I/O will result from it). Other possible solution (apart from fixing the physical structure of the index by
coalescing or rebuilding it after a large delete, which isn't always convenient on a live system during business hours)
would be to migrate to ASSM, but it's not clear whether this would be much of improvement as ASSM has several somewhat
similar bugs, and I'm not sure how many of them are fixed in recent versions.
--//I did not run this test.

I was unable to confirm whether or not we are hitting this exact bug or something else — some scripts have been
deployed to obtain additional diagnostic information, and if we manage to get a definitive confirmation, I'll post an
update (but due to the very intermittent nature of the problem I have no idea how long this may take).
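The mechanism in the first quoted paragraph can be caricatured in a few lines of Python (a toy model, not Oracle's real code): a 90-10 split needs one empty block, but under MSSM every block the delete emptied sits on the freelist and gets examined, one single-block read each, before the split completes.

```python
def reads_during_90_10_split(emptied_blocks):
    """Toy model of the bug: the split examines (and unlinks) every
    freelist entry left behind by the delete -- one single-block read
    per entry -- before it finally takes an empty block for the new leaf."""
    freelist = list(emptied_blocks)
    reads = 0
    while freelist:
        freelist.pop(0)     # one 'db file sequential read' each
        reads += 1
    return reads

# ~396 leaves were emptied by the delete; the 10046 trace showed roughly
# 400 single-block reads against obj#=89714 for this single-row insert.
assert reads_during_90_10_split(range(396)) == 396
```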
