Clean up a couple of ad-hoc computations of the maximum number of tuples
author	Tom Lane <tgl@sss.pgh.pa.us>
Fri, 2 Sep 2005 19:02:20 +0000 (19:02 +0000)
committer	Tom Lane <tgl@sss.pgh.pa.us>
Fri, 2 Sep 2005 19:02:20 +0000 (19:02 +0000)
commit	35e9b1cc1ee296959d52383455052cb3743af478
tree	760d8047d591cd1e96316b3a9c12764a12dc6ae3
parent	962a4bb69f1dd70f1212e27ba2de7634cf91a80d
Clean up a couple of ad-hoc computations of the maximum number of tuples
on a page, as suggested by ITAGAKI Takahiro.  Also, change a few places
that were using some other estimates of max-items-per-page to consistently
use MaxOffsetNumber.  This is conservatively large --- we could have used
the new MaxHeapTuplesPerPage macro, or a similar one for index tuples ---
but those places are simply declaring a fixed-size buffer and assuming it
will work, rather than actively testing for overrun.  It seems safer to
size these buffers in a way that can't overflow even if the page is
corrupt.
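
The contrast the message draws (a loose bound that is safe for blindly sized buffers versus a tighter per-tuple bound) can be illustrated with a minimal, self-contained C sketch. The struct layouts and the MINIMAL_TUPLE_HEADER constant below are simplified stand-ins invented for illustration, not the real PostgreSQL definitions in off.h and htup.h; only the arithmetic pattern is the point.

/*
 * Sketch of the two page-capacity bounds discussed in the commit message.
 * All type definitions here are simplified placeholders, not PostgreSQL's.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define BLCKSZ 8192                     /* default PostgreSQL block size */

typedef uint16_t OffsetNumber;

typedef struct ItemIdData               /* stand-in for a 4-byte line pointer */
{
	uint32_t	lp_bits;
} ItemIdData;

typedef struct PageHeaderData           /* stand-in page header; real one differs */
{
	uint32_t	pd_fields[6];
	ItemIdData	pd_linp[1];             /* line-pointer array starts here */
} PageHeaderData;

/*
 * Conservative bound: assume every byte of the page could be a line pointer.
 * Safe for fixed-size buffers even if the page is corrupt, which is why the
 * commit standardizes on it for buffers that are never overrun-checked.
 */
#define MaxOffsetNumber  ((OffsetNumber) (BLCKSZ / sizeof(ItemIdData)))

/*
 * Tighter bound in the style of a MaxHeapTuplesPerPage macro: usable page
 * space divided by the smallest possible (tuple header + line pointer)
 * footprint.  MINIMAL_TUPLE_HEADER is a placeholder, not the real
 * MAXALIGN'd heap-tuple header size.
 */
#define MINIMAL_TUPLE_HEADER 24
#define MaxHeapTuplesPerPage \
	((int) ((BLCKSZ - offsetof(PageHeaderData, pd_linp)) / \
			(MINIMAL_TUPLE_HEADER + sizeof(ItemIdData))))

int
main(void)
{
	/* Fixed-size scratch array sized the "can't overflow" way */
	OffsetNumber deletable[MaxOffsetNumber];

	printf("MaxOffsetNumber      = %d\n", (int) MaxOffsetNumber);
	printf("MaxHeapTuplesPerPage = %d\n", MaxHeapTuplesPerPage);
	printf("scratch buffer bytes = %zu\n", sizeof(deletable));
	return 0;
}

With an 8 kB block and 4-byte line pointers, the conservative bound comes out to 2048 entries, while the per-tuple bound is a few hundred; the commit's point is that the larger figure is the right one to use when a buffer is declared once and never bounds-checked against the page contents.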
src/backend/access/gist/gistvacuum.c
src/backend/access/nbtree/nbtree.c
src/backend/commands/vacuum.c
src/backend/commands/vacuumlazy.c
src/backend/nodes/tidbitmap.c
src/include/access/htup.h