PostgreSQL Database Upgrade Steps (pg_dumpall and pg_upgrade)

Overview: this note walks through two ways to move a cluster from 9.4 to 9.5 — a logical dump and restore with pg_dumpall, and an in-place upgrade with pg_upgrade.

I. Upgrading the database from 9.4 to 9.5


1. Dump and restore with pg_dumpall (suitable when the data volume is small)

Log in as the postgres user and export the entire cluster:

$ pg_dumpall > db_export.out


Check the locale of the export environment:

$ env|grep LANG

LANG=en_US.UTF-8



If the database character set is en_US.UTF-8 while the operating-system character set is zh_CN.UTF-8, set LANG to the database character set before restoring.

On the new server, the import steps are as follows (log in as the postgres user):

$ export LANG=en_US.UTF-8

$ psql -f db_export.out postgres
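For one large database, a format-aware dump can be a more flexible alternative to a plain pg_dumpall dump. The following is only a sketch, not part of the original procedure; the database name mydb is a placeholder:

$ pg_dumpall -g > globals.sql          # roles and other global objects only
$ pg_dump -Fc mydb > mydb.dump         # custom-format dump of a single database
$ psql -f globals.sql postgres         # restore globals on the new server
$ createdb mydb                        # create the target database
$ pg_restore -d mydb -j 4 mydb.dump    # parallel restore with 4 jobs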


2. In-place upgrade with pg_upgrade

Upgrade path: 9.4 -> 9.5

Prerequisites: the 9.4 cluster contains the complete data; after the 9.5 binaries are installed, the new cluster only needs to be initialized with initdb.
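A minimal initdb sketch for the new cluster, assuming the 9.5 data directory is /pgdata as used below; the encoding and locale should match the old cluster:

$ /opt/pg/9.5/bin/initdb -D /pgdata -E UTF8 --locale=en_US.UTF-8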


1) Pre-upgrade consistency check

Switch to the postgres user and run the following commands:

[root@pgdb01 pgdata]# su - postgres

[postgres@pgdb01 ~]$ pwd

/home/postgres


Run the check with the pg_upgrade binary from the new 9.5 bin directory:


[postgres@pgdb01 ~]$ /opt/pg/9.5/bin/pg_upgrade -c -d /pgdata94 -D /pgdata -b /opt/pg/9.4/bin -B /opt/pg/9.5/bin

Performing Consistency Checks

-----------------------------

Checking cluster versions                                   ok

Checking database user is the install user                  ok

Checking database connection settings                       ok

Checking for prepared transactions                          ok

Checking for reg* system OID user data types                ok

Checking for contrib/isn with bigint-passing mismatch       ok

Checking for presence of required libraries                 ok

Checking database user is the install user                  ok

Checking for prepared transactions                          ok


*Clusters are compatible*

[postgres@pgdb01 ~]$

 

2) Perform the upgrade

Upgrade modes: copy mode (the default), in which the data files in the old 9.4 PGDATA are copied into the new 9.5 PGDATA; it is slower, but the two data directories remain independent and two copies of the data are kept.
          Hard-link mode, enabled by adding --link on the command line (sketched below); no data is copied, only hard links are created, so it is much faster but only one copy of the data is kept.
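For hard-link mode, the invocation would look like the following; this is a sketch based on the paths used in this document, not a command that was run here:

$ /opt/pg/9.5/bin/pg_upgrade --link -d /pgdata94 -D /pgdata -b /opt/pg/9.4/bin -B /opt/pg/9.5/bin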


(Note: both clusters must be completely shut down while this step runs.)
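If either server is still running, it can be stopped with pg_ctl first; a sketch assuming the data directories used above:

$ /opt/pg/9.4/bin/pg_ctl -D /pgdata94 stop -m fast
$ /opt/pg/9.5/bin/pg_ctl -D /pgdata stop -m fast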


[postgres@pgdb01 ~]$ /opt/pg/9.5/bin/pg_upgrade -d /pgdata94 -D /pgdata -b /opt/pg/9.4/bin -B /opt/pg/9.5/bin

Performing Consistency Checks

-----------------------------

Checking cluster versions                                   ok

Checking database user is the install user                  ok

Checking database connection settings                       ok

Checking for prepared transactions                          ok

Checking for reg* system OID user data types                ok

Checking for contrib/isn with bigint-passing mismatch       ok

Creating dump of global objects                             ok

Creating dump of database schemas

                                                            ok

Checking for presence of required libraries                 ok

Checking database user is the install user                  ok

Checking for prepared transactions                          ok


If pg_upgrade fails after this point, you must re-initdb the

new cluster before continuing.


Performing Upgrade

------------------

Analyzing all rows in the new cluster                       ok

Freezing all rows on the new cluster                        ok

Deleting files from new pg_clog                             ok

Copying old pg_clog to new server                           ok

Setting next transaction ID and epoch for new cluster       ok

Deleting files from new pg_multixact/offsets                ok

Copying old pg_multixact/offsets to new server              ok

Deleting files from new pg_multixact/members                ok

Copying old pg_multixact/members to new server              ok

Setting next multixact ID and offset for new cluster        ok

Resetting WAL archives                                      ok

Setting frozenxid and minmxid counters in new cluster       ok

Restoring global objects in the new cluster                 ok

Restoring database schemas in the new cluster

                                                            ok

Creating newly-required TOAST tables                        ok

Copying user relation files

                                                            ok

Setting next OID for new cluster                            ok

Sync data directory to disk                                 ok

Creating script to analyze new cluster                      ok

Creating script to delete old cluster                       ok


Upgrade Complete

----------------

Optimizer statistics are not transferred by pg_upgrade so,

once you start the new server, consider running:

    ./analyze_new_cluster.sh


Running this script will delete the old cluster's data files:

    ./delete_old_cluster.sh

[postgres@pgdb01 ~]$


[postgres@pgdb01 ~]$ ll

total 1208

-rwx------. 1 postgres postgres     749 Apr  9 18:52 analyze_new_cluster.sh

-rwx------. 1 postgres postgres      30 Apr  9 18:52 delete_old_cluster.sh

[postgres@pgdb01 ~]$ 


3) Start the database with the new version's startup script


[postgres@pgdb01 ~]$ exit

logout

[root@pgdb01 pgdata]# /etc/init.d/postgresql start

Starting PostgreSQL: ok

[root@pgdb01 pgdata]# su - postgres

[postgres@pgdb01 ~]$ psql

psql (9.5.2)

Type "help" for help.


postgres=# \l

                                      List of databases

        Name        |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   

--------------------+----------+----------+-------------+-------------+-----------------------

 a_authentication   | vincent  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 

 a_resources        | vincent  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 

 postgres           | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 

 a_server           | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | postgres=CTc/postgres+

                    |          |          |             |             | =Tc/postgres         +

                    |          |          |             |             | acent=CTc/postgres

 template0          | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +

                    |          |          |             |             | postgres=CTc/postgres

 template1          | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | postgres=CTc/postgres+

                    |          |          |             |             | =c/postgres

 a_security         | uccc     | UTF8     | en_US.UTF-8 | en_US.UTF-8 | a=CTc/uccc           +

                    |          |          |             |             | =Tc/uccc             +

                    |          |          |             |             | a=CTc/uccc

(7 rows)


postgres=# \q
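If a SysV init script is not available, or it still points at the old 9.4 installation, the new server can also be started directly with pg_ctl; a sketch using the paths from this document (the log file path is only an example):

$ /opt/pg/9.5/bin/pg_ctl -D /pgdata -l /tmp/pg95.log start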


4) Generate new optimizer statistics


[postgres@pgdb01 ~]$ ./analyze_new_cluster.sh 

This script will generate minimal optimizer statistics rapidly

so your system is usable, and then gather statistics twice more

with increasing accuracy.  When it is done, your system will

have the default level of optimizer statistics.


If you have used ALTER TABLE to modify the statistics target for

any tables, you might want to remove them and restore them after

running this script because they will delay fast statistics generation.


If you would like default statistics as quickly as possible, cancel

this script and run:

    "/opt/pg/9.5/bin/vacuumdb" --all --analyze-only


vacuumdb: processing database "a_authentication": Generating minimal optimizer statistics (1 target)

vacuumdb: processing database "a_resources": Generating minimal optimizer statistics (1 target)

vacuumdb: processing database "postgres": Generating minimal optimizer statistics (1 target)

vacuumdb: processing database "a_server": Generating minimal optimizer statistics (1 target)

vacuumdb: processing database "template1": Generating minimal optimizer statistics (1 target)

vacuumdb: processing database "a_security": Generating minimal optimizer statistics (1 target)

vacuumdb: processing database "a_authentication": Generating medium optimizer statistics (10 targets)

vacuumdb: processing database "a_resources": Generating medium optimizer statistics (10 targets)

vacuumdb: processing database "postgres": Generating medium optimizer statistics (10 targets)

vacuumdb: processing database "a_server": Generating medium optimizer statistics (10 targets)

vacuumdb: processing database "template1": Generating medium optimizer statistics (10 targets)

vacuumdb: processing database "a_security": Generating medium optimizer statistics (10 targets)

vacuumdb: processing database "a_authentication": Generating default (full) optimizer statistics

vacuumdb: processing database "a_resources": Generating default (full) optimizer statistics

vacuumdb: processing database "postgres": Generating default (full) optimizer statistics

vacuumdb: processing database "a_server": Generating default (full) optimizer statistics

vacuumdb: processing database "template1": Generating default (full) optimizer statistics

vacuumdb: processing database "a_security": Generating default (full) optimizer statistics


Done

[postgres@pgdb01 ~]$ 


5) Clean up the old cluster

After the new cluster has been running reliably for a while, run the delete_old_cluster.sh script to remove the old version's PGDATA directory:

[postgres@pgdb01 ~]$ more delete_old_cluster.sh 

#!/bin/sh


rm -rf '/pgdata94'

[postgres@pgdb01 ~]$
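Once there is no longer any need to roll back, the old 9.4 binaries can be removed as well; this optional step is not covered by the generated script, and the path below is simply the installation directory used in this document:

$ rm -rf /opt/pg/9.4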


This article is reposted from pgmia's 51CTO blog. Original link: http://blog.51cto.com/heyiyi/1762076

