PostgreSQL and its buddies: Redis (unlimited cache, real-time tags, ...)


Read and write Redis directly from within PostgreSQL.
The benefits are many: it reduces the complexity of the application layer, cuts the number of round trips, and lowers RT (response time).
Example use cases:
.1. Use Redis as a second-level cache for PostgreSQL to improve database performance.

For example, when production has several Redis hosts or clusters, Redis can extend PostgreSQL's second-level cache capacity and reduce I/O pressure.
This can be implemented entirely with triggers: when a row is inserted or updated, insert or update it in Redis at the same time.
When a row is deleted, delete it from Redis as well.

When used as a second-level cache, updates are pushed to Redis directly from PostgreSQL, reducing the total number of interactions.
For reads, querying Redis directly or having PostgreSQL query Redis both work: in the optimistic case, reading from Redis is fine; in the pessimistic case, read from PostgreSQL and write the result back into Redis.
.2. Maintain record COUNTs: increment a counter when a row is inserted and decrement it when a row is deleted.
.3. Build statistical tags over the data. For example, at ingestion time, tag records by user ID and behavior, and store the tags directly in Redis as a real-time tagging system.
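The trigger-based write-through idea above can be sketched roughly as follows. This is a minimal sketch only: the table `t_user`, the function `sync_to_redis` and the `user:` key prefix are hypothetical names for illustration, while `redis_db0` is the scalar foreign table created later in this post; whether UPDATE/DELETE on the foreign table is supported depends on your redis_fdw branch.

```sql
-- Hypothetical local table whose rows we mirror into Redis
CREATE TABLE t_user (id int PRIMARY KEY, info text);

CREATE OR REPLACE FUNCTION sync_to_redis() RETURNS trigger AS $$
BEGIN
  IF TG_OP = 'DELETE' THEN
    -- Row removed locally: drop the mirrored key from Redis
    DELETE FROM redis_db0 WHERE key = 'user:' || OLD.id;
    RETURN OLD;
  ELSIF TG_OP = 'UPDATE' THEN
    -- Row changed locally: refresh the cached value
    UPDATE redis_db0 SET val = NEW.info WHERE key = 'user:' || NEW.id;
    RETURN NEW;
  ELSE
    -- New row: write it into the cache as well
    INSERT INTO redis_db0 (key, val) VALUES ('user:' || NEW.id, NEW.info);
    RETURN NEW;
  END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t_user_sync
  AFTER INSERT OR UPDATE OR DELETE ON t_user
  FOR EACH ROW EXECUTE PROCEDURE sync_to_redis();
```

With this in place, every write to `t_user` is mirrored into Redis in the same transaction, so the application never has to talk to both stores itself.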
There are certainly more ways to play with this, waiting to be discovered.
PostgreSQL really does have a lot of good buddies, and Redis is one of them.

Below is a brief introduction to installing and using the redis_fdw extension, along with its current limitations.
Note that Redis can live on a different host; don't be constrained by this walkthrough.
redis_fdw already has many users in production, and PostgreSQL's FDW interface is very reliable.

A newer version of gcc is recommended:
http://blog.163.com/digoal@126/blog/static/163877040201601313814429/
After installing GCC:

which gcc
~/gcc4.9.3/bin/gcc

Install and deploy Redis

cd ~
wget http://download.redis.io/releases/redis-3.0.7.tar.gz

tar -zxvf redis-3.0.7.tar.gz

cd redis-3.0.7

make -j 32
make test

make PREFIX=/home/digoal/redis install

cd /home/digoal/redis
cp ~/redis-3.0.7/redis.conf ./

mkdir /disk1/digoal/redis

vi redis.conf
daemonize yes
pidfile /disk1/digoal/redis/redis.pid
port 6379
tcp-backlog 511
bind 0.0.0.0
timeout 60
tcp-keepalive 60
loglevel notice
logfile "/disk1/digoal/redis/redis.log"
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /disk1/digoal/redis/
requirepass digoal
maxclients 10000
maxmemory 10gb
maxmemory-policy noeviction
maxmemory-samples 5
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes

./redis-server ./redis.conf

./redis-cli -h 127.0.0.1 -p 6379 -a digoal

Verify that Redis is working

127.0.0.1:6379> help
redis-cli 3.0.7
Type: "help @<group>" to get a list of commands in <group>
      "help <command>" for help on <command>
      "help <tab>" to get a list of possible help topics
      "quit" to exit

Benchmark Redis performance

cd ~/redis-3.0.7/src
./redis-benchmark -h 127.0.0.1 -p 6379 -n 1000000 -c 32
====== PING_INLINE ======
  1000000 requests completed in 14.35 seconds
  32 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
69676.70 requests per second

====== PING_BULK ======
  1000000 requests completed in 14.39 seconds
  32 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
69516.86 requests per second

====== SET ======
  1000000 requests completed in 14.42 seconds
  32 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
69328.90 requests per second

====== GET ======
  1000000 requests completed in 14.47 seconds
  32 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
69094.18 requests per second

====== INCR ======
  1000000 requests completed in 14.38 seconds
  32 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
69521.70 requests per second

====== LPUSH ======
  1000000 requests completed in 14.47 seconds
  32 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
69118.05 requests per second

====== LPOP ======
  1000000 requests completed in 14.49 seconds
  32 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
69008.35 requests per second

====== SADD ======
  1000000 requests completed in 14.40 seconds
  32 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
69458.91 requests per second

====== SPOP ======
  1000000 requests completed in 14.58 seconds
  32 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 1 milliseconds
100.00% <= 1 milliseconds
68577.70 requests per second

====== LPUSH (needed to benchmark LRANGE) ======
  1000000 requests completed in 14.40 seconds
  32 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
69463.74 requests per second

====== LRANGE_100 (first 100 elements) ======
  1000000 requests completed in 13.74 seconds
  32 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
72796.09 requests per second

====== LRANGE_300 (first 300 elements) ======
  1000000 requests completed in 14.42 seconds
  32 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
69328.90 requests per second

====== LRANGE_500 (first 450 elements) ======
  1000000 requests completed in 14.38 seconds
  32 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
69550.70 requests per second

====== LRANGE_600 (first 600 elements) ======
  1000000 requests completed in 14.43 seconds
  32 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
69319.29 requests per second

====== MSET (10 keys) ======
  1000000 requests completed in 14.20 seconds
  32 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
70417.58 requests per second

For Redis usage, see
http://redis.io/commands

Assuming PostgreSQL is already installed (PostgreSQL 9.5 here):
redis_fdw has several branches; check out the branch matching your PostgreSQL version.
Building redis_fdw depends on hiredis, the Redis C client library.

cd ~
git clone https://github.com/pg-redis-fdw/redis_fdw.git

cd redis_fdw/
git checkout REL9_5_STABLE

git clone https://github.com/redis/hiredis
cd hiredis
make
make PREFIX=/home/digoal/hiredis_bin install

Modify the redis_fdw Makefile

vi Makefile
# append at the end

LDFLAGS += -L/home/digoal/hiredis_bin/lib

Build and install redis_fdw

export PATH=/home/digoal/pgsql9.5.0/bin:$PATH
make USE_PGXS=1
make USE_PGXS=1 install

Deploy redis_fdw in every PostgreSQL database that needs to access Redis

cd ~
$ psql
psql (9.5.0)
Type "help" for help.

postgres=# create extension redis_fdw;
CREATE EXTENSION

Test redis_fdw: create a server, then create foreign tables.
Data in Redis can then be written and read through plain SQL.

postgres=# CREATE SERVER redis_server
postgres-#     FOREIGN DATA WRAPPER redis_fdw
postgres-#     OPTIONS (address '127.0.0.1', port '6379');
CREATE SERVER

postgres=# CREATE FOREIGN TABLE redis_db0 (key text, val text)
postgres-#     SERVER redis_server
postgres-#     OPTIONS (database '0');
CREATE FOREIGN TABLE

postgres=# 
postgres=# CREATE USER MAPPING FOR PUBLIC
postgres-#     SERVER redis_server
postgres-#     OPTIONS (password 'digoal');
CREATE USER MAPPING

postgres=# CREATE FOREIGN TABLE myredishash (key text, val text[])
postgres-#     SERVER redis_server
postgres-#     OPTIONS (database '0', tabletype 'hash', tablekeyprefix 'mytable:');
CREATE FOREIGN TABLE

postgres=# INSERT INTO myredishash (key, val)
   VALUES ('mytable:r1','{prop1,val1,prop2,val2}');
INSERT 0 1

postgres=# UPDATE myredishash
postgres-#     SET val = '{prop3,val3,prop4,val4}'
postgres-#     WHERE key = 'mytable:r1';
UPDATE 1

postgres=# select * from myredishash ;
    key     |           val           
------------+-------------------------
 mytable:r1 | {prop3,val3,prop4,val4}
(1 row)

postgres=# select * from myredishash ;
    key     |           val           
------------+-------------------------
 mytable:r1 | {prop3,val3,prop4,val4}
(1 row)

postgres=# DELETE from myredishash
postgres-#     WHERE key = 'mytable:r1';
DELETE 1

postgres=# select * from myredishash ;
 key | val 
-----+-----
(0 rows)

postgres=# CREATE FOREIGN TABLE myredis_s_hash (key text, val text)
postgres-#     SERVER redis_server
postgres-#     OPTIONS (database '0', tabletype 'hash',  singleton_key 'mytable');
CREATE FOREIGN TABLE

postgres=# INSERT INTO myredis_s_hash (key, val)
postgres-#    VALUES ('prop1','val1'),('prop2','val2');
INSERT 0 2

postgres=# select * from myredis_s_hash ;
  key  | val  
-------+------
 prop1 | val1
 prop2 | val2
(2 rows)

postgres=# UPDATE myredis_s_hash
postgres-#     SET val = 'val23'
postgres-#     WHERE key = 'prop1';
UPDATE 1

postgres=# select * from myredis_s_hash ;
  key  |  val  
-------+-------
 prop1 | val23
 prop2 | val2
(2 rows)

postgres=# DELETE from myredis_s_hash
postgres-#     WHERE key = 'prop2';
DELETE 1

postgres=# \set VERBOSITY verbose
postgres=# INSERT INTO myredis_s_hash (key, val)
   VALUES ('prop1','val1'),('prop2','val2');
ERROR:  23505: key already exists: prop1
LOCATION:  redisExecForeignInsert, redis_fdw.c:2158
postgres=# select * from myredis_s_hash ;       
  key  |  val  
-------+-------
 prop1 | val23
(1 row)

Detailed redis_fdw usage
.1. Options supported by CREATE SERVER
Specify the address and port

address: The address or hostname of the Redis server. Default: 127.0.0.1
port: The port number on which the Redis server is listening. Default: 6379

.2. Options supported by CREATE USER MAPPING
Specify the password

password: The password to authenticate to the Redis server with. Default:

.3. Options supported by CREATE FOREIGN TABLE
Specify the database ID,
the table type (hash, list, set, zset, or scalar),
and at most one of: a key prefix, a key set, or a singleton_key naming a single key (mutually exclusive).

database: The numeric ID of the Redis database to query. Default: 0
(9.2 and later) tabletype: can be 'hash', 'list', 'set' or 'zset' Default: none, meaning only look at scalar values.
(9.2 and later) tablekeyprefix: only get items whose names start with the prefix Default: none
(9.2 and later) tablekeyset: fetch item names from the named set Default: none
(9.2 and later) singleton_key: get all the values in the table from a single named object. Default: none, meaning don't just use a single object.

You can only have one of tablekeyset and tablekeyprefix, and if you use singleton_key you can't have either.
Structured items are returned as array text or, if the value column is a text array, as an array of values.
In the case of hash objects this array is an array of key, value, key, value ...
Singleton key tables are returned as rows with a single text column in the case of lists, sets and scalars; rows with key and value text columns for hashes; and rows with a value text column and an optional numeric score column for zsets.
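As an example of the zset singleton form described above (a sketch only: the table name `myredis_zset` and the key `hot_score` are made up for illustration, and `redis_server` is the server created earlier in this post):

```sql
-- Expose one Redis zset named 'hot_score' as a two-column table
CREATE FOREIGN TABLE myredis_zset (val text, score numeric)
    SERVER redis_server
    OPTIONS (database '0', tabletype 'zset', singleton_key 'hot_score');

-- Each member of the zset comes back as a (value, score) row
SELECT * FROM myredis_zset ORDER BY score DESC;
```

This pattern is a natural fit for the real-time tagging or ranking use cases mentioned earlier, since the zset score is exposed directly as a numeric column.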

Limitations
.1. There's no such thing as a cursor in Redis, or MVCC, which leaves us with no way to atomically query the database for the available keys and then fetch each value.
So, we get a list of keys to begin with, and then fetch whatever records still exist as we build the tuples.

If the query carries no qualifier, or the foreign table was not created with a singleton key, a whole set of keys has to be fetched:
redis_fdw/redis_fdw.c

/* initial cursor */
#define ZERO "0"
/* redis default is 10 - let's fetch 1000 at a time */
#define COUNT " COUNT 1000"

/*
 * redisBeginForeignScan
 *        Initiate access to the database
 */
static void
redisBeginForeignScan(ForeignScanState *node, int eflags)
{
...
    /* Execute the query */
    if (festate->singleton_key)
    {
        /*
         * We're not using cursors for now for singleton key tables. The
         * theory is that we don't expect them to be so large in normal use
         * that we would get any significant benefit from doing so, and in any
         * case scanning them in a single step is not going to tie things up
         * like scannoing the whole Redis database could.
         */
    ...
    }
    else if (qual_value && pushdown)
    {
        /*
         * if we have a qual, make sure it's a member of the keyset or has the
         * right prefix if either of these options is specified.
         *
         * If not set row to -1 to indicate failure
         */
         ...
    }
    else
    {
        /* no qual - do a cursor scan */
        if (festate->keyset)
        {
            festate->cursor_search_string = "SSCAN %s %s" COUNT;
            reply = redisCommand(context, festate->cursor_search_string,
                                 festate->keyset, ZERO);
        }
        else if (festate->keyprefix)
        {
            festate->cursor_search_string = "SCAN %s MATCH %s*" COUNT;
            reply = redisCommand(context, festate->cursor_search_string,
                                 ZERO, festate->keyprefix);
        }
        else
        {
            festate->cursor_search_string = "SCAN %s" COUNT;
            reply = redisCommand(context, festate->cursor_search_string, ZERO);
        }
    }
...

.2. We can only push down a single qual to Redis, which must use the TEXTEQ operator, and must be on the 'key' column.
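In other words, only a single equality condition on the key column is pushed down to Redis; any other predicate forces redis_fdw to scan the keys and lets PostgreSQL do the filtering. A sketch against the `redis_db0` table created earlier:

```sql
-- Pushed down: a single TEXTEQ qual on the 'key' column,
-- answered by a direct Redis lookup
SELECT * FROM redis_db0 WHERE key = 'user:1';

-- NOT pushed down: redis_fdw scans the keyspace (SCAN ... COUNT 1000,
-- as shown in the source above) and PostgreSQL filters on 'val'
SELECT * FROM redis_db0 WHERE val = 'abc';
```

So for large keyspaces, queries should be written to filter on the key column whenever possible, or the foreign table should be restricted with tablekeyprefix/tablekeyset.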
