
BucketAllocatorException

Failed allocation for a5b33ab19ce246619b7756e32215edab_108590499; org.apache.hadoop.hbase.io.hfile.bucket.BucketAllocatorException: Allocation too big size=1114202; adjust BucketCache sizes hbase.bucketcache.bucket.sizes to accomodate if size seems reasonable and you want it cached.

Is this telling me that the whole bucket cache is out of space, or that this block's size exceeds the maximum size of a single bucket?

Also, the bucket cache allocation JSON file on the regionserver is full of entries like the ones below. What do they mean? I even see entries whose size is larger than the value in the exception above.
{ "count" : 4, "countData" : 0, "sizeData" : 0, "filename" : "0108ac83e9e14e538a3f65d83b07ffca", "size" : 262677 }
{ "count" : 68, "countData" : 0, "sizeData" : 0, "filename" : "13f6b5282309436ab3c2f8e7cc1b05ce", "size" : 4549742 }

hbase小能手 2018-11-05 11:53:01
1 answer
  • HBase is a distributed, column-oriented, open-source database: a distributed storage system for structured data. Unlike a typical relational database, HBase is suited to storing unstructured data. The Alibaba Cloud HBase technical team discusses HBase and its ecosystem here.
    1. It means the block exceeds the size of a single bucket. This block is over 1 MB, so if you want to cache blocks this large you need to adjust the hbase.bucketcache.bucket.sizes configuration (see the example configuration after the code below):
      /**
       * Allocate a block with specified size. Return the offset
       * @param blockSize size of block
       * @throws BucketAllocatorException,CacheFullException
       * @return the offset in the IOEngine
       */
      public synchronized long allocateBlock(int blockSize) throws CacheFullException,
          BucketAllocatorException {
        assert blockSize > 0;
        BucketSizeInfo bsi = roundUpToBucketSizeInfo(blockSize);
        if (bsi == null) {
          throw new BucketAllocatorException("Allocation too big size=" + blockSize +
            "; adjust BucketCache sizes " + CacheConfig.BUCKET_CACHE_BUCKETS_KEY +
            " to accomodate if size seems reasonable and you want it cached.");
        }
        long offset = bsi.allocateBlock();

        // Ask caller to free up space and try again!
        if (offset < 0)
          throw new CacheFullException(blockSize, bsi.sizeIndex());
        usedSize += bucketSizes[bsi.sizeIndex()];
        return offset;
      }
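    If you do want blocks that large cached, here is a minimal sketch of the fix (the byte values are illustrative, not a recommendation): append a bucket big enough for the ~1.1 MB block to hbase.bucketcache.bucket.sizes in hbase-site.xml, so that roundUpToBucketSizeInfo() finds a matching bucket instead of returning null. The HBase reference guide notes each size should be a multiple of 256, and the setting takes effect when the regionserver's bucket cache is initialized, so a regionserver restart is needed.

      <!-- Illustrative hbase-site.xml snippet; keep the bucket sizes you already use
           and append one (here 1310720 bytes = 1280 KB) that fits the 1114202-byte block. -->
      <property>
        <name>hbase.bucketcache.bucket.sizes</name>
        <value>5120,9216,17408,33792,66560,132096,263168,525312,1310720</value>
      </property>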

    2. That JSON reports the total block cache space used by each HFile, summed over all of that file's cached blocks, which is why a per-file size can be larger than the single-bucket limit above even though no individual block is (see the worked example after the code below):
      public static String toJSON(final String filename, final NavigableSet<CachedBlock> blocks)
          throws JsonGenerationException, JsonMappingException, IOException {
        CachedBlockCountsPerFile counts = new CachedBlockCountsPerFile(filename);
        for (CachedBlock cb: blocks) {
          counts.count++;
          counts.size += cb.getSize();
          BlockType bt = cb.getBlockType();
          if (bt != null && bt.isData()) {
            counts.countData++;
            counts.sizeData += cb.getSize();
          }
        }
        return MAPPER.writeValueAsString(counts);
      }
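    As a worked example (a minimal sketch, not from the original answer, using Jackson which HBase already bundles): the second JSON entry from the question holds 4549742 cached bytes spread across 68 blocks, roughly 65 KB each on average, so every individual block still fits in an ordinary bucket even though the per-file total is far larger than the 1114202-byte block the exception complained about.

      import com.fasterxml.jackson.databind.JsonNode;
      import com.fasterxml.jackson.databind.ObjectMapper;

      public class CachedBlockJsonExample {
        public static void main(String[] args) throws Exception {
          // One entry from the regionserver's per-file block cache JSON (copied from the question).
          String entry = "{ \"count\" : 68, \"countData\" : 0, \"sizeData\" : 0,"
              + " \"filename\" : \"13f6b5282309436ab3c2f8e7cc1b05ce\", \"size\" : 4549742 }";

          JsonNode node = new ObjectMapper().readTree(entry);
          long totalSize = node.get("size").asLong();  // total cached bytes for this HFile
          int blockCount = node.get("count").asInt();  // number of cached blocks for this HFile

          // The per-file total can exceed the largest bucket size; the average block does not.
          System.out.printf("file=%s blocks=%d total=%d avgBlockBytes=%d%n",
              node.get("filename").asText(), blockCount, totalSize, totalSize / blockCount);
        }
      }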

    2019-07-17 23:12:02