file

#file#

1366009531757309

Android OSS file upload fails

Error message: read failed: EBADF (Bad file descriptor)

注目天空

Problem running Python in Sublime

Ctrl+B runs the script normally, but running "python-run current file" shows no output.

mimimi

MongoDB fails to start

Because the disk was running out of space and the files of one log database were taking up too much room, I could not drop the database through MongoDB commands, so I deleted the corresponding database data file directory directly. After deleting it, MongoDB reports errors on startup. Does anyone know how to repair this? Single-db instance:

[initandlisten] WiredTiger (2) 1563768196:507074, file:log_db/collection-1400--2187771374350063201.wt, session.open_cursor: /data1/mongo_db/data/log_db/collection-1400--2187771374350063201.wt: No such file or directory
2019-07-22T12:03:16.507+0800 E STORAGE [initandlisten] no cursor for uri: table:log_db/collection-1400--2187771374350063201
2019-07-22T12:03:16.507+0800 I - [initandlisten] Invariant failure c src/mongo/db/storage/wiredtiger/wiredtiger_record_store.cpp 1020

臭豆腐

ls command cannot see a file?

[xxx@xxx bin]# ls -l systemd-network
-rwxr-xr-x 1 root root 322588 Jul 23 2014 systemd-network
[xxx@xxx bin]# ls -l systemd-net*
ls: cannot access systemd-net*: No such file or directory
[xxx@xxx bin]# lsattr systemd-network
-------------e- systemd-network
[xxx@xxx bin]# ls -l
{no files beginning with "system" show up in the listing}
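One way to check whether the filename actually contains non-printing or look-alike characters that would defeat the shell glob is to list the directory from Python and print each entry's repr. This is only a minimal diagnostic sketch added here for illustration, not part of the original question; the directory path is an assumption and should be replaced with the bin directory from the transcript.

import os

# Hypothetical path; substitute the bin directory shown in the transcript above.
path = "."

for name in sorted(os.listdir(path)):
    if "system" in name:
        # repr() exposes hidden characters (carriage returns, non-breaking spaces, etc.)
        print(repr(name))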

无暇之三月

_getFileSize requires Buffer/File/String.

_getFileSize requires Buffer/File/String. I am uploading directly to OSS with the JS SDK. When a file selected with webuploader is uploaded via multipartUpload, this error is thrown. After reading the source, I changed is.file in aliyun-oss-sdk.js as follows:

is.file = function file(obj) {
  // (obj instanceof File) alone is not recognized as a File, which causes
  // Error: _getFileSize requires Buffer/File/String.
  return typeof File !== 'undefined' &&
    (obj instanceof File || Object.prototype.toString.call(obj) === '[object File]');
}

With this change the upload works, but does it have any other side effects?

帅死了我的

CentOS is missing the libssl.so.1.1 dependency

How do I install this missing dependency? bedrock_server: error while loading shared libraries: libssl.so.1.1: cannot open shared object file: No such file or directory

DingTalk mini-program development: how do I set the developer server address for the image upload/download feature?

dd.uploadFile({
  url: '请使用自己服务器地址', // placeholder: use your own server address here
  fileType: 'image',
  fileName: 'file',
  filePath: '...',
  success: (res) => {
    dd.alert({ content: '上传成功' }); // "upload succeeded"
  },
});

游客886

The disk is full because of this file; can I just delete it directly?

The disk is full because of this file; can I just delete it directly?

could not extend file "base/12981/2619": No space left on device

This question comes from the Yunqi community 【PostgreSQL技术进阶社群】 group: https://yq.aliyun.com/articles/690084
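Before deleting anything, it helps to identify which relation the file base/12981/2619 actually backs. The following is an illustrative sketch added for context, not part of the original question; it assumes psycopg2 is available and that the connection string is adjusted to point at the affected database.

import psycopg2  # assumed to be installed

# Placeholder connection string; connect to the database whose OID is 12981 (the "base/12981" directory).
conn = psycopg2.connect("dbname=postgres")
with conn, conn.cursor() as cur:
    # Map the on-disk filenode 2619 back to the relation that owns it;
    # 0 means the default tablespace of the current database.
    cur.execute("SELECT pg_filenode_relation(0, 2619);")
    print(cur.fetchone()[0])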

王滕滕

Does direct upload from a mini-program to OSS work now?

Does direct upload from a mini-program to OSS work now?

my.chooseImage({
  chooseImage: 1,
  success: res => {
    const path = res.apFilePaths[0];
    console.log(path)
    var key = 'image.jpg'
    my.uploadFile({
      url: 'https://ocr-image-bucket.oss-cn-shanghai.aliyuncs.com', // ocr image bucket host
      fileType: 'image',
      fileName: key,
      filePath: path,
      formData: {
        name: path,
        key: '${filename}',
        policy: '<my-policy>',
        OSSAccessKeyId: '<my-OSSAccessKeyId>',
        success_action_status: '200',
        signature: '<my-signature>'
      },
      success: (res) => {
        my.alert({ content: 'success info: ' + res.data });
      },
    });
  }
});

Why do I get a 400 error ("POST requires exactly one file upload per request")? I am only uploading one file.
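For reference on the direct-upload flow, the policy and signature form fields are normally generated on your own server from your AccessKeySecret. Below is a minimal illustrative sketch added here, not taken from the original question; the secret, expiration window, and size limit are placeholder assumptions.

import base64
import hmac
import json
import time
from hashlib import sha1

access_key_secret = "<my-AccessKeySecret>"  # placeholder

# Policy document: expires in one hour, allows uploads up to 10 MB (both assumptions).
policy = {
    "expiration": time.strftime("%Y-%m-%dT%H:%M:%S.000Z", time.gmtime(time.time() + 3600)),
    "conditions": [["content-length-range", 0, 10 * 1024 * 1024]],
}

policy_b64 = base64.b64encode(json.dumps(policy).encode()).decode()
signature = base64.b64encode(
    hmac.new(access_key_secret.encode(), policy_b64.encode(), sha1).digest()
).decode()

print("policy:", policy_b64)
print("signature:", signature)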

hbase小能手

Hi everyone, I have a question: on cdh5.11-hbase1.2, all the store files under a region have been lost and hbck cannot recover them. The writes were confirmed at write time. Could a manual major_compact have caused this? Has anyone run into a similar problem, and how did you recover?

Hi everyone, I have a question: on cdh5.11-hbase1.2, all the store files under a region have been lost and hbck cannot recover them. The writes were confirmed at write time. Could a manual major_compact have caused this? Has anyone run into a similar problem, and how did you recover? Store files under some regions were lost, while many others were not.

sumli

Installing go-python on Windows

When installing go-python on Windows, Python.h cannot be found: fatal error: Python.h: No such file or directory, compilation terminated. How do I fix this? Also, does go-python support Python 3?

eddie.cheng

Starting MySQL... The server quit without updating PID file (/usr/local/mysql/data/mysql.pid). [FAILED]

190527 11:10:13 [Note] Plugin 'FEDERATED' is disabled.
/usr/local/mysql/bin/mysqld: Table 'mysql.plugin' doesn't exist
190527 11:10:13 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
190527 11:10:13 InnoDB: The InnoDB memory heap is disabled
190527 11:10:13 InnoDB: Mutexes and rw_locks use GCC atomic builtins
190527 11:10:13 InnoDB: Compressed tables use zlib 1.2.11
190527 11:10:13 InnoDB: Using Linux native AIO
190527 11:10:13 InnoDB: Initializing buffer pool, size = 256.0M
190527 11:10:13 InnoDB: Completed initialization of buffer pool
190527 11:10:13 InnoDB: highest supported file format is Barracuda.
190527 11:10:13 InnoDB: Waiting for the background threads to start
190527 11:10:14 InnoDB: 5.5.62 started; log sequence number 1595676
190527 11:10:14 [ERROR] /usr/local/mysql/bin/mysqld: unknown variable 'innodb_checksum_algorithm=0'
190527 11:10:14 [ERROR] Aborting
190527 11:10:14 InnoDB: Starting shutdown...

游客qeznygpepdvvo

Server-side subscription on the Alibaba Cloud IoT Platform

The following error occurs:

2019-05-09 10:53:15 [nioEventLoopGroup-2-1] INFO c.a.o.iot.api.http2.IotHttp2Client - connection status changed, connection: 771bd824, status: CREATING
2019-05-09 10:53:15 [nioEventLoopGroup-2-1] INFO c.a.o.iot.api.http2.IotHttp2Client - receive setting, connection: 771bd824, subscription count : 8
2019-05-09 10:53:15 [nioEventLoopGroup-2-1] INFO c.a.o.iot.api.http2.IotHttp2Client - connection status changed, connection: 771bd824, status: CREATED
2019-05-09 10:53:30 [nioEventLoopGroup-2-1] INFO c.a.o.iot.api.http2.IotHttp2Client - connection status changed, connection: 771bd824, status: CLOSED
Exception in thread "main" com.aliyun.openservices.iot.api.exception.IotClientException: com.aliyuncs.exceptions.ClientException: SDK.ServerUnreachable : Server unreachable: java.net.SocketException: Unexpected end of file from server
    at com.aliyun.openservices.iot.api.auth.handler.accesskey.AccessKeyAuthHandler.updateToken(AccessKeyAuthHandler.java:74)
    at com.aliyun.openservices.iot.api.auth.handler.accesskey.AccessKeyAuthHandler.getAuthParams(AccessKeyAuthHandler.java:56)
    at com.aliyun.openservices.iot.api.http2.IotHttp2Client.authHeader(IotHttp2Client.java:158)
    at com.aliyun.openservices.iot.api.message.impl.MessageClientImpl.sendAuth(MessageClientImpl.java:95)
    at com.aliyun.openservices.iot.api.message.impl.MessageClientImpl.doConnect(MessageClientImpl.java:107)
    at com.aliyun.openservices.iot.api.message.impl.MessageClientImpl.connect(MessageClientImpl.java:300)
    at com.aliyun.iot.demo.H2Client.main(H2Client.java:36)
Caused by: com.aliyuncs.exceptions.ClientException: SDK.ServerUnreachable : Server unreachable: java.net.SocketException: Unexpected end of file from server
    at com.aliyuncs.DefaultAcsClient.doAction(DefaultAcsClient.java:295)
    at com.aliyuncs.DefaultAcsClient.doAction(DefaultAcsClient.java:207)
    at com.aliyuncs.DefaultAcsClient.doAction(DefaultAcsClient.java:100)
    at com.aliyuncs.DefaultAcsClient.getAcsResponse(DefaultAcsClient.java:144)
    at com.aliyun.openservices.iot.api.auth.handler.accesskey.AccessKeyAuthHandler.updateToken(AccessKeyAuthHandler.java:68)
    ... 6 more

开源大数据EMR

Hive/Impala jobs fail when reading a Parquet table written by SparkSQL

Hive/Impala jobs fail when reading a Parquet table written by SparkSQL (the table contains a Decimal column):

Failed with exception java.io.IOException: org.apache.parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file hdfs://…/…/part-00000-xxx.snappy.parquet
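A workaround that is often suggested for this class of error (not stated in the original post) is to have SparkSQL rewrite the table in the legacy, Hive-compatible Parquet format via spark.sql.parquet.writeLegacyFormat. A minimal sketch, assuming PySpark and placeholder table and path names:

from pyspark.sql import SparkSession

# Rewrite the table with Hive/Impala-compatible decimal encoding.
# "my_table" and the output path are placeholders, not from the original question.
spark = (
    SparkSession.builder
    .appName("rewrite-parquet-legacy")
    .config("spark.sql.parquet.writeLegacyFormat", "true")
    .getOrCreate()
)

df = spark.table("my_table")
df.write.mode("overwrite").parquet("hdfs:///tmp/my_table_legacy")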

宋淑婷

Cannot run a Python job on an EMR Spark cluster

I am trying to submit a Python job to an AWS EMR Spark cluster. My settings in the spark-submit options section are: --master yarn --driver-memory 4g --executor-memory 2g. However, the job fails while running. Below is the error log file:

19/04/09 10:40:25 INFO RMProxy: Connecting to ResourceManager at ip-172-31-53-241.ec2.internal/172.31.53.241:8032
19/04/09 10:40:26 INFO Client: Requesting a new application from cluster with 3 NodeManagers
19/04/09 10:40:26 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (11520 MB per container)
19/04/09 10:40:26 INFO Client: Will allocate AM container, with 4505 MB memory including 409 MB overhead
19/04/09 10:40:26 INFO Client: Setting up container launch context for our AM
19/04/09 10:40:26 INFO Client: Setting up the launch environment for our AM container
19/04/09 10:40:26 INFO Client: Preparing resources for our AM container
19/04/09 10:40:26 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
19/04/09 10:40:29 INFO Client: Uploading resource file:/mnt/tmp/spark-a8e941b7-f20f-46e5-8b2d-05c52785bd22/__spark_libs__3200812915608084660.zip -> hdfs://ip-172-31-53-241.ec2.internal:8020/user/hadoop/.sparkStaging/application_1554806206610_0001/__spark_libs__3200812915608084660.zip
19/04/09 10:40:32 INFO Client: Uploading resource s3://spark-yaowen/labelp.py -> hdfs://ip-172-31-53-241.ec2.internal:8020/user/hadoop/.sparkStaging/application_1554806206610_0001/labelp.py
19/04/09 10:40:32 INFO S3NativeFileSystem: Opening 's3://spark-yaowen/labelp.py' for reading
19/04/09 10:40:32 INFO Client: Uploading resource file:/usr/lib/spark/python/lib/pyspark.zip -> hdfs://ip-172-31-53-241.ec2.internal:8020/user/hadoop/.sparkStaging/application_1554806206610_0001/pyspark.zip
19/04/09 10:40:33 INFO Client: Uploading resource file:/usr/lib/spark/python/lib/py4j-0.10.7-src.zip -> hdfs://ip-172-31-53-241.ec2.internal:8020/user/hadoop/.sparkStaging/application_1554806206610_0001/py4j-0.10.7-src.zip
19/04/09 10:40:34 INFO Client: Uploading resource file:/mnt/tmp/spark-a8e941b7-f20f-46e5-8b2d-05c52785bd22/__spark_conf__6746542371431989978.zip -> hdfs://ip-172-31-53-241.ec2.internal:8020/user/hadoop/.sparkStaging/application_1554806206610_0001/__spark_conf__.zip
19/04/09 10:40:34 INFO SecurityManager: Changing view acls to: hadoop
19/04/09 10:40:34 INFO SecurityManager: Changing modify acls to: hadoop
19/04/09 10:40:34 INFO SecurityManager: Changing view acls groups to:
19/04/09 10:40:34 INFO SecurityManager: Changing modify acls groups to:
19/04/09 10:40:34 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); groups with view permissions: Set(); users with modify permissions: Set(hadoop); groups with modify permissions: Set()
19/04/09 10:40:36 INFO Client: Submitting application application_1554806206610_0001 to ResourceManager
19/04/09 10:40:37 INFO YarnClientImpl: Submitted application application_1554806206610_0001
19/04/09 10:40:38 INFO Client: Application report for application_1554806206610_0001 (state: ACCEPTED)
19/04/09 10:40:38 INFO Client:
    client token: N/A
    diagnostics: AM container is launched, waiting for AM container to Register with RM
    ApplicationMaster host: N/A
    ApplicationMaster RPC port: -1
    queue: default
    start time: 1554806436561
    final status: UNDEFINED
    tracking URL: http://ip-172-31-53-241.ec2.internal:20888/proxy/application_1554806206610_0001/
    user: hadoop
19/04/09 10:40:39 INFO Client: Application report for application_1554806206610_0001 (state: ACCEPTED)
19/04/09 10:40:40 INFO Client: Application report for application_1554806206610_0001 (state: ACCEPTED)
19/04/09 10:40:41 INFO Client: Application report for application_1554806206610_0001 (state: ACCEPTED)
19/04/09 10:40:42 INFO Client: Application report for application_1554806206610_0001 (state: ACCEPTED)
19/04/09 10:40:43 INFO Client: Application report for application_15548062066

宋淑婷

Problem importing modules when using a .zip file (created in Python with the zipfile package) with --py-files

I am trying to archive my application modules for spark-submit on an EMR cluster, as follows.

Folder structure of the modules:

app
--- module1
------ test.py
------ test2.py
--- module2
------ file1.py
------ file2.py

I call the zip function from my test file:

import zipfile
import os

def zip_deps():
    # make zip
    module1_path = '../module1'
    module2_path = '../module2'
    try:
        with zipfile.ZipFile('deps.zip', 'w', zipfile.ZIP_DEFLATED) as zipf:
            info = zipfile.ZipInfo(module1_path + '/')
            zipf.writestr(info, '')
            for root, dirs, files in os.walk(module1_path):
                for d in dirs:
                    info = zipfile.ZipInfo(os.path.join(root, d) + '/')
                    zipf.writestr(info, '')
                for file in files:
                    zipf.write(os.path.join(root, file), os.path.relpath(os.path.join(root, file)))
            info = zipfile.ZipInfo(module2_path + '/')
            zipf.writestr(info, '')
            for root, dirs, files in os.walk(module2_path):
                for d in dirs:
                    info = zipfile.ZipInfo(os.path.join(root, d) + '/')
                    zipf.writestr(info, '')
                for file in files:
                    zipf.write(os.path.join(root, file), os.path.relpath(os.path.join(root, file)))
    except:
        print('Unexpected error occurred while creating file deps.zip')
        zipf.close()

deps.zip is created correctly and, as far as I can tell, it zips all the files I want, with each module folder at the base level of the zip. In fact, the zip created with:

zip -r deps.zip module1 module2

has the same structure, and that one works when I submit it with:

spark-submit --py-files deps.zip driver.py

The error on EMR is:

Traceback (most recent call last):
  File "driver.py", line 6, in
    from module1.test import test_function
ModuleNotFoundError: No module named 'module1'

FWIW I also tried zipping via a subprocess with the following commands, and I get the same error in Spark on EMR:

os.system("zip -r9 deps.zip ../module1")
os.system("zip -r9 deps.zip ../module2")
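For comparison, here is a minimal sketch (added for illustration, not from the original post) of zipping the two modules so that every entry is stored as module1/... or module2/... at the root of the archive, by computing each arcname relative to the module's parent directory. The paths are the same hypothetical ones used in the question.

import os
import zipfile

def zip_deps(output='deps.zip', modules=('../module1', '../module2')):
    """Write each module into the archive rooted at its own name (module1/..., module2/...)."""
    with zipfile.ZipFile(output, 'w', zipfile.ZIP_DEFLATED) as zipf:
        for module_path in modules:
            parent = os.path.dirname(os.path.abspath(module_path))
            for root, _dirs, files in os.walk(module_path):
                for name in files:
                    full = os.path.join(root, name)
                    # arcname relative to the parent directory, e.g. 'module1/test.py'
                    zipf.write(full, os.path.relpath(full, start=parent))

zip_deps()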

游客886

Why does installing the extension jsonbx in PG 10 fail with: could not open extension control file "/u01/pgsql_20190211/share/extension/jsonbx.control": No such file or directory

Why does installing the extension jsonbx in PG 10 fail with: could not open extension control file "/u01/pgsql_20190211/share/extension/jsonbx.control": No such file or directory

This question comes from the Yunqi community 【PostgreSQL技术进阶社群】 group: https://yq.aliyun.com/articles/690084

hbase小能手

Problem connecting to file or HDFS

Has anyone run into this problem? It happens whether I connect to file or HDFS.

宋淑婷

Why does my code output an enumeration value instead of a string?

I am writing a dictionary-based attack program in Ruby for a CTF, but my output prints enumeration values instead of strings. I have already tried explicitly converting the output variables to strings, but that did not change anything.

require 'net/http'

def checkUsage()
  if ARGV.length != 1
    return false
  end
  return true
end

def generateUsername()
  wordArray = Array.new
  wordlist = File.open("words.txt", "r")
  for word in wordlist
    wordArray.push(word)
  end
  return wordArray.repeated_permutation(7).to_s
end

def generatePassword()
  wordArray = Array.new
  wordlist = File.open("words.txt", "r")
  for word in wordlist
    wordArray.push(word)
  end
  return wordArray.repeated_permutation(7).to_s
end

def requestAuthentication()
  if(!checkUsage())
    puts("Usage: frsDic <wordlist>")
    return false
  end
  uri = URI("http://challenges.laptophackingcoffee.org:3199/secret.php")
  req = Net::HTTP::Get.new(uri)
  loop do
    username = generateUsername()
    password = generatePassword()
    if req.basic_auth username, password
      puts "Username found: " + username
      puts "Password found: " + password
      break
    else
      puts "Username failed: " + username
      puts "Password failed: " + password
    end
  end
end

requestAuthentication()

I expected it to print the username/password strings found by the brute force, but it only prints enumeration values.