string


Which interfaces does Alipay Face-to-Face Payment (当面付) support?

A quick question: does Face-to-Face Payment only support the interfaces listed in this document: https://docs.open.alipay.com/194/105203/ ? After enabling Face-to-Face Payment, can I call the alipay.trade.app.pay interface? I would also like to know: for the orderStr parameter documented at https://myjsapi.alipay.com/jsapi/native/trade-pay.html, besides the request query string assembled for alipay.trade.app.pay, which other interfaces' query strings does it accept?

无暇之三月

_getFileSize requires Buffer/File/String.

_getFileSize requires Buffer/File/String. I'm uploading directly to OSS from JavaScript in the browser. When a file selected with WebUploader is uploaded via multipartUpload, this error is thrown. Going through the source, the file object is not recognized as a File by the SDK's is.file check ((obj instanceof File) returns false), which triggers Error: _getFileSize requires Buffer/File/String. I changed is.file in aliyun-oss-sdk.js to:

is.file = function file(obj) {
  // (obj instanceof File) alone does not recognize the WebUploader file,
  // causing Error: _getFileSize requires Buffer/File/String.
  return typeof File !== 'undefined' &&
    (obj instanceof File || Object.prototype.toString.call(obj) === '[object File]');
}

With this change the upload works, but could it have any side effects?

hbase小能手

A question for discussion: in HBase, do you store everything as strings? Should dates and integers also be converted to strings?

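Not necessarily: HBase stores uninterpreted byte arrays, so numeric and date values can be written either as strings or as their native binary encodings. A minimal Java sketch of the difference, using the standard hbase-client Bytes utility; the row key, column family and qualifier names are made up for illustration:

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseEncodingDemo {
    public static void main(String[] args) {
        long eventTime = 1557711817705L;   // a timestamp in milliseconds

        // Option 1: store the number as a string. Human readable in the shell,
        // but 13 bytes here, and it only sorts correctly if zero-padded to a fixed width.
        byte[] asString = Bytes.toBytes(String.valueOf(eventTime));

        // Option 2: store the native binary encoding. Always 8 bytes for a long
        // and preserves numeric order for non-negative values.
        byte[] asLong = Bytes.toBytes(eventTime);

        System.out.println("string encoding length = " + asString.length);
        System.out.println("long encoding length   = " + asLong.length);

        // Either encoding is written with the same Put API; only the bytes differ.
        Put put = new Put(Bytes.toBytes("row-1"));
        put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("ts"), asLong);
    }
}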

游客886

Sorry to bother you, but I'm hitting a Dubbo invocation exception. This code hasn't been touched for two months, yet today it suddenly started working only intermittently. When it fails it throws: hessian.io.HessianProtocolException: expected integer at 0x30 java.lang.String. Even stranger, after quite a while (at least ten to twenty minutes) the retries succeed and the originally failed requests go through.

时序数据库

A MySQL UPDATE on a table with only a few hundred rows took 10 s. What usually causes this? I checked for table locks and the machine's I/O and found nothing wrong.

The UPDATE statement:

2019-05-13 09:43:37.705 http-nio-8401-exec-3: DEBUG com.ybjdw.order.itemorder.dao.OrderRowMapper.updateByExampleSelective - ==> Preparing: update order_row SET end_date = ?, modify_date = ? WHERE ( item_id = ? )
2019-05-13 09:43:37.706 http-nio-8401-exec-3: DEBUG com.ybjdw.order.itemorder.dao.OrderRowMapper.updateByExampleSelective - ==> Parameters: 2019-05-13 09:43:32.695(Timestamp), 2019-05-13 09:43:32.695(Timestamp), 100001015577113592290000(String)
2019-05-13 09:43:42.696 http-nio-8401-exec-3: DEBUG com.ybjdw.order.itemorder.dao.OrderRowMapper.updateByExampleSelective - <== Updates: 3

item_id has no index, and the table has 601 rows.

一码平川MACHEL

A question I run into a lot: when converting a str to an int, int() raises invalid literal for int() with base 10: '['. I then tried atoi from the string module to convert it to a number and found that doesn't work either.

冷丰

How do I specify the event-time field with the Flink Scala API?

The code is as follows:

import com.alibaba.fastjson.JSON
import com.alibaba.fastjson.serializer.SerializeFilter
import org.apache.flink.api.common.typeinfo.TypeInformation
import org.apache.flink.streaming.api.TimeCharacteristic
import org.apache.flink.streaming.api.functions.source.SourceFunction
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time
import org.apache.flink.streaming.connectors.kafka.Kafka011JsonTableSource
import org.apache.flink.table.api.scala._
import org.apache.flink.table.sources.wmstrategies.BoundedOutOfOrderTimestamps
// use org.apache.flink.table.api.Types, not the Java Types
//import org.apache.flink.api.common.typeinfo.Types
import org.apache.flink.table.api.Types
import org.apache.flink.api.java.typeutils.RowTypeInfo
import org.apache.flink.table.api.TableEnvironment
import org.apache.flink.types.Row

object RowTest1 extends App {

  class MySource extends SourceFunction[Row] {
    val data = Array[String](
      """{"rider_id":10,"rider_name":"hb","city":"hangzhou","rowtime":1555984311000}""",
      """{"rider_id":10,"rider_name":"hb","city":"hangzhou","rowtime":1555984315000}""",
      """{"rider_id":10,"rider_name":"hb","city":"hangzhou","rowtime":1555984343000}"""
    )

    override def run(ctx: SourceFunction.SourceContext[Row]): Unit = {
      for (i <- data) {
        val r1 = JSON.parseObject(i)
        val rider_id = r1.getObject("rider_id", classOf[Int])
        val rider_name = r1.getObject("rider_name", classOf[String])
        val rowTime = r1.getObject("rowtime", classOf[java.sql.Timestamp])
        //println(rider_id, rider_name, rowTime)
        val row = Row.of(rider_id.asInstanceOf[Object], rider_name.asInstanceOf[Object], rowTime.asInstanceOf[Object])
        ctx.collect(row)
        Thread.sleep(1000)
      }
    }

    override def cancel(): Unit = {}
  }

  val env = StreamExecutionEnvironment.getExecutionEnvironment
  val tEnv = TableEnvironment.getTableEnvironment(env)
  env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)

  val fieldNames = Array[String]("rider_id", "rider_name", "mytime.ROWTIME")
  val types = Array[TypeInformation[_]](Types.INT, Types.STRING, Types.SQL_TIMESTAMP)

  val rowSource = env.addSource(new MySource)(Types.ROW(types: _*))
    //rowSource.print()
    .assignTimestampsAndWatermarks(new BoundedOutOfOrdernessTimestampExtractor[Row](Time.seconds(10)) {
      override def extractTimestamp(element: Row): Long =
        element.getField(2).asInstanceOf[java.sql.Timestamp].getTime
    })

  val table1 = rowSource.toTable(tEnv).as('rider_id, 'rider_name, 'mytime)
  table1.printSchema()
  tEnv.registerTable("t1", table1)

  tEnv.sqlQuery(
    """
      | select
      |   rider_id,
      |   count(*) as cnt
      | from t1
      | group by rider_id, TUMBLE(mytime, INTERVAL '10' SECOND)
      | """.stripMargin).toAppendStream[Row].print()

  env.execute()
}

Running it throws: Exception in thread "main" org.apache.flink.table.api.ValidationException: Window can only be defined over a time attribute column. Question: when converting a DataStream[Row] to a Table, how do I specify the time field?
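A note for readers: the usual way to get a time attribute here is to declare the field as a rowtime attribute when converting the DataStream to a Table (in the Scala Table API that means writing 'mytime.rowtime in the field list passed to toTable). A rough Java-API sketch of the same idea, in the pre-Flink-1.9 style used by the question; the stream and field names are taken from the question and the code is untested:

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class RowtimeSketch {
    // "rows" must already have timestamps and watermarks assigned
    // (assignTimestampsAndWatermarks), exactly as in the question.
    public static Table toTableWithRowtime(StreamExecutionEnvironment env, DataStream<Row> rows) {
        StreamTableEnvironment tEnv = TableEnvironment.getTableEnvironment(env);
        // The ".rowtime" suffix marks mytime as the event-time attribute,
        // which is what the "Window can only be defined over a time attribute
        // column" error is asking for.
        return tEnv.fromDataStream(rows, "rider_id, rider_name, mytime.rowtime");
    }
}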

游客bwpva7tpetotk

Could someone share working code for the SMS service? Many thanks.

Problem: templateParamJson can not be blank

public static void main(String[] args) {
    DefaultProfile profile = DefaultProfile.getProfile("default", "LT....", "9u\......");
    IAcsClient client = new DefaultAcsClient(profile);
    CommonRequest request = new CommonRequest();
    //request.setProtocol(ProtocolType.HTTPS);
    request.setMethod(MethodType.POST);
    request.setDomain("dysmsapi.aliyuncs.com");
    request.setVersion("2017-05-25");
    request.setAction("SendBatchSms");
    request.putQueryParameter("PhoneNumberJson", "136");
    request.putQueryParameter("TemplateCode", "SMS_163847932");
    request.putQueryParameter("SignNameJson", "");
    request.putQueryParameter("TemplateParam", "{\"code\":\"1\"}");
    try {
        CommonResponse response = client.getCommonResponse(request);
        System.out.println(response.getData());
    } catch (ServerException e) {
        e.printStackTrace();
    } catch (ClientException e) {
        e.printStackTrace();
    }
}
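For reference, the error suggests the batch action expects the *Json parameters: if I recall the SendBatchSms API correctly, PhoneNumberJson, SignNameJson and TemplateParamJson all take JSON arrays (one element per recipient), rather than the TemplateParam parameter used by SendSms. A hedged, untested sketch; the phone number, sign name and access key values below are placeholders:

import com.aliyuncs.CommonRequest;
import com.aliyuncs.CommonResponse;
import com.aliyuncs.DefaultAcsClient;
import com.aliyuncs.IAcsClient;
import com.aliyuncs.exceptions.ClientException;
import com.aliyuncs.exceptions.ServerException;
import com.aliyuncs.http.MethodType;
import com.aliyuncs.profile.DefaultProfile;

public class SendBatchSmsSketch {
    public static void main(String[] args) {
        DefaultProfile profile = DefaultProfile.getProfile("default", "<accessKeyId>", "<accessKeySecret>");
        IAcsClient client = new DefaultAcsClient(profile);

        CommonRequest request = new CommonRequest();
        request.setMethod(MethodType.POST);
        request.setDomain("dysmsapi.aliyuncs.com");
        request.setVersion("2017-05-25");
        request.setAction("SendBatchSms");
        // All three *Json parameters are JSON arrays, one element per recipient.
        request.putQueryParameter("PhoneNumberJson", "[\"13600000000\"]");
        request.putQueryParameter("SignNameJson", "[\"<yourSignName>\"]");
        request.putQueryParameter("TemplateCode", "SMS_163847932");
        request.putQueryParameter("TemplateParamJson", "[{\"code\":\"1\"}]");

        try {
            CommonResponse response = client.getCommonResponse(request);
            System.out.println(response.getData());
        } catch (ServerException e) {
            e.printStackTrace();
        } catch (ClientException e) {
            e.printStackTrace();
        }
    }
}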

李博 bluemind

Hi everyone, a question: how does String's value field get initialized?

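For context, in OpenJDK 8 java.lang.String keeps its characters in a private final char[] value field that is assigned exactly once, in the constructor; for string literals the characters come from the class file's constant pool. A tiny runnable sketch of the same pattern (an illustration, not the actual JDK source):

import java.util.Arrays;

// Mirrors how String initializes its internal value field: a final array,
// assigned once in the constructor from a defensive copy of the input.
public final class MiniString {
    private final char[] value;

    public MiniString(char[] chars) {
        // Copy, so later changes to the caller's array cannot affect this object.
        this.value = Arrays.copyOf(chars, chars.length);
    }

    public int length() {
        return value.length;
    }

    public static void main(String[] args) {
        char[] src = {'a', 'b', 'c'};
        MiniString s = new MiniString(src);
        src[0] = 'z';                      // does not change s
        System.out.println(s.length());   // 3
    }
}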

小六码奴

Reading from multiple S3 buckets in the same region with Spark

I'm trying to read files from multiple S3 buckets. Originally the buckets were meant to be in different regions, but that doesn't appear to be possible, so I've now copied the second bucket into the same region as the first bucket to be read, which is also the region where the Spark job runs.

SparkSession setup:

val sparkConf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .registerKryoClasses(Array(classOf[Event]))

SparkSession.builder
  .appName("Merge application")
  .config(sparkConf)
  .getOrCreate()

The function called with the SQLContext from the created SparkSession:

private def parseEvents(bucketPath: String, service: String)(
  implicit sqlContext: SQLContext
): Try[RDD[Event]] =
  Try(
    sqlContext.read
      .option("codec", "org.apache.hadoop.io.compress.GzipCodec")
      .json(bucketPath)
      .toJSON
      .rdd
      .map(buildEvent(_, bucketPath, service).get)
  )

The main flow:

for {
  bucketOnePath <- buildBucketPath(config.bucketOne.name)
  _ <- log(s"Reading events from $bucketOnePath")
  bucketOneEvents: RDD[Event] <- parseEvents(bucketOnePath, config.service)
  _ <- log(s"Enriching events from $bucketOnePath with originating region data")
  bucketOneEventsWithRegion: RDD[Event] <- enrichEventsWithRegion(
    bucketOneEvents,
    config.bucketOne.region
  )
  bucketTwoPath <- buildBucketPath(config.bucketTwo.name)
  _ <- log(s"Reading events from $bucketTwoPath")
  bucketTwoEvents: RDD[Event] <- parseEvents(config.bucketTwo.name, config.service)
  _ <- log(s"Enriching events from $bucketTwoPath with originating region data")
  bucketTwoEventsWithRegion: RDD[Event] <- enrichEventsWithRegion(
    bucketTwoEvents,
    config.bucketTwo.region
  )
  _ <- log("Merging events")
  mergedEvents: RDD[Event] <- merge(bucketOneEventsWithRegion, bucketTwoEventsWithRegion)
  if mergedEvents.isEmpty() == false
  _ <- log("Grouping merged events by partition key")
  mergedEventsByPartitionKey: RDD[(EventsPartitionKey, Iterable[Event])] <- eventsByPartitionKey(
    mergedEvents
  )
  _ <- log(s"Storing merged events to ${config.outputBucket.name}")
  _ <- store(config.outputBucket.name, config.service, mergedEventsByPartitionKey)
} yield ()

The error I get in the logs (the bucket names have been changed here, but the real names do exist):

19/04/09 13:10:20 INFO SparkContext: Created broadcast 4 from rdd at MergeApp.scala:141
19/04/09 13:10:21 INFO FileSourceScanExec: Planning scan with bin packing, max size: 134217728 bytes, open cost is considered as scanning 4194304 bytes.
org.apache.spark.sql.AnalysisException: Path does not exist: hdfs:someBucket2

My stdout log shows how far the main code gets before failing:

Reading events from s3://someBucket/*/*/*/*/*.gz
Enriching events from s3://someBucket/*/*/*/*/*.gz with originating region data
Reading events from s3://someBucket2/*/*/*/*/*.gz
Merge failed: Path does not exist: hdfs://someBucket2

The strange thing is that whichever bucket I pick, the first read always works, while the second read always fails, regardless of the bucket. That tells me there is nothing wrong with the buckets themselves, but that something odd happens when reading from more than one S3 bucket. I have only found threads about reading multiple files from a single S3 bucket, not files from multiple S3 buckets.

小六码奴

How to efficiently read and parse a load of .gz files in an S3 folder with Spark on EMR

I'm trying to read all the files in a directory on S3 from a Spark application running on EMR. The data is stored in a typical layout such as "s3a://Some/path/yyyy/mm/dd/hh/blah.gz".

If I use a deeply nested wildcard (e.g. "s3a://SomeBucket/SomeFolder/*/*/*/*/*.gz"), performance is terrible: reading a few tens of thousands of small gzipped files takes about 40 minutes. My research suggests two other approaches should perform much better.

Using the hadoop.fs library (2.8.5), I tried to open every file path I supply:

private def getEventDataHadoop(
  eventsFilePaths: RDD[String]
)(implicit sqlContext: SQLContext): Try[RDD[String]] =
  Try(
    {
      val conf = sqlContext.sparkContext.hadoopConfiguration
      eventsFilePaths.map(eventsFilePath => {
        val p = new Path(eventsFilePath)
        val fs = p.getFileSystem(conf)
        val eventData: FSDataInputStream = fs.open(p)
        IOUtils.toString(eventData)
      })
    }
  )

These file paths are generated by the following code:

private[disneystreaming] def generateInputBucketPaths(
  s3Protocol: String,
  bucketName: String,
  service: String,
  region: String,
  yearsMonths: Map[String, Set[String]]
): Try[Set[String]] =
  Try(
    {
      val days = 1 to 31
      val hours = 0 to 23
      val dateFormatter: Int => String = buildDateFormat("00")
      yearsMonths.flatMap { yearMonth: (String, Set[String]) =>
        for {
          month: String <- yearMonth._2
          day: Int <- days
          hour: Int <- hours
        } yield s"$s3Protocol$bucketName/$service/$region/${dateFormatter(yearMonth._1.toInt)}/${dateFormatter(month.toInt)}/" +
          s"${dateFormatter(day)}/${dateFormatter(hour)}/*.gz"
      }.toSet
    }
  )

The hadoop.fs code fails because the Path class is not serializable, and I can't think of a way around that. That led me to the other approach, using AmazonS3Client, where I simply ask the client for all the file paths in a folder (prefix) and then parse the files into strings, which may well fail because they are compressed:

private def getEventDataS3(bucketName: String, prefix: String)(
  implicit sqlContext: SQLContext
): Try[RDD[String]] =
  Try(
    {
      import com.amazonaws.services.s3._, model._
      import scala.collection.JavaConverters._

      val request = new ListObjectsRequest()
      request.setBucketName(bucketName)
      request.setPrefix(prefix)
      request.setMaxKeys(Integer.MAX_VALUE)
      val s3 = new AmazonS3Client(new ProfileCredentialsProvider("default"))
      val objs: ObjectListing = s3.listObjects(request)
      // Note that this method returns truncated data if longer than the "pageLength" above.
      // You might need to deal with that.
      sqlContext.sparkContext
        .parallelize(objs.getObjectSummaries.asScala.map(_.getKey).toList)
        .flatMap { key =>
          Source
            .fromInputStream(s3.getObject(bucketName, key).getObjectContent: InputStream)
            .getLines()
        }
    }
  )

This code throws a null exception because the profile file cannot be null ("java.lang.IllegalArgumentException: profile file cannot be null"). Bear in mind this code runs on EMR inside AWS, so how do I supply the credentials it needs? How do other people run Spark jobs on EMR with this client?
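On the credentials point: on EMR the instance-profile credentials are normally picked up automatically by the default provider chain, so building the S3 client without an explicit ProfileCredentialsProvider usually avoids the "profile file cannot be null" error. A hedged Java sketch with the AWS SDK for Java v1; the bucket name and prefix are placeholders, and this only lists keys rather than doing the Spark read:

import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListObjectsV2Request;
import com.amazonaws.services.s3.model.ListObjectsV2Result;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class ListS3KeysOnEmr {
    public static void main(String[] args) {
        // DefaultAWSCredentialsProviderChain checks env vars, system properties,
        // the shared profile file and finally the EC2/EMR instance profile,
        // so no local credentials file is required on the cluster.
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withCredentials(new DefaultAWSCredentialsProviderChain())
                .build();

        ListObjectsV2Request request = new ListObjectsV2Request()
                .withBucketName("some-bucket")        // placeholder
                .withPrefix("service/region/2019/");  // placeholder

        ListObjectsV2Result result;
        do {
            result = s3.listObjectsV2(request);
            for (S3ObjectSummary summary : result.getObjectSummaries()) {
                System.out.println(summary.getKey());
            }
            // Listings are paginated; keep fetching until the result is complete.
            request.setContinuationToken(result.getNextContinuationToken());
        } while (result.isTruncated());
    }
}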

tuututut

A problem writing Parquet files with StreamingFileSink

When writing Parquet files with StreamingFileSink I need to use the forGenericRecord method, so how should I build or define the class that toAppendStream requires as its parameter? The code below passes the GenericRecord interface's class directly and fails with:

org.apache.flink.table.api.TableException: Arity [3] of result [ArrayBuffer(String, String, String)] does not match the number[1] of requested type [GenericType]

Table table = tableEnv.sqlQuery(tableSql);

// define Hdfs sink
StreamingFileSink<GenericRecord> streamingFileSink = StreamingFileSink
    .forBulkFormat(new Path(basePath), ParquetAvroWriters.forGenericRecord(avroSchema))
    .withBucketAssigner(new BasePathBucketAssigner<>())
    .withBucketCheckInterval(bucketCheckInterval)
    .build();

// toStream and addSink
tableEnv.toAppendStream(table, GenericRecord.class)
    .addSink(streamingFileSink);

tableEnv.execEnv().execute(executeName);
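One workaround that is often suggested (an untested sketch, not the definitive fix): convert the table to a DataStream of Row first, then map each Row to a GenericRecord before handing it to the sink. The field names and the schema JSON below are placeholders; the Avro Schema is parsed inside open() because Schema itself may not be serializable depending on the Avro version, and depending on the Flink version you may also need to provide Avro type information for the GenericRecord stream instead of letting it fall back to Kryo.

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.types.Row;

// Maps a 3-field Row (as in the question) onto a GenericRecord.
public class RowToGenericRecord extends RichMapFunction<Row, GenericRecord> {
    private final String schemaJson;
    private final String[] fieldNames;
    private transient Schema schema;

    public RowToGenericRecord(String schemaJson, String[] fieldNames) {
        this.schemaJson = schemaJson;
        this.fieldNames = fieldNames;
    }

    @Override
    public void open(Configuration parameters) {
        schema = new Schema.Parser().parse(schemaJson);
    }

    @Override
    public GenericRecord map(Row row) {
        GenericRecord record = new GenericData.Record(schema);
        for (int i = 0; i < fieldNames.length; i++) {
            record.put(fieldNames[i], row.getField(i));
        }
        return record;
    }
}

// Usage sketch (field names are hypothetical):
// tableEnv.toAppendStream(table, Row.class)
//     .map(new RowToGenericRecord(schemaJson, new String[]{"a", "b", "c"}))
//     .addSink(streamingFileSink);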

李博 bluemind

Why is String declared final?

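One common illustration of the reasoning (a toy example, not the JDK's own code): if string keys could mutate, or a subclass could change equals/hashCode behavior, containers and caches that rely on a stable hashCode would silently break; making String final and immutable rules that out. With a deliberately mutable key:

import java.util.HashMap;
import java.util.Map;

// A mutable "string-like" key, to show what a final, immutable String prevents.
final class MutableKey {
    private char[] chars;

    MutableKey(String s) { this.chars = s.toCharArray(); }

    void set(String s) { this.chars = s.toCharArray(); }

    @Override public int hashCode() { return new String(chars).hashCode(); }

    @Override public boolean equals(Object o) {
        return o instanceof MutableKey
            && new String(((MutableKey) o).chars).equals(new String(chars));
    }
}

public class WhyFinalStringDemo {
    public static void main(String[] args) {
        Map<MutableKey, Integer> map = new HashMap<>();
        MutableKey key = new MutableKey("yes");
        map.put(key, 1);

        key.set("no");                       // the key mutates after insertion
        System.out.println(map.get(key));    // usually null: the entry is "lost"

        // java.lang.String cannot be subclassed or mutated, so a String key's
        // hashCode never changes behind a HashMap's back.
        Map<String, Integer> safe = new HashMap<>();
        safe.put("yes", 1);
        System.out.println(safe.get("yes")); // 1
    }
}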

小六码奴

How do I add rows to PostgreSQL with sinatra-activerecord?

I have PostgreSQL installed.

1 - installed the ruby pg gem
2 - installed the ruby gems:

gem install activerecord
gem install sinatra-activerecord
gem install rake

3 - created the file that connects the database to app.rb:

app.rb

configure :development do
  set :database, {adapter: "postgresql", encoding: "unicode", database: "your_database_name", pool: 2, username: "your_username", password: "your_password"}
end

configure :production do
  set :database, {adapter: "postgresql", encoding: "unicode", database: "your_database_name", pool: 2, username: "your_username", password: "your_password"}
end

4 - created the model in app.rb:

class Article < ActiveRecord::Base
end

5 - set up migrations in the Rakefile:

require 'sinatra/activerecord'
require 'sinatra/activerecord/rake'
require './app'

6 - rake db:create_migration NAME=create_articles
7 - in the new file created by the migration:

class CreateArticles < ActiveRecord::Migration
  def change
    create_table :articles do |t|
      t.string :title
      t.string :content
      t.boolean :published, :default => false
      t.datetime :published_on, :required => false
      t.integer :likes, :default => 0
      t.timestamps null: false
    end
  end
end

8 - ran rake db:create and rake db:migrate

The database is created and shows up in the psql console. Now, how do I add rows to the database from Sinatra?

小六码奴

How do I resolve an "uninitialized constant Search::VIN" error?

I'm building an advanced search/filter for an Appointment model related to car maintenance. The relevant tables in schema.rb are:

create_table "appointments", force: :cascade do |t|
  t.string "VIN"
  t.string "owner_email"
  t.string "date"
  t.string "time"
  t.string "reason"
  t.string "parts_needed"
  t.string "hours_needed"
  t.string "cost"
  t.datetime "created_at", null: false
  t.datetime "updated_at", null: false
end

create_table "searches", force: :cascade do |t|
  t.string "VIN"
  t.string "email"
  t.string "after_date"
  t.string "before_date"
  t.string "time"
  t.datetime "created_at", null: false
  t.datetime "updated_at", null: false
end

In my search.rb model I define the search function:

class Search < ApplicationRecord
  def search_appointments
    appointments = Appointment.all
    # appointments = appointments.where("VIN LIKE ?", VIN) if VIN.present?   GIVES ERROR
    appointments = appointments.where("owner_email LIKE ?", email) if email.present?
    appointments = appointments.where("date >= ?", after_date) if after_date.present?
    appointments = appointments.where("date <= ?", before_date) if before_date.present?
    if !time=="Any"
      appointments = appointments.where("time LIKE ?", time) if time.present?
    end
    return appointments
  end
end

Then my show.html.erb displays the resulting filter:

Search Results
<% if @search.search_appointments.empty? %>
  <p> No Appointments Fit This Search</p>
<% else %>
  <%= @search.search_appointments.each do |a| %>
    Email: <%= a.owner_email%> </br>
    Date: <%= a.date%> </br>
    Time: <%= a.time%> </br>
    VIN: <%= a.VIN %> </br>
    </br> </br> </br>
  <% end %>
<% end %>
</br>
<%= link_to 'Return', @search, method: :delete %>

Everything works except the first filter attempt in the search.rb model (the commented-out line). If I uncomment that line and run a search, it errors out with: uninitialized constant Search::VIN. I don't understand why, since all the other filters work fine. The searches controller:

class SearchesController < ApplicationController
  def new
    @search = Search.new
  end

  def create
    @search = Search.create(search_params)
    redirect_to @search
  end

  def show
    @search = Search.find(params[:id])
  end

  def destroy
    @search = Search.find(params[:id])
    @search.destroy
    redirect_to admin_path
  end

  def search_params
    params.require(:search).permit(:VIN, :email, :after_date, :before_date, :time)
  end
end

My "new" page is a form where the user fills in the filter parameters and clicks a submit button that takes them to the show page listing the filtered appointments.

小六码奴

Why doesn't setting a default value for an existing column in an ActiveRecord migration apply to existing records in production?

If I add a default value to an existing column via an ActiveRecord migration, existing records are not affected when the change is deployed to production. I could go into the rails production console and iterate over every record, setting the new column's value to false on each one, but that is tedious and doesn't scale well.

class AddDefaultValuesToAFewColumns < ActiveRecord::Migration[5.2]
  def change
    change_column :downloads, :is_deleted, :boolean, :default => false
  end
end

create_table "downloads", force: :cascade do |t|
  t.string "version"
  t.string "comment"
  t.string "contributors"
  t.string "release_date"
  t.datetime "created_at", null: false
  t.datetime "updated_at", null: false
  t.string "download_url"
  t.boolean "is_deleted", default: false
end

The expected result is that is_deleted would return false when the records are queried from the rails console, instead of returning nil. Why is this the case, and what are the alternative solutions?

小六码奴

Custom-sort a string array by another string array - Ruby

I have an array that is currently sorted alphabetically, and I'm trying to sort it into a manual order given by another array of strings.

Current code:

list = ["gold","silver","bronze","steel","copper"]
list = list.sort { |a, b| a <=> b }

What I'm trying to achieve (with a blank entry as a separator):

list = ["gold","silver","bronze","steel","copper"]
sort_order = ["bronze","silver","gold","","copper","steel"]
list = list.sort_by sort_order

Output: bronze | silver | gold | - | copper | steel

Is this possible? I'm currently stuck on these error messages:

comparison of Integer with nil failed
comparison of String with String failed

KevinPan

Alibaba Cloud C++ SDK: QueryProductList returns incomplete results

Calling QueryProductList with the Alibaba Cloud C++ SDK (aliyun-openapi-cpp-sdk) returns no product details: the size of list is 0. The returned data structure is:

struct Data {
    struct ProductInfo {
        long gmtCreate;
        std::string description;
        std::string productName;
        int nodeType;
        int dataFormat;
        std::string productKey;
        int deviceCount;
    };
    int pageCount;
    int pageSize;
    int currentPage;
    int total;
    std::vector<ProductInfo> list;
};

By design, the list member should hold the product information, and the Python version of the SDK does return it, so I'm wondering whether this is a bug or whether I'm using it incorrectly.

爱吴

public static final String YES = "yes" - is the final modifier redundant with String?

The "YES" field is of type String, which is already a final class, so why does the field need an additional final modifier? Does it have any practical meaning at the code level?
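They are not redundant: final on the class String means it cannot be subclassed, and the String object itself is immutable, but final on the field is what stops the reference YES from being pointed at a different String later. A tiny sketch:

public class FinalFieldDemo {
    public static final String YES = "yes";   // the reference can never be reassigned
    public static String MAYBE = "yes";       // the object is still immutable, but...

    public static void main(String[] args) {
        MAYBE = "no";      // legal: without final, the field can point at another String
        // YES = "no";     // does not compile: cannot assign a value to a final variable
        System.out.println(YES + " " + MAYBE); // prints: yes no
    }
}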