Json


宋淑婷

AWS EMR: Error parsing parameter: Expected: '=', received: 'EOF' for input

I'm trying to create an EMR cluster from one of my EC2 instances. I typed the following command to launch my cluster:

aws emr create-cluster --release-label emr-5.20.0 --instance-groups instance-groups.json --auto-terminate

and so on... I got the following error:

Error parsing parameter '--instance-groups': Expected: '=', received: 'EOF' for input:
instance-groups.json
^

I've already tried --instance-groups=instance-groups.json, but I get the same error message.
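A likely cause, assuming instance-groups.json sits on local disk: the AWS CLI only reads a parameter's JSON from a file when the value is given as a file:// URL; a bare filename is parsed as shorthand key=value syntax, which is exactly what produces "Expected: '=', received: 'EOF'". A minimal sketch (the instance-group contents are placeholders, not taken from the question):

```python
import json
import shlex
import tempfile

# Hypothetical instance-groups definition - the real file isn't shown.
instance_groups = [
    {"InstanceGroupType": "MASTER", "InstanceCount": 1, "InstanceType": "m4.large"},
    {"InstanceGroupType": "CORE", "InstanceCount": 2, "InstanceType": "m4.large"},
]
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(instance_groups, f)
    path = f.name

# A bare value is parsed as shorthand syntax (key=value pairs); JSON read
# from disk must be passed as a file:// URL instead.
cmd = [
    "aws", "emr", "create-cluster",
    "--release-label", "emr-5.20.0",
    "--instance-groups", f"file://{path}",
    "--auto-terminate",
]
print(shlex.join(cmd))
```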

宋淑婷

How do I connect to BigQuery through Spark SQL?

data = pd.read_gbq(SampleQuery, project_id='XXXXXXXX', private_key='filename.json')

Here filename.json has the following format:

{
  "type": "service_account",
  "project_id": "projectId",
  "private_key_id": "privateKeyId",
  "private_key": "privateKey",
  "client_email": "clientEmail",
  "client_id": "clientId",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/clientEmail"
}

Now I need to port this code to pyspark, but I'm having a hard time finding out how to run the query with Spark SQL. I'm using an AWS EMR cluster to run this query!
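Whichever connector ends up running the query (Google's spark-bigquery-connector is the usual route on EMR), a frequent failure mode is a malformed or incomplete service-account key file. A stdlib sanity check, independent of Spark; the REQUIRED set below is my own reading of the format quoted above, not an official schema:

```python
import json

# Fields a service-account key file is expected to carry, per the format
# quoted in the question; treat this list as an assumption.
REQUIRED = {"type", "project_id", "private_key_id", "private_key",
            "client_email", "client_id", "token_uri"}

def check_service_account(path):
    """Return the set of required fields missing from the key file."""
    with open(path) as f:
        key = json.load(f)
    return REQUIRED - key.keys()

# Example with an in-memory stand-in for filename.json:
sample = {"type": "service_account", "project_id": "projectId",
          "private_key_id": "privateKeyId", "private_key": "privateKey",
          "client_email": "clientEmail", "client_id": "clientId",
          "token_uri": "https://oauth2.googleapis.com/token"}
missing = REQUIRED - sample.keys()
print(missing)  # an empty set when the file is complete
```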

宋淑婷

Why can't I change the "spark.driver.memory" value in AWS Elastic MapReduce?

I want to tune my Spark cluster on AWS EMR by changing spark.driver.memory; since my dataset is large, I can't keep the default value, which makes every Spark application crash. I tried editing the spark-defaults.conf file manually on the master machine, and I also tried configuring it directly with a JSON file on the EMR dashboard when creating the cluster. This is the JSON file used:

[
  {
    "Classification": "spark-defaults",
    "Properties": {
      "spark.driver.memory": "7g",
      "spark.driver.cores": "5",
      "spark.executor.memory": "7g",
      "spark.executor.cores": "5",
      "spark.executor.instances": "11"
    }
  }
]

After using the JSON file, the configuration shows up correctly in spark-defaults.conf, but on the Spark dashboard spark.driver.memory always stays at the default 1000M, while the other values are modified correctly. Has anyone run into the same problem?
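A plausible explanation (an assumption here, since the question doesn't say how the applications are submitted): the driver JVM's heap is fixed at launch, so spark.driver.memory only takes effect if it reaches the driver before it starts; a tool that launches the driver itself with its own default (a notebook, or a hard-coded --driver-memory) will still show 1000M regardless of spark-defaults.conf. Passing the value explicitly at submit time sidesteps the precedence question:

```
spark-submit --driver-memory 7g --executor-memory 7g your_app.py
```

(your_app.py is a placeholder for the actual application.)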

宋淑婷

Looping over an API response with .each

I want to be able to parse an API response in a loop. I have this in a controller method:

@payout_batch = PayPal::SDK::REST::Payout.get('xxxxxxx')
logger.info "Got Payout Batch Status[#{@payout_batch.batch_header.payout_batch_id}]"
rescue ResourceNotFound => err
  logger.error "Payout Batch not Found"
end

I can display a single result like this:

<%= @payout_batch.batch_header.amount.value %>

but I'd like to be able to loop over everything with .each, if that's feasible... I've tried several approaches, but nothing seems to work:

<% @payout_batch.batch_header.each do |x| %>
  <%= (x["amount"]) %>
<% end %>

and many similar variants. I also tried defining the response with json = JSON.parse(@payout_batch) and looping over json, but that doesn't seem to work properly either.

Question: how do I output the response in the view via a loop?
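Part of the problem is shape: batch_header is a single object, not a collection, so .each over it (or JSON.parse of an SDK object) won't behave. The generic pattern is to reduce the response to a plain hash first and then iterate its field/value pairs, sketched below with a stand-in response (in the Ruby view, the assumed equivalent is @payout_batch.to_hash, or .to_json followed by JSON.parse, then an .each over the resulting hash):

```python
import json

# Stand-in for the PayPal response: batch_header is one object, not a
# list, which is why calling .each on it directly fails in the view.
payout_batch = {
    "batch_header": {
        "payout_batch_id": "xxxxxxx",
        "batch_status": "SUCCESS",
        "amount": {"value": "12.34", "currency": "USD"},
    }
}

# Once reduced to a plain hash/dict, field/value iteration is direct.
for field, value in payout_batch["batch_header"].items():
    print(field, json.dumps(value))
```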

宋淑婷

JSON to CSV, skipping certain columns and reordering the rest - Ruby

I have a working script that nicely converts a JSON file to a CSV file, but I'm trying to edit the script to make some changes to the CSV before saving it, so far without any success. This is my current conversion script:

require 'csv'
require 'json'
require 'set'

def get_recursive_keys(hash, nested_key=nil)
  hash.each_with_object([]) do |(k,v),keys|
    k = "#{nested_key}.#{k}" unless nested_key.nil?
    if v.is_a? Hash
      keys.concat(get_recursive_keys(v, k))
    else
      keys << k
    end
  end
end

json = JSON.parse(File.open(ARGV[0]).read)
headings = Set.new
json.each do |hash|
  headings.merge(get_recursive_keys(hash))
end

headings = headings.to_a
CSV.open(ARGV[0] + '.csv', 'w') do |csv|
  csv << headings
  json.each do |hash|
    row = headings.map do |h|
      v = hash.dig(*h.split('.'))
      v.is_a?(Array) ? v.join(',') : v
    end
    csv << row
  end
end

I run it with this command:

for file in directory/*; do ruby json-to-csv.rb "$file"; done

How can I edit this script to: remove columns with certain headings, e.g. "score" and "original_name", and reorder the remaining columns alphabetically from left to right - if that's possible? Everything I've tried so far completely breaks the script - where is the best place to start making these changes?
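Both requested changes are edits to the headings array before anything is written: reject the unwanted names, then sort. The transformation is sketched below in Python so the logic is explicit (SKIP and the sample rows are my own placeholders):

```python
import csv
import io

SKIP = {"score", "original_name"}  # columns to drop, from the question

# Hypothetical already-flattened rows.
data = [
    {"name": "b.png", "score": 3, "original_name": "x.png", "width": 10},
    {"name": "a.png", "score": 5, "original_name": "y.png", "height": 20},
]

# Collect every key, drop the unwanted ones, then order alphabetically.
headings = sorted({k for row in data for k in row} - SKIP)

out = io.StringIO()
w = csv.writer(out)
w.writerow(headings)
for row in data:
    w.writerow([row.get(h) for h in headings])
print(headings)  # ['height', 'name', 'width']
```

In the Ruby script, the analogous one-line edit is replacing headings = headings.to_a with headings = (headings.to_a - ['score', 'original_name']).sort (assuming the headings to drop are exactly those top-level names); the row-building loop already follows whatever order headings has.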

前端小能手

Mpvue: setting a TabBar icon reports that the file does not exist

I set the TabBar images in app.json, and it reports that the files do not exist. Relevant code in app.json:

"tabBar": { }

How can I fix this?
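A common cause in mpvue projects is that the icon files must live under static/ (which is copied into the dist build), and iconPath/selectedIconPath must be paths relative to the project root without a leading slash. A sketch of the expected shape; the page path and file names below are placeholders, not taken from the question:

```json
"tabBar": {
  "color": "#999999",
  "selectedColor": "#1296db",
  "list": [
    {
      "pagePath": "pages/index/main",
      "text": "Home",
      "iconPath": "static/images/home.png",
      "selectedIconPath": "static/images/home-active.png"
    }
  ]
}
```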

李博bluemind

Should I switch from MySQL to MongoDB?

The scenario is as follows: the total data volume is large, but each individual user's data volume is small. The mobile app needs an offline local database that synchronizes with the server-side database; I initially plan to use JSON as the intermediate sync format. Data consistency is required. We currently use MySQL; the main reasons for considering MongoDB are fast queries and good JSON support. Also, I know little about MongoDB, so I'd like to ask: which scenarios are better suited to MongoDB, and which to a traditional relational database?

This question and the accepted answer below come from the Yunqi community [Redis & MongoDB community group]. https://yq.aliyun.com/articles/690084 Click the link to join the community.

hbase小能手

Alibaba Cloud OSS reports InvalidPartOrder (invalid part order); we store JSON, and the program uses Flume to collect Kafka data into OSS. Is there a solution?

Alibaba Cloud OSS reports InvalidPartOrder (invalid part order); we store JSON, and the program uses Flume to collect Kafka data into OSS. Is there a solution?

python小能手

Iterating a deeply nested pandas JSON object?

I have a very large JSON object in the format:

[
  { "A": "value", "TIME": 1551052800000, "C": 35, "D": 36, "E": 34, "F": 35, "G": 33 },
  { "B": "value", "TIME": 1551052800000, "C": 36, "D": 56, "E": 44, "F": 75, "G": 38 },
  ...
]

converted to JSON with the help of pandas: df.to_json(orient='records'). I want to loop over the JSON body, update specific keys in this JSON object, and send it back to the client through my API. I want to do something like:

for i = 0
    objecti = updateCaclulations
    return i
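One way to avoid looping over a JSON string entirely: keep the rows as a list of dicts via df.to_dict(orient='records') instead of to_json, mutate them, and serialize once at the end. A stdlib sketch; update_calculations and the derived key are my own stand-ins, since the real calculation isn't shown:

```python
import json

# Stand-in for df.to_dict(orient='records').
records = [
    {"A": "value", "TIME": 1551052800000, "C": 35, "D": 36},
    {"B": "value", "TIME": 1551052800000, "C": 36, "D": 56},
]

def update_calculations(record):
    """Hypothetical recalculation: bump C and derive a new key."""
    record["C"] += 1
    record["C_D_SUM"] = record["C"] + record["D"]
    return record

updated = [update_calculations(r) for r in records]
body = json.dumps(updated)  # what would be sent back through the API
```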

python小能手

Scraping web data from JSON with Python

I want to scrape timetable data from an API. The returned data is in JSON format, and I'm using Python. I tried the following code:

snav_timetable_url = "https://booking.snav.it/api/v1/rates/1040/2019-02-25/1042/2019-02-25?lang=1"
fh = urllib.request.urlopen(snav_timetable_url)
snav_timetable = fh.read().decode('utf-8')
fh.close()
snav_timetable_data = json.loads(snav_timetable[len(snav_timetable)-2])
snav_timetable_data_cleaned = []
for departure in snav_timetable_data 'data':
    snav_timetable_data_cleaned.append({
        'COMPANY': 'Snav',
        'CODICE CORSA': departure['coditinera'],
        'DEPARTURE DATE TIME': departure['strDatapart'],
        'ARRIVAL DATE TIME': departure['strDatarri']
    })

but got the error:

raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
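Two things in the snippet stand out: json.loads is given a single character (snav_timetable[len(snav_timetable)-2]) instead of the whole body, and for departure in snav_timetable_data 'data': is not valid Python. A corrected sketch, using an inline sample instead of the live URL so it stands alone; the field names are copied from the question, but the real response's shape (a top-level "data" key) is an assumption:

```python
import json

# Inline stand-in for fh.read().decode('utf-8') from the API.
snav_timetable = json.dumps({
    "data": [
        {"coditinera": 77, "strDatapart": "2019-02-25 08:00",
         "strDatarri": "2019-02-25 09:10"},
    ]
})

# Parse the whole response body, not a single character of it.
snav_timetable_data = json.loads(snav_timetable)

snav_timetable_data_cleaned = []
for departure in snav_timetable_data["data"]:  # index with ["data"]
    snav_timetable_data_cleaned.append({
        "COMPANY": "Snav",
        "CODICE CORSA": departure["coditinera"],
        "DEPARTURE DATE TIME": departure["strDatapart"],
        "ARRIVAL DATE TIME": departure["strDatarri"],
    })
```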

游客sebhzhf6klgxw

IoT Platform: device data collection and the MODBUS protocol

In the Alibaba Cloud IoT Platform documentation I saw that sub-devices can connect to the IoT platform through a gateway, and the sub-device channel can be configured to use the MODBUS protocol. My questions: 1. With a MODBUS channel configured, can the platform collect device data directly and convert it into standard ALINK JSON? 2. If a directly connected device also uses MODBUS, can it be handled in a similar way, or do I need to develop that myself with the SDK?

k8s小能手

Which data formats does the Kubernetes API server support?

When accessing the API server directly (i.e. not with kubectl, but with direct HTTP requests), which resource specification formats does the API server support? In all the examples I've seen so far, the resource specifications are JSON (e.g. here). But I can't find any general information about this. Does the API server also accept resource specifications in other formats, such as YAML or protobuf? Likewise, when the API server returns resources in response to a GET request, are the resources always returned as JSON, or are other formats supported?
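For the built-in types, the API server handles this through standard HTTP content negotiation: JSON is the default, and application/yaml and application/vnd.kubernetes.protobuf are accepted both for request bodies (Content-Type) and for responses (Accept); custom resources are the main exception, as they are not served as protobuf. For example, to get pods back as YAML:

```
GET /api/v1/namespaces/default/pods HTTP/1.1
Accept: application/yaml
```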

李博bluemind

"[{\"attachId\":192659811086528512}]"怎么把这个json数组中的id变为字符串

3."[{"attachId":192659811086528512}]"怎么把这个json数组中的id变为字符串"[{"attachId":"192659811086528512"}]"要这种形式

李博bluemind

I want to put the keys and values of a map into a JSON-typed column in the database - how do I extract the map's values to fill it in?

I want to put the keys and values of a map into a JSON-typed column in the database - how do I extract the map's values to fill it in?
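The usual pattern is to serialize the whole map to a JSON string and bind it as a single parameter; there is no need to pull the values out one by one, since the serializer walks the map itself. Sketched with the stdlib sqlite3 as a stand-in (the real database, table, and column names aren't given in the question):

```python
import json
import sqlite3

the_map = {"name": "widget", "count": 3}  # hypothetical map

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, payload TEXT)")

# Serialize all key/value pairs in one step and bind as one parameter.
conn.execute("INSERT INTO items (payload) VALUES (?)", (json.dumps(the_map),))

# Reading a single value back out of the JSON column:
payload = conn.execute("SELECT payload FROM items").fetchone()[0]
value = json.loads(payload)["count"]
```

JSON column types in MySQL or PostgreSQL accept the same serialized string through a bound parameter.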

社区小助手

A quick question: a JSON string contains duplicate keys that differ only in case. Parsing it with play.api.libs.json.Json.parse raises no error, but when spark-sql uses org.openx.data.jsonserde.JsonSerDe, the keys are automatically lowercased and the putOnce function then fails with "Duplicate key". Has anyone run into this, and what is a good fix? For now I can only drop one of the duplicate keys during the initial parse.

A quick question: a JSON string contains duplicate keys that differ only in case. Parsing it with play.api.libs.json.Json.parse raises no error, but when spark-sql uses org.openx.data.jsonserde.JsonSerDe, the keys are automatically lowercased and the putOnce function then fails with "Duplicate key". Has anyone run into this, and what is a good fix? For now I can only drop one of the duplicate keys during the initial parse.
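Since the SerDe lowercases keys, renaming (rather than dropping) the colliding keys up front preserves the data. In Python's json, object_pairs_hook exposes every raw pair before they collapse into a dict, so case-insensitive collisions can be detected and suffixed; a sketch of the same preprocessing idea, which would need porting to the Scala/Play side of the pipeline:

```python
import json

def dedupe_case_insensitive(pairs):
    """Rename keys that collide case-insensitively (key -> key_2, ...)."""
    seen = {}
    out = {}
    for key, value in pairs:
        folded = key.lower()
        seen[folded] = seen.get(folded, 0) + 1
        out[key if seen[folded] == 1 else f"{key}_{seen[folded]}"] = value
    return out

raw = '{"userId": 1, "userid": 2}'
cleaned = json.loads(raw, object_pairs_hook=dedupe_case_insensitive)
print(cleaned)  # {'userId': 1, 'userid_2': 2}
```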

python小能手

Using a class-based UpdateView with a model that has 2 primary keys

I'm building an application with two primary keys (it's a legacy database). Basically, what I want to do is click a table element and be redirected to another page based on the primary keys of the model. I haven't found anything on how to do this with Django class-based views. Here is my code:

models.py

class RmDadoscarteira(models.Model):
    dtcalculo = models.DateField(db_column='dtCalculo', primary_key=True)  # Field name made lowercase.
    cdcarteira = models.CharField(db_column='cdCarteira', max_length=50)  # Field name made lowercase.
    nmcarteira = models.CharField(db_column='nmCarteira', max_length=255, blank=True, null=True)  # Field name made lowercase.
    pl = models.FloatField(db_column='PL', blank=True, null=True)  # Field name made lowercase.
    retornocota1d = models.FloatField(db_column='RetornoCota1d', blank=True, null=True)  # Field name made lowercase.
    var = models.FloatField(db_column='Var', blank=True, null=True)  # Field name made lowercase.
    var_lim = models.FloatField(db_column='VaR_Lim', blank=True, null=True)  # Field name made lowercase.
    var_variacao1d = models.FloatField(db_column='VaR_Variacao1d', blank=True, null=True)  # Field name made lowercase.
    var_variacao63d = models.FloatField(db_column='VaR_Variacao63d', blank=True, null=True)  # Field name made lowercase.
    var_consumolimite = models.FloatField(db_column='VaR_ConsumoLimite', blank=True, null=True)  # Field name made lowercase.
    stress = models.FloatField(db_column='Stress', blank=True, null=True)  # Field name made lowercase.
    stress_lim = models.FloatField(db_column='Stress_Lim', blank=True, null=True)  # Field name made lowercase.
    stress_variacao1d = models.FloatField(db_column='Stress_Variacao1d', blank=True, null=True)  # Field name made lowercase.
    stress_variacao63d = models.FloatField(db_column='Stress_Variacao63d', blank=True, null=True)  # Field name made lowercase.
    stress_consumolimite = models.FloatField(db_column='Stress_ConsumoLimite', blank=True, null=True)  # Field name made lowercase.
    grupo = models.CharField(db_column='Grupo', max_length=20, blank=True, null=True)  # Field name made lowercase.
    var_pl = models.FloatField(db_column='VaR_PL', blank=True, null=True)  # Field name made lowercase.
    stress_pl = models.FloatField(db_column='Stress_PL', blank=True, null=True)  # Field name made lowercase.
    objetos = models.Manager()

    class Meta:
        managed = False
        db_table = 'RM_DadosCarteira'
        unique_together = (('dtcalculo', 'cdcarteira'),)

views.py

from django.shortcuts import render, HttpResponse
from .models import *
import json
import pandas as pd
from django.views.generic.base import TemplateView
from django.urls import reverse_lazy
from django.views.generic.edit import UpdateView

# View do relatorio Flagship Solutions
#def FlagshipSolutions(request):
#    render(request, 'dash_solutions_completo.html')

class VisualizaFundoSolutions(UpdateView):
    template_name = "prototipo_fundo.html"
    model = RmDadoscarteira
    context_object_name = 'fundos_metricas'
    fields = '__all__'
    success_url = reverse_lazy("portal_riscos:dash_solutions")

def FlagshipSolutions(request):
    # Queryset Tabela Diaria
    query_carteira = RmDadoscarteira.objetos.filter(grupo='Abertos')
    # Data Mais recente
    dt_recente = str(query_carteira.latest('dtcalculo').dtcalculo)
    # Filtrando queryset para data mais recente
    query_carteira = query_carteira.filter(dtcalculo=dt_recente)
    # Preparando os dados para o grafico de utilizacao de var e stress
    util_var = [round(obj['var_consumolimite'] * 100, 2) for obj in query_carteira.values()]
    util_stress = [round(obj['stress_consumolimite'] * 100, 2) for obj in query_carteira.values()]
    # Queryset Historico Graficos
    ### Definir um filtro de data
    query_hist = RmHistoricometricas.objetos.filter(grupo='Abertos').filter(dtcalculo__gte='2018-07-11')
    ### Queryset temporario ate dados de retorno e var estarem iguais
    query_data = RmHistoricometricas.objetos.filter(grupo='Abertos').filter(dtcalculo__gte='2018-07-11').filter(info='% VaR')
    ## Data Frames de Saida
    # Data Frame Historico
    df_hist = pd.DataFrame(list(query_hist.values()))
    # Criando uma chave de concateno
    df_hist['concat'] = df_hist['dtcalculo'].astype(str) + df_hist['cdcarteira']
    df_hist['valor'] = round(df_hist['valor'] * 100, 2)
    # Data Frame VaR PL Historico
    df_hist_var = df_hist[df_hist['info'] == '% VaR']
    # Data Frame Stress PL Historico
    df_hist_stress = df_hist[df_hist['info'] == '% Stress']
    # Data Frame Consumo VaR
    df_hist_var_cons = df_hist[df_hist['info'] == '% Utilização Limite VaR']
    # Data Frame Consumo Stress
    df_hist_stress_cons = df_hist[df_hist['info'] == '% Utilização Limite Stress']
    # Data Frame de Retorno
    df_hist_ret = df_hist[df_hist['info'] == 'Retorno']
    # Obtendo todas as datas (removendo duplicados)
    #datas = df_hist.dtcalculo.drop_duplicates(keep='first').reset_index(drop=True)
    datas = pd.DataFrame(list(query_data.values()))
    datas = datas.dtcalculo.drop_duplicates(keep='first').reset_index(drop=True)
    # Obtendo o nome de todos os fundos (removendo duplicados)
    fundos = list(df_hist.cdcarteira.drop_duplicates(keep='first').reset_index(drop=True))
    # Criando um data frame unico com todas as informacoes a serem utilizadas
    df_hist_saida = pd.DataFrame(columns=['dtcalculo', 'cdcarteira'])
    # Criando um data frame com o numero de linhas igual a fundos * datas
    for fundo in fundos:
        # Data Frame temporario
        df_temp = pd.DataFrame(columns=['dtcalculo', 'cdcarteira'])
        # Copiando as datas
        df_temp['dtcalculo'] = datas
        # Inserindo o nome do fundo
        df_temp['cdcarteira'] = [fundo] * len(datas)
        # Inserindo dados do temp no data frame de saida
        df_hist_saida = df_hist_saida.append(df_temp)
    # Resetando index e criando uma chave de concateno para o dataframe de saida
    df_hist_saida = df_hist_saida.reset_index(drop=True)
    df_hist_saida['concat'] = df_hist_saida['dtcalculo'].astype(str) + df_hist_saida['cdcarteira']
    # Criando coluna de var pl
    df_hist_saida = df_hist_saida.merge(df_hist_var[['concat', 'valor']], on='concat', how='left')
    df_hist_saida = df_hist_saida.rename(columns={'valor': 'var_pl'})
    # Criando coluna de var pl
    df_hist_saida = df_hist_saida.merge(df_hist_stress[['concat', 'valor']], on='concat', how='left')
    df_hist_saida = df_hist_saida.rename(columns={'valor': 'stress_pl'})
    # Criando coluna de consumo var
    df_hist_saida = df_hist_saida.merge(df_hist_var_cons[['concat', 'valor']], on='concat', how='left')
    df_hist_saida = df_hist_saida.rename(columns={'valor': 'var_cons'})
    # Criando coluna de consumo stress
    df_hist_saida = df_hist_saida.merge(df_hist_stress_cons[['concat', 'valor']], on='concat', how='left')
    df_hist_saida = df_hist_saida.rename(columns={'valor': 'stress_cons'})
    # Criando coluna de retorno
    df_hist_saida = df_hist_saida.merge(df_hist_stress_cons[['concat', 'valor']], on='concat', how='left')
    df_hist_saida = df_hist_saida.rename(columns={'valor': 'retorno'})
    # Removendo a coluna concatenado
    df_hist_saida = df_hist_saida.drop('concat', axis=1)
    # Substituindo NaN por none
    df_hist_saida = df_hist_saida.fillna('None')
    # Criando dicionarios de saida
    dict_var_pl_hist = dict()
    dict_stress_pl_hist = dict()
    dict_var_cons_hist = dict()
    dict_stress_cons_hist = dict()
    for fundo in fundos:
        dict_var_pl_hist[fundo] = list(df_hist_saida[df_hist_saida['cdcarteira'] == fundo].var_pl)
        dict_stress_pl_hist[fundo] = list(df_hist_saida[df_hist_saida['cdcarteira'] == fundo].stress_pl)
        dict_var_cons_hist[fundo] = list(df_hist_saida[df_hist_saida['cdcarteira'] == fundo].var_cons)
        dict_stress_cons_hist[fundo] = list(df_hist_saida[df_hist_saida['cdcarteira'] == fundo].stress_cons)
    # Lista contendo todas as datas utilizadas
    lista_datas = list(datas.astype(str))
    # Alertas
    alerta_1 = [70] * len(datas)
    alerta_2 = [85] * len(datas)
    alerta_3 = [100] * len(datas)
    # Flagship
    context = {'query_carteira': query_carteira,
               'fundos': json.dumps(fundos),
               'util_var': json.dumps(util_var),
               'util_stress': json.dumps(util_stress),
               'dict_var_pl_hist': json.dumps(dict_var_pl_hist, default=dict),
               'dict_stress_pl_hist': json.dumps(dict_stress_pl_hist, default=dict),
               'dict_var_cons_hist': json.dumps(dict_var_cons_hist, default=dict),
               'dict_stress_cons_hist': json.dumps(dict_stress_cons_hist, default=dict),
               'datas_hist': json.dumps(lista_datas, default=str),
               'alerta_1': json.dumps(alerta_1),
               'alerta_2': json.dumps(alerta_2),
               'alerta_3': json.dumps(alerta_3),
               }
    return render(request, 'dash_solutions_completo.html', context)

urls.py

# Importamos a função index() definida no arquivo views.py
from portal_riscos.views import *
from django.urls import path
from django.contrib.auth.views import LoginView

app_name = 'portal_riscos'

# urlpatterns contém a lista de roteamento URLs
urlpatterns = [
    # Dashboard Solutions
    path('', FlagshipSolutions, name='dash_solutions'),
    path('solutions_fundos/<pk>/<cdcarteira>', VisualizaFundoSolutions.as_view(), name='solutions_fundos')
]

Part of the table I want to click and be redirected from:

class="btn btn-light btn-sm">Atualizar</a>

This is the error I get:

Environment:

Request Method: GET
Request URL:
Django Version: 2.1.2
Python Version: 3.6.1
Installed Applications:
['django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'portal_riscos', 'widget_tweaks', 'django.contrib.humanize']
Installed Middleware:
['django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware']

Traceback:
File "C:\Users\TBMEPYG\AppData\Local\Continuum\Anaconda3\lib\site-packages\django\core\handlers\exception.py" in inner
  response = get_response(request)
File "C:\Users\TBMEPYG\AppData\Local\Continuum\Anaconda3\lib\site-packages\django\core\handlers\base.py" in _get_response
  response = self.process_exception_by_middleware(e, request)
File "C:\Users\TBMEPYG\AppData\Local\Continuum\Anaconda3\lib\site-packages\django\core\handlers\base.py" in _get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "C:\Users\TBMEPYG\AppData\Local\Continuum\Anaconda3\lib\site-packages\django\views\generic\base.py" in view
  return self.dispatch(request, *args, **kwargs)
File "C:\Users\TBMEPYG\AppData\Local\Continuum\Anaconda3\lib\site-packages\django\views\generic\base.py" in dispatch
  return handler(request, *args, **kwargs)
File "C:\Users\TBMEPYG\AppData\Local\Continuum\Anaconda3\lib\site-packages\django\views\generic\edit.py" in get
  self.object = self.get_object()
File "C:\Users\TBMEPYG\AppData\Local\Continuum\Anaconda3\lib\site-packages\django\views\generic\detail.py" in get_object
  obj = queryset.get()
File "C:\Users\TBMEPYG\AppData\Local\Continuum\Anaconda3\lib\site-packages\django\db\models\query.py" in get
  (self.model._meta.object_name, num)

Exception Type: MultipleObjectsReturned at /solutions_fundos/2019-01-14/FICFI52865
Exception Value: get() returned more than one RmDadoscarteira -- it returned 21!
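The traceback points at the cause: UpdateView's default get_object() runs queryset.get() using only the pk URL argument, and since Django has no real composite-primary-key support, looking up by dtcalculo alone matches 21 rows. A sketch (not tested against the project above) of overriding get_object to use both URL parameters from the solutions_fundos/<pk>/<cdcarteira> pattern:

```python
class VisualizaFundoSolutions(UpdateView):
    template_name = "prototipo_fundo.html"
    model = RmDadoscarteira
    context_object_name = 'fundos_metricas'
    fields = '__all__'
    success_url = reverse_lazy("portal_riscos:dash_solutions")

    def get_object(self, queryset=None):
        # Look the row up by both halves of the composite key, taken from
        # the URL pattern solutions_fundos/<pk>/<cdcarteira>.
        return self.get_queryset().get(
            dtcalculo=self.kwargs['pk'],
            cdcarteira=self.kwargs['cdcarteira'],
        )
```

Note that saving through this view still goes through the single declared primary key, so updates against a true composite-key table may need the same two-field treatment in form_valid.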

python小能手

Reading values from an array of multiple objects (JSON)

I'm trying to build an HTTP API with Python. I have an array of objects in JSON format, and I want to read the values of one of the objects. In my Python script I append database table rows to the array of objects, and I'm looking for a way to select a single value from one of those objects. I have a function:

cur.execute()
row_headers = [x[0] for x in cur.description]
response = cur.fetchall()
json_data = []
for result in response:
    json_data.append(dict(zip(row_headers, result)))
return jsonify(json_data)

The return looks like:

[
  { "ID": 123, "CODE": 4117, "STATUS": "off" },
  { "ID": 345, "CODE": 5776, "STATUS": "on" }
]

I'm looking for a function(inputID):

where ID = inputID
set currentcode =
set currentstatus =
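The lookup being asked for is a linear scan over the list of dicts; a minimal sketch (find_by_id is my own name, and the sample data is copied from the return shown above):

```python
# Stand-in for the API's json_data built from the cursor.
json_data = [
    {"ID": 123, "CODE": 4117, "STATUS": "off"},
    {"ID": 345, "CODE": 5776, "STATUS": "on"},
]

def find_by_id(records, input_id):
    """Return the first record whose ID matches, or None."""
    return next((r for r in records if r["ID"] == input_id), None)

match = find_by_id(json_data, 345)
currentcode = match["CODE"]      # 5776
currentstatus = match["STATUS"]  # "on"
```

If this lookup happens often, building a dict keyed by ID once ({r["ID"]: r for r in json_data}) turns each lookup into O(1).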

_无用_

Device CoAP symmetric-encryption direct access: device authentication always returns a 4.00 error (the request payload is illegal)

I'm using the node-coap library for CoAP symmetric-encryption direct device access, and the device authentication step always returns a 4.00 error (the payload sent in the request is illegal). What is going on? Thanks. Code:

const coap = require('coap');
var req = coap.request({
    "host": "a1NQ16kP2ol.coap.cn-shanghai.link.aliyuncs.com",
    "port": 5682,
    "method": "POST",
    "pathname": "/auth",
    "headers": {
        "Accept": "application/json",
        "Content-Format": "application/json"
    }
});

var pp = {"productKey": "xxxx", "deviceName": "xxxxxx", "clientId": "A44E313FEAEC", "sign": "1ed1d1f3472ef656a8c672afc89b3f48"};
req.write(JSON.stringify(pp));
req.on('response', function(res) {
    console.log(JSON.stringify(res));
    res.pipe(process.stdout)
});

漂流-人生

The IoT rules engine cannot insert into the cloud database

The device reports JSON payloads, the SQL test passes, and forwarding is enabled.