list

#list#

Guest akib4d623tkes

Server-side encryption problem with an OSS bucket

I get the following error:

InvalidArgument: KMSMasterKeyID is not applicable if user is not in white list
RequestId: 5D2EBE63DD51CDD036C4291C
Host: wang-william.oss-cn-shanghai.aliyuncs.com

I obtained the KMS key from Key Management Service, and the key region I selected is correct, so why does it say the user is not on the whitelist?

永远的SSS

Newly added Greenplum mirrors fail to start

Greenplum was deployed without mirrors; I added them afterwards. The add succeeded, but two of the mirrors fail to start, as shown below:

[gpadmin@gpm ~]$ gpstate -m
20190626:14:27:13:031156 gpstate:gpm:gpadmin-[INFO]:-Starting gpstate with args: -m
20190626:14:27:13:031156 gpstate:gpm:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.3.9.0 build 1'
20190626:14:27:13:031156 gpstate:gpm:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.2.15 (Greenplum Database 4.3.9.0 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on Aug 8 2016 05:36:26'
20190626:14:27:13:031156 gpstate:gpm:gpadmin-[INFO]:-Obtaining Segment details from master...
20190626:14:27:13:031156 gpstate:gpm:gpadmin-[INFO]:--------------------------------------------------------------
20190626:14:27:13:031156 gpstate:gpm:gpadmin-[INFO]:--Current GPDB mirror list and status
20190626:14:27:13:031156 gpstate:gpm:gpadmin-[INFO]:--Type = Group
20190626:14:27:13:031156 gpstate:gpm:gpadmin-[INFO]:--------------------------------------------------------------
20190626:14:27:13:031156 gpstate:gpm:gpadmin-[INFO]:-   Mirror   Datadir   Port   Status   Data Status
20190626:14:27:13:031156 gpstate:gpm:gpadmin-[WARNING]:-gpseg2   /home/gpadmin/gpdata/gpdatam1/gpseg0   41000   Failed   <<<<<<<<
20190626:14:27:13:031156 gpstate:gpm:gpadmin-[WARNING]:-gpseg2   /home/gpadmin/gpdata/gpdatam1/gpseg1   41001   Failed   <<<<<<<<
20190626:14:27:13:031156 gpstate:gpm:gpadmin-[INFO]:-   gpseg1   /home/gpadmin/gpdata/gpdatam1/gpseg2   41000   Passive   Synchronized
20190626:14:27:13:031156 gpstate:gpm:gpadmin-[INFO]:-   gpseg1   /home/gpadmin/gpdata/gpdatam1/gpseg3   41001   Passive   Synchronized
20190626:14:27:13:031156 gpstate:gpm:gpadmin-[INFO]:--------------------------------------------------------------
20190626:14:27:13:031156 gpstate:gpm:gpadmin-[WARNING]:-2 segment(s) configured as mirror(s) have failed

I then ran a full recovery with gprecoverseg -F, which also reported an error. Has anyone run into this situation? How should it be handled?

心意乱

[@aliyun][¥99] What exactly does the Spring IoC container do under the hood?

The problem: the array in my configuration class is in the order in which I created it, but when I take the array out of the container the order is scrambled. Printing the address values shows the two objects are not at the same address. When I inject a single Student, however, the address on the consuming side does not change. How can I fix this ordering problem? My conclusion so far: when I put an array (or a List or Set) into the container, the Spring IoC container does not use the objects I injected; it allocates a new object of its own, which is why the order of the collection may change.

小六码奴

Spark job on EMR suddenly takes 30 hours (up from 5 hours)

I have a Spark job that runs on an Amazon EMR cluster with 1 master and 8 core nodes. In short, the job reads some .csv files from S3, converts them to RDDs, performs some relatively complex joins on those RDDs, and finally writes other .csv files back to S3. The job runs on the EMR cluster and used to take about 5 hours. One day it suddenly started taking more than 30 hours. There is no obvious difference in the input (the S3 files). I checked the logs, and during the long (30-hour) runs I can see OutOfMemory errors:

java.lang.OutOfMemoryError: Java heap space
    at java.util.IdentityHashMap.resize(IdentityHashMap.java:472)
    at java.util.IdentityHashMap.put(IdentityHashMap.java:441)
    at org.apache.spark.util.SizeEstimator$SearchState.enqueue(SizeEstimator.scala:174)
    at org.apache.spark.util.SizeEstimator$$anonfun$visitSingleObject$1.apply(SizeEstimator.scala:225)
    at org.apache.spark.util.SizeEstimator$$anonfun$visitSingleObject$1.apply(SizeEstimator.scala:224)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at org.apache.spark.util.SizeEstimator$.visitSingleObject(SizeEstimator.scala:224)
    at org.apache.spark.util.SizeEstimator$.org$apache$spark$util$SizeEstimator$$estimate(SizeEstimator.scala:201)
    at org.apache.spark.util.SizeEstimator$.estimate(SizeEstimator.scala:69)
    ....
    at org.apache.spark.broadcast.TorrentBroadcast._value(TorrentBroadcast.scala:66)
    at org.apache.spark.broadcast.TorrentBroadcast.getValue(TorrentBroadcast.scala:96)
    at org.apache.spark.broadcast.Broadcast.value(Broadcast.scala:70)

Despite the obvious OutOfMemory exceptions, the output (the S3 files) looks fine, so apparently the Spark job completes normally. What could suddenly cause the jump from a 5-hour run to a 30-hour one?

小六码奴

Custom-sorting a string array by another string array - Ruby

I have an array that is currently sorted alphabetically, and I am trying to sort it by a manually specified order of strings.

Current code:

list = ["gold","silver","bronze","steel","copper"]
list = list.sort { |a, b| a <=> b }

What I am trying to achieve (with the blank entry acting as a separator):

list = ["gold","silver","bronze","steel","copper"]
sort_order = ["bronze","silver","gold","","copper","steel"]
list = list.sort_by sort_order

Output: bronze | silver | gold | - | copper | steel

Is this possible? I'm currently stuck on these error messages:

comparison of Integer with nil failed
comparison of String with String failed
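The "sort by another array's order" idea can be sketched by using each element's position in the reference array as its sort key. Shown here in Python (the language used elsewhere on this page) rather than Ruby; the Ruby analogue would pass the same index lookup to sort_by. The names items and sort_order mirror the question:

```python
items = ["gold", "silver", "bronze", "steel", "copper"]
sort_order = ["bronze", "silver", "gold", "", "copper", "steel"]

# Use the position of each item in sort_order as its sort key
result = sorted(items, key=sort_order.index)
print(" | ".join(result))  # bronze | silver | gold | copper | steel
```

Any item missing from sort_order would raise a ValueError here; a rank dictionary with a default value would handle that case.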

KevinPan

QueryProductList in the Alibaba Cloud C++ SDK returns an incomplete result

Calling QueryProductList with the Alibaba Cloud C++ SDK (aliyun-openapi-cpp-sdk) returns a result with no product details (the size of list is 0). The returned data structure looks like this:

struct Data {
    struct ProductInfo {
        long gmtCreate;
        std::string description;
        std::string productName;
        int nodeType;
        int dataFormat;
        std::string productKey;
        int deviceCount;
    };
    int pageCount;
    int pageSize;
    int currentPage;
    int total;
    std::vector<ProductInfo> list;
};

By design, the list member should be what holds the product information, and the Python version of the SDK does return it, so I wonder whether this is a bug or whether I am using it incorrectly.

李博 bluemind

How can I get a random value from a Redis list?

How can I get a random value from a Redis list?
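Redis has no built-in random-pick command for lists (SRANDMEMBER works on sets), so one common approach is LLEN followed by LINDEX at a random index. A minimal sketch, assuming a redis-py-style client that exposes llen and lindex; the key name is made up:

```python
import random

def random_list_element(client, key):
    """Pick a uniformly random element from the Redis list at `key`.

    `client` is assumed to expose llen(key) and lindex(key, index),
    as redis-py does; returns None for an empty or missing list.
    """
    length = client.llen(key)
    if length == 0:
        return None
    return client.lindex(key, random.randrange(length))
```

Note this is two round-trips and not atomic; doing both steps server-side in a Lua script via EVAL would avoid the race if the list changes between the calls.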

一码平川MACHEL

Python: How to count the matching characters of two strings?

Is there a better way to count how many characters of two different strings match? For example, when I enter one word, I want to know how many of its characters match a target word at the exact same positions; in my example the input matches 5 of the target's 7 characters. I need to use this in a larger project. Right now I have this proof-of-concept Python code:

word1 = input('Enter word1>')
word2 = input('Enter word2>')
# word1 is being compared to word2
word1_character_list = list(word1)
word2_character_list = list(word2)
matching_word_count = 0
for index in range(0, len(word2)):
    if word1_character_list[index] == word2_character_list[index]:
        matching_word_count = matching_word_count + 1
print(matching_word_count)

I wonder whether there is a simpler and/or shorter way, or one that uses fewer variables.
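The loop above can be collapsed with zip and sum — a sketch; as a bonus, zip stops at the shorter word, which sidesteps the IndexError the original code raises when word2 is longer than word1:

```python
def matching_chars(word1, word2):
    # Count positions where both words have the same character;
    # zip pairs characters position by position and stops at the
    # shorter word, so no index can go out of range.
    return sum(a == b for a, b in zip(word1, word2))

print(matching_chars("abcd", "abed"))  # 3
```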

一码平川MACHEL

Converting a path into a list

I have a path made up of directories (e.g. 'grandpa\parent\child') that I need to convert into a list (e.g. ['grandpa', 'parent', 'child']). The path can have fewer or more subdirectories (e.g. ['parent', 'child']). I tried iterating with os.path.split(), but it doesn't work well in all cases:

import os

s = []
def splitall(path):
    l = list(os.path.split(path))
    s.append(l[1])
    return s if l[0] == '' else splitall(l[0])

p = 'grandpa\parent\child'
l = splitall(p)
print(l)

There should be a better way, right? Maybe a method I don't know about.
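Assuming the backslash-separated paths in the question are Windows-style, pathlib can do the splitting directly; a sketch (PureWindowsPath parses backslashes on any OS, and .parts handles any depth):

```python
from pathlib import PureWindowsPath

def split_path(path):
    # .parts yields every component, however many levels deep
    return list(PureWindowsPath(path).parts)

print(split_path(r'grandpa\parent\child'))  # ['grandpa', 'parent', 'child']
```

Note the raw-string literal: 'grandpa\parent\child' without the r prefix quietly contains escape sequences.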

一码平川MACHEL

Running parallel request sessions in Python

I am trying to open multiple web sessions and save the data as CSV. I wrote my code with a for loop and requests.get, but visiting 90 web locations takes a very long time. Can anyone show me how to run the whole process in parallel over loc_var? The code works fine; the only problem is that it processes loc_var one by one, which takes very long. I want to visit all the loc_var URLs of the for loop in parallel and do the CSV writes. Here is the code:

import pandas as pd
import numpy as np
import os
import requests
import datetime
import zipfile

t = datetime.date.today() - datetime.timedelta(2)
server = [("A", "web1", ":5000", "username=usr&password=p7Tdfr")]
'''List of all web_ips'''
web_1 = ["Web1","Web2","Web3","Web4","Web5","Web6","Web7","Web8","Web9","Web10","Web11","Web12","Web13","Web14","Web15"]
'''List of All location'''
loc_var = ["post1","post2","post3","post4","post5","post6","post7","post8","post9","post10","post11","post12","post13","post14","post15","post16","post17","post18"]

for s, web, port, usr in server:
    login_url = 'http://' + web + port + '/api/v1/system/login/?' + usr
    print(login_url)
    s = requests.session()
    login_response = s.post(login_url)
    print("login Responce", login_response)
    # Start access the Web for Loc_variable
    for mkt in loc_var:
        # output is CSV File
        com_actions_url = 'http://' + web + port + '/api/v1/3E+date(%5C%22' + str(t) + '%5C%22)and+location+%3D%3D+%27' + mkt + '%27%22&page_size=-1&format=%22csv%22'
        print("com_action_url", com_actions_url)
        r = s.get(com_actions_url)
        print("action", r)
        if r.ok == True:
            with open(os.path.join("/home/Reports_DC/", "relation_%s.csv" % mkt), 'wb') as f:
                f.write(r.content)
        # If loc is not aceesble try with another Web_1 List
        if r.ok == False:
            while r.ok == False:
                for web_2 in web_1:
                    login_url = 'http://' + web_2 + port + '/api/v1/system/login/?' + usr
                    com_actions_url = 'http://' + web_2 + port + '/api/v1/3E+date(%5C%22' + str(t) + '%5C%22)and+location+%3D%3D+%27' + mkt + '%27%22&page_size=-1&format=%22csv%22'
                    login_response = s.post(login_url)
                    print("login Responce", login_response)
                    print("com_action_url", com_actions_url)
                    r = s.get(com_actions_url)
                    if r.ok == True:
                        with open(os.path.join("/home/Reports_DC/", "relation_%s.csv" % mkt), 'wb') as f:
                            f.write(r.content)
                        break
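Because the per-location work is I/O-bound, one way to parallelise it is a thread pool. A minimal sketch with concurrent.futures; fetch_and_save is a hypothetical stand-in for the body of the original "for mkt in loc_var" loop (the s.get call plus the CSV write):

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel(locations, fetch_and_save, max_workers=8):
    """Apply fetch_and_save to every location concurrently.

    fetch_and_save(loc) would wrap the request and CSV write from the
    original code; pool.map returns results in input order.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch_and_save, locations))
```

One caveat: requests.Session is generally treated as not fully thread-safe, so the cautious choice is to give each worker (or each thread) its own logged-in session rather than sharing one.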

一码平川MACHEL

What does this code mean? self.plusOne(digits[:-1]); digits.extend([0])

digits = self.plusOne(digits[:-1])
digits.extend([0])

Full code:

def plusOne(self, digits):
    """
    :type digits: List[int]
    :rtype: List[int]
    """
    if len(digits) == 0:
        digits = [1]
    elif digits[-1] == 9:
        digits = self.plusOne(digits[:-1])
        digits.extend([0])
    else:
        digits[-1] += 1
    return digits
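Those two lines handle the carry when the last digit is 9: the recursive call adds one to everything before the final digit, and extend([0]) puts a zero back in the 9's place. A standalone version of the same function, for tracing:

```python
def plus_one(digits):
    """Add one to a number stored as a list of decimal digits."""
    if len(digits) == 0:
        return [1]                       # carried past the first digit: prepend a 1
    elif digits[-1] == 9:
        digits = plus_one(digits[:-1])   # the 9 overflows: carry into the prefix...
        digits.extend([0])               # ...and the 9 itself becomes 0
        return digits
    else:
        digits[-1] += 1                  # no carry needed
        return digits

print(plus_one([1, 2, 9]))  # [1, 3, 0]
print(plus_one([9, 9]))     # [1, 0, 0]
```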

k8s小能手

Deleting auto-deployed charts using GitLab's Tiller instance?

I am using the GitLab Auto DevOps CI pipeline, and I want to delete a deployment with helm. I tried connecting to the Tiller like this:

helm init --client-only --tiller-namespace=gitlab-managed-apps

which results in:

$HELM_HOME has been configured at /Users/marvin/.helm.
Not installing Tiller due to 'client-only' flag having been set
Happy Helming!

But then:

helm list --namespace=gitlab-managed-apps

returns:

Error: could not find tiller

栗山未来。

What causes the k8s-scheduler error "Failed to list *v1.StorageClass"?

The logs keep reporting this error. The cluster was installed with kubeadm; the version is v1.13.3.

灰灰fly

Flink type conversion: java.lang.ClassCastException after a Scala class extends the Map trait

The type:

case class RIchMap(data: Map[String, Any] = Map()) extends Map[String, Any] with GenMap[String, Any] with Serializable

The call (type parameter brackets were eaten by the forum formatting):

stream.flatMap[RichMap](fun)(TypeInformation.of(classOf[RichMap]))

The function type passed in: fun: RichMap => TraversableOnce[RichMap]

It always throws:

java.lang.ClassCastException: scala.collection.immutable.Map$Map1 cannot be cast to com.haima.sage.bigdata.etl.common.model.RichMap
    at com.haima.sage.bigdata.analyzer.streaming.source.AkkaSink.invoke(AkkaSink.scala:20)
    at org.apache.flink.streaming.api.functions.sink.SinkFunction.invoke(SinkFunction.java:52)
    at org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:56)
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:579)
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:689)
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:667)
    at org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51)
    at org.apache.flink.streaming.api.scala.DataStream$$anon$6$$anonfun$flatMap$1.apply(DataStream.scala:663)
    at org.apache.flink.streaming.api.scala.DataStream$$anon$6$$anonfun$flatMap$1.apply(DataStream.scala:663)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at org.apache.flink.streaming.api.scala.DataStream$$anon$6.flatMap(DataStream.scala:663)
    at org.apache.flink.streaming.api.operators.StreamFlatMap.processElement(StreamFlatMap.java:50)
    at org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:202)
    at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:105)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:300)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
    at java.lang.Thread.run(Thread.java:748)

一码平川MACHEL

How to remove the white background (at the top and bottom) from a QComboBox popup?

I am creating a GUI application with Qt 5 and PyQt5. I am trying to create a dark theme, but I have run into a problem with QComboBox. When I try to give the QListView a dark background, I get a white border above and below the list of item names. I have tried many things, such as changing the padding or margin values, but nothing helps. I also tried what was suggested in "Remove QListView background", but the result is always the same.

QComboBox {
    font: 12pt Fira Sans Condensed;
    background-color: #2e2e2e;
    border-top: 0px solid #3e3e3e;
    border-left: 0px solid #3e3e3e;
    border-right: 0px solid #3e3e3e;
    border-bottom: 2px solid #3e3e3e;
    padding: 5%;
    max-height: 30px;
    min-width: 140px;
    color: white;
    selection-background-color: #5e5e5e;
}
QComboBox::drop-down {
    border: none;
}
QComboBox::down-arrow {
    image: url(icons/QComboBox/down-arrow.png);
    width: 25px;
    height: 25px;
    border-width: 0px;
    padding-right: 10px;
}
QComboBox::down-arrow:pressed {
    position: relative;
    top: 1px;
    left: 1px;
}
QListView {
    font: 12pt Fira Sans Condensed;
    background-color: #2e2e2e;
    outline: 0;
    color: white;
    selection-background-color: #5e5e5e;
}
QListView::item {
    min-height: 20px;
    padding: 5%;
}

self.list = QtWidgets.QListView(self.window.comboBox)
self.window.comboBox.addItem("test1")
self.window.comboBox.addItem("test2")
self.window.comboBox.setView(self.list)

This is what I get.

一码平川MACHEL

Using a class-based UpdateView with a model that has 2 primary keys

I am building an application with two primary keys (it is a legacy database). Basically, what I want to do is click a table element and get redirected to another page based on the primary keys of the model. I haven't found anything about how to do this with Django class-based views. Here is my code:

models.py

class RmDadoscarteira(models.Model):
    dtcalculo = models.DateField(db_column='dtCalculo', primary_key=True)  # Field name made lowercase.
    cdcarteira = models.CharField(db_column='cdCarteira', max_length=50)
    nmcarteira = models.CharField(db_column='nmCarteira', max_length=255, blank=True, null=True)
    pl = models.FloatField(db_column='PL', blank=True, null=True)
    retornocota1d = models.FloatField(db_column='RetornoCota1d', blank=True, null=True)
    var = models.FloatField(db_column='Var', blank=True, null=True)
    var_lim = models.FloatField(db_column='VaR_Lim', blank=True, null=True)
    var_variacao1d = models.FloatField(db_column='VaR_Variacao1d', blank=True, null=True)
    var_variacao63d = models.FloatField(db_column='VaR_Variacao63d', blank=True, null=True)
    var_consumolimite = models.FloatField(db_column='VaR_ConsumoLimite', blank=True, null=True)
    stress = models.FloatField(db_column='Stress', blank=True, null=True)
    stress_lim = models.FloatField(db_column='Stress_Lim', blank=True, null=True)
    stress_variacao1d = models.FloatField(db_column='Stress_Variacao1d', blank=True, null=True)
    stress_variacao63d = models.FloatField(db_column='Stress_Variacao63d', blank=True, null=True)
    stress_consumolimite = models.FloatField(db_column='Stress_ConsumoLimite', blank=True, null=True)
    grupo = models.CharField(db_column='Grupo', max_length=20, blank=True, null=True)
    var_pl = models.FloatField(db_column='VaR_PL', blank=True, null=True)
    stress_pl = models.FloatField(db_column='Stress_PL', blank=True, null=True)
    objetos = models.Manager()

    class Meta:
        managed = False
        db_table = 'RM_DadosCarteira'
        unique_together = (('dtcalculo', 'cdcarteira'),)

views.py

from django.shortcuts import render, HttpResponse
from .models import *
import json
import pandas as pd
from django.views.generic.base import TemplateView
from django.urls import reverse_lazy
from django.views.generic.edit import UpdateView

# View for the Flagship Solutions report
#def FlagshipSolutions(request):
#    render(request, 'dash_solutions_completo.html')

class VisualizaFundoSolutions(UpdateView):
    template_name = "prototipo_fundo.html"
    model = RmDadoscarteira
    context_object_name = 'fundos_metricas'
    fields = '__all__'
    success_url = reverse_lazy("portal_riscos:dash_solutions")

def FlagshipSolutions(request):
    # Daily table queryset
    query_carteira = RmDadoscarteira.objetos.filter(grupo='Abertos')
    # Most recent date
    dt_recente = str(query_carteira.latest('dtcalculo').dtcalculo)
    # Filter the queryset down to the most recent date
    query_carteira = query_carteira.filter(dtcalculo=dt_recente)
    # Prepare the data for the VaR and stress utilization chart
    util_var = [round(obj['var_consumolimite'] * 100, 2) for obj in query_carteira.values()]
    util_stress = [round(obj['stress_consumolimite'] * 100, 2) for obj in query_carteira.values()]
    # Historical queryset for the charts (define a date filter)
    query_hist = RmHistoricometricas.objetos.filter(grupo='Abertos').filter(dtcalculo__gte='2018-07-11')
    # Temporary queryset until the return and VaR data are equal
    query_data = RmHistoricometricas.objetos.filter(grupo='Abertos').filter(dtcalculo__gte='2018-07-11').filter(info='% VaR')
    # Output data frames
    # Historical data frame
    df_hist = pd.DataFrame(list(query_hist.values()))
    # Build a concatenated key
    df_hist['concat'] = df_hist['dtcalculo'].astype(str) + df_hist['cdcarteira']
    df_hist['valor'] = round(df_hist['valor'] * 100, 2)
    # Historical VaR/PL data frame
    df_hist_var = df_hist[df_hist['info'] == '% VaR']
    # Historical stress/PL data frame
    df_hist_stress = df_hist[df_hist['info'] == '% Stress']
    # VaR consumption data frame
    df_hist_var_cons = df_hist[df_hist['info'] == '% Utilização Limite VaR']
    # Stress consumption data frame
    df_hist_stress_cons = df_hist[df_hist['info'] == '% Utilização Limite Stress']
    # Return data frame
    df_hist_ret = df_hist[df_hist['info'] == 'Retorno']
    # Get all dates (dropping duplicates)
    #datas = df_hist.dtcalculo.drop_duplicates(keep='first').reset_index(drop=True)
    datas = pd.DataFrame(list(query_data.values()))
    datas = datas.dtcalculo.drop_duplicates(keep='first').reset_index(drop=True)
    # Get the names of all funds (dropping duplicates)
    fundos = list(df_hist.cdcarteira.drop_duplicates(keep='first').reset_index(drop=True))
    # Build a single data frame with all the information to be used
    df_hist_saida = pd.DataFrame(columns=['dtcalculo', 'cdcarteira'])
    # Build a data frame whose number of rows equals funds * dates
    for fundo in fundos:
        # Temporary data frame
        df_temp = pd.DataFrame(columns=['dtcalculo', 'cdcarteira'])
        # Copy the dates
        df_temp['dtcalculo'] = datas
        # Insert the fund name
        df_temp['cdcarteira'] = [fundo] * len(datas)
        # Append the temp rows to the output data frame
        df_hist_saida = df_hist_saida.append(df_temp)
    # Reset the index and build a concatenated key for the output data frame
    df_hist_saida = df_hist_saida.reset_index(drop=True)
    df_hist_saida['concat'] = df_hist_saida['dtcalculo'].astype(str) + df_hist_saida['cdcarteira']
    # Create the var_pl column
    df_hist_saida = df_hist_saida.merge(df_hist_var[['concat', 'valor']], on='concat', how='left')
    df_hist_saida = df_hist_saida.rename(columns={'valor': 'var_pl'})
    # Create the stress_pl column
    df_hist_saida = df_hist_saida.merge(df_hist_stress[['concat', 'valor']], on='concat', how='left')
    df_hist_saida = df_hist_saida.rename(columns={'valor': 'stress_pl'})
    # Create the VaR consumption column
    df_hist_saida = df_hist_saida.merge(df_hist_var_cons[['concat', 'valor']], on='concat', how='left')
    df_hist_saida = df_hist_saida.rename(columns={'valor': 'var_cons'})
    # Create the stress consumption column
    df_hist_saida = df_hist_saida.merge(df_hist_stress_cons[['concat', 'valor']], on='concat', how='left')
    df_hist_saida = df_hist_saida.rename(columns={'valor': 'stress_cons'})
    # Create the return column
    df_hist_saida = df_hist_saida.merge(df_hist_stress_cons[['concat', 'valor']], on='concat', how='left')
    df_hist_saida = df_hist_saida.rename(columns={'valor': 'retorno'})
    # Drop the concatenated column
    df_hist_saida = df_hist_saida.drop('concat', axis=1)
    # Replace NaN with none
    df_hist_saida = df_hist_saida.fillna('None')
    # Build the output dictionaries
    dict_var_pl_hist = dict()
    dict_stress_pl_hist = dict()
    dict_var_cons_hist = dict()
    dict_stress_cons_hist = dict()
    for fundo in fundos:
        dict_var_pl_hist[fundo] = list(df_hist_saida[df_hist_saida['cdcarteira'] == fundo].var_pl)
        dict_stress_pl_hist[fundo] = list(df_hist_saida[df_hist_saida['cdcarteira'] == fundo].stress_pl)
        dict_var_cons_hist[fundo] = list(df_hist_saida[df_hist_saida['cdcarteira'] == fundo].var_cons)
        dict_stress_cons_hist[fundo] = list(df_hist_saida[df_hist_saida['cdcarteira'] == fundo].stress_cons)
    # List of all dates used
    lista_datas = list(datas.astype(str))
    # Alerts
    alerta_1 = [70] * len(datas)
    alerta_2 = [85] * len(datas)
    alerta_3 = [100] * len(datas)
    # Flagship
    context = {'query_carteira': query_carteira,
               'fundos': json.dumps(fundos),
               'util_var': json.dumps(util_var),
               'util_stress': json.dumps(util_stress),
               'dict_var_pl_hist': json.dumps(dict_var_pl_hist, default=dict),
               'dict_stress_pl_hist': json.dumps(dict_stress_pl_hist, default=dict),
               'dict_var_cons_hist': json.dumps(dict_var_cons_hist, default=dict),
               'dict_stress_cons_hist': json.dumps(dict_stress_cons_hist, default=dict),
               'datas_hist': json.dumps(lista_datas, default=str),
               'alerta_1': json.dumps(alerta_1),
               'alerta_2': json.dumps(alerta_2),
               'alerta_3': json.dumps(alerta_3),
               }
    return render(request, 'dash_solutions_completo.html', context)

urls.py

# Import the index() function defined in views.py
from portal_riscos.views import *
from django.urls import path
from django.contrib.auth.views import LoginView

app_name = 'portal_riscos'

# urlpatterns contains the URL routing list
urlpatterns = [
    # Solutions dashboard
    path('', FlagshipSolutions, name='dash_solutions'),
    path('solutions_fundos/<pk>/<cdcarteira>', VisualizaFundoSolutions.as_view(), name='solutions_fundos'),
]

The part of the table that I click to be redirected (the opening of the tag was cut off):

class="btn btn-light btn-sm">Atualizar</a>

And this is the error I get:

Environment:
Request Method: GET
Request URL:
Django Version: 2.1.2
Python Version: 3.6.1
Installed Applications:
['django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'portal_riscos', 'widget_tweaks', 'django.contrib.humanize']
Installed Middleware:
['django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware']

Traceback:
File "C:\Users\TBMEPYG\AppData\Local\Continuum\Anaconda3\lib\site-packages\django\core\handlers\exception.py" in inner
  response = get_response(request)
File "C:\Users\TBMEPYG\AppData\Local\Continuum\Anaconda3\lib\site-packages\django\core\handlers\base.py" in _get_response
  response = self.process_exception_by_middleware(e, request)
File "C:\Users\TBMEPYG\AppData\Local\Continuum\Anaconda3\lib\site-packages\django\core\handlers\base.py" in _get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "C:\Users\TBMEPYG\AppData\Local\Continuum\Anaconda3\lib\site-packages\django\views\generic\base.py" in view
  return self.dispatch(request, *args, **kwargs)
File "C:\Users\TBMEPYG\AppData\Local\Continuum\Anaconda3\lib\site-packages\django\views\generic\base.py" in dispatch
  return handler(request, *args, **kwargs)
File "C:\Users\TBMEPYG\AppData\Local\Continuum\Anaconda3\lib\site-packages\django\views\generic\edit.py" in get
  self.object = self.get_object()
File "C:\Users\TBMEPYG\AppData\Local\Continuum\Anaconda3\lib\site-packages\django\views\generic\detail.py" in get_object
  obj = queryset.get()
File "C:\Users\TBMEPYG\AppData\Local\Continuum\Anaconda3\lib\site-packages\django\db\models\query.py" in get
  (self.model._meta.object_name, num)

Exception Type: MultipleObjectsReturned at /solutions_fundos/2019-01-14/FICFI52865
Exception Value: get() returned more than one RmDadoscarteira -- it returned 21!
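The MultipleObjectsReturned error is consistent with UpdateView's get_object() looking the row up by the pk URL argument alone (dtcalculo), which matches 21 rows; with a composite key the lookup has to filter on both columns, e.g. by overriding get_object() to use both self.kwargs['pk'] and self.kwargs['cdcarteira']. A sketch of that lookup logic, with plain dicts standing in for the queryset (in the actual view the override would call RmDadoscarteira.objetos.get(dtcalculo=..., cdcarteira=...)):

```python
def get_by_composite_key(rows, dtcalculo, cdcarteira):
    """Return the single row matching BOTH halves of the composite key.

    `rows` is a stand-in for the queryset; filtering on dtcalculo alone
    is what produced "returned more than one RmDadoscarteira".
    """
    matches = [r for r in rows
               if r['dtcalculo'] == dtcalculo and r['cdcarteira'] == cdcarteira]
    if len(matches) != 1:
        raise LookupError('expected exactly 1 row, found %d' % len(matches))
    return matches[0]
```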

一码平川MACHEL

Checking multiple values in a list

I have one list with file names and one nested list with filter words. The filter list contains 3 lists, each with a different number of entries. How do I iterate over the lists and combine the checks with "and"? It needs to check all the values of a sub-list, for example ['employer', 'finance'] and ['employer', 'adress'].

filter = [
    ['employer', 'finance'],
    ['manifest'],
    ['epmloyer', 'adress', 'home'],
]
file_list = [
    '01012017_employer_finance.txt',
    '25102017_cargo_manifest.txt',
    '12022018_epmloyer_home_adress.txt',
]

"""search for financial file"""
if filter[0][0] in file_list[0] and filter[0][1] in file_list[0]:
    print('Financial file found')

"""search for cargo manifest"""
if filter[1][0] in file_list[1]:
    print('Cargo manifest found')

"""search for adress file"""
if filter[2][0] in file_list[2] and filter[2][1] in file_list[2] and filter[2][2] in file_list[2]:
    print('Financial file found')

So far I managed to get the code below. But how do I handle the sub-lists of different lengths, and how do I use a variable index (e.g. filter[x] instead of filter[1])?

"""loop through the file_list"""
for file in file_list:
    print("Identify file:", file)
    # identify file in list with lists in it
    if filter[0][0] in file and filter[0][1] in file:
        print('***Financial file found')
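Handling sub-lists of any length is exactly what the built-in all() is for — a sketch pairing each filter sub-list with each file name (names taken from the question, with the list renamed filters to avoid shadowing the built-in filter):

```python
filters = [
    ['employer', 'finance'],
    ['manifest'],
    ['epmloyer', 'adress', 'home'],
]
file_list = [
    '01012017_employer_finance.txt',
    '25102017_cargo_manifest.txt',
    '12022018_epmloyer_home_adress.txt',
]

def matches(filename, words):
    # True only when every filter word occurs in the file name,
    # regardless of how many words the sub-list holds
    return all(word in filename for word in words)

for filename in file_list:
    for words in filters:
        if matches(filename, words):
            print(filename, 'matches', words)
```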

一码平川MACHEL

How to set a color for specific rows in a pandastable using Python 3.7

I created a simple pandastable form in Python, but I am having some problems getting rows colored. I tried the following call from the documentation, but it doesn't seem to work:

pt.setRowColors(rows=rows1, clr="red")

Here is my code:

# pandas as pt
# rows1 is a list of rows I would like to color
app = tk.Tk()
f = tk.Frame(app)
f.pack(fill=tk.BOTH, expand=1)
pt = Table(f, dataframe=myData, showtoolbar=False, showstatusbar=False)
pt.show()
pt.setRowColors(rows=rows1, clr="red")
pt.redraw()

I expect the 30 rows to get a red background, but it does nothing. I don't even get an error...

一码平川MACHEL

What does the Java equivalent of this Python list code look like?

I am trying to learn Java, and more specifically some of the differences between arrays and lists. Right now I am trying to understand how to implement the line list += [i]*i in Java.

Sum = 5000
list = [0, 0]
x = 1
while len(list) < Sum:
    list += [x]*x
    x += 1

I have tried many different approaches, but I can't seem to find a way; everything I have tried in Java gives the wrong result.
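For reference while porting: list += [x]*x extends the list with x copies of the value x, so the Java version would presumably grow an ArrayList with an inner loop adding x copies per step. The exact semantics to reproduce, wrapped as a function (build and limit are names introduced here):

```python
def build(limit):
    out = [0, 0]
    x = 1
    while len(out) < limit:
        out += [x] * x   # append x copies of x in one step
        x += 1
    return out

print(build(10))  # [0, 0, 1, 2, 2, 3, 3, 3, 4, 4, 4, 4]
```

Note the loop can overshoot limit, as the final print shows: the length check happens only between whole batches.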

一码平川MACHEL

Type hinting the __call__() magic method

I use a simple but powerful class that acts like a database table with a built-in filter method. This is a small part of it. PyCharm does not show type hints for case 3 below.

from dataclasses import dataclass

@dataclass
class Record:
    ID: int

class Table(list):
    """Like a database table.

    Usage:
        table = Table([Record(123), ...])
        >> table.filter(123)
        Record(123)
    """
    def __call__(self, ID) -> Record:
        return self.filter(ID)

    def filter(self, ID) -> Record:
        return Table(x for x in self if x.ID == ID)[0]

table = Table([Record(123)])

table[0].           # 1. This works: ".ID" pops up after typing the period.
table.filter(123).  # 2. This works too.
table(123).         # 3. Crickets :-(. Nothing pops up after typing the period.

Am I doing something wrong, or is this a bug in PyCharm?
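At runtime the annotation on __call__ does hold — table(123) really returns a Record — so a possible workaround sketch is to annotate the receiving variable explicitly, which IDEs generally honor even when they don't resolve __call__ on an instance:

```python
from dataclasses import dataclass

@dataclass
class Record:
    ID: int

class Table(list):
    def __call__(self, ID) -> Record:
        return self.filter(ID)

    def filter(self, ID) -> Record:
        return Table(x for x in self if x.ID == ID)[0]

table = Table([Record(123)])

rec: Record = table(123)  # explicit variable annotation gives the IDE the type
print(rec.ID)  # 123
```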