bootstrap


小六码奴

Problem using docker exec to run a shell script that exists on the host

I am trying to execute a script on the master node of an AWS EMR cluster. The goal is to create a new conda env and link it to Jupyter; I am following this AWS documentation. The problem is that no matter what the script contains, I get the same error when executing `sudo docker exec jupyterhub bash /home/hadoop/scripts/bootstrap.sh`:

```
bash: /home/hadoop/scripts/bootstrap.sh: No such file or directory
```

I made sure the .sh file is in the right place. But if I copy bootstrap.sh into the container and then run the same docker exec command, it works. What am I missing here? I have tried a trivial script with the following contents, and it throws the same error:

```bash
#!/bin/bash
echo "Hello"
```

The documentation clearly states: the kernels are installed within the Docker container. The simplest way to accomplish this is to create a bash script with the installation commands, save it to the master node, and then run the script in the jupyterhub container with the command `sudo docker exec jupyterhub script_name`.

luneice

Alibaba Cloud ACM

The Alibaba Cloud ACM product provides a Node.js SDK; using it throws the following error:

```
const acm = ACMClient({
            ^

TypeError: ACMClient is not a function
    at Object.<anonymous> (/luneice/project/node/grehub/test.js:8:13)
    at Module._compile (internal/modules/cjs/loader.js:689:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:700:10)
    at Module.load (internal/modules/cjs/loader.js:599:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:538:12)
    at Function.Module._load (internal/modules/cjs/loader.js:530:3)
    at Function.Module.runMain (internal/modules/cjs/loader.js:742:12)
    at startup (internal/bootstrap/node.js:283:19)
    at bootstrapNodeJSCore (internal/bootstrap/node.js:743:3)
```

一码平川MACHEL

ImportError: cannot import name 'Message' - django-messages

I forked https://github.com/arneb/django-messages/ into my own repository: https://github.com/mike-johnson-jr/django-messages/. When I use the package, I get the error in the title. Full traceback:

```
Traceback (most recent call last):
  File "manage.py", line 15, in <module>
    execute_from_command_line(sys.argv)
  File "/home/michael/.local/lib/python3.6/site-packages/django/core/management/__init__.py", line 381, in execute_from_command_line
    utility.execute()
  File "/home/michael/.local/lib/python3.6/site-packages/django/core/management/__init__.py", line 357, in execute
    django.setup()
  File "/home/michael/.local/lib/python3.6/site-packages/django/__init__.py", line 24, in setup
    apps.populate(settings.INSTALLED_APPS)
  File "/home/michael/.local/lib/python3.6/site-packages/django/apps/registry.py", line 112, in populate
    app_config.import_models()
  File "/home/michael/.local/lib/python3.6/site-packages/django/apps/config.py", line 198, in import_models
    self.models_module = import_module(models_module_name)
  File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/michael/.local/lib/python3.6/site-packages/django_messages/models.py", line 48, in <module>
    class Message(models.Model):
  File "/home/michael/.local/lib/python3.6/site-packages/django_messages/models.py", line 87, in Message
    get_absolute_url = reverse(get_absolute_url)
  File "/home/michael/.local/lib/python3.6/site-packages/django/urls/base.py", line 90, in reverse
    return iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs))
  File "/home/michael/.local/lib/python3.6/site-packages/django/urls/resolvers.py", line 562, in _reverse_with_prefix
    self._populate()
  File "/home/michael/.local/lib/python3.6/site-packages/django/urls/resolvers.py", line 413, in _populate
    for url_pattern in reversed(self.url_patterns):
  File "/home/michael/.local/lib/python3.6/site-packages/django/utils/functional.py", line 37, in __get__
    res = instance.__dict__[self.name] = self.func(instance)
  File "/home/michael/.local/lib/python3.6/site-packages/django/urls/resolvers.py", line 533, in url_patterns
    patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
  File "/home/michael/.local/lib/python3.6/site-packages/django/utils/functional.py", line 37, in __get__
    res = instance.__dict__[self.name] = self.func(instance)
  File "/home/michael/.local/lib/python3.6/site-packages/django/urls/resolvers.py", line 526, in urlconf_module
    return import_module(self.urlconf_name)
  File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/michael/projects/datafix/datafix/urls.py", line 65, in <module>
    path('messages/', include('django_messages.urls')),
  File "/home/michael/.local/lib/python3.6/site-packages/django/urls/conf.py", line 34, in include
    urlconf_module = import_module(urlconf_module)
  File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/michael/.local/lib/python3.6/site-packages/django_messages/urls.py", line 4, in <module>
    from django_messages.views import *
  File "/home/michael/.local/lib/python3.6/site-packages/django_messages/views.py", line 11, in <module>
    from django_messages.models import Message
ImportError: cannot import name 'Message'
```

Here is my django_messages.models code:

```python
@python_2_unicode_compatible
class Message(models.Model):
    """ A private message from user to user """
    subject = models.CharField(_("Subject"), max_length=140)
    body = models.TextField(_("Body"))
    sender = models.ForeignKey(AUTH_USER_MODEL, related_name='sent_messages',
                               verbose_name=_("Sender"), on_delete=models.SET_NULL)
    recipient = models.ForeignKey(AUTH_USER_MODEL, related_name='received_messages',
                                  null=True, blank=True, verbose_name=_("Recipient"),
                                  on_delete=models.SET_NULL)
    parent_msg = models.ForeignKey('self', related_name='next_messages', null=True,
                                   blank=True, verbose_name=_("Parent message"),
                                   on_delete=models.SET_NULL)
    sent_at = models.DateTimeField(_("sent at"), null=True, blank=True)
    read_at = models.DateTimeField(_("read at"), null=True, blank=True)
    replied_at = models.DateTimeField(_("replied at"), null=True, blank=True)
    sender_deleted_at = models.DateTimeField(_("Sender deleted at"), null=True, blank=True)
    recipient_deleted_at = models.DateTimeField(_("Recipient deleted at"), null=True, blank=True)
    objects = MessageManager()
```

社区小助手

How can I writeStream data from a Kafka topic to HDFS with Spark?

I have been trying to get this code to work for hours:

```scala
val spark = SparkSession.builder()
  .appName("Consumer")
  .getOrCreate()

spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", url)
  .option("subscribe", topic)
  .load()
  .select("value")
  .writeStream
  .format(fileFormat)
  .option("path", filePath)
  .option("checkpointLocation", "/tmp/checkpoint")
  .start()
  .awaitTermination()
```

It throws this exception:

```
Logical Plan:
Project [value#8]
+- StreamingExecutionRelation KafkaV2[Subscribe[MyTopic]], [key#7, value#8, topic#9, partition#10, offset#11L, timestamp#12, timestampType#13]
  at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:295)
  at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:189)
Caused by: java.lang.ClassCastException: org.apache.spark.sql.execution.streaming.SerializedOffset cannot be cast to org.apache.spark.sql.sources.v2.reader.streaming.Offset
  at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$1$$anonfun$apply$9.apply(MicroBatchExecution.scala:405)
  at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$1$$anonfun$apply$9.apply(MicroBatchExecution.scala:390)
  at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
  at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
  at scala.collection.Iterator$class.foreach(Iterator.scala:893)
  at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
  at org.apache.spark.sql.execution.streaming.StreamProgress.foreach(StreamProgress.scala:25)
  at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
  at org.apache.spark.sql.execution.streaming.StreamProgress.flatMap(StreamProgress.scala:25)
  at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$1.apply(MicroBatchExecution.scala:390)
  at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$1.apply(MicroBatchExecution.scala:390)
  at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:271)
  at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
  at org.apache.spark.sql.execution.streaming.MicroBatchExecution.org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch(MicroBatchExecution.scala:389)
  at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply$mcV$sp(MicroBatchExecution.scala:133)
  at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:121)
  at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:121)
  at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:271)
  at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
  at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1.apply$mcZ$sp(MicroBatchExecution.scala:121)
  at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:56)
  at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:117)
  at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:279)
```

I don't understand what is going on; I am simply trying to write data from a Kafka topic to HDFS with Spark Streaming. Why is this so hard? How can I do it? I did get the batch version to work fine:

```scala
spark.read
  .format("kafka")
  .option("kafka.bootstrap.servers", url)
  .option("subscribe", topic)
  .load()
  .selectExpr("CAST(value AS String)")
  .write
  .format(fileFormat)
  .save(filePath)
```
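A possible direction, hedged rather than a confirmed diagnosis: this particular ClassCastException (SerializedOffset vs. the v2 reader's Offset) is typically seen when the query restores from a checkpoint directory that was written by a different Spark version or a different source implementation, so the serialized offsets no longer match the reader. A minimal sketch, assuming a stale `/tmp/checkpoint` is the cause:

```scala
// Sketch: the same query as above, but pointed at a fresh checkpoint directory
// so Spark does not try to restore offsets serialized by an incompatible
// source reader. (Deleting the old /tmp/checkpoint by hand achieves the same.)
spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", url)
  .option("subscribe", topic)
  .load()
  .selectExpr("CAST(value AS STRING)")   // cast the value, as in the working batch version
  .writeStream
  .format(fileFormat)
  .option("path", filePath)
  .option("checkpointLocation", "/tmp/checkpoint-" + System.currentTimeMillis)
  .start()
  .awaitTermination()
```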

社区小助手

Spark Structured Streaming: enriching a stream from Cassandra

I am consuming data from Kafka with Structured Streaming:

```scala
val df = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("enable.auto.commit", false)
  .option("auto.offset.reset", "earliest")
  .option("group.id", UUID.randomUUID().toString)
  .option("subscribe", "test")
  .load()
```

and then trying to join it with a Cassandra table:

```scala
val d = df.select(from_json(col("value").cast("string"), schema).cast("string").alias("url"))
  .rdd.joinWithCassandraTable[(String, String, String)]("analytics", "nlp2",
    SomeColumns("url", "ner", "sentiment"), SomeColumns("url"))
  .toDS()
  .writeStream
  .format("console") // <-- use ConsoleSink
  .option("truncate", false)
  .option("numRows", 10)
  .trigger(Trigger.ProcessingTime(5 seconds))
  .queryName("rate-console")
  .start
  .awaitTermination()
```

But I get the following when I try to convert the data frame to an RDD. Any idea why?

```
Exception in thread "main" org.apache.spark.sql.AnalysisException: Queries with streaming sources must be executed with writeStream.start();;
kafka
  at org.apache.spark.sql.catalyst.analysis.UnsupportedOperationChecker$.org$apache$spark$sql$catalyst$analysis$UnsupportedOperationChecker$$throwError(UnsupportedOperationChecker.scala:297)
  at org.apache.spark.sql.catalyst.analysis.UnsupportedOperationChecker$$anonfun$checkForBatch$1.apply(UnsupportedOperationChecker.scala:36)
  at org.apache.spark.sql.catalyst.analysis.UnsupportedOperationChecker$$anonfun$checkForBatch$1.apply(UnsupportedOperationChecker.scala:34)
  at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:127)
```
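For reference, a hedged sketch of one way around this: `.rdd` is not allowed on a streaming Dataset, which is exactly what the AnalysisException complains about, but inside `foreachBatch` (Spark 2.4+) each micro-batch is a plain, non-streaming Dataset, so the RDD-based Cassandra join can run there. The `Tuple1` key mapping below is illustrative, and the sketch assumes the DataStax spark-cassandra-connector is on the classpath:

```scala
// Sketch, assuming Spark 2.4+ and the DataStax spark-cassandra-connector:
// join each (non-streaming) micro-batch with the Cassandra table instead of
// calling .rdd on the streaming Dataset itself.
import com.datastax.spark.connector._
import org.apache.spark.sql.DataFrame

val parsed = df.select(from_json(col("value").cast("string"), schema).cast("string").alias("url"))

parsed.writeStream
  .foreachBatch { (batch: DataFrame, batchId: Long) =>
    batch.rdd
      .map(row => Tuple1(row.getString(0)))            // key each record by "url"
      .joinWithCassandraTable[(String, String, String)](
        "analytics", "nlp2",
        SomeColumns("url", "ner", "sentiment"), SomeColumns("url"))
      .collect()
      .foreach(println)                                // console-style output on the driver
  }
  .start()
  .awaitTermination()
```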

社区小助手

Apache Spark: writing to Kafka in a custom JSON format

I am building a Spark SQL application that consumes a Kafka topic, transforms some data, and then writes out to a separate Kafka topic with a specific JSON object. Right now I am able to query/transform what I want and write it:

```java
Dataset<Row> reader = myData.getRecordCount();

reader.select(to_json(struct("record_count")).alias("value"))
  .write()
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("topic", "new_separate_topic")
  .save();
```

This produces records like:

```json
{
  "record_count": 989
}
```

What I need is for this bit of JSON to be the payload (child) attribute of a larger JSON object that we use as the standard consumer object for our microservices. What I want to write to the topic actually looks like this:

```json
{
  "id": "ABC123",
  "timestamp": "2018-11-16 20:40:26.108",
  "user": "DEF456",
  "type": "new_entity",
  "data": {
    "record_count": 989
  }
}
```

Moreover, the "id", "user", and "type" fields will be populated from the outside; they come from the original Kafka message that triggers the whole process. Basically, I need to inject some values into the metadata/object I want to write to Kafka, and set the "data" field to the result of the Spark SQL query above.
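A hedged sketch of one way to build that envelope, shown in Scala (the same `functions` exist in the Java API). The `lit()` values are placeholders, assuming the metadata is available on the driver from the triggering message:

```scala
// Sketch: nest the query result under "data" with struct(), and inject the
// externally supplied metadata fields with lit(). The literal values here
// stand in for values taken from the original triggering Kafka message.
import org.apache.spark.sql.functions._

val envelope = reader.select(
  to_json(struct(
    lit("ABC123").as("id"),
    lit("2018-11-16 20:40:26.108").as("timestamp"),
    lit("DEF456").as("user"),
    lit("new_entity").as("type"),
    struct(col("record_count")).as("data")   // the inner payload
  )).alias("value"))

envelope.write
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("topic", "new_separate_topic")
  .save()
```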

社区小助手

Kafka producer: using the default partitioner

Right now my Kafka producer is sinking all messages into a single partition of a Kafka topic that actually has more than one partition. How do I create a producer that uses the default partitioner and distributes messages across the topic's partitions? A snippet of my Kafka producer code:

```java
Properties props = new Properties();
props.put(ProducerConfig.RETRIES_CONFIG, 0);
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap.servers);
props.put(ProducerConfig.ACKS_CONFIG, "all");
```

I am using the Flink Kafka producer to sink messages to the Kafka topic:

```java
speStream.addSink(
  new FlinkKafkaProducer011(kafkaTopicName,
    new KeyedSerializationSchemaWrapper<>(new SimpleStringSchema()),
    props,
    FlinkKafkaProducer011.Semantic.EXACTLY_ONCE));
```
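A hedged sketch of the usual fix, assuming the Flink 0.11 Kafka connector: the convenience constructors install `FlinkFixedPartitioner`, which pins each sink subtask to a single partition, and passing `Optional.empty()` as the custom partitioner makes the sink fall back to Kafka's own default partitioning (shown in Scala; the Java call is the same):

```scala
// Sketch: use the constructor overload that takes an Optional custom
// partitioner and pass Optional.empty(), so records are distributed by
// Kafka's default partitioner instead of FlinkFixedPartitioner.
import java.util.Optional
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011
import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner
import org.apache.flink.streaming.util.serialization.{KeyedSerializationSchemaWrapper, SimpleStringSchema}

val producer = new FlinkKafkaProducer011[String](
  kafkaTopicName,
  new KeyedSerializationSchemaWrapper[String](new SimpleStringSchema()),
  props,
  Optional.empty[FlinkKafkaPartitioner[String]](),
  FlinkKafkaProducer011.Semantic.EXACTLY_ONCE,
  FlinkKafkaProducer011.DEFAULT_KAFKA_PRODUCERS_POOL_SIZE)

speStream.addSink(producer)
```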

社区小助手

Spark Structured Streaming only gets messages from the last Kafka partition

I am using Spark Structured Streaming to read a Kafka topic. Without any partitions, the Spark Structured Streaming consumer can read the data. But when I add partitions to the topic, the client only shows messages from the last partition. That is, if the topic has 4 partitions and I push numbers like 1, 2, 3, 4 into it, the client only prints 4 and not the other values. I am using the latest samples and binaries from the Spark Structured Streaming site.

```java
Dataset<Row> df = spark
  .readStream()
  .format("kafka")
  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
  .option("subscribe", "topic1")
  .load();
```
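One thing worth ruling out first, as a hedged suggestion rather than a confirmed diagnosis: by default the Kafka source starts from the latest offsets, so records already sitting in the other partitions when the query starts are skipped. Setting `startingOffsets` reads every partition from the beginning (Scala shown; the Java version takes the same options):

```scala
// Sketch: read all partitions of the topic from their earliest offsets,
// instead of only the records that arrive after the query starts.
val df = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
  .option("subscribe", "topic1")
  .option("startingOffsets", "earliest")
  .load()
```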

flink小助手

Trying to write a tuple to a Flink Kafka sink

I am trying to write a streaming application that both reads from and writes to Kafka. I currently have the following, but I have to stringify my tuple class:

```scala
object StreamingJob {
  def main(args: Array[String]) {
    // set up the streaming execution environment
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    val properties = new Properties()
    properties.setProperty("bootstrap.servers", "localhost:9092")
    properties.setProperty("zookeeper.connect", "localhost:2181")
    properties.setProperty("group.id", "test")

    val consumer = env.addSource(new FlinkKafkaConsumer08[String]("topic", new SimpleStringSchema(), properties))

    val counts = consumer.flatMap { _.toLowerCase.split("\\W+") filter { _.nonEmpty } }
      .map { (_, 1) }
      .keyBy(0)
      .timeWindow(Time.seconds(5))
      .sum(1)

    val producer = new FlinkKafkaProducer08[String](
      "localhost:9092",
      "my-topic",
      new SimpleStringSchema())

    counts.map(_.toString()).addSink(producer)

    env.execute("Window Stream WordCount")
  }
}
```

The closest I can get to working is the following, but FlinkKafkaProducer08 refuses to accept the type parameter as part of the constructor:

```scala
val producer = new FlinkKafkaProducer08[(String, Int)](
  "localhost:9092",
  "my-topic",
  new TypeSerializerOutputFormat[(String, Int)])

counts.addSink(producer)
```

I would like to know whether there is a way to write the tuples directly to my Kafka sink.
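For reference, a hedged sketch of writing the tuples directly: `FlinkKafkaProducer08` wants a `SerializationSchema`, not an `OutputFormat`, and `TypeInformationSerializationSchema` is such a schema built from the tuple's type information. Note the bytes are in Flink's internal format, so the reading side needs the same schema:

```scala
// Sketch: serialize (String, Int) tuples with Flink's own type serializer and
// hand that schema to the Kafka producer, instead of stringifying the tuples.
import org.apache.flink.api.scala._
import org.apache.flink.streaming.util.serialization.TypeInformationSerializationSchema

val tupleSchema = new TypeInformationSerializationSchema[(String, Int)](
  createTypeInformation[(String, Int)], env.getConfig)

val tupleProducer = new FlinkKafkaProducer08[(String, Int)](
  "localhost:9092", "my-topic", tupleSchema)

counts.addSink(tupleProducer)
```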

社区小助手

Spark Structured Streaming: Error reading field 'topic_metadata'

I am running Spark 2.4.0 and Kafka 0.10.2:

```scala
var streamingInputDF = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "twitter-topic")
  .load()
```

Console writeStream:

```scala
val activityQuery = streamingInputDF.writeStream
  .format("console")
  .outputMode("append")
  .start()

activityQuery.awaitTermination()
```

But when I start the console writeStream, I get the following exception:

```
org.apache.spark.sql.streaming.StreamingQueryException: Query [id = d21cd9b4-7f51-4f5f-acbf-943dfaaeb7e5, runId = c2b2c58d-7afe-4ca5-bc36-6a3f496c19b3] terminated with exception: Error reading field 'topic_metadata': Error reading array of size 881783, only 41 bytes available
  at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:295)
  at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:189)
Caused by: org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'topic_metadata': Error reading array of size 881783, only 41 bytes available
  at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:73)
  at org.apache.kafka.clients.NetworkClient.parseResponse(NetworkClient.java:380)
  at org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:449)
  at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:269)
  at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
```

flink小助手

How do I set Kafka offsets for a consumer?

"假设我的主题中已有10个数据,现在我开始编写消费者 Flink,消费者将使用第11个数据。 因此,我有3个问题: 如何分别获取当前主题的分区数和每个分区的偏移量?如何手动为消费者设置每个分区的起始位置?如果Flink消费者崩溃,几分钟后就会恢复。消费者将如何知道重新启动的位置?示例代码(我试过FlinkKafkaConsumer08,FlinkKafkaConsumer10但都是例外。): public class kafkaConsumer {public static void main(String[] args) throws Exception { // create execution environment StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(); env.enableCheckpointing(5000); Properties properties = new Properties(); properties.setProperty(""bootstrap.servers"", ""192.168.95.2:9092""); properties.setProperty(""group.id"", ""test""); properties.setProperty(""auto.offset.reset"", ""earliest""); FlinkKafkaConsumer09<String> myConsumer = new FlinkKafkaConsumer09<>( ""game_event"", new SimpleStringSchema(), properties); DataStream<String> stream = env.addSource(myConsumer); stream.map(new MapFunction<String, String>() { private static final long serialVersionUID = -6867736771747690202L; @Override public String map(String value) throws Exception { return ""Stream Value: "" + value; } }).print(); env.execute(); } }和pom.xml: <dependency> <groupId>org.apache.flink</groupId> <artifactId>flink-clients_2.11</artifactId> <version>1.6.1</version> </dependency>"

javatomcat

Error when shutting down Tomcat on Linux

```
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at java.net.Socket.connect(Socket.java:528)
    at java.net.Socket.<init>(Socket.java:425)
    at java.net.Socket.<init>(Socket.java:208)
    at org.apache.catalina.startup.Catalina.stopServer(Catalina.java:457)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.catalina.startup.Bootstrap.stopServer(Bootstrap.java:398)
    at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:485)
```

jocean

Flink: registering a Kafka source with CSV format

I registered a Kafka message source with Flink; choosing csv as the format fails, while avro works. Is it that the Kafka connector does not support the CSV format? The error is:

```
Exception in thread "main" org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.DeserializationSchemaFactory' in the classpath.
```

Sample code:

```scala
tableEnv.connect(
    new Kafka()
      .version("0.11")
      .topic("result_count")
      .property("bootstrap.servers", "**"))
  .withFormat(new Csv().fieldDelimiter(","))
  .withSchema(
    new Schema()
      .field("world", Types.STRING)
      .field("count", Types.INT))
  .inAppendMode()
  .registerTableSource("result_count")
```
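A hedged note plus a workaround sketch: in this Flink version the Kafka connector looks up its format through `DeserializationSchemaFactory`, and the old `Csv` descriptor is only wired to the filesystem connector, which would explain why Avro works and CSV does not (a standards-compliant CSV format usable with Kafka only arrived later via `flink-csv`). One workaround, assuming `flink-json` is on the classpath and the topic can carry JSON, is to register the source with the JSON format instead:

```scala
// Workaround sketch, assuming flink-json on the classpath: the Json descriptor
// does implement DeserializationSchemaFactory, so it works with the Kafka
// connector where the old Csv descriptor does not.
import org.apache.flink.table.descriptors.Json

tableEnv.connect(
    new Kafka()
      .version("0.11")
      .topic("result_count")
      .property("bootstrap.servers", "**"))
  .withFormat(new Json().deriveSchema())
  .withSchema(
    new Schema()
      .field("world", Types.STRING)
      .field("count", Types.INT))
  .inAppendMode()
  .registerTableSource("result_count")
```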

马铭芳

CDN network acceleration [Q&A collection]

Can CDN and .htaccess 301 redirects be used at the same time? https://yq.aliyun.com/ask/213743
The site uses OSS as an image host; is it necessary to CDN-accelerate both the site and OSS? https://yq.aliyun.com/ask/169089
What deployment is needed for anti-DDoS protection with high-defense CDN? https://yq.aliyun.com/ask/190262
What are the benefits of CDN nodes? https://yq.aliyun.com/ask/190219
Can high-defense CDN fend off mainstream DDoS attacks? https://yq.aliyun.com/ask/190241
With CDN acceleration, is there no need to give ECS much bandwidth? https://yq.aliyun.com/ask/178826
How to make the CDN cache dynamic pages? https://yq.aliyun.com/ask/190231
Does an OSS public-network domain need CDN enabled? https://yq.aliyun.com/ask/183802
Does putting high-defense CDN in front of a server protect it from attacks? https://yq.aliyun.com/ask/190118
What is a CDN network node essentially? A data center is essentially a big machine room, so what is the essence of a network node? https://yq.aliyun.com/ask/125356
Alibaba Cloud help-center answers on how CDN is billed https://yq.aliyun.com/ask/190199
What do CDN back-to-origin and website DNS resolution mean? https://yq.aliyun.com/ask/190211
What exactly is an IDC, and what does it have to do with CDN? https://yq.aliyun.com/ask/125224
How can nginx do ip_hash load balancing on the real client IP when a CDN is in use? https://yq.aliyun.com/ask/190079
What do Object Storage Service (OSS) and CDN mean? https://yq.aliyun.com/ask/183423
What is the difference between CDN bandwidth and IDC bandwidth, and why is the price gap so large? https://yq.aliyun.com/ask/190089
How can the Web Application Firewall service work with a third-party CDN to serve cloud hosts? https://yq.aliyun.com/ask/191769
Which of the following software can be used for a CDN server: a) apache, b) varnish, c) tomcat, d) docker? https://yq.aliyun.com/ask/188726
How long is the CDN cache usually set to? https://yq.aliyun.com/ask/190178
What benefits does Alibaba Cloud CDN back-to-origin bring to a domain? https://yq.aliyun.com/ask/190058
Do CDN acceleration and protection apply to the server hosting the site, or to the domain name? https://yq.aliyun.com/ask/188972
A brief look at whether changing a site's IP or using a CDN affects SEO ranking https://yq.aliyun.com/ask/190067
Can high-defense CDN be used for website protection? https://yq.aliyun.com/ask/193467
How can a website system use high-defense CDN to defend against DDoS attacks? https://yq.aliyun.com/ask/190291
Does a CDN directory refresh delete the files, or mark them as expired? https://yq.aliyun.com/ask/119757
What does an OSS CDN acceleration domain mean? https://yq.aliyun.com/ask/183395
What deployment is needed for anti-DDoS protection with high-defense CDN? https://yq.aliyun.com/ask/190301
The site's homepage stopped opening after resolving DNS to the CDN https://yq.aliyun.com/ask/119096
What is the principle behind CDN acceleration, and how much speed can a normal 1 Mbps server gain? https://yq.aliyun.com/ask/119177
How long is the CDN cache usually set to? https://yq.aliyun.com/ask/190143
How much concurrency does a CDN support? https://yq.aliyun.com/ask/67133
How should a site with multiple domains be optimized after enabling CDN? https://yq.aliyun.com/ask/190090
How to use CDN acceleration with 微擎? https://yq.aliyun.com/ask/190132
When a site uses CDN acceleration, does the virtual host consume the same traffic that the CDN serves? https://yq.aliyun.com/ask/190062
How to restore factory settings on a Brother HL-4050CDN? https://yq.aliyun.com/ask/190209
Can high-defense CDN completely withstand DDoS attacks? https://yq.aliyun.com/ask/190239
How to think correctly about CDN nodes https://yq.aliyun.com/ask/190221
I often hear about CDNs; how does a website use a CDN? https://yq.aliyun.com/ask/190140
How to block direct access to the CDN acceleration domain so it doesn't create a mirror site? https://yq.aliyun.com/ask/190160
Can high-defense CDN defend against DDoS attacks? https://yq.aliyun.com/ask/190248
How to use CDN technology to deal with hacker intrusions and DDoS https://yq.aliyun.com/ask/190121
What is a CDN mirror? https://yq.aliyun.com/ask/193435
Does using a CDN on a rented Hong Kong server help site optimization? https://yq.aliyun.com/ask/190109
Free Bootstrap CDN acceleration services / how to include the Bootstrap files https://yq.aliyun.com/ask/190055
What benefits does CDN acceleration bring to a website? https://yq.aliyun.com/ask/190137
What is a website CDN service, and how does CDN acceleration work? https://yq.aliyun.com/ask/190223
Is CDN acceleration reliable? Can it really improve site access speed? https://yq.aliyun.com/ask/190084
How to use CDN nodes to hide the server's IP address? https://yq.aliyun.com/ask/190124
Why did my WordPress site turn to garbled text after I enabled the CDN feature in WP Super Cache, while the CDN domain itself works fine? https://yq.aliyun.com/ask/193483
How to make the CDN cache dynamic pages? https://yq.aliyun.com/ask/190194
How to CDN-accelerate file links that carry a question mark (query string)? https://yq.aliyun.com/ask/193443
How should a site with multiple domains be optimized after enabling CDN? https://yq.aliyun.com/ask/190131
What does "CDN data unpacking failed" mean? https://yq.aliyun.com/ask/193450
Will insufficient origin-server bandwidth affect downloads through the CDN? https://yq.aliyun.com/ask/193453

马铭芳

Front-end advanced: a deep dive into Less [curated Q&A collection]

1. LESS garbled characters https://yq.aliyun.com/ask/28561
2. Where is the variables.less that Bootstrap talks about? https://yq.aliyun.com/ask/26870
3. How to implement loops in Less? https://yq.aliyun.com/ask/21939
4. Can LESS or SASS reference variables defined in other files? https://yq.aliyun.com/ask/21592
5. How to access JavaScript from Less? https://yq.aliyun.com/ask/21767
6. How to write Less nesting better? https://yq.aliyun.com/ask/18705
7. Does LESS (used client-side) affect page rendering time? https://yq.aliyun.com/ask/17220
8. On how LESS compiles the "/" operator https://yq.aliyun.com/ask/24233
9. How to customize some of Less's compilation rules https://yq.aliyun.com/ask/17981
10. How can gulp compile a standalone less file to css and embed it into an html page as an inline stylesheet? https://yq.aliyun.com/ask/25386
11. After adopting LESS I don't see much efficiency gain, just higher maintenance cost — am I using it wrong? https://yq.aliyun.com/ask/24318
12. LESS errors out when a parameter is left empty: .text_element(left;;;20px;); https://yq.aliyun.com/ask/22630
13. Installing less from the command line on Win7: npm install -g less just keeps loading and never installs https://yq.aliyun.com/ask/21165
14. What does the ">" symbol do in Less? https://yq.aliyun.com/ask/22175
15. A class contains two other classes; how to write that with Less nesting? https://yq.aliyun.com/ask/20750
16. SASS, Stylus, LESS: which CSS preprocessor would you choose? https://yq.aliyun.com/ask/17273

马铭芳

Front-end advanced: Bootstrap in detail [beginner Q&A collection]

Bootstrap provides a basic structure with a grid system, link styles, and backgrounds. I have collected some common questions here and hope they help~

1. Bootstrap column offset https://yq.aliyun.com/ask/59421
2. Compatibility issues when combining jQuery and Bootstrap with Ajax https://yq.aliyun.com/ask/20668
3. When RequireJS loads the Bootstrap framework, how is bootstrap.css imported? https://yq.aliyun.com/ask/18714
4. How to use Bootstrap for back-end development https://yq.aliyun.com/ask/30611
5. Does Bootstrap have browser compatibility issues? https://yq.aliyun.com/ask/26335
6. A question about Bootstrap pagination https://yq.aliyun.com/ask/24964
7. How to use Bootstrap plugins in an Angular project? https://yq.aliyun.com/ask/23476
8. How to remove Bootstrap's rounded-corner effect? https://yq.aliyun.com/ask/17276
9. About the Bootstrap JavaScript popover plugin's sizing https://yq.aliyun.com/ask/14506
10. Bootstrap grid-system layout comes out misaligned https://yq.aliyun.com/ask/24184
11. Bootstrap glyphicons do not display in Chrome https://yq.aliyun.com/ask/29069
12. Bootstrap popover won't show https://yq.aliyun.com/ask/27108
13. How to make a Bootstrap radio selected? https://yq.aliyun.com/ask/19883
14. How to pass parameters to a Bootstrap Modal https://yq.aliyun.com/ask/20528
15. Can Bootstrap make an image fill the screen? https://yq.aliyun.com/ask/30338
16. Why do elements dynamically generated by Bootstrap have no CSS styles? https://yq.aliyun.com/ask/17155
17. Where is the variables.less that Bootstrap talks about? https://yq.aliyun.com/ask/26870
18. Some questions about Bootstrap "breakpoints" and "grid widths" https://yq.aliyun.com/ask/18596
19. A responsive nav written with Bootstrap does not work in the mobile UC browser https://yq.aliyun.com/ask/17790
20. Bootstrap tabs won't switch https://yq.aliyun.com/ask/13231
21. How to make Bootstrap navigation stretch to full width on all devices? https://yq.aliyun.com/ask/18691
22. Priority issues between custom styles and Bootstrap's original styles https://yq.aliyun.com/ask/13190
23. Does Bootstrap ship a built-in padding-top style? https://yq.aliyun.com/ask/19271
24. Angular bindings don't work inside a Bootstrap modal; what to do? https://yq.aliyun.com/ask/30061
25. Bootstrap grid layout: max width has no effect https://yq.aliyun.com/ask/13772
26. How to fix the table header in place with Bootstrap's responsive table? https://yq.aliyun.com/ask/25320
27. How can Bootstrap put two images side by side in one carousel frame? https://yq.aliyun.com/ask/26327
28. Bootstrap scrollspy sometimes works and sometimes doesn't in WeChat's built-in browser on iPhone https://yq.aliyun.com/ask/34744
29. Bootstrap's datetimepicker plugin: how to retrieve the selected time https://yq.aliyun.com/ask/17850
30. How can Bootstrap achieve a fixed-width right column with the left side filling the rest? https://yq.aliyun.com/ask/23780
31. Disabling tab links in Bootstrap https://yq.aliyun.com/ask/21130
32. In a Bootstrap navbar, how to change the background color after clicking a dropdown? https://yq.aliyun.com/ask/24657
33. With multiple tables on one Bootstrap page, th widths differ between tables https://yq.aliyun.com/ask/18774
34. How to make a Bootstrap bottom-fixed navbar show only on mobile? https://yq.aliyun.com/ask/34748
35. When making tables with Bootstrap, the table goes full-screen; how to make it smaller? https://yq.aliyun.com/ask/12663

云栖君 has invited two Alibaba front-end experts to help everyone sort out, from the perspectives of a front-end newcomer and an advancing expert: which pitfalls a professional front-end developer runs into in career planning, and what preparation is needed in skills and mindset. The talk is completely free! It starts on time at 7 pm on July 19. Scan the QR code with DingTalk to join the group for free!

苏进

With a Ceph cluster installed via ceph-helm on a k8s cluster, PVs and PVCs can be created, but using the PVC in a pod fails to mount: failed to lock image

In the k8s cluster I created a Ceph cluster via ceph-helm. The cluster was created successfully, and PVs and PVCs can be created normally, but the PVC cannot be used by a pod; it fails with:

```
MountVolume.SetUp failed for volume "ceph-pv" : rbd: failed to lock image sujin-image-01 (maybe locked by other nodes), error exit status 1
```

The k8s nodes run CentOS 7 with kernel Linux bootstrap 3.10.0-693.el7.x86_64, and `modprobe rbd` is supported.

happycc

Programmers' creative New Year greetings for 2018 — learned something new

AI has arrived. Fellow programmers, what technology did you use for your New Year greetings? Here is a Python guide to automated New Year replies.

Windows environment: 1. install pip; 2. install Python; 3. PyCharm; 4. WeChat. Goal: automatic New Year replies.

Installing with get-pip.py: to install pip, securely download get-pip.py (the download link is on the pip homepage; right-click and "save as"), then run it. Add Python to the environment variables; in CMD, execute get-pip.py from the directory where it was saved, and configure both the Python path and the pip path under Python. Then install the itchat package with pip. (Screenshots in the original post: pip installed successfully; installing the itchat package with pip, parts 01 and 02.)

Create a Python file, e.g. newYear.py, with the code shown in the original post's screenshot. Run `python newYear.py` in cmd; a QR code appears on screen. Scan it with WeChat to log in, and automatic replies are up and running. (Screenshots: WeChat login succeeded; auto-reply test succeeded; the auto-reply in action.)

Come and try it, everyone; it feels great. You are welcome to share: how you sent your New Year greetings; how you replied to the mountain of greeting texts; embarrassing moments you ran into while exchanging greetings; and what was original about your greetings this year. Tips welcome!

morlory

Developed a Python program using Alibaba Cloud's SMS service, but ran into a problem when packaging it as an exe

———————————————————— 12.28 update ————————————————————

I have now reinstalled pycrypto, but the missing file is still missing (it may never have shipped this file, but I found that file in its installation package). I then put the file from the package into the specified location, but the program still doesn't seem to find it during packaging and reports the original error. Still looking for a solution.

Original question ————————————————————

The error output is below (paths reconstructed; the original paste had lost its backslashes):

```
Traceback (most recent call last):
  File "main.py", line 15, in <module>
  File "c:\python27\lib\site-packages\PyInstaller-3.4.dev0+ab8fd9753-py2.7.egg\PyInstaller\loader\pyimod03_importers.py", line 396, in load_module
    exec(bytecode, module.__dict__)
  File "login.py", line 10, in <module>
  File "c:\python27\lib\site-packages\PyInstaller-3.4.dev0+ab8fd9753-py2.7.egg\PyInstaller\loader\pyimod03_importers.py", line 396, in load_module
    exec(bytecode, module.__dict__)
  File "lostcode.py", line 14, in <module>
  File "c:\python27\lib\site-packages\PyInstaller-3.4.dev0+ab8fd9753-py2.7.egg\PyInstaller\loader\pyimod03_importers.py", line 396, in load_module
    exec(bytecode, module.__dict__)
  File "send_sms.py", line 5, in <module>
  File "c:\python27\lib\site-packages\PyInstaller-3.4.dev0+ab8fd9753-py2.7.egg\PyInstaller\loader\pyimod03_importers.py", line 396, in load_module
    exec(bytecode, module.__dict__)
  File "lib\site-packages\aliyun_python_sdk_core-2.4.4-py2.7.egg\aliyunsdkcore\client.py", line 40, in <module>
  File "c:\python27\lib\site-packages\PyInstaller-3.4.dev0+ab8fd9753-py2.7.egg\PyInstaller\loader\pyimod03_importers.py", line 396, in load_module
    exec(bytecode, module.__dict__)
  File "lib\site-packages\aliyun_python_sdk_core-2.4.4-py2.7.egg\aliyunsdkcore\auth\Signer.py", line 32, in <module>
  File "c:\python27\lib\site-packages\PyInstaller-3.4.dev0+ab8fd9753-py2.7.egg\PyInstaller\loader\pyimod03_importers.py", line 396, in load_module
    exec(bytecode, module.__dict__)
  File "lib\site-packages\aliyun_python_sdk_core-2.4.4-py2.7.egg\aliyunsdkcore\auth\algorithm\sha_hmac256.py", line 24, in <module>
  File "c:\python27\lib\site-packages\PyInstaller-3.4.dev0+ab8fd9753-py2.7.egg\PyInstaller\loader\pyimod03_importers.py", line 396, in load_module
    exec(bytecode, module.__dict__)
  File "build\bdist.win32\egg\Crypto\PublicKey\RSA.py", line 78, in <module>
  File "c:\python27\lib\site-packages\PyInstaller-3.4.dev0+ab8fd9753-py2.7.egg\PyInstaller\loader\pyimod03_importers.py", line 396, in load_module
    exec(bytecode, module.__dict__)
  File "build\bdist.win32\egg\Crypto\Random\__init__.py", line 28, in <module>
  File "c:\python27\lib\site-packages\PyInstaller-3.4.dev0+ab8fd9753-py2.7.egg\PyInstaller\loader\pyimod03_importers.py", line 396, in load_module
    exec(bytecode, module.__dict__)
  File "build\bdist.win32\egg\Crypto\Random\OSRNG\__init__.py", line 34, in <module>
  File "c:\python27\lib\site-packages\PyInstaller-3.4.dev0+ab8fd9753-py2.7.egg\PyInstaller\loader\pyimod03_importers.py", line 396, in load_module
    exec(bytecode, module.__dict__)
  File "build\bdist.win32\egg\Crypto\Random\OSRNG\nt.py", line 28, in <module>
  File "c:\python27\lib\site-packages\PyInstaller-3.4.dev0+ab8fd9753-py2.7.egg\PyInstaller\loader\pyimod03_importers.py", line 396, in load_module
    exec(bytecode, module.__dict__)
  File "build\bdist.win32\egg\Crypto\Random\OSRNG\winrandom.py", line 7, in <module>
  File "build\bdist.win32\egg\Crypto\Random\OSRNG\winrandom.py", line 6, in __bootstrap__
ImportError: DLL load failed: The specified module could not be found.
[30668] Failed to execute script main
```

I am using Python 2.7 and packaging with PyInstaller. The program runs fine before packaging; the packaged exe reports an error like this. Hoping someone can help.

任凡心

Cannot access Tomcat after starting it

```
[root@iZj6cdf6hh48rssoki4m1oZ logs]# netstat -pan | grep 8080
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      2295/java
[root@iZj6cdf6hh48rssoki4m1oZ logs]# ps -ef | grep 2295
root      2295     1  0 00:08 pts/0    00:00:01 /usr/bin/java -Djava.util.logging.config.file=/var/tomcat/tomcat-7/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -server -Xms800m -Xmx800m -XX:PermSize=64M -XX:MaxNewSize=256m -XX:MaxPermSize=128m -Djava.awt.headless=true -Djava.endorsed.dirs=/var/tomcat/tomcat-7/endorsed -classpath /var/tomcat/tomcat-7/bin/bootstrap.jar:/var/tomcat/tomcat-7/bin/tomcat-juli.jar -Dcatalina.base=/var/tomcat/tomcat-7 -Dcatalina.home=/var/tomcat/tomcat-7 -Djava.io.tmpdir=/var/tomcat/tomcat-7/temp org.apache.catalina.startup.Bootstrap start
root      2349  1931  0 00:23 pts/0    00:00:00 grep 2295
```

As shown, port 8080 is listening (note that it is bound to 127.0.0.1 only). But when I telnet, it still fails:

```
[root@iZj6cdf6hh48rssoki4m1oZ logs]# telnet 47.91.243.234 8080
Trying 47.91.243.234...
telnet: connect to address 47.91.243.234: Connection refused
```