A complete walkthrough of the examples bundled with the spark-2.2.0-bin-hadoop2.6 and spark-1.6.1-bin-hadoop2.6 distributions (Java, Python, R, and Scala): SparkTC.scala in the Basic package (illustrated walkthrough)

Overview:

SparkTC.scala computes the transitive closure of a randomly generated directed graph. The first listing below is the version from the Basic package in spark-1.6.1-bin-hadoop2.6, lightly adapted to run locally (note the changed package name and the setMaster("local") call).

 

 

/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

// scalastyle:off println
//package org.apache.spark.examples
package zhouls.bigdata

import scala.util.Random
import scala.collection.mutable
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._



/**
 * Transitive closure on a graph.
 */
object SparkTC {
  
  val numEdges = 200
  val numVertices = 100
  val rand = new Random(42)

  def generateGraph: Seq[(Int, Int)] = {
    val edges: mutable.Set[(Int, Int)] = mutable.Set.empty
    while (edges.size < numEdges) {
      val from = rand.nextInt(numVertices)
      val to = rand.nextInt(numVertices)
      if (from != to) edges.+=((from, to))
    }
    edges.toSeq
  }

  
  /*
   * Main entry point.
   */
  def main(args: Array[String]) {
    val sparkConf = new SparkConf().setAppName("SparkTC").setMaster("local")
    val spark = new SparkContext(sparkConf)
    val slices = if (args.length > 0) args(0).toInt else 2
    var tc = spark.parallelize(generateGraph, slices).cache()

    // Linear transitive closure: each round grows paths by one edge,
    // by joining the graph's edges with the already-discovered paths.
    // e.g. join the path (y, z) from the TC with the edge (x, y) from
    // the graph to obtain the path (x, z).

    // Because join() joins on keys, the edges are stored in reversed order.
    // Flip (x, y) to (y, x) so that an edge can be joined with a path (y, z) to produce (x, z).
    val edges = tc.map(x => (x._2, x._1))

    // This join is iterated until a fixed point is reached (keep joining, unioning and re-counting until the count stops changing).
    var oldCount = 0L
    var nextCount = tc.count()
    do {
      oldCount = nextCount
      // Perform the join, obtaining an RDD of (y, (z, x)) pairs,
      // then project the result to obtain the new (x, z) paths.
      tc = tc.union(tc.join(edges).map(x => (x._2._2, x._2._1))).distinct().cache()
      nextCount = tc.count()
    } while (nextCount != oldCount)

    println("TC has " + tc.count() + " edges.")
    spark.stop()
  }
}
// scalastyle:on println
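
The fixed-point loop above can be hard to see at a glance, so here is a minimal, Spark-free sketch of the same idea on a tiny hand-written graph (the TinyTC object and its three-edge graph are mine, for illustration only). Each round extends every known path (x, y) by an edge (y, z) to obtain (x, z), and the loop stops once a round adds nothing new. Plain collections can match on an arbitrary condition, so no edge reversal is needed here; the Spark version reverses its edges only because RDD join() matches on keys.

object TinyTC {
  def main(args: Array[String]): Unit = {
    // A tiny directed graph: 1 -> 2 -> 3 -> 4
    val edges = Set((1, 2), (2, 3), (3, 4))

    var tc = edges               // paths discovered so far
    var oldSize = 0
    while (tc.size != oldSize) {
      oldSize = tc.size
      // Grow every known path (x, y) by one edge (y, z) to get (x, z),
      // which is exactly what tc.join(edges) does in the Spark version.
      val grown = for ((x, y) <- tc; (y2, z) <- edges if y == y2) yield (x, z)
      tc = tc ++ grown           // union + distinct (a Set drops duplicates)
    }

    println(tc.toList.sorted.mkString(", "))
    // Expected output: (1,2), (1,3), (1,4), (2,3), (2,4), (3,4)
  }
}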

For comparison, here is the same example as shipped with spark-2.2.0-bin-hadoop2.6. The only substantive change is the entry point: a SparkSession is built first and the SparkContext is obtained from it via spark.sparkContext.

/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

// scalastyle:off println
package org.apache.spark.examples

import scala.collection.mutable
import scala.util.Random
import org.apache.spark.sql.SparkSession

/**
 * Transitive closure on a graph.
 */
object SparkTC {
  
  val numEdges = 200
  val numVertices = 100
  val rand = new Random(42)

  /*
   * 1. The example computes the transitive closure (the set of all reachable vertex pairs).
   * 2. The input graph is generated at random; (from, to) edges are collected in a mutable Set.
   */
  def generateGraph: Seq[(Int, Int)] = {
    val edges: mutable.Set[(Int, Int)] = mutable.Set.empty
    while (edges.size < numEdges) {
      val from = rand.nextInt(numVertices)
      val to = rand.nextInt(numVertices)
      if (from != to) edges.+=((from, to))
    }
    edges.toSeq
  }

  def main(args: Array[String]) {
    val spark = SparkSession
      .builder
      .master("local")
      .appName("SparkTC")
      .getOrCreate() 
      
    val slices = if (args.length > 0) args(0).toInt else 2
    var tc = spark.sparkContext.parallelize(generateGraph, slices).cache()

    // Linear transitive closure: each round grows paths by one edge,
    // by joining the graph's edges with the already-discovered paths.
    // e.g. join the path (y, z) from the TC with the edge (x, y) from
    // the graph to obtain the path (x, z).

    // Because join() joins on keys, the edges are stored in reversed order.
    // Flip (x, y) to (y, x) so that an edge can be joined with a path (y, z) to produce (x, z).
    val edges = tc.map(x => (x._2, x._1))

    
    // This join is iterated until a fixed point is reached (keep joining, unioning and re-counting until the count stops changing).
    var oldCount = 0L
    var nextCount = tc.count()
    do {
      oldCount = nextCount
      // Perform the join, obtaining an RDD of (y, (z, x)) pairs,
      // then project the result to obtain the new (x, z) paths.
      tc = tc.union(tc.join(edges).map(x => (x._2._2, x._2._1))).distinct().cache()
      nextCount = tc.count()
    } while (nextCount != oldCount)

    println("TC has " + tc.count() + " edges.")
    spark.stop()
  }
}
// scalastyle:on println
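
To check the edge-reversal trick in isolation, here is a self-contained sketch (the OneRound object and its two-edge graph are hypothetical, written only for illustration) that performs a single round of the loop by hand and prints the one new path it discovers:

import org.apache.spark.sql.SparkSession

object OneRound {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.master("local").appName("OneRound").getOrCreate()
    val sc = spark.sparkContext

    val tc    = sc.parallelize(Seq((1, 2), (2, 3)))   // known paths (x, y)
    val edges = tc.map { case (x, y) => (y, x) }      // edges reversed, keyed by destination y

    // join on y yields (y, (z, x)); project to (x, z), the path grown by one edge
    val grown = tc.join(edges).map { case (_, (z, x)) => (x, z) }
    grown.collect().foreach(println)                  // prints (1,3), i.e. the path 1 -> 2 -> 3

    spark.stop()
  }
}

On an unpacked Spark distribution the bundled example itself can typically be launched with bin/run-example SparkTC, optionally passing the number of partitions as the first argument.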

