* Pod -> ReplicationController, which also forced me to hack around a hostname issue on the master. (The Spark master sees the incoming slave request addressed to spark-master and assumes it is not meant for it, since its own name is spark-master-controller-abcdef.)
* Remove service env dependencies (depend on DNS instead).
* JSON -> YAML.
* Add the GCS connector.
* Make the example do something actually useful: a familiar exercise to anyone at Google, implement wordcount over all of Shakespeare's works.
* Fix a minor service connection issue in the gluster example.
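The wordcount mentioned above follows the classic Spark pipeline of flatMap -> map -> reduceByKey. A minimal plain-Python sketch of that same logic (no Spark dependency; the sample input is a stand-in for text read through the GCS connector):

```python
from collections import Counter

def wordcount(lines):
    # Mirrors Spark's flatMap(split) -> map((word, 1)) -> reduceByKey(+):
    # split each line into words, then accumulate per-word counts.
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts

# Stand-in for lines the Spark job would read from a gs:// path.
sample = ["to be or not to be", "that is the question"]
print(wordcount(sample)["be"])  # 2
```

In the actual example the input would be an RDD of lines from Shakespeare's works, with the reduce step running distributed across executors rather than in a single Counter.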
spark.master spark://spark-master:7077
spark.executor.extraClassPath /opt/spark/lib/gcs-connector-latest-hadoop2.jar
spark.driver.extraClassPath /opt/spark/lib/gcs-connector-latest-hadoop2.jar
spark.driver.extraLibraryPath /opt/hadoop/lib/native
spark.app.id KubernetesSpark