Shared Variables:
Normally, when a function passed to a Spark operation (such as map or reduce) is executed on a remote cluster node, it works on separate copies of all the variables used in the function. These variables are copied to each machine, and no updates to the variables on the remote machine are propagated back to the driver program. Supporting general, read-write shared variables across tasks would be inefficient. However, Spark does provide two limited types of shared variables for two common usage patterns: broadcast variables and accumulators.
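The copy semantics described above can be illustrated with a small pure-Python sketch (this is not Spark itself; `run_task`, `driver_state`, and `increment` are hypothetical names chosen for illustration). Each simulated "task" receives a deep copy of the closure's variables, so its updates never reach the driver's copy:

```python
import copy

def run_task(task, closure_vars):
    # Simulate an executor: each task works on its own copy of the
    # variables captured by the function's closure.
    local = copy.deepcopy(closure_vars)
    return task(local)

# "Driver-side" state captured by the task closure.
driver_state = {"counter": 0}

def increment(vars_):
    # Mutates only the task-local copy.
    vars_["counter"] += 1
    return vars_["counter"]

# Run four "tasks"; each sees a fresh copy starting at 0.
results = [run_task(increment, driver_state) for _ in range(4)]
print(results)       # every task saw counter go 0 -> 1
print(driver_state)  # the driver's copy is unchanged
```

This is exactly why Spark offers accumulators (for write-only aggregation back to the driver) and broadcast variables (for efficient read-only sharing) instead of general read-write shared state.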